DOI : 10.17577/IJERTV15IS051574
- Open Access

- Authors : Nidhi P Naik, Sanjana R, Vishwas Bhusnoor, Dr. Jagruthi H
- Paper ID : IJERTV15IS051574
- Volume & Issue : Volume 15, Issue 05 , May – 2026
- Published (First Online): 17-05-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
Digital Restoration of Indian Traditional Paintings
Nidhi P Naik
Information Science and Engineering B N M Institute of Technology Bengaluru, India
Sanjana R
Information Science and Engineering B N M Institute of Technology Bengaluru, India
Vishwas Bhusnoor
Information Science and Engineering B N M Institute of Technology Bengaluru, India
Dr. Jagruthi H
Associate Professor Information Science and Engineering B N M Institute of Technology Bengaluru, India
Abstract – Conservation of Indian traditional paintings, irreplaceable cultural and historical resources and a national treasure, is an essential challenge. These works are increasingly exposed to natural degradation, environmental factors, and time. Conventional restoration methods, although widely practised, tend to involve labour-intensive, expensive procedures and irreversible treatments, motivating innovative alternative strategies. With recent developments in Artificial Intelligence (AI) and deep learning, new non-destructive digital restoration processes have been proposed that have the potential to conserve these historic masterpieces without compromising them further. This paper presents an extensive survey of the principal technologies and ideas associated with digital inpainting, a basic tool employed for virtual restoration of damaged artworks. Comparisons are made between traditional image processing techniques and newer deep learning techniques, such as Generative Adversarial Networks (GANs) and Diffusion Models, and their respective advantages and limitations are analysed. The role of human expertise in restoration is portrayed, with emphasis on ethical concerns, the necessity of uncertainty quantification, and the preservation of artistic style during the restoration process. Methodologies, computational complexities, and assessment measures are analysed in order to better understand the current capability of AI-assisted restoration. Long-standing issues, such as the insufficiency of datasets specifically designed for the distinctive characteristics of Indian traditional art, are noted, and directions for further study are suggested.
Index Terms – Digital Restoration, Inpainting, Generative AI, Indian Traditional Paintings, Cultural Heritage, Deep Learning, Generative Adversarial Networks, Diffusion Models
-
Introduction
Preservation of cultural heritage has long been seen as an issue of significance, with historical paintings standing as irreplaceable art pieces that reflect human civilization, art, and societal evolution. Among these, Indian traditional paintings and drawings, praised for their variety of styles, delicate craftsmanship, and vibrant colour compositions, are especially susceptible to degradation resulting from environmental conditions, humidity, light, and the natural processes of aging. The restoration of such works has, for centuries, been performed by conservators using manual, physical, and generally invasive methods. Such
This work was supported by B N M Institute of Technology.
traditional techniques, though grounded in professionalism and convention, have proved to be expensive, labour-intensive, and time-consuming, and sometimes damaging to the integrity of the original work, since the risk of causing additional damage during restoration cannot be eliminated.
With the rise of digital technologies, a new conservation pattern has been enabled, one that provides non-invasive, reversible, and precise means for the virtual restoration of art. At the heart of this digital revolution lies the method of digital inpainting, which entails the completion of missing or damaged areas of an image by filling them with visually coherent and contextually consistent content [14], [18]. In its initial phase, digital inpainting was limited to basic pixel-level corrections such as the elimination of small blemishes or scratches [26]. However, with the accelerated growth in computational power and the emergence of deep learning techniques, the area has seen a dramatic transformation [7], [14], [18]. More advanced approaches are now able to reconstruct larger missing regions automatically and create content consistent with the original artwork's style and composition [27], [29], [30].
This paper seeks to give a systematic and exhaustive overview of the evolution of digital inpainting methods, with special reference to their use in the restoration of Indian traditional paintings [21], [22], [28]. The mathematical foundations and algorithms are described first, after which advanced AI-based methods are thoroughly discussed, with attention to their merits, drawbacks, and ethics [17]. Practical aspects such as human knowledge integration, handling of uncertainty, and protection of artistic integrity are also discussed [9], [25]. The survey concludes by noting current challenges and future opportunities, with a focus on the need for content-specific datasets and models tailored to the unique aspects of Indian cultural heritage [10], [13], [28].
-
Literature Survey: Key Technologies and Concepts
The terrain of computer inpainting in art restoration has been defined by an evolution over time from initial, basic algorithms following set rules to highly advanced, generative models driven by Artificial Intelligence (AI) [14], [18]. This has been accompanied by a shift in emphasis from mere patching of single pixels to the intelligent reconstruction of missing parts of artworks on the basis of stylistic understanding, context, and semantics [17]. Consequently, restoration methods have transitioned from surface-level repairs to full-fledged, content-based recreation that accounts for the artistic intention and historical significance of the works [21], [22].
-
Traditional vs. Deep Learning Methods
Early digital inpainting methods were based on traditional approaches that were strongly dependent on local information and mathematically specified procedures [26]. Such techniques, usually classified as classical techniques, were marked by local operations performed on small neighbourhoods of the missing area without considering higher-level image semantics.
Among these, content-based inpainting techniques, including nonparametric sampling and exemplar-based methods, were prevalent [26]. In these techniques, repair was achieved by taking patches or pixels from intact areas surrounding the flaw and duplicating them to replace missing portions. While such techniques were effective in coping with plain textures, homogeneous backgrounds, or minute irregularities, they were considerably constrained when charged with reconstructing large missing areas or intricate patterns [26]. The lack of contextual perception frequently resulted in irregular textures or seams where patches were blended together.
Structure-based inpainting techniques, by contrast, tried to restore the image's geometric and structural continuity. Methods such as the Fast Marching Method (FMM) and Partial Differential Equations (PDEs) were utilized for extrapolating lines, edges, and boundaries into the damaged areas [11], [13]. These mathematical models provided better results when coping with shapes and patterns that called for geometric coherence. However, for all the mathematical precision they provided, such methods were inadequate when complex textures, complicated patterns, or stylistic intricacies needed to be replicated. The outcomes tended to be too simplistic or lacked accuracy [11], [26].
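As an intuition for the PDE family, the minimal numpy sketch below fills a masked hole by iterating a discrete Laplace (heat) diffusion, propagating intensities inward from the intact boundary. This is a toy simplification for illustration, not the Fast Marching Method of the cited works, and all function names here are our own.

```python
import numpy as np

def diffusion_inpaint(img, mask, iters=500):
    """Fill masked pixels by iterating a discrete Laplace (heat) diffusion.

    img  : 2D float array, the damaged grayscale image
    mask : 2D bool array, True where pixels are missing
    Only masked pixels are updated, so known intensities act as a fixed
    boundary condition and diffuse into the hole.
    """
    out = img.copy()
    out[mask] = 0.0                        # deliberately wrong start
    for _ in range(iters):
        # 4-neighbour average: one Jacobi step of the Laplace equation
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                      + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]              # update only the hole
    return out

# A flat grey image with a small square hole is recovered almost exactly,
# since the boundary values diffuse inward and the iteration converges.
img = np.full((16, 16), 0.5)
mask = np.zeros_like(img, dtype=bool)
mask[6:10, 6:10] = True
damaged = img.copy()
damaged[mask] = 0.0
restored = diffusion_inpaint(damaged, mask)
```

On textured or edge-rich content this simple isotropic diffusion blurs, which is exactly the weakness of the classical PDE approaches noted above.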
Ultimately, classical approaches were limited by their inability to understand the higher semantics inherent in artwork. As a result, restorations conducted with these approaches often contained soft textures, artificial transitions, or structural anomalies [26]. The artistic synthesis of plausible content in heavily compromised artwork remained elusive.
Deep learning ushered in a new generation of image restoration [7], [14], [18]. The advent of Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) resulted in a fundamental leap in capability [7], [29]. By exploiting large datasets, these models learned not just surface-level patterns but also the structural and semantic interrelations underlying images [27]. This made possible restoration methods that could hallucinate missing content intelligently [7], [29], [30]. Such methods allowed for the computer-assisted creation of highly realistic textures, forms, and patterns that conformed to the style and context of the original artwork, circumventing many of the previous methods' limitations [7], [27], [29].
-
Key AI Inpainting Architectures
In contemporary restoration pipelines, two major deep learning architectures have dominated the landscape: Generative Adversarial Networks (GANs) and Diffusion Models [7], [19], [23], [27]. Both have been used to tackle particular challenges, with their individual strengths contributing to different aspects of image reconstruction.
GAN-based methods excel at generating high-frequency textural details and visually plausible restorations [7], [24], [29], whereas diffusion models are better at achieving global structural coherence and semantic consistency [19], [27], [30]. Hybrid approaches that combine the strengths of both have also emerged, offering a balanced trade-off between texture quality and structural fidelity [4], [23].
Transformers and Vision Transformers (ViTs) have recently introduced the ability to capture long-range dependencies across the image, making them particularly effective at understanding global artistic context [1], [20]. Diffusion models continue to evolve, with variants such as Denoising Diffusion Implicit Models (DDIMs) [16], Latent Diffusion Models [8], and preference-aligned diffusion [9] introducing better efficiency, coherence, and perceptual quality.
The referenced papers are summarized below, highlighting their core methodologies and contributions.
Elharrouss et al. (2025) present a comprehensive review of Transformer-based image and video inpainting approaches, outlining key developments and challenges in the field. Their study categorizes current Transformer architectures used for restoration tasks and highlights their strengths in modeling long-range dependencies. While the work does not introduce a new model, its significance lies in identifying trends and future directions that inform ongoing research in digital art restoration.
Zhang et al. (2025) propose a specialized deep learning model for mural inpainting using a U-Net-based discriminator enhanced with coordinate attention and aggregated transformations. This design helps maintain the structural integrity and fine textural details characteristic of mural art. The method demonstrates improved historical and stylistic fidelity, though it requires adaptation for non-mural images.
Wang et al. (2025) address the issue of low-resolution digital archives of ancient paintings through deep learning-based super-resolution algorithms. Their technique reconstructs lost details and enhances visual quality, aiding both research and public appreciation. A limitation is the risk of generating non-authentic details not present in the original artwork.
Botirova et al. (2025) combine Diffusion Models and Generative Adversarial Networks (GANs) in a hybrid framework
for historical painting restoration. The approach integrates the structural coherence of diffusion models with the textural sharpness of GANs, achieving realistic results at the expense of higher computational complexity.
Kinakh & Voloshynovskiy (2025) introduce a Binary Diffusion Probabilistic Model optimized for binary data such as segmentation masks. While not designed for full-color inpainting, it serves as an efficient pre-processing tool for defining damaged areas with high precision.
Abualigah et al. (2025) present an improved reptile search algorithm with a Gbest operator for multi-level image thresholding. Their optimization enhances segmentation accuracy, which is crucial for identifying damaged regions before restoration. The study, however, does not perform the inpainting itself.
Buvaneshwaran et al. (2025) demonstrate the application of GANs for the digital restoration of damaged paintings and artifacts. The network learns artistic styles from undamaged works to reconstruct missing regions with high-frequency detail, though it occasionally introduces structural inconsistencies across large missing areas.
Corneanu et al. (2024) propose LatentPaint, an inpainting technique operating in the latent space of an image using diffusion models. This method offers computational efficiency and global coherence, but its performance is limited by the fidelity of the autoencoder used for encoding and decoding.
Liu et al. (2024) introduce PrefPaint, a diffusion-based model fine-tuned using human feedback. By aligning model outputs with human aesthetic preferences, it produces restorations that appear more natural to viewers. The main challenge is the subjectivity and effort involved in collecting reliable human feedback.
Sinha et al. (2024) outline a complete digital restoration pipeline using synthetic damage generation, deep segmentation, and inpainting. This synthetic training approach addresses the scarcity of real-world damaged/undamaged artwork pairs. Its limitation lies in the gap between synthetic and real degradation patterns.
Tint & Tin (2024) assess the coherent transport inpainting method for mural restoration, employing damage ratio analysis to quantitatively evaluate results. The work contributes a measurable framework for restoration quality, though it focuses narrowly on one algorithm and mural type.
Sun et al. (2024) propose a dual-encoder inpainting architecture that processes both local textures and global structural context. The system improves compositional coherence and detail retention, although its dual-encoder design increases computational cost.
Hu et al. (2024) present a multi-level thresholding segmentation algorithm using an equilibrium optimizer. It efficiently isolates damaged regions, enhancing precision in restoration workflows, though it only addresses segmentation, not inpainting.
Quan et al. (2024) conduct a survey of deep learning-based image and video inpainting, categorizing architectures such as CNNs, GANs, and Transformers. The paper provides a broad synthesis of the field's evolution but does not include new experimental findings.
Yang et al. (2023) introduce Uni-Paint, a unified diffusion-based framework for multimodal inpainting that accepts textual or visual guidance. Its versatility offers users creative control, though this generalization may reduce task-specific performance.
Zhang et al. (2023) utilize Denoising Diffusion Implicit Models (DDIMs) to enhance global coherence and sampling speed in image inpainting. While maintaining realism and structural plausibility, challenges remain for very large missing regions.
Gaber et al. (2023) discuss the impact of AI and machine learning on cultural heritage preservation. Though conceptual rather than technical, the work emphasizes AI's transformative role in safeguarding artworks and artifacts through virtual restoration.
Xu et al. (2023) review deep learning-based inpainting methods, analyzing GAN-, CNN-, and Transformer-based architectures. The study summarizes advancements and challenges such as edge artifacts and texture continuity, serving as a key resource for restoration researchers.
Fein-Ashley & Fein-Ashley (2023) enhance diffusion-based inpainting by introducing anisotropic Gaussian splatting, which models directional textures like brushstrokes. The method improves realism but increases computational overhead.
Duan et al. (2023) apply Vision Transformers to historical painting restoration, leveraging their global context modeling to maintain stylistic coherence. Although computationally expensive, this approach is highly effective for large, complex compositions.
Singh et al. (2023) focus on digital restoration of ancient Indian murals through adapted inpainting algorithms. Their work highlights how AI can be tailored to culturally specific art forms, though results may not generalize across diverse painting styles.
Tribhuvan & Abdullah (2023) present a real-world AI restoration of the Ajanta Cave paintings, demonstrating digital heritage preservation at scale. While context-specific, the study underscores the viability of AI for ancient artwork reconstruction.
Grechka et al. (2023) propose GradPaint, a gradient-guided diffusion inpainting method that leverages edge and texture gradients for sharper boundary alignment. This guidance enhances detail accuracy but can limit creative flexibility in more abstract tasks.
Zuo et al. (2023) combine contrastive learning and segmentation confusion adversarial training to improve semantic and structural realism in generative inpainting. The complex training setup yields strong results but demands significant computational resources.
Yu et al. (2023) introduce Inpaint Anything, integrating the Segment Anything Model (SAM) with diffusion-based inpainting. The system simplifies mask creation and improves user experience, although final quality depends on SAM's segmentation accuracy.
Zhao et al. (2022) enhance exemplar-based inpainting by incorporating boundary priors to guide patch selection, improving structural consistency in classical algorithms. Despite these improvements, exemplar-based methods remain limited for large missing regions.
Lugmayr et al. (2022) present RePaint, one of the earliest applications of Denoising Diffusion Probabilistic Models (DDPMs) for inpainting. The model produces highly realistic results through iterative resampling, but at the cost of long inference times.
Poornapushpakala et al. (2022) design a segmentation-plus-inpainting framework for restoring Tanjore paintings, respecting the distinct materials and textures involved. The approach is effective but tailored specifically to this art form.
Zhao et al. (2021) develop co-modulated GANs for large-scale image completion, achieving global structural consistency in extensive missing regions. However, GANs' training instability remains a challenge.
Suvorov et al. (2021) propose resolution-robust inpainting using Fourier convolutions, improving global consistency across varying resolutions. The trade-off is reduced capability in fine-texture reconstruction.
-
Generative Adversarial Networks (GANs): GANs have generally been seen as among the most effective tools for inpainting and image generation [7], [14], [29]. The architecture consists of two adversarial networks: a generator, which tries to produce inpainted images that are indistinguishable from real art, and a discriminator, which learns to recognize the difference between real and generated images [7], [27]. Through this adversarial process, both networks are progressively improved, producing progressively more realistic outputs [7], [29].
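The adversarial objective just described can be made concrete with a minimal numpy sketch of the standard (non-saturating) GAN losses. The function names are our own, and the discriminator outputs below are stand-in probabilities rather than the output of a trained network.

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy of probabilities p against a 0/1 target."""
    p = np.clip(p, 1e-7, 1 - 1e-7)   # guard against log(0)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def gan_losses(d_real, d_fake):
    """Standard GAN objectives given discriminator probabilities.

    d_real : D's probabilities on genuine (intact) image patches
    d_fake : D's probabilities on generator-inpainted patches
    Returns (discriminator_loss, generator_loss); the generator uses
    the non-saturating form, i.e. it maximizes log D(G(z)).
    """
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)  # D: real -> 1, fake -> 0
    g_loss = bce(d_fake, 1.0)                     # G: push D's output to 1
    return d_loss, g_loss

# A confident, correct discriminator yields a low D loss and a high G loss,
# which is the gradient signal that drives the generator to improve.
d_loss, g_loss = gan_losses(np.array([0.9, 0.95]), np.array([0.1, 0.05]))
```

In a real inpainting GAN these probabilities come from a convolutional discriminator evaluated on full patches, but the loss bookkeeping is exactly this.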
Context Encoders: These were among the first methods used. They employ an encoder-decoder setup in which the encoder analyses the parts of the image that remain, while the decoder tries to fill in the missing regions. Adversarial training helps the new content blend in better with the existing parts, though the approach still struggles with highly detailed patterns [14], [18], [26].
CAAT-GAN: This model is designed specifically for restoring murals. It introduces a Coordinated Attention Aggregation Transformation (CAAT) block, which helps the model gather information from distant parts of the image so that it can better rebuild damaged areas [23]. It also uses a U-Net-based discriminator that checks both the big picture and the small details, reducing errors and making the results sharper and more textured [23], [24].
User-Guided GANs: These models let people guide the restoration process. For example, when restoring ancient Chinese paintings, users can give hints about the structure of the image [19]. This helps the AI produce restorations that stay true to the original style and meaning, keeping cultural and historical details intact [21], [22], [28].
Multi-Stage GANs: These are more advanced systems that work in steps. The first stage fixes the basic structure, and the next stage adds textures and details [7], [29]. This approach helps handle large, complex missing areas and creates results that look both accurate and visually pleasing [27], [30].
-
Diffusion Models: Alongside GANs, diffusion models are another category of potent inpainting architectures [8], [9], [16], [19]. They operate by first introducing controlled noise to an image in iterative steps and then learning the reverse procedure [16], [27]. This denoising process allows the recovery of images with fine textures and subtle detail that are often hard to achieve through other means [19], [30].
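The forward (noising) half of this process has a well-known closed form in DDPM-style models, sketched below in numpy: an image can be jumped directly to any noise level t via the cumulative schedule, which is what makes training efficient. The linear schedule and shapes here are illustrative assumptions, not taken from any surveyed paper.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form (DDPM forward process).

    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_s).
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # a common linear noise schedule
x0 = rng.standard_normal((10_000,))     # stand-in for normalized pixels
x_mid, _ = forward_diffuse(x0, 500, betas, rng)   # partially noised
x_end, _ = forward_diffuse(x0, 999, betas, rng)   # almost pure noise
```

Inpainting variants such as RePaint run the learned reverse of this process while repeatedly re-imposing the known (undamaged) pixels at each denoising step.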
Numerous diffusion-based solutions have successfully implemented image restoration:
Palette: A generic architecture that uses conditional diffusion models to accomplish diverse image-to-image translation tasks, including inpainting [19]. Without the need for task-specific modifications, Palette has been demonstrated to produce state-of-the-art outcomes on a large range of restoration tasks and is a valuable tool in the conservation realm [19], [27].
BDPM (Binary Diffusion Probabilistic Model): This model presented an innovative solution by representing images in binary form through decomposition into bitplanes. The model uses an XOR-based noise transform, enabling more efficient and accurate control of the restoration, especially beneficial in handling fine detail and high-frequency texture [8], [16].
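To illustrate the bitplane idea, the sketch below decomposes a uint8 image into its eight binary planes and corrupts them with Bernoulli XOR flips. Because XOR is self-inverse, applying the same flip mask twice restores the planes exactly, which is the property that makes XOR noise attractive for binary diffusion. This is our own toy rendering of the concept, not the BDPM implementation.

```python
import numpy as np

def to_bitplanes(img):
    """Decompose a uint8 image into 8 binary bitplanes (LSB first)."""
    return np.stack([(img >> b) & 1 for b in range(8)]).astype(np.uint8)

def from_bitplanes(planes):
    """Reassemble a uint8 image from its 8 bitplanes."""
    return sum((planes[b].astype(np.uint8) << b) for b in range(8))

def xor_noise(planes, flip_prob, rng):
    """Corrupt bitplanes by XOR-ing with Bernoulli(flip_prob) bit flips.

    XOR noise is an involution: XOR-ing with the same flip mask again
    recovers the original planes exactly.
    """
    flips = (rng.random(planes.shape) < flip_prob).astype(np.uint8)
    return planes ^ flips, flips

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
planes = to_bitplanes(img)
noisy, flips = xor_noise(planes, 0.3, rng)
recovered = from_bitplanes(noisy ^ flips)   # undo with the same mask
```

A trained binary diffusion model would predict the flip mask (or the clean planes) instead of being handed it, but the corruption mechanics are exactly these.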
PrefPaint: In recognition of the importance of aesthetics, PrefPaint combines reinforcement learning with diffusion modelling by training a reward model over a large dataset of human-annotated images [9]. By optimizing restoration according to human preferences, the model makes the outcome both technically accurate and aesthetically pleasing [9], [19]. The model was tested on a dataset of 51,000 annotated inpainted images and showed consistent perceptual quality improvements [9].
Hybrid GAN-Diffusion Models: Recent developments have also seen attempts to hybridize GANs and diffusion models to take advantage of their complementary strengths [4], [23]. Although GANs are far better at reconstructing texture and structural information, diffusion models are best suited to controlling noise and refining fine details [8], [19], [27]. These hybrid methods have been demonstrated to achieve restorations with very high perceptual quality while maintaining the stylistic integrity of the work [23], [30].
-
-
Human-Centric Restoration and Ethical Considerations
As restoration methods evolve, the focus is increasingly on the ethical and practical issues that come with AI-based methods [9], [17]. The most sophisticated pipelines today also incorporate human knowledge and uncertainty quantification to facilitate responsible and culturally appropriate restoration [21], [22], [28].
-
Human-in-the-Loop and User Guidance: Since artworks carry not only visual patterns but also history and culture, human professionals are considered irreplaceable in the restoration process [17], [21]. AI algorithms with user-guidance features enable conservators to intervene throughout the restoration by giving structural suggestions or correcting generated images [9], [25]. This cooperation ensures that the restored image achieves not only technical accuracy but also artistic and cultural authenticity [17], [28].
Models such as PrefPaint have highlighted the significance of aligning AI-generated results with human aesthetic judgments, since restoration is not only about structural fidelity but also about perceptual harmony [9]. Human-in-the-loop approaches help ensure that restoration is transparent, interpretable, and accountable [17], [21].
-
Uncertainty Quantification: Inpainting techniques necessarily carry an element of guesswork when considerable parts of an image are lost, requiring the creation of new material with no guidance from the original object [11], [26]. In response to this problem, uncertainty quantification techniques have been developed [23], [25]. For example, CAAT-GAN's U-Net discriminator can generate both global and local confidence scores, giving conservators an indication of which reconstructed regions are more trustworthy and which need further inspection or human correction [23], [25].
By providing this interpretability, uncertainty quantification avoids over-reliance on the automated process and enables users to make informed restoration decisions [17], [25].
-
-
Evaluation and Practical Considerations
The successful use of AI models in actual restoration contexts relies not only on algorithmic sophistication but also on the existence of suitable datasets, assessment metrics, and computational efficiency [10], [13], [21].
-
Datasets and Metrics: The result of AI-based restoration methods is heavily dependent on the training datasets [10], [13]. For example, the DunHuang-Mural dataset, with 7,983 high-resolution images of murals depicting historical scenes from the Mogao Caves, has offered a valuable asset for training and testing models related to culturally relevant paintings [19], [21].
A variety of evaluation metrics is used to measure the efficacy of restoration techniques [14], [18]. Pixel-based measurements, including PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index), are employed to assess pixel-level precision and structural similarity. Though helpful for comparing output with reference images, these measurements are not necessarily correlated with human perception, particularly in applications where creative reconstruction is necessary [18], [26].
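PSNR, for instance, is straightforward to compute from the mean squared error; the short numpy sketch below implements the standard definition (the helper name and toy images are ours).

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images.

    PSNR = 10 * log10(MAX^2 / MSE); higher is better, and the value
    is infinite when the restoration matches the reference exactly.
    """
    mse = np.mean((reference.astype(np.float64)
                   - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

reference = np.full((4, 4), 100.0)
off_by_ten = reference + 10.0          # uniform error of 10 grey levels
score = psnr(reference, off_by_ten)    # MSE = 100 -> 10*log10(255^2/100)
```

Note that a blurry restoration can score a high PSNR while looking clearly wrong to a human, which is precisely why the perceptual metrics discussed next are preferred for generative inpainting.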
Perceptual measures, such as LPIPS (Learned Perceptual Image Patch Similarity) and FID (Fréchet Inception Distance), are more appropriate for measuring the perceptual quality of generative samples [14], [18], [27]. These measures compare similarities in feature distributions and have been found to align more closely with human judgment [9], [19].
Specialized metrics, including symmetry concentration scores for facial reconstruction and style consistency measures, have been designed to address the specific challenges of certain types of artworks [10], [13]. Increased recognition of domain-specific evaluation highlights the need for metrics tailored to art restoration [21], [28].
-
Computational and Accessibility Aspects: Deep learning restoration models tend to be computationally expensive, demanding heavy resources for both training and inference [19], [27], [30]. Nonetheless, attempts have been made to design more efficient and accessible architectures [8], [16]. Some models have even been optimized to run in real time on consumer-grade hardware, paving the way for wider adoption in museums, archives, and cultural institutions [21], [28].
-
In addition, the open-source nature of many AI models has promoted collaborative research and real-world application [10], [13], [19]. By releasing datasets, codebases, and pretrained models to the public, researchers have facilitated broader experimentation, faster iteration, and easier access for conservation professionals with limited resources [19], [27].
-
-
Conclusion
This survey has shown that AI-powered digital restoration, specifically inpainting methods, has become a highly developed and transformative field of research and practice [7], [14], [18]. The discipline has evolved significantly from initial rule-based algorithms, constrained to correcting only basic, localized image flaws, towards advanced deep learning architectures that can effectively reconstruct distorted or missing parts with high accuracy and visual consistency [26], [27].
With the creation of sophisticated architectures such as Generative Adversarial Networks (GANs) and Diffusion Models, restoration abilities have greatly improved, not just enabling the filling of missing parts but also enabling the creation of content following the semantic, structural, and stylistic features of the original work [7], [8], [19], [29].
These systems have been adjusted to include human judgment and expertise, such that restorations are not fully automated but are informed by artistic intention and conservators' knowledge of historical context [9], [17], [25]. Uncertainty quantification techniques have been incorporated, providing conservators with more transparency and confidence in AI outputs [23], [25]. Likewise, attempts have been made to retain the soft texture, brushwork, and symbolic aspects that are organic to old paintings, thus maintaining the integrity of the works of art [21], [22], [28].
In the case of Indian traditional paintings, with their variety of regional styles, involved iconography, and vibrant colour palettes, the use of AI-based restoration methods presents especially exciting options [21], [22], [28]. The development of models customized for the distinctive characteristics of Indian art, such as its aging patterns, pigment degradation, and cultural symbolism, has been recognized as a high-priority next step [10], [13], [28]. Carefully curated datasets reflecting the nuances encountered across the different schools and eras of Indian art will be used to train models that are able to restore works of art in ways that are sensitive to their contexts and respectful of their original aesthetic [21], [28].
Realization of these goals will require persistent cross-disciplinary collaboration. Research in AI will need to be conducted in coordination with art historians, conservators, and cultural analysts to ensure that restoration techniques are guided by deep contextual knowledge and ethical consideration [17], [21], [28]. Interdisciplinary models should be encouraged in which technology is viewed not as a substitute for human expertise but as an enhancement of it [9], [25].
A hybrid strategy that leverages the generative power of AI models and applies human oversight and interpretative direction will be instrumental in ensuring that restoration processes stay true to artistic authenticity and cultural value [9], [17]. Through such partnerships, the preservation of Indian traditional painting can be viewed not as a technical problem but as a cultural imperative: a marriage of innovation and respect for heritage [21], [22], [28].
Eventually, it is through the ethical integration of AI in art conservation that historic paintings can be conserved, researched, and enjoyed by generations to come [17], [21], [28]. The continuation of research in this area, underpinned by open datasets, ethical guidelines, and cross-disciplinary collaborations, will be invaluable in making sure the visual stories and artistic heritage contained in these works are preserved for the ages [9], [25].
Acknowledgment
The authors would like to thank all researchers whose work contributed to this survey, and acknowledge the importance of preserving cultural heritage through the application of modern technologies.
References
-
O. Elharrouss, R. Damseh, A. N. Belkacem, E. Badidi, and A. Lakas, Transformer-based image and video inpainting: current challenges and future directions, Artificial Intelligence Review, 2025.
-
J. Zhang et al., Supporting historic mural image inpainting by using coordinate attention aggregated transformations with U-Net-based discriminator, Heritage Science, 2025.
-
C. Wang, D. Zhou, Y. Fu, and Z. Shi, Super resolution reconstruction of ancient paintings, Proc. 2025 5th Int. Conf. Neural Networks, Information and Communication Engineering (NNICE), 2025.
-
H. Botirova et al., Restoring historical paintings using diffusion models and GANs, Proc. 2025 Int. Conf. Computational Innovations and Engineering Sustainability (ICCIES), 2025.
-
V. Kinakh and S. Voloshynovskiy, Binary Diffusion Probabilistic Model, arXiv preprint arXiv, 2025.
-
L. Abualigah et al., Optimized image segmentation using an improved reptile search algorithm with Gbest operator for multi-level threshold-ing, Scientic Reports, vol. 15, p. 12713, 2025.
-
B. Buvaneshwaran, V. Ashwin, B. S. Abishekrupan, S. Sasidharan, and
K. P. Revathi, AI driven restoration of damaged paintings and historical artifacts using generative adversarial networks, Proc. 2025 3rd Int. Conf. Disruptive Technologies (ICDT), IEEE, 2025.
-
C. Corneanu, R. Gadde, and A. M. Martinez, LatentPaint: Image inpainting in latent space with diffusion models, Proc. 2024 IEEE Winter Conf. Applications of Computer Vision (WACV), 2024.
-
K. Liu et al., PrefPaint: Aligning image inpainting diffusion model with human preference, Advances in Neural Information Processing Systems (NeurIPS 2024), 2024.
-
S. N. Sinha, P. J. KuĀØhn, J. Koppe, H. Graf, and M. Weinmann, Digital restoration of visual art using synthetic training, deep segmentation and inpainting, 2024 Int. Conf. Cyberworlds (CW), IEEE, 2024.
-
K. K. W. Tint and M. M. Tin, Digital restoration of ancient murals: Assessing the efcacy of coherent transport inpainting with damage ratio analysis, 2024 IEEE Conf. Computer Applications (ICCA), 2024.
-
Z. Sun, Y. Lei, and X. Wu, Ancient paintings inpainting based on dual encoders and contextual information, Heritage Science, 2024.
-
P. Hu, Y. Han, Z. Zhang, S.-C. Chu, and J.-S. Pan, A multi-level thresh-olding image segmentation algorithm based on equilibrium optimizer, Scientic Reports, 2024.
-
W. Quan, J. Chen, Y. Liu, D.-M. Yan, and P. Wonka, Deep learning-based image and video inpainting: A survey, International Journal of Computer Vision, 2024.
-
S. Yang, X. Chen, and J. Liao, Uni-paint: A unied framework for multimodal image inpainting with pretrained diffusion model, Proc. 31st ACM Int. Conf. Multimedia (ACM MM), 2023.
-
G. Zhang et al., Towards coherent image inpainting using denoising diffusion implicit models, Proc. 40th Int. Conf. Machine Learning (ICML), PMLR 202, 2023.
-
J. A. Gaber, S. M. Youssef, and K. M. Fathalla, The role of articial intelligence and machine learning in preserving cultural heritage and artworks via virtual restoration, ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, X-1/W1-2023, pp. 185-192, 2023.
-
Z. Xu et al., A review of image inpainting methods based on deep learning, Applied Sciences, vol. 13, no. 20, p. 11189, 2023.
-
J. Fein-Ashley and B. Fein-Ashley, Diffusion models with anisotropic Gaussian splatting for image inpainting, 2023.
-
X. Duan, C. Jiang, and Y. Fan, Enhanced inpainting model revitalizes historical paintings with vision transformer, 2023 9th Int. Conf. Virtual Reality (ICVR), IEEE, 2023.
-
Maiti Singh, Saini, and Dhiraj, Ancient Indian murals digital restoration through image inpainting, 2023 10th Int. Conf. Signal Processing and Integrated Networks (SPIN), IEEE, 2023.
-
A. P. Tribhuvan and B. A. Abdullah, Restoration of world famous 2200-year-old paintings with AI: digital heritage of Ajanta Caves, International Journal of Science, Engineering and Management, vol. 10, no. 9, 2023.
-
A. Grechka, G. Couairon, and M. Cord, GradPaint: gradient-guided inpainting with diffusion models, arXiv preprint arXiv, 2023.
-
Z. Zuo et al., Generative image inpainting with segmentation confu-sion adversarial training and contrastive learning, Proc. AAAI Conf. Articial Intelligence, 2023.
-
T. Yu et al., Inpaint Anything: Segment Anything meets image inpaint-ing, arXiv preprint arXiv, 2023.
-
J. Zhao, J. Tan, Y. Huang, and C. Lu, Improved image inpainting exemplar-based algorithms by boundary prior knowledge, ICPCM 2021, MATEC Web of Conferences, 2022.
-
A. Lugmayr et al., RePaint: Inpainting using denoising diffusion probabilistic models, Computer Vision Lab, ETH ZuĀØrich, 2022.
-
S. Poornapushpakala, S. Barani, M. Subramoniam, and T. Vijayashree, Restoration of Tanjore paintings using segmentation and in-painting techniques, Heritage Science, 2022.
-
S. Zhao et al., Large scale image completion via co-modulated genera-tive adversarial networks, Int. Conf. Learning Representations (ICLR), 2021.
-
R. Suvorov et al., Resolution-robust large mask inpainting with Fourier convolutions, Samsung AI Center, EPFL, and associated institutes, 2021.
