Comparison of Different Deep Neural Network Models in the Cultural Heritage Domain
By: Teodor Boyadzhiev, Gabriele Lagani, Luca Ciampi, and more
Potential Business Impact:
Helps computers learn old art better.
The integration of computer vision and deep learning is an essential part of documenting and preserving cultural heritage, as well as improving visitor experiences. In recent years, two deep learning paradigms have been established in the field of computer vision: convolutional neural networks and transformer architectures. The present study makes a comparative analysis of representatives of these two paradigms in terms of their ability to transfer knowledge from a generic dataset, such as ImageNet, to cultural heritage specific tasks. Tests on examples of the VGG, ResNet, DenseNet, Vision Transformer, Swin Transformer, and PoolFormer architectures showed that DenseNet offers the best ratio of performance to computational cost.
Similar Papers
Comparative and Interpretative Analysis of CNN and Transformer Models in Predicting Wildfire Spread Using Remote Sensing Data
CV and Pattern Recognition
Predicts wildfires better using smart computer eyes.
Evaluation and Analysis of Deep Neural Transformers and Convolutional Neural Networks on Modern Remote Sensing Datasets
CV and Pattern Recognition
Helps satellites spot things better from space.
Multi-task Learning for Identification of Porcelain in Song and Yuan Dynasties
CV and Pattern Recognition
Helps tell old pottery apart by computer.