Automatic image annotation system using deep learning method to analyse ambiguous images
DOI: http://dx.doi.org/10.21533/pen.v11i2.3517
Copyright (c) 2023 Ali Abbas Al-Shammary, Nizar Zaghden, Med Salim Bouhlel
This work is licensed under a Creative Commons Attribution 4.0 International License.
ISSN: 2303-4521
Digital Object Identifier DOI: 10.21533/pen