Emulation and understanding the emotion according to Generative Artificial Intelligence - Case study of emotional component extracted from visual artworks

Authors

  • Umberto Bilotti University of Salerno
  • Lucia Campitiello University of Salerno
  • Michele Domenico Todino University of Salerno
  • Maurizio Sibilio University of Salerno

DOI:

https://doi.org/10.32043/jimtlt.v3i4.124

Keywords:

Generative Artificial Intelligence, text-to-image, AI learning

Abstract

Artificial Intelligence can emerge as a new generative force in educational technologies, particularly through Generative Artificial Intelligence (GAI), which makes the production of various types of digital content faster and more accessible than ever. Implementing these technologies properly in the educational context requires multidisciplinary and shared reflection. This article surveys current GAI methods for image creation and proposes a study of the machine's current capabilities in interpreting and emulating human emotional phenomena. The extraction of the emotional component from visual artworks is chosen as a case study, and several possible interventions are identified.

References

Achlioptas, P., Ovsjanikov, M., Haydarov, K., Elhoseiny, M., & Guibas, L. J. (2021). Artemis: Affective language for visual art. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11569-11579).

Di Tore, S. (2016). La tecnologia della parola. Didattica inclusiva e lettura. FrancoAngeli.

Di Tore, S., Campitiello, L., Caldarelli, A., Todino, M. D., Di Tore, P. A., Iannaccone, A., & Sibilio, M. (2022). Education in the metaverse: Amidst the virtual and reality [L'educazione nel metaverso: tra virtuale e reale]. https://www.researchgate.net/publication/367092002

Guo, C., Lu, Y., Dou, Y., & Wang, F. Y. (2023). Can ChatGPT boost artistic creation: The need of imaginative intelligence for parallel art. IEEE/CAA Journal of Automatica Sinica, 10(4), 835-838.

Hsu, T. C., Abelson, H., Lao, N., Tseng, Y. H., & Lin, Y. T. (2021). Behavioral-pattern exploration and development of an instructional tool for young children to learn AI. Computers and Education: Artificial Intelligence, 2, 100012.

Jovanovic, M., & Campbell, M. (2022). Generative artificial intelligence: Trends and prospects. Computer, 55(10), 107-112.

Lu, Y., Guo, C., Dai, X., & Wang, F. Y. (2023). Generating Emotion Descriptions for Fine Art Paintings via Multiple Painting Representations. IEEE Intelligent Systems.

Luigini, A. (2023). AI imaging, imagery and imagination: Considerations on a future that is already present, for a digital humanism in poietic and educational processes. In IMG23: Atti del IV Convegno Internazionale e Interdisciplinare su Immagini e Immaginazione = Proceedings of 4th International and Interdisciplinary Conference on Images and Imagination (pp. 336-343). Publica.

Luo, S., Lan, Y. T., Peng, D., Li, Z., Zheng, W. L., & Lu, B. L. (2022, July). Multimodal emotion recognition in response to oil paintings. In 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (pp. 4167-4170). IEEE.

Mao, H., Cheung, M., & She, J. (2017, October). Deepart: Learning joint representations of visual arts. In Proceedings of the 25th ACM international conference on Multimedia (pp. 1183-1191).

Mohammad, S., & Kiritchenko, S. (2018, May). WikiArt Emotions: An annotated dataset of emotions evoked by art. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).

Panciroli, C., Rivoltella, P. C., Gabbrielli, M., & Richter, O. Z. (2020). Artificial intelligence and education: New research perspectives [Intelligenza artificiale e educazione: nuove prospettive di ricerca]. Form@re - Open Journal per la formazione in rete, 20(3), 1-12.

Park, T., Liu, M. Y., Wang, T. C., & Zhu, J. Y. (2019). GauGAN: Semantic image synthesis with spatially adaptive normalization. In ACM SIGGRAPH 2019 Real-Time Live! (pp. 1-1).

Patrick, G. (2023, July). Artificial intelligence and art: A university curriculum course for undergraduates. In 2023 14th IIAI International Congress on Advanced Applied Informatics (IIAI-AAI) (pp. 667-670). IEEE.

Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125, 1(2), 3.

Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., ... & Sutskever, I. (2021, July). Zero-shot text-to-image generation. In International Conference on Machine Learning (pp. 8821-8831). PMLR.

Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E. L., ... & Norouzi, M. (2022). Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35, 36479-36494.

Todino, M. D. (2023). Accessibility in museums: How to combine accessible spaces, technology, and "shared languages" to ensure the inclusion of all visitors. Giornale Italiano di Educazione alla Salute, Sport e Didattica Inclusiva - Italian Journal of Health Education, Sports and Inclusive Didactics, 7(3). Edizioni Universitarie Romane.

Wang, S., Saharia, C., Montgomery, C., Pont-Tuset, J., Noy, S., Pellegrini, S., ... & Chan, W. (2023). Imagen editor and editbench: Advancing and evaluating text-guided image inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 18359-18369).

Yanulevskaya, V., Uijlings, J., Bruni, E., Sartori, A., Zamboni, E., Bacci, F., ... & Sebe, N. (2012, October). In the eye of the beholder: employing statistical analysis and eye tracking for analyzing abstract paintings. In Proceedings of the 20th ACM international conference on multimedia (pp. 349-358).

Zhang, J., Wang, Y., Tohidypour, H. R., & Nasiopoulos, P. (2023, October). Detecting Stable Diffusion generated images using frequency artifacts: A case study on Disney-style art. In 2023 IEEE International Conference on Image Processing (ICIP) (pp. 1845-1849). IEEE.


Published

2024-02-27

How to cite

Bilotti, U., Campitiello, L., Todino, M. D., & Sibilio, M. (2024). Emulation and understanding the emotion according to Generative Artificial Intelligence - Case study of emotional component extracted from visual artworks. Journal of Inclusive Methodology and Technology in Learning and Teaching, 3(4). https://doi.org/10.32043/jimtlt.v3i4.124
