Transitional Inferences: Cognitive and Technological Hybridizations for an Epistemically Responsible Education
Keywords:
Hybrid inference, Explainable Artificial Intelligence (XAI), Epistemic responsibility, Critical abduction, Digital citizenship, Artificial intelligence in education

Abstract
In today’s educational landscape, the growing presence of Artificial Intelligence (AI) is reshaping learning processes and introducing new epistemological challenges. Inferential practices – deductive, inductive and, above all, abductive – now constitute a meeting point between the human mind and algorithmic logic. The present paper explores these “fertile hybridizations” between cognitive models and intelligent systems, analyzing how symbolic, sub-symbolic and neuro-symbolic technologies implement the different forms of inference, and how these influence educational contexts. The research develops in three phases: (1) a conceptual reconstruction of the inferential forms in the logical tradition and in their algorithmic translation; (2) a critical analysis of AI-based educational technologies, with a focus on intelligent tutoring, adaptive learning and predictive assessment tools; (3) the proposal of a teaching model based on hybrid inference and oriented towards critical digital citizenship. The expected results concern the development of transversal epistemic skills such as critical awareness of sources, uncertainty management and the ability to distinguish between intuitive and explainable inference. In this sense, education is framed not as the simple adoption of technologies, but as the construction of a dialogical, cooperative and ethically grounded environment, capable of forming responsible citizens in the age of AI.
References
Ane, B., & Nepa, G. (2024). Artificial intelligence prediction model for educational knowledge representation through learning performance. Research on Education and Media, 16(2), 1-10. https://doi.org/10.2478/rem-2024-0011
Aristotele. (1984). Organon (Italian trans.). Roma-Bari: Laterza.
Bacon, F. (1620/2000). Novum Organum. Milano: Bompiani.
Bruner, J. (1960/2009). The Process of Education. Cambridge, Mass.: Harvard University Press.
Bruner, J. S. (1961). The act of discovery. Harvard Educational Review, 31, 21–32.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
Chen, J., Liao, Q. V., Vaughan, J. W., & Bansal, G. (2023). Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations. arXiv:2301.07255. https://doi.org/10.48550/arXiv.2301.07255.
Dorsch, J., & Moll, M. (2024). Explainable and Human-Grounded AI for Decision Support Systems: The Theory of Epistemic Quasi-Partnerships (EQP). arXiv preprint. https://arxiv.org/abs/2409.14839
Eco, U. (1979). Lector in fabula. La cooperazione interpretativa nei testi narrativi. Milano: Bompiani.
Flach, P. (2012). Machine Learning: The Art and Science of Algorithms that Make Sense of Data. New York: Cambridge University Press.
Floridi, L. (2023). The Ethics of Artificial Intelligence in Education. Oxford: Oxford University Press.
Garcez, A. d., Gori, M., Lamb, L. C., Serafini, L., Spranger, M., & Tran, S. N. (2019). Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning. arXiv:1905.06088. https://doi.org/10.48550/arXiv.1905.06088
Gödel, K. (1931/1992). On Formally Undecidable Propositions. New York: Dover Publications.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. Cambridge, Mass.: MIT Press.
Gunning, D., Aha, D., & Hodges, J. (2019). DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Magazine, 40(2), 44–58. https://doi.org/10.1609/aimag.v40i2.2850
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G. Z. (2019). XAI—Explainable Artificial Intelligence. Science Robotics, 4(37), eaay7120. https://doi.org/10.1126/scirobotics.aay7120
Hilbert, D., & Ackermann, W. (1959). Grundzüge der theoretischen Logik. Berlin, Göttingen, Heidelberg: Springer-Verlag.
Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston: Center for Curriculum Redesign.
Johnson-Laird, P. N., & Byrne, R. M. J. (1991). Deduction. Hove, Hillsdale: L. Erlbaum Associates.
Kahneman, D. (2011). Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.
Lipman, M. (2003). Thinking in Education. New York, NY: Cambridge University Press.
Luckin, R. (2018). Machine Learning and Human Intelligence: The Future of Education for the 21st Century. London: UCL IOE Press.
Magnani, L. (2009). Abductive Cognition. Berlin/Heidelberg: Springer.
Mill, J. S. (1882). A system of logic, ratiocinative and inductive. Toronto: University of Toronto Press.
Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113–126. https://doi.org/10.1145/360018.360022
Nuzzaci, A. (2025). Promoting Inferential Processes in Educational Contexts in the Age of Artificial Intelligence. Italian Journal of Health Education, Sports and Inclusive Didactics, 9(2), 1-25. https://doi.org/10.32043/gsd.v9i2_Sup.1
Pedwell, C. (2023). Intuition as a “trained thing”: sensing, thinking, and speculating in computational cultures. Subjectivity 30, 348–372. https://doi.org/10.1057/s41286-023-00170-x
Peirce, C. S. (1934). Collected Papers of Charles Sanders Peirce. Cambridge, MA: Harvard University Press.
Polanyi, M. (2009). The Tacit Dimension. Chicago: The University of Chicago Press.
Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Hoboken, NJ: Pearson.
Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: Theory, pedagogy, and technology. In R. K. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences (pp. 397–417). New York: Cambridge University Press.
Shortliffe, E. H. (1976). Computer-Based Medical Consultations: MYCIN. New York, NY: Elsevier.
Suresh, H., & Guttag, J. (2021). A framework for understanding unintended consequences of machine learning. arXiv:1901.10002v5. https://arxiv.org/abs/1901.10002v5
UNESCO. (2019). Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development. Paris: UNESCO.
Valiant, L. G. (2013). Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World. New York: Basic Books.
Vygotskij, L. S. (1978). Mind in Society. Cambridge, MA: Harvard University Press.
Zhang, J. (2024). Strengthening Human Epistemic Agency in the Symbiotic Learning Partnership With Generative Artificial Intelligence, 54(1), 1-11. https://www.researchgate.net/publication/391337614
License
Copyright (c) 2025 Antonella Nuzzaci

This work is licensed under a Creative Commons Attribution - NonCommercial - NoDerivatives 4.0 International license.