Trust, Credibility and Transparency in Human-AI Interaction: Why We Need Explainable and Trustworthy AI and Why We Need It Now
Abstract
Generative Artificial Intelligence (GenAI) has rapidly evolved to perform complex tasks across diverse domains. Although it has the potential to redefine how we work and learn, generative AI's effectiveness hinges on the extent to which it is trusted by individuals, organizations, and broader societal systems. At the heart of this issue lie three interrelated concepts: trust, credibility, and transparency. In particular, the opaque nature of AI “black boxes,” where sophisticated machine learning algorithms yield outcomes without clear explanations, exacerbates public concern and highlights the necessity of more explainable, responsible AI solutions. Current literature and practice indicate that trust and credibility in AI are multifaceted, encompassing technical, ethical, social, and psychological considerations. This complexity is compounded in educational settings, where generative AI's integration demands robust transparency to mitigate fear, enhance learning outcomes, and secure a social license for AI-driven interventions. Explainable and trustworthy AI marks a significant paradigm shift, offering interpretability at both the model and outcome levels. This approach enables end-users and developers alike to examine the rationale behind AI-driven decisions, preserving human oversight and reinforcing user confidence. However, merely defining explainable and trustworthy AI does not ensure its adoption: the ongoing challenge lies in building AI systems that are simultaneously innovative, transparent, and robust. Moving forward, the credibility and long-term sustainability of AI applications will depend on our collective ability to integrate technical refinements, adaptive regulations, and societal dialogue. By doing so, we can harness GenAI's vast potential as a transformative force, guided by enduring human values rather than overshadowed by unchecked power.
Article Details
This work is licensed under a Creative Commons Attribution 4.0 International License.