From Stories to Sense-Making: Ethical AI as a Learning Practice
Learning to Listen in the Age of AI
Across education and organizational learning, AI is no longer a future topic — it is already shaping how we learn, teach, assess and make sense of knowledge. Yet many institutions experience a gap between policy discussions and lived reality. We talk about AI, but we rarely listen to those who are already navigating its everyday implications.
In my work as a community of practice facilitator, researcher and learning designer, I have found that sustainable learning futures emerge from peer exchange and collective sense-making. This is where the AI-enhanced 2CG® “storylistening” approach, combined with the Telluz™ diagnostic tool, has become a meaningful pathway for working with uncertainty rather than trying to eliminate it.
From Stories to Shared Understanding
With our storylistening formats, we invite learners, educators and organizational stakeholders to share their experiences as narratives — including tensions, doubts and ethical ambivalence. We do not treat these stories as anecdotes, but as data with meaning. AI is used carefully to identify patterns across the narratives, while interpretation remains a human, dialogic process. The Telluz diagnostic tool adds a complementary perspective by mapping dimensions such as trust, clarity, readiness for change and perceived fairness. Together, the narrative insights and diagnostics create a mixed-methods learning space where meaning and measurement inform one another.
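To make the pattern-recognition step more concrete, the sketch below shows one way such AI support could look in principle. It is an illustrative assumption on my part, not the actual 2CG® or Telluz™ tooling: short narrative excerpts are embedded and grouped into candidate themes, with every grouping kept traceable to the original quotes so that naming and interpreting the themes stays a human, dialogic step. The example stories, the embedding model and the number of themes are placeholders.

```python
# Illustrative sketch only — not the actual 2CG® / Telluz™ tooling.
# Narrative excerpts are embedded and grouped into candidate themes;
# interpretation of those themes remains with human facilitators.
from collections import defaultdict

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

stories = [
    "I use AI for homework but I never mention it to my teacher.",
    "Grading with AI feels unfair to some of my classmates.",
    "AI summaries help me prepare feedback for students much faster.",
    "I am unsure whether AI is allowed in creative assignments.",
]

# Encode each excerpt as a sentence embedding (model choice is a placeholder).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(stories)

# Group similar narratives; the number of candidate themes is a
# facilitation choice, not something the model decides on its own.
n_themes = 2
labels = KMeans(n_clusters=n_themes, random_state=0, n_init=10).fit_predict(embeddings)

# Keep full provenance: every candidate theme lists its supporting quotes,
# and naming the theme remains a human, dialogic step.
themes = defaultdict(list)
for story, label in zip(stories, labels):
    themes[label].append(story)

for label, quotes in sorted(themes.items()):
    print(f"Candidate theme {label}:")
    for quote in quotes:
        print(f"  - {quote}")
```

In practice, the value of such a step lies less in the clustering itself than in the traceability: facilitators can always move from a pattern back to the concrete stories behind it.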
Good Practice: The TGM Case Study
In the past semester, our approach was applied at TGM – Technologisches Gewerbemuseum in Vienna, Austria’s second-largest vocational school at upper secondary level. To the best of my knowledge, this represents the first structured AI ethics and learning scan in an Austrian vocational school context, and the findings resonate far beyond TGM.
We observed a strong moral dissonance: students and teachers often articulate ethical concerns about AI, yet still engage in practices they themselves question. AI use frequently remains hidden, from homework preparation to feedback, which undermines transparency and trust. What is more, teachers described role tensions: they feel ethically responsible for modeling appropriate AI use while lacking sufficient guidance and institutional support. At the same time, the scan revealed a broader redefinition of knowledge and learning goals: factual knowledge remains important, but is increasingly reframed through AI-supported practices. Interestingly, assessment practices emerged as a particularly fragile area: AI use in tests, creative tasks and grading is widespread, yet rarely acknowledged, blurring academic norms. Inclusion has also surfaced as a critical issue, with gender-diverse and underrepresented students calling for AI systems that reflect their identities and values.
Importantly, students are not passive recipients of change: many show active, critical reflection on how AI shapes their learning and sense of self, but lack shared frameworks to navigate this reflection.
While grounded in an educational setting, these insights are highly relevant for HR and L&D professionals. Public and corporate organizations face similar challenges around skills development, assessment, responsibility and identity in AI-mediated working environments. Personally, I believe that listening systematically to lived experience, and grounding that listening in ethical, mixed-methods research, is becoming a strategic learning capability. A book chapter exploring the TGM case and its implications is currently in preparation. For me, this work is less about showcasing innovation and more about contributing to a learning culture that treats listening, ethics and uncertainty as core competencies for the future.
Methodology Box
What is 2CG® Storylistening?
A structured qualitative approach that collects and analyzes narratives from learners, educators or employees to understand lived experience, meaning-making and cultural dynamics.
What does Telluz™ add?
Telluz is a diagnostic tool that quantitatively assesses dimensions such as trust, clarity, readiness for change and perceived fairness.
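As a purely hypothetical illustration (not the actual Telluz instrument or its scoring model), a minimal sketch of how Likert-style item responses could be aggregated into per-dimension scores might look like this:

```python
# Hypothetical illustration only — not the actual Telluz™ instrument.
# Likert-style responses (1 = low, 5 = high) are averaged per dimension.
from statistics import mean

responses = {
    "trust":                [4, 3, 5, 2],
    "clarity":              [2, 3, 2, 3],
    "readiness_for_change": [4, 4, 3, 5],
    "perceived_fairness":   [3, 2, 2, 4],
}

dimension_scores = {dim: round(mean(items), 2) for dim, items in responses.items()}
print(dimension_scores)
# {'trust': 3.5, 'clarity': 2.5, 'readiness_for_change': 4.0, 'perceived_fairness': 2.75}
```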
How is AI used?
AI supports transparent pattern recognition across large sets of stories (e.g. themes, emotions, tensions). Interpretation and ethical judgment remain human-led.
Where can the mixed-methods approach be applied?
Educational institutions, vocational schools, universities, HR and L&D contexts, and organizations undergoing cultural or learning transformation.