Post by Maria Luciana Axente
Making the invisible visible. AI adviser to NATO, UNICEF and Cambridge. Founder, Responsible Intelligence.
AI is already shaping how children think, yet most policy discussions still treat it as a classroom tool.

I spent the day at the European Parliament contributing to a workshop on AI in education, as part of a report commissioned by the Culture Committee that I am co-authoring. AI in education is rapidly becoming a central policy priority across Europe, and rightly so.

The discussions made one thing clear: we are not simply introducing a new tool into education, we are reshaping how children learn, think, and develop in a world where AI is already embedded in their daily lives, both inside and outside the classroom.

Grateful to be working alongside an excellent group of experts, and to the CULT committee members Anastasia Mitronatsiou and Denise Chircop, who are a driving force behind this work. The experts contributing to this discussion included Vicky Charisi, Ph.D, Wayne HOLMES and Irene-Angelica Chounta. The report will be released soon, and it will bring together ethical, pedagogical, and cognitive perspectives into a more integrated view of what responsible AI in education should look like.

A few reflections from the session.

First, children do not learn only within formal education systems. A significant portion of learning now happens through digital platforms, social interaction, and play. Any serious approach to AI in education must account for this broader ecosystem and its impact on development and behaviour.

Second, we need to rethink what “education systems” actually mean. It is no longer sufficient to focus on integrating AI into existing structures. There is a need to design alternative pathways that enable people to live and thrive in an AI-shaped world, not just use the tools (the UK government has just launched an AI apprenticeship).

Third, experimentation must be structured and intentional.
If AI is already being used at scale, then controlled environments such as regulatory sandboxes are essential to test, observe, and refine its impact before widespread and consistent adoption across the EU (see the FCA's approach to sandboxing).

As intelligent machines seem to be actively reshaping childhood itself, the focus needs to be on approaches suited to how AI is influencing children and, by extension, society. This requires thinking beyond existing models.

At Responsible Intelligence, the work sits exactly at this intersection: working with policymakers, industry, and institutions to move beyond conceptual frameworks and translate responsible AI into something operational, embedded in day-to-day decision-making. If this is an area of focus for you, there is space to collaborate.

Gabriela Firea

#ResponsibleAI #AIethics #AIgovernance #HumanCentricAI #AIfuture #AIandSociety #AIleadership #EthicalAI #AIforGood #TrustworthyAI

More about the work of the commission here: https://lnkd.in/eCDJQE4p