With so much talk about AI, ChatGPT and open access to seriously advanced image, voice and video generation software that can create utterly convincing deepfakes in seconds, it is no surprise that many are becoming desensitised to its impact. Deepfakes - videos and images manipulated so well that they appear real and authentic - can be used maliciously to fabricate evidence, spread misinformation, or cause political instability. Deepfakes can deflate our trust in the media, threatening our reliance on believable and trustworthy information sources. High quality deepfakes are well established now, including images of Donald Trump being wrestled to the ground by police that are indistinguishable from real, unedited photographs. From June 2023 your iPhone will be able to replicate your own voice on-device. Apple say, "If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world — and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary."
Depending on which publications you read, AI will soon be our personal assistant, put us out of work, overthrow humanity, or something in between. Historian Yuval Noah Harari recently explained at the Frontiers Forum that the impact of AI on humanity is likely to be much more mundane, yet more threatening, than any of this. He starts with the long-held assumption that for AI to pose any kind of serious threat to humanity, it must develop sentience and learn to navigate the physical world: the former is highly unlikely, and the latter is probably decades away^.
But right now, AI is already able to fool our senses. With compelling voice, image and video, there is already growing evidence of what AI researcher Aviv Ovadya calls reality apathy. When faced with an overload of decision-making over whether what one sees and hears is real or not - when there is no sense of what is real and what is not - people give up.
This is the first problem facing all who receive information through any kind of medium. And so this is the first problem that education systems have to deal with to head off any kind of AI armageddon. Or AI-mageddon if you will.
The second strength of today's AI is language. It is not only attaining mastery of language but has already surpassed average human ability, and it continues to learn and grow at frightening speeds that we mere humans cannot match. The teacher's dream?
Fictions: the framework of trust that holds society together
*In "Sapiens: A Brief History of Humankind", Harari posits that humans have an unparalleled capacity to create and believe in fictions, or imagined concepts that have no objective reality. These fictions, Harari argues, have been essential to human cooperation and social organization, and have played a critical role in shaping the course of human history. Money, for example, is a shared belief system that enables humans to exchange goods and services in a complex economy. While money has no intrinsic value, we accept the value of paper money, coins, or digital currencies because society agrees to do so. This belief in the value of money has allowed for the creation of complex economic systems that animate modern societies.
Another important fiction is the nation-state. Nationalism, Harari argues, is a relatively recent phenomenon in the scope of history - only a few centuries old - yet it has become such a powerful force that it underpins many of the world's most salient political and economic divisions. The idea of a shared national identity, connected by common history, language, culture or territory, had profound effects on the 20th century, contributing both to the rise of democracy and to two violent world wars. Today, nation states continue to shape politics, economics, and society.
Religion, corporations and territorial borders have played a crucial role in human societies by providing shared myths and moral codes that unite people and support group cohesion. When differing beliefs meet in any of these fictions and concepts, we often see conflict.
The capacity to believe in and create fictions is what sets humans apart from other species. These imagined concepts allow us to co-operate towards common goals, to form complex social structures, and to create and maintain societies. At their best, fictions improve the well-being of individuals and communities by providing order, structure, and shared goals. However, what happens when these fictions can no longer be trusted?
How will generative AI threaten these fictions?
These fictions - the stories that thread through the skeleton of our societies - rely on language. Language is the tool we use to instruct our banks. To create laws, and the judgements used to serve justice. To write scriptures. To form relationships with one another. And AI has, to borrow another term from Harari's keynote, just hacked our master key.
Generative AI, or the ability for machines to create new content that mimics human creations, has the potential to threaten the existing fictions that humans have created. By its very nature, generative AI could upend the social constructs and imagined ideas upon which human societies have been built, with significant implications for how society is organized, how we conduct business, and our identity as a species. As machine learning models continue to advance, they will learn to mimic and reproduce the rules of these systems and the economics around them, challenging the established authorities as the definitive source of protocol and arbiter of dispute.
With generative AI now at the point where it can replicate text, images, voice and videos that resemble reality with great accuracy, it not only creates our aforementioned 'reality apathy' but threatens the credibility of existing ideas, myths, and fictions. People could start to question whether the concept of law or money is real, or whether their whole belief system is based on a fictional construct. Harari argues that AI is leading us to question the very nature of reality itself. It can already create images, videos, and sounds that are indistinguishable from those produced in the real world. This hyper-realism could lead us to question whether we can trust our own perceptions, drifting towards a minimalist, individualistic existence.
In this regard, the biggest risk posed by advanced AI is not that it will overpower us through sentience or by navigating the physical world, but that it will shift our understanding of what it means to be human, what we believe in, and our role in the world. AI doesn't need to be sentient; it only needs us to believe in our relationship with it. By using language and deepfakes to steer us away from trusting our own reality, and from believing in our existing fictions, AI might well cause the collapse of societal cooperation, leading to greater conflict. Very much within our lifetimes.
How do AI's mastery of language and reality apathy impact mass education?
Once we acknowledge that it's not The Terminator or The Matrix that we have to look out for, we can accept that developing the literacy and skillset to avoid the collapse of social structures is well within our reach. School students today will unquestionably be dealing with these two broad threats: AI mastery of language, which threatens our existing fictions; and deepfakes (both malicious and benevolent) leading to reality apathy. We are already beginning to deal with this, and so must equip young people with the skills to tackle these, starting today, lest we fall too far behind to catch up.
Starting today, students need the insights and understanding to grasp the implications of AI on social constructs.
Starting today, all students need to learn adequate computer science and machine learning techniques, in order to differentiate reality from the deepfakes made possible by generative AI.
Schools often focus on the traditional notion of critical thinking, whereby students consume predefined knowledge and apply analytical and abstract thinking to solve complex challenges. Starting today, we must prepare students to think critically about the future, the unknowable, and how different fictions - or the lack of them - could affect their lives.
Starting today, students should be taught not only to consume existing knowledge but also to identify and make sense of patterns, to imagine every potential scenario that a fiction could create, and to create and assess the benefits and risks of these scenarios.
Starting today, educators should design learning pathways that expand critical thinking in a complex system and develop foresight by implementing multidisciplinary techniques, simulations of potential outcomes, and real-world problem-solving.
Children should be introduced to the fundamental concepts of critical thinking, foreseeing, pattern recognition, computer science, artificial intelligence, and coding in the early years of their education, providing them with a transparent view of the digitised world they already inhabit.
A powerful argument against all of these recommendations to start implementing today is that education moves slowly. Change is a dirty word in schools. And that isn't necessarily the fault of teachers or school leaders. The education system is so complex that even small changes can take years to embed. Curriculum pathways are (or should be) articulated over many years. Even taking the tiny two-year slice of pre-university examined courses in isolation, curriculum changes are far too slow to equip even today's 11-year-old students with the agility they will need at 18. The International Baccalaureate Diploma Programme (IBDP), for example, is widely considered a leading light of holistic, progressive education on the innovative end of modern learning. But its courses currently take up to seven years to complete a review and update.
So how do we possibly adapt our teaching to overcome the double threat of AI, starting today? We can, and we must.
What we can do today in our schools
Look for existing opportunities in the curriculum you already use. For example:
The IBDP includes a mandatory Theory of Knowledge course. This course is designed to challenge assumptions about knowledge, its sources and the plurality of truth. Media and information literacy is already baked in and easy to adapt to the growing mastery of AI.
PSHE or citizenship courses provide opportunities to transparently explore society's current cooperation mechanisms, and to discuss possible scenarios of AI intervention. Students must learn to understand the morality of the algorithms they discover and their potential consequences, incorporating ethics into discussions of technology use, both as creators and as consumers.
Existing ICT, computer science and digital design courses all offer their own viewpoints on the inner workings of the technology and its capabilities. By drawing back the curtain and understanding how AI generates material, and its limitations, we can learn to spot tell-tale signs of deepfakes. Ensure these courses offer a look at today's technology too.
Cultivate critical thinking: students must develop their critical thinking skills to determine which sources of information are credible. Similarly to the TOK opportunities above for IBDP schools, any course in History, English Language or Literature, or Philosophy and Ethics, for example, is ripe for tweaks that explore the generation and provenance of source material within existing objectives.
The Sciences, the Arts and other Humanities subjects can also develop future-facing critical thinking about the analytics, authenticity, and credibility of information. Include discussions about how we know this, who has generated and disseminated it, what their agenda may be, and what the possible implications would be if it turns out to be false.
Encourage students to reflect on their own biases and incorporate structured deliberation to address differing viewpoints, in all disciplines.
Increase collaboration amongst students. Group projects designed to encourage teamwork, empathy with peers, and the discovery of distinct viewpoints can offer students a diverse set of perspectives to evaluate, better equipping them to recognise the value of cooperation and respect should the fictions that currently do this job break down.
For schools without dedicated technology curricula, integrate AI technology into existing subjects. While this requires a review of units and won't necessarily happen today, teachers can use integrated technologies, such as simulation games, predictive models, big data and computer automation, within their existing units to create learning visualisations for students, extend their understanding of the fundamental concepts of technology, and observe potential outcomes of developed or collapsed fictions within their discipline. This will assist students in comprehending and envisioning the world, providing a solid foundation for them to build their awareness of the world they inhabit, not the one we did.*
And now, to close, let me ask you: did you have any kind of emotional response when reading this article? Maybe you disagreed with me, felt a pang of fear, or a spark of excitement about something you could do with students? All the images were generated by AI, but you already expected that: it's par for the course already. Everything between the two *asterisks* above was written by AI. I had a conversation with it which, for my part, consisted of four or five brief questions and a two-sentence prompt. I edited the text for clarity, removed repetitive paragraphs and shifted the order of some passages around - about the same as a copy editor for a newspaper will do. So if you formed any kind of feeling in response to any of these words, then AI doesn't need to become sentient to realise the nightmares of science fiction. It only "needs to inspire feelings in us in order for us to get attached to it." (Harari).
The world has changed. It will continue to change. Young people deserve to be primed to thrive in it.
^ Although Harari's argument relied on AI being a very long way from navigating the physical world, I do own a car that allows me to tell it where I want to go, then be driven by the car, which uses its onboard cameras, sensors and AI to navigate the streets and obstacles to get me there. There is an argument that AI is already well on the way to meeting this criterion.