Reviewing the AILit Framework
The European Commission and the OECD just released an AI Literacy Framework for teachers, leaders, policymakers, and learning designers. Here’s their mission statement:
Artificial Intelligence (AI) has become a widely used tool in our everyday life, including for learning, personalized assistance, and entertainment. Therefore, young people must be able to understand how AI works, its societal impact, and how to use it ethically in order to be prepared for a society and economy in the age of AI.
I sympathize with Miriam Reynoldson’s skepticism of the term “AI literacy” and of the place of AI in education writ large. Terminology aside, she and I agree that students genuinely need critical literacy and skills for good judgement, and that nurturing those competencies requires us to understand and incorporate AI. In some ways, it’s no different from the impact of past technologies such as the World Wide Web and social media.
I encourage you to read the whole framework. It synthesizes many important concepts and reflects a great deal of careful thought. If you have ideas of your own, you can submit feedback. I did, and I’m sharing mine here.
About the framework
The framework is organized around four domains: engaging with AI, creating with AI, managing AI, and designing AI. Within each is a set of practical competencies that people can use to design learning activities. Underlying all of these is a set of knowledge, skills, and attitudes.
Most of my feedback related to knowledge and competencies. I think they’re generally very good, and I had no real suggestions for improving the existing ones, but I identified gaps that I thought should be filled. I’ll quote relevant context from the document, then add my feedback. (Note: I’ve made some minor edits since submitting the form online.)
Missing knowledge
The framework divides knowledge into five groups; I had ideas for three of them.
The Nature of AI
K1.1: AI systems use algorithms that combine step-by-step procedures with statistical inferences (e.g., weights and biases) to process data, detect patterns, and generate probable outputs.
K1.2: Machines “learn” by inferring how to generate outputs such as predictions, content, and recommendations that influence physical or virtual environments, in response to information from the input they receive. They do so with varying levels of autonomy and adaptiveness after deployment.
K1.3: Generative AI uses probabilities to generate human-like outputs across various modalities (e.g., text, audio, visuals) but lacks authentic understanding and intent.
K1.4: AI systems operate differently depending on their purpose, whether to create, predict, recommend, or respond.
To these, I’ve added two more:
K1.5: What people commonly call AI today is part of a much broader field of study, one as old as computing itself. Not all AI is designed to be general-purpose intelligence. For example, people also apply AI narrowly to solve challenging problems in areas like games, science, and math.
K1.6: People are continually improving the capabilities of AI, and the way it works now is not necessarily the way it will work in the future, which may change our fundamental understanding of it. Its progress is accelerating rapidly; very large changes may happen very quickly.
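To make K1.1 and K1.3 concrete for learning designers, here’s a minimal sketch of the mechanics they describe: weights and biases turn input features into scores, scores become probabilities, and a generative output is sampled from those probabilities. Everything here, the vocabulary, the weights, the features, is invented for illustration; it’s not from the framework or any real model.

```python
import math
import random

# Toy illustration of K1.1 and K1.3. The "model" is just weights and
# biases that turn input features into scores; softmax turns scores
# into probabilities; generation samples from those probabilities.
# All numbers and words are invented for illustration.
VOCAB = ["sunny", "rainy", "cloudy"]

WEIGHTS = [
    [2.0, -1.0, 0.5],   # how the first feature pushes each word's score
    [-1.5, 2.5, 0.0],   # how the second feature pushes each word's score
]
BIASES = [0.1, -0.2, 0.3]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(features):
    # Step-by-step procedure (the algorithm)...
    scores = [
        bias + sum(f * WEIGHTS[i][j] for i, f in enumerate(features))
        for j, bias in enumerate(BIASES)
    ]
    # ...combined with statistical inference: a *probable* output,
    # sampled rather than looked up, with no understanding or intent.
    probs = softmax(scores)
    return random.choices(VOCAB, weights=probs, k=1)[0]

print(generate([0.9, 0.1]))  # usually "sunny", but not always
```

Running it a few times shows the same input yielding different outputs, which is the heart of “probable outputs” in K1.1 and the lack of intent in K1.3.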
AI Reflects Human Choices and Perspectives
K2.1: Building and maintaining AI systems relies on humans to design algorithms, collect and label data, and moderate harmful content. These systems reflect human choices, assumptions, and labor practices, shaped by unequal global conditions.
K2.2: AI is trained on vast datasets sourced from publicly available information, user-generated content, curated databases, and real-world data collected through sensors, interactions, and digital systems.
K2.3: AI systems gather new data from interactions with users; decisions, processes, and outputs may be directly influenced by inputs in real time.
K2.4: AI systems are trained to identify patterns among data elements that humans have selected, categorized, and prioritized.
K2.5: Bias inherently exists in AI systems, which can also reflect societal biases embedded in its training data or algorithm design. Humans can perpetuate or mitigate harmful biases in AI systems during the design, development, or testing process.
I think there’s one missing:
K2.6: Digital information does not fully represent human knowledge and experience, so AI cannot reflect the full complexity of humanity. Unequal access to the Internet means that many voices, cultures, and perspectives are not represented in training data in proportion to their presence in the world. Some ways people exist and interact in the world are never explicitly encoded as digital information, which means those forms of data cannot be included in AI training at all.
This proposed addition is similar to K2.1 and K2.5, but I think it’s distinct because it addresses challenges that are far removed from individual human choices. K2.1 stresses that humans make choices about collecting and labeling data, but doesn’t address the conditions implicit in online information. K2.5 relates more to biases contained in training data, but says nothing about the outright absence of certain classes of data.
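To illustrate the kind of skew K2.5 and my proposed K2.6 describe, here’s a toy sketch with entirely invented data: a single decision threshold is “learned” from examples dominated by one group, then evaluated on both groups.

```python
import random

random.seed(0)

# Toy illustration of K2.5/K2.6: all groups, scores, and counts are
# invented. Qualified applicants score higher than unqualified ones,
# but group B's scores run lower overall, and group B is barely
# present in the training data.
def sample(group, qualified):
    center = 0.7 if qualified else 0.4
    if group == "B":
        center -= 0.1  # group B's scores run lower for unrelated reasons
    return center + random.uniform(-0.15, 0.15)

train = [("A", q, sample("A", q)) for q in (True, False) for _ in range(200)]
train += [("B", q, sample("B", q)) for q in (True, False) for _ in range(5)]

# "Training": pick the threshold with the lowest overall error.
def error(t, data):
    return sum((s >= t) != q for _, q, s in data) / len(data)

best = min((t / 100 for t in range(100)), key=lambda t: error(t, train))

# Evaluate on balanced fresh data: the threshold serves group A well
# and misclassifies many qualified group-B applicants.
for g in ("A", "B"):
    test = [(g, q, sample(g, q)) for q in (True, False) for _ in range(500)]
    print(g, round(error(best, test), 3))
```

Group B isn’t mistreated by any single deliberate choice; it’s simply scarce in the data, so the learned threshold never fits it well.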
AI’s Capabilities and Limitations
K4.1: AI excels at pattern recognition and automation but lacks emotions, ethical reasoning, context, and originality.
K4.2: AI requires vast amounts of computing power and data, which consumes energy, thus demanding limited natural resources and increasing carbon emissions. AI’s long-term sustainability impact, both positive and negative, largely depends on how it is implemented and utilized.
K4.3: The capability of generative AI, particularly large language models (LLMs), to generate human-like content can make it difficult to distinguish fact from fabrication, increasing the potential to generate misinformation, deepfakes, or manipulative materials.
I would add one more:
K4.4: As AI improves, it will become harder to recognize what lies beyond its capability, and easier to assume it has human qualities that it may not have. At some point, it may reach a level where many, or all, people believe that it has achieved or surpassed human intelligence and capability, and that it possesses qualities such as morals and emotions.
Missing Competencies
Engaging with AI
Recognize AI’s role and influence in different contexts.
Learners identify the presence of AI in everyday tools and systems and consider its purpose in various situations, such as content recommendations or adaptive learning. They reflect on how AI influences their choices, learning, and perceptions.
Evaluate whether AI outputs should be accepted, revised, or rejected.
Learners critically assess the accuracy and fairness of AI-generated content, recognizing that AI can generate misinformation or biased outputs. They decide whether to trust, modify, or override AI outputs by considering their potential impact on themselves and others.
Examine how predictive AI systems provide recommendations that can inform and limit perspectives.
Learners explore how AI uses data patterns to offer suggestions (e.g., what to watch, buy, or read) and consider how those recommendations may both support learning or decision-making and reinforce narrow viewpoints or biases.
Explain how AI could be used to amplify societal biases.
Learners investigate how AI systems, such as facial recognition or hiring algorithms, reflect human decisions and data, and identify ways that bias in data or design can lead to unfair outcomes for different groups of people.
Describe how AI systems consume energy and natural resources.
Learners explore the environmental impact of AI, including its energy and data infrastructure, and consider how responsible design and use can support sustainability.
Analyze how well the use of an AI system aligns with ethical principles and human values.
Learners assess whether using AI in a given situation, such as surveillance cameras in public spaces or moderating online content, supports values such as fairness, transparency, and privacy. They reflect on whether its use is appropriate, beneficial, or potentially harmful.
Connect AI’s social and ethical impacts to its technical capabilities and limitations.
Learners explore how AI’s strengths and weaknesses affect how it’s used in society. They connect the design and function of AI systems to real-world impact on people, communities, and systems.
To these, I’ve added three more:
Compare different types and applications of AI.
Learners explore the broader context of AI beyond generative models. They contrast different kinds of AI and explore their impacts.
Evaluate the motives of, and the forces that act on, the people and companies who are creating AI and providing AI services.
Learners reflect on why people are pursuing the development and use of AI. They examine societal, governmental, and economic forces, and the influence those forces have on the development, application, and deployment of AI.
Recognize the impact of AI on human relationships.
Learners discuss the potential for people to engage in relationships with AI, both now and in the future. They predict possible positive and negative outcomes for individuals and for society.
Creating with AI
Use AI systems to explore new perspectives and approaches that build upon original ideas.
Learners experiment with AI to expand their thinking, generate new ideas, or consider alternative viewpoints. They stay accountable for the final content while letting AI support their creative process.
Visualize, prototype, and combine ideas using different types of AI systems.
Learners try out AI tools that operate in different formats (text, images, music, etc.) to explore and refine new ideas. They combine outputs into a meaningful product or solution.
Collaborate with generative AI systems to elicit feedback, refine results, and reflect on thought processes.
Learners engage in an iterative process with AI by testing prompts and refining AI-generated outputs, and then reflect on how the interaction shaped their thinking and choices.
Analyze how AI can safeguard or violate content authenticity and intellectual property.
Learners explore how AI-generated content may borrow from or replicate existing work, and consider when that use is fair, original, or in need of attribution. They reflect on the ethical implications of AI-assisted creation.
Explain how AI systems perform tasks using precise language that avoids anthropomorphism.
Learners describe how AI operates in realistic, accurate terms, avoiding language that suggests AI has human feelings or understanding. They understand that their language can either clarify or perpetuate misconceptions about AI.
To these, I’ve added one:
Analyze how the input to a generative AI tool affects its output.
Learners discover how generative AI is sensitive to the content of its prompts. They explore techniques for steering outputs toward their goals, and reflect on their discoveries.
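To make prompt sensitivity tangible without any real AI service or API, here’s a toy sketch using a tiny bigram model over an invented corpus. The same “model” and the same random seed produce different continuations purely because the prompt differs.

```python
import random

# Toy bigram "language model" over a tiny invented corpus, just to
# show that the prompt steers which continuations are probable at all.
CORPUS = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog . the dog chased the ball .").split()

# Count which word follows which.
follows = {}
for a, b in zip(CORPUS, CORPUS[1:]):
    follows.setdefault(a, []).append(b)

def generate(prompt, length=6, seed=0):
    random.seed(seed)  # identical randomness for every prompt
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Same model, same seed; only the prompt differs.
print(generate("the cat"))
print(generate("the dog"))
```

Changing a single word in the prompt changes which continuations are even possible, a miniature version of what learners observe when they vary their prompts to a real generative system.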
Conclusion
The framework is a great start. My primary concern after a first reading is that it fails to capture some of the important context that all learners should have. AI is imperfect, its impact is wide-ranging, and it’s subject to powerful forces. Thinking critically about AI requires a deep understanding of those concepts.
I hope that many others provide detailed feedback; this kind of resource is critically important for education. The framework isn’t scheduled for publication until 2026, and who knows where the technology will be by then, but it captures constants, ones I’ve tried to flesh out further, that we’ll need to grapple with for a long time to come.