# Fairness and Equity
## "Equity"/"Equitable"
- Page 7: "Equity. AI systems should be used in ways that promote equity between different groups of learners and not in ways that discriminate against any group of learners"
- Page 7: "Develop and implement a strategy to reduce the digital divide amongst the cohort of learners you have responsibility for"
- Page 6: "ALEKS helps instructors provide the equitable support and structure each student
needs for success."
- Page 4: "In the long-term, the pandemic may prove to be a watershed moment for education. By utilising AI ethically and with purpose, societies can look forward to addressing previously overwhelming educational inequalities"
## "Bias"/"Biases"
- Page 7: "Insist that suppliers provide relevant information to confirm that appropriate measures were taken, and continue to be taken, to mitigate against biases as part of the design of the resource and within the data sets used for training"
- Page 7: "What information have you received from the suppliers, and are you satisfied that appropriate measures were taken, and continue to be taken, to mitigate against biases as part of the design of the resource and within the data sets used for training? (Pre-procurement)"
## "Discrimination"/"Discriminate"
- Page 7: "Equity. AI systems should be used in ways that promote equity between different groups of learners and not in ways that discriminate against any group of learners (see Annex Section 4 for justification)"
- Page 3: "Those designing Al resources are ultimately responsible for ensuring that systems do not, amongst other things, discriminate against any group of learners, that they do not manipulate users, and that resources are designed in a pedagogically sound way."
- Page 11: "We're dedicated to ensuring that AI-enabled pathways such as Maths Flex are just one of the many unrivalled tools in the Educator's Toolkit: one that allows schools to focus on the areas that most require it; to refine what they know about the pupils they care for without
discrimination or bias"
## "Digital divide"
- Page 4: "It is clear to the Institute for Ethical AI in Education that reforms in education are needed to ensure that all learners can benefit optimally from the use of AI in education... Whilst it is outside of the Institute's scope to put forward a blueprint for how these reforms could be facilitated through the use of AI, it can be said with certainty that reforms will not deliver benefit to all learners if the digital divide is not closed decisively and quickly."
- Page 4: "During school closures due to Covid-19 the reality of digital exclusion was laid bare. Those learners who lacked adequate access to devices and internet connections suffered most."
- Page 7: "Develop and implement a strategy to reduce the digital divide amongst the cohort of learners you have responsibility for"
## "Accessibility"/"Accessible"
- Page 7: "Insist that suppliers provide relevant information to confirm that resources have been designed in order to be accessible to and suited to the needs of learners with additional needs, which could be either cognitive or physical"
- Page 7: "What information have you received from the suppliers, and are you satisfied that AI resources have been designed in order to be accessible to and suited to the needs of learners with additional needs, which could be either cognitive or physical? (Pre-procurement)"
- Page 4: "The Institute for Ethical AI in Education hence urges all governments to guarantee that every single learner has adequate access to a device and an internet connection, and to heed the recommendations in the Framework."
# Privacy and Security
## "Privacy"
- Page 8: "Privacy. A balance should be struck between privacy and legitimate use of data for achieving well-defined and desirable educational goals"
- Page 8: "Can you confirm that your organisation complies with all relevant legal frameworks? (All Stages)"
- Page 11: "At Pearson, it is the people in education who are our priority... Given our passion and focus on digital, lifelong learning, we pride ourselves on working hand in hand with educators to deliver learning through engaging, immersive and highly personalised experiences... placing our user privacy as paramount"
- Page 14: "MH solutions give faculty choice, protect the learner's privacy, and help all learners achieve."
## "Data protection"
- Page 3: "It is also expected that designers must adhere to local laws and policies in relation to data protection, for example the Age Appropriate Design Code (also commonly referred to as the Children's Code) developed by the Information Commissioner's Office."
- Page 8: "Ensure compliance with relevant legal frameworks to ensure that the use of pupil data for the stated purposes is permitted"
## "Surveillance"
- Page 8: "Where the use of AI could be considered to be surveillance of learners, provide a clear justification of why this use of AI benefits learners either directly or indirectly."
- Page 8: "What uses of AI could be considered to be surveillance of learners, and how could these benefit learners - either directly or indirectly? (Pre-procurement)"
- Page 14: "McGraw Hill Connect® offers remote proctoring and browser-locking capabilities that enable instructors to support academic integrity and assessment security, with features like preventing students from navigating away from a test environment, verifying students' identities, and monitoring them as they complete assessments."
## "Safe spaces"
- Page 8: "Ensure that where organisations have chosen, or are obligated to assess students on a continuous basis (potentially as a replacement for summative assessments), there are designated safe spaces in which learners are not assessed"
- Page 8: "In contexts where institutions have chosen or are obligated to assess students on a -continuous basis, how have you ensured that there are designated safe -spaces in which learners are not assessed? (Implementation)"
# Non-maleficence and Beneficence
## "Benefit"/"Benefits"
- Page 2: "The Institute for Ethical Al in Education was conceived by Sir Anthony Seldon, Priya Lakhani OBE, and Professor Rose Luckin in the summer of 2018, and launched in October of hat year, with the aim of developing an ethical framework that would enable all learners to benefit optimally from AI in education, whilst also being protected against the known risks this technology presents."
- Page 3: "The Framework is grounded in a shared vision of ethical Al in education and will help to enable all learners to benefit optimally from Al in education, whilst also being protected against the risks this technology presents."
- Page 8: "Where a system processes data (including but not limited to personal or sensitive data) that could be considered health data insist that suppliers provide relevant information to confirm that this data is required for educational purposes and that processing this data will benefit learners"
- Page 8: "Where the use of AI could be considered to be surveillance of learners, provide a clear justification of why this use of AI benefits learners either directly or indirectly."
## "Educational goals"
- Page 5: "Achieving Educational Goals. AI should be used to achieve well-defined educational goals based on strong societal, educational or scientific evidence that this is for the benefit of learner"
- Page 5: "Have you clearly identified the educational goal that is to achieved through the use of AI? (Pre-procurement)"
- Page 8: "Privacy. A balance should be struck between privacy and the legitimate use of data for achieving well-defined and desirable educational goals"
## "Evidence"
- Page 5: "AI should be used to achieve well-defined educational goals based on strong societal, educational or scientific evidence that this is for the benefit of learner"
- Page 5: "What information have you received from the suppliers, and are you satisfied that measures of student performance are aligned with recognised and accepted test instruments and/or measures that are based on societal, educational or scientific evidence?"
- Page 7: "Insist that suppliers provide relevant information to confirm that where AI is used to positively influence learners' behaviours, this use of AI is supported by societal, educational or scientific evidence"
## "Harm"/"Harmful"
- Page 5: "Insist that suppliers conduct periodic reviews of their AI resources to ensure these are achieving the intended goals and not behaving in harmful, unintended ways"
- Page 5: "Can the supplier confirm that periodic reviews are conducted, and that these reviews verify that the AI resource is effective and performing as intended? (Monitoring and Evaluation)"
## "Well-being"/"Wellbeing"
- Page 6: "Establish how AI can be used to provide insights into a broad range of knowledge, understanding, skills and personal well-being development in a way that is based on evidence"
- Page 6: "Establish how AI resources can be used to enhance and demonstrate the value of: formative approaches to assessment, studying learning processes as well as outcomes, and supporting social and emotional development and learner well-being"
- Page 6: "In what ways is AI being used to enhance and demonstrate the value of formative approaches to assessment, studying learning processes as well as outcomes, and supporting social and emotional development and learner well-being? (Implementation)"
# Agency and Autonomy
## "Autonomy"
- Page 7: "Autonomy. AI systems should be used to increase the level of control that learners have over their learning and development"
- Page 7: "Insist that suppliers provide relevant information to confirm that AI resources were not designed, and will never be designed, to coerce learners"
- Page 7: "What information have you received from the suppliers, and are you satisfied that AI resources were not designed, and will never be designed, to coerce learners? (Pre-procurement)"
## "Control"
- Page 7: "AI systems should be used to increase the level of control that learners have over their learning and development"
- Page 7: "Insist that suppliers provide relevant information to confirm that AI resources are not designed to encourage addiction amongst learners, or to compel learners to extend their use of a resource beyond a point that is beneficial for their learning"
## "Coerce"/"Coercion"
- Page 7: "Insist that suppliers provide relevant information to confirm that AI resources were not designed, and will never be designed, to coerce learners"
- Page 7: "What information have you received from the suppliers, and are you satisfied that AI resources were not designed, and will never be designed, to coerce learners? (Pre- procurement)"
## "Predict"/"Predictive"
- Page 7: "Where a predictive AI system legitimately predicts that an nfavourable outcome will occur (e.g. a student being expelled, failing an exam, or dropping out of a programme), do not penalise or hold the relevant individual to account for an unrealised outcome. Instead, take pre-emptive action to prevent the unfavourable outcome occurring"
- Page 7: "In your context, what unfavourable outcomes might an AI system predict? What harmful action could potentially be taken based on this prediction? What positive steps could be taken to prevent the predicted outcome from happening? (Implementation)"
## "Personalized"/"Personalised"
- Page 11: "At Pearson, it is the people in education who are our priority... Given our passion and focus on digital, lifelong learning, we pride ourselves on working hand in hand with educators to deliver learning through engaging, immersive and highly personalised experiences."
- Page 11: "Using established techniques and expert knowledge, and placing our user privacy as paramount, the service provides a truly personalised learning pathway for pupils, with the programme 'flexing' to each individual's style."
- Page 14: "Delivering personalized reading and study experiences through SmartBook® 2.0: Instructors can assign Connect's adaptive reading experience with SmartBook 2.0. Rooted in advanced learning science principles, SmartBook 2.0 delivers to each student a personalized experience"
# Transparency and Intelligibility
## "Transparency"
- Page 9: "Transparency and Accountability. Humans are ultimately responsible for educational outcomes and should therefore have an appropriate level of oversight of how AI systems operate"
- Page 13: "For us, these principles are the cornerstone of responsible and trustworthy approach to AI" (in reference to Microsoft's six including transparency)
## "Accountability"
- Page 9: "Transparency and Accountability. Humans are ultimately responsible for educational outcomes and should therefore have an appropriate level of oversight of how AI systems operate"
- Page 9: "Conduct a risk assessment to establish whether resources could undermine the authority of practitioners and disrupt accountability structures and take action based on the
risk assessment"
- Page 9: "Will implementing the actions arising from this risk assessment ensure that the authority of educators and/or other relevant practitioners is not undermined, and that accountability structures are not disrupted as a result of using AI? (Pre-procurement)"
## "Explainability"/"Explainable"
- Page 9: "Insist that suppliers make explicit whether there were any trade-offs between accuracy and explainability in the design of the AI resource, specifying where any compromises have been made and providing a justification"
- Page 9: "Have you received the relevant information from the suppliers? Where compromises have been made, are you satisfied with the justification you have received? (Pre- procurement)"
## "Oversight"
- Page 9: "Transparency and Accountability. Humans are ultimately responsible for educational outcomes and should therefore have an appropriate level of oversight of how AI systems operate"
## "Training"
- Page 9: "Provide educators and/or other relevant practitioners with sufficient training to ensure that they are able to use AI resources effectively, discerningly and with confidence. As part of this training, educators and practitioners should be trained to scrutinise the decisions made and behaviours displayed by AI systems, in order to guard against undue deference"
- Page 9: "What will the content of this training be, and how much training will educators and/ or other relevant practitioners receive? (Implementation)"
- Page 9: "Ensure that the supplier can confirm that AI resources were designed by practitioners who have had training on the ethical implications of AI in education"
## "Scrutinize"/"Scrutiny"
- Page 9: "As part of this training, educators and practitioners should be trained to scrutinise
the decisions made and behaviours displayed by AI systems, in order to guard against undue
deference"
- Page 9: "What will the content of this training be, and how much training will educators and/ or other relevant practitioners receive? (Implementation)"
## "Consultation"/"Consulted"
- Page 2: "To create consensus, following on from The Interim Report, the Institute embarked upon a programme of wide consultation designed to listen to and learn from the perspectives of a cross-section of stakeholders."
- Page 9: "Insist that suppliers provide relevant information to confirm that a range of stakeholders (e.g. learners, educators, careers advisers, youth workers) were consulted as part of the design process"
- Page 9: "What information have you received from the suppliers, and are you satisfied that a range of stakeholders (e.g. learners, educators, careers advisers) were consulted as part of the design process? (Pre-procurement)"
# Stakeholders
## Educational Leaders and Practitioners
Assigned the most substantial role as primary decision-makers:
- Responsible for procurement decisions that determine which AI resources are used in educational settings (page 3).
- Required to "clearly identify the educational goal that is tobe achieved through the use of AI"(page 5).
- Expected to "monitor and evaluate the extent to which the intended impacts and your stated objectives are being achieved" (page 5).
- Tasked with implementing strategies "to reduce the digital divide amongst the cohort of learners" (page 7).
- Responsible for providing educators with "sufficient training to ensure that they are able to use AI resources effectively, discerningly and with confidence" (page 9).
## Developers and Suppliers
Held accountable for ethical design and transparency:
- "Ultimately responsible for ensuring that systems do not, amongst other things, discriminate against any group of learners, that they do not manipulate users, and that resources are designed in a pedagogically sound way" (page 3).
- Expected to provide information about "how their AI resource achieves the desired objectives and impacts" (page 5).
- Required to mitigate biases "as part of the design of the resource and within the data sets used for training" (page 7).
- Expected to conduct "periodic reviews of their AI resources to ensure these are achieving the intended goals and not behaving in harmful, unintended ways" (page 5).
- Must confirm that "a diverse range of people contributed to the design and development of the AI resource" (page 9).
## Educators
Positioned as implementers who require specific support:
- Need training "to scrutinise the decisions made and behaviours displayed by AI systems, in order to guard against undue deference" (page 9).
- Should teach "students about artificial intelligence and its societal and ethical implications" (page 9).
- Must ensure their authority is not undermined by AI systems (page 9).
## Students
Portrayed primarily as beneficiaries requiring protection:
- Should benefit from AI's capacity to "assess and recognise a broader range of learners' talents" (page 6).
- Must be protected from coercive or addictive design (page 7).
- Should learn about "artificial intelligence and its societal and ethical implications" (page 9).
- Need "designated safe spaces in which learners are not assessed" when continuous assessment is implemented (page 8).
## Governments
Given broader systemic responsibilities:
- Urged to "guarantee that every single learner has adequate access to a device and an internet connection" (page 4).
- Expected to take "steps to ensure that learners, educators and all members of society have a strong understanding of AI and its ethical implications" (page 10).
- Should develop and enforce relevant legal frameworks for data protection (page 3).
## Parents
Parents receive minimal explicit mention in the framework, representing a notable gap in stakeholder consideration. Their role is largely implicit rather than specifically defined.