# Lab 3 SRC: AI Alignment
## Step 0: What is the Moral Machine?
In 2016, the MIT Media Lab launched an online experiment called the ‘Moral Machine’ to crowdsource people’s judgments about how self-driving cars should prioritize lives in different variations of the classic ‘Trolley Problem’.
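For a sense of how crowdsourced judgments like these can become a preference model, here is a minimal, hypothetical Python sketch. It simply tallies which attribute respondents chose to spare in two-option dilemmas; the vote data is invented purely for illustration, and the real Moral Machine analysis uses a more sophisticated conjoint-analysis design.

```python
from collections import Counter

# Hypothetical votes: each tuple records (attribute spared, attribute
# sacrificed) in one two-option dilemma. Entirely made-up data for
# illustration -- not from the actual Moral Machine dataset.
votes = [
    ("young", "old"), ("young", "old"), ("old", "young"),
    ("pedestrians", "passengers"), ("pedestrians", "passengers"),
    ("more_lives", "fewer_lives"), ("more_lives", "fewer_lives"),
]

spared = Counter(chosen for chosen, _ in votes)
sacrificed = Counter(rejected for _, rejected in votes)

# For each attribute, estimate how often it is spared when it appears in
# a dilemma -- a crude stand-in for a learned moral preference.
for level in sorted(set(spared) | set(sacrificed)):
    appearances = spared[level] + sacrificed[level]
    print(f"{level:12s} spared {spared[level]}/{appearances} "
          f"({spared[level] / appearances:.0%})")
```

Each printed percentage is a rough estimate of how strongly this (made-up) crowd favors sparing that group, which is the basic idea behind turning millions of individual judgments into an aggregate moral preference.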
## Step 1: Read the instructions
Individually, go to moralmachine.net, click “view instructions,” and read through them.
## Step 2: Judge the scenarios
Click “Start Judging” and be prepared to make a hard decision in each scenario!
## Step 3: Reflect on your results
Look at your results. Did anything surprise you? How might they reflect your internal biases or personal values?
## Step 4: Read about AI alignment
Read this article: https://docs.google.com/document/d/1-2bmsbh87wVUoWi5iBl2pifCxTBl3KmaMFetQY45qL0/pub
## Step 5: Discuss
Think about these questions, then discuss them as a whole group:
- Why is it important for AI systems to align with human values?
- Would you trust an AI system more if you understood how it made decisions?
- As AI technology develops globally, should there be universal standards for AI ethics?
- What role, if any, should governments play in regulating AI?
## Further reading
https://www.datacamp.com/blog/ai-alignment