AI Ethics Curriculum
An accessible introduction to the fundamentals of ethical AI for students and beginners in computer science.
Learn how to approach AI with responsibility and care—developing systems that are fair, transparent, and aligned with societal values.
Course Overview
Video Lectures
Comprehensive video content covering core concepts
Hands-on Labs
Interactive Colab notebooks with practical exercises
What You'll Learn
- Core concepts in data science, including cleaning data and visualizing distributions
- Introduction to machine learning, its types, workflows, and ethical considerations
- Understanding bias and fairness through fairness metrics and models of group and individual fairness
- Hands-on robustness testing with adversarial attacks and noise-based methods
- Exploration of the regulatory landscape surrounding AI safety and accountability
- Applying your learning in a capstone project on real-world issues such as hospital bed or scholarship allocation
A Note About Coding
There are many different ways to write code to solve the same problem. The code examples provided are just one approach—what matters most is that you understand what you're writing and that it works correctly. Don't worry if your solution looks different from the examples; as long as you can explain your approach and it produces the right results, you're on the right track!
Video Lectures

Introduction
Get to know our team and why we created this curriculum. In this video, we’ll introduce ShiftSC, explain the purpose behind our AI Ethics initiative, and share what you can expect from the series.

Foundations of Data Science
Explore the foundations of data science through hands-on work with datasets, visualizations, and distributions, while reflecting on how ethics shape data collection and analysis.

Machine Learning
Learn the core principles of machine learning, including types of learning, the model development process, and the ethical considerations involved in building intelligent systems.

Bias and Fairness
Understand how bias enters machine learning systems and explore key definitions, fairness metrics, and approaches to achieving both group and individual fairness.

Safety and Robustness
Examine how models can fail under adversarial conditions, and learn how to test and strengthen them using techniques like FGSM (the Fast Gradient Sign Method) and noise-based attacks, with a look at policy and regulatory safeguards.

Capstone
Apply everything you’ve learned in a real-world challenge—tackling fairness in hospital or scholarship allocation or revisiting criminal justice tools like COMPAS—to design more equitable, responsible AI systems.

Closing Remarks
A brief thank-you to students for engaging with the course, along with a reminder to stay tuned for future modules and opportunities to keep learning.
Hands-on Labs
Be sure to make a copy of each notebook before starting.
The Power of Data
Hands-on practice with data manipulation, cleaning, and visualization using pandas and matplotlib on the classic Iris dataset.
Practice Activities
- Complete pandas exercises to sort, filter, and analyze iris flower data
- Create scatter plots, histograms, and multi-panel visualizations using matplotlib
- Explore a self-selected dataset and build custom data visualizations
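The activities above can be sketched in a few lines. This is just one possible approach, assuming scikit-learn is used to load the Iris data; the Colab notebook may load it differently, and your column names or plotting choices can vary.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; not needed in a notebook
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_iris

# Load Iris as a DataFrame: four measurement columns plus a 'target' species code
iris = load_iris(as_frame=True)
df = iris.frame

# Sort: the five flowers with the longest petals
longest_petals = df.sort_values("petal length (cm)", ascending=False).head(5)

# Filter: just one species (target == 0 is setosa)
setosa = df[df["target"] == 0]

# Analyze: mean of each measurement per species
means = df.groupby("target").mean()

# Visualize: scatter plot of sepal dimensions, colored by species
fig, ax = plt.subplots()
ax.scatter(df["sepal length (cm)"], df["sepal width (cm)"], c=df["target"])
ax.set_xlabel("sepal length (cm)")
ax.set_ylabel("sepal width (cm)")
fig.savefig("iris_scatter.png")
```

If your version uses different method chains or plot types, that is fine, as the note about coding above explains.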
Machine Learning Models: Decision Trees and Neural Networks
Build and train decision tree classifiers and PyTorch neural networks while experimenting with hyperparameters to optimize performance.
Practice Activities
- Train and visualize a decision tree classifier using scikit-learn's DecisionTreeClassifier and plot_tree functions
- Experiment with neural network hyperparameters (hidden_dim, learning_rate, num_epochs) to optimize model accuracy
- Build and train PyTorch neural networks with different architectures to understand performance impacts
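As a taste of the first activity, here is a minimal sketch of training and visualizing a decision tree on Iris with scikit-learn. The `max_depth` setting and the train/test split sizes are illustrative choices; the lab asks you to experiment with knobs like these and watch accuracy respond.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; not needed in a notebook
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree

# Hold out 30% of the data to measure generalization
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Try changing max_depth and re-running to see how test accuracy moves
clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")

# plot_tree draws the learned splits as a flowchart
fig, ax = plt.subplots(figsize=(10, 6))
plot_tree(clf, filled=True, feature_names=load_iris().feature_names, ax=ax)
fig.savefig("tree.png")
```

The PyTorch portion of the lab follows the same loop (build, train, score) with hyperparameters like `hidden_dim`, `learning_rate`, and `num_epochs` taking the place of `max_depth`.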
Measuring and Mitigating Bias
Hands-on practice identifying and measuring bias in datasets using statistical methods and visualization techniques.
Practice Activities
- Calculate group-level fairness metrics like false positive rates across demographic groups
- Explore group-level fairness by computing accuracy scores for different demographic groups
- Implement bias mitigation techniques using sample weights to balance group representation
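The false-positive-rate comparison in the first activity can be sketched as below. The labels, predictions, and group assignments here are toy data made up for illustration, not the lab's dataset.

```python
import numpy as np

# Toy example: true outcomes, model predictions, and a demographic group label
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def false_positive_rate(y_true, y_pred):
    """Fraction of true negatives that the model incorrectly flags positive."""
    negatives = y_true == 0
    if negatives.sum() == 0:
        return float("nan")
    return float(((y_pred == 1) & negatives).sum() / negatives.sum())

# Compute the metric separately for each group and compare
fpr_by_group = {
    g: false_positive_rate(y_true[group == g], y_pred[group == g])
    for g in np.unique(group)
}
print(fpr_by_group)  # a large gap between groups signals disparate impact
```

A gap in these per-group rates is exactly the kind of disparity the mitigation activity then tries to shrink, for example by reweighting samples so each group contributes more evenly during training.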
Model Robustness and Adversarial Testing
Explore neural network robustness by testing with Gaussian noise and visualizing performance drops.
Practice Activities
- Add Gaussian noise to test images and compare model predictions on clean vs noisy data
- Evaluate model accuracy across increasing noise levels to measure robustness
- Visualize the relationship between noise standard deviation and model performance degradation
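The noise-sweep idea above can be sketched as follows. The lab uses a neural network; a logistic regression on scikit-learn's small digits dataset is substituted here just to keep the sketch short, and the noise levels are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small 8x8 digit images; scale pixel values to [0, 1]
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train once on clean data
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
rng = np.random.default_rng(0)

# Evaluate on increasingly noisy copies of the test set
accuracies = []
for sigma in [0.0, 0.1, 0.3, 0.5]:
    noisy = X_test + rng.normal(0.0, sigma, X_test.shape)
    accuracies.append(clf.score(noisy, y_test))
    print(f"sigma={sigma}: accuracy={accuracies[-1]:.3f}")
```

Plotting accuracy against the noise standard deviation, as the third activity asks, turns this loop's output into a robustness curve: the steeper the drop, the more fragile the model.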
Capstone Project
Apply your learning in a comprehensive capstone project that addresses ethical AI challenges in healthcare allocation, scholarship distribution, or criminal justice reform.
Practice Activities
- Design and implement a fair hospital bed allocation system
- Create an equitable scholarship distribution algorithm
- Analyze and improve the COMPAS recidivism prediction tool