
@luisjunco
Last active March 23, 2026 12:15
AI Ethics Session (proposal)

AI Ethics

Overview

Engineers don't just build systems — they ship decisions. Every model you deploy makes choices that affect real people: who gets a loan, who gets flagged by a security system, which doctor's referral gets prioritized by a triage system...

We believe that building AI responsibly is part of the job — not an afterthought. This session won't give you all the answers, but it will make sure you're asking the right questions before you ship.

The content is intentionally open-ended — teachers are encouraged to adapt the format and topics to their own style, expertise, and student interests.


Suggested duration: 1–2 sessions (40–80 min)



Suggested Topics

Here are some possible topics to discuss:

🔧 Technical & Systems

  • Bias and fairness — how training data encodes societal biases, real-world examples (hiring tools, facial recognition, credit scoring)
  • Transparency and explainability — black-box models vs. interpretable AI; the right to an explanation
  • Privacy and data ethics — consent, data collection, surveillance, personal data in training sets
  • Adversarial attacks & misuse — how AI systems can be manipulated, exploited, or repurposed beyond their intended use through prompt injection, jailbreaking, or other exploits, and how red-teaming helps anticipate these risks
  • Safety & Alignment — why AI systems are hard to control, and what happens when AI behaves in unintended ways
  • Existential risk — what happens if AI surpasses human intelligence and begins improving itself recursively (aka the singularity), why some researchers treat this as a serious risk, and the debate between those who dismiss it and those sounding the alarm
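For the bias and fairness topic, a tiny in-class demo can make the idea concrete. The sketch below computes a demographic parity gap (the difference in approval rates between two groups) for a hypothetical loan-approval classifier; all data and group labels are fabricated for illustration, and this is only one of several fairness metrics.

```python
# Toy illustration of measuring demographic parity for a hypothetical
# loan-approval classifier. All decisions below are fabricated demo data.

def approval_rate(decisions):
    """Fraction of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions for two demographic groups (1 = approved).
group_a = [1, 1, 1, 0, 1, 1, 0, 1]
group_b = [1, 0, 0, 1, 0, 0, 0, 1]

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)

# Demographic parity gap: difference in approval rates between groups.
# A value near 0 suggests parity; a large gap signals potential bias.
parity_gap = abs(rate_a - rate_b)

print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
print(f"Demographic parity gap: {parity_gap:.3f}")
```

Students can then debate why a gap might appear (biased training data, proxy features, historical inequities) and whether equalizing this one metric is sufficient, which leads naturally into the transparency and accountability topics.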

🌍 Societal & Economic

  • Misinformation and deepfakes — generative AI as a vector for synthetic media and disinformation
  • Labor and economic impact — automation, job displacement, the future of work
  • Environmental impact — energy consumption of large models, carbon footprint
  • AI in high-stakes domains — healthcare, law, criminal justice, military
  • Autonomy and human oversight — when should humans stay in the loop?

⚖️ Governance & Power

  • Accountability and responsibility — who is responsible when AI causes harm? (developers, companies, users)
  • Regulation — how the EU AI Act classifies AI systems by risk, what it means for the products you build, and why it's shaping AI regulation globally
  • Geopolitical impact — how AI may concentrate power in a few countries and corporations, why nations like the US and China are racing to lead, and what's at stake for the rest of the world



Ideas for Running This Session

Every teacher runs this session differently — pick the format that fits your style, your group, and the time available. Here are some options:

  • Lecture
  • Socratic discussion
  • Shared board — students post ideas, concerns, or examples on a shared board (e.g. Post-its or Miro), then walk through it together as a class
  • Interest-based breakouts — students pick a topic they care about and join that breakout group, discuss, then share one key takeaway with the class
  • Case study — pick one high-profile AI ethics failure (e.g. COMPAS, Amazon's hiring tool, deepfake scandals) and dissect it from multiple angles
  • Guest speaker
  • Split into two sessions — e.g. one focused on technical ethics (bias, alignment, privacy), one on societal/regulatory topics
  • Student-led presentations — assign topics in advance and let students teach each other
  • Ethics audit exercise — students pick a system they've built or used during the course and put it under the microscope: Is it fair? Does it leave anyone out? What are its broader social implications? Could it be misused?
  • Red-teaming exercise — students pick an AI system to put to the test: a commercial AI tool, their own project, or a classmate's. Their job is to break, trick, or misuse the model in order to find its weaknesses. Swap targets, compare results, and discuss what was found. (This is, by the way, exactly what AI labs like Anthropic and OpenAI do before shipping a model.)



Extra Resources
