This guide provides essential technical and regulatory updates for developers and AI practitioners building in the Generative & Agentic Era (2024β2026).
Important
New for February 2026: The 2026 International AI Safety Report highlights rapid advancements in AI capabilities and the rising threat of deepfakes.
learning-ethical-ai/
β
βββ 01-tools/ # AI safety and ethics tools
β βββ README.md # Tool comparison matrix, quick start
β βββ 01-giskard/ # LLM testing & vulnerability scanning
β β βββ README.md
β β βββ config_vertexai.py # GCP Vertex AI configuration
β β βββ healthcare_scan.py # Working healthcare LLM audit
β βββ 02-nemo-guardrails/ # Runtime safety controls
β β βββ README.md
β β βββ healthcare_rails/ # Production-ready clinical guardrails
β βββ 03-model-cards/ # Model documentation & transparency
β β βββ README.md
β βββ 04-llama-guard/ # Content safety classification
β βββ README.md
β
βββ 02-examples/ # Jupyter notebooks (6 complete examples)
β βββ README.md
β βββ requirements.txt
β βββ 01-giskard-quickstart.ipynb
β βββ 02-llm-hallucination-detection.ipynb
β βββ 03-healthcare-llm-safety.ipynb
β βββ 04-clinical-guardrails.ipynb
β βββ 05-mcp-security-audit.ipynb
β βββ 06-agent-ethics-patterns.ipynb
β
βββ 04-healthcare/ # Healthcare-specific AI ethics
β βββ clinical-llm-risks.md # EHR integration risks, hallucinations
β βββ hipaa-ai-checklist.md # HIPAA compliance for AI
β βββ genomics-ethics.md # Ethical AI in genetic analysis
β βββ who-lmm-guidelines.md # WHO 2025 LMM guidance summary
β βββ synthetic-patient-data.md # Safe synthetic data generation
β
βββ 05-agentic-safety/ # MCP and agentic AI security
β βββ mcp-security-threats.md # OWASP-style MCP threat taxonomy
β βββ safe-mcp-patterns.md # OpenSSF Safe-MCP security patterns
β βββ human-in-loop-agents.md # HITL design for high-risk actions
β βββ tool-poisoning-defense.md # Defense strategies
β βββ audit-logging-agents.md # Agent decision chain tracing
β
βββ 06-governance/ # Regulatory compliance resources
β βββ eu-ai-act-checklist.md # High-risk system requirements
β βββ nist-ai-600-1-summary.md # GenAI risk profile summary
β βββ risk-tiering-template.md # AI system risk classification
β
βββ README.md # This file
# Clone repository
git clone https://github.com/lynnlangit/learning-ethical-ai.git
cd learning-ethical-ai
# Install tools
pip install giskard nemoguardrails model-card-toolkit
# Configure GCP (required for examples)
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json"
export GCP_PROJECT_ID="your-project-id"
export GCP_REGION="us-central1"cd 01-tools/giskard
python healthcare_scan.py
# Opens HTML report with safety analysiscd 02-examples
pip install -r requirements.txt
jupyter notebook
# Start with 01-giskard-quickstart.ipynb| Topic | Description | Link |
|---|---|---|
| π Learning Paths | Step-by-step guides for different roles (Beginner, Dev, Security, Compliance) | Start Learning β |
| π§ͺ Tools | Giskard, NeMo Guardrails, Wallarm, Model Cards setup | View Tools β |
| 𧬠Healthcare | WHO guidelines, HIPAA, Genomics, Clinical Risks | View Healthcare β |
| π€ Agentic Safety | MCP Security, Threats, HITL, Tool Poisoning | View Agent Security β |
| ποΈ Governance | EU AI Act, NIST, US Courts, State Laws | View Governance β |
Before deploying your AI system:
- Risk Tiering: Classify your system using 06-governance/risk-tiering-template.md
- Safety Testing: Run Giskard comprehensive scan (see 01-tools/01-giskard/)
- Guardrails: Implement NeMo Guardrails for runtime safety (see 01-tools/02-nemo-guardrails/)
- Compliance: Review EU AI Act requirements if deploying in EU (see 06-governance/eu-ai-act-checklist.md)
- Legal/Courts: Check US Court AI Rules if building legal tech (see 06-governance/us-court-ai-justice.md)
- Healthcare: If clinical use, check HIPAA compliance (see 04-healthcare/hipaa-ai-checklist.md)
- Agentic: If using MCP, audit security (see 05-agentic-safety/mcp-security-threats.md)
- Human Oversight: Implement HITL for high-risk actions (see 05-agentic-safety/human-in-loop-agents.md)
- Documentation: Create Model Card (see 01-tools/03-model-cards/)
- Audit Logging: Enable comprehensive logging (see 05-agentic-safety/audit-logging-agents.md)
- Giskard - LLM testing
- NeMo Guardrails - Runtime safety
- OpenSSF Safe-MCP - MCP security
- Model Cards Toolkit
MIT License - See LICENSE file for details
Lynn Langit
- Background: Mayo Clinic / Genomics
- Focus: Healthcare AI ethics, cloud architecture, precision medicine
- GitHub: @lynnlangit
You can use Google's NotebookLM to turn this repository into an interactive expert that answers your questions.
- Go to NotebookLM.
- Create a new notebook.
- Click Add Source > GitHub (or paste the repo URL:
https://github.com/lynnlangit/learning-ethical-ai). - Select this repository.
Try asking:
- "What are the new HIPAA requirements for AI?"
- "Summarize the MCP security threats."
- "Create a checklist for EU AI Act compliance."
- "Listen to the Audio Overview for a podcast-style summary."
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Submit a pull request
For major changes, please open an issue first to discuss proposed changes.
Last Updated: January 2026 Status: Active development - Repository reflects current 2026 standards for ethical AI
