
---
slug: /
sidebar_position: 1
title: Agentic SDLC Risk Framework
---

Risk First Logo

# Agentic Software Development Risk Framework

agentic-software-development.riskfirst.org

A risk framework for agentic AI software development — addressing the unique threats that emerge when AI systems autonomously write, modify, and deploy code.

## Why This Exists

Existing AI governance frameworks like NIST AI RMF and ISO/IEC 42001 focus on:

  • AI as a decision-making component
  • Model lifecycle governance
  • Organizational accountability

But AI is no longer just making decisions inside software — it is becoming the primary producer and modifier of software itself. This shifts risk from "bad AI decision" to "unsafe evolving codebase" — a completely different class of risk that current frameworks don't address.

## Part of Risk-First

This framework is part of Risk-First Software Development and builds on Risk-First principles. Navigate the framework at agentic-software-development.riskfirst.org for a more joined-up experience.

See also: Societal AI Risk Framework — addressing civilisation-scale risks from advanced AI systems.

## What This Framework Covers

### Capabilities

The capabilities of generative coding systems that create attack surface — Code Generation, Tool Calling, Execution, Autonomous Planning, Multi-Agent Orchestration, and more.

### Risks

Threats unique to or amplified by agentic software development — Code Security, Supply Chain, Autonomy & Control, Prompt Injection, Human Factors, and more.

### Controls

Practices and safeguards to address agentic risk.

## External Framework Mappings

Threats in this framework are mapped to established security and AI governance standards:

| Framework | Description |
|-----------|-------------|
| MITRE ATLAS | Adversarial Threat Landscape for AI Systems |
| OWASP Top 10 for Agentic Applications | Critical security risks for autonomous AI (2026) |
| OWASP Top 10 for LLM Applications | Security risks for LLM applications (2025) |
| NIST AI RMF | AI Risk Management Framework |
| NIST SSDF | Secure Software Development Framework |
| SLSA | Supply-chain Levels for Software Artifacts |
| ISO/IEC 42001 | AI Management System standard |

## Schema & Validation

This framework uses schemas based on the OpenSSF Gemara project — a GRC Engineering Model for Automated Risk Assessment. Gemara provides a logical model for compliance activities and standardized schemas (in CUE format) for automated validation and interoperability.
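
As a rough illustration of the approach, the sketch below shows how a threat entry with external framework mappings might be expressed and validated in CUE. The schema, field names, and identifiers (`#Threat`, `#Mapping`, `ASD-001`) are hypothetical examples for this README, not the actual Gemara schemas.

```cue
// Hypothetical sketch, not the actual Gemara schema: a threat entry
// declares an id, title, and mappings to external frameworks, and
// `cue vet` checks every entry against the schema.

#Mapping: {
	framework: "MITRE ATLAS" | "OWASP Agentic Top 10" | "OWASP LLM Top 10" |
		"NIST AI RMF" | "NIST SSDF" | "SLSA" | "ISO/IEC 42001"
	reference: string
}

#Threat: {
	id:          string & =~"^ASD-[0-9]{3}$" // illustrative ID scheme
	title:       string
	description: string
	mappings: [...#Mapping]
}

// A concrete entry; if it drifts from the schema, validation fails.
promptInjection: #Threat & {
	id:          "ASD-001"
	title:       "Prompt injection via untrusted repository content"
	description: "Malicious instructions in issues, docs or code steer the coding agent."
	mappings: [
		{framework: "OWASP LLM Top 10", reference: "LLM01: Prompt Injection"}
	]
}
```

Running `cue vet` over the framework's data files would then flag entries that drift from the agreed structure, which is what makes automated validation and interoperability practical.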

## Contributing

This is an emerging area with significant open problems; contributions are welcome in areas such as:

  • Provable correctness of agent-generated code
  • Runtime monitoring of autonomous planning
  • Standardized agent audit logs
  • Cross-agent trust and identity

## License

This work is licensed under Creative Commons Attribution 4.0 International (CC-BY 4.0).
