Course Outline
Day 1: Foundations and Core Threats
Module 1: Introduction to OWASP GenAI Security Project (1 hour)
Learning Objectives:
- Understand the evolution from the OWASP Top 10 to security challenges specific to GenAI.
- Explore the ecosystem and resources of the OWASP GenAI Security Project.
- Identify the key differences between traditional application security and AI security.
Topics Covered:
- An overview of the mission and scope of the OWASP GenAI Security Project.
- Introduction to the Threat Defense COMPASS framework.
- Understanding the AI security landscape and regulatory requirements.
- Comparing AI attack surfaces with traditional web application vulnerabilities.
Practical Exercise: Setting up the OWASP Threat Defense COMPASS tool and performing an initial threat assessment.
Module 2: OWASP Top 10 for LLMs - Part 1 (2.5 hours)
Learning Objectives:
- Master the first five critical vulnerabilities in LLMs.
- Understand attack vectors and exploitation techniques.
- Apply practical mitigation strategies.
Topics Covered:
LLM01: Prompt Injection
- Direct and indirect prompt injection techniques.
- Hidden instruction attacks and cross-prompt contamination.
- Practical examples: Jailbreaking chatbots and bypassing safety measures.
- Defense strategies: Input sanitization, prompt filtering, and differential privacy.
LLM02: Sensitive Information Disclosure
- Training data extraction and system prompt leakage.
- Analyzing model behavior for sensitive information exposure.
- Privacy implications and regulatory compliance considerations.
- Mitigation: Output filtering, access controls, and data anonymization.
LLM03: Supply Chain Vulnerabilities
- Dependencies on third-party models and plugin security.
- Compromised training datasets and model poisoning.
- Vendor risk assessment for AI components.
- Secure model deployment and verification practices.
Practical Exercise: Hands-on lab demonstrating prompt injection attacks against vulnerable LLM applications and implementing defensive measures.
Module 3: OWASP Top 10 for LLMs - Part 2 (2 hours)
Topics Covered:
LLM04: Data and Model Poisoning
- Techniques for manipulating training data.
- Modifying model behavior through poisoned inputs.
- Backdoor attacks and data integrity verification.
- Prevention: Data validation pipelines and provenance tracking.
LLM05: Improper Output Handling
- Insecure processing of content generated by LLMs.
- Code injection via AI-generated outputs.
- Cross-site scripting via AI responses.
- Output validation and sanitization frameworks.
Practical Exercise: Simulating data poisoning attacks and implementing robust output validation mechanisms.
Module 4: Advanced LLM Threats (1.5 hours)
Topics Covered:
LLM06: Excessive Agency
- Risks of autonomous decision-making and boundary violations.
- Managing agent authority and permissions.
- Unintended system interactions and privilege escalation.
- Implementing guardrails and human oversight controls.
LLM07: System Prompt Leakage
- Vulnerabilities in exposing system instructions.
- Disclosure of credentials and logic through prompts.
- Attack techniques for extracting system prompts.
- Securing system instructions and external configuration.
Practical Exercise: Designing secure agent architectures with appropriate access controls and monitoring.
Day 2: Advanced Threats and Implementation
Module 5: Emerging AI Threats (2 hours)
Learning Objectives:
- Understand cutting-edge AI security threats.
- Implement advanced detection and prevention techniques.
- Design resilient AI systems capable of withstanding sophisticated attacks.
Topics Covered:
LLM08: Vector and Embedding Weaknesses
- Vulnerabilities in RAG systems and vector database security.
- Embedding poisoning and similarity manipulation attacks.
- Adversarial examples in semantic search.
- Securing vector stores and implementing anomaly detection.
LLM09: Misinformation and Model Reliability
- Detecting and mitigating hallucinations.
- Addressing bias amplification and fairness considerations.
- Fact-checking and source verification mechanisms.
- Integrating content validation with human oversight.
LLM10: Unbounded Consumption
- Resource exhaustion and denial-of-service attacks.
- Rate limiting and resource management strategies.
- Cost optimization and budget controls.
- Performance monitoring and alerting systems.
Practical Exercise: Building a secure RAG pipeline with vector database protection and hallucination detection.
Module 6: Agentic AI Security (2 hours)
Learning Objectives:
- Understand the unique security challenges of autonomous AI agents.
- Apply the OWASP Agentic AI taxonomy to real-world systems.
- Implement security controls for multi-agent environments.
Topics Covered:
- Introduction to Agentic AI and autonomous systems.
- OWASP Agentic AI Threat Taxonomy: Agent Design, Memory, Planning, Tool Use, Deployment.
- Security and coordination risks in multi-agent systems.
- Attacks involving tool misuse, memory poisoning, and goal hijacking.
- Securing agent communication and decision-making processes.
Practical Exercise: Threat modeling exercise using the OWASP Agentic AI taxonomy on a multi-agent customer service system.
Module 7: OWASP Threat Defense COMPASS Implementation (2 hours)
Learning Objectives:
- Master the practical application of Threat Defense COMPASS.
- Integrate AI threat assessment into organizational security programs.
- Develop comprehensive AI risk management strategies.
Topics Covered:
- Deep dive into the Threat Defense COMPASS methodology.
- Integration of the OODA Loop: Observe, Orient, Decide, Act.
- Mapping threats to MITRE ATT&CK and ATLAS frameworks.
- Building AI Threat Resilience Strategy Dashboards.
- Integration with existing security tools and processes.
Practical Exercise: Complete threat assessment using COMPASS for a Microsoft Copilot deployment scenario.
Module 8: Practical Implementation and Best Practices (2.5 hours)
Learning Objectives:
- Design secure AI architectures from the ground up.
- Implement monitoring and incident response for AI systems.
- Create governance frameworks for AI security.
Topics Covered:
Secure AI Development Lifecycle:
- Security-by-design principles for AI applications.
- Code review practices for LLM integrations.
- Testing methodologies and vulnerability scanning.
- Deployment security and production hardening.
Monitoring and Detection:
- Logging and monitoring requirements specific to AI.
- Anomaly detection for AI systems.
- Incident response procedures for AI security events.
- Forensics and investigation techniques.
Governance and Compliance:
- AI risk management frameworks and policies.
- Regulatory compliance considerations (GDPR, AI Act, etc.).
- Third-party risk assessment for AI vendors.
- Security awareness training for AI development teams.
Practical Exercise: Design a complete security architecture for an enterprise AI chatbot, including monitoring, governance, and incident response procedures.
Module 9: Tools and Technologies (1 hour)
Learning Objectives:
- Evaluate and implement AI security tools.
- Understand the current landscape of AI security solutions.
- Build practical detection and prevention capabilities.
Topics Covered:
- The AI security tool ecosystem and vendor landscape.
- Open-source security tools: Garak, PyRIT, Giskard.
- Commercial solutions for AI security and monitoring.
- Integration patterns and deployment strategies.
- Tool selection criteria and evaluation frameworks.
Practical Exercise: Hands-on demonstration of AI security testing tools and implementation planning.
Module 10: Future Trends and Wrap-up (1 hour)
Learning Objectives:
- Understand emerging threats and future security challenges.
- Develop continuous learning and improvement strategies.
- Create action plans for organizational AI security programs.
Topics Covered:
- Emerging threats: Deepfakes, advanced prompt injection, model inversion.
- Future developments and roadmap for the OWASP GenAI project.
- Building AI security communities and knowledge sharing.
- Continuous improvement and threat intelligence integration.
Action Planning Exercise: Develop a 90-day action plan for implementing OWASP GenAI security practices in participants' organizations.
Requirements
- A general understanding of web application security principles.
- Basic familiarity with AI/ML concepts.
- Experience with security frameworks or risk assessment methodologies is preferred.
Audience
- Cybersecurity professionals.
- AI developers.
- System architects.
- Compliance officers.
- Security practitioners.
Testimonials (1)
I really enjoyed learning about AI attacks and the tools out there to begin practicing and actively using for security testing. I took a lot of knowledge away which I didn't have at the beginning and the course met what I hoped it would be. My favorite part shown from the training was Comet Browser and was amazed at what it could do. Definitely something will be looking into more. Overall it was a great course and enjoyed learning all OWASP GenAI Top 10.