Generative AI is changing how we build and secure software. In this one-day training, Generative AI & Security, you will learn the inner workings of modern AI systems and why understanding them is essential for security. You will discover where today´s real risks lie and how attackers can misuse large language models to extract sensitive data, leak hidden prompts, or trigger unexpected costs. This course helps you build both awareness and practical skills to work safely and confidently with AI in your daily projects.
The course is built around the OWASP Top 10 for LLM Applications, translating each risk into real-world scenarios you will actually encounter. We explore input-side threats such as prompt injection, prompt leakage, and data or model poisoning. You will also examine output-side pitfalls like insecure handling of generated text, sensitive information disclosure, and hallucinations with legal or reputational impact. Finally, we look at architectural issues: supply-chain vulnerabilities, weaknesses in vector stores and RAG pipelines, excessive agent permissions, and uncontrolled resource consumption.
The course is highly interactive and hands-on. In guided lab sessions, you will practice with real LLM applications: crafting and detecting prompt injections, simulating poisoning, extracting secrets, and testing for insecure outputs. For each vulnerability, we link the exercise to concrete defenses, such as input validation, output sanitization, guardrails, and robust deployment strategies so you immediately know how to apply safeguards in practice.