GenAI dominates headlines, but LLM security is often overlooked.
In this deeptalk, Szymon Janowski, Senior ML Engineer, highlights why security is the forgotten brother of the GenAI hype.
We’ll walk through:
- why LLM security matters more than ever,
- common attack vectors (prompt injection, data poisoning, output handling, etc.),
- the OWASP Top 10 for GenAI,
- and practical recommendations for safeguarding LLM-powered systems.
If you’re building LLM-based apps, this talk is your field guide to staying secure.
Timeline
00:00 Intro & Agenda
01:01 Why LLM security matters
02:48 OWASP Top 10 for GenAI
04:12 Prompt injection explained
08:28 Prevention strategies
10:07 Sensitive data & privacy risks
12:35 Supply chain & model poisoning
17:59 Improper output handling
21:37 Prompt leakage & RAG risks
27:39 Misinformation, DoS & summary
Speaker
Szymon Janowski
Senior ML Engineer at deepsense.ai






