
As AI and large language models (LLMs) rapidly transform industries, they also introduce new vulnerabilities that traditional cybersecurity methods can’t fully address—data leaks, non-compliance, intellectual property theft, and more. In fact, 94% of IT leaders have allocated budgets to safeguard AI in 2024, and this number is expected to rise significantly as AI and LLM adoption continues. The modern attack surface has evolved, making AI and LLMs prime targets for cyberattacks.
In this edition of the Cyber Risk Series, we’ll tackle the most pressing AI security challenges, explore the hidden risks in your AI and LLM workloads, and forecast the 2025 AI security landscape. This event will bring AI security to the forefront, empowering security leaders to defend against emerging threats.
Wednesday, December 4, 2024
Virtual
Featured Speakers
Sumedh Thakar
President and CEO, Qualys

Graham Cluley
Smashing Security
Dr. Jessie Jamieson
Senior Cyber Risk Engineer, Cyber Risk and Resiliency Directorate, CERT Division
Steve Wilson
Chief Product Officer, Exabeam
Preeti Ravindra
Data, Math & Software for Security
Laura Seletos
Principal Cloud Security Architect, NVIDIA
Joe Petrocelli
VP Product Management, Qualys
Key topics:
- Full AI & LLM Workload Discovery
- AI Vulnerability Management
- Risk-Based Prioritization
- Compliance & Legal Protection
Agenda
10:00 AM PT
10:10 AM PT
Redefining Risk and Resilience in a New Cyber Era

Sumedh Thakar
President and CEO, Qualys
10:30 AM PT
Chatbots Breaking Bad: Unmasking the Risks of LLMs

Steve Wilson
Chief Product Officer, Exabeam
11:00 AM PT
Security in the Age of AI

Laura Seletos
Principal Cloud Security Architect, NVIDIA
11:30 AM PT
Becoming More Comfortable with Risk-Informed Secure AI

Jessie Jamieson
PhD, Senior Cyber Risk Engineer, CERT Division, CMU SEI
Enterprises have historically managed cybersecurity risk by adopting robust risk management frameworks, tools, and processes that link risks directly to actions. This talk illustrates how the same concepts that have long enabled us to mitigate and respond to security risk can be applied to secure the capabilities enabled by emergent technologies, including AI. Along the way, we will examine what makes us uncomfortable with AI and discuss concrete steps toward deploying these capabilities confidently and securely.
12:00 PM PT
Risk Mitigation for AI with Secure Development Lifecycle

Preeti Ravindra
Data, Math & Software for Security
12:30 PM PT
Navigating Security Challenges of Large Language Models with AI Asset Visibility and Model Scanning

Joe Petrocelli
VP Product Management, Qualys
