In the heart of Boston Tech Week, Snyk is hosting a unique series of four hands-on technical training sessions on AI Security Fundamentals. As part of Snyk’s “World-Class Defense for the AI Era” series across North America, we’re bringing the action to Boston for Tech Week (see training details below).
Whether you’re a developer curious about how your AI applications can be broken, or a security engineer looking to sharpen your offensive skills, this workshop offers a practical, gamified experience to understand the true nature of AI risk. LAPTOP REQUIRED.
Join us for a Fan Zone–inspired experience featuring a deep dive into cutting-edge AI, alongside drinks, great conversations, and a unique showcase of coding as performance. Sharpen your skills, connect with peers, and be part of the action in an experience built for competition, energy, and community.
Complete one or more of the six available sessions and walk away with limited edition swag & stickers. If you complete all 6 of our sessions, you’ll get Snyk's AI Security Engineer Foundations certificate of completion, which validates your ability to build, ship, and secure AI-powered applications in the real world.
This event is a part of #BOSTechWeek—a week of events hosted by VCs and startups to bring together the tech ecosystem. Learn more at www.tech-week.com.
—--------------
TRAINING SESSION 1 - FINDING ROGUE AI COMPONENTS
The AI Security Engineer is rapidly becoming one of the most critical roles in the modern DevSecOps landscape. But as developers move at light speed to deploy AI-native applications, a dangerous gap is widening: the disconnect between the tools used to build AI and the visibility security teams need to protect it.
How do you secure what you can't see? Join us for a unique session and watch us step into the shoes of the AI Security Engineer to navigate the unexpected things that surface when you look under the hood of your AI posture.
What You’ll Learn:
- Defining the Role: Understand how an AI Security Engineer gets started and why it’s important that they operate at the intersection of platform security, ML engineering, and threat intelligence.
- Uncovering AI Risk in an Environment: Learn the tools required to uncover “Shadow AI” and how security conversations are now changing within organizations.
- Mastering Visibility with Evo AI-SPM: Learn how to provide intelligence and policy enforcement for autonomous AI without slowing down innovation.
TRAINING SESSION 2 - SECURING THE AGENT SKILLS ECOSYSTEM & MCP
How SKILL.md Introduced Malware
The rise of autonomous AI agents is creating a new paradigm in software development, powered by an exploding ecosystem of reusable Skills. But as developers rush to extend their agents with capabilities from public registries like ClawHub, a dangerous new attack surface has emerged: the humble SKILL.md file.
What happens when a few lines of Markdown grant an attacker shell access to your machine or trick your agent into exfiltrating your API keys? Join us for a deep dive into the ToxicSkills research, where we will dissect the first major supply-chain threats targeting AI agent ecosystems. We will step into the shoes of an attacker to show how innocent instructions can weaponize your AI against you.
What You’ll Learn:
- The Lethal Trifecta: Understand the unique intersection of risks: access to private data, exposure to untrusted content, and external communication - that makes compromised agents dangerous insider threats.
- Anatomy of a Malicious Skill: Dissect real-world examples from the ToxicSkills campaign and Leaky Skills research to see how SKILL.md files are used to deliver malware, poison memory, and leak credentials.
- Securing the Agent Supply Chain: Learn how to use tools like mcp-scan and AI-BOM to detect tool poisoning, uncover Shadow AI, and enforce security policies for autonomous agents.
TRAINING SESSION 3 - SECURE VIBE CODING
As AI coding tools become embedded in daily development, they bring a new wave of productivity, and new security risks. In this session we break down the security implications of Vibe Coding and share actionable strategies to secure AI-generated code at scale.
Why Join?
- Learn how Vibe Coding is reshaping development and the risks that come with it
- Get practical strategies to secure AI-generated code at scale
- See how Snyk secures your AI-powered SDLC from code to deployment using Snyk Studio
TRAINING SESSION 4 - OWASP TOP 10 FOR LLM
Gain a deep understanding of the web’s most critical security risks through the lens of the latest OWASP Top 10 industry standard. This course moves beyond theoretical lists, teaching you how to identify root-cause vulnerabilities and implement modern defense strategies to harden your applications.
—--------------
AGENDA
1:00 PM – Doors open
1:15 PM – Sessions begin
1:15–2:15 PM – Sessions 1 & 2
2:15–2:30 PM – Break (15 mins)
2:30–3:30 PM – Sessions 3 & 4
3:30–4:00 PM – Networking & closing
4:00 PM – Doors close
Guest List
0 on the list · 1 Interested
ᵔ◡ᵔ
ᵔ▿ᵔ
⚆◟⚆
Restricted Access
Must be on the list to view event activity & see list details