AI can ship code at the speed of a prompt, but it can also ship assumptions just as fast. That is the hidden risk of "vibe coding": endpoints that look correct, tests that pass, and security gaps that only surface in production, incident reviews, or audits.
In this webcast, we will show a practical, developer-first approach to keeping AI productive without letting it reshape your security posture. You will learn how to:
• Turn fuzzy feature ideas into LLM-ready specs that eliminate guesswork
• Structure systems with clear interfaces so AI changes stay contained and reviewable
• Teach your tools the map of your repo, trust boundaries, data sensitivity, and standing rules so you do not have to repeat critical constraints in every prompt
We will also cover operational practices teams often miss, including:
• Enforcing security rules instead of relying on memory
• Tagging AI-touched work so reviews and checklists trigger automatically
• Identifying "red zone" areas where AI can assist but should never drive
If your developers are using AI to write code, this is not optional. It is the difference between faster delivery and faster incidents.
Attendees are eligible to receive 1 CPE credit.
Generously supported by:


Tom has been part of the SecureWorld team for over 14 years. He has launched several of the regional conferences we hold today. Tom is currently responsible for SecureWorld Digital, which provides educational content to the SecureWorld audience. He produces, executes, and moderates the majority of the Remote Sessions webcasts while also working closely with the SecureWorld event directors to build relevant agendas at the regional conferences.
