On 19 January, Vulcan joined “Making AI Assurance Work in the Real World,” a practitioner-focused forum hosted by AI Verify Foundation, bringing together 200+ global AI assurance and testing practitioners, including regulators, enterprises, and solution providers, to examine how AI assurance is validated through adversarial testing, scenario-based evaluation, and real-world deployment. The session was co-hosted by the AI Verify Foundation and IMDA as part of the Assurance Boot Camp.
For Vulcan, this marked a significant milestone. From supporting the AI Verify Foundation during its early conception to today conducting hands-on AI red-teaming and testing across diverse client environments, we are proud to help shape how responsible AI moves from frameworks into practice. Our work increasingly references applied toolkits such as Singapore’s IMDA AI Starter Kit, enabling organizations to operationalize AI governance, risk identification, and security validation throughout the AI lifecycle.
We had the privilege of sharing the stage with NTUC FairPrice Group and KASIKORN Business-Technology Group (KBTG). Both organizations openly shared their AI system deployment journeys, highlighting challenges ranging from multilingual chatbot vulnerabilities to retail-specific risks and persona drift in customer-facing applications. Vulcan also shared practical experience in adversarial testing across multilingual environments, demonstrating how language switching and mixed-language prompts introduce unique risk patterns in real-world deployments.

Vulcan and NTUC FairPrice Group presented AI testing findings from a public-facing retail chatbot use case, including multilingual and deployment-specific risks.

Vulcan and KASIKORN Business-Technology Group (KBTG) shared a scenario-based approach to testing real-world AI risks.
Across the joint sessions, our discussions surfaced practical, real-world findings from banking and retail environments, including observed adversarial behaviors, linguistic and contextual blind spots, and the necessity of iterative, scenario-based testing to strengthen AI safety and security at scale. These insights reinforced a shared conclusion: many AI risks only emerge after deployment and must be continuously tested, not assumed to be solved at launch.
We extend our thanks to the organizers and the broader community for advancing these critical conversations. AI assurance is no longer a theoretical ideal, it is actively being tested, validated, and refined in production. At Vulcan, we look forward to continuing this work alongside the ecosystem to help organizations deploy AI systems with confidence.