Phoenix Police Department (PPD) has launched Versaterm's CallTriage, an AI-powered conversational platform handling non-emergency (Crime Stop) calls. Several aspects make this deployment noteworthy to cybersecurity professionals, particularly those focused on public-sector AI governance, trust, and resilience.
Since mid-August 2025, residents calling PPD's non-emergency line have been greeted by a conversational AI that supports 36 languages, including Spanish, Arabic, Punjabi, Mandarin, and Greek. After guiding callers through a brief interaction, the system either routes them to relevant resources (such as the Online Reporting portal, Silent Witness, the Office of Homeless Solutions, or other community services) or escalates the call to a live operator if needed.
This $643,000 initiative, funded through the police communications budget, was championed by Vice Mayor Ann O'Brien, whose political backing helped secure approval for implementation.
Prior to full launch, CallTriage underwent controlled trials and a limited public rollout, demonstrating shorter hold times and a reduced dispatcher load. Allie Edwards, Communications Bureau Administrator, emphasized the importance of tailoring policies and maintaining privacy; call records remain public records subject to redaction.
Phoenix's deployment comes with a history lesson: the Portland, Oregon, Bureau of Emergency Communications previously tested the same tool but discontinued it after outdated hardware infrastructure caused performance issues. Phoenix addressed this by upgrading its phone system and working with Versaterm on rigorous testing and enhancements.
"We have conducted extensive testing, verification, and certification, along with product enhancements, to ensure agencies can provide the communities they serve with the best possible experience," a city representative told one radio news outlet.
Importantly, callers retain the option to connect with a live dispatch operator, preserving human oversight.
"Any type of innovation that increases a police department's responsiveness to its community should be welcomed, as long as it works as intended. Using AI to handle non-emergency calls has the potential to streamline responses and make the department more efficient," said Col. Cedric Leighton, CNN Military Analyst; U.S. Air Force (Ret.); Chairman, Cedric Leighton Associates, LLC. "As the PPD employs this AI-powered conversational platform, I hope the human backups they are using have the capability to address issues in all 36 languages the platform supports."
"The PPD's careful deployment of CallTriage serves as a potential model for other police departments around the country and, perhaps, around the world," Col. Leighton continued. "The current CallTriage system could serve as a starting point for an AI-based system that could eventually handle some aspects of 911 emergency calls. For example, a sophisticated AI-based platform could help emergency responders distinguish between a real emergency call and a 'swatting' incident."
From a security perspective, these kinds of AI tools raise governance considerations, including:
- Privacy and public records transparency: While the AI handles routing, all call transcripts remain subject to public-record requests with standard redactions, preserving accountability. Security leaders should monitor how AI-generated data is stored, accessed, and audited for compliance.
- System hardening and integration: This deployment underscores the necessity of robust compatibility. Phoenix preemptively addressed the hardware limitations that led Portland to discontinue the system. Proper load testing, infrastructure validation, and failover planning are critical, especially for mission-critical civic systems.
- Trust and human override safeguards: Maintaining the option to speak with a human dispatcher is vital. AI triage systems must include clear, easily accessible human override mechanisms to prevent misrouting, machine error, and user frustration (see the sketch after this list).
- Accessibility and bias mitigation: Supporting 36 languages demonstrates a strong commitment to equity. Yet subtle language-recognition errors or cultural biases must be monitored closely. Ensuring inclusive, unbiased training and continuous evaluation is essential.
- Performance monitoring and feedback loops: PPD plans weekly reviews, with formal evaluations at 30-, 60-, and 90-day intervals covering call volume and satisfaction. These performance metrics and resident feedback loops are best practices for responsible AI deployment.
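To make the override point concrete, here is a minimal sketch of a triage decision with a guaranteed path to a human. It is illustrative only: the intent labels, confidence floor, keyword list, and routing targets are assumptions for this example, not details of Versaterm's CallTriage.

```python
# Minimal sketch of an AI call-triage decision with a human override.
# Intent labels, the confidence floor, and routing targets are illustrative
# assumptions, not details of Versaterm's CallTriage.
from dataclasses import dataclass

OPERATOR_KEYWORDS = {"operator", "agent", "person", "human", "representative"}
CONFIDENCE_FLOOR = 0.80  # below this, never auto-route; hand off to a human

ROUTES = {
    "online_report": "Online Reporting portal",
    "anonymous_tip": "Silent Witness",
    "homeless_services": "Office of Homeless Solutions",
}

@dataclass
class TriageResult:
    destination: str
    escalated: bool
    reason: str

def triage(utterance: str, intent: str, confidence: float) -> TriageResult:
    """Route a non-emergency call, escalating whenever the caller asks for
    a person or the model is not confident enough to route automatically."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    if words & OPERATOR_KEYWORDS:
        return TriageResult("live_operator", True, "caller requested a human")
    if confidence < CONFIDENCE_FLOOR or intent not in ROUTES:
        return TriageResult("live_operator", True, "low confidence or unknown intent")
    return TriageResult(ROUTES[intent], False, f"matched intent '{intent}'")

if __name__ == "__main__":
    print(triage("I want to report a stolen bike", "online_report", 0.93))
    print(triage("Just let me talk to a person", "online_report", 0.97))
    print(triage("Uh, I'm not sure who to call", "unknown", 0.41))
```

The key design choice is that both low confidence and an explicit request for a person fail over to the same live-operator path, so the human override is never more than one utterance away.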
"AI is only one part of the equation," said Randy J. Hinrichs, author of The AI Moral Code. "Let the system do what it does best—pattern recognition, error correction, speed, routing, and multilingual reach—and let people do what we do best: judge ambiguous situations, show empathy, demand accountability and transparency through audit logs, keep fairness honest with continuous testing, and protect dignity with an instant human override.”
And there are broader implications for security practitioners:
- AI as mission-critical infrastructure: AI systems like CallTriage are now part of essential public safety infrastructure. They must be included in incident response planning, resilience testing, and cybersecurity risk assessment (a failover sketch follows this list).
- Third-party risk governance: Using tools from vendors like Versaterm requires robust third-party oversight: SLA enforcement, patching agreements, access auditing, and supply chain monitoring.
- Human-in-the-loop design: AI should augment, not replace, human decision-making in sensitive domains. Clear boundaries, escalation paths, and oversight mechanisms are vital for trust.
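As a hedged illustration of treating the AI front end as mission-critical infrastructure, the sketch below fails open to human dispatchers whenever a health check fails or stalls. The health endpoint, timeout, and queue names are invented for this example and are not part of any real Versaterm interface.

```python
# Hypothetical failover check for an AI call-triage front end. The health
# endpoint, timeout, and queue names are illustrative assumptions.
import urllib.request

AI_HEALTH_URL = "http://calltriage.internal/health"  # assumed internal endpoint
TIMEOUT_SECONDS = 2  # a slow health check is treated the same as a failed one

def ai_front_end_is_healthy() -> bool:
    """Return True only if the AI platform answers its health check quickly."""
    try:
        with urllib.request.urlopen(AI_HEALTH_URL, timeout=TIMEOUT_SECONDS) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, connection failures, and timeouts
        return False

def route_incoming_call(call_id: str) -> str:
    """Fail open to humans: if the AI layer is degraded, every call goes
    straight to the live dispatcher queue instead of being held."""
    if ai_front_end_is_healthy():
        return f"call {call_id} -> ai_triage_queue"
    return f"call {call_id} -> live_dispatcher_queue"
```

The point of a check like this is that the AI layer's failure mode is degraded efficiency, never a dropped or stranded caller, which is what incident response and resilience planning should verify.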
Dr. Kimberly KJ Haywood, Chief AI Governance & Education Advisor at AI Connex and Adjunct Cybersecurity Professor at Collin College, said the PPD should exercise an abundance of caution as it rolls out the new technology:
- "PPD moved forward with upgraded systems, real-world testing, and human oversight. Though I understand the necessity, it’s a decision that comes with serious responsibility. In my experience, the real test of any technology is how it handles failure. What happens when Versaterm's CallTriage platform doesn't understand a caller, misroutes a report, or misses a critical signal from someone in distress?"
- "Stress-testing is essential, but skipping foundational steps like AI Security by Design makes deployment risky and potentially costly. Without strong human-in-the-loop safeguards, AI can misread tone, urgency, or struggle with callers who are speech-impaired, neurodivergent, or speaking in dialects the system wasn't trained on. Supporting 36 languages is important, but that alone doesn't ensure understanding or equity. PPD must also address algorithmic and data bias, as inequity can quietly take root in the training and logic behind the platform."
- "To strengthen the rollout, PPD should focus on two key areas: first, a clear, automatic escalation path that doesn't require callers to fight the system to reach a human; and second, regular audits that stress-test the technology against real-world edge cases. Public safety tools must work for everyone, especially those most likely to be misunderstood."