Oklahoma Embraces Autonomous AI for Cybersecurity Defense
Oklahoma's Chief Information Security Officer Michael Toland discusses the state's shift toward allowing AI agents to make independent cybersecurity decisions.
Michael Toland, Oklahoma's Chief Information Security Officer (CISO), recently described the "scary" but necessary decision to allow AI agents to operate autonomously in the state's cybersecurity efforts. Facing an onslaught of cyberattacks enhanced by generative AI, Oklahoma has shifted from human-confirmed actions to fully automated decision-making using Darktrace's "Cyber AI Analyst."
The Need for Speed in Cybersecurity
Toland emphasized that traditional methods are no longer sufficient. "My staff isn’t going to get any bigger. My budget isn’t going to get any bigger," he said. With AI-powered threats like sophisticated phishing emails and malware on the rise, Oklahoma's small team of 35 IT security professionals relies on AI to monitor 28 billion potential threats annually.
- AI as a Force Multiplier: Darktrace's agent scans network traffic in near real time, flagging anomalies such as unfamiliar processes or unusual user behavior. In one month it generated 3,000 alerts, 18 of which were critical. Toland estimates this efficiency equals the work of 500 human analysts.
- Autonomous Actions: The AI can quarantine suspicious devices, imposing progressive time-outs. While it lacks workstation access, it monitors everything, including email tone shifts that might indicate compromise.
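The "progressive time-outs" described above can be illustrated with a simple escalation policy: each new anomaly from a device doubles its quarantine duration up to a fixed ceiling. This is a minimal sketch for illustration only; the `QuarantinePolicy` class and its parameters are hypothetical, and Darktrace's actual response logic is proprietary and not described in the article.

```python
# Hypothetical sketch of a progressive time-out quarantine policy.
# Class and parameter names are illustrative, not Darktrace's API.

class QuarantinePolicy:
    """Escalate a device's quarantine duration with each new anomaly."""

    def __init__(self, base_minutes=5, factor=2, max_minutes=240):
        self.base_minutes = base_minutes  # first time-out length
        self.factor = factor              # escalation multiplier per strike
        self.max_minutes = max_minutes    # ceiling on any single time-out
        self.strikes = {}                 # device id -> anomaly count so far

    def quarantine_minutes(self, device_id):
        """Record an anomaly and return this device's next time-out."""
        count = self.strikes.get(device_id, 0)
        self.strikes[device_id] = count + 1
        minutes = self.base_minutes * (self.factor ** count)
        return min(minutes, self.max_minutes)


policy = QuarantinePolicy()
print(policy.quarantine_minutes("ws-042"))  # 5
print(policy.quarantine_minutes("ws-042"))  # 10
print(policy.quarantine_minutes("ws-042"))  # 20
```

The escalation-with-ceiling shape keeps a one-off false positive cheap (a few minutes offline) while a repeatedly misbehaving device is contained for progressively longer, which matches the trade-off between autonomy and disruption the article describes.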
Risks and Rewards of Agentic AI
Sounil Yu, CTO of Knostic, cautioned against premature autonomy: "I think a lot of people are going to be playing Russian roulette with security tools that they let go wild." However, Toland argued that speed is critical: IBM data shows breaches often go undetected for 194 days, allowing attackers to corrupt backups.
Yu acknowledged AI's long-term potential to tilt the balance toward defenders: "AI effectively levels the playing ground." But he likened current implementations to "wielding the power of 100 interns"—powerful yet requiring oversight.
Key Takeaways
- Oklahoma’s AI-driven approach reflects a broader trend of governments adopting agentic tools to counter AI-augmented threats.
- The state’s system highlights the trade-off between autonomy and risk in cybersecurity.
- Experts agree AI will eventually favor defenders but urge caution in its deployment.
About the Author

Dr. Lisa Kim
AI Ethics Researcher
Leading expert in AI ethics and responsible AI development with 13 years of research experience. Former member of Microsoft AI Ethics Committee, now provides consulting for multiple international AI governance organizations. Regularly contributes AI ethics articles to top-tier journals like Nature and Science.