Cisco Unveils Findings from its State of AI Security Report for 2025

- The report highlights security risk to AI models, systems, applications, and infrastructure from both direct compromise of AI assets as well as vulnerabilities in the supply chain.
- AI is being harnessed by adversaries to improve effectiveness of cyber operations, such as social engineering and algorithmic jailbreaking of large language models (LLMs).
- The report provides actionable recommendations for securing AI systems.
Cisco, the global leader in networking and security, unveiled the findings of its inaugural global State of AI Security report. The report aims to provide a comprehensive overview of important developments in AI security across several key areas: threat intelligence, policy, and research.
Artificial Intelligence AI has emerged as one of the defining technologies of the 21st century, yet the AI threat landscape is novel, complex, and not effectively addressed by traditional cybersecurity solutions. The State of AI Security report aims to empower the community to better understand the AI security landscape, so that companies are better equipped to manage the risks and reap the benefits that AI brings.
“As AI becomes deeply embedded into business and society, securing it must become a top priority,” said Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS. “Traditional cybersecurity approaches are not enough to address the unique threats AI presents. With the State of AI Security report, Cisco is helping organizations better understand and navigate this fast-evolving threat landscape— empowering them with the insights, research, and recommendations needed to build secure, resilient AI systems.”
Findings from Cisco’s first State of AI Security report include:
Evolution of the AI Threat Landscape
The rapid proliferation of AI and AI-enabled technologies has introduced a massive new attack surface that security leaders are only beginning to contend with.
Risk exists at virtually every step across the entire AI development lifecycle; AI assets can be directly compromised by an adversary or discreetly compromised though a vulnerability in the AI supply chain. The State of AI Security report examines several AI-specific attack vectors including prompt injection attacks, data poisoning, and data extraction attacks. It also reflects on the use of AI by adversaries to improve cyber operations like social engineering, supported by research from Cisco Talos.
Looking at the year ahead, cutting-edge advancements in AI will undoubtedly introduce new risks for security leaders to be aware of. For example, the rise of agentic AI which can act autonomously without constant human supervision seems ripe for exploitation. On the other hand, the scale of social engineering threatens to grow tremendously, exacerbated by powerful multimodal AI tools in the wrong hands.
Key Developments in AI Policy
The past year has seen significant advancements in AI policy. International efforts have led to key developments in global AI governance. Early actions in 2025 suggest greater focus towards effectively balancing the need for AI security with accelerating the speed of innovation.
Original AI Security Research
The Cisco AI security research team has led and contributed to several pieces of groundbreaking research which are highlighted in the State of AI Security report.
Research into algorithmic jailbreaking of large language models (LLMs) demonstrates how adversaries can bypass model protections with zero human supervision. This technique can be used to exfiltrate sensitive data and disrupt AI services.
More recently, the team explored automated jailbreaking of advanced reasoning models like DeepSeek R1, to demonstrate that even reasoning models can still fall victim to traditional jailbreaking techniques.
The team also explores the safety and security risks of fine-tuning models. While fine-tuning is a popular method for improving the contextual relevance of AI, many are unaware of the inadvertent consequences like model misalignment.
The report also reviews two pieces of original research into poisoning public datasets and extracting training data from LLMs. These studies shed light on how easily—and cost-effectively—a bad actor can tamper with or exfiltrate data from enterprise AI applications.
Recommendations for AI Security
Securing AI systems requires a proactive and comprehensive approach. The report outlines several actionable recommendations:
- Manage risk at every point in the AI lifecycle: Ensure your security team is equipped to identify and mitigate at every phase: supply chain sourcing (e.g., third-party AI models, data sources, and software libraries), data acquisition, model development, training, and deployment.
- Maintain familiar cybersecurity best practices: Concepts like access control, permission management, and data loss prevention remain critical. Approach securing AI the same way you would secure core technological infrastructure and adapt existing security policies to address AI-specific threats.
- Uphold AI security standards throughout the AI lifecycle: Consider how your business is using AI and implement risk- based AI frameworks to identify, assess, and manage risks associated with these applications. Prioritize security in areas where adversaries seek to exploit weaknesses.
- Educate your workforce in responsible and safe AI usage: Clearly communicate internal policies around acceptable AI use within legal, ethical, and security boundaries to mitigate risks like sensitive data exposure.