What is AI Model Drift and Why is it a Security Concern?
Artificial Intelligence systems are increasingly being deployed in production environments. Organizations now use AI for fraud detection, healthcare analytics, recommendation...
Artificial Intelligence systems are increasingly being deployed in production environments. Organizations now use AI for fraud detection, healthcare analytics, recommendation...
Artificial Intelligence systems are becoming part of critical applications. AI is now used in healthcare, banking, e-governance, cybersecurity, and enterprise...
AI systems introduce risks that traditional security testing cannot fully address. Unlike conventional software, AI models can be manipulated through prompts, leak sensitive data, generate unsafe outputs, or behave unpredictably. This blog explains why AI security testing requires specialized approaches covering applications, models, infrastructure, data, and overall AI trustworthiness.
Learn how to test for prompt injection vulnerabilities in LLM-powered applications using OWASP-recommended techniques. This blog covers practical testing workflows, common attack payloads, automation tools, and mitigation strategies to secure your AI models effectively.
OWASP Agentic AI Threat T15: When AI Agents Manipulate the Humans Who Trust Them
Learn how human attackers exploit delegation and trust in multi-agent AI systems. Explore OWASP T14 mitigations to stop privilege escalation and manipulation.
Learn how rogue AI agents bypass oversight to execute unauthorized actions or exfiltrate data. Explore OWASP T13 defenses to secure multi-agent AI systems.
Attackers can poison AI-to-AI communications to spread false data and disrupt workflows. Learn OWASP’s T12 defenses for securing inter-agent communication.