The AI Agent Firewall That Never Sleeps
Protect your LLM agents from prompt injection, jailbreaks, data exfiltration, and tool manipulation. One API call. Sub-5ms latency. Self-improving detection.
import { ShieldClient } from "@emadeus/shield-client"

const shield = new ShieldClient({
  apiKey: "sk_live_...",
  baseUrl: "https://api.emadeus.io"
})

const result = await shield.scanInput({
  content: userMessage
})

if (result.action === "block") {
  // Threat detected — block before it reaches your agent
}

Three Lines of Code. Total Protection.
Shield wraps your AI agent with bidirectional scanning. Input goes through Shield before reaching the agent. Output goes through Shield before reaching the user.
Scan Input
Before user content reaches your AI agent, Shield scans it for prompt injection, encoding attacks, and social engineering.
POST /v1/scan/input

Scan Output
After your agent responds, Shield checks for system prompt leaks, credential disclosure, and unauthorized data in the response.
POST /v1/scan/output

Protect RAG & Tools
Scan documents before RAG ingestion. Validate MCP tool definitions. Check tool call arguments for injection payloads.
POST /v1/scan/rag

Integrate in 5 Minutes
Install the SDK, add two lines to your agent pipeline, and you're protected. Works with any LLM provider — OpenAI, Anthropic, Google, open-source models, or your own.
- Bidirectional scanning (input + output)
- Conversation tracking for multi-turn attacks
- RAG document scanning before ingestion
- MCP tool validation for agent frameworks
- Confidence scores (0-1) on every threat
- Content sanitization — preserve safe content
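The RAG-scanning bullet above corresponds to the `POST /v1/scan/rag` endpoint. Here is a minimal sketch of pre-ingestion filtering: the scan call is injected as a parameter so the control flow is clear without a live API key, and the response fields (`action`, `threats`) are assumptions modeled on the input-scan example, not a documented schema.

```typescript
// Assumed response shape for a RAG document scan (illustrative only)
interface RagScanResult {
  action: "allow" | "block"
  threats: { type: string; confidence: number }[]
}

// Scan each document before it enters the vector store, keeping only
// documents the scanner allows. `scan` would wrap POST /v1/scan/rag
// in a real integration.
async function filterSafeDocs(
  docs: string[],
  scan: (content: string) => Promise<RagScanResult>
): Promise<string[]> {
  const safe: string[] = []
  for (const doc of docs) {
    const result = await scan(doc)
    if (result.action === "allow") safe.push(doc)
  }
  return safe
}
```

Filtering at ingestion time means a poisoned document never reaches retrieval, so every downstream agent query is protected by a single upfront scan.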
import { ShieldClient } from "@emadeus/shield-client"

const shield = new ShieldClient({
  apiKey: process.env.SHIELD_API_KEY!,
  baseUrl: "https://api.emadeus.io",
})

async function handleUserMessage(msg: string, sessionId: string) {
  // Scan input before the agent processes it
  const scan = await shield.scanInput({
    content: msg,
    conversationId: sessionId,
  })
  if (scan.action === "block") {
    return { error: "Message blocked for safety" }
  }

  // Use sanitized content if threats were found
  const safeMsg = scan.sanitizedContent ?? msg
  const agentResponse = await agent.run(safeMsg)

  // Scan output before returning to user
  const outputScan = await shield.scanOutput({
    response: agentResponse,
  })
  if (outputScan.threats.length > 0) {
    return { error: "Response filtered" }
  }

  return { response: agentResponse }
}

9 Detection Layers. Every Attack Vector Covered.
Pattern matching, ML classification, LLM-as-judge, behavioral analysis, and more — layered defense that adapts to new attack techniques automatically.
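Each layer attaches a confidence score (0-1) to the threats it reports. A sketch of how those scores might drive a block/sanitize/allow policy on the client side; the thresholds here are illustrative, not Shield's defaults.

```typescript
type Threat = { type: string; confidence: number }

// Map detected threats to an action based on the highest confidence score.
// Cutoffs of 0.9 and 0.5 are example values; tune them to your risk tolerance.
function decideAction(threats: Threat[]): "allow" | "sanitize" | "block" {
  const max = threats.reduce((m, t) => Math.max(m, t.confidence), 0)
  if (max >= 0.9) return "block"     // high-confidence threat: reject outright
  if (max >= 0.5) return "sanitize"  // strip flagged content, keep the rest
  return "allow"
}
```

Because every threat carries a score rather than a binary flag, you can run stricter thresholds on sensitive routes (payments, admin tools) and looser ones on low-risk chat.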
Prompt Injection
150+ patterns for direct, indirect, and paraphrased injection attacks across 12 languages
Encoding Attacks
Zero-width chars, Sneaky Bits, Variation Selectors, base64, directional overrides, tag characters
Data Exfiltration
System prompt extraction, credential theft, PII disclosure, markdown image exfiltration
Multi-Turn Escalation
Crescendo attacks, payload splitting, delayed activation, and trust-building sequences
Tool Manipulation
MCP tool poisoning, tool call validation, SQL/command/path injection in arguments
RAG Poisoning
Scan documents before vector DB ingestion. Detects indirect injection in retrieved content
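The tool-manipulation layer above can be approximated client-side as a pre-execution guard: serialize the tool call and scan it before running the tool. Routing arguments through an input-style scan is our assumption for this sketch, not Shield's documented tool-validation flow, and the scan call is injected so the example runs without network access.

```typescript
type ScanResult = {
  action: "allow" | "block"
  threats: { type: string; confidence: number }[]
}

// Returns true if the tool call is safe to execute.
// In a real integration, `scan` would call Shield's scanning API with
// the serialized tool name and arguments.
async function guardToolCall(
  name: string,
  args: Record<string, unknown>,
  scan: (content: string) => Promise<ScanResult>
): Promise<boolean> {
  const result = await scan(JSON.stringify({ tool: name, args }))
  return result.action !== "block" // false: refuse to execute the tool
}
```

Placing the guard between the agent's decision and the tool's execution means an injected instruction can at worst request a tool call, never complete one.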
Get Early Access to Emadeus Shield
Shield is currently in private beta. Join the waitlist to get free access, a dedicated API key, and direct support from the team building it.
No credit card required. Limited spots available.