Quickstart
Install the SDK
npm install @emadeus/shield-clientGet Your API Key
Sign up at emadeus.io to get your free API key (10,000 scans/month).
Scan Your First Input
import { ShieldClient } from "@emadeus/shield-client"
const shield = new ShieldClient({
apiKey: "eshld_your_key_here",
baseUrl: "https://api.emadeus.io",
})
const result = await shield.scanInput({
content: "User message to your AI agent",
})
console.log(result.action) // "allow" | "sanitize" | "block"
console.log(result.riskScore) // 0-100
console.log(result.threats) // Array of detected threatsIf action is "block", the content contains high-risk threats. If "sanitize", use result.sanitizedContent which has threats redacted but safe content preserved.
Authentication
All API requests require a Bearer token in the Authorization header:
curl -X POST https://api.emadeus.io/v1/scan/input \
-H "Authorization: Bearer eshld_your_key_here" \
-H "Content-Type: application/json" \
-d '{"content": "Hello, can you help me?"}'API keys are prefixed with eshld_. Keep your key secret — it grants full access to your account's scan and canary endpoints.
POST /v1/scan/input
Scan user input before it reaches your AI agent.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| content | string | Yes | The user message to scan |
| source | string | No | Channel identifier (e.g. "slack", "email") |
| sourceIdentity | string | No | User identifier for audit trail |
| conversationId | string | No | Enable multi-turn tracking (Crescendo detection) |
| tools | array | No | Agent's available tools (enables tool manipulation detection) |
| sensitiveScopes | array | No | "credentials", "system_prompt", "pii", "financial" |
Response
{
"action": "sanitize",
"threats": [{
"type": "prompt_injection",
"severity": "high",
"confidence": 0.95,
"detail": "Instruction override: ignore previous instructions"
}],
"sanitizedContent": "Can you help me with my project?",
"riskScore": 38,
"metadata": {
"scannedAt": "2026-04-28T12:00:00.000Z",
"scanDurationMs": 2,
"detectorsRun": [
"encoding", "injection", "correlation",
"many_shot", "image_injection", "classifier"
],
"contentLength": 52,
"source": "api",
"agentClassification": "human"
}
}Additional detectors appear in detectorsRun when their corresponding feature is enabled on the account: agent_detector, coordination_detector, policy_engine, tool_threat, exfiltration.
Threat Types
| Type | Description |
|---|---|
| prompt_injection | Attempts to override AI instructions |
| role_confusion | Attempts to change AI persona/role |
| data_exfiltration | Attempts to extract secrets, system prompt, PII |
| encoding_attack | Hidden instructions via Unicode, base64, HTML |
| tool_manipulation | Attempts to misuse AI tools |
| escalation | Multi-turn conversation escalation detected |
| policy_violation | Custom policy rule triggered |
| output_manipulation | Output coerced into harmful format |
| credential_leak | Output contains API keys, tokens, secrets |
| system_prompt_leak | Output discloses system prompt |
| pii_leak | Output contains SSNs, credit cards, bulk emails |
POST /v1/scan/output
Scan AI agent output before returning to the user.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| response | string | Yes | The agent's response to scan |
| originalThreats | array | No | Threats from input scan (for correlation) |
| toolCallsMade | array | No | Tools the agent invoked |
Catches: system prompt leakage, credential disclosure, PII in responses, unauthorized URLs.
POST /v1/scan/rag
Scan documents before they enter your RAG vector database.
{
"chunks": [
{ "content": "Document text chunk 1", "source": "policy.pdf" },
{ "content": "Document text chunk 2", "source": "policy.pdf" }
]
}Returns per-chunk results with safety flag and threats. Detects indirect injection in documents, hidden AI directives, footnote/disclaimer injection.
POST /v1/scan/mcp
Validate MCP tool definitions for poisoning before connecting to your agent.
{
"tools": [{
"name": "web_search",
"description": "Search the web for information",
"inputSchema": { "type": "object", "properties": { "query": { "type": "string" } } }
}]
}Flags tools with behavioral manipulation in descriptions (e.g. hidden BCC recipients, embedded command execution).
POST /v1/validate/tool-call
Validate tool call arguments before execution.
{
"toolName": "query_database",
"args": { "sql": "SELECT * FROM users WHERE id = 1" },
"schema": { "properties": { "sql": { "type": "string" } } }
}Detects: SQL injection, command injection, path traversal, URL hijacking, structured format injection, prototype pollution, unauthorized fields not in schema.
Canary Tokens
Embed canary tokens in your AI's context (system prompts, fake credentials, decoy URLs). When an attacker's downstream system touches a token, the access is recorded — exactly attributed to your account, with source IP and User-Agent.
Create a token
curl -X POST https://api.emadeus.io/v1/canary \
-H "Authorization: Bearer eshld_..." \
-H "Content-Type: application/json" \
-d '{"type": "url"}'Token types: url, credential, api_key, email, domain, semantic. Returns id, value (the trap to embed), and callbackUrl (where attacker access is recorded).
List your tokens
curl https://api.emadeus.io/v1/canary \
-H "Authorization: Bearer eshld_..."Read accesses for a token
curl https://api.emadeus.io/v1/canary/ct_abc123/accesses \
-H "Authorization: Bearer eshld_..."Returns 404 if the token does not exist or belongs to another customer (we never leak which IDs are valid).
Threat Intelligence
Federated threat sharing. Report attack signatures observed in your traffic; once at least 2 distinct customers report the same pattern, the signature enters a shared feed everyone can pull. Customer IDs are SHA-256 hashed before storage — raw IDs are never persisted.
Report a threat signature
curl -X POST https://api.emadeus.io/v1/intelligence/report \
-H "Authorization: Bearer eshld_..." \
-H "Content-Type: application/json" \
-d '{
"customerId": "your-internal-id",
"threats": [{
"type": "prompt_injection",
"severity": "high",
"pattern": "ignore previous instructions",
"confidence": 0.9
}]
}'Pull the federated feed
curl https://api.emadeus.io/v1/intelligence/feed \
-H "Authorization: Bearer eshld_..."Optional ?minSeverity=high query parameter filters to high-and-critical only.
Feedback
Flag scan results that were wrong so we can tune detection.
Report a false positive
curl -X POST https://api.emadeus.io/v1/feedback/false-positive \
-H "Authorization: Bearer eshld_..." \
-H "Content-Type: application/json" \
-d '{"scanId": "...", "reason": "legitimate medical question"}'Report a missed attack
curl -X POST https://api.emadeus.io/v1/feedback/missed-attack \
-H "Authorization: Bearer eshld_..." \
-H "Content-Type: application/json" \
-d '{"contentHash": "sha256:...", "attackType": "prompt_injection",
"description": "attacker used base64-wrapped instruction override"}'TypeScript SDK
Zero-dependency TypeScript SDK with auto-retry, timeout handling, and full type safety.
import { ShieldClient } from "@emadeus/shield-client"
const shield = new ShieldClient({
apiKey: "eshld_...",
baseUrl: "https://api.emadeus.io",
timeout: 10000, // 10s timeout (default)
maxRetries: 2, // Retry on 5xx errors (default)
})
// Scan input
const input = await shield.scanInput({ content, conversationId })
// Scan output
const output = await shield.scanOutput({ response: agentReply })
// Scan RAG documents
const rag = await shield.scanRAG([{ content: docChunk, source: "file.pdf" }])
// Scan MCP tools
const mcp = await shield.scanMCP([{ name: "tool", description: "..." }])
// Validate tool call
const valid = await shield.validateToolCall({
toolName: "search",
args: { query: "..." },
})Error Handling
import { ShieldApiError } from "@emadeus/shield-client"
try {
await shield.scanInput({ content })
} catch (e) {
if (e instanceof ShieldApiError) {
console.log(e.status) // HTTP status code
console.log(e.body) // Error response body
}
}Automatically retries on 5xx errors with exponential backoff. 4xx errors are not retried.
Scanning Modes
Shield supports three scanning modes that control sensitivity thresholds:
| Mode | Block Threshold | Sanitize Threshold | Best For |
|---|---|---|---|
| strict | Risk >= 25 | Risk >= 8 | High-security environments |
| moderate (default) | Risk >= 50 | Risk >= 15 | Most production apps |
| permissive | Risk >= 80 | Risk >= 35 | Low-risk, high-throughput |
Rate Limits
| Plan | Scans/month | AI Judge Calls/month | Extra Judge Calls |
|---|---|---|---|
| Starter (Free) | 1,000 | 0 | N/A |
| Pro ($49/mo) | 100,000 | 5,000 | $0.002/call |
| Business ($199/mo) | 500,000 | 25,000 | $0.002/call |
| Enterprise | Unlimited | Custom | Negotiated |
When AI Judge calls are exhausted, scans continue with pattern-only detection (9 layers, 97%+ detection rate). Your security never stops working.
Rate limit headers included in every response: X-RateLimit-Limit, X-RateLimit-Remaining, Retry-After (on 429).