Quickstart

Install the SDK

```bash
npm install @emadeus/shield-client
```

Get Your API Key

Sign up at emadeus.io to get your free API key (10,000 scans/month).

Scan Your First Input

```typescript
import { ShieldClient } from "@emadeus/shield-client"

const shield = new ShieldClient({
  apiKey: "sk_live_your_key_here",
  baseUrl: "https://api.emadeus.io",
})

const result = await shield.scanInput({
  content: "User message to your AI agent",
})

console.log(result.action)    // "allow" | "sanitize" | "block"
console.log(result.riskScore) // 0-100
console.log(result.threats)   // Array of detected threats
```

If `action` is `"block"`, the content contains high-risk threats and should not be forwarded to your agent. If it is `"sanitize"`, use `result.sanitizedContent`, which has the threats redacted while the safe content is preserved.
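The three actions map naturally onto a gate in front of your agent. A minimal sketch (the helper name and its null-on-block convention are illustrative, not part of the SDK):

```typescript
// Shape of the response fields used below, per the scan response above.
interface ScanResult {
  action: "allow" | "sanitize" | "block"
  sanitizedContent?: string
}

// Decide what text, if any, to forward to the agent.
function contentToForward(original: string, result: ScanResult): string | null {
  switch (result.action) {
    case "allow":
      return original                         // pass through unchanged
    case "sanitize":
      return result.sanitizedContent ?? null  // threats redacted, safe text kept
    case "block":
      return null                             // drop the message entirely
  }
}
```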

Authentication

All API requests require a Bearer token in the Authorization header:

```bash
curl -X POST https://api.emadeus.io/v1/scan/input \
  -H "Authorization: Bearer sk_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, can you help me?"}'
```

API keys are prefixed with sk_live_ for production and sk_test_ for sandbox. Keep your key secret.
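A small guard can catch an environment mix-up before any request is sent. The helper below is illustrative, not part of the SDK; it only checks the documented prefixes:

```typescript
// Returns which environment an API key targets, or null if unrecognized.
function keyEnvironment(apiKey: string): "production" | "sandbox" | null {
  if (apiKey.startsWith("sk_live_")) return "production"
  if (apiKey.startsWith("sk_test_")) return "sandbox"
  return null
}
```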

POST /v1/scan/input

Scan user input before it reaches your AI agent.

Request Body

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | Yes | The user message to scan |
| source | string | No | Channel identifier (e.g. "slack", "email") |
| sourceIdentity | string | No | User identifier for audit trail |
| conversationId | string | No | Enable multi-turn tracking (Crescendo detection) |
| tools | array | No | Agent's available tools (enables tool manipulation detection) |
| sensitiveScopes | array | No | "credentials", "system_prompt", "pii", "financial" |
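Putting the optional fields together, a full request body might look like this (the values are illustrative, and the tools entries are shown as plain names for brevity):

```json
{
  "content": "Please summarize this thread",
  "source": "slack",
  "sourceIdentity": "user-4821",
  "conversationId": "conv-77f3",
  "tools": ["web_search", "send_email"],
  "sensitiveScopes": ["credentials", "system_prompt"]
}
```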

Response

```json
{
  "action": "sanitize",
  "threats": [{
    "type": "prompt_injection",
    "severity": "high",
    "confidence": 0.95,
    "detail": "Instruction override: ignore previous instructions"
  }],
  "sanitizedContent": "Can you help me with my project?",
  "riskScore": 38,
  "metadata": {
    "scannedAt": "2026-03-16T12:00:00.000Z",
    "scanDurationMs": 2,
    "detectorsRun": ["encoding", "injection", "correlation", "many_shot"],
    "contentLength": 52,
    "source": "api"
  }
}
```

Threat Types

| Type | Description |
| --- | --- |
| prompt_injection | Attempts to override AI instructions |
| role_confusion | Attempts to change AI persona/role |
| data_exfiltration | Attempts to extract secrets, system prompt, PII |
| encoding_attack | Hidden instructions via Unicode, base64, HTML |
| tool_manipulation | Attempts to misuse AI tools |
| escalation | Multi-turn conversation escalation detected |
| policy_violation | Custom policy rule triggered |
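For alerting, you will often want to act only on the most serious findings in the threats array. A sketch, with the threat shape taken from the response example above and the filtering criteria chosen for illustration:

```typescript
// Threat shape as it appears in the documented scan response.
interface Threat {
  type: string
  severity: string   // e.g. "high" in the response example
  confidence: number
  detail: string
}

// Keep only high-severity, high-confidence findings for alerting.
function threatsToAlert(threats: Threat[], minConfidence = 0.9): Threat[] {
  return threats.filter(t => t.severity === "high" && t.confidence >= minConfidence)
}
```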

POST /v1/scan/output

Scan AI agent output before returning to the user.

Request Body

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| response | string | Yes | The agent's response to scan |
| originalThreats | array | No | Threats from input scan (for correlation) |
| toolCallsMade | array | No | Tools the agent invoked |

Catches: system prompt leakage, credential disclosure, PII in responses, unauthorized URLs.
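An illustrative request body, assuming originalThreats takes the threats array returned by the input scan and toolCallsMade takes tool names:

```json
{
  "response": "Here is the summary you asked for.",
  "originalThreats": [{
    "type": "prompt_injection",
    "severity": "high",
    "confidence": 0.95,
    "detail": "Instruction override: ignore previous instructions"
  }],
  "toolCallsMade": ["query_database"]
}
```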

POST /v1/scan/rag

Scan documents before they enter your RAG vector database.

```json
{
  "chunks": [
    { "content": "Document text chunk 1", "source": "policy.pdf" },
    { "content": "Document text chunk 2", "source": "policy.pdf" }
  ]
}
```

Returns per-chunk results with a safety flag and any detected threats. Detects indirect injection in documents, hidden AI directives, and footnote/disclaimer injection.
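A common pattern is to drop flagged chunks before upserting into the vector store. A sketch, assuming the per-chunk result exposes its safety flag as a boolean `safe` field (the exact field name is an assumption):

```typescript
interface ChunkInput { content: string; source: string }
interface ChunkResult { safe: boolean }  // field name assumed, not documented

// Keep only the chunks the scan marked safe, preserving their order.
function safeChunks(chunks: ChunkInput[], results: ChunkResult[]): ChunkInput[] {
  return chunks.filter((_, i) => results[i]?.safe === true)
}
```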

POST /v1/scan/mcp

Validate MCP tool definitions for poisoning before connecting to your agent.

```json
{
  "tools": [{
    "name": "web_search",
    "description": "Search the web for information",
    "inputSchema": { "type": "object", "properties": { "query": { "type": "string" } } }
  }]
}
```

Flags tools with behavioral manipulation in descriptions (e.g. hidden BCC recipients, embedded command execution).

POST /v1/validate/tool-call

Validate tool call arguments before execution.

```json
{
  "toolName": "query_database",
  "args": { "sql": "SELECT * FROM users WHERE id = 1" },
  "schema": { "properties": { "sql": { "type": "string" } } }
}
```

Detects: SQL injection, command injection, path traversal, URL hijacking, structured format injection, prototype pollution, unauthorized fields not in schema.
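These checks run server-side. As a rough illustration of two of the categories, not Shield's actual detection logic, a naive client-side pre-filter might look like:

```typescript
// Naive example patterns for two categories above; real detection is far
// more involved and happens in the Shield API, not in your client.
const suspicious: Record<string, RegExp> = {
  sql_injection: /('|--|;)\s*(drop|delete|union)\b/i,
  path_traversal: /\.\.[\/\\]/,
}

// Return the names of any patterns matched by string-valued arguments.
function roughFlags(args: Record<string, unknown>): string[] {
  const flags: string[] = []
  for (const value of Object.values(args)) {
    if (typeof value !== "string") continue
    for (const [name, pattern] of Object.entries(suspicious)) {
      if (pattern.test(value)) flags.push(name)
    }
  }
  return flags
}
```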

TypeScript SDK

Zero-dependency TypeScript SDK with auto-retry, timeout handling, and full type safety.

```typescript
import { ShieldClient } from "@emadeus/shield-client"

const shield = new ShieldClient({
  apiKey: "sk_live_...",
  baseUrl: "https://api.emadeus.io",
  timeout: 10000,    // 10s timeout (default)
  maxRetries: 2,     // Retry on 5xx errors (default)
})

// Scan input
const input = await shield.scanInput({ content, conversationId })

// Scan output
const output = await shield.scanOutput({ response: agentReply })

// Scan RAG documents
const rag = await shield.scanRAG([{ content: docChunk, source: "file.pdf" }])

// Scan MCP tools
const mcp = await shield.scanMCP([{ name: "tool", description: "..." }])

// Validate tool call
const valid = await shield.validateToolCall({
  toolName: "search",
  args: { query: "..." },
})
```

Error Handling

```typescript
import { ShieldApiError } from "@emadeus/shield-client"

try {
  await shield.scanInput({ content })
} catch (e) {
  if (e instanceof ShieldApiError) {
    console.log(e.status) // HTTP status code
    console.log(e.body)   // Error response body
  }
}
```

The client automatically retries 5xx errors with exponential backoff; 4xx errors are not retried.
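The retry schedule is not documented beyond "exponential backoff"; a typical delay calculation looks like the sketch below, where the base and cap are assumptions, not the SDK's actual values:

```typescript
// Exponential backoff with a cap: 500ms, 1s, 2s, 4s, then 8s thereafter.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(capMs, baseMs * 2 ** attempt)
}
```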

Scanning Modes

Shield supports three scanning modes that control sensitivity thresholds:

| Mode | Block Threshold | Sanitize Threshold | Best For |
| --- | --- | --- | --- |
| strict | Risk >= 25 | Risk >= 8 | High-security environments |
| moderate (default) | Risk >= 50 | Risk >= 15 | Most production apps |
| permissive | Risk >= 80 | Risk >= 35 | Low-risk, high-throughput |
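The thresholds translate directly into the action decision. A sketch of the mapping (the decision happens server-side; this just restates the table as code):

```typescript
type Mode = "strict" | "moderate" | "permissive"
type Action = "allow" | "sanitize" | "block"

// Thresholds from the table above.
const thresholds: Record<Mode, { block: number; sanitize: number }> = {
  strict:     { block: 25, sanitize: 8 },
  moderate:   { block: 50, sanitize: 15 },
  permissive: { block: 80, sanitize: 35 },
}

// Map a 0-100 risk score to an action under the given mode.
function actionForRisk(riskScore: number, mode: Mode = "moderate"): Action {
  const t = thresholds[mode]
  if (riskScore >= t.block) return "block"
  if (riskScore >= t.sanitize) return "sanitize"
  return "allow"
}
```

For example, the riskScore of 38 in the sample response above falls between moderate's sanitize (15) and block (50) thresholds, matching its "sanitize" action.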

Rate Limits

| Plan | Scans/month | AI Judge Calls/month | Extra Judge Calls |
| --- | --- | --- | --- |
| Starter (Free) | 10,000 | 0 | N/A |
| Pro ($49/mo) | 100,000 | 5,000 | $0.002/call |
| Business ($199/mo) | 500,000 | 25,000 | $0.002/call |
| Enterprise | Unlimited | Custom | Negotiated |

When AI Judge calls are exhausted, scans continue with pattern-only detection (9 layers, 97%+ detection rate). Your security never stops working.

Rate limit headers included in every response: X-RateLimit-Limit, X-RateLimit-Remaining, Retry-After (on 429).
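If you call the REST API directly, you can read the documented headers off the response to throttle proactively (header parsing only; the SDK's internal handling may differ):

```typescript
// Read the documented rate-limit headers from a fetch-style Headers object.
// Headers.get() is case-insensitive, so the casing above does not matter.
function rateLimitInfo(headers: Headers): {
  limit: number
  remaining: number
  retryAfterSec: number | null
} {
  const retry = headers.get("Retry-After")
  return {
    limit: Number(headers.get("X-RateLimit-Limit") ?? 0),
    remaining: Number(headers.get("X-RateLimit-Remaining") ?? 0),
    retryAfterSec: retry === null ? null : Number(retry),
  }
}
```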