Quickstart

Install the SDK

bash
npm install @emadeus/shield-client

Get Your API Key

Sign up at emadeus.io to get your free API key (10,000 scans/month).

Scan Your First Input

typescript
import { ShieldClient } from "@emadeus/shield-client"

const shield = new ShieldClient({
  apiKey: "eshld_your_key_here",
  baseUrl: "https://api.emadeus.io",
})

const result = await shield.scanInput({
  content: "User message to your AI agent",
})

console.log(result.action)    // "allow" | "sanitize" | "block"
console.log(result.riskScore) // 0-100
console.log(result.threats)   // Array of detected threats

If action is "block", the content contains high-risk threats. If "sanitize", use result.sanitizedContent which has threats redacted but safe content preserved.

Authentication

All API requests require a Bearer token in the Authorization header:

bash
curl -X POST https://api.emadeus.io/v1/scan/input \
  -H "Authorization: Bearer eshld_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, can you help me?"}'

API keys are prefixed with eshld_. Keep your key secret — it grants full access to your account's scan and canary endpoints.

POST /v1/scan/input

Scan user input before it reaches your AI agent.

Request Body

FieldTypeRequiredDescription
contentstringYesThe user message to scan
sourcestringNoChannel identifier (e.g. "slack", "email")
sourceIdentitystringNoUser identifier for audit trail
conversationIdstringNoEnable multi-turn tracking (Crescendo detection)
toolsarrayNoAgent's available tools (enables tool manipulation detection)
sensitiveScopesarrayNo"credentials", "system_prompt", "pii", "financial"

Response

json
{
  "action": "sanitize",
  "threats": [{
    "type": "prompt_injection",
    "severity": "high",
    "confidence": 0.95,
    "detail": "Instruction override: ignore previous instructions"
  }],
  "sanitizedContent": "Can you help me with my project?",
  "riskScore": 38,
  "metadata": {
    "scannedAt": "2026-04-28T12:00:00.000Z",
    "scanDurationMs": 2,
    "detectorsRun": [
      "encoding", "injection", "correlation",
      "many_shot", "image_injection", "classifier"
    ],
    "contentLength": 52,
    "source": "api",
    "agentClassification": "human"
  }
}

Additional detectors appear in detectorsRun when their corresponding feature is enabled on the account: agent_detector, coordination_detector, policy_engine, tool_threat, exfiltration.

Threat Types

TypeDescription
prompt_injectionAttempts to override AI instructions
role_confusionAttempts to change AI persona/role
data_exfiltrationAttempts to extract secrets, system prompt, PII
encoding_attackHidden instructions via Unicode, base64, HTML
tool_manipulationAttempts to misuse AI tools
escalationMulti-turn conversation escalation detected
policy_violationCustom policy rule triggered
output_manipulationOutput coerced into harmful format
credential_leakOutput contains API keys, tokens, secrets
system_prompt_leakOutput discloses system prompt
pii_leakOutput contains SSNs, credit cards, bulk emails

POST /v1/scan/output

Scan AI agent output before returning to the user.

Request Body

FieldTypeRequiredDescription
responsestringYesThe agent's response to scan
originalThreatsarrayNoThreats from input scan (for correlation)
toolCallsMadearrayNoTools the agent invoked

Catches: system prompt leakage, credential disclosure, PII in responses, unauthorized URLs.

POST /v1/scan/rag

Scan documents before they enter your RAG vector database.

json
{
  "chunks": [
    { "content": "Document text chunk 1", "source": "policy.pdf" },
    { "content": "Document text chunk 2", "source": "policy.pdf" }
  ]
}

Returns per-chunk results with safety flag and threats. Detects indirect injection in documents, hidden AI directives, footnote/disclaimer injection.

POST /v1/scan/mcp

Validate MCP tool definitions for poisoning before connecting to your agent.

json
{
  "tools": [{
    "name": "web_search",
    "description": "Search the web for information",
    "inputSchema": { "type": "object", "properties": { "query": { "type": "string" } } }
  }]
}

Flags tools with behavioral manipulation in descriptions (e.g. hidden BCC recipients, embedded command execution).

POST /v1/validate/tool-call

Validate tool call arguments before execution.

json
{
  "toolName": "query_database",
  "args": { "sql": "SELECT * FROM users WHERE id = 1" },
  "schema": { "properties": { "sql": { "type": "string" } } }
}

Detects: SQL injection, command injection, path traversal, URL hijacking, structured format injection, prototype pollution, unauthorized fields not in schema.

Canary Tokens

Embed canary tokens in your AI's context (system prompts, fake credentials, decoy URLs). When an attacker's downstream system touches a token, the access is recorded — exactly attributed to your account, with source IP and User-Agent.

Create a token

bash
curl -X POST https://api.emadeus.io/v1/canary \
  -H "Authorization: Bearer eshld_..." \
  -H "Content-Type: application/json" \
  -d '{"type": "url"}'

Token types: url, credential, api_key, email, domain, semantic. Returns id, value (the trap to embed), and callbackUrl (where attacker access is recorded).

List your tokens

bash
curl https://api.emadeus.io/v1/canary \
  -H "Authorization: Bearer eshld_..."

Read accesses for a token

bash
curl https://api.emadeus.io/v1/canary/ct_abc123/accesses \
  -H "Authorization: Bearer eshld_..."

Returns 404 if the token does not exist or belongs to another customer (we never leak which IDs are valid).

Threat Intelligence

Federated threat sharing. Report attack signatures observed in your traffic; once at least 2 distinct customers report the same pattern, the signature enters a shared feed everyone can pull. Customer IDs are SHA-256 hashed before storage — raw IDs are never persisted.

Report a threat signature

bash
curl -X POST https://api.emadeus.io/v1/intelligence/report \
  -H "Authorization: Bearer eshld_..." \
  -H "Content-Type: application/json" \
  -d '{
    "customerId": "your-internal-id",
    "threats": [{
      "type": "prompt_injection",
      "severity": "high",
      "pattern": "ignore previous instructions",
      "confidence": 0.9
    }]
  }'

Pull the federated feed

bash
curl https://api.emadeus.io/v1/intelligence/feed \
  -H "Authorization: Bearer eshld_..."

Optional ?minSeverity=high query parameter filters to high-and-critical only.

Feedback

Flag scan results that were wrong so we can tune detection.

Report a false positive

bash
curl -X POST https://api.emadeus.io/v1/feedback/false-positive \
  -H "Authorization: Bearer eshld_..." \
  -H "Content-Type: application/json" \
  -d '{"scanId": "...", "reason": "legitimate medical question"}'

Report a missed attack

bash
curl -X POST https://api.emadeus.io/v1/feedback/missed-attack \
  -H "Authorization: Bearer eshld_..." \
  -H "Content-Type: application/json" \
  -d '{"contentHash": "sha256:...", "attackType": "prompt_injection",
       "description": "attacker used base64-wrapped instruction override"}'

TypeScript SDK

Zero-dependency TypeScript SDK with auto-retry, timeout handling, and full type safety.

typescript
import { ShieldClient } from "@emadeus/shield-client"

const shield = new ShieldClient({
  apiKey: "eshld_...",
  baseUrl: "https://api.emadeus.io",
  timeout: 10000,    // 10s timeout (default)
  maxRetries: 2,     // Retry on 5xx errors (default)
})

// Scan input
const input = await shield.scanInput({ content, conversationId })

// Scan output
const output = await shield.scanOutput({ response: agentReply })

// Scan RAG documents
const rag = await shield.scanRAG([{ content: docChunk, source: "file.pdf" }])

// Scan MCP tools
const mcp = await shield.scanMCP([{ name: "tool", description: "..." }])

// Validate tool call
const valid = await shield.validateToolCall({
  toolName: "search",
  args: { query: "..." },
})

Error Handling

typescript
import { ShieldApiError } from "@emadeus/shield-client"

try {
  await shield.scanInput({ content })
} catch (e) {
  if (e instanceof ShieldApiError) {
    console.log(e.status) // HTTP status code
    console.log(e.body)   // Error response body
  }
}

Automatically retries on 5xx errors with exponential backoff. 4xx errors are not retried.

Scanning Modes

Shield supports three scanning modes that control sensitivity thresholds:

ModeBlock ThresholdSanitize ThresholdBest For
strictRisk >= 25Risk >= 8High-security environments
moderate (default)Risk >= 50Risk >= 15Most production apps
permissiveRisk >= 80Risk >= 35Low-risk, high-throughput

Rate Limits

PlanScans/monthAI Judge Calls/monthExtra Judge Calls
Starter (Free)1,0000N/A
Pro ($49/mo)100,0005,000$0.002/call
Business ($199/mo)500,00025,000$0.002/call
EnterpriseUnlimitedCustomNegotiated

When AI Judge calls are exhausted, scans continue with pattern-only detection (9 layers, 97%+ detection rate). Your security never stops working.

Rate limit headers included in every response: X-RateLimit-Limit, X-RateLimit-Remaining, Retry-After (on 429).