Shield / Quick Start

Protect your AI agent in 5 minutes

From install to first protected scan. No infrastructure changes required.

1. Install the SDK

Zero dependencies. Works in Node.js 18+, Deno, Bun, and edge runtimes.

bash
npm install @emadeus/shield-client

2. Initialize the client

You need an API key and your Shield API URL. Set them as environment variables:

bash
# .env
SHIELD_API_KEY=eshld_...   # provisioned via POST /v1/admin/customers
SHIELD_API_URL=https://api.emadeus.io
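
Note: Node.js doesn't load .env files on its own. Use a loader such as dotenv, or pass --env-file=.env when starting Node (supported since Node 20.6).
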
typescript
import { ShieldClient } from "@emadeus/shield-client";

const shield = new ShieldClient({
  apiKey: process.env.SHIELD_API_KEY!,
  baseUrl: process.env.SHIELD_API_URL!,
});
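
If the Shield API is ever unreachable, the scan call will presumably throw; whether you then fail closed (block traffic) or fail open (let it through) is your call. A minimal fail-closed sketch, assuming ShieldClient throws on network errors (an assumption this guide doesn't confirm):

typescript
// Sketch: fail closed when the scan itself errors out.
// Assumes ShieldClient methods throw on network/API failures (unconfirmed).
async function scanOrBlock(content: string, conversationId: string) {
  try {
    return await shield.scanInput({ content, conversationId });
  } catch {
    // Treat an unreachable Shield as a block. Flip this to
    // { action: "allow" as const } if availability matters more than safety.
    return { action: "block" as const, sanitizedContent: undefined, threats: [] };
  }
}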

3. Scan user input before your AI

Add one call before sending user messages to your AI. Shield returns one of three actions: allow, sanitize, or block.

typescript
// Before calling your AI:
const scan = await shield.scanInput({
  content: userMessage,
  conversationId: sessionId,        // enables multi-turn tracking
  sensitiveScopes: ["credentials", "system_prompt"],
});

if (scan.action === "block") {
  return { error: "This message was blocked for safety." };
}

// Use sanitized content if available
const safeMessage = scan.sanitizedContent ?? userMessage;
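
The same logic expressed as an exhaustive switch, if you prefer explicit control flow (sendToAI is a hypothetical stand-in for your own model call, not part of the SDK):

typescript
// Sketch: one branch per Shield action.
// sendToAI is a placeholder for your own model call.
async function handleMessage(userMessage: string, sessionId: string) {
  const scan = await shield.scanInput({
    content: userMessage,
    conversationId: sessionId,
  });

  switch (scan.action) {
    case "block":
      return { error: "This message was blocked for safety." };
    case "sanitize":
      // Shield rewrote the risky parts; prefer its version.
      return sendToAI(scan.sanitizedContent ?? userMessage);
    case "allow":
      return sendToAI(userMessage);
  }
}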

4. Scan AI output before returning it to the user

Catch credential leaks, system prompt disclosure, PII exposure, and harmful content the AI was manipulated into producing.

typescript
// After getting AI response:
const outputScan = await shield.scanOutput({
  response: aiResponse,
  originalThreats: scan.threats,   // improves detection
});

if (!outputScan.safe) {
  return { error: "Response flagged for safety review." };
}

return { response: aiResponse };

5. That's it. You're protected.

Your AI agent now has four layers of defense in depth: input scanning (700+ patterns, ML, embeddings, and an LLM judge), output scanning (credentials, PII, harmful content), deterministic controls (image stripping, URL allowlisting), and optional active deception.

Next steps

  • Read the full API docs — all 6 scan endpoints, threat types, configuration
  • Add scanRAG() to protect your retrieval pipeline
  • Add scanMCP() to audit tool descriptions
  • Add validateToolCall() before executing tool calls (see the sketch below)
  • Open the dashboard — monitor scans, threats, and detection metrics
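
For validateToolCall(), here's what wiring it in might look like. The parameter and result names below are assumptions, so check the API docs for the real shape:

typescript
// Hypothetical shape; confirm field names against the API docs.
const verdict = await shield.validateToolCall({
  toolName: toolCall.name,     // assumed field: the tool the model wants to run
  arguments: toolCall.input,   // assumed field: the arguments it proposed
  conversationId: sessionId,
});

if (!verdict.safe) {
  // Refuse execution and let your agent loop handle the rejection.
  throw new Error("Tool call rejected by Shield");
}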

Complete Example

Here's a full Express.js endpoint with bidirectional scanning:

typescript
import express from "express";
import { ShieldClient } from "@emadeus/shield-client";
import Anthropic from "@anthropic-ai/sdk";

const app = express();
app.use(express.json()); // parse JSON bodies so req.body is populated
const shield = new ShieldClient({
  apiKey: process.env.SHIELD_API_KEY!,
  baseUrl: process.env.SHIELD_API_URL!,
});
const anthropic = new Anthropic();

app.post("/api/chat", async (req, res) => {
  const { message, conversationId } = req.body;

  // 1. Scan input
  const inputScan = await shield.scanInput({
    content: message,
    conversationId,
    sensitiveScopes: ["credentials", "system_prompt", "pii"],
  });

  if (inputScan.action === "block") {
    return res.status(403).json({ error: "Blocked" });
  }

  // 2. Call AI with safe content
  const response = await anthropic.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 1024,
    messages: [{
      role: "user",
      content: inputScan.sanitizedContent ?? message,
    }],
  });
  const text = response.content[0].type === "text"
    ? response.content[0].text : "";

  // 3. Scan output
  const outputScan = await shield.scanOutput({
    response: text,
    originalThreats: inputScan.threats,
  });

  if (!outputScan.safe) {
    return res.status(422).json({ error: "Response flagged" });
  }

  res.json({ response: text, shieldScore: inputScan.riskScore });
});

app.listen(3000);
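
To smoke-test the endpoint, send a request with the built-in fetch (Node 18+); the message and conversationId values here are just examples:

typescript
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Hello!", conversationId: "demo-session-1" }),
});
console.log(await res.json()); // { response: "...", shieldScore: ... }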