Agent enrollment

You can enroll AI agents in runtime guardrails by updating their LLM base URL to point to your AI Gateway instance. This routes all LLM API calls through the gateway’s guardrails service without requiring changes to your application logic.

How it works

Most LLM SDKs provide a configuration option to override the default base URL for API requests. By redirecting this URL to your AI Gateway endpoint, you route all of the agent's LLM traffic through the gateway's guardrails service. The gateway then functions as a transparent proxy, applying your security policies before forwarding each request to the upstream LLM provider.

From the application’s perspective, the API remains functionally identical. The only technical distinction is that all outbound prompts and inbound completions are inspected and filtered according to your configured guardrail logic.

Prerequisites

  • An AI Gateway deployed and accessible from the agent’s network.
  • The gateway URL (for example, http://ai-gateway.internal:10000).
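Before changing any SDK configuration, it can help to confirm the gateway is reachable from the agent's host. The sketch below uses a plain TCP connect for that check; the hostname and port are the example values from the prerequisites, so substitute your own deployment's address.

```python
import socket

def gateway_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to the gateway succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "ai-gateway.internal:10000" is the example endpoint from the prerequisites;
# replace it with your gateway's actual host and port.
print(gateway_reachable("ai-gateway.internal", 10000))
```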

Enrollment by SDK

The following examples show how to enroll agents using common SDKs.

OpenAI Python SDK

Python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="http://ai-gateway.internal:10000"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello, world!"}]
)

Caution

Don’t hardcode API keys in production. The key is shown inline here only for demonstration; always retrieve credentials from environment variables or a secure vault.
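One way to follow this caution is a small helper that reads both values from the environment. This is a sketch: `OPENAI_API_KEY` is the conventional variable name for the OpenAI SDK, while `AI_GATEWAY_URL` is a hypothetical name chosen here for the gateway endpoint.

```python
import os

def load_openai_config():
    """Read credentials from the environment instead of hardcoding them."""
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    # Fall back to the example gateway endpoint if no override is set.
    base_url = os.environ.get("AI_GATEWAY_URL", "http://ai-gateway.internal:10000")
    return api_key, base_url

# Demo: populate the environment, then load.
os.environ["OPENAI_API_KEY"] = "sk-demo"
api_key, base_url = load_openai_config()
# Pass these to the client: OpenAI(api_key=api_key, base_url=base_url)
```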

OpenAI Node.js SDK

JavaScript
import OpenAI from "openai";

const client = new OpenAI({
    apiKey: "your-api-key",
    baseURL: "http://ai-gateway.internal:10000",
});

const response = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello, world!" }],
});

LangChain (Python)

Python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4",
    openai_api_key="your-api-key",
    openai_api_base="http://ai-gateway.internal:10000"
)

response = llm.invoke("Hello, world!")

Tip

You do not need to provide the real API key in your code. If you’ve already configured the gateway with the appropriate environment variables (for example, OPENAI_API_KEY), the AI Gateway will automatically inject the real key for you. This allows your application to use blank or placeholder values if your library requires the field to be populated.

Environment variable

For SDKs that respect the OPENAI_BASE_URL environment variable, you can set it globally without modifying code:

Bash
export OPENAI_BASE_URL=http://ai-gateway.internal:10000

This approach works with the OpenAI Python and Node.js SDKs, as well as many frameworks built on top of them. It’s the recommended method for production deployments, as it avoids hardcoding the gateway URL in application code.
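The resolution order can be sketched as follows. This is a simplified illustration of the precedence these SDKs apply, not their actual implementation: an explicit `base_url` argument wins, then the `OPENAI_BASE_URL` environment variable, then the provider default.

```python
import os

def resolve_base_url(explicit=None):
    """Simplified sketch of base-URL precedence in the OpenAI SDKs."""
    return explicit or os.environ.get("OPENAI_BASE_URL") or "https://api.openai.com/v1"

os.environ["OPENAI_BASE_URL"] = "http://ai-gateway.internal:10000"
print(resolve_base_url())                         # environment variable applies
print(resolve_base_url("http://localhost:8080"))  # explicit argument wins
```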

Verifying enrollment

Once your agent is configured to route traffic through the AI Gateway, verify that it appears in the AQtive Guard (AQG) AI-SPM inventory:

  1. From the AQG main menu, select Inventory.
  2. Under the AI Assets group, select Agents.
  3. Confirm that your agent appears in the agents table with recent activity.

You can also check the runtime guardrails dashboard to verify that interactions are being recorded and policies are being applied.

Tip

If your agent doesn’t appear in the inventory after configuration, verify that the AI Gateway is reachable from the agent’s network and that the base URL is correctly set. Check the gateway logs for connection attempts.

To customize which guardrails are applied to your agent’s traffic, refer to AI guardrail policies.