
Introduction

Agent Builder Platform is a comprehensive solution for building and deploying AI agents that can search documents, process information, and interact with tools. It provides a unified API for creating and managing intelligent agents that integrate with your applications.

Some of the key capabilities of Agent Builder Platform include:

  • RAG (Retrieval-Augmented Generation): Search through documents intelligently with context-aware responses
  • Tool Agents: Perform actions using function calling and extensible tool integrations
  • Task Agents: Process structured data with template-based prompts and schema validation
  • Streaming Support: Receive real-time incremental responses for all agent types
  • Guardrails: Built-in content safety and prompt attack protection

The Agent Builder Platform supports three agent types: RAG Agents, Tool Agents, and Task Agents.

Quickstart Guide

This guide will help you get started with creating and using different types of agents in our platform.

Prerequisites

  • Valid authentication token
  • Access to the API endpoints
  • Appropriate permissions (READ, CREATE, EDIT, INVOKE, DELETE)

Creating Your First Agent

All agents within an environment must have a unique name. Duplicate names will result in an error when creating or updating an agent.

Available Tool Types

The following tool types are supported:

  • function: Call predefined functions (e.g., multiply)
  • structured_output: Force JSON output matching a schema
  • task_agent: Reference a Task Agent as a tool for multi-agent orchestration

Tool agents can perform specific tasks using predefined tools. Here are common examples:

Basic Calculator Agent

{
  "name": "Calculator",
  "description": "Math assistant",
  "agentType": "tool",
  "config": {
    "tools": [
      {
        "toolType": "function",
        "name": "multiply",
        "description": "Multiplies two numbers",
        "funcName": "multiply"
      }
    ],
    "llmModelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "systemPrompt": "You are a helpful assistant with access to various tools.",
    "inferenceConfig": {
      "maxTokens": 4000
    },
    "guardrails": ["HAIP-Prompt_attack-Medium"]
  }
}

Structured Output Agent

Experimental Feature

Structured Output on Tool Agent is currently an experimental feature. The API and functionality may change in future releases.

{
  "name": "PersonExtractor",
  "description": "Extracts structured information about people",
  "agentType": "tool",
  "config": {
    "tools": [
      {
        "toolType": "structured_output",
        "name": "structured_output",
        "description": "Extracts structured information about a person from text",
        "outputSchema": {
          "type": "object",
          "properties": {
            "name": {
              "type": "string",
              "description": "Full name of the person"
            },
            "age": {
              "type": "integer",
              "description": "Age of the person"
            },
            "occupation": {
              "type": "string",
              "description": "Person's job or profession"
            }
          },
          "required": ["name"]
        }
      }
    ],
    "llmModelId": "anthropic.claude-3-sonnet-20240229-v1:0",
    "systemPrompt": "You are a helpful assistant specialized in extracting structured information about people from text.",
    "inferenceConfig": {
      "maxTokens": 4000,
      "temperature": 0.1
    },
    "guardrails": ["HAIP-Profanity"]
  }
}

Example usage:

POST /v1/agents/{agent_id}/versions/{version_id-or-latest}/invoke
{
  "messages": [
    {
      "role": "user",
      "content": "John Doe is a 35-year-old software engineer"
    }
  ]
}

Expected response:

{
  "name": "John Doe",
  "age": 35,
  "occupation": "software engineer"
}
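
Because Structured Output is experimental, it can be worth validating the returned object client-side before trusting it. The following is a minimal, illustrative Python check against the outputSchema shown above; `validate` is a hypothetical helper covering only required keys and primitive types, not a full JSON Schema validator and not part of the platform API.

```python
def validate(data: dict, schema: dict) -> None:
    """Minimal JSON Schema subset check: required keys and primitive types."""
    type_map = {"string": str, "integer": int, "number": (int, float)}
    for key in schema.get("required", []):
        if key not in data:
            raise ValueError(f"missing required field: {key}")
    for key, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"))
        if key in data and expected and not isinstance(data[key], expected):
            raise TypeError(f"field {key!r} should be {spec['type']}")

# The outputSchema from the PersonExtractor agent above:
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "occupation": {"type": "string"},
    },
    "required": ["name"],
}

result = {"name": "John Doe", "age": 35, "occupation": "software engineer"}
validate(result, person_schema)  # raises if the agent's output drifts from the schema
```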

Streaming Responses

Our platform supports streaming responses for all agent types (Tool, RAG, and Task Agents), allowing you to receive data incrementally as it's generated. This is particularly useful for long-running operations or when you want to display results to users in real-time.

Streaming Endpoints

  • RAG and Tool Agents: POST /v1/agents/{agent_id}/versions/{version_id-or-latest}/invoke-stream
  • Task Agents: POST /v1/agents/{agent_id}/versions/{version_id-or-latest}/invoke-task-stream

Understanding the Stream

When you invoke the streaming endpoints, the server will send back a sequence of data chunks with the appropriate format based on the agent type.

All agent streams (Tool, Task, and RAG) use the text/plain content type. In every case, each line in the response body is a self-contained JSON object followed by a newline character. The stream is terminated by a special chunk with type response.completed.

Processing Streamed Data

To process the stream, your client should:

  1. Open a connection to the streaming endpoint.
  2. Read the response line by line.
  3. Parse each line as a JSON object.
  4. If a chunk has type response.completed, close the connection.
  5. Each JSON object (chunk) is a Created, Completed, TextDelta, or ToolCall chunk, determined by the value of its type field.
  6. Extracting CreatedChunk: The CreatedChunk object has type response.created and signals the start of the streaming response. It has an id, and all subsequent chunks will have the same id.
    {"type": "response.created", "response": {"id": "resp_id", "model": "LLM-model-id", "object": "response", "createdAt": 1760631101}}
  7. Extracting Content: The TextDelta object has type response.output_text.delta and contains a delta field with a segment of the text response. You should append these segments together to form the complete message.
    {"type": "response.output_text.delta", "role": "assistant", "delta": "The result of 3x5 is ", "id": "resp_id"}
  8. Extracting Tool Calls: The ToolCallChunk object has type response.function_call_arguments.done.
    {"type": "response.function_call_arguments.done", "name": "multiply", "itemId": "tool_123", "arguments": "{\"a\": 3, \"b\": 5}", "id": "resp_id"}
  9. Extracting CompletedChunk: The CompletedChunk object has type response.completed and signals the end of the streaming response. It may have customOutputs with sourceNodes from the request.
    {"type": "response.completed", "response": {"id": "resp_id", "model": "LLM-model-id", "object": "response", "createdAt": 1760631101, "customOutputs": {"sourceNodes": [{"id": "source-node-id", "text": "source node text", "score": 0.05, "objectId": "object-id", "chunkId": "chunk-id"}], "ragMode": "normal"}}}
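
The steps above can be sketched in Python. This is an illustrative client-side parser for the newline-delimited JSON format described above, not a platform SDK; it aggregates text deltas, collects tool calls, and stops at the terminal chunk.

```python
import json

def process_stream_lines(lines):
    """Parse newline-delimited JSON chunks from an invoke-stream response.

    Returns (aggregated_text, tool_calls, completed_response).
    """
    text_parts = []
    tool_calls = []
    completed = None
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines between chunks
        chunk = json.loads(line)
        ctype = chunk.get("type")
        if ctype == "response.output_text.delta":
            # Append each delta segment to build the complete message.
            text_parts.append(chunk.get("delta", ""))
        elif ctype == "response.function_call_arguments.done":
            tool_calls.append({
                "item_id": chunk["itemId"],
                "name": chunk["name"],
                "arguments": chunk["arguments"],
            })
        elif ctype == "response.completed":
            completed = chunk["response"]
            break  # terminal chunk: stop reading
    return "".join(text_parts), tool_calls, completed

# Sample chunks modeled on the Tool Agent stream shown below:
sample = [
    '{"type": "response.created", "response": {"id": "resp-id", "object": "response", "createdAt": 1760632921}}',
    '{"type": "response.output_text.delta", "role": "assistant", "delta": "Okay, I can help", "id": "resp-id"}',
    '{"type": "response.output_text.delta", "role": "assistant", "delta": " with that. ", "id": "resp-id"}',
    '{"type": "response.function_call_arguments.done", "id": "resp-id", "arguments": "{\\"a\\": 3, \\"b\\": 5}", "itemId": "tooluse_abc", "name": "multiply"}',
    '{"type": "response.output_text.delta", "role": "assistant", "delta": "The result of 3 x 5 ", "id": "resp-id"}',
    '{"type": "response.output_text.delta", "role": "assistant", "delta": "is 15.", "id": "resp-id"}',
    '{"type": "response.completed", "response": {"id": "resp-id", "object": "response", "createdAt": 1760632921}}',
]
text, tool_calls, completed = process_stream_lines(sample)
```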

Example: Streaming with a Tool Agent (using cURL)

Let's say you have a Tool Agent (like the Calculator example) with ID your-tool-agent-id that uses the most recent version of its config.

Request:

curl -N -X POST "http://your-api-base-url/v1/agents/your-tool-agent-id/versions/latest/invoke-stream" \
  -H "Authorization: Bearer your-jwt-token" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What is 3x5?"
      }
    ]
  }'

Expected Response Stream (raw text/plain lines):

{"type": "response.created", "response": {"id": "resp-id", "model": "LLM model ID", "object": "response", "createdAt": 1760632921}}
{"type": "response.output_text.delta", "role": "assistant", "delta": "Okay, I can help", "id": "resp-id"}
{"type": "response.output_text.delta", "role": "assistant", "delta": " with that. ", "id": "resp-id"}
{"type": "response.function_call_arguments.done", "id": "resp-id", "arguments": "{\"a\": 3, \"b\": 5}", "itemId": "tooluse_abc", "name": "multiply"}
{"type": "response.output_text.delta", "role": "assistant", "delta": "The result of 3 x 5", "id": "resp-id"}
{"type": "response.output_text.delta", "role": "assistant", "delta": "is 15.", "id": "resp-id"}
{"type": "response.completed", "response": {"id": "resp-id", "model": "LLM model ID", "object": "response", "createdAt": 1760632921}}

(Note: The exact chunking and content can vary. Some chunks might have empty content.)

Processing the above stream would yield:

  • Aggregated Content: "Okay, I can help with that. The result of 3 x 5 is 15."
  • Tool Call:
    {
      "item_id": "tooluse_abc",
      "name": "multiply",
      "arguments": "{\"a\": 3, \"b\": 5}"
    }

Example: Streaming with a RAG Agent (using cURL)

Now assume a RAG Agent with ID your-rag-agent-id, again using the most recent version of its config.

Request:

curl -N -X POST "http://your-api-base-url/v1/agents/your-rag-agent-id/versions/latest/invoke-stream" \
  -H "Authorization: Bearer your-jwt-token" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What is our vacation policy?"
      }
    ],
    "hxqlQuery": "SELECT * FROM SysContent"
  }'

Expected Response Stream (raw text/plain lines):

{"type":"response.created","response":{"id":"44de5f46-1ad8-4d26-ab5a-46a928cdaa3f","model":"amazon.nova-micro-v1:0","object":"response","createdAt":1760631101}}
{"type":"response.output_text.delta","role":"assistant","delta":"Our vacation policy states that","id":"44de5f46-1ad8-4d26-ab5a-46a928cdaa3f"}
{"type":"response.output_text.delta","role":"assistant","delta":" employees are entitled to X days","id":"44de5f46-1ad8-4d26-ab5a-46a928cdaa3f"}
{"type":"response.completed","response":{"id":"44de5f46-1ad8-4d26-ab5a-46a928cdaa3f","model":"amazon.nova-micro-v1:0","object":"response","createdAt":1760631104,"customOutputs":{"sourceNodes":[{"id":"source-node-id","text":"source node text","score":0.05,"objectId":"object-id","chunkId":"chunk-id"}],"ragMode":"normal"}}}

Processing the above stream would yield:

  • Aggregated Content: "Our vacation policy states that employees are entitled to X days"

Tip

When using streaming, ensure your client correctly handles line endings and JSON parsing for each chunk. Remember that tools like Swagger UI may not display streaming responses correctly; cURL, Postman (with appropriate settings), or custom client code are better choices.

Response Format

All non-streaming agent invocations return an AgentResponse object with the following structure:

{
  "object": "response",
  "createdAt": 1741705500,
  "model": "anthropic.claude-3-haiku-20240307-v1:0",
  "output": [
    {
      "type": "message",
      "status": "completed",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "The response text from the agent."
        }
      ]
    }
  ],
  "customOutputs": {
    "sourceNodes": [...],
    "ragMode": "normal"
  }
}
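
As a sketch, pulling the assistant text out of this structure might look like the following. This is an illustrative helper, not part of any official SDK, and it assumes the AgentResponse shape documented below.

```python
def extract_text(agent_response: dict) -> str:
    """Concatenate all output_text blocks from a non-streaming AgentResponse."""
    parts = []
    for item in agent_response.get("output", []):
        if item.get("type") == "message":
            for block in item.get("content", []):
                if block.get("type") == "output_text":
                    parts.append(block["text"])
    return "".join(parts)

# A response shaped like the example above (customOutputs omitted for a tool agent):
response = {
    "object": "response",
    "createdAt": 1741705500,
    "model": "anthropic.claude-3-haiku-20240307-v1:0",
    "output": [
        {
            "type": "message",
            "status": "completed",
            "role": "assistant",
            "content": [{"type": "output_text", "text": "The response text from the agent."}],
        }
    ],
    "customOutputs": None,
}
text = extract_text(response)  # → "The response text from the agent."
```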

Response Fields

  • object (string): Always "response"
  • createdAt (integer): Unix timestamp of when the response was created
  • model (string): The LLM model ID used for this response
  • output (array): List of output items (messages and/or tool calls)
  • customOutputs (object | null): Additional output data (present for RAG agents)

Output Types

Each item in the output array has a type field:

TextOutput ("message")

A text response from the agent:

  • type (string): "message"
  • status (string): "completed"
  • role (string): "assistant"
  • content (array): List of content blocks
  • content[].type (string): "output_text" for text content
  • content[].text (string): The text content of the response

ToolCallOutput ("function_call")

A function call made by the agent (visible in streaming responses):

  • type (string): "function_call"
  • name (string): Name of the tool/function called
  • arguments (string): JSON-encoded arguments passed to the tool
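
Because the arguments field is a JSON-encoded string rather than an object, clients must decode it before use. A minimal example:

```python
import json

# A function_call output item shaped as described above:
tool_call = {
    "type": "function_call",
    "name": "multiply",
    "arguments": "{\"a\": 3, \"b\": 5}",
}

# Decode the JSON-encoded string into a dict before using the values.
args = json.loads(tool_call["arguments"])  # → {"a": 3, "b": 5}
```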

Custom Outputs

The customOutputs field is present for RAG agents and contains retrieval-specific data:

  • sourceNodes (array): Documents retrieved from Content Lake used as context
  • sourceNodes[].docId (string): Document identifier
  • sourceNodes[].chunkId (string): Chunk identifier within the document
  • sourceNodes[].score (number): Relevance score (0–1)
  • sourceNodes[].text (string): Text content of the retrieved chunk
  • ragMode (string): RAG mode used: "normal"
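
A common use of customOutputs is showing citations alongside the answer. As an illustrative client-side sketch (not a platform API), the highest-scoring retrieved chunks can be selected like this:

```python
def top_sources(custom_outputs, limit=3):
    """Return retrieved chunks sorted by relevance score, highest first."""
    nodes = (custom_outputs or {}).get("sourceNodes", [])
    return sorted(nodes, key=lambda n: n.get("score", 0), reverse=True)[:limit]

# Hypothetical customOutputs payload shaped like the fields above:
custom_outputs = {
    "sourceNodes": [
        {"docId": "doc-1", "chunkId": "c-1", "score": 0.42, "text": "Vacation policy overview ..."},
        {"docId": "doc-2", "chunkId": "c-7", "score": 0.91, "text": "Employees are entitled to ..."},
    ],
    "ragMode": "normal",
}
best = top_sources(custom_outputs, limit=1)
```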

Guardrails

Guardrails are policy-based content filters that help ensure safe and appropriate AI interactions. They can detect and filter content such as profanity, insults, hate speech, and prompt injection attacks.

Discovering Available Guardrails

The list of available guardrails is dynamic and managed by the platform. Use the API to discover what's available in your environment:

GET /v1/guardrails

Example Response:

[
  {
    "name": "HAIP-Profanity",
    "description": "Filters profane language from inputs and outputs"
  },
  {
    "name": "HAIP-Insults-High",
    "description": "Filters insulting content with high sensitivity"
  },
  {
    "name": "HAIP-Insults-Low",
    "description": "Filters insulting content with low sensitivity"
  },
  {
    "name": "HAIP-Hate-High",
    "description": "Filters hate speech with high sensitivity"
  },
  {
    "name": "HAIP-Prompt_attack-Medium",
    "description": "Detects and blocks prompt injection attacks"
  }
]

Info

Guardrail names and descriptions are managed at the platform level and may change over time. Always use GET /v1/guardrails to discover the current list rather than hardcoding guardrail names.

Applying Guardrails

Guardrails can be applied in two ways:

At Agent Creation

Include guardrails in the agent's config to apply them to all invocations:

{
  "name": "Safe Assistant",
  "description": "An assistant with content safety guardrails",
  "agentType": "tool",
  "config": {
    "llmModelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "systemPrompt": "You are a helpful assistant.",
    "tools": [...],
    "guardrails": ["HAIP-Profanity", "HAIP-Insults-High", "HAIP-Prompt_attack-Medium"]
  }
}

Per Invocation

Pass additional guardrails in the request body to apply them to a specific invocation only. These are applied in addition to any guardrails defined in the agent config:

{
  "messages": [
    {
      "role": "user",
      "content": "Tell me about our company policies."
    }
  ],
  "guardrails": ["HAIP-Hate-High"]
}