Agent700 API Chat Guide
Overview
The Agent700 Platform API provides a comprehensive chat completion system that lets you interact with AI agents through natural-language conversations. The chat API supports both streaming and non-streaming responses, multiple AI models, and advanced features such as MCP tools, PII scrubbing, and smart document evaluation.
Base URL: https://api.agent700.ai (the endpoint paths below already include the /api prefix)
Table of Contents
- Chat Concepts
- Data Models
- API Endpoints
- Request Parameters
- Response Formats
- Streaming Responses
- Code Examples
- Use Cases and Patterns
- Best Practices
- Troubleshooting
Chat Concepts
What is the Chat API?
The Chat API enables you to send messages to AI agents and receive intelligent responses. Each chat request can:
- Use an existing agent configuration or create an ad-hoc virtual agent
- Maintain conversation history through message arrays
- Support streaming for real-time responses
- Integrate with MCP tools for extended capabilities
- Apply PII scrubbing for privacy protection
- Use smart document evaluation for context-aware responses
Key Concepts
Agents
Agents are pre-configured AI assistants with specific:
- Master Prompts: System instructions that define the agent's behavior
- Model Settings: Default model, temperature, max tokens, etc.
- Features: PII scrubbing, MCP tools, document evaluation, etc.
- Revisions: Versioned configurations that can be referenced
You can use an existing agent by providing its agentId, or create a virtual agent by providing name, masterPrompt, and other settings directly in the request.
Messages
Messages form the conversation history:
- System Messages: Instructions that guide the agent's behavior (typically injected from agent configuration)
- User Messages: Input from the user
- Assistant Messages: Previous responses from the agent
- Tool Messages: Results from tool calls (for MCP integration)
Messages are processed in order, maintaining conversation context.
Streaming vs Non-Streaming
- Non-Streaming: Returns complete response after processing finishes
- Streaming: Returns response chunks in real-time as they're generated (SSE)
Data Models
ChatRequest Schema
{
"messages": [
{
"id": "message-uuid",
"role": "user",
"content": "Hello, how are you?"
}
],
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"agentRevisionId": 1,
"name": "My Virtual Agent",
"masterPrompt": "You are a helpful assistant.",
"introductoryText": "Welcome! How can I help?",
"model": "gpt-4",
"temperature": 0.7,
"topP": 1.0,
"maxTokens": 1000,
"reasoningEffort": "medium",
"thinkingEnabled": false,
"thinkingBudgetTokens": 1000,
"scrubPii": true,
"piiThreshold": 0.8,
"smartDocEvaluation": false,
"smartDocChunkSize": 1000,
"smartDocChunkOverlap": 200,
"smartDocEmbeddingModel": "text-embedding-ada-002",
"smartDocTopK": 3,
"fullDocAnalysis": false,
"fullDocChunkSize": 2000,
"fullDocChunkOverlap": 400,
"fullDocMaxLength": 10000,
"enableMcp": true,
"mcpServerNames": ["filesystem", "github"],
"mcpAutoApprove": false,
"streamingEnabled": false
}
Required Fields:
- messages (array): At least one message in the conversation
Optional Fields:
- agentId (string, UUID): Use an existing agent configuration
- agentRevisionId (integer): Use a specific agent revision
- name (string): Name for virtual agent (when agentId not provided)
- masterPrompt (string): System prompt for virtual agent
- introductoryText (string): Welcome message for virtual agent
- streamingEnabled (boolean): Enable SSE streaming for HTTP endpoint
- model (string): Override agent's default model
- temperature (number): Sampling temperature (0-2)
- topP (number): Nucleus sampling parameter (0-1)
- maxTokens (integer): Maximum tokens in response
- reasoningEffort (string): Reasoning mode for supported models
- thinkingEnabled (boolean): Enable extended thinking mode
- thinkingBudgetTokens (integer): Token budget for thinking
- scrubPii (boolean): Enable PII scrubbing
- piiThreshold (number): PII detection confidence threshold
- smartDocEvaluation (boolean): Enable smart document evaluation
- smartDocChunkSize (integer): Chunk size for smart evaluation
- smartDocChunkOverlap (integer): Overlap between chunks
- smartDocEmbeddingModel (string): Embedding model name
- smartDocTopK (integer): Number of top chunks to retrieve
- fullDocAnalysis (boolean): Enable full document analysis
- fullDocChunkSize (integer): Chunk size for full analysis
- fullDocChunkOverlap (integer): Overlap for full analysis
- fullDocMaxLength (integer): Maximum text length for analysis
- enableMcp (boolean): Enable MCP tools
- mcpServerNames (array): List of MCP server names
- mcpAutoApprove (boolean): Auto-approve MCP tool calls
ChatMessage Schema
{
"id": "28e2c8b8-c5bd-41df-9641-ccc9d0757e59",
"role": "user",
"content": "What is the weather today?"
}
Fields:
- id (string, UUID, optional): Unique message identifier
- role (string, required): Message role - "system", "user", "assistant", or "tool"
- content (string, required): Message content
Role Types:
- system: Instructions for the agent (typically from agent configuration)
- user: User input messages
- assistant: Previous agent responses
- tool: Tool call results (for MCP integration)
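For example, a conversation that includes a tool result might look like the following sketch (the content of a tool message depends on the MCP server that produced it; it is shown here as plain text):
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'List the files in my project' },
  { role: 'assistant', content: 'Let me check the filesystem.' },
  { role: 'tool', content: 'README.md\npackage.json\nsrc/' },
  { role: 'user', content: 'What does package.json do?' }
];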
ChatResponse Schema
{
"error": null,
"response": "Hello! How can I assist you today?",
"finish_reason": "stop",
"scrubbed_message": "",
"prompt_tokens": 25,
"completion_tokens": 10
}
Fields:
- error (string, nullable): Error message if request failed
- response (string, required): Complete response text
- finish_reason (string, nullable): Reason for completion - "stop", "length", "content_filter", or "error"
- scrubbed_message (string, nullable): Information about PII that was scrubbed
- prompt_tokens (integer, optional): Number of tokens in the prompt
- completion_tokens (integer, optional): Number of tokens in the completion
Finish Reasons:
- stop: Model completed normally
- length: Response was truncated due to token limit
- content_filter: Content was filtered by safety systems
- error: An error occurred during processing
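As a sketch (the helper name is illustrative, not part of the API), client code might branch on finish_reason after a non-streaming call:
function handleFinishReason(result) {
  switch (result.finish_reason) {
    case 'stop':
      return result.response; // completed normally
    case 'length':
      console.warn('Response truncated; consider raising maxTokens');
      return result.response;
    case 'content_filter':
      console.warn('Response was filtered by safety systems');
      return result.response;
    case 'error':
      throw new Error(result.error ?? 'Chat request failed');
    default:
      return result.response;
  }
}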
API Endpoints
Chat Completion (Non-Streaming)
Send a chat request and receive a complete response.
Endpoint: POST /api/chat
Authentication: Required (JWT Bearer token or App Password)
Payment Required: Yes (agent owner must have an active paid subscription)
Request Body: See ChatRequest Schema
Response: 200 OK
{
"error": null,
"response": "Hello! How can I assist you today?",
"finish_reason": "stop",
"scrubbed_message": ""
}
Error Responses:
- 400 Bad Request: Invalid request format or missing required fields
- 401 Unauthorized: Missing or invalid authentication token
- 402 Payment Required: Agent owner doesn't have an active paid subscription
- 500 Internal Server Error: Error processing chat request
Example Request:
curl -X POST https://api.agent700.ai/api/chat \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"messages": [
{
"role": "user",
"content": "What is the weather today?"
}
]
}'
Chat Completion (Streaming)
Send a chat request and receive streaming responses via Server-Sent Events (SSE).
Endpoint: POST /api/chat
Authentication: Required (JWT Bearer token or App Password)
Payment Required: Yes
Request Body: Same as non-streaming, but with streamingEnabled: true
Response: 200 OK (SSE stream)
Headers:
- Content-Type: text/event-stream
- Cache-Control: no-cache
- Connection: keep-alive
- X-Accel-Buffering: no
SSE Event Format:
event: content
data: {"content": "Hello", "stream_id": "..."}
event: content
data: {"content": "! How", "stream_id": "..."}
event: done
data: {"finish_reason": "stop", "prompt_tokens": 25, "completion_tokens": 10}
event: error
data: {"error": "Error message"}
SSE Events:
- content: Partial content chunk from the response
- done: Stream completed successfully
- error: An error occurred during processing
Example Request:
curl -X POST https://api.agent700.ai/api/chat \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"messages": [
{
"role": "user",
"content": "Tell me a story"
}
],
"streamingEnabled": true
}'
Fetch URL Metadata
Fetch and sanitize metadata from a URL for link previews.
Endpoint: GET /api/chat/fetch-url-metadata
Authentication: Required (JWT Bearer token or App Password)
Query Parameters:
- url (string, required): HTTP or HTTPS URL to fetch
Response: 200 OK
{
"title": "Example Page Title",
"description": "This is an example page description",
"image": "https://example.com/image.jpg",
"url": "https://example.com/page"
}
Error Responses:
- 400 Bad Request: Invalid URL, SSL failure, or request error
- 413 Payload Too Large: URL content too large
- 504 Gateway Timeout: Timeout fetching URL
Example Request:
curl -X GET "https://api.agent700.ai/api/chat/fetch-url-metadata?url=https://example.com" \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN"Request Parameters
Request Parameters
Basic Parameters
messages (required)
Array of chat messages forming the conversation history.
{
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
},
{
"role": "assistant",
"content": "Hi there! How can I help you?"
},
{
"role": "user",
"content": "What is 2+2?"
}
]
}
agentId (optional)
UUID of an existing agent to use. When provided, the agent's configuration (master prompt, model settings, etc.) is loaded.
{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"messages": [...]
}
agentRevisionId (optional)
Specific revision ID of an agent to use. If not provided, the agent's current revision is used.
{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"agentRevisionId": 2,
"messages": [...]
}
Virtual Agent Parameters
When agentId is not provided, you can create a virtual agent with these parameters:
name (optional)
Name for the virtual agent.
{
"name": "My Virtual Assistant",
"masterPrompt": "You are a helpful assistant.",
"messages": [...]
}
masterPrompt (optional)
System prompt that defines the virtual agent's behavior.
{
"masterPrompt": "You are a medical assistant. Follow these instructions:\n\nNo diagnosis\n\nNo prescriptive actions\n\nNo medical advice",
"messages": [...]
}
introductoryText (optional)
Welcome message shown to users for the virtual agent.
{
"introductoryText": "Welcome! I'm here to help you with medical questions.",
"messages": [...]
}
Model Parameters
model (optional)
Override the agent's default model.
{
"model": "gpt-4",
"messages": [...]
}
temperature (optional)
Sampling temperature (0-2). Higher values make output more random.
{
"temperature": 0.7,
"messages": [...]
}
topP (optional)
Nucleus sampling parameter (0-1). Controls diversity via nucleus sampling.
{
"topP": 0.9,
"messages": [...]
}
maxTokens (optional)
Maximum number of tokens in the response.
{
"maxTokens": 1000,
"messages": [...]
}
Advanced Model Parameters
reasoningEffort (optional)
Reasoning effort hint for models that support explicit reasoning modes (e.g., "low", "medium", "high").
{
"reasoningEffort": "medium",
"messages": [...]
}
thinkingEnabled (optional)
Enable extended "thinking" mode for models that support it.
{
"thinkingEnabled": true,
"thinkingBudgetTokens": 1000,
"messages": [...]
}
thinkingBudgetTokens (optional)
Token budget reserved for thinking when thinkingEnabled is true.
{
"thinkingEnabled": true,
"thinkingBudgetTokens": 2000,
"messages": [...]
}
Privacy and Security Parameters
scrubPii (optional)
Enable PII (Personally Identifiable Information) scrubbing from messages.
{
"scrubPii": true,
"piiThreshold": 0.8,
"messages": [...]
}
piiThreshold (optional)
PII detection confidence threshold (0-1). Lower values scrub more aggressively; higher values require more confident detections before scrubbing.
{
"scrubPii": true,
"piiThreshold": 0.9,
"messages": [...]
}
Document Evaluation Parameters
smartDocEvaluation (optional)
Enable smart document evaluation using embeddings for alignment data placeholders.
{
"smartDocEvaluation": true,
"smartDocChunkSize": 1000,
"smartDocChunkOverlap": 200,
"smartDocEmbeddingModel": "text-embedding-ada-002",
"smartDocTopK": 3,
"messages": [...]
}
fullDocAnalysis (optional)
Enable full document analysis over alignment data placeholders (does not support streaming).
{
"fullDocAnalysis": true,
"fullDocChunkSize": 2000,
"fullDocChunkOverlap": 400,
"fullDocMaxLength": 10000,
"messages": [...]
}
MCP (Model Context Protocol) Parameters
enableMcp (optional)
Enable MCP tools for this request if supported by the model.
{
"enableMcp": true,
"mcpServerNames": ["filesystem", "github"],
"mcpAutoApprove": false,
"messages": [...]
}
mcpServerNames (optional)
Array of MCP server names that may be used by this agent/request.
{
"enableMcp": true,
"mcpServerNames": ["filesystem", "github", "database"],
"messages": [...]
}
mcpAutoApprove (optional)
Automatically approve MCP tool calls without requiring explicit user confirmation.
{
"enableMcp": true,
"mcpAutoApprove": true,
"messages": [...]
}
Streaming Parameters
streamingEnabled (optional)
Enable SSE streaming for HTTP endpoint. When true, the endpoint returns Server-Sent Events instead of a JSON response.
{
"streamingEnabled": true,
"messages": [...]
}
Response Formats
Non-Streaming Response
Complete response returned after processing finishes.
{
"error": null,
"response": "Hello! How can I assist you today?",
"finish_reason": "stop",
"scrubbed_message": "",
"prompt_tokens": 25,
"completion_tokens": 10
}
Streaming Response (SSE)
Real-time response chunks via Server-Sent Events.
Content Event:
event: content
data: {"content": "Hello", "stream_id": "abc123"}
Done Event:
event: done
data: {"finish_reason": "stop", "prompt_tokens": 25, "completion_tokens": 10}
Error Event:
event: error
data: {"error": "Error processing request"}
Response with PII Scrubbing
When PII is detected and scrubbed:
{
"error": null,
"response": "I can help you with that.",
"finish_reason": "stop",
"scrubbed_message": "User mentioned email: [REDACTED]",
"prompt_tokens": 30,
"completion_tokens": 8
}
Error Response
When an error occurs:
{
"error": "Error processing request: Connection timeout",
"finish_reason": "error",
"response": "Sorry, I encountered an error processing your request. Please try again.",
"scrubbed_message": ""
}
Streaming Responses
Server-Sent Events (SSE)
The chat API supports real-time streaming via Server-Sent Events (SSE). This allows you to receive response chunks as they're generated, providing a better user experience for long responses.
Enabling Streaming
Set streamingEnabled: true in your request:
{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"messages": [
{
"role": "user",
"content": "Tell me a long story"
}
],
"streamingEnabled": true
}
SSE Event Types
content
Partial content chunk from the streaming response.
event: content
data: {"content": "Once", "stream_id": "abc123"}
event: content
data: {"content": " upon", "stream_id": "abc123"}
event: content
data: {"content": " a", "stream_id": "abc123"}
event: content
data: {"content": " time", "stream_id": "abc123"}
done
Stream completed successfully. Includes final metadata.
event: done
data: {"finish_reason": "stop", "prompt_tokens": 25, "completion_tokens": 100}
error
An error occurred during processing.
event: error
data: {"error": "Error processing request"}
Connection Management
- Connection Limits: Each user can have a limited number of concurrent SSE connections (typically 5); a minimal client-side guard is sketched after this list
- Automatic Disconnect: The server automatically disconnects after processing completes
- One Message Per Connection: Each chat request requires a new connection
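A minimal client-side guard for the connection limit might look like the following sketch (the limit of 5 is the typical value noted above; adjust it to your account's actual limit):
const MAX_CONCURRENT_STREAMS = 5; // assumed typical per-user limit
let activeStreams = 0;
async function withStreamSlot(startStream) {
  if (activeStreams >= MAX_CONCURRENT_STREAMS) {
    throw new Error('Too many concurrent streams; retry after one finishes');
  }
  activeStreams++;
  try {
    return await startStream(); // e.g. a call to the streamChat helper below
  } finally {
    activeStreams--; // each chat request uses a fresh connection anyway
  }
}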
Handling SSE in JavaScript
async function streamChat(accessToken, agentId, messages) {
const response = await fetch('https://api.agent700.ai/api/chat', {
method: 'POST',
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
agentId: agentId,
messages: messages,
streamingEnabled: true
}),
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
let eventType = null; // persists across lines so data handlers can see the preceding event line
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split('\n');
buffer = lines.pop() || '';
for (const line of lines) {
if (line.startsWith('event: ')) {
eventType = line.substring(7).trim();
} else if (line.startsWith('data: ')) {
const data = JSON.parse(line.substring(6));
if (eventType === 'content') {
// Handle content chunk
console.log('Content:', data.content);
} else if (eventType === 'done') {
// Handle completion
console.log('Finished:', data.finish_reason);
} else if (eventType === 'error') {
// Handle error
console.error('Error:', data.error);
}
}
}
}
}
Code Examples
JavaScript/TypeScript
Basic Chat Request
async function sendChatMessage(accessToken, agentId, userMessage, options = {}) {
const response = await fetch('https://api.agent700.ai/api/chat', {
method: 'POST',
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
agentId: agentId,
messages: [
{
role: 'user',
content: userMessage
}
],
...options // later examples pass extra request fields (e.g. scrubPii, agentRevisionId) through this object
}),
});
if (!response.ok) {
throw new Error(`Chat request failed: ${response.statusText}`);
}
return response.json();
}
// Usage
const result = await sendChatMessage(
accessToken,
'e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6',
'Hello, how are you?'
);
console.log('Response:', result.response);
Chat with Conversation History
async function chatWithHistory(accessToken, agentId, conversationHistory) {
const response = await fetch('https://api.agent700.ai/api/chat', {
method: 'POST',
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
agentId: agentId,
messages: conversationHistory
}),
});
if (!response.ok) {
throw new Error(`Chat request failed: ${response.statusText}`);
}
const result = await response.json();
// Add assistant response to conversation history
conversationHistory.push({
role: 'assistant',
content: result.response
});
return result;
}
// Usage
let conversation = [
{ role: 'user', content: 'What is 2+2?' }
];
const result1 = await chatWithHistory(accessToken, agentId, conversation);
console.log('First response:', result1.response);
conversation.push({ role: 'user', content: 'What about 3+3?' });
const result2 = await chatWithHistory(accessToken, agentId, conversation);
console.log('Second response:', result2.response);
Streaming Chat Request
async function streamChat(accessToken, agentId, messages, onChunk, onComplete, onError) {
const response = await fetch('https://api.agent700.ai/api/chat', {
method: 'POST',
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
agentId: agentId,
messages: messages,
streamingEnabled: true
}),
});
if (!response.ok) {
throw new Error(`Chat request failed: ${response.statusText}`);
}
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
let currentEvent = null;
let fullResponse = '';
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split('\n');
buffer = lines.pop() || '';
for (const line of lines) {
if (line.startsWith('event: ')) {
currentEvent = line.substring(7).trim();
} else if (line.startsWith('data: ')) {
const data = JSON.parse(line.substring(6));
if (currentEvent === 'content') {
fullResponse += data.content;
if (onChunk) onChunk(data.content, fullResponse);
} else if (currentEvent === 'done') {
if (onComplete) onComplete(data, fullResponse);
} else if (currentEvent === 'error') {
if (onError) onError(data.error);
}
}
}
}
}
// Usage
await streamChat(
accessToken,
agentId,
[{ role: 'user', content: 'Tell me a story' }],
(chunk, full) => {
console.log('Chunk:', chunk);
console.log('Full so far:', full);
},
(metadata, full) => {
console.log('Complete:', full);
console.log('Tokens:', metadata);
},
(error) => {
console.error('Error:', error);
}
);
Virtual Agent Chat
async function chatWithVirtualAgent(accessToken, userMessage) {
const response = await fetch('https://api.agent700.ai/api/chat', {
method: 'POST',
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
name: 'Medical Assistant',
masterPrompt: 'You are a medical assistant. Follow these instructions:\n\nNo diagnosis\n\nNo prescriptive actions\n\nNo medical advice',
introductoryText: 'Welcome! I can help answer general health questions.',
messages: [
{
role: 'user',
content: userMessage
}
]
}),
});
if (!response.ok) {
throw new Error(`Chat request failed: ${response.statusText}`);
}
return response.json();
}
// Usage
const result = await chatWithVirtualAgent(
accessToken,
'What are the symptoms of a cold?'
);
console.log('Response:', result.response);
Chat with MCP Tools
async function chatWithMcpTools(accessToken, agentId, userMessage) {
const response = await fetch('https://api.agent700.ai/api/chat', {
method: 'POST',
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
agentId: agentId,
enableMcp: true,
mcpServerNames: ['filesystem', 'github'],
mcpAutoApprove: false,
messages: [
{
role: 'user',
content: userMessage
}
]
}),
});
if (!response.ok) {
throw new Error(`Chat request failed: ${response.statusText}`);
}
return response.json();
}
// Usage
const result = await chatWithMcpTools(
accessToken,
agentId,
'List the files in the current directory'
);
console.log('Response:', result.response);
Python
Basic Chat Request
import requests
def send_chat_message(access_token, agent_id, user_message):
url = 'https://api.agent700.ai/api/chat'
headers = {
'Authorization': f'Bearer {access_token}',
'Content-Type': 'application/json',
}
data = {
'agentId': agent_id,
'messages': [
{
'role': 'user',
'content': user_message
}
]
}
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()
return response.json()
# Usage
result = send_chat_message(
access_token,
'e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6',
'Hello, how are you?'
)
print('Response:', result['response'])
Chat with Conversation History
def chat_with_history(access_token, agent_id, conversation_history):
url = 'https://api.agent700.ai/api/chat'
headers = {
'Authorization': f'Bearer {access_token}',
'Content-Type': 'application/json',
}
data = {
'agentId': agent_id,
'messages': conversation_history
}
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()
result = response.json()
# Add assistant response to conversation history
conversation_history.append({
'role': 'assistant',
'content': result['response']
})
return result
# Usage
conversation = [
{'role': 'user', 'content': 'What is 2+2?'}
]
result1 = chat_with_history(access_token, agent_id, conversation)
print('First response:', result1['response'])
conversation.append({'role': 'user', 'content': 'What about 3+3?'})
result2 = chat_with_history(access_token, agent_id, conversation)
print('Second response:', result2['response'])
Streaming Chat Request
import requests
import json
def stream_chat(access_token, agent_id, messages, on_chunk=None, on_complete=None, on_error=None):
url = 'https://api.agent700.ai/api/chat'
headers = {
'Authorization': f'Bearer {access_token}',
'Content-Type': 'application/json',
}
data = {
'agentId': agent_id,
'messages': messages,
'streamingEnabled': True
}
response = requests.post(url, headers=headers, json=data, stream=True)
response.raise_for_status()
current_event = None
full_response = ''
for line in response.iter_lines():
if line:
line = line.decode('utf-8')
if line.startswith('event: '):
current_event = line[7:].strip()
elif line.startswith('data: '):
data = json.loads(line[6:])
if current_event == 'content':
full_response += data.get('content', '')
if on_chunk:
on_chunk(data.get('content', ''), full_response)
elif current_event == 'done':
if on_complete:
on_complete(data, full_response)
elif current_event == 'error':
if on_error:
on_error(data.get('error'))
return full_response
# Usage
def on_chunk(chunk, full):
print(f'Chunk: {chunk}')
def on_complete(metadata, full):
print(f'Complete: {full}')
print(f'Tokens: {metadata}')
def on_error(error):
print(f'Error: {error}')
result = stream_chat(
access_token,
agent_id,
[{'role': 'user', 'content': 'Tell me a story'}],
on_chunk=on_chunk,
on_complete=on_complete,
on_error=on_error
)
Virtual Agent Chat
def chat_with_virtual_agent(access_token, user_message):
url = 'https://api.agent700.ai/api/chat'
headers = {
'Authorization': f'Bearer {access_token}',
'Content-Type': 'application/json',
}
data = {
'name': 'Medical Assistant',
'masterPrompt': 'You are a medical assistant. Follow these instructions:\n\nNo diagnosis\n\nNo prescriptive actions\n\nNo medical advice',
'introductoryText': 'Welcome! I can help answer general health questions.',
'messages': [
{
'role': 'user',
'content': user_message
}
]
}
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()
return response.json()
# Usage
result = chat_with_virtual_agent(
access_token,
'What are the symptoms of a cold?'
)
print('Response:', result['response'])
cURL Examples
Basic Chat Request
curl -X POST https://api.agent700.ai/api/chat \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"messages": [
{
"role": "user",
"content": "Hello, how are you?"
}
]
}'
Chat with Conversation History
curl -X POST https://api.agent700.ai/api/chat \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"messages": [
{
"role": "user",
"content": "What is 2+2?"
},
{
"role": "assistant",
"content": "2+2 equals 4."
},
{
"role": "user",
"content": "What about 3+3?"
}
]
}'
Streaming Chat Request
curl -X POST https://api.agent700.ai/api/chat \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"messages": [
{
"role": "user",
"content": "Tell me a story"
}
],
"streamingEnabled": true
}'
Virtual Agent Chat
curl -X POST https://api.agent700.ai/api/chat \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "Medical Assistant",
"masterPrompt": "You are a medical assistant. Follow these instructions:\n\nNo diagnosis\n\nNo prescriptive actions\n\nNo medical advice",
"introductoryText": "Welcome! I can help answer general health questions.",
"messages": [
{
"role": "user",
"content": "What are the symptoms of a cold?"
}
]
}'
Chat with Custom Model Settings
curl -X POST https://api.agent700.ai/api/chat \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"model": "gpt-4",
"temperature": 0.7,
"maxTokens": 1000,
"messages": [
{
"role": "user",
"content": "Explain quantum computing"
}
]
}'
Chat with PII Scrubbing
curl -X POST https://api.agent700.ai/api/chat \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"agentId": "e2f5206e-5bfc-4d5c-a7a2-31a18bfc8bd6",
"scrubPii": true,
"piiThreshold": 0.8,
"messages": [
{
"role": "user",
"content": "My email is [email protected]"
}
]
}'
Fetch URL Metadata
curl -X GET "https://api.agent700.ai/api/chat/fetch-url-metadata?url=https://example.com" \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN"Use Cases and Patterns
Basic Conversation
Simple question-and-answer interaction:
const response = await sendChatMessage(
accessToken,
agentId,
'What is the capital of France?'
);
console.log(response.response); // "The capital of France is Paris."
Multi-Turn Conversation
Maintaining context across multiple exchanges:
let conversation = [
{ role: 'user', content: 'I want to learn Python' }
];
// First turn
const response1 = await chatWithHistory(accessToken, agentId, conversation);
console.log(response1.response);
// Second turn - maintains context
conversation.push({ role: 'user', content: 'What are the basics?' });
const response2 = await chatWithHistory(accessToken, agentId, conversation);
console.log(response2.response);
Streaming for Better UX
Real-time response display for long answers:
await streamChat(
accessToken,
agentId,
[{ role: 'user', content: 'Write a long article about AI' }],
(chunk, full) => {
// Update UI in real-time
document.getElementById('response').textContent = full;
},
(metadata) => {
console.log('Streaming complete');
}
);
Virtual Agent for One-Time Use
Creating a temporary agent without saving:
const response = await chatWithVirtualAgent(
accessToken,
'Explain quantum physics simply'
);
Agent Revision Testing
Testing different agent configurations:
// Test revision 1
const response1 = await sendChatMessage(
accessToken,
agentId,
'What is AI?',
{ agentRevisionId: 1 }
);
// Test revision 2
const response2 = await sendChatMessage(
accessToken,
agentId,
'What is AI?',
{ agentRevisionId: 2 }
);
// Compare responses
console.log('Revision 1:', response1.response);
console.log('Revision 2:', response2.response);
PII Protection
Protecting sensitive information:
const response = await sendChatMessage(
accessToken,
agentId,
'My SSN is 123-45-6789',
{ scrubPii: true, piiThreshold: 0.8 }
);
if (response.scrubbed_message) {
console.log('PII was scrubbed:', response.scrubbed_message);
}
MCP Tool Integration
Using external tools through MCP:
const response = await chatWithMcpTools(
accessToken,
agentId,
'List files in /home/user/documents'
);
// Agent can use filesystem MCP tools to list files
Smart Document Evaluation
Context-aware responses using embeddings:
const response = await sendChatMessage(
accessToken,
agentId,
'What is your privacy policy?',
{
smartDocEvaluation: true,
smartDocChunkSize: 1000,
smartDocTopK: 3
}
);
// Agent retrieves relevant chunks from alignment data
Best Practices
Message Management
- Maintain Conversation History: Include previous messages in the messages array to maintain context
- Limit History Length: Very long conversation histories can exceed token limits; consider truncating old messages (see the truncation sketch after the example below)
- Use Message IDs: Include id fields for messages to help track conversation state
Example:
// Good: Maintains context
const messages = [
{ role: 'user', content: 'What is Python?' },
{ role: 'assistant', content: 'Python is a programming language...' },
{ role: 'user', content: 'What can I build with it?' }
];
// Bad: Loses context
const messages = [
{ role: 'user', content: 'What can I build with it?' }
];
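As referenced above, a simple truncation sketch that keeps any system message plus the most recent turns (the cutoff of 20 messages is arbitrary; tune it to your model's context window):
// Keep the system message(s) and the last maxMessages non-system messages
function truncateHistory(messages, maxMessages = 20) {
  const system = messages.filter(m => m.role === 'system');
  const rest = messages.filter(m => m.role !== 'system');
  return [...system, ...rest.slice(-maxMessages)];
}
// Usage with the chatWithHistory helper shown earlier
const trimmed = truncateHistory(conversation);
const result = await chatWithHistory(accessToken, agentId, trimmed);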
Error Handling
Always implement proper error handling:
async function safeChat(accessToken, agentId, message) {
try {
const response = await sendChatMessage(accessToken, agentId, message);
if (response.error) {
console.error('Chat error:', response.error);
return null;
}
return response;
} catch (error) {
if (error.response?.status === 401) {
// Token expired, refresh and retry
const newToken = await refreshToken();
return safeChat(newToken, agentId, message);
} else if (error.response?.status === 402) {
// Payment required
console.error('Payment required for agent owner');
return null;
} else {
console.error('Unexpected error:', error);
return null;
}
}
}
Token Management
- Monitor Token Usage: Check prompt_tokens and completion_tokens in responses
- Set Appropriate Limits: Use maxTokens to prevent excessive token usage
- Handle Token Limits: Check finish_reason for "length" to detect truncation
Example:
const response = await sendChatMessage(accessToken, agentId, message);
if (response.finish_reason === 'length') {
console.warn('Response was truncated due to token limit');
// Consider increasing maxTokens or shortening the request
}
console.log(`Used ${response.prompt_tokens} prompt tokens and ${response.completion_tokens} completion tokens`);
Streaming Best Practices
- Handle Connection Limits: Be aware of SSE connection limits per user
- Implement Reconnection: Handle disconnections gracefully
- Buffer Management: Properly handle partial SSE messages
Example:
let reconnectAttempts = 0;
const maxReconnectAttempts = 3;
async function streamWithRetry(accessToken, agentId, messages) {
try {
await streamChat(accessToken, agentId, messages, onChunk, onComplete, onError);
} catch (error) {
if (reconnectAttempts < maxReconnectAttempts) {
reconnectAttempts++;
console.log(`Reconnecting... (attempt ${reconnectAttempts})`);
await new Promise(resolve => setTimeout(resolve, 1000 * reconnectAttempts));
return streamWithRetry(accessToken, agentId, messages);
} else {
console.error('Max reconnection attempts reached');
}
}
}
Performance Optimization
- Cache Agent Configurations: Store agent metadata to reduce API calls (a caching sketch follows this list)
- Batch Operations: When possible, batch multiple chat requests
- Use Streaming: For long responses, use streaming to improve perceived performance
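A generic TTL-cache sketch for the first point (the fetcher is whatever call you already use to load agent metadata; nothing here is a platform API):
// Memoize any async fetcher with a time-to-live
function cachedFetcher(fetchFn, ttlMs = 60000) {
  const cache = new Map(); // key -> { value, expires }
  return async (key) => {
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) return hit.value;
    const value = await fetchFn(key);
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
// Usage: wrap your own agent-metadata loader (hypothetical function)
// const getAgent = cachedFetcher(loadAgentMetadata);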
Security Best Practices
- Use PII Scrubbing: Enable scrubPii when handling sensitive data
- Validate Input: Sanitize user messages before sending
- Secure Token Storage: Never expose access tokens in client-side code
- Monitor Usage: Track API usage to detect anomalies
Example:
function sanitizeMessage(message) {
// Remove potentially dangerous content
return message.trim().substring(0, 10000); // Limit length
}
const sanitized = sanitizeMessage(userInput);
const response = await sendChatMessage(
accessToken,
agentId,
sanitized,
{ scrubPii: true, piiThreshold: 0.9 }
);
Model Selection
- Use Agent Defaults: Let agents use their configured models unless you need to override
- Consider Cost: Different models have different costs - choose appropriately
- Match Use Case: Select models that fit your use case (speed vs. quality)
Troubleshooting
Common Errors and Solutions
"Payment required for agent owner" (402)
Cause: The agent owner doesn't have an active paid subscription.
Solutions:
- Verify the agent owner has an active paid subscription
- Check payment status in account settings
- Contact support if payment is active but still receiving this error
"Missing required field: messages" (400)
Cause: The messages array is missing or empty.
Solutions:
- Ensure the request body includes a messages array
- Verify the array contains at least one message
- Check that the JSON structure is valid
Example Fix:
// ❌ Incorrect
await sendChatMessage(accessToken, agentId, null);
// ✅ Correct
await sendChatMessage(accessToken, agentId, 'Hello');
"Authentication required" (401)
Cause: Missing or invalid authentication token.
Solutions:
- Include the Authorization: Bearer <token> header
- Verify the token is not expired
- Refresh the token if needed
Example Fix:
// Ensure token is included
const response = await fetch(url, {
headers: {
'Authorization': `Bearer ${accessToken}`, // Don't forget this!
},
});"Error processing request" (500)
Cause: Server error during chat processing.
Solutions:
- Check the error message in the response
- Verify request format is correct
- Retry the request after a short delay
- Contact support if the issue persists
Streaming Connection Issues
Issue: SSE connection fails or disconnects unexpectedly.
Solutions:
- Check connection limits (typically 5 concurrent connections per user)
- Implement reconnection logic
- Verify network stability
- Check for firewall/proxy issues blocking SSE
Example:
// Implement connection retry
let retries = 0;
const maxRetries = 3;
async function connectWithRetry() {
try {
await streamChat(accessToken, agentId, messages, onChunk, onComplete, onError);
} catch (error) {
if (retries < maxRetries) {
retries++;
await new Promise(resolve => setTimeout(resolve, 1000 * retries));
return connectWithRetry();
}
throw error;
}
}
Response Truncation
Issue: Response is cut off mid-sentence.
Cause: Response exceeded maxTokens limit.
Solutions:
- Increase maxTokens in the request
- Check finish_reason for "length" to detect truncation
- Consider breaking complex queries into smaller parts
Example:
const response = await sendChatMessage(
accessToken,
agentId,
message,
{ maxTokens: 2000 } // Increase from default
);
if (response.finish_reason === 'length') {
console.warn('Response was truncated. Consider increasing maxTokens.');
}
PII Not Being Scrubbed
Issue: PII scrubbing not working as expected.
Solutions:
- Verify scrubPii: true is set in the request
- Adjust piiThreshold (lower = more sensitive, higher = less sensitive)
- Check scrubbed_message in the response for details
Example:
const response = await sendChatMessage(
accessToken,
agentId,
'My email is [email protected]',
{ scrubPii: true, piiThreshold: 0.7 } // Lower threshold = more aggressive
);
if (response.scrubbed_message) {
console.log('PII scrubbed:', response.scrubbed_message);
}
MCP Tools Not Working
Issue: MCP tools not being called or approved.
Solutions:
- Verify enableMcp: true is set
- Check that the model supports MCP (some models don't)
- Ensure mcpServerNames includes the correct server names
- Check if mcpAutoApprove needs to be enabled
- Verify MCP servers are properly configured
Example:
// Ensure MCP is properly configured
const response = await sendChatMessage(
accessToken,
agentId,
'List files in current directory',
{
enableMcp: true,
mcpServerNames: ['filesystem'], // Correct server name
mcpAutoApprove: true // Auto-approve tool calls
}
);
Debugging Tips
- Log Request/Response: Log full request and response for debugging
- Check Token Usage: Monitor token counts to identify issues
- Validate JSON: Ensure request body is valid JSON
- Test with Minimal Request: Start with the simplest possible request
Example Debugging:
async function debugChat(accessToken, agentId, message) {
const request = {
agentId: agentId,
messages: [{ role: 'user', content: message }]
};
console.log('Request:', JSON.stringify(request, null, 2));
try {
const response = await sendChatMessage(accessToken, agentId, message);
console.log('Response:', JSON.stringify(response, null, 2));
return response;
} catch (error) {
console.error('Error:', error);
console.error('Error details:', error.response?.data);
throw error;
}
}
Related Documentation
- Authentication Guide - Comprehensive authentication documentation
- Error Handling Guide - Detailed error codes and troubleshooting
- Workflows Guide - Managing agent workflows
- Alignment Data Guide - Using alignment data and document evaluation
- OpenAPI Specification - Complete API reference with chat schemas
Support
For chat-related issues or questions:
- Email: [email protected]
- API Status: Check the /api/auth/heartbeat endpoint
