Agent700 Alignment Data
Store persistent, user-scoped context in Alignment Data and inject it into your agents with placeholders and smart document evaluation.
Welcome to the comprehensive guide to the Agent700 Alignment Data system. This documentation explores how to transform static AI agents into dynamic, context-aware powerhouses by leveraging persistent, user-scoped data injection.
⚡ The Core Concept
Traditional AI agents rely on context passed manually in each prompt. Alignment Data changes this by providing a "Context Library" that lives alongside your user profiles.
When a chat request is made, Agent700 automatically scans the agent's prompt for {{key}} placeholders, retrieves the matching user-specific data, and injects it in real-time. For large datasets, it even performs automatic RAG (Retrieval-Augmented Generation).
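The injection step can be pictured as a template pass over the prompt. The following Python sketch illustrates the idea; the regex and the fallback behavior for unknown keys are illustrative assumptions, not the platform's actual implementation:

```python
import re

def inject_alignment_data(prompt: str, alignment_data: dict) -> str:
    """Replace each {{key}} placeholder with the user's stored value."""
    def resolve(match: re.Match) -> str:
        key = match.group(1).strip()
        # Unknown keys are left intact here; the real platform may differ.
        return str(alignment_data.get(key, match.group(0)))
    return re.sub(r"\{\{([^}]+)\}\}", resolve, prompt)

store = {"user.medical.allergies": "peanuts"}
prompt = "Avoid suggesting foods containing: {{user.medical.allergies}}."
print(inject_alignment_data(prompt, store))
# → Avoid suggesting foods containing: peanuts.
```

For large values, the platform switches from this simple 1-to-1 substitution to the smart retrieval path described later.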
🧠 Alignment Data as Agent Memory
One of the most powerful ways to use Alignment Data is as a Long-Term Memory Layer. While standard chat history is "ephemeral" (forgotten once a session ends), Alignment Data allows an agent to "remember" facts across weeks, months, or years.
Session Context vs. Persistent Memory
| Feature | Chat History (Short-Term) | Alignment Data (Long-Term) |
|---|---|---|
| Duration | Lasts for the current session only. | Persists indefinitely for the user. |
| Scale | Limited by the context window. | Virtually unlimited (via Smart Retrieval). |
| Scope | Current conversation flow. | Cross-conversational "facts" and "state." |
| Control | Automatic rolling history. | Explicitly managed via API. |
The "Reflective Memory" Pattern
You can create an agent that essentially "writes to its own diary" by following this implementation pattern:
- Chat: The user tells the agent something important (e.g., "I'm allergic to peanuts").
- Reflect: At the end of the session, a "Summary Agent" or a background hook extracts new facts from the transcript.
- Remember: Update the user's Alignment Data (e.g., `user.medical.allergies`) via a `PUT /api/alignment-data` call.
- Recall: In the next conversation (even weeks later), the agent sees `{{user.medical.allergies}}` and knows to avoid peanut-related suggestions immediately.
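The four steps above can be sketched end-to-end. In this toy Python version, a dict stands in for the Alignment Data library and a simple keyword check stands in for the Summary Agent; in production, the Remember step would be a `PUT /api/alignment-data` call:

```python
# Chat → Reflect → Remember → Recall, simulated locally.
def reflect(transcript: list[str]) -> dict:
    """Toy fact extractor: a real Summary Agent would do this with an LLM."""
    facts = {}
    for line in transcript:
        if "allergic to" in line:
            facts["user.medical.allergies"] = line.split("allergic to ")[1].rstrip(".")
    return facts

store = {}                        # stands in for the Alignment Data library
session_1 = ["I'm allergic to peanuts."]
store.update(reflect(session_1))  # Remember: a PUT call in production

# Recall, weeks later: the stored value reaches the agent via its placeholder.
prompt = f"Known allergies: {store['user.medical.allergies']}. Plan a snack menu."
print(prompt)
```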
Example: The Lifelong Personal Assistant
- Alignment Key: `user.memory.notes`
- Current Value: "User prefers meetings after 10 AM. Working on Project 'X' with Alice. Likes dark mode."
- Agent Prompt: "You are a personal assistant. Use these long-term notes to inform your suggestions: `{{user.memory.notes}}`"
- The Result: Even if the user starts a fresh chat on a new device, the agent immediately knows not to schedule a 9 AM meeting.
🚀 Beyond the Basics: Advanced & Experimental Concepts
For power users, Alignment Data can be pushed beyond simple context storage into the realm of Self-Governing Systems.
1. Unified Web-IDE Intelligence
Concept: Syncing cross-environment coding standards.
If you use Agent700's MCP (Model Context Protocol) server, your alignment data follows you. You can set a key like dev.standard.react to "Always use Tailwind and TypeScript." Whether you are chatting with the agent on the web or using it in your local IDE, your architectural preferences are perfectly synchronized.
2. Wildcard Multi-Tenant Policies
Concept: Dynamic rule-swapping via patterns.
Instead of one-to-one mapping, you can use wildcard patterns like brand.safety.* to pull in entire sets of compliance rules. This allows a single "Master Agent" to dynamically adapt its entire personality and safety threshold based on which business unit is currently interacting with it.
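A wildcard lookup like this behaves much like filename globbing over the key space. This Python sketch uses `fnmatch` to illustrate the idea; the platform's actual matching semantics are an assumption here:

```python
from fnmatch import fnmatch

def resolve_wildcard(pattern: str, store: dict) -> dict:
    """Collect every alignment key matching a pattern like brand.safety.*"""
    return {k: v for k, v in store.items() if fnmatch(k, pattern)}

store = {
    "brand.safety.language": "No profanity.",
    "brand.safety.claims": "No unverified medical claims.",
    "brand.voice.tone": "Witty.",
}
rules = resolve_wildcard("brand.safety.*", store)
print(sorted(rules))
# → ['brand.safety.claims', 'brand.safety.language']
```

A "Master Agent" could run such a lookup per tenant, swapping in a different rule set for each business unit.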
3. The Self-Improving Lesson Library
Concept: Preventing regression through "Negative Memory."
When an agent is corrected by a user (e.g., "Actually, don't use double quotes"), a background process can update an alignment key called lessons.learned.
The agent prompt then becomes: "Answer the user, but avoid these past mistakes: {{lessons.learned}}". This creates a self-correcting feedback loop that improves the agent with every interaction.
💡 Imagination Sparks: Creative Use Cases
To help you see the potential, here are several ways developers are using Alignment Data today:
1. The Hyper-Personalized Travel Concierge
Instead of asking a user for their preferences every time, store them in the alignment library.
- Alignment Key: `user.travel_profile`
- Value: `{"diet": "Vegan", "airline": "Delta", "past_trips": ["Tokyo", "Paris"], "budget": "Luxury"}`
- Agent Prompt: "You are a travel assistant. Help the user plan a trip considering their profile: `{{user.travel_profile}}`"
- The Result: The agent automatically suggests vegan-friendly luxury hotels in cities the user hasn't visited yet.
2. The Multi-Tenant Brand Protector
If you run a SaaS that provides AI agents to multiple companies, use alignment data to manage brand voice.
- Alignment Key: `brand.guidelines`
- Value: "Tone: Professional yet witty. Must mention 'Eco-Friendly' in every response. Use British English."
- Agent Prompt: "Answer as the brand's official representative using these rules: `{{brand.guidelines}}`"
- The Result: A single agent architecture serves 1,000 different clients, each with its own unique "soul" defined in its alignment data.
3. The Digital Twin (Personal Branding)
Store your own writing style, common phrases, and background to create an agent that sounds just like you.
- Alignment Key: `me.personal_context`
- Value: (A 15,000-character collection of your past blog posts and emails)
- Parameter: Set `smartDocEvaluation: true` in your chat request.
- The Result: Agent700 performs a semantic search on your "Digital Twin" library and injects only the most relevant snippets of your writing style for the current query.
4. The Live Policy Guardrail
Ensure support agents always follow the latest internal compliance rules without redeploying code.
- Alignment Key: `policy.current_version`
- Value: "Refunds: Only for items under $50. Response Time: Under 2 hours."
- Agent Prompt: "Support customers strictly based on current policy: `{{policy.current_version}}`"
🛠️ Technical Implementation
Storing Alignment Data
First, populate the library via the Platform API.
```shell
# Example: Storing a corporate policy
curl -X POST https://api.agent700.ai/api/alignment-data \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "corp_docs",
    "value": "This is a very long text containing all internal manuals..."
  }'
```

Executing a Chat Request
Reference the data in your chat call using placeholders.
Endpoint: POST https://api.agent700.ai/api/chat
Payload Structure:
```json
{
  "agentId": "your-agent-uuid",
  "messages": [
    {
      "role": "user",
      "content": "Can I get a refund for my $75 purchase?"
    }
  ],
  "smartDocEvaluation": true,
  "smartDocTopK": 5,
  "smartDocChunkSize": 1000,
  "streamingEnabled": true
}
```

> [!IMPORTANT]
> Placeholder Location: You don't pass the alignment data keys in the chat request. You place them in the Agent's Master Prompt or System Message configuration inside the Agent700 dashboard.
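As a companion to the curl example, here is a minimal Python sketch that assembles the same request with the standard library. The endpoint and header names mirror the examples above; the token is a placeholder, and the request is built but not sent:

```python
import json
import urllib.request

# Same payload as the example above, expressed as a Python dict.
payload = {
    "agentId": "your-agent-uuid",
    "messages": [
        {"role": "user", "content": "Can I get a refund for my $75 purchase?"}
    ],
    "smartDocEvaluation": True,
    "smartDocTopK": 5,
    "smartDocChunkSize": 1000,
    "streamingEnabled": True,
}

req = urllib.request.Request(
    "https://api.agent700.ai/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_TOKEN",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would execute the call (requires a valid token).
```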
📐 Best Practices
1. Leverage Dot-Notation
Stay organized by using namespaces.
- `user.settings.theme`
- `user.settings.notifications`
- `company.billing.v2`
2. The construct-json Trick
Use the `GET /api/alignment-data/construct-json` endpoint to see exactly what your "tree" looks like. It will automatically re-assemble all dotted keys into a nested JSON object.
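The re-assembly this endpoint performs can be reproduced locally. This Python sketch shows the transformation; the endpoint's exact semantics (e.g., how key conflicts are resolved) are an assumption:

```python
def construct_json(flat: dict) -> dict:
    """Re-assemble dotted keys into a nested object."""
    tree = {}
    for dotted_key, value in flat.items():
        node = tree
        parts = dotted_key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend, creating levels as needed
        node[parts[-1]] = value
    return tree

flat = {"user.settings.theme": "dark", "user.settings.notifications": "on"}
print(construct_json(flat))
# → {'user': {'settings': {'theme': 'dark', 'notifications': 'on'}}}
```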
3. Smart vs. Simple
- Simple Injection: For short strings (names, dates, simple flags), placeholders are replaced 1-to-1.
- Smart Evaluation: If a value is massive (like a handbook), enabling `smartDocEvaluation` ensures only the relevant parts are injected, preventing your agent from becoming confused by irrelevant data or hitting token limits.
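The difference is easiest to see in code. This toy Python sketch mimics what Smart Evaluation does conceptually, using fixed-size chunks and word overlap in place of the platform's real chunking and semantic search:

```python
def retrieve_top_k(document: str, query: str, chunk_size: int = 1000, k: int = 5):
    """Split a large value into chunks and keep the k chunks that share the
    most words with the query. Word overlap is purely illustrative; the
    platform uses semantic (embedding-based) retrieval."""
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    query_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(query_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

handbook = ("Refund policy: items under 50 dollars qualify. " * 3 +
            "Office dress code: business casual. " * 3)
top = retrieve_top_k(handbook, "refund for my purchase", chunk_size=60, k=1)
print(top[0])
```

The `chunk_size` and `k` arguments play the same role as `smartDocChunkSize` and `smartDocTopK` in the chat payload.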
