Get started.
AgentLoop is a runtime learning layer for production AI agents. Wrap your LLM client once — memory retrieval and turn logging happen automatically. Reviewers correct mistakes in the dashboard. Those corrections become memories. Future related queries pull them back. The agent improves without retraining.
How it works
AgentLoop sits between your application and your LLM provider. Before each response, it searches past corrections semantically and injects the most relevant ones into the prompt. After the response, the turn is logged for review. Subject-matter experts correct mistakes in the dashboard — the corrections become searchable memory available to every future query.
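The loop above can be sketched in a few lines of plain Python. This is a conceptual illustration only: the in-memory list and naive keyword scoring stand in for AgentLoop's real semantic search, logging, and review queue.

```python
# Conceptual sketch of the retrieve -> inject -> log loop.
# NOT AgentLoop's implementation: the memory store and keyword
# scoring below are placeholders for real semantic search.

MEMORIES = [
    "Correction: the Pix nighttime limit is R$1,000 between 8pm and 6am.",
    "Correction: refunds over R$500 require manager approval.",
]

TURN_LOG = []

def search_memories(query, limit=1):
    """Rank stored corrections by naive keyword overlap."""
    words = set(query.lower().split())
    scored = sorted(
        MEMORIES,
        key=lambda m: len(words & set(m.lower().split())),
        reverse=True,
    )
    return scored[:limit]

def answer(question, llm):
    # 1. Retrieve relevant corrections and inject them into the prompt.
    context = "\n".join(search_memories(question))
    prompt = f"{context}\n\nUser: {question}"
    # 2. Call the underlying model.
    response = llm(prompt)
    # 3. Log the turn so a reviewer can inspect (and correct) it later.
    TURN_LOG.append({"question": question, "response": response})
    return response

reply = answer("pix limit at night", llm=lambda p: f"echo: {p[:40]}")
```

In the real SDK, steps 1 and 3 are exactly what the wrapper's hooks do around each `chat.completions.create()` call, so your application code never touches them.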
Quickstart
Pick your provider. The integration is the same shape regardless: one wrap call, then use the client normally.
```shell
# Pick your LLM provider
pip install agentloop-py agentloop-py-openai openai

# or for Anthropic
pip install agentloop-py agentloop-py-anthropic anthropic
```
```python
from openai import OpenAI

from agentloop import AgentLoop
from agentloop_openai import wrap_openai

# Wrap your OpenAI client once. AgentLoop hooks fire automatically
# on every chat.completions.create() call.
openai = wrap_openai(
    OpenAI(),
    loop=AgentLoop(api_key="ak_your_key"),
)

def ask_agent(question, user_id):
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": question},
        ],
        agentloop={"user_id": user_id},  # per-call options
    )
    return response.choices[0].message.content
```
For Anthropic, swap two imports: `from anthropic import Anthropic` and `from agentloop_anthropic import wrap_anthropic`. The wrapper interface is identical.
```shell
# Pick your LLM provider
npm install @agentloop-sdk/core @agentloop-sdk/openai openai

# or for Anthropic
npm install @agentloop-sdk/core @agentloop-sdk/anthropic @anthropic-ai/sdk
```
```typescript
import OpenAI from "openai";
import { AgentLoop } from "@agentloop-sdk/core";
import { wrapOpenAI } from "@agentloop-sdk/openai";

// Wrap your OpenAI client once. AgentLoop hooks fire automatically
// on every chat.completions.create() call.
const openai = wrapOpenAI(
  new OpenAI(),
  { loop: new AgentLoop({ apiKey: "ak_your_key" }) }
);

async function askAgent(question, userId) {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: question },
    ],
    agentloop: { user_id: userId }, // per-call options
  });
  return response.choices[0].message.content;
}
```
You can also call the REST API directly. For example, search memories for a user:

```shell
curl -X POST https://api.getagentloop.io/v1/memories/search \
  -H "Authorization: Bearer ak_your_key" \
  -H "Content-Type: application/json" \
  -d '{"query": "pix limit night", "user_id": "client_123", "limit": 3}'
```
Or create an annotation from your own tooling instead of the dashboard:

```shell
curl -X POST https://api.getagentloop.io/v1/annotations \
  -H "Authorization: Bearer ak_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "What is the Pix limit at night?",
    "agent_response": "R$5,000",
    "rating": "incorrect",
    "root_cause": "context",
    "correction": "The Pix nighttime limit in Brazil is R$1,000 between 8pm and 6am for personal accounts",
    "tags": ["pix"],
    "reviewer": "maria@company.com"
  }'
```
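The same annotation request can be built from Python with only the standard library. The endpoint, headers, and fields below come straight from the curl example above; the request is constructed but not sent.

```python
import json
import urllib.request

# Build (but don't send) the annotation request shown in the curl
# example, using only the standard library.
payload = {
    "question": "What is the Pix limit at night?",
    "agent_response": "R$5,000",
    "rating": "incorrect",
    "root_cause": "context",
    "correction": (
        "The Pix nighttime limit in Brazil is R$1,000 "
        "between 8pm and 6am for personal accounts"
    ),
    "tags": ["pix"],
    "reviewer": "maria@company.com",
}

req = urllib.request.Request(
    "https://api.getagentloop.io/v1/annotations",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer ak_your_key",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send it
```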
Per-user retrieval
By default, `user_id` tags the logged turn, but retrieval pulls from the entire org, so a correction made on one end-user's turn is available to every end-user. This is what most apps want.
For end-user-facing apps where each user has personal corrections that shouldn't apply to others (Alice's preferences shouldn't change Bob's results), opt in to per-user retrieval with `search_user_id` (Python) / `searchUserId` (JS):
```python
agentloop={
    "user_id": user_id,         # logs this turn under user_id
    "search_user_id": user_id,  # AND scope retrieval to this user
}
```
The two fields are independent. You can log under one user and retrieve from another — useful when an admin reviews a user's session, or when a team-level assistant logs under the workspace but retrieves per-seat. See Patterns for the full set of recipes.
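These recipes can be captured as small helpers that build the per-call options dict. The helper names below are hypothetical, not part of the SDK; only the `user_id` and `search_user_id` fields come from the docs.

```python
# Hypothetical helpers for common user_id / search_user_id recipes.
# The field names are AgentLoop's; the function names are illustrative.

def personal_scope(user_id):
    """Log under the user AND retrieve only their own corrections."""
    return {"user_id": user_id, "search_user_id": user_id}

def admin_review(admin_id, target_user_id):
    """Log the turn under the admin, retrieve from the user's memories."""
    return {"user_id": admin_id, "search_user_id": target_user_id}

def org_wide(user_id):
    """Default behavior: tag the turn, retrieve across the whole org."""
    return {"user_id": user_id}
```

Pass the resulting dict as the `agentloop` per-call option shown in the Quickstart.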
Next steps
Once your wrapper is in place, the questions that come up next:
- Signals — which turns should get logged for review? Five patterns most teams combine.
- Webhooks — get notified when turns land in the review queue (Slack, PagerDuty, your own systems).
- LangChain — drop-in pattern for LCEL chains.
- Patterns — recipes for using `user_id` and `search_user_id` together.
- Reference — what's enforced on your data today.