Cited Agents — AI Integration Guide
This page is designed for AI agents. It explains how to search, inspect, and run science-backed agents programmatically.
Overview
Cited Agents is a platform where peer-reviewed research is published as callable AI agents. Every agent is backed by a DOI-linked paper, pinned to an exact Git commit, and reads JSON from stdin / writes JSON to stdout. You can search the catalog, get a reproducible blueprint, and run any agent locally.
1. CLI (Recommended)
The fastest way to search, inspect, and run agents. Handles installation, caching, input validation, and execution automatically.
# Install the CLI
pip install cited
# Search the catalog
cited search "market entry analysis"
# Get detailed agent info (schema, references, spawn protocol)
cited info agent-slug
# Run an agent with a JSON file
cited run agent-slug --input input.json
# Run with inline JSON via stdin
echo '{"query": "test"}' | cited run agent-slug
# Validate input without running
cited validate agent-slug --input input.json
# Manage local cache
cited cache list
cited cache clear

2. Local Server (Persistent — Recommended for Coding Agents)
For coding agents with shell access (Claude Code, Codex CLI, Gemini CLI), install agents locally once and run them across sessions. No re-installation needed.
Setup (one-time)
# Install the CLI
pip install cited

# Store your API key locally (one-time, persists between sessions)
cited keys set ANTHROPIC_API_KEY sk-your-key

# Install agents locally (one-time per agent)
cited install dfmea-agent

# Start the local server
cited serve
Auto-Start Check
# Check if local server is already running:
curl -s http://localhost:9090/health

# If not running, start it:
cited serve &
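If you are scripting this check rather than running it by hand, the same probe needs only the Python standard library. A minimal sketch, using the /health endpoint and port shown above:

```python
import urllib.request
import urllib.error

def is_server_running(url="http://localhost:9090/health", timeout=2):
    """Return True if the local Cited server answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Start `cited serve &` only when this returns False.
```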
MCP Configuration
Add to your MCP client configuration:
{
"mcpServers": {
"cited-local": {
"url": "http://localhost:9090/api/mcp"
}
}
}

Local Server MCP Tools
- list_installed_agents: List locally installed agents with readiness status (instant, offline)
- run_agent: Execute an installed agent with JSON input (offline, keys auto-injected)
- search_agents: Search platform catalog, auto-filtered by available SDK keys (network)
- install_agent: Install a new agent locally (network, one-time)
- list_available_keys: Show configured API keys and SDKs (never exposes values)
Local Server Workflow
1. list_available_keys() → check configured SDKs
2. list_installed_agents() → see what's ready
3. run_agent({ slug: "agent", input: {...} }) → execute (offline)
To add new agents:
4. search_agents({ q: "your query" }) → find agents (auto-filtered by SDK)
5. install_agent({ slug: "agent-slug" }) → install locally (one-time)
6. run_agent({ slug: "agent-slug", input: {...} }) → execute

3. Platform MCP Server (Stateless)
For MCP-compatible clients (Claude Desktop, Claude Code, Cursor, etc.), connect to the platform MCP server for tool-based access.
Configuration
Add to your MCP client configuration:
{
"mcpServers": {
"cited-agents": {
"url": "https://citedagents.com/api/mcp"
}
}
}

Available MCP Tools
- search_agents: Search catalog by keyword, field, tag, license, or SDK
- get_agent: Get full agent metadata by slug or id
- get_agent_blueprint: Get complete reproducible blueprint with spawn_protocol
- list_fields: Browse OECD research field taxonomy codes
- get_contributor_kit: Get agent.yaml template and submission instructions
- validate_agent: Validate a GitHub repo's agent.yaml before submission
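Since the endpoint speaks JSON-RPC 2.0 over HTTP POST (see Links below), a client without an MCP library can call these tools directly. A hedged sketch using only the standard library; the tools/call method name follows the MCP specification, and a real client may also need to perform the MCP initialization handshake first:

```python
import json
import urllib.request

MCP_URL = "https://citedagents.com/api/mcp"

def tool_call_payload(tool, arguments, request_id=1):
    """Build a JSON-RPC 2.0 request for an MCP tools/call invocation."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

def call_tool(tool, arguments):
    """POST the request and parse the JSON-RPC response body."""
    body = json.dumps(tool_call_payload(tool, arguments)).encode()
    req = urllib.request.Request(
        MCP_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# Example (requires network): call_tool("search_agents", {"q": "market entry"})
```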
MCP Workflow
1. search_agents({ q: "your query" }) → find agents
2. get_agent_blueprint({ slug: "agent-slug" }) → get full blueprint
3. Execute spawn_protocol.install in a shell → clone + install
4. Set spawn_protocol.required_env → environment variables
5. Pipe JSON via stdin to spawn_protocol.run → execute agent
6. Parse stdout JSON → use results

4. Hosted Execution (Paid Hosted Access)
Users on an eligible paid organization subscription can run hosted agents directly via MCP without any local setup. The platform handles installation, API keys, and execution. Authentication is automatic in ChatGPT and Claude MCP clients via OAuth 2.1.
How It Works
1. Your organization enables hosted execution and token quota
2. All organization members authenticate once via OAuth (automatic in MCP clients)
3. The run_agent tool becomes available in your MCP tools
4. Call run_agent with an agent slug and input JSON
5. Results are returned directly — no installation, no API keys needed

Configuration: Same endpoint: https://citedagents.com/api/mcp
Authentication is handled automatically by your MCP client via OAuth 2.1.
OAuth discovery: https://citedagents.com/.well-known/oauth-authorization-server
Hosted MCP Tools (in addition to free tools)
- run_agent: Execute a hosted agent with JSON input and return output directly
Example:
run_agent({ slug: "dfmea-agent", input: { component: "brake pad", function: "friction" } })

Hosted Execution Workflow
1. search_agents({ q: "your query" }) → find agents
2. run_agent({ slug: "agent-slug", input: {...} }) → execute and get results
No installation, no API keys, no local environment needed.
Token usage is deducted from your organization's monthly quota.

Hosted execution is the convenience path when you want output immediately from a managed environment. Local execution remains the full-power path for researchers and hobbyists who want to inspect or modify the agent.
Blueprint & Spawn Protocol
Whether you use CLI or MCP, the agent blueprint contains everything needed to reproduce and run the agent:
Blueprint fields:
- spawn_protocol.install: Shell command to clone repo at pinned commit + install deps
- spawn_protocol.run: Command to execute the agent entrypoint
- spawn_protocol.stdin: Always "json" — pipe input via stdin
- spawn_protocol.stdout: Always "json" — read output from stdout
- spawn_protocol.required_env: Environment variables to set (e.g. ANTHROPIC_API_KEY)
- input_schema: JSON Schema defining expected input format
- output_schema: JSON Schema defining output format
- examples: Example inputs for testing
- references: DOI-linked scientific papers backing the agent
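The spawn protocol can be driven from code as well as from a shell. A minimal sketch of the pipe-JSON-in, parse-JSON-out contract; the echo command below is a stand-in, since a real run command and environment come from spawn_protocol.run and spawn_protocol.required_env:

```python
import json
import os
import subprocess
import sys

def run_agent_process(run_cmd, payload, extra_env=None):
    """Pipe payload to the agent as JSON on stdin, parse JSON from its stdout."""
    env = dict(os.environ, **(extra_env or {}))
    proc = subprocess.run(
        run_cmd, input=json.dumps(payload),
        capture_output=True, text=True, env=env, check=True,
    )
    return json.loads(proc.stdout)

# Stand-in "agent" that echoes its input back; replace with the
# command from spawn_protocol.run after running spawn_protocol.install.
echo_cmd = [sys.executable, "-c",
            "import sys, json; print(json.dumps(json.load(sys.stdin)))"]
result = run_agent_process(echo_cmd, {"query": "test"})
```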
Running manually from a blueprint:
# 1. Install
git clone https://github.com/org/repo agent && cd agent && git checkout abc123 && pip install -r requirements.txt
# 2. Set env
export ANTHROPIC_API_KEY=your-key
# 3. Run
echo '{"your": "input"}' | python src/main.py
# 4. Parse JSON output from stdout

Sandboxed Environment Fallback
If you are in a sandboxed environment without shell access or internet (e.g. Claude.ai, ChatGPT, or other web-based AI assistants where CLI installation and MCP are unavailable), you can still access the platform using your built-in web browsing or URL fetching capability via the REST API.
Step 1 — Search for agents
Fetch this URL using your web browse or URL fetch tool:
GET https://citedagents.com/api/agents?q=YOUR_SEARCH_QUERY
Replace YOUR_SEARCH_QUERY with keywords (use + for spaces).
Examples:
https://citedagents.com/api/agents?q=market+entry
https://citedagents.com/api/agents?q=risk+analysis&sdk=anthropic
https://citedagents.com/api/agents?q=fmea
Optional parameters:
- q: Full-text search (keywords, DOI, standard number)
- tag: Filter by tag (e.g. "fmea", "reliability")
- license: SPDX license filter (e.g. "MIT")
- sdk: Filter by SDK: anthropic | openai | google | mistral | local | other
- limit: Max results, 1-100 (default 20)
- offset: Pagination offset (default 0)
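Composing the query URL is plain URL encoding; urlencode's default escaping turns spaces into "+", matching the examples above. A sketch:

```python
from urllib.parse import urlencode

BASE = "https://citedagents.com/api/agents"

def search_url(q, **params):
    """Compose a catalog search URL; extra keywords map to the optional parameters."""
    query = {"q": q, **params}
    return f"{BASE}?{urlencode(query)}"

url = search_url("market entry", sdk="anthropic", limit=5)
# → https://citedagents.com/api/agents?q=market+entry&sdk=anthropic&limit=5
```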
Response JSON:
{
"data": [
{
"slug": "agent-slug",
"title": "Agent Title",
"description": "What the agent does",
"doi_link": "https://doi.org/10.xxxx/...",
"tags": ["tag1", "tag2"],
"license": "MIT"
}
],
"total": 42,
"limit": 20,
"offset": 0
}

Step 2 — Get the blueprint
From the search results, take the agent's slug and fetch:
GET https://citedagents.com/api/v1/agents/{slug}/blueprint
Example:
https://citedagents.com/api/v1/agents/dfmea-agent/blueprint
Optional: ?version=1.0.0
Returns the full blueprint JSON with spawn_protocol, input/output schemas, examples, scientific references, and construction instructions.

Step 3 — Present to the user
After fetching the blueprint:
a) Show the agent's description, scientific references, and capabilities
b) Show input_schema so the user knows what data is needed
c) Show output_schema so the user knows what to expect
d) Provide the spawn_protocol commands for local execution
e) Suggest: pip install cited && cited run AGENT_SLUG --input input.json

Do NOT simulate or fake the agent's output. Present the real blueprint data and give the user the commands to run it.
Which Method Should I Use?
If you are a coding agent with shell access (Claude Code, Codex CLI, Gemini CLI): Use the Local Server for persistent agents. Install with pip install cited, then cited install agent-slug and cited serve. Agents persist between sessions. This is the recommended method.
If you have MCP support but no persistence needed (Claude Desktop, Cursor): Configure the platform MCP server at https://citedagents.com/api/mcp and use the search_agents and get_agent_blueprint tools directly.
If you have paid hosted access (ChatGPT, Claude): Connect to the platform MCP server or ChatGPT app. After one-time OAuth authentication, use run_agent to execute agents directly in your chat. No local setup needed.
If you are in a sandboxed environment (Claude.ai, ChatGPT, web-based AI): Use your web browsing or URL fetch tool to call the REST API at https://citedagents.com/api/agents?q=... and https://citedagents.com/api/v1/agents/{slug}/blueprint. Then present the blueprint and CLI commands to the user.
Links
- Agent Catalog: https://citedagents.com/agents
- Machine-readable summary: https://citedagents.com/llms.txt
- MCP Endpoint: https://citedagents.com/api/mcp (POST, JSON-RPC 2.0)
- OAuth Discovery: https://citedagents.com/.well-known/oauth-authorization-server
- Contributor Guide: How to publish your own agent