# Cited Agents — AI Integration Guide

> Cited Agents is a platform for discovering and running science-backed AI agents.
> Every agent is grounded in peer-reviewed research, pinned to an exact Git commit, and fully reproducible.

## Quick Start

### 1. CLI (recommended)

Install:

```
pip install cited
```

Search:

```
cited search "market entry analysis"
```

Run an agent:

```
cited run agent-slug --input input.json
```

Get agent info:

```
cited info agent-slug
```

Run with inline JSON:

```
echo '{"query": "test"}' | cited run agent-slug
```

### 2. MCP Server (for Claude Desktop, Claude Code, Cursor, and other MCP clients)

Add to your MCP client configuration:

```json
{
  "mcpServers": {
    "cited-agents": {
      "url": "https://citedagents.com/api/mcp"
    }
  }
}
```

Available MCP tools:

- `search_agents`: Search the catalog by keyword, field, tag, license, or SDK
- `get_agent`: Get full agent metadata
- `get_agent_blueprint`: Get the complete reproducible blueprint
- `list_fields`: Browse OECD research field taxonomy codes
- `get_contributor_kit`: Get the agent.yaml template and submission instructions
- `validate_agent`: Validate a GitHub repo's agent.yaml

Workflow: `search_agents` → `get_agent_blueprint` → run locally via `spawn_protocol`

### 3. Hosted Execution (Enterprise — recommended for ChatGPT and Claude users)

If your organization has an enterprise subscription, run agents directly via MCP:

- Same endpoint: https://citedagents.com/api/mcp
- Authentication: automatic via OAuth 2.1 (handled by your MCP client)
- OAuth discovery: https://citedagents.com/.well-known/oauth-authorization-server

Enterprise MCP tools (in addition to the free tools):

- `run_agent`: Execute a hosted agent with JSON input and return the output directly

Workflow:

1. `search_agents({ q: "your query" })` → find agents
2. `run_agent({ slug: "agent-slug", input: {...} })` → execute and get results

No installation, no API keys, no local environment needed. Token usage is deducted from your organization's monthly quota.
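On the wire, each step in this workflow is a standard MCP `tools/call` request (MCP speaks JSON-RPC 2.0); an illustrative request for step 2, with a hypothetical slug and input payload:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "run_agent",
    "arguments": {
      "slug": "dfmea-agent",
      "input": { "component": "brake pad", "function": "friction" }
    }
  }
}
```

In practice your MCP client builds and sends this for you; the sketch is only meant to show what the client does with the `slug` and `input` arguments.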
Also supported via Gemini CLI, Gemini Enterprise (Google Cloud), and Microsoft Copilot Studio. Note: the consumer Gemini web app does not yet support custom MCP servers.

### 4. Local Server (persistent agents — recommended for coding agents)

If you have shell access (Claude Code, Codex CLI, Gemini CLI), set up persistent local agents.

Check whether the local server is running:

```
curl -s http://localhost:9090/health
```

If it is not running, set it up:

```
pip install cited
cited keys set ANTHROPIC_API_KEY sk-your-key  # store API key locally (one-time)
cited install dfmea-agent                     # install agent locally (one-time)
cited serve &                                 # start local server in background
```

Configure your MCP client:

```json
{
  "mcpServers": {
    "cited-local": {
      "url": "http://localhost:9090/api/mcp"
    }
  }
}
```

Local server MCP tools:

- `list_installed_agents`: List locally installed agents with readiness status (instant, offline)
- `run_agent`: Execute an installed agent with JSON input (offline, API keys auto-injected)
- `search_agents`: Search the platform catalog, auto-filtered by your available SDK keys (network)
- `install_agent`: Install a new agent locally from the platform (network, one-time)
- `list_available_keys`: Show which API keys/SDKs are configured (never exposes values)

The local server persists agents between sessions. Install once, run many times.

## Workflow

1. Search for an agent using the CLI or MCP
2. Get the blueprint (contains `spawn_protocol` with exact install + run commands)
3. Run locally: execute `spawn_protocol.install`, then pipe JSON via stdin to `spawn_protocol.run`
4. Parse the JSON output from stdout

## Blueprint Structure

The blueprint returned by the CLI (`cited info`) or MCP (`get_agent_blueprint`) contains everything needed:

- `spawn_protocol.install`: Shell command to clone the repo at the pinned commit and install dependencies
- `spawn_protocol.run`: Command to execute the agent
- `spawn_protocol.stdin`: Always `"json"` — pipe input via stdin
- `spawn_protocol.stdout`: Always `"json"` — read output from stdout
- `spawn_protocol.required_env`: Environment variables needed (e.g. ANTHROPIC_API_KEY)
- `input_schema`: JSON Schema for the expected input
- `output_schema`: JSON Schema for the expected output
- `examples`: Example inputs for testing

## Example: Search and Run via CLI

```
cited search "risk analysis"
cited info dfmea-agent
echo '{"component":"brake pad","function":"friction"}' | cited run dfmea-agent
```

## Sandboxed Environment Fallback

If you are in a sandboxed environment without shell access or internet access (e.g. Claude.ai, ChatGPT, or other web-based AI assistants), you can still reach the platform through the REST API using your built-in web browsing or URL fetching capability:

### Step 1 — Search for agents

Fetch this URL with your web browse / URL fetch tool:

```
https://citedagents.com/api/agents?q=YOUR_SEARCH_QUERY
```

Replace YOUR_SEARCH_QUERY with keywords (use `+` for spaces). Examples:

```
https://citedagents.com/api/agents?q=market+entry
https://citedagents.com/api/agents?q=risk+analysis&sdk=anthropic
```

Optional parameters: `tag`, `license`, `sdk`, `limit` (max 100), `offset`

Returns JSON: `{ data: [...agents], total, limit, offset }`

### Step 2 — Get the blueprint

From the search results, take the agent's slug and fetch:

```
https://citedagents.com/api/v1/agents/AGENT_SLUG/blueprint
```

Example: https://citedagents.com/api/v1/agents/dfmea-agent/blueprint

Returns the full blueprint JSON with input/output schemas, examples, `spawn_protocol`, scientific references, and construction instructions.
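If you can execute code rather than only fetch URLs, the two requests above are easy to build programmatically. A minimal Python sketch that only constructs the URLs (the `search_url` and `blueprint_url` helpers are illustrative, not part of any Cited SDK), so it needs no network access:

```python
from urllib.parse import urlencode

BASE = "https://citedagents.com"

def search_url(q, **params):
    """Step 1: build a catalog search URL.
    Optional params per the API: tag, license, sdk, limit, offset."""
    return f"{BASE}/api/agents?" + urlencode({"q": q, **params})

def blueprint_url(slug):
    """Step 2: build the blueprint URL for an agent slug from the search results."""
    return f"{BASE}/api/v1/agents/{slug}/blueprint"

print(search_url("risk analysis", sdk="anthropic"))
# https://citedagents.com/api/agents?q=risk+analysis&sdk=anthropic
print(blueprint_url("dfmea-agent"))
# https://citedagents.com/api/v1/agents/dfmea-agent/blueprint
```

`urlencode` handles the `+`-for-spaces encoding shown above; fetch the resulting URLs with whatever browsing or HTTP tool your environment provides.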
### Step 3 — Present to the user

After fetching the blueprint:

a) Show the agent's description, scientific references, and capabilities
b) Show `input_schema` so the user knows what data is needed
c) Show `output_schema` so the user knows what to expect
d) Provide the `spawn_protocol` commands for local execution
e) Suggest: `pip install cited && cited run AGENT_SLUG --input input.json`

Do NOT simulate or fake the agent's output. Present the real blueprint data and give the user the commands to run the agent themselves.

## For more information

- Full API docs: https://citedagents.com/for-ai
- Catalog: https://citedagents.com/agents
- OAuth discovery: https://citedagents.com/.well-known/oauth-authorization-server
- Contributor guide: https://citedagents.com/docs/contributor-guide.md
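## Example: Driving spawn_protocol from a script

The spawn_protocol contract described above (JSON in on stdin, JSON out on stdout) takes only a few lines to drive from a script. A minimal Python sketch, with `cat` standing in for a real `spawn_protocol.run` command so the snippet runs without any agent installed:

```python
import json
import subprocess

def run_agent_locally(run_cmd, payload):
    """Execute a spawn_protocol.run command: pipe JSON in via stdin
    (spawn_protocol.stdin == "json") and parse JSON from stdout
    (spawn_protocol.stdout == "json")."""
    proc = subprocess.run(
        run_cmd,
        input=json.dumps(payload),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)

# Demo: `cat` echoes stdin to stdout, standing in for a real agent command.
result = run_agent_locally(["cat"], {"component": "brake pad", "function": "friction"})
print(result["component"])  # brake pad
```

A real caller would first execute `spawn_protocol.install` and export the variables listed in `spawn_protocol.required_env` before invoking the run command.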