# LLM integration
How LLMs (Claude, Grok, GPT, others) and apps built on top of them discover and call parcelpump.
Four complementary surfaces, each tuned for a different consumer:
| Surface | Format | Audience |
|---|---|---|
| `/llms.txt` | markdown | LLMs at "what is this site?" stage; AI search crawlers |
| `/openapi.json` + `/docs` | OpenAPI 3.0 / Swagger UI | Developers writing code, code generators, OpenAPI-aware LLM tools |
| `@parcelpump/mcp-server` | Model Context Protocol | Claude Desktop, Cursor, Grok, any MCP-aware client |
| `/.well-known/ai-plugin.json` | ChatGPT plugin manifest | Custom GPTs and a few legacy LLM frontends |
## 1. /llms.txt — discovery
https://api.parcelpump.io/llms.txt (and, when the marketing site
exists, https://parcelpump.dev/llms.txt) is a short markdown
document describing what parcelpump is and pointing at the structured
specs. Modeled on the emerging convention promoted at llmstxt.org.
LLMs crawling for context on a domain look here. Keep it under 200 lines, no images, plain prose.
The route handler is in src/api/server.ts. Edit the inline string
when adding new top-level capabilities.
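A minimal sketch of what that handler can look like, assuming an Express-style route (the string content below is a placeholder, not the live document):

```typescript
// Hypothetical sketch of the /llms.txt route in src/api/server.ts. The real
// handler serves a hand-edited inline string; this placeholder only shows the
// shape: short markdown, no images, pointers at the structured specs.
const LLMS_TXT = [
  "# parcelpump",
  "",
  "Parcel-data API. Structured specs:",
  "- OpenAPI: https://api.parcelpump.io/openapi.json",
  "- Interactive docs: https://api.parcelpump.io/docs",
].join("\n");

function llmsTxtHandler(): { status: number; contentType: string; body: string } {
  // Served as markdown so crawlers and LLMs treat it as plain prose.
  return { status: 200, contentType: "text/markdown; charset=utf-8", body: LLMS_TXT };
}
```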
## 2. OpenAPI spec — code generators + structured LLM tools
https://api.parcelpump.io/openapi.json is a full OpenAPI 3.0
document generated from src/api/route-registry.ts. It serves three audiences:

- Code generators can produce typed clients in any language. Recommended:

  ```sh
  npx openapi-typescript https://api.parcelpump.io/openapi.json -o pp-types.ts
  ```

- Swagger UI at https://api.parcelpump.io/docs lets a human click through every endpoint with try-it-out.
- LLMs that accept OpenAPI specs as tool definitions (most function-calling frameworks: LangChain, OpenAI function calling, Bedrock agent action groups) can ingest this directly.
Convention: every new route in src/api/server.ts adds a
corresponding entry in src/api/route-registry.ts in the same
commit. The reference doc at docs/api-reference.md is regenerated
via npm run docs:api.
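As an illustration of the convention, a registry entry might look like the sketch below. The field names are hypothetical (the real shape lives in src/api/route-registry.ts); the point is that the OpenAPI document is generated from entries like this, which is why a route and its registry entry land in the same commit:

```typescript
// Hypothetical shape for a route-registry entry — field names in the real
// src/api/route-registry.ts may differ.
interface RouteEntry {
  method: "get" | "post";
  path: string;
  summary: string;
  params: Record<string, { type: string; required: boolean; description: string }>;
}

const searchRoute: RouteEntry = {
  method: "get",
  path: "/search",
  summary: "Fuzzy typeahead search over parcels",
  params: {
    q: { type: "string", required: true, description: "Free-text query" },
    state: { type: "string", required: false, description: "Strict two-letter state filter" },
  },
};
```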
## 3. MCP server — the leverage move
Model Context Protocol is the
emerging open standard for letting LLMs discover and call tools.
Anthropic introduced it; it's now adopted by Cursor, OpenAI's
agents framework, Grok, several IDE plugins, and (via mcp-bridge)
the Bedrock and Vertex agent runtimes.
Once a user installs an MCP server in their LLM client's config, the LLM gets typed access to its tools without the user writing code or the LLM writing fetch calls. For parcelpump, that means Claude Desktop / Cursor / etc. can do this in a chat:
```
User: "Find the 5 largest farms by acreage in Franklin County WA that are owned by 'HAYDEN'"

[Claude calls parcelpump_search internally with q=hayden, state=WA]
[Claude follows up with parcelpump_get on the top hits]
[Claude formats the answer]
```
No fetch glue, no API-key juggling per request. Tools exposed:

- `parcelpump_search` — fuzzy typeahead
- `parcelpump_get` — single parcel
- `parcelpump_get_or_load` — fetch-or-scrape composite
- `parcelpump_list_sources` — discovery
- `parcelpump_get_findings` — review engine output
Source: mcp-server/src/index.ts. Adding a tool is ~25 lines —
declare schema in ListToolsRequestSchema, dispatch in
CallToolRequestSchema, ship.
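A sketch of those two halves, with illustrative names only (the real code is in mcp-server/src/index.ts; the `/parcels/:id` path and helper below are assumptions):

```typescript
// Illustrative only — not the real mcp-server code.
// Half 1: the declaration returned from the ListToolsRequestSchema handler.
const parcelpumpGetTool = {
  name: "parcelpump_get",
  description: "Fetch a single parcel by id",
  inputSchema: {
    type: "object",
    properties: { id: { type: "string", description: "Parcel id" } },
    required: ["id"],
  },
};

// Half 2: the dispatch branch inside the CallToolRequestSchema handler.
// Endpoint path is a hypothetical example; the spec at /openapi.json is authoritative.
async function dispatch(
  base: string,
  apiKey: string,
  name: string,
  args: Record<string, unknown>,
): Promise<string> {
  switch (name) {
    case "parcelpump_get": {
      const res = await fetch(new URL(`/parcels/${encodeURIComponent(String(args.id))}`, base), {
        headers: { "X-Parcelpump-Key": apiKey },
      });
      return res.text();
    }
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}
```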
### User installation (Claude Desktop)
In ~/Library/Application Support/Claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "parcelpump": {
      "command": "npx",
      "args": ["-y", "@parcelpump/mcp-server"],
      "env": {
        "PARCELPUMP_API_URL": "https://api.parcelpump.io",
        "PARCELPUMP_API_KEY": "<your hex key>"
      }
    }
  }
}
```
Restart Claude Desktop. The parcelpump tools appear in the tool list. Same shape works for Cursor (Settings → MCP) and the Zed AI panel.
### Publishing the MCP server
To publish the package to npm:

```sh
cd parcelpump/mcp-server
npm install
npm run build
npm publish --access public
```
Until then, users can install from a local checkout:
```json
{
  "mcpServers": {
    "parcelpump": {
      "command": "node",
      "args": ["/path/to/parcelpump/mcp-server/dist/index.js"],
      "env": { "PARCELPUMP_API_KEY": "..." }
    }
  }
}
```
## 4. /.well-known/ai-plugin.json — legacy compat
Older OpenAI plugin manifest. Still useful for ChatGPT custom GPTs
("Add an action") and a couple of LLM frontends that haven't moved
to MCP. Points at /openapi.json for the actual spec; minimal extra
information.
The route lives in src/api/server.ts alongside /llms.txt.
## How LLMs should drive parcelpump_search
The search tool intentionally takes minimal parameters and lets the LLM decide how much narrowing to apply. The tool description tells the LLM:
| User intent | Tool call |
|---|---|
| "Find Hayden Farms" | { q: "Hayden Farms" } |
| "Hayden Farms in Franklin County" | { q: "Hayden Farms", source: "franklin-county-wa" } |
| "Hayden Farms in WA" | { q: "Hayden Farms", state: "WA" } |
| "Smith family farms near Walla Walla" | { q: "Smith family", lat: 46.07, lon: -118.34 } |
| "Parcel 123290284" | { q: "123290284" } |
| "John Smith — somewhere in Florida" | { q: "John Smith", state: "FL" } |
| "Properties near here" (map context) | { q: <user-typed>, lat: <map.center.lat>, lon: <map.center.lon> } |
The key behaviors:

- `state`/`source` are STRICT filters. Use them only when the user explicitly constrains the search.
- `lat`/`lon` is a BIAS, never a filter. Distant exact-name matches still appear. Use it when the user says "near", "around", "close to", or when the LLM is operating on a map context.
- Default to no constraints. A unique-name search like "Hayden Farms" needs no filters; the textual signal is strong enough to rank correctly across the whole country.
- The LLM can inline lat/lon for common place names from its training data: Pasco WA → (46.24, -119.10), Walla Walla → (46.07, -118.34), Manhattan → (40.78, -73.97), etc. A future `parcelpump_geocode_place` tool will handle unfamiliar places.
Full mechanics in docs/search.md.
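The intent-to-call mapping above can be sketched as a request builder. The helper name and exact query serialization are assumptions (the spec at /openapi.json is authoritative), but it encodes the rules: `state`/`source` pass through only when given, and `lat`/`lon` are sent only as a pair:

```typescript
// Illustrative request builder for parcelpump_search — name and serialization
// are assumptions, not the real client code.
interface SearchArgs {
  q: string;
  state?: string;  // strict filter: only when the user explicitly constrains
  source?: string; // strict filter: only when the user names a specific source
  lat?: number;    // with lon: proximity bias, never a filter
  lon?: number;
}

function searchRequest(base: string, args: SearchArgs): string {
  const url = new URL("/search", base);
  url.searchParams.set("q", args.q);
  if (args.state) url.searchParams.set("state", args.state);
  if (args.source) url.searchParams.set("source", args.source);
  // lat/lon are a paired bias; an unpaired coordinate is dropped.
  if (args.lat !== undefined && args.lon !== undefined) {
    url.searchParams.set("lat", String(args.lat));
    url.searchParams.set("lon", String(args.lon));
  }
  return url.toString();
}
```

So "Hayden Farms in WA" becomes `searchRequest("https://api.parcelpump.io", { q: "Hayden Farms", state: "WA" })`, while a bare "Find Hayden Farms" sends only `q`.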
## Auth pattern recommendations
Every LLM-side surface uses the same X-Parcelpump-Key header. Two
patterns:
- Per-user keys (recommended for end-user-facing LLM apps): the user supplies their own key; it never touches a shared service. Same key issuance flow as for any other consumer (docs/lambda-deployment.md § "Creating an API key for a new consumer").
- Per-app key (for agent platforms aggregating many users): the platform holds one key with a rate limit appropriate to their fan-in. Currently no rate limit is enforced, but expect one once a third-party consumer goes live.
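Both patterns attach the key the same way. A minimal sketch (wrapper name and local types are illustrative):

```typescript
// Illustrative wrapper: every parcelpump call, from any surface, carries the
// same per-request X-Parcelpump-Key header. Types are local, not from a library.
type Init = { method?: string; headers?: Record<string, string> };

function withAuth(init: Init, apiKey: string): Init {
  // Preserve any existing headers; the key is added per request, never stored
  // in the request body or URL.
  return { ...init, headers: { ...init.headers, "X-Parcelpump-Key": apiKey } };
}
```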
## Crawler etiquette
/llms.txt and /openapi.json are deliberately scrapable. Bulk
download of /parcels or /search results for re-publication is
not permitted — those are gated by API key and any abuse leads to
key revocation. This will be formalized in a usage policy at
parcelpump.dev/policy when the marketing site exists.
## Why this combination
- /llms.txt answers "what is this?" for any LLM that lands on the domain, no auth needed.
- OpenAPI + Swagger UI is the structured spec that every tool (codegen, function-calling LLMs, IDE plugins) reads.
- MCP is the emerging high-leverage path: one user-side install, permanent typed access in any MCP client.
- ai-plugin.json is the cheap legacy bridge for the long tail of older LLM frontends.
Together they cover every realistic LLM-discovery and LLM-call path.