What should a prediction market API give an AI agent?
A prediction market API for AI agents should return more than price. It should return the market question, current probability, recent deltas, liquidity, spread, resolution source, signal evidence, caveats, sources, and suggested next calls. The goal is not just data access; it is grounded reasoning.
Agents fail when they summarize stale or incomplete market state. They perform better when the API gives them a compact decision card: what changed, why it matters, what is uncertain, and what to inspect next.
Agents need small, complete answers
Human dashboards can rely on visual scanning. Agents need explicit structure. A good endpoint should not force an agent to scrape a rendered page, infer which rows matter, and guess whether a market is resolved.
The better pattern is a typed response with stable fields: schema version, generated time, valid-until time, evidence list, risks, source links, scores, and recommended agent action. A sketch of one possible shape follows the list below.
- Schema version: so parsers do not break silently.
- Valid-until: so agents know when to refresh.
- Sources: so generated answers can cite the data they used.
- Risks: so unresolved or ambiguous markets are not overstated.
- Suggested next calls: so agents can fan out only when needed.
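Put together, a response like that can be expressed as a small typed contract. The sketch below is illustrative only: the field names follow the list above and are not a published Orrery schema.

```typescript
// Illustrative shape for a market decision card. Field names are
// assumptions based on the list above, not Orrery's actual schema.
interface DecisionCard {
  schema_version: string;        // bump on breaking changes so parsers fail loudly
  generated_at: string;          // ISO 8601 timestamp the card was built
  valid_until: string;           // after this, the agent should refresh
  market: {
    id: string;
    question: string;
    probability: number;         // current implied probability, 0..1
    delta_24h: number;           // recent move, signed
    liquidity: number;
    spread: number;
    resolution_source: string;
  };
  evidence: { summary: string; source_url: string }[];
  risks: string[];               // unresolved or ambiguous aspects, stated plainly
  scores: { attention: number; confidence: number };
  suggested_action:
    | "monitor"
    | "investigate_now"
    | "check_resolution_risk"
    | "check_liquidity"
    | "check_sources"
    | "alert_human";
  next_calls: string[];          // endpoints worth fanning out to, if any
}
```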
Freshness is part of correctness
Prediction markets are time-sensitive. A market summary that was correct 30 minutes ago may be misleading after a large trade, new source update, or resolution event. Agent APIs should expose fetched_at, cache_seconds, and valid_until fields so downstream systems can decide when to refresh.
This is especially important for LLM answers. Without freshness metadata, an answer can sound confident while being stale. With freshness metadata, the agent can say when the data was observed and whether a refresh is needed.
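A minimal sketch of how a caller might act on those fields, assuming the fetched_at, cache_seconds, and valid_until names used above:

```typescript
// Decide whether a cached card is still usable, based on the freshness
// fields named above (fetched_at, cache_seconds, valid_until).
function needsRefresh(card: {
  fetched_at: string;
  cache_seconds: number;
  valid_until: string;
}): boolean {
  const now = Date.now();
  const fetchedAt = Date.parse(card.fetched_at);
  const validUntil = Date.parse(card.valid_until);
  // Refresh if the explicit validity window has passed, or if the cache
  // window implied by cache_seconds has elapsed since the data was fetched.
  return now > validUntil || now - fetchedAt > card.cache_seconds * 1000;
}
```

An agent can run this check before reusing a cached card and include fetched_at in any answer it generates, so staleness is visible rather than implicit.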
Why Markdown twins help LLM citation
Rendered dashboards are useful for humans, but they include navigation, chrome, scripts, and layout. LLMs need a cleaner artifact. A Markdown twin gives the same market or brief in a compact, citation-friendly format.
Orrery publishes Markdown variants for market pages and the Daily Brief, plus llms.txt and llms-full.txt. That does not guarantee any model will rank or cite the site, but it gives crawlers and answer engines a stable, low-noise version of the content.
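For an agent that wants the low-noise version directly, fetching the twin is a single request. The ".md" URL convention in this sketch is an assumption for illustration; the actual paths are whatever llms.txt advertises.

```typescript
// Fetch the Markdown twin of a market page for LLM grounding.
// Appending ".md" is an assumed convention for this sketch; check the
// site's llms.txt for the real location of plain-text variants.
async function fetchMarkdownTwin(marketUrl: string): Promise<string> {
  const res = await fetch(`${marketUrl}.md`, {
    headers: { Accept: "text/markdown, text/plain" },
  });
  if (!res.ok) throw new Error(`Markdown twin unavailable: ${res.status}`);
  return res.text();
}
```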
Why x402 fits agent workflows
Traditional SaaS payments assume a human signs up, chooses a plan, stores a card, and manages seats. Agents often need one answer at a time. HTTP 402 plus x402 changes the interface: the endpoint tells the caller the price, the agent pays, and the same request returns the answer.
For prediction market intelligence, per-call pricing fits the shape of the work. A lightweight search or movers endpoint can be cheap. A deep single-market decision card can cost more. The agent buys the depth it needs.
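A rough sketch of that flow, with createPaymentHeader standing in for an x402 client library (a hypothetical helper here); the X-PAYMENT header and challenge body follow the general x402 pattern and should be checked against the spec and the endpoint's documentation.

```typescript
// Sketch of a pay-per-call request: ask, receive a 402 challenge, pay, retry.
async function paidGet(url: string): Promise<unknown> {
  const first = await fetch(url);
  if (first.status !== 402) return first.json();   // free or already authorized

  const challenge = await first.json();             // price + accepted payment methods
  const paymentHeader = await createPaymentHeader(challenge); // sign and encode payment

  const second = await fetch(url, { headers: { "X-PAYMENT": paymentHeader } });
  if (!second.ok) throw new Error(`Paid request failed: ${second.status}`);
  return second.json();
}

// Hypothetical helper: in practice this would come from an x402 client SDK.
declare function createPaymentHeader(challenge: unknown): Promise<string>;
```

The point is the shape of the interaction: no account, no stored card, one priced request per answer.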
Safe agent actions beat directional advice
Agent APIs should be careful with financial-adjacent language. Orrery's decision cards use actions like monitor, investigate_now, check_resolution_risk, check_liquidity, check_sources, and alert_human. They do not say buy or sell.
That keeps the system useful without pretending that market intelligence is a complete decision. The API can rank attention and expose evidence; the caller remains responsible for any downstream action.
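The action vocabulary itself can be a closed type, which keeps downstream handling explicit. The action names below are from Orrery's decision cards as described above; the routing is one possible, deliberately non-directional interpretation.

```typescript
// The action vocabulary from the decision cards above. None of these map
// to a trade; the caller decides what each one triggers in its own system.
type AgentAction =
  | "monitor"
  | "investigate_now"
  | "check_resolution_risk"
  | "check_liquidity"
  | "check_sources"
  | "alert_human";

// One possible, non-directional mapping of actions to follow-up steps.
function describeNextStep(action: AgentAction): string {
  switch (action) {
    case "monitor":               return "re-check this market on the next cycle";
    case "investigate_now":       return "pull the full decision card and evidence";
    case "check_resolution_risk": return "review the resolution source and criteria";
    case "check_liquidity":       return "inspect liquidity and spread before relying on the price";
    case "check_sources":         return "verify the cited sources directly";
    case "alert_human":           return "escalate to a human operator";
  }
}
```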
FAQ
Can an AI agent use prediction market data safely?
Yes, if the data is fresh, sourced, explicit about uncertainty, and framed as research rather than investment advice. Agents should cite timestamps and source fields.
What is x402 in this context?
x402 is an HTTP-native payment pattern where paid endpoints return a 402 payment challenge and a verified request returns the data. It lets agents buy individual API calls.
Why provide Markdown pages if there is already an API?
APIs are best for structured automation. Markdown pages are best for citation, retrieval, and low-noise LLM grounding. They complement each other.