Probability for the questions
that have no market.
Acquisitions, launches, contract renewals, regulatory bets: the most consequential decisions companies make have no probability layer. Crene gives them one, using multi-model AI calibrated against 816 resolved real-world outcomes.
The edge is real but small.
We measured how four frontier AI models forecast real-world events. Across 816 resolved outcomes, the consensus Brier score is 0.237 against the 0.25 no-skill baseline, with directional accuracy of 59.7%.
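For readers who want the metric made concrete, the Brier score is the mean squared error between forecast probabilities and binary outcomes. A forecaster who always says 50% scores exactly 0.25, which is why that is the no-skill baseline. A minimal sketch with toy numbers (not the Crene corpus):

```python
# Brier score: mean squared error between forecast probability and
# binary outcome (1 = happened, 0 = did not). Lower is better.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Toy data for illustration: three forecasts and what actually happened.
print(brier_score([0.9, 0.2, 0.7], [1, 0, 0]))  # 0.18
# Always guessing 0.5 lands exactly on the no-skill baseline:
print(brier_score([0.5, 0.5, 0.5], [1, 0, 0]))  # 0.25
```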
That result clarified the product. Most decisions don't have markets. Crene exists to give those decisions a measurable probability.
Calibrated, not predictive.
When the consensus says 70%, outcomes resolve close to 70% of the time. When it says 30%, outcomes resolve close to 30%. The probabilities mean what they say, even when the directional edge is small.
Forecast probability vs realized rate · 816 resolved
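The calibration claim above is simple to check in principle: bucket forecasts by stated probability, then compare each bucket's level with the rate at which its outcomes actually resolved yes. A toy sketch of that check (illustrative data, not the 816-event corpus):

```python
# Group forecasts into buckets by rounded probability, then compare
# each bucket's forecast level with its realized yes-rate. Calibrated
# forecasts put every bucket close to the diagonal.
def calibration_table(forecasts, outcomes):
    buckets = {}
    for p, o in zip(forecasts, outcomes):
        buckets.setdefault(round(p, 1), []).append(o)
    # bucket -> (realized yes-rate, sample count)
    return {b: (sum(os) / len(os), len(os)) for b, os in sorted(buckets.items())}

# Ten forecasts at 70%: calibrated if about 7 in 10 resolve yes.
print(calibration_table([0.7] * 10, [1] * 7 + [0] * 3))  # {0.7: (0.7, 10)}
```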
Measured against real outcomes, not raw predictions
Raw LLM probabilities are not reliable. Crene tracks every prediction against real outcomes and corrects bias over time. The result is a probability you can act on.
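One simple form such a correction can take (an illustration of the idea, not Crene's actual method) is histogram recalibration: map each raw model probability to the realized outcome rate of past forecasts in the same bucket.

```python
# Histogram recalibration sketch: learn, from resolved history, what a
# raw probability in each bucket actually meant, then replace new raw
# probabilities with that realized rate.
def fit_recalibrator(past_forecasts, past_outcomes, n_buckets=5):
    buckets = {}
    for p, o in zip(past_forecasts, past_outcomes):
        b = min(int(p * n_buckets), n_buckets - 1)
        buckets.setdefault(b, []).append(o)
    rates = {b: sum(os) / len(os) for b, os in buckets.items()}

    def recalibrate(p):
        b = min(int(p * n_buckets), n_buckets - 1)
        return rates.get(b, p)  # unseen bucket: fall back to the raw value

    return recalibrate

# History says forecasts near 0.7 resolved yes 3 times out of 4,
# so a new raw 0.65 is corrected to 0.75.
recal = fit_recalibrator([0.7, 0.7, 0.7, 0.7], [1, 1, 1, 0])
print(recal(0.65))  # 0.75
```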
Built for systems that need to act under uncertainty
One endpoint. Any question. Calibrated.
POST a question and receive a probability with disagreement and a per-model breakdown. Currently in private beta. Email stephen@crene.com for access.
# Probability API (private access)
curl -X POST https://api-get.crene.com/probability \
-H "X-API-Key: crene_..." \
-H "Content-Type: application/json" \
-d '{"question": "Will competitor X launch product Y before September?"}'
# Response
# {
# "probability": 0.34,
# "disagreement": 0.18,
# "confidence": "medium",
# "models": { "claude": {...}, "gpt": {...}, "gemini": {...}, "grok": {...} }
# }

Public read endpoints at /api/events/ remain available for the resolved event corpus. Request /probability access at stephen@crene.com.
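The curl call above translates directly into any HTTP client. A minimal Python sketch using only the standard library, with a placeholder API key (access is by request during the private beta):

```python
import json
import urllib.request

API_URL = "https://api-get.crene.com/probability"

def build_request(question, api_key):
    """Build the POST request shown in the curl example above."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"question": question}).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def get_probability(question, api_key):
    """POST a question; return the parsed JSON response
    (probability, disagreement, confidence, per-model breakdown)."""
    with urllib.request.urlopen(build_request(question, api_key)) as resp:
        return json.load(resp)
```

A caller might then branch on both fields, e.g. act on `probability` only when `disagreement` is low, since high disagreement means the models do not share a view.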
See what Crene measures
The resolved event corpus that backs the probability API. Every forecast, every outcome, every citation.