960 events · Updated 11:14 UTC

Probabilities for the questions that have no market.

Send any forward-looking question, get a calibrated probability backed by four frontier models. Cross-model disagreement is exposed as an uncertainty signal. Every outcome we resolve feeds back into the calibration corpus.

GPT · Claude · Gemini · Grok
Where AI agrees right now

EARN · Will Republic Services (RSG) report Q1 FY2026 revenue above $4.3B?

64%

HOUSI · Will the 30-year fixed-rate mortgage (Freddie Mac) fall below 5.75% by April 30, 2026?

34%

ELECT · Will the United States hold midterm elections on November 3, 2026?

97%

MKT · Will the US 10Y Treasury yield close above 4.50% by 2026-05-13?

61%

MACRO · Will US Q1 2026 GDP growth (second estimate) be revised higher?

57%

TRADE · Will the EU impose new tariffs on Chinese EVs before June 2026?

73%
Verified Accuracy
472 events resolved and Brier scored

BRIER SCORE: 0.242 avg across 472 resolved
CRENE RESOLVED: 472 events verified & Brier scored
DATASET SCHEMA
question_id · category · consensus
spread · confidence · claude_prob
gpt4o_prob · gemini_prob · grok_prob
outcome · resolution_date · brier_score
Updated every 6 hours
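As a sketch of what the schema above supports, the CSV export can be consumed with nothing but the Python standard library. The two rows below are hypothetical, and the header order is an assumption; only the field names come from the published schema:

```python
import csv
import io

# Two hypothetical rows using the published schema fields;
# the column order here is an assumption, not the real export's.
sample_csv = """\
question_id,category,consensus,spread,confidence,claude_prob,gpt4o_prob,gemini_prob,grok_prob,outcome,resolution_date,brier_score
ev-001,MACRO,0.57,0.04,medium,0.58,0.55,0.56,0.59,1,2026-03-27,0.185
ev-002,MKT,0.61,0.09,high,0.64,0.57,0.60,0.63,0,2026-05-13,0.372
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Average Brier score across the sample rows:
avg_brier = sum(float(r["brier_score"]) for r in rows) / len(rows)

# Filter to contested events, i.e. cross-model spread above a threshold:
contested = [r for r in rows if float(r["spread"]) > 0.05]
```

The same pattern (parse, then filter on `spread` or `category`) scales to the full 200-event sample dataset.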
960 active · 657 resolving in 30d
Accuracy by confidence tier · 472 resolved: HIGH 60% · MED 57% · LOW 60%

Calibration curve (predicted probability vs. actual outcome rate)
Spread distribution (bins: 0-10, 10-20, 20-30, 30-40, 40-50, 50+)
Average model probability: GPT 54% · Claude 56% · Gemini 54% · Grok 56%
Data evaluated by Neudata, Eagle Alpha, and Monda
Download sample dataset
200 CRENE events · Full schema · 4 model probabilities
Methodology

Measured trust, not raw prediction

Raw LLM probabilities are not reliable. Crene tracks every prediction against real outcomes and corrects bias over time. The result is a probability you can act on.

01
Question
Any forward-looking question with a binary outcome. Crene also generates a structured corpus of events spanning macro, rates, crypto, and policy.
02
Forecast
Four frontier models (Claude, GPT, Gemini, Grok) forecast independently. No model sees another's output. Cross-model disagreement is exposed as an uncertainty signal.
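Cross-model disagreement can be collapsed to a single number in many ways; the sketch below uses the mean for consensus and the max-min spread for disagreement, which is an illustrative assumption, not Crene's published aggregation:

```python
from statistics import mean

def aggregate(model_probs: dict[str, float]) -> tuple[float, float]:
    """Consensus as the plain mean; disagreement as the max-min spread."""
    values = list(model_probs.values())
    return mean(values), max(values) - min(values)

# Per-model probabilities for one hypothetical question:
consensus, disagreement = aggregate(
    {"claude": 0.56, "gpt": 0.54, "gemini": 0.54, "grok": 0.56}
)
```

A small spread means the four models converge independently; a large one is itself a signal that the question is genuinely uncertain.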
03
Resolve
Outcomes resolved against tier-A authoritative sources: government data, central banks, SEC filings, primary financial reporting. Every resolution carries a citation.
04
Calibrate
Resolved outcomes feed back into the calibration corpus. Brier scoring per model, per domain. The longer the system runs, the more trustworthy the probabilities become.
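The Brier score named above is the mean squared error between a forecast probability and the 0/1 outcome; a minimal sketch with hypothetical resolved events:

```python
def brier(forecasts: list[tuple[float, int]]) -> float:
    """Mean squared error between forecast probability and the 0/1 outcome.

    0.0 is perfect; always guessing 50% scores 0.25.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Three hypothetical resolved events: (forecast probability, actual outcome)
score = brier([(0.97, 1), (0.34, 0), (0.61, 1)])
```

Lower is better, so a 0.24 average across 472 events is the number to watch as the corpus grows.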
Who this is for

Built for systems that need to act under uncertainty

AI Agents
Probabilistic reasoning for autonomous workflows. When an agent needs to estimate the likelihood of an outcome before acting, Crene returns a probability with calibrated uncertainty.
Decision Tools
Internal forecasting for deals, launches, hiring, regulatory outcomes. The 99% of decisions that have no liquid market still need probabilities. Crene provides them.
Quantitative Researchers
Structured AI consensus on the existing CRENE-resolved corpus across macro, rates, crypto, and policy. Full per-model history with Brier scoring for calibration analysis and backtesting.
API · REST + CSV EXPORT

One endpoint. Any question. Calibrated.

POST a question, receive a probability with disagreement and per-model breakdown. Currently in private access while we onboard design partners.

# Probability API (private access)
curl -X POST https://api-get.crene.com/probability \
  -H "X-API-Key: crene_..." \
  -H "Content-Type: application/json" \
  -d '{"question": "Will competitor X launch product Y before September?"}'

# Response
# {
#   "probability": 0.34,
#   "disagreement": 0.18,
#   "confidence": "medium",
#   "models": { "claude": {...}, "gpt": {...}, "gemini": {...}, "grok": {...} }
# }

Public read endpoints at /api/events/ remain available for the resolved-event corpus. Request /probability access at stephen@crene.com.
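The curl call above can be mirrored with Python's standard library. The network call itself is not executed here since the endpoint is in private access; the sample body is copied from the documented response shape, and the parsing assumes only the top-level fields shown in it:

```python
import json
import urllib.request

API_URL = "https://api-get.crene.com/probability"  # endpoint from the curl example

def build_request(question: str, api_key: str) -> urllib.request.Request:
    """Assemble the same POST the curl example sends (not executed here)."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"question": question}).encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
    )

def parse_response(raw: str) -> tuple[float, float, str]:
    """Pull the documented top-level fields out of a response body."""
    data = json.loads(raw)
    return data["probability"], data["disagreement"], data["confidence"]

# Sample body copied from the documented response shape above:
sample = '{"probability": 0.34, "disagreement": 0.18, "confidence": "medium", "models": {}}'
prob, disagreement, confidence = parse_response(sample)
```

With an API key, passing the result of `build_request(...)` to `urllib.request.urlopen` performs the actual call.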

See what Crene measures

The resolved-event corpus that backs the probability API. Every forecast, every outcome, every citation.