Monday 30 March 2026 · Afternoon Edition

ZOTPAPER

News without the noise


AI & Machine Learning

Ask ChatGPT to Pick a Number From 1 to 10,000 and It Will Almost Always Choose Between 7,200 and 7,500

A viral Reddit experiment reveals a striking pattern in how large language models generate supposedly random numbers

Zotpaper · 2 min read

A Reddit user has demonstrated that when asked to pick a random number between 1 and 10,000, ChatGPT overwhelmingly selects numbers in the 7,200 to 7,500 range — a finding that highlights the fundamental inability of language models to produce genuine randomness.

The experiment, which gained traction on both Reddit and Hacker News, involved repeatedly prompting ChatGPT to pick a number in that range. The results clustered heavily in the 7,200-7,500 band, far from the uniform distribution a truly random process would produce.
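For context, a 301-number band out of 10,000 should capture roughly 3% of picks under uniform sampling. A small sketch (using simulated picks, not the Redditor's actual logs) shows how such clustering can be measured:

```python
import random

def band_fraction(picks, lo=7200, hi=7500):
    """Fraction of picks landing inside the [lo, hi] band."""
    return sum(lo <= p <= hi for p in picks) / len(picks)

# Baseline: a genuinely uniform sampler over 1..10,000.
rng = random.Random(0)
uniform_picks = [rng.randint(1, 10_000) for _ in range(1_000)]

# Under uniformity the band share should hover near 301/10,000 ≈ 3%;
# the reported ChatGPT behaviour would push this toward 100%.
print(f"uniform sampler band share: {band_fraction(uniform_picks):.1%}")
```

Running the same tally over a log of model responses makes the bias immediately quantifiable.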

The finding is not entirely surprising to AI researchers, who have long noted that language models are pattern-matching systems rather than random number generators. The bias likely reflects patterns in the training data — humans themselves tend to favour certain numbers when asked to pick randomly.

Analysis

Why This Matters

The experiment is a vivid illustration of a deeper truth about large language models: they do not produce statistically uniform randomness. Each answer is sampled from a probability distribution over tokens that is shaped entirely by patterns learned from training data, so some numbers are simply far more likely than others, no matter how insistently the prompt asks for a random choice.
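The mechanism can be sketched in miniature: "picking a number" is just sampling from the model's output distribution. With made-up scores that favour one candidate (illustrative values, not real model outputs), the bias follows directly:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate answers; "7342" scores highest,
# standing in for whatever the training data made most likely.
candidates = ["42", "1234", "7342", "9999"]
logits = [1.0, 0.5, 3.0, 0.2]

probs = softmax(logits)
rng = random.Random(0)
picks = rng.choices(candidates, weights=probs, k=1_000)
# The favoured candidate dominates even though each draw is "random".
```

Lowering the sampling temperature concentrates the distribution further, which is why the favoured band shows up so consistently.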

Background

Humans are notoriously bad at generating random numbers too, tending to favour odd numbers and those ending in 7. The LLM bias may simply be amplifying human biases encoded in its training data.

Key Perspectives

For most use cases this is trivial, but it matters for any application where LLMs are expected to make unbiased selections — from shuffling options to generating test data.
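Where unbiased selection actually matters, the practical fix is to keep randomness outside the model and use the operating system's entropy source instead of a prompt. A minimal Python sketch:

```python
import random
import secrets

def pick_uniform(lo=1, hi=10_000):
    """Draw a uniformly distributed integer in [lo, hi] from the OS CSPRNG."""
    return lo + secrets.randbelow(hi - lo + 1)

def shuffle_options(options):
    """Unbiased shuffle backed by the system entropy source."""
    shuffled = list(options)
    random.SystemRandom().shuffle(shuffled)
    return shuffled
```

The model can still phrase or act on the result; it just should not be the source of the randomness.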

What to Watch

Whether other models show similar patterns, and whether model providers add true random number generation as a tool call to address the limitation.
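If a provider did expose randomness as a tool, it might look something like the following; the schema and handler are purely hypothetical, written in the style of existing function-calling APIs, not an announced feature:

```python
import json
import secrets

# Hypothetical tool definition in the common function-calling style.
RANDOM_INT_TOOL = {
    "type": "function",
    "function": {
        "name": "random_int",
        "description": "Return a uniformly random integer in [low, high].",
        "parameters": {
            "type": "object",
            "properties": {
                "low": {"type": "integer"},
                "high": {"type": "integer"},
            },
            "required": ["low", "high"],
        },
    },
}

def handle_random_int(arguments: str) -> str:
    """Serve the tool call with a real CSPRNG instead of token sampling."""
    args = json.loads(arguments)
    value = args["low"] + secrets.randbelow(args["high"] - args["low"] + 1)
    return json.dumps({"value": value})
```

The model would request the tool, and the runtime would supply a genuinely uniform draw.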
