How many times do you need to shuffle a deck of cards before it’s truly random? (About seven good riffle shuffles.) How much uranium does it take to build a nuclear bomb? How does Google know which page you actually wanted?
These aren’t pub-trivia questions. They’re linked by a 100-year-old fight in Russia, where two mathematicians — one religious and conservative, the other combative and secular — turned probability theory into a battlefield. Out of their quarrel came ideas that echo through nuclear design, web search, and today’s predictive text. And it all started with coin flips.
The Tsar of Probability vs. Andrey the Furious
In 1905, Russia was on fire. Socialist uprisings spread across the empire, threatening the Tsar’s grip on power. The streets were full of workers, radicals, and police sabers. The divisions ran so deep that even mathematicians picked sides.
On one side stood Pavel Nekrasov, nicknamed the Tsar of Probability. A deeply religious man, Nekrasov believed math could prove the existence of free will, even the will of God. Numbers weren’t just numbers — they were divine fingerprints.
His nemesis was Andrey Markov, known by peers as Andrey the Furious. An atheist and a radical, Markov had no patience for what he called “the abuses of mathematics.” In his eyes, Nekrasov was peddling superstition in a lab coat.
Their intellectual battle was fought over the Law of Large Numbers, a theorem proved by the Swiss mathematician Jacob Bernoulli and published posthumously in 1713. The law says that if you flip a fair coin enough times, the proportion of heads will settle ever closer to 50/50. It was the cornerstone of probability theory — at least until Markov started tearing it apart.
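You can watch the law at work in a few lines of code. A quick simulation sketch, nothing Bernoulli himself would recognize:

```python
import random

# Flip a fair coin n times and track the proportion of heads.
# As n grows, the proportion drifts toward 0.5 -- the Law of
# Large Numbers in miniature.
for n in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: {heads / n:.4f} heads")
```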

Free will, Belgian marriages, and the abuse of statistics
Nekrasov doubled down on Bernoulli’s framework. Bernoulli’s proof assumed the underlying events were independent, like separate coin flips, and Nekrasov turned that assumption into a requirement. If you observed the law of large numbers in real-world data — say, crime rates, birth rates, or marriages in 19th-century Belgium — the decisions behind the data must have been independent. And if crime numbers converged like coin flips, the crimes themselves must be the result of free choices.
In other words, math confirmed free will.
Markov thought this was delusional. He saw dependency everywhere. People don’t make decisions in isolation; they’re influenced — by peers, culture, propaganda, even the price of bread. If Nekrasov’s math suggested otherwise, the math was wrong.
To prove it, Markov turned to literature — specifically, Eugene Onegin, Alexander Pushkin’s national epic. By analyzing 20,000 consecutive letters from the poem, Markov showed that vowels and consonants weren’t independent events. Whether the next letter was a vowel depended heavily on what came before.
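We don’t have Markov’s hand tallies, but his counting scheme is easy to recreate. Here’s a sketch in Python, using English vowels rather than Markov’s Cyrillic ones, with the file name standing in for whatever long text you have on hand:

```python
from collections import Counter

VOWELS = set("aeiou")  # Markov counted Cyrillic letters; English here

def transition_counts(text):
    """Tally vowel/consonant pairs among consecutive letters."""
    letters = [c.lower() for c in text if c.isalpha()]
    pairs = Counter()
    for prev, nxt in zip(letters, letters[1:]):
        pairs["V" if prev in VOWELS else "C",
              "V" if nxt in VOWELS else "C"] += 1
    return pairs

# If letters were independent, P(vowel | vowel) would match
# P(vowel | consonant). On real text they differ sharply -- Markov's point.
pairs = transition_counts(open("onegin.txt").read())  # any long text works
for prev in "VC":
    total = pairs[prev, "V"] + pairs[prev, "C"]
    print(f"P(vowel | {prev}) = {pairs[prev, 'V'] / total:.3f}")
```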
Yet — and here’s the genius move — he demonstrated that even dependent events followed the law of large numbers. Nekrasov’s philosophical house of cards collapsed. Free will couldn’t be smuggled into mathematics.
Markov ended his paper with a decisive conclusion:
“Thus, free will is not necessary to do probability.”
What he built instead would later bear his name: the Markov chain.
Memoryless systems and the birth of prediction
A Markov chain is simple: Each event depends only on the state immediately before it. It doesn’t care about the entire history — just what’s happening right now.
Think about weather: Tomorrow’s forecast depends heavily on today’s conditions, but not directly on what happened last month. Or disease spread: your odds of catching the flu depend on who’s infected today, not who coughed on someone years ago.
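In code, a Markov chain is barely more than a lookup table. A minimal sketch, with weather probabilities invented purely for illustration:

```python
import random

# A toy two-state Markov chain. The numbers are made up; the point is
# the memorylessness -- tomorrow is drawn using only today's state.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(today):
    """Draw tomorrow's weather from today's row of the table."""
    states, probs = zip(*TRANSITIONS[today].items())
    return random.choices(states, weights=probs)[0]

weather = "sunny"
for day in range(10):
    print(f"day {day}: {weather}")
    weather = step(weather)
```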

This “memoryless” property became the backbone of a new way of doing probability. Markov probably had no idea he’d just written the operating manual for the future.
Because 40 years later, it would help build the bomb.
Solitaire, neutrons, and the Monte Carlo gamble
Fast-forward to Los Alamos, 1946. The war is over, but mathematician Stanislaw Ulam is bedridden with encephalitis, his brain swollen and his body weak. To pass the time, he plays endless games of solitaire.
As he plays, he starts wondering: What are the odds of winning a randomly shuffled game? The math is impossibly large — 52 factorial possible card arrangements, a 68-digit number. But Ulam realizes he can approximate the answer by playing a lot of games and counting how many he wins.
It’s a revelation.
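Solitaire itself is fiddly to simulate, so here is Ulam’s trick applied to a simpler card question of my own choosing: how often does a shuffled deck have an ace among its top five cards?

```python
import random

# Ulam's move: when enumerating 52! arrangements is hopeless,
# just deal a lot of random hands and count.
deck = ["ace"] * 4 + ["other"] * 48
trials, hits = 100_000, 0
for _ in range(trials):
    random.shuffle(deck)
    if "ace" in deck[:5]:
        hits += 1

print(f"Monte Carlo estimate: {hits / trials:.4f}")
# Exact answer for comparison: 1 - C(48,5)/C(52,5), about 0.3412.
# The simulation lands close without touching the combinatorics.
```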
Back in the lab, Ulam and John von Neumann apply the same idea to something much deadlier: neutron behavior in a nuclear core. Predicting how neutrons would scatter, split, or escape was a nightmare calculation. But what if they modeled it as a Markov chain, stepping through probabilities of scattering, absorption, or fission?
Run it thousands of times, average the outcomes, and you had an answer: How much uranium it really took to make a bomb.
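A cartoon of that calculation looks like this. Every number below is invented for illustration (real values depend on geometry, density, and neutron energy), but the shape of the computation is faithful: follow each neutron through a chain of random events, repeat many times, average.

```python
import random

# Toy neutron chain: the probabilities are invented for illustration
# (real values depend on geometry, density, and neutron energy).
P_FISSION = 0.35  # chance a neutron triggers fission; otherwise it
                  # is absorbed or escapes, ending its chain

def descendants(generations=20):
    """Count all descendants of one starting neutron."""
    population, total = 1, 0
    for _ in range(generations):
        births = 0
        for _ in range(population):
            if random.random() < P_FISSION:
                births += 2 if random.random() < 0.5 else 3  # avg 2.5
        population = births
        total += births
        if population == 0:
            break
    return total

trials = 5_000
avg = sum(descendants() for _ in range(trials)) / trials
print(f"average descendants per starting neutron: {avg:.1f}")
# Each neutron here yields 0.35 * 2.5 = 0.875 successors on average, so
# the chain fizzles. Push that figure above 1 -- add more uranium -- and
# the reaction sustains itself. Finding that threshold was the whole game.
```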
They called it the Monte Carlo method, after the casino in Monaco. Gambling, math, and the atomic age — all in one package.
From nukes to nets: The algorithm that ate the Internet
By the 1990s, the world had a new chaos to organize: the Internet. Thousands of web pages were popping up daily, each one shouting for attention. Early search engines like AltaVista and Lycos tried brute force: count how many times a word appeared on a page. Want to rank for “cars”? Just type “cars cars cars” in white text on a white background. Congratulations, you’re #1.

Enter two Stanford PhD students: Larry Page and Sergey Brin. They realized you could model the web itself as a Markov chain. Each webpage was a state, each hyperlink a transition. A surfer randomly clicking links would spend more time on “important” pages — the ones with many quality backlinks.
That became PageRank, the algorithm that launched Google.
The brilliance was in its memorylessness: it didn’t need the entire history of the web, just the current structure of links. The same math that modeled Russian poetry and neutron chains was now ranking cat memes and conspiracy blogs.
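Here’s a toy version of the random-surfer computation on a four-page web I just made up. The 0.85 damping factor (the chance the surfer follows a link rather than jumping to a random page) is the value Page and Brin’s paper suggested:

```python
# Four made-up pages and their outgoing links.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}  # start the surfer anywhere

for _ in range(50):  # power iteration: repeat until the ranks settle
    new = {p: (1 - 0.85) / len(pages) for p in pages}  # random jumps
    for page, outgoing in links.items():
        for target in outgoing:
            new[target] += 0.85 * rank[page] / len(outgoing)
    rank = new

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))  # "C" wins: everyone links to it
```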
By 2000, “to Google” had become a verb. By 2025, Google’s parent company Alphabet is worth nearly US$2 trillion. And at the core of that empire? A Markov chain.
Claude Shannon and the prediction game
But there’s another twist. In the 1940s, Claude Shannon, the father of information theory, started tinkering with Markov chains to model English text. If you know the last letter, you can predict the next one. If you know the last two, you do even better. By scaling up, Shannon essentially sketched the blueprint for predictive text and — eventually — language models like GPT.
Today, your Gmail drafts, your phone’s autocorrect, and even ChatGPT itself rely on descendants of Markov’s chains. The system looks at the current context and asks: What’s the most likely next token?
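A bare-bones version of Shannon’s game fits in a few lines. This sketch learns order-2 character statistics from whatever text you feed it (the file name is a placeholder) and babbles accordingly:

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each two-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, order=2, length=200):
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += random.choice(choices)  # sample the next character
    return out

text = open("some_book.txt").read()  # placeholder for your training text
print(generate(train(text), seed=text[:2]))
```

Real language models swap the lookup table for a neural network and characters for tokens, but the question at each step is the same.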
In other words: We’re all living in Markov’s probability machine.
When prediction eats itself
But there’s a catch. If AI-generated text starts flooding the Internet and then gets used as training data for future AIs, the system can collapse. Models start learning from themselves, spiraling into repetitive nonsense — a kind of digital inbreeding that researchers call model collapse.

The same risk shows up in climate models, social networks, and even politics. Positive feedback loops break a core assumption of Markov-style prediction: that the transition probabilities stay fixed while the chain runs. Instead of stabilizing, such systems spiral. More CO₂ means more warming means more water vapor means more warming. More outrage online means more clicks means more outrage.
Markov’s math helps us predict — but it also shows us where prediction breaks down.
Seven shuffles and the illusion of randomness
So back to the original question: how many times do you need to shuffle a deck of cards to make it random?
The answer, worked out in 1992 by mathematicians Dave Bayer and Persi Diaconis using Markov chains, is seven riffle shuffles. Shuffle fewer times, and the deck isn’t random enough — patterns remain. Shuffle more, and you’re just wasting time.
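You can even rerun the experiment, roughly. This sketch uses the Gilbert-Shannon-Reeds model of a riffle shuffle, the idealization Bayer and Diaconis analyzed, and watches how fast the original top card’s position spreads out:

```python
import random

def riffle(deck):
    # Gilbert-Shannon-Reeds shuffle: cut roughly in half, then drop
    # cards with probability proportional to each packet's remaining size.
    cut = sum(random.random() < 0.5 for _ in deck)
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        if random.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def still_near_top(shuffles, trials=5_000):
    """How often is the original top card still in the top 5?"""
    hits = 0
    for _ in range(trials):
        deck = list(range(52))
        for _ in range(shuffles):
            deck = riffle(deck)
        hits += deck.index(0) < 5
    return hits / trials

# A perfectly random deck would give 5/52, about 0.096.
for k in (1, 2, 7):
    print(f"{k} shuffles: P(top card still in top 5) = {still_near_top(k):.3f}")
```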
But if you shuffle like most of us do, with awkward clumps and sloppy bridges, it’s not seven. It’s more like 2,000.
Which is maybe the point. Randomness is hard. Prediction is harder. And yet, from Russian revolutionaries to Silicon Valley billionaires, humanity keeps trying to bend chance into certainty.
The long shadow of a math feud
It’s almost comical that so much of modern life — nuclear deterrence, the search box on your phone, AI chatbots writing this sentence — traces back to two men in Imperial Russia, arguing over whether God’s will could be measured with statistics.
Markov, driven by spite, created a tool to win an academic duel. Nekrasov, guided by faith, lost the argument but set the stage.
Out of their feud came an intellectual technology that governs the modern world.
Next time you Google something, play solitaire, or shuffle cards before poker night, remember: you’re living inside the fallout of a century-old math quarrel.
And don’t forget — seven riffles or it doesn’t count.