Chapter 6 · Stochastic Processes in Finance

The Markov Property

The memoryless property — the present is all you need to predict the future.
The core idea

Almost every model a hedge fund builds rests on a single, powerful assumption. It is deceptively simple: the future depends only on the present — not on the past. It does not matter how the price got here. It does not matter what happened yesterday, or last week, or five years ago. Only where you are right now determines where you are going next. This is the Markov Property.

[Figure: a timeline of states. Past states X₀ (3 days ago), X₁ (2 days ago), … , Xₙ₋₁ (yesterday) are crossed out as the irrelevant past; the present state Xₙ ("right now — all you need") is highlighted; the next state Xₙ₊₁ is predicted from Xₙ only.]

The entire history is crossed out. Only the present state connects to the future.

1 · The goldfish analogy

The easiest way to understand the Markov Property is through a goldfish. A goldfish, famously, has a very short memory. Now imagine two goldfish swimming through your Price Landscape, a surface representing price, volume, and time simultaneously. One fish has a perfect memory. The other forgets everything after three seconds.

[Figure: two panels. Left, the Non-Markovian fish trails its full history (3 days ago, yesterday, 1 hour ago) and needs that entire path to predict its next move. Right, the Markovian fish has a 3-second memory and needs only its current (t, p, v) coordinate.]

Same fish, same pool — completely different computational requirements to predict the next move.

The Non-Markovian fish's entire history is encoded in the path it has swum. To predict where it goes next, you must load all of that history into memory. The Markovian fish simply looks at its current coordinate — price, volume, time — and that single point contains everything the model needs.

2 · The mathematical definition, decoded

In your textbook, you will encounter a formula that looks like a wall of symbols. But once you understand what each piece is saying, it reads as simply as the goldfish analogy.

P(Xₙ₊₁ = x | Xₙ, Xₙ₋₁, … , X₀) = P(Xₙ₊₁ = x | Xₙ)

Left side: "The probability of the next state being x, given the entire history from X₀ up to Xₙ."
Right side: "The probability of the next state being x, given only the current state Xₙ."
The equals sign: these two probabilities are identical. The history adds zero extra information. The present is a perfect summary of the past.
In plain English: "I don't care where you've been. I only care where you are. That's enough."
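You can check this equality empirically. The sketch below (an illustration, not a market model) runs a simple random walk and estimates the probability of an up-move two ways: conditioned on having arrived at the current state from below, and from above. If the process is Markovian, the route taken should not matter.

```python
import random

# Illustrative check of the Markov Property on a fair random walk.
# We estimate P(next move is up | history) for two different histories.
random.seed(42)

up_after_up = [0, 0]    # [up-moves, total] given the previous step was up
up_after_down = [0, 0]  # same, given the previous step was down

x_prev, x = 0, 0
for _ in range(200_000):
    step = 1 if random.random() < 0.5 else -1
    came_from_below = x > x_prev          # the "history": how we reached the present
    x_prev, x = x, x + step
    bucket = up_after_up if came_from_below else up_after_down
    bucket[0] += (step == 1)
    bucket[1] += 1

print("P(up | came from below) ≈", up_after_up[0] / up_after_up[1])
print("P(up | came from above) ≈", up_after_down[0] / up_after_down[1])
# Both estimates land near 0.5 — the history adds no information.
```

Both conditional estimates converge to the same value, which is exactly what the equals sign in the definition asserts.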

This equation is the foundation of the Chapman-Kolmogorov equations — the bridge that lets us calculate how a probability distribution spreads across the entire Price Landscape as time moves forward. Because we only need the current state, we can build a chain of one-step predictions without ever looking back.
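The chaining idea can be made concrete with a toy transition matrix. In this hedged sketch (the three states and all probabilities are made up for illustration), the two-step transition matrix is just the product of two one-step matrices — Chapman-Kolmogorov in one line of linear algebra:

```python
import numpy as np

# Toy 3-state price chain (states: down / flat / up). Numbers are illustrative.
P = np.array([
    [0.6, 0.3, 0.1],   # transition probabilities out of "down"
    [0.2, 0.5, 0.3],   # out of "flat"
    [0.1, 0.3, 0.6],   # out of "up"
])

# Chapman-Kolmogorov: two-step transitions = product of one-step transitions.
P2_direct = np.linalg.matrix_power(P, 2)
P2_chained = P @ P
assert np.allclose(P2_direct, P2_chained)

# Spread a distribution across the landscape: start certain in "flat",
# then push it forward through five one-step predictions — never looking back.
dist = np.array([0.0, 1.0, 0.0])
for _ in range(5):
    dist = dist @ P
print("distribution after 5 steps:", dist.round(3))
```

Each forward multiplication uses only the current distribution, which is the whole point: a chain of one-step predictions, no history required.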

3 · Why every quant model depends on this

The Markov Property is not just philosophically elegant. It is practically essential. Without it, most quantitative finance models would be computationally impossible to run in real time. Here is what it buys us in the three areas that matter most to a hedge fund strategist:

Computational Speed

Running a Kalman Filter or Monte Carlo simulation only requires the current state vector in RAM — not gigabytes of historical data. One coordinate replaces a warehouse of records.
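The memory claim is easy to demonstrate. In this minimal sketch (parameters are arbitrary), a million-step Monte Carlo path is run while storing the entire state as a single float — no history array ever exists:

```python
import random

# A Markovian Monte Carlo needs only the current state in memory:
# one million steps, stored as one number — no warehouse of records.
random.seed(0)
price = 100.0                          # the entire state vector: a single float
for _ in range(1_000_000):
    price += random.gauss(0.0, 0.01)   # the next step uses only the present
print(f"final price after 1M memoryless steps: {price:.2f}")
```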

Market Efficiency

If markets are Markovian, the current price already contains all past information. Looking at old charts should give you no edge — the market has already processed that history.

Model Simplicity

Every GBM and Brownian Motion model you build inherits the Markov Property automatically. The next price depends only on the current price, not on last Tuesday's candle.

[Figure: a Markov chain of price states — $98 at t = 0 → $100 at t = 1 → $102 now, branching to $104 or $100 with p = 0.5 each. Every arrow points forward only and carries a single p(next | now); Chapman-Kolmogorov chains these one-step predictions.]

Each transition needs only the current state — no history stored, no memory required.

4 · The reality check: when markets actually do remember

While Brownian Motion and GBM are mathematically Markovian, real markets are not perfectly memoryless. Experienced traders know that certain historical events do influence current behaviour — and these violations of the Markov Property are precisely where trading edges are found.

Support and Resistance

Traders remember a "liquidity peak" from three days ago. When price returns to that level, they act differently than they would at a random price. The market is remembering history. Pure Markov assumption: violated.

Volatility Clustering

High volatility today tends to be followed by high volatility tomorrow. The variance is not independent — it has memory. This is why GARCH models exist: they explicitly model the memory in the volatility process.
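Volatility memory is simple to simulate. The sketch below uses a GARCH(1,1)-style recursion with made-up parameters: today's variance feeds tomorrow's, so large moves cluster — a deliberate violation of memorylessness that shows up as positive autocorrelation in squared returns (for an i.i.d. series it would be near zero).

```python
import math
import random

# Illustrative GARCH(1,1): var_{t+1} = omega + alpha * r_t^2 + beta * var_t.
# Parameters are arbitrary; persistence alpha + beta = 0.95 gives visible clustering.
omega, alpha, beta = 1e-5, 0.10, 0.85
rng = random.Random(1)

var = omega / (1 - alpha - beta)       # start at the long-run variance
returns = []
for _ in range(2000):
    r = math.sqrt(var) * rng.gauss(0.0, 1.0)
    returns.append(r)
    var = omega + alpha * r**2 + beta * var   # memory: variance depends on the past

def autocorr(xs, lag=1):
    """Lag-k sample autocorrelation."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n - lag))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

sq = [r * r for r in returns]
print(f"lag-1 autocorrelation of squared returns: {autocorr(sq):.3f}")
```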

The Hedge Fund Paradox: We assume the Markov Property because it makes the mathematics tractable and the models fast. But we actively hunt for violations of the Markov Property — because where the market has memory that our models ignore, that is exactly where pricing inefficiencies (and profits) live.

Try it — Markov vs memory-driven paths

Compare a pure Markovian path (each step is independent, no memory) against a memory-driven path (each step is influenced by recent history, like volatility clustering). Watch how the two paths diverge over time — and how the memory-path creates clusters and trends that the Markov path never shows.
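The comparison can be sketched in a few lines. This is a minimal illustration, not a calibrated model: the "memory" path feeds each move back into the next step's volatility (all coefficients are made up), while the Markov path draws every step fresh.

```python
import random

# Pure Markovian path vs a memory-driven path with volatility feedback.
rng = random.Random(3)
markov_price, memory_price = 100.0, 100.0
memory_vol = 0.01                      # the extra state the Markov path lacks

markov_path, memory_path = [markov_price], [memory_price]
for _ in range(500):
    # Markov path: i.i.d. steps, no state beyond the current price
    markov_price += markov_price * rng.gauss(0.0, 0.01)
    # Memory path: the size of the last move feeds the next step's volatility
    move = rng.gauss(0.0, memory_vol)
    memory_price += memory_price * move
    memory_vol = 0.85 * memory_vol + 0.1 * abs(move) + 0.0005
    markov_path.append(markov_price)
    memory_path.append(memory_price)

print(f"Markov final price: {markov_path[-1]:.2f}")
print(f"Memory final price: {memory_path[-1]:.2f}")
```

Plot the two lists and the memory path shows the clusters and trends the narrative above describes; the Markov path never does.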


The two faces of the Markov Property

As an assumption — for speed

The present state is a perfect filter of the past. One coordinate replaces all of history. Models run in milliseconds. Monte Carlo needs only the current position, not a hard drive of candles.

As a target — for profit

Every place where the real market violates Markov — support levels, volatility clusters, momentum — is a place where standard models are wrong. Wrong models create mispriced assets. Mispriced assets create alpha.

In short: The Markov Property says the present is all you need. We use it because it makes every model tractable — Kalman Filters, Monte Carlo, option pricing all depend on it. But the real insight is this: we assume Markov to build the baseline. We look for violations of Markov to find the edge. The assumption is the tool. The violation is the treasure.

Next Chapter: First Passage Time & Reflection →