Quantitative Methods · Simulation Methods · LO 1 of 3
Why does a stock price never go below zero, but a normal distribution does?
Understand why asset prices follow a lognormal distribution when asset returns follow a normal distribution, and how continuous compounding connects them.
⏱ 8–15 min
·
3 questions
·
LOW PRIORITY · UNDERSTAND · 🧮 Calculator
Why this LO matters
Understand why asset prices follow a lognormal distribution when asset returns follow a normal distribution, and how continuous compounding connects them.
INSIGHT
A lognormal random variable is defined not by what it looks like, but by what its logarithm looks like.
Take the natural logarithm of an asset price. If the result is normally distributed, the original price is lognormally distributed. That is the entire definition.
Two consequences follow immediately. Asset prices cannot go below zero, because you cannot take the log of a negative number. And prices are skewed to the right, because the log transformation pulls extreme high values closer together than extreme low values. Returns, by contrast, can be negative and roughly symmetric. That is why returns behave normally and prices behave lognormally.
What makes lognormal the right model for prices?
Think about a newspaper vending machine. You put in a coin and get a paper. The coin can only be positive. Nobody inserts minus fifty cents and receives a refund of a newspaper. That is the floor at zero.
Now think about how far prices can move. A stock priced at ₹100 can rise to ₹500, ₹5,000, or beyond. But it can only fall to zero. The upside is unlimited; the downside is capped. That asymmetry produces a long right tail. Normal distributions have no floor and no asymmetry. Lognormal distributions have both.
The lognormal framework and continuous compounding
1
Lognormal distribution definition. A random variable Y is lognormal if its natural logarithm, ln(Y), is normally distributed. Remember the phrase "the log is normal." Whenever you see a lognormal distribution in an exam question, identify whether the question asks about the distribution of the variable itself or the distribution of its logarithm.
2
Two key properties of lognormal distributions. Lognormal distributions are bounded below by zero and skewed to the right. These two properties make lognormal distributions ideal for modelling asset prices, which also cannot be negative and often have occasional very large moves upward.
3
Continuously compounded returns. A continuously compounded return is the natural logarithm of the price ratio between end and start of a period: r = ln(P_end / P_start). This differs from the simple return, which is computed as (P_end / P_start) − 1. When a question specifies "continuously compounded," apply the natural logarithm. Otherwise, use simple returns.
4
The connection: normal returns produce lognormal prices. If a stock's continuously compounded return is normally distributed, its future price is necessarily lognormally distributed. The future price equals P_T = P_0 × e^(r), where r is the continuously compounded return. Because e raised to any normal random variable always produces a lognormal variable, lognormal prices follow directly from normal continuously compounded returns.
5
Multi-period additivity. The continuously compounded return over multiple periods equals the sum of single-period continuously compounded returns: r(0,T) = r(0,1) + r(1,2) + ... + r(T−1,T). This additivity is the mathematical reason continuous compounding is so useful. You can compute period-by-period returns and add them directly, rather than multiplying factors.
6
Annualizing volatility and expected return. Volatility of continuously compounded returns scales by the square root of time: annualized volatility = daily volatility × √250 (where 250 is the number of trading days per year). Expected return scales by time without the square root: expected annual return = daily mean return × 250. The two rules differ because volatility measures dispersion, which accumulates in variance (requiring a square root for standard deviation), while expected return is simply the sum of daily returns.
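The additivity and annualizing rules above can be sketched in a few lines of Python. The prices and daily statistics here are hypothetical, chosen only to illustrate the mechanics:

```python
import math

# Continuously compounded return: ln(end / start)
def cc_return(start, end):
    return math.log(end / start)

# Additivity: the multi-period cc return is the sum of per-period cc returns
prices = [100.0, 104.0, 101.0, 109.0]        # hypothetical daily closes
per_period = [cc_return(a, b) for a, b in zip(prices, prices[1:])]
direct = cc_return(prices[0], prices[-1])    # ln(109/100) in one step
assert abs(sum(per_period) - direct) < 1e-12  # both routes agree

# Annualizing: volatility scales by the square root of time, mean by time
daily_vol, daily_mean = 0.015, 0.0004        # hypothetical daily statistics
annual_vol = daily_vol * math.sqrt(250)      # ≈ 0.2372, about 23.7%
annual_mean = daily_mean * 250               # = 0.10, exactly 10%
```

Note that simple returns would need to be compounded multiplicatively; only continuously compounded returns add across periods like this.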
How normal returns become lognormal prices: three worked examples
The three examples below build in order. The first is purely conceptual. The second uses the calculator to verify the additivity property. The third shows how to annualize daily statistics correctly.
FORWARD REFERENCE
The Black-Scholes-Merton option pricing model is a mathematical formula that values options (the right to buy or sell an asset at a fixed price). The model explicitly assumes the underlying asset price is lognormally distributed and that returns are normally distributed with continuous compounding.
For this LO, you only need to recognize that the assumption of lognormal asset prices is the foundation for option pricing theory. You will study the Black-Scholes-Merton model fully in Derivatives · Module 7.1.
→ Derivatives
Worked Example 1
Why asset prices cannot go below zero but returns can
Priya Menon is a junior analyst at Sundaram Capital in Chennai. Her manager asks her to explain, in plain English, why the research team models Tata Consultancy Services share prices using a lognormal distribution rather than a normal distribution. Priya needs to walk through the logical chain from return distribution to price distribution.
🧠 Thinking Flow – lognormal from normal, the logical chain
The question asks
Why does a normal distribution for continuously compounded returns imply a lognormal distribution for future share prices?
Key concept needed
The lognormal definition. Y is lognormal if and only if ln(Y) is normally distributed. Many candidates reverse this and say "prices are normal, returns are lognormal." That is backwards and will cost marks.
Step 1, Name the wrong approach first
Many candidates argue that since share prices look bell-shaped over some periods, a normal distribution is fine for prices. The problem: a normal distribution assigns positive probability to negative values. A share price of minus ₹40 is mathematically impossible. A normal distribution cannot enforce the floor at zero.
Step 2, Apply the lognormal definition
A continuously compounded return r is defined as ln(P_end / P_start). If r is normally distributed, then P_end = P_start ร exp(r). The future price is the starting price multiplied by e raised to a normal random variable. Because exp(anything) is always strictly positive, the resulting price can never be zero or negative. That floor at zero is exactly what lognormal distributions enforce by definition.
Step 3, Sanity check
Check the two key properties of lognormal distributions against what we know about share prices.
Property 1: bounded below by zero. Share prices cannot go negative. ✓
Property 2: right-skewed. Prices can rise by many multiples, but can only fall a maximum of 100%. The upside is unbounded; the downside is capped. That asymmetry creates a long right tail. ✓
Both properties match.
Answer
Future share prices follow a lognormal distribution because they equal the starting price multiplied by exp(r), where r is normally distributed. The exp() transformation enforces the zero floor and creates right skew, matching both defining properties of the lognormal distribution.
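This logical chain can be checked by brute force. The sketch below draws normally distributed continuously compounded returns and exponentiates them into prices; the starting price, mean, and volatility are hypothetical assumptions, not taken from the example:

```python
import math
import random

random.seed(7)                    # reproducible draws

p0 = 100.0                        # hypothetical starting price
mu, sigma = 0.08, 0.20            # assumed mean and volatility of the cc return

# P_T = P_0 * e^r with r drawn from a normal distribution, 10,000 times
prices = [p0 * math.exp(random.gauss(mu, sigma)) for _ in range(10_000)]

floor_holds = min(prices) > 0                 # exp() is strictly positive
mean_p = sum(prices) / len(prices)
median_p = sorted(prices)[len(prices) // 2]
right_skewed = mean_p > median_p              # mean above median: right skew
```

Every simulated price is positive, and the mean sits above the median — exactly the two lognormal properties the answer relies on.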
Worked Example 2
Computing continuously compounded returns, two methods, same answer
Dmitri Volkov is a quantitative analyst at Kazan Asset Management. He is reviewing closing prices for Meridian Logistics, a mid-cap freight company, over three days. Monday closes at 40, Tuesday at 37, and Wednesday at 46. His supervisor asks him to compute the continuously compounded return from Monday to Wednesday, then verify the answer using a second method.
🧠 Thinking Flow – two methods for continuously compounded return
The question asks
What is the continuously compounded return from Monday to Wednesday, and do both calculation methods agree?
Key concept needed
Multi-period additivity. The continuously compounded return over multiple periods equals the sum of single-period continuously compounded returns. Many candidates compute the simple return, (46 ÷ 40) − 1 = 15%, and stop there. Simple returns are not the same as continuously compounded returns. Continuous compounding always produces a lower number for the same price move.
Step 1, Method 1 (direct)
Take the natural logarithm of the overall price ratio.
ln(46 / 40) = ln(1.15)
On the BA II Plus: press 46 ÷ 40 =, then press the LN key (to the left of the 7 key).
Result: approximately 0.13976, or 13.98%.
Step 2, Method 2 (period by period, then sum)
Compute the continuously compounded return for each day, then add them.
Monday to Tuesday: ln(37 / 40) = −0.07796
Tuesday to Wednesday: ln(46 / 37) = +0.21772
Sum: −0.07796 + 0.21772 = 0.13976, or 13.98%.
Step 3, Sanity check
Both methods give 13.98%. The simple return was 15%. The continuously compounded return (13.98%) is lower than the simple return (15%). This is always true: continuous compounding requires a lower stated rate to achieve the same price move. If the continuously compounded answer were higher than the simple return, something went wrong. ✓
✓ Answer: The continuously compounded return from Monday to Wednesday is approximately 13.98%. Both the direct method and the sum-of-periods method agree.
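The same two-method check can be scripted, using the closing prices from the example:

```python
import math

# Meridian Logistics closes: Monday 40, Tuesday 37, Wednesday 46
method_1 = math.log(46 / 40)                      # direct: ln of overall ratio
method_2 = math.log(37 / 40) + math.log(46 / 37)  # sum of daily cc returns
simple = 46 / 40 - 1                              # simple return, for contrast

assert abs(method_1 - method_2) < 1e-12           # additivity: both agree
assert method_1 < simple                          # cc return < simple return
print(round(method_1, 5))                         # 0.13976, i.e. 13.98%
```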
🧮 Method 1, Direct
| Key sequence | What it does → Display |
| --- | --- |
| `2ND` `FV` | Clear all registers → 0 |
| `46` `÷` `40` `=` | Compute the price ratio → 1.15 |
| `LN` | Take the natural logarithm (key to the left of the 7 key) → 0.13976 |
🧮 Method 2, Period by period
| Key sequence | What it does → Display |
| --- | --- |
| `37` `÷` `40` `=` | Price ratio: Monday to Tuesday → 0.925 |
| `LN` | Continuously compounded return, day 1 → −0.07796 |
| `STO` `1` | Store this in memory register 1 → −0.07796 |
| `46` `÷` `37` `=` | Price ratio: Tuesday to Wednesday → 1.24324 |
| `LN` | Continuously compounded return, day 2 → 0.21772 |
| `+` `RCL` `1` `=` | Add the stored day 1 return → 0.13976 |
⚠️ Using (46 ÷ 40) − 1 gives 0.15, or 15.00%, not 13.98%. This is the most common error. Any time a question specifies "continuously compounded return," use the LN key. Do not subtract 1 from the price ratio.
Worked Example 3
Annualizing volatility and expected return from daily data
Sofia Andersson is a risk analyst at Nordic Quant Partners in Stockholm. She has collected four continuously compounded daily returns for Helix Semiconductor during a volatile week.
| Day | Continuously compounded daily return |
| --- | --- |
| Monday to Tuesday | −7.80% |
| Tuesday to Wednesday | +21.77% |
| Wednesday to Thursday | +4.26% |
| Thursday to Friday | −6.45% |
Her manager asks her to compute the annualized volatility and the estimated expected annual return, using 250 trading days per year.
🧠 Thinking Flow – annualizing volatility vs. annualizing expected return
The question asks
How do you scale daily volatility and daily expected return to annual figures, and why do they use different scaling rules?
Key concept needed
Volatility scales by the square root of time; expected return scales by time directly. Many candidates apply the square root to both. That is wrong. Variance is additive over time, so standard deviation scales by the square root. Expected return is simply the sum of daily returns, so it scales linearly.
Step 1, Enter the daily returns into the calculator's data worksheet
Press 2ND then 7 (DATA) to open the data entry screen. Enter each return as a decimal. Press ENTER after each value and scroll past the Y fields.
X01 = −0.0780, X02 = +0.2177, X03 = +0.0426, X04 = −0.0645
Step 2, Read the sample mean and sample standard deviation
Press 2ND then 8 (STAT). Scroll to find:
x̄ (mean of daily returns) = 0.02945
Sx (sample standard deviation) = 0.13656
Use Sx, not σx. Sx divides by n − 1, which is correct for a sample. σx divides by n and understates true volatility.
Step 3, Annualize volatility (scale by the square root of 250)
Annualized volatility = daily Sx × √250
= 0.13656 × 15.8114
= approximately 2.1594, or 216%
This very high number reflects the extreme daily swings in the data. The method is correct; the data is contrived to be volatile.
Step 4, Annualize expected return (scale by 250, no square root)
Expected annual return = daily x̄ × 250
= 0.02945 × 250
= approximately 7.3625, or 736%
Again, extreme because the daily mean is high. The point is the method: multiply by 250, not √250.
Step 5, Sanity check
Volatility used √250. Expected return used 250. The annualized volatility (216%) is far below what linear scaling would give (250 × 13.656% ≈ 3,414%), which confirms the square root dampens the scaling. The expected return (736%) equals exactly 250 times the daily mean (2.945%), confirming linear scaling. ✓
✓ Answer: Annualized volatility ≈ 216% (daily Sx × √250). Expected annual return ≈ 736% (daily mean × 250). The scaling rules differ because variance accumulates additively over time, while expected return accumulates linearly.
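The same computation in Python, using the four daily returns from the table, makes the two scaling rules explicit:

```python
import math

# Sofia's four daily continuously compounded returns
returns = [-0.0780, 0.2177, 0.0426, -0.0645]
n = len(returns)

mean = sum(returns) / n                                # x̄ = 0.02945
var = sum((r - mean) ** 2 for r in returns) / (n - 1)  # sample variance, n − 1
sx = math.sqrt(var)                                    # Sx ≈ 0.1366

annual_vol = sx * math.sqrt(250)                       # ≈ 2.16, about 216%
annual_mean = mean * 250                               # = 7.3625, about 736%
```

Dividing by n − 1 mirrors the calculator's Sx (not σx); applying √250 only to the volatility mirrors the two different scaling rules.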
🧮 Data entry, entering the four daily returns
| Key sequence | What it does → Display |
| --- | --- |
| `2ND` `FV` | Clear all registers → 0 |
| `2ND` `7` | Open the DATA worksheet → X01 = 0 |
| `.078` `+/-` `ENTER` | Enter −0.0780 as X01 → X01 = −0.078 |
| `↓` `↓` | Skip Y01 → X02 = 0 |
| `.2177` `ENTER` | Enter +0.2177 as X02 → X02 = 0.2177 |
| `↓` `↓` | Skip Y02 → X03 = 0 |
| `.0426` `ENTER` | Enter +0.0426 as X03 → X03 = 0.0426 |
| `↓` `↓` | Skip Y03 → X04 = 0 |
| `.0645` `+/-` `ENTER` | Enter −0.0645 as X04 → X04 = −0.0645 |
🧮 Reading statistics
| Key sequence | What it does → Display |
| --- | --- |
| `2ND` `8` | Open the STAT worksheet → n = 4 |
| `↓` | Scroll to mean → x̄ = 0.02945 |
| `↓` `↓` | Scroll to sample standard deviation → Sx = 0.13656 |
🧮 Annualizing
| Key sequence | What it does → Display |
| --- | --- |
| `0.13656` `×` `250` `√` `=` | Sx × √250 = annualized volatility → 2.1594 |
| `0.02945` `×` `250` `=` | x̄ × 250 = expected annual return → 7.3625 |
⚠️ Applying √250 to the expected return gives 0.02945 × 15.81 = 0.4655, or about 47%. That is wrong. Expected return scales by the number of periods (250), not the square root of periods. Applying the square root to expected return is the single most common error on annualization questions.
⚠️
Watch out for this
The Simple Return vs. Continuous Return Trap
A candidate who computes (120 / 112) − 1 reports 7.14% as the continuously compounded return for a price move from 112 to 120. That is the simple return, not the continuous one.
The correct calculation is ln(120 / 112) = 6.90%.
Continuously compounded returns always come out lower than simple returns for the same price move, because ln(x) lies below x − 1 for every price ratio x other than 1. Candidates make this error because they treat "return" as one concept, when the formula for a continuously compounded return specifically requires the natural logarithm of the price ratio.
Before selecting an answer to any question involving continuously compounded returns, verify that you used the LN key and not the (end / start − 1) shortcut.
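A two-line check makes the trap concrete, using the 112 → 120 move from above:

```python
import math

simple = 120 / 112 - 1     # 0.0714..., the trap answer (simple return)
cc = math.log(120 / 112)   # 0.0690..., the correct cc return

# ln(x) < x − 1 for every x != 1, so the cc return is always the smaller one
trap_detected = cc < simple
```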
🧠
Memory Aid
ACRONYM
L-B-R: Log, Bounded, Right-skewed
L
L, Log โ If the natural logarithm of a variable is normally distributed, the variable itself is lognormal. This is the definition. Flip it and you flip the whole LO.
B
B, Bounded below by zero โ Asset prices cannot go negative. This is the first reason lognormal fits prices and normal does not.
R
R, Right-skewed โ A stock can rise by multiples but can only fall 100%. The long right tail of lognormal matches this asymmetry.
When a question asks why lognormal is used for asset prices, run through L-B-R in order. If you see an answer claiming lognormal is symmetric or can produce negative values, eliminate it using B and R. If you see an answer reversing the log relationship, claiming prices are normal and returns are lognormal, eliminate it using L.
Practice Questions · LO1
3 Questions LO1
Score: – / 3
Q 1 of 3 – REMEMBER
Which of the following best describes the defining relationship between a lognormal distribution and a normal distribution?
CORRECT: B
CORRECT: B, The definition of a lognormal distribution is built entirely on what the log looks like. A variable Y is lognormal if and only if ln(Y) is normally distributed. The phrase "log is normal" encodes this directly: take the log, get the normal. The distribution's name tells you the rule.
Why not A? Option A reverses the relationship. Y itself is not normally distributed. If it were, it would simply be called a normal distribution. The lognormal variable takes on only positive values and is right-skewed, which is incompatible with the symmetric, unbounded normal distribution. Candidates who select A have confused the variable with its transformation.
Why not C? Squaring Y is a different transformation entirely and has no established connection to normality in this context. The relevant transformation is the natural logarithm, not the square. There is no standard result linking Yยฒ to a normal distribution when Y is lognormal. This option tests whether candidates know which mathematical operation defines the relationship.
---
Q 2 of 3 – UNDERSTAND
A lognormal distribution is preferred over a normal distribution for modelling asset prices primarily because the lognormal distribution is:
CORRECT: C
CORRECT: C, Two features of asset prices must be captured by any suitable distribution. First, a share price cannot fall below zero. Second, prices can rise by many multiples of their starting value but can only fall by 100% at most. The lognormal distribution enforces both: it produces only positive values by construction, and its right skew allows for a long upside tail. Both properties are necessary, and together they distinguish lognormal from normal.
Why not A? Lognormal distributions are not symmetrical. They are right-skewed. The normal distribution is symmetrical. Selecting A confuses the two distributions. Daily price changes can look approximately symmetric over short windows, but a symmetric distribution for prices would assign positive probability to negative prices, which is economically impossible.
Why not B? A normal distribution is unbounded on both sides, not a lognormal. The lognormal's key advantage is precisely that it is bounded below by zero. Describing the lognormal as unbounded on both sides gets the property backwards and describes the distribution it is meant to replace in the context of asset price modelling.
---
Q 3 of 3 – APPLY
Fatima Al-Rashid is analysing shares in Gulf Ceramics, a building materials company. The share closes at 85 on Monday and at 91 on Tuesday. What is the continuously compounded return from Monday to Tuesday?
CORRECT: A
CORRECT: A, The continuously compounded return is computed using the natural logarithm of the price ratio: ln(91 / 85) = ln(1.07059) ≈ 0.0682, or approximately 6.82%. The key step is applying the LN function, not subtracting one from the price ratio. The ln() transformation is what links normally distributed returns to lognormally distributed prices.
Why not B? Option B is the simple return: (91 / 85) − 1 = 0.07059, or 7.06%. This is an arithmetic return, not a continuously compounded return. While it looks close to the correct answer, the two are conceptually distinct. The continuously compounded return is always lower than the simple return for the same price movement, because ln(x) lies below x − 1 for any price ratio other than 1. Whenever a question specifies "continuously compounded," use the LN key, not subtraction.
Why not C? Option C, approximately 13.98%, is the two-day continuously compounded return from Worked Example 2, where Dmitri Volkov computed the return for Meridian Logistics from Monday to Wednesday across prices of 40 and 46. That number does not apply here. The question specifies one day and prices of 85 and 91. Selecting it suggests confusing scenario details or misreading the question.
---
Glossary
lognormal distribution
A probability distribution for a variable whose natural logarithm is normally distributed. Example: if you take the log of every daily closing price of a stock and those logged values look bell-shaped, the prices themselves follow a lognormal distribution. Lognormal distributions can only produce positive values and have a long right tail.
continuously compounded return
The natural logarithm of the ratio of an asset's ending price to its starting price over a period. Example: if a stock moves from 100 to 110, the continuously compounded return is ln(110/100) = 9.53%, which is lower than the simple return of 10% because continuous compounding assumes interest is reinvested instantly rather than at period end. This return type is the bridge between normally distributed returns and lognormally distributed future prices.
simple return
The percentage change in price computed as (ending price / starting price) − 1. Example: a stock moving from 50 to 55 has a simple return of (55/50) − 1 = 10%. Simple returns are the most intuitive measure of price change but do not have the additivity property that continuously compounded returns do.
volatility
A measure of how much returns vary over time, expressed as a standard deviation. Example: a stock with 20% annual volatility will, in a typical year, see its returns scattered within roughly 20 percentage points of its average. Higher volatility means wider swings in either direction.
LO 1 Done ✓
Ready for the next learning objective.
Quantitative Methods · Simulation Methods · LO 2 of 3
Why does rolling dice 1,000 times tell you more about risk than a single formula ever could?
Monte Carlo simulation generates thousands of possible futures from probability distributions you specify, then uses the spread of those outcomes to estimate portfolio risk, security values, and the impact of your assumptions.
⏱ 8–15 min
·
6 questions
·
HIGH PRIORITY · APPLY
Why this LO matters
Monte Carlo simulation generates thousands of possible futures from probability distributions you specify, then uses the spread of those outcomes to estimate portfolio risk, security values, and the impact of your assumptions.
INSIGHT
Instead of calculating one answer, Monte Carlo generates thousands of possible futures.
Each future is drawn randomly from probability distributions you choose. The spread of those thousands of outcomes (highest, lowest, average, worst 10%) becomes your estimate of risk and value. You are not discovering the true answer. You are building a distribution of plausible answers. The more times you repeat the process, the more stable that distribution becomes.
What Monte Carlo simulation does, and when to use it
Think about how a weather service forecasts rain for next week. They do not have one model that produces "Tuesday: 4.7mm of rain." They run hundreds of simulations with slightly different starting conditions. Some simulations show heavy rain on Tuesday. Others show none. The forecast, "60% chance of rain", comes from the spread of those hundreds of runs, not from any single calculation.
Monte Carlo simulation does the same thing for investment problems.
Most people's instinct, when faced with a complex valuation or risk question, is to reach for a formula. That instinct is right when a formula exists. The problem is that many of the most important questions in finance (what is this path-dependent option worth? what is the distribution of my portfolio's annual return?) have no neat formula. Monte Carlo is what you use when the formula does not exist.
The core capabilities and limits of Monte Carlo simulation
1
Specified distributions as input. Monte Carlo requires you to choose a probability distribution for each risk factor that drives the outcome you care about, for example, stock price movement or interest rate changes. You tell the model the shape of that distribution (normal, lognormal) and its parameters (mean, standard deviation). The computer then draws random samples from that distribution to create scenarios.
2
Repeated simulation trials to explore paths. Each simulation trial generates one possible path of prices or values over time using the distributions you specified. By running hundreds or thousands of trials, you build a complete picture of what could happen (best case, worst case, and everything in between) without needing a closed-form pricing formula.
3
Statistical outputs, not exact answers. The result of a Monte Carlo simulation is a distribution of outcomes: a histogram showing how often each result occurred across all trials, plus summary statistics like the mean, median, lowest, and highest values. These are estimates based on sampling, not exact mathematical results.
4
Sensitivity analysis and what-if testing. Because you control the distributional assumptions, you can test what happens if a key assumption changes: what if volatility doubles, or the mean return shifts 2% lower? This is a genuine strength of Monte Carlo compared with closed-form analytical methods.
5
No cause-and-effect visibility. Monte Carlo tells you what outcomes are likely but gives you less insight into why, into the cause-and-effect relationships between inputs and outputs. Analytical formulas, when they exist, often make those relationships transparent and lead to deeper understanding.
6
Applies to securities and portfolios with no formula. Use Monte Carlo to value complex securities (Asian options, lookback options, mortgage-backed securities with embedded options) for which no closed-form pricing equation exists, and to estimate portfolio risk and return distributions over a time horizon.
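The capabilities above fit in a very small sketch: specify a distribution, draw many trials, summarize the resulting spread. The mean and volatility here are assumed illustration values, not from any example:

```python
import random
import statistics

random.seed(42)                 # reproducible draws

# Specify the distribution for the risk factor (assumed parameters)
mu, sigma = 0.07, 0.16          # annual mean and volatility of portfolio return

# Run many trials; each trial is one draw from the specified distribution
trials = sorted(random.gauss(mu, sigma) for _ in range(10_000))

# The output is a distribution, summarized statistically, not an exact answer
mean_outcome = statistics.fmean(trials)
worst_5pct = trials[len(trials) // 20]   # 5th percentile: a simple risk measure
```

Sensitivity analysis is then just rerunning the same loop with different mu or sigma and comparing the summary statistics.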
Now let us look at when Monte Carlo is the right tool, how each step of the process works, and where candidates go wrong.
FORWARD REFERENCE
Probability distributions, what you need for this LO only
A probability distribution is a mathematical description of all the values a variable can take and how likely each value is. The most common in finance is the normal distribution (the bell curve), but distributions can also be lognormal, binomial, or uniform. When we say "specify the distribution," we mean choosing which type and giving its parameters, for example, "normal distribution with mean 8% and standard deviation 15%." For this LO, you only need to recognise that Monte Carlo requires you to choose a distribution and input its parameters before the simulation begins. You do not compute the distribution yourself from data, that distinction is the core difference from bootstrapping, which resamples from observed historical data without assuming any particular distribution. You will study probability distributions fully in Quantitative Methods, Module 2.
→ Quantitative Methods
FORWARD REFERENCE
Exotic options, what you need for this LO only
Asian options, lookback options, and barrier options are derivatives whose payoff depends on the price path of an underlying asset, not just the final price. A standard call option pays off based only on whether the stock price at expiry exceeds the strike. An Asian option's payoff depends on the average price over the option's life. A lookback option's payoff depends on the minimum or maximum price the stock reached during its life. For this LO, you only need to recognise that Monte Carlo is the standard tool for valuing these instruments because they have no simple closed-form formula. Treat them as examples of "complex securities" that make Monte Carlo useful. You will study exotic options fully in Derivatives, Module 7.
→ Derivatives
FORWARD REFERENCE
Mortgage-backed securities, what you need for this LO only
Mortgage-backed securities (MBS) are bonds backed by pools of residential mortgages. Their complexity comes from embedded options: borrowers can prepay their mortgages early when interest rates fall. This prepayment option makes the MBS's cash flows uncertain and dependent on the path of future interest rates, which is why simple bond pricing formulas break down. For this LO, you only need to recognise that MBS valuation is a common application of Monte Carlo, because the embedded prepayment option makes simple discounted cash flow pricing inaccurate. Monte Carlo can simulate thousands of interest rate paths and estimate prepayment timing and bondholder receipts for each path. You will study mortgage-backed securities and embedded options fully in Fixed Income, Module 4.
→ Fixed Income
Worked examples, applying the six-step process
The following four examples build from recognition (identifying Monte Carlo as the right tool) through execution (walking through all six steps) to comparison (understanding what Monte Carlo can and cannot do relative to other methods). Start with Example 1 and follow Priya, Carlos, Fatima, and Kwame through the same core concept applied in progressively more demanding situations.
Worked Example 1
Identifying Monte Carlo as the right tool
Priya Menon is a risk analyst at Stellarion Asset Management in Singapore. Her team wants to estimate the range of possible annual returns for a multi-asset portfolio containing equities, bonds, and commodities. No single formula exists that captures the interaction of all three asset classes simultaneously. Her manager asks her to explain what tool they should use and why.
🧠 Thinking Flow – identifying Monte Carlo as the right tool
The question asks
Which method should Priya use to explore the distribution of possible portfolio returns, and why does that method fit this situation?
Key concept needed
Monte Carlo requires you to specify a probability distribution for each risk factor and input its parameters. The analyst provides the distributional shape; the computer draws from it.
Step 1, Name the wrong approach first
Many analysts, and many exam candidates, assume Priya should gather five years of monthly returns and resample them. That would be bootstrapping: drawing observations from an observed historical sample, with replacement. Bootstrapping is valid, but it requires an empirical dataset. The question does not say one exists. More importantly, bootstrapping uses whatever distribution the historical data happens to produce. If the portfolio's risk factors have changed (a new asset class added, a different volatility regime), the historical sample may not represent the future well.
Step 2, Apply the correct method
Monte Carlo fits here because Priya can specify probability distributions for each risk factor directly. For equities she might specify: returns are normally distributed with mean 7% and standard deviation 16%. For bonds: normally distributed, mean 3%, standard deviation 5%. For commodities: a separate distribution with its own parameters. The computer then draws random values from each distribution simultaneously, producing one possible annual portfolio return. That is one simulation trial.
Step 3, Sanity check
If Priya runs 1,000 trials, she gets 1,000 possible annual returns. The spread of those 1,000 numbers, lowest value, highest value, the middle 90%, is her estimate of portfolio risk. If her distributional assumptions are reasonable, the mean of those 1,000 outcomes should be close to the weighted average of her individual asset expected returns. If the simulated mean is wildly different from that weighted average, something in her distributional inputs is wrong.
✓ Answer: Monte Carlo simulation is the correct tool. It generates a large number of random samples from probability distributions Priya specifies for each risk factor, producing a frequency distribution of portfolio returns from which risk measures (worst case, expected return, Value at Risk) are derived. It does not require historical data as its input source. That is what distinguishes it from bootstrapping.
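Priya's setup can be sketched directly. The equity and bond parameters come from Step 2; the commodity parameters and all portfolio weights are hypothetical, and the draws are treated as independent for simplicity (a real model would correlate the asset classes):

```python
import random
import statistics

random.seed(1)                  # reproducible draws

# Distributional assumptions per risk factor (weights are hypothetical)
assets = {
    "equities":    {"mu": 0.07, "sigma": 0.16, "weight": 0.50},
    "bonds":       {"mu": 0.03, "sigma": 0.05, "weight": 0.30},
    "commodities": {"mu": 0.05, "sigma": 0.20, "weight": 0.20},  # assumed
}

def one_trial():
    # One simulation trial: draw each asset's return, combine by weight
    return sum(a["weight"] * random.gauss(a["mu"], a["sigma"])
               for a in assets.values())

outcomes = sorted(one_trial() for _ in range(10_000))

# Sanity check from Step 3: simulated mean ≈ weighted average of asset means
expected = sum(a["weight"] * a["mu"] for a in assets.values())   # 0.054
worst_10pct = outcomes[len(outcomes) // 10]   # 10th percentile of returns
```

If the simulated mean drifted far from the 5.4% weighted average, that would signal a bug in the distributional inputs, exactly as the sanity-check step says.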
Worked Example 2
Valuing a complex security using the six-step process
Carlos Ibáñez is a derivatives analyst at Vortega Capital in Madrid. He needs to price an Asian option on shares of a pharmaceutical company. The option's payoff at maturity equals the difference between the stock's final price and its average price over the six-month life of the option, or zero, whichever is greater. His colleague suggests using the Black-Scholes formula. Carlos knows that formula does not apply here. He decides to use Monte Carlo simulation and explains all six steps to his intern.
🧠 Thinking Flow – walking through the six-step Monte Carlo process
The question asks
How does Carlos apply the six-step Monte Carlo process to value this path-dependent option, and what does each step actually produce?
Key concept needed
Because the Asian option's payoff depends on the entire price path (the average price over six months), not just the final price, no single formula captures it. Monte Carlo generates thousands of complete price paths and computes the payoff on each one.
Step 1, Specify the quantity of interest
Carlos defines what he is solving for: the value today of the Asian option, which he will call C₀. The underlying variable is the stock price, which changes over time. He records the starting stock price: €50.
Step 2, Specify the time grid
The option expires in six months. Carlos chooses monthly steps, so there are K = 6 subperiods. The time increment Δt equals one month (or 1/12 of a year). This grid is the skeleton on which price paths will be built.
Step 3, Specify how data will be generated
Carlos selects a model for stock price movement. He assumes stock price changes follow:
ΔStock price = (μ × Prior stock price × Δt) + (σ × Prior stock price × Z × √Δt)
where Z is a standard normal random variable, μ is the expected annual return (say, 8%), and σ is the annual volatility (say, 20%). The √Δt factor scales the annual volatility down to the length of one subperiod. This step encodes his distributional assumption. He is telling the computer: draw Z from a standard normal distribution, then use this equation to convert each Z into a price change.
Step 4, Use simulated values to produce stock prices
The computer draws six values of Z, one per monthly subperiod: Z₁, Z₂, Z₃, Z₄, Z₅, Z₆. It applies the equation from Step 3 to each, starting from €50, to produce the stock price at each month along one complete price path. For example (starting value included): €50.00, €51.20, €49.80, €52.40, €53.10, €51.70.
Step 5, Calculate the average stock price and the option payoff
Carlos averages the six prices: (50.00 + 51.20 + 49.80 + 52.40 + 53.10 + 51.70) ÷ 6 = €51.37.
The final stock price in this path is €51.70.
The option payoff at maturity = max(€51.70 − €51.37, 0) = max(€0.33, 0) = €0.33.
He discounts €0.33 back six months at the risk-free rate to get today's value for this one trial, Cᵢ₀. That completes one simulation trial.
Step 6, Repeat steps 4 and 5 many times
Carlos instructs the computer to repeat Steps 4 and 5 one thousand times. Each repetition draws a fresh set of six Z values, builds a new price path, and computes a new option value. After 1,000 trials, Carlos takes the mean of all 1,000 present values. That mean is the Monte Carlo estimate of the Asian option's value today.
Sanity check
In many trials the final price will be below or equal to the average, because the stock wandered up and then came back down. Those trials contribute a payoff of zero. The option value should therefore be positive but modest relative to the stock price. If the simulated mean comes out near zero or implausibly large, say, greater than €10 on a €50 stock with 20% volatility, Carlos knows a distributional parameter was entered incorrectly.
✅ Answer: The Monte Carlo estimate of the Asian option value is the mean of all 1,000 discounted trial payoffs. The method works because each trial independently generates a complete price path using the specified normal distribution, and the average payoff across enough trials converges to a stable estimate of the option's true expected value.
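Carlos's six steps can be sketched in Python. Treat this as a hedged illustration rather than a pricing model: the 3% risk-free rate is an assumption (the example does not state one), the volatility term in the price-change equation is scaled by √Δt (the standard discretisation), and the average is taken over the six simulated month-end prices.

```python
import math
import random

def price_asian_option(s0=50.0, mu=0.08, sigma=0.20, steps=6,
                       dt=1/12, rf=0.03, n_trials=1_000, seed=7):
    """Steps 1-6 from the worked example. rf is an assumed 3% risk-free
    rate, since the text does not give one. Payoff = max(final price
    minus the average of the month-end prices, 0), discounted 6 months."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):                          # Step 6: repeat many times
        price, path = s0, []
        for _ in range(steps):                         # Steps 3-4: build one path
            z = rng.gauss(0, 1)                        # standard normal draw
            price += price * (mu * dt + sigma * z * math.sqrt(dt))
            path.append(price)
        avg = sum(path) / len(path)                    # Step 5: average price
        payoff = max(path[-1] - avg, 0.0)              # Asian option payoff
        total += payoff * math.exp(-rf * steps * dt)   # discount to today
    return total / n_trials                            # mean of all trial values

value = price_asian_option()
print(f"Asian option estimate: EUR {value:.2f}")
```

Consistent with the sanity check above, the estimate should come out positive but small relative to the €50 stock price.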
Worked Example 3
What Monte Carlo produces versus what analytical methods produce
Fatima Al-Rashidi is a portfolio manager at Crescent Quant Partners in Dubai. She is presenting two valuation approaches to her investment committee. For a standard European call option, her quant team used the Black-Scholes formula and obtained an exact price of $4.82. For an exotic barrier option on the same stock, they ran a Monte Carlo simulation with 1,000 trials and obtained an estimated price of $3.17. A committee member asks: "If Monte Carlo gives only an estimate, why use it at all? And does it tell us why the barrier option is cheaper?"
🧠 Thinking Flow – comparing Monte Carlo outputs to analytical method outputs
The question asks
What are the specific strengths and limitations of Monte Carlo compared to analytical methods like Black-Scholes, and what does "statistical estimate" actually mean?
Key concept needed
Statistical outputs, not exact answers, and no cause-and-effect visibility. The right framing is that each method fits a different situation. When a closed-form formula exists, it is faster, exact, and reveals cause-and-effect. When no formula exists, Monte Carlo is not inferior; it is the only practical option.
Step 1, Identify what analytical methods give that Monte Carlo does not
The Black-Scholes formula gives $4.82 with mathematical certainty. More importantly, the formula's structure makes relationships visible: if volatility rises, the option price rises; if time to expiry shortens, the option price falls. The formula is an equation in the inputs, so the sensitivity of price to each input can be derived algebraically. Monte Carlo cannot show Fatima that equation. It shows her a histogram of 1,000 outcomes. She can measure how the histogram shifts when she changes an assumption, but she cannot read the cause-and-effect relationship directly from the output.
Step 2, Identify what Monte Carlo gives that analytical methods cannot
The barrier option has a payoff that depends on whether the stock price crosses a threshold during its life. No simple closed-form formula handles that path dependency. Monte Carlo simulates thousands of complete price paths, checks whether each one crosses the barrier, and computes the payoff for each path. The estimated value of $3.17 is the mean present value across all valid trials. No formula can replicate this directly.
Step 3, Address what "statistical estimate" means
The $3.17 is not a guess. It is the average of 1,000 independent calculations, each using the correct distributional inputs. By the law of large numbers, this average converges to the true expected value as the number of trials increases. Running 10,000 trials instead of 1,000 would produce a tighter estimate, a lower standard error of the mean. The estimate is statistical because it could change slightly each time the simulation runs, but the uncertainty shrinks as trials increase.
Sanity check
The barrier option should be worth less than the plain European call on the same stock, because the barrier feature knocks the option out if the stock moves unfavourably. $3.17 less than $4.82 makes directional sense. If the Monte Carlo had returned $5.40, something would be wrong with the simulation setup.
✅ Answer: Monte Carlo's strength is handling complex path-dependent securities with no closed-form formula. Its limitation is that it produces statistical estimates, not exact values, and provides less direct insight into cause-and-effect relationships than an analytical formula does. Both tools are valid; they fit different situations.
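The claim that more trials tighten the estimate can be checked directly. The sketch below repeatedly estimates the mean of an assumed normal distribution (mean 5%, standard deviation 20%, illustrative numbers only) and measures how much the estimate wobbles between runs at 1,000 versus 10,000 trials.

```python
import random
import statistics

def estimate_spread(n_trials, n_repeats=200, seed=1):
    """Run the same simulation n_repeats times and measure how much the
    estimated mean varies between runs -- an empirical standard error.
    The quantity estimated is a stand-in: the mean of a normal
    distribution with mu=0.05, sigma=0.20 (illustrative numbers)."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.05, 0.20) for _ in range(n_trials))
             for _ in range(n_repeats)]
    return statistics.stdev(means)

se_1k = estimate_spread(1_000)
se_10k = estimate_spread(10_000)
print(f"spread with 1,000 trials:  {se_1k:.5f}")
print(f"spread with 10,000 trials: {se_10k:.5f}")
print(f"ratio ~ {se_1k / se_10k:.2f} (theory: sqrt(10) ~ 3.16)")
```

The observed ratio sits near √10, matching the rule that the standard error falls with the square root of the number of trials.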
Worked Example 4
Monte Carlo versus bootstrapping, the core distinction
Kwame Asante is a quantitative researcher at Meridian Strategies in Accra. He is studying the return distribution of a newly launched fund that invests in illiquid private credit instruments. The fund has only 18 months of return history. Kwame wants to build a distribution of possible annual returns to estimate Value at Risk. His colleague Ingrid suggests Monte Carlo simulation. Kwame is not sure this is the right method. He explains the difference to her.
🧠 Thinking Flow – distinguishing Monte Carlo from bootstrapping
The question asks
Given the data situation, short history, no established distribution for private credit returns, which method fits better, and what is the fundamental difference between the two?
Key concept needed
Specified distributions as input (Monte Carlo) versus resampling from an observed sample (bootstrapping). This is the single most tested distinction in this LO. The trap is believing the two methods are interchangeable or that Monte Carlo is always the more sophisticated choice.
Step 1, Identify what Monte Carlo requires that is missing here
Many candidates assume Monte Carlo is always the right answer for simulation problems. The critical requirement is that you must specify a probability distribution and its parameters before running a single trial. For private credit returns, Kwame does not know whether returns are normally distributed, skewed, fat-tailed, or something else. The 18-month history is too short to estimate parameters reliably. If he guesses a normal distribution with mean 6% and standard deviation 4%, those parameters are largely invented. The simulation output will only be as good as his assumption, and his assumption is weak.
Step 2, Identify what bootstrapping requires and whether it fits
Bootstrapping requires only an observed sample, not a distributional assumption. The computer treats the 18 months of returns as if they represent the range of possible outcomes. It draws observations from that sample randomly, with replacement. For example, it might draw: month 7, month 3, month 3 again, month 15, and so on, until it has a full year's worth of draws. That constitutes one bootstrap resample. Repeating this thousands of times produces a distribution of annual returns built entirely from actual observed data, not from an assumption about its shape.
Step 3, State the fundamental difference
Monte Carlo requires you to know or assume the distribution and its parameters. You provide the shape; the computer draws from it. Bootstrapping requires only a sample. You provide the observed data; the computer treats that sample as the population and resamples it. If you do not know the distribution, bootstrapping is the safer tool because it makes no assumption about the distribution's form.
Sanity check
If Kwame wrongly applies Monte Carlo with an assumed normal distribution, his Value at Risk estimate will be anchored to that assumption. If private credit returns are actually left-skewed with fat tails, large losses more common than a normal curve implies, his Monte Carlo will systematically underestimate the worst-case losses. Bootstrapping, drawing from the actual 18-month experience, would at least reflect whatever skew and tail behaviour the real returns showed, however limited that history is.
✅ Answer: For this situation, bootstrapping is the better fit because Kwame does not have a reliable distributional assumption for private credit returns. Monte Carlo is appropriate when you can specify the distribution and its parameters with reasonable confidence. Bootstrapping is appropriate when you have a sample but no knowledge of the underlying population distribution.
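Kwame's bootstrap can be sketched as follows. The 18 monthly returns are hypothetical stand-ins (the text gives no actual figures); everything else follows the procedure described above: draw 12 months with replacement, compound them into one annual return, and repeat thousands of times.

```python
import random

# 18 months of observed returns -- hypothetical numbers standing in
# for Kwame's fund history (the text provides no actual figures).
observed = [0.012, -0.004, 0.021, 0.008, -0.031, 0.015, 0.006, -0.002,
            0.018, -0.012, 0.009, 0.024, -0.045, 0.011, 0.007, 0.013,
            -0.008, 0.016]

def bootstrap_annual_returns(sample, n_resamples=5_000, seed=3):
    """Each resample draws 12 monthly returns WITH replacement from the
    observed sample and compounds them into one annual return.
    No distributional assumption is made anywhere."""
    rng = random.Random(seed)
    annual = []
    for _ in range(n_resamples):
        months = rng.choices(sample, k=12)   # sampling with replacement
        growth = 1.0
        for r in months:
            growth *= 1 + r                  # compound the drawn months
        annual.append(growth - 1)
    return annual

annual = bootstrap_annual_returns(observed)
annual.sort()
var_95 = annual[int(0.05 * len(annual))]     # 5th percentile of outcomes
print(f"95% annual VaR estimate: {var_95:.1%}")
```

Because the resamples draw only from what was actually observed, any skew or fat tail in the real 18-month history flows straight into the VaR estimate, which is exactly the property the sanity check above demands.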
⚠️
Watch out for this
The Monte Carlo/Bootstrapping Confusion Trap
A candidate who reads "Monte Carlo simulation uses past data to generate outcomes" will incorrectly conclude that Monte Carlo requires historical return records as its input, the same error that appears as a wrong answer in quiz questions on this LO.
The correct position is that Monte Carlo requires specified probability distributions and their parameters as input. The analyst provides the distributional shape; the computer draws random values from it.
Candidates make this error because both Monte Carlo and bootstrapping involve repeated random sampling, so they assume the input source is the same. The distinction is precisely where the randomness comes from: a distribution you specify (Monte Carlo) versus an observed sample you resample (bootstrapping).
Before selecting any answer about Monte Carlo's inputs or requirements, ask: "Am I describing what the analyst provides to the simulation?" If the answer involves historical records rather than distributional assumptions, the answer describes bootstrapping, not Monte Carlo.
🧠
Memory Aid
FORMULA HOOK
You bring the distribution; Monte Carlo brings the draws. Bootstrapping brings its own data.
Practice Questions · LO2
6 Questions LO2
Q 1 of 6 – REMEMBER
Which of the following best describes the primary input that Monte Carlo simulation requires from the analyst?
CORRECT: C
Correct: C, Monte Carlo simulation requires the analyst to specify the probability distribution for each risk factor before the simulation begins. The analyst provides the shape of the distribution (for example, normal or lognormal) and its parameters (mean and standard deviation). The computer then draws random values from that distribution to generate trial outcomes.
Why not A? Using historical return records as the input source describes bootstrapping, not Monte Carlo simulation. Bootstrapping resamples from an observed historical dataset, drawing observations with replacement, without requiring any distributional assumption. Monte Carlo does not need a historical sample at all, it generates synthetic data from the distribution you specify, even if no historical data exists for the asset.
Why not B? Monte Carlo is precisely the tool you use when no closed-form pricing formula exists. Analytical methods like Black-Scholes require a formula. Monte Carlo is the alternative for complex or path-dependent securities for which no such formula exists. Requiring a formula as an input would defeat the method's entire purpose.
---
Q 2 of 6 – UNDERSTAND
A quantitative analyst says: "Monte Carlo simulation is most valuable when an analytical formula exists, because it confirms the formula's output." Which part of this statement is incorrect?
CORRECT: A
Correct: A, Monte Carlo's defining advantage is that it can value complex securities and estimate risk distributions even when no closed-form pricing equation exists. Path-dependent instruments like Asian options, lookback options, and mortgage-backed securities with embedded options have no simple formula. Monte Carlo handles them by simulating thousands of complete price paths and averaging the resulting payoffs. When a formula does exist, the analytical method is typically preferred because it is faster, exact, and reveals cause-and-effect relationships that Monte Carlo's output cannot.
Why not B? This overstates the limitation. Monte Carlo can produce estimates consistent with an analytical formula, and analysts sometimes use both methods as a cross-check. The issue is not that confirmation is impossible, it is that using Monte Carlo solely to confirm a formula you already have is an inefficient application of the tool. The statement is wrong because of where it places "most valuable," not because Monte Carlo and formulas never interact.
Why not C? Treating Monte Carlo as primarily a verification tool for closed-form solutions inverts its purpose entirely. The curriculum is explicit: Monte Carlo is used for securities "for which there is no given formula to arrive at their value." Selecting this option reflects a misunderstanding of the method's core use case, it is a substitute for a formula that does not exist, not a checker for one that does.
---
Q 3 of 6 – APPLY
Nadia Okonkwo is a risk manager at Pinnacle Fund Services in Lagos. She wants to estimate the Value at Risk for a portfolio containing five correlated asset classes. She specifies a normal distribution with estimated mean and standard deviation for each asset class, plus a correlation matrix linking their movements. She runs 2,000 trials. What does each single trial in Nadia's simulation produce?
CORRECT: B
Correct: B, Each simulation trial draws one random value from each asset class's normal distribution, applies the correlation structure to link those draws realistically, and combines them into a single portfolio return for that trial. After 2,000 trials, Nadia has 2,000 possible portfolio returns. She then reads the distribution of those 2,000 values: the worst 5% of outcomes, for example, gives her the 95% Value at Risk. The single trial produces one data point; the aggregate of all trials produces the risk estimate.
Why not A? A single trial does not produce an exact Value at Risk figure. The Value at Risk is a statistic derived from the full distribution of all trial outcomes, no individual trial has that information. It is just one random draw. Exact mathematical derivation from distributional parameters would describe an analytical method, not a simulation. Monte Carlo generates statistical estimates by aggregating many trials, not by solving an equation.
Why not C? Drawing a randomly selected past date from historical records describes bootstrapping, not Monte Carlo. Nadia has already specified normal distributions with parameters for each asset class. The computer draws from those specified distributions, it does not consult historical return records. If Nadia wanted to resample from actual past portfolio returns without assuming normality, she would use bootstrapping instead.
---
Q 4 of 6 – APPLY+
Tomás Ferreira is a derivatives analyst at Soleado Capital in Lisbon. He is pricing a barrier option: the option expires worthless if the underlying stock price touches €60 at any point during the six-month life, regardless of where the price ends up at maturity. The current stock price is €50 and the strike is €55. Tomás's colleague argues that running 500 simulation trials is sufficient to produce a reliable estimate. Tomás disagrees and runs 10,000 trials instead. Which of the following best justifies Tomás's decision?
CORRECT: C
Correct: C, The Monte Carlo estimate is the mean of all trial payoffs, discounted to today. This mean is a statistical estimate, and like any sample mean, it has a standard error equal to the population standard deviation divided by the square root of the number of trials. Running 10,000 trials instead of 500 reduces the standard error by a factor of roughly 4.5 (the square root of 10,000 divided by 500 is approximately 4.47), making the estimate considerably more precise. This is the specific mechanism by which more trials improve the reliability of a Monte Carlo result.
Why not A? The number of simulation trials has no effect on the distributional parameters the analyst specifies. Volatility, mean, and distribution shape are inputs the analyst provides before the simulation begins. Changing the number of trials does not alter them. Confusing trial count with parameter estimation reflects a misunderstanding of what the analyst controls versus what the simulation generates.
Why not B? Whether a simulated path crosses the €60 barrier depends on the specified distributional parameters and the random draws in each trial, not on the total number of trials. Even a single trial could produce a path that crosses €60. The concern is not whether the barrier is ever touched across all trials, but whether the average of all trial payoffs is a stable, reliable estimate of the option's expected value. That stability requires many trials because of the reason explained in the correct answer.
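The effect Tomás relies on can be demonstrated with a rough simulation. Everything beyond the €50 price, €55 strike, and €60 barrier is an assumption here (20% volatility, a 3% risk-free rate used as the drift in the risk-neutral convention, daily steps), so treat this as a sketch of the mechanism, not a pricing model.

```python
import math
import random

def barrier_call_estimate(n_trials, seed=11, s0=50.0, strike=55.0,
                          barrier=60.0, sigma=0.20, rf=0.03,
                          steps=126, t=0.5):
    """Knock-out call: worthless if the price touches the barrier at any
    point along the path. Volatility, rate, and step count are assumed
    values the question does not specify; drift equals the risk-free
    rate, a common (risk-neutral) pricing convention."""
    rng = random.Random(seed)
    dt = t / steps
    total = 0.0
    for _ in range(n_trials):
        price, knocked_out = s0, False
        for _ in range(steps):
            z = rng.gauss(0, 1)
            price += price * (rf * dt + sigma * z * math.sqrt(dt))
            if price >= barrier:          # barrier touched: option dies
                knocked_out = True
                break
        if not knocked_out:
            total += max(price - strike, 0.0) * math.exp(-rf * t)
    return total / n_trials

# Re-running each trial count with several seeds shows the 500-trial
# estimate wobbling far more than the 10,000-trial estimate.
results = {}
for n in (500, 10_000):
    results[n] = [barrier_call_estimate(n, seed=s) for s in range(5)]
    print(f"{n:>6} trials: min {min(results[n]):.3f}, max {max(results[n]):.3f}")
```

The spread between the minimum and maximum estimates shrinks sharply at 10,000 trials, which is the standard-error argument from the correct answer made visible.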
---
Q 5 of 6 – ANALYZE
Fatima Al-Rashidi's investment committee at Crescent Quant Partners is comparing two approaches: the Black-Scholes formula for a standard European call option and Monte Carlo simulation for an exotic barrier option on the same underlying stock. A committee member claims: "The Black-Scholes result is superior because it shows us exactly why the option price changes when volatility changes. The Monte Carlo result cannot show us that." Is this claim correct?
CORRECT: B
Correct: B, The committee member's claim accurately describes a genuine limitation of Monte Carlo. In the Black-Scholes formula, the option price is an explicit function of volatility, stock price, strike, time, and interest rate. An analyst can calculate delta, vega, and other sensitivities algebraically by differentiating the formula. Monte Carlo produces a histogram of trial outcomes. To measure sensitivity to volatility, the team would need to re-run the entire simulation with a different volatility input and compare the two output distributions, an indirect process that does not reveal the algebraic relationship. The curriculum explicitly names this as a limitation of Monte Carlo relative to analytical methods.
Why not A? This overstates Monte Carlo's transparency. While Monte Carlo allows sensitivity testing by re-running simulations under changed assumptions, a genuine strength, it does not expose the algebraic cause-and-effect structure the way a formula does. The formula makes the relationship between each input and the output mathematically explicit. Monte Carlo's output is a distribution of numbers, not an equation. Claiming identical transparency ignores the structural difference between the two methods.
Why not C? This inverts the correct comparison. Analytical formulas, where they exist, are preferred for their exactness and interpretability. Monte Carlo is not a universal replacement, it is the tool of choice when no closed-form formula exists. Arguing that Monte Carlo makes formulas redundant ignores the computational cost of simulation, the estimation uncertainty it introduces, and the loss of algebraic insight. The correct framing is that each method fits a different situation, not that one dominates the other in all cases.
---
Q 6 of 6 – TRAP
Yelena Marchetti is a portfolio analyst at Dawnridge Investment Partners in Milan. She reads the following description of a risk modelling method: "The method repeatedly samples from a large dataset of observed monthly returns, drawing observations randomly, with replacement, to build a distribution of possible annual portfolio returns. No assumption about the shape of the return distribution is required." Yelena concludes that this description refers to Monte Carlo simulation. Is she correct?
CORRECT: C
Correct: C, The description matches bootstrapping precisely: repeated random sampling with replacement from an observed dataset, with no distributional assumption required. Monte Carlo simulation works differently. The analyst specifies a probability distribution, for example, a normal distribution with mean 6% and standard deviation 12%, before the simulation begins. The computer then draws random values from that specified distribution to generate synthetic scenarios. Monte Carlo does not draw from historical return records; it generates data from a distribution the analyst provides.
Why not A? Both methods involve repeated random sampling, which is where the confusion arises. But the source of the randomness is different. Monte Carlo samples from a distribution you specify analytically. Bootstrapping samples from an observed historical dataset. The phrase "large dataset of observed monthly returns" in the description is the signal, that is historical empirical data, which identifies the method as bootstrapping. "Observed data" and "specified distribution" are not the same input.
Why not B? This option reinforces the trap directly. Monte Carlo does require a distributional assumption, it is the defining feature of the method. The analyst must choose the distribution type (normal, lognormal, and so on) and provide its parameters (mean, standard deviation) before a single trial can run. It is bootstrapping, not Monte Carlo, that operates without a distributional assumption. Selecting this option means accepting the core misconception that Monte Carlo is distribution-free. It is not.
---
Glossary
risk factor
Any variable whose uncertain future value affects the outcome you are trying to estimate or value. In a stock portfolio, risk factors include equity returns, interest rates, and exchange rates. Think of a risk factor as any dial you do not fully control that influences the result.
probability distribution
A mathematical description of all possible values a variable can take and how likely each value is. A normal distribution (bell curve) says most outcomes cluster near the average, with increasingly rare outcomes further away. Like a weather forecast showing 70% chance of rain and 30% chance of sun, but for a continuous range of outcomes.
simulation trial
One complete run of a Monte Carlo simulation, producing a single possible outcome by drawing random values from the specified distributions. Running 1,000 trials produces 1,000 possible outcomes. One trial is like rolling all your dice once; 1,000 trials is rolling them 1,000 times to see the full range of results.
bootstrapping
A resampling method that builds a distribution of outcomes by drawing observations randomly, with replacement, from an existing historical dataset. It requires no assumption about the shape of the underlying distribution. Like shuffling a deck of cards that represents your observed data and repeatedly dealing hands to see what combinations come up.
Asian option
A type of exotic option whose payoff depends on the average price of the underlying asset over the option's life, not just the final price at expiry. Because the payoff depends on the entire price path, no simple closed-form formula exists, making Monte Carlo the standard valuation tool.
standard normal random variable
A random variable drawn from a normal distribution with mean zero and standard deviation one. In Monte Carlo simulations of stock prices, each random draw of this variable converts into a price change using the volatility and time-step parameters. A value of 0 means the asset moved exactly as expected; a value of 2 means it moved two standard deviations above expectations.
embedded options
Features built into a security that give either the issuer or the holder the right to take some action before maturity, for example, a mortgage borrower's right to prepay early if interest rates fall. These features make the security's cash flows path-dependent and uncertain.
Value at Risk
A statistical estimate of the maximum loss a portfolio is likely to suffer over a given time horizon at a specified confidence level. For example, a 95% one-day Value at Risk of £500,000 means there is a 5% chance of losing more than £500,000 in a single day. It is an output of risk modelling, not an input.
standard error
A measure of how much a sample statistic (like a sample mean) varies from one sample to another. In Monte Carlo simulation, the standard error of the estimated value decreases as the number of trials increases, specifically, it falls proportionally to one divided by the square root of the number of trials. Quadrupling the number of trials cuts the standard error in half.
bootstrap resample
One complete draw from a bootstrapping procedure, a new synthetic dataset created by sampling observations from the original historical dataset with replacement. Each resample is the same size as the original dataset but may include some observations more than once and others not at all, because replacement allows repetition.
LO 2 Done ✓
Ready for the next learning objective.
Quantitative Methods · Simulation Methods · LO 3 of 3
Why would you resample the same historical data over and over instead of asking a statistician what distribution to assume?
Bootstrap solves the problem when you have no idea what the true population distribution looks like: you treat your observed sample as if it were the entire population and resample from it.
⏱ 8min-15min
·
3 questions
·
LOW PRIORITY · UNDERSTAND
Why this LO matters
Bootstrap solves the problem when you have no idea what the true population distribution looks like: you treat your observed sample as if it were the entire population and resample from it.
INSIGHT
Bootstrap solves a real problem. You have data. You have no theory about where that data came from.
Instead of guessing a distribution, bootstrap treats your historical sample as if it were the entire population. It then resamples from that sample with replacement, meaning some data points appear multiple times in a given resample, and others not at all. Repeat this thousands of times and the empirical distribution of your resampled outcomes becomes your best estimate of what the population could produce.
Bootstrap asks almost nothing of you: just feed it data. Monte Carlo, by contrast, demands that you specify a probability distribution upfront. When you don't know what that distribution is, bootstrap is the answer.
How bootstrap works, and when to use it instead of Monte Carlo
Think about a musician studying for a quiz on a song they've never heard before. One approach: assume the song follows standard pop structure (verse, chorus, bridge) and memorise that template. That's Monte Carlo, you assume a known pattern and work from it.
The other approach: listen to the song on repeat, pause it at random points, and note what comes next each time. Eventually, your notes give you a picture of the song's structure directly from the evidence. That's bootstrap, you let the observed data speak for itself, without assuming any template upfront.
Both approaches help you learn. But when you genuinely don't know whether the song follows standard structure, the second approach is the only honest one.
Bootstrap vs. Monte Carlo, the four distinctions that matter on exam day
1
The core difference: specification vs. observation. Monte Carlo requires you to specify a probability distribution before generating any outcomes. Bootstrap requires no distribution at all; it uses what you actually observed. This single distinction drives every exam question on this LO.
2
When to use bootstrap. Use bootstrap when you have a sample of historical data but no reliable knowledge of the true population probability distribution. Rejected normality? Rejected lognormality? Bootstrap is the answer.
3
When to use Monte Carlo. Use Monte Carlo when you know or can credibly assume the distributions that drive the underlying variables. If you are comfortable saying "returns are normally distributed with mean 6% and standard deviation 12%," Monte Carlo can use that specification.
4
The bootstrap procedure itself. Draw K random values from your observed sample with replacement (the same observation can appear more than once), compute a quantity of interest from those drawn values (such as an option value or portfolio return), then repeat this process many times, typically 1,000 or more iterations. The result is a distribution of outcomes built entirely from the empirical distribution of your historical data.
FORWARD REFERENCE
Probability distributions, what you need for this LO only
A probability distribution is a mathematical description of how likely each possible outcome is for a random variable. For bootstrap, you do not need to specify any distribution upfront. You use the empirical distribution instead: the actual historical frequencies you observed in your data. For this LO, you only need to recognise that bootstrap extracts its random values from observed data frequencies, whereas Monte Carlo extracts them from a pre-specified theoretical distribution. You will study probability distributions fully in Quantitative Methods Module 2.
Seeing bootstrap in action
The thinking flow below makes the distinction concrete. Read it slowly the first time. On exam day, you will run through it in under 90 seconds.
Worked Example 1
Identifying when bootstrap applies instead of Monte Carlo
Priya Menon is a quantitative analyst at Meridian Capital, a hedge fund in Singapore. She wants to simulate the value of an Asian-style contingent claim on a commodity index. She has 15 years of monthly price observations for the index. Prior tests have rejected normality, lognormality, and every other standard distribution her team has tried. Her colleague suggests two approaches: Monte Carlo simulation or bootstrap resampling. Priya needs to identify which method fits her situation and explain why.
🧠 Thinking Flow – Choosing bootstrap when the distribution is unknown
The question asks
Which simulation method is appropriate when the analyst has historical data but cannot identify the true probability distribution?
Key concept needed
The core difference between bootstrap and Monte Carlo. Monte Carlo requires a pre-specified probability distribution. Bootstrap requires only an empirical distribution drawn from observed data. Candidates who assume both methods require distribution specification will incorrectly select Monte Carlo for every simulation scenario.
Step 1, Identify the signal condition
Many candidates first ask: "Which method is more accurate?" That is the wrong question.
The correct question is: "Does the analyst know the probability distribution of the underlying variable?"
Priya's team has rejected every standard distribution. She does not know the true population distribution. That is the signal: use bootstrap, not Monte Carlo.
Step 2, Apply the method definition
The wrong approach is Monte Carlo. Monte Carlo requires Priya to specify upfront: "Prices are distributed according to distribution X with parameters Y and Z." She cannot do this; every candidate distribution has already been rejected. So Monte Carlo is ruled out.
The correct approach is bootstrap. Bootstrap treats the observed 15-year monthly price history as if it were the entire population. It draws K random values from that historical dataset with replacement, so any individual monthly observation can be selected more than once in a given draw. Those drawn values become a simulated price path, and the contingent claim value is computed at the end of that path. This process repeats many times, typically 1,000 or more iterations, to build a distribution of simulated claim values.
No distribution assumption is made anywhere in this process.
Step 3, Sanity check
If bootstrap is correct, the output should depend entirely on the historical price observations Priya already has, not on any parameter she has to guess or assume.
That is true. The 15-year price history is the only input that varies between resamples. No distribution is ever specified. The sanity check holds.
Answer
Priya should use bootstrap resampling. Her situation matches the defining condition: she has an observed sample but no knowledge of the true population probability distribution. Bootstrap treats that observed sample as the population, resamples from it with replacement, and produces a valid statistical estimate of the contingent claim value using only the empirical evidence she already has.
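Priya's procedure might be sketched roughly as follows. The index prices, the strike, and the Asian-style payoff on the path average are all hypothetical, and discounting is omitted; the point is that the observed price history is the only stochastic input.

```python
import random

def simulate_claim_value(prices, k=12, n_iterations=1000,
                         strike=105.0, seed=7):
    """Bootstrap valuation sketch: each iteration draws k observed
    prices with replacement to form one simulated path, then values
    an Asian-style claim on the average price of that path."""
    rng = random.Random(seed)
    claim_values = []
    for _ in range(n_iterations):
        path = rng.choices(prices, k=k)         # resample with replacement
        average_price = sum(path) / len(path)   # Asian-style: use the average
        claim_values.append(max(average_price - strike, 0.0))
    # The estimate is the mean simulated payoff (no discounting here).
    return sum(claim_values) / len(claim_values)

# Hypothetical monthly index prices; Priya's dataset would have 180.
observed_prices = [98.0, 102.5, 110.3, 95.7, 120.1, 108.4, 99.9, 115.6]
estimate = simulate_claim_value(observed_prices)
```

Note that no distributional parameters appear anywhere: the resampling pool is the raw historical data itself.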
⚠️
Watch out for this
The "Bootstrap Needs a Distribution Too" trap.
A candidate who conflates bootstrap with Monte Carlo will conclude that bootstrap "must specify probability distributions for key risk factors." That requirement defines Monte Carlo, not bootstrap. The correct answer is that bootstrap requires no pre-specified distribution: it uses the observed historical sample directly as a stand-in for the population. Candidates make this error because both methods run many simulated trials, so they assume both share the same setup step. Monte Carlo requires you to hand it a distribution before it can run; bootstrap pulls its randomness entirely from the empirical data you already hold. Before selecting an answer about simulation methods, ask one question first: does the analyst know the true probability distribution, or only have historical observations?
🧠
Memory Aid
ACRONYM
SHED, the four features that define bootstrap resampling.
S
S, Sample is the population โ Treat the observed historical data as if it were the entire population.
H
H, History only โ No distribution specification required. Empirical data is the only input.
E
E, Equal size, with replacement โ Each resample draws the same number of observations as the original sample, and any observation can be drawn more than once.
D
D, Distribute the results โ Repeat many times to build a distribution of the statistic of interest.
When a question asks you to choose between bootstrap and Monte Carlo, run through SHED. If the analyst has history but no known distribution, every letter of SHED applies and bootstrap is the answer. If an answer option claims bootstrap "requires a distribution," the H in SHED catches that error immediately.
Practice Questions · LO3
Q 1 of 3 – REMEMBER
Which of the following best describes the key feature that distinguishes bootstrap resampling from Monte Carlo simulation?
CORRECT: C
CORRECT: C, Bootstrap's defining feature is that it uses the observed historical data directly, treating the sample as if it were the entire population. It draws new samples of equal size from that data with replacement. No distribution is ever specified beforehand. The empirical data is the only input. That is what separates bootstrap from every other simulation method.
Why not A? This describes Monte Carlo, not bootstrap. Monte Carlo requires you to provide a distribution, for example, "returns are normally distributed with mean 6% and standard deviation 12%", before it can generate any random outcomes. Bootstrap requires no such specification. Choosing A confuses the two methods by assigning Monte Carlo's requirement to bootstrap.
Why not B? This option describes a hybrid that matches neither method correctly. Fitting a theoretical distribution to observed data and then drawing from that fitted distribution is a Monte Carlo setup, not bootstrap. Bootstrap never fits a theoretical curve to the data. It draws directly from the raw observed values themselves, leaving the shape of the distribution entirely implicit in the historical data.
---
Q 2 of 3 – UNDERSTAND
An analyst is evaluating a portfolio strategy but acknowledges that the returns of the underlying assets do not follow any recognisable standard distribution. Tests have rejected normality, lognormality, and every other common model. Which statement best explains why bootstrap resampling is appropriate in this situation?
CORRECT: A
CORRECT: A, The exact situation described, an analyst with historical data but no reliable distributional model, is precisely the problem bootstrap was designed to solve. Because bootstrap treats the sample as the population and draws from it directly, it never needs to know or assume what the true distribution looks like. The observed data carries all the information the method requires.
Why not B? Bootstrap does not fit parametric distributions to the data. That description matches a different class of methods entirely. Bootstrap makes no distributional assumptions and estimates no distribution parameters. Its power comes from making no parametric claims at all. It is non-parametric by design. An analyst who believes bootstrap automatically selects the best-fitting distribution misunderstands the core mechanism.
Why not C? Bootstrap does not require normally distributed returns. Requiring normality would make bootstrap useless in exactly the situation where it is most valuable: when standard distributions have been rejected. The central limit theorem is relevant in some statistical contexts, but it is not a prerequisite for bootstrap validity. Bootstrap works through repeated resampling from the empirical data, and its validity comes from that repeated drawing process, not from any assumption about the shape of the distribution.
---
Q 3 of 3 – TRAP
Marcus Okafor is a risk analyst at a commodities trading firm in Lagos. He wants to estimate the distribution of daily profit and loss for a derivatives portfolio. He has three years of daily P&L observations. A colleague argues that bootstrap resampling "must require specifying the volatility distribution before it can generate any simulated outcomes, just like any other simulation method." Marcus believes the colleague is wrong. Which response best supports Marcus's position?
CORRECT: B
CORRECT: B, Bootstrap requires no pre-specified distribution parameters. Not the mean. Not the standard deviation. Not the shape. Marcus is right to push back. Bootstrap treats the three years of observed daily P&L values as the sampling pool and draws from them directly. Any observation from the historical dataset can appear in any resample, including more than once. The method's entire purpose is to sidestep distributional assumptions by relying entirely on empirical evidence.
Why not A? This is the exact trap this question targets. The colleague's error, and the error in option A, is assuming that bootstrap shares Monte Carlo's requirement to specify distribution parameters upfront. Monte Carlo needs a distribution before it can generate random outcomes. Bootstrap does not. It replaces that specification step with direct resampling from observed data. Choosing A means confusing the defining difference between the two methods.
Why not C? This option invents a distinction, "shape but not parameters", that does not correspond to how either bootstrap or Monte Carlo actually works. Bootstrap specifies neither shape nor parameters. It makes no distributional assumptions at all. Monte Carlo requires both a distributional form and the parameters that define it. There is no hybrid middle ground where the shape is required but the parameters are not.
---
Glossary
bootstrap resampling
A computational method that treats an observed sample as if it were the entire population, then repeatedly draws new samples of equal size from it with replacement, so any observation can appear multiple times in a single resample. Imagine shuffling a deck of cards, dealing a hand, recording what you got, putting all the cards back, and repeating thousands of times to learn about the deck's composition without knowing in advance what cards it contains.
Monte Carlo simulation
A computational method that generates many random outcomes according to a probability distribution you specify in advance, then analyzes the results across all those outcomes. It is like rolling a fair die 10,000 times: you already know the die is fair and six-sided, and you use that known structure to compute the probabilities of various totals.
probability distribution
A description of how likely each possible outcome is for a random variable. If you roll a fair die, each face has a 1-in-6 chance; that is a probability distribution. In finance, distributions describe which returns or prices are more or less likely to occur.
empirical distribution
The distribution you observe directly from actual data, rather than assuming a theoretical shape like a bell curve. If you record daily stock returns for five years and count how many times returns fell in each range, you have an empirical distribution. It shows what actually happened in your data, not what a formula predicts should have happened.
with replacement
A drawing procedure where each selected item is returned to the pool before the next draw, so the same item can be selected again. Pulling a name from a hat, recording it, putting it back, and drawing again is "with replacement." The second draw can produce the same name as the first.
contingent claim
A financial instrument whose payoff depends on whether a specific condition is met at a future date. A bet that pays you $100 if gold exceeds $2,000 per ounce on a specific date and pays nothing otherwise is a contingent claim. The payment is contingent on the condition being true.
resampling
The process of repeatedly drawing samples from an existing dataset to study how a statistic varies across different subsets. You can study the variability of an average without deriving complex formulas: resample many times and observe how the average changes across draws.
LO 3 Done ✓
You have completed all learning objectives for this module.
PRO Feature
How analysts use this at work
Real-world applications and interview questions from top firms.
Modelling asset prices, valuing complex securities, and estimating risk without a formula
LO 1
Asset price distribution: why lognormal fits prices and normal does not
How analysts use this at work
Fixed income and derivatives teams at firms like PIMCO and JPMorgan use lognormal distribution assumptions every time they price an option or build a structured product. A quantitative analyst at PIMCO building a mortgage-backed securities pricing model must assume that the underlying pool of home prices follows a lognormal path over time. The reason is mathematical and practical: if the analyst assumes returns are normally distributed, prices emerge as lognormally distributed automatically. This makes the floor at zero a built-in property, not a guess. The analyst encodes this assumption in their model, and every trade the desk executes rests on it.
Portfolio managers at firms like Vanguard and T. Rowe Price use the distinction between normal and lognormal distributions when communicating with clients about risk. A portfolio's expected return over time is not symmetric. A balanced fund that returns 40% in a bull market and loses 15% in a bear market has a right-skewed distribution of outcomes, not a symmetric one. Risk consultants at Mercer use this distinction when constructing forward-looking return scenarios for pension funds. They must choose the right distribution type before they can generate plausible scenarios. Using normal instead of lognormal would allow negative price simulations that are economically impossible.
Interview questions
PIMCO Quantitative Analyst "A colleague tells you that asset prices follow a normal distribution and returns follow a lognormal distribution. How do you respond, and what is the correct relationship?"
Goldman Sachs Investment Analyst "A stock priced at $100 has a continuously compounded return of 8% over one year. What is the expected stock price at year-end, and what distribution does it follow?"
State Street Quantitative Researcher "You are asked to simulate 10,000 possible year-end prices for a stock with an expected annual return of 10% and annual volatility of 20%. Walk me through how you set up this simulation and what distribution the simulated prices will follow."
One-line to use in your interview
Interviewers listen for industry-specific language. It signals you understand the concept, not just the definition. Use the plain English version to adapt the line into your own words.
In practice, I always model asset prices as lognormal and returns as normal, because the exponential transformation that connects them enforces the zero floor that a normal price distribution cannot. That single assumption is what makes option pricing formulas tractable.
In plain English
A stock price cannot be negative, but a normal curve allows it. The fix is to work with returns, which can be negative, and then exponentiate them to get prices. That exponentiation automatically keeps prices above zero and creates the right-skew shape that real price data shows.
LO 2
Monte Carlo simulation: when no formula exists, run the numbers anyway
How analysts use this at work
Derivatives desks at investment banks and hedge funds use Monte Carlo simulation to value securities that have no closed-form price formula. An analyst at JPMorgan pricing a collateralised debt obligation cannot use Black-Scholes because the instrument's payoff depends on the entire path of mortgage prepayments, not just the final value. Instead, the analyst specifies distributions for interest rates and prepayment speeds, then runs thousands of simulated paths to estimate what the security is worth today. Each simulation trial produces one possible cash flow history. The average discounted payoff across all trials is the estimated price. The analyst delivers this estimate to the trading desk as the fair value reference for the product.
Risk management teams at firms like BlackRock and Bridgewater use Monte Carlo to estimate portfolio-level risk statistics that have no analytical solution. A risk analyst at BlackRock modelling a multi-asset portfolio with five correlated risk factors cannot derive a single Value at Risk number from a formula. She specifies a normal distribution for each factor, links them with a correlation matrix, and runs 10,000 trials. Each trial produces one possible annual portfolio return. The bottom 5% of those returns becomes the 95% VaR estimate. This approach is slower than a closed-form calculation but it handles correlations and non-linear payoffs that formulas cannot touch.
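The VaR workflow described above might look something like the sketch below. The factor means, covariance matrix, and portfolio weights are invented numbers, and jointly normal factors are assumed purely to mirror the example; the key Monte Carlo feature is that the distribution is specified before any trial runs.

```python
import numpy as np

def monte_carlo_var(mu, cov, weights, n_trials=10_000,
                    confidence=0.95, seed=0):
    """Monte Carlo VaR sketch: specify a joint distribution for the
    risk factors, simulate many portfolio returns, and read VaR off
    the left tail of the simulated distribution."""
    rng = np.random.default_rng(seed)
    # Each row is one trial: a draw of all correlated factor returns.
    factor_returns = rng.multivariate_normal(mu, cov, size=n_trials)
    portfolio_returns = factor_returns @ weights
    # 95% VaR is the loss at the 5th percentile of simulated returns.
    return -np.percentile(portfolio_returns, 100 * (1 - confidence))

# Hypothetical two-factor setup: means, covariance, portfolio weights.
mu = np.array([0.06, 0.03])
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])
weights = np.array([0.6, 0.4])
var_95 = monte_carlo_var(mu, cov, weights)
```

With more trials the estimate stabilises, but it remains a statistical estimate rather than an exact figure.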
Interview questions
Citadel Quantitative Analyst "You need to price an Asian option whose payoff depends on the average stock price over six months. Black-Scholes does not apply. Walk me through how Monte Carlo simulation works in this context, step by step."
Two Sigma Portfolio Risk Manager "An analyst tells you that Monte Carlo simulation is most useful as a cross-check when you already have a closed-form answer. Another analyst says Monte Carlo is only used when no closed-form formula exists. Who is correct, and why?"
Bridgewater Risk Analyst "You run a Monte Carlo simulation with 500 trials and get a portfolio VaR estimate of $12 million. Your manager asks you to run it again with 50,000 trials. What changes in the output and what does not change, and why?"
One-line to use in your interview
When I encounter a valuation problem with no formula, I default to Monte Carlo simulation. I specify the probability distributions for each risk factor, generate thousands of complete price paths, and average the discounted payoffs. The limitation I always keep in mind is that the output is a statistical estimate, not an exact answer, and it tells me what is likely but not why.
In plain English
For complex products, I build a model that generates thousands of possible futures by drawing random outcomes from distributions I choose. I average those outcomes to get a price estimate. The estimate gets more accurate the more futures I generate, but it never becomes a precise number. That is the trade-off versus using a formula.
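The "thousands of possible futures" idea can be sketched as a minimal Monte Carlo pricer for an Asian-style call. All inputs below are hypothetical, discounting is omitted, and each price step is assumed to have a normal log-return, so simulated prices stay positive, which is the lognormal property from LO 1.

```python
import math
import random

def asian_option_mc(s0, mu, sigma, strike, horizon=0.5,
                    n_steps=126, n_trials=5000, seed=1):
    """Simulate many price paths, average the price along each path,
    pay max(average - strike, 0), and average payoffs across trials."""
    rng = random.Random(seed)
    dt = horizon / n_steps
    payoffs = []
    for _ in range(n_trials):
        price, path_sum = s0, 0.0
        for _ in range(n_steps):
            # Normal log-return step => lognormal price, always positive.
            z = rng.gauss(0.0, 1.0)
            price *= math.exp((mu - 0.5 * sigma ** 2) * dt
                              + sigma * math.sqrt(dt) * z)
            path_sum += price
        payoffs.append(max(path_sum / n_steps - strike, 0.0))
    return sum(payoffs) / len(payoffs)  # discounting omitted in this sketch

# Hypothetical inputs: spot 100, 8% drift, 20% volatility, strike 100.
estimate = asian_option_mc(100.0, 0.08, 0.20, 100.0)
```

Note where the Monte Carlo assumption lives: the `rng.gauss` call is the pre-specified distribution that bootstrap would replace with draws from observed data.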
LO 3
Bootstrap resampling: when the data is all you have and you cannot assume anything
How analysts use this at work
Performance attribution teams at institutional asset managers use bootstrap resampling when they need to estimate the distribution of strategy returns but cannot assume any standard distribution form. An analyst at Wellington Management evaluating a new emerging market equity strategy has only 24 months of live track record. Tests reject normality, lognormality, and every standard model the team tried. The analyst cannot run Monte Carlo because specifying a distribution would be arbitrary. Instead, she bootstrap resamples from the 24 observed monthly returns, drawing 12 values with replacement to simulate one hypothetical year, repeats this 1,000 times, and builds the distribution of simulated annual returns directly from the data. No distribution is assumed. No parameters are guessed. The empirical data does all the work.
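The Wellington-style procedure described above might be sketched like this. The 24 monthly returns are invented for illustration; the mechanics — draw 12 observed months with replacement, compound them into one hypothetical year, repeat 1,000 times — follow the steps in the paragraph.

```python
import random

def bootstrap_annual_returns(monthly_returns, n_iterations=1000, seed=3):
    """Each iteration draws 12 observed monthly returns with replacement,
    compounds them into an annual return, and the iterations together
    build a distribution of simulated annual returns."""
    rng = random.Random(seed)
    annual = []
    for _ in range(n_iterations):
        year = rng.choices(monthly_returns, k=12)  # one simulated year
        growth = 1.0
        for r in year:
            growth *= 1.0 + r                      # compound monthly returns
        annual.append(growth - 1.0)
    return annual

# Hypothetical 24-month live track record (decimal returns).
track_record = [0.021, -0.013, 0.034, 0.008, -0.027, 0.015, 0.042, -0.005,
                0.011, -0.019, 0.028, 0.003, -0.031, 0.017, 0.025, -0.008,
                0.013, 0.036, -0.022, 0.009, 0.019, -0.014, 0.030, 0.006]
annual_distribution = bootstrap_annual_returns(track_record)
```

No distributional parameters are estimated at any point; the shape of `annual_distribution` comes entirely from the observed months.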
Quantitative researchers at hedge funds and risk consultancies use bootstrap resampling to validate statistical estimates without relying on distributional assumptions that might not hold. A researcher at a macro hedge fund studying the distribution of daily commodity returns has 8 years of data but knows the returns exhibit crash risk and fat tails that standard distributions miss. She bootstrap resamples from the observed returns, which preserves whatever fat-tailed behaviour actually exists in the data, rather than imposing a normal distribution that would underestimate tail risk. The resampled distribution of risk estimates becomes the basis for stress testing. If she had imposed a normal distribution, the stress scenarios would have been too optimistic.
Interview questions
Neuberger Berman Quantitative Researcher "An analyst tells you that bootstrap resampling requires specifying a normal distribution for the underlying returns. How do you correct this, and what does bootstrap actually require as its input?"
Wellington Management Investment Analyst "You have 30 months of monthly return data for a new strategy. Tests reject all standard distributions. Your manager asks you to estimate the distribution of annual returns. Which method do you use and why, and how does it differ from Monte Carlo simulation?"
Mercer Investment Consultant "Two junior analysts argue about whether to use bootstrap or Monte Carlo for a client's portfolio risk estimate. The first has five years of quarterly return data. The second has a normal distribution with estimated mean and standard deviation for each asset class. How do you advise them, and what is the key distinguishing principle?"
One-line to use in your interview
When I have historical data but no reliable distributional model, I use bootstrap resampling. I treat the sample I have as if it were the entire population and draw from it with replacement thousands of times. The key distinction from Monte Carlo is that I never specify a distribution. The data speaks for itself.
In plain English
If I have a set of actual return observations but cannot guess what shape the broader population follows, I resample from what I actually have. I draw observations randomly, put them back, draw again, and repeat. The distribution of results across all those draws is my estimate. I never assume a bell curve or any other shape.