NBA Shot Simulator
33,362 three-pointers from the 2014-15 season — made or missed. League average: about 35%. What shot conditions drove that number, and what would have changed it?
This simulator runs on the published model — Electric Insights' release-day explanatory account of NBA three-point make rates.
Pin every shot to one chosen condition per factor and see how the overall make rate shifts. Filter to a single player to see how their profile differs from the league.
How to Use This Tool
Test what drove three-point make rates in the 2014-15 NBA season
1. Choose league or player
Start with All players to use the league-wide published model, or switch to a player such as Stephen Curry to re-fit the six-variable model on that player's shots only. The shot count and baseline make rate under the selector show which sample you're analyzing.
2. Try a preset or set your own
Use the preset buttons for a quick start, or set each dropdown manually. The colored bar under each variable's label shows its model leverage — the make-rate swing between its best and worst levels.
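A variable's leverage can be computed directly from a fitted logistic model: predict the make rate at each of the variable's levels with everything else held at baseline, then take the spread. The sketch below uses made-up per-level log-odds effects (not the published model's actual coefficients) purely to illustrate the calculation.

```python
import math

# Hypothetical per-level log-odds effects for one variable -- illustrative
# values only, not the published model's coefficients.
intercept = -0.62  # baseline log-odds (about a 35% make rate)
defender_distance_effects = {
    "0-2 ft": -0.55, "2-4 ft": -0.25, "4-6 ft": 0.0, "6+ ft": 0.30,
}

def make_rate(log_odds):
    # Logistic link: convert log-odds to a probability.
    return 1.0 / (1.0 + math.exp(-log_odds))

# Leverage = the make-rate swing between the variable's best and worst
# levels, holding everything else at the baseline.
rates = [make_rate(intercept + b) for b in defender_distance_effects.values()]
leverage = max(rates) - min(rates)
print(f"leverage: {leverage:.1%}")
```

The colored bar under each variable simply ranks these swings, so a long bar means the model's predictions move a lot across that variable's levels.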
3. Run the scenario
Click Run scenario to send your settings through the fitted model. The result shows the projected make rate and a 95% confidence interval. The How certain is this result? chart shows 10,000 simulated outcomes so you can see how much the baseline and your scenario overlap.
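Under the hood, a run like this amounts to evaluating the fitted logistic model at your chosen conditions and attaching an interval. A minimal sketch, with hypothetical numbers standing in for the model's actual linear predictor and standard error:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical fitted values for one scenario -- illustrative only.
log_odds = -0.40   # linear predictor at the chosen conditions
se = 0.08          # standard error of the linear predictor

# A 95% interval is +/- 1.96 SE on the log-odds scale, mapped back to a
# rate; working on the log-odds scale keeps the interval inside [0, 1].
lo = sigmoid(log_odds - 1.96 * se)
hi = sigmoid(log_odds + 1.96 * se)
print(f"projected make rate: {sigmoid(log_odds):.1%} (95% CI {lo:.1%}-{hi:.1%})")
```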
4. Explore each factor
Click Explore Each Factor after a run to see every level of every variable holding the rest of the scenario fixed. This is the quickest way to see whether defender distance, shot clock, or shot distance is doing most of the work.
5. Review past scenarios
Every run is saved to the Saved Scenarios drawer at the bottom. Each card shows which conditions you changed and the predicted make rate. Click Load to restore a past scenario, or Remove to drop it.
6. Reset and compare
Click Reset to baseline to return every dropdown to "No change" and clear the results. Switching players also resets the scenario so the new model starts fresh. The league-wide baseline is ~35%; individual players vary widely from that.
Set the scenario
Predicted outcome
How did you change the shot conditions?
Each pair of bars compares the actual distribution (green) to the distribution under your scenario (red where you changed it). This is what you changed — not the predicted impact, just the input.
How certain is this result?
10,000 simulated outcomes drawn from the model's coefficient uncertainty, for both the baseline (green) and your scenario (red). Where the distributions overlap, the model isn't certain the two would produce different headlines.
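The simulation idea is straightforward: draw the linear predictor from its approximate normal sampling distribution many times and convert each draw to a make rate. The sketch below does this with hypothetical log-odds and standard errors (not the published model's numbers) and reports a rough overlap measure.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# Hypothetical (log-odds, SE) pairs for baseline and scenario.
baseline = (-0.62, 0.05)
scenario = (-0.40, 0.08)

def draws(mean, se, n=10_000):
    # Sample the linear predictor from its approximate normal sampling
    # distribution, then convert each draw to a make rate.
    return [sigmoid(random.gauss(mean, se)) for _ in range(n)]

base_draws, scen_draws = draws(*baseline), draws(*scenario)
# Rough overlap check: how often a baseline draw beats a scenario draw.
overlap = sum(b >= s for b, s in zip(base_draws, scen_draws)) / len(base_draws)
print(f"P(baseline >= scenario) ~= {overlap:.2f}")
```

When this fraction is near 0.5 the two histograms sit on top of each other and the model can't distinguish the scenarios; near 0 or 1, the shift is clear.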
Set-everyone-to-X table
Click to show predicted outcomes for every level of every variable
For each variable's level, the predicted outcome if every shot were taken under that condition, all else unchanged. This is the analytic view of the simulator.
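The set-everyone-to-X calculation can be sketched as: for each level of one variable, override that variable's term for every shot, re-predict, and average. The model and the tiny shot sample below are hypothetical stand-ins, not the published model or real data.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy shots: each tuple is (shot-clock term, defender term) in log-odds.
shots = [(-0.1, 0.2), (0.3, -0.4), (0.0, 0.1), (-0.2, -0.1)]
intercept = -0.62
defender_levels = {"0-2 ft": -0.55, "2-4 ft": -0.25, "4-6 ft": 0.0, "6+ ft": 0.30}

rates = []
for level, effect in defender_levels.items():
    # Replace each shot's defender term with this level; keep its other terms.
    rate = sum(sigmoid(intercept + clock + effect) for clock, _ in shots) / len(shots)
    rates.append(rate)
    print(f"{level}: {rate:.1%}")
```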
Calibration: predicted vs. observed
How closely the model's predicted probabilities track the observed outcome rates, binned by predicted decile. Bubble size shows shots per bin. The dashed diagonal is perfect calibration; the blue scenario line is your run; the gray line (when present) is the unmodified base model for comparison.
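A decile calibration check can be sketched as: sort shots by predicted probability, split them into ten equal bins, and compare each bin's mean prediction to its observed make rate. The data below is synthetic and generated to be well calibrated, just to show the bookkeeping.

```python
import random

random.seed(1)
# Synthetic, perfectly calibrated toy data: each shot is made with
# exactly its predicted probability.
preds = [random.uniform(0.15, 0.55) for _ in range(5000)]
made = [random.random() < p for p in preds]

pairs = sorted(zip(preds, made))
n_bins = 10
size = len(pairs) // n_bins
diffs = []
for i in range(n_bins):
    chunk = pairs[i * size:(i + 1) * size]
    mean_pred = sum(p for p, _ in chunk) / len(chunk)
    obs_rate = sum(m for _, m in chunk) / len(chunk)
    diffs.append(abs(mean_pred - obs_rate))
    print(f"decile {i + 1}: predicted {mean_pred:.2f}, observed {obs_rate:.2f}")
```

On the chart, each (mean prediction, observed rate) pair becomes a bubble; points hugging the dashed diagonal mean the model's probabilities can be taken at face value.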
Explore each factor
For each variable, see how the predicted outcome would change if you switched just that one selection — holding all your other selections fixed. Reveals which individual choices have the most leverage given your current scenario.
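The one-at-a-time exploration differs from the set-everyone-to-X table in that it starts from your current scenario rather than the raw data: swap a single variable's level, keep your other selections fixed, and re-predict. A minimal sketch with hypothetical effect sizes:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

intercept = -0.62
# Hypothetical log-odds contributions of the current scenario's selections.
scenario = {"defender": 0.30, "clock": -0.10, "distance": 0.05}
# Hypothetical levels for the one variable being explored.
clock_levels = {"0-7 s": -0.30, "7-15 s": -0.10, "15-24 s": 0.10}

base = intercept + sum(scenario.values())
explored = []
for level, effect in clock_levels.items():
    # Swap out only the shot-clock term; everything else stays fixed.
    swapped = base - scenario["clock"] + effect
    explored.append(sigmoid(swapped))
    print(f"clock={level}: {sigmoid(swapped):.1%}")
```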