JFK Approval Simulator
Change how survey respondents rated JFK on key issues and see how his overall approval probability shifts.
It uses the same published Approval model as the All-or-Nothing Simulator, but redistributes responses rather than pinning them.
Shift the mix of responses gradually — for example, moving some respondents from "Poor" to "Only fair" rather than moving everyone at once. Each scenario shows a 95% confidence interval and a 10,000-run Monte Carlo distribution, plus a per-factor sensitivity breakdown.
Note: this simulator and its sibling tools above run only on the published Approval model. To explore drivers of Vote Intention or Tax Cut Support, use the Model Builder or Survey Explorer.
How to Use This Tool
Shift response distributions continuously and see how approval would have changed
1. Try a preset or drag sliders
Use the preset buttons for a quick start, or drag the sliders to change the mix of responses for any variable. The colored bar under each variable's header shows its model leverage — how much the predicted outcome can swing based on that variable alone. Each variable's percentages must sum to 100.
2. Run the scenario
Click Run scenario to send your slider settings through the published logistic regression model. The result shows the projected approval rate and a 95% confidence interval. The How certain is this result? chart shows 10,000 simulated outcomes so you can see how much the baseline and your scenario overlap.
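Conceptually, a run like this can be sketched in a few lines. The coefficients, standard errors, and variable names below are invented for illustration; the published model's actual parameters are not shown on this page.

```python
import math
import random

random.seed(0)

# Hypothetical coefficients for two illustrative variables.
INTERCEPT = 0.3
COEFS = {"economy": 0.8, "civil_rights": 0.5}

def predict(shares):
    """Approval probability for a respondent mix via logistic regression.
    `shares` maps variable -> a distribution-weighted level score in [0, 1]."""
    z = INTERCEPT + sum(COEFS[v] * s for v, s in shares.items())
    return 1 / (1 + math.exp(-z))

def monte_carlo(shares, runs=10_000, se=0.1):
    """Simulate many outcomes by jittering each coefficient with a
    hypothetical standard error, then read off a 95% interval."""
    draws = []
    for _ in range(runs):
        z = INTERCEPT + sum((COEFS[v] + random.gauss(0, se)) * s
                            for v, s in shares.items())
        draws.append(1 / (1 + math.exp(-z)))
    draws.sort()
    lo, hi = draws[int(0.025 * runs)], draws[int(0.975 * runs)]
    return sum(draws) / runs, (lo, hi)

mean, (lo, hi) = monte_carlo({"economy": 0.6, "civil_rights": 0.5})
```

The 10,000 sorted draws are also what the "How certain is this result?" histogram is built from: run the same simulation once for the baseline mix and once for your scenario, then compare the two distributions.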
3. Explore each factor
Click Explore Each Factor after a run to see a full per-variable sensitivity breakdown. Each card shows the predicted approval if that one variable's distribution were set to 100% at each level, holding your other sliders fixed. Combined best/worst cards show the result of every variable's best or worst level at once.
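The per-variable breakdown amounts to a one-at-a-time sweep: fix every slider except one, then force that one variable to 100% at each of its levels. A minimal sketch, with made-up coefficients and level scores standing in for the published model's:

```python
import math

# Hypothetical parameters, for illustration only.
INTERCEPT = 0.3
COEFS = {"economy": 0.8, "civil_rights": 0.5}
LEVELS = {"Poor": 0.0, "Only fair": 0.5, "Good": 1.0}

def predict(shares):
    z = INTERCEPT + sum(COEFS[v] * s for v, s in shares.items())
    return 1 / (1 + math.exp(-z))

def explore_factor(current, variable):
    """Predicted approval if `variable` were set to 100% at each level,
    holding every other variable at its current slider setting."""
    results = {}
    for level, score in LEVELS.items():
        shares = dict(current)
        shares[variable] = score
        results[level] = predict(shares)
    return results

cards = explore_factor({"economy": 0.6, "civil_rights": 0.5}, "economy")
```

The combined best/worst cards follow the same idea, except every variable is moved to its highest- or lowest-leverage level simultaneously.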
4. Review past scenarios
Every run is saved to the Saved Scenarios drawer at the bottom of the page. Each card shows which variables you shifted from baseline and the predicted outcome. Click Load to restore a past scenario's slider settings, or Remove to drop it.
5. Shift distributions gradually
Unlike the All-or-Nothing simulator, sliders let you move just some respondents from one answer to another — for example, shifting 10% of "Poor" ratings to "Only fair." This tests realistic, incremental shifts rather than extreme all-or-nothing scenarios. Make sure each variable's totals still sum to 100%.
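The redistribution in this step is simple bookkeeping: move some percentage points from one answer to another and confirm the total still sums to 100. A sketch, using made-up baseline percentages:

```python
def shift(dist, src, dst, points):
    """Move `points` percentage points of respondents from `src` to `dst`,
    never moving more than `src` actually holds."""
    moved = min(points, dist[src])
    new = dict(dist)
    new[src] -= moved
    new[dst] += moved
    assert abs(sum(new.values()) - 100.0) < 1e-9  # totals must stay at 100
    return new

baseline = {"Good": 30.0, "Only fair": 45.0, "Poor": 25.0}
scenario = shift(baseline, "Poor", "Only fair", 10.0)
# scenario: {"Good": 30.0, "Only fair": 55.0, "Poor": 15.0}
```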
6. Reset and iterate
Click Reset to baseline to return all sliders to their November 1963 starting distributions and clear the results. The baseline of 56.9% is the model's estimate of actual November 1963 approval. Try different combinations to see which factors matter most — or least.
Set the distribution
The number on each card is the variable's swing on this page — how far the prediction would move if you concentrated all responses in this variable's lowest-leverage level versus its highest-leverage level, using the real values within each level. A bigger swing means redistributing this variable matters more.
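In other words, the swing is the gap between the best- and worst-case single-variable predictions. A sketch of that calculation, with a hypothetical one-variable contribution (the other variables' baseline contribution is folded into a fixed offset here):

```python
import math

# Hypothetical: one variable with coefficient 0.8, other variables
# contributing a fixed baseline offset of 0.3 to the linear predictor.
LEVELS = {"Poor": 0.0, "Only fair": 0.5, "Good": 1.0}

def predict_at(level_score, baseline_offset=0.3, coef=0.8):
    return 1 / (1 + math.exp(-(baseline_offset + coef * level_score)))

preds = {lvl: predict_at(score) for lvl, score in LEVELS.items()}
swing = max(preds.values()) - min(preds.values())
```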
Note: this page doesn't pin everyone to one specific value. It shifts what kind of responses show up in the model's average — emphasizing some, de-emphasizing others — so the answers stay grounded in situations the data actually contains. That's why swings here are smaller than on the All-or-Nothing page, where every respondent is set to the same extreme value.
Predicted outcome
How did you change the survey?
Each pair of bars compares the actual distribution (green) to the distribution under your scenario (red where you changed it). This is what you changed — not the predicted impact, just the input.
How certain is this result?
Every prediction has wiggle room — these histograms show how much. The green bars are the plausible answers for the baseline; the red bars are the plausible answers for your scenario. Where the colors overlap, the two answers are close enough that the model can't cleanly tell them apart.
Calibration: predicted vs. observed
How closely the model's predicted probabilities track the observed outcome rates, binned by predicted decile. Bubble size shows respondents per bin. The dashed diagonal is perfect calibration; the blue scenario line is your run; the gray line (when present) is the unmodified base model for comparison.
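Decile binning of this kind can be sketched as follows: sort respondents by predicted probability, cut them into ten equal-count groups, and compare each group's mean prediction to its observed outcome rate. The data below is synthetic, purely to show the shape of the computation:

```python
import math

def calibration_bins(preds, outcomes, n_bins=10):
    """Group (prediction, outcome) pairs into equal-count bins by predicted
    probability; return (mean predicted, observed rate, count) per bin."""
    pairs = sorted(zip(preds, outcomes))
    size = math.ceil(len(pairs) / n_bins)
    return [
        (sum(p for p, _ in chunk) / len(chunk),   # mean predicted
         sum(o for _, o in chunk) / len(chunk),   # observed rate
         len(chunk))                              # bubble size
        for chunk in (pairs[i:i + size] for i in range(0, len(pairs), size))
    ]

# Synthetic example: 100 predictions, alternating 0/1 outcomes.
preds = [i / 100 for i in range(100)]
outcomes = [i % 2 for i in range(100)]
bins = calibration_bins(preds, outcomes)
```

Plotting mean predicted against observed rate per bin, with point size proportional to count, reproduces the bubble chart; points on the dashed diagonal are perfectly calibrated.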
Explore each factor
For each variable, see how the predicted outcome would change if you switched that one distribution to 100% at each level, holding all your other selections fixed. This reveals which individual levels have the most leverage given your current scenario.