JFK Approval Simulator
Change how survey respondents rated JFK on key issues and see how his overall approval probability shifts.
This tool uses the same published model as the All-or-Nothing Simulator (Electric Insights' release-day explanatory account of Kennedy's approval), but applies redistribution rather than pinning.
Shift the mix of responses gradually — for example, moving some respondents from "Poor" to "Only fair" rather than moving everyone at once.
How to Use This Tool
Shift response distributions continuously and see how approval would have changed
1. Try a preset or drag sliders
Use the preset buttons for a quick start, or drag the sliders to change the mix of responses for any variable. The colored bar under each variable's header shows its model leverage — how much the predicted outcome can swing based on that variable alone. Each variable's percentages must sum to 100.
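The leverage bar can be thought of as the spread between the best- and worst-case single-level predictions for that variable. A minimal sketch, using a hypothetical intercept and level coefficients rather than the published model's actual estimates:

```python
import math

# Hypothetical log-odds coefficients for one survey variable; the names and
# values are illustrative, not the published model's estimates.
INTERCEPT = 0.28
LEVELS = {"Excellent": 1.10, "Pretty good": 0.45, "Only fair": -0.30, "Poor": -1.20}

def predict(level_coef):
    """Predicted approval probability if 100% of respondents gave this answer."""
    return 1 / (1 + math.exp(-(INTERCEPT + level_coef)))

# Model leverage: the swing between the best and worst single-level outcomes.
probs = {lvl: predict(c) for lvl, c in LEVELS.items()}
leverage = max(probs.values()) - min(probs.values())
print(f"leverage = {leverage:.3f}")
```

A wide leverage bar simply means the gap between `max(probs)` and `min(probs)` is large, so moving respondents between that variable's levels can swing the prediction a lot.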
2. Run the scenario
Click Run scenario to send your slider settings through the published logistic regression model. The result shows the projected approval rate and a 95% confidence interval. The How certain is this result? chart shows 10,000 simulated outcomes so you can see how much the baseline and your scenario overlap.
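Under the hood, a run like this amounts to pushing the response mix through a logistic model. The sketch below shows one plausible way to do it, averaging the model's probability over every combination of levels weighted by its joint share; the coefficients, variable names, and the independence assumption between variables are all illustrative, not the published specification:

```python
import math
from itertools import product

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Illustrative log-odds coefficients -- not the published estimates.
INTERCEPT = 0.25
COEFS = {
    "economy": {"Excellent": 1.1, "Pretty good": 0.4, "Only fair": -0.3, "Poor": -1.2},
    "civil_rights": {"Approve": 0.8, "Disapprove": -0.9},
}

def run_scenario(shares):
    """Average the model's probability over the response mix.

    `shares` maps each variable to {level: fraction}; fractions sum to 1.
    Treats variables as independent, a simplifying assumption.
    """
    vars_ = list(shares)
    total = 0.0
    for combo in product(*(shares[v].items() for v in vars_)):
        weight = 1.0
        logit = INTERCEPT
        for v, (level, frac) in zip(vars_, combo):
            weight *= frac          # joint share of this combination of answers
            logit += COEFS[v][level]
        total += weight * sigmoid(logit)
    return total

baseline = run_scenario({
    "economy": {"Excellent": 0.2, "Pretty good": 0.4, "Only fair": 0.25, "Poor": 0.15},
    "civil_rights": {"Approve": 0.6, "Disapprove": 0.4},
})
print(f"projected approval: {baseline:.1%}")
```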
3. Explore each factor
Click Explore Each Factor after a run to see a full per-variable sensitivity breakdown. Each card shows the predicted approval if that one variable's distribution were set to 100% at each level, holding your other sliders fixed. Combined best/worst cards show the result of every variable's best or worst level at once.
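Each sensitivity card can be approximated as a sweep over one variable's levels while freezing everything else. A sketch with made-up numbers, where `OTHER_VARS_LOGIT` is a hypothetical stand-in for the fixed contribution of your other sliders:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Illustrative log-odds coefficients; not the published estimates.
INTERCEPT = 0.25
ECONOMY = {"Excellent": 1.1, "Pretty good": 0.4, "Only fair": -0.3, "Poor": -1.2}
OTHER_VARS_LOGIT = 0.12   # combined contribution of all other sliders, held fixed

# One "card": predicted approval if the economy rating were 100% at each level.
cards = {}
for level, coef in ECONOMY.items():
    cards[level] = sigmoid(INTERCEPT + OTHER_VARS_LOGIT + coef)
    print(f"{level:12s} -> {cards[level]:.1%}")
```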
4. Review past scenarios
Every run is saved to the Saved Scenarios drawer at the bottom of the page. Each card shows which variables you shifted from baseline and the predicted outcome. Click Load to restore a past scenario's slider settings, or Remove to drop it.
5. Shift distributions gradually
Unlike the All-or-Nothing simulator, sliders let you move just some respondents from one answer to another — for example, shifting 10% of "Poor" ratings to "Only fair." This tests realistic, incremental shifts rather than extreme all-or-nothing scenarios. Make sure each variable's totals still sum to 100%.
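In code, a gradual shift is just moving percentage points between two levels while preserving the total. A small sketch (the function name, the baseline shares, and the percentage-point reading of "10%" are assumptions for illustration):

```python
def shift(dist, src, dst, points):
    """Move `points` percentage points from `src` to `dst`, keeping the total at 100."""
    moved = min(points, dist[src])   # can't move more than src holds
    out = dict(dist)
    out[src] -= moved
    out[dst] += moved
    return out

economy = {"Excellent": 20, "Pretty good": 40, "Only fair": 25, "Poor": 15}
scenario = shift(economy, "Poor", "Only fair", 10)
print(scenario)   # Poor drops to 5, Only fair rises to 35
```

Because the moved mass is added to one level exactly as it is removed from the other, the 100% constraint holds automatically.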
6. Reset and iterate
Click Reset to baseline to return all sliders to their November 1963 starting distributions and clear the results. The baseline of 56.9% is the model's estimate of actual November 1963 approval. Try different combinations to see which factors matter most — or least.
Set the distribution
Predicted outcome
How did you change the survey?
Each pair of bars compares the actual distribution (green) to the distribution under your scenario (red where you changed it). This is what you changed — not the predicted impact, just the input.
How certain is this result?
10,000 simulated outcomes drawn from the model's coefficient uncertainty, for both the baseline (green) and your scenario (red). Where the distributions overlap, the model isn't certain the two would produce different headlines.
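Conceptually, each cloud comes from re-drawing the model's coefficients from their sampling distribution and recomputing the prediction. A compressed sketch that collapses the whole model into a single logit term; the mean, standard error, and scenario shift are invented for illustration (as it happens, a logit of 0.28 maps to roughly 56.9%, the baseline quoted above):

```python
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

random.seed(0)
COEF_MEAN, COEF_SE = 0.28, 0.10   # one aggregated logit term, for brevity
N_DRAWS = 10_000

def simulate(extra_logit):
    """Draw the coefficient from its approximate sampling distribution and
    return the implied distribution of predicted approval."""
    return [sigmoid(random.gauss(COEF_MEAN, COEF_SE) + extra_logit)
            for _ in range(N_DRAWS)]

baseline = simulate(0.0)    # green cloud
scenario = simulate(0.25)   # red cloud: your slider changes, as a logit shift

# Overlap check: how often a baseline draw beats a scenario draw.
overlap = sum(b > s for b, s in zip(baseline, scenario)) / N_DRAWS
print(f"P(baseline > scenario) = {overlap:.2%}")
```

When `overlap` is far from zero, the two clouds blur together and the model cannot confidently distinguish the scenario from the baseline.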
Calibration: predicted vs. observed
How closely the model's predicted probabilities track the observed outcome rates, binned by predicted decile. Bubble size shows respondents per bin. The dashed diagonal is perfect calibration; the blue scenario line is your run; the gray line (when present) is the unmodified base model for comparison.
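Decile calibration can be sketched as: sort respondents by predicted probability, cut them into ten equal bins, and compare each bin's mean prediction with its observed rate. The data below is synthetic and well calibrated by construction, standing in for real survey responses:

```python
import random

random.seed(1)
preds = [random.random() for _ in range(2000)]
outcomes = [1 if random.random() < p else 0 for p in preds]  # calibrated by construction

# Sort by predicted probability, then split into ten equal-size deciles.
pairs = sorted(zip(preds, outcomes))
n = len(pairs)
mean_preds, obs_rates = [], []
for d in range(10):
    chunk = pairs[d * n // 10 : (d + 1) * n // 10]
    mean_preds.append(sum(p for p, _ in chunk) / len(chunk))
    obs_rates.append(sum(o for _, o in chunk) / len(chunk))
    print(f"decile {d + 1}: predicted {mean_preds[-1]:.2f}, "
          f"observed {obs_rates[-1]:.2f}  (n={len(chunk)})")
```

Plotting `mean_preds` against `obs_rates` gives the calibration curve; points on the diagonal mean the predicted probabilities match the observed frequencies.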
Explore each factor
For each variable, see how the predicted outcome would change if you switched that one distribution to 100% at each level, holding all your other selections fixed. This reveals which individual levels have the most leverage given your current scenario.