JFK Approval Simulator
Three weeks before his assassination, 57% of Americans approved of Kennedy. What drove that number — and what could have changed it?
This simulator runs on the published model — Electric Insights' release-day explanatory account of Kennedy's approval.
Pin every respondent to one chosen response level per factor and see how the headline shifts.
How to Use This Tool
Explore what drove presidential approval in November 1963
1. Try a preset or set your own
Use the preset buttons for a quick start, or use the dropdowns to set each factor yourself. The colored bar beneath each label shows its model leverage — the approval swing from best to worst response on that variable. You can set one factor or several before running.
2. Run the scenario
Click "Run scenario" to send your settings through the published logistic regression model. The result shows the projected approval rate and a 95% confidence interval. The "How certain is this result?" chart shows 10,000 simulated outcomes so you can see how much the baseline and your scenario overlap.
3. Explore each factor
Click Explore Each Factor after a run to see a full per-variable sensitivity breakdown. Each card shows the predicted approval at every level of that variable, holding your other settings fixed. Combined best/worst cards show the result of taking every variable's best or worst level at once.
4. Review past scenarios
Every run is saved to the Saved Scenarios drawer at the bottom of the page. Each card shows which variables you changed and the predicted outcome. Click Load to restore a past scenario's settings, or Remove to drop it from the list.
5. Combine variables
Set multiple dropdowns at once before running to test compound scenarios — for example, what happens when both Khrushchev and Vietnam ratings shift simultaneously. The model accounts for all variables jointly, so combined scenarios can reveal effects that single-variable tests miss.
6. Reset and iterate
Click Reset to baseline to return all dropdowns to "No change" and clear the results. The baseline of 56.9% is the model's estimate of actual November 1963 approval. Try different combinations to see which factors matter most — or least.
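Under the hood, a scenario run like the one in steps 1-2 reduces to averaging predicted probabilities from a logistic regression. A minimal sketch, assuming hypothetical coefficients and factor names (`khrushchev_rating`, `vietnam_rating`, and all numbers below are made up, with the intercept picked so the no-change baseline lands near the 56.9% headline; the published model's actual values are not reproduced here):

```python
import numpy as np

# Hypothetical coefficients -- illustrative only, not the published model's values.
intercept = 0.28
coefs = {"khrushchev_rating": 0.9, "vietnam_rating": 0.5}

def predict_approval(levels):
    """Predicted P(approve) for one respondent's factor levels (0-1 scale)."""
    logit = intercept + sum(coefs[k] * v for k, v in levels.items())
    return 1.0 / (1.0 + np.exp(-logit))

# "Pin every respondent" to the same chosen levels, then average:
# the headline number is the mean predicted probability across respondents.
respondents = [{"khrushchev_rating": 1.0, "vietnam_rating": 0.0}] * 1000
headline = np.mean([predict_approval(r) for r in respondents])
```

Because the model is nonlinear in the logit, the headline depends on the full mix of respondent settings, which is why combined scenarios can differ from the sum of single-variable effects.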
Set the scenario
Predicted outcome
How did you change the survey?
Each pair of bars compares the actual distribution (green) to the distribution under your scenario (red where you changed it). This is what you changed — not the predicted impact, just the input.
How certain is this result?
10,000 simulated outcomes drawn from the model's coefficient uncertainty, for both the baseline (green) and your scenario (red). Where the distributions overlap, the model isn't certain the two would produce different headlines.
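The simulation behind this chart amounts to drawing coefficient vectors from their estimated uncertainty and recomputing the predicted rate for each draw. A sketch under assumed point estimates and standard errors (all values hypothetical; for simplicity the draws are independent, where the real model would use the coefficients' full covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimates and standard errors (intercept, one factor).
beta_hat = np.array([0.28, 0.9])
se = np.array([0.05, 0.15])

x_baseline = np.array([1.0, 0.0])   # intercept term only; factor at baseline
x_scenario = np.array([1.0, 1.0])   # factor pinned to its top level

def simulate(x, n=10_000):
    """Predicted approval rate under n coefficient draws (independent normals)."""
    draws = rng.normal(beta_hat, se, size=(n, 2))
    return 1.0 / (1.0 + np.exp(-(draws @ x)))

base = simulate(x_baseline)
scen = simulate(x_scenario)
lo, hi = np.percentile(scen, [2.5, 97.5])   # the 95% interval shown in the tool
```

Plotting `base` and `scen` as overlaid histograms gives the green/red comparison: the more the two distributions overlap, the less certain the model is that the scenario would change the headline.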
Set-everyone-to-X table
Click to show predicted outcomes for every level of every variable
For each level of each variable, the table shows the predicted outcome if every respondent had given that response, all else unchanged. This is the analytic view of the simulator.
Calibration: predicted vs. observed
How closely the model's predicted probabilities track the observed outcome rates, binned by predicted decile. Bubble size shows respondents per bin. The dashed diagonal is perfect calibration; the blue scenario line is your run; the gray line (when present) is the unmodified base model for comparison.
Explore each factor
For each variable, see how the predicted outcome would change if you switched just that one selection — holding all your other selections fixed. Reveals which individual choices have the most leverage given your current scenario.