Part 2 of 2 — Fine-Tuning Scenarios · ← Part 1: All-or-Nothing Simulator

Shift response distributions continuously and see how approval would have changed

Instead of moving everyone to a single view, shift the distribution of responses gradually — and see how those proportional changes would have moved approval. Use Part 1 for extreme all-or-nothing scenarios; use this page to explore realistic distributional shifts.

How did you change the survey? (Your scenario vs. November 1963)
Each bar in the distribution chart compares your hypothetical distribution (red) to the actual November 1963 distribution (green). This shows what you changed — not the approval impact, just the input.
Each variable card has sliders for every response category. Drag them to set a hypothetical distribution — for example, increase “Poor” ratings on Khrushchev handling from 9% to 25%. The total for each variable must sum to 100%; a running total shows you where you stand. Use the per-card Reset button to restore a single variable without clearing the others.
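The sum-to-100% bookkeeping behind the running total can be sketched like this (the function names and the example distribution are hypothetical, not the app's actual code):

```python
def distribution_total(dist):
    """Running total for one variable's sliders, in percent."""
    return round(sum(dist.values()), 1)

def is_valid(dist, tol=0.1):
    """A scenario can be simulated only when each variable sums to 100%."""
    return abs(distribution_total(dist) - 100.0) <= tol

# Example: raising "Poor" on Khrushchev handling from 9% to 25%
# means taking those 16 points back out of the other categories.
khrushchev = {"Excellent": 22, "Pretty good": 33, "Only fair": 17,
              "Poor": 25, "Not sure": 3}
```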
Click Simulate Approval to run your scenario. Unlike the All-or-Nothing Simulator, this tool applies your adjusted distributions to the actual respondent microdata — each individual’s predicted probability is re-estimated using their real values on unchanged predictors and your new distribution on modified ones. The result is a population-weighted aggregate.
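A minimal sketch of this re-estimation, assuming the modified variable is handled by averaging each respondent's prediction over the new category shares while leaving their other predictors at observed values (`predict_prob`, the record layout, and the column name are hypothetical, not the app's internals):

```python
import numpy as np

def simulate_approval(predict_prob, respondents, weights, new_dist, modify_col):
    """Counterfactual approval: for each respondent, average the model's
    predicted probability over the modified variable's new distribution,
    keeping all other predictors at their observed values."""
    probs = np.zeros(len(respondents))
    for category, share in new_dist.items():
        # Set everyone's answer on the modified variable to this category...
        cf = [dict(r, **{modify_col: category}) for r in respondents]
        # ...and weight that prediction by the category's new share.
        probs += (share / 100.0) * np.array([predict_prob(r) for r in cf])
    # Population-weighted aggregate over the microdata
    return float(np.average(probs, weights=weights))
```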
The probability distribution chart shows 10,000 Monte Carlo draws for both the baseline and your scenario; each draw samples the full coefficient vector jointly, so correlations between coefficients are preserved. The overlap between the two distributions tells you how distinguishable the shift is from sampling noise: little overlap means a robust finding; heavy overlap means the difference could plausibly be due to chance.
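The draw-and-compare logic can be sketched as follows, assuming a logistic model whose coefficients are sampled jointly from a multivariate normal and measuring overlap as the shared area of the two empirical distributions (the function names and the overlap metric are assumptions, not the app's documented internals):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_approval_draws(beta_hat, cov, X, weights, n_draws=10_000):
    """Sample the full coefficient vector jointly, then compute the
    population-weighted approval rate implied by each draw."""
    betas = rng.multivariate_normal(beta_hat, cov, size=n_draws)
    p = 1.0 / (1.0 + np.exp(-(X @ betas.T)))       # (n_resp, n_draws)
    return np.average(p, axis=0, weights=weights)  # one approval per draw

def overlap_coefficient(a, b, bins=100):
    """Shared area of two empirical distributions (0 = distinct, 1 = identical)."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    ha, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    width = (hi - lo) / bins
    return float(np.minimum(ha, hb).sum() * width)
```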
Each run is automatically saved to the Saved Scenarios tray. Click Load on any saved card to restore its slider settings and re-run. Use Pin to anchor a scenario as a comparison baseline — the chart will overlay the pinned scenario against your current one so you can see the difference directly.
The calibration bubble chart shows observed vs. predicted probabilities by decile group. When no sliders are changed, all bubbles sit on the diagonal. As you adjust distributions, bubbles drift; the further they drift, the harder your scenario is to achieve within the model's uncertainty bounds. ECE (expected calibration error, the average gap) and MCE (maximum calibration error, the largest single-group gap) summarize overall reliability.
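The decile grouping and the two summary numbers can be sketched like this (equal-size deciles by sorted predicted probability are an assumption about how the app groups respondents):

```python
import numpy as np

def calibration_by_decile(y_true, y_pred, n_groups=10):
    """Bin respondents into predicted-probability deciles and compare
    observed vs. predicted approval within each group."""
    order = np.argsort(y_pred)
    groups = np.array_split(order, n_groups)
    obs = np.array([y_true[g].mean() for g in groups])
    pred = np.array([y_pred[g].mean() for g in groups])
    gaps = np.abs(obs - pred)
    ece = float(gaps.mean())   # ECE: average gap across groups
    mce = float(gaps.max())    # MCE: largest single-group gap
    return obs, pred, ece, mce
```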
Click Reset All to return every slider to its November 1963 baseline distribution and clear the results. The baseline approval of 56.9% reflects the actual survey estimate from the published model. The All-or-Nothing Simulator (previous page) is better suited for testing extreme scenarios where all respondents hold the same view.
Try a scenario:
| Variable | Response | Survey % | Approved % | N |
|---|---|---|---|---|
| Khrushchev | Excellent | 22 | 85 | 269 |
| | Pretty good | 43 | 70 | 534 |
| | Only fair | 23 | 28 | 292 |
| | Poor | 9 | 5 | 115 |
| | Not sure | 4 | 40 | 48 |
| Economy | Excellent | 12 | 91 | 149 |
| | Pretty good | 43 | 74 | 528 |
| | Only fair | 26 | 34 | 332 |
| | Poor | 9 | 12 | 120 |
| | Not sure | 10 | 41 | 124 |
| World Peace | Excellent | 29 | 83 | 359 |
| | Pretty good | 44 | 62 | 553 |
| | Only fair | 18 | 19 | 229 |
| | Poor | 5 | 5 | 66 |
| | Not sure | 4 | 42 | 45 |
| Vietnam | Excellent | 10 | 85 | 119 |
| | Pretty good | 35 | 73 | 440 |
| | Only fair | 22 | 42 | 283 |
| | Poor | 13 | 16 | 159 |
| | Not sure | 21 | 53 | 255 |
| Civil Rights | Favor | 55 | 73 | 673 |
| | Oppose | 30 | 31 | 378 |
| | Not sure | 15 | 50 | 183 |
| Race | White | 94 | 55 | 1,195 |
| | Black | 6 | 84 | 57 |