Electric Insights
Advanced DIY Tools

Build Models

Configure synthetic predictor variables, build models, and create custom simulations.

How to Use This Tool

Follow these steps to build and analyze your models

1. Select Predictors

Check the relevant variables; hover over the info icons for definitions. The six predictor variables in the published Approval model are pre-selected by default.

2. Add Synthetic Variable (Optional)

You can optionally create a synthetic variable to explore alongside your selected inputs. This helps you test how a hypothetical factor might affect the outcome, and reveals which real-world variables show similar patterns. The algorithm iteratively adjusts the synthetic variable to match the parameters you specify.
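One common way to generate a variable with a chosen correlation is to mix a standardized copy of an existing predictor with Gaussian noise; a minimal sketch under that assumption (the tool's actual iterative matching algorithm may differ, and the function names here are illustrative):

```python
import math
import random

def synthetic_with_target_r(x, r, seed=0):
    """Create a synthetic variable whose expected Pearson correlation
    with x is r, by mixing standardized x with Gaussian noise.
    (Illustrative sketch; the tool's iterative matching may differ.)"""
    rng = random.Random(seed)
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    z = [(v - mean) / sd for v in x]  # standardize the anchor predictor
    return [r * zi + math.sqrt(1 - r * r) * rng.gauss(0, 1) for zi in z]

def pearson_r(a, b):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    vb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (va * vb)

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(5000)]
s = synthetic_with_target_r(x, 0.6)
print(round(pearson_r(x, s), 2))  # close to the 0.6 target
```

With 5,000 simulated cases the realized correlation lands very close to the requested value; an iterative scheme like the tool's would instead nudge the variable until the match is exact.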

3. Select Outcome

Choose what to model: Presidential Approval, Vote Intention, or Tax Cut Support. Presidential Approval is selected by default.

4. Run Analysis

Click "Run Analysis" to estimate the model and view results.

5. Review Results

Examine probabilities, margins, and diagnostics. The analysis will identify variables that could improve your model and show the synthetic variable's estimated effect.

6. Build Simulator

Set hypothetical values to see their effects, including values for any synthetic variables you configured.

7. Iterate

Adjust variables (including synthetic ones), compare scenarios, and refine your model based on the insights gained.

Select Outcome Variable

What outcome would you like to model?

Required

[Interactive controls: target correlation sliders (r, ranges –0.8 to 0.8 and 0 to 0.8) and number of levels, from 2 (binary) to 5 (max)]
Modeling: Presidential Approval

Full Model Performance

BETA

Predicted Probability

The average predicted probability across all observations

0.42

± 0.03 (95% CI)

Model Strength: Scoring Details

The overall score blends discrimination (AUC), calibration (Brier), separation (Tjur R²), and explanatory power (pseudo R²). When available, we also include evidence against the null (p-value) and parsimony (ΔAIC).

Rescaling to 0–1

  • AUC: (AUC − 0.50)/0.50
  • Brier: 1 − (Brier/0.25)
  • Tjur: Tjur/0.35
  • Pseudo R²: R²/0.40
  • p-value: −log10(p)/6
  • ΔAIC: ΔAIC/10

Aggregation Method

Clamp to [0,1], average equally over present metrics, map to a 0–100 score.
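A minimal sketch of that scoring rule in code (the function and dictionary keys are illustrative; ΔAIC is assumed here to be the improvement over the null model, so larger is better):

```python
import math

def strength_score(metrics):
    """Rescale each available diagnostic to roughly 0-1, clamp,
    average equally, and map to a 0-100 strength score."""
    rescaled = []
    if "auc" in metrics:
        rescaled.append((metrics["auc"] - 0.50) / 0.50)
    if "brier" in metrics:
        rescaled.append(1 - metrics["brier"] / 0.25)
    if "tjur" in metrics:
        rescaled.append(metrics["tjur"] / 0.35)
    if "pseudo_r2" in metrics:
        rescaled.append(metrics["pseudo_r2"] / 0.40)
    if "p_value" in metrics:
        rescaled.append(-math.log10(metrics["p_value"]) / 6)
    if "delta_aic" in metrics:  # assumed: improvement vs. null (positive)
        rescaled.append(metrics["delta_aic"] / 10)
    clamped = [min(1.0, max(0.0, v)) for v in rescaled]
    return 100 * sum(clamped) / len(clamped)

# With the four headline metrics reported on this page:
score = strength_score({"auc": 0.82, "brier": 0.15,
                        "tjur": 0.28, "pseudo_r2": 0.32})
print(round(score, 1))  # 66.0, i.e. "Moderate" on the interpretation guide
```

Because missing metrics are simply skipped, the score stays comparable across models estimated with different diagnostic sets.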

Interpretation Guide

  • ≥80: Strong
  • 60–79: Moderate
  • <60: Weak

Model Fit

Goodness-of-fit metrics for the model

Pseudo R²

Measures how well the model explains the variance in the outcome compared to a null model. Higher is better.

0.32

Higher is better

Rating:
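The page does not say which pseudo R² variant it reports; McFadden's is one common choice, sketched here. Different variants yield different numbers, so this need not reproduce the value shown above.

```python
def mcfadden_r2(ll_model, ll_null):
    """McFadden's pseudo R2: 1 - LL_model / LL_null.
    0 means no improvement over the null; higher is better."""
    return 1 - ll_model / ll_null

# Using the log-likelihoods from the comparison further down the page:
print(round(mcfadden_r2(-498.3, -642.5), 3))  # 0.224
```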

Tjur R²

A measure of discrimination. The mean predicted probability for outcome-positive cases (e.g., approvers) minus the mean for outcome-negative cases (e.g., non-approvers). Higher is better.

0.28

Higher is better

Rating:
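The definition above translates directly into code (toy values for illustration):

```python
def tjur_r2(y, p):
    """Tjur's R2: mean predicted probability among positive cases
    minus the mean among negative cases."""
    pos = [pi for yi, pi in zip(y, p) if yi == 1]
    neg = [pi for yi, pi in zip(y, p) if yi == 0]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

y = [1, 1, 0, 0]
p = [0.8, 0.6, 0.3, 0.1]
print(round(tjur_r2(y, p), 2))  # 0.70 - 0.20 = 0.5
```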

AUC (ROC)

The probability that the model assigns a higher predicted probability to a random positive case (e.g., an approver) than to a random negative case (e.g., a non-approver). Ranges from 0.5 (chance) to 1.0 (perfect); higher is better.

0.82

0.5 = random, 1 = perfect

Rating:
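AUC can be computed directly from that pairwise definition (equivalent to the Mann-Whitney U statistic; ties count one-half). A toy sketch:

```python
def auc(y, p):
    """Probability that a random positive case receives a higher
    predicted probability than a random negative case; ties = 1/2."""
    pos = [pi for yi, pi in zip(y, p) if yi == 1]
    neg = [pi for yi, pi in zip(y, p) if yi == 0]
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in pos for b in neg)
    return wins / (len(pos) * len(neg))

print(auc([1, 1, 0, 0], [0.9, 0.4, 0.4, 0.2]))  # 3.5 of 4 pairs -> 0.875
```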

Brier Score

Average squared difference between each predicted probability and the actual outcome (1 = approver, 0 = non-approver). Ranges from 0 (perfect) up to 1; lower is better.

0.15

Lower is better

Rating:
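In code, the Brier score is simply a mean squared error on probabilities:

```python
def brier(y, p):
    """Mean squared difference between predicted probability
    and the observed 0/1 outcome; lower is better."""
    return sum((pi - yi) ** 2 for yi, pi in zip(y, p)) / len(y)

print(round(brier([1, 0], [0.8, 0.3]), 3))  # (0.04 + 0.09) / 2 = 0.065
```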

Information Criteria

Metrics balancing fit and complexity

AIC Comparison

An information score for comparing models: how well the model fits the data after penalizing each additional estimated parameter (e.g., each added predictor, each added level of a categorical variable, each interaction term). Lower is better.

  • Null Model: –642.5
  • Final Model: –498.3

Lower is better
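For reference, AIC = 2k − 2·LL, where k is the number of estimated parameters. The parameter counts below are assumptions for illustration (intercept-only null, k = 1; six predictors plus intercept, k = 7), applied to the log-likelihoods reported in the next comparison:

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*LL.
    Penalizes each estimated parameter; lower is better."""
    return 2 * k - 2 * log_likelihood

print(round(aic(-642.5, 1), 1))  # 1287.0 (assumed intercept-only null)
print(round(aic(-498.3, 7), 1))  # 1010.6 (assumed k = 7 final model)
```

The exact values depend on the true parameter counts, which the page does not report; the comparison between models, not the absolute magnitude, is what matters.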

Log Likelihood Comparison

Measures how well the model predicts observed outcomes. For each case: if y = 1, add ln(p̂); if y = 0, add ln(1 − p̂). Sum across all cases. Higher (less negative) means better fit.

  • Null Model: –642.5
  • Final Model: –498.3
  • Change: +144.2

Higher is better
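That recipe in code (toy values):

```python
import math

def log_likelihood(y, p):
    """Bernoulli log-likelihood: add ln(p-hat) when y = 1
    and ln(1 - p-hat) when y = 0, summed over all cases."""
    return sum(math.log(pi if yi == 1 else 1 - pi)
               for yi, pi in zip(y, p))

print(round(log_likelihood([1, 0, 1], [0.9, 0.2, 0.7]), 3))  # -0.685
```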

LR χ²

Likelihood ratio chi-square statistic comparing the fitted model to the null model

288.4

p-value

Statistical significance of the likelihood ratio test

<0.001
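The statistic follows directly from the two log-likelihoods above; the p-value then comes from a chi-square distribution with degrees of freedom equal to the number of parameters added beyond the null (not computed here, to keep the sketch dependency-free):

```python
def lr_chi2(ll_model, ll_null):
    """Likelihood-ratio statistic: 2 * (LL_model - LL_null)."""
    return 2 * (ll_model - ll_null)

print(round(lr_chi2(-498.3, -642.5), 1))  # 288.4, matching the value above
```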


Sample Completeness

Sample retention after preprocessing
  • Retained: 850 (85%)
  • Dropped: 150 (15%)
  • Total: 1,000

The model shows statistically significant improvement over the null model (p < 0.001) with good predictive accuracy.

Synthetic Variable Impact Analysis

How model diagnostics change when adding the synthetic variable (vs. the base model).

BETA

Δ Pseudo R²

+0.04

Δ Tjur R²

+0.03

Δ AIC

-20.8

Δ Log Likelihood

+12.4

Δ AUC

+0.02

Δ Brier

-0.01

Summary:

Adding the synthetic variable yields a small but consistent improvement across all diagnostics, with the most notable gain in model fit (AIC reduced by 20.8).
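As a consistency check on these deltas: adding Δk parameters changes AIC by 2·Δk − 2·ΔLL. Assuming the synthetic variable contributes two estimated parameters (an assumption; the true count depends on how it is coded), a log-likelihood gain of 12.4 reproduces the reported ΔAIC of −20.8:

```python
def delta_aic(delta_ll, delta_k):
    """Change in AIC from adding delta_k parameters that raise
    the log-likelihood by delta_ll; negative = improvement."""
    return 2 * delta_k - 2 * delta_ll

print(delta_aic(12.4, 2))  # -20.8
```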