Electric Insights
NBA 2014–15 Shot Logs · 32,511 Three-Point Shots

Model Builder

Explore which shot conditions predicted 3-point makes in the 2014–15 NBA season. Test whether adding different conditions improves the model, or build one from scratch.

All six shot-condition variables are pre-selected. Add or remove them to see how model fit changes. Use the simulator at the bottom to test individual scenarios.

How to Use This Tool

Not sure where to start? Click Run Analysis with the six pre-selected variables to see how the model performs, then add or remove variables to explore.

1. Select Predictors

Check the shot-condition variables you want to test as predictors. All six are pre-selected by default. You can add or remove any — up to 20 for Run Analysis. Hover the icons for plain-English definitions.

2. Add a Synthetic Variable (Optional)

Check Configure Synthetic Variable to add a hypothetical shot condition — one not measured in the data but theoretically possible. You set how strongly it should correlate with the outcome and how different it should be from existing predictors. After running, the results show whether this imaginary variable would have improved the model, and which real shot-log variables come closest to capturing the same signal.
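One standard way to generate a variable with a chosen correlation is to blend the target signal with independent noise. A minimal sketch, assuming standardized Gaussian signals; the tool's actual generator is not documented here, and `synthetic_variable` and `pearson` are illustrative names:

```python
import math
import random

def synthetic_variable(outcome_signal, target_r, rng):
    """Blend the outcome signal with independent noise so the result
    correlates with the outcome at roughly target_r (signals assumed
    standardized: mean 0, variance 1)."""
    return [target_r * z + math.sqrt(1 - target_r**2) * rng.gauss(0, 1)
            for z in outcome_signal]

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

rng = random.Random(0)
z = [rng.gauss(0, 1) for _ in range(20000)]
x = synthetic_variable(z, 0.5, rng)
print(round(pearson(z, x), 2))  # close to the requested r = 0.5
```

The blend weight `sqrt(1 - r**2)` keeps the generated variable at unit variance, so its correlation with the signal is the requested r.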

3. Select Outcome

Choose what you want the model to predict. 3-Point Make is the only outcome — whether the shot went in.

4. Run Analysis or Auto-Build

Run Analysis is the primary analyst tool — it builds a model from exactly the variables you selected. Use it when you want full control over what goes into the model.

Auto-Build Standard searches all available variables automatically and selects those that most improve predictive fit. Useful as a benchmark to see what the data prefers without imposing a prior.

Auto-Build Actionable Predictors applies an additional constraint: it excludes near-proxy variables (questions that are essentially restatements of the outcome) and weights selection toward variables that can realistically be moved by policy or communication. The result is a model suited for strategy — every predictor in it represents an independent lever. Fit will often be lower than Standard; that gap reflects the cost of the constraint, not a modeling error.

Auto-Build does not use a synthetic variable. To add one, run Auto-Build first to identify the best predictor set, then re-run that set using Run Analysis with the synthetic variable enabled.
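Auto-Build's search is a forward stepwise selection. A minimal sketch of the greedy loop, with a toy additive score standing in for the real model-fit criterion (the variable names and weights below are made up for illustration):

```python
def forward_stepwise(candidates, score, min_gain=1e-6):
    """Greedily add the candidate that most improves the score,
    stopping when no remaining addition yields a meaningful gain."""
    selected = []
    best = score(selected)
    improved = True
    while improved:
        improved = False
        gains = {c: score(selected + [c]) - best
                 for c in candidates if c not in selected}
        if gains:
            c, g = max(gains.items(), key=lambda kv: kv[1])
            if g > min_gain:
                selected.append(c)
                best += g
                improved = True
    return selected

# Toy score: each variable contributes a fixed amount of fit.
WEIGHTS = {"DEFENDER_DIST": 0.10, "SHOT_CLOCK": 0.05,
           "DRIBBLES": 0.01, "TOUCH_TIME": 0.0}
score = lambda sel: sum(WEIGHTS[v] for v in sel)
print(forward_stepwise(list(WEIGHTS), score))
# → ['DEFENDER_DIST', 'SHOT_CLOCK', 'DRIBBLES']  (TOUCH_TIME adds nothing)
```

A real criterion would be a model-fit statistic such as AIC refit at each step; the greedy structure is the same.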

5. Review Results

Results show how well the model predicts shot makes and how much each variable contributes. The Other Shot Conditions section below the results lists every variable not yet in your model — each card shows whether adding it would likely improve predictive fit. Click Add to model on any card to include it, then re-run.

6. Use the Simulator

After running, click Launch in the Simulator panel. Set any combination of shot conditions — for example, set Defender Distance to Very Tight and Shot Clock to Hurried — and see the predicted make probability update instantly.
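Behind the simulator, a fitted logistic model maps each scenario to a make probability. A sketch with hypothetical coefficients (the tool's fitted values are not shown here, and numeric inputs stand in for the labeled presets like Very Tight and Hurried):

```python
import math

# Hypothetical coefficients; the real ones come from your Run Analysis fit.
COEF = {"intercept": -0.35, "defender_dist_ft": 0.08, "shot_clock_sec": 0.02}

def make_probability(defender_dist_ft, shot_clock_sec):
    """Logistic prediction: p = 1 / (1 + exp(-linear_predictor))."""
    lp = (COEF["intercept"]
          + COEF["defender_dist_ft"] * defender_dist_ft
          + COEF["shot_clock_sec"] * shot_clock_sec)
    return 1 / (1 + math.exp(-lp))

# A tightly contested, hurried shot vs. an open, unhurried look.
print(round(make_probability(1.0, 3.0), 3))
print(round(make_probability(8.0, 15.0), 3))
```

With positive coefficients, more defender distance and more shot-clock time both raise the predicted probability, which is what the simulator displays as you move the controls.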

7. Iterate and Compare

Each run is saved automatically in the Saved Analyses tray. Click Load on any saved card to restore that run's variables and settings. Use Compare to view two runs side by side — each card shows three scores: Tjur R² (how well the model separates makes from misses), AUC (overall predictive accuracy), and Brier (calibration error — lower is better). Higher Tjur R² and AUC and lower Brier mean a better-performing model.
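All three comparison scores can be computed directly from predicted probabilities and observed outcomes. A self-contained sketch on toy data (not the tool's):

```python
def tjur_r2(p, y):
    """Mean predicted probability for makes minus mean for misses."""
    makes = [pi for pi, yi in zip(p, y) if yi == 1]
    misses = [pi for pi, yi in zip(p, y) if yi == 0]
    return sum(makes) / len(makes) - sum(misses) / len(misses)

def auc(p, y):
    """Probability a random make outscores a random miss (ties count half)."""
    makes = [pi for pi, yi in zip(p, y) if yi == 1]
    misses = [pi for pi, yi in zip(p, y) if yi == 0]
    wins = sum((pm > pn) + 0.5 * (pm == pn) for pm in makes for pn in misses)
    return wins / (len(makes) * len(misses))

def brier(p, y):
    """Mean squared gap between predicted probability and outcome."""
    return sum((pi - yi) ** 2 for pi, yi in zip(p, y)) / len(p)

p = [0.8, 0.6, 0.7, 0.3, 0.4, 0.2]  # predicted make probabilities
y = [1,   1,   0,   0,   1,   0]    # 1 = made, 0 = missed
print(round(tjur_r2(p, y), 3), round(auc(p, y), 3), round(brier(p, y), 3))
# → 0.2 0.778 0.197
```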

Understanding Your Results

After running your analysis, results are organized into up to three sections:

1. Full Model

Complete model including all selected predictors and, if configured, the synthetic variable (a hypothetical predictor you define yourself)

2. Base Model

Model performance without the synthetic variable (appears only when one is configured)

3. Synthetic Variable Performance

How well the synthetic variable met its specifications and its impact on model performance (appears only when one is configured)

Shot Conditions to Include in Your Model

All six shot-condition variables are pre-selected. Uncheck any you want to exclude, then click Run Analysis. Or use Auto-Build Standard to let the algorithm search automatically, or Auto-Build Actionable Predictors to prefer variables that can be influenced by coaching and shot selection.

What are you trying to predict?

Choose the outcome your model will try to explain.

Synthetic variable settings: correlation with the outcome (r from −0.8 to 0.8), relation to existing predictors (r from 0 to 0.8), and number of levels (2 = binary, up to 5 max).
Run Analysis

Build a model from exactly the variables you selected. Full analyst control.

Auto-Build

Search all variables automatically. Choose a selection strategy:

Best predictive fit across all available variables

While Auto-Build runs, a progress panel shows its stages:
  • Initialising…
  • Screening candidate predictors
  • Running forward stepwise selection
  • Fitting final model
  • Computing diagnostics & margins

Section 1: Full Model Performance

Complete model including all predictor variables and the synthetic variable

Predicted Probability

The average predicted probability across all observations

0.42

± 0.03 (95% CI)

Model Strength Overview Scoring Details

Overall score blends discrimination, calibration, separation, and explanatory power. When available, we also include evidence against the null and parsimony.

Rescaling to 0–1

  • AUC: (AUC − 0.50)/0.50
  • Brier: 1 − (Brier/0.25)
  • Tjur: Tjur/0.35
  • Pseudo R²: R²/0.40
  • p-value: −log10(p)/6
  • ΔAIC: ΔAIC/10

Aggregation Method

Clamp to [0,1], average equally over present metrics, map to a 0–100 score.
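The rescaling and aggregation above translate directly into code. This sketch follows the listed formulas, treating ΔAIC as the improvement over the null model (an assumption, since the sign convention is not stated):

```python
import math

def overall_score(auc=None, brier=None, tjur=None, pseudo_r2=None, p=None, d_aic=None):
    """Rescale each supplied metric to [0, 1], clamp, average equally,
    and map to a 0-100 score, per the Scoring Details panel."""
    parts = []
    if auc is not None:       parts.append((auc - 0.50) / 0.50)
    if brier is not None:     parts.append(1 - brier / 0.25)
    if tjur is not None:      parts.append(tjur / 0.35)
    if pseudo_r2 is not None: parts.append(pseudo_r2 / 0.40)
    if p is not None:         parts.append(-math.log10(p) / 6)
    if d_aic is not None:     parts.append(d_aic / 10)
    clamped = [min(1.0, max(0.0, x)) for x in parts]
    return 100 * sum(clamped) / len(clamped)

# Metrics from the example run on this page: AUC 0.82, Brier 0.15,
# Tjur R² 0.28, Pseudo R² 0.32.
print(round(overall_score(auc=0.82, brier=0.15, tjur=0.28, pseudo_r2=0.32), 1))
# → 66.0, "Moderate" on the interpretation guide
```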

Interpretation Guide

≥80
Strong
60–79
Moderate
<60
Weak

Model Fit

Goodness-of-fit metrics for the model

Pseudo R²

Measures how well the model explains the variance in the outcome compared to a null model. Higher is better.

0.32

Higher is better

Rating:

Discrimination (Tjur R²)

A measure of discrimination. The mean predicted probability for outcome-positive cases (makes) minus the mean for outcome-negative cases (misses). Higher is better.

0.28

Higher is better

Rating:

Predictive Accuracy (AUC)

The probability that the model assigns a higher predicted probability to a random make than to a random miss. Ranges from 0.5 (chance) to 1.0 (perfect); higher is better.

0.82

0.5 = random, 1 = perfect

Rating:

Calibration Error (Brier)

Average squared difference between each predicted probability and the actual outcome (1 = made, 0 = missed). Ranges from 0 (perfect) up to 1; lower is better.

0.15

Lower is better

Rating:

Information Criteria

Metrics balancing fit and complexity

AIC Comparison

An information score for comparing models: how well the model fits the data after penalizing additional estimated parameters (e.g., each added predictor, each added level for a categorical variable, interaction term). Lower is better.

Null Model: –642.5
Final Model: –498.3
Change: +144.2

Lower is better

Log Likelihood Comparison

Measures how well the model predicts observed outcomes. For each case: if y = 1, add ln(p̂); if y = 0, add ln(1 − p̂). Sum across all cases. Higher (less negative) means better fit.

Null Model: –642.5
Final Model: –498.3
Change: +144.2

Higher is better

LR χ²

Likelihood ratio chi-square statistic comparing the fitted model to the null model

288.4

p-value

Statistical significance of the likelihood ratio test

<0.001
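The log-likelihood, LR χ², and AIC definitions above can be checked in a few lines. The parameter count k used for AIC is hypothetical here, since the panel does not report it:

```python
import math

def log_likelihood(p, y):
    """Per the definition above: add ln(p̂) for makes, ln(1 − p̂) for misses."""
    return sum(math.log(pi if yi == 1 else 1 - pi) for pi, yi in zip(p, y))

def likelihood_ratio_chi2(ll_null, ll_final):
    """LR chi-square: twice the log-likelihood improvement over the null."""
    return 2 * (ll_final - ll_null)

def aic(ll, k):
    """Akaike information criterion: 2k − 2 × log-likelihood."""
    return 2 * k - 2 * ll

print(round(likelihood_ratio_chi2(-642.5, -498.3), 1))  # 288.4, matching the LR χ² shown
print(round(aic(-498.3, 7), 1))  # with a hypothetical k = 7 parameters
```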

Sample Completeness

Sample retention after preprocessing
Retained: 850 (85%)
Dropped: 150 (15%)

Total: 1,000

The model shows statistically significant improvement over the null model (p < 0.001) with good predictive accuracy.