The questions you ask after the report —
answerable now, not next week.
Subgroup cuts, predictor swaps, sensitivity tests — your team, in real time, on the data you already have.
We don’t replace your supplier.
We replace the slow, expensive part of working with one.
Most concept-test reports answer the questions the supplier thought to ask. The questions you actually have come up in the meeting after the report.
With your current setup, those questions become custom cuts that take a week and cost real money. Each one. Every time the brand team meets.
With this platform, you’re asking and answering them in real time. The data your supplier delivered becomes an interactive model instead of a static report.
Three things this does that a report doesn’t.
Not features — capabilities. Each one is something the brand team can do in seconds, on their own, after the report has landed.
Subgroup in seconds
“What does this look like for the 25–34 income tier? In the West? Among Garage Beer-aware consumers?”
Click the subgroup, click run. Three answers in two minutes — not three custom cuts in three weeks.
The platform refits the model on each subgroup, with confidence intervals, calibration metrics, and a full simulator on every result.
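Under the hood, a subgroup cut is a genuine refit, not a filter over precomputed numbers. A minimal sketch of the idea in Python, assuming a pandas DataFrame `df` holding the study data; the column names and the statsmodels choice are illustrative, not the platform’s internals:

```python
import statsmodels.api as sm

def refit_subgroup(df, mask, outcome, predictors):
    """Refit a logistic regression on just the rows selected by `mask`."""
    sub = df.loc[mask, [outcome] + predictors].dropna()
    X = sm.add_constant(sub[predictors])
    return sm.Logit(sub[outcome], X).fit(disp=0)

# The West-region cut from the example above (columns assumed numeric)
west = refit_subgroup(df, df["region"] == "West", "purchase_intent",
                      ["package_appeal", "brand_fit", "quality_perception"])
print(west.params)      # refit effect estimates
print(west.conf_int())  # per-predictor confidence intervals
```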
Drop a predictor, see what changes
“If we remove brand fit from the model, does package appeal still drive purchase intent? What about quality perception?”
Uncheck a variable, click run. The model refits, the new effects display, the comparison to the previous run sits next to it.
Not a what-if scenario layered onto a fixed model — an actual refit, with the same diagnostics as the original.
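The mechanics are the same refit, minus one column. A hedged sketch under the same assumptions (pandas DataFrame `df`, illustrative column names):

```python
import pandas as pd
import statsmodels.api as sm

def fit_logit(df, outcome, predictors):
    X = sm.add_constant(df[predictors])
    return sm.Logit(df[outcome], X).fit(disp=0)

full = fit_logit(df, "purchase_intent",
                 ["package_appeal", "brand_fit", "quality_perception"])
reduced = fit_logit(df, "purchase_intent",
                    ["package_appeal", "quality_perception"])  # brand fit dropped

# Side-by-side coefficients; the NaN row marks the dropped predictor
print(pd.concat({"with_brand_fit": full.params,
                 "without": reduced.params}, axis=1))
```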
It tells you when it doesn’t know
When the data won’t support an analysis — subgroup too small, predictors that perfectly separate the outcome — the platform refuses, names the variable causing the problem, and suggests fixes.
That sounds like a constraint. It’s actually a guarantee.
You’ll never present a finding that gets challenged in the C-suite because the underlying fit was unstable.
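For a concrete sense of what “refuses and names the variable” can mean, here is one simple version of such a guard: a minimum-n check plus a zero-cell scan that catches quasi-separation in categorical predictors. The thresholds, messages, and checks are assumptions for illustration, not the platform’s actual rules:

```python
import pandas as pd

def check_supports_fit(df, outcome, predictors, min_n=100):
    """Return a list of reasons the fit should be refused (empty = OK).
    Assumes `predictors` are categorical or binary columns."""
    problems = []
    if len(df) < min_n:
        problems.append(f"subgroup too small: n={len(df)} < {min_n}")
    for p in predictors:
        tab = pd.crosstab(df[p], df[outcome])
        if (tab == 0).any().any():  # a zero cell signals quasi-separation
            problems.append(f"'{p}' perfectly predicts '{outcome}' for some "
                            "levels; consider pooling levels or dropping it")
    return problems

issues = check_supports_fit(df.loc[df["region"] == "West"],
                            "purchase_intent", ["package_appeal", "brand_fit"])
if issues:
    print("Refusing to fit:", *issues, sep="\n  - ")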
Results in the units you already use.
The capabilities above only matter if you can act on the results. So effects come back in percentage points — the same units as the headline.
If the topline is “54.7% top-2 purchase intent,” a predictor’s effect reads as “+8.3 points” or “−5.1 points.” That’s atypical for logistic regression: most modeling tools report coefficients or log-odds, leaving a translation step between the model and the brand team’s vocabulary. Here that step is the model’s job, not yours.
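One plausible way to do that translation is average marginal effects, which statsmodels exposes directly; a sketch under the same assumed `df` and column names:

```python
import statsmodels.api as sm

X = sm.add_constant(df[["package_appeal", "brand_fit", "quality_perception"]])
result = sm.Logit(df["purchase_intent"], X).fit(disp=0)

ame = result.get_margeff(at="overall")       # average marginal effects
points = ame.summary_frame()["dy/dx"] * 100  # probability units -> pct points
print(points.round(1))                       # reads like "+8.3" / "-5.1"
```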
Modeling depends on design.
The capabilities above — subgroup refits, predictor swaps, sensitivity tests — only work as well as the data permits. And the data only permits modeling when the study was designed with explanatory modeling in mind: outcome chosen up front, candidate drivers measured on usable scales, response coverage adequate across subgroups.
Most concept tests aren’t designed this way. They’re designed for descriptive reporting — toplines, attribute scores, demographic breaks. The good news: the gap is fixable, and it’s fixable cheaply if you catch it before fielding.
Two ways to engage:
- Already fielded? We work with what’s there. The diagnostic tier tells you what your data does and doesn’t support before you commit to a full engagement. See diagnostic.
- Still in design? A short pre-fieldwork review costs little and pays back the most. See methodology partnership.
What this is, and what it isn’t.
We are not in the business of replacing your existing research supplier. They do things this platform doesn’t.
What this platform does
- Refits explanatory models on the subgroups you choose, in seconds.
- Shows the predictors that drive your outcome — with effect sizes, confidence intervals, and calibration.
- Runs simulators where the brand team can stress-test “what if uniqueness perception drops 10 points?” (sketched after this list).
- Surfaces the data’s honest limits — small samples, perfect-prediction artifacts, weak signal.
- Lives alongside your supplier’s report, not instead of it.
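The simulator bullet above is the easiest to sketch: shift one predictor across the design matrix, re-predict, and compare the average probability. As before, `df`, the column names, and the 0–100 scale are assumptions:

```python
import statsmodels.api as sm

X = sm.add_constant(df[["uniqueness", "package_appeal", "brand_fit"]])
model = sm.Logit(df["purchase_intent"], X).fit(disp=0)

X_shift = X.copy()
X_shift["uniqueness"] = (X_shift["uniqueness"] - 10).clip(lower=0)  # the what-if

baseline = model.predict(X).mean()
scenario = model.predict(X_shift).mean()
print(f"Top-2 intent: {baseline:.1%} -> {scenario:.1%} "
      f"({(scenario - baseline) * 100:+.1f} points)")
```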
What it doesn’t replace
- Norms and category benchmarks. BASES’s 30-year database tells you whether 47% top-2-box is good for your category. We don’t.
- Volume forecasting. BASES II and equivalents convert concept scores into year-1 volume. Different problem.
- Field operations. Panel sourcing, questionnaire programming, fielding, weighting: that’s your supplier’s work. The platform picks up at data handoff, but the modeling layer depends on design choices made before fielding (which factors get measured, on what scale, with what coverage); see the callout above.
- Slide decks, executive summaries, written recommendations. The supplier’s analyst still writes the narrative; the platform gives the brand team the tool that sits beneath it.
Two paths if your data isn’t ready: Codebook Construction for studies already fielded, or a methodology partnership for studies still in design.
Garage Beer concept test
A real concept-test dataset, n = 803 craft-beer buyers. Five binary outcomes — purchase intent, package appeal, brand fit, quality perception, premium perception. Eight subgroup variables. Twenty package-reaction predictors.
Switch outcomes, slice by income or region, refit the model, watch the predictors shift. The model behind the platform was built and validated against a published Stata 18 reference.
Open the demo

Who this is for
Concept testing
Pre-launch package, formulation, or claim tests. Top-2-box outcomes with attribute batteries and demographic splits.
Brand tracking
Wave-over-wave brand tracking where the question is “why did awareness move,” not just “did it move.”
U&A and segmentation
Usage-and-attitudes studies where the strategic question is which attitudes actually predict behavior, not which are most prevalent.
The platform handles any binary outcome with categorical or continuous predictors. If your study fits that shape, this works on it.
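Concretely, “that shape” is just a table with one binary outcome column and a mix of categorical and continuous predictors; a toy example with invented column names:

```python
import pandas as pd

df = pd.DataFrame({
    "purchase_intent": [1, 0, 1, 1, 0],       # binary outcome (top-2 box)
    "region": ["West", "East", "West", "South", "East"],  # categorical
    "package_appeal": [72, 41, 65, 80, 38],   # continuous, 0-100
})

# Categorical predictors expand to indicators before modeling
X = pd.get_dummies(df[["region", "package_appeal"]], drop_first=True)
```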
“How is this different from a Tableau dashboard?”
Tableau dashboards show you the data your supplier already cut. They don’t fit new models on the fly.
If you want to ask “what predicts purchase intent in the West region for 36–45-year-olds, controlling for brand awareness,” a dashboard can’t answer that — it would need to be pre-built.
This platform fits the model live.
Closer to interactive statistics than to interactive visualization.
How to put it on your data
Engagements are scoped per study, with a fixed-fee diagnostic as the usual first step. The Services page lays out tier options and pricing. Most CPG buyers start with a diagnostic on one existing concept-test dataset.
One sentence, three demos, twenty minutes.
The fastest way to understand this is to see it on a real concept test. We’ll walk you through the platform answering the questions your team would actually ask — subgroup cuts, predictor swaps, sensitivity tests — on the Garage Beer dataset or, if you’d rather, on yours.