How a Growth Equity Software Investor Built a Dynamic ARR Analytics Workflow with Clarus
Dec 20, 2025
In growth equity software investing, the hard part isn’t getting access to data—it’s turning messy, inconsistent revenue exports into answers you can trust fast enough to matter.
One growth equity software investor wanted a way to interactively analyze ARR performance across time periods and segments—without rebuilding spreadsheets every time someone asked a new question.
They weren’t looking for a static dashboard with one set of filters. They needed a workflow that could support the way diligence and monitoring actually happens:
“What does the ARR waterfall look like for the whole company?”
“What about North America?”
“What about Technology customers in the MidMarket?”
“Which churn events mattered most—and when?”
“How do cohorts retain over time?”
“Who are the top accounts today, and how have they trended historically?”
This post explains the bottleneck, why generic tooling breaks down, and how Clarus accelerates ARR analytics by building a workflow that’s dynamic by design.
The Firm
The client is a growth equity investor focused on software businesses. Their investment process depends on quickly developing conviction around revenue quality: growth durability, retention dynamics, expansion motion, and customer concentration risk—both in underwriting and in ongoing portfolio monitoring.
They already had spreadsheets and BI tools. The issue was that the answers they needed required a workflow that could be rerun and sliced in dozens of ways without hours of manual rework.
The Problem: High-Variance Revenue Data, High-Pressure Questions
The core workflow is simple to describe and hard to execute repeatedly:
Input: customer-level ARR exports (often from multiple systems) + inconsistent fields + historical changes
Output: repeatable analytics that can be sliced by time period and segment on demand
The bottleneck showed up in two places:
Underwriting: answering revenue questions quickly
Diligence timelines compress everything. The team needed to explore revenue dynamics fast:
ARR bridges/waterfalls across months, quarters, and years
retention by cohort (e.g., join month/quarter)
churn dynamics (who churned, when, and how it clustered)
concentration exposure (top accounts and their trajectories)
But every “one more cut” request turned into an analyst rebuilding pivots, reconciling definitions, and sanity-checking logic.
Portfolio monitoring: repeating the same analysis every period
In monitoring, the workflow had a different failure mode: version drift.
exports change format
customer metadata changes (industry, region, segment)
IDs don’t always map cleanly across periods
definitions vary by company (“ARR” isn’t always apples-to-apples)
Teams can build dashboards—but when the underlying data is messy, the dashboards quietly become untrusted and people fall back to spreadsheets.
Why Generic Dashboards and “Revenue Analytics” Tools Don’t Fully Solve It
The investor had access to BI tooling and could get dashboards built. The gap was not visualization—it was workflow reliability and definition control:
Segment fields (Region, Customer Type, Industry, etc.) aren’t consistently populated
Customer identifiers change; merges and splits break history
Definitions matter: “New” vs “Expansion” vs “Downsell” vs “Churn” must be consistent
Analysts need to drill from a macro view to a single segment or account and understand the driver without losing trust
In short: most tools can show charts. Fewer can consistently produce decision-grade ARR analytics across changing inputs.
What They Needed Instead
The team’s requirements weren’t “build a dashboard.” They were:
A canonical definition layer for ARR movements
A dynamic slicing engine: toggle time periods (month/quarter/year) and filter by any segment combination
Drill-down: from company-wide to region → industry → segment → account without rebuilding logic
Cohort retention that is cohort-aware (join month/quarter) and consistent across time
Churn intelligence: top churn events, timing patterns, and segment-specific churn
Account concentration views: top 25 accounts today and historically, plus “top 25 as of X date” forward trajectories
Re-runability: refresh with new exports without breaking everything
They wanted a workflow that behaves like an analytical “instrument panel,” not a static report.
The Clarus Approach: Dynamic ARR Analytics as a Repeatable Workflow
Clarus worked with the investor to build a workflow that turns customer-level ARR history into a structured, queryable dataset and a set of reusable views that can be sliced instantly.
Step 1: Normalize the inputs into a canonical customer-ARR table
Clarus ingests raw exports and produces a standardized table with:
consistent customer identifiers (mapping rules + history stitching)
standardized timestamps and time buckets (month/quarter/year)
normalized segment fields (Region, Customer Type, Industry, etc.)
validation checks (missing fields, duplicate IDs, negative/invalid movements)
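A minimal sketch of what this normalization step can look like in pandas. The column names, the ID mapping rule, and the specific checks here are illustrative assumptions, not the production schema:

```python
import pandas as pd

def normalize_export(raw: pd.DataFrame, id_map: dict) -> pd.DataFrame:
    """Normalize one raw ARR export into a canonical customer-ARR table.
    Column names and mapping rules are illustrative."""
    df = raw.rename(columns=str.strip).copy()
    # Stitch customer history: remap legacy IDs to canonical IDs
    df["customer_id"] = df["customer_id"].replace(id_map)
    # Standardize timestamps into month/quarter buckets
    df["date"] = pd.to_datetime(df["date"])
    df["month"] = df["date"].dt.to_period("M")
    df["quarter"] = df["date"].dt.to_period("Q")
    # Normalize a segment field; unpopulated values become an explicit bucket
    df["region"] = df["region"].str.strip().str.title().fillna("Unknown")
    # Validation checks: fail loudly rather than produce a wrong waterfall
    assert not df.duplicated(["customer_id", "month"]).any(), "duplicate customer in period"
    assert (df["arr"] >= 0).all(), "negative ARR values"
    return df
```

The point of the asserts is the workflow property described above: a bad export stops the run instead of quietly flowing into every downstream view.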
Step 2: Define the ARR movement logic once—then reuse it everywhere
The investor’s ARR waterfall definitions were encoded deterministically:
New ARR: customer prior ARR = 0, current ARR > 0
Expansion / Upsell: customer prior ARR > 0, current ARR increases
Downsell: customer prior ARR > 0, current ARR decreases but remains > 0
Churn: customer prior ARR > 0, current ARR = 0
This definition layer becomes the source of truth for every view (waterfalls, churn lists, cohorts, top accounts).
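Encoded deterministically, the definition layer is a small function applied the same way everywhere. A sketch, assuming per-customer prior/current ARR values (names are illustrative):

```python
import pandas as pd

def classify_movement(prior: float, current: float) -> str:
    """Classify a period-over-period ARR movement using the
    waterfall definitions above."""
    if prior == 0 and current > 0:
        return "New"
    if prior > 0 and current > prior:
        return "Expansion"
    if prior > 0 and 0 < current < prior:
        return "Downsell"
    if prior > 0 and current == 0:
        return "Churn"
    return "Flat"  # no change, or zero ARR in both periods

# One row per customer: prior-period ARR vs current-period ARR
df = pd.DataFrame({
    "customer_id": ["a", "b", "c", "d"],
    "prior_arr":   [0, 100, 100, 100],
    "current_arr": [50, 150, 60, 0],
})
df["movement"] = df.apply(
    lambda r: classify_movement(r["prior_arr"], r["current_arr"]), axis=1
)
print(df["movement"].tolist())  # ['New', 'Expansion', 'Downsell', 'Churn']
```

Because every view calls the same function, "Expansion" in the waterfall means exactly the same thing as "Expansion" in a cohort cut or a churn list.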
Step 3: Build dynamic views that can be filtered and drilled
With a canonical dataset and movement logic in place, Clarus generates interactive outputs that support questions like:
ARR waterfall (toggleable by time + segment)
change time grain: month vs quarter vs year
filter: entire company, North America, MidMarket Technology customers, etc.
drill-down: click from total → segment → sub-segment → customer list behind the movement
Top churn events + timing
identify the largest churn events by ARR lost
view timing patterns and clusters
filter by segment (region/industry/customer type) and drill into customers driving churn
Retention by cohort
cohort definition: month/quarter customer joined
retention curves and tables that can be filtered by segment
compare cohorts over time and identify where retention shifts
Top accounts: today and historically
Top 25 today + historical trend lines
Top 25 as of X date + forward trajectory (“what happened after they became top accounts?”)
segment-aware versions of the same views (e.g., Top 25 in North America)
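Under the hood, the slicing that powers these views reduces to one reusable aggregation over the classified movements. A sketch, assuming a table with a `movement` label and an `arr_change` column (schema illustrative):

```python
import pandas as pd

def arr_waterfall(movements: pd.DataFrame, **filters) -> pd.Series:
    """Aggregate classified movements into a waterfall, with optional
    segment filters (e.g. region="NA"). Schema is illustrative."""
    df = movements
    for col, val in filters.items():
        df = df[df[col] == val]
    return df.groupby("movement")["arr_change"].sum()

movements = pd.DataFrame({
    "customer_id": ["a", "b", "c", "d"],
    "region":      ["NA", "NA", "EMEA", "NA"],
    "movement":    ["New", "Expansion", "Downsell", "Churn"],
    "arr_change":  [50, 50, -40, -100],
})
print(arr_waterfall(movements))               # whole company
print(arr_waterfall(movements, region="NA"))  # one segment, same logic
```

Because the segment filter is an argument rather than logic baked into each report, "one more cut" is a keyword change, not a rebuild, and drill-down is just the filtered customer rows behind each movement.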
Step 4: Make it re-runnable as new data arrives
The workflow is built to refresh with new exports and maintain consistency:
schema checks prevent silent breaks
ID mapping rules persist across periods
output definitions remain stable, so the team doesn’t lose trust quarter-to-quarter
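A schema check of the kind described in the first bullet can be a few lines that fail loudly on refresh instead of letting a changed export break views silently. A sketch with illustrative expected columns and dtypes:

```python
import pandas as pd

# Illustrative canonical schema; a real one would cover every field
EXPECTED = {"customer_id": "object", "month": "period[M]", "arr": "float64"}

def check_schema(df: pd.DataFrame) -> None:
    """Raise if a refreshed export drifts from the canonical schema."""
    missing = set(EXPECTED) - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    for col, dtype in EXPECTED.items():
        if str(df[col].dtype) != dtype:
            raise TypeError(f"{col}: expected {dtype}, got {df[col].dtype}")
```

Run at ingest time, a check like this turns "the Q3 export dropped a column" from a quietly wrong dashboard into an immediate, explainable error.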
What Changed Operationally
The win wasn’t a single time-savings figure, because diligence packages and company data maturity vary.
What changed was the nature of the analysis:
fewer “rebuild the spreadsheet” cycles
fewer definition debates and inconsistent cuts
faster iteration during diligence (“one more slice” becomes a query, not a rebuild)
repeatable monitoring outputs across periods
higher confidence because every view ties back to the same canonical logic
Instead of spending time translating raw exports into bespoke pivots, the team spent time on what matters: interpreting the signals and making investment decisions.
The Takeaway
For growth equity software investors, ARR analytics isn’t hard because “waterfalls are complicated.” It’s hard because the inputs are messy, the questions are high-variance, and definitions must be consistent across time and segments.
Generic dashboards visualize what’s there. Clarus builds the workflow that makes the data reliable and the analysis repeatable—so you can dynamically slice ARR, churn, cohorts, and account concentration without rebuilding the engine every time.
If you’re spending too much effort rebuilding ARR bridges, cohort cuts, and churn analysis from raw exports—especially during diligence—we’re happy to share what a production workflow implementation looks like.
