What this shows
For updated assessments only, the velocity ratio compares post-update usage against the pre-update baseline. A ratio of 1.0 = usage matching pre-update levels. Above 1.0 = outperforming the baseline.
Key metrics
- Velocity ratio: actual ÷ expected cumulative at the current date.
- Daily lift %: percentage change in daily usage post-update vs pre-update.
- Days to parity: days until the 7-day rolling average first matched the pre-update daily average.
Toggle seasonal adjustment for holiday-affected assessments.
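How the three metrics relate can be sketched as follows. This is a minimal illustration, not the dashboard's actual implementation; the function name, data shape (plain lists of daily counts), and the ≥ parity condition are assumptions.

```python
from statistics import mean

def velocity_metrics(pre_daily, post_daily):
    """Illustrative sketch. pre_daily / post_daily are lists of
    daily usage counts before and after the update (hypothetical shape)."""
    baseline = mean(pre_daily)             # pre-update daily average
    expected = baseline * len(post_daily)  # expected cumulative at current date
    actual = sum(post_daily)               # actual cumulative post-update

    velocity_ratio = actual / expected     # 1.0 = matching pre-update pace
    daily_lift_pct = (mean(post_daily) / baseline - 1) * 100

    # Days to parity: first day the 7-day rolling average reaches the baseline.
    days_to_parity = None
    for i in range(6, len(post_daily)):
        if mean(post_daily[i - 6:i + 1]) >= baseline:
            days_to_parity = i + 1
            break
    return velocity_ratio, daily_lift_pct, days_to_parity
```

For example, a flat pre-update baseline of 10/day followed by post-update days that ramp from 8 to 13 yields a ratio above 1.0 and a parity date about a week in.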
Velocity Ratio Over Time
Daily Usage Lift (%)
Cumulative Post-Update Usage vs Pre-Update Average
What this shows
Of clinicians who tried an assessment post-release/update, what proportion used it again? The bar chart shows the adaptive-window retention rate; the table adds 30/60/90-day fixed intervals.
What to look for
- Longer windows naturally produce higher retention. Use 30/60/90-day columns for like-for-like comparison.
- Screening tools (MDQ, MSI-BPD) — expect lower retention (single-use per client).
- Outcome tools (EDE-Q 6.0, BSL-23, ISI) — should show higher retention from readministration.
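The fixed 30/60/90-day columns can be sketched like this. The function name and data shape (clinician id mapped to a sorted list of administration dates) are illustrative assumptions; the dashboard's real pipeline may differ.

```python
from datetime import date

def fixed_window_retention(usage_by_clinician, windows=(30, 60, 90)):
    """Illustrative sketch (data shape is an assumption).

    usage_by_clinician: clinician id -> sorted list of administration dates.
    Retention at window W = share of clinicians with at least one repeat
    use within W days of their first use.
    """
    rates = {}
    for w in windows:
        retained = 0
        for dates in usage_by_clinician.values():
            first = dates[0]
            # Any later administration within w days of the first counts.
            if any(0 < (d - first).days <= w for d in dates[1:]):
                retained += 1
        rates[w] = retained / len(usage_by_clinician)
    return rates
```

Note the fixed windows make the comparison like-for-like: a clinician whose repeat use falls at day 74 counts toward 90-day retention but not 30- or 60-day.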
Clinician Retention After First Use (Post-Release)
Pre vs Post-Update Retention — Matched Windows (Updated Assessments)
Retention Detail (Adaptive + Fixed Windows)
What this shows
Weekly administration counts per assessment. Dashed vertical lines mark release/update dates. Grey shading = holiday period (15 Dec – 31 Jan).
What to look for
- New assessments: steady upward or stable trend post-launch = genuine adoption.
- Updated assessments: usage should at minimum return to pre-update levels.
- Tip: filter to Updates or New for clearer comparison.
Weekly Administration Counts
What this shows
Unique clinicians per week per assessment. Separates genuine breadth from volume driven by a few heavy users.
What to look for
- Steadily growing clinician count = adoption spreading across user base.
- High total admins + low unique clinicians = concentrated usage (weaker signal).
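The breadth-versus-volume distinction above can be sketched by counting both totals and uniques per ISO week. The record shape (clinician id, date pairs) and function name are illustrative assumptions.

```python
from collections import defaultdict
from datetime import date

def weekly_breadth(admins):
    """Illustrative sketch (record shape is an assumption).

    admins: list of (clinician_id, administration_date) tuples.
    Returns {(iso_year, iso_week): (total_admins, unique_clinicians)},
    making concentrated usage (high total, low unique) easy to spot.
    """
    weeks = defaultdict(list)
    for clinician, d in admins:
        iso = d.isocalendar()
        weeks[(iso[0], iso[1])].append(clinician)
    return {wk: (len(c), len(set(c))) for wk, c in sorted(weeks.items())}
```

A week with 3 administrations by 2 clinicians reads as (3, 2); a wide gap between the two numbers is the "concentrated usage" signal described above.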
Unique Clinicians Per Week