DORA Metrics
Four DevOps maturity metrics: deployment frequency, lead time for changes, change failure rate, mean time to recovery
DORA metrics are four indicators developed by the DevOps Research and Assessment (DORA) group. They measure the effectiveness of development and delivery processes. GitRiver calculates them automatically.
DORA metrics are a Pro feature; viewing them requires an assigned Pro seat.
The Four Metrics
1. Deployment Frequency
How often you deploy to production.
| Rating | Criteria |
|---|---|
| Excellent | ≥ 1 time per day |
| Good | ≥ 1 time per week |
| Average | ≥ 1 time per month |
| Low | Less than 1 time per month |
Data source: successful RiverCD syncs or successful pipelines on the main branch.
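The rating thresholds above can be expressed as a simple classification over the average number of deployments per day. This is a minimal sketch, not GitRiver's actual implementation; the function name and signature are hypothetical.

```python
from datetime import date

def deployment_frequency_rating(deploy_dates: list[date], period_days: int) -> str:
    """Classify deployment frequency per the thresholds in the table above."""
    deploys_per_day = len(deploy_dates) / period_days
    if deploys_per_day >= 1:
        return "Excellent"      # at least 1 per day
    if deploys_per_day >= 1 / 7:
        return "Good"           # at least 1 per week
    if deploys_per_day >= 1 / 30:
        return "Average"        # at least 1 per month
    return "Low"
```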
2. Lead Time for Changes
How long it takes from merging a pull request to a successful deployment.
| Rating | Criteria |
|---|---|
| Excellent | ≤ 1 hour |
| Good | ≤ 1 day |
| Average | ≤ 7 days |
| Low | > 7 days |
Displayed values: the median and the average, in seconds.
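The displayed statistics and the rating thresholds above can be sketched as follows. This is an illustrative sketch under the assumption that the rating is derived from the median; the function names are hypothetical.

```python
from statistics import mean, median

def lead_time_stats(lead_times_seconds: list[float]) -> dict:
    """Median and average lead time in seconds (merge -> successful deploy)."""
    return {
        "median": median(lead_times_seconds),
        "average": mean(lead_times_seconds),
    }

def lead_time_rating(median_seconds: float) -> str:
    """Classify the median lead time per the thresholds in the table above."""
    if median_seconds <= 3600:        # 1 hour
        return "Excellent"
    if median_seconds <= 86400:       # 1 day
        return "Good"
    if median_seconds <= 7 * 86400:   # 7 days
        return "Average"
    return "Low"
```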
3. Change Failure Rate
The percentage of deployments that result in failures.
| Rating | Criteria |
|---|---|
| Excellent | ≤ 15% |
| Good | ≤ 30% |
| Average | ≤ 45% |
| Low | > 45% |
4. Mean Time to Recovery
How quickly you recover after a failed deployment.
| Rating | Criteria |
|---|---|
| Excellent | ≤ 1 hour |
| Good | ≤ 1 day |
| Average | ≤ 7 days |
| Low | > 7 days |
How it’s calculated: time from a failed deployment to the next successful one.
Where to View
- Open the repository
- Go to the “DORA” tab (or “Analytics”)
- Select the period: 7 days, 30 days, or 90 days
For each metric, the following are displayed: current value, rating (Excellent/Good/Average/Low), and a daily chart.
Where the Data Comes From
GitRiver automatically determines the data source:
- If RiverCD is used: data comes from syncs (the deploy_syncs table). This is the most accurate method: RiverCD knows exactly what was deployed.
- If RiverCD is not configured: data from CI pipelines on the main branch. Each successful pipeline is counted as a deployment.
Metrics collection requires data for the selected period. If there were no deployments in that period, a “No data” message is displayed.
Value Stream Analytics (VSA)
VSA shows the time spent at each stage from issue to production:
| Stage | What it measures |
|---|---|
| Issue -> PR | Time from issue creation to creation of a linked pull request |
| Review | Time from PR creation to first review |
| PR -> Merge | Time from PR creation to merge |
| Merge -> Pipeline | Time from merge to successful pipeline |
| Pipeline -> Deploy | Time from successful pipeline to deployment |
For each stage: number of items, median, average, and 90th percentile (in hours).
The bottleneck (the stage with the highest median) is highlighted. This helps identify where the biggest delay in the process is.
Total cycle time is the sum of the medians across all stages.
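The bottleneck and total cycle time described above follow directly from the per-stage medians. A minimal sketch, assuming durations are given in hours; the function name is hypothetical.

```python
from statistics import median

def vsa_summary(stage_durations: dict[str, list[float]]) -> dict:
    """Per-stage medians, the bottleneck stage, and total cycle time.

    stage_durations: stage name -> list of item durations in hours.
    """
    medians = {stage: median(vals) for stage, vals in stage_durations.items() if vals}
    return {
        "medians": medians,
        # Bottleneck: the stage with the highest median duration.
        "bottleneck": max(medians, key=medians.get),
        # Total cycle time: the sum of medians across all stages.
        "total_cycle_time": sum(medians.values()),
    }
```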
VSA works most accurately when pull requests reference issues via `#N` in the title or description.
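Extracting such references amounts to a simple pattern match. This sketch is illustrative only; how GitRiver actually parses references is not specified here.

```python
import re

ISSUE_REF = re.compile(r"#(\d+)")  # matches references like #42

def linked_issues(title: str, description: str = "") -> list[int]:
    """Collect issue numbers referenced as #N in a PR title or description."""
    return [int(n) for n in ISSUE_REF.findall(f"{title} {description}")]
```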