
Lesson 3: Metrics & Reporting

Adopting AI agents is an investment. Metrics help you:

  • Justify the investment to engineering leadership
  • Identify bottlenecks in your workflow
  • Optimise agent configuration based on real data
  • Track improvement over time

Access the team dashboard:

```sh
rearch dashboard open
# Opens https://app.rearch.engineer/org/acme/dashboard
```

The dashboard has four main sections: an overview, an activity feed, team statistics, and trend charts.

**Overview**: high-level metrics at a glance.

| Metric | Description |
| --- | --- |
| Tasks completed | Total tasks successfully completed this period |
| Avg. completion time | Mean time from task creation to PR merge |
| Success rate | Percentage of tasks completed without manual intervention |
| Lines of code | Total lines added/modified/removed by agents |
| PRs created | Number of pull requests generated |
| Review pass rate | Percentage of agent PRs approved on first human review |

**Activity feed**: a chronological stream of all agent activity across the organisation.

```
[Mar 08, 14:32] ✓ Task #412 completed — "Add pagination to /users endpoint"
    Agent: backend-agent | Pipeline: feature | Duration: 8m 12s
    PR #89 created → merged by @alice

[Mar 08, 13:15] ⚠ Task #411 needs review — "Migrate auth to OAuth2"
    Agent: backend-agent | Pipeline: migration | Duration: 22m 04s
    PR #88 created → 2 change requests from @bob

[Mar 08, 11:00] ✓ Task #410 completed — "Fix date formatting in reports"
    Agent: frontend-agent | Pipeline: bugfix | Duration: 3m 44s
    PR #87 created → merged by @carol
```

**Team statistics**: per-team and per-developer breakdowns.

```
Frontend Team
├── Tasks: 45 completed / 3 failed
├── Avg. time: 6m 30s
├── Top user: Alice (28 tasks)
└── Most used pipeline: feature (67%)

Backend Team
├── Tasks: 62 completed / 5 failed
├── Avg. time: 11m 15s
├── Top user: Bob (41 tasks)
└── Most used pipeline: bugfix (45%)
```

**Trends**: charts showing progress over time.

  • Tasks per week (with trend line)
  • Average completion time (should decrease as prompts improve)
  • Success rate over time
  • Code quality metrics (lint errors, test coverage delta)

Generate reports for specific time periods or audiences:

```sh
# Weekly engineering report
rearch report generate \
  --period "last 7 days" \
  --format markdown \
  --output weekly-report.md

# Monthly executive summary
rearch report generate \
  --period "2025-02" \
  --format pdf \
  --template executive \
  --output feb-2025-report.pdf

# Per-team breakdown
rearch report generate \
  --period "last 30 days" \
  --team frontend \
  --format json \
  --output frontend-metrics.json
```
Built-in report templates:

| Template | Audience | Includes |
| --- | --- | --- |
| engineering | Dev team | Detailed metrics, task breakdown, failure analysis |
| executive | Leadership | High-level ROI, time saved, adoption trends |
| compliance | Security/audit | Agent access logs, code review coverage, security gate results |

Create your own report template:

.rearch/reports/sprint-review.yaml

```yaml
template:
  name: sprint-review
  sections:
    - title: "Sprint Summary"
      metrics:
        - tasks_completed
        - tasks_failed
        - avg_completion_time
    - title: "Top Contributions"
      query: "top_tasks_by_lines_changed(limit=5)"
    - title: "Quality"
      metrics:
        - review_pass_rate
        - test_coverage_delta
        - lint_errors_introduced
    - title: "Recommendations"
      auto_generate: true # AI-generated suggestions based on data
```
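Before shipping a new template, it can help to sanity-check its structure locally. A minimal sketch, mirroring the sprint-review template above as a plain Python dict; the validation rules here are illustrative assumptions, not ReArch's actual schema:

```python
# Sketch of client-side validation for a report template. The allowed
# keys and rules below are assumptions for illustration only.
ALLOWED_KEYS = {"title", "metrics", "query", "auto_generate"}

def validate_template(template: dict) -> list:
    """Return a list of problems found; an empty list means the template looks valid."""
    problems = []
    if "name" not in template:
        problems.append("template is missing a name")
    for i, section in enumerate(template.get("sections", [])):
        if "title" not in section:
            problems.append(f"section {i} is missing a title")
        unknown = set(section) - ALLOWED_KEYS
        if unknown:
            problems.append(f"section {i} has unknown keys: {sorted(unknown)}")
        # Each section needs a content source: metrics, a query, or auto_generate.
        if not any(k in section for k in ("metrics", "query", "auto_generate")):
            problems.append(f"section {i} defines no content")
    return problems

# The sprint-review template from the YAML above, as a dict.
sprint_review = {
    "name": "sprint-review",
    "sections": [
        {"title": "Sprint Summary",
         "metrics": ["tasks_completed", "tasks_failed", "avg_completion_time"]},
        {"title": "Top Contributions",
         "query": "top_tasks_by_lines_changed(limit=5)"},
        {"title": "Quality",
         "metrics": ["review_pass_rate", "test_coverage_delta", "lint_errors_introduced"]},
        {"title": "Recommendations", "auto_generate": True},
    ],
}
```

A check like this catches typos (a misspelled key, a section with no content) before the template is committed.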

The most common question from leadership: “Is this worth it?”

ReArch calculates estimated ROI based on:

Time saved = (tasks_completed × avg_manual_time) - (tasks_completed × avg_agent_time + review_time)

Configure your baseline:

.rearch/config.yaml

```yaml
metrics:
  baseline:
    avg_manual_task_minutes: 120  # How long a task takes without AI
    developer_hourly_rate: 75     # For cost calculations (optional)
```

Example ROI output:

```
February 2025 ROI Summary
─────────────────────────
Tasks completed by agents: 107
Estimated manual time: 214 hours
Actual agent + review time: 52 hours
Time saved: 162 hours
Efficiency gain: 75.7%
```
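The formula and the February summary above are easy to reproduce by hand. A short sketch that plugs in the example numbers, with minutes converted to hours:

```python
# Reproduce the ROI calculation from the "Time saved" formula above.
# Baseline: avg_manual_task_minutes = 120 (from .rearch/config.yaml).
tasks_completed = 107
avg_manual_hours = 120 / 60              # 2 hours per task without AI
agent_plus_review_hours = 52             # measured agent + human review time

estimated_manual = tasks_completed * avg_manual_hours    # 107 × 2 = 214 hours
time_saved = estimated_manual - agent_plus_review_hours  # 214 − 52 = 162 hours
efficiency_gain = time_saved / estimated_manual * 100    # 162 / 214 ≈ 75.7%

print(f"Time saved: {time_saved:.0f} hours")
print(f"Efficiency gain: {efficiency_gain:.1f}%")
```

Running the same arithmetic against your own baseline is a quick way to verify the dashboard's numbers, or to model what a different `avg_manual_task_minutes` assumption would imply.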

All metrics are available via API for custom dashboards:

```sh
# JSON export
curl -H "Authorization: Bearer $REARCH_API_KEY" \
  "https://api.rearch.engineer/v1/org/acme/metrics?period=30d" \
  -o metrics.json

# CSV export
rearch metrics export --period "last 90 days" --format csv --output metrics.csv
```
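Once exported, the JSON can feed a custom dashboard or report. A minimal sketch of post-processing the payload; note that the field names used here (`tasks`, `status`, `duration_minutes`) are assumptions for illustration, so check the API response shape against the actual export:

```python
# Summarize an exported metrics payload. The schema below is an
# assumed example, not ReArch's documented response format.
def summarize(payload: dict) -> dict:
    tasks = payload.get("tasks", [])
    completed = [t for t in tasks if t.get("status") == "completed"]
    total = len(tasks)
    return {
        "total": total,
        "completed": len(completed),
        "success_rate": round(100 * len(completed) / total, 1) if total else 0.0,
        "avg_minutes": round(
            sum(t["duration_minutes"] for t in completed) / len(completed), 1
        ) if completed else 0.0,
    }

# A small hand-written payload in the assumed shape.
sample = {
    "tasks": [
        {"status": "completed", "duration_minutes": 8},
        {"status": "completed", "duration_minutes": 4},
        {"status": "failed", "duration_minutes": 22},
    ],
}
print(summarize(sample))
```

The same function works on a full `metrics.json` loaded with `json.load`, provided the real schema matches what you map into it.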

Integrate with your existing tools:

  • Grafana: Use the ReArch data source plugin
  • Datadog: Forward metrics via the webhook integration
  • Google Sheets: Import CSV exports on a schedule

You have completed the Team Collaboration course. You now know how to:

  • Set up a team workspace with roles and permissions
  • Configure AI-assisted code review agents
  • Track productivity metrics and generate reports for stakeholders