Measuring Technical Debt: KPIs, Tools & Dashboards
What gets measured gets managed. Learn how to quantify, track, and communicate technical debt with data-driven metrics that speak to both engineers and executives.
"We have too much tech debt" is a feeling every developer knows. But feelings don't get budget. To actually reduce technical debt, you need numbers - metrics that translate code quality problems into business impact. This page covers the essential KPIs to track, the tools that measure them automatically, and how to build dashboards that make invisible debt visible.
Whether you're a developer building a case for refactoring time, a tech lead justifying tooling purchases, or a CTO presenting to the board, these measurement frameworks give you the data to drive real change.
Why Measurement Matters: The Numbers Don't Lie
- CTOs frequently name technical debt as their biggest challenge (Source: Stripe Developer Survey)
- A significant share of IT budgets goes to maintaining legacy systems rather than innovation (Source: Gartner Research)
- Companies that actively track technical debt pay it down faster (Source: McKinsey Digital)
The takeaway: Organizations that measure tech debt reduce it faster, but most teams fly blind. This page changes that.
Essential KPIs to Track
Not all metrics are created equal. Focus on these four categories to build a complete picture of your technical health.
Code Quality Metrics
Objective measures of code health that tools can calculate automatically
Technical Debt Ratio (TDR)
Target: <5%. The ratio of remediation time to development time. A 5% TDR means fixing all debt would take 5% of the time it took to build the codebase.
SonarQube calculates this automatically based on rule violations and estimated fix times.
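The TDR formula itself is trivial to apply once a scanner has estimated remediation effort. A minimal sketch with hypothetical figures (real numbers would come from a tool like SonarQube's effort estimates):

```python
def tech_debt_ratio(remediation_hours: float, development_hours: float) -> float:
    """Return the Technical Debt Ratio as a percentage."""
    return remediation_hours / development_hours * 100

# Hypothetical: 400 hours of estimated fixes against a codebase
# that took 10,000 hours to build.
ratio = tech_debt_ratio(400, 10_000)
print(f"TDR: {ratio:.1f}%")                          # 4.0%
print("Within target" if ratio < 5 else "Needs attention")
```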
Code Coverage
Target: >80%. Percentage of code executed during automated tests. Low coverage means more untested code that could break silently.
100% coverage isn't the goal - test meaningful paths, not every getter.
Cyclomatic Complexity
Target: <10 average. The number of independent paths through code. Higher complexity means code that is harder to test, understand, and maintain.
- 1-10: Simple, low risk
- 11-20: Moderate complexity
- 21-50: High complexity
- 50+: Untestable, refactor immediately
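Tools compute this automatically, but the idea is simple: start at 1 and add one for each branching construct. A rough sketch using Python's `ast` module (real tools such as radon handle many more cases):

```python
import ast

# Node types that add a decision point (approximation of McCabe's metric).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Count 1 + the number of branching constructs in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

SAMPLE = """
def ship(order):
    if order.paid and order.in_stock:   # if + boolean operator = 2
        for item in order.items:        # loop = 1
            if item.fragile:            # nested if = 1
                pack_carefully(item)
    return order
"""
print(cyclomatic_complexity(SAMPLE))    # 5 -- still in the "simple" band
```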
Code Duplication
Target: <3%. Percentage of duplicated code blocks. Duplicates mean a bug gets fixed in one place but remains elsewhere.
Tools detect blocks of 10+ identical tokens across files. Each duplicate increases maintenance cost linearly.
Copy-paste is the fastest way to create tech debt.
Velocity Metrics (DORA Framework)
DevOps Research and Assessment metrics - the gold standard for engineering performance
Why DORA matters: These four metrics are backed by 7+ years of research covering 30,000+ organizations. Google's own engineering teams use them. When tech debt is high, all four DORA metrics suffer.
Deployment Frequency
Higher = better. How often your team deploys to production. Elite teams deploy multiple times per day; struggling teams deploy monthly.
- Elite: Multiple per day
- High: Weekly to daily
- Medium: Monthly to weekly
- Low: Less than monthly
Lead Time for Changes
Lower = better. Time from code commit to running in production. Tech debt increases this through complex deployments and manual processes.
- Elite: Less than 1 hour
- High: 1 day to 1 week
- Medium: 1 week to 1 month
- Low: More than 1 month
Change Failure Rate
Target: <15%. Percentage of deployments that cause failures requiring a rollback or hotfix. High debt means more unexpected interactions and bugs.
- Elite: 0-15%
- High: 16-30%
- Medium: 31-45%
- Low: 46-60%
Mean Time to Recovery (MTTR)
Lower = better. How quickly you recover from production failures. Complex, undocumented systems take longer to diagnose and fix.
- Elite: Less than 1 hour
- High: Less than 1 day
- Medium: Less than 1 week
- Low: More than 1 week
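Three of the four DORA metrics can be derived from a simple deployment log. A sketch over hypothetical records (MTTR would additionally need incident open/close timestamps):

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: (deployed_at, committed_at, caused_failure)
deploys = [
    (datetime(2024, 6, 3, 10), datetime(2024, 6, 3, 8),  False),
    (datetime(2024, 6, 4, 15), datetime(2024, 6, 4, 9),  True),
    (datetime(2024, 6, 5, 11), datetime(2024, 6, 5, 10), False),
    (datetime(2024, 6, 6, 16), datetime(2024, 6, 6, 13), False),
]
days_observed = 7

deploy_frequency = len(deploys) / days_observed            # deploys per day
lead_times = [deployed - committed for deployed, committed, _ in deploys]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
change_failure_rate = sum(failed for *_, failed in deploys) / len(deploys) * 100

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Average lead time:    {avg_lead_time}")            # 3:00:00
print(f"Change failure rate:  {change_failure_rate:.0f}%") # 25%
```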
Developer Experience Metrics
Qualitative measures that reveal how tech debt affects your team's daily work
Onboarding Time
Target: 6 weeks. Time for a new developer to make their first meaningful contribution. Complex codebases extend this dramatically.
Calculate: Days from start date to first merged PR affecting core functionality.
Build Time
Track the trend. Local build plus test time. Long builds destroy developer flow and indicate bloated dependencies or poor architecture.
If build time grows 10%+ per quarter, you have a scaling problem.
Developer Satisfaction (NPS)
Survey score. Regular surveys asking developers how they feel about the codebase. This is Google's primary tech debt metric (more below).
Ask: "How confident are you making changes to [module X]?" (1-5 scale)
Engineer Turnover Rate
Target: <10%. Annual turnover percentage. High tech debt increases frustration, which increases attrition, which increases knowledge loss.
Exit interviews mentioning "code quality" are a red flag.
Business Impact Metrics
Metrics that translate technical problems into business language executives understand
Incident Frequency
Trend down. Production incidents per week or month. Each incident has a cost in engineer time, customer impact, and potential revenue loss.
Track: Severity 1/2/3 incidents separately for better granularity.
Feature Delivery Rate
Per sprint. Story points or features delivered per sprint. Declining velocity despite a stable team size is the clearest sign of debt impact.
Plot this over 12+ months to see velocity degradation trends.
Customer-Reported Bugs
Trend down. Bugs discovered by customers vs. caught internally. A high external bug rate indicates poor test coverage and quality gates.
Goal: 90%+ of bugs found internally before customers see them.
Maintenance vs Features
Target: 70/30. Percentage of engineering time on new features vs. maintenance, bug fixes, and keeping the lights on.
If maintenance exceeds 50%, tech debt is strangling your team.
Measurement Tools Deep Dive
You don't need to measure everything manually. These tools automate tech debt tracking and integrate with your existing workflow.
SonarQube - The Industry Standard
Used by 7M+ developers worldwide.
Key Features
- 30+ languages: Java, JavaScript, TypeScript, Python, C#, PHP, Go, Ruby, and more
- Quality Gates: Automatically fail builds that don't meet standards
- Technical Debt calculation: Estimates hours to fix each issue
- AI CodeFix: Automatic suggestions for fixing issues (new in 2024)
- PR decoration: Inline comments on pull requests
Pricing
- Community Edition: Free, self-hosted, open source
- Developer Edition: $150/year per 100K lines of code
- Enterprise Edition: Custom pricing, portfolio management
- SonarCloud: $10-$4,000/mo for cloud-hosted version
Quick Setup Guide (Click to expand)
1. Add to your CI pipeline (GitHub Actions example):
```yaml
# .github/workflows/sonar.yml
name: SonarCloud Analysis
on: [push, pull_request]
jobs:
  sonarcloud:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history for accurate analysis
      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```
2. Configure sonar-project.properties:
```properties
sonar.projectKey=your-org_your-project
sonar.organization=your-org
sonar.sources=src
sonar.tests=tests
sonar.javascript.lcov.reportPaths=coverage/lcov.info
```
Code Climate - Prevention-Focused
GitHub/GitLab native.
Key Features
- Maintainability ratings: A-F grades for every file and module
- PR-level analysis: Shows debt impact of each change before merge
- Test coverage integration: Coverage trends and diff coverage
- Velocity metrics: Built-in DORA dashboard
Pricing
- Free: Open source repositories
- Quality: $16/user/month for code analysis
- Velocity: $99/user/month for DORA metrics
NDepend - .NET Specialist
Deep .NET analysis.
Key Features
- 150+ code metrics: The deepest .NET analysis available
- Dependency visualization: Interactive graphs of code relationships
- Roslyn inspections: Real-time analysis in Visual Studio
- Trend analysis: Track metrics over years of history
Pricing
- Developer seat: $492 one-time + $192/year renewal
- Build server: $792 one-time for CI/CD integration
- Enterprise: Custom pricing for large deployments
Tool Comparison Matrix
| Feature | SonarQube | Code Climate | NDepend |
|---|---|---|---|
| Languages | 30+ | 15+ | .NET only |
| Self-hosted option | ✓ | ✗ | ✓ |
| Cloud option | ✓ (SonarCloud) | ✓ | ✗ |
| Free tier | Community Edition | OSS only | 14-day trial |
| DORA metrics | ✗ | ✓ (Velocity) | ✗ |
| PR integration | ✓ | ✓ | Limited |
| AI fix suggestions | ✓ | ✗ | ✗ |
| Entry price | Free | $16/user/mo | $492/seat |
Recommendation: Start with SonarQube Community Edition (free). It covers most needs and establishes measurement habits. Add Code Climate if you need DORA metrics, or NDepend if you're deep in .NET legacy code.
Setting Up Your Tech Debt Dashboard
A great dashboard turns raw metrics into actionable insights. Here's what your tech debt dashboard should include.
Dashboard Components
Traffic Light Indicators
At-a-glance status for each major area. Executives love these because they can see overall health in seconds.
Trend Lines
Direction matters more than absolute numbers. Is debt growing or shrinking?
- Rising trend = problem getting worse
- Flat trend = holding steady
- Falling trend = improvements working
Breakdown by Team/Module
Aggregate metrics hide where problems actually live. Show debt by:
- Module or service
- Team ownership
- Repository
- Age of code (legacy vs. recent)
Alert Thresholds
Automated notifications when metrics cross defined boundaries:
- Warning at 80% of threshold
- Alert at 100% of threshold
- Critical if sustained for 2+ weeks
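The warn/alert/critical ladder above is easy to encode. A minimal sketch, with illustrative threshold values:

```python
def alert_level(value: float, threshold: float, weeks_over: int) -> str:
    """Map a metric reading to the warning ladder described above."""
    if value >= threshold:
        # Escalate to critical only when the breach has persisted.
        return "CRITICAL" if weeks_over >= 2 else "ALERT"
    if value >= 0.8 * threshold:
        return "WARNING"
    return "OK"

# Tech debt ratio against a hypothetical 5% threshold:
print(alert_level(4.2, threshold=5.0, weeks_over=0))  # WARNING (84% of threshold)
print(alert_level(5.5, threshold=5.0, weeks_over=1))  # ALERT
print(alert_level(5.5, threshold=5.0, weeks_over=3))  # CRITICAL (sustained 2+ weeks)
```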
Example Dashboard Layout
This mock layout shows a healthy codebase after 12 months of debt reduction work. Your starting point may look redder - and that's okay.
How Google Measures Technical Debt
Google's engineering practices are legendary. Here's how they approach tech debt measurement internally.
"We've found that the most useful signal for tech debt is developer sentiment. Code metrics are lagging indicators - by the time they're bad, the damage is done. Developer surveys are leading indicators."
- Google Engineering Practices Documentation
Quarterly Developer Surveys
Google surveys engineers every quarter asking:
- How confident are you making changes in [area]?
- What's slowing you down most?
- Which systems cause the most frustration?
Engineering Log Analysis
Automated analysis of engineering activity:
- Build times and flaky test frequency
- Code review turnaround time
- Time spent in specific codepaths
Three Forms of Debt
Google categorizes tech debt into:
- Code degradation: Quality issues over time
- Expertise gaps: Lost institutional knowledge
- Migration needs: Systems that need updating
Key Insight: The Developer Confidence Score
Google's single most valuable metric is a simple question: "On a scale of 1-5, how confident are you making changes in [specific area]?" Aggregated across teams, this creates a heatmap of technical debt that code scanners miss. Areas with low confidence scores but high code quality grades indicate documentation debt, expertise silos, or architectural complexity that tools can't detect.
Tracking Tech Debt Over Time
Different audiences need different review cadences. Here's a framework for ongoing measurement.
Weekly Reviews
For development teams
- Review new issues introduced
- Check PR quality gate failures
- Celebrate debt paid down
- Flag blocking issues
Monthly Reviews
For engineering managers
- Trend analysis (improving/declining?)
- Module-level breakdown
- Velocity correlation
- Adjust sprint allocations
Quarterly Reports
For directors and VPs
- Business impact summary
- ROI of debt reduction efforts
- Risk assessment updates
- Budget and resource requests
Annual Planning
For C-suite and board
- Strategic tech debt roadmap
- Multi-year investment plan
- Competitive benchmarking
- End-of-life decisions
When to Escalate Concerns
Notify Manager
- Tech debt ratio exceeds 5%
- Coverage drops below 70%
- Velocity declining 3+ sprints
Escalate to Director
- Tech debt ratio exceeds 10%
- Change failure rate above 30%
- Developer satisfaction below 50%
Executive Attention
- Security vulnerabilities in debt
- End-of-life dependencies
- Team threatening to leave
Frequently Asked Questions
What is the technical debt ratio?
The tech debt ratio measures the cost of fixing all technical debt relative to the cost of developing the system. Formula: Tech Debt Ratio = (Remediation Cost / Development Cost) x 100. For example, if fixing all debt would cost $100K and the system cost $500K to build, your ratio is 20%. SonarQube calculates this automatically. Industry benchmarks: less than 5% is good, 5-10% is moderate, over 10% needs attention. Track this over time to ensure debt is not accumulating faster than you can pay it down.
What are DORA metrics?
DORA (DevOps Research and Assessment) metrics are four key measures of software delivery performance: (1) Deployment Frequency - how often you deploy to production, (2) Lead Time for Changes - time from code commit to production, (3) Change Failure Rate - percentage of deployments causing failures, (4) Mean Time to Recovery - how quickly you recover from failures. High tech debt correlates with poor DORA metrics across all four dimensions. These metrics are now industry standard and directly tie code quality to business outcomes that executives understand.
How do I measure tech debt's impact on velocity?
Track story points or issues completed per sprint over time. A declining trend while team size stays constant indicates accumulating tech debt. Also measure cycle time (commit to production) and lead time (idea to production). Compare velocity between high-debt and low-debt areas of your codebase - debt-heavy modules typically show 40-60% slower development. Use this data to quantify the "interest" you are paying: "We complete 30% fewer features than last year due to time spent working around bad code in module X."
Which tools measure technical debt?
Key tools include: SonarQube/SonarCloud (comprehensive code quality and debt calculation), CodeClimate (maintainability grades and test coverage), Snyk/Dependabot (dependency vulnerabilities and outdated packages), CAST (architecture analysis for large systems), and NDepend/JArchitect (dependency structure analysis). For specific concerns: ESLint for JavaScript and TypeScript (TSLint is deprecated), Pylint for Python, RuboCop for Ruby. Most integrate with CI/CD pipelines for continuous monitoring. Start with SonarQube - it is free, covers many languages, and provides the tech debt ratio metric out of the box.
How do I build a tech debt dashboard?
Create a dashboard showing: (1) Tech debt ratio trend over time, (2) Velocity trend (story points per sprint), (3) Bug rates (bugs found in production per release), (4) Test coverage percentage, (5) Dependency health (outdated/vulnerable dependencies), (6) DORA metrics if available. Use Grafana, Datadog, or even a simple spreadsheet with charts. Update weekly or per sprint. The key is visibility - when executives see debt metrics alongside business metrics, they understand the connection. Set alerts for thresholds like "tech debt ratio exceeds 10%" or "velocity drops 20% from baseline."
What test coverage should we aim for?
Industry targets are typically 70-80% line coverage for application code. However, coverage percentage alone is misleading - you need coverage of critical paths and edge cases. Better metrics include: mutation testing score (how many injected bugs do tests catch?), test effectiveness (bugs found in testing vs. production), and coverage of business-critical flows. Start by covering the areas that change most frequently - these get the highest ROI from testing. Legacy code with 0% coverage should aim for 40% initially, focusing on the most changed modules first.
How do I calculate the cost of technical debt?
Calculate cost using: (1) Maintenance time - hours spent on bugs, workarounds, and firefighting x hourly rate, (2) Opportunity cost - features not shipped because the team was fixing debt x revenue impact, (3) Velocity loss - (baseline velocity - current velocity) x average feature value, (4) Turnover cost - resignations citing code quality x $87K average replacement cost, (5) Incident cost - production outages x downtime revenue loss. Sum monthly costs to show annual impact. Most mid-size organizations find $200K-$500K/year, which makes $50K investments in debt reduction easy to justify.
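The arithmetic behind that annual figure is straightforward. A sketch with purely illustrative monthly numbers (plug in your own from time tracking and finance):

```python
HOURLY_RATE = 95  # hypothetical fully loaded engineer cost per hour

# Illustrative monthly cost components of carrying the debt.
monthly_costs = {
    "maintenance":   100 * HOURLY_RATE,  # hours on bugs/workarounds x rate
    "incidents":     2 * 8_000,          # outages x revenue loss per outage
    "velocity_loss": 3 * 5_000,          # features delayed x avg feature value
}

annual = 12 * sum(monthly_costs.values())
print(f"Estimated annual debt cost: ${annual:,.0f}")  # $486,000
```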
What is cognitive complexity?
Cognitive complexity measures how difficult code is for humans to understand. Unlike cyclomatic complexity (which counts paths), cognitive complexity weights nested structures, breaks in flow, and recursion that make code hard to reason about. High cognitive complexity means more bugs, longer debugging time, and developer frustration. SonarQube calculates this automatically. Target: keep functions under 15, and flag anything over 25 for refactoring. High-complexity code correlates strongly with bug density and is a leading indicator of future problems, making it an excellent early warning metric.
How often should we measure and review tech debt?
Automate continuous measurement in your CI/CD pipeline - every commit should update metrics. Review dashboards weekly in team standups or retrospectives. Report to management monthly or quarterly with trends and highlights. The key is a consistent cadence so you can spot trends early. Include metrics in sprint retrospectives: "Our tech debt ratio increased 2% this sprint - should we allocate more time next sprint?" Make metrics visible by displaying dashboards on team monitors. Measuring and discussing regularly keeps technical debt in the conversation and prevents it from being forgotten.
How do our metrics compare to industry benchmarks?
Industry benchmarks from DORA and other research: elite performers deploy multiple times per day (vs. monthly for low performers), have less than a 15% change failure rate (vs. 46-60%), recover in under an hour (vs. 1-6 months), and have lead times under an hour (vs. 1-6 months). Tech debt ratio: under 5% is A-grade, 5-10% is B, 10-20% is C, over 20% is D. Test coverage: 70%+ for application code. However, the most important comparison is your own trend over time. Are you improving or declining? That trajectory matters more than absolute numbers.
Ready to Make Your Tech Debt Visible?
Measurement is just the first step. Once you have data, you need to communicate it effectively and get buy-in for reduction efforts.