Frequently Asked Questions
Common questions about debtmap’s features, usage, and comparison with other tools.
Features & Capabilities
What’s the difference between measured and estimated metrics?
Debtmap distinguishes between two types of metrics (Spec 118):
Measured Metrics - Precise values from AST analysis:
- cyclomatic_complexity: Exact count of decision points
- cognitive_complexity: Weighted readability measure
- nesting_depth: Maximum nesting levels
- loc: Lines of code

These are suitable for CI/CD quality gates and thresholds.
Estimated Metrics - Heuristic approximations:
- est_branches: Estimated execution paths (formula-based)
  - Formula: max(nesting, 1) × cyclomatic ÷ 3
  - Use for: Estimating test cases needed
  - Don't use for: Hard quality gates
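To make the formula concrete, here is a minimal sketch of how est_branches is derived from the measured metrics. The rounding behavior (integer division below) is an assumption for illustration, not debtmap's exact implementation.

// Illustrative re-implementation of the documented formula:
//   est_branches = max(nesting, 1) × cyclomatic ÷ 3
// Integer division is an assumption; debtmap may round differently.
fn est_branches(nesting: u32, cyclomatic: u32) -> u32 {
    nesting.max(1) * cyclomatic / 3
}

fn main() {
    // nesting_depth = 3, cyclomatic_complexity = 12 → max(3, 1) × 12 ÷ 3 = 12
    assert_eq!(est_branches(3, 12), 12);
    // Flat function: nesting_depth = 1, cyclomatic_complexity = 6 → 1 × 6 ÷ 3 = 2
    assert_eq!(est_branches(1, 6), 2);
}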
Why it matters:
- Use measured metrics for thresholds and gates (precise, repeatable)
- Use estimated metrics for prioritization and effort estimation (heuristic, approximate)
Example:
# GOOD: Use measured metric for quality gate
debtmap validate . --threshold-complexity 15
# GOOD: Use estimated metric for test prioritization
debtmap analyze . --top 10 # Considers est_branches for ranking
See Metrics Reference for complete details.
What is entropy-based complexity analysis?
Entropy analysis uses information theory to distinguish between genuinely complex code and repetitive pattern-based code. Traditional cyclomatic complexity counts branches, but not all branches are equal in cognitive load.
For example, a function with 20 identical if/return validation checks has the same cyclomatic complexity as a function with 20 diverse conditional branches handling different business logic. Entropy analysis gives the validation function a much lower effective complexity score because it follows a simple, repetitive pattern.
Result: 60-75% reduction in false positives compared to traditional complexity metrics.
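The two hypothetical functions below (illustrative code, not debtmap output) show the contrast entropy analysis is designed to catch: both branch repeatedly, but only the second carries real cognitive load.

// Repetitive pattern: near-identical guard clauses. Low entropy, so the
// effective complexity is scored far lower than the raw branch count.
fn validate(name: &str, host: &str, user: &str, port: u16) -> Result<(), String> {
    if name.is_empty() { return Err("name missing".into()); }
    if host.is_empty() { return Err("host missing".into()); }
    if user.is_empty() { return Err("user missing".into()); }
    if port == 0 { return Err("port missing".into()); }
    Ok(())
}

// Diverse branches: each arm encodes different business logic. High entropy,
// so the complexity score stays high.
fn discount(total: f64, returning_customer: bool, items: usize) -> f64 {
    if total > 1_000.0 {
        total * 0.9
    } else if returning_customer && items > 5 {
        total - 25.0
    } else if items == 0 {
        0.0
    } else {
        total
    }
}

fn main() {
    assert!(validate("db", "localhost", "admin", 5432).is_ok());
    assert!((discount(2_000.0, false, 1) - 1_800.0).abs() < 1e-9);
}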
How does coverage integration work?
Debtmap reads LCOV format coverage data (generated by tools like cargo-tarpaulin, pytest-cov, or jest) and maps it to specific functions and branches. It then combines coverage percentages with complexity metrics to calculate risk scores.
Key insight: A complex function with good test coverage is lower risk than a moderately complex function with no tests.
Example workflow:
# Generate coverage data
cargo tarpaulin --out lcov --output-dir target/coverage
# Analyze with coverage integration
debtmap analyze . --lcov target/coverage/lcov.info
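For reference, LCOV is a plain-text format: debtmap maps records such as SF: (source file), FN:/FNDA: (function definitions and hit counts), and DA: (per-line hit counts) back to the functions it analyzed. An abbreviated excerpt with hypothetical paths and names:

SF:src/pricing.rs
FN:12,calculate_discount
FNDA:0,calculate_discount
DA:12,0
DA:13,0
DA:18,0
LF:3
LH:0
end_of_record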
See the examples in the Analysis Guide.
What languages are supported?
Full support:
- Rust (via the syn crate - complete AST analysis)
- Python (via rustpython - full Python 3.x support)
Partial support:
- JavaScript (via tree-sitter - ES6+, JSX)
- TypeScript (via tree-sitter - basic support)
Planned:
- Go (target: Q2 2025)
- Java (target: Q3 2025)
- C/C++ (target: Q4 2025)
Language support means: AST parsing, metric extraction, complexity calculation, and pattern detection.
Why was “branches” renamed to “est_branches”?
The metric was renamed in Spec 118 to make it clear that this is an estimated value, not a precise measurement.
Problem with old name (“branches”):
- Users thought it was a direct count from AST analysis (it’s not)
- Caused confusion with cyclomatic complexity (which counts actual branches)
- Unclear that the value was formula-based
Benefits of new name (“est_branches”):
- The “est_” prefix makes the estimation explicit
- Clearly distinguishes it from measured metrics
- Sets correct user expectations
What changed:
- Terminal output: branches=8 → est_branches=8
- Internal variable names updated for clarity
- Documentation updated to explain the distinction
What didn’t change:
- The formula remains the same: max(nesting, 1) × cyclomatic ÷ 3
- JSON output (this field was never serialized to JSON)
- Scoring and prioritization logic
See Metrics Reference for more details.
Can I customize the complexity thresholds?
Yes! Configure thresholds in .debtmap.toml:
[thresholds]
cyclomatic_complexity = 10 # Flag functions above this
nesting_depth = 3 # Maximum nesting levels
loc = 200 # Maximum lines per function
parameter_count = 4 # Maximum parameters
[scoring]
critical_threshold = 8.0 # Risk score for Critical tier
high_threshold = 5.0 # Risk score for High tier
moderate_threshold = 2.0 # Risk score for Moderate tier
See Configuration for all available options.
Does debtmap integrate with CI/CD?
Yes! Use the validate command to enforce quality gates:
# Fail build if critical or high-tier debt detected
debtmap validate . --max-critical 0 --max-high 5
# Exit codes:
# 0 = validation passed
# 1 = validation failed (debt exceeds thresholds)
# 2 = analysis error
GitHub Actions example:
- name: Check technical debt
  run: |
    cargo tarpaulin --out lcov --output-dir target/coverage
    debtmap validate . --lcov target/coverage/lcov.info \
      --max-critical 0 --max-high 10 \
      --format json --output debt-report.json

- name: Comment on PR
  uses: actions/github-script@v6
  with:
    script: |
      const report = require('./debt-report.json');
      // Post report as PR comment
See Prodigy Integration for more CI/CD patterns.
Comparison with Other Tools
How is debtmap different from SonarQube?
| Aspect | Debtmap | SonarQube |
|---|---|---|
| Speed | 10-100x faster (Rust) | Slower (JVM overhead) |
| Coverage Integration | ✅ Built-in LCOV | ⚠️ Enterprise only |
| Entropy Analysis | ✅ Unique feature | ❌ No |
| Language Support | Rust, Python, JS/TS | 25+ languages |
| Setup | Single binary | JVM + server setup |
| Cost | Free, open-source | Free (basic) / Paid (advanced) |
| Use Case | Fast local analysis | Enterprise dashboards |
When to use SonarQube: Multi-language monorepos, enterprise compliance, centralized quality dashboards.
When to use debtmap: Rust-focused projects, local development workflow, coverage-driven prioritization.
How is debtmap different from CodeClimate?
| Aspect | Debtmap | CodeClimate |
|---|---|---|
| Deployment | Local binary | Cloud service |
| Coverage | Built-in integration | Separate tool |
| Entropy | ✅ Yes | ❌ No |
| Speed | Seconds | Minutes (uploads code) |
| Privacy | Code stays local | Code uploaded to cloud |
| Cost | Free | Free (open source) / Paid |
When to use CodeClimate: Multi-language projects, prefer SaaS solutions, want maintainability ratings.
When to use debtmap: Rust projects, privacy-sensitive code, fast local analysis, entropy-based scoring.
Should I replace clippy with debtmap?
No—use both! They serve different purposes:
clippy:
- Focuses on idiomatic Rust patterns
- Catches common mistakes (e.g., unnecessary clones, inefficient iterators)
- Suggests Rust-specific best practices
- Runs in milliseconds
debtmap:
- Focuses on technical debt prioritization
- Identifies untested complex code
- Combines complexity with test coverage
- Provides quantified recommendations
Recommended workflow:
# Fix clippy issues first (quick wins)
cargo clippy --all-targets --all-features -- -D warnings
# Then prioritize debt with debtmap
debtmap analyze . --lcov coverage/lcov.info --top 10
Should I replace cargo-audit with debtmap?
No—different focus. cargo-audit scans for security vulnerabilities in dependencies. Debtmap analyzes code complexity and test coverage.
Use both:
- cargo-audit - Security vulnerabilities in dependencies
- cargo-geiger - Unsafe code detection
- debtmap - Technical debt and test gaps
How does debtmap compare to traditional code coverage tools?
Debtmap doesn’t replace coverage tools—it augments them.
Coverage tools (tarpaulin, pytest-cov, jest):
- Measure what % of code is executed by tests
- Tell you “you have 75% coverage”
Debtmap:
- Reads coverage data from these tools
- Prioritizes gaps based on code complexity
- Tells you “function X has 0% coverage and complexity 12—fix this first”
Value: Debtmap answers “which 25% should I test first?” instead of just “75% is tested.”
Usage & Configuration
Why don’t entry points need 100% coverage?
Entry points (main functions, CLI handlers, framework integration code) are typically tested via integration tests, not unit tests. Unit testing them would mean mocking the entire runtime environment, which is brittle and low-value.
Debtmap recognizes common entry point patterns and lowers their priority for unit test coverage:
// Entry point - integration test coverage expected
fn main() {
    // Debtmap: LOW priority for unit tests
}

// HTTP handler - integration test coverage expected
async fn handle_request(req: Request) -> Response {
    // Debtmap: LOW priority for unit tests
}

// Core business logic - unit test coverage expected
fn calculate_discount(cart: &Cart) -> Discount {
    // Debtmap: HIGH priority for unit tests if uncovered
}
You can configure entry point detection in .debtmap.toml:
[analysis]
entry_point_patterns = [
"main",
"handle_*",
"run_*",
"*_handler",
]
How do I exclude test files from analysis?
By default, debtmap excludes common test directories. To customize:
.debtmap.toml:
[analysis]
exclude_patterns = [
"**/tests/**",
"**/*_test.rs",
"**/test_*.py",
"**/*.test.ts",
"**/target/**",
"**/node_modules/**",
]
Command line:
debtmap analyze . --exclude '**/tests/**' --exclude '**/*_test.rs'
Can I analyze only specific files or directories?
Yes! Use the --include flag:
# Analyze only src/ directory
debtmap analyze . --include 'src/**'
# Analyze specific files
debtmap analyze . --include 'src/main.rs' --include 'src/lib.rs'
# Combine include and exclude
debtmap analyze . --include 'src/**' --exclude 'src/generated/**'
How do I configure ignore patterns for generated code?
Add to .debtmap.toml:
[analysis]
exclude_patterns = [
"**/generated/**",
"**/*.g.rs", # Generated Rust
"**/*_pb.py", # Protobuf generated Python
"**/*.generated.ts", # Generated TypeScript
]
Or use comments in source files:
// debtmap:ignore-file - entire file ignored

fn complex_function() {
    // debtmap:ignore-start
    // ... complex generated code ...
    // debtmap:ignore-end
}
What if debtmap reports false positives?
1. Verify entropy analysis is enabled (default in v0.2.8+):
[analysis]
enable_entropy_analysis = true
2. Adjust thresholds for your project’s needs:
[thresholds]
cyclomatic_complexity = 15 # Increase if you have many validation functions
3. Use ignore comments for specific functions:
// debtmap:ignore - explanation for why this is acceptable
fn complex_but_acceptable() {
    // ...
}
4. Report false positives: If you believe debtmap’s analysis is incorrect, please open an issue with a code example. This helps improve the tool!
How accurate is the risk scoring?
Risk scores are relative prioritization metrics, not absolute measures. They help you answer “which code should I focus on first?” rather than “exactly how risky is this code?”
Factors affecting accuracy:
- Coverage data quality: Accurate if your tests exercise realistic scenarios
- Entropy analysis: Effective for common patterns; may miss domain-specific patterns
- Call graph: More accurate within single files than across modules
- Context: Cannot account for business criticality (you know your domain best)
Best practice: Use risk scores for prioritization, but apply your domain knowledge when deciding what to actually refactor or test.
Can I run debtmap on a CI server?
Yes! Debtmap is designed for CI/CD pipelines:
Performance:
- Statically linked binary (no runtime dependencies)
- Fast analysis (seconds, not minutes)
- Low memory footprint
Exit codes:
- 0 - Analysis succeeded, validation passed
- 1 - Analysis succeeded, validation failed (debt thresholds exceeded)
- 2 - Analysis error (parse failure, invalid config, etc.)
Example CI configuration:
# .github/workflows/debt-check.yml
name: Technical Debt Check
on: [pull_request]
jobs:
  debt-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install debtmap
        run: cargo install debtmap
      - name: Generate coverage
        run: cargo tarpaulin --out lcov
      - name: Analyze debt
        run: debtmap validate . --lcov lcov.info --max-critical 0
Troubleshooting
Analysis is slow on my large codebase
Optimization strategies:
1. Exclude unnecessary files:
[analysis]
exclude_patterns = [
"**/target/**",
"**/node_modules/**",
"**/vendor/**",
"**/.git/**",
]
2. Use incremental mode (cache results for unchanged files):
debtmap analyze . --incremental --cache-dir .debtmap-cache
3. Analyze specific directories:
# Only analyze src/, skip examples and benches
debtmap analyze src/
4. Reduce parallelism if memory-constrained:
debtmap analyze . --jobs 4
Expected performance:
- 50k LOC: 5-15 seconds
- 200k LOC: 30-90 seconds
- 1M+ LOC: 3-8 minutes
If analysis is significantly slower, please report a performance issue.
Debtmap crashes with “stack overflow”
This typically happens with extremely deep call stacks or heavily nested code.
Solutions:
1. Increase stack size:
# Linux/macOS
RUST_MIN_STACK=8388608 debtmap analyze .
# Windows PowerShell
$env:RUST_MIN_STACK=8388608; debtmap analyze .
2. Exclude problematic files:
debtmap analyze . --exclude 'path/to/deeply/nested/file.rs'
3. Report the issue: If you encounter stack overflows, please report with a minimal reproducible example.
Coverage data isn’t being applied
Check:
1. LCOV file path is correct:
debtmap analyze . --lcov target/coverage/lcov.info
2. LCOV file contains data:
grep -c "^SF:" target/coverage/lcov.info # Should be > 0
3. Source paths match: LCOV file paths must match your source file paths. If you generate coverage in a different directory:
[coverage]
source_root = "/path/to/project" # Rewrite LCOV paths
4. Enable debug logging:
RUST_LOG=debug debtmap analyze . --lcov lcov.info 2>&1 | grep -i coverage
Debtmap reports “No functions found”
Common causes:
1. Wrong language detection:
# Verify file extensions are recognized
debtmap analyze . --verbose
2. Syntax errors preventing parsing:
# Check for parse errors
RUST_LOG=warn debtmap analyze .
3. All files excluded by ignore patterns:
# List files being analyzed
debtmap analyze . --dry-run
4. Unsupported language features: Some cutting-edge syntax may not parse correctly. Report parsing issues with code examples.
How do I report a bug or request a feature?
Bug reports:
- Check existing issues
- Provide minimal reproducible example
- Include debtmap version: debtmap --version
- Include OS and Rust version: rustc --version
Feature requests:
- Describe the use case (what problem does it solve?)
- Provide example of desired behavior
- Explain why existing features don’t address the need
Contributions: Debtmap is open-source and welcomes contributions! See CONTRIBUTING.md for guidelines.
Advanced Topics
Can I extend debtmap with custom analyzers?
Not yet, but planned for v0.3.0. You’ll be able to implement the Analyzer trait for custom language support or domain-specific pattern detection.
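Purely as a hypothetical sketch of what such a plugin might look like (these names and signatures are illustrative assumptions, not the released API, which may differ substantially):

// Hypothetical shape of a custom analyzer plugin - illustrative only,
// not the actual debtmap API.
struct FunctionMetrics {
    name: String,
    cyclomatic_complexity: u32,
    nesting_depth: u32,
    loc: u32,
}

trait Analyzer {
    /// File extensions this analyzer claims, e.g. ["go"].
    fn extensions(&self) -> &'static [&'static str];
    /// Parse one source file and return per-function measured metrics.
    fn analyze(&self, source: &str) -> Vec<FunctionMetrics>;
}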
Roadmap:
- v0.3.0: Plugin API for custom analyzers
- v0.4.0: Plugin API for custom scoring strategies
- v0.5.0: Plugin API for custom output formatters
Track progress in issue #42.
How does debtmap handle monorepos?
Workspace support: Debtmap analyzes each workspace member independently by default:
# Analyze entire workspace
debtmap analyze .
# Analyze specific member
debtmap analyze packages/api
# Combined report for all members
debtmap analyze . --workspace-mode combined
Configuration:
[workspace]
members = ["packages/*", "services/*"]
exclude = ["examples/*"]
Can I compare debt between branches or commits?
Yes! Use the compare command:
# Compare current branch with main
debtmap compare main
# Compare two specific commits
debtmap compare abc123..def456
# Show only new debt introduced
debtmap compare main --show-new-only
Output shows:
- New debt items (introduced since base)
- Resolved debt items (fixed since base)
- Changed debt items (score increased/decreased)
See Examples - Comparing Branches for details.
How do I integrate debtmap with my editor?
VS Code:
- Install the “Debtmap” extension (planned for Q2 2025)
- Inline warnings in editor for high-risk code
- Quick fixes to generate test stubs
Vim/Neovim:
- Use ALE or vim-lsp with debtmap’s LSP mode (planned)
IntelliJ/RustRover:
- Use external tools integration:
  - Settings → Tools → External Tools
  - Add debtmap command
  - Configure keyboard shortcut
Track editor integration progress in issue #38.
Need More Help?
- Documentation: debtmap.dev
- GitHub Issues: Report bugs or request features
- Discussions: Ask questions
- Examples: See Examples for real-world use cases