Configuration

Debtmap is highly configurable through a .debtmap.toml file. This chapter explains how to customize Debtmap’s behavior for your project’s specific needs.

Config Files

Debtmap uses TOML format for configuration files (.debtmap.toml). TOML provides a clear, readable syntax well-suited for configuration.

Creating a Configuration File

Debtmap looks for a .debtmap.toml file in the current directory and up to 10 parent directories. To create an initial configuration:

debtmap init

This command creates a .debtmap.toml file with sensible defaults.

Configuration File Discovery

When you run debtmap, it searches for .debtmap.toml starting in your current directory and traversing up to 10 parent directories. The first configuration file found is used.

If no configuration file is found, Debtmap uses built-in defaults that work well for most projects.

Basic Example

Here’s a minimal .debtmap.toml configuration:

[scoring]
coverage = 0.50      # 50% weight for test coverage gaps
complexity = 0.35    # 35% weight for code complexity
dependency = 0.15    # 15% weight for dependency criticality

[thresholds]
complexity = 10
max_file_length = 500
max_function_length = 50

[languages]
enabled = ["rust", "python", "javascript", "typescript"]

Scoring Configuration

Scoring Weights

The [scoring] section controls how different factors contribute to the overall debt score. Debtmap uses a weighted sum model where weights must sum to 1.0.

[scoring]
coverage = 0.50      # Weight for test coverage gaps (default: 0.50)
complexity = 0.35    # Weight for code complexity (default: 0.35)
dependency = 0.15    # Weight for dependency criticality (default: 0.15)

Active weights (used in scoring):

  • coverage - Prioritizes untested code (default: 0.50)
  • complexity - Identifies complex areas (default: 0.35)
  • dependency - Considers impact radius (default: 0.15)

Unused weights (reserved for future features):

  • semantic - Not currently used (default: 0.00)
  • security - Not currently used (default: 0.00)
  • organization - Not currently used (default: 0.00)

Validation rules:

  • All weights must be between 0.0 and 1.0
  • Active weights (coverage + complexity + dependency) must sum to 1.0 (±0.001 tolerance)
  • If weights don’t sum to 1.0, they will be automatically normalized
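The normalization rule is simple arithmetic: each weight is divided by the configured total. A minimal sketch of that rule (illustrative, not Debtmap's actual source):

```rust
// Illustrative sketch of weight normalization: each weight is divided by
// the sum so the results total 1.0. Not Debtmap's actual implementation.
fn normalize(weights: [f64; 3]) -> [f64; 3] {
    let sum: f64 = weights.iter().sum();
    [weights[0] / sum, weights[1] / sum, weights[2] / sum]
}

fn main() {
    // coverage = 0.40, complexity = 0.30, dependency = 0.10 (sums to 0.80)
    let [cov, cpx, dep] = normalize([0.40, 0.30, 0.10]);
    assert!((cov - 0.50).abs() < 1e-9);
    assert!((cpx - 0.375).abs() < 1e-9);
    assert!((dep - 0.125).abs() < 1e-9);
}
```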

Example - Prioritize complexity over coverage:

[scoring]
coverage = 0.30
complexity = 0.55
dependency = 0.15

Role Multipliers

Role multipliers adjust complexity scores based on a function’s semantic role:

[role_multipliers]
pure_logic = 1.2        # Prioritize pure computation (default: 1.2)
orchestrator = 0.8      # Reduce for delegation functions (default: 0.8)
io_wrapper = 0.7        # Reduce for I/O wrappers (default: 0.7)
entry_point = 0.9       # Slight reduction for main/CLI (default: 0.9)
pattern_match = 0.6     # Reduce for pattern matching (default: 0.6)
debug = 0.3             # Debug/diagnostic functions (default: 0.3)
unknown = 1.0           # No adjustment (default: 1.0)

These multipliers help reduce false positives by recognizing that different function types have naturally different complexity levels. The debug role has the lowest multiplier (0.3) since debug and diagnostic functions typically have low testing priority.

Role-Based Scoring Configuration

Debtmap uses a two-stage role adjustment mechanism to score functions accurately based on their architectural role and testing strategy. This section explains how to configure both stages.

Stage 1: Role Coverage Weights

The first stage adjusts how much coverage gaps penalize different function types. This recognizes that not all functions need the same level of unit test coverage.

Configuration (.debtmap.toml under [scoring.role_coverage_weights]):

[scoring.role_coverage_weights]
entry_point = 0.6       # Reduce coverage penalty (often integration tested)
orchestrator = 0.8      # Reduce coverage penalty (tested via higher-level tests)
pure_logic = 1.0        # Pure logic should have unit tests, no reduction (default: 1.0)
io_wrapper = 0.5        # I/O wrappers are integration tested (default: 0.5)
pattern_match = 1.0     # Standard penalty
debug = 0.3             # Debug functions have lowest coverage expectations (default: 0.3)
unknown = 1.0           # Standard penalty (default behavior)

Rationale:

Function Role | Weight | Why This Value?
Entry Point   | 0.6    | CLI handlers, HTTP routes, main functions are integration tested, not unit tested
Orchestrator  | 0.8    | Coordination functions tested via higher-level tests
Pure Logic    | 1.0    | Core business logic should have unit tests (default: 1.0)
I/O Wrapper   | 0.5    | File/network operations tested via integration tests (default: 0.5)
Pattern Match | 1.0    | Standard coverage expectations
Debug         | 0.3    | Debug/diagnostic functions have lowest testing priority (default: 0.3)
Unknown       | 1.0    | Default when role cannot be determined

Example Impact:

# Emphasize pure logic testing strongly
[scoring.role_coverage_weights]
pure_logic = 1.5        # 50% higher penalty for untested logic
entry_point = 0.5       # 50% lower penalty for untested entry points
io_wrapper = 0.4        # 60% lower penalty for untested I/O

# Conservative approach (smaller adjustments)
[scoring.role_coverage_weights]
pure_logic = 1.1        # Only 10% increase
entry_point = 0.9       # Only 10% decrease

How It Works:

When a function has 0% coverage:

  • Entry Point (weight 0.6): Gets 60% penalty instead of 100% penalty
  • Pure Logic (weight 1.0): Gets 100% penalty (standard emphasis on testing)
  • I/O Wrapper (weight 0.5): Gets 50% penalty

This prevents entry points from dominating the priority list due to low unit test coverage while emphasizing the importance of testing pure business logic.
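As a rough model of this stage (the function name and exact formula are illustrative assumptions, not Debtmap's API), the penalty can be thought of as the uncovered fraction scaled by the role's coverage weight:

```rust
// Illustrative model of Stage 1: the coverage penalty is the uncovered
// fraction scaled by the role's coverage weight. Hypothetical helper,
// not Debtmap's actual implementation.
fn coverage_penalty(coverage: f64, role_weight: f64) -> f64 {
    (1.0 - coverage) * role_weight
}

fn main() {
    // At 0% coverage, an entry point (weight 0.6) takes a 60% penalty
    // instead of the full 100% a pure-logic function (weight 1.0) would.
    assert!((coverage_penalty(0.0, 0.6) - 0.6).abs() < 1e-9);
    assert!((coverage_penalty(0.0, 1.0) - 1.0).abs() < 1e-9);
    assert!((coverage_penalty(0.0, 0.5) - 0.5).abs() < 1e-9);
    // At 50% coverage the penalty halves before weighting.
    assert!((coverage_penalty(0.5, 0.6) - 0.3).abs() < 1e-9);
}
```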

Stage 2: Role Multiplier with Clamping

The second stage applies a final role-based multiplier to reflect architectural importance. This multiplier is clamped by default to prevent extreme score variations.

Configuration (.debtmap.toml under [scoring.role_multiplier]):

[scoring.role_multiplier]
clamp_min = 0.3           # Minimum multiplier (default: 0.3)
clamp_max = 1.8           # Maximum multiplier (default: 1.8)
enable_clamping = true    # Enable clamping (default: true)

Parameters:

Parameter       | Default | Description
clamp_min       | 0.3     | Minimum allowed multiplier - prevents functions from becoming invisible
clamp_max       | 1.8     | Maximum allowed multiplier - prevents extreme score spikes
enable_clamping | true    | Whether to apply clamping (disable for prototyping only)

Clamp Range Rationale:

Default [0.3, 1.8]: Balances differentiation with stability

  • Lower bound (0.3): I/O wrappers still contribute 30% of their base score
    • Prevents them from becoming invisible in the priority list
    • Ensures simple wrappers aren’t completely ignored
  • Upper bound (1.8): Critical functions get at most 180% of base score
    • Prevents one complex function from dominating the entire list
    • Maintains balanced prioritization across different issues

When to Adjust Clamp Range:

# Wider range for more differentiation
[scoring.role_multiplier]
clamp_min = 0.2           # Allow more reduction
clamp_max = 2.5           # Allow more emphasis

# Narrower range for more stability
[scoring.role_multiplier]
clamp_min = 0.5           # Less reduction
clamp_max = 1.5           # Less emphasis

# Disable clamping (not recommended for production)
[scoring.role_multiplier]
enable_clamping = false   # Allow unclamped multipliers
# Warning: May cause unstable prioritization

When to Disable Clamping:

  • Prototyping: Testing extreme multiplier values for custom scoring strategies
  • Special cases: Very specific project needs requiring wide multiplier ranges
  • Not recommended for production use as it can lead to unstable prioritization

Example Impact:

Without clamping:

Function: critical_business_logic (Pure Logic)
  Base Score: 45.0
  Role Multiplier: 2.5 (unclamped)
  Final Score: 112.5 (dominates entire list)

With clamping (default):

Function: critical_business_logic (Pure Logic)
  Base Score: 45.0
  Role Multiplier: 1.8 (clamped from 2.5)
  Final Score: 81.0 (high priority, but balanced)
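The clamping itself is ordinary interval arithmetic. A sketch, assuming the multiplier is clamped before being applied to the base score (names are illustrative, not Debtmap's API):

```rust
// Sketch of the Stage 2 clamp using the default range [0.3, 1.8].
// f64::clamp bounds the multiplier before it is applied; passing None
// models enable_clamping = false.
fn apply_role_multiplier(base: f64, multiplier: f64, clamp: Option<(f64, f64)>) -> f64 {
    let m = match clamp {
        Some((lo, hi)) => multiplier.clamp(lo, hi),
        None => multiplier, // clamping disabled
    };
    base * m
}

fn main() {
    // With clamping (default): 2.5 is clamped to 1.8 → 45.0 × 1.8 = 81.0
    let clamped = apply_role_multiplier(45.0, 2.5, Some((0.3, 1.8)));
    assert!((clamped - 81.0).abs() < 1e-9);
    // Without clamping: 45.0 × 2.5 = 112.5 dominates the list
    let unclamped = apply_role_multiplier(45.0, 2.5, None);
    assert!((unclamped - 112.5).abs() < 1e-9);
}
```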

Complete Example Configuration

Here’s a complete example showing both stages configured together:

# Stage 1: Coverage weight adjustments
[scoring.role_coverage_weights]
pure_logic = 1.0        # Pure logic should have unit tests (default: 1.0)
entry_point = 0.6       # Reduce penalty for integration-tested entry points
orchestrator = 0.8      # Partially reduce penalty for orchestrators
io_wrapper = 0.5        # I/O wrappers are integration tested (default: 0.5)
pattern_match = 1.0     # Standard
debug = 0.3             # Debug functions have lowest coverage expectations (default: 0.3)
unknown = 1.0           # Standard

# Stage 2: Role multiplier with clamping
[scoring.role_multiplier]
clamp_min = 0.3         # I/O wrappers contribute at least 30%
clamp_max = 1.8         # Critical functions get at most 180%
enable_clamping = true  # Keep clamping enabled for stability

How the Two Stages Work Together

The two-stage approach ensures role-based coverage adjustments and architectural importance multipliers work independently:

Example Workflow:

1. Calculate base score from complexity (10) and dependencies (5)
   → Base = 15.0

2. Stage 1: Apply coverage weight based on role (Entry Point, weight 0.6)
   → Coverage penalty reduced from 1.0 to 0.4
   → Preliminary score = 15.0 × 0.4 = 6.0

3. Stage 2: Apply clamped role multiplier (Entry Point, multiplier 1.2)
   → Clamped to [0.3, 1.8] → stays 1.2
   → Final score = 6.0 × 1.2 = 7.2
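The workflow above can be sketched end to end, taking the 0.4 coverage factor and 1.2 multiplier as given (illustrative code, not the actual implementation):

```rust
// End-to-end sketch of the two-stage workflow: base score × Stage 1
// coverage factor × Stage 2 clamped role multiplier. Illustrative only.
fn final_score(base: f64, coverage_factor: f64, role_multiplier: f64) -> f64 {
    let clamped = role_multiplier.clamp(0.3, 1.8); // default clamp range
    base * coverage_factor * clamped
}

fn main() {
    // Base 15.0, Stage 1 factor 0.4, Stage 2 multiplier 1.2 (inside the
    // clamp range, so unchanged) → 15.0 × 0.4 × 1.2 = 7.2
    let score = final_score(15.0, 0.4, 1.2);
    assert!((score - 7.2).abs() < 1e-9);
}
```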

Key Benefits:

  • Coverage adjustments don’t interfere with role multiplier
  • Both mechanisms contribute independently to final score
  • Clamping prevents instability from extreme values
  • Configuration flexibility for different project needs

Verification

To see how role-based adjustments affect your codebase:

# Show detailed scoring breakdown
debtmap analyze . --verbose

# Look for lines like:
#   Coverage Weight: 0.6 (Entry Point adjustment)
#   Adjusted Coverage Penalty: 0.4 (reduced from 1.0)
#   Role Multiplier: 1.2 (clamped from 1.5)

For more details on how role-based adjustments reduce false positives, see the Role-Based Adjustments section in the Scoring Strategies guide.

Thresholds Configuration

Basic Thresholds

Control when code is flagged as technical debt:

[thresholds]
complexity = 10                      # Cyclomatic complexity threshold
duplication = 50                     # Duplication threshold
max_file_length = 500                # Maximum lines per file
max_function_length = 50             # Maximum lines per function

Note: The TOML configuration accepts max_file_length (shown above), which maps to the internal struct field max_file_lines. Both names refer to the same setting.

Minimum Thresholds

Filter out trivial functions that aren’t really technical debt:

[thresholds]
minimum_debt_score = 2.0              # Only show items with debt score ≥ 2.0
minimum_cyclomatic_complexity = 3     # Ignore functions with cyclomatic < 3
minimum_cognitive_complexity = 5      # Ignore functions with cognitive < 5
minimum_risk_score = 2.0              # Only show Risk items with score ≥ 2.0

These minimum thresholds help focus on significant issues by filtering out simple functions with minor complexity.
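A sketch of how such minimums filter a report, assuming an item must clear every configured minimum to be shown (the Item struct and predicate are hypothetical):

```rust
// Hypothetical debt item with the metrics the minimum thresholds check.
struct Item {
    debt_score: f64,
    cyclomatic: u32,
    cognitive: u32,
}

// Illustrative filter: an item is reported only if it clears every
// configured minimum (values here match the example configuration).
fn passes_minimums(item: &Item) -> bool {
    item.debt_score >= 2.0 && item.cyclomatic >= 3 && item.cognitive >= 5
}

fn main() {
    let trivial = Item { debt_score: 1.2, cyclomatic: 2, cognitive: 3 };
    let significant = Item { debt_score: 4.5, cyclomatic: 8, cognitive: 12 };
    assert!(!passes_minimums(&trivial));     // filtered out of the report
    assert!(passes_minimums(&significant));  // reported as debt
}
```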

Validation Thresholds

The [thresholds.validation] subsection configures limits for the debtmap validate command:

[thresholds.validation]
max_average_complexity = 10.0         # Maximum allowed average complexity (default: 10.0)
max_high_complexity_count = 100       # DEPRECATED: Use max_debt_density instead (default: 100)
max_debt_items = 2000                 # DEPRECATED: Use max_debt_density instead (default: 2000)
max_total_debt_score = 10000          # Maximum total debt score (default: 10000)
max_codebase_risk_score = 7.0         # Maximum codebase risk score (default: 7.0)
max_high_risk_functions = 50          # DEPRECATED: Use max_debt_density instead (default: 50)
min_coverage_percentage = 0.0         # Minimum required coverage % (default: 0.0)
max_debt_density = 50.0               # Maximum debt per 1000 LOC (default: 50.0)

Deprecated Fields (v0.3.0+):

The following validation thresholds are deprecated since v0.3.0 and will be removed in v1.0:

  • max_high_complexity_count - Replaced by max_debt_density (scale-independent)
  • max_debt_items - Replaced by max_debt_density (scale-independent)
  • max_high_risk_functions - Replaced by max_debt_density (scale-independent)

Migration: Use max_debt_density instead, which provides a scale-independent metric (debt per 1000 lines of code). This allows the same threshold to work across codebases of different sizes.
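The density metric itself is straightforward scaling. A sketch, assuming debt density is total debt per 1000 lines of code (what counts as "total debt" is Debtmap's choice; the code is illustrative):

```rust
// Debt density is scale-independent: debt per 1000 lines of code.
// Illustrative computation against the default max_debt_density of 50.0.
fn debt_density(total_debt: f64, lines_of_code: f64) -> f64 {
    total_debt / lines_of_code * 1000.0
}

fn main() {
    // 400 debt over 10,000 LOC → 40 per 1000 LOC, under the 50.0 default
    let density = debt_density(400.0, 10_000.0);
    assert!((density - 40.0).abs() < 1e-9);
    assert!(density <= 50.0);
    // The same absolute debt in a 5,000-line codebase doubles the density
    assert!((debt_density(400.0, 5_000.0) - 80.0).abs() < 1e-9);
}
```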

Use debtmap validate in CI to enforce code quality standards:

# Fail build if validation thresholds are exceeded
debtmap validate

Language Configuration

Enabling Languages

Specify which languages to analyze:

[languages]
enabled = ["rust", "python", "javascript", "typescript"]

Language-Specific Features

Configure features for individual languages:

[languages.rust]
detect_dead_code = false        # Rust: disabled by default (compiler handles it)
detect_complexity = true
detect_duplication = true

[languages.python]
detect_dead_code = true
detect_complexity = true
detect_duplication = true

[languages.javascript]
detect_dead_code = true
detect_complexity = true
detect_duplication = true

[languages.typescript]
detect_dead_code = true
detect_complexity = true
detect_duplication = true

Note: Rust’s dead code detection is disabled by default since the Rust compiler already provides excellent unused code warnings.

Exclusion Patterns

File and Directory Exclusion

Use glob patterns to exclude files and directories from analysis:

[ignore]
patterns = [
    "target/**",              # Rust build output
    "venv/**",                # Python virtual environment
    "node_modules/**",        # JavaScript dependencies
    "*.min.js",               # Minified files
    "benches/**",             # Benchmark code
    "tests/**/*",             # Test files
    "**/test_*.rs",           # Test files (prefix)
    "**/*_test.rs",           # Test files (suffix)
    "**/fixtures/**",         # Test fixtures
    "**/mocks/**",            # Mock implementations
    "**/stubs/**",            # Stub implementations
    "**/examples/**",         # Example code
    "**/demo/**",             # Demo code
]

Glob pattern syntax:

  • * - Matches any characters except /
  • ** - Matches any characters including / (recursive)
  • ? - Matches a single character
  • [abc] - Matches any character in the set

Note: Function-level filtering (e.g., ignoring specific function name patterns) is handled by role detection and context-aware analysis rather than explicit ignore patterns. See the Context-Aware Detection section for function-level filtering options.

Display Configuration

Control how results are displayed:

[display]
tiered = true           # Use tiered priority display (default: true)
items_per_tier = 5      # Show 5 items per tier (default: 5)

When tiered = true, Debtmap groups results into priority tiers (Critical, High, Medium, Low) and shows the top items from each tier.

Output Configuration

Set the default output format:

[output]
default_format = "terminal"    # Options: "terminal", "json", "markdown"

Supported formats:

  • "terminal" - Human-readable colored output for the terminal (default)
  • "json" - Machine-readable JSON for integration with other tools
  • "markdown" - Markdown format for documentation and reports

This can be overridden with the --format CLI flag:

debtmap analyze --format json      # JSON output
debtmap analyze --format markdown  # Markdown output

Normalization Configuration

Control how raw scores are normalized to a 0-10 scale:

[normalization]
linear_threshold = 10.0         # Use linear scaling below this value
logarithmic_threshold = 100.0   # Use logarithmic scaling above this value
sqrt_multiplier = 3.33          # Multiplier for square root scaling
log_multiplier = 10.0           # Multiplier for logarithmic scaling
show_raw_scores = true          # Show both raw and normalized scores

Normalization ensures scores are comparable across different codebases and prevents extreme outliers from dominating the results.

Advanced Configuration

Entropy-Based Complexity Scoring

Entropy analysis helps identify repetitive code patterns (like large match statements) that inflate complexity metrics:

[entropy]
enabled = true                      # Enable entropy analysis (default: true)
weight = 1.0                        # Weight in complexity adjustment (default: 1.0)
min_tokens = 20                     # Minimum tokens for analysis (default: 20)
pattern_threshold = 0.7             # Pattern similarity threshold (default: 0.7)
entropy_threshold = 0.4             # Low entropy threshold (default: 0.4)
branch_threshold = 0.8              # Branch similarity threshold (default: 0.8)
use_classification = false          # Use smarter token classification (default: false)

# Maximum reductions to prevent over-correction
max_repetition_reduction = 0.20     # Max 20% reduction for repetition (default: 0.20)
max_entropy_reduction = 0.15        # Max 15% reduction for low entropy (default: 0.15)
max_branch_reduction = 0.25         # Max 25% reduction for similar branches (default: 0.25)
max_combined_reduction = 0.30       # Max 30% total reduction (default: 0.30)

Entropy scoring reduces false positives from functions like parsers and state machines that have high cyclomatic complexity but are actually simple and maintainable.
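A sketch of how the reduction caps might compose, assuming each reduction is capped individually and the sum is then capped by max_combined_reduction (the combination rule is an assumption, not confirmed by the source; default values shown):

```rust
// Assumed composition of the entropy reduction caps: cap each reduction
// individually, then cap the sum at max_combined_reduction (0.30).
fn combined_reduction(repetition: f64, entropy: f64, branch: f64) -> f64 {
    let r = repetition.min(0.20) + entropy.min(0.15) + branch.min(0.25);
    r.min(0.30) // max_combined_reduction
}

fn main() {
    // Even when every individual cap is hit (0.20 + 0.15 + 0.25 = 0.60),
    // the total reduction never exceeds 30% of the complexity score.
    let r = combined_reduction(0.5, 0.5, 0.5);
    assert!((r - 0.30).abs() < 1e-9);
    let adjusted = 20.0 * (1.0 - r);
    assert!((adjusted - 14.0).abs() < 1e-9);
}
```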

God Object Detection

Configure detection of classes/structs with too many responsibilities:

[god_object_detection]
enabled = true

# Rust-specific thresholds
[god_object_detection.rust]
max_methods = 20        # Maximum methods before flagging (default: 20)
max_fields = 15         # Maximum fields before flagging (default: 15)
max_traits = 5          # Maximum implemented traits
max_lines = 1000        # Maximum lines of code
max_complexity = 200    # Maximum total complexity

# Python-specific thresholds
[god_object_detection.python]
max_methods = 15
max_fields = 10
max_traits = 3
max_lines = 500
max_complexity = 150

# JavaScript-specific thresholds
[god_object_detection.javascript]
max_methods = 15
max_fields = 20         # JavaScript classes often have more properties
max_traits = 3
max_lines = 500
max_complexity = 150

Note: Different languages have different defaults. Rust allows more methods since trait implementations add methods, while JavaScript classes should be smaller.

Context-Aware Detection

Enable context-aware pattern detection to reduce false positives:

[context]
enabled = false         # Opt-in (default: false)

# Custom context rules
[[context.rules]]
name = "allow_blocking_in_main"
pattern = "blocking_io"
action = "allow"
priority = 100
reason = "Main function can use blocking I/O"

[context.rules.context]
role = "main"

# Function pattern configuration
[context.function_patterns]
test_patterns = ["test_*", "bench_*"]
config_patterns = ["load_*_config", "parse_*_config"]
handler_patterns = ["handle_*", "*_handler"]
init_patterns = ["initialize_*", "setup_*"]

Context-aware detection adjusts severity based on where code appears (main functions, test code, configuration loaders, etc.).

Error Handling Detection

Configure detection of error handling anti-patterns:

[error_handling]
detect_async_errors = true          # Detect async error issues (default: true)
detect_context_loss = true          # Detect error context loss (default: true)
detect_propagation = true           # Analyze error propagation (default: true)
detect_panic_patterns = true        # Detect panic/unwrap usage (default: true)
detect_swallowing = true            # Detect swallowed errors (default: true)

# Custom error patterns
[[error_handling.custom_patterns]]
name = "custom_panic"
pattern = "my_panic_macro"
pattern_type = "macro_name"
severity = "high"
description = "Custom panic macro usage"
remediation = "Replace with Result-based error handling"

# Severity overrides
[[error_handling.severity_overrides]]
pattern = "unwrap"
context = "test"
severity = "low"        # Unwrap is acceptable in test code

Pure Mapping Pattern Detection

Configure detection of pure mapping patterns to reduce false positives from exhaustive match expressions:

[mapping_patterns]
enabled = true                      # Enable mapping pattern detection (default: true)
complexity_reduction = 0.30         # Reduce complexity by 30% (default: 0.30)
min_branches = 3                    # Minimum match arms to consider (default: 3)

What are pure mapping patterns?

Pure mapping patterns are exhaustive match expressions that transform input to output without side effects. These patterns have high cyclomatic complexity due to many branches, but are actually simple and maintainable because:

  • Each branch is independent and straightforward
  • No mutation or side effects occur
  • The pattern is predictable and easy to understand
  • Adding new cases requires minimal changes

Example:

enum Status { Success, Pending, Failed, Cancelled /* ... */ }

fn status_to_string(status: Status) -> &'static str {
    match status {
        Status::Success => "success",
        Status::Pending => "pending",
        Status::Failed => "failed",
        Status::Cancelled => "cancelled",
        // ... many more cases
    }
}

This function has high cyclomatic complexity (one branch per case), but is simple to maintain. Mapping pattern detection recognizes this and reduces the complexity score appropriately.
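A sketch of the adjustment, assuming a pure mapping with at least min_branches arms has its complexity scaled by (1 - complexity_reduction); illustrative logic with the default values:

```rust
// Illustrative mapping-pattern adjustment: a pure, exhaustive match with
// at least min_branches arms gets complexity_reduction applied.
fn adjusted_complexity(cyclomatic: f64, arms: u32, is_pure_mapping: bool) -> f64 {
    let min_branches: u32 = 3;       // default
    let complexity_reduction = 0.30; // default
    if is_pure_mapping && arms >= min_branches {
        cyclomatic * (1.0 - complexity_reduction)
    } else {
        cyclomatic
    }
}

fn main() {
    // A 10-arm pure mapping drops from complexity 10.0 to 7.0
    assert!((adjusted_complexity(10.0, 10, true) - 7.0).abs() < 1e-9);
    // Below min_branches, or with side effects, no reduction applies
    assert!((adjusted_complexity(10.0, 2, true) - 10.0).abs() < 1e-9);
    assert!((adjusted_complexity(10.0, 10, false) - 10.0).abs() < 1e-9);
}
```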

Configuration options:

Parameter            | Default | Description
enabled              | true    | Enable mapping pattern detection
complexity_reduction | 0.30    | Percentage to reduce complexity (0.0-1.0)
min_branches         | 3       | Minimum match arms to be considered a mapping pattern

Example configuration:

# Conservative reduction
[mapping_patterns]
complexity_reduction = 0.20         # Only 20% reduction

# Aggressive reduction for codebases with many mapping patterns
[mapping_patterns]
complexity_reduction = 0.50         # 50% reduction

# Disable if you want to see all match complexity
[mapping_patterns]
enabled = false

When to adjust:

  • Increase complexity_reduction if you have many simple mapping functions being flagged as complex
  • Decrease complexity_reduction if you want more conservative adjustments
  • Increase min_branches to only apply reduction to very large match statements
  • Disable entirely if you want raw complexity scores without adjustment

External API Configuration

Mark functions as public API for enhanced testing recommendations:

[external_api]
detect_external_api = false         # Auto-detect public APIs (default: false)
api_functions = []                  # Explicitly mark API functions
api_files = []                      # Explicitly mark API files

When enabled, public API functions receive higher priority for test coverage.

Classification Configuration

The [classification] section controls how Debtmap classifies functions by their semantic role (constructor, accessor, data flow, etc.). This classification drives role-based adjustments and reduces false positives.

[classification]
# Constructor detection
[classification.constructors]
detect_constructors = true            # Enable constructor detection (default: true)
constructor_patterns = ["new", "create", "build", "from"]  # Common constructor names

# Accessor detection
[classification.accessors]
detect_accessors = true               # Enable accessor/getter detection (default: true)
accessor_patterns = ["get_*", "set_*", "is_*", "has_*"]   # Common accessor patterns

# Data flow detection
[classification.data_flow]
detect_data_flow = true               # Enable data flow analysis (default: true)

Configuration Options:

Section      | Option               | Default                             | Description
constructors | detect_constructors  | true                                | Identify constructor functions
constructors | constructor_patterns | ["new", "create", "build", "from"]  | Name patterns for constructors
accessors    | detect_accessors     | true                                | Identify accessor/getter functions
accessors    | accessor_patterns    | ["get_*", "set_*", "is_*", "has_*"] | Name patterns for accessors
data_flow    | detect_data_flow     | true                                | Enable data flow analysis

Why Classification Matters:

Classification helps Debtmap understand function intent and apply appropriate complexity adjustments:

  • Constructors typically have boilerplate initialization code with naturally higher complexity
  • Accessors are simple getters/setters that shouldn’t be flagged as debt
  • Data flow functions (mappers, filters) have predictable patterns that inflate metrics

By detecting these patterns, Debtmap reduces false positives and focuses on genuine technical debt.

Additional Advanced Options

Debtmap supports additional advanced configuration options:

Lines of Code Configuration

The [loc] section controls how lines of code are counted for metrics and reporting:

[loc]
include_tests = false         # Include test files in LOC counts (default: false)
include_generated = false     # Include generated files in LOC counts (default: false)
count_comments = false        # Count comment lines as LOC (default: false)
count_blank_lines = false     # Count blank lines as LOC (default: false)

Configuration options:

Option            | Default | Description
include_tests     | false   | Whether to include test files in LOC metrics
include_generated | false   | Whether to include generated files in LOC metrics
count_comments    | false   | Whether to count comment lines as LOC
count_blank_lines | false   | Whether to count blank lines as LOC

Example - Strict LOC counting:

[loc]
include_tests = false         # Focus on production code
include_generated = false     # Exclude auto-generated code
count_comments = false        # Only count executable code
count_blank_lines = false     # Exclude whitespace

Tier Configuration

The [tiers] section configures tier threshold boundaries for prioritization:

[tiers]
t2_complexity_threshold = 15      # Complexity threshold for Tier 2 (default: 15)
t2_dependency_threshold = 10      # Dependency threshold for Tier 2 (default: 10)
t3_complexity_threshold = 10      # Complexity threshold for Tier 3 (default: 10)
show_t4_in_main_report = false    # Show Tier 4 items in main report (default: false)

Tier priority levels:

  • Tier 1 (Critical): Highest priority items
  • Tier 2 (High): Items above t2_* thresholds
  • Tier 3 (Medium): Items above t3_* thresholds
  • Tier 4 (Low): Items below all thresholds
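A sketch of how these boundaries might assign tiers, using the default thresholds (whether comparisons are inclusive, and how Tier 1 is chosen, are assumptions; the config options above do not specify Tier 1 criteria):

```rust
// Assumed tier assignment from the documented thresholds. Tier 1
// criteria aren't configured here, so this sketch only distinguishes
// Tiers 2-4. Illustrative, not Debtmap's actual logic.
fn tier(complexity: u32, dependencies: u32) -> u8 {
    let t2_complexity: u32 = 15; // t2_complexity_threshold
    let t2_dependency: u32 = 10; // t2_dependency_threshold
    let t3_complexity: u32 = 10; // t3_complexity_threshold
    if complexity >= t2_complexity || dependencies >= t2_dependency {
        2 // High
    } else if complexity >= t3_complexity {
        3 // Medium
    } else {
        4 // Low
    }
}

fn main() {
    assert_eq!(tier(16, 2), 2);  // complexity above the t2 threshold
    assert_eq!(tier(8, 12), 2);  // dependency count above the t2 threshold
    assert_eq!(tier(11, 2), 3);  // above t3, below t2
    assert_eq!(tier(5, 2), 4);   // below all thresholds
}
```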

Example - Stricter tier boundaries:

[tiers]
t2_complexity_threshold = 12      # Lower threshold = more items in high priority
t2_dependency_threshold = 8
t3_complexity_threshold = 8
show_t4_in_main_report = true     # Include low-priority items

Enhanced Complexity Thresholds

The [complexity_thresholds] section provides more granular control over complexity detection. It supplements the basic [thresholds] section with minimum total, cyclomatic, and cognitive complexity thresholds for flagging functions.

These options are advanced features with sensible defaults. Most users won’t need to configure them explicitly.

Orchestration Adjustment

The [orchestration_adjustment] section configures complexity reduction for orchestrator functions that primarily delegate to other functions:

[orchestration_adjustment]
enabled = true                        # Enable orchestration detection (default: true)
min_delegation_ratio = 0.6            # Minimum ratio of delegated calls (default: 0.6)
complexity_reduction = 0.25           # Reduce complexity by 25% (default: 0.25)

Configuration Options:

Option               | Default | Description
enabled              | true    | Enable orchestration pattern detection
min_delegation_ratio | 0.6     | Minimum share of delegating calls for a function to count as an orchestrator
complexity_reduction | 0.25    | Percentage to reduce complexity score (0.0-1.0)

Orchestrator functions coordinate multiple operations but don’t contain complex logic themselves. This adjustment prevents them from being over-penalized.
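A sketch of the adjustment, assuming a function qualifies when the share of delegating calls reaches min_delegation_ratio (illustrative logic with the default values):

```rust
// Illustrative orchestration adjustment: when at least
// min_delegation_ratio of a function's calls delegate to other
// functions, its complexity is reduced by complexity_reduction.
fn orchestration_adjusted(complexity: f64, delegated_calls: u32, total_calls: u32) -> f64 {
    let min_delegation_ratio = 0.6;  // default
    let complexity_reduction = 0.25; // default
    if total_calls > 0 && delegated_calls as f64 / total_calls as f64 >= min_delegation_ratio {
        complexity * (1.0 - complexity_reduction)
    } else {
        complexity
    }
}

fn main() {
    // 7 of 10 calls delegate (ratio 0.7 ≥ 0.6) → 25% reduction: 12.0 → 9.0
    assert!((orchestration_adjusted(12.0, 7, 10) - 9.0).abs() < 1e-9);
    // Only half the calls delegate → no reduction
    assert!((orchestration_adjusted(12.0, 5, 10) - 12.0).abs() < 1e-9);
}
```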

Boilerplate Detection

The [boilerplate_detection] section identifies and reduces penalties for boilerplate code patterns:

[boilerplate_detection]
enabled = true                        # Enable boilerplate detection (default: true)
detect_constructors = true            # Detect constructor boilerplate (default: true)
detect_error_conversions = true       # Detect error conversion boilerplate (default: true)
complexity_reduction = 0.20           # Reduce complexity by 20% (default: 0.20)

Configuration Options:

Option                   | Default | Description
enabled                  | true    | Enable boilerplate pattern detection
detect_constructors      | true    | Identify constructor initialization boilerplate
detect_error_conversions | true    | Identify error type conversion boilerplate
complexity_reduction     | 0.20    | Percentage to reduce complexity for boilerplate (0.0-1.0)

Boilerplate code often inflates complexity metrics without representing true technical debt. This detection reduces false positives from necessary but repetitive code.

Functional Analysis

The [functional_analysis] section configures detection of functional programming patterns:

[functional_analysis]
enabled = true                        # Enable functional pattern detection (default: true)
detect_pure_functions = true          # Detect pure functions (default: true)
detect_higher_order = true            # Detect higher-order functions (default: true)
detect_immutable_patterns = true      # Detect immutable data patterns (default: true)

Configuration Options:

Option                    | Default | Description
enabled                   | true    | Enable functional programming analysis
detect_pure_functions     | true    | Identify functions without side effects
detect_higher_order       | true    | Identify functions that take/return functions
detect_immutable_patterns | true    | Identify immutable data structure usage

Functional patterns often lead to cleaner, more testable code. This analysis helps Debtmap recognize and appropriately score functional programming idioms.

CLI Integration

CLI flags can override configuration file settings:

# Override complexity threshold
debtmap analyze --threshold-complexity 15

# Provide coverage file
debtmap analyze --coverage-file coverage.json

# Enable context-aware detection
debtmap analyze --context

# Override output format
debtmap analyze --format json

Configuration Precedence

Debtmap resolves configuration values in the following order (highest to lowest priority):

  1. CLI flags - Command-line arguments (e.g., --threshold-complexity 15)
  2. Configuration file - Settings from .debtmap.toml
  3. Built-in defaults - Debtmap’s sensible default values

This allows you to set project-wide defaults in .debtmap.toml while customizing specific runs with CLI flags.

Configuration Validation

Automatic Validation

Debtmap automatically validates your configuration when loading:

  • Scoring weights must sum to 1.0 (±0.001 tolerance)
  • Individual weights must be between 0.0 and 1.0
  • Invalid configurations fall back to defaults with a warning

Normalization

If scoring weights don’t sum exactly to 1.0, Debtmap automatically normalizes them:

# Input (sums to 0.80)
[scoring]
coverage = 0.40
complexity = 0.30
dependency = 0.10

# Automatically normalized to:
# coverage = 0.50
# complexity = 0.375
# dependency = 0.125

Debug Validation

To verify which configuration file is being loaded, check debug logs:

RUST_LOG=debug debtmap analyze

Look for log messages like:

DEBUG debtmap::config: Loaded config from /path/to/.debtmap.toml

Complete Configuration Example

Here’s a comprehensive configuration showing all major sections:

# Scoring configuration
[scoring]
coverage = 0.50
complexity = 0.35
dependency = 0.15

# Basic thresholds
[thresholds]
complexity = 10
duplication = 50
max_file_length = 500
max_function_length = 50
minimum_debt_score = 2.0
minimum_cyclomatic_complexity = 3
minimum_cognitive_complexity = 5
minimum_risk_score = 2.0

# Validation thresholds for CI
[thresholds.validation]
max_average_complexity = 10.0
max_high_complexity_count = 100       # DEPRECATED: Use max_debt_density
max_debt_items = 2000                 # DEPRECATED: Use max_debt_density
max_total_debt_score = 10000
max_codebase_risk_score = 7.0
max_high_risk_functions = 50          # DEPRECATED: Use max_debt_density
min_coverage_percentage = 0.0
max_debt_density = 50.0

# Language configuration
[languages]
enabled = ["rust", "python", "javascript", "typescript"]

[languages.rust]
detect_dead_code = false
detect_complexity = true
detect_duplication = true

# Exclusion patterns
[ignore]
patterns = [
    "target/**",
    "node_modules/**",
    "tests/**/*",
    "**/*_test.rs",
]

# Display configuration
[display]
tiered = true
items_per_tier = 5

# Output configuration
[output]
default_format = "terminal"

# Entropy configuration
[entropy]
enabled = true
weight = 1.0
min_tokens = 20

# God object detection
[god_object_detection]
enabled = true

[god_object_detection.rust]
max_methods = 20
max_fields = 15

# Classification configuration
[classification.constructors]
detect_constructors = true
constructor_patterns = ["new", "create", "build", "from"]

[classification.accessors]
detect_accessors = true
accessor_patterns = ["get_*", "set_*", "is_*", "has_*"]

[classification.data_flow]
detect_data_flow = true

# Advanced analysis
[orchestration_adjustment]
enabled = true
min_delegation_ratio = 0.6
complexity_reduction = 0.25

[boilerplate_detection]
enabled = true
detect_constructors = true
detect_error_conversions = true
complexity_reduction = 0.20

[functional_analysis]
enabled = true
detect_pure_functions = true
detect_higher_order = true
detect_immutable_patterns = true

Configuration Best Practices

For Strict Quality Standards

[scoring]
coverage = 0.60         # Emphasize test coverage
complexity = 0.30
dependency = 0.10

[thresholds]
minimum_debt_score = 3.0        # Higher bar for flagging issues
max_function_length = 30        # Enforce smaller functions

[thresholds.validation]
max_average_complexity = 8.0    # Stricter complexity limits
max_debt_items = 500            # Stricter debt limits
min_coverage_percentage = 80.0  # Require 80% coverage

For Legacy Codebases

[scoring]
coverage = 0.30         # Reduce coverage weight (legacy code often lacks tests)
complexity = 0.50       # Focus on complexity
dependency = 0.20

[thresholds]
minimum_debt_score = 5.0        # Only show highest priority items
minimum_cyclomatic_complexity = 10   # Filter out moderate complexity

[thresholds.validation]
max_debt_items = 10000          # Accommodate a large existing backlog
max_total_debt_score = 50000    # Higher limits for legacy code

For Open Source Libraries

[scoring]
coverage = 0.55         # Prioritize test coverage (public API)
complexity = 0.30
dependency = 0.15

[external_api]
detect_external_api = true      # Flag untested public APIs

[thresholds.validation]
min_coverage_percentage = 90.0  # High coverage for public API
max_high_complexity_count = 20  # Keep complexity low (DEPRECATED: prefer max_debt_density)

Troubleshooting

Configuration Not Loading

Check file location:

# Ensure file is named .debtmap.toml (note the dot prefix)
ls -la .debtmap.toml

# Debtmap searches the current directory and up to 10 parent directories
pwd

Check file syntax:

# Verify TOML syntax is valid
debtmap analyze 2>&1 | grep -i "failed to parse"

Weights Don’t Sum to 1.0

Error message:

Warning: Invalid scoring weights: Active scoring weights must sum to 1.0, but sum to 0.800. Using defaults.

Fix: Ensure coverage + complexity + dependency = 1.0

[scoring]
coverage = 0.50
complexity = 0.35
dependency = 0.15    # Sum = 1.0 ✓

No Results Shown

Possible causes:

  1. Minimum thresholds too high
  2. All code excluded by ignore patterns
  3. No supported languages in project

Solutions:

# Lower minimum thresholds
[thresholds]
minimum_debt_score = 1.0
minimum_cyclomatic_complexity = 1

# Check language configuration
[languages]
enabled = ["rust", "python", "javascript", "typescript"]

# Review ignore patterns
[ignore]
patterns = [
    # Make sure you're not excluding too much
]