Environment Variables¶
Prodigy provides comprehensive environment variable management for workflows, enabling parameterization, secrets management, and environment-specific configurations.
Overview¶
Environment variables in Prodigy allow you to:
- Define workflow-wide variables accessible in all commands
- Securely manage sensitive credentials with automatic masking
- Configure environment-specific settings using profiles
- Load variables from .env files
- Use dynamic and conditional variables
- Reference variables across all workflow phases
Variable Precedence¶
Variables can be defined in multiple locations. When the same variable is defined in multiple places, Prodigy uses this precedence order (highest to lowest):
1. Profile variables - Activated with the --profile flag
2. Workflow env block - Defined in workflow YAML
3. Environment files - Loaded from .env files (later files override earlier)
4. Parent process environment - Inherited from the shell
This hierarchy allows you to set sensible defaults while providing runtime overrides when needed.
graph TD
Start[Variable Reference: $API_URL] --> Profile{Profile<br/>variable?}
Profile -->|Yes| UseProfile[Use profile value]
Profile -->|No| WorkflowEnv{Workflow<br/>env block?}
WorkflowEnv -->|Yes| UseWorkflow[Use workflow value]
WorkflowEnv -->|No| EnvFile{Environment<br/>file?}
EnvFile -->|Yes| UseEnvFile[Use env file value]
EnvFile -->|No| ParentEnv{Parent<br/>process env?}
ParentEnv -->|Yes| UseParent[Use parent value]
ParentEnv -->|No| Error[Error: Variable not found]
UseProfile --> End[Value resolved]
UseWorkflow --> End
UseEnvFile --> End
UseParent --> End
style Profile fill:#e1f5ff
style WorkflowEnv fill:#fff3e0
style EnvFile fill:#f3e5f5
style ParentEnv fill:#e8f5e9
style Error fill:#ffebee
Figure: Variable resolution follows precedence from profile → workflow env → env files → parent environment.
Defining Environment Variables¶
Environment variables are defined in the env block at the workflow root:
# Source: workflows/environment-example.yml
env:
  # Static variables
  NODE_ENV: production
  API_URL: https://api.example.com
  PROJECT_NAME: "my-project"
  VERSION: "1.0.0"

commands:
  - shell: "echo Building $PROJECT_NAME version $VERSION"
  - shell: "curl $API_URL/health"
Variable Types¶
Static Variables¶
Simple key-value pairs for constant values:
# Source: workflows/mapreduce-env-example.yml:8-11
env:
  PROJECT_NAME: "example-project"
  PROJECT_CONFIG: "config.yml"
  FEATURES_PATH: "features"
Dynamic Variables¶
Computed from command output at workflow start:
# Source: workflows/environment-example.yml:10-12
env:
  WORKERS:
    command: "nproc 2>/dev/null || echo 4"
    cache: true
When cache: true is set, dynamic variables are evaluated once at workflow start and the result is reused for the duration of the workflow.
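The command form uses ordinary shell semantics, so the fallback pattern above behaves like this standalone snippet:

```shell
# Compute the worker count the same way the dynamic variable does:
# use nproc when available, otherwise fall back to a default of 4
WORKERS=$(nproc 2>/dev/null || echo 4)
echo "$WORKERS"
```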
Conditional Variables¶
Values that depend on expressions:
# Source: workflows/environment-example.yml:14-18
env:
  DEPLOY_ENV:
    condition: "${branch} == 'main'"
    when_true: "production"
    when_false: "staging"
Variable Interpolation¶
Prodigy supports two interpolation syntaxes for flexibility:
# Source: workflows/mapreduce-env-example.yml:43-46
commands:
  # Simple syntax
  - shell: "echo Starting $PROJECT_NAME workflow"

  # Bracketed syntax (more explicit)
  - shell: "echo Created output directory: ${OUTPUT_DIR}"

  # In Claude commands
  - claude: "/analyze --project $PROJECT_NAME --config ${PROJECT_CONFIG}"
When to use bracketed syntax:
- When the variable name is followed by alphanumeric characters: ${VAR}_suffix
- For clarity in complex expressions: ${map.results}
- Inside quoted strings: "Path: ${OUTPUT_DIR}/file"
Prefer Bracketed Syntax
Using ${VAR} instead of $VAR prevents ambiguity when variables are adjacent to other characters. For example, $VARsuffix may be interpreted as a variable named VARsuffix, while ${VAR}suffix is unambiguous.
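The same behavior can be demonstrated in plain shell, where this parsing rule originates:

```shell
VAR=build
# Without braces, the shell parses the name as "VARsuffix", which is unset,
# so this prints an empty line
echo "$VARsuffix"
# Braces delimit the name explicitly, so this prints "buildsuffix"
echo "${VAR}suffix"
```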
Secrets Management¶
Secrets are automatically masked in all output, logs, and error messages to prevent credential leaks.
Defining Secrets¶
# Source: workflows/mapreduce-env-example.yml:22-25
env:
  API_TOKEN:
    secret: true
    value: "${GITHUB_TOKEN}"
Secrets can reference environment variables from the parent process using ${ENV_VAR} syntax.
Common Mistake: Forgetting secret: true
The most common mistake is defining sensitive values without marking them as secrets:
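For illustration (the variable name and token value here are hypothetical), the unmasked and masked forms look like this:

```yaml
# Risky: a plain variable, so the value appears in logs and output
env:
  DEPLOY_TOKEN: "ghp-example-token"

# Safe: marked as a secret, so the value is masked everywhere
env:
  DEPLOY_TOKEN:
    secret: true
    value: "${GITHUB_TOKEN}"
```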
Alternative Secrets Syntax¶
The secrets block is an alternative to inline secret: true definitions.
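A sketch of this form, assuming the ${env:...} reference style shown in the Common Patterns section and that every entry in the block is implicitly treated as a secret:

```yaml
secrets:
  API_KEY: "${env:EXTERNAL_API_KEY}"
  DB_PASSWORD: "${env:DATABASE_PASSWORD}"  # DB_PASSWORD is illustrative
```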
Advanced: Secret Providers¶
Prodigy supports multiple secret providers for integration with external secret management systems:
# Source: src/cook/environment/config.rs:99-112
env:
  # Environment variable provider (default)
  API_TOKEN:
    secret: true  # (1)!
    value: "${GITHUB_TOKEN}"  # (2)!

  # File-based secrets
  DATABASE_PASSWORD:
    secret: true
    provider: file  # (3)!
    key: "/run/secrets/db_password"  # (4)!

  # HashiCorp Vault integration
  VAULT_TOKEN:
    secret: true
    provider: vault  # (5)!
    key: "secret/data/myapp/token"  # (6)!
    version: "v2"  # (7)!

  # AWS Secrets Manager
  AWS_SECRET:
    secret: true
    provider: aws  # (8)!
    key: "myapp/prod/api-key"  # (9)!
1. Marks value as secret for automatic masking
2. References environment variable from parent process
3. File provider reads secret from filesystem
4. Path to file containing the secret value
5. HashiCorp Vault integration for centralized secrets
6. Vault path to the secret
7. KV secrets engine version (v1 or v2)
8. AWS Secrets Manager integration
9. Secret name/ARN in AWS Secrets Manager
Provider Availability
Secret provider support depends on configuration. The env and file providers are always available. Vault and AWS providers require additional setup.
Automatic Masking¶
Secrets are masked in:
- Command output (stdout/stderr)
- Error messages and stack traces
- Event logs and checkpoints
- Workflow summaries
- MapReduce agent logs
Example output with masking:
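For instance, a command that would otherwise print a token shows the masked placeholder instead (the command line and URL are illustrative):

```
$ prodigy run workflow.yml
Running: curl -H 'Authorization: Bearer ***' https://api.example.com/data
```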
flowchart LR
Cmd[Execute Command] --> Output[Generate Output]
Output --> Scan{Contains<br/>secret value?}
Scan -->|Yes| Mask[Replace with ***]
Scan -->|No| Pass[Pass through]
Mask --> Log[Write to log]
Pass --> Log
Log --> Display[Display to user]
style Cmd fill:#e1f5ff
style Scan fill:#fff3e0
style Mask fill:#ffebee
style Pass fill:#e8f5e9
Figure: Secret masking automatically replaces sensitive values with *** in all output streams.
Secret Security
Always mark sensitive values as secrets. Without the secret: true flag, values will appear in logs and may be exposed.
Profiles¶
Profiles enable environment-specific configurations for development, staging, and production environments.
Defining Profiles¶
# Source: workflows/mapreduce-env-example.yml:28-39
env:
  DEBUG_MODE: "false"  # (1)!
  TIMEOUT_SECONDS: "300"  # (2)!
  OUTPUT_DIR: "output"  # (3)!

profiles:
  development:
    description: "Development environment with debug enabled"  # (4)!
    DEBUG_MODE: "true"  # (5)!
    TIMEOUT_SECONDS: "60"  # (6)!
    OUTPUT_DIR: "dev-output"

  production:
    description: "Production environment"
    DEBUG_MODE: "false"
    TIMEOUT_SECONDS: "300"
    OUTPUT_DIR: "prod-output"
1. Default values used when no profile is activated
2. Timeout for operations in seconds
3. Output directory for workflow results
4. Optional description shown in help text
5. Profile values override default env values
6. Development uses a shorter timeout for faster feedback
Activating Profiles¶
# Use default values (no profile)
prodigy run workflow.yml
# Activate development profile
prodigy run workflow.yml --profile development
# Activate production profile
prodigy run workflow.yml --profile production
Profile variables override default env values. Variables not defined in the profile inherit default values.
Profile Best Practice
Use profiles to separate environment-specific configuration (development, staging, production) rather than maintaining multiple workflow files. This ensures consistency while allowing environment-specific overrides.
Environment Files¶
Load variables from .env files to keep configuration, and externally managed secrets, outside the workflow file. Environment files use the standard KEY=VALUE .env format.
Defining Environment Files¶
Multiple files can be specified, with later files overriding earlier ones for the same variable names.
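A sketch, assuming the env_files key used in the precedence example later in this section:

```yaml
env_files:
  - .env             # Base configuration
  - .env.production  # Later file; overrides values from .env
```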
.env File Format¶
# Database configuration
DATABASE_URL=postgres://localhost/mydb
DATABASE_POOL_SIZE=10
# API settings
API_KEY=sk-abc123xyz
API_TIMEOUT=30
# Feature flags
ENABLE_CACHING=true
Supported Formats
Environment files follow standard .env format with KEY=VALUE pairs. Lines starting with # are treated as comments. No spaces are allowed around the = sign.
Variable Precedence with Environment Files¶
When variables are defined in multiple locations, Prodigy uses this precedence (highest to lowest):
1. Profile variables (--profile flag) - Highest priority
2. Workflow env block - Workflow-defined variables
3. Environment files - Later files override earlier files
4. Parent process environment - Lowest priority
This precedence order ensures that explicit workflow configuration takes precedence over external sources, while profiles provide runtime overrides.
Precedence Example
Given these definitions:
env_files:
  - .env.base        # API_URL=http://localhost
  - .env.production  # API_URL=https://prod.api.com

env:
  API_URL: https://staging.api.com

profiles:
  prod:
    API_URL: https://api.example.com
Resolution:
- No profile: https://staging.api.com (workflow env)
- With --profile prod: https://api.example.com (profile)
Usage in Workflow Phases¶
Environment variables are available in all workflow phases:
Standard Workflows¶
# Source: workflows/environment-example.yml:42-52
commands:
  - name: "Show environment"
    shell: "echo NODE_ENV=$NODE_ENV API_URL=$API_URL"

  - name: "Build frontend"
    shell: "echo 'Building with NODE_ENV='$NODE_ENV"
    env:
      BUILD_TARGET: production
      OPTIMIZE: "true"
    working_dir: ./frontend
MapReduce Setup Phase¶
# Source: workflows/mapreduce-env-example.yml:42-49
setup:
  - shell: "echo Starting $PROJECT_NAME workflow"
  - shell: "mkdir -p $OUTPUT_DIR"
  - shell: "echo Created output directory: ${OUTPUT_DIR}"
  - shell: "echo Debug mode: $DEBUG_MODE"
MapReduce Map Phase¶
Environment variables are available in agent templates:
# Source: workflows/mapreduce-env-example.yml:56-68
map:
  agent_template:
    # In Claude commands
    - claude: "/process-item '${item.name}' --project $PROJECT_NAME"

    # In shell commands
    - shell: "echo Processing ${item.name} for $PROJECT_NAME"
    - shell: "echo Output: $OUTPUT_DIR"

    # In failure handlers
    - shell: "timeout ${TIMEOUT_SECONDS}s ./process.sh"
      on_failure:
        - claude: "/fix-issue --max-retries $MAX_RETRIES"
MapReduce Agent Isolation
Each MapReduce agent runs in an isolated git worktree with its own execution context. Environment variables defined in the workflow are automatically inherited by all agents. Secret masking is maintained across agent boundaries to ensure credentials remain protected.
MapReduce Reduce Phase¶
# Source: workflows/mapreduce-env-example.yml:72-79
reduce:
  - shell: "echo Aggregating results for $PROJECT_NAME"
  - claude: "/summarize ${map.results} --format $REPORT_FORMAT"
  - shell: "cp summary.$REPORT_FORMAT $OUTPUT_DIR/${PROJECT_NAME}-summary.$REPORT_FORMAT"
  - shell: "echo Processed ${map.successful}/${map.total} items"
Merge Phase¶
# Source: workflows/mapreduce-env-example.yml:82-93
merge:
  commands:
    - shell: "echo Merging changes for $PROJECT_NAME"
    - claude: "/validate-merge --branch ${merge.source_branch} --project $PROJECT_NAME"
    - shell: "echo Merge completed for ${PROJECT_NAME}"
Per-Step Environment¶
Override or add variables for specific commands:
# Source: workflows/environment-example.yml:54-60
commands:
  - name: "Run tests"
    shell: "pytest tests/"
    env:
      PYTHONPATH: "./src:./tests"
      TEST_ENV: "true"
    working_dir: ./backend
    temporary: true  # Environment restored after this step
Options:
- temporary: true - Restore the environment after the step completes
- clear_env: true - Clear all inherited variables and use only step-specific ones
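As an illustration, a hypothetical step that needs a fully controlled environment might combine both options:

```yaml
commands:
  - name: "Reproducible build"
    shell: "make build"
    env:
      PATH: "/usr/bin:/bin"  # Only explicitly listed variables are visible
    clear_env: true   # Drop all inherited variables
    temporary: true   # Restore the previous environment afterwards
```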
Best Practices¶
Use Secrets for Sensitive Data
Always mark API keys, tokens, passwords, and credentials as secrets to enable automatic masking.
Parameterize Project-Specific Values
Use environment variables instead of hardcoding paths, URLs, and configuration values. This improves portability and maintainability.
Document Required Variables
Add comments in workflow files documenting expected variables and their purposes.
Use Profiles for Environments
Separate development, staging, and production configurations using profiles rather than maintaining separate workflow files.
Prefer Bracketed Syntax
Use ${VAR} instead of $VAR for explicitness and to avoid ambiguity in complex expressions.
Common Patterns¶
Project Configuration¶
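A minimal sketch, reusing variables from the examples earlier in this chapter:

```yaml
env:
  PROJECT_NAME: "my-project"
  PROJECT_CONFIG: "config.yml"
  VERSION: "1.0.0"

commands:
  - shell: "echo Building $PROJECT_NAME version $VERSION"
```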
API Integration¶
env:
  API_URL: "https://api.example.com"
  API_TIMEOUT: "30"

secrets:
  API_KEY: "${env:EXTERNAL_API_KEY}"

commands:
  - shell: "curl -H 'Authorization: Bearer $API_KEY' $API_URL/data"
Multi-Environment Configuration¶
env:
  APP_ENV: "development"
  LOG_LEVEL: "debug"

profiles:
  staging:
    APP_ENV: "staging"
    LOG_LEVEL: "info"
  production:
    APP_ENV: "production"
    LOG_LEVEL: "warn"
Feature Flags¶
env:
  ENABLE_CACHING: "true"
  ENABLE_ANALYTICS: "false"
  MAX_WORKERS: "4"

commands:
  - shell: |
      if [ "$ENABLE_CACHING" = "true" ]; then
        echo "Caching enabled"
      fi
Troubleshooting¶
Variable Not Found¶
Symptom: $VAR appears literally in output or command fails with "command not found"
Cause: Variable not defined or incorrect interpolation syntax
Solution:
- Verify the variable is defined in the env block or an environment file
- Check spelling and case (variable names are case-sensitive)
- Ensure proper interpolation syntax ($VAR or ${VAR})
- Use the --profile flag if the variable is profile-specific
Secret Not Masked¶
Symptom: Sensitive value appears in logs or output
Cause: Variable not marked as secret
Solution:
# Before (not masked)
env:
  API_KEY: "sk-abc123"

# After (masked)
env:
  API_KEY:
    secret: true
    value: "sk-abc123"
Profile Variables Not Applied¶
Symptom: Default values used instead of profile values
Cause: Profile not activated with --profile flag
Solution:
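Activate the profile explicitly when running the workflow:

```
prodigy run workflow.yml --profile production
```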
Environment File Not Loaded¶
Symptom: Variables from .env file not available
Cause: File path incorrect or file doesn't exist
Solution:
- Verify file path is relative to workflow file location
- Check the file exists: ls .env.production
- Verify file syntax (KEY=VALUE format, no spaces around =)