CLI Reference
Commands accept either DAG names (from YAML name field) or file paths.
- Both formats: start, stop, status, retry
- File path only: dry, enqueue
- DAG name only: restart
Global Options
dagu [global options] command [command options] [arguments...]

- --config, -c - Config file (default: ~/.config/dagu/config.yaml)
- --dagu-home - Override DAGU_HOME for this command invocation
- --quiet, -q - Suppress output
- --cpu-profile - Enable CPU profiling
- --help, -h - Show help
- --version, -v - Print version
Commands
exec
Run a command without writing a YAML file.
dagu exec [options] -- <command> [args...]

Options:
- --name, -N - DAG name (default: exec-<command>)
- --run-id, -r - Custom run ID
- --env KEY=VALUE - Set environment variable (repeatable)
- --dotenv <path> - Load dotenv file (repeatable)
- --workdir <path> - Working directory
- --shell <path> - Shell binary
- --base <file> - Custom base config file (default: ~/.config/dagu/base.yaml)
- --worker-label key=value - Set worker selector labels (repeatable)
# Basic usage
dagu exec -- python script.py
# With environment variables
dagu exec --env DB_HOST=localhost -- python etl.py

See the exec guide for detailed documentation.
start
Run a DAG workflow.
dagu start [options] DAG_NAME_OR_FILE [-- PARAMS...]

Interactive Mode:
- If no DAG file is specified, opens an interactive selector
- Only available in terminal (TTY) environments
- Shows enhanced progress display during execution
Options:
- --params, -p - Parameters as JSON
- --name, -N - Override the DAG name (default: name from DAG definition or filename)
- --run-id, -r - Custom run ID
- --from-run-id - Re-run using the DAG snapshot and parameters captured from a historic run
Note:
--from-run-id cannot be combined with --params, --parent, or --root. Provide exactly one DAG name or file so the command can look up the historic run.
# Basic run
dagu start my-workflow.yaml
# Interactive mode (no file specified)
dagu start
# With parameters (note the -- separator)
dagu start etl.yaml -- DATE=2024-01-01 ENV=prod
# Custom run ID
dagu start --run-id batch-001 etl.yaml
# Override DAG name
dagu start --name my_custom_name my-workflow.yaml
# Clone parameters from a historic run
dagu start --from-run-id 20241031_235959 example-dag.yaml

stop
Stop a running DAG.
dagu stop [options] DAG_NAME_OR_FILE

Options:
- --run-id, -r - Specific run ID (optional)
dagu stop my-workflow # Stop current run
dagu stop --run-id=20240101_120000 etl # Stop specific run

restart
Restart a DAG run with a new ID.
dagu restart [options] DAG_NAME

Options:
- --run-id, -r - Run to restart (optional)
dagu restart my-workflow # Restart latest
dagu restart --run-id=20240101_120000 etl # Restart specific

retry
Retry a failed DAG execution.
dagu retry [options] DAG_NAME_OR_FILE

Options:
- --run-id, -r - Run to retry (required)
dagu retry --run-id=20240101_120000 my-workflow

status
Display current status of a DAG.
dagu status [options] DAG_NAME_OR_FILE

Options:
- --run-id, -r - Check specific run (optional)
dagu status my-workflow # Latest run status

Output:
Status: running
Started: 2024-01-01 12:00:00
Steps:
✓ download [completed]
⟳ process [running]
○ upload [pending]

history
Display execution history of DAG runs with filtering and pagination.
Usage:
dagu history [flags] [DAG_NAME]

Flags:
- --from - Start date/time in UTC (formats: 2006-01-02 or 2006-01-02T15:04:05Z)
- --to - End date/time in UTC (formats: 2006-01-02 or 2006-01-02T15:04:05Z)
- --last - Relative time period (examples: 7d, 24h, 1w, 30d). Cannot combine with --from/--to
- --status - Filter by status: running, succeeded, failed, aborted, queued, waiting, rejected, not_started, partially_succeeded. Aliases: success (succeeded), failure (failed), canceled/cancelled/cancel (aborted)
- --run-id - Filter by run ID (partial match supported)
- --tags - Filter by tags, comma-separated with AND logic (e.g., prod,critical)
- --format, -f - Output format: table (default), json, or csv
- --limit, -l - Max results (default: 100, max: 1000)
Default Behavior:
- Shows last 30 days of runs
- Table format with columns: DAG NAME, RUN ID, STATUS, STARTED (UTC), DURATION, PARAMS
- Sorted newest first
- Limit 100 results
- Run IDs are never truncated
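A --last value combines an integer with a unit suffix and is resolved against the current time. As a rough illustration only (not dagu's actual parser), the mapping from a relative period to a UTC start time can be sketched as:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: map a --last value such as "7d", "24h", or "1w"
# to the start of the query window. dagu's real implementation may differ.
UNITS = {"h": timedelta(hours=1), "d": timedelta(days=1), "w": timedelta(weeks=1)}

def last_to_cutoff(last: str, now: datetime) -> datetime:
    """Convert a relative period like '7d' into a UTC cutoff time."""
    value, unit = int(last[:-1]), last[-1]
    if unit not in UNITS:
        raise ValueError(f"unsupported unit in --last: {last!r}")
    return now - value * UNITS[unit]

now = datetime(2026, 2, 2, tzinfo=timezone.utc)
print(last_to_cutoff("7d", now))  # seven days before "now"
```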
Examples:
# All runs from last 30 days
dagu history
# Specific DAG runs
dagu history my-workflow
# Recent failures for debugging
dagu history my-workflow --status failed --last 7d
# Date range query
dagu history --from 2026-01-01 --to 2026-01-31
# JSON export for analysis
dagu history --format json --limit 500 > history.json
# CSV export for spreadsheets
dagu history --format csv --limit 500 > history.csv
# Tag filtering (AND logic)
dagu history --tags "prod,critical"
# Combined filters
dagu history my-workflow --status failed --last 24h --limit 10

Output (table):
DAG NAME RUN ID STATUS STARTED (UTC) DURATION PARAMS
my-workflow 019c1ca4-ba96-7599-80c9-773862801abc Succeeded 2026-02-02 04:38:03 2m30s -
my-workflow 019c1ca3-f123-4567-89ab-cdef01234567 Failed 2026-02-01 14:22:15 45s env=prod

Output (JSON):
[
{
"name": "my-workflow",
"dagRunId": "019c1ca4-ba96-7599-80c9-773862801abc",
"status": "succeeded",
"startedAt": "2026-02-02T04:38:03Z",
"finishedAt": "2026-02-02T04:40:33Z",
"duration": "2m30s",
"params": "",
"tags": ["prod", "backend"],
"workerId": "",
"error": ""
}
]

Output (CSV):
DAG NAME,RUN ID,STATUS,STARTED (UTC),DURATION,PARAMS
my-workflow,019c1ca4-ba96-7599-80c9-773862801abc,Succeeded,2026-02-02 04:38:03,2m30s,-
my-workflow,019c1ca3-f123-4567-89ab-cdef01234567,Failed,2026-02-01 14:22:15,45s,env=prod

Note: CSV output follows RFC 4180. Fields containing commas, quotes, or newlines are automatically quoted and escaped.
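Because the export is RFC 4180 compliant, standard CSV libraries parse it directly, including quoted fields. A minimal sketch using Python's csv module (the sample rows below are illustrative, not real output):

```python
import csv
import io

# Sample lines in the shape of `dagu history --format csv`; the quoted
# PARAMS field shows how embedded commas survive round-tripping.
sample = '''DAG NAME,RUN ID,STATUS,STARTED (UTC),DURATION,PARAMS
my-workflow,019c1ca4-ba96-7599-80c9-773862801abc,Succeeded,2026-02-02 04:38:03,2m30s,-
my-workflow,019c1ca3-f123-4567-89ab-cdef01234567,Failed,2026-02-01 14:22:15,45s,"env=prod,region=us-east-1"
'''

rows = list(csv.DictReader(io.StringIO(sample)))
failed = [r for r in rows if r["STATUS"] == "Failed"]
print(len(failed))           # 1
print(failed[0]["PARAMS"])   # env=prod,region=us-east-1
```

In practice you would pipe `dagu history --format csv` to a file and open that file instead of the inline string.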
Error Examples:
# Conflicting flags
$ dagu history --last 7d --from 2026-01-01
Error: cannot use --last with --from or --to
# Invalid status
$ dagu history --status invalid
Error: invalid status 'invalid'. Valid values: running, succeeded, failed, ...
# Date validation
$ dagu history --from 2026-02-01 --to 2026-01-01
Error: --from date (2026-02-01) must be before --to date (2026-01-01)
server
Start the web UI server.
dagu server [options]

Options:
- --host, -s - Host (default: localhost)
- --port, -p - Port (default: 8080)
- --dags, -d - DAGs directory
dagu server # Default settings
dagu server --host=0.0.0.0 --port=9000 # Custom host/port

scheduler
Start the DAG scheduler daemon.
dagu scheduler [options]

Options:
- --dags, -d - DAGs directory
dagu scheduler # Default settings
dagu scheduler --dags=/opt/dags # Custom directory

start-all
Start scheduler, web UI, and optionally coordinator service.
dagu start-all [options]

Options:
- --host, -s - Host (default: localhost)
- --port, -p - Port (default: 8080)
- --dags, -d - DAGs directory
- --coordinator.host - Coordinator bind address (default: 127.0.0.1)
- --coordinator.advertise - Address to advertise in service registry
- --coordinator.port - Coordinator gRPC port (default: 50055)
# Single instance mode (coordinator disabled)
dagu start-all
# Distributed mode with coordinator enabled
dagu start-all --coordinator.host=0.0.0.0 --coordinator.port=50055
# Production mode
dagu start-all --host=0.0.0.0 --port=9000 --coordinator.host=0.0.0.0

Note: The coordinator service is only started when --coordinator.host is set to a non-localhost address (not 127.0.0.1 or localhost). By default, start-all runs in single instance mode without the coordinator.
validate
Validate a DAG specification for structural correctness.
dagu validate [options] DAG_FILE

Checks structural correctness and references (e.g., step dependencies) without evaluating variables or executing the DAG. Returns validation errors in a human-readable format.
dagu validate my-workflow.yaml

Output when valid:
DAG spec is valid: my-workflow.yaml (name: my-workflow)

Output when invalid:
Validation failed for my-workflow.yaml
- Step 'process' depends on non-existent step 'missing_step'
- Invalid cron expression in schedule: '* * * *'

dry
Validate a DAG without executing it.
dagu dry [options] DAG_FILE [-- PARAMS...]

Options:
- --params, -p - Parameters as JSON
- --name, -N - Override the DAG name (default: name from DAG definition or filename)
dagu dry my-workflow.yaml
dagu dry etl.yaml -- DATE=2024-01-01 # With parameters
dagu dry --name my_custom_name my-workflow.yaml # Override DAG name

enqueue
Add a DAG to the execution queue.
dagu enqueue [options] DAG_FILE [-- PARAMS...]

Options:
- --run-id, -r - Custom run ID
- --params, -p - Parameters as JSON
- --name, -N - Override the DAG name (default: name from DAG definition or filename)
- --queue, -u - Override DAG-level queue name for this enqueue
dagu enqueue my-workflow.yaml
dagu enqueue --run-id=batch-001 etl.yaml -- TYPE=daily
# Enqueue to a specific queue (override)
dagu enqueue --queue=high-priority my-workflow.yaml
# Override DAG name
dagu enqueue --name my_custom_name my-workflow.yaml

dequeue
Remove a DAG from the execution queue.
dagu dequeue <queue-name> --dag-run=<dag-name>:<run-id> # remove specific run
dagu dequeue <queue-name> # pop the oldest item

Example:
dagu dequeue default --dag-run=my-workflow:batch-001
dagu dequeue default

version
Display version information.
dagu version

cleanup
Remove old DAG run history for a specified DAG.
dagu cleanup [options] DAG_NAME

Options:
- --retention-days - Number of days to retain (default: 0 = delete all)
- --dry-run - Preview what would be deleted without actually deleting
- --yes, -y - Skip confirmation prompt
Active runs (running, queued) are never deleted for safety.
# Delete all history (with confirmation prompt)
dagu cleanup my-workflow
# Keep last 30 days of history
dagu cleanup --retention-days 30 my-workflow
# Preview what would be deleted
dagu cleanup --dry-run my-workflow
# Delete without confirmation (for scripts)
dagu cleanup -y my-workflow
# Combine options
dagu cleanup --retention-days 7 -y my-workflow

Output:
# Dry run output
Dry run: Would delete 5 run(s) for DAG "my-workflow":
- 019b1c4b-1b1e-7232-b12d-e822dac72613
- 019b1c4b-13e1-7251-a713-aaad60dfa88c
...
# Actual deletion output
Successfully removed 5 run(s) for DAG "my-workflow"

migrate
Migrate legacy data to new format.
dagu migrate history # Migrate v1.16 -> v1.17+ format

coordinator
Start the coordinator gRPC server for distributed task execution.
dagu coordinator [options]

Options:
- --coordinator.host - Host address to bind (default: 127.0.0.1)
- --coordinator.advertise - Address to advertise in service registry (default: auto-detected hostname)
- --coordinator.port - Port number (default: 50055)
- --peer.cert-file - Path to TLS certificate file for peer connections
- --peer.key-file - Path to TLS key file for peer connections
- --peer.client-ca-file - Path to CA certificate file for client verification (mTLS)
- --peer.insecure - Use insecure connection (h2c) instead of TLS (default: true)
- --peer.skip-tls-verify - Skip TLS certificate verification (insecure)
# Basic usage
dagu coordinator --coordinator.host=0.0.0.0 --coordinator.port=50055
# Bind to all interfaces and advertise service name (for containers/K8s)
dagu coordinator \
--coordinator.host=0.0.0.0 \
--coordinator.advertise=dagu-server \
--coordinator.port=50055
# With TLS
dagu coordinator \
--peer.insecure=false \
--peer.cert-file=server.pem \
--peer.key-file=server-key.pem
# With mutual TLS
dagu coordinator \
--peer.insecure=false \
--peer.cert-file=server.pem \
--peer.key-file=server-key.pem \
--peer.client-ca-file=ca.pem

The coordinator service enables distributed task execution by:
- Automatically registering in the service registry system
- Accepting task polling requests from workers
- Matching tasks to workers based on labels
- Tracking worker health via heartbeats (every 10 seconds)
- Providing task distribution API with automatic failover
- Managing worker lifecycle through file-based registry
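Label-based matching of this kind is commonly implemented as a subset check: a worker is eligible for a task when it carries every label in the task's selector with the same value. The sketch below illustrates that idea; it is an assumption about the semantics, not dagu's actual code:

```python
def worker_matches(task_selector: dict, worker_labels: dict) -> bool:
    """A worker is eligible if it has every selector label with a matching value."""
    return all(worker_labels.get(k) == v for k, v in task_selector.items())

# Labels as set with `dagu worker --worker.labels gpu=true,memory=64G,region=us-east-1`
worker = {"gpu": "true", "memory": "64G", "region": "us-east-1"}
print(worker_matches({"gpu": "true"}, worker))                         # True
print(worker_matches({"gpu": "true", "region": "eu-west-1"}, worker))  # False
```

Under this model a task with an empty selector matches any worker, while extra worker labels never disqualify it.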
worker
Start a worker that polls the coordinator for tasks.
dagu worker [options]

Options:
- --worker.id - Worker instance ID (default: hostname@PID)
- --worker.max-active-runs - Maximum number of active runs (default: 100)
- --worker.labels, -l - Worker labels for capability matching (format: key1=value1,key2=value2)
- --peer.insecure - Use insecure connection (h2c) instead of TLS (default: true)
- --peer.cert-file - Path to TLS certificate file for peer connections
- --peer.key-file - Path to TLS key file for peer connections
- --peer.client-ca-file - Path to CA certificate file for server verification
- --peer.skip-tls-verify - Skip TLS certificate verification (insecure)
# Basic usage
dagu worker
# With custom configuration
dagu worker \
--worker.id=worker-1 \
--worker.max-active-runs=50
# With labels for capability matching
dagu worker --worker.labels gpu=true,memory=64G,region=us-east-1
dagu worker --worker.labels cpu-arch=amd64,instance-type=m5.xlarge
# With TLS connection
dagu worker \
--peer.insecure=false
# With mutual TLS
dagu worker \
--peer.insecure=false \
--peer.cert-file=client.pem \
--peer.key-file=client-key.pem \
--peer.client-ca-file=ca.pem
# With self-signed certificates
dagu worker \
--peer.insecure=false \
--peer.skip-tls-verify

Workers automatically register in the service registry system, send regular heartbeats, and poll the coordinator for tasks matching their labels, which they execute locally.
Configuration
Priority: CLI flags > Environment variables > Config file
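That precedence can be pictured as a chain of fallbacks where the first defined source wins. The sketch below is illustrative only; the DAGU_PORT variable is real, but the resolver function itself is an assumption, not dagu's implementation:

```python
import os

def resolve(flag_value, env_var, file_value, default):
    """First defined source wins: CLI flag, then environment, then config file."""
    if flag_value is not None:
        return flag_value
    if env_var in os.environ:
        return os.environ[env_var]
    if file_value is not None:
        return file_value
    return default

os.environ["DAGU_PORT"] = "9000"
print(resolve(None, "DAGU_PORT", "8081", "8080"))    # 9000 (env beats config file)
print(resolve("7000", "DAGU_PORT", "8081", "8080"))  # 7000 (flag beats everything)
```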
Using Custom Home Directory
The --dagu-home flag allows you to override the application home directory for a specific command invocation. This is useful for:
- Testing with different configurations
- Running multiple Dagu instances with isolated data
- CI/CD scenarios requiring custom directories
# Use a custom home directory for this command
dagu --dagu-home=/tmp/dagu-test start my-workflow.yaml
# Start server with isolated data
dagu --dagu-home=/opt/dagu-prod start-all
# Run scheduler with specific configuration
dagu --dagu-home=/var/lib/dagu scheduler

When --dagu-home is set, it overrides the DAGU_HOME environment variable and uses a unified directory structure:
$DAGU_HOME/
├── dags/ # DAG definitions
├── logs/ # All log files
├── data/ # Application data
├── suspend/ # DAG suspend flags
├── config.yaml # Main configuration
└── base.yaml # Shared DAG defaults

Key Environment Variables
- DAGU_HOME - Set all directories to this path
- DAGU_HOST - Server host (default: 127.0.0.1)
- DAGU_PORT - Server port (default: 8080)
- DAGU_DAGS_DIR - DAGs directory
- DAGU_LOG_DIR - Log directory
- DAGU_DATA_DIR - Data directory
- DAGU_AUTH_BASIC_USERNAME - Basic auth username
- DAGU_AUTH_BASIC_PASSWORD - Basic auth password
