Steward
Dagu Steward is an LLM-powered assistant integrated into the Web UI. It can read, create, and modify your workflows through a chat interface with tool-calling capabilities.
Quick Setup
1. Enable Steward — toggle it on in the Web UI at /agent-settings, or set the environment variable DAGU_AGENT_ENABLED=true.
2. Add a model — click Add Model in the settings page and configure an LLM provider. Supported providers: anthropic, openai, openai-codex, gemini, openrouter, zai, local (Ollama, vLLM, etc.). For openai-codex, connect the ChatGPT Plus/Pro subscription in the model form before saving the model.
3. Set a default model — click the star icon next to a model to make it the default.
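If you prefer the environment-variable route from step 1, export the flag in the shell before starting the Dagu server. A minimal sketch (the variable name comes from the setup steps above; everything else is just shell):

```shell
# Enable the Steward assistant before launching Dagu.
export DAGU_AGENT_ENABLED=true

# Confirm the variable is set in the current shell.
echo "$DAGU_AGENT_ENABLED"
```

Set this in the same environment that launches the Dagu process (a systemd unit, container env, or login shell), not just an interactive session.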
Once configured, click the Steward button at the bottom-left corner of any page to start chatting.
If you are using Ollama or another local model server, read Local AI before setting the Base URL. Dagu expects an OpenAI-compatible base such as http://localhost:11434/v1, not a native Ollama endpoint like /api/generate.
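As an illustration of the distinction, a local-model entry might look like the sketch below. The `provider` and `model` fields mirror the chat-step `llm` block shown later on this page; `base_url` is an assumed field name standing in for whatever the Add Model form calls the Base URL setting:

```yaml
# Illustrative only — field names are assumptions, not the exact schema.
llm:
  provider: local
  model: llama3
  # OpenAI-compatible base — note the /v1 suffix,
  # NOT a native Ollama endpoint like /api/generate.
  base_url: http://localhost:11434/v1
```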
For the full built-in steward configuration surface, start with Steward Settings. The settings docs are split into focused pages for models, tool policy, personality, and web search.
Available Tools
Steward can use these built-in tools. Some are only available when the corresponding feature is configured:
| Tool | Description |
|---|---|
| bash | Execute shell commands (120s default timeout, 600s max) |
| read | Read file contents with line numbers |
| patch | Create, edit, or delete files |
| think | Record reasoning without side effects |
| navigate | Open pages in the Dagu UI |
| ask_user | Prompt the user with options or free-text input |
| delegate | Spawn sub-agents for parallel tasks |
| remote_agent | Delegate tasks to agents on remote nodes (when remote nodes are configured) |
| list_contexts | List available remote nodes for remote_agent (when remote nodes are configured) |
Provider-native web search is configured in model and steward settings rather than exposed as a separate callable tool.
Tools can be individually enabled or disabled in Tool Permissions & Bash Policy.
Agent in Workflows
You can use AI capabilities directly in your DAG steps in two ways:
Agent Step (type: agent)
A multi-turn tool-calling loop — the agent reads files, runs commands, edits code, and iterates until the task is complete:

```yaml
steps:
  - id: fix_config
    type: agent
    messages:
      - role: user
        content: |
          Fix the invalid database_url in /etc/app/config.yaml
    output: RESULT
```

Chat Step (type: chat)
A single-shot LLM call — send a prompt and get a response, with no tool use:

```yaml
steps:
  - id: summarize
    type: chat
    llm:
      provider: openai
      model: gpt-4o
    messages:
      - role: user
        content: "Summarize today's error logs."
    output: SUMMARY
```

AI Coding Tool Integration
Install the Dagu skill for external AI coding tools (Claude Code, Codex, Gemini CLI, etc.) so they can write correct Dagu DAG files.
Use Dagu's built-in installer:
```shell
dagu ai install --skills-dir ~/.agents/skills
```

Or use the shared skills CLI:

```shell
npx skills add https://github.com/dagucloud/dagu --skill dagu
```

See ai in CLI Commands for more details.
See Also
- Steward Documentation — Complete guide to Steward and its configuration
- Steward Settings — Start here for the built-in Web UI steward settings
- Models & Providers — Add models and set the default model
- Tool Permissions & Bash Policy — Control tools and bash rules
- Personality & Web Search — Configure profiles and provider-native search
- Agent Step — Using the agent as a workflow step
- Steward Tools Reference — Detailed tool parameter documentation
- Workflow Operator — Use the built-in steward from Slack or Telegram
- Basic Chat — Single-shot LLM calls in workflows
