Claude Code Routines: Scheduled Cloud Automation Without the DevOps Overhead
Anthropic shipped Claude Code Routines in April 2026. It is cron for agents, with GitHub-native triggers, API invocation, and the same terminal-native workflow developers already use locally. Here is the guide.
Claude Code Routines is Anthropic's April 2026 release that turns Claude Code from a local developer tool into a scheduled cloud automation platform. The mental model is straightforward: you write a Claude Code session the way you already do, save it as a routine, and Anthropic runs it on a schedule, on a webhook, on an API call, or on a GitHub event.
This post walks through what Routines actually is, when to use it versus Claude Managed Agents or self-hosted cron, and the five highest-value routines we have deployed in the two weeks since launch.
What Routines Is
A routine is a reusable Claude Code session definition. It contains:
- A system prompt
- An MCP server list
- A trigger (cron, webhook, GitHub event, or manual/API)
- Input parameters
- Output handling (file artifacts, GitHub PRs, comments, notifications)
When the trigger fires, Anthropic spins up a Claude Code container, loads your routine, passes the input, runs until completion, and either persists outputs or exposes them through a webhook.
Claude Code Routines is a sibling product to Claude Managed Agents, not a replacement. Managed Agents is the right choice when you have a long-running agent with custom MCP servers and persistent state. Routines is the right choice when you have a task you already know how to describe in a Claude Code session and you want it to run on a schedule.
The Five Highest-Value Routines
Routine 1: Nightly dependency upgrade PR.
The single most useful routine we have deployed. A nightly run that checks package.json, requirements.txt, or equivalent, identifies upgradable dependencies that are not major-version bumps, runs the tests with the upgrade applied, and opens a PR if tests pass.
```yaml
name: nightly-dep-bumps
trigger:
  cron: "0 3 * * *"   # 3 AM daily
  timezone: America/Los_Angeles
mcp_servers:
  - github
  - bash
system_prompt: |
  Check for dependency updates in this repo that are safe minor or patch bumps.
  For each:
  1. Apply the bump on a new branch
  2. Run the test suite (npm test or pytest depending on repo)
  3. If tests pass, open a PR with a summary of the change
  4. If tests fail, discard the branch and log the failure
  Do not attempt major version bumps. Do not bump more than 5 packages per run.
outputs:
  on_complete: post_to_slack
  channel: "#dep-updates"
```
This replaces Dependabot for teams that want actual test validation before the PR appears.
Routine 2: Support ticket daily digest.
Reads the past 24 hours of support tickets, clusters them by theme, identifies anomalies, and posts a summary to the engineering Slack. Takes about 45 seconds of Haiku 4.5 time. Replaces a manual analyst task that took someone 30 minutes each morning.
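A sketch of what this routine's definition might look like, following the same schema as the nightly-dep-bumps example. The `model` field and the `zendesk` MCP server name are assumptions for illustration, not confirmed syntax:

```yaml
name: support-ticket-digest
model: claude-haiku-4-5        # assumed field; a cheap model is enough for clustering
trigger:
  cron: "0 8 * * *"            # 8 AM daily
  timezone: America/Los_Angeles
mcp_servers:
  - zendesk                    # hypothetical MCP server for your ticket system
system_prompt: |
  Read all support tickets opened in the past 24 hours.
  Cluster them by theme, flag anomalies (volume spikes, new error classes),
  and write a short digest for the engineering team.
outputs:
  on_complete: post_to_slack
  channel: "#support-digest"
```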
Routine 3: Weekly product metrics narrative.
Every Monday at 9 AM, reads the previous week's product metrics from the data warehouse, writes a 300-word narrative summary for leadership, and posts it to a company-wide channel. What makes it work is that the routine reads last week's narrative too and only calls out what has changed meaningfully — avoiding the usual noise of automated reports.
Routine 4: GitHub issue triage on every new issue.
Triggered on GitHub issue.opened events. Reads the issue, the recent activity in the repo, and the codebase area the issue references. Applies labels, assigns to the right engineer based on CODEOWNERS, and posts a comment that either asks a clarifying question or summarizes the likely root cause.
Value here is not the triage itself — it is speed. Issues that previously sat for a day before a human looked at them now get a first pass within 30 seconds of creation.
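A hedged sketch of the triage routine; the `github_event` trigger key is an assumption extrapolated from the cron trigger in the nightly-dep-bumps example:

```yaml
name: issue-triage
trigger:
  github_event: issue.opened   # assumed key, mirroring the cron trigger syntax
mcp_servers:
  - github
system_prompt: |
  For the issue passed as input:
  1. Read the issue body and recent activity in the repo
  2. Apply labels for area and severity
  3. Assign an owner based on CODEOWNERS
  4. Comment with either a clarifying question or a likely root cause
```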
Routine 5: On-call handoff brief.
Runs at the start of each on-call shift. Reads incidents from the past week, deploys since the last handoff, current alerts, and open pull requests. Produces a briefing document the incoming on-call can read in five minutes to know what they are walking into. Teams consistently tell us this is the highest-signal routine they deploy, and it is a workflow improvement you cannot really buy a product for.
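A sketch of the handoff routine under the same assumed schema; the incident and monitoring MCP server names and the `save_artifact` output mode are hypothetical placeholders for whatever your stack exposes:

```yaml
name: oncall-handoff-brief
trigger:
  cron: "0 9 * * *"            # example: daily shift change at 9 AM
mcp_servers:
  - github
  - pagerduty                  # hypothetical MCP servers for incidents and alerts
  - datadog
system_prompt: |
  Write a briefing for the incoming on-call: incidents from the past week,
  deploys since the last handoff, currently firing alerts, and open PRs
  that touch production paths. Keep it to one page.
outputs:
  on_complete: save_artifact   # assumed output mode
  path: handoff-brief.md
```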
Pricing and Limits
Pricing mirrors Claude Managed Agents: standard Claude API rates plus $0.08 per routine runtime hour. A 45-second routine costs about $0.001 in runtime plus model tokens. A 15-minute routine costs $0.02 in runtime plus model tokens.
Limits that matter in April 2026:
| Limit | Value |
|---|---|
| Max routine duration | 2 hours (vs 24 hours for Managed Agents) |
| Max concurrent routine runs | 100 per workspace |
| Max routines per workspace | 500 |
| Output artifact size | 500 MB |
| Container cold start | ~8 seconds typical |
The 2-hour cap is the most common hit for heavy routines. If you need more, the routine can persist state to a volume and be re-invoked, or you can migrate to Managed Agents.
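The persist-and-reinvoke pattern might look like the following sketch; the `manual` trigger and `volume` keys are assumptions about the schema, not documented syntax:

```yaml
name: long-migration
trigger:
  manual: true                 # assumed: invoked via API, re-invoked until done
volume: migration-state        # assumed key: storage that persists across runs
system_prompt: |
  Resume the data migration from the checkpoint file in /state/checkpoint.json.
  Work for at most 90 minutes, write an updated checkpoint, and exit cleanly
  before the 2-hour cap. Report remaining work in the run summary.
```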
Wiring Routines into GitHub
The GitHub integration is why Routines is seeing fast adoption among engineering teams. You install the Claude Code app into your GitHub org, which grants:
- Read/write access to repos you explicitly add
- Event subscriptions (issue opened, PR opened, PR review requested, push, release, etc.)
- PR creation and commenting under the claude-code-bot[bot] identity
A routine triggered by pull_request.opened receives the PR context (title, body, diff, author) as input. It can run tests, leave review comments, suggest changes, or approve trivial PRs (if you grant that permission).
The permission model is per-repo, and the identity is clearly marked as a bot, which keeps your audit trail clean.
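Putting the PR trigger together, a first-pass review routine might be defined like this sketch; the `github_event` key and the `{{ pr.* }}` templating syntax are assumptions about how PR context reaches the prompt:

```yaml
name: pr-first-pass
trigger:
  github_event: pull_request.opened   # assumed trigger key
mcp_servers:
  - github
  - bash
system_prompt: |
  A PR was just opened: "{{ pr.title }}" by {{ pr.author }}.
  # templating syntax above is an assumption, not confirmed
  Run the test suite against the branch, review the diff for obvious
  correctness issues, and leave review comments. Do not approve or merge.
```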
When to Use Routines vs Alternatives
| If you... | Use |
|---|---|
| Have a task you run as a Claude Code session today | Routines |
| Need scheduled runs under 2 hours | Routines |
| Need long-running (>2hr) stateful agent workflows | Claude Managed Agents |
| Need GitHub-native triggers on PR/issue events | Routines |
| Need ultra-custom container runtime (GPU, niche languages) | DIY on Modal/Fly/AWS |
| Just want to schedule a Python script with an API call | GitHub Actions is probably enough |
The last row matters. Not every scheduled task needs an LLM in it. If your routine is "call an API, transform, post to Slack," GitHub Actions with a 10-line Python script is cheaper and simpler. Routines becomes the right choice when the work genuinely benefits from Claude's reasoning — picking what to report, writing a narrative, reviewing a change for correctness, classifying open-ended input.
Operational Patterns That Work
Pattern 1: Keep routines narrow.
A single routine should do one thing. "Nightly dep bumps" is a good routine. "Nightly dep bumps and also check for TODO comments and also sync docs" is three routines pretending to be one. Breaking them apart makes failures isolated and outputs readable.
Pattern 2: Fail loudly.
Route failures to a dedicated Slack channel or PagerDuty. The default mode we see is routines silently failing for weeks because no one set up failure notification. Anthropic's observability UI surfaces failures, but that only helps if someone looks at it.
Pattern 3: Version the system prompt.
Commit the routine YAML and system prompt into a repo. Change control matters because prompt tweaks can meaningfully change routine behavior and "why did last night's run behave differently" is answerable only if the prompt is versioned.
Pattern 4: Use cheap models for wide routines.
Triage, classification, and summarization routines are usually Haiku 4.5 tasks. Use Sonnet or Opus only when the reasoning depth genuinely matters. The cost difference across a 30-day month is substantial.
Pattern 5: Include a self-check.
At the end of the routine, have Claude summarize what it did, compare it to the routine's stated goal, and flag anomalies. "I processed 14 tickets but one ticket had malformed data I could not classify" is the kind of note that turns a silently-broken routine into a debuggable one.
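The self-check needs no new schema; it can simply be the closing lines of any routine's system prompt, as in this fragment:

```yaml
system_prompt: |
  ...task instructions...
  Before finishing:
  1. Summarize what you did in three bullet points
  2. Compare the result against this routine's stated goal
  3. Flag any input you skipped or could not process, and why
```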
The Strategic Frame
Scheduled AI automation has been available through LangChain, LlamaIndex, n8n, Make, and Zapier for two years. What Routines changes is the ergonomics for Claude Code users specifically. If you already describe tasks in Claude Code sessions, the routine is the same description plus a trigger. You skip the glue code, the DevOps, the cost monitoring setup, and the bot identity provisioning.
The bet Anthropic is making: developers who adopted Claude Code locally will want to deploy it everywhere, and the lowest-friction cloud extension of the local experience wins. Two weeks of adoption data suggests the bet is landing — GitHub Marketplace installs were already in the five figures by April 15, one week after launch.
For your team, the action item is to inventory three to five recurring tasks that currently require a human to sit and do them, then pick the highest-value one and convert it to a routine this week. The first time a routine runs unattended at 3 AM and a useful PR is waiting for you at 9 AM, the model clicks.