Anthropic is fitting Claude Code, its AI-powered coding assistant, with an auto mode that lets Claude handle permissions on the user's behalf, with safeguards that monitor actions before they run.

Auto mode was announced March 24; instructions for getting started are in the introductory blog post. The capability is launching in research preview for Claude Team users and is due to roll out to enterprise and API users in the coming days, according to Anthropic.

The company explained that Claude Code's default permissions are conservative: every file write and Bash command asks for approval. While this is a safe default, it means users cannot start a large task and walk away. Some developers bypass permission checks with --dangerously-skip-permissions, but skipping permissions can lead to dangerous and destructive outcomes and should not be used outside isolated environments. Auto mode is a middle path, running longer tasks with fewer interruptions while introducing less risk than skipping all permissions.

Before each tool call runs, a classifier reviews it for potentially destructive actions such as mass file deletion, sensitive data exfiltration, or malicious code execution, Anthropic said. Actions deemed safe proceed; risky ones are blocked, redirecting Claude to take a different approach.

Auto mode reduces risk compared with --dangerously-skip-permissions but does not eliminate it entirely. The classifier may still allow some risky actions, for example when user intent is ambiguous or when Claude lacks enough context about an environment to know an action could create additional risk. It may also occasionally block benign actions. Anthropic plans to continue improving the user experience over time.
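The gating pattern the article describes, reviewing each proposed tool call before execution and allowing or blocking it, can be sketched as follows. This is an illustrative sketch only: the function names and pattern rules are hypothetical stand-ins, not Anthropic's actual classifier, which the article does not detail.

```python
# Hypothetical sketch of pre-execution gating for an agent's tool calls.
# The rules below are illustrative placeholders, not Anthropic's classifier.
import re

# Simplified stand-ins for the risk categories named in the article:
# mass deletion, data exfiltration, and malicious code execution.
DESTRUCTIVE_PATTERNS = [
    r"rm\s+-rf\s+/",           # mass file deletion
    r"curl\s+.*\|\s*(ba)?sh",  # piping remote code straight into a shell
    r"\.env\b",                # touching files likely to contain secrets
]

def classify_tool_call(command: str) -> str:
    """Return 'allow' or 'block' for a proposed shell command."""
    for pattern in DESTRUCTIVE_PATTERNS:
        if re.search(pattern, command):
            return "block"
    return "allow"

def run_with_auto_mode(command: str) -> str:
    """Gate a tool call: safe actions proceed, risky ones are blocked."""
    if classify_tool_call(command) == "block":
        # A blocked call would redirect the agent to try another approach.
        return f"blocked: {command}"
    return f"running: {command}"
```

A real classifier would weigh context and intent rather than match fixed patterns, which is why, as the article notes, such a system can both miss risky actions and occasionally block benign ones.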