
Devin for Terminal supports multiple AI models. You can choose the model that best fits your task, optimizing for capability, speed, or cost.

Available Models

Models release frequently. We typically support the latest and greatest models from Anthropic, OpenAI, Google, and Cognition within minutes of their launch. We also support a number of leading open-source models, such as Kimi and GLM. To stay up to date on model releases, consider following the Cognition X account.
Short names like opus, sonnet, swe, codex, and gemini always resolve to the latest version in that model family.

Reasoning / Thinking Levels

Some models support configurable reasoning levels, which control how much compute the model spends “thinking” before responding. You can cycle the thinking level with Alt+T (macOS: Opt+T) during a session.

Setting the Model

```shell
devin --model opus -- refactor this module
devin --model sonnet -- explain this code
```
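If you switch models often, one convenient pattern is a small shell helper that maps a task type to a model short name. This is purely an illustrative sketch: only the `--model` flag and the short names (`opus`, `swe`, `sonnet`) come from this page; the helper name and task categories are assumptions.

```shell
# Illustrative helper (not part of the Devin CLI): pick a model short name
# based on a task type, then print the resulting devin invocation.
# Only --model and the short names are documented; the rest is hypothetical.
devin_for() {
  case "$1" in
    refactor) model=opus ;;   # deep, multi-file work
    quick)    model=swe ;;    # fast, cost-sensitive edits
    *)        model=sonnet ;; # general default
  esac
  shift
  # Print the command for inspection; remove `echo` to actually run it.
  echo devin --model "$model" -- "$@"
}

devin_for refactor "restructure the auth module"
# prints: devin --model opus -- restructure the auth module
```

Dropping the `echo` turns this into a real dispatcher; keeping it lets you preview which model a given task type would use.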

Model Selection Tips

The right language model varies widely from person to person and task to task. Many engineers working on the same project are each convinced that their model is the best one for it, despite using different models. The reality is that model performance depends on your personal usage patterns and writing style. We therefore strongly recommend trying multiple models to see which you prefer. At a minimum, try swe, gpt, and opus; we find that these three cover the vast majority of use cases.

Complex refactoring

Use opus or gpt for multi-file refactors, architecture changes, and tasks requiring deep reasoning.

Quick edits / cost-sensitive

Use swe for straightforward edits, bug fixes, and questions. It's fast and inexpensive while still reasonably capable.

Enterprise teams can restrict which models are available through Team Settings.