Tanguy

What Claude Code Actually Chooses

We pointed Claude Code at real repos 2,430 times and watched what it chose. No tool names in any prompt. Open-ended questions only.

Key metrics: 3 models, 4 project types, 20 tool categories, 85.3% extraction rate.

Big Finding

Claude Code builds, not buys. Custom/DIY is the most common single label extracted, appearing in 12 of 20 categories. When asked "add feature flags," it builds a config system with env vars and percentage-based rollout instead of recommending LaunchDarkly. When asked "add auth" in Python, it writes JWT + bcrypt from scratch. When it does pick a tool, it picks decisively: GitHub Actions 94%, Stripe 91%, shadcn/ui 90%.
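The env-var-plus-percentage-rollout pattern described above is easy to picture in code. The sketch below is our illustrative stand-in for that pattern (function name, env-var scheme, and bucketing choice are ours), not Claude Code's actual output:

```python
import hashlib
import os

def flag_enabled(flag: str, user_id: str) -> bool:
    """Env-var-driven feature flag with percentage rollout.

    FLAG_<NAME>=on   -> always on
    FLAG_<NAME>=off  -> always off
    FLAG_<NAME>=25   -> on for ~25% of users, stable per user
    """
    raw = os.environ.get(f"FLAG_{flag.upper()}", "off")
    if raw == "on":
        return True
    if raw == "off" or not raw.isdigit():
        return False
    # Hash flag+user so each user lands in a stable bucket 0-99;
    # the same user always gets the same answer for a given flag.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    return bucket < int(raw)
```

Hashing the flag name together with the user ID keeps buckets independent across flags, so enabling one rollout does not correlate with another.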

Key Statistics

  • 2,430 responses: 3 models, 4 repos, 3 runs each
  • 3 models: Sonnet 4.5, Opus 4.5, Opus 4.6
  • 20 categories: CI/CD to real-time
  • 85.3% extraction rate: 2,073 parseable picks
  • 90% model agreement: 18 of 20 categories within-ecosystem

Build vs Buy

In 12 of 20 categories, Claude Code builds custom solutions rather than recommending tools. 252 total Custom/DIY picks, more than any individual tool.

  • Feature flags: 69% DIY
  • Auth (Python): 100% DIY
  • Auth (overall): 48% DIY
  • Observability: 22% DIY

The Default Stack

When Claude Code does pick a tool, that default propagates into every project it scaffolds, a large and growing share of new apps. These are the tools it reaches for first:

Vercel, PostgreSQL, Drizzle, NextAuth.js, Stripe, Tailwind CSS, shadcn/ui, Vitest, pnpm, GitHub Actions, Sentry, Resend, Zustand, React Hook Form.

Model Personalities

Sonnet 4.5 — Conventional

Redis 93% (Python caching), Prisma 79% (JS ORM), Celery 100% (Python jobs). Picks established tools.

Opus 4.5 — Balanced

Most likely to name a specific tool (86.7%). Distributes picks most evenly across alternatives.

Opus 4.6 — Forward-looking

Drizzle 100% (JS ORM), Inngest 50% (JS jobs), 0 Prisma picks in JS. Builds custom the most (11.4% — e.g., hand-rolled auth, in-memory caches).

Preference Signals

Frequently Picked

  • Resend over SendGrid
  • Vitest over Jest
  • pnpm over npm
  • Drizzle over Prisma (Opus 4.6; Sonnet picks Prisma)
  • shadcn/ui over MUI
  • Zustand over Redux

Rarely Picked

  • Jest: 31 alternative picks, rarely primary
  • Redux: 23 mentions, never primary
  • Prisma: 18 alternative picks
  • Express: absent entirely
  • npm: 40 alternative picks
  • LaunchDarkly: 11 alternative picks

Tool Leaderboard — Top 10

 #   Tool            Category        Share    Picks
 1   GitHub Actions  CI/CD           93.8%    152/162
 2   Stripe          Payments        91.4%    64/70
 3   shadcn/ui       UI Components   90.1%    64/71
 4   Vercel          Deployment      100%     86/86 (JS only)
 5   Tailwind CSS    Styling         68.4%    52/76
 6   Zustand         State Mgmt      64.8%    57/88
 7   Sentry          Observability   63.1%    101/160
 8   Resend          Email           62.7%    64/102
 9   Vitest          Testing         59.1%    101/171
10   PostgreSQL      Databases       58.4%    73/125

Against the Grain

Tools with large market share barely touched by Claude Code:

Redux (State Management): 0/88 primary picks, but 23 mentions. Zustand picked 57 times instead.

Express (API Layer): 0/119 primary picks. Absent entirely; framework-native routing preferred.

Jest (Testing): 7/171 primary picks. Only 4% primary, but 31 alternative picks. Known but not chosen.

yarn (Package Manager): 1/135 primary picks, but 51 alternative picks. Still well-known, rarely chosen.

The Recency Gradient

Newer models tend to pick newer tools. Within-ecosystem percentages shown.

Tool                      Sonnet 4.5 → Opus 4.6   Replaced by
Prisma (JS ORM)           79% → 0%                Drizzle (21% → 100%)
Celery (Python jobs)      100% → 0%               FastAPI BgTasks (0% → 44%)
Redis caching (Python)    93% → 29%               Custom/DIY (0% → 50%)

The Deployment Split

Deployment is fully stack-determined: Vercel for JS, Railway for Python. Traditional cloud providers got zero primary picks.

JavaScript (Next.js + React SPA)

Vercel: 100% — 86 of 86 frontend deployment picks, no runner-up.

Python (FastAPI)

Expected: AWS, GCP, Azure

Actual: Railway 82%, Docker 8%, Fly.io 5%, Render 5%

Frequently recommended as alternatives: Netlify (67 alt), Cloudflare Pages (30 alt), GitHub Pages (26 alt), DigitalOcean (7 alt).

Mentioned but never recommended (0 alt picks): AWS Amplify (24 mentions), Firebase Hosting (7 mentions), AWS App Runner (5 mentions).

Truly invisible (rarely even mentioned): AWS (EC2/ECS), Google Cloud, Azure, Heroku.

Where Models Disagree

Within each ecosystem, all three models agree in 18 of 20 categories. The five rows below cover the remaining within-ecosystem shifts plus the cross-language disagreements.

Category                     Sonnet 4.5    Opus 4.5              Opus 4.6
ORM (JS, Next.js)            Prisma 79%    Drizzle 60%           Drizzle 100%
Jobs (JS, Next.js)           BullMQ 50%    BullMQ 56%            Inngest 50%
Jobs (Python, FastAPI)       Celery 100%   FastAPI BgTasks 38%   FastAPI BgTasks 44%
Caching (cross-language)     Redis 71%     Redis 31%             Custom/DIY 32%
Real-time (cross-language)   SSE 23%       Custom/DIY 19%        Custom/DIY 20%

Source: amplifying.ai/research/claude-code-picks — Data collected February 2026 by Edwin Ong & Alex Vikati.