# Code to Claude Context — Use Claude's 1M Window for Whole-Repo Tasks
Claude Opus 4.7 and Sonnet 4.6 both handle 1 million tokens of context — enough for a medium-sized codebase plus your prompt. freefilestoprompt.app is built for exactly this workflow: drag your repo, set the target model to Claude, watch the budget bar fill, auto-fit, copy, paste into Claude.
## Why Claude is good at whole-repo tasks
Three reasons Claude handles repo-scale prompts well: (1) the 1M window means you don't have to chunk and lose cross-file references; (2) Claude's training emphasized code review and structured analysis, so it produces useful output on multi-file inputs; (3) Anthropic explicitly documents the <file path="..."> format for multi-file context — which is freefilestoprompt.app's default output format.
## Suggested target models per repo size
| Repo size (tokens) | Recommended model | Why |
|---|---|---|
| ≤ 200K | Claude Haiku 4.5 | Cheapest tier, full repo fits, fast |
| 200K - 1M | Claude Sonnet 4.6 or Opus 4.7 | 1M ctx, best price/quality for review tasks |
| 1M - 10M | Llama 4 Scout (Groq) or Gemini 2.5 Pro | Bigger context windows for monorepos |
| > 10M | Pack a subset | No model fits cleanly — use auto-fit with high-priority pins |
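The table above reduces to a simple threshold lookup. This sketch is purely illustrative (the function is not part of freefilestoprompt.app or any SDK); thresholds and model names come straight from the table:

```python
def recommend_model(repo_tokens: int) -> str:
    """Map a repo's token count to the model tier suggested in the table."""
    if repo_tokens <= 200_000:
        return "Claude Haiku 4.5"
    if repo_tokens <= 1_000_000:
        return "Claude Sonnet 4.6 or Opus 4.7"
    if repo_tokens <= 10_000_000:
        return "Llama 4 Scout (Groq) or Gemini 2.5 Pro"
    # Nothing fits cleanly past 10M: fall back to selective packing.
    return "pack a subset with auto-fit"
```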
## The Anthropic-recommended XML format
freefilestoprompt.app's default output wraps each file in <file path="...">CONTENT</file> tags. This is what Anthropic's docs explicitly recommend for multi-document prompts. Claude is trained to recognize this structure and reference files by path in its responses. Optional: include a <directory_tree> block at the top so Claude understands the file hierarchy at a glance.
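The format is simple enough to sketch by hand. Here is a minimal packer in the same shape (illustrative only; `pack_repo` is a hypothetical helper, not freefilestoprompt.app's actual code):

```python
def pack_repo(files: dict[str, str]) -> str:
    """Wrap each file in <file path="..."> tags, preceded by a directory tree."""
    tree = "\n".join(sorted(files))
    parts = [f"<directory_tree>\n{tree}\n</directory_tree>"]
    for path in sorted(files):
        parts.append(f'<file path="{path}">\n{files[path]}\n</file>')
    return "\n\n".join(parts)

repo = {
    "src/app.ts": "export const app = () => 'hello';",
    "src/index.ts": "import { app } from './app';",
}
prompt = pack_repo(repo)
```

Paste the resulting string into Claude (API or chat) followed by your task instruction; Claude will cite files by their `path` attribute in its answer.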
## Common Claude + repo prompts that benefit
- Code review: "Review this codebase for bugs, security issues, and refactoring opportunities. Use file paths in your response."
- Architecture explanation: "Explain the overall architecture of this codebase. Walk through the request flow."
- Migration planning: "Plan a migration from express to fastify. List specific files that need changes."
- Test coverage: "Identify untested code paths. Suggest test cases."
- Onboarding doc: "Generate an onboarding doc for a new developer joining this team."
## Try freefilestoprompt.app — Free, No Sign-Up
Drop files, set a target model, get one packed prompt. Runs entirely in your browser.
Open Files to Prompt →

## Frequently Asked Questions
### Is XML output really better than Markdown for Claude?
For multi-file context, yes. Anthropic's docs explicitly recommend XML tags for multi-document prompts (the <file path="..."> format described above), and Claude is trained to recognize that structure and reference files by path in its responses.
### Does Claude actually read 1M tokens?
Yes: both Opus 4.7 and Sonnet 4.6 reliably attend across 1M-token contexts in Anthropic's needle-in-a-haystack tests, and quality stays consistent through the window. A full window does cost more, though: 1M input tokens at $3 per million works out to $3 per Sonnet 4.6 call.
### What about prompt caching with Claude?
Anthropic's prompt cache offers up to 90% discount on cached input tokens. For repeated repo-context prompts (e.g., asking multiple questions about the same repo), enable caching in your API call — the same packed prompt gets cached once and reused cheaply.
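With the raw Anthropic Messages API, caching is enabled by a `cache_control` marker on the content block that holds the packed repo. A sketch of the request body, built as a plain dict (the model id string is a placeholder, and `build_cached_request` is a hypothetical helper):

```python
def build_cached_request(packed_repo: str, question: str) -> dict:
    """Build a Messages API payload that caches the packed repo prefix."""
    return {
        "model": "claude-sonnet-4-6",  # placeholder id for the doc's Sonnet 4.6
        "max_tokens": 8000,
        "system": [
            {
                "type": "text",
                "text": packed_repo,
                # Marks the repo block as cacheable; subsequent calls with the
                # same prefix read it from cache at the discounted rate.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": question}],
    }
```

Ask your first question, then reuse the identical `packed_repo` string for follow-ups; only the short user question changes, so the expensive prefix is billed at cache-read rates.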
### Should I pack everything or be selective?
Be selective — at 1M tokens, $3 per Sonnet 4.6 call adds up fast. Mark high-signal files high priority (entry points, type contracts, files relevant to your question), drop low-signal files (lockfiles, generated code, large fixtures). Auto-fit handles the math.
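One way to picture the auto-fit step is a greedy pass over files sorted by priority, filling the token budget until nothing more fits. This is an illustrative sketch, not freefilestoprompt.app's actual algorithm:

```python
def auto_fit(files: list[tuple[str, int, int]], budget: int) -> list[str]:
    """Pick files under a token budget.

    files: (path, token_count, priority) tuples; higher priority wins.
    Returns the paths that fit, highest-priority first.
    """
    chosen, used = [], 0
    for path, tokens, _prio in sorted(files, key=lambda f: -f[2]):
        if used + tokens <= budget:
            chosen.append(path)
            used += tokens
    return chosen
```

For example, with a 700-token budget, a pinned 100-token entry point and a 500-token types file fit, while a lower-priority 600-token fixture is dropped.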
### What's the largest repo I can effectively use with Claude Opus 4.7?
About 800K tokens of code: the 1M window minus ~8K reserved for output, minus a safety headroom. That maps to roughly 50-80K lines of typical TypeScript or Python. For larger codebases, use selective packing or move to a bigger-window model like Llama 4 Scout (10M ctx).
### Can I use this with the Claude.ai chat interface?
Yes. Generate the packed prompt in freefilestoprompt.app, copy, paste into a Claude.ai conversation. Works the same as the API — the chat UI accepts long prompts including XML-tagged files.