
How to Paste Multiple Files Into an LLM

The naive approach — opening each file, copying its contents, pasting into your LLM one by one — breaks down past 3-4 files. Modern context windows (1M tokens for Claude and Gemini, 400K for GPT-5, 10M for Llama 4 Scout) make multi-file prompts practical, but you need a tool that handles the packing math. freefilestoprompt.app is that tool.

The 30-second workflow

  1. Drag files (or a folder) into the drop zone. Your browser reads them with FileReader; nothing is uploaded. Each file becomes a row showing path + size + token count.
  2. Set your target model. Pick the LLM you'll paste into. The budget bar shows your context window minus reserved output (default 8000 tokens for the model's response).
  3. Mark priorities. Pin must-keep files (📌). Drop irrelevant ones (∅) — tests, build artifacts, lockfiles. Set the rest to high / medium / low priority.
  4. Click Auto-fit. The packer fills the budget greedily by priority. Excluded files stay dimmed in the list.
  5. Pick output format. XML for Claude, Markdown for general use, plain for simple boundaries.
  6. Copy → paste into your LLM. Done.
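The budgeting and auto-fit steps above can be sketched roughly as follows. This is an illustrative reconstruction, not the app's actual code: the type names, the priority ordering, and the silent skipping of oversized pinned files are all assumptions.

```typescript
type Priority = "pinned" | "high" | "medium" | "low" | "dropped";

interface FileEntry {
  path: string;
  tokens: number;
  priority: Priority;
}

// Higher tiers are packed first; "dropped" files are never packed.
const TIERS: Priority[] = ["pinned", "high", "medium", "low"];

// Greedy fill: walk the tiers in order and add any file that still
// fits in the remaining budget (context window minus reserved output).
function autoFit(
  files: FileEntry[],
  contextWindow: number,
  reservedOutput = 8000, // tokens reserved for the model's response
): FileEntry[] {
  let remaining = contextWindow - reservedOutput;
  const packed: FileEntry[] = [];
  for (const tier of TIERS) {
    for (const f of files.filter((f) => f.priority === tier)) {
      if (f.tokens <= remaining) {
        packed.push(f);
        remaining -= f.tokens;
      }
    }
  }
  return packed;
}
```

Note that a pinned file larger than the remaining budget is simply skipped in this sketch; a real UI would more likely warn you instead.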

What to include, what to drop

Always include (pin or high priority):

  - Top-level entry points (index.ts, main.py, App.tsx)
  - Type definitions, schemas, contracts
  - Route definitions, API surface
  - Files directly relevant to your prompt
  - README and architecture docs (if asking about overview)

Always exclude (drop):

  - node_modules/*, dist/*, .next/*, build/*
  - Lockfiles (package-lock.json, yarn.lock, Cargo.lock)
  - Generated code (Prisma client, GraphQL codegen output)
  - Snapshots, fixtures, large test data
  - Logs, build artifacts, dotfiles you don't care about

Choosing the right output format

XML wraps each file in <file path="...">CONTENT</file> tags. Anthropic recommends this for Claude. GPT and Gemini handle it cleanly too. Use XML by default.

Markdown uses ### File: path headers with fenced code blocks. Useful when you want the LLM to render the output back as Markdown (e.g., for a documentation prompt) or when pasting into a Markdown-rendered chat UI.

Plain uses === FILE: path === separators with no formatting. Smallest output size, but the LLM may have a slightly harder time parsing multi-file structure.
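A minimal sketch of the three wrappers described above. The delimiter strings come from this section; the exact whitespace and any escaping the app applies are assumptions.

```typescript
interface PackedFile {
  path: string;
  content: string;
}

type Format = "xml" | "markdown" | "plain";

// Wrap one file in the chosen delimiter style.
function wrap(f: PackedFile, format: Format): string {
  switch (format) {
    case "xml":
      return `<file path="${f.path}">\n${f.content}\n</file>`;
    case "markdown":
      return `### File: ${f.path}\n\n\`\`\`\n${f.content}\n\`\`\``;
    case "plain":
      return `=== FILE: ${f.path} ===\n${f.content}`;
  }
}

// Concatenate all packed files into one prompt-ready block.
function pack(files: PackedFile[], format: Format): string {
  return files.map((f) => wrap(f, format)).join("\n\n");
}
```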

Common mistakes

  - Pasting files one by one instead of as a single packed prompt, which loses the structure that shows the model how files relate.
  - Including lockfiles, generated code, or build artifacts: they burn thousands of tokens without helping the model.
  - Not reserving output tokens: if the prompt fills the entire context window, the model has no room left to respond.
  - Forgetting to set the target model: context budgets differ by more than an order of magnitude between models.

Try freefilestoprompt.app — Free, No Sign-Up

Drop files, set a target model, get one packed prompt. Runs entirely in your browser.

Open Files to Prompt →

Frequently Asked Questions

Can I paste files directly without dragging?

Yes — there is a paste textarea below the drop zone. Anything you paste becomes a virtual file with a timestamped name.

How accurate is the token count?

Within ~5% for typical text and code. The estimator slightly overestimates so packed prompts reliably fit. For exact per-model counts, use freetokencounter.app on the output.
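The article doesn't specify the estimator, but a deliberate overestimate can be built from the common "about 4 characters per token" rule of thumb plus a small safety pad. The divisor and the 5% pad below are illustrative assumptions, not the app's actual numbers:

```typescript
// Rough token estimate: ~4 chars/token is a common rule of thumb for
// English prose and code. The 5% pad biases the count high on purpose,
// so a prompt packed against this estimate reliably fits the window.
// (Illustrative heuristic, not the app's actual estimator.)
function estimateTokens(text: string): number {
  const base = text.length / 4;
  return Math.ceil(base * 1.05);
}
```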

Does it work for non-code files?

Yes — any text-based file. PDFs, Word docs, and other binaries are auto-skipped. Convert PDFs to text first (freepdftotext.app or similar) and drag the .txt in.
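Binary auto-skip can be approximated with a classic NUL-byte sniff: a zero byte in the first few kilobytes almost always means the file isn't text. This is a common heuristic, not necessarily what the app does:

```typescript
// Heuristic binary detection: scan the first sniffLen bytes for a NUL.
// Text encodings like UTF-8 never contain 0x00; PDFs, Word docs, and
// other binaries usually do within the first few KB.
function looksBinary(bytes: Uint8Array, sniffLen = 8192): boolean {
  const n = Math.min(bytes.length, sniffLen);
  for (let i = 0; i < n; i++) {
    if (bytes[i] === 0) return true;
  }
  return false;
}
```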

Can I save my file list for next time?

Per-path priorities are saved in localStorage, so when you re-add the same files their priorities are restored. The file contents themselves are not saved (privacy + storage). Saved named bundles are on the Pro roadmap.

What about non-LLM use cases?

freefilestoprompt.app outputs a single concatenated text block with delimiters. That's useful for any tool that takes large text input — RAG pipelines, embedding generators, document analysis tools, etc.

How do I include git diff or git log?

Generate them locally (`git diff > diff.txt`, `git log --oneline > log.txt`), then drag the resulting files into the drop zone. They get token-counted and packed like any other file.


by freesuite.app