Key Takeaways
  1. Treat the constraints in Claude Code’s system prompt as practical coding guidelines born from real-world code review pain.
  2. Keep pull requests tightly scoped: fix what was asked and move cleanup, refactors, and extra features into separate tickets.
  3. Avoid noisy drive-by changes like adding docs/comments/types to untouched code; document only what you actually changed and only where clarity is needed.
  4. Don’t add defensive checks for impossible states inside trusted code; validate at system boundaries and rely on internal contracts.
  5. Resist premature abstraction and over-DRYing: inline one-offs, tolerate small duplication, and extract only once a pattern is clearly reused and stable.

The most useful part of Claude Code's 13,000-token system prompt isn't the identity framing or the tool descriptions. It's a section called "Doing tasks" that contains 14 explicit constraints on how code should be written.

These read like the accumulated frustrations of every senior engineer who's reviewed bad code. They're written as instructions for an AI, but they work equally well as team guidelines, code review checklists, or project rules for any engineering organization.

Here are all 14, extracted from the source, with context on why each one earns its place.


Don't expand the scope of a fix

"Don't add features, refactor code, or make 'improvements' beyond what was asked. A bug fix doesn't need surrounding code cleaned up. A simple feature doesn't need extra configurability."

Scope creep in pull requests is one of the most persistent problems in professional software development. You open a PR to fix a typo and close it having refactored the module. The code may be better, but the review surface is now enormous, QA can't isolate what changed, and you've blended unrelated concerns into a single changeset.

Application: One PR, one purpose. The cleanup gets its own ticket.


Leave existing documentation alone

"Don't add docstrings, comments, or type annotations to code you didn't change. Only add comments where the logic isn't self-evident."

Adding documentation to code you merely read while passing through creates noise in version control and merge conflicts in team workflows. If a function needs better docs, that's a separate task with its own review cycle.

Application: Documentation changes in a PR should explain the code you changed, not the code you happened to encounter.


Stop guarding against impossible states

"Don't add error handling, fallbacks, or validation for scenarios that can't happen. Trust internal code and framework guarantees. Only validate at system boundaries (user input, external APIs)."

This is the rule that generates the most debate, and it's the one with the highest payoff. Every unnecessary null check, every defensive guard against a state the type system already prevents, carries costs: maintenance burden, test surface, and a signal to the next reader that the types can't be trusted.

Application: Validation belongs at system boundaries. Inside your own code, trust the contracts you've built.
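
The boundary-versus-interior split can be sketched in a few lines. This is an illustration, not code from the prompt: parseUserId and cacheKey are invented names.

```typescript
type UserId = number;

// Boundary: untrusted input is parsed and validated exactly once.
function parseUserId(raw: unknown): UserId {
  const n = Number(raw);
  if (!Number.isInteger(n) || n <= 0) {
    throw new Error(`invalid user id: ${String(raw)}`);
  }
  return n;
}

// Interior: trusts the UserId contract. No re-validation, no null checks,
// no guard against states the boundary already ruled out.
function cacheKey(id: UserId): string {
  return `user:${id}`;
}
```

Once parseUserId has run at the edge, everything downstream can rely on the contract instead of repeating the guard.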


Don't abstract until you must

"Don't create helpers, utilities, or abstractions for one-time operations. Don't design for hypothetical future requirements."

A function called formatUserDisplayName() that gets called once isn't a utility. It's a detour that forces the reader to leave context and navigate to another file to understand what happens. The abstraction costs more in cognitive overhead than the inline code would.

Application: Extract on the second use. Not the first.
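
The inline alternative looks like this (the User shape and renderGreeting are hypothetical, standing in for any single-caller helper):

```typescript
interface User { firstName: string; lastName: string; }

// The one-off formatting lives at its single call site, so the reader
// never has to leave this function to learn what "display name" means.
function renderGreeting(user: User): string {
  const displayName = `${user.firstName} ${user.lastName}`.trim();
  return `Hello, ${displayName}!`;
}
```

If a second caller ever needs the same formatting, that is the moment to extract it.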


Repetition beats premature generalization

"The right amount of complexity is what the task actually requires, no speculative abstractions, but no half-finished implementations either. Three similar lines of code is better than a premature abstraction."

DRY (Don't Repeat Yourself) is taught as an absolute in most training programs. It isn't. Every extraction couples the call sites. When three similar operations share a function, changing one means auditing all three. Duplication preserves independence.

Application: Wait until the pattern is clear and stable before generalizing. Until then, copy-paste is a valid engineering choice.
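
The coupling cost can be made concrete. Order and the three formatters below are invented for illustration:

```typescript
interface Order { id: number; total: number; }

// Three similar lines, deliberately not funneled through a shared
// formatOrder() helper: if the audit format changes tomorrow, the
// receipt and the email subject are untouched.
const receiptLine  = (o: Order) => `Order ${o.id}: $${o.total}`;
const emailSubject = (o: Order) => `Your order ${o.id} ($${o.total})`;
const auditEntry   = (o: Order) => `order=${o.id} total=${o.total}`;
```

A shared helper would force every format change through one chokepoint and an audit of all three call sites; the duplication keeps them independent.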


Make decisions instead of adding flags

"Don't use feature flags or backwards-compatibility shims when you can just change the code."

Feature flags serve legitimate purposes in gradual rollouts and experimentation. They become problems when they substitute for decision-making. Every flag doubles the testing surface by creating a branch in logic that must be verified in both states.

Application: If you can just make the change directly, make the change. Reserve flags for deployment safety, not for postponing decisions.
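
A minimal sketch of the difference, with greet standing in for any behavior change:

```typescript
// With a flag, both branches exist and both must be tested, forever:
function greetFlagged(user: string, useNewGreeting: boolean): string {
  return useNewGreeting ? `Hi, ${user}!` : `Hello, ${user}.`;
}

// The decision made directly: one code path, one test.
function greet(user: string): string {
  return `Hi, ${user}!`;
}
```

The flagged version also leaves the old greeting as dead weight that someone must eventually remember to delete.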


Delete what's unused

"Avoid backwards-compatibility hacks like renaming unused _vars, re-exporting types, adding // removed comments for removed code. If you are certain that something is unused, you can delete it completely."

The _oldVariable rename, the // DEPRECATED comment, the re-export that exists in case some theoretical consumer depends on it: these are symptoms of fear-driven development. If the code is unused, remove it. Version control exists precisely for this purpose.

Application: Git is your backwards compatibility layer. Delete with confidence and recover from history if needed.


Understand before modifying

"In general, do not propose changes to code you haven't read. If a user asks about or wants you to modify a file, read it first. Understand existing code before suggesting modifications."

This sounds obvious. In practice, developers regularly suggest fixes based on their mental model of what the code looked like the last time they read it. The file may have been refactored. The function may already be fixed. The interface may have changed.

Application: Pull the latest version. Read the current state. Then write your change.


Edit files instead of creating them

"Do not create files unless they're absolutely necessary. Generally prefer editing an existing file to creating a new one, as this prevents file bloat and builds on existing work more effectively."

New files feel productive. They're also the primary driver of codebase sprawl. Every new file needs to be discovered, imported, organized, and maintained. Before creating utils/helpers/formatters/dateFormatter.ts, check whether there's already a place where date formatting happens.

Application: The answer to "where should this code live?" is almost always "in an existing file."


Read the error before changing strategy

"If an approach fails, diagnose why before switching tactics. Read the error, check your assumptions, try a focused fix. Don't retry the identical action blindly, but don't abandon a viable approach after a single failure either."

Two failure modes sit at opposite ends of the same spectrum: retrying the identical action expecting a different result, and abandoning a working approach because it failed once. Both stem from not reading the error output. The error message is the diagnostic. It deserves attention before any strategy change.

Application: "It didn't work" is not an analysis. What did the error message say? Which assumption was wrong?


Skip the time estimates

"Avoid giving time estimates or predictions for how long tasks will take, whether for your own work or for users planning projects. Focus on what needs to be done, not how long it might take."

This rule is pragmatic rather than philosophical. Time estimates for software tasks are unreliable enough to be misleading. A concrete list of tasks provides more useful information than a guess at duration.

Application: Enumerate the work. Break it into steps. Let the scope communicate the effort.


Weigh the blast radius

"Carefully consider the reversibility and blast radius of actions."

The prompt categorizes risky operations into four types: destructive (deleting files, dropping tables), hard to reverse (force-pushing, amending published commits), visible to others (pushing code, commenting on PRs, sending messages), and published to third parties (uploading to external services).

An important nuance: "A user approving an action once does NOT mean that they approve it in all contexts."

Application: Having write access and having reason to write are different things.


Investigate unfamiliar state before removing it

"If you discover unexpected state like unfamiliar files, branches, or configuration, investigate before deleting or overwriting, as it may represent in-progress work."

The lock file might be preventing a concurrent deployment. The unfamiliar branch might be someone's week of work. The configuration file you don't recognize might be keeping an environment alive.

Application: If you didn't create it and don't understand its purpose, ask before removing it.


Treat security as continuous, not periodic

"Be careful not to introduce security vulnerabilities such as command injection, XSS, SQL injection, and other OWASP top 10 vulnerabilities. If you notice that you wrote insecure code, immediately fix it."

No qualifiers. No "when appropriate." No "consider." The instruction is absolute: don't introduce vulnerabilities, and if you catch one, fix it now. Not in the next sprint. Not in a follow-up ticket.

Application: Security isn't a review phase. It's a property of every line of code you write.
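
For the SQL injection case in particular, the contrast fits in a few lines. The text/values pair below mimics the style of parameterized database drivers; no real database client is assumed:

```typescript
// Unsafe: concatenation lets hostile input rewrite the query structure.
function unsafeQuery(username: string): string {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// Safe: the SQL text is fixed; input travels separately as a parameter.
function safeQuery(username: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE name = $1", values: [username] };
}

const hostile = "x' OR '1'='1";
// unsafeQuery(hostile) now matches every row in the table;
// safeQuery(hostile) keeps the same input as inert data.
```

The same separation-of-code-and-data principle underlies the fixes for command injection and XSS as well.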


The internal variant

Anthropic employees receive additional rules not present in the public build:

"Default to writing no comments. Only add one when the WHY is non-obvious."

"Don't explain WHAT the code does, since well-named identifiers already do that."

"Before reporting a task complete, verify it actually works: run the test, execute the script, check the output."

"Never claim 'all tests pass' when output shows failures."

These reflect a team that expects self-documenting code, verified results, and honest reporting. They're standards worth adopting regardless of whether you're working with an AI or a human colleague.
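
The WHY-over-WHAT distinction fits in one example (batchAt and the 1-indexing convention are hypothetical):

```typescript
// A WHAT comment ("subtract one from batchId") would restate the code.
// The WHY earns its place: upstream batch ids are 1-indexed, so we
// convert before the 0-indexed array lookup.
function batchAt<T>(batches: T[], batchId: number): T {
  return batches[batchId - 1];
}
```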


Putting these to work

In project configuration: If you use Claude Code, add the rules that fit your team to your project's CLAUDE.md. The agent follows them.

As review criteria: Rules 1 through 7 make a practical checklist for pull request review. Print them. Reference them in review comments.

As team standards: These are specific enough to be actionable and general enough to apply across languages and frameworks.

As interview material: "When would you choose NOT to add error handling?" reveals more about a candidate's judgment than "explain what error handling is."

These 14 rules weren't designed in a planning meeting. They were refined through millions of interactions with an AI agent that writes code across every type of codebase. Each rule corresponds to a failure that happened often enough to warrant a permanent constraint.

That's what makes them transferable. They're not theoretical best practices. They're the residue of real mistakes, distilled into constraints that prevent them from recurring.

What This Means For You

Use these rules as a lightweight checklist for every PR: is the change single-purpose, are you avoiding unrelated documentation churn, and are you trusting internal invariants while validating only at the edges? When you feel the urge to add a helper, a flag, or extra “just in case” handling, pause and ask whether the task truly requires it today. If not, ship the simplest correct change and capture follow-up improvements as separate, reviewable work.