
Once upon a time, there was an engineer who needed AI to write better code. Every day, he pasted the same vague instructions into Claude and got mediocre results. One day, he realized that prompting isn't a technical skill. It's a storytelling skill. Because of that, he started treating every prompt like a scene he was directing, not a search query he was typing. Because of that, his outputs went from generic to specific, from flat to useful, from "close enough" to exactly right. Until finally, he stopped blaming the model and started writing prompts worth responding to.

That little structure I just used? It's the story spine, which appears as rule #4 of the 22 rules of storytelling that Pixar storyboard artist Emma Coats shared years ago. And many of those rules apply directly to how you write prompts for AI.

Prompting is storytelling. You're describing a world, a character, a situation, and a desired outcome to an audience of one: the model. The better you tell that story, the better the response. Pixar spent decades figuring out how to communicate complex ideas with clarity and emotion. We can borrow that.

Simplify. Focus. Combine. Hop over detours.

Pixar Rule #5: Simplify. Focus. Combine characters. Hop over detours. You'll feel like you're losing valuable stuff but it sets you free.

The most common prompting mistake is cramming everything into one request. You want the model to research, analyze, compare, summarize, format, and also make it sound casual but professional but not too casual. That's six tasks wearing a trenchcoat pretending to be one task.

Strip it down. What's the one thing you need from this prompt? Do that first. Then build.

Overstuffed prompt: "Research the latest trends in AI agent frameworks, compare the top 5, analyze their pricing models, suggest which one is best for a 10-person startup, write it as a blog post in a conversational tone with headers and bullet points, and include a comparison table."

Focused prompt: "Compare LangChain, CrewAI, and the Claude Agent SDK for a 10-person startup building customer support automation. Focus on: ease of setup, production readiness, and cost at 10,000 requests/day."

The focused version gets a better answer because the model can actually concentrate. You'll feel like you're losing the blog post formatting, the conversational tone, and the other two frameworks. You're not losing them. You're doing them next, once you have the substance right.
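The "do that first, then build" approach maps naturally onto a chain of model calls: get the substance right in one request, then feed the result into a follow-up request for formatting. Here's a minimal sketch; the `ask` function is a stand-in for whatever chat API you use, not a specific SDK, and the prompts are illustrative:

```python
def chain(ask, steps):
    """Run a sequence of prompt steps, feeding each result into the next.

    `ask` is any function that takes a prompt string and returns a
    response string -- a stand-in for a real chat API call.
    """
    result = ""
    for step in steps:
        # Each step is a template; {prev} is replaced with the last response.
        result = ask(step.format(prev=result))
    return result


steps = [
    # Step 1: substance only -- the focused comparison.
    "Compare LangChain, CrewAI, and the Claude Agent SDK for a 10-person "
    "startup building customer support automation. Focus on: ease of setup, "
    "production readiness, and cost at 10,000 requests/day.",
    # Step 2: presentation, applied to the substance from step 1.
    "Rewrite the following comparison as a conversational blog post with "
    "headers and a comparison table:\n\n{prev}",
]

# A stub standing in for a real API call, so the chain can run offline.
def echo_ask(prompt):
    return f"[model response to {len(prompt)} chars]"

final = chain(echo_ask, steps)
```

Each call stays focused on one job, which is exactly why the outputs stay sharp: the formatting pass never competes with the analysis pass for the model's attention.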

Come up with your ending before your middle

Pixar Rule #7: Come up with your ending before you figure out your middle. Seriously. Endings are hard, get yours working up front.

Most people write prompts that describe what they want the model to do. Better prompts describe what the output should look like when it's done.

Process-focused prompt: "Analyze this dataset and find interesting patterns."

Outcome-focused prompt: "Analyze this dataset. I need three specific findings that would change how our sales team prioritizes leads. For each finding, include the data that supports it and one concrete action the team should take on Monday."

The second prompt works because you defined the ending. You know what "done" looks like: three findings, each with evidence and an action item. The model now has a target to hit instead of an open field to wander through.

Before you write any prompt, ask yourself: what does the perfect response look like? Describe that. The model will figure out the middle.
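If it helps to make that a habit, the "describe the ending" move can be reduced to a two-field template. This is just a habit-forming sketch, not a library API; both the function name and the wording are illustrative:

```python
def outcome_prompt(task, done_looks_like):
    """Build a prompt that states the task but leads with the ending.

    `task` is what you want done; `done_looks_like` describes the
    finished response in concrete, checkable terms.
    """
    return (
        f"{task}\n\n"
        f"The finished response looks like this: {done_looks_like}\n"
        "Work backward from that target."
    )


prompt = outcome_prompt(
    task="Analyze this dataset.",
    done_looks_like=(
        "three specific findings that would change how our sales team "
        "prioritizes leads, each with supporting data and one concrete "
        "action the team should take on Monday"
    ),
)
```

If you can't fill in the second field, you don't have a prompt yet; you have a hope.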

Give your characters opinions

Pixar Rule #13: Give your characters opinions. Passive/malleable might seem likable to you as you write, but it's poison to the audience.

When you ask AI to "write about the pros and cons of remote work," you get a balanced, lifeless, both-sides essay that nobody wants to read. The model defaults to neutral because you didn't give it a position.

Give it one.

Neutral prompt: "Write about AI in education."

Opinionated prompt: "Write a piece arguing that AI tutoring will help struggling students more than it helps top performers, and that most edtech companies are building for the wrong end of the spectrum. Take a clear position and support it with specific examples."

The model doesn't need to believe the opinion. It needs the opinion to create focus, structure, and energy. An argument has a direction. A "balanced overview" has none.

This applies to technical prompts too. "Review this code" gets you generic observations. "Review this code assuming it will handle 50,000 concurrent users and identify the three places it will break first" gives the model a point of view.

What are the stakes?

Pixar Rule #16: What are the stakes? Give us reason to root for the character. What happens if they don't succeed? Stack the odds against.

The model doesn't know why your prompt matters unless you tell it. Context about consequences produces dramatically better output.

No stakes: "Write an email to the team about the new deployment process."

With stakes: "Write an email to the team about our new deployment process. Context: we had two production outages last month caused by manual deployment errors. One cost us a $200K client. The team is resistant to change because the old process felt familiar. This email needs to explain the new process clearly while acknowledging that the old way wasn't working. Tone: direct but not blaming."

Same task. Completely different output. The second prompt gives the model the stakes (outages, lost revenue, team resistance), and the model shapes every sentence around those stakes. It knows what matters. It knows what to emphasize. It knows what tone to strike.

Discount the first thing that comes to mind

Pixar Rule #12: Discount the first thing that comes to mind. And the second, third, fourth, fifth. Get the obvious out of the way. Surprise yourself.

This is the most underused prompting technique: explicitly telling the model to go past the obvious.

Standard prompt: "Give me 5 marketing angles for a B2B SaaS product."

Pixar-informed prompt: "Give me 10 marketing angles for a B2B SaaS product that sells expense management to mid-size companies. The first 5 will probably be obvious (save time, reduce errors, better visibility, etc). I want the second 5. The angles that competitors aren't using. The ones that would make a CFO stop scrolling."

You're not just asking for ideas. You're telling the model that the first wave of output isn't good enough, and you're giving it permission to go deeper. This works because language models do tend to generate the most statistically likely responses first. Telling them to move past those responses unlocks genuinely interesting output.

If you were your character, in this situation, how would you feel?

Pixar Rule #15: If you were your character, in this situation, how would you feel? Honesty lends credibility to unbelievable situations.

When you're writing prompts for content that involves people, decisions, or emotions, specify the emotional reality. Not just the facts.

Flat prompt: "Write a message telling a freelancer we're ending their contract."

Honest prompt: "Write a message ending a freelancer's contract. Context: they've done good work for 8 months but our budget was cut and we genuinely can't afford to continue. I want the message to be honest about the reason (budget, not performance), acknowledge their contribution specifically, and offer to write a recommendation. This person deserves a respectful ending, not corporate boilerplate."

The emotional context ("this person deserves a respectful ending") shapes the model's word choices, sentence structure, and tone in ways that a format specification never could.

What's the belief burning within you?

Pixar Rule #14: Why must you tell this story? What's the belief burning within you that your story feeds off of? That's the heart of it.

Every good prompt has a reason for existing. The model doesn't need to know your life story, but it does need to know your intent.

Generic: "Write a LinkedIn post about AI adoption."

With intent: "Write a LinkedIn post arguing that most companies are buying AI tools before they've cleaned their data, and that the uncomfortable truth is that a $50/month spreadsheet cleanup would deliver more value than a $50,000 AI platform. I consult with mid-market companies and I see this pattern weekly. I want the post to resonate with CTOs who suspect they're being sold something they're not ready for."

The intent changes everything. The model now knows the audience (CTOs), the belief (data quality before AI investment), the evidence base (consulting experience), and the emotional register (honest, slightly contrarian, peer-to-peer).

When you're stuck, list what wouldn't happen next

Pixar Rule #9: When you're stuck, make a list of what wouldn't happen next. Lots of times the material to get you unstuck will show up.

When the model gives you something that's not right and you can't articulate what you want instead, try the inversion.

Instead of: "Make this better"

Try: "This response is too formal, too long, and uses too many bullet points. The tone feels like a corporate whitepaper when I need it to sound like advice from a colleague over coffee. Don't use any bullet points. Keep it under 200 words. Be direct, not diplomatic."

Describing what you don't want is often easier than describing what you do want, and it's just as effective. The model subtracts the unwanted elements and what remains is usually closer to your vision.
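The inversion move also lends itself to a tiny template: list what's wrong with the last response, then list the hard rules for the rewrite. A sketch, purely illustrative (send the result as your next message in the same conversation):

```python
def revise_by_inversion(dont_wants, hard_constraints=None):
    """Build a revision prompt from what's wrong, not what's wanted.

    `dont_wants` names the failed qualities of the last response;
    `hard_constraints` are concrete rules the rewrite must follow.
    """
    lines = ["The previous response missed in these ways:"]
    lines += [f"- {d}" for d in dont_wants]
    if hard_constraints:
        lines.append("Rewrite it following these rules:")
        lines += [f"- {c}" for c in hard_constraints]
    return "\n".join(lines)


prompt = revise_by_inversion(
    dont_wants=[
        "too formal",
        "too long",
        "reads like a corporate whitepaper",
    ],
    hard_constraints=[
        "no bullet points",
        "under 200 words",
        "direct, not diplomatic",
    ],
)
```

Note the split: complaints first, constraints second. The complaints tell the model what to subtract; the constraints give it something checkable to hit.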

Putting it on paper lets you start fixing it

Pixar Rule #11: Putting it on paper lets you start fixing it. If it stays in your head, a perfect idea, you'll never share it with anyone.

The biggest prompting mistake isn't writing a bad prompt. It's not writing one at all.

People spend 10 minutes mentally composing the perfect prompt, editing it in their heads, second-guessing the wording, and then either typing something worse than what they imagined or not typing anything at all.

Just send it. The response will show you what was wrong with your prompt faster than any amount of mental editing. Prompting is iterative. The first prompt is a draft. The model's response is your feedback. The second prompt is your revision.

Pixar doesn't get the story right on the first pass. They get it on paper, watch it fail, and fix what's broken. That's the process. It works for prompting too.

No work is ever wasted

Pixar Rule #17: No work is ever wasted. If it's not working, let go and move on. It'll come back around to be useful later.

Sometimes a prompt conversation goes sideways. The model misunderstood your intent, or you realized halfway through that you're asking the wrong question entirely. The instinct is to keep pushing, to fix the existing thread, to make the sunk cost worth something.

Don't. Start a new conversation. Rewrite the prompt from scratch using what you learned from the failed attempt. The failed attempt wasn't wasted. It taught you what you actually need, which is often different from what you originally asked for.

The best prompters I've worked with abandon conversations frequently. Not because they're impatient, but because they're efficient. They recognize when a thread has drifted past the point of recovery, and they use what they learned to write a better opening prompt for the next attempt.

The essence of your prompt

Pixar Rule #22: What's the essence of your story? The most economical telling of it? If you know that, you can build out from there.

Before you write any prompt, answer this question in one sentence: what do I need and why?

"I need three concrete ways to reduce our API costs because we're burning through our budget twice as fast as projected."

That sentence is almost a prompt by itself. And if your actual prompt is four paragraphs long, every sentence in it should serve that one-sentence core. If it doesn't serve the core, cut it.

The most economical telling of your prompt is usually the best one. Not because shorter is always better, but because economy forces clarity. When you can't hide behind extra words, every word has to earn its place.

That's what Pixar figured out about stories, and it's the same constraint that separates prompts that get somewhere from prompts that don't.