- AI’s capability to handle complex work is accelerating fast, with coding task complexity doubling roughly every 70 days, making tool choice and staying current a real productivity lever.
- As coding gets cheaper, the bottleneck shifts to product judgment—deciding what to build and iterating with users becomes the scarce skill.
- Teams are rebalancing around this shift, moving from traditional 4–8 engineers per PM toward 2:1, 1:1, or even combined engineer-PM roles.
- Engineers who talk to customers, build empathy, and make product calls can out-iterate larger teams that wait on specs.
- Career-wise, the people and team you’ll work with matter more than the company logo, and vague team placement is a red flag.
Three things you'll walk away with after reading this:
- The constraint is shifting upstream. As code generation gets cheaper, the bottleneck moves from writing software to knowing what software to write. Engineers who talk to users directly are outpacing entire teams.
- Generated code carries hidden costs. Laurence Moroney's mortgage-versus-credit-card framework offers a concrete way to evaluate whether any piece of AI-produced code is an asset or a liability.
- Small, self-hosted models are the underserved opportunity. The market is splitting between hosted mega-models and local small models, and the skills gap on the small side is where careers get built.
In a Stanford CS230 lecture from late 2025, Andrew Ng made a straightforward claim: building an AI career right now is more viable than at any previous point. Then he and guest speaker Laurence Moroney spent an hour supporting that claim with hiring data, stories from the front lines, and a few observations grounded in how the industry actually works.
This isn't a motivational summary. It's a compressed version of the technical and strategic points both speakers made, organized by theme.
The acceleration curve
Ng cited research from METR showing that the complexity of tasks AI handles, measured by the time a human needs for the equivalent work, doubles every seven months across general tasks. For coding specifically, the doubling period is approximately 70 days.
This isn't about benchmark scores reaching some threshold. It's about expanding scope. Tasks that required 10 minutes of human effort, then 20, then 40, keep moving into territory where AI handles them competently. Combined with the current toolkit (large language models, retrieval-augmented generation, agentic workflows, voice interfaces), a single developer can now ship software that was beyond anyone's capability twelve months ago.
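The compounding here is easy to underestimate. A back-of-the-envelope check, using only the doubling periods cited in the talk (everything else is plain arithmetic):

```python
# Back-of-the-envelope: how much does the AI-handled task horizon
# grow in one year under the doubling periods cited in the talk?

def horizon_multiplier(doubling_period_days: float, elapsed_days: float = 365) -> float:
    """Factor by which the task horizon grows after `elapsed_days`."""
    return 2 ** (elapsed_days / doubling_period_days)

coding = horizon_multiplier(70)           # coding: doubles every ~70 days
general = horizon_multiplier(7 * 30.44)   # general tasks: every ~7 months

print(f"coding horizon after one year:  {coding:.1f}x")   # ~37x
print(f"general horizon after one year: {general:.1f}x")  # ~3.3x
```

On this trend, a 10-minute coding-task horizon today becomes roughly a six-hour horizon a year from now, which is why the 10, then 20, then 40 minute progression keeps feeling abrupt.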
Ng mentioned that his preferred coding tool at the time was Claude Code, while noting his preference shifts every few months as the field moves. His broader point: falling even half a generation behind on AI coding tools creates a measurable productivity gap. In this specific domain, the progress genuinely matches the rhetoric.
When building code gets cheap, deciding what to build gets expensive
Ng described his own development cycle as a tight loop: write code, show it to users, collect feedback, revise understanding, write more code. The writing part has accelerated dramatically. The product thinking part hasn't.
This creates a structural reorganization in how teams operate. The traditional engineer-to-product-manager ratio in Silicon Valley has run between 4:1 and 8:1. Ng is now seeing teams propose 2:1 or even 1:1 ratios. Some are eliminating the distinction entirely, collapsing both roles into a single person.
He shared a past mistake: pushing engineers toward product responsibilities and making technically strong people feel inadequate because they weren't natural product thinkers. But the underlying observation holds. Engineers who develop user empathy, who talk to customers and make judgment calls about priorities, iterate faster than anyone else in his experience. When you're not waiting for someone else to bring your product to users, the feedback loop tightens dramatically.
The team matters more than the brand
Ng's other major argument: the people on your immediate team determine your growth more than the company logo on your badge.
He told the story of a Stanford student who took a job at a company with a prominent AI brand. The company wouldn't reveal which team he'd join until after he signed. He ended up doing backend Java payment processing. Not bad work, but not what an AI student prepared for. He left within a year.
The detail that makes this stick: Ng told this story in a previous year's lecture, and then a different student went through the identical experience with the same company. If an employer won't tell you who you'll work with before you commit, that's information worth weighing heavily.
The engineer who solved every problem but one
Moroney opened with a case study. A young engineer with an excellent resume and strong technical skills applied to more than 300 positions after a layoff. He advanced deep into interview processes at Meta, Microsoft, and Blue Origin. Solved every coding challenge. Kept getting rejected.
The gap wasn't technical. Recruiting materials had told him to "stand his ground" and "have a backbone" during interviews. He interpreted this as combativeness when interviewers challenged his solutions. Moroney identified the pattern immediately during mock interviews: a brilliant engineer whom no hiring manager would want on their team.
After working on it, the engineer interviewed at a company that explicitly valued collaboration. He got the offer and doubled his salary.
The point is straightforward: companies are evaluating whether they want to work with you, not just whether you can solve their problems.
From demos to production
The hiring market has shifted from "can you build something impressive" to "can you build something useful that ships." A couple of years ago, building an image classifier could justify a six-figure offer. Today, every hiring conversation centers on production experience. What have you shipped? What business outcome did it drive?
This traces to the overcorrection following pandemic-era overhiring. Between 2022 and 2023, companies hired aggressively, partly from pandemic backlogs and partly because "AI" on a resume commanded a premium. Many hires went to people who weren't yet qualified. The subsequent correction means employers now want evidence, not potential.
Moroney was direct: demonstrating business impact is no longer optional.
Evaluating the cost of generated code
Moroney framed technical debt using a financial analogy that makes the trade-offs concrete.
A mortgage is productive debt. You borrow half a million, pay back a million over thirty years, but the house appreciates and you eliminate rent. A high-interest credit card purchase is unproductive debt. You pay $500 for $200 shoes.
Every piece of software creates ongoing obligations: bugs, documentation, feature requests, maintenance. The question is whether you're building equity or accumulating liability.
Productive technical debt means clear objectives met, business value delivered, and code others can read and maintain. Unproductive technical debt means solutions searching for problems, spaghetti code from extended prompting sessions, and the VP who subscribes to a low-code platform and ships code the engineering team inherits.
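The analogy's numbers can be made concrete. The sketch below is my arithmetic, not the speaker's: it assumes the $1M is repaid in equal monthly installments, backs out the implied interest rate of that mortgage by bisection, and compares it with the shoes' markup.

```python
def implied_annual_rate(principal: float, total_repaid: float, years: int) -> float:
    """Annual rate (nominal, monthly compounding) implied by repaying
    `total_repaid` in equal monthly payments on a `principal` loan."""
    months = years * 12
    payment = total_repaid / months

    def present_value(monthly_rate: float) -> float:
        # Standard annuity present value of the payment stream.
        return payment * (1 - (1 + monthly_rate) ** -months) / monthly_rate

    lo, hi = 1e-9, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(mid) > principal:  # PV too high -> rate too low
            lo = mid
        else:
            hi = mid
    return lo * 12

mortgage_rate = implied_annual_rate(500_000, 1_000_000, 30)
shoe_markup = (500 - 200) / 200

print(f"mortgage: ~{mortgage_rate:.1%}/yr, offset by appreciation and avoided rent")
print(f"credit-card shoes: {shoe_markup:.0%} premium, offset by nothing")
```

Paying back double the principal works out to roughly a 5% annual rate over 30 years, which appreciation and avoided rent can beat; the shoes carry a 150% premium that nothing offsets. The same question applies to a block of generated code: what rate are you paying, and what asset is it buying?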
Moroney shared his own experience building a macOS application where code generation models kept producing iOS APIs, because the training data overwhelmingly favors iPhone development. Trying to fix this through prompting spiraled into increasingly tangled output. Sometimes the answer is still writing code by hand.
The engagement-to-accuracy pipeline
Moroney's observation about industry hype was blunt: social media rewards engagement, not accuracy. LinkedIn in particular is "absolutely overwhelmed with influencers posting things they've used Gemini or GPT to write." The algorithm amplifies this, creating a feedback loop of engagement-optimized noise.
He described a European company CEO who approached him wanting to "implement an agent." Moroney's first question: why? After working through layers of LinkedIn-fueled enthusiasm, they identified what the CEO actually needed: making salespeople more efficient.
The salespeople were spending 80% of their time researching prospects and 20% selling. An agentic AI pilot handled the research portion, recovering 10-15% of previously wasted time. Salespeople earned more commission. The company got measurable ROI. But the solution started with "what problem are we solving," not "which technology should we use."
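The leverage in those numbers is worth spelling out. The 80/20 split and the 10-15% recovery are figures from the talk; the capacity calculation below is mine:

```python
def selling_capacity_gain(research_share: float, recovered_share: float) -> float:
    """Relative growth in selling time when `recovered_share` of total
    work time shifts from research over to selling."""
    selling_before = 1 - research_share
    selling_after = selling_before + recovered_share
    return selling_after / selling_before - 1

# Figures from the talk: 80% of time on research, 10-15% of total time recovered.
for recovered in (0.10, 0.15):
    gain = selling_capacity_gain(0.80, recovered)
    print(f"recover {recovered:.0%} of the week -> selling time up {gain:.0%}")
```

Because selling was only 20% of the week to begin with, recovering 10-15% of total time is a 50-75% increase in selling capacity, which is why the commission and ROI effects were large relative to the modest-sounding time savings.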
Moroney referenced a McKinsey finding that roughly 85% of corporate AI projects fail, primarily because they're poorly scoped. The technology works. The problem definition often doesn't.
When filters create the bias they're meant to prevent
Moroney walked through Gemini's image generation problems from a couple of years back. He tested prompts requesting images of women from different ethnicities in the same scene: Asian, Indian, Black, Latina. All produced results.
Then he asked for a Caucasian woman. The model refused, citing concerns about "harmful stereotypes and biases." Asking for a "white" woman got the same refusal. But asking for an "Irish" woman worked, and every generated image had red hair. Only about eight percent of Irish people are redheads. The safety filter was reinforcing the very stereotype it claimed to prevent.
That filter damaged Gemini's reputation and, by extension, Google's. Responsible AI used to be about aspirational social goals. Now it means making sure the product functions correctly and doesn't embarrass the organization. When those priorities invert, you get exactly what happened here.
The split between hosted and self-hosted
Moroney's forward-looking prediction: the AI industry is bifurcating. "Big AI" continues pushing toward larger hosted models and AGI, driven by Google, Anthropic, and OpenAI. "Small AI" is the expansion of self-hostable, open-weight models that companies run on their own infrastructure.
The small side is underserved. Moroney cited Y Combinator data showing 80% of their portfolio companies use small models, many originating from China. The skills that matter: fine-tuning for specific tasks, running models on constrained hardware, building applications on self-hosted inference.
He gave a specific example from the film industry. Studios have extreme IP protection requirements. They cannot share plot details with GPT or Gemini, because that means sharing intellectual property with a third party. But the analysis opportunity is real: understanding audience patterns, optimizing release timing, studying what makes certain films succeed. Self-hosted small models solve the privacy constraint. A 7-billion-parameter model today performs at the level of a 50-billion-parameter model from a year ago.
The same dynamic applies across law, medicine, and any industry where data sovereignty is non-negotiable. Engineers who understand fine-tuning and edge deployment are building skills for the market that's forming, not the one that exists today.
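One common shape for this setup is a self-hosted, OpenAI-compatible endpoint (llama.cpp's server and Ollama both expose one) plus a routing rule that keeps protected material on-premises. The endpoint URLs and the sensitivity check below are illustrative assumptions of mine, not anything the talk specified:

```python
# Sketch: route prompts so protected material never leaves the building.
# Assumes a self-hosted, OpenAI-compatible inference server on localhost
# and a hosted API for everything else. All names here are hypothetical.

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"     # self-hosted small model
HOSTED_ENDPOINT = "https://api.example.com/v1/chat/completions"  # hypothetical hosted API

# Markers a studio might treat as protected IP; a real system would use
# document classification or data labels, not keyword matching.
SENSITIVE_MARKERS = {"script", "plot", "unreleased", "confidential"}

def choose_endpoint(prompt: str, tags: set[str]) -> str:
    """Route tagged-or-worded-as-sensitive requests to the local model;
    everything else may use the hosted model."""
    lowered = prompt.lower()
    if tags & SENSITIVE_MARKERS or any(m in lowered for m in SENSITIVE_MARKERS):
        return LOCAL_ENDPOINT
    return HOSTED_ENDPOINT

print(choose_endpoint("Summarize act two of our unreleased script", set()))
print(choose_endpoint("What release windows worked for similar films?", {"public"}))
```

The point of the sketch is the boundary, not the keyword list: once an on-premises model sits behind the same API shape as the hosted one, the data-sovereignty decision becomes a one-line routing rule rather than an application rewrite.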
What survives the correction
Moroney closed with a structural framework: hype at the top, massive venture capital investment underneath, inflated valuations, copycat products, and real value as a thin layer at the bottom. He's already seeing investment capital tighten. Companies that got funded because "AI" appeared on a pitch deck now face substantive scrutiny.
His analogy: the dotcom bubble burst, but Amazon and Google came through because they understood fundamentals. Pets.com ran Super Bowl commercials and couldn't handle the traffic that resulted. The companies built on substance survived. The ones built on narrative did not.
The shared conclusion
Ng's advice is "go build things." Moroney's is "be a trusted advisor." Both point to the same underlying idea: the people getting hired, funded, and producing results right now are the ones who understand the problem before they reach for the technology.
The job market is tighter than it was two years ago. But both speakers made a case, supported by specific examples and data, that for people who build and can articulate why what they built matters, the opportunity remains substantial.
Treat product sense as a core technical skill: ship something small, put it in front of users quickly, and tighten the loop between feedback and code. Stay close to the best AI coding tools because being even half a generation behind can compound into a big output gap. When job hunting, push for clarity on team, manager, and day-to-day work—if a company won’t tell you where you’ll land, assume you might not be building what you came for.