Growing Junior Talent When AI Does the First Draft

talent-development · AI · leadership · product-management · engineering · career-growth

The calculus of developing junior talent has fundamentally changed. For decades, the career ladder in product and engineering organizations followed a predictable pattern: juniors cut their teeth on straightforward tasks, gradually building judgment through repetition and feedback. They wrote the first drafts, built the simpler features, handled the routine PRs. Seniors reviewed, refined, and taught through correction.

That entire model is collapsing.

AI tools now generate first drafts faster and often better than most junior employees. GitHub Copilot writes boilerplate code. ChatGPT drafts product specs. Cursor completes entire functions. The traditional "apprenticeship through grunt work" path—the one that produced most of today's senior leaders—no longer exists in the same form.

This creates an urgent question for CPOs and CTOs: How do you develop the next generation of senior talent when AI has automated away the traditional learning ground?

The Disappearing Middle Rungs

The problem isn't just theoretical. Talk to any engineering or product leader at a growth-stage company, and you'll hear the same concern: junior hires are struggling to develop the judgment and intuition that used to come naturally through repetitive practice.

Ethan Mollick, a professor at Wharton who studies AI's impact on work, has documented this dynamic extensively. In his research on AI and skill development, he's found that AI tools create a "jagged frontier" of capability—they're exceptional at some tasks and terrible at others, but the distribution doesn't match human skill curves. The tasks AI handles well are often precisely the ones that used to build foundational skills.

Consider a junior product manager learning to write specs. Traditionally, they'd write dozens of mediocre specs, get feedback, improve incrementally, and eventually develop an intuition for what good looks like. Now? They prompt an LLM, get a decent first draft, polish it, and ship. They're more productive immediately—but they're not building the same mental models.

The same pattern appears in engineering. Junior developers used to struggle through implementing basic CRUD operations, learning about state management, error handling, and edge cases through trial and error. Now, Copilot generates working code on the first try. The developer ships faster but doesn't internalize the patterns.
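
To make the point concrete, here is a minimal sketch of the kind of hand-written CRUD code this paragraph describes. The class and method names are illustrative, not from any real codebase; the value is that each branch encodes an edge case a junior only internalizes by hitting it themselves:

```python
from dataclasses import dataclass, field

@dataclass
class UserStore:
    """In-memory CRUD store; names are hypothetical, for illustration only."""
    _users: dict = field(default_factory=dict)

    def update_email(self, user_id: int, new_email: str) -> str:
        # Edge case 1: the record may not exist.
        if user_id not in self._users:
            raise KeyError(f"no user with id {user_id}")
        # Edge case 2: input may be malformed; validate before mutating state.
        if "@" not in new_email:
            raise ValueError(f"invalid email: {new_email!r}")
        old_email = self._users[user_id]["email"]
        self._users[user_id]["email"] = new_email
        # Edge case 3: callers often need the previous state (audit logs, undo).
        return old_email
```

The code itself is trivial; the lesson is in the branches. An AI-generated version hands those decisions over for free, which is exactly the learning that gets skipped.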

The efficiency gain is real. The learning loss is also real.

Why This Matters More Than Previous Automation Waves

Some will argue this is just another automation cycle. Spreadsheets eliminated calculation jobs. IDEs eliminated memorizing syntax. Stack Overflow eliminated needing to know everything. Each time, the profession adapted by moving up the value chain.

But this wave is different in two critical ways:

First, the speed. Previous tools automated discrete tasks over decades. AI is compressing multiple skill levels simultaneously and doing it in months, not years. Organizations don't have time for gradual adaptation.

Second, the nature of what's automated. Past tools eliminated tedious work but left the learning scaffolding intact. You still had to understand formulas to use Excel effectively. You still had to understand code to use an IDE. AI tools can produce output without requiring understanding—and that's precisely the problem for skill development.

As Simon Wardley noted in his research on evolution and organizational learning, when you remove the learning feedback loops, you don't just slow skill development—you fundamentally change what skills develop. Junior talent growing up with AI-first workflows will have different strengths and different gaps than previous generations.

The New Skills Taxonomy

If AI handles first drafts, what should junior talent actually be learning?

The answer requires rethinking the skills taxonomy entirely. Instead of organizing skills by seniority level (junior writes code, senior architects systems), organizations need to organize by what remains distinctly human even with AI assistance.

Here's a framework that's emerging across leading product and engineering organizations:

1. Judgment Over Execution

The most critical skill is knowing what to build and why—not how to build it. This has always been important, but it's now the primary differentiator.

Junior talent needs to develop:

  • Problem sensing: Identifying which problems matter before anyone asks
  • Scope definition: Knowing what's in bounds vs. out of bounds
  • Tradeoff evaluation: Understanding second-order effects of decisions

These skills can't be automated because they require context, organizational knowledge, and strategic intuition. But they also can't be learned passively. They require deliberate practice with real stakes.

Actionable approach: Reframe junior projects around decision-making rather than execution. Instead of "implement this feature," try "here's a user problem—propose three solution approaches with tradeoffs, then we'll discuss which to build."

2. Prompt Engineering as Product Thinking

The ability to effectively direct AI tools is becoming a core skill—but not in the way most people think.

The valuable skill isn't writing better prompts. It's knowing what to ask for in the first place. That requires understanding the problem space deeply enough to know what "good" looks like.

Shreyas Doshi, former product leader at Stripe, Twitter, and Google, has written extensively about this. He argues that the most valuable product skill has always been "knowing what question to ask"—and AI makes that skill even more critical. A junior PM who can clearly articulate what they need will outperform a senior who can't, even if the senior knows more about implementation.

Actionable approach: Train junior talent to evaluate AI output critically. Have them generate three versions of a spec or code solution using AI, then defend which is best and why. The learning happens in the evaluation, not the generation.

3. Systems Thinking and Integration

AI tools are exceptional at producing isolated components. They're terrible at understanding how components fit into larger systems.

This creates an opportunity: the ability to think in systems becomes dramatically more valuable when individual components are commoditized.

Junior engineers need to learn:

  • How services interact across boundaries
  • Where state lives and why it matters
  • What breaks when you change one piece

Junior PMs need to learn:

  • How features affect other parts of the product
  • Where user journeys intersect
  • What downstream effects a change creates

Actionable approach: Structure learning around integration points, not isolated features. Have juniors own the "glue" work—connecting AI-generated components into coherent systems. This is where they'll build intuition that AI can't replicate.
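
A small sketch of what that "glue" work looks like in practice. Assume the two inner functions were AI-generated in isolation (both are hypothetical stand-ins here); the integration layer is where the systems thinking lives, because the boundary has to decide what happens when the pieces disagree about shape or failure modes:

```python
import json

# Pretend these two components were AI-generated independently.
def fetch_raw_user(user_id: int) -> str:
    """Stand-in API client: returns a JSON string."""
    return json.dumps({"id": user_id, "name": "Ada", "plan": "pro"})

def render_greeting(name: str, plan: str) -> str:
    """Stand-in template helper: formats a greeting."""
    return f"Welcome back, {name} ({plan} plan)"

# The glue: this is what the junior owns.
def greet_user(user_id: int) -> str:
    try:
        payload = json.loads(fetch_raw_user(user_id))
    except json.JSONDecodeError:
        # Degrade gracefully rather than crashing the page.
        return "Welcome back"
    # Defend against missing fields: independently generated
    # components rarely agree on a schema.
    name = payload.get("name", "there")
    plan = payload.get("plan", "free")
    return render_greeting(name, plan)
```

Neither inner function teaches much. Deciding what the boundary does on malformed data or a missing field is the judgment call, and it is exactly the part AI tools handle worst.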

4. Taste and Craft

AI produces "good enough" output. It rarely produces exceptional output.

The gap between good enough and exceptional is where human judgment, taste, and craft live. And that gap matters more as AI raises the baseline.

Lenny Rachitsky, who writes extensively about product development, has observed that the best product teams are increasingly defined by taste—the ability to see what could be great and push beyond merely functional.

For junior talent, this means learning to:

  • Recognize when something is "off" even if it works
  • Understand why some solutions feel better than others
  • Develop opinions about quality and craft

Actionable approach: Pair juniors with seniors specifically for "taste reviews." Not code reviews or spec reviews—but sessions focused entirely on "is this good enough or could it be better?" Make taste development explicit, not accidental.

Structural Changes: Rethinking How Teams Learn

Individual skill development is only part of the equation. The bigger challenge is structural: how do you organize teams and work when the traditional learning ladder is gone?

Invert the Apprenticeship Model

Traditionally, juniors learned by doing grunt work under senior supervision. That model assumed juniors had time to make mistakes on low-stakes work.

AI eliminates the low-stakes work. Everything that remains is higher stakes.

The new model: Juniors should work on high-judgment, lower-execution tasks with heavy senior involvement. Think of it as inverted apprenticeship—juniors lead strategy and decision-making with seniors handling implementation details when needed.

Example: A junior PM defines the problem space, proposes solutions, and makes the build decision. A senior PM reviews the thinking and coaches on judgment. The junior then uses AI tools to draft specs and coordinate execution—but the learning happens in the problem definition phase, not the documentation phase.

Create Deliberate Learning Debt

In a world where AI makes everyone more productive immediately, there's a temptation to optimize for short-term output. Junior + AI can ship features fast. Why slow down for learning?

Because short-term productivity gains create long-term capability gaps. Organizations that optimize purely for current output will find themselves unable to develop future leaders.

The solution is to deliberately create learning debt—intentionally slowing down to ensure juniors build foundational understanding.

Actionable approach: Implement "learning sprints" where juniors work without AI assistance on foundational problems. Yes, it's slower. Yes, it's less efficient. But it builds the mental models that make them effective with AI later.

Think of it like training wheels in reverse: instead of starting with assistance and removing it once you're skilled, you start without it, so you understand what the assistance is actually doing for you.

Build Feedback Loops That AI Can't Replace

The most critical element of skill development is feedback. Historically, feedback came naturally through iteration: you wrote bad code, it broke, you fixed it. You made bad decisions, users complained, you learned.

AI short-circuits these feedback loops. Code works on the first try (even if it's not optimal). Specs look polished (even if the thinking is shallow).

Organizations need to artificially recreate feedback loops that force learning:

  • Pre-mortems: Before using AI to implement, have juniors predict what will go wrong
  • Reflection sessions: After shipping, analyze what AI suggested vs. what actually worked
  • Blind reviews: Review AI-generated work without knowing it came from AI—does it actually meet the bar?

The key is making feedback explicit and frequent, not assuming it will happen organically.

The Portfolio Approach to Talent Development

Here's the uncomfortable truth: not everyone will successfully transition to AI-augmented work at the same pace.

Some junior talent will immediately thrive—they'll use AI as a force multiplier and develop judgment faster than previous generations. Others will struggle, becoming dependent on AI without building underlying skills.

Leading organizations are adopting a portfolio approach to talent development:

Accelerators: Junior talent who show strong judgment early. Give them AI tools and let them run. They'll develop faster than any previous generation.

Builders: Junior talent who need to build foundational skills first. Limit AI access initially, focus on fundamentals, then gradually introduce augmentation.

Specialists: Junior talent who go deep in areas where AI is weakest—taste, systems thinking, user empathy. Let them own these domains.

The mistake is treating all junior talent identically. The path to senior leadership will increasingly diverge based on how individuals interact with AI tools.

What This Means for Hiring

If the traditional junior-to-senior progression is broken, hiring strategies need to change too.

Stop hiring for "potential to learn on the job." That made sense when the job provided natural learning opportunities. It makes less sense when AI eliminates those opportunities.

Start hiring for:

  • Demonstrated judgment (even in non-work contexts)
  • Curiosity and self-directed learning (people who seek out hard problems)
  • Comfort with ambiguity (the ability to work without clear answers)

Some companies are experimenting with "AI-native" hiring: candidates complete take-home projects where AI use is encouraged, then defend their decisions in interviews. The evaluation isn't about the output quality (AI can make anyone's output look good). It's about the thinking process and judgment.

The Long Game: Building Institutional Knowledge

There's a second-order effect that's easy to miss: AI tools don't build institutional knowledge.

When a junior developer struggles through implementing a feature, they learn not just how to code—they learn why the system is architected that way, what past decisions led here, where the bodies are buried.

When AI generates the implementation, that knowledge transfer doesn't happen. The code works, but the context is lost.

Over time, this creates organizational knowledge decay. The team ships features but doesn't understand why things are the way they are. When something needs to change, no one knows what's safe to modify.

Actionable approach: Treat documentation and context-sharing as first-class work. Have juniors document not just what they built, but why decisions were made, what alternatives were considered, what constraints existed.

Make "teaching the next person" an explicit part of every project. This is how institutional knowledge survives in an AI-augmented world.
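
One lightweight way to operationalize this is a short decision record committed alongside the work. The template below is a sketch, not a standard; adapt the fields to your own conventions:

```markdown
# Decision record: <short title>

- **Date / author:** ...
- **Context:** What problem were we solving? What constraints existed?
- **Decision:** What we chose, in one or two sentences.
- **Alternatives considered:** What else we looked at, and why we rejected it.
- **AI involvement:** What was generated vs. hand-written, and what we changed.
- **Revisit when:** The conditions under which this should be re-examined.
```

The "AI involvement" field is the new addition: it records exactly the context that disappears when generated code ships without anyone understanding it.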

Practical Playbook for CPOs and CTOs

Here's a concrete starting point for organizations wrestling with this challenge:

Month 1: Audit Current Learning Paths

  • Map out how junior talent currently develops skills
  • Identify which learning opportunities AI has eliminated
  • Survey junior employees: what do they feel they're not learning?

Month 2: Redesign Onboarding

  • Add "fundamentals without AI" bootcamp for first 2-4 weeks
  • Create explicit judgment-building exercises
  • Pair every junior with a senior for weekly "taste reviews"

Month 3: Restructure Work Allocation

  • Stop assigning "easy" tasks to juniors (AI does those now)
  • Start assigning "high-judgment, lower-execution" projects
  • Create deliberate learning sprints without AI assistance

Month 4: Build New Feedback Mechanisms

  • Implement pre-mortems and reflection sessions
  • Create rubrics for evaluating judgment, not just output
  • Start tracking skill development metrics beyond velocity

Ongoing: Experiment and Iterate

  • Run A/B tests on different learning approaches
  • Share what works across teams
  • Accept that the playbook is still being written

The Contrarian Take

Here's the uncomfortable possibility: maybe we don't need as many junior roles anymore.

If AI can handle first drafts and routine execution, perhaps organizations should hire fewer juniors and invest more in developing the ones they do hire. Smaller cohorts, more intensive mentorship, higher bar for entry.

This is already happening quietly. Engineering teams are hiring fewer junior developers and more mid-level engineers who can effectively direct AI tools. Product teams are looking for PMs who can think strategically from day one.

The counterargument: this creates a death spiral. If no one hires juniors, where do future seniors come from? The industry needs entry points.

The answer probably lies somewhere in the middle: fewer junior roles, but much more intentional development for those who make it in. Quality over quantity. Apprenticeship over at-scale hiring.

References and Thought Leaders

The thinking in this piece builds on work from several researchers and practitioners:

On AI and Skill Development:

  • Ethan Mollick - Wharton professor studying AI's impact on work; author of "Co-Intelligence"
  • Simon Wardley - Research on organizational evolution and learning loops
  • Benedict Evans - Analysis of AI adoption curves and organizational change

On Product Thinking and Judgment:

  • Shreyas Doshi - Former Stripe/Twitter/Google PM on product sense and decision-making
  • Lenny Rachitsky - Newsletter on product development and organizational taste
  • Marty Cagan - SVPG founder on empowered product teams

On Strategy and Business:

  • Ben Thompson - Stratechery on taste as competitive advantage
  • Dario Amodei - Anthropic CEO on AI capabilities and collaboration

Key Takeaways

For CPOs:

  • Reframe junior PM work around problem definition and judgment, not execution
  • Make taste development explicit through dedicated review sessions
  • Create learning sprints where AI assistance is intentionally limited
  • Measure skill development, not just feature velocity

For CTOs:

  • Structure projects around integration and systems thinking, not isolated components
  • Implement "fundamentals bootcamps" before introducing AI tools heavily
  • Build feedback loops that surface learning gaps AI might hide
  • Consider a portfolio approach: not all junior engineers need the same path

For Both:

  • Accept that the traditional career ladder is broken—design a new one intentionally
  • Invest more per junior hire, even if it means hiring fewer
  • Make institutional knowledge capture a first-class concern
  • Experiment aggressively—no one has this fully figured out yet

The Bottom Line

The question isn't whether AI will change how junior talent develops—it already has. The question is whether product and engineering leaders will adapt their development models quickly enough.

The organizations that figure this out will have a massive advantage. They'll develop senior talent faster and more effectively than competitors still clinging to outdated apprenticeship models.

The ones that don't will find themselves with a generation of employees who are productive with AI but lack the judgment to lead without it.

The window to get this right is narrow. AI capabilities are advancing faster than organizational learning. The time to redesign talent development isn't when the crisis is obvious—it's now, while there's still room to experiment and adapt.

The future of your organization's leadership pipeline is being determined by the choices you make today about how junior talent learns.

Choose wisely.


Further Reading

Books Worth Your Time:

  • "Co-Intelligence: Living and Working with AI" by Ethan Mollick
  • "Empowered: Ordinary People, Extraordinary Products" by Marty Cagan
  • "Range: Why Generalists Triumph in a Specialized World" by David Epstein
  • "The Culture Code" by Daniel Coyle (on building learning organizations)
  • "The Alliance" by Reid Hoffman (on modern employment relationships)


If any of this resonates, you should subscribe.

No spam. No fluff. Just honest reflections on building products, leading teams, and staying curious.