28 min read

The Third Path: Why the Super IC vs. Product Engineer Debate Misses the Point

The industry says AI is forcing engineers to choose between becoming Super ICs or Product Engineers. But there's a third path emerging—the Player-Coach at Scale—that refuses to accept this false dichotomy.


Marcus Elwin

AI Lead, Data Scientist, Product Manager & AI Engineer with 8+ years of experience across various industries. Passionate about deploying GenAI and LLMs in production environments.

AI Engineering Career Leadership Player-Coach AI Tools Software Engineering Claude Code Vibe Coding

The AI landscape has shifted dramatically over the past decade—from hosting tree-based models, to training deep learning systems, to now orchestrating agents via commercial LLM APIs. But the more interesting transformation isn’t technical—it’s organizational. The team structures that used to build AI products are collapsing.

Two years ago at a legal tech startup, I was Lead AI Engineer with a team of one junior developer. A traditional company would have staffed that function with 3-5 people and made me a pure manager—roadmap ownership, sprint planning, 1:1s, stakeholder alignment, maybe a Jira board to groom. Instead, I shipped.

Now I lead AI product development at a fintech. Same pattern: shipping features and systems that would traditionally require a team. In practice, it’s me and Claude Code.

The industry is calling this the “Super IC” phenomenon. But that framing misses something. I’m not a Super IC—I’m still a lead. I still own strategy, stakeholder management, and technical direction. I just also ship more now than ever, because AI removed the tradeoff that used to force a choice.

The Core Thesis

AI tools don’t just make individuals more productive—they collapse the artificial boundary between “leading” and “doing” that organizations created as a bandwidth hack. The most interesting career path isn’t choosing between them; it’s refusing to choose.

I’ve written about adjacent ideas before, and my thinking has evolved over time. If this post intrigues you, the earlier posts discussed below provide useful context.

How My Thinking Evolved

I’ve been circling this idea for years. In 2024, I wrote about the V-shaped data scientist—arguing that practitioners needed deep expertise plus breadth across the ML workflow to thrive. Then in 2025, I argued that taste still matters—that as AI commoditizes code, human judgment becomes the differentiator.

This post is where those threads converge. The V-shape was about skills. Taste was about judgment. The Player-Coach is about leverage—using AI to collapse the tradeoffs that made us choose between building and leading in the first place.

The Conventional Wisdom: Pick Your Lane

Should you be a specialist, a generalist, or a bit of both? Opinions differ. The current discourse, however, says AI coding tools are bifurcating engineering careers—while at the same time claiming anyone can now be a 10x engineer with AI assistance.

According to this framing, you have two paths to stay relevant:

Path 1: The Super IC. Leverage AI for massive individual output. Become the engineer who single-handedly ships what used to require a team. The poster children are indie hackers like Pieter Levels ($3.5M ARR as a solo developer)1 or the tiny teams at companies like Midjourney ($200M ARR with 40 employees)2. Trade breadth for depth. Optimize for velocity.

Path 2: The Product Engineer. Own the full product lifecycle. Use AI to handle implementation details while you focus on what to build and why. Companies like PostHog and Vercel hire explicitly for this profile—engineers who “care more about outcomes and impact than the exact implementation.” I wrote about this archetype in my taste post—the hybrid professional who’s technical enough to validate AI output, design-minded enough to prioritize UX, and business-aware enough to connect features to outcomes. Trade technical depth for product breadth.

The Junior Engineer Paradox

If you’re early in your career, you’re facing a unique contradiction. On one hand, you’re AI-native—you learned to code alongside Cursor, Claude Code, or GitHub Copilot, think in terms of prompt engineering, and have no muscle memory of the “old way” to unlearn. You’re positioned to thrive in an AI-first world.

On the other hand, you lack the judgment and context to know when AI is steering you wrong. You haven’t built enough systems to recognize architectural anti-patterns. You haven’t debugged enough production incidents to smell when generated code will fail at scale. You haven’t earned the scars that teach you which technical debt matters.

This is both your advantage and your vulnerability. We’ll return to what this means for your career strategy in the “What This Means for Your Career” section below—but the short version is this: velocity without judgment is a career liability.

The evidence for this bifurcation is hard to ignore. A Stanford study found that employment for developers aged 20-34 declined nearly 14% between 2022 and 2024, while prospects for senior engineers remain stable3. Amazon mandated a 15% increase in IC-to-manager ratios4. Meta’s “Year of Efficiency” eliminated 21,000 positions, explicitly asking managers to become ICs or leave5.

The message seems clear: pick a lane or get run over. But there’s a third option emerging—one (in my honest opinion) that refuses to accept this binary. Before I explain why it works, let’s understand why the choice ever existed in the first place.

The Flaw in the Binary

Here’s what bothers me about this framing: it assumes the IC/manager split was ever natural. It wasn’t. It was a bandwidth hack. For instance, many managers at today’s AI-driven companies still code quite a bit, but not as much as they did before becoming managers.

We, as humans, have limited hours and cognitive capacity. At some point, an engineer gets senior enough that their judgment becomes more valuable than their keystrokes. This holds true for other professions as well. So we invented the management track—a way to scale that judgment across more people. The tradeoff was that you stopped shipping. You traded direct output for leverage through others.

This was always a compromise, not an ideal. The best technical leaders I’ve known hated giving up the craft. They missed the dopamine of a clean PR and the satisfaction of solving hard problems directly. Many bounced back and forth between IC and management roles, never fully satisfied with either.

What AI tools change isn’t just individual productivity—they change the leverage equation that forced this tradeoff in the first place, both for technically skilled people and for other crafts like design, marketing, and writing.

Below is an attempt to visualize this evolution or shift that we are currently seeing and experiencing firsthand:

The Leverage Equation Changed

How AI transformed the traditional career fork into a viable dual path

[Diagram: In the traditional model, a zero-sum tradeoff forces the senior engineer to choose between management (scale via others, stop shipping) and the IC track (direct output, limited leverage). In the AI-enabled model, both paths stay viable: scale via AI plus others, keep shipping with direct output via AI at 10x velocity, and retain strategic leverage.]

The shift: AI compresses implementation time, freeing bandwidth for strategic work without sacrificing direct output.

The traditional model forced a binary fork—you chose scaling through others (management) or scaling through yourself (IC). Either path had a ceiling. The AI-enabled model collapses this fork. You can now scale through both paths simultaneously because the execution bottleneck that forced the choice no longer applies.

The Third Path: The Player-Coach at Scale

Here’s what I’ve learned across two companies and five years of integrating AI into my workflow: the most interesting archetype isn’t the Super IC or the Product Engineer. It’s the technical lead who refuses to choose.

I call this the Player-Coach at Scale.

What Is a Player-Coach?

The term comes from sports—a player-coach is someone who actively plays on the team while also serving as its coach. Think Bill Russell winning two NBA championships as player-coach of the Celtics, or Franz Beckenbauer leading West Germany as a playing captain.

In sports, player-coaches are rare because the dual role is exhausting. You’re making real-time tactical decisions while also executing plays. The cognitive load is immense. Most people burn out or do both jobs poorly.

In software, the same pattern held—until AI changed the execution cost. Now the “playing” part (writing code, building systems) is compressible in a way it never was before.

You keep the strategic context, the stakeholder relationships, and the architectural judgment that comes from experience. But you also ship—not by working 80-hour weeks, but because AI compresses what used to be a two-week sprint into a day when you have the right context.

The Player-Coach combines:

  • The Super IC’s output capacity: Using AI tools to ship features and systems directly, not just review others’ work
  • The Product Engineer’s strategic judgment: Understanding what to build, for whom, and why it matters to the business
  • The traditional lead’s organizational context: Knowing the political landscape, the historical decisions, the technical debt that matters versus the debt that doesn’t

However, in my mind, a Player-Coach—although it may sound similar—is not a “Unicorn,” a term I’ve never liked:

This Is Not the Unicorn Myth

Remember the “unicorn developer” or “10x engineer” discourse from the 2010s? That was about finding mythical individuals who had mastered every skill—frontend, backend, DevOps, design, databases, mobile. It was unrealistic, led to burnout, and created toxic hiring expectations.

The Player-Coach is fundamentally different. It’s not about being a superhuman who knows everything. It’s about having strategic context that AI can’t replicate, combined with AI fluency that multiplies your existing judgment. You don’t need to master Kubernetes AND React AND ML ops. You need deep context in your domain and the ability to direct AI tools effectively within it.

The unicorn was a fantasy. The Player-Coach is a leverage strategy.

And it is also not the “Renaissance Engineer,” although I’ve seen some startups use that term for people they want to hire:

The Renaissance Engineer: Similar Spirit, Different Focus

The “Renaissance Engineer” is the closest existing concept to what I’m describing. Recent discourse (2024-2025) defines it as engineers who use AI to access adjacent domains—market analysis, design, security—without mastering each discipline. As one article puts it: “tasks that previously required sequential coordination across teams can now be initiated and completed by individuals.”

The similarities are real. Both the Renaissance Engineer and Player-Coach leverage AI to expand beyond traditional role boundaries. Both reject the idea that you need to personally master every discipline. Both recognize AI as the enabler of polymathic reach.

Where the Player-Coach differs:

  • Organizational context as the core asset. The Renaissance Engineer emphasizes curiosity and breadth. The Player-Coach emphasizes years of institutional knowledge—the failed projects, the political minefields, the technical debt that matters. That context is what makes AI output production-ready versus generically correct.

  • The leadership dimension. Renaissance Engineers build across domains. Player-Coaches build and coach—mentoring, aligning stakeholders, making architectural decisions, elevating those around them. The “coach” part isn’t incidental; it’s the strategic altitude that multiplies the impact of the “playing.”

  • Judgment over curiosity. Renaissance thinking celebrates exploration. Player-Coaching demands ruthless prioritization—knowing which 20% of problems deserve your direct attention and which 80% to delegate to AI or humans.

The Renaissance Engineer expands their brain. The Player-Coach expands their reach while maintaining strategic altitude.

And finally, it’s not about the “T-shaped” or “V-shaped” profiles, even if I’m a big fan of those metaphors:

...And It's Not About T-Shaped or V-Shaped Profiles

The HR-friendly version of these ideas is the “T-shaped” engineer (deep in one area, broad awareness across others) or the “V-shaped” professional (deep expertise in multiple converging domains). These frameworks dominated career development advice for a decade.

They’re not wrong—they’re just pre-AI thinking. The T-shape and V-shape still assume you need to personally develop that horizontal bar or those converging verticals. That takes years. The Player-Coach model shortcuts this: AI provides instant access to the horizontal bar. You don’t need to be T-shaped if you can operate as T-shaped by directing AI across domains you understand well enough to validate.

Your shape doesn’t change. Your effective surface area does.

Why the Player-Coach Failed Before

If you’re skeptical about this model, you should be. The player-coach has a terrible track record historically.

Ask anyone who’s tried it pre-AI: you end up doing both jobs poorly.

You cannot have two Priority-One goals at the same time.

— Andy Budd, Being a Player Coach

Companies loved the idea because they got “two people for the price of one.” Engineers hated it because they were perpetually context-switching between shipping and leading, excelling at neither.

The criticism was valid. Here’s why it doesn’t apply anymore:

The old failure mode: Your coding time competed directly with your leadership time. Every hour reviewing PRs was an hour not architecting. Every 1:1 was an hour not debugging. The bandwidth constraint was real and zero-sum.

What AI changes: The implementation portion of coding—the part that consumed 60-80% of development time—is now compressible. What used to take me a week of focused coding now takes a day of AI-directed development. That freed bandwidth doesn’t disappear; it gets reallocated to the strategic work that can’t be compressed.

The Math Changed

Pre-AI: 40 hours/week = 25 hours coding + 15 hours leading = mediocre at both

Post-AI: 40 hours/week = 8 hours AI-assisted coding (equivalent output to 25 hours) + 15 hours leading + 17 hours for deep work, strategy, or more shipping

The player-coach failed because the math didn’t work. Now it does.
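If it helps to see that arithmetic spelled out, here is a minimal Python sketch of the same back-of-envelope model. The numbers (a 40-hour week, 15 hours of incompressible leadership work, roughly 3x compression on implementation) are the illustrative assumptions above, not measurements:

```python
# Back-of-envelope model of the weekly bandwidth shift described above.
# All numbers are illustrative assumptions, not measurements.

WEEK_HOURS = 40
LEAD_HOURS = 15          # 1:1s, stakeholders, architecture reviews (not compressible)
AI_SPEEDUP = 25 / 8      # assumption: 8 AI-assisted hours ~ 25 manual hours of output

def pre_ai_week(coding_hours: float = 25) -> dict:
    """Traditional split: every coding hour competes with every leading hour."""
    return {
        "coding_output": coding_hours,               # output roughly tracks hours spent
        "leading_hours": WEEK_HOURS - coding_hours,
        "freed_hours": 0,
    }

def post_ai_week(ai_coding_hours: float = 8) -> dict:
    """AI-assisted split: same coding output from fewer hours, bandwidth freed."""
    return {
        "coding_output": ai_coding_hours * AI_SPEEDUP,
        "leading_hours": LEAD_HOURS,
        "freed_hours": WEEK_HOURS - ai_coding_hours - LEAD_HOURS,
    }

print(pre_ai_week())   # {'coding_output': 25, 'leading_hours': 15, 'freed_hours': 0}
print(post_ai_week())  # {'coding_output': 25.0, 'leading_hours': 15, 'freed_hours': 17}
```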

Now that we’ve defined the Player-Coach, distinguished it from similar-sounding myths, and explained why it works now when it didn’t before, let’s see how all three paths compare below in “The Three Engineering Archetypes”:

The Three Engineering Archetypes

How AI is reshaping the traditional career paths in software engineering

Beyond this high-level comparison, let’s dive deeper into the skill profiles. I’ll compare them across five key dimensions:

  • Output — Direct shipping capacity. PRs merged, features delivered, bugs fixed. The raw productive throughput of hands-on engineering work.
  • Strategy — Product thinking and technical vision. The ability to see the bigger picture, make architectural tradeoffs, and align work with business outcomes.
  • Context — Organizational knowledge depth. Understanding why systems exist, who the stakeholders are, and how decisions ripple across teams.
  • AI Leverage — Proficiency with AI coding assistants. How effectively you multiply your output through prompt engineering, context management, and AI-augmented workflows.
  • Leadership — People and team influence. Mentoring, code review quality, driving alignment, and elevating those around you—often without formal authority.

Skill Profile Comparison

[Radar chart comparing the profiles, including the mythical ones (Unicorn, Renaissance Engineer), alongside the Super IC, Product Engineer, and Player-Coach across five axes: Output, Strategy, Context, AI Leverage, and Leadership.]

Notice the Pattern

With all five profiles overlaid, the full picture emerges. The Unicorn shows the unrealistic ideal—near-perfect everywhere. The Renaissance Engineer shows the pre-AI attempt: spread thin, notably weak on AI leverage. The Player-Coach isn't chasing mythical perfection—it achieves balanced high scores through strategic AI leverage.

The radar tells the story: the Super IC spikes on output and AI leverage but craters on strategy and leadership. The Product Engineer inverts this—strong strategy, weaker direct output. The Player-Coach doesn’t win any single dimension outright, but maintains competitive scores across all five. That balance is what AI makes possible.

The Unicorn profile, with near-perfect scores everywhere, is included as a reminder of what we’re not claiming. The Renaissance Engineer is actually quite close to the Player-Coach—both are AI-enabled, both pursue breadth across domains. The key differences: Renaissance Engineers score lower on context (60 vs 90) and leadership (60 vs 75) because they emphasize curiosity-driven exploration over organizational knowledge and coaching. The Player-Coach prioritizes strategic judgment and team elevation; the Renaissance Engineer prioritizes polymathic reach.

Note: These scores are based on my experience and observations of these archetypes across multiple organizations. They’re meant to be illustrative of relative strengths and tradeoffs, not empirically measured metrics. Your mileage may vary.

What This Looks Like in Practice

For me, it has meant shipping backend services at high throughput, scoping the product, setting the AI vision, doing AI engineering, managing stakeholders, and talking to clients to understand their needs. I’m in no way super-human or, for that matter, a unicorn. I’m only good at leveraging my context and AI tools to amplify my impact—which I expect is similar to many seasoned leads out there right now.

The Context Multiplier

Here’s the key insight that makes this work: context is the real multiplier.

There’s a fascinating paradox in the research. A METR randomized controlled trial from July 2025 found that experienced developers were actually 19% slower with AI tools on codebases they’d worked on for 2+ years—yet they still believed they were 20% faster6.

This seems like bad news for the “AI makes everyone 10x” narrative. But read it differently: AI helps most on greenfield development, boilerplate, and unfamiliar territory. It helps least when deep contextual knowledge matters most.

As a lead, I have context the AI doesn’t. I know what the team tried last year and why it failed. I know which customers are strategic and which integrations are political minefields. I know the technical debt that’s safe to ignore and the debt that will bite us in six months. I’m not just prompting Claude—I’m directing it with judgment that took years to build.

The Formula

Deep Context + AI Fluency = Leverage That Pure ICs Can’t Match

Super ICs get leverage from AI, but they often lack strategic altitude. Product Engineers have the altitude but delegate implementation. The Player-Coach keeps both—and AI finally makes that viable.

This is why context becomes the ultimate competitive advantage. Let me break down the formula:

The Player-Coach Leverage Formula

Why context-rich leaders get disproportionate returns from AI tools

Deep Context (historical decisions, political landscape, technical debt map, strategic priorities) + AI Fluency (code generation, pattern recognition, rapid prototyping) = Player-Coach Leverage (strategic altitude maintained, direct output at scale, judgment-directed AI)

The Compounding Effect

Pure ICs have AI fluency but often lack strategic context. Product Engineers have context but delegate implementation. The Player-Coach keeps both—and the combination creates leverage that neither can match alone.

The inputs matter more than the tool; this is what context engineering is about. Give two engineers the same AI assistant—one with deep organizational context, one without—and you’ll see dramatically different outputs. The same goes for engineers who skip the latest advancements such as sub-agents, skills, or spec-driven development (Kiro, etc.). The context-rich engineer gets production-ready code that fits the system. The context-poor engineer gets generic solutions that require extensive rework.
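As a simplified illustration of what that difference looks like in practice, here is a sketch of assembling a context-rich prompt for an assistant. Every name here (the context files, the task) is hypothetical and not prescribed by any tool; the point is that the organizational knowledge travels with the task:

```python
from pathlib import Path

# Hypothetical context files a context-rich lead keeps in the repo and feeds
# to the assistant alongside every task. The file names are made up; this is
# just "context engineering" made concrete.
CONTEXT_FILES = [
    "docs/architecture-decisions.md",   # why the system looks the way it does
    "docs/known-tech-debt.md",          # the debt that matters vs. the debt to ignore
    "docs/integration-landmines.md",    # the integrations that are political minefields
]

def build_prompt(task: str, repo_root: str = ".") -> str:
    """Assemble a task prompt that carries organizational context with it."""
    sections = [f"## Task\n{task}"]
    for rel_path in CONTEXT_FILES:
        path = Path(repo_root) / rel_path
        if path.exists():
            sections.append(f"## Context: {rel_path}\n{path.read_text()}")
    sections.append(
        "## Constraints\n"
        "- Follow the existing service patterns; do not introduce new frameworks.\n"
        "- Flag anything that conflicts with the architecture decisions above."
    )
    return "\n\n".join(sections)

# A context-poor engineer sends only the task; a context-rich one sends the
# judgment that took years to build, written down where the model can use it.
print(build_prompt("Add retry logic to the invoice-sync worker."))
```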

The Real Filter: Artifacts vs. Meetings

This reframes what’s actually dying in engineering organizations. It’s not “middle management” in some generic sense. It’s a specific failure mode: people whose output is coordination without artifacts.

What I like to think of as “The New Career Filter”—a pattern you might have experienced or seen emerge in different forms:

The New Career Filter

AI tools don't eliminate the need for coordination, strategy, or leadership. They eliminate the excuse that those activities preclude direct contribution.

"Can you produce artifacts, or do you only produce meetings?"

Thriving

  • The Shipping Lead: still writes code, reviews PRs deeply, and can jump into production issues
  • The Architect Who PRs: designs systems AND implements critical paths, not just draws diagrams
  • The Strategic Implementer: owns the roadmap AND delivers key features, using AI to maintain both
  • The Context-Rich Builder: applies years of judgment directly through AI-assisted development

At Risk

  • The Ticket Router: main output is moving items between columns and attending standups
  • The Diagram Architect: draws systems but hasn't touched production code in years
  • The Meeting Facilitator: 'provides guidance' through spec documents and feedback sessions only
  • The Pure Coordinator: valuable for alignment but creates no technical artifacts

The filter isn’t IC versus manager. It’s: can you produce artifacts, or do you only produce meetings?

AI tools don’t eliminate the need for coordination, strategy, or leadership. They eliminate the excuse that those activities preclude direct contribution. If you have good judgment, you can now apply it directly rather than trying to transfer it to someone else through a spec document.

What This Means for Your Career

Having been around for some time now, I’ve seen how this plays out at different career stages. The worst thing you can do is blindly trust AI tools’ output and do “Vibe Coding” instead of “Vibe Engineering”. A peer then has to spend hours cleaning up, and you lose their confidence going forward.

Vibe Coding vs. Vibe Engineering

There’s a critical distinction that determines whether AI makes you an asset or a liability:

Vibe Coding — Using tools like Lovable, Bolt, or v0 to generate code without understanding what’s being produced. You prompt, you ship, you move on. This works for PoCs and throwaway prototypes. It’s a disaster for production systems. The code looks functional but lacks error handling, security considerations, performance optimization, and maintainability. Someone else has to clean it up—or worse, it ships and breaks in production.

Vibe Engineering — Bringing engineering best practices to AI-assisted development. You understand the generated code, validate its correctness, ensure it fits system patterns, add proper tests, consider edge cases, and deploy to production in a safe, controlled, and scalable manner. You’re using AI to accelerate implementation, not to bypass judgment.

The difference isn’t the tool. It’s whether you have the context and experience to know when AI is right, when it’s wrong, and what production-ready actually means.

Vibe Engineering is the responsible counterpart to vibe coding for experienced engineers.

— Simon Willison, Vibe Engineering

One is a prototyping hack. The other is a multiplier.
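To make the distinction concrete, here is a small sketch of what “validate before you ship” can look like: pytest cases written against a hypothetical AI-generated helper, probing exactly the edge cases generated code tends to skip. The helper below is a stand-in so the example runs on its own; in practice it would be the generated code under review.

```python
import re
import pytest

def parse_invoice_total(raw: str) -> float:
    """Stand-in for an AI-generated parser of 'Total: $1,234.56'-style lines."""
    match = re.search(
        r"total:\s*(?P<neg>-)?\$?\s*(?P<amount>-?[\d,]+(?:\.\d{1,2})?)",
        raw,
        flags=re.IGNORECASE,
    )
    if match is None:
        raise ValueError(f"No total found in: {raw!r}")
    value = float(match.group("amount").replace(",", ""))
    if match.group("neg") or value < 0:
        raise ValueError("Negative totals should go through the credit-note path")
    return value

def test_happy_path():
    assert parse_invoice_total("Total: $1,234.56") == 1234.56

def test_missing_total_is_a_loud_error():
    # Generated code often returns 0 or None silently; we require an explicit failure.
    with pytest.raises(ValueError):
        parse_invoice_total("No total on this invoice")

def test_negative_amounts_rejected():
    with pytest.raises(ValueError):
        parse_invoice_total("Total: -$50.00")

@pytest.mark.parametrize("raw", ["Total: $0.00", "total: $10", "TOTAL:$10.00"])
def test_formatting_variants(raw):
    # Real invoices are inconsistent about case and spacing; the parser must cope.
    assert parse_invoice_total(raw) >= 0
```

The specifics do not matter; the habit does. Generated code earns its way into production by passing the judgment you encode in tests and review, not by looking plausible.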

Here’s my take:

If You're Junior or Mid-Level

The traditional ladder is compressing. You need to accelerate into either deep technical expertise or product ownership faster than before. Use AI to punch above your weight, but focus on building real judgment alongside the output. Don’t just ship—understand why you’re shipping what you’re shipping.

If you’re a senior IC: Your expertise is now directly monetizable as individual output at a scale that wasn’t possible before. The question is whether you want pure IC leverage or to expand into product and strategy. The wrong move is staying a “senior IC” who’s really just a mid-level engineer with more tenure.

If you’re a lead or manager: This is the existential question—are you still technical enough to ship? Be honest. If you’ve spent five years in meetings and lost your implementation edge, you have a choice: rebuild those skills with AI as an accelerant, or double down on pure people leadership knowing it’s a shrinking niche. The coordination-only manager is dying. The player-coach is thriving.

If You're an Engineering Org

The Tobi Lütke mandate at Shopify—“demonstrate why you cannot get what you want done using AI” before requesting headcount7—isn’t just about ICs. It applies to leads and managers too. Your future org chart probably has fewer layers, more player-coaches, and almost no pure coordinators.

Skills for the Player-Coach Era

Knowing the destination isn’t enough—you need the skills to get there. Here’s what I’ve found actually matters.

These aren’t the same skills that made you a good IC or a good manager. They’re a new synthesis—the competencies that let you operate in both modes while AI handles the translation between them:

The Player-Coach Competency Model

[Radar: the five Player-Coach competencies are Context, Validation, Altitude, Skepticism, and Delegation.]

The radar captures the balance, but one quote from Addy Osmani captures the mindset:

What distinguishes great developers is knowing when AI is wrong.

— Addy Osmani, The 70% Problem

That skepticism—calibrated, not paranoid—is what separates Player-Coaches from engineers who let AI drive.

The Uncomfortable Truth

Here’s what I’ve come to believe: the IC/manager career fork was never natural. It was a patch for human bandwidth limitations.

AI is unpatching it.

The engineers who thrive in the next decade won’t be the ones who pick between leading and shipping. They’ll be the ones who refuse to choose. They’ll have the strategic context of a lead, the implementation capacity of a team, and the judgment to know which problems deserve their direct attention versus delegation to AI.

I’ve been living this for two years across two companies. It’s not theoretical. It’s not coming. It’s here.

The Bottom Line

The question isn’t whether you’ll become a Super IC or a Product Engineer. The question is whether you’ll build the context and judgment that lets you do both—or whether you’ll let an artificial tradeoff define your ceiling.

From V-Shapes to Player-Coaches: What Changed

Looking back at my own writing, I can trace how this thinking evolved—and why the Player-Coach feels like a synthesis rather than just another framework.

When I wrote about the V-shaped data scientist in 2024, I was grappling with a different problem: practitioners who were technically excellent but couldn’t ship end-to-end. The solution was developing breadth—understanding the full ML workflow, not just your slice. But the V-shape still assumed you had to personally build all those competencies. That takes years.

When I wrote about taste in 2025, the landscape had shifted. Vibe coding platforms were minting unicorns. Anyone could ship an app. The new bottleneck was judgment—knowing what to build, sensing what users want, making the micro-decisions that separate beloved products from forgotten ones. I argued that taste was the new differentiator.

What changed between then and now? Two things:

  1. AI tools got good enough to trust with implementation. Not perfect—you still need to validate—but good enough that the “playing” part of being a player-coach is no longer a full-time job.

  2. The market started rewarding people who refused to specialize. Solo founders building profitable companies alone. Tiny teams shipping products at scales that used to require dozens of engineers. Technical leaders who still ship production code. Between 2015 and 2024, solo founders rose from 23.7% to 36.3% of all startups, and 52.3% of successful exits now come from solo-led companies8. This trend accelerated dramatically in 2023-2024 as AI coding tools matured—the data simply wasn’t there two years ago.

The V-shape was pre-AI thinking: build the skills yourself. Taste was mid-transition thinking: judgment matters more than execution. The Player-Coach is the synthesis: your shape doesn’t change, but your reach does. You stay deep in your domain, maintain the judgment and taste that can’t be automated, and use AI to extend into execution that would have required a team.

That’s not a prediction about the future. It’s a description of how I’ve been working for two years. The frameworks finally caught up.

Further Reading

If this resonated, the voices quoted throughout are worth following: Andy Budd on the player-coach tension, Simon Willison on vibe engineering, and Addy Osmani on knowing when AI is wrong, along with the sources in the footnotes.

Footnotes

  1. Pieter Levels, indie hacker and solo founder of Nomad List, Photo AI, and Remote OK. Multiple sources confirm $3M+ annual revenue with zero employees as of 2024-2025 (Starter Story, Software Growth).

  2. Midjourney achieved $200M annual recurring revenue in 2023 with a team of approximately 40 employees, generating over $5M per employee (CB Insights, Sacra).

  3. Stanford HAI analysis of US labor market data, 2024. Shows significant decline in junior developer employment concurrent with AI tool adoption.

  4. Amazon internal memo on organizational efficiency, requiring 15% increase in individual contributor to manager ratios across engineering.

  5. Meta’s “Year of Efficiency” restructuring eliminated approximately 21,000 positions, with explicit guidance for managers to return to IC roles.

  6. METR (Model Evaluation and Threat Research) randomized controlled trial, July 2025. Found experienced developers 19% slower with AI tools on familiar codebases.

  7. Tobi Lütke, Shopify CEO, internal memo, 2025. Established policy requiring teams to demonstrate AI inadequacy before requesting additional headcount.

  8. FourWeekMBA analysis of startup data 2015-2024. Solo founders increased from 23.7% to 36.3% of startups, with 52.3% of successful exits coming from solo-led companies.

