đź‘‹ Hey! It is Karin here.
Welcome to my ✨ newsletter ✨. This is the place where I tackle topics around product, business, artificial intelligence and occasionally spirituality. Subscribe today to get each and every issue.
When OpenAI launched GPT-4 in 2023, it seemed the future of AI belonged to tech giants with massive computing resources. In January 2025, DeepSeek—a small Chinese startup with fewer than 200 engineers—shattered the AI industry's most fundamental assumption.
With a fraction of the resources used by tech giants, they matched GPT-4's performance and open-sourced their entire system. Their success marked a decisive end to the era of competitive advantage through superior models alone.
We're entering a new phase where success depends not on raw model power, but on how effectively AI is specialized and integrated into products people actually use.
The End of the Model Arms Race
Up until DeepSeek’s launch, the AI industry had been operating under a seductive (and, as DeepSeek’s emergence indicates, flawed) paradigm: bigger models, trained with more compute, will always deliver better results.
This drove billions in investment as tech giants raced to build ever-larger models.
DeepSeek's emergence upended this assumption.
What makes DeepSeek remarkable is their efficiency: they matched GPT-4's performance using just 2,000 Nvidia chips (versus tens of thousands for OpenAI and Google) and made everything open source – code, weights, and methodology. This isn't "open" like OpenAI's controlled APIs; it's truly open, available for anyone to inspect and build upon.
DeepSeek's achievement—matching top performance with limited resources while embracing radical transparency—signals that the barriers to entry in AI development are crumbling faster than anyone expected.
And when a small team can match industry leaders with a fraction of the resources and open-source their entire system, it's clear we've entered a new era where raw model development is no longer the key differentiator: AI models are becoming commoditized.
The DeepSeek moment matters for two reasons that most observers are missing:
- It proves that AI development has hit the point of diminishing returns - throwing more compute at the problem no longer guarantees better results
- Most importantly, it signals that the next phase of AI competition will be fought on completely different grounds
1. The Point of Diminishing Returns
OpenAI followed the "bigger is better" playbook: they scaled from GPT-3's 175 billion parameters to GPT-4's estimated 1.8 trillion parameters (Source). Despite this roughly tenfold increase in size, the performance gains weren't proportionally impressive.
DeepSeek took a radically different path. Their model achieves comparable results while activating just 37 billion parameters during inference, despite being trained on 671 billion parameters (Source). This efficiency-first approach stands in stark contrast to the brute-force scaling strategy of tech giants.
This means: intelligent architecture and efficient design can outperform raw computational power. It's not just about how big your model is - it's about how smartly you use your resources.
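A quick back-of-envelope calculation makes the efficiency gap concrete. Using the parameter counts cited above (671 billion total, 37 billion activated per token), and the standard rule of thumb that a forward pass costs roughly 2 FLOPs per active parameter, we can sketch how much per-token compute a sparse, mixture-of-experts-style model saves versus a hypothetical dense model of the same total size. The "2 × active parameters" figure is an approximation, not an exact measurement:

```python
# Back-of-envelope inference-cost comparison for a sparse model
# (37B parameters active per token out of 671B total) versus a
# hypothetical dense model that activates all 671B every token.

MOE_TOTAL_PARAMS = 671e9    # total parameters (figure from the article)
MOE_ACTIVE_PARAMS = 37e9    # parameters activated per token (from the article)

def flops_per_token(active_params: float) -> float:
    """Rough forward-pass cost: ~2 FLOPs per active parameter per token."""
    return 2 * active_params

# Fraction of the network doing work on any given token.
active_fraction = MOE_ACTIVE_PARAMS / MOE_TOTAL_PARAMS

# Per-token compute saving versus a same-size dense model.
savings = flops_per_token(MOE_TOTAL_PARAMS) / flops_per_token(MOE_ACTIVE_PARAMS)

print(f"Active fraction per token: {active_fraction:.1%}")      # ~5.5%
print(f"Compute ratio vs. dense model of equal size: ~{savings:.0f}x")
```

In other words, only about 5.5% of the network fires on each token, giving roughly an 18x reduction in per-token compute compared to running a dense 671B-parameter model - which is why efficient architecture, not raw scale, carried the day.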
2. The New Competitive Landscape
As models become commoditized, the next wave of billion-dollar AI companies won't be built on superior models but on superior products.
But what makes an AI product superior? Three critical elements are emerging:
- Trust-Building Interfaces
Consider this revealing contrast: While OpenAI's APIs are used by roughly 2 million developers, ChatGPT has exploded to 250 million weekly active users as of January 2025 (Source). This staggering difference reveals a crucial truth: OpenAI's true moat isn't GPT-4's raw capabilities—it's ChatGPT's frictionless user experience.
- Workflow-Native Integration
The most successful AI products won't ask users to learn new workflows - they'll enhance existing ones. The examples of Microsoft Office and GitHub support this:
- Microsoft Office Copilot has achieved 70% penetration in Fortune 500 companies at $30/user/month - far higher than standalone AI tools. (Source)
- GitHub Copilot reaches 80% adoption among enterprise developers, while separate AI coding assistants struggle to gain market share. (Source)
The pattern is clear: AI that integrates into existing workflows wins over standalone solutions, even at premium prices. Users don't want new destinations - they want their current tools to become smarter.
- Domain-Specific Intelligence
The final, and perhaps most important, element is deep specialization based on domain-specific insights. The next generation of AI products won’t try to be general-purpose assistants; they will be domain experts.
Consider these examples:
- Harvey AI captured 25% of the UK's top law firms within 6 months by focusing exclusively on legal work, despite using the same underlying models as general-purpose AI. (Source)
- Anthropic's Claude achieved 85% accuracy on medical diagnosis tasks after fine-tuning on specialty data, compared to 62% for general models. (Source)
- Bloomberg's AI financial analyst tool processes earnings calls 40% more accurately than general LLMs by incorporating proprietary market data and financial context (Source)
The key insight? The barrier to entry isn't technical prowess in AI—it's deep understanding of specific industries and workflows.
Strategic Implications
This shift from models to products has three major implications:
- The Next Wave of AI Unicorns Won't Build Models
They'll build specialized products on top of commodity AI, just as Uber built a $100B company on top of commodity GPS and mapping technology.
- Vertical Integration Will Beat Horizontal Platforms
Companies that deeply understand specific industries will outperform general-purpose AI platforms, even with inferior underlying technology.
- The Real AI Arms Race is Just Beginning
But it's not about compute or model size - it's about understanding user needs and building trustworthy products.
Conclusion
The DeepSeek moment is the beginning of AI's product era. Just as the commoditization of cloud computing gave rise to Uber, Airbnb, and Netflix, the commoditization of AI capability will seed the next generation of transformative companies.
The winners won't be the ones with the biggest models or the most compute—they'll be the ones who understand that AI's true value lies not in its raw capability, but in its application to specific human needs.
For entrepreneurs and investors, the message is clear: there will be great AI companies built wherever there's a deep understanding of an industry's problems and a vision for how AI can solve them.
Want to take this further?
I will run the AIPlaygroundWeek again, so stay tuned for updates on the waiting list and for opportunities to suggest topics you'd like covered.
Thanks for being here!
PS: Here are more ways to connect:
- Follow me on LinkedIn for insights during the week (free).
- Check my availability to do a keynote for your company or incubator.
- Let's do a 30 days sprint to get your first customers.
- Enroll in the OneMonthPM Membership, the value-packed membership with everything you need to become a full-stack PM. You can find my Prototyping nano course in it.