Inside the rise of “neo-labs” like Mirendil — startups attempting to build foundation models for the physical world.
The Intelligence Briefing
- Company: Mirendil
- Funding: $175M seed round
- Valuation: ~$1B
- Founders: Behnam Neyshabur (CEO), Harsh Mehta (CTO)
- Background: Former researchers at Anthropic
- Focus: AI foundation models for biology and materials science
- Investors: Andreessen Horowitz, Kleiner Perkins
The Next AI Frontier May Not Be Chatbots
SAN FRANCISCO — Mirendil, a newly launched AI research startup founded by former Anthropic scientists Behnam Neyshabur and Harsh Mehta, represents a growing class of companies attempting to build foundation models designed not for language or software, but for the physical world.
The company recently raised $175 million in seed funding at an estimated $1 billion valuation, a remarkable milestone for a startup that has yet to publicly release a product.
But Mirendil is not alone.
Across Silicon Valley and the broader AI research ecosystem, a new generation of startups is emerging from the laboratories of companies such as Anthropic, OpenAI, and Google DeepMind. Rather than building general-purpose chatbots or productivity tools, these startups are focusing on a different ambition: using artificial intelligence to accelerate scientific discovery.
The trend is part of a broader structural shift in the AI ecosystem, explored in The $189B AI Funding Surge Is Reshaping the Deep Tech Venture Map, where capital is increasingly flowing into infrastructure and scientific AI startups rather than consumer applications.
Researchers increasingly refer to these companies as “neo-labs” — organizations that combine the structure of a technology startup with the mission of a scientific research institute.
Instead of training models primarily on internet text and images, these systems are being designed to understand biological structures, chemical reactions, and physical systems.
The shift reflects a broader belief forming across the technology industry: the most transformative applications of artificial intelligence may lie not in software automation, but in the discovery of new knowledge about the natural world.
The Neo-Lab Strategy
For more than a decade, progress in artificial intelligence has been driven by general-purpose models trained on vast internet datasets.
Systems developed by companies such as OpenAI, Anthropic, and Google DeepMind have demonstrated extraordinary capabilities in language generation, reasoning, coding, and creative tasks.
But these models also have clear limitations.
They excel at processing information but struggle to simulate complex physical systems, where understanding molecular interactions, thermodynamics, or biological processes requires a different type of representation.
Neo-labs such as Mirendil are attempting to address that gap.
Rather than building a single model designed to perform many tasks, these companies are developing specialized foundation models trained on scientific datasets.
The rise of specialized AI infrastructure platforms mirrors trends seen across enterprise systems, including those discussed in The Invisible Infrastructure Layer Reshaping Enterprise AI.
The strategy resembles earlier shifts in computing architecture.
Early computers relied on general-purpose CPUs.
Over time, specialized processors such as GPUs, TPUs, and AI accelerators emerged to handle specific workloads more efficiently.
Artificial intelligence may now be entering a similar phase of specialization.
Why Biology and Materials Science Matter
Among the most promising domains for AI-driven discovery are biology and materials science.
Both fields involve systems of extraordinary complexity where experimentation is slow, expensive, and often limited by human intuition.
Drug discovery is one example.
Developing a new therapeutic compound can take 10 to 15 years and cost billions of dollars, and roughly nine out of ten candidates that reach clinical trials ultimately fail.
Materials science faces similar challenges.
The discovery of new battery chemistries, superconductors, or advanced alloys often requires years of laboratory experimentation.
Artificial intelligence has already demonstrated early potential in these domains.
Google DeepMind’s AlphaFold predicted the three-dimensional structures of more than 200 million proteins, dramatically expanding the scientific understanding of biological molecules.
DeepMind later introduced GNoME, a system that predicted millions of new crystal structures, highlighting the potential of AI to accelerate materials discovery.
Neo-labs such as Mirendil aim to push these capabilities further.
Instead of predicting structures alone, they are attempting to build models capable of simulating entire biological or chemical systems over time.

The Talent Driving the Neo-Lab Boom
The founders behind these companies represent a new generation of AI entrepreneurs: research scientists leaving frontier laboratories to build specialized research organizations.
Behnam Neyshabur, Mirendil’s chief executive, has conducted influential research on the theoretical foundations of deep learning, particularly on why large neural networks generalize effectively as they scale.
Understanding this phenomenon has been central to the development of modern AI systems.
Harsh Mehta, the company’s chief technology officer, previously worked on model scaling and architecture development at Anthropic, contributing to research on large-scale AI training systems.
Their departure from Anthropic reflects a broader trend across the AI ecosystem.
Some of the most talented researchers in the field are increasingly choosing to launch startups focused on specific scientific or industrial applications of AI.
This pattern has also appeared across founder-led infrastructure companies shaping the modern AI ecosystem, such as those explored in Jensen Huang: The Architect of Nvidia’s AI Infrastructure.
For venture investors, this concentration of expertise has become a powerful signal.
In many cases, funding decisions are driven not only by the technology being built, but by the research pedigree of the founders themselves.
A New AI Architecture Arms Race
Mirendil’s ambitions may also involve a deeper shift in the underlying architecture of AI systems.
Most modern large language models rely on transformer architectures, which process information by computing relationships between every pair of elements in a sequence.
While powerful, that all-pairs comparison means the cost of attention grows quadratically with sequence length, making transformers expensive on very long inputs such as genomic data or molecular simulations.
Researchers have therefore begun exploring alternatives.
One promising direction involves state-space models, whose computational cost scales roughly linearly with sequence length rather than quadratically.
These architectures could potentially allow AI systems to simulate biological processes involving millions of interacting variables.
If such approaches prove successful, they could dramatically expand the types of scientific problems that AI systems can model.
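The scaling gap described above can be made concrete with a back-of-the-envelope estimate. The sketch below is purely illustrative — the model width, state size, and FLOP formulas are simplified assumptions, not figures from Mirendil or any published architecture:

```python
# Illustrative cost comparison (simplified assumptions, not any real system):
# self-attention cost grows quadratically with sequence length n,
# while a state-space model's recurrent scan grows linearly.

def attention_flops(n: int, d: int) -> int:
    """Rough FLOPs for one self-attention layer: computing the n x n
    score matrix and the weighted sum over values, each ~n^2 * d."""
    return 2 * n * n * d

def ssm_flops(n: int, d: int, state: int = 16) -> int:
    """Rough FLOPs for a diagonal state-space scan: each of the n steps
    updates d channels, each carrying a small hidden state."""
    return 2 * n * d * state

d = 1024  # hypothetical model width
# Sequence lengths in the range of a short protein, a gene region,
# and a chromosome-scale input.
for n in (1_000, 100_000, 10_000_000):
    ratio = attention_flops(n, d) / ssm_flops(n, d)
    print(f"n={n:>10,}  attention/SSM cost ratio ~ {ratio:,.1f}x")
```

Under these toy formulas the ratio works out to n divided by the state size, so the advantage of the linear-scan approach widens by orders of magnitude as sequences reach genomic scale — which is exactly why such architectures are attractive for biological data.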
For neo-labs, architecture innovation may become one of the most important competitive advantages.
The rise of agent-driven development platforms already hints at how infrastructure shifts can reshape the broader AI ecosystem, as explored in Cursor’s $2B ARR Explosion Signals the Arrival of Agentic Developer Infrastructure.

The Competitive Landscape
Mirendil is entering a rapidly evolving field of AI-driven scientific research.
Several companies are already applying machine learning to biological discovery.
Biotech firms such as Recursion Pharmaceuticals and Generate Biomedicines are using AI to accelerate drug discovery.
Startups like Atomic AI are focusing on RNA-based therapeutics.
At the same time, major technology companies continue to invest heavily in scientific AI research.
Google DeepMind’s AlphaFold remains one of the most influential tools in computational biology.
Meta's research division developed the ESM family of protein language models, whose creators later spun out the startup EvolutionaryScale to continue that work.
Microsoft has launched an AI for Science initiative aimed at accelerating research in chemistry and materials science.
Competing with these organizations will require startups like Mirendil to develop both technological differentiation and strategic data partnerships.
The Data Problem
For many neo-labs, the most difficult challenge may not be algorithms.
It may be data access.
Many of the most valuable scientific datasets are not publicly available.
Pharmaceutical companies, research laboratories, and industrial firms often guard experimental data closely.
Some companies have attempted to solve this problem by generating their own datasets.
Recursion Pharmaceuticals, for example, operates large-scale wet laboratories that produce biological experiments designed specifically for machine learning training.
Mirendil appears to be exploring a different strategy.
Instead of relying solely on experimental datasets, the company may attempt to train models capable of learning the underlying physical principles governing molecular systems.
If such models can accurately capture the causal dynamics of biological or chemical interactions, they could generate reliable predictions without requiring enormous experimental datasets.
But achieving that level of accuracy remains one of the hardest challenges in AI research.
The Rise of the AI Scientist
The emergence of neo-labs reflects a broader shift in the role artificial intelligence may play in the global economy.
The first generation of AI tools focused on information tasks — generating text, producing images, or assisting with software development.
The next generation may focus on scientific discovery.
Rather than simply answering questions, AI systems could help design new drugs, invent new materials, and accelerate technological progress.
If that transformation occurs, the most important AI companies of the next decade may not resemble consumer technology platforms.
They may look more like research laboratories powered by machine learning.
The Road Ahead
Mirendil’s funding round signals strong investor belief that AI-driven scientific discovery could become one of the most valuable frontiers in technology.
But the company’s long-term success will depend on whether its models can deliver meaningful breakthroughs in real-world research.
Scientific discovery remains a slow and uncertain process.
Even the most advanced AI systems cannot eliminate the need for experimentation and validation.
Yet the growing momentum behind neo-labs suggests a powerful idea is taking hold across the technology industry.
Artificial intelligence has already learned how to generate language.
The next challenge is teaching it to understand — and ultimately reshape — the physical world.
Research Context
This analysis synthesizes information from venture funding disclosures, research publications in machine learning and computational biology, and industry reporting on emerging AI startups founded by former researchers from Anthropic, OpenAI, and Google DeepMind. Additional insights were derived from comparative analysis of AI-driven drug discovery platforms, materials-science modeling systems, and next-generation neural network architectures.
Editorial Note
TechFront360 covers artificial intelligence infrastructure, startup ecosystems, venture capital flows, and the strategic technology shifts shaping the global AI economy. Our reporting focuses on the systems, founders, and capital shaping the next generation of computing.
