@mberman84
Last active April 30, 2026 16:33
US Open Source AI Transcript
The US is either screwed, or we're going to win everything.
There is no middle ground when it comes to artificial intelligence.
There are a lot of players in this story: open-source China, Nvidia, and the closed-source US frontier labs.
This is a big problem and we need to talk about it.
We are betting the house on AI working out for the United States: 40% of the stock market is just seven tech companies, and their fortunes are very tied to how AI does.
And it all starts with open source. US open-source AI is almost certainly doomed, because there is effectively no business model that makes it work in the United States. That's really bad, because there are one of two outcomes: either the closed-source labs control everything, or China, which is so good at open source, wins all of it. And I don't think the American public will like either of those scenarios.
Right. So first, what is open source? Open-source artificial intelligence means the lab that made the AI also revealed its recipe, and anybody can recreate it. They also typically give away the weights of the model, so you can download it, you can fine-tune it, you can make it your own. Llama, Qwen, Gemma, DeepSeek: these are all open-source models.
And why is open source important?
Well, whenever you open source something, everybody can look at it. That makes the model more hardened from a security perspective, and it makes it more efficient, because everybody can figure out little techniques to make the model run better, faster, and cheaper.
And that is a good thing.
But there is a big problem. The business model for open source AI in the United States is broken.
And the reason China seems to be eating our lunch on the open-source AI front is not because we don't have the tech, and it's not because we don't have the talent. It's because the funding isn't there; the monetization isn't there.
So imagine this: you're an AI lab. You spend months building out the recipe for a new open-source model. Then you buy all the GPUs, or you rent them, and you bake the model. And when you finally put it out, after investing all of that R&D and all of that money, everybody else can take the model, run it themselves, or serve inference to customers that should have been yours. And here's the thing: their margins are going to be bigger, because they didn't have to invest all that money into actually making the model in the first place.
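The margin asymmetry described here can be sketched with toy numbers. Every figure below is hypothetical, chosen only to illustrate the structure of the argument, not to reflect any real lab's economics:

```python
# Toy model of the open-source free-rider problem (all numbers hypothetical).
# The lab that trains the model carries the training bill; a free-rider host
# downloads the same open weights and sells inference at the same market price.

TRAINING_COST = 50_000_000_000   # hypothetical training spend, in cents ($500M)
PRICE_PER_M_TOK = 50             # hypothetical price per million tokens, cents
SERVE_COST_PER_M_TOK = 20        # hypothetical serving cost per million tokens, cents
VOLUME_M_TOK = 2_000_000_000     # hypothetical lifetime volume, in millions of tokens

def profit_cents(training_cost: int) -> int:
    """Inference revenue minus serving cost minus any training spend."""
    return (PRICE_PER_M_TOK - SERVE_COST_PER_M_TOK) * VOLUME_M_TOK - training_cost

lab = profit_cents(TRAINING_COST)  # the lab that built the model
rider = profit_cents(0)            # a host that just downloaded the weights

print(f"lab profit:        ${lab / 100:,.0f}")
print(f"free-rider profit: ${rider / 100:,.0f}")
# The free rider out-earns the lab by exactly the training bill:
# identical revenue, identical serving cost, none of the R&D.
```

However you set the prices and volumes, the gap between the two profits is always exactly the training cost, which is the broken incentive the video is describing.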
But how is China doing it, then? Wouldn't they have the same problem? Well, to understand why open source works in China, you actually have to understand how their government works. There is the CCP, the Chinese government, and they basically choose winners in their economy. They subsidize different companies so that those companies are very competitive, maybe even anti-competitive, in the global market.
And typically, when you're behind in a competitive race on technology, the best strategy is to give your product away for free. You effectively kill the margins for anybody who's in the lead. You don't need to have the best product, but if you have a really good product that also happens to be incredibly cheap, that is a winning strategy.
So compare that to how America's economy works: it is a free-for-all. The US government typically does not pick winners. But that also means the business model of open source becomes much more difficult to make work.
This problem is urgent, because right now is when US businesses are deciding their AI strategy. Remember, artificial intelligence in its current form has only been around for a few years, and now, because of the slow adoption time in enterprise, companies are making these decisions.
And so they're looking at OpenAI's models, and they're looking at Anthropic's models, and those are very expensive and proprietary, so you have less control over them. Then they look at the open-source alternatives, and really the only competitive ones are coming out of China. They're a fraction of the cost, and they're almost as good. Plus, you have more control over them: you can fine-tune them to your needs, you can run them locally, and they are more secure, because you can run them on your own servers if you want.
The vast majority of businesses in the United States are not solving frontier math, and they are not on the cutting edge of scientific research. So even though GPT-5.5 or Opus 4.7 might have a higher intelligence ceiling and be better at solving some of those frontier math and frontier science problems, most companies don't need that.
99% of use cases are just working with spreadsheets, coding, and making a schedule. These do not require frontier-level intelligence. And so, if the DeepSeek model is just as good at everything else, at 99.9% of the problems out there, and it's a fraction of the cost, what do you think businesses are going to choose?
So let me lay out the US open-source landscape right now. The number one player for a while was Meta. They released Llama. They were so bullish on open source, and then they weren't. Just a year ago, Mark Zuckerberg was singing the praises of open source. Fast forward one year, and there is no more open source at Meta.
Then you have OpenAI. It's literally in the name: OpenAI. But once they figured out how much money they needed to raise, they knew they needed a business model, and open source was not the way. They have released an open-source model, gpt-oss, and it was pretty good, but it's certainly a side quest for them. It's not something they're doing because it is the right business model. They're doing it for goodwill, which, you know, fine, that's part of a business strategy. But it is certainly not their focus.
Then we have Anthropic, who have no open-source strategy. Zero. They are like: nope, we're not even going to think about that; it's a straight shot to AGI.
And I'll come back to that in a moment.
We have Google, who actually have a really good open-source strategy, but their Gemma series of models is made to be run locally. Which is awesome, don't get me wrong: you can run it on your computer, and maybe you can also run it on your phone, and there are a lot of use cases for that. But this is not frontier intelligence. This is not meant to run a company.
And then we have the last one: Nvidia. Nvidia might be the white knight in this story. They are investing $26 billion in building open-source artificial intelligence, and they may be the only company capable of doing it. They have an incredible revenue source, they have some of the best AI researchers in the world, and most importantly, they have the incentive to do it. It's okay for them to lose a bunch of money baking an AI model if that means the US economy, and maybe the world economy, is built on top of Nvidia infrastructure.
So they make these models, spend a ton of money on them, and give them away. And the problem was: hey, if my competitors are serving the model and they have bigger margins, that makes the business model not really work. For startups, that's fatal. But for Nvidia, their competitors are not really competitors; they're the ones buying Nvidia chips. The neoclouds and the hyperscalers are serving this open-source model that they didn't build, but they're serving it on Nvidia chips. So Nvidia is upstream of all of it, and they're making money either way. Nvidia might be the only company where the business model for open-source AI actually makes sense.
So here's a blog post from Mark Zuckerberg, from just under two years ago: "Open Source AI Is the Path Forward." Boy, how things have changed. He highlights: "Today, several tech companies are developing leading closed models, but open source is quickly closing the gap." It turns out that, because of the lack of investment in open-source AI in the US, they never truly closed the gap. Although, again, maybe it doesn't really matter; it only needs to be 99% as good. And this was when they were still releasing incredibly good open-source models under the Llama name: "Llama 3 is competitive with the most advanced models and leading in some areas." It turns out they were not able to continue that momentum.
Here, he lists out the specific reasons why open source is good for developers, and yes, all of these are very valid reasons. Let me show you: we need to train, fine-tune, and distill our own models; we need to control our own destiny and not get locked into a closed vendor; we need to protect our data; we need a model that is efficient and affordable to run; and we want to invest in the ecosystem that's going to be the standard for a long time. But as I said, they basically figured out the business model does not work, and they then released a closed-source model just a few weeks ago.
But I actually think open source could work, and it's all about building the standard. If your AI is the standard and everybody's building on top of it, you have a lot of influence on the future of AI. Still, you have the same problem: unless you already have a massive revenue source, like a Meta or a Google or an Nvidia, you're basically investing in building the model and then giving it away for free for others to compete with you.
All right. So why is it such a big problem if we decide to build on top of Chinese open-source models? I mean, they're giving them away for free, right? We don't have to actually buy the inference from China, and we don't have to send them our data. We can just host the models ourselves and serve them. And no, it is still a big problem.
If the United States economy is built on top of Chinese open source models, the only revenue stream we're going to have is to serve their models, to serve inference.
And then, all of a sudden, China is dictating AI standards. They might be optimizing their models for the chips they produce themselves. In fact, they are already doing that, because we have export controls that prevent Nvidia from sending its best chips to China. So what does China do? They find all these great efficiencies, these algorithmic unlocks, that allow their models to be trained on their own chips. Then, if all US enterprise is built on top of Chinese models, China has a lot of influence over the direction of the chip industry, because they're the ones building the models. If they decide, hey, we're going to optimize our models for our own chips, and we're going to design our chips like this, then all of a sudden we have to buy our AI chips from China. Obviously, you can see that is a big geopolitical risk.
Then there's also the notion that they might be able to influence cultural elements in the United States. Now, yes, when an open-source, open-weights model gets released, we can remove Chinese censorship and give it our own personality. But AI is still very much a black box, and we might not be able to fully remove what China has baked into these models. There might be subtle changes in the way we have cultural discussions in the United States, based on what we learn from and how we interact with Chinese-built models.
And of course, this also hurts the closed-source AI labs, because if we're not buying inference from them, they have no way to make money. And again, much of the US economy is invested in, and betting on, US AI working out and winning, specifically the closed-source labs. So there's kind of a double-edged sword here. On the one hand, we might be sending China a bunch of funds, because we're buying their chips and their models. On the other hand, it also hurts US AI labs, because they're closed source and we're just not buying from them, because they're so expensive.
But maybe none of this matters. Maybe the straight shot to AGI is the only thing that matters. That is certainly an argument. Okay, so maybe it all doesn't matter, but why? Well, Dario and the entire Anthropic company believe there is only one thing that matters, and that is the straight shot to AGI. That's the only thing that matters, because once one company reaches AGI and the hard takeoff begins, nobody else can catch up. And from then on, they basically own everything.
Anthropic has this gorgeous flywheel that I've talked about numerous times in videos: they have a coding model, they sell the coding model to enterprise companies and make a ton of money there, at $30 billion ARR right now, and then they take all of that data, plus the coding model itself, and build the next generation of that model. That is the self-improvement, the recursive loop, that we've talked about for AI, and it's incredibly powerful. So really, maybe none of this matters, because once Anthropic or OpenAI reaches AGI, once they reach that recursive self-improving loop, nothing else matters. They can develop the cheapest models, they can develop all of these efficiency strategies, they can solve cancer. I mean, they can literally do anything. Once you have unlimited intelligence that is capable of thinking through any problem, that's it. You're done.
So that is the counterargument to how important US open-source AI is. But here's the thing: that's just one potential outcome, and we don't know how long it's going to take. And in the interim, if US enterprise is using Chinese open-source models, then all of a sudden China gets to dictate and influence the future direction of artificial intelligence, of chip production and chip manufacturing, and we're in a very bad spot, because that disrupts the flywheel that allows an Anthropic or an OpenAI to reach AGI, to reach ASI, artificial superintelligence.
So we need to fix open-source AI in America. One potential way is obviously just Nvidia. Nvidia might be the savior: they have all the money, all the researchers, and all the incentive. Right? Show me the incentive and I'll show you the outcome. They have everything they need to continue to bet big on open source with their Nemotron family of models. But they can't be the only ones. We need startups. Small businesses are the lifeblood of the United States economy.
So here are a few options I'll throw out there. One: maybe we have federal grants, or some kind of federal compute quota, specifically for open-source AI companies. Open-source AI is a public good, so the federal government helping to subsidize it would actually be kind of a good thing. Now, typically I am not in favor of the US government picking winners, but in this case it's open source: it is a standard and it is a public good, so there might actually be a valid reason to do so.
Next, we can treat open source as national infrastructure: tax credits, accelerated depreciation, and sovereign procurement guarantees for US open-source models in defense, health care, finance, and energy. If we want a thriving US open-source AI economy, we actually have to buy from US open-source AI companies.
We can also lean into the hardware-funded model, like Nvidia's. Why isn't AMD doing this? Why isn't Intel doing this? They should be betting big on open source too, because again, if you're building open source as a hardware company, you can optimize the models for your own hardware and then sell your hardware to run them. It's a great business model.
Another solution: let's stop competing with general closed-source models. These closed-source models are incredible, and they are great at the absolute frontier of knowledge, but most people and most companies don't need that. So let's build smaller, more efficient, vertical application models: legal, biotech, code, defense. We can build vertical versions of these open-source models specific to those industries. Cheaper, better, faster.
And finally, here's something we've seen work really well in the past, especially for open source: defining standards. It might be a little early to define standards for AI, but it is certainly helpful. Instead of all of these open-source AI labs and startups competing against each other and having to define everything themselves, which is very costly and very time-intensive, if we had standards to build off of, it would save them a ton of time and money, and they could get to market much more easily.
By the way, the entire idea for this video came from something that happened just a few days ago: DeepSeek released an incredible model, basically validating every point I made in this video. Check it out right here.