The Subtle Strategic Moat-Widening of OpenAI's Dev Day
As a developer, I'm excited. As an OpenAI investor, even more so.
As you know, OpenAI has been on fire. First with the launch of ChatGPT to an unsuspecting world in November 2022.
And now, with a slew of updates and improvements launched at their first Dev Day event yesterday (Nov 6 2023).
Lots has been written about all the new features and capabilities, so I’m not going to dig into that here. But, not a lot has been written about the strategic benefits of some of these launches.
LLMs Are Somewhat of a Commodity
GPT-4 is the most capable LLM out there. But that does not confer as much of an absolute strategic advantage as one might think. Why? Because if you put aside the power of an individual model (like GPT-4), the underlying interface is actually very simple.
LLMs are simply a function that takes text in (the prompt) and provides text back out. Now granted, there’s a lot of variability in power and capability, but the interface is quite consistent.
This is why it’s relatively easy to move from one model to another to try it out. If you’re using GPT-3.5 or GPT-4, trying out Anthropic’s Claude (to gain access to its 100k-token context window) or the open source Mistral LLM is relatively straightforward. You’re still passing text in and getting text out. And there are new, quite capable LLMs coming out seemingly every week.
Because the interface is simple and consistent, as new models come out, it’s not too hard to actually try out a new model or, in some cases, use model X for some use cases and model Y for others.
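To make that concrete, here’s a minimal sketch of that text-in, text-out interface using the OpenAI Python SDK’s chat completions call. The model name and prompt are just illustrative; the point is the shape of the function, since an Anthropic or Mistral client can slot into the same wrapper with a prompt string going in and a completion string coming out.

```python
# Minimal sketch of the "text in, text out" interface (OpenAI Python SDK, v1.x style).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def complete(prompt: str, model: str = "gpt-4") -> str:
    """The whole interface: pass text in, get text back out."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Switching providers mostly means swapping the body of this function;
# the code calling it doesn't need to know which model answered.
print(complete("Summarize OpenAI's Dev Day in one sentence."))
```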
Strategic Impact Of Dev Day Launches
Now, let’s look at some of the key new capabilities that OpenAI launched. I want to focus on the ones that don’t make the foundational model better per se, but just make the model easier to work with.
Anything that raises the level of abstraction that developers can work at confers an advantage, because the higher the level of abstraction, the more developers you get on the platform (all things being equal).
But there’s a subtle yet significant side-effect of these improved abstractions: to benefit from them, you give up the simple interface. It’s no longer just pass text in, get text out.
For example: In the new Assistants API, OpenAI will now do the memory management for you. So you can implement a conversational-style interaction in your chatbot without having to worry about context windows, sliding windows of memory, selective summarization or a bunch of other things. You just use the new APIs to interact with the LLM and it manages all the memory for you.
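To see how far this departs from text-in, text-out, here’s a rough sketch of the Assistants API flow as announced at Dev Day (the assistant name, instructions, and model string are placeholders): you create an assistant, open a thread, append messages, and kick off a run, and OpenAI holds the conversation state server-side.

```python
# Rough sketch of the Assistants API flow (beta, as announced at Dev Day).
from openai import OpenAI

client = OpenAI()

# Create an assistant once; OpenAI persists it on their side.
assistant = client.beta.assistants.create(
    name="Support bot",
    instructions="Answer questions about our product.",
    model="gpt-4-1106-preview",
)

# A thread holds the conversation; you never touch the context window yourself.
thread = client.beta.threads.create()

# Append the user's message to the thread.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="How do I reset my password?",
)

# Run the assistant against the thread; OpenAI decides what history to include.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

# Once the run completes, the assistant's reply shows up as a new message
# on the thread (client.beta.threads.messages.list(thread_id=thread.id)).
```

Notice there’s no prompt string being assembled anywhere: the assistant, the thread and the messages are all OpenAI-side objects.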
Why is this a big deal?
Because the more developers start using this new API with memory management, the less of a commodity GPT-4 becomes. Now you can’t just willy-nilly switch to another model that comes out next week without first considering whether it supports memory management, and even if it does, you have to figure out whether your code has to change to match however that new model supports memory management. OpenAI does it elegantly with the notion of assistants, threads and messages. But there’s no requirement that other LLMs have to use those same concepts.
Same with the new Retrieval features in the Assistants API. You get a lot of power, and you get it simply, but you have to use the feature in the way OpenAI designed it. Same with Code Interpreter and data analysis. All of these are massively powerful features, and all make OpenAI’s platform different from the other models that are out there.
Different not just in what OpenAI can do, but different in terms of how you interact with the platform and use the capabilities.
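For illustration, here’s roughly what wiring up Retrieval and Code Interpreter looks like in the initial Assistants beta. The file name is made up, and the tool and parameter names reflect the API as announced (and may change as the beta evolves), so treat this as a sketch rather than a reference.

```python
# Sketch of enabling Retrieval and Code Interpreter on an assistant (initial beta).
from openai import OpenAI

client = OpenAI()

# Upload a document for the assistant to retrieve from.
doc = client.files.create(
    file=open("product_manual.pdf", "rb"),  # hypothetical file
    purpose="assistants",
)

# One flag turns on Retrieval, another turns on Code Interpreter;
# both are expressed in OpenAI's vocabulary, not a generic one.
assistant = client.beta.assistants.create(
    name="Analyst",
    instructions="Answer from the manual; run code for any data analysis.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}, {"type": "code_interpreter"}],
    file_ids=[doc.id],
)
```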
Net result: the more people use these features, the higher the switching costs, and the wider the moat around OpenAI. That’s the strategic benefit.
Now, of course, open source libraries like LangChain will emerge to help abstract away these differences and let you move across models while still preserving some of these new features. But as well-intentioned and well-executed as those implementations will be, there will always be leaks in the abstractions. Things won’t always work exactly the way you want. Switching models will still require some thought and consideration.
So people will stay in the warm confines of OpenAI’s platform longer, because there’s no reason to try out other models and an increasing number of reasons not to. It’s cold and chaotic out there.
And just like nobody got fired for buying IBM back in the day, nobody gets fired for building with OpenAI.
This makes OpenAI the company more valuable. It’s strategy 101.
p.s. Apologies for the wonky image/illustration. I was lazy and just gave this post to DALL-E 3 and let it create a visual. It’s not awful, but there’s clearly work to do.