Multi-cloud was the pitch in the early innings of cloud computing. Companies were on high alert after the previous generation of on-prem vendor lock-in. Keep your infrastructure generic – use multiple clouds so that you aren't stuck with a single vendor like AWS or Google Cloud. Startups dreamed of disintermediating the clouds by offering a software layer over the cloud providers (all the margin, none of the CAPEX).
But multi-cloud never materialized. Even the services that look most like commodities, basic storage and compute, aren't interchangeable in practice. Expertise doesn't translate from one cloud to another – DevOps engineers familiar with AWS services, configuration, and concepts usually can't bring that sophistication to other clouds. It's difficult to extend logical boundaries like VPCs, IAM, and other foundational pieces of infrastructure across clouds. Not to mention the egress costs.
The value chain for Generative AI might end up looking a lot different. Sure, models vary in expressiveness – OpenAI has the biggest models with the most parameters. But the interfaces are so simple that the models are mostly interchangeable. Already, one company, GooseAI, offers hosted open-source LLMs that work natively with the OpenAI Python library – switching over means changing a single line of code. I'm sure many more will follow.
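As a rough sketch of how small that switch is, assuming the classic (pre-1.0) `openai` Python library, and with the endpoint URL and engine name being assumptions on my part rather than anything from GooseAI's docs:

```python
import os
import openai  # classic (pre-1.0) openai library, which exposes api_base

openai.api_key = os.environ["GOOSEAI_API_KEY"]
# The single-line change: point the client at an OpenAI-compatible
# endpoint instead of api.openai.com (URL assumed for illustration).
openai.api_base = "https://api.goose.ai/v1"

completion = openai.Completion.create(
    engine="gpt-neo-20b",  # assumed engine id for a hosted open-source model
    prompt="Multi-cloud never happened because",
    max_tokens=40,
)
print(completion.choices[0].text)
```

Everything else (authentication pattern, request shape, response parsing) stays the same, which is exactly why the switching cost is so low.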
Customers will find it easy to go multi-model. Fewer touch points. Similar interfaces. The training data is roughly the same (OpenAI doesn't have many proprietary data sources beyond the ones provided by Microsoft/GitHub). Maybe startups can disintermediate the foundational model researchers (analogous to the cloud providers who run the data centers). These platforms might be multi-model to their customers, picking the cheapest or most efficient model for each task. Models may use other models to check quality, asking whether an output is correct. Or ensembles may pick the best answer from a variety of different foundational models, as in the sketch below.
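A minimal, hypothetical sketch of what such a platform layer could look like; the model names, prices, and `call` stubs are all invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str                   # hypothetical provider/model identifier
    cost_per_1k_tokens: float   # illustrative pricing, not real quotes
    call: Callable[[str], str]  # client function wrapping that provider's API

def route(models: list[Model], can_handle: Callable[[Model], bool]) -> Model:
    """Pick the cheapest model judged capable of the task."""
    return min((m for m in models if can_handle(m)),
               key=lambda m: m.cost_per_1k_tokens)

def checked_answer(prompt: str, worker: Model, checker: Model) -> str:
    """One model answers; a second model judges whether the output is correct."""
    answer = worker.call(prompt)
    verdict = checker.call(
        f"Question: {prompt}\nAnswer: {answer}\nIs this answer correct? yes/no"
    )
    # Fall back to the checker's own attempt (one of many possible policies).
    return answer if "yes" in verdict.lower() else checker.call(prompt)

def ensemble_answer(prompt: str, models: list[Model], judge: Model) -> str:
    """Collect candidates from several models and let a judge pick the best."""
    candidates = [m.call(prompt) for m in models]
    numbered = "\n".join(f"{i}: {c}" for i, c in enumerate(candidates))
    choice = judge.call(
        f"Question: {prompt}\nCandidates:\n{numbered}\n"
        "Reply with the number of the best candidate."
    )
    digits = "".join(ch for ch in choice if ch.isdigit())
    return candidates[int(digits) % len(candidates)] if digits else candidates[0]

if __name__ == "__main__":
    # Stub clients standing in for real provider SDK calls.
    small = Model("open-source-20b", 0.002, lambda p: "draft answer")
    large = Model("frontier-model", 0.06, lambda p: "yes")
    print(route([small, large], can_handle=lambda m: True).name)
    print(checked_answer("2 + 2?", worker=small, checker=large))
```

A request could first go through `route` for cost, then `checked_answer` for quality, with `ensemble_answer` reserved for high-stakes queries. Note that none of this needs anything from the underlying labs beyond a completion endpoint, which is the whole point.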