An essential question for any company building infrastructure-level or application-level foundations or middleware. Two arguments, one for each side.
The AI Stack must be default open.
Expensive training and inference jobs will drive continued growth for cloud providers and semiconductor vendors. Well-capitalized cloud providers will do their best to make these workloads easy to run – commoditizing their complement (software) by open-sourcing libraries, optimizations, and models.
Companies are protective of their data. They do not want it exfiltrated accidentally by employees or through a model provider's online training. Domain-specific LM use cases will require fine-tuned models, and companies will build additional infrastructure around these self-hosted (cloud) models.
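To make the self-hosting pattern concrete, here is a minimal sketch, assuming a fine-tuned model served behind a company-internal endpoint so prompts and completions never leave the corporate network. The endpoint URL, model name, and payload shape are all hypothetical, not any particular vendor's API.

```python
# Minimal sketch: calling a fine-tuned model self-hosted in the company's own
# cloud (e.g., inside a VPC). Proprietary context in the prompt never transits
# a third-party model provider. Endpoint, model name, and response shape are
# hypothetical.
import requests

INTERNAL_ENDPOINT = "https://llm.internal.example.com/v1/completions"

def complete(prompt: str, max_tokens: int = 256) -> str:
    """Send a prompt to the self-hosted fine-tuned model and return its text."""
    resp = requests.post(
        INTERNAL_ENDPOINT,
        json={
            "model": "acme-support-ft-7b",  # hypothetical fine-tuned checkpoint
            "prompt": prompt,
            "max_tokens": max_tokens,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

# The prompt below can safely include proprietary data, because the request
# terminates inside the company's own infrastructure.
print(complete("Summarize this internal incident report: ..."))
```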
Companies also use OSS in other strategic ways – for distribution, hiring, hurting a competitor, marketing, or creating goodwill. Why is Meta giving widespread access to Llama? Why did Google open-source TensorFlow and Meta PyTorch? Why did Stability AI and Runway ML release Stable Diffusion?
The AI Stack will be default proprietary.
Foundation models will be foundational, i.e., there will be no alpha in hosting them yourself: base models will be commoditized and interchangeable, with little or no lock-in (the interface is just text).
Or, the best models will be trained on proprietary data, and fine-tuning or infrastructure will be a closely held competitive advantage (think Google's ranking). The models could be tightly coupled to hardware that might not be generally available.
Or, we end up in a multi-modal world where dozens of models are called in a single workflow. The infrastructure will be too heavyweight for companies to manage themselves. Will Coca-Cola self-host twelve sparse LMs, two diffusion models, and all the infrastructure in between?
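A hedged sketch of what such a workflow implies operationally, with every endpoint and stage name hypothetical: each model is a separate service to deploy, scale, monitor, and keep supplied with GPUs.

```python
# Sketch of a multi-model workflow: one request fans out across several
# self-hosted models. All endpoints and payload shapes are hypothetical; the
# point is the operational surface area, not any particular API.
import requests

ENDPOINTS = {
    # Each entry is a separate service with its own deployment, scaling,
    # monitoring, and GPU capacity planning.
    "draft_copy":    "https://lm-sparse-01.internal/v1/generate",
    "brand_check":   "https://lm-sparse-02.internal/v1/generate",
    "hero_image":    "https://diffusion-01.internal/v1/images",
    "image_variant": "https://diffusion-02.internal/v1/images",
}

def call(stage: str, payload: dict) -> dict:
    """One network hop per model; auth, retries, and queuing omitted."""
    resp = requests.post(ENDPOINTS[stage], json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()

def ad_campaign(brief: str) -> dict:
    # Four model calls for a single marketing asset; a real pipeline would add
    # embedding, moderation, and ranking models on top.
    copy = call("draft_copy", {"prompt": brief})
    approved = call("brand_check", {"prompt": copy["text"]})
    image = call("hero_image", {"prompt": approved["text"]})
    variant = call("image_variant", {"image_id": image["id"]})
    return {"copy": approved["text"], "images": [image["id"], variant["id"]]}
```

Few companies outside of tech will want to run, patch, and capacity-plan a dozen of these services themselves.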
Employees might use LMs ubiquitously in their daily work – just like the Internet, Google Search, or StackOverflow. Does proprietary data get exfiltrated through a search today? Probably. Can you stop employees from pasting it into a prompt? Probably not. Companies will learn to live with the risk.
OSS works best when an API has many touchpoints. Think Kubernetes – which touches storage, compute, networking, operating systems, authn/z, and everything else. It's hard to design a highly modular proprietary system. But the AI stack will be far friendlier to integrate – a simple, generic API call carrying text or another well-known, easy-to-serialize MIME type.
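Contrast that with the entire integration surface of a hosted LM, which is roughly one HTTP call carrying text. A minimal sketch, with illustrative URLs and payload fields rather than any vendor's actual schema:

```python
# Sketch of the "just text" integration point: text in, text out. The URLs,
# header, and JSON fields are illustrative, not a real provider's schema.
import requests

def generate(provider_url: str, api_key: str, prompt: str) -> str:
    """Call a hosted LM; the only contract is a prompt string and a reply string."""
    resp = requests.post(
        provider_url,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

# Because the payload is plain text, swapping providers is a one-line change:
# generate("https://provider-a.example/v1/generate", KEY_A, "Hello")
# generate("https://provider-b.example/v1/generate", KEY_B, "Hello")
```

There is no Kubernetes-scale web of touchpoints to standardize, which weakens one of the classic forces pushing a stack toward open source.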