The 2016 influx of deep learning startups was marked by human-in-the-loop AI. Chatbots and magic assistants were powered by a human on the other side, and driverless cars relied on a remote driver to handle most interventions.
The general playbook in 2016 went something like this:
The performance of deep neural networks scales with data and compute. Extrapolating 2016's results over the next few years suggests we'll soon have ubiquitous human-level conversational AI and other sophisticated agents.
To prepare for this, we'll be first to market by selling the same services today, except with humans augmented by the models. While this will initially be a loss leader, it will be extremely powerful once the models are good enough to power the interaction on their own (without humans). By then, we'll have the best distribution.
Of course, we still don't have conversational AI that can power magic assistants or chatbots without a human in the loop. Most of these startups ran out of money. The most successful either sold to strategic buyers (Cruise/GM, Otto/Uber, Zoox/Amazon) or sold picks and shovels (Scale AI).
Extrapolating the performance of ML models is challenging: more data, more compute, or a different architecture doesn't always mean better performance (look at some of the initial results from Stable Diffusion 2.0).
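To make that concrete, here's a minimal sketch (with made-up loss numbers, purely illustrative) of the kind of extrapolation the 2016 playbook relied on: fit a power law, loss ≈ a·C^(−b), to a few observed points and project it far beyond the data. The fit itself says nothing about whether the trend actually holds out there.

```python
import numpy as np

# Hypothetical validation losses at increasing compute budgets.
# These numbers are made up for illustration, not from any real model.
compute = np.array([1e18, 1e19, 1e20, 1e21])  # training FLOPs
loss = np.array([3.2, 2.6, 2.1, 1.7])         # validation loss

# A power law, loss = a * C^(-b), is a straight line in log-log space,
# so we can fit it with ordinary least squares on the logs.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

# Extrapolate two orders of magnitude beyond the observed range.
projected = a * (1e23 ** -b)
print(f"loss ~= {a:.1f} * C^(-{b:.3f})")
print(f"projected loss at 1e23 FLOPs: {projected:.2f}")
# The projection assumes the trend continues; nothing in the fit
# guarantees that it does. Data quality, architecture changes, or
# task ceilings can all bend the curve.
```

The 2016 bet was, in effect, that this curve would keep going long enough to remove the human from the loop on schedule.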
We don't seem to be repeating 2016's mistakes in the era of generative AI. Some companies are solving for distribution by building on someone else's proprietary model (e.g., Jasper AI on GPT-3), but these products deliver real value to customers today, with no human in the loop. If LLM performance plateaued, these companies would likely still retain some intrinsic value.