A few thoughts on how LLMs might (and might not) disrupt Google.
Last year, I asked a more generalized version of this question in How to Beat Google Search. Funnily enough, I had written about the open-source GPT-3 alternative (GPT-J) only two days before and made zero connection between the two. But now, as LLMs become more sophisticated, more people are using prompts to query specific knowledge.
Why can't Google do this? Much of the AI research that underpins LLMs originated at Google. The company has no lack of talent or sophistication when it comes to this technology. So here are a few reasons why Google might be disrupted by the AI it helped create.
1. Innovator's dilemma.
LLMs change the nature of search so that significantly fewer ads can be shown. What if the Search Engine Results Page (SERP) no longer made sense for a set of queries? Google launched the Knowledge Graph in 2012 (the boxes that summarize information for queries). Chrome will even answer some queries in the Omnibox itself. If LLMs drastically increase the number and types of queries that can be answered this way, it could materially shrink SERP real estate and, thus, ad revenue.
Wikipedia probably took a hit to traffic growth around the time the Knowledge Graph became popular. LLMs probably threaten information-heavy sites like Wikipedia more than Google.
It's also possible that startups can gain distribution by offering smaller-scale services before the unit-economics make sense. OpenAI might be able to afford to lose $0.001/query in this phase. Google couldn't. In the future, I imagine the cost of inference will drop dramatically as it becomes optimized.
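To make the unit-economics point concrete, here is a back-of-envelope sketch. All figures are illustrative assumptions (the $0.001/query subsidy is from the paragraph above; the query volumes are hypothetical round numbers, not actual OpenAI or Google traffic):

```python
# Back-of-envelope: why a startup can subsidize queries that Google can't.
# All numbers are assumptions for illustration only.

LOSS_PER_QUERY = 0.001  # dollars subsidized per query (figure from the text)

startup_queries_per_day = 1_000_000        # assumed early-stage product volume
google_queries_per_day = 8_500_000_000     # assumed Google-Search-scale volume

startup_daily_loss = LOSS_PER_QUERY * startup_queries_per_day
google_daily_loss = LOSS_PER_QUERY * google_queries_per_day

print(f"startup subsidy:      ${startup_daily_loss:,.0f}/day")
print(f"google-scale subsidy: ${google_daily_loss:,.0f}/day")
```

Under these assumed volumes, the startup eats a few thousand dollars a day while the same per-query loss at Google's scale runs to millions per day, before inference costs fall with optimization.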
2. Reputational risk.
OpenAI lets anyone query their models. Stability AI even open-sourced its model and weights. Meta briefly launched Galactica before shutting it down. Google has published papers that mirror every development in LLMs but has not released anything that people can play around with. Why?
There's a huge reputational risk for Google in allowing public access to a model that might produce racist, biased, or otherwise offensive output (after all, these models are trained on internet data). Companies with less to lose can more freely launch these models and capture distribution.
There's even greater reputational risk in replacing existing products with LLMs. Consumers trust Google to give them the right results. A few bad results or a search that takes too long could go a long way toward eroding decades of user trust in Google. LLMs confidently hallucinate information and present it as fact. The failure modes for LLMs aren't fully understood yet – output is still unpredictable.
Counterarguments.
Distribution often beats product. Google has the best internet distribution you could ask for – hardware (Pixel, Chromebooks), operating systems (Android, ChromeOS), web browsers (Chrome, Chromium), and other distribution paths (Google/Apple deal for default iOS search, Maps, etc.). How much better can GPT-3 be than Google Search?
Google has the best access to large proprietary datasets and compute. Even though GPT-3 was mostly trained on non-proprietary data (Commoditization of Large Language Models), other models like GitHub Copilot rely on specific data.