Throughout this journey, we've discovered that it is one thing to build a search engine, and an entirely different thing to convince regular users to switch to a better choice.
On the other hand, ChatGPT is the fastest-growing product in human history, because it beats Google for many types of searches. A friend of mine recently said that a good old Google search now feels like having to go to the library.
I wouldn't be surprised if the future of search comes from an unexpected angle. The cost to train a model with a basic understanding of the world and human language might drop enough that hobbyists can do it. Domain-specific knowledge might then be learned on top of that separately, creating "specialist LLMs". A web of such LLMs with domain-specific knowledge might be able to answer questions better than a single large net, similar to how humans work in teams of specialists.
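One way to picture that "web of specialists" is a cheap router in front of several domain models. The sketch below is purely illustrative: the domains, keywords, and functions are all made up, and a real system would replace the keyword matching with a small classifier and the stub answers with actual LLM calls.

```python
import re

# Toy sketch of a "web of specialist LLMs": a cheap router picks a
# domain, then the query would be handed to that domain's specialist.
# Everything here is a stand-in, not a real API.

DOMAIN_KEYWORDS = {
    "medicine": {"symptom", "dosage", "diagnosis"},
    "law": {"contract", "liability", "statute"},
    "cooking": {"recipe", "simmer", "marinade"},
}

def route(query: str) -> str:
    """Pick the specialist whose keywords best match the query."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    scores = {d: len(words & kw) for d, kw in DOMAIN_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a generalist model when no specialist matches.
    return best if scores[best] > 0 else "generalist"

def answer(query: str) -> str:
    specialist = route(query)
    # A real system would call the specialist LLM here; this stub
    # only reports which one would be consulted.
    return f"[{specialist}] would answer: {query}"

print(answer("What dosage is safe for this symptom?"))
print(answer("How long should I simmer the marinade?"))
```

The interesting design question is the router itself: it has to be much cheaper than the specialists, yet good enough that queries rarely land on the wrong expert.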
I think we should stop talking about search engines (meaning "a google") as a single service, since a search engine really isn't one; it's a series of more or less interconnected services. The notion of Internet Search is too nebulous to support meaningful discussion.
It's much more enlightening to talk about which demand is being satisfied.
Google satisfies several disparate demands, including:
* Internet discovery
* Product discovery
* E-commerce discovery
* Brick-and-mortar commerce discovery
* Geographical discovery
* Fact discovery (question-answering)
All of these services have very little friction. Having a single interface helps with that, but I don't think it's a necessity; low friction is. Nobody wants to sign up for a service to get what Google gives away without that hassle.
It does most of these things decently well, largely thanks to being able to profile its users accurately. I don't think a competitor will replace Google by trying to copy their model and do all these things. Google is far too entrenched.
There's really no reason why you would need to compete against the combined offering of Google.
At several of these tasks, it's quite possible to outperform them. Especially in commercial discovery, there really aren't any good offerings right now. Finding the best something for a given price range is frustrating, time-consuming and annoying.
LLM-based question-answering mostly satisfies the fact-discovery need, not so much the others. This is of course fine, but it's important to understand that Google's killer functionality has never been answering questions; it's never been very good at that task.
Arguably, the seamless localization of the results is a much more important aspect.
Linux was out in 1991 ... Google was a latecomer in 1998, and for many years other search engines, both in general and especially in the field where I did a lot of web searching, returned much better results. Google eventually began to perform much better, returning slightly better results, IIRC around 2002, and then kept improving ... Also, after M$ and a few other players had specialist string-search engines pushed off the web (if anyone recalls M$ being upset that people could search for some of their leaked code ...), Google was the next best alternative, though incredibly limited at the same task.
Interesting that this is their approach. I think in time we will find that the winner is not specialized LLMs but a single big one like the one ChatGPT exposes, because it allows faster and easier access to information without knowing the domain ahead of time. When Google came out in the late 90s, we had to learn the right way to search, and that became a skill. The advantage of natural language as the interface is the possible removal of needing to know the right questions to ask as precisely as we have had to in the past. To me, that is the big breakthrough of ChatGPT with respect to search, and it remains to be seen if and when Google will figure that out.