Vector databases, a relatively new class of database that can store and query unstructured data such as images, text, and video, are gaining popularity among developers and enterprises who want to build generative AI applications such as chatbots, recommendation systems, and content creation tools.
One of the leading providers of vector database technology is Pinecone, a startup founded in 2019 that has raised $138 million and is valued at $750 million. The company said Thursday it has “way more than 100,000 free users and more than 4,000 paying customers,” reflecting an explosion of adoption by developers from small companies as well as enterprises that Pinecone said are experimenting like crazy with new applications.
By contrast, the company said that in December it had free users numbering only in the low thousands, and fewer than 300 paying customers.
Pinecone held a user conference on Thursday in San Francisco, where it showcased some of its success stories and announced a partnership with Microsoft Azure to speed up generative AI applications for Azure customers.
Bob Wiederhold, the president and COO of Pinecone, said in his keynote talk that generative AI is a new platform that has eclipsed the internet platform, and that vector databases are a key part of the solution to enable it. He said the generative AI platform is going to be even bigger than the internet, and “is going to have the same and probably even bigger impacts on the world.”
Vector databases are a distinct class of database for the generative AI era
He explained that vector databases allow developers to access domain-specific information that is not available on the internet or in traditional databases, and to update it in real time. This way, they can provide better context and accuracy for generative AI models such as ChatGPT or GPT-4, which are often trained on outdated or incomplete data scraped from the web.
Vector databases enable semantic search: any kind of data is converted into vectors, which can then be queried with “nearest neighbor” search. Developers can use the retrieved information to enrich the context window of their prompts. This way, “you will have far fewer hallucinations, and you will allow these fantastic chatbot technologies to answer your questions correctly, more often,” Wiederhold said.
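The retrieval pattern Wiederhold describes can be sketched in a few lines. This is a hedged, minimal illustration, not Pinecone's implementation: the `embed` function here is a toy bag-of-words stand-in for a real learned embedding model, and the document texts are invented for the example.

```python
# Minimal sketch of retrieval-augmented prompting: embed documents and a
# query as vectors, find the query's nearest neighbors, and paste them into
# the prompt as context. A real system would use a learned embedding model
# and a vector index (such as Pinecone); embed() here is only a toy.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": term counts, skipping very short stopword-like tokens.
    tokens = [t for t in text.lower().replace("?", "").replace(".", "").split()
              if len(t) > 2]
    return Counter(tokens)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Brute-force nearest-neighbor search; real indexes avoid the full scan.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Invented documents standing in for domain-specific company data.
docs = [
    "Refund requests must be filed within 30 days of purchase.",
    "Our support desk is open weekdays, 9am to 5pm Pacific.",
    "Enterprise plans include a dedicated account manager.",
]
question = "How do I get a refund?"
context = "\n".join(nearest(question, docs))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The enriched prompt grounds the model's answer in retrieved company data rather than whatever it memorized during training, which is the mechanism behind the hallucination reduction Wiederhold mentions.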
Wiederhold’s remarks came after he spoke Wednesday at VB Transform, where he explained to enterprise executives how generative AI is changing the nature of the database, and why at least 30 vector database competitors have popped up to serve the market.
Wiederhold said that large language models and vector databases are the two key technologies for generative AI.
Whenever new data types and access patterns appear, and the market is large enough, a new subset of the database market forms, he said. That happened with relational databases and NoSQL databases, and it is happening now with vector databases: vectors are a very different way to represent data, and nearest-neighbor search is a very different way to access it.
He explained that vector databases partition data more efficiently for this new paradigm, and so are filling a void that other databases, such as relational and NoSQL databases, are unable to fill.
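The partitioning idea can be illustrated with an inverted-file (IVF-style) sketch: vectors are assigned to the nearest of a few centroids, and a query probes only its closest partition instead of scanning everything. This is a generic approximate-nearest-neighbor technique, not Pinecone's actual architecture; the centroid seeding here is deliberately crude, where real systems use k-means at far larger scale.

```python
# IVF-style partitioned nearest-neighbor search (illustrative sketch).
# Assign each vector to its nearest centroid; at query time, search only
# the closest partition(s) rather than the whole collection.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_index(vectors, n_parts=3):
    # Crude seeding: use the first n_parts vectors as centroids.
    # Real systems run k-means here.
    centroids = vectors[:n_parts]
    parts = {i: [] for i in range(n_parts)}
    for idx, v in enumerate(vectors):
        best = min(range(n_parts), key=lambda i: dist(v, centroids[i]))
        parts[best].append(idx)
    return centroids, parts

def search(query, vectors, centroids, parts, n_probe=1):
    # Probe only the n_probe partitions whose centroids are closest.
    order = sorted(range(len(centroids)),
                   key=lambda i: dist(query, centroids[i]))
    candidates = [idx for i in order[:n_probe] for idx in parts[i]]
    return min(candidates, key=lambda idx: dist(query, vectors[idx]))

vectors = [(0.0, 0.0), (10.0, 10.0), (0.0, 1.0), (9.0, 9.0), (5.0, 0.0)]
centroids, parts = build_index(vectors)
print(search((8.0, 8.0), vectors, centroids, parts))  # prints 3
```

Because each query touches only a fraction of the stored vectors, this kind of partitioning is what lets purpose-built vector databases keep latency low at scales where a full scan would be prohibitive.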
He added that Pinecone has built its technology from scratch, without compromising on performance, scalability, or cost. He said that only by building from scratch can you have the lowest latency, the highest ingestion speeds, and the lowest cost of implementing use cases.
He also said that the winning databases will be those that have built the best managed cloud services, and that Pinecone has delivered there as well.
However, Wiederhold also acknowledged Thursday that the generative AI market is going through a hype cycle and that it will soon hit a “trough of reality” as developers move on from prototyping applications that have no ability to go into production. He said this is a good thing for the industry, as it will separate the real, production-ready, impactful applications from the “fluff” of prototypes that currently make up the majority of experimentation.
Signs of cooling off for generative AI, and the outlook for vector databases
Signs of the tapering off, he said, include a decline in June in the reported number of ChatGPT users, as well as Pinecone’s own adoption trends, which show that an “incredible” pickup from December through April has halted. “In May and June, it settled back down to something more reasonable,” he said.
Wiederhold responded to questions at VB Transform about the market size for vector databases. He said it’s a very big or even enormous market, but that it’s still unclear whether it will be a $10 billion market or a $100 billion market. He said that question is getting sorted out as best practices get worked out over the next two or three years.
He said that there is a lot of experimentation going on with different ways to use generative AI technologies, and that one big question has arisen from a trend toward larger context windows for LLM prompts. If developers can stick more of their data, perhaps even their entire database, directly into a context window, then a vector database wouldn’t be needed to search the data.
But he said that is unlikely to happen. He drew an analogy with humans who, when swamped with information, can’t come up with better answers. Information is most useful when it’s manageably small so that it can be internalized, he said. “And I think the same kind of thing is true, in terms of the context window in terms of putting huge amounts of information into it.” He cited a Stanford University study that came out this week that looked at existing chatbot technology, and which found that smaller amounts of information in the context window produced better results. (VentureBeat has asked for more information on the study, and will update once we hear back from Pinecone).
Also, he said large enterprises are experimenting with training their own foundation models, and others are fine-tuning existing foundation models, and both of these approaches can bypass the need for calling on vector databases. But both approaches require a lot of expertise, and are expensive. “There’s a limited number of companies that are going to be able to take that on.”
Separately, at Transform on Wednesday, this question about building models or simply piggybacking on top of GPT-4 with vector databases was a key one for executives across the two days of sessions. Naveen Rao, CEO of MosaicML, which helps companies build their own large language models, acknowledged that there are a limited number of companies that have the scale to pay $200,000 for model building and that also have the data expertise, preparation, and other infrastructure necessary to leverage those models. He said his company has 50 customers, but that it has had to be selective to reach that number. That number will grow over the next two or three years, though, as those companies clean up and organize their data, he said. That promise, in part, is why Databricks announced last week that it will acquire MosaicML for $1.3 billion.