Oracle Debuts First In-Database Large Language Models and Automated Vector Store

Oracle today announced the general availability of HeatWave GenAI, an innovation that introduces the industry's first large language models (LLMs) integrated within a database, together with an automated vector store. With these new capabilities, customers can build generative AI applications without prior AI experience, without moving data, and at no additional cost.

Key Innovations of HeatWave GenAI

HeatWave GenAI enables developers to create a vector store for unstructured enterprise content with a single SQL command, using built-in embedding models. It also supports natural-language search in a single step, using in-database or external LLMs, without moving data out of the database. Because of HeatWave's scale and performance, there is no need to provision GPUs, which reduces application complexity, improves performance and data security, and lowers costs.
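
The single-command workflow described above can be sketched as follows. The procedure name and option keys follow Oracle's published HeatWave GenAI examples, but the exact signature, the bucket URI, and the table name are illustrative assumptions and may differ by release:

```sql
-- Illustrative sketch: build a vector store from documents in an
-- object storage bucket (URI and table name are placeholders).
-- HeatWave parses, chunks, and embeds the files automatically.
CALL sys.VECTOR_STORE_LOAD(
  'oci://mybucket@mynamespace/docs/',
  '{"table_name": "demo_embeddings"}'
);
```

No separate vector database, ETL pipeline, or GPU provisioning is involved; the resulting embeddings table can then be queried alongside ordinary relational data.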

“HeatWave’s pace of innovation continues with the addition of HeatWave GenAI to its existing built-in capabilities,” commented Edward Screven, Chief Corporate Architect at Oracle. “The integrated and automated AI enhancements let developers build generative AI applications quickly and easily, without requiring AI expertise or data movement.”

Competitive Advantages and Exceptional Performance

According to independent benchmarks, HeatWave GenAI is 30 times faster than Snowflake, 18 times faster than Google BigQuery, and 15 times faster than Databricks in vector processing, while also offering significantly lower costs. These results highlight the efficiency and superior performance of HeatWave GenAI compared to its competitors.

Vijay Sundhar, CEO of SmarterD, stated: “HeatWave GenAI greatly simplifies the use of generative AI. The in-database LLMs and in-database vector creation significantly reduce application complexity and improve productivity at no additional cost.”

New Automated Features

In-Database LLMs: Enable the development of generative AI applications at lower cost, supporting data search, content generation and summarization, and retrieval-augmented generation (RAG) with the HeatWave Vector Store.
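
In-database generation and RAG might look like the following sketch. The routine names mirror Oracle's HeatWave GenAI documentation, but the model identifier, database and table names, and option keys are assumptions for illustration:

```sql
-- Illustrative sketch: in-database content generation
-- (the model_id value is a placeholder).
SELECT sys.ML_GENERATE(
  "Summarize the benefits of an in-database vector store.",
  JSON_OBJECT("task", "generation", "model_id", "mistral-7b-instruct-v1")
);

-- Retrieval-augmented generation grounded in a vector store table
-- (demo_db.demo_embeddings is a placeholder name).
SET @options = JSON_OBJECT(
  "vector_store", JSON_ARRAY("demo_db.demo_embeddings"));
CALL sys.ML_RAG(
  "What does our security policy say about key rotation?",
  @output, @options);
SELECT JSON_PRETTY(@output);
```

In the RAG call, the relevant document chunks are retrieved by semantic search from the vector store and passed to the LLM as context, all inside the database.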

Automated Vector Store: Customers can utilize generative AI with their enterprise documents without the need to move data to a separate vector database, automating all necessary steps.

Scale-out Vector Processing: Delivers extremely fast and accurate semantic search results using a scale-out architecture and a hybrid in-memory representation that enables vector processing at near-memory speeds.

HeatWave Chat: A Visual Studio Code plugin for MySQL Shell that provides a graphical interface for HeatWave GenAI, allowing queries in natural language or SQL while maintaining conversation context and citing source documents.
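
The chat interface is also exposed as a stored procedure; the call below follows Oracle's documented naming, though the exact interface is an assumption and the question is a placeholder:

```sql
-- Illustrative sketch: ask a question in natural language;
-- follow-up calls in the same session retain conversation context.
CALL sys.HEATWAVE_CHAT("What do our product docs say about backups?");
```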

Impact and Collaborations

AMD’s Vice President, Dan McNamara, expressed excitement for the ongoing collaboration with Oracle: “The joint work between AMD and Oracle is enabling developers to design innovative enterprise AI solutions leveraging HeatWave GenAI and AMD EPYC processors.”

HeatWave continues to position itself as the only cloud service offering automated, integrated generative AI and machine learning in a single offering for transactions and lakehouse-scale analytics. Available natively on OCI and Amazon Web Services, and on Microsoft Azure through Oracle Interconnect for Azure, HeatWave continues to drive innovation and efficiency in data management and cloud-based AI applications.
