Confluent has introduced new features in Confluent Cloud for Apache Flink® that simplify the development of real-time artificial intelligence (AI) applications. Among the highlights is Flink Native Inference, which removes the need for complex external workflows by letting teams run any open-source AI model directly on Confluent Cloud.
Another significant addition is search in Flink, which centralizes access to data across multiple vector databases so that information can be discovered and retrieved from a single interface.
Additionally, Confluent has integrated new machine learning (ML) capabilities into Flink SQL, enabling advanced AI-driven use cases, such as forecasting and anomaly detection, to be applied directly in SQL.
With all these innovations, Confluent is redefining how companies can leverage artificial intelligence to improve customer engagement and real-time decision-making.
“Building real-time AI applications has been too complex for too long, requiring a maze of tools and in-depth knowledge just to get started,” says Shaun Clowes, Chief Product Officer at Confluent. “With the latest advancements in Confluent Cloud for Apache Flink, we are breaking down those barriers, putting AI-driven streaming data intelligence within reach of any team. What once required a mosaic of technologies can now be done seamlessly within our native platform, with enterprise-level security and built-in cost efficiencies,” he adds.
The AI boom is already here. According to McKinsey, 92% of companies plan to increase their investments in AI over the next three years. Organizations want to seize this opportunity and capitalize on the promise of AI. However, the path to building real-time AI applications is complicated: developers juggle multiple tools, languages, and interfaces to incorporate ML models and extract valuable context from the many places where data resides. This fragmented workflow leads to costly inefficiencies, operational slowdowns, and AI hallucinations that can damage a company's reputation.
Simplifying the Path to AI Success
“Confluent helps us accelerate co-pilot adoption for our customers, giving teams access to valuable organizational insights in real time,” comments Steffen Hoellinger, Co-founder and CEO at Airy. “Confluent’s data streaming platform with Flink AI Model Inference simplified our tech stack by allowing us to work directly with large language models (LLMs) and vector databases for retrieval-augmented generation (RAG) and schema intelligence, providing real-time context for smarter AI agents. As a result, our customers have achieved greater productivity and better workflows across their business operations,” he states.
As the only serverless stream processing solution on the market that unifies real-time and batch processing, Confluent Cloud for Apache Flink enables teams to manage both continuous data streams and batch workloads on a single platform. This removes the complexity and operational overhead of running separate processing systems. With the new AI, ML, and analytics capabilities, companies can streamline their workflows and achieve greater efficiency. These features are available through an early access program, open for registration to Confluent Cloud customers.
- Flink Native Inference: run open-source AI models on Confluent Cloud without additional infrastructure management.
When working with ML models and data pipelines, developers often use separate tools and languages, resulting in complex, fragmented workflows and stale data. Flink Native Inference simplifies this by allowing teams to run open-source or fine-tuned AI models directly on Confluent Cloud. This approach offers greater flexibility and cost savings, and because data never leaves the platform for inference, it adds an extra layer of security. A hedged sketch of what this could look like in Flink SQL follows.
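As a rough illustration only: Confluent Cloud for Apache Flink already exposes model inference through a CREATE MODEL statement and the ML_PREDICT function. The sketch below assumes Native Inference keeps that pattern; the 'provider' value and all model, table, and column names are hypothetical placeholders, not confirmed early-access syntax.

```sql
-- Register a model for inference. With Native Inference, the model is
-- assumed to be served inside Confluent Cloud itself (the 'provider'
-- value and all names here are hypothetical placeholders).
CREATE MODEL support_classifier
INPUT (ticket_text STRING)
OUTPUT (label STRING)
WITH (
  'provider' = 'confluent',   -- assumption: in-platform model serving
  'task' = 'classification'
);

-- Apply the model to a stream with ML_PREDICT.
SELECT ticket_id, label
FROM support_tickets,
     LATERAL TABLE(ML_PREDICT('support_classifier', ticket_text));
```

Because the model runs where the data already lives, no records leave the platform during inference, which is the security property the feature description emphasizes.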
- Search in Flink: use a single interface to access data from multiple vector databases.
Vector searches give large language models (LLMs) the context they need to avoid hallucinations and ensure reliable results. Search in Flink simplifies real-time access to data in vector databases such as MongoDB, Elasticsearch, and Pinecone. This eliminates the need for complex ETL processes or manual data consolidation, saving valuable time and resources while ensuring that the data is contextual and always up to date. A hedged sketch of the idea follows.
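Purely as a sketch of the idea: the example below imagines a Flink SQL external table backed by a vector store, plus a hypothetical VECTOR_SEARCH table function for retrieval. Neither the connector options nor the function name are confirmed syntax; they stand in for whatever interface the early-access feature exposes.

```sql
-- Hypothetical external table over a vector database (connector and
-- connection options are placeholders, not confirmed syntax).
CREATE TABLE docs_index (
  doc_id    STRING,
  content   STRING,
  embedding ARRAY<FLOAT>
) WITH (
  'connector'  = 'mongodb',               -- could equally be elasticsearch or pinecone
  'connection' = 'my-mongodb-connection'  -- placeholder connection name
);

-- Hypothetical retrieval: for each incoming question, fetch the three
-- nearest documents to its embedding, to be used as RAG context.
SELECT q.question, d.content
FROM questions AS q,
     LATERAL TABLE(
       VECTOR_SEARCH(TABLE docs_index, 3, DESCRIPTOR(embedding), q.question_embedding)
     ) AS d;
```

The point of the single interface is that swapping MongoDB for Elasticsearch or Pinecone would change only the table definition, not the query.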
- Integrated ML capabilities: make data science skills accessible to more teams.
Many data science solutions require highly specialized knowledge, creating bottlenecks in development cycles. Integrated ML capabilities bring complex tasks such as forecasting, anomaly detection, and real-time visualization directly into Flink SQL. These features make real-time AI accessible to more developers, helping teams surface actionable insights faster and make smarter decisions with greater speed and agility. A hedged sketch follows.
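Again as a hedged sketch: Confluent has described built-in Flink SQL functions along the lines of ML_FORECAST and ML_DETECT_ANOMALIES for these tasks. The example below assumes an over-window signature of (value, timestamp); the exact function shapes, the window definition, and all table and column names are assumptions about the early-access API, not confirmed syntax.

```sql
-- Assumed built-in forecasting and anomaly detection over a metric
-- stream; function signatures and names are assumptions based on the
-- announcement, not confirmed syntax.
SELECT
  sensor_id,
  ts,
  reading,
  ML_FORECAST(reading, ts) OVER w         AS forecasted_reading,
  ML_DETECT_ANOMALIES(reading, ts) OVER w AS is_anomaly
FROM sensor_readings
WINDOW w AS (
  PARTITION BY sensor_id
  ORDER BY ts
  RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW
);
```

The appeal for non-specialists is that a task that would otherwise require training and deploying a dedicated model reduces to a single SQL expression over a sliding window.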
“The ability to integrate real-time, contextualized, and reliable data into AI and ML models will give companies a competitive edge with AI,” states Stewart Bond, Vice President of Data Intelligence and Integration Software at IDC. “Organizations need to unify data processing and AI workflows to achieve accurate predictions and LLM responses. Flink provides a single interface to orchestrate inference and vector search for RAG, and having it available in a fully managed, cloud-native implementation will make real-time analytics and AI more accessible and applicable to the future of generative AI and agent-based AI,” he concludes.