Confluent, Inc., a pioneer in data streaming, has announced AI Model Inference, a feature coming soon to Confluent Cloud for Apache Flink® that will allow teams to easily integrate machine learning into their data pipelines. The company has also launched Confluent Platform for Apache Flink®, a Flink distribution that enables real-time processing in on-premises or hybrid environments, backed by Confluent's Flink experts. Additionally, Confluent has introduced Freight clusters, a new cluster type for Confluent Cloud that offers cost-effective handling of high-volume use cases that do not require real-time processing, such as log or telemetry data.
AI Model Inference simplifies the creation and deployment of AI and Machine Learning applications
Generative AI helps organizations innovate faster and deliver more personalized customer experiences. AI workloads require fresh, context-rich data to ensure that the underlying models generate accurate results, enabling businesses to make informed decisions based on the most up-to-date information available.
However, developers often have to juggle multiple tools and languages to work with AI models and data processing pipelines, leading to complex and fragmented workflows. This fragmentation can keep the most current and relevant data from informing decisions, introducing errors or inconsistencies that compromise the accuracy and reliability of AI-based insights. These issues increase development time and make AI applications harder to maintain and scale.
With AI Model Inference on Confluent Cloud for Apache Flink®, organizations can use simple SQL statements from Apache Flink to make calls to AI engines, including OpenAI, AWS SageMaker, GCP Vertex, and Microsoft Azure. Now companies can orchestrate data cleaning and processing tasks on a single platform.
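Confluent has not published final syntax for the early-access feature; as an illustrative sketch, registering a remote model and invoking it from Flink SQL might look roughly like the following. The model name, input/output schema, and connection options here are hypothetical, and the exact DDL will depend on the released version.

```sql
-- Hypothetical sketch: register a remote AI model as a first-class
-- Flink SQL resource (names and WITH options are illustrative only).
CREATE MODEL support_ticket_classifier
  INPUT (ticket_text STRING)
  OUTPUT (category STRING)
  WITH (
    'provider' = 'openai',
    'task' = 'classification',
    'openai.connection' = 'my-openai-connection'
  );

-- Invoke the model inline while processing a stream of tickets,
-- so inference runs on the same platform as data cleaning and prep.
SELECT t.ticket_id, p.category
FROM tickets AS t,
     LATERAL TABLE(ML_PREDICT('support_ticket_classifier', t.ticket_text)) AS p;
```

The point of the design is that the model call sits in an ordinary SQL query, so the same pipeline that filters and enriches the stream can also produce predictions, without a separate inference service or glue code.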
“Apache Kafka and Flink are critical links for powering machine learning and artificial intelligence applications with the most timely and accurate data,” said Shaun Clowes, Chief Product Officer at Confluent. “Confluent’s AI Model Inference eliminates the complexity of using streaming data for AI development, enabling organizations to innovate faster and deliver powerful customer experiences.”
AI Model Inference allows organizations to:
Simplify AI development by using familiar SQL syntax to work directly with AI/ML models, reducing the need for specialized tools and languages.
Establish seamless coordination between data processing and AI workflows to enhance efficiency and reduce operational complexity.
Enable precise and AI-driven real-time decision-making, leveraging recent and contextual streaming data.
“Leveraging recent and contextual data is vital for training and refining AI models, as well as for using them at inference time to improve the accuracy and relevance of results,” said Stewart Bond, Vice President of Data Intelligence and Integration Software at IDC. “Organizations need to make AI processing more efficient by unifying data integration and processing pipelines with AI models. Flink can now treat foundational models as first-class resources, unifying real-time data processing with AI tasks to streamline workflows, improve efficiency, and reduce operational complexity. These capabilities empower organizations to make precise, real-time AI-driven decisions based on the most current and relevant streaming data while enhancing performance and value.”
Support for AI Model Inference is now available in early access for select customers. Customers can sign up for early access today and learn more about this offering.
Confluent Platform for Apache Flink® enables stream processing in private clouds and on-premises environments
Many organizations are looking for hybrid solutions to protect their most sensitive workloads. Confluent Platform for Apache Flink®, a Flink distribution fully supported by Confluent, lets customers easily leverage stream processing for workloads in private clouds or on-premises environments, with long-term expert support. Apache Flink can be adopted alongside Confluent Platform with minimal changes to existing Flink jobs and architecture.
Confluent Platform for Apache Flink® can help organizations:
Minimize risks with unified support for Flink and Kafka and expert guidance from leading data streaming experts.
Receive timely assistance to troubleshoot and resolve issues, reducing the impact of any operational disruptions in mission-critical applications.
Ensure that stream processing applications are secure and up-to-date with out-of-band bug and vulnerability fixes.
With Kafka and Flink available on Confluent’s comprehensive data streaming platform, organizations gain better integration and compatibility between the two technologies and receive full support for streaming workloads in all environments. Unlike open-source Apache Flink, which maintains only the two most recent releases, Confluent supports each version of Confluent Platform for Apache Flink® for three years from its release, ensuring uninterrupted operations and peace of mind.
Confluent Platform for Apache Flink® will be available to Confluent customers later this year.
New auto-scaling Freight clusters offer greater cost-effectiveness at scale
Many organizations use Confluent Cloud to process log and telemetry data. These use cases involve large volumes of business-critical data but are often less latency-sensitive, as they typically feed indexing or batch aggregation engines. To make them more cost-effective for customers, Confluent is introducing Freight clusters, a new type of serverless cluster that offers up to 90% lower cost for high-throughput use cases with relaxed latency requirements. Thanks to Elastic CKUs, Freight clusters auto-scale seamlessly based on demand, with no manual sizing or capacity planning, allowing organizations to minimize operational overhead and optimize costs by paying only for the resources they use, when they need them.
Freight clusters are available in early access in select AWS regions. Customers can sign up for early access and learn more about this offering.