Red Hat introduces Ramalama: Making Artificial Intelligence boring to make it easier to use

At the recent Fedora Flock conference, Red Hat unveiled ramalama, an innovative project that aims to make artificial intelligence (AI) “boring” in the best sense of the word. This new tool promises to simplify the use of AI and machine learning through the use of OCI (Open Container Initiative) containers, eliminating complexity and making the process accessible to a wider audience.

A philosophy of simplicity

The ramalama project stands out for its focus on simplicity in a field that is often dominated by cutting-edge technological innovations and complex solutions. “In a field full of advanced technologies and complicated solutions, ramalama stands out for its refreshing and simple mission: to make AI boring,” explained Red Hat representatives during the presentation.

Ramalama’s approach is to offer tools that are reliable and easy to use. The premise is that users can perform AI-related tasks, from installation to execution, with a single command per context, simplifying the management and deployment of AI models. This includes listing, pulling, running, and serving models, with a clear goal of avoiding unnecessary complications.
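In practice, that one-command-per-task philosophy maps onto a handful of subcommands. The sketch below is illustrative: the subcommand names follow what was described at the presentation (list, pull, run, serve), and the model name is a placeholder, not a recommendation.

```shell
# List models already available locally
ramalama list

# Pull a model (the Ollama registry is the default source; "tinyllama" is an illustrative name)
ramalama pull tinyllama

# Chat with the model interactively
ramalama run tinyllama

# Serve the model over HTTP for other applications to consume
ramalama serve tinyllama
```

Each command handles its own setup behind the scenes, which is what keeps the user-facing surface to a single line per task.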

Technology and compatibility

Ramalama leverages OCI container technology to simplify the use of AI models. On startup, it inspects the device for GPU support; if no GPU is found, it falls back to the CPU. The tool uses container engines such as Podman to pull OCI images containing all the software needed to run AI models, eliminating the need for manual configuration by the user.
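As a rough sketch of what that automation replaces, the manual equivalent would be detecting the GPU yourself and passing the right flags to Podman. Everything here is an illustrative assumption (the image name is made up, and real GPU detection covers more vendors than this); only the `--device nvidia.com/gpu=all` CDI syntax is standard Podman usage.

```shell
# Hypothetical sketch of what ramalama automates, not its actual implementation.

# Check for an NVIDIA GPU; fall back to CPU-only if none is present.
if command -v nvidia-smi >/dev/null 2>&1; then
    GPU_ARGS="--device nvidia.com/gpu=all"   # expose all GPUs via CDI
else
    GPU_ARGS=""                              # CPU-only fallback
fi

# Pull and run an OCI image bundling the inference stack
# (quay.io/example/llama-runtime is a placeholder image name).
podman run --rm -it $GPU_ARGS quay.io/example/llama-runtime:latest
```

Folding this detection-and-dispatch step into the tool is precisely what spares users from per-machine configuration.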

A standout feature of ramalama is its compatibility with a range of hardware, including NVIDIA, AMD, Intel, and Apple Silicon GPUs as well as ARM and RISC-V processors, although its container images currently appear to be limited to the x86_64 architecture. Models are obtained from the Ollama registry by default, while llama.cpp and Podman are part of the technological infrastructure that supports ramalama.

Support for different operating systems

Ramalama is not only designed to support various hardware architectures but also to be compatible with multiple operating systems. In addition to Linux, which will be officially supported, the technology is also prepared to run on macOS and possibly on Windows through WSL (Windows Subsystem for Linux).

Future and accessibility

Currently in an early development phase, the source code for ramalama is already available on GitHub under the MIT license. The tool kicks off its journey in Fedora, a Linux distribution that serves as a testing ground for technologies that could be integrated into Red Hat Enterprise Linux. The adoption of ramalama by Fedora suggests that it could be extended to other major distributions if successful.

With ramalama, Red Hat aims to democratize access to AI by making it more accessible and less daunting, allowing even those with less experience in the field to harness the power of artificial intelligence. The goal is for the technology to become so simple and accessible that its use becomes a pleasant and hassle-free experience.
