On Wednesday, May 7, OpenNebula Systems, in collaboration with Digital Realty, hosted a new edition of the OpenNebula TechDay AI series in the heart of Madrid. The event, “Building Sovereign AI Factories with OpenNebula,” brought together cloud and AI professionals to explore the intersection of open source cloud technologies, infrastructure sovereignty, and generative AI.
From VMware Migration to Open Innovation
The day began with a welcome session introducing OpenNebula’s evolving role as a sovereign alternative in the cloud ecosystem. One of the first key topics was the ongoing migration from VMware to open virtualization platforms. As organizations look to future-proof their infrastructure strategies, OpenNebula is emerging as a cost-effective, flexible, and vendor-neutral choice—particularly valuable in the face of licensing changes and growing interest in digital sovereignty.
Building and Managing On-Prem AI Factories
The agenda then turned to the core theme of the day: how OpenNebula is being used to build and operate AI Factories in sovereign, on-premise environments. Presentations focused on enabling the deployment of LLMs (Large Language Models) as a service through a simplified private cloud platform. This model drastically reduces operational costs compared to proprietary solutions like VMware, Nutanix, and Red Hat—or public cloud alternatives—while offering full control and compliance.
Participants learned about OpenNebula’s full-stack support for modern hardware acceleration, including GPU and ARM virtualization, DPUs (NVIDIA BlueField-3), InfiniBand, and 5G integration. These capabilities, paired with strong multi-tenancy features and hybrid cloud support, enable organizations to deliver scalable, high-performance AI services from their own infrastructure.
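For context, GPU passthrough in OpenNebula is requested through the `PCI` attribute of a VM template. A minimal, illustrative fragment might look like the following; vendor ID `10de` is NVIDIA and class `0302` is a 3D controller, but the exact values depend on the host’s PCI inventory:

```
# Illustrative VM template fragment requesting an NVIDIA GPU via PCI passthrough
PCI = [
  VENDOR = "10de",   # NVIDIA vendor ID
  CLASS  = "0302"    # 3D controller device class
]
```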
Running LLMs on Sovereign Infrastructure
A highlight of the sessions was the demonstration of how OpenNebula integrates with popular tools and frameworks to streamline LLM deployment. Attendees saw how validated models from Hugging Face can be launched and managed within OpenNebula-powered environments using Ray, vLLM, and NVIDIA Dynamo—allowing users to harness generative AI within a sovereign setup.
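Once deployed this way, vLLM exposes models through an OpenAI-compatible HTTP API, so tenants can consume a sovereign LLM with ordinary HTTP clients. The sketch below is illustrative only: the endpoint URL and model name are assumptions, standing in for whatever the OpenNebula-managed VM actually serves.

```python
import json
import urllib.request

# Hypothetical endpoint of a vLLM server running in an OpenNebula-managed VM;
# both the URL and the model name are assumptions for illustration.
VLLM_URL = "http://llm.internal:8000/v1/completions"
MODEL = "mistralai/Mistral-7B-Instruct-v0.2"


def build_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-compatible completion payload, as accepted by vLLM."""
    return {"model": MODEL, "prompt": prompt, "max_tokens": max_tokens}


def complete(prompt: str) -> str:
    """POST the prompt to the vLLM endpoint and return the first completion."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        VLLM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]
```

Because the API surface matches OpenAI’s, existing client libraries and tooling can usually be pointed at the private endpoint without code changes.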
The presentation also introduced the broader cloud AI provisioning model, emphasizing KVM virtualization, a growing ecosystem via the OpenNebula Marketplace, robust multi-tenancy, and seamless hybrid cloud integration.
Demos: AIaaS and Multi-Tenancy in Action
Live demo sessions offered a closer look at OpenNebula’s ability to support AI-as-a-Service (AIaaS) architectures on NVIDIA-accelerated infrastructure. Attendees watched real-time examples of deploying AI workloads in a multi-tenant environment, managing access through virtual data centers, and orchestrating infrastructure across public and private clouds.
The OpenNebula AI Factories Roadmap
Looking to the future, the OpenNebula team shared an exciting AI Factories roadmap, detailing upcoming features and improvements aimed at AI-heavy environments. These include:
- GPU profiles optimized for LLM workloads
- Integration of RAG (Retrieval-Augmented Generation)
- Native Ray support for distributed AI computing
- Appliance for AI model training
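Of these, RAG is the easiest to illustrate: documents are embedded as vectors, and at query time the closest documents are retrieved and prepended to the model’s prompt. A minimal, self-contained sketch of the retrieval step, using toy two-dimensional embeddings purely for illustration:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def retrieve(query_vec, corpus, top_k=2):
    """Return the texts of the top_k documents closest to the query embedding."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:top_k]]


# Toy corpus with hand-made 2-D "embeddings" (real systems use an embedding model).
docs = [
    {"text": "GPU profiles for LLM workloads", "vec": [1.0, 0.1]},
    {"text": "Ray for distributed AI computing", "vec": [0.2, 1.0]},
    {"text": "Appliance for AI model training", "vec": [0.9, 0.3]},
]
```

In a production RAG pipeline the same ranking step runs against a vector database, and the retrieved passages are injected into the LLM prompt before generation.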
This roadmap reaffirms OpenNebula’s commitment to supporting the complete AI development lifecycle—from training to deployment—on sovereign cloud infrastructure.
A Glimpse at OpenNebula 7.0
Another major highlight was the upcoming release of OpenNebula 7.0, expected later this month. This version will include key features such as DRS with AI-driven predictive monitoring, Veeam backup support, SAN iSCSI datastore integration (including NetApp support), LVM-thin shared LUN support, and support for ARM-based CPUs.
Other enhancements include OVF/OVA import capabilities, improved VNC console functionality, and tools for automated provisioning across hybrid, multi-provider cloud-edge environments—making OpenNebula even more scalable, interoperable, and production-ready for enterprise use cases.
Real-World Stories: Telefónica and AI Sweden
The event also featured inspiring case studies from OpenNebula users.
Telefónica’s Discovery Innovation team shared their experience building a Private AI Factory. They explained how OpenNebula enabled them to recreate the agility of the public cloud on-premise while maintaining visibility and cost control. Their use of specialized GPU machines has proven essential for training LLMs and analyzing complex network data. They also emphasized that industrializing private infrastructure not only leads to clear cost savings but also addresses the explosive demand for AI training capacity in innovation-driven teams.
Later, AI Sweden provided a strategic outlook on AI adoption and ecosystem building. They highlighted the importance of giving teams access to infrastructure and creating bold, fast-moving initiatives. Through a modular approach and curated AI showrooms, AI Sweden is helping transform national capabilities in AI. The presentation closed with a powerful quote that resonated with the audience:
“We build people, not artifacts—but build artifacts in order to build people.”
Wrapping Up in Madrid
TechDay Madrid 2025 was a fantastic opportunity for cloud and AI professionals to connect, learn, and explore the evolving capabilities of OpenNebula in a hands-on, forward-looking environment. From real-world case studies to deep dives into GenAI deployment models, the event made one thing clear: OpenNebula is paving the way for sovereign, cost-effective, and scalable AI infrastructures.
We want to thank all the attendees, partners, and speakers who made this TechDay a success. Stay tuned for more updates from the OpenNebula 7.0 release and upcoming TechDays around the world.
If you are interested in having your company host one of our TechDays, do not hesitate to contact us.