2025 has been a defining year for OpenNebula Systems. Our work has been shaped by major market shifts, rapid adoption, and the clear validation of a vision we have been building for 17 years, since the first release of OpenNebula back in July 2008: delivering the best virtualization and cloud platform in the market—open, sovereign, and enterprise-ready. In 2025, this vision has taken on renewed relevance as organizations increasingly turn to OpenNebula as a trusted alternative to VMware, while at the same time expanding their infrastructure to support the next generation of AI Factories across HPC, neocloud, and telecom environments.
As organizations around the world reassessed their infrastructure strategies, OpenNebula has emerged not merely as an alternative, but increasingly as the default choice for running modern cloud and AI workloads. We are not just catching up with established enterprise platforms like VMware or Nutanix—we are helping organizations move ahead with distinctive capabilities designed for hybrid environments, while offering full sovereignty, flexibility, and vendor neutrality.
This progress is the result of more than 17 years of experience supporting real-world cloud deployments, close collaboration with leading cloud and technology partners, and sustained innovation enabled in part by the €3B IPCEI-CIS initiative. None of this would have been possible without the support and contributions of our community, the trust of our customers and business partners, and the commitment of our employees. Teamwork, innovation, and adaptability have been central to OpenNebula Systems’ success throughout 2025.
Team Growth and a Stronger Shared Culture
Our growth in 2025 has been as strong internally as it has been in the global tech market. Few companies manage that pace without losing coherence. What made the difference for us was not speed but alignment.
Today, OpenNebula Systems is a people-first, global company, bringing together professionals from 22 different nationalities who work across regions and time zones. Despite the distance, the culture has remained remarkably consistent—centered on pragmatic engineering, openness by default, and a strong sense of responsibility toward users, customers, and partners. These values enable a culture of innovation, passion, collaboration, and simplicity.
We are still hiring across several units, not to chase growth for its own sake, but to support a platform that is now, more than ever, mission-critical for many organizations.
A Full-Featured and Efficient Virtualization and Cloud Platform
2025 was a landmark year in terms of product maturity, marked by an unprecedented pace of innovation in enterprise cloud infrastructure.
But it was not only about adding new features—it was equally about efficiency. OpenNebula has become the go-to VMware alternative for three clear reasons: a simple and predictable subscription model, priced per hypervisor with all features included; a lightweight control plane that can run on a single VM while scaling to thousands of hosts; and a highly efficient architecture that delivers higher VM density and better hardware utilization.
The results speak for themselves: a 200% increase in VM density per server and significant savings in licensing costs, enabling customers such as Beeks Group to deliver more computing power while keeping infrastructure spending under control. Additional customer experiences and use cases are available in the OpenNebula case studies.
With the OpenNebula 7 “Next Generation” series, we delivered on long-standing expectations, starting with the release of 7.0 in July, followed by two major maintenance releases, 7.0.1 and 7.0.2.
At the same time, we continued to invest in stability and long-term support. As part of the LTS roadmap, we released multiple maintenance updates for the 6.10 LTS series, including 6.10.3, 6.10.4, and 6.10.5, ensuring reliability and continuity for production environments.
Across multiple releases, OpenNebula has consolidated its position as a feature-complete enterprise virtualization and cloud platform, capable of replacing proprietary virtualization stacks such as VMware or Nutanix while extending seamlessly into hybrid, edge, and AI-centric deployments.
Key improvements delivered this year include:
- VMware Migration: A complete, low-risk re-virtualization path with AI-driven DRS operations, native NetApp and Veeam integrations, enterprise-grade storage and backup, and automated tools for migrating ESXi workloads.
- AI-Driven Resource Management: Smarter resource optimization with OneDRS, offering predictive scheduling, customizable automation levels, flexible migration thresholds, time-series monitoring, and fine-grained quota management.
- AI Factory Platform: Production-ready AI infrastructure with native integrations for vLLM, Run:ai, and NVIDIA Dynamo, along with support for NVIDIA NIM and Hugging Face.
- GPU-Accelerated Infrastructure: Advanced GPU monitoring, NVIDIA vGPU and MIG enablement, enhanced PCI passthrough, BlueField DPU networking, and active collaboration with NVIDIA on next-generation GB200 certification.
- Enterprise Storage Efficiency: High-performance storage with multi-tier caching and incremental backups across Ceph, LVM, NetApp, and Veeam.
- Hybrid, Multi-Cloud, and Edge Ready: Built-in support for distributed cloud strategies, including ARM architecture and expanded cloud provider compatibility.
- Modern User Experience: A streamlined and extensible interface with improved visibility, accessibility, and developer integration.
- Enterprise Security: Stronger VM and identity protection with virtual TPM and SAML-based federation.
- Cloud-Native Integration: Unified VM and Kubernetes operations through Rancher-based cluster management and native CSI storage integration.
During 2025, we also released updated versions of key complementary components—OneKE (our CNCF-certified Kubernetes installer), OneDeploy (our tool for automating cloud deployments), CAPONE (our new implementation of the Cluster API for Kubernetes), and OneSwap (which allows a fully automated VMware VM migration)—further strengthening the OpenNebula ecosystem of technological integrations.
Of course, this is only the beginning. OpenNebula 7.2, scheduled for January 2026, is already well underway, and will introduce a new set of capabilities focused on scalability, performance, and deeper hardware integration. The upcoming release will include:
- Advanced multi-GPU support, with full integration of NVIDIA NVLink and NVSwitch for high-performance AI workloads.
- NVIDIA Spectrum-X certification, ensuring seamless support for next-generation AI and HPC networking fabrics.
- Enhanced LVM driver optimizations, enabling direct use of SAN appliances without intermediate storage layers.
- Storage Live Migration, allowing seamless data mobility across nodes with no service disruption.
- Improved CPU compatibility (EVC) to simplify workload migration across heterogeneous hardware environments.
Stay tuned—OpenNebula 7.2 is shaping up to be a major step forward! In parallel, we are already preparing the roadmap for OpenNebula 7.4 and 7.6 in 2026, which will continue to expand the platform’s capabilities across cloud, edge, and AI hybrid infrastructures.
While OpenNebula already provides strong Kubernetes integration, we recognize that many users are looking for a true managed Kubernetes experience. An upcoming KaaS offering is designed to address this gap by delivering a simple, developer-friendly experience similar to leading managed Kubernetes services, without sacrificing openness or control.
Integrated with the Enterprise Ecosystem
Enterprise adoption depends on ecosystem integration, and in 2025 OpenNebula significantly expanded its footprint in this area.
With OpenNebula 7.0, we delivered new enterprise-grade integrations with key technology partners such as Veeam and NetApp. Looking ahead, OpenNebula 7.2 will introduce high-performance drivers for Pure Storage, while technical collaborations are already underway with additional vendors across storage, backup, networking, and security—including Commvault for enterprise backup and VAST Data for high-performance storage tailored to AI Factories.
In 2025, OpenNebula reinforced its enterprise and telecom readiness through key certifications and partnerships. CAPONE achieved SUSE Ready Certification for Rancher, enabling full support for the Kubernetes lifecycle through Cluster API, while telecom certification with Dell Technologies validated OpenNebula for sovereign AI at the telco edge. These milestones, together with ARM64 certification, a partnership with Ampere, ongoing collaborations with hardware partners such as Supermicro across multiple AI Factory deployments, and active leadership in European telco initiatives such as Sylva and the Telco Cloud Reference Architecture (TCRA), strengthen OpenNebula’s position as a production-ready platform for modern cloud, edge, AI, and 5G infrastructures.
The goal has remained simple: OpenNebula should fit naturally into existing enterprise environments, without forcing organizations to redesign everything around it—while also providing a solid foundation for the next generation of AI Factories and Gigafactories.
A Strategic NVIDIA Partnership
The secure, multi-tenant foundation for enterprise and sovereign AI infrastructure.
One of the most significant milestones of 2025 was the deepening of our collaboration with NVIDIA. OpenNebula provides a secure, multi-tenant platform that transforms NVIDIA-based infrastructure into a cloud-like environment, supporting both on-demand inference and integrated AI training workflows, with validated use cases across HPC, neocloud, and telecom environments.
To capture and formalize this work, we introduced the OpenNebula AI Factory Reference Architecture, a new white paper that defines a practical blueprint for building next-generation AI infrastructure. The reference architecture summarizes how OpenNebula combines mature virtualization, dynamic resource allocation, and deep NVIDIA ecosystem integration to deliver performance, control, and sovereign governance—addressing the limitations of traditional cloud-native stacks as organizations scale AI initiatives.
OpenNebula is now tightly integrated with the NVIDIA hardware and software ecosystem, enabling native support for GPUs, AI accelerators, and the full AI software stack. At the hardware level, this includes integration with Grace CPUs, Hopper and Blackwell GPUs, GPU MIG, NVLink and NVSwitch for multi-GPU processing, and advanced networking technologies such as InfiniBand, Spectrum-X, and BlueField DPUs. At the software level, OpenNebula integrates with key NVIDIA platforms and tools—including NVIDIA Dynamo, Run:ai, NIM, and SLURM—enabling the convergence of HPC/AI and Kubernetes-based AI workloads on a single, unified infrastructure.
These integrations allow organizations to deploy and operate AI Factories with the same simplicity, openness, and operational model they expect from cloud infrastructure. Together, OpenNebula Systems and NVIDIA are enabling scalable, sovereign, and production-ready AI environments—from centralized data centers to the edge. We have built and validated concrete use cases across HPC, neocloud providers, and telecommunications, where performance, isolation, and operational simplicity are equally critical.
We presented these results at NVIDIA GTC San Jose and GTC Paris in 2025, and we look forward to active participation at GTC 2026 in San Jose.
Expanding the Go-To-Market Collaboration
Beyond technology, 2025 strengthened OpenNebula Systems’ go-to-market ecosystem.
We expanded collaborations with partners such as Canonical, Red Hat, and SUSE, aligning OpenNebula with the most widely adopted enterprise Linux and Kubernetes platforms. The new Embedded Editions of the OpenNebula Enterprise Subscriptions include built-in, vendor-backed subscriptions that extend OpenNebula’s coverage to the most commonly used enterprise infrastructure components integrated with the platform.
With these Embedded Editions, OpenNebula Systems is now able to deliver end-to-end support for the complete integrated solution, including Ubuntu Pro, Red Hat Enterprise Linux, and SUSE Linux Enterprise, backed by Level 3 (L3) support from the respective technology vendors. This ensures extended security maintenance, compliance certifications, and live patching capabilities for mission-critical environments. Enterprise Subscriptions can also be extended with support for Rancher and RKE2, with the option to include an embedded SUSE Rancher Prime subscription, providing a fully supported Kubernetes stack.
In parallel, we continued to grow the OpenNebula CONNECT Program, welcoming new partners and system integrators worldwide. Several leading service and integration companies, such as Fujitsu’s Fsas Technologies, have already joined the program, and others are currently onboarding, to jointly address strategic opportunities around VMware replacement and the deployment of AI Factories and Gigafactories.
An Open Source Innovation-Driven Company
Innovation remains at the core of OpenNebula Systems’ DNA.
In 2025, we made substantial progress in strategic initiatives such as the €3B IPCEI-CIS, working alongside key industry leaders including SAP, Telefónica, Amadeus, Orange, Siemens, Atos, Deutsche Telekom, Telecom Italia, and Fincantieri. These collaborations have reinforced OpenNebula’s role in advancing next-generation cloud and edge capabilities. Most notably, we launched Virt8ra—a multi-vendor cloud infrastructure integrating resources from leading European providers such as IONOS, OVHcloud, and Scaleway—as well as Fact8ra.AI, a flagship initiative focused on building open, federated AI Factories at scale, bringing together resources from cloud providers, HPC centers, and telecom operators.
Participation in the IPCEI-CIS through our OneNextGen project has been a key enabler in the transformation of OpenNebula into a next-generation cloud and edge platform. Many of the innovations delivered with OpenNebula 7.0 originate directly from this work, addressing critical challenges around scalability, interoperability, and intelligent operations across the cloud–edge continuum. These capabilities have been validated in real-world scenarios, ranging from resource-constrained ARM-based edge nodes to GPU-powered AI and HPC clusters.
Beyond the IPCEI-CIS, several innovation projects continued throughout 2025, with new initiatives scheduled to start in 2026. OpenNebula Systems remains deeply engaged in strategic programs focused on cloud sovereignty, interoperability, and digital autonomy. We have deepened our participation in open source communities such as the Eclipse Foundation and in industrial groups such as CISPE, while maintaining leadership of the Cloud-Edge Working Group within the European Alliance for Industrial Data, Edge, and Cloud.
Together, these efforts are shaping new OpenNebula innovations, strengthening ecosystem integration, and enabling advanced use cases across Telco Cloud, Edge and 5G, AI processing, and confidential computing.
Community and Ecosystem Engagement
Connecting partners, users, and customers across the ecosystem.
In 2025, we significantly expanded our webinar and TechDay programs, moving to a more frequent and structured calendar. Throughout the year, we delivered 43 webinars and hosted five in-person TechDays, while establishing a rhythm of weekly webinars and quarterly TechDay events in major cities around the world. The TechDay program has been redefined to better share practical knowledge, showcase real-world deployments, and strengthen connections with local communities. If you are interested in hosting an OpenNebula TechDay, we would love to hear from you—let’s work together to bring OpenNebula closer to your local tech ecosystem.
We also strengthened our presence at major open source and cloud industry events. Throughout the year, the OpenNebula Systems team delivered talks at conferences such as FOSDEM, the Linux Foundation Open Source Summit, NVIDIA GTC (San Jose and Paris), EuroHPC, FYUD, NexusForum Summit, 5G Forum, 6G Summit, GITEX, and Open Source Experience Paris. In parallel, we hosted exhibition booths at leading industry gatherings including Mobile World Congress (MWC), MSP Global, Cloud Expo Europe, Data Center World, Red Hat Summit, and the International Quantum Business Conference.
Looking ahead to 2026, we plan to further intensify this effort, with increased participation in partner-led events and a stronger focus on HPC and AI-focused conferences, continuing to engage with the communities shaping the future of cloud and AI infrastructure.
We are also excited to announce an important change for 2026: our annual OpenNebula conference will return as a fully in-person event. The new OneNext 2026 re:Virtualize, scheduled for April, will bring the industry together for deeper technical discussions, ecosystem collaboration, and real-world use cases—exploring the open, integrated platform for the future of enterprise cloud and AI Factories. The Call for Papers and sponsorship opportunities are now open.
Our View for 2026
The progress made in 2025 marks a turning point for OpenNebula Systems. Our focus has moved beyond incremental improvements toward building a next-generation open cloud and AI infrastructure, shaped in large part by the innovation enabled through IPCEI-CIS. Two forces continue to define this direction: the transition away from proprietary virtualization stacks and the rapid emergence of AI Factories as a core layer of digital infrastructure.
As organizations move beyond short-term responses to market disruption, the need for a durable, open replacement for VMware and other proprietary platforms has become clear. In 2026, OpenNebula will continue to evolve as that foundation—one that prioritizes operational simplicity, efficiency, and long-term control. Our goal is not only to replace existing platforms, but to provide a better default: lighter, more efficient, and designed to scale across data centers, edge locations, and hybrid environments.
At the same time, AI infrastructure is becoming as fundamental as virtualization once was. OpenNebula Systems’ vision for 2026 is to make AI Factories a natural extension of cloud infrastructure, not a separate, specialized stack. This means enabling multi-tenant, GPU-native environments where training and inference workloads can be deployed, scaled, and operated with the same clarity and predictability as virtual machines today.
A central principle guiding this vision is openness. In a landscape increasingly shaped by consolidation and closed ecosystems, OpenNebula Systems continues to invest in interoperability, portability, and standards-based integration. The objective is to ensure that organizations retain freedom of choice—across hardware, software, and providers—while avoiding the long-term risks of lock-in.
Looking ahead, OpenNebula Systems will continue to bring cloud, edge, and AI together into a unified platform, enabling infrastructures that are distributed by design, intelligent by default, and efficient at scale. This is the direction in which the platform is evolving—and the role we believe OpenNebula can play in shaping the next generation of digital infrastructure.
As always, this journey is driven by the trust and collaboration of our community, customers, partners, and the OpenNebula Systems team. Together, we are building an open foundation for the future of cloud computing and sovereign AI.
I wish you, and your loved ones, good health and a wonderful 2026!



