About this TechDay
Looking for an open platform to deploy a multi-tenant infrastructure that delivers AI-as-a-Service with full data sovereignty and cloud-native flexibility?
AI Factories are redefining Data Centers and enabling the next era of AI. Join us at OpenNebula’s TechDay in Paris for an exclusive event focused on building and operating AI Factories using sovereign, open source cloud infrastructure. Learn how to deploy a multi-tenant AI-as-a-Service platform fully integrated with Hugging Face for the on-demand execution of Large Language Models (LLMs).
This event is ideal for enterprises, cloud providers, telcos, and AI/HPC centers interested in delivering scalable and secure AI workloads, while ensuring data sovereignty and cloud-native flexibility. Expert-led sessions will walk you through real deployment strategies, and you’ll get a live demo of the platform in action—showing how to easily launch LLM workloads through an AI Factory model.
You’ll also hear from real users, who will share case studies and best practices around deploying sovereign AI infrastructure and offering AI workloads as-a-service. The day will conclude with a networking lunch, offering you the chance to connect with industry peers and OpenNebula experts.
Secure your spot today—registration is free!
Agenda
09:00-09:30 Attendee Accreditation 📝
09:30-10:00 Welcome to the TechDay 👋
An introduction to the event and what to expect throughout the day. We will introduce speakers and give a quick overview of OpenNebula and its new features.
Speaker: Alexandre Guillemin — Partner Manager Specialist @ OpenNebula Systems
10:00-10:30 Building Sovereign AI Factories with OpenNebula + Live Demo
Explore how OpenNebula can be leveraged to create sovereign AI Factories – infrastructures that allow organizations to develop, train, and deploy AI models entirely within their own secure environments. The session will conclude with a live demo showing how OpenNebula's architecture is key to running LLMs at scale.
Speaker: Alberto Picón — Principal Technologist for Cloud-Edge Engineering @ OpenNebula Systems
10:30-11:00 How we make OpenAI with Open Source
Discover how open source tools are used to create and manage LLM services similar to OpenAI's. Through an Iguane Solutions use case, learn how NVIDIA GPUs take full advantage of a platform like OpenNebula.
Speaker: Jean-Philippe Foures — VP Product @ Iguane Solutions
11:00-11:30 Coffee Break ☕
Take a break and network with other attendees over coffee.
11:30-12:30 Success Stories: Use Cases
Learn how companies are taking advantage of the benefits that AI brings to their cloud environments, and how they are using OpenNebula.
Presentations to be confirmed soon.
12:30-13:00 Closing Remarks & Future Outlook
Wrap up the day with a summary of key takeaways and a look ahead at OpenNebula’s future developments.
Speakers:
- Alexandre Guillemin — Partner Manager Specialist @ OpenNebula Systems
- Jean-Philippe Foures — VP Product @ Iguane Solutions
13:00-14:00 Networking Lunch 🥪
Enjoy a networking lunch, giving you the opportunity to engage with industry experts and peers.
Registration
Acknowledgements
Initiative funded by the Spanish Ministry for Digital Transformation and Civil Service through the ONEnextgen Project: Next-Generation European Platform for the Datacenter-Cloud-Edge Continuum (UNICO IPCEI-2023-003) and co-funded by the European Union’s NextGenerationEU instrument through the Recovery and Resilience Facility (RRF).