Launching a distributed gaming cloud across 17 global locations in just 25 minutes, for little more than pocket change!
While most of today’s organizations are taking advantage of the long list of benefits offered by cloud computing, the growth of data-intensive workloads and the value of lower latency highlight the need to move beyond a simplistic “centralized cloud” approach. As businesses and applications serve global, mobile audiences, the benefits of distributed infrastructure are becoming apparent to an expanding list of use cases.
One clear example is gaming and entertainment.
The growth of immersive and interactive gaming and media experiences is pushing performance thresholds on many levels. Just ask any twelve-year-old Fortnite player, and you’ll learn all about the importance of latency and jitter, and how frustrating it is when these experiences don’t play out the way they should. The same story is being repeated across a huge range of industries and verticals: from industrial IoT and office productivity to mobility, store automation, live entertainment, and even real-time, AI-powered diagnosis of medical data.
While some of these problems are due to congestion or “last mile” issues, we’re mainly running into a simple physics problem: the speed of light! Moving compute closer to the user dramatically changes the equation for what is possible. Enter, edge computing!
OpenNebula at the Edge
While OpenNebula has consistently provided a simple and stable platform for managing private cloud infrastructures, whether on-premises, hosted, federated, or hybrid, the new OpenNebula version 5.8 “Edge” adds key capabilities for creating and managing highly distributed cloud infrastructures.
The ability to distribute workloads used to be the domain of only the largest websites and applications. However, more and more organizations are looking to respond to their global users, and the public cloud has helped to lower the barrier to doing this rapidly and affordably.
But what if you’re not solely a public cloud user? Most enterprises have diverse infrastructure, including private and hybrid clouds. As such, being able to expand private clouds to distributed dedicated infrastructure – for instance to address the ever-growing need for low latency – is of increasing value to our users.
With our latest release, OpenNebula provides the ability to expand a cloud by instantiating hosts and clusters using bare metal resources from providers like Packet or even Amazon Web Services (AWS now offers a line of bare metal compute instances). With a single command, users can deploy and configure new clusters using these bare metal providers in locations around the world.
Moreover, OpenNebula provides the ability to grow or reduce the size of one’s cloud infrastructure based on the active demands of the system.
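To make this concrete, here is a minimal sketch of what driving the new provisioning feature could look like from Python. The YAML keys and values below are an approximation of the provision template format, and the API token, project ID, and file names are placeholders; treat this as an illustration rather than a verbatim reference.

```python
import pathlib
import subprocess

# Approximate provision template for one Packet location; keys and
# values are illustrative placeholders, not an exact schema reference.
PROVISION_YAML = """\
name: edge-ams1
defaults:
  provision:
    driver: packet          # bare metal provider driver
    packet_token: <API-TOKEN>
    packet_project: <PROJECT-ID>
    facility: ams1          # Packet location
    plan: c2.medium.x86     # hardware configuration
    os: ubuntu_18_04
hosts:
  - im_mad: kvm             # monitoring driver
    vm_mad: kvm             # virtualization driver
    provision:
      hostname: edge-ams1-host1
"""

template = pathlib.Path("edge-ams1.yaml")
template.write_text(PROVISION_YAML)

# A single command deploys and configures the whole remote cluster.
subprocess.run(["oneprovision", "create", str(template)], check=True)
```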
A Real World Example: Gaming
To showcase OpenNebula’s capabilities, we will review the use case of a gaming company releasing a new video game to a global audience. As it establishes its OpenNebula private cloud, this (fictional) gaming company is keenly aware of the key requirements for its platform:
- The vast majority of video gaming now happens on mobile devices or devices connected over WiFi. User experience will therefore correlate very closely with the response time and latency between the cloud resources and the players’ consoles, so being able to deploy gaming services as close to players as possible is key.
- To meet fluctuating demand, they will need to manage their distributed private cloud environment with speed and flexibility. This means dynamically growing or shrinking the cloud infrastructure according to real-time needs: creating new cloud resources where needed and scaling those resources with user demand.
- Finally, the platform needs to be highly scalable, able to manage large-scale, highly distributed cloud infrastructures. The infrastructure cannot be limited to a single host location; it must have the flexibility to scale in size across multiple locations around the globe.
In this particular case, you’ll see how OpenNebula works with bare metal from Packet to provide the perfect building blocks for this use case, both at launch time and beyond.
Edge Cloud Infrastructure
Packet is a bare metal resource provider focused on bringing the experience of the cloud to physical infrastructure, regardless of what it is and where it resides.
The five-year-old company, backed by names such as SoftBank, Dell Technologies, and Samsung, manages tens of thousands of physical servers built by a dozen different manufacturers, spanning three architectures and over 20 global facilities, and supports 15+ official operating systems. Notably for this example, a large percentage of Packet’s users deploy with its Custom iPXE feature, which allows each customer to bring their own OS image.
With its focus on bare metal, fast provisioning, and distributed locations, Packet is a natural platform for building out the Edge.
For this experiment, we are using the following infrastructure, provided by Packet:
| Name | Location | Type |
|------|----------|------|
| AMS1 | Amsterdam, Netherlands | c2.medium.x86 |
| BOS2 | Boston, MA USA | c2.medium.x86 |
| DFW1 | Dallas, TX USA | c2.medium.x86 |
| DFW2 | Dallas, TX USA | x1.small.x86 |
| EWR1 | Parsippany, NJ USA | c2.medium.x86 |
| FRA2 | Frankfurt, Germany | c2.medium.x86 |
| HKG1 | Hong Kong, China | x1.small.x86 |
| IAD1 | Ashburn, VA USA | x1.small.x86 |
| LAX1 | Los Angeles, CA USA | x1.small.x86 |
| MRS1 | Marseille, France | x1.small.x86 |
| NRT1 | Tokyo, Japan | x1.small.x86 |
| ORD2 | Chicago, IL USA | c2.medium.x86 |
| ORD3 | Niles, IL USA | c2.medium.x86 |
| SIN1 | Singapore | x1.small.x86 |
| SJC1 | Sunnyvale, CA USA | x1.small.x86 |
| SYD1 | Sydney, Australia | x1.small.x86 |
| YYZ1 | Toronto, ON, Canada | x1.small.x86 |
The underlying cloud is running OpenNebula v.5.8 “Edge”, and is instantiated on a Packet host in its Parsippany, NJ (USA) location.
This cloud then deploys and configures the host clusters, using the new OpenNebula provisioning feature, at the remaining locations. Each provision run produces a new ready-to-use dedicated OpenNebula cluster with its datastores, virtual networks and hosts.
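Once a provision run completes, the new cluster and its hosts appear in the front-end like any other OpenNebula resources. As a quick sketch, they could be listed with the pyone Python bindings (the endpoint and credentials below are placeholders):

```python
import pyone

# placeholder endpoint and credentials for the front-end XML-RPC API
one = pyone.OneServer("http://frontend:2633/RPC2", session="oneadmin:secret")

# print every hypervisor host together with its cluster and state
for host in one.hostpool.info().HOST:
    print(host.NAME, host.CLUSTER, host.STATE)
```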
The OpenNebula front-end host in Parsippany was prepared using the miniONE tool, which automates a single-node OpenNebula evaluation installation and completed in just 4 minutes. For the host, we used the CentOS 7 operating system and the c2.medium.x86 (https://www.packet.com/cloud/servers/c2-medium-epyc/) hardware configuration.
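A front-end bootstrap along those lines can be scripted; the sketch below assumes the miniONE installer is published as an asset on the OpenNebula/minione GitHub releases page, so verify the URL against the project README before use:

```python
import subprocess

# download the miniONE evaluation installer (URL pattern assumed; verify first)
subprocess.run(
    ["curl", "-fsSL", "-o", "minione",
     "https://github.com/OpenNebula/minione/releases/latest/download/minione"],
    check=True,
)

# run the single-node installation; this took about 4 minutes in our case
subprocess.run(["sudo", "bash", "minione"], check=True)
```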
Hardware and Game Selection
Host clusters deployed at the locations worldwide use two hardware configurations (x1.small.x86 and c2.medium.x86), running the Ubuntu 18.04 LTS operating system and the KVM hypervisor. The hosts run OpenNebula-managed KVM virtual machines with a Debian 9 guest operating system and a game server service for the online FPS game “Wolfenstein: Enemy Territory”.
We chose this service for its maturity and simplicity. The actual game used to demonstrate the functionality is “Enemy Territory: Legacy”, an open source project that provides a compatible client (and server) for “Wolfenstein: Enemy Territory”. And as we walk through the details of this hypothetical “product launch”, we have direct insight into the latencies of all the servers, which we review below.
A key requirement for a successful experiment is that the game server services be reachable by everyone over the public Internet. The OpenNebula installation was configured to manage the ad-hoc public IP ranges provided by Packet and assign them to the virtual machines through the standard OpenNebula IP management interfaces. Virtual machines can thus have all their services transparently exposed to the public.
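As a rough illustration of that setup, a public virtual network backed by a Packet-provided address range might be registered via pyone as below; the network name, IP range, and VN_MAD driver are placeholder assumptions, not the exact configuration used:

```python
import pyone

one = pyone.OneServer("http://frontend:2633/RPC2", session="oneadmin:secret")

# a public network backed by a Packet-assigned IP range (placeholder values)
VNET_TEMPLATE = """
NAME    = "packet-public-ams1"
VN_MAD  = "dummy"
AR = [ TYPE = "IP4", IP = "147.75.100.10", SIZE = "4" ]
"""

# second argument is the cluster ID (-1 selects the default cluster)
vnet_id = one.vn.allocate(VNET_TEMPLATE, -1)
print("created virtual network", vnet_id)
```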
Experiment Execution
Each provision run prepares a cluster in one location in several phases:
- Phase 1: New physical hosts are allocated and deployed on Packet, receiving a clean installation of the chosen operating system.
- Phase 2: The OpenNebula provision feature installs and configures the hosts to run the KVM hypervisor.
- Phase 3: The new hosts are added to OpenNebula as hypervisor hosts; the front-end transfers its drivers and begins monitoring them.
- Phase 4: The virtual machines are started.
- Phase 5: The game server service is automatically installed at first boot.
The precise installation steps are passed to each VM through OpenNebula-specific metadata (contextualization). At the end, there is a fully working game server automatically registered alongside all the other public game servers worldwide.
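To illustrate how contextualization carries those installation steps, here is a sketch of a VM template whose CONTEXT section embeds a bootstrap script. The image and network names, package name, and service name are hypothetical stand-ins for the real game server setup:

```python
import base64
import pyone

one = pyone.OneServer("http://frontend:2633/RPC2", session="oneadmin:secret")

# hypothetical bootstrap script installing the game server at first boot
START_SCRIPT = """#!/bin/bash
apt-get update
apt-get install -y et-legacy-server   # placeholder package name
systemctl enable --now etl-server    # placeholder service name
"""
encoded = base64.b64encode(START_SCRIPT.encode()).decode()

VM_TEMPLATE = f"""
NAME   = "game-server"
CPU    = "1"
MEMORY = "1024"
DISK   = [ IMAGE = "debian9-base" ]
NIC    = [ NETWORK = "packet-public-ams1" ]
CONTEXT = [
  NETWORK = "YES",
  START_SCRIPT_BASE64 = "{encoded}"
]
"""

# second argument False: create the VM in pending state rather than on hold
vm_id = one.vm.allocate(VM_TEMPLATE, False)
print("created VM", vm_id)
```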
Hosts and game service VMs were started from base OS images and configured dynamically from scratch; no images with pre-installed or pre-configured services were used. As each provision execution prepares a cluster in only a single location, several independent provisions targeting different locations were run in parallel.
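Those parallel provision runs are straightforward to script. Below is a minimal sketch, assuming one hypothetical template file per Packet facility and the `oneprovision` invocation shown earlier:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# one provision template per target location (placeholder file names)
TEMPLATES = ["edge-ams1.yaml", "edge-bos2.yaml", "edge-syd1.yaml"]

def provision(template: str) -> int:
    # each run builds a complete cluster in a single location
    return subprocess.run(["oneprovision", "create", template]).returncode

# start all location provisions in parallel and collect their exit codes
with ThreadPoolExecutor(max_workers=len(TEMPLATES)) as pool:
    exit_codes = list(pool.map(provision, TEMPLATES))

print(exit_codes)
```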
The following table shows the measured duration of each phase, in seconds, for each location:
| Name | Phase 1: Deploy host | Phase 2: Configure KVM host | Phase 3: Monitor by ONE | Phase 4: Start VM | Phase 5: Bootstrap game svc | TOTAL |
|------|---:|---:|---:|---:|---:|---:|
| AMS1 | 462 | 163 | 98 | 23 | 82 | 831 |
| BOS2 | 444 | 87 | 11 | 23 | 105 | 673 |
| DFW1 | 309 | 129 | 44 | 15 | 78 | 579 |
| DFW2 | 427 | 125 | 44 | 22 | 86 | 707 |
| EWR1 | 548 | 77 | 3 | 8 | 109 | 748 |
| FRA2 | 346 | 168 | 97 | 21 | 84 | 719 |
| HKG1 | 351 | 352 | 375 | 376 | 146 | 1504 |
| IAD1 | 305 | 93 | 12 | 32 | 80 | 525 |
| LAX1 | 342 | 172 | 77 | 17 | 96 | 707 |
| MRS1 | 329 | 194 | 104 | 24 | 75 | 729 |
| NRT1 | 350 | 328 | 241 | 46 | 138 | 1105 |
| ORD2 | 452 | 103 | 24 | 7 | 89 | 678 |
| ORD3 | 438 | 103 | 25 | 36 | 85 | 691 |
| SIN1 | 329 | 424 | 320 | 58 | 108 | 1242 |
| SJC1 | 341 | 177 | 89 | 19 | 117 | 746 |
| SYD1 | 345 | 462 | 342 | 60 | 171 | 1383 |
| YYZ1 | 308 | 102 | 16 | 9 | 82 | 520 |
One can see that the quickest of all deployments was Toronto (YYZ1), taking 520 seconds (under 9 minutes) in total. The longest was Hong Kong (HKG1), taking 1504 seconds (just over 25 minutes).
Times to deploy a host on Packet are quite consistent, with low variance. Configuring a host as a KVM hypervisor depends on the latency between the front-end and the host, and on the performance of the nearest OS package mirror. The time for OpenNebula to monitor a host is affected only by the remote host’s latency and the network throughput.
The same applies to the virtual machine start, as the base VM image (320 MiB) must first be transferred from the front-end to each hypervisor host. For hosts far from the front-end, image transfer times can be quite long (for example, 171 seconds to Sydney, SYD1). The game service bootstrap depends mainly on the performance of the nearest OS package mirror.
Below is a screenshot of the deployed and configured servers hosting the Enemy Territory video game. Note the “PING” metric: it shows the latency measured from the client from which we orchestrated this exercise (Brno, Czech Republic) to the various nodes. Understandably, the host location with the shortest latency was Frankfurt, Germany (FRA2) at 18 ms. Compare that with the 331 ms latency from the Czech Republic to Sydney, Australia (almost 20x longer).
The ideal is to stand up resources as close to the user base as possible, so that users in relative proximity to a node experience latencies below 10 ms.
So, with this OpenNebula distributed cloud infrastructure built on core and edge bare metal resources from Packet, players of Enemy Territory will be directed to the node with the shortest latency, providing optimal performance. And the infrastructure admin has the flexibility to scale resources in each cluster in line with active traffic.
And when it came to clearing out the entire cloud infrastructure, the simultaneous deletion of all the hosts took no more than 49 seconds.
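Teardown can be scripted the same way as creation. The sketch below assumes each provision can be deleted by ID with the `oneprovision` CLI; the IDs are placeholders and the behavior of the `--cleanup` flag should be checked against your OpenNebula version:

```python
import subprocess

# placeholder provision IDs as reported by `oneprovision list`
PROVISION_IDS = ["0", "1", "2"]

# delete all provisions simultaneously; --cleanup is assumed to also
# remove the VMs and images belonging to each provision
procs = [
    subprocess.Popen(["oneprovision", "delete", pid, "--cleanup"])
    for pid in PROVISION_IDS
]
for p in procs:
    p.wait()
```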
What does a platform like this cost?
Packet has an innovative business model and a strong foothold at the forefront of a rapidly developing technology ecosystem, and it is worth digging deeper into its offerings. The hourly rate for the resources hosting our distributed Enemy Territory cloud came to no more than $11.40/hour!
The conclusion is clear
OpenNebula v.5.8 “Edge” takes a fresh approach to creating a distributed private cloud, not only broadening support for lightweight LXD containers but, as we have seen here, integrating support for simple cloud deployment on bare metal resources from providers like Packet. An administrator can create and manage a distributed private cloud with nodes in 17 locations around the world, deploy and configure them with a few clicks, and subsequently flex those configurations on the fly, all within 25 minutes and for under $12/hour. That sounds like a perfect building block for the platforms of today, and tomorrow.