Introducing the New Virtual Router: Simplifying Management of Virtual Networks

Michał Opala

Senior Cloud Engineer at OpenNebula

Mar 13, 2024

The Virtual Router is dead, long live the Virtual Router!

We’re happy to announce that, along with the release of the new one-apps set of tools, we’ve prepared a completely new implementation of our Virtual Router! 🤓 We embarked on this journey to improve the overall OpenNebula experience for anyone who wants to deploy virtual environments that are slightly more complex than simple VMs. The new version has been available in the OpenNebula Marketplace for some time now and is intended to replace the old image and VM template. Please feel free to take a look at the Virtual Router’s source code and its new documentation!

OneKE is now fully open source!

One of the main incentives for this Virtual Router rewrite was to improve our OpenNebula Kubernetes Engine (OneKE) in several ways. We’ve released a new OneKE appliance together with the new Virtual Router; it’s free as usual, but now it’s also fully open source! You can find OneKE’s source code and its documentation on our GitHub repo.

Features of the Virtual Router

The Virtual Router in OpenNebula addresses common problems in the management of virtual networks. It consists of several modules that can be enabled and configured via contextualization parameters. So what can you do with it?

  • Set up NAT between public and private VNETs.
  • Set up static port forwarding between public and private VNETs.
  • Set up static and dynamic load balancing with either IPVS/LVS or HAProxy.
  • Provide a DNS forwarder for isolated VNETs.
  • Provide a simple OpenNebula-compatible DHCP4 server.
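
Each of these modules is toggled through CONTEXT variables. As a quick sketch, enabling the DNS forwarder and the DHCP4 server could look like the fragment below. The ONEAPP_VNF_DNS_* and ONEAPP_VNF_DHCP4_* names follow the same naming convention as the NAT parameters used later in this post, but please verify the exact names against the Virtual Router documentation:

CONTEXT = [
  NETWORK = "YES",
  ONEAPP_VNF_DNS_ENABLED = "YES",
  ONEAPP_VNF_DHCP4_ENABLED = "YES" ]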

On top of the features above, the Virtual Router is built on Keepalived’s VRRP implementation, so it can run in full high-availability mode.

Alright, how can I start then?

The Service Virtual Router appliance, which consists of a QCOW2 image and a VM template, can be found in the OpenNebula Marketplace. Inside your OpenNebula instance you can, for example, use the CLI to download it:

$ onemarketapp export 'Service Virtual Router' vr1 --datastore default
IMAGE
    ID: 0
VMTEMPLATE
    ID: 0

Then you need to create a Virtual Router instance. How you configure your instances is completely up to you, and there are lots of parameters to try out. In the example below, two NICs are attached to each instance, and NAT is established between the public and private VNETs. Additionally, two VIPs are allocated for the Virtual Router instances; the first one is “floating only”, which means OpenNebula will not allocate NIC IP addresses other than the VIP itself. There are many more configuration possibilities, so feel free to take a look at the dedicated documentation.

$ onevrouter create <<'EOF'
NAME = "vr1"
NIC = [
  NETWORK = "public",
  FLOATING_IP = "YES",
  FLOATING_ONLY = "YES" ]
NIC = [
  NETWORK = "private",
  FLOATING_IP = "YES",
  FLOATING_ONLY = "NO" ]
CONTEXT = [
  NETWORK = "YES",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
  ONEAPP_VNF_NAT4_ENABLED = "YES",
  ONEAPP_VNF_NAT4_INTERFACES_OUT = "eth0" ]
EOF
ID: 0

And finally, the Virtual Router needs actual VM instances in order to operate. You should instantiate at least one; the example below creates two, which enables high availability:

$ onevrouter instantiate vr1 vr1 --multiple 2
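
Once the instances are running, you can inspect the Virtual Router and its backing VMs with the standard CLI, for example:

$ onevrouter show vr1
$ onevm list

The show output includes the attached NICs (with their VIPs) and the IDs of the VM instances backing the router.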

How does OneKE benefit from the new Virtual Router?

OneKE is implemented as a OneFlow service, which consists of roles, i.e. pools of VMs that you can scale. The first role OneKE deploys is called “vnf” (Virtual Network Functions). The image used to create VMs for this role is identical to the one the Virtual Router uses, because the Virtual Router has two viable modes of operation:

  • Running as the usual Virtual Router resource.
  • Running inside OneFlow services.

Thanks to the alternative OneFlow mode, OneKE can dynamically configure HAProxy instances via the OneGate interface. Together with the NAT, DNS, and HAProxy modules, the Virtual Router provides all the network functionality OneKE needs. The HAProxy instance is used to establish RKE2’s Control Plane, and also to expose services running inside Kubernetes and forward ingress traffic. At the same time, the DNS module helps the whole cluster synchronize internally without relying on fixed IP addresses.
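
As an illustration of that OneGate flow, a node inside the service can publish key/value pairs that the HAProxy module picks up to reconfigure its backends. The ONEGATE_HAPROXY_LB0_* keys and the placeholder values below are only indicative of the convention; check the Virtual Router and OneKE documentation for the exact keys supported:

$ onegate vm update --data "ONEGATE_HAPROXY_LB0_PORT=6443"
$ onegate vm update --data "ONEGATE_HAPROXY_LB0_SERVER_HOST=<this-node-ip>"
$ onegate vm update --data "ONEGATE_HAPROXY_LB0_SERVER_PORT=6443"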

How to contribute to all this?

We’d very much like to see users getting involved in this initiative by creating issues on GitHub and submitting pull requests with fixes for existing images and appliances. Your contributions help us improve OpenNebula and can benefit the entire Community, so don’t be shy!
