In this post, I will explain how to install OpenNebula on two servers in a fully redundant environment. This is the English translation of an article in Italian on my blog.
The idea is to have two Cloud Controllers in High Availability (HA) active/passive mode using Pacemaker/Heartbeat. These nodes will also provide storage by exporting a DRBD partition via ATA-over-Ethernet; the VM disks will be created as LVM logical volumes on this partition. This solution, besides being fully redundant, provides high-speed storage, because the VM partitions are deployed as LVM snapshots rather than as files on an NFS filesystem.
Nonetheless, we will still use NFS to export the /srv/cloud directory with OpenNebula data.
System Configuration
As a reference, this is the configuration of our own servers. Your servers do not have to be exactly the same; we will simply be using these two servers to explain certain aspects of the configuration.
First Server:
- Linux Ubuntu 64-bit server 10.10
- eth0 and eth1 bonded, with IP 172.17.0.251 (SAN network)
- eth2 with IP 172.16.0.251 (LAN)
- 1 TB internal HD partitioned as follows:
- sda1: 40 GB mounted on /
- sda2: 8 GB swap
- sda3: 1 GB for metadata
- sda5: 40 GB for /srv/cloud/one
- sda6: 850 GB datastore
Second Server:
- Linux Ubuntu 64-bit server 10.10
- eth0 and eth1 bonded, with IP 172.17.0.252 (SAN network)
- eth2 with IP 172.16.0.252 (LAN)
- 1 TB internal HD partitioned as follows:
- sda1: 40 GB mounted on /
- sda2: 8 GB swap
- sda3: 1 GB for metadata
- sda5: 40 GB for /srv/cloud/one
- sda6: 850 GB datastore
Installing the base system
Install Ubuntu Server 64-bit 10.10 on the two servers, enabling the OpenSSH server during installation. In our case, each server is equipped with two 1 TB SATA disks in a hardware mirror, on which we will create a 40 GB partition (sda1) for the root filesystem, an 8 GB partition (sda2) for swap, a third partition (sda3) of 1 GB for DRBD metadata, a fourth (sda5) of 40 GB for the /srv/cloud/one directory replicated by DRBD, and a fifth (sda6) with the remaining space (approximately 850 GB) that will be used by DRBD to export the VM filesystems.
As for networking, each server has a total of three network cards: two (eth0, eth1) will be configured in bonding to handle data replication and to communicate with the compute nodes on the cluster network (SAN), subnet 172.17.0.0/24, and a third (eth2) is used to access the cluster from outside on the LAN, subnet 172.16.0.0/24.
Unless otherwise specified, these instructions are specific to the above two hosts, but should work on your own system with minor modifications.
Network Configuration
First we modify the hosts file:
/etc/hosts
172.16.0.250 cloud-cc.lan.local cloud-cc
172.16.0.251 cloud-cc01.lan.local
172.16.0.252 cloud-cc02.lan.local
172.17.0.1 cloud-01.san.local
172.17.0.2 cloud-02.san.local
172.17.0.3 cloud-03.san.local
172.17.0.250 cloud-cc.san.local
172.17.0.251 cloud-cc01.san.local cloud-cc01
172.17.0.252 cloud-cc02.san.local cloud-cc02
Next, we proceed with the system configuration. First, configure the bonding interface by installing the required packages:
apt-get install ethtool ifenslave
Then we load the module at startup with the correct parameters by creating the file /etc/modprobe.d/bonding.conf:
/etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=0 miimon=100 downdelay=200 updelay=200
And configure the network interfaces:
/etc/network/interfaces
auto bond0
iface bond0 inet static
    bond_miimon 100
    bond_mode balance-rr
    address 172.17.0.251 # 172.17.0.252 on server 2
    netmask 255.255.255.0
    up /sbin/ifenslave bond0 eth0 eth1
    down /sbin/ifenslave -d bond0 eth0 eth1

auto eth2
iface eth2 inet static
    address 172.16.0.251 # 172.16.0.252 on server 2
    netmask 255.255.255.0
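Once the interfaces are up (ifup bond0), it is worth checking that both slaves joined the bond. This is just a sanity check using the bonding driver's standard proc interface:

cat /proc/net/bonding/bond0   # should list eth0 and eth1 as slaves, with the bond in round-robin mode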
Configuring MySQL
I prefer to configure MySQL circular replication rather than manage the service through Heartbeat: MySQL is quick to start, but having it already active on both servers saves a few seconds during a switchover in case of a fault.
First we install MySQL:
apt-get install mysql-server libmysqlclient16-dev libmysqlclient16
and create the database for OpenNebula:
mysql -p
create database opennebula;
create user oneadmin identified by 'oneadmin';
grant all on opennebula.* to 'oneadmin'@'%';
exit;
Then we configure active/active replica on server 1:
/etc/mysql/conf.d/replica.cnf @ server 1
[mysqld]
bind-address = 0.0.0.0
server-id = 10
auto_increment_increment = 10
auto_increment_offset = 1
master-host = cloud-cc02.san.local
master-user = replicauser
master-password = replicapass
log_bin = /var/log/mysql/mysql-bin.log
binlog_ignore_db = mysql
And on server 2:
/etc/mysql/conf.d/replica.cnf @ server 2
[mysqld]
bind-address = 0.0.0.0
server-id = 20
auto_increment_increment = 10
auto_increment_offset = 2
master-host = cloud-cc01.san.local
master-user = replicauser
master-password = replicapass
log_bin = /var/log/mysql/mysql-bin.log
binlog_ignore_db = mysql
Finally, on both servers, restart MySQL and create the replication user:
create user 'replicauser'@'%.san.local' identified by 'replicapass';
grant replication slave on *.* to 'replicauser'@'%.san.local';
start slave;
show slave status\G
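A quick way to sanity-check the circular replication is to create a throwaway table on one server and look for it on the other (hypothetical table name, run as the MySQL root user):

mysql -p -e 'create table opennebula.ha_test (id int);'  # on server 1
mysql -p -e 'show tables in opennebula;'                 # on server 2: ha_test should be listed
mysql -p -e 'drop table opennebula.ha_test;'             # on either server; the drop replicates too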
DRBD Configuration
Now it is DRBD's turn, configured in standard active/passive mode. First, install the needed packages:
apt-get install drbd8-utils
modprobe drbd
So let’s edit the configuration file:
/etc/drbd.d/global_common.conf
global {
    usage-count yes;
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol C;
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        wfc-timeout 120;       ## 2 min
        degr-wfc-timeout 120;  ## 2 minutes.
    }
    disk {
        # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
        # no-disk-drain no-md-flushes max-bio-bvecs
        on-io-error detach;
    }
    net {
        # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
        # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
        # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        # allow-two-primaries;
        # after-sb-0pri discard-zero-changes;
        # after-sb-1pri discard-secondary;
        timeout 60;
        connect-int 10;
        ping-int 10;
        max-buffers 2048;
        max-epoch-size 2048;
    }
    syncer {
        # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        rate 500M;
    }
}
And let's create the one-disk resource definition:
/etc/drbd.d/one-disk.res
resource one-disk {
    on cloud-cc01 {
        address   172.17.0.251:7791;
        device    /dev/drbd1;
        disk      /dev/sda5;
        meta-disk /dev/sda3[0];
    }
    on cloud-cc02 {
        address   172.17.0.252:7791;
        device    /dev/drbd1;
        disk      /dev/sda5;
        meta-disk /dev/sda3[0];
    }
}
and data-disk:
/etc/drbd.d/data-disk.res
resource data-disk {
    on cloud-cc01 {
        address   172.17.0.251:7792;
        device    /dev/drbd2;
        disk      /dev/sda6;
        meta-disk /dev/sda3[1];
    }
    on cloud-cc02 {
        address   172.17.0.252:7792;
        device    /dev/drbd2;
        disk      /dev/sda6;
        meta-disk /dev/sda3[1];
    }
}
Now, on both nodes, we create the DRBD metadata:
drbdadm create-md one-disk
drbdadm create-md data-disk
/etc/init.d/drbd reload
Finally, only on server 1, activate the disks:
drbdadm -- --overwrite-data-of-peer primary one-disk
drbdadm -- --overwrite-data-of-peer primary data-disk
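At this point the initial synchronization starts; you can follow its progress with the kernel-provided status file (nothing specific to this setup):

cat /proc/drbd   # both resources should show Primary/Secondary on server 1, along with the sync percentage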
Exporting the disks
As already mentioned, the two DRBD partitions will be made available over the network, although in different ways: one-disk will be exported through NFS, while data-disk will be exported via ATA-over-Ethernet and will present its LVM partitions to the hypervisors.
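For reference, on the compute nodes (not covered in this post) the AoE export can later be discovered with the standard aoetools commands; with shelf 0 and slot 0, as configured further down, the device shows up as /dev/etherd/e0.0:

modprobe aoe
aoe-discover
aoe-stat        # should list e0.0 once the AoE target is running on the active controller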
Install the packages:
apt-get install vblade nfs-kernel-server nfs-common portmap
We'll disable automatic NFS and AoE startup because these services will be handled via Heartbeat:
update-rc.d nfs-kernel-server disable
update-rc.d vblade disable
Then we create the export for the OpenNebula directory:
/etc/exports
/srv/cloud/one 172.16.0.0/24(rw,fsid=0,insecure,no_subtree_check,async)
and we create the necessary directory:
mkdir -p /srv/cloud/one
We also have to configure the idmapd daemon to correctly map users and permissions over the network.
/etc/idmapd.conf
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = lan.local # Modify this

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
Finally, we have to configure the default NFS settings:
/etc/default/nfs-kernel-server
NEED_SVCGSSD=no # no is default
and
/etc/default/nfs-common
NEED_IDMAPD=yes
NEED_GSSD=no # no is default
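For reference, a client on the LAN would mount this export roughly like so (a hypothetical example, assuming NFSv4 since the export uses fsid=0, and the shared address cloud-cc.lan.local that the cluster will manage):

mount -t nfs4 cloud-cc.lan.local:/ /srv/cloud/one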
Fault Tolerant daemon configuration
There are two packages that can handle highly available services on Linux: Corosync and Heartbeat. Personally, I prefer Heartbeat and the instructions refer to it, but most of the configuration is done through Pacemaker, so you are perfectly free to opt for Corosync instead.
First install the needed packages:
apt-get install heartbeat pacemaker
and configure the heartbeat daemon:
/etc/ha.d/ha.cf
autojoin none
bcast bond0
warntime 3
deadtime 6
initdead 60
keepalive 1
node cloud-cc01
node cloud-cc02
crm respawn
Only on the first server, we create the authkeys file and copy it to the second server:
( echo -ne "auth 1\n1 sha1 "; \
  dd if=/dev/urandom bs=512 count=1 | openssl md5 ) \
  > /etc/ha.d/authkeys
chmod 0600 /etc/ha.d/authkeys
scp /etc/ha.d/authkeys cloud-cc02:/etc/ha.d/
ssh cloud-cc02 chmod 0600 /etc/ha.d/authkeys
/etc/init.d/heartbeat restart
ssh cloud-cc02 /etc/init.d/heartbeat restart
After a minute or two, heartbeat will be online:
crm_mon -1 | grep Online
Online: [ cloud-cc01 cloud-cc02 ]
Now we’ll configure cluster services via pacemaker.
Setting default options:
crm
configure
property no-quorum-policy=ignore
property stonith-enabled=false
property default-resource-stickiness=1000
commit
bye
The two shared IPs, 172.16.0.250 and 172.17.0.250:
crm
configure
primitive lan_ip IPaddr params ip=172.16.0.250 cidr_netmask="255.255.255.0" nic="eth2" op monitor interval="40s" timeout="20s"
primitive san_ip IPaddr params ip=172.17.0.250 cidr_netmask="255.255.255.0" nic="bond0" op monitor interval="40s" timeout="20s"
commit
bye
The DRBD resource backing the NFS export (one-disk):
crm
configure
primitive drbd_one ocf:linbit:drbd params drbd_resource="one-disk" op monitor interval="40s" timeout="20s"
ms ms_drbd_one drbd_one meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
commit
bye
The one-disk mount:
crm
configure
primitive fs_one ocf:heartbeat:Filesystem params device="/dev/drbd/by-res/one-disk" directory="/srv/cloud/one" fstype="ext4"
commit
bye
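The resource group defined further down also references an nfs_one resource for the NFS server itself, which does not appear in these listings. A minimal definition, assuming the stock nfs-kernel-server init script that we disabled from automatic startup earlier, could be:

crm
configure
primitive nfs_one lsb:nfs-kernel-server op monitor interval="40s" timeout="20s"
commit
bye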
The DRBD resource backing the AoE export (data-disk):
crm
configure
primitive drbd_data ocf:linbit:drbd params drbd_resource="data-disk" op monitor interval="40s" timeout="20s"
ms ms_drbd_data drbd_data meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
commit
bye
The data-disk AoE target:
crm
configure
primitive aoe_data ocf:heartbeat:AoEtarget params device="/dev/drbd/by-res/data-disk" nic="bond0" shelf="0" slot="0" op monitor interval="40s" timeout="20s"
commit
bye
Now we have to configure the correct startup order for the services:
crm
configure
group ha_group san_ip lan_ip fs_one nfs_one aoe_data
colocation ha_col inf: ha_group ms_drbd_one:Master ms_drbd_data:Master
order ha_after_drbd inf: ms_drbd_one:promote ms_drbd_data:promote ha_group:start
commit
bye
We will modify this configuration later to add OpenNebula and lighttpd startup.
LVM Configuration
LVM2 will allow us to create the partitions for the virtual machines and deploy them on a snapshot basis.
Install the package on both machines.
apt-get install lvm2
We have to modify the filter configuration so that LVM scans only the DRBD disks.
/etc/lvm/lvm.conf
...
filter = [ "a|drbd.*|", "r|.*|" ]
...
write_cache_state = 0
ATTENTION: Ubuntu uses an initial ramdisk (initramfs) to boot the system, so we also have to update the copy of lvm.conf inside the ramdisk.
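A simple way to do that, assuming the standard Ubuntu tooling, is to regenerate the initramfs after editing the file:

update-initramfs -u -k all   # rebuilds the ramdisk so it picks up the new /etc/lvm/lvm.conf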
Now we remove the cache:
rm /etc/lvm/cache/.cache
Only on server 1, we have to create the LVM physical volume and the volume group:
pvcreate /dev/drbd/by-res/data-disk
vgcreate one-data /dev/drbd2
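Just to illustrate the kind of operation the tm_lvm transfer driver will perform on this volume group when deploying a VM, a snapshot-based clone of a base image looks roughly like this (hypothetical volume names, shown only as an example of LVM snapshots):

lvcreate -L 10G -n lv-base-image one-data                            # a base image volume
lvcreate -s -L 10G -n lv-one-0-disk-0 /dev/one-data/lv-base-image    # copy-on-write snapshot for a VM disk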
Install and configure OpenNebula
We are almost done. Now we download and install OpenNebula 2.2 from source.
First we have to install prerequisites:
apt-get install libsqlite3-dev libxmlrpc-c3-dev scons g++ ruby libopenssl-ruby libssl-dev ruby-dev make rake rubygems libxml-parser-ruby1.8 libxslt1-dev libxml2-dev genisoimage libsqlite3-ruby libsqlite3-ruby1.8 rails thin
gem install nokogiri
gem install json
gem install sinatra
gem install rack
gem install thin
cd /usr/bin
ln -s rackup1.8 rackup
Then we have to create the OpenNebula user and group:
groupadd cloud
useradd -d /srv/cloud/one -s /bin/bash -g cloud -m oneadmin
chown -R oneadmin:cloud /srv/cloud/
chmod 775 /srv
id oneadmin # we have to use this id also on the cluster nodes for oneadmin/cloud
Now we switch to the unprivileged user to create the SSH key pair for cluster communications:
su - oneadmin
ssh-keygen # use the defaults
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 640 ~/.ssh/authorized_keys
mkdir ~/.one
We create a .profile file with default variables:
~/.profile
export ONE_AUTH='/srv/cloud/one/.one/one_auth'
export ONE_LOCATION='/srv/cloud/one'
export ONE_XMLRPC='http://localhost:2633/RPC2'
export PATH=$PATH':/srv/cloud/one/bin'
Now we have to create the one_auth file to set up the default user inside OpenNebula (used, for example, by the API or Sunstone):
~/.one/one_auth
oneadmin:password
And load the default variables before compiling:
source .profile
Now download and install OpenNebula:
cd
wget http://dev.opennebula.org/attachments/download/339/opennebula-2.2.tar.gz
tar zxvf opennebula-2.2.tar.gz
cd opennebula-2.2
scons -j2 mysql=yes
./install.sh -d /srv/cloud/one
About the configuration: this is my oned.conf file. I use the Xen hypervisor, but you can also use KVM.
/srv/cloud/one/etc/oned.conf
HOST_MONITORING_INTERVAL = 60
VM_POLLING_INTERVAL = 60
VM_DIR=/srv/cloud/one/var
SCRIPTS_REMOTE_DIR=/var/tmp/one
PORT=2633
DB = [ backend = "mysql",
       server  = "localhost",
       port    = 0,
       user    = "oneadmin",
       passwd  = "oneadmin",
       db_name = "opennebula" ]
VNC_BASE_PORT = 5900
DEBUG_LEVEL=3
NETWORK_SIZE = 254
MAC_PREFIX = "02:ab"
IMAGE_REPOSITORY_PATH = /srv/cloud/one/var/images
DEFAULT_IMAGE_TYPE = "OS"
DEFAULT_DEVICE_PREFIX = "sd"
IM_MAD = [
    name       = "im_xen",
    executable = "one_im_ssh",
    arguments  = "xen" ]
VM_MAD = [
    name       = "vmm_xen",
    executable = "one_vmm_ssh",
    arguments  = "xen",
    default    = "vmm_ssh/vmm_ssh_xen.conf",
    type       = "xen" ]
TM_MAD = [
    name       = "tm_lvm",
    executable = "one_tm",
    arguments  = "tm_lvm/tm_lvm.conf" ]
HM_MAD = [
    executable = "one_hm" ]
VM_HOOK = [
    name      = "image",
    on        = "DONE",
    command   = "image.rb",
    arguments = "$VMID" ]
HOST_HOOK = [
    name      = "error",
    on        = "ERROR",
    command   = "host_error.rb",
    arguments = "$HID -r n",
    remote    = "no" ]
VM_HOOK = [
    name      = "on_failure_resubmit",
    on        = "FAILED",
    command   = "/usr/bin/env onevm resubmit",
    arguments = "$VMID" ]
The only important thing is to modify /srv/cloud/one/etc/tm_lvm/tm_lvm.rc, setting the default VG:
/srv/cloud/one/etc/tm_lvm/tm_lvm.rc
...
VG_NAME=one-data
...
Now copy the init.d script from the source tree to /etc/init.d, but do not set it to start at boot.
I have modified the default script to also start Sunstone:
/etc/init.d/one
#! /bin/sh
### BEGIN INIT INFO
# Provides:          opennebula
# Required-Start:    $remote_fs
# Required-Stop:     $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: OpenNebula init script
# Description:       OpenNebula cloud initialisation script
### END INIT INFO

# Author: Soren Hansen - modified by Alberto Zuin

PATH=/sbin:/usr/sbin:/bin:/usr/bin:/srv/cloud/one
DESC="OpenNebula cloud"
NAME=one
SUNSTONE=/srv/cloud/one/bin/sunstone-server
DAEMON=/srv/cloud/one/bin/$NAME
DAEMON_ARGS=""
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions

#
# Function that starts the daemon/service
#
do_start()
{
    mkdir -p /var/run/one /var/lock/one
    chown oneadmin /var/run/one /var/lock/one
    su - oneadmin -s /bin/sh -c "$DAEMON start"
    su - oneadmin -s /bin/sh -c "$SUNSTONE start"
}

#
# Function that stops the daemon/service
#
do_stop()
{
    su - oneadmin -s /bin/sh -c "$SUNSTONE stop"
    su - oneadmin -s /bin/sh -c "$DAEMON stop"
}

case "$1" in
  start)
    [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
    do_start
    case "$?" in
        0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
        2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
    esac
    ;;
  stop)
    [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
    do_stop
    case "$?" in
        0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
        2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
    esac
    ;;
  restart|force-reload)
    #
    # If the "reload" option is implemented then remove the
    # 'force-reload' alias
    #
    log_daemon_msg "Restarting $DESC" "$NAME"
    do_stop
    case "$?" in
      0|1)
        do_start
        case "$?" in
            0) log_end_msg 0 ;;
            1) log_end_msg 1 ;; # Old process is still running
            *) log_end_msg 1 ;; # Failed to start
        esac
        ;;
      *)
        # Failed to stop
        log_end_msg 1
        ;;
    esac
    ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
    exit 3
    ;;
esac

:
and set it with execute permissions:
chmod 755 /etc/init.d/one
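Before handing control to Heartbeat, you can test the script by hand; this is just a sanity check, and you should stop the services again afterwards since the cluster will manage them:

/etc/init.d/one start
su - oneadmin -c 'onevm list'   # an empty VM list means oned is answering
/etc/init.d/one stop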
Configuring the HTTPS proxy for Sunstone
Sunstone is the web interface for cloud administration, if you do not want to use the command line. It listens on port 4567 and is not encrypted, so we'll use lighttpd to proxy requests over an HTTPS-encrypted connection.
First install the daemon:
apt-get install ssl-cert lighttpd
Then generate certificates:
/usr/sbin/make-ssl-cert generate-default-snakeoil
cat /etc/ssl/private/ssl-cert-snakeoil.key /etc/ssl/certs/ssl-cert-snakeoil.pem > /etc/lighttpd/server.pem
and create symlinks to enable the ssl and proxy modules:
ln -s /etc/lighttpd/conf-available/10-ssl.conf /etc/lighttpd/conf-enabled/
ln -s /etc/lighttpd/conf-available/10-proxy.conf /etc/lighttpd/conf-enabled/
And modify the lighttpd setup to enable the proxy to Sunstone:
/etc/lighttpd/conf-available/10-proxy.conf
proxy.server = ( "" =>
    ( "" =>
        (
            "host" => "127.0.0.1",
            "port" => 4567
        )
    )
)
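Once Sunstone is running, you can quickly check the proxy from the server itself; the -k flag is needed because of the self-signed snakeoil certificate:

/etc/init.d/lighttpd restart
curl -k https://localhost/   # should return the Sunstone login page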
Starting lighttpd and OpenNebula with Heartbeat
Now we add the startup scripts to the Heartbeat configuration. First of all, put both nodes in standby:
crm
node
standby cloud-cc01
standby cloud-cc02
bye
Then we can change the configuration:
crm
configure
primitive OpenNebula lsb:one
primitive lighttpd lsb:lighttpd
delete ha_group
group ha_group san_ip lan_ip fs_one nfs_one aoe_data OpenNebula lighttpd
colocation ha_col inf: ha_group ms_drbd_one:Master ms_drbd_data:Master
order ha_after_drbd inf: ms_drbd_one:promote ms_drbd_data:promote ha_group:start
commit
bye
And bring the cluster online again:
crm
node
online cloud-cc01
online cloud-cc02
bye
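You can then verify that the whole resource group has started on one of the two nodes:

crm_mon -1   # all the ha_group resources should be Started on the active node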
That’s all folks!
Thanks,
Alberto Zuin – http://www.anzs.it
Nice post. If I understood correctly, this is an active/passive cluster, so you cannot run VMs on server 2? And on which server is the "Install and configure OpenNebula" task executed, or on both?
Thanks
Hello Kaitzu,
In my real setup there are 4 servers: 2 controllers in an active/passive setup (this setup), and 2 hypervisor nodes with Xen where the instances run.
The 2 controllers act only as AoE/LVM storage and, obviously, as cloud controllers; they don't run VMs directly. Of course server 1 is active and server 2 is "sleeping", but all VM instances run on the other 2 servers.
The task “Install and configure OpenNebula” must be executed on both servers because they must be identical.
Thanks for your interest,
Alberto
Did you try using clustered lvm?
When I wrote this article, OpenNebula was at version 2.2 and I used the plain LVM stack: at the time, cLVM didn't support snapshots.
At the moment I use OpenNebula with a shared MooseFS datastore, but OpenNebula now supports LVM and iSCSI directly.
Hi Alberto,
Excellent post 🙂 very helpful.
I’m not able to find a document stating the hardware requirements. grrr… grrr…
I would like to know, based on your expertise installing and managing OpenNebula, what the hardware requirements are to set up a full and functional production environment for this cloud platform:
As far as I’ve read:
1. Frontend server (duplicated hardware for HA, so x2): what are the minimum server requirements in terms of RAM?
2. SAN capacity: 100 GB? 1 TB? More or less…
3. I assume I will need two worker nodes: how much RAM each? When adding nodes to an OpenNebula cluster, do we always need to add servers with the same hardware, or will OpenNebula handle servers with different hardware configurations? Let's say I add one worker node with 16 GB and another with 32 GB… is that fine?
Many thanks in advance.
1) RAM and CPU aren't a real problem: the OpenNebula daemon is very light, and the storage daemons aren't CPU/RAM-intensive processes either. Obviously use a decent server (dual core, 2-4 GB RAM is OK), but pay a lot of attention when choosing the disk-related hardware: a good controller and some high-speed disks (SAS, for example) are important in my opinion.
2) SAN capacity… as you choose: more space = more VMs to run.
3) More RAM/CPU = more VMs to run. No problem mixing hardware: I use an AMD server with 64 GB RAM and an Intel server with 32 GB.
Hi Alberto,
Thanks for your kind response.
I will take into account your tips while deploying my pre-prod OpenNebula platform.
Yesterday I had an online meeting with Tino (OpenNebula staff), and we also talked about the above topics. Thanks Tino for your helpful information 🙂
So, I'm now better informed on how to deploy my environment… I'm also thinking hard about using a Proxmox VE cluster for the front-end redundancy. I may set up a VM within a Proxmox cluster of two identical machines. What do you think about it?
Best regards
“No problem mixing hardware: I use an AMD server with 64 GB RAM and a Intel server with 32 GB.”
Does live migration work in this scenario? AFAIK this is not possible/supported on VMware clusters. How does OpenNebula handle different CPUs on hosts, and where can I find more information about this topic?
This is awesome, thank you
Hello Alberto, have you tried installing the KVM hypervisor on both hosts also? Since I only have 2 servers and want an active-passive KVM setup, I’ll give it a try.
Besides the version difference, most of the info here should still be relevant, I hope. The only big difference is that I run KVM on the 2 servers, so I won't need NFS or AoE.