Setting up open virtualization system (oVirt)

In this post I’ll demonstrate how easy it is to set up an open virtualization system with both a hypervisor and management. oVirt is the open source upstream of Red Hat’s Red Hat Enterprise Virtualization Manager (RHEV-M) and Red Hat Enterprise Virtualization Hypervisor (RHEV-H).

So let’s start by installing some prerequisite packages:

[root@ovirt-manager ~]# yum install -y wget postgresql-server postgresql-contrib pgadmin3 java-1.6.0-openjdk-devel

Next we’ll add a new repository for oVirt:

[root@ovirt-manager ~]# wget http://www.ovirt.org/releases/nightly/fedora/16/ovirt-engine.repo -P /etc/yum.repos.d/

The final step is to install the actual oVirt manager packages:

[root@ovirt-manager ~]# yum install -y ovirt-engine ovirt-engine-setup

Now that all the needed packages are installed, we can configure the manager.

[root@ovirt-manager ~]# engine-setup
Welcome to oVirt Engine setup utility
HTTP Port  [8080] :
HTTPS Port  [8443] :
Host fully qualified domain name, note that this name should be fully resolvable  [ovirt-manager.demo.local] :
ovirt-manager.demo.local did not resolve into an IP address
User input failed validation, do you still wish to use it? (yes|no): yes
Password for Administrator (admin@internal) :
Warning: Weak Password.
Confirm password :
Database password (required for secure authentication with the locally created database) :
Warning: Weak Password.
Confirm password :
Organization Name for the Certificate: Demolab
The default storage type you will be using  ['NFS'| 'FC'| 'ISCSI']  [NFS] :
Should the installer configure NFS share on this server to be used as an ISO Domain? ['yes'| 'no']  [yes] : yes
Mount point path: /install
Display name for the ISO Domain: install
Firewall ports need to be opened.
You can let the installer configure iptables automatically overriding the current configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example iptables file found under /usr/share/ovirt-engine/conf/iptables.example
Configure iptables ? ['yes'| 'no']: yes

oVirt Engine will be installed using the following configuration:
=================================================================
http-port:                     8080
https-port:                    8443
host-fqdn:                     ovirt-manager.demo.local
auth-pass:                     ********
db-pass:                       ********
org-name:                      Demolab
default-dc-type:               NFS
nfs-mp:                        /install
iso-domain-name:               install
override-iptables:             yes
Proceed with the configuration listed above? (yes|no): yes

The setup might take a while, but in a few minutes you should get output like the below:

Installing:
Configuring oVirt-engine...                              [ DONE ]
Creating CA...                                           [ DONE ]
Setting Database Security...                             [ DONE ]
Creating Database...                                     [ DONE ]
Updating the Default Data Center Storage Type...         [ DONE ]
Editing JBoss Configuration...                           [ DONE ]
Editing oVirt Engine Configuration...                    [ DONE ]
Configuring the Default ISO Domain...                    [ DONE ]
Configuring Firewall (iptables)...                       [ DONE ]
Starting JBoss Service...                                [ DONE ]

 **** Installation completed successfully ******

     (Please allow oVirt Engine a few moments to start up.....)

Additional information:
 * There is less than 4 GB available free memory on the Host.
It is  recommended to have at least 4 GB available memory to run the RHEV Manager.
 * Keystore already exists, skipped certificates creation phase
 * A default ISO share has been created on this host.
   If IP based access restrictions are required, please edit /install entry in /etc/exports
 * The firewall has been updated, the old iptables configuration file was saved to /usr/share/ovirt-engine/conf/iptables.backup.074609-01032012_1691
 * The installation log file is available at: /var/log/engine/engine-setup_2012_01_03_07_44_46.log
 * Please use the user "admin" and password specified in order to login into oVirt Engine
 * To configure additional users, first configure authentication domains using the 'engine-manage-domains' utility
 * To access oVirt Engine please go to the following URL: http://ovirt-manager.demo.local:8080

If you get a database creation error, check the database installation log. If it contains lines saying “Peer authentication failed for user “postgres””, change the authentication method in pg_hba.conf to trust, restart the postgresql service, and run the installer again.
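A sketch of that fix — the real file typically lives at /var/lib/pgsql/data/pg_hba.conf on Fedora (an assumption; verify on your host), and the edit is demonstrated here on a throwaway copy:

```shell
# Demonstrated on a copy; point PG_HBA at /var/lib/pgsql/data/pg_hba.conf
# (assumed Fedora path) on the real system.
PG_HBA=/tmp/pg_hba.conf.demo
printf '%s\n' \
  'local   all             all                                     peer' \
  'host    all             all             127.0.0.1/32            ident' > "$PG_HBA"

# Switch the auth methods to trust, keeping a .bak backup of the original.
sed -i.bak -e 's/peer$/trust/' -e 's/ident$/trust/' "$PG_HBA"
cat "$PG_HBA"

# On the real system, follow up with:
#   service postgresql restart
#   engine-setup
```

Keep in mind that trust disables password checks for local connections, so consider tightening pg_hba.conf again once the installer has finished.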

The next step is to install ovirt-node (the hypervisor). It’s really simple and straightforward: just get the latest ISO from http://www.ovirt.org/releases/nightly/binary/, boot your hypervisor machine with it, install to the local disk and do the basic configuration. This shouldn’t take long, as there are only a few things to do: select the disk you are installing to, type the root password and go.


The next thing to do is install more hypervisors and connect them to ovirt-engine. I’ll write another post about this with basic configuration examples. Try oVirt today — it’s a really competitive alternative to VMware / Citrix and it’s totally open source 🙂

Implementing an IPA server in a Windows/Linux environment

Red Hat included IPA server (FreeIPA) in the RHEL 6.1 release. FreeIPA is an integrated security information management solution, comparable to Microsoft Active Directory, combining 389 Directory Server (LDAP), MIT Kerberos, NTP and DNS. It consists of a web interface and command-line administration tools, so management can be done either with a web browser or from the command line. It’s quite easy to implement FreeIPA in a Windows/Linux environment, and I’ll show here how you can install, configure and use IPA without any deeper knowledge of Linux.

First step: Install

In my example I have already installed RHEL 6.2, but you can also use CentOS or Fedora. I installed RHEL with a minimal installation, upgraded all packages and then installed all the packages needed for FreeIPA:

[root@x1 ~]# yum install ipa-* bind bind-chroot bind-dyndb-ldap

This will install all the FreeIPA packages and bind (name server) with the chroot option. It will take a while; in my demo lab it installed 262 packages in total. So start the install and relax with a cup of coffee while waiting.

After the installation completes, we need to check that the IPA server’s hostname is set correctly:

[root@x1 ~]# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.16.16    x1.demo.konehuone.fi x1
[root@x1 ~]# hostname
x1.demo.konehuone.fi

Check that there is a line with your server’s FQDN, short name and correct IP address. If the name or IP is not correct, you need to fix them before the next step.

Second step: Configure FreeIPA

Now it is time to install and configure FreeIPA. This is really simple:

[root@x1 ~]# ipa-server-install --setup-dns

The installer will ask for the server host name, domain name and Kerberos realm name. Accept the default settings if you do not want to change them; you might want to shorten the Kerberos realm (like DEMO in my example). You also need to set passwords for the LDAP Directory Manager account and the FreeIPA admin user account — as a best practice, use a different password for each account.

By default the installer will also set up a DNS forwarder and reverse zones for you. It will ask for the IP of your forwarder and will automatically create a reverse zone for that network. After this the installer will configure all the necessary services (this might take a while, too).

The last phase here is to check that the necessary firewall ports are open; below is an example from my iptables configuration:

[root@x1 ~]# cat /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 389 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 636 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 88 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 464 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 88 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 464 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 123 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
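For reference, the per-port rules above could be collapsed with iptables’ multiport match — a sketch only (same port lists as above), so verify it against your own services before using it:

```shell
# Equivalent compact form of the per-port TCP and UDP rules above.
-A INPUT -m state --state NEW -p tcp -m multiport --dports 22,53,80,88,389,443,464,636 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m multiport --dports 53,88,123,464 -j ACCEPT
```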

You could also combine ports onto one line, but I like to have one line per port. After configuration you can test that everything is working just as it should:

[root@x1 etc]# kinit admin
Password for admin@DEMO.KONEHUONE.FI:
[root@x1 etc]# ipa user-find admin
--------------
1 user matched
--------------
  User login: admin
  Last name: Administrator
  Home directory: /home/admin
  Login shell: /bin/bash
  UID: 805800000
  GID: 805800000
  Account disabled: False
  Keytab: True
  Password: True
----------------------------
Number of entries returned 1
----------------------------

If you can find the admin user, your FreeIPA server is now configured and running fine.

Third step: Configure your Windows/Linux machines to authenticate against IPA

Adding Windows clients to IPA involves a few more tasks than Linux clients, so we’ll start with the Windows one.

Before we add any clients to IPA we need to create a user account. In my example I will do it the CLI way, but you could also do it from the web UI:

[root@x1 log]# ipa user-add
First name: Ipa
Last name: Test
User login [itest]: ipatest
--------------------
Added user "ipatest"
--------------------
  User login: ipatest
  First name: Ipa
  Last name: Test
  Full name: Ipa Test
  Display name: Ipa Test
  Initials: IT
  Home directory: /home/ipatest
  GECOS field: Ipa Test
  Login shell: /bin/sh
  Kerberos principal: ipatest@DEMO.KONEHUONE.FI
  UID: 805800004
  GID: 805800004
  Keytab: False
  Password: False

Next we need to reset the ipatest user’s password:

[root@x1 log]# ipa passwd ipatest
New Password:
Enter New Password again to verify:
------------------------------------------------
Changed password for "ipatest@DEMO.KONEHUONE.FI"
------------------------------------------------

Next we also need to create an account for the client machine. In this example we will first add a DNS record for the machine called winxp, then add a host entry to IPA and give the host an initial password to be used when connecting the host to IPA:

[root@x1 log]# ipa dnsrecord-add
Zone name: demo.konehuone.fi
Record name: winxp
[A record]: 172.16.16.17
[AAAA record]:
  Record name: winxp
  A record: 172.16.16.17
[root@x1 log]# ipa host-add
Host name: winxp.demo.konehuone.fi
------------------------------------
Added host "winxp.demo.konehuone.fi"
------------------------------------
  Host name: winxp.demo.konehuone.fi
  Principal name: host/winxp.demo.konehuone.fi@DEMO.KONEHUONE.FI
  Keytab: False
  Password: False
  Managed by: winxp.demo.konehuone.fi
[root@x1 log]# ipa-getkeytab -s x1.demo.konehuone.fi -p host/winxp.demo.konehuone.fi -e arcfour-hmac -k krb5.keytab.winxp.demo.konehuone.fi -P
New Principal Password:
Verify Principal Password:
Keytab successfully retrieved and stored in: krb5.keytab.winxp.demo.konehuone.fi

After this we configure the host to use the IPA server. Currently there’s one problem with Windows authentication: you need to have a local user that Kerberos users are mapped to. The local user can be locked, however, so you cannot log on with it directly, only via Kerberos authentication. In my example I’ll use an XP machine, and because of that we need to download the additional support tools and install the whole package to get the ksetup tool.

In my example I didn’t create any local users; I just mapped everything to the guest user:

ksetup /mapuser * guest
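On a fresh XP machine the realm and KDC also have to be configured with ksetup before that mapping takes effect — a hedged sketch using my example realm and server names (yours will differ):

```batch
REM Point the workstation at the IPA realm and its KDC (example names).
ksetup /setrealm DEMO.KONEHUONE.FI
ksetup /addkdc DEMO.KONEHUONE.FI x1.demo.konehuone.fi
ksetup /addkpasswd DEMO.KONEHUONE.FI x1.demo.konehuone.fi
REM Map all Kerberos principals to the (locked) local guest account.
ksetup /mapuser * guest
```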

The final task is to reboot the machine; after that you should be able to log on using a Kerberos account, in my example ipatest@DEMO.KONEHUONE.FI.

Final words

You can do much more with IPA. I will write another post on how you can extend the usage of your IPA. I think FreeIPA is a really good option for any company that wants to avoid Microsoft Active Directory.

Open Virtualization Management with oVirt Project

Finally there’s a really good open source alternative for virtualization management. Yesterday the oVirt project made this announcement:

The oVirt project today announced that Canonical, Cisco, IBM, Intel, 
NetApp, Red Hat and SUSE have joined together to help create a new
open source community for the development of open virtualization
platforms, including virtual management tools to manage the
Kernel-based Virtual Machine (KVM) hypervisor. With the oVirt
project, the industry gains an open source, openly governed
virtualization stack.

What this actually means is that Red Hat has finally open sourced their RHEV-M and RHEV-H. While we need to wait a while for the official release (January 2012), you can start testing it today.

Red Hat acquires Gluster

Red Hat has announced that it will acquire Gluster. Gluster is the company behind cluster/cloud filesystem GlusterFS. Red Hat explains that the company considers the technologies used in Gluster to be a good fit with its cloud computing strategy. Red Hat will continue to support Gluster customers and will integrate Gluster products into its portfolio over the coming months. It hopes to continue to involve the GlusterFS community in developing the filesystem.

GlusterFS is released under the GPLv3 and is described by the developers as a stackable modular cluster file system, which runs in user space and hooks into the kernel via Fuse (Filesystem in Userspace). Communication between storage nodes and clients is via Infiniband or TCP/IP. GlusterFS scales to the petabyte level and, rather than storing data onto storage media directly, uses proven file systems such as Ext3, Ext4 and XFS.

VMware vCenter plugin for IBM’s XIV/SVC/Storwize v7000

IBM released version 2.5.1 of its Storage Management Console for VMware vCenter a while ago. Installation is quite simple, and afterwards adding new vdisks to a VMware infrastructure is quite easy. The new version supports XIV, SVC and Storwize V7000 at the firmware versions shown in the following table:

Product                                            Firmware version
IBM XIV® Storage System                            10.1.0 to 10.2.4x
IBM Storwize® V7000                                6.1 and 6.2
IBM System Storage® SAN Volume Controller (SVC)    5.1, 6.1 and 6.2

Direct download link for software is here.

Installation needs to be done on the vCenter Windows machine. vSphere 5 is not currently supported, but hopefully it will be soon. The installation asks a few questions about user IDs etc.

After installation you need to restart your vSphere Client and check that IBM’s plugin is enabled.

Next you need to connect the plugin to your storage; in my example I will connect it to a Storwize V7000. Click “Add” to start the storage system wizard.

Select the brand of storage you are connecting. If you want to add multiple brands or multiple storage systems, you need to add them one by one.

Next the wizard asks for the connection settings of the storage. Enter the IP address or hostname of the storage and the username, and select the private key that you have associated with the selected user. If your key uses a passphrase, enter it as well.

Note that the keys need to be in OpenSSH format. If you created your keys with PuTTY Key Generator you need to convert them: first import the key you created, then export it in OpenSSH format.

The last step is to select the mdisk groups you want to access from the plugin. Note that the V7000 names mdisk groups by default in the format mdiskgrpX. You should change the names to ones that better describe the characteristics of the group; I usually use the format disktype_speed_size_number (for example SAS_15k_600gb_1).
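The rename itself can be done over the same SSH CLI the plugin uses — a sketch with a hypothetical management address, using the SVC/Storwize `chmdiskgrp` command (verify the syntax against your firmware’s CLI reference):

```shell
# Rename the default mdiskgrp0 to a descriptive name (hypothetical host/user).
ssh admin@v7000.demo.local 'svctask chmdiskgrp -name SAS_15k_600gb_1 mdiskgrp0'
# List the groups to confirm the change.
ssh admin@v7000.demo.local 'svcinfo lsmdiskgrp -delim :'
```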

After this you are able to create and present new vdisks from vCenter, rather than creating and mapping new vdisks from the V7000/SVC/XIV GUI and then creating the VMFS afterwards. Creating new disks is quite easy: just select the correct mdisk group and click “New Volume”. You then get a window where you fill in the necessary details, and after that you just select the hosts/clusters where you want to map the volume.

Seven rules to help you prepare for problems on storage area networks

I wrote this article because every SAN administrator should know how to be better prepared for problems on their storage area network. As central storage is nowadays the rule rather than the exception, storage area networking plays a really important role in everyday computing.

Note that these seven rules are more my suggestions than generic rules that should be followed strictly. So try to find at least something that you can implement in your environment, and please let me know if there are more relevant things than these, or something to add.

Rule #1 – Implement NTP

In my opinion this is the most important thing. Anyone who has faced a situation where you needed to find out what happened in an environment where all the clocks show different times understands the relevance of NTP. When all your devices are on the same time, finding the root cause is usually much easier. Implement an NTP server on your management network and sync all your devices from there. You can keep your management network’s NTP on proper time by syncing it from the internet, but it’s more important that all devices are on the same time than exactly on the right second of the world clock.
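On a Linux management station, the server side of that can be a handful of ntp.conf lines — a sketch where the upstream pool and the management subnet (192.168.100.0/24) are placeholders for your own values:

```
# /etc/ntp.conf (sketch) -- serve time to the management network.
# Upstream source; replace with your preferred servers or pool.
server 0.pool.ntp.org iburst
# Allow the management network to sync from this host, nothing more.
restrict 192.168.100.0 mask 255.255.255.0 nomodify notrap
# Fall back to the local clock if the upstream is unreachable.
server 127.127.1.0
fudge 127.127.1.0 stratum 10
```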

Rule #2 – Implement good naming schema

In the past servers usually got their names from action heroes and stars. This might be nice, but if you want easy-to-remember names, use them as CNAMEs rather than proper names. In a problem situation it is nice to see from the name exactly where your device/server is located, so I suggest you use something like Helsinki-DC1-BC01-BL01-spiderman rather than just spiderman. In this example you can easily see that the server is located in Helsinki, in datacenter 1, on blade chassis one, as blade number one.

Use consistent naming in zoning. I usually name zones ZA_host1_host2. This shows immediately that it’s a zone in fabric A between host1 and host2. I also prefer that aliases follow the same kind of naming schema: AA_host1, which is the alias in fabric A for host1.
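On a Brocade switch that schema would look roughly like this — a sketch with hypothetical WWPNs and config name, using the standard FOS zoning commands:

```shell
# Aliases in fabric A for each host's WWPN (example WWPNs).
alicreate "AA_host1", "10:00:00:00:c9:2b:70:01"
alicreate "AA_host2", "10:00:00:00:c9:2b:70:02"
# Zone in fabric A between host1 and host2, built from the aliases.
zonecreate "ZA_host1_host2", "AA_host1; AA_host2"
# Add the zone to the fabric's config, save and activate it.
cfgadd "cfg_fabric_a", "ZA_host1_host2"
cfgsave
cfgenable "cfg_fabric_a"
```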

For storage area networking the domain ID is like a phone number: domain IDs should always be unique. This is not usually a problem if you have separate SANs, but if you move something between SANs, having unique IDs is crucial, so use unique IDs from the beginning. This information is also used for several other things, like the Fibre Channel addresses of device ports.

Rule #3 – Create generic SAN management station

This is usually done in all bigger environments, but every now and then I see environments with no generic SAN management station. Almost every company has implemented virtualization at least at some level, so creating a generic SAN management station should not be any kind of problem. You can easily go with a virtualized Windows Server, or maybe even just a virtualized Windows 7 with RDP enabled, but I would go with a server so that more than one admin can be on the station at a time.

This station should have at least these:

  • SSH and telnet tool which allows you to save the output of a session to a text file; on Windows I usually go with PuTTY
  • FTP server (and maybe TFTP also). I usually go with FileZilla Server, which is really easy to configure and use
  • NTP server for your SAN environment
  • Management tools for your SAN (Fabric Manager for Cisco and DCFM for Brocade) – really important for troubleshooting in larger environments
  • Enough disk space to store all firmware images and log files from the switches (see Rules #4 and #6)
  • ….and internet access for cases where you need to download something new, or just to use Google when firefighting 😉

Rule #4 – Implement monitoring on your SAN environment

This can be done with the same software you use for your server environment, but I would go with Fabric Manager on Cisco SANs and DCFM on Brocade SANs, because these include other features as well and are really useful when your environment gets bigger. Configure your management software to send email/SMS when something happens – don’t just trust your eyes!

You should also implement automatic log collection for your environment. For example, this helps a lot when you are trying to find physical link problems or slow-drain devices. Configure your management station to pull all logs from the switches daily/weekly and then clear all counters, so the next log starts with empty counters. This can be implemented with a few lines of Perl and an SSH library, and there are plenty of existing scripts on Google if you don’t know how to do it with Perl.
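If Perl isn’t your thing, the same collection can be sketched in plain shell — the switch names, user, log directory and the `errdump`/`statsclear` commands here are Brocade-flavoured placeholders, so substitute your vendor’s equivalents:

```shell
#!/bin/sh
# Nightly SAN log pull -- hypothetical switch list and credentials;
# errdump/statsclear are Brocade examples, use your vendor's equivalents.
SWITCHES="sansw-a1 sansw-b1"
LOGDIR=/tmp/san-logs          # pick a real archive path in production
DATE=$(date +%Y%m%d)

mkdir -p "$LOGDIR"
for sw in $SWITCHES; do
    # Collect the error log, then clear counters so the next log starts empty.
    ssh -o BatchMode=yes admin@"$sw" errdump > "$LOGDIR/$sw-$DATE.log" 2>/dev/null \
        || echo "WARN: could not collect from $sw" >&2
    ssh -o BatchMode=yes admin@"$sw" statsclear 2>/dev/null || true
done
```

Drop the script into cron on the management station and point LOGDIR at the disk space from Rule #3.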

Rule #5 – Design your SAN layout properly

This is really easy to achieve and doesn’t even take much time to keep up to date. Create a layout sketch of your SAN – even in smaller environments – and share it with all admins. You don’t need to have all servers in this sketch; include just your SAN switches and storage systems. You can include your servers too, but this usually makes the sketch quite big and unreadable. In a two-SAN environment (and having two separate SANs should be the de facto standard!), always plug your servers and storage into the same ports: if you connect your storage system to ports 1-4 on switch one in fabric A, connect it to ports 1-4 on the corresponding switch in fabric B as well.

Rule #6 – Update your firmwares

Don’t just hang on to working firmware. There is no software that is absolutely free of bugs, which is why you should update your firmware regularly. I am not saying you should go with a new release as soon as it hits the downloads page, but try to stay on as new a version as you can. Many storage systems set requirements on firmware levels, so always follow your manufacturer’s advice. If your manufacturer doesn’t support anything newer than something released a year ago, it might be time to change vendors!

If you have a properly designed SAN with two separate networks, you can do firmware upgrades without any break in production, and most enterprise-class SAN switches (usually called SAN directors) have two redundant controllers, so you can update them on the fly without any interruption to production!

Rule #7 – Do backups!!!

Take this seriously. Taking backups is not hard. You can implement this in your daily statistics collection scripts or do it periodically by hand – whichever way you choose, take your backups regularly. I have seen lots of cases where there were no backups of a switch, and after a crash the admins needed to recreate everything from scratch. Implement this for your storage systems too if possible; at least IBM’s high-end storage systems have features that allow you to take backups of the configs. Config files are usually really small, and there shouldn’t be anywhere without disk/tape space for backups of things as important as SAN switches and storage systems. From SAN switches you might also want to keep a backup of your license files, as getting new license files from Cisco/Brocade can take a while.
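A minimal sketch of such a backup script, assuming Brocade switches (`configshow` dumps the configuration to stdout; on Cisco MDS you would use `show running-config` instead) and hypothetical switch names:

```shell
#!/bin/sh
# Periodic switch config backup -- hypothetical switch names and user.
# configshow is the Brocade command; substitute your vendor's equivalent.
BACKUPDIR=/tmp/san-backups    # use real disk/tape-backed storage in production
DATE=$(date +%Y%m%d)

mkdir -p "$BACKUPDIR"
for sw in sansw-a1 sansw-b1; do
    ssh -o BatchMode=yes admin@"$sw" configshow > "$BACKUPDIR/$sw-$DATE.cfg" 2>/dev/null \
        || echo "WARN: backup of $sw failed" >&2
done
```

Run it from cron alongside the log collection from Rule #4, and the warning lines give you a cheap way to notice when a backup silently stops working.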