Your Mission Critical Applications Deserve Real Backup Validation!


Every organisation knows how important data protection is, yet most organisations still never test their backups. Why? Because it is a complex issue. But if you do not test, how do you know that you can really survive a disaster?

A Modern Approach to Data Protection

The first step in data protection is, of course, thinking about it in a modern way. Even though most restores from backups are still individual files and/or folders, your organisation has to be prepared for bigger disasters. What if half of your 500-VM production environment is hit by a ransomware attack? How do you really survive that type of disaster? Restore times with so-called legacy backups might take days, even weeks. Can your business survive without access to these VMs for weeks? Probably not.

Data protection depends on timely backups, fast global search, and rapid recovery. Cohesity DataProtect reduces your recovery point objectives to minutes. Its unique SnapTree technology eliminates chain-based backups and delivers instantaneous application- or file-level recovery, with a full catalog of always-ready snapshots even at full capacity. This approach can dramatically reduce recovery time.

However, even modern data protection is not enough if you don't know that you really have something to recover to. Most modern technologies handle file-system-level data integrity, but there is still no way to really know that your backups are fully recoverable without testing them.

From Data Protection to Recovery Testing

Typically, organisations approach recovery testing by simply recovering a single virtual machine (or a few of them). This makes sure that you can recover individual VMs, but it doesn't ensure that you can recover something that actually works as a whole. Some backup vendors implement recovery testing, but it is mostly limited to booting VMs or some basic uptime checks.

The other way to do this is to manually restore application setups and test them by hand. This is very costly because it requires a lot of manual work, and it also introduces several risks. However, it lets your organisation really exercise application workflows with proper testing: do you actually get an answer from your three-tier web application, can you get answers to your database queries, and so on. What if you could keep this method of running complex tests but without any need for manual labour?

Automating Recovery Testing

Because modern hypervisor platforms are API driven, it is fairly easy to automate things at the VM level. When you add an API-driven data protection platform like Cohesity, you can automate full recovery testing with very complex test cases. This is an issue I hear about from most of my service provider customers, but also from bigger enterprise customers: how do you automate complex recovery testing? Let's see…

Cohesity Backup Validation Toolkit

To make things simpler, you can download the Cohesity Backup Validation toolkit from here, and with minimal scripting knowledge it is easy to automate the validation process.

After downloading it, it is time to create some configuration files. Let's start with the environment.json file. This file contains connection information for both the Cohesity and VMware vSphere environments. Create the file with the following content:

{
        "cohesityCluster": cohesity-01.organisation.com",
        "cohesityCred": "./cohesity_cred.xml",
        "vmwareServer": "vcenter-01.organisation.com",
        "vmwareResourcePool": "Resources",
        "vmwareCred": "./vmware_cred.xml"
}

After this we need to create the actual config.json file, containing information about each virtual machine we are about to test.

This file also defines the tests per VM, so it is very easy to define multiple tests but run only selected ones per VM. The script also lets you attach each VM to the required test network and change its IP address for testing purposes, so you don't need to test with overlapping production IPs or create siloed networking in VMware.

Note that the VMs don't need to be protected by the same protection job, which makes this more scalable, since you probably have different jobs for web frontends and the actual backend databases.

{
    "virtualMachines": [
        {
            "name": "Win2012",
            "guestOS": "Windows",
            "backupJobName": "VM_Job",
            "guestCred": "./guestvm_cred.xml",
            "VmNamePrefix": "0210-",
            "testIp": "10.99.1.222",
            "testNetwork": "VM Network",
            "testSubnet": "24",
            "testGateway": "10.99.1.1",
            "tasks": ["Ping","getWindowsServicesStatus"]
        },
        {
            "name": "mysql",
            "guestOS": "Linux",
            "linuxNetDev": "eth0",
            "backupJobName": "VM_Job",
            "guestCred": "./guestvm_cred_linux.xml",
            "VmNamePrefix": "0310-",
            "testIp": "10.99.1.223",
            "testNetwork": "VM Network",
            "testSubnet": "24",
            "testGateway": "10.99.1.1",
            "tasks": ["Ping","MySQLStatus"]
        }
    ]
}

And then the final step is to create the actual credential files. To avoid having usernames and passwords in the configuration files in plaintext, we can use a simple PowerShell script to create them. You can have one shared credential file for all VMs, or one per VM. Note that these users must have administrator-level access to the VMs in order to change the IP configuration to the test network.

To create the credential files you can use the included createCredentials.ps1 script, which creates only a single guestvm_cred.xml file. If you want to create more, you can simply run this PowerShell command:

Get-Credential | Export-Clixml -Path guestvm_more.xml

Since this file is encrypted, it can only be accessed by the same user who created it, so make sure you create the credential files with the same user you will use to run the testing scripts.
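For reference, here is a minimal sketch of how these files can be consumed from PowerShell. This is illustrative only and not taken from the toolkit itself; credential files created with Export-Clixml are read back with Import-Clixml:

# Illustrative only -- the toolkit's own scripts may load these differently.
$environment = Get-Content -Raw ./environment.json | ConvertFrom-Json
$config      = Get-Content -Raw ./config.json | ConvertFrom-Json

$cohesityCred = Import-Clixml $environment.cohesityCred
$vmwareCred   = Import-Clixml $environment.vmwareCred

foreach ($vm in $config.virtualMachines) {
    $guestCred = Import-Clixml $vm.guestCred
    Write-Host "Would validate $($vm.name) with tasks: $($vm.tasks -join ', ')"
}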

So How Does it Work?

Here is an example run that clones two virtual machines (one Linux and one Windows) and runs a different set of tests on each VM.

First, the script reads the configuration files and connects to the Cohesity cluster and the VMware vSphere vCenter environment. It then starts the clone process for the VMs.



After the clone process is done, the script moves to the actual validation phase, where it first checks that the clone task is in a success state and that the cloned VMware VMs are powered on with VMware Tools running.


When the VMs are up and VMware Tools is running, the script runs a test per VM to ensure that scripts can be pushed through VMware Tools. The next task is to move each VMware VM to the correct VM network and then change the IP configuration of each VM.
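As an illustration only (the toolkit's internals may differ), pushing a command into a cloned guest through VMware Tools looks roughly like this with VMware PowerCLI; the clone name follows from the VmNamePrefix defined in config.json, and an existing Connect-VIServer session is assumed:

# Illustrative PowerCLI sketch, not the toolkit's actual code.
$guestCred = Import-Clixml ./guestvm_cred.xml
$vm = Get-VM -Name '0210-Win2012'   # VmNamePrefix + original VM name
Invoke-VMScript -VM $vm -GuestCredential $guestCred -ScriptType PowerShell -ScriptText 'hostname'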


After moving the VMs to the correct network, the script runs the configured tests for each VM.


After running the tests, the clones are cleaned up automatically from both the Cohesity and VMware environments.


Notes

This automation toolkit is not officially provided by Cohesity; any bugs can be reported to me directly. There are some limitations with this toolkit:

You can use it only for VMware environments running vCenter.

You can run tests only against Linux and Windows virtual machines, and the Windows machines need to have PowerShell installed.

I hope this simple toolkit helps you automate your organisation's backup validations!

Building a Modern Data Platform by Exploiting the True Possibilities of Public Cloud


Building a modern, next-generation datacenter requires a specific approach and an understanding of automation. When we design a modern datacenter, we have to understand that data is the central element of today's business, and that with automation we not only save time but also dramatically reduce the human error factor. The on-premises datacenter, even a next-generation one, is still only one element of a data platform. No modern data platform would be complete without the option to use the public cloud; in fact, the public cloud plays a significant role in building a modern data platform by providing capabilities we just couldn't get any other way.

In this post we will look at the benefits of the public cloud, while making sure we overcome the challenges we might face in cloud adoption and embrace the cloud as a key functional element of our platform.

Why does our data need the public cloud?

Modern storage systems are very good, and over the past couple of years they have evolved a lot. Still, a modern data-centric approach and fast changes in the business landscape require flexibility, scalability, data movement and a commercial model that quickly make it clear the cloud can potentially answer all of these challenges.

While these business challenges are common to pretty much all traditional systems, they are the area where the public cloud is strongest. The cloud can, in theory, scale infinitely and provide a consumption model where organisations move CAPEX investments to OPEX by paying only for what they need, while keeping the flexibility to go bigger or smaller based on current business requirements. But the cloud can do much more.

We can easily take a copy of our data and do pretty interesting things with it once it is copied or moved to the public cloud. Organisations typically start with the low-hanging fruit, backups, since they are easily moved from on-premises to the cloud: pretty much every modern backup software supports extension to the cloud (if yours doesn't, it may be a very good time to look for something better). When we back up our data to the public cloud we can actually get more out of it. We can use this cold data for business analytics or artificial intelligence, and it can also serve as disaster recovery. With proper design this can be far cheaper than building a disaster recovery site. In the end, flexibility is the most compelling reason for any organisation to consider leveraging the public cloud.

But if these benefits are so clear, why do so many organisations fail to realise them by not moving to the cloud?

Why do organisations resist moving to the cloud?

It's not about what the public cloud can do; it is more about what it doesn't do that tends to stop organisations from wholeheartedly embracing the cloud when it comes to their most valuable asset: data.

As we've worked through the different areas of building a modern data platform, our approach to data is about much more than just storage. It is about insight into data, protection, security, availability, and privacy, and these are things not normally associated with native cloud storage. Traditionally, native cloud storage is not built to handle these kinds of needs; it is built to be easily scalable and cheap. And since organisations have become so used to these requirements, they don't want to move their data to the cloud if it means losing all of those capabilities, or having to implement and learn a new set of tools to deliver them.

Of course, there is also the "data gravity" problem: we can't have our cloud-based data siloed away from the rest of our platform; it has to be part of it. We need to be able to move data into the cloud, but also ensure that we can move it back on-premises again, and even between cloud providers, while still retaining the key elements that enterprise organisations require: control and management.

So is there really a way to overcome these challenges and make the cloud a fundamental part of a modern data platform? Yes, there is.

Making the cloud part of the enterprise data platform

There are dozens and dozens of companies trying to solve this issue. Most of them start from the top without really looking at the real problem: data mobility. If you look at the AWS Marketplace's storage category you will see almost 300 different options, so the question is how anyone knows which one really gives an organisation the full potential of true hybrid cloud. The answer is: you really can't without deep knowledge. I will not point at any single vendor, but quite a few claim they can give you data mobility and let you leverage your data to its full potential, while only a few of them can really do it.

There are two things that make this very hard.

The first is data movement between on-premises and the cloud. It's pretty easy to copy data from point A to point B, but how do you make it cost efficient and fast? Moving huge amounts of data takes time even over very fast internet connections, so built-in capabilities for moving only the needed blocks can make a significant difference, not only in migration and movement times but also in cost: pretty much all cloud vendors charge for egress traffic, so when it is time to move data back on-premises or to another cloud vendor, this can mean a huge difference in costs.

The second is the ability to use the migrated or moved data for several purposes. Using the cloud as a backup target is quite inefficient if you cannot use the same data as a source for DR, analytics, AI or test & dev. Cloud storage doesn't cost that much, but if you can use it efficiently for more than one use case, you will reduce the total cost considerably.

Both of these are the foundation of enterprise capabilities. And while adding enterprise capabilities is great, the idea of a modern data platform relies on having our data in the location we need it, when we need it, while maintaining management and control. This is where the use of efficient technology provides a real advantage. You can achieve this in many ways, one example being NetApp's ONTAP storage system as a consistent endpoint, allowing organisations to use the same tools, policies and procedures at the core of the data platform and extend them to the organisation's data in the public cloud. This is possible when the vendor has a modern software-defined approach.

NetApp's integrated SnapMirror provides the data movement capability, so you can simply move data into, out of, and between clouds. Replicating data in this way means that while the on-premises version can be the authoritative copy, it doesn't have to be the only one. Replicating a copy of data to a location for a one-off task, and destroying it once the task is complete, is a powerful capability and an important element of simplifying the extension of an organisation's data platform into the cloud.
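As a rough sketch of what setting up such a relationship looks like from the ONTAP CLI (the SVM and volume names here are purely illustrative, and exact syntax can vary between ONTAP releases):

snapmirror create -source-path onprem_svm:projects -destination-path cloud_svm:projects_dr -type XDP -policy MirrorAllSnapshots -schedule daily
snapmirror initialize -destination-path cloud_svm:projects_dr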

So does the technology matter?

The short answer is no. You don't need to use technology vendor X to deliver a true hybrid cloud service. You do not need to use NetApp; I have used it as an example because it has nice cloud integration features built in, and because of that it can deliver a modern data platform easily by providing consistent data services across multiple locations (on-premises and cloud) while still maintaining all critical enterprise controls. Of course, this means you need to have NetApp both on-premises and in the cloud.

When you evaluate vendor Y for your next-generation datacenter, it is critical to think about how you can build your enterprise data platform so that you have the option to expand your business to the cloud. While there are other data service providers with somewhat similar offerings, I think NetApp's story and capabilities are in line with the requirements for a modern data platform. There are other solutions that can be used to achieve something similar, and even go a bit further, and I will cover one of them in my next post.

In the end, the most important thing in your design strategy, if it is to include the public cloud, is to ensure that you have appropriate access to data services, integration, control and data management. It is crucial that you don't put your organisation's most valuable asset, data, at risk or diminish the capabilities of your data platform by using the cloud. The cloud will play a huge role in future data platforms, so make sure you have an easy option to move workloads to the cloud, and back.

Primary and Secondary Storage aka Tiering in 2017


History of Hierarchical Storage Management

Having multiple storage tiers is not a new thing; the history of hierarchical storage management (HSM) actually goes back quite far. It was first implemented by IBM on their mainframe computer platforms to reduce the cost of data storage and to simplify getting data back from slower media. The idea was that the user would not need to know where the data was actually stored or how to retrieve it; with HSM the computer would retrieve the requested data automatically.

Historically, HSM was somewhat buried when the world went from the 1st platform to the 2nd platform (client-server, the PC era). Quite soon many organizations realised they still had application needs for centralized storage platforms, and so storage area networks (SAN) were pretty much born. After server virtualization exploded the need for high-performance storage, organizations realized that, once again, it was too expensive to run all application data on one high-performance storage tier, or it was just too difficult to move data between isolated storage tiers. Tiering, or HSM, was born again.

Many storage vendors implemented some kind of tiering system. Some implemented systems that monitor the actual hot blocks and then migrate those blocks between a slower and a faster tier, or across three tiers (SSD, SAS and SATA). Comparing these systems, the only real differences were typically the size of the block moved and the frequency of movement. This was the approach of IBM, EMC and HDS (and many others), just to name a few. There was no big problem with this approach, since it solved many problems, but in many cases it simply reacted too slowly to performance needs. With proper design it works very well.

Other vendors implemented tiering based on caching. Every storage system has a cache (read and write), but these vendors' approach to tiering was to add high-performance disks (SSD) to extend the size of the cache. This reacts very fast to changes and typically doesn't need any tuning. However, this approach doesn't allow you to pin application data to a selected tier, so proper design is critical.

All flash storage changed the game


In the late 2000s, all-flash storage systems moved very high-performance applications from tiered storage to pure flash platforms. The price of flash was very high, and typically you had one all-flash system per application. A few years later, all-flash systems actually replaced spinning disks and tiered storage in most organizations, as flash pricing became affordable and efficiency technologies (deduplication and compression) meant you could put more data on the same disk capacity, which dropped the per-gigabyte price quite close to that of high-performance spinning disks (10/15k SAS drives).

Suddenly you didn't need any tiering, since all-flash systems gave you enough performance to run all your applications. But this introduced isolated silos, and moving from the 2nd platform to the 3rd platform means very dramatic growth in data volumes.

Living in the world of constant data growth

Managing unstructured data continues to be a challenge for most organizations. When Enterprise Strategy Group surveys IT managers about their biggest overall storage challenges, growth and management of unstructured data comes out at or near the top of the list most of the time.

And that challenge isn’t going away. Data growth is accelerating, driven by a number of factors:

The Internet of Things

We now have to deal with sensor data generated by everything. Farmers are putting health sensors on livestock so they can detect issues early on, get treatment and stop illness from spreading. They’re putting sensors in their fields to understand how much fertilizer or water to use, and where. Everything from your refrigerator to your thermostat will be generating actionable data in the not too distant future.


Bigger, much richer files

Those super-slow motion videos we enjoy during sporting events are shot at 1,000 frames per second with 2 MB frames. That means 2 GB of capacity is required for every second of super-slow motion video captured. And it’s not all about media and entertainment; think about industry-specific use cases leveraging some type of imaging, such as healthcare, insurance, construction, gaming and anyone using video surveillance.

More data capture devices

More people are generating more data than ever before. The original Samsung Galaxy S smartphone had a 5-megapixel camera, so each image consumed 1.5 MB of space compressed (JPEG) or 15 MB raw. The latest Samsung smartphone takes 16-megapixel images, consuming 4.8 MB compressed / 48 MB raw, roughly a threefold increase in only a few years.

The Enterprise Data Lake is the 2017 version of tiered storage

Tiered storage, as it used to be implemented, has been reborn as the next generation of an old idea. In the modern world, tiering solves problems related to massive data growth. More and more production data is going to all-flash arrays, but since only 10-30% of that data is really hot, organizations must implement some kind of secondary storage strategy to be able to move cold data from still-expensive primary storage to much cheaper secondary storage.

Secondary storage today is object-based storage, to respond to the fast pace of data growth and the data locality problems of IoT. Organizations will use this same object storage platform for their Internet of Things needs, and perhaps also as the place to hold backups of their production application data.

 

How to back up object storage?


Background

The traditional file server may have outlived its usefulness for an increasing number of organizations in recent years. These file servers were designed in an era when all employees typically sat in one location, remote workers and road warriors were quite rare, and the only files they created were office documents (Word, Excel, PowerPoint, etc.). Times have changed: it is now the new normal for organizations to have employees all over the globe, and IoT (Internet of Things) has changed the landscape dramatically. IoT devices generate far more data than normal users ever will, and this data must be kept in a safe location, since data is now the main asset of many organizations. Normal file servers are no longer enough to meet the requirements of the modern data center.

So we have object storage to solve these issues, but first you need to understand what object storage is and how it differs from traditional file servers.

So what is object storage?

For years, data was typically stored in a POSIX file system (or databases, but let's focus on file services here), where data is organized in volumes, folders/directories and sub-folders/sub-directories. Data written to a file system consists of two pieces: the data itself, and metadata, which in POSIX file systems is typically simple and includes only information such as creation date, change date, and so on. Object storage, however, works a bit differently.

Object storage is also used to store data, but unlike POSIX file systems, an object storage system gives each object a unique ID rather than a name, managed in a flat index instead of folders. In POSIX file systems, applications access files based on directory and file names; in object storage they access files (objects) by providing a unique object ID to fetch or rewrite the information. Because of this flat architecture, object storage provides much greater scalability and faster access to a much higher quantity of files than most traditional POSIX-based file servers. The flat architecture also enables much richer metadata, since most object storage systems allow a much broader set of information to be stored per object than a traditional POSIX file system. You can store all the same information (creation date, change date, etc.) but also add information such as expiration dates, protection requirements, and hints for the application about what type of file it is and what kind of information it contains.
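As a purely illustrative example (formats differ from product to product), the metadata kept for a single object could look something like this:

{
    "objectId": "6f1c9e2a-8c4d-4f1b-9a3e-2b7d5c8e1f00",
    "size": 2048576,
    "created": "2017-02-01T10:15:00Z",
    "contentType": "image/jpeg",
    "expires": "2022-02-01T00:00:00Z",
    "protection": "two-site-replication",
    "customTags": { "device": "sensor-0042", "application": "surveillance" }
}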

So in general, object storage is designed to handle massive amounts of data; it can be data stored in large objects such as video, images or audio, or billions of very small objects containing IoT device information such as sensor data. A typical object store can scale very large, and this raises a question: how can we back up the data stored in object storage?

Why is object storage not that easy to back up?


Traditionally, object storage solutions are considered when you have massive amounts of data, such as petabytes and/or billions of objects (files). This kind of platform challenges your traditional enterprise backup solution. Think about it for a while: if you need to back up all changed objects, how long will it take for the enterprise backup server to work out which objects have changed, fetch them from storage, and put them on disk and/or tape? In large-scale environments this slowly becomes impossible. You need to implement a different kind of solution.

From backups to data protection

The first step here is to shift our thinking from backups to data protection. Backup is just one method of protecting your data, isn't it? Once we understand this, we can start thinking about how to protect our data in object storage environments. There is always the traditional way to protect it, but let's think about how object storage environments are protected in real life.

Object storage systems tend to have versioning built in. This is simply a method of saving the old version of an object in case of a change or delete. So even when a user deletes objects, they are not actually removed from the system; they are just marked as deleted. When we combine this with multi-site solutions, we actually have a built-in data protection solution capable of protecting your valuable data. However, we also need smart information lifecycle management (ILM) in our object storage to actually remove deleted objects when our ILM rules say so; for example, after "deleting" an object (marking it deleted) we might keep the last version for 5 years and only then remove it from the system.
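As a sketch only, in an S3-compatible object store such an ILM rule can be expressed as a lifecycle configuration along these lines (exact syntax varies per product); here non-current versions are kept for roughly five years, 1825 days, before being purged:

{
    "Rules": [
        {
            "ID": "purge-old-versions-after-5-years",
            "Status": "Enabled",
            "Filter": {},
            "NoncurrentVersionExpiration": { "NoncurrentDays": 1825 }
        }
    ]
}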

This kind of approach has one limitation you must keep in mind when it comes to data durability. What typical data protection systems do not protect against is bit rot, the magnetic degradation of media over time. Magnetic storage degrades over time, and it is not a matter of if the data will be corrupted but when. The good news is that an enterprise-ready object storage solution should have methods to monitor and repair the effects of magnetic degradation, using a unique identifier for each object that is the result of some type of algorithm (e.g. SHA-256) being run against the contents of the object. By re-running the algorithm against an object and comparing the result with its unique identifier, the software makes sure that no bit rot has corrupted the file.
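Just to illustrate the principle with PowerShell on a single file (real object stores do this internally, continuously and at scale):

# Illustration only: compare a freshly computed SHA-256 against the stored one.
$stored  = '<hash recorded when the object was written>'
$current = (Get-FileHash -Algorithm SHA256 .\object.bin).Hash
if ($current -ne $stored) { Write-Warning 'Bit rot detected - repair object from a healthy replica' }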

There is of course still one issue that must be solved in a different way: the rogue admin. There is always the possibility that the storage admin of the object storage solution simply deletes the whole system. But let's look at that in a future post coming later this spring!

 

What makes software-defined storage software-defined


Software-defined storage is something that every storage vendor on the planet talks about today. If we stop and think about it for a bit, however, we realise that software-defined is just yet another buzzword. This is a 101 on SDS, and we are not going to go deep into every area; I will write a more in-depth article per characteristic later. So the question is: what makes software-defined storage software-defined?

A bit of background research

To really understand what software-defined actually means, we must first do a bit of background research. If we take any modern storage system, and by modern I mean something released in the past 10 years, we see a product consisting of some kind of hardware component, a number of hard disk drives or SSDs, and, yes, a software component. So by that definition, every storage platform released in recent years is actually software-defined storage, since there is software handling all of its nice features.

Well, what the hell is software-defined storage then?


Software-defined actually doesn't mean just anything where software defines the features, but rather:

Software-defined storage (SDS) is an evolving concept for computer data storage software to manage policy-based provisioning and management of data storage independent of the underlying hardware. Software-defined storage definitions typically include a form of storage virtualization to separate the storage hardware from the software that manages the storage infrastructure. The software enabling a software-defined storage environment may also provide policy management for feature options such as deduplication, replication, thin provisioning, snapshots and backup. SDS definitions are sometimes compared with those of Software-based Storage.

The above is a quote from Wikipedia defining what SDS actually means. So let's go deeper and look at the characteristics you would expect from an SDS product. Note that there are products marketed as SDS that don't have all of these. It's not necessary to have them all, but the more you have the better it is, or is it?

Automation with policy-driven storage provisioning and service-level agreements

Automation of storage systems is not that new a feature, but it is one that is genuinely useful, and it is also required if you plan to build any kind of modern cloud architecture on top of storage systems. Traditional storage admin tasks like creating and mapping a LUN cannot be done by humans in a modern cloud environment; they must be done by automation. What this means in practice is that, in a VMware environment, a VM admin is able to create new storage based on the required service level directly from his or her VMware tools. Most modern storage products have at least some kind of plugin for VMware vCenter, but not all have the same features. VMware released vVols with vSphere 6, and I think it will become the dominant model for storage deployments in the future. It helps VMware admins create storage with the needed features (deduplication, compression, protection, replication, etc.) per VM. This is not a brand new concept either, since OpenStack has been using this kind of model for quite a long time, and there are even storage vendors that have built their business on it (Tintri).

Policy-based provisioning and automation are, in my opinion, the most important features of software-defined storage, and you should look at them carefully if achieving a cloud-like environment is your short- or long-term plan.

Abstraction of logical storage services and capabilities from the underlying physical storage systems

Logical storage services and capabilities and their abstraction are really not a new thing. Most modern storage architectures have been using this kind of abstraction for several years. What this means, for example, is virtualisation of the underlying storage (one or more RAID groups in a generic storage pool) to reclaim unused capacity and provide flexibility. I can't even remember a modern storage product that doesn't virtualise the underlying storage in some way; this has been a really basic feature for years and has little to do with modern software-defined thinking.

What makes this a modern software-defined storage capability are features like VMware vVols. They give you more granular control per VM over what kind of storage it needs and which features are required. Not all VMs, for example, need to be replicated synchronously to a second location, and having a separate datastore for replicated VMs is just not enough for most companies, and it's really not enough in modern cloud-like architectures. You must have per-VM control, and all the magic must happen without the VM admin having to know which datastore is replicated and which is not. This is a fairly new feature in VMware but has been available to OpenStack users for a while, mainly because OpenStack was built for modern cloud environments, whereas VMware was mainly built for datacenter virtualisation and implemented cloud-like features later on.

Commodity hardware with storage logic abstracted into a software layer

Traditionally, most storage vendors had their own hardware, with some of them even designing their own ASICs, or at least some form of engineered hardware combined with commodity hard disk drives or SSDs running custom firmware. The legacy vendors, as they are nowadays called, did indeed invest a lot of money in developing engineered hardware to meet their special needs, and this typically meant longer release cycles and a different go-to-market strategy, since introducing a new feature might mean developing a new ASIC and new hardware to support it.

Some of the startup vendors claim that using engineered hardware is expensive and means customers pay too much. This might be true, but there is a long list of advantages to using engineered hardware instead of commodity hardware, and frankly, as a customer you shouldn't care that much about this area. Commodity hardware can be as good as engineered hardware, and engineered hardware can be as cheap as commodity hardware.

If the vendor you are looking at uses commodity hardware, they must do many things in their software layer that could otherwise be done in the hardware layer with microcode. Whichever route they go, however, the storage logic is, and has been, abstracted into a software layer for years. All modern storage products use software for most of the logic, but some use ASICs for certain clever things; for example, HP uses an ASIC for deduplication while most competitors do it in software.

Scale-Out storage architecture

Scale-out storage architecture is not that new either, since the first commercial scale-out storage products came to market over 10 years ago. However, it is still not that common for a commercial storage vendor to have a good scale-out product in their portfolio that solves most customer needs; typically it is used for just one purpose (like scale-out NFS).

You can think of scale-out in two forms.

Isolation Domain Scale-Out

The traditional method of deploying storage controllers is to use HA pairs, where two controllers share the same backend disk storage and, in case of failover, one can handle all the traffic. This by itself is not scale-out, but you can build a scale-out type of architecture by relying on HA pairs and connecting them together. What this allows is moving data between HA pairs and doing all kinds of maintenance and hardware refresh tasks without taking your storage down. This method does, however, place several limitations on your scalability: when you add new capacity to the cluster, you must select one isolation domain / HA pair to add it to, rather than just adding capacity to a single pool. On the other hand, it handles failures much better, because there are several isolation domains in case of faults.

This method, however, typically means the vendor uses engineered hardware and the like. Think of this as the method NetApp Clustered Data ONTAP uses.

Shared Nothing Scale-Out

Shared nothing is a scale-out method where you have N storage controllers that do not share their backend storage at all. This method is more common with vendors and products built on commodity hardware, since it can solve most of the problems commodity hardware has, and in most cases it is a much more understandable way to scale than the typical isolation domain type of scaling (HA pairs, etc.). All of the virtual SAN products on the market rely on this type of scaling, since it doesn't need any specific hardware components to achieve scalability and high availability. This is also the type of scaling all hyper-converged infrastructure (HCI) vendors use, claiming that their method is the same as that used in Google, Facebook and Amazon scale environments, even though that's not 100% true.

However, this is by far the best method to use if you design your architecture from scratch for commodity hardware. It also handles rebuild scenarios much better, because the performance hit is not that big, and self-healing is possible with this kind of approach.

Think of this as the method Nutanix, SimpliVity, SolidFire, Isilon, etc. use.

So is one better than the other?

No. Both have good and bad sides. This is a design choice you make, and you can build a reliable and scalable system with either method. When architected correctly, either of them can give you the performance, reliability and scalability needed.

Conclusion. Why would I care?

Understanding what software-defined storage (SDS) means helps you understand the competitive landscape better and helps you avoid the typical traps of FUD from competitors. Any modern storage system, NAS or SAN, is software-defined, but not all have the same features or the same kind of approach. In any case, all of them solve most of the problems modern infrastructure faces nowadays, and most of them can help your organisation store more data than 10 years ago and get enough performance to meet the requirements of most of your applications. When selecting a storage vendor, do it based on your organisational needs rather than marketing jargon.

Note: I work for a "legacy" storage vendor, but this post has nothing to do with my job. I say the same things to every customer I talk to. This is my personal view and has nothing to do with my employer.

4 years passed

It's been a while since the last post on my blog. During these 4 years I became a dad, changed my job and did lots of other smaller things. However, one thing hasn't changed: I still love to design and architect high availability systems. I now promise to make a comeback.

I will publish new articles at least a couple of times each month, and I will try to write more often. The blog will still revolve mainly around high availability systems and how to work with them. I will also write about interesting technologies, and even about my employer's competitors. This blog remains my personal view and has nothing to do with my job, so expect critical posts about my current employer's products as well.

Let's keep systems up and running with high availability!

Setting up an open virtualization system (oVirt)

In this post I'll demonstrate how easy it is to set up an open virtualization system with a hypervisor and management. oVirt is based on Red Hat's Red Hat Enterprise Virtualization Manager (RHEV-M) and Red Hat Enterprise Virtualization Hypervisor (RHEV-H).

So let's start by installing some prerequisite packages:

[root@ovirt-manager ~]# yum install -y wget postgresql-server postgresql-contrib pgadmin3 java-1.6.0-openjdk-devel

Next we'll add a new repository for oVirt:

[root@ovirt-manager ~]# wget http://www.ovirt.org/releases/nightly/fedora/16/ovirt-engine.repo -P /etc/yum.repos.d/

The final step is to install the actual oVirt engine packages:

[root@ovirt-manager ~]# yum install -y ovirt-engine ovirt-engine-setup

Now we have all the needed packages installed and we can configure the manager.

[root@ovirt-manager ~]# engine-setup
Welcome to oVirt Engine setup utility
HTTP Port  [8080] :
HTTPS Port  [8443] :
Host fully qualified domain name, note that this name should be fully resolvable  [ovirt-manager.demo.local] :
ovirt-manager.demo.local did not resolve into an IP address
User input failed validation, do you still wish to use it? (yes|no): yes
Password for Administrator (admin@internal) :
Warning: Weak Password.
Confirm password :
Database password (required for secure authentication with the locally created database) :
Warning: Weak Password.
Confirm password :
Organization Name for the Certificate: Demolab
The default storage type you will be using  ['NFS'| 'FC'| 'ISCSI']  [NFS] :
Should the installer configure NFS share on this server to be used as an ISO Domain? ['yes'| 'no']  [yes] : yes
Mount point path: /install
Display name for the ISO Domain: install
Firewall ports need to be opened.
You can let the installer configure iptables automatically overriding the current configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example iptables file found under /usr/share/ovirt-engine/conf/iptables.example
Configure iptables ? ['yes'| 'no']: yes

oVirt Engine will be installed using the following configuration:
=================================================================
http-port:                     8080
https-port:                    8443
host-fqdn:                     ovirt-manager.demo.local
auth-pass:                     ********
db-pass:                       ********
org-name:                      Demolab
default-dc-type:               NFS
nfs-mp:                        /install
iso-domain-name:               install
override-iptables:             yes
Proceed with the configuration listed above? (yes|no): yes

After this, the setup might take a while, but in a few minutes you should get output like the one below:

Installing:
Configuring oVirt-engine...                              [ DONE ]
Creating CA...                                           [ DONE ]
Setting Database Security...                             [ DONE ]
Creating Database...                                     [ DONE ]
Updating the Default Data Center Storage Type...         [ DONE ]
Editing JBoss Configuration...                           [ DONE ]
Editing oVirt Engine Configuration...                    [ DONE ]
Configuring the Default ISO Domain...                    [ DONE ]
Configuring Firewall (iptables)...                       [ DONE ]
Starting JBoss Service...                                [ DONE ]

 **** Installation completed successfully ******

     (Please allow oVirt Engine a few moments to start up.....)

Additional information:
 * There is less than 4 GB available free memory on the Host.
It is  recommended to have at least 4 GB available memory to run the RHEV Manager.
 * Keystore already exists, skipped certificates creation phase
 * A default ISO share has been created on this host.
   If IP based access restrictions are required, please edit /install entry in /etc/exports
 * The firewall has been updated, the old iptables configuration file was saved to /usr/share/ovirt-engine/conf/iptables.backup.074609-01032012_1691
 * The installation log file is available at: /var/log/engine/engine-setup_2012_01_03_07_44_46.log
 * Please use the user "admin" and password specified in order to login into oVirt Engine
 * To configure additional users, first configure authentication domains using the 'engine-manage-domains' utility
 * To access oVirt Engine please go to the following URL: http://ovirt-manager.demo.local:8080

If you get a database creation error, please check the database installation log. If there are lines saying "Peer authentication failed for user "postgres"", change the authentication method in pg_hba.conf to trust, restart your postgresql service and run the installer again.
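For example, the relevant pg_hba.conf entries would look roughly like this after the change (the file location and exact lines may differ on your system):

# /var/lib/pgsql/data/pg_hba.conf -- switch the local and host entries to 'trust'
# TYPE  DATABASE    USER    ADDRESS         METHOD
local   all         all                     trust
host    all         all     127.0.0.1/32    trust

[root@ovirt-manager ~]# service postgresql restart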

The next step is to install ovirt-node (the hypervisor). It's really simple and straightforward: just get the latest ISO from http://www.ovirt.org/releases/nightly/binary/, boot your hypervisor machine with it, install to the local disk and do the basic configuration. This shouldn't take long; there are only a few things to do. Select the disk you are installing to, type the root password and go.

 

The next thing to do is install more hypervisors and connect them to ovirt-engine. I'll write another post about this with basic configuration examples. Try oVirt today; it's a really competitive alternative to VMware / Citrix and it's totally open source 🙂

Implementing IPA server in a Windows/Linux environment

Red Hat included IPA server (FreeIPA) with the RHEL 6.1 release. FreeIPA is an integrated security information management solution, like Microsoft Active Directory, combining 389 (LDAP server), MIT Kerberos, NTP and DNS. It consists of a web interface and command-line administration tools, so management can be done either with a web browser or from the command line. It's quite easy to implement FreeIPA in a Windows/Linux environment, and I'll show here how you can install, configure and use IPA without any deeper knowledge of Linux.

First step: Install

In my example I have already installed RHEL 6.2, but you can also use CentOS or Fedora. I installed RHEL with the minimal installation, upgraded all packages and then installed all the packages for FreeIPA:

[root@x1 ~]# yum install ipa-* bind bind-chroot bind-dyndb-ldap

This will install all the FreeIPA packages and bind (name server) with the chroot option. It will take a while; in my demo lab it installed 262 packages in total. So run the install and relax with a cup of coffee while waiting.

After the installation completes, we need to check that the IPA server's hostname is set correctly:

[root@x1 ~]# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.16.16    x1.demo.konehuone.fi x1
[root@x1 ~]# hostname
x1.demo.konehuone.fi

Check that there is a line with your server's FQDN, short name and correct IP address. If the name or IP is not correct, you need to fix it before the next step.

Second step: Configure FreeIPA

Now it is time to install and configure FreeIPA. This is really simple:

[root@x1 ~]# ipa-server-install --setup-dns

The installer will ask for the server host name, domain name and Kerberos realm name. Accept the default settings if you do not want to change them; you might want to change the Kerberos realm to a short name (like DEMO in my example). You also need to enter a password for the LDAP Directory Manager account and the FreeIPA admin user account; as a best practice, use different passwords for these accounts.

By default, the installer will also set up a DNS forwarder and reverse zones for you. It will ask for the IP of your forwarder and will automatically create a reverse zone for that network. After this, the installer will configure all the necessary services (this might take a while as well).

The last phase here is to check that the necessary firewall ports are open; below is an example from my iptables configuration:

[root@x1 ~]# cat /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 389 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 636 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 88 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 464 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 88 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 464 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 123 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
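For reference, here is a sketch of the combined form using the iptables multiport match (assuming the multiport extension is available):

-A INPUT -m state --state NEW -p tcp -m multiport --dports 80,443,389,636,88,464,53 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m multiport --dports 88,464,53,123 -j ACCEPT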

As sketched above, you could combine several ports into one line, but I like to have one line per port. After the configuration you can test that everything is working just as it should:

[root@x1 etc]# kinit admin
Password for admin@DEMO.KONEHUONE.FI:
[root@x1 etc]# ipa user-find admin
--------------
1 user matched
--------------
  User login: admin
  Last name: Administrator
  Home directory: /home/admin
  Login shell: /bin/bash
  UID: 805800000
  GID: 805800000
  Account disabled: False
  Keytab: True
  Password: True
----------------------------
Number of entries returned 1
----------------------------

If you can find the admin user, your FreeIPA server is now configured and running fine.

Third step: Configure your Windows/Linux machines to authenticate with IPA

Adding Windows clients to IPA involves a few more tasks than Linux clients, so we'll start with the Windows one.

Before we add any clients to IPA, we need to create a user account. In my example I will use the CLI, but you could also do it from the web UI:

[root@x1 log]# ipa user-add
First name: Ipa
Last name: Test
User login [itest]: ipatest
--------------------
Added user "ipatest"
--------------------
  User login: ipatest
  First name: Ipa
  Last name: Test
  Full name: Ipa Test
  Display name: Ipa Test
  Initials: IT
  Home directory: /home/ipatest
  GECOS field: Ipa Test
  Login shell: /bin/sh
  Kerberos principal: ipatest@DEMO.KONEHUONE.FI
  UID: 805800004
  GID: 805800004
  Keytab: False
  Password: False

Next we need to reset the ipatest user's password:

[root@x1 log]# ipa passwd ipatest
New Password:
Enter New Password again to verify:
------------------------------------------------
Changed password for "ipatest@DEMO.KONEHUONE.FI"
------------------------------------------------

Next we also need to create an account for the client machine. In this example we will first add a DNS record for the machine called winxp, then add a host entry to IPA and give the host an initial password that is used when connecting it to IPA:

[root@x1 log]# ipa dnsrecord-add
Zone name: demo.konehuone.fi
Record name: winxp
[A record]: 172.16.16.17
[AAAA record]:
  Record name: winxp
  A record: 172.16.16.17
[root@x1 log]# ipa host-add
Host name: winxp.demo.konehuone.fi
------------------------------------
Added host "winxp.demo.konehuone.fi"
------------------------------------
  Host name: winxp.demo.konehuone.fi
  Principal name: host/winxp.demo.konehuone.fi@DEMO.KONEHUONE.FI
  Keytab: False
  Password: False
  Managed by: winxp.demo.konehuone.fi
[root@x1 log]# ipa-getkeytab -s x1.demo.konehuone.fi -p host/winxp.demo.konehuone.fi
-e arcfour-hmac -k krb5.keytab.winxp.demo.konehuone.fi -P
New Principal Password:
Verify Principal Password:
Keytab successfully retrieved and stored in: krb5.keytab.winxp.demo.konehuone.fi

After this we configure the host to use the IPA server. Currently there is one catch with Windows authentication: you need to have a local user that the Kerberos users are mapped to. However, the local user can be locked, so you cannot log on with it directly, only via Kerberos authentication. In my example I'll use an XP machine, and because of that we need to download the additional support tools and install the whole package to get the ksetup tool.

In my example I didn't create any local users; I just mapped everything to the guest user:

ksetup /mapuser * guest
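For reference only, and not taken verbatim from this setup, the full client-side Kerberos configuration with ksetup typically looks roughly like the following; the realm and KDC names come from this example, and the machine password should match the one given to ipa-getkeytab -P:

REM Illustrative sequence only -- adjust realm, KDC and password to your environment.
ksetup /setrealm DEMO.KONEHUONE.FI
ksetup /addkdc DEMO.KONEHUONE.FI x1.demo.konehuone.fi
ksetup /setcomputerpassword <password-given-to-ipa-getkeytab>
ksetup /mapuser * guest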

The final task is to reboot the machine, after which you should be able to log on using the Kerberos account, in my example ipatest@DEMO.KONEHUONE.FI.

Final words

You can do much more with IPA, and I will write another post about how you can extend its usage. I think FreeIPA is a really good option for any company wanting to avoid using Microsoft Active Directory.