Primary and Secondary Storage aka Tiering in 2017


History of Hierarchical Storage Management

Having multiple storage tiers is not a new thing. The history of hierarchical storage management (HSM) actually goes back quite far: it was first implemented by IBM on their mainframe platforms to reduce the cost of data storage and to simplify retrieving data from slower media. The idea was that the user would not need to know where the data was actually stored or how to get it back; with HSM the computer would retrieve the requested data automatically.

Historically, HSM was somewhat buried when the world went from the 1st platform to the 2nd platform (client-server, the PC era). Quite soon many organizations realised they still had application needs for centralized storage platforms, and so the storage area network (SAN) was pretty much born. After server virtualization exploded the need for high-performance storage, organizations realized that, again, it was too expensive to run all application data on one high-performance storage tier, or it was just too difficult to move data between isolated storage tiers. Tiering, or HSM, was born again.

Many storage vendors implemented some kind of tiering system. Some implemented systems that monitored hot blocks and then migrated those blocks between a slower and a faster tier, or across three tiers (SSD, SAS and SATA). Comparing these systems, the only real differences were typically the size of the block moved and the frequency of moving blocks. This was the approach of IBM, EMC and HDS, to name just a few among many others. There was no big problem with this approach since it solved many problems, but in many cases it simply reacted too slowly to performance needs. With proper design it works very well.

Other vendors implemented tiering based on caching. Every storage system has a cache (read and write), but these vendors' approach to tiering was to add high-performance disks (SSD) to extend the size of the cache. This reacts very fast to changes and typically doesn't need any tuning. However, this approach doesn't allow you to pin application data to a selected tier, so proper design is critical.

All-flash storage changed the game


In the late 2000s, all-flash storage systems moved very high-performance applications from tiered storage to pure flash platforms. The price of flash was very high, and typically you had one all-flash system per application. A few years later, all-flash systems had replaced spinning disks and tiered storage in most organizations, since flash pricing became affordable and efficiency technologies (deduplication and compression) meant you could put more data on the same disk capacity, which dropped the price per gigabyte quite close to that of high-performance spinning disks (10/15k SAS drives).

Suddenly you didn't need any tiering, since all-flash systems gave you enough performance to run all your applications. However, this introduced isolated silos, and moving from the 2nd platform to the 3rd platform means very dramatic growth in data volumes.

Living in the world of constant data growth

Managing unstructured data continues to be a challenge for most organizations. When Enterprise Strategy Group surveys IT managers about their biggest overall storage challenges, growth and management of unstructured data comes out at or near the top of the list most of the time.

And that challenge isn’t going away. Data growth is accelerating, driven by a number of factors:

The Internet of Things

We now have to deal with sensor data generated by everything. Farmers are putting health sensors on livestock so they can detect issues early on, get treatment and stop illness from spreading. They’re putting sensors in their fields to understand how much fertilizer or water to use, and where. Everything from your refrigerator to your thermostat will be generating actionable data in the not too distant future.


Bigger, much richer files

Those super-slow motion videos we enjoy during sporting events are shot at 1,000 frames per second with 2 MB frames. That means 2 GB of capacity is required for every second of super-slow motion video captured. And it’s not all about media and entertainment; think about industry-specific use cases leveraging some type of imaging, such as healthcare, insurance, construction, gaming and anyone using video surveillance.

More data capture devices

More people are generating more data than ever before. The original Samsung Galaxy S smartphone had a 5-megapixel camera, so each image consumed 1.5 MB of space compressed (JPEG) or 15 MB raw. The latest Samsung smartphone takes 16-megapixel images, consuming 4.8 MB compressed / 48 MB raw, which is roughly a threefold increase in only a few years.

The Enterprise Data Lake is the 2017 version of tiered storage

Tiered storage, as it used to be implemented, has been born again as the next generation of an old idea. In the modern world, tiering solves problems related to massive data growth. More and more production data is going to all-flash arrays, but since only 10-30% of that data is really hot, organizations must implement some kind of secondary storage strategy to be able to move cold data from still-expensive primary storage to much cheaper secondary storage.

The secondary storage of today is object-based storage, to respond to the fast pace of data growth and the data locality problems of IoT. Organizations are going to use this same object storage platform for their Internet of Things needs, and perhaps also as a place to hold their production application data backups.

 

How to back up object storage?


Background

The traditional file server may have outlived its usefulness for an increasing number of organizations in recent years. These file servers were designed in an era when all employees typically sat in one location, remote workers and road warriors were quite rare, and even then the only files they created were office documents (Word, Excel, PowerPoint, etc.). Times have changed, however. It is the new normal that organizations have employees all over the globe, and IoT (Internet of Things) has changed the landscape very much: IoT devices generate far more data than normal users ever will, and this data must be kept in a safe location, since data is now the main asset of many organizations. Normal file servers are no longer enough to meet the requirements of the modern data center.

So we have object storage to solve these issues, but first you need to understand what object storage is and how it differs from traditional file servers.

So what is object storage?

For years data was typically stored in a POSIX file system (or in databases, but let's focus on file services here), where data is organized into volumes, folders/directories and sub-folders/sub-directories. Data written to a file system contains two actual pieces: the data itself and the metadata, which in POSIX file systems is typically simple and includes only information such as creation date, change date, etc. Object storage, however, works a bit differently.

Object storage is also used to store data, but unlike POSIX file systems, an object storage system gives each object a unique ID rather than a name, and that ID is managed in a flat index instead of folders. In POSIX file systems applications access files based on directory and file names, but in object storage they access files (or objects) by providing a unique object ID to fetch or rewrite information. Because of this flat architecture, object storage provides much greater scalability and faster access to a much higher quantity of files than most traditional POSIX-based file servers. The flat architecture also enables much richer metadata, since most object storage systems allow a much broader set of information to be stored per object than a traditional POSIX file system. You can store all the same information (creation date, change date, etc.) but also add additional information such as expiration dates, protection requirements, and hints for applications about what type of file it is and what kind of information it contains.
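As a minimal sketch of what this looks like in practice (assuming an S3-compatible object store and the AWS CLI; the bucket name, the UUID-style key used as identifier and the metadata keys are made-up examples, and the exact API varies by product), storing and fetching an object with custom metadata could look like this:

# aws s3api put-object --bucket sensor-archive \
    --key 2f7c1e9a-4b0d-4a11-9c3e-1d2f3a4b5c6d --body reading.json \
    --metadata retention=5y,source=field-sensor-042,content-class=telemetry
# aws s3api head-object --bucket sensor-archive \
    --key 2f7c1e9a-4b0d-4a11-9c3e-1d2f3a4b5c6d

The head-object call returns the user metadata together with the system metadata, and at no point is a directory tree involved.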

So, in general, object storage is designed to handle massive amounts of data; it can be data stored as large objects like video, images and audio, or billions of very small objects containing IoT device information such as sensor data. A typical object storage system can scale very large, and this raises a question: how can we back up the data stored in object storage?

Why is object storage not that easy to back up?


Traditionally, object storage solutions are considered when you have massive amounts of data: petabytes of capacity and/or billions of objects (files). This kind of platform challenges your traditional enterprise backup solution. Think about it for a while: if you need to back up all changed objects, how long will it take for the enterprise backup server to work out which objects have changed, then fetch them from storage and put them on disk and/or tape? With large-scale environments this slowly becomes simply impossible, and you need to implement another kind of solution.
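To put rough numbers on the scanning problem (the lookup rate here is purely an assumed figure): if the backup server can check changed objects at 10,000 metadata lookups per second, walking one billion objects takes about 100,000 seconds, or roughly 28 hours, before a single byte of changed data has even been copied.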

From backups to data protection

The first step here is to shift our mindset from backups to data protection. Backup is just one method of protecting your data, isn't it? When we understand this, we can start thinking about how to protect our data in object storage environments. There is always the traditional way to protect it, but let's look at how object storage environments are protected in real life.

Object storage systems tend to have versioning built in. This is simply a method of saving the old version of an object in case of a change or delete. So even when a user deletes objects, they are not actually removed from the system but just marked as deleted. When we combine this with multi-site solutions, we actually have a built-in data protection solution capable of protecting your valuable data. However, we also need smart information lifecycle management (ILM) in our object storage to actually remove deleted objects when our ILM rules say so; for example, after 'deleting' an object (marking it deleted) we would still keep the last version for 5 years and only then remove it from the system.
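As a rough sketch of such a rule on an S3-compatible object store (the bucket name and rule ID are examples, 1825 days stands in for the 5 years, and most object storage products have their own ILM syntax), enabling versioning and attaching a lifecycle policy could look like this:

# cat lifecycle.json
{
  "Rules": [
    {
      "ID": "keep-deleted-versions-5-years",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1825 }
    }
  ]
}
# aws s3api put-bucket-versioning --bucket sensor-archive \
    --versioning-configuration Status=Enabled
# aws s3api put-bucket-lifecycle-configuration --bucket sensor-archive \
    --lifecycle-configuration file://lifecycle.json

With versioning enabled a delete only adds a delete marker, and the lifecycle rule expires the old, non-current versions once they have been superseded or deleted for roughly five years.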

This kind of approach has one limitation you must keep in mind when it comes to data durability. What typical data protection systems do not protect against is bit rot, the gradual degradation of magnetic media over time. With magnetic storage it is not a question of whether the data will be corrupted, but when. The good news is that an enterprise-ready object storage solution should have methods to monitor and repair the effects of this degradation, using a unique fingerprint for each object produced by some type of algorithm (e.g. SHA-256) run against the contents of the object. By re-running the algorithm against an object and comparing the result to the stored fingerprint, the software makes sure that no bit rot has corrupted the file.
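The object storage does this internally, but the principle is the same as in this minimal shell sketch (the object and fingerprint file names are examples): compute and store the fingerprint at write time, then recompute and compare it later.

# sha256sum sensor-object-0001 > sensor-object-0001.sha256
# sha256sum -c sensor-object-0001.sha256
sensor-object-0001: OK

If the recomputed hash no longer matches, the object has silently changed and a good copy is rebuilt from a replica on another disk or site.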

There is of course still one issue which must be solved in a different way: the nasty admin. There is always the possibility that the storage admin of the object storage solution simply deletes the whole system. But let's leave that for a future post coming later this spring!

 

What makes a software-defined storage software-defined


Software-defined storage is something that every storage vendor on the planet talks about today. If we stop and think about it for a bit, however, we realise that software-defined is just yet another buzzword. This is a 101 on SDS, and we are not going to go deep into all areas; I will write a more in-depth article per characteristic later. So the question is: what makes software-defined storage software-defined?

A bit of background research

To really understand what software-defined actually means, we must first do a bit of background research. If we take any modern storage system, and by modern I mean something released in the past 10 years, we actually see a product that has some kind of hardware component, a number of hard disk drives or SSDs, and yes, a software component. So by that definition every storage platform released in the past years is software-defined storage, since there is software that handles all the nice things in them.

Well, what the hell is software-defined storage then?


Software-defined doesn't actually mean something where software defines the features, but rather:

Software-defined storage (SDS) is an evolving concept for computer data storage software to manage policy-based provisioning and management of data storage independent of the underlying hardware. Software-defined storage definitions typically include a form of storage virtualization to separate the storage hardware from the software that manages the storage infrastructure. The software enabling a software-defined storage environment may also provide policy management for feature options such as deduplication, replication, thin provisioning, snapshots and backup. SDS definitions are sometimes compared with those of Software-based Storage.

The above is a quote from Wikipedia defining what SDS actually means. So let's go deeper and look at the characteristics you would expect from an SDS product. Note that there are products marketed as SDS which don't have all of these. It's not necessary to have them all, but the more you have the better it is. Or is it?

Automated, policy-driven storage provisioning with service-level agreements

Automation of storage systems is not that new a feature, but it is a really useful one, and it is also a required feature if you plan to build any kind of modern cloud architecture on top of your storage systems. Traditional storage admin tasks like creating and mapping a LUN cannot be done by humans in a modern cloud environment; they must be done by automation. What this means in practice is that in a VMware environment a VM admin is able to create new storage based on the service level needed, directly from his or her VMware tools. Most modern storage products have at least some kind of plugin for VMware vCenter, but not all have the same features. VMware released vVols with vSphere 6, and I think it will become the dominant model for storage deployments in the future. It helps VMware admins create storage with the needed features (deduplication, compression, protection, replication, etc.) per VM. This is not that new a concept either, since OpenStack has been using this kind of model for quite a long time, and there are even storage vendors who have built their business on it (Tintri).
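As a hedged sketch of what policy-based provisioning looks like on the OpenStack side, an admin publishes volume types as service levels and the consumer just picks one; the type name, the backend property and the volume name below are made-up examples, and the exact property keys depend on the storage driver.

# openstack volume type create gold
# openstack volume type set --property volume_backend_name=allflash gold
# openstack volume create --type gold --size 100 app01-data

The consumer never sees a LUN, a pool or a controller; the policy behind the 'gold' label decides where the volume lands and which data services it gets.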

Policy-based provisioning and automation are, in my opinion, the most important features of software-defined storage, and you should look at them carefully if building a cloud-like environment is your short- or long-term plan.

Abstraction of logical storage services and capabilities from the underlying physical storage systems

Logical storage services and capabilities and their abstraction are really not a new thing. Most modern storage architectures have been using this kind of abstraction for several years. What this means is, for example, virtualisation of the underlying storage (one or more RAID groups in a generic storage pool) to reclaim unused capacity and give flexibility. I can't even remember a modern storage product that does not virtualise the underlying storage, since this has been a really basic feature for years and doesn't have that much to do with modern software-defined thinking.

What turns this into a modern software-defined storage capability are features like VMware vVols. They give you more granular, per-VM control over what kind of storage a VM needs and which features it requires. Not all VMs, for example, need to be replicated synchronously to a second location, and having a separate datastore for replicated VMs is just not enough for most companies; it's really not enough in modern cloud-like architectures. You must have per-VM control, and all the magic must happen without the VM admin having to know which datastore is replicated and which is not. This is a fairly new feature in VMware but has been available to OpenStack users for a while, mainly because OpenStack was built for modern cloud environments, whereas VMware was mainly built for data center virtualisation and implemented cloud-like features later on.

Commodity hardware with storage logic abstracted into a software layer

Traditionally most storage vendors had their own hardware, with some of them even designing their own ASICs, but at least some form of engineered hardware combined with commodity hard disk drives or SSDs running custom firmware. The legacy vendors, as they are nowadays called, did indeed invest lots of money in developing engineered hardware to meet their special needs, and this typically meant longer release cycles and a different go-to-market strategy, since introducing a new feature might mean developing a new ASIC and new hardware to support it.

Some of the startup vendors claim that using engineered hardware is expensive and means that customers pay too much. This might be true, but there is a long list of advantages to using engineered hardware instead of commodity hardware, and frankly you, as a customer, shouldn't care that much about this area. Commodity hardware can be as good as engineered hardware, and engineered hardware can be as cheap as commodity hardware.

If the vendor you are looking at uses commodity hardware, they must do many things in their software layer that could otherwise be done in the hardware layer with microcode. Whichever route they go, however, the storage logic is, and has for years been, abstracted into a software layer. All modern storage vendors' products use software to do most of the logic, but some of them use ASICs to do some clever things; for example, HP uses an ASIC to do deduplication while most of the competitors do it in software.

Scale-Out storage architecture

Scale-out storage architecture is not that new either, since the first commercial scale-out storage products came to market over 10 years ago. However, it is still not that common for a commercial storage vendor to have a good scale-out product in their portfolio that solves most customer needs; typically it is used for just one purpose (like scale-out NFS).

You can think of scale-out in two forms.

Isolation Domain Scale-Out

The traditional method of deploying storage controllers is to use HA pairs, where two controllers share the same backend disk system and, in case of failover, one can handle all the traffic. This method by itself is not scale-out, but you can build a scale-out type of storage architecture by relying on HA pairs and then connecting them together. This allows moving data between HA pairs and doing all kinds of maintenance and hardware refresh tasks without needing to take your storage down. However, this method puts several limitations on your scalability: when you add new capacity to the cluster, you must select the isolation domain / HA pair where that capacity goes, rather than just adding capacity to one pool. On the other hand, it handles failures better, because there are several isolation domains to contain faults.

This method typically means the vendor is using engineered hardware and so on. Think of this as the method NetApp Clustered Data ONTAP uses.

Shared Nothing Scale-Out

Shared nothing is a scale-out method where you have N storage controllers that do not share their backend storage at all. This method is more common with vendors and products built on commodity hardware, since it can solve most of the problems commodity hardware has, and in most cases it is a much more understandable way to scale than the typical isolation-domain type of scaling (HA pairs, etc.). All of the virtual SAN products on the market rely on this type of scaling, since it doesn't need any specific hardware components to achieve scalability and high availability. This is also the type of scaling all hyper-converged infrastructure (HCI) vendors use, claiming that their method is the same as the one used in Google, Facebook and Amazon scale environments, even though that's not 100% true.

However, this is by far the best method to use if you design your architecture from scratch and design it for commodity hardware. It is also much better at handling rebuild scenarios, because the performance hit is not that big, and self-healing is possible with this kind of approach.

Think of this as the method Nutanix, SimpliVity, SolidFire, Isilon, etc. use.

So is one better than the other?

No. Both have good and bad sides. This is a design choice you make, and you can build a reliable and scalable system with either method. When architected correctly, either of them can give you the performance, reliability and scalability you need.

Conclusion. Why would I care?

Understanding what software-defined storage (SDS) means helps you understand the competitive landscape better and helps you avoid the typical FUD traps set by competitors. Any modern storage system, either NAS or SAN, is software-defined, but not all have the same features or the same kind of approach. In any case, all of them solve most of the problems modern infrastructure has nowadays, and most of them can help your organisation store more data than 10 years ago and get enough speed to meet the requirements of most of your applications. When selecting a storage vendor, do it based on your organisational needs rather than marketing jargon.

Note: I work for a "legacy" storage vendor, but this has nothing to do with my job. I state the same principles to every customer I talk to. This is my personal view and has nothing to do with my employer.

VMware vCenter plugin for IBM’s XIV/SVC/Storwize v7000

IBM released version 2.5.1 of its Storage Management Console for VMware vCenter a while ago. Installation is quite simple, and after it, adding new vdisks to the VMware infrastructure is quite easy. The new version supports XIV, SVC and Storwize V7000 with the firmware versions listed below:

  • IBM XIV® Storage System: firmware 10.1.0 to 10.2.4x
  • IBM Storwize® V7000: firmware 6.1 and 6.2
  • IBM System Storage® SAN Volume Controller (SVC): firmware 5.1, 6.1 and 6.2

A direct download link for the software is here.

The installation is done on the vCenter Windows machine. Currently vSphere 5 is not supported, but hopefully it will be soon. The installer asks a few questions about user IDs and so on.

After installation you need to restart your vSphere Client and check that IBM's plugin is enabled.

Next you need to connect the plugin to your storage; in my example I will connect it to a Storwize V7000. Click "Add" to start the storage system adding wizard.

Select the brand of storage you are connecting. If you want to add multiple brands or multiple storage systems, you need to add them one by one.

Next the wizard asks for the connection settings of the storage. Enter the IP address or hostname of the storage and the username, and select the private key which you have associated with the selected user. If your key uses a passphrase, enter it as well.

Note that the keys need to be in OpenSSH format. If you created your keys with the PuTTY key generator, you need to convert the key: first import the key you have created and then export it in OpenSSH format.
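If you have the command-line version of puttygen available, the conversion is a single command (the key file names below are just examples); in the Windows PuTTYgen GUI the equivalent is Conversions > Export OpenSSH key.

# puttygen storage-admin.ppk -O private-openssh -o storage-admin.key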

The last step is to select the mdisk groups you want to access from the plugin. Note that the V7000 names mdisk groups by default in the format mdiskgrpX. You should change the names to ones that better describe the features of the group; I usually use the format disktype_speed_size_number (for example SAS_15k_600gb_1).

After this you are able to create and present new vdisks straight from vCenter, rather than creating and mapping new vdisks from the V7000/SVC/XIV GUI and then creating the VMFS afterwards. Creating new disks is quite easy: just select the correct mdisk group and click "New Volume". You then get a window like the one below where you fill in the necessary details, and after that you just select the hosts/clusters where you want to map the volume.

IBM added new disk choices for Storwize V7000

Recently IBM added two new disk choices for the Storwize V7000 in the 2.5″ form factor:

  • 300 GB 15k SAS
  • 1 TB 7.2k Nearline (NL) SAS

This brings the total number of 2.5″ disk choices for the Storwize V7000 to seven:

  • 300 GB SSD
  • 146 GB 15k SAS
  • 300 GB 15k SAS
  • 300 GB 10k SAS
  • 450 GB 10k SAS
  • 600 GB 10k SAS
  • 1 TB 7.2k Nearline SAS

and of course the 3.5″ 2 TB Nearline SAS disk is still available.

All the new disks should be available this week.

Original release notes can be found here.

Why does IBM's XIV matter?

IBM's XIV has raised lots of discussion in the market, mostly because IBM claims it is an enterprise-class storage system even though it runs on SATA disks, which are generally considered midrange technology.

There are plenty of cases where customers have evaluated IBM's XIV storage system and realized that it is an amazing product. Here are a couple of examples of why XIV really matters:

  • A service provider that had used EMC disk systems for over 10 years evaluated the IBM XIV versus upgrading to EMC V-Max. The three-year total cost of ownership (TCO) of EMC's V-Max was $7 million higher, so EMC counter-proposed a CLARiiON CX4 instead. The customer selected the XIV.
  • A large US bank holding company managed to get 5.3 GB/sec from a pair of XIV boxes for their analytics environment. That’s amazing performance from SATA disks!

I have seen IBM's XIV in a couple of customer environments, and it has really proven to be enterprise storage. IBM recently upgraded the XIV to its third generation. At the same time they announced that the XIV will get an option for SSD caching, which is claimed to take performance to the next level at a fraction of typical SSD storage costs. Will this really happen? I bet the feature arrives quite soon. They also made the internal cache larger, added faster disk controllers and changed the internal interconnect to InfiniBand.

IBM has proven that you can create a really well-performing storage system by looking at the problem from another point of view and using generic Intel x64 hardware and generic SATA disks. In fact, the XIV was not invented by IBM but by a company founded by Moshe Yanai (who led EMC's Symmetrix development).

Read more about XIV from IBM’s XIV page.

Could iSCSI provide enough performance?

Almost every day I face the question of whether iSCSI can provide enough performance. It is of course relevant to know what kind of environment and architecture design we are talking about, but in most cases iSCSI can provide enough IOPS. Let me try to explain why.

Storage systems share their capacity using several methods, but the most used ones are Fibre Channel and NFS, the first of which is block based. There are also quite a few other ways to share storage, such as iSCSI and FCoE; the latter has been a hot topic for quite a long time now, and the big bang of FCoE has been awaited for a couple of years. From a performance point of view, the biggest improvement for the last two has been 10 Gbit/s Ethernet technology, which provides a good pipe for data movement.

At the 2010 Microsoft Management Summit, Intel showed iSCSI performance results where they managed to get over 1 million IOPS (IO operations per second) from software iSCSI and 10 Gbit/s Ethernet technology, which is quite nice. At the same summit they had a nice demo on their booth where an environment built on Intel's Xeon 5600 chipset and the Microsoft software iSCSI initiator was able to do more than 1.2 million IO operations per second, with CPU usage of almost 100%. It is worth understanding that when CPU utilization is near 100% you cannot actually do anything other than this IO, but it shows that you can get really massive performance using iSCSI and 10 Gbit/s Ethernet.

In the past, iSCSI's bottleneck was the 1 Gbit/s Ethernet connection. Of course there were ways to get better performance through correct architecture design, but most iSCSI storage systems had only four 1 Gbit/s connections. When 10 Gbit/s connections became more common in storage systems, more and more cases appeared where iSCSI was a comparable solution to Fibre Channel. There also used to be dedicated iSCSI cards on the market, but they are mostly gone, because CPU technology got so good that the CPU overhead of iSCSI is no longer that relevant. Nowadays most 10 Gbit/s Ethernet cards can do iSCSI encapsulation on their own chip, so it doesn't burden the CPU much either.

10 Gbit/s Ethernet technology has helped a lot, and you don't need separate SAN networks anymore if you go with iSCSI or FCoE. You can use the already existing 10 Gbit/s connections, which are now common and mostly standard on blade systems. In big environments you should still have separation between your data networks and storage networks, but this can be done with proper network architecture and VLANs. I would still like to keep storage and data networking separated (at least at the core level) to avoid cases where problems in the data network might affect your storage systems.
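For reference, connecting a Linux host to an iSCSI target over an existing Ethernet network takes only a couple of open-iscsi commands; the portal address and IQN below are made-up examples.

# iscsiadm -m discovery -t sendtargets -p 192.168.10.20:3260
# iscsiadm -m node -T iqn.2010-01.com.example:storage.lun1 -p 192.168.10.20:3260 --login

After the login the LUN shows up as a normal SCSI disk (for example /dev/sdb) and can be multipathed and used just like an FC-attached device.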

FCoE is coming for sure, but there are still some limitations, and the lack of native FCoE storage is the main reason. However, if you are investing in a network renewal, I would keep FCoE in mind and build all new networks in a way that FCoE can be implemented with less work when the time comes. While waiting, iSCSI might be a good alternative to FC.

...but I still prefer old-school Fibre Channel. Why? Brocade just released 16 Gbit/s FC switches, and again FC is faster 😉

Read more about Intel's iSCSI performance test here.

Linux and LVM in virtual machines

Modern Linux operating systems offer excellent scalability in virtual machines by supporting, for example, logical volumes on the storage side, which makes it possible to grow and shrink operating system disks on the fly. VMware vSphere supports most of the Linux distributions on the market, although official support covers only the most popular ones. The purpose of this post is to show with examples how LVM works on Linux, particularly in virtual environments.

If your intention is to use the entire added disk to grow a logical volume, the recommendation is not to partition the disk at all, but to add the whole disk as an LVM physical volume. Disks can of course also be added to machines as-is without LVM, but this is not recommended, since LVM offers many useful features and scalability going forward, and in practice all modern Linux distributions offer LVM by default already at installation time.

How do I attach new disks without a reboot?

As soon as you have created a new virtual disk for the virtual machine with the vSphere Client, the new disk can easily be taken into use in the Linux operating system. Linux, however, requires the SCSI bus to be rescanned so that the newly added disks are found. Unfortunately I don't have an easy way to do this without the command line. To start the SCSI bus scan, give the command below (replace the X after 'host' with the number of your SCSI bus):

# echo "- - -" > /sys/class/scsi_host/hostX/scan

You can check with the dmesg command which disks were found by looking for the "Attached scsi disk" message:

# dmesg | grep Attached
sd 0:0:X:0: Attached scsi disk sdb

Once the new disk has been added to Linux, we create the LVM physical volume, volume group and logical volume, and finally the filesystem.

# pvcreate /dev/sdb
Physical volume “/dev/sdb” successfully created

# vgcreate datavg /dev/sdb
Volume group “datavg” successfully created

# lvcreate -n datalv1 -l 100%FREE datavg
Logical volume “datalv1” created

# mkfs.ext3 /dev/datavg/datalv1

In the first step a physical volume was created on the disk, after which a new volume group named "datavg" was created. Next, a logical volume named datalv1 was created, sized at 100% of the free space in datavg. To be able to use the new volume, an ext3 filesystem was created on it, after which the disk is ready to be taken into use.
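To actually take it into use, the new logical volume still needs to be mounted somewhere; the mount point /data below is just an example, and for a permanent mount you would also add an entry to /etc/fstab.

# mkdir /data
# mount /dev/datavg/datalv1 /data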

Growing an existing disk

When a virtual disk has been grown with the vSphere Client, the change must also be rescanned on the Linux side so that the added part of the disk can be taken into use. To do this you need to know the SCSI ID of the disk in question. Below is an example of how the rescan is done on the Linux side (replace the 0:0:1:0 after 'devices' with the correct ID):

# echo 1 > /sys/bus/scsi/devices/0:0:1:0/rescan

You can use the dmesg command to check that the change has been picked up successfully. You will find it by looking for the text "capacity change":

# dmesg | tail -n 10 | grep change
sdb: detected capacity change from 8589934592 to 17179869184

Finally, the filesystem itself must be grown to match the changed disk size (this example assumes the filesystem was created directly on /dev/sdb without LVM; for an LVM disk, see the next section):

# resize2fs /dev/sdb

Growing an existing volume and filesystem without partitioning

For the changes to take effect, you must run a rescan on the LVM physical volume so that the new size is detected (replace sdb with the correct disk you are using):

# pvresize /dev/sdb

Next, the logical volume is extended to match the grown size:

# lvextend -l +100%FREE /dev/VolGroup01/LogVol00

Finally, resize is run on the filesystem itself so that the new capacity becomes available:

# resize2fs /dev/VolGroup01/LogVol00
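As a quick sanity check after the resize (the volume group name and mount points depend on your system), you can confirm the new sizes with:

# vgs VolGroup01
# lvs VolGroup01
# df -h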