IBM added new disk choices for Storwize V7000

Recently IBM added two new disk choices for the Storwize V7000 in the 2.5″ form factor:

  • 300 GB 15k SAS
  • 1 TB 7.2k Nearline (NL) SAS

This brings the total number of 2.5″ disk options for the Storwize V7000 to seven:

  • 300 GB SSD
  • 146 GB 15k SAS
  • 300 GB 15k SAS
  • 300 GB 10k SAS
  • 450 GB 10k SAS
  • 600 GB 10k SAS
  • 1 TB 7.2k Nearline SAS

and of course the 3.5″ 2 TB Nearline SAS disk is still available.

Both new disks should be available this week.

The original release notes can be found here.

Why IBM’s XIV matters

IBM’s XIV has raised a lot of discussion in the market, mostly because IBM claims it is an enterprise-class storage system even though it runs on SATA disks, which are generally considered midrange technology.

There are plenty of cases where customers have evaluated IBM’s XIV storage system and realized that it is an amazing product. Here are a couple of examples of why XIV really matters:

  • A service provider that had used EMC disk systems for over 10 years evaluated the IBM XIV against upgrading to EMC V-Max. The three-year total cost of ownership (TCO) of EMC’s V-Max was $7 million higher, so EMC counter-proposed a CLARiiON CX4 instead. The customer selected XIV.
  • A large US bank holding company managed to get 5.3 GB/sec from a pair of XIV boxes for their analytics environment. That’s amazing performance from SATA disks!

I have seen IBM’s XIV in a couple of customer environments, and it has really proven to be enterprise storage. IBM recently upgraded XIV to its third generation. At the same time they announced that XIV will get an optional SSD caching feature, which is claimed to take performance to the next level at a fraction of typical SSD storage costs. Will this really happen? I would bet that the feature arrives quite soon. The third generation also got a larger internal cache, faster disk controllers, and an InfiniBand internal interconnect.

IBM has proven that you can build really well-performing storage by approaching the problem from a different angle and using generic Intel x64 hardware and generic SATA disks. In fact, XIV was not invented by IBM but by a company founded by Moshe Yanai (who previously led EMC’s Symmetrix development).

Read more about XIV on IBM’s XIV page.

Could iSCSI provide enough performance?

Almost every day I run into a situation where people are wondering whether iSCSI can provide enough performance. The answer naturally depends on the environment and the architecture design, but in most cases iSCSI can provide enough IOPS. I’ll try to explain why.

Storage systems share their capacity using several methods, but the most common ones are Fibre Channel and NFS, the first of which is block based. There are also quite a few other ways to share storage, such as iSCSI and FCoE; FCoE has been a hot topic for a long time, and its big bang has been expected for a couple of years now. From a performance point of view, the biggest improvement for these last two has been 10 Gbit/s Ethernet, which provides a good pipe for data movement.

At the 2010 Microsoft Management Summit, Intel showed iSCSI performance results where they got over 1 million IOPS (IO operations per second) out of software iSCSI and 10 Gbit/s Ethernet, which is quite nice. At the same summit they had a nice demo on their booth with an Intel Xeon 5600-based system and the Microsoft software iSCSI initiator doing more than 1.2 million IO operations per second at almost 100% CPU usage. It is important to understand that with CPU utilization near 100% you cannot actually do anything else besides that IO, but it shows that you can get really massive performance out of iSCSI and 10 Gbit/s Ethernet. A rough calculation of what such an IOPS number means in bandwidth terms is sketched below.
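An IOPS figure only translates into bandwidth once you assume a block size, and small blocks are exactly where maximum-IOPS benchmarks shine. Here is a minimal back-of-the-envelope sketch; the block sizes are my own illustrative assumptions, not details from Intel’s test setup:

```python
# Back-of-the-envelope: what does an IOPS number mean in bandwidth terms?
# Block sizes below are assumptions for illustration only; maximum-IOPS
# benchmarks typically use small IOs.

LINK_GBPS = 10  # one 10 Gbit/s Ethernet port

def throughput_gbps(iops, block_bytes):
    """Payload bandwidth (Gbit/s) implied by an IOPS rate at a given block size."""
    return iops * block_bytes * 8 / 1e9

for block in (512, 4096, 8192):
    gbps = throughput_gbps(1_200_000, block)
    links = gbps / LINK_GBPS
    print(f"1.2M IOPS @ {block:>5} B = {gbps:6.2f} Gbit/s (~{links:.1f} x 10GbE)")

# Output:
# 1.2M IOPS @   512 B =   4.92 Gbit/s (~0.5 x 10GbE)
# 1.2M IOPS @  4096 B =  39.32 Gbit/s (~3.9 x 10GbE)
# 1.2M IOPS @  8192 B =  78.64 Gbit/s (~7.9 x 10GbE)
```

The point is that with small blocks, a single 10 Gbit/s port is not even close to being the bottleneck; with larger blocks you simply need more ports, which modern servers and storage controllers already have.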

In the past, iSCSI’s bottleneck was the 1 Gbit/s Ethernet connection. Of course there were ways to get better performance with a correct architecture design, but most iSCSI storage systems had only four 1 Gbit/s connections. When 10 Gbit/s connections became more common on storage systems, iSCSI became a comparable solution to Fibre Channel in more and more cases. There also used to be dedicated iSCSI cards on the market, but they are mostly gone because CPUs became fast enough that the CPU overhead of iSCSI was no longer significant. Nowadays most 10 Gbit/s Ethernet cards can do iSCSI encapsulation on their own chip, so it does not load the CPU much either. The sketch below shows how much headroom 10 Gbit/s ports give compared to the old four-port 1 Gbit/s setups.
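To make the difference concrete, here is a minimal sketch comparing the theoretical wire-speed IOPS ceiling of the two designs. The port counts and block size are my own illustrative assumptions, not vendor specifications, and protocol/TCP overhead is ignored, so real-world numbers are lower:

```python
# Theoretical wire-speed IOPS ceiling: how many IOs per second fit on the links?
# Port counts and block size are illustrative assumptions; protocol overhead
# is ignored, so real numbers come out lower.

def iops_ceiling(ports, gbps_per_port, block_bytes):
    """Maximum IOs per second the raw link bandwidth allows."""
    total_bits_per_sec = ports * gbps_per_port * 1e9
    return total_bits_per_sec / (block_bytes * 8)

block = 8192  # 8 KB IOs, a common size in virtualization workloads

old = iops_ceiling(ports=4, gbps_per_port=1, block_bytes=block)   # classic 4 x 1GbE iSCSI array
new = iops_ceiling(ports=2, gbps_per_port=10, block_bytes=block)  # dual-port 10GbE controller

print(f"4 x 1GbE : ~{old:,.0f} IOPS at {block} B")   # ~61,035 IOPS
print(f"2 x 10GbE: ~{new:,.0f} IOPS at {block} B")   # ~305,176 IOPS
```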

10 Gbit/s Ethernet has helped a lot, and you don’t need separate SAN networks anymore if you go with iSCSI or FCoE. You can use the existing 10 Gbit/s connections, which are now common and more or less standard on blade systems. Still, in big environments you should separate your data networks from your storage networks, which can be done with proper network architecture and VLANs. Personally, I would still keep the separation physical (at least at the core level) to avoid cases where problems in the data network affect your storage systems.

FCoE is coming for sure, but it still has some limitations, mainly the lack of native FCoE storage. However, if you are investing in a network renewal, I would keep FCoE in mind and build all new networks so that FCoE can be implemented with less work when the time comes. While waiting, iSCSI can be a good alternative to FC.

…but I still prefer old-school Fibre Channel. Why? Brocade just released 16 Gbit/s FC switches, so FC is faster again 😉

Read more about Intel’s iSCSI performance test here.