Could iSCSI provide enough performance?

Almost every day I run into the same question: can iSCSI provide enough performance? The answer naturally depends on the environment and the architecture design, but in most cases iSCSI can deliver enough IOPS. I'll try to explain why.

Storage systems share their capacity over several protocols, the most common being Fibre Channel (block-based) and NFS (file-based). There are also quite a few other ways to share storage, like iSCSI and FCoE; the latter has been a hot topic for a long time now, and the big bang of FCoE has been awaited for a couple of years. From a performance point of view, the biggest improvement for these last two has been 10 Gbit/s Ethernet, which provides a fat pipe for data movement.

At the 2010 Microsoft Management Summit, Intel demonstrated software iSCSI over 10 Gbit/s Ethernet reaching over 1 million IOPS (IO operations per second), which is quite nice. At the same summit they ran a booth demo where a system built on Intel's Xeon 5600 platform with the Microsoft software iSCSI initiator did more than 1.2 million IO operations per second at almost 100% CPU usage. Keep in mind that with CPU utilization near 100% the host cannot do much of anything besides this IO, but it shows the really massive performance you can get with iSCSI and 10 Gbit/s Ethernet.
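
To put those numbers in perspective, here is a back-of-envelope calculation (my own illustration, not part of Intel's published results): the wire bandwidth an IOPS figure consumes depends heavily on the block size, so 1.2 million IOPS only fits on a single 10 Gbit/s link if the IOs are small.

```python
# Rough payload bandwidth needed for a given IOPS rate and block size.
# Illustration only -- real iSCSI adds TCP/IP and header overhead on top.

def required_gbits(iops: int, block_bytes: int) -> float:
    """Payload bandwidth in Gbit/s for `iops` operations of `block_bytes` each."""
    return iops * block_bytes * 8 / 1e9

for block in (512, 4096, 8192):
    gbps = required_gbits(1_200_000, block)
    fits = "fits" if gbps <= 10 else "exceeds"
    print(f"{block:>5} B blocks: {gbps:6.1f} Gbit/s ({fits} a single 10 GbE link)")
```

At 512-byte blocks, 1.2 million IOPS is only about 4.9 Gbit/s of payload, while at 4 KB it would need roughly 39 Gbit/s, i.e. several 10 GbE links.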

In the past, iSCSI's bottleneck was the 1 Gbit/s Ethernet connection. There were of course ways to get better performance with the right architecture design, but most iSCSI storage systems only had four 1 Gbit/s connections. As 10 Gbit/s connections made their way into storage systems, iSCSI became a comparable solution to Fibre Channel in more and more cases. There also used to be dedicated iSCSI cards on the market, but they are mostly gone because CPUs became fast enough that the CPU overhead of iSCSI was no longer so relevant. Nowadays most 10 Gbit/s Ethernet cards can do iSCSI encapsulation on their own chip, so the impact on the CPU is small anyway.
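
As a rough illustration of why the 1 Gbit/s era was so limiting (again my own numbers, ignoring protocol overhead), here is the payload-only IOPS ceiling per link at a typical 4 KiB block size:

```python
# Theoretical payload-only IOPS ceiling per link at a 4 KiB block size.
# Illustration only; TCP/IP and iSCSI headers lower the real ceiling.

BLOCK = 4096  # bytes per IO

for name, gbps in [("1 GbE", 1), ("4 x 1 GbE", 4), ("10 GbE", 10)]:
    bytes_per_sec = gbps * 1e9 / 8
    print(f"{name:>9}: ~{bytes_per_sec / BLOCK:,.0f} IOPS at 4 KiB blocks")
```

Even four aggregated 1 GbE links top out around 120,000 IOPS of raw payload, while a single 10 GbE link offers roughly 300,000.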

10 Gbit/s Ethernet has helped a lot: with iSCSI or FCoE you don't need a separate SAN network anymore. You can use the already existing 10 Gbit/s connections, which are now common and more or less standard on blade systems. In big environments you should still have separation between your data networks and storage networks, but this can be done with proper network architecture and VLANs. Personally I would still keep storage and data networking separated at least at the core level, to avoid cases where problems in the data network might affect your storage systems.
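
For the curious, VLAN separation works by inserting an 802.1Q tag into each Ethernet frame. Here is a minimal sketch of what that tag looks like on the wire (the VLAN IDs 100 and 200 are arbitrary examples of my own, not from any standard layout):

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID 0x8100 plus a 16-bit TCI."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)  # 3-bit PCP, 1-bit DEI, 12-bit VID
    return struct.pack("!HH", 0x8100, tci)

# Hypothetical layout: data traffic on VLAN 100, iSCSI traffic on VLAN 200.
print(dot1q_tag(100).hex())  # -> 81000064
print(dot1q_tag(200).hex())  # -> 810000c8
```

Switches forward frames only within the VLAN named in that tag, which is what keeps storage and data traffic apart even on shared 10 GbE links.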

FCoE is coming for sure, but it still has some limitations, mostly due to the lack of native FCoE storage. However, if you are investing in a network renewal, I would keep FCoE in mind and build all new networks in a way that lets FCoE be implemented with little extra work when the time comes. While waiting, iSCSI can be a good alternative to FC.

…but I still prefer old-school Fibre Channel. Why? Brocade just released 16 Gbit/s FC switches, and once again FC is faster 😉

Read more about Intel's iSCSI performance test here.
