Saturday, June 1, 2013

Debunking some common myths about high-performance networks

Can high-performance InfiniBand or Ethernet add value without adding cost?

The short answer is yes. The longer answer is that several myths surrounding this topic lead most people to assume the answer is no. That's because Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) providers are known for using low-cost commodity compute, network and storage hardware to sustain a business model built on low overall costs.

If that's true, it's natural to be surprised that high-performance (40 Gbps to 56 Gbps) InfiniBand or Data Center Bridging (DCB) Ethernet -- neither of which is inexpensive upfront -- can reduce costs for enterprises while adding value. So, let's debunk some common market myths about high-performance networks.

Myth: High-performance 40 Gbps to 56 Gbps networks (InfiniBand or DCB Ethernet) cost more than standard interconnects.

Reality check: Cost depends heavily on what is being measured and compared. For example, 1 Gbps standard Ethernet can be expensive when measured as cost/Gbps, cost/IOPS or cost/nanosecond of latency. Costs include network interface cards (NICs), transceivers, cables, conduit, switch ports, switches, rack space, floor space, power, cooling, maintenance, management, operations and administration. Trunking 1 Gbps ports (a common, inexpensive way to increase bandwidth to a specific device) has the unintended side effect of network sprawl: it requires more switches, cables and transceivers, and especially more administration.
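
To make the cost/Gbps point concrete, here's a minimal sketch in Python. Every price in it is a hypothetical placeholder (none comes from a vendor quote or from this article); the point is simply that dividing total acquisition cost by delivered bandwidth can invert the "1 GbE is cheap" intuition once trunking overhead is counted.

    # Hypothetical cost/Gbps comparison: trunked 1 GbE vs. one 40 Gbps port.
    # All prices below are illustrative placeholders; substitute your own.

    def cost_per_gbps(port_cost, ports, gbps_per_port, overhead_per_port):
        """Total acquisition cost divided by delivered bandwidth.

        port_cost         -- NIC/switch-port cost per port (hypothetical USD)
        ports             -- number of ports trunked together
        gbps_per_port     -- raw bandwidth of each port
        overhead_per_port -- cable, transceiver and admin burden per port (USD)
        """
        total_cost = ports * (port_cost + overhead_per_port)
        total_gbps = ports * gbps_per_port
        return total_cost / total_gbps

    # Forty 1 Gbps ports trunked to approximate 40 Gbps of bandwidth...
    trunked_1gbe = cost_per_gbps(port_cost=50, ports=40, gbps_per_port=1,
                                 overhead_per_port=75)
    # ...versus a single 40 Gbps InfiniBand or DCB Ethernet port.
    single_40g = cost_per_gbps(port_cost=700, ports=1, gbps_per_port=40,
                               overhead_per_port=150)

    print(f"Trunked 1 GbE : ${trunked_1gbe:.2f} per Gbps")  # $125.00 per Gbps
    print(f"Single 40 Gbps: ${single_40g:.2f} per Gbps")    # $21.25 per Gbps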

High-performance 40 Gbps to 56 Gbps networking does have a higher initial cost than 1 Gbps standard Ethernet, but by nearly every other measure it can cost far less. Standard 10 Gbps Ethernet (i.e., 10 Gbps Ethernet not rated or designed for DCB) costs less upfront; however, it's not as inexpensive as you might think, primarily because standard 10 Gbps Ethernet, DCB Ethernet and InfiniBand use the same transceivers (connectors) and cables, so savings from economies of scale apply equally to all of them. By the other cost measures, high-performance networks (40 Gbps to 56 Gbps) come out lower. There's currently an unexpected market anomaly in 10 Gbps DCB Ethernet: common street prices average a higher upfront cost per port than 40 Gbps to 56 Gbps InfiniBand or DCB Ethernet.

Myth: High-performance 40 Gbps to 56 Gbps networks (InfiniBand or DCB Ethernet) are Layer 2 networking, which is difficult to implement, operate and manage.

Reality check: These are Layer 2 fabrics rather than the routed Layer 3 IP networks typically run over standard Ethernet, but they're not very difficult to manage if the right software is used. Most of the wariness about Layer 2 fabrics comes from experience with Fibre Channel (FC) fabrics, which are difficult to implement, operate and manage. InfiniBand and DCB Ethernet did not repeat that mistake. Several vendors provide effective, built-in end-to-end management software that eliminates many of the labor-intensive tasks so common in FC Layer 2 networks.

So, how do high-performance 40 Gbps to 56 Gbps networks reduce costs and increase value? It's directly tied to the cost of infrastructure. One of the key advantages of InfiniBand and DCB Ethernet is that they can concurrently support multiple protocols, allowing storage I/O, IP networking, and server and application clustering to converge onto a single managed fabric. It doesn't matter whether the storage networks are iSCSI, Fibre Channel Protocol, ATA over Ethernet or Fibre Channel over Ethernet; whether the IP networks carry TCP/IP, User Datagram Protocol or file systems such as NFS, CIFS or HDFS; or whether server or application clustering uses Message Passing Interface (MPI) or other protocols -- they're all virtualized and run simultaneously on the same adapters and fabrics. When older edge devices need to be part of the converged fabric, the switches have gateways that convert seamlessly to installed older fabrics (1 Gbps or 10 Gbps Ethernet, and 4 Gbps/8 Gbps/16 Gbps FC).
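
As a conceptual illustration of that convergence, the sketch below models a single DCB Ethernet port carrying storage, clustering and IP traffic as separate traffic classes. The building blocks are real DCB standards (IEEE 802.1p priorities, 802.1Qaz Enhanced Transmission Selection, 802.1Qbb Priority Flow Control), but the specific protocol-to-class mapping and bandwidth percentages are assumptions for illustration, not vendor defaults.

    # Conceptual model of protocol convergence on one DCB port: each traffic
    # class gets an IEEE 802.1p priority and an ETS (802.1Qaz) bandwidth share.
    # The mapping and percentages below are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class TrafficClass:
        name: str           # workload carried by this class
        priority: int       # 802.1p priority (0-7)
        bandwidth_pct: int  # ETS guaranteed share of the converged link
        lossless: bool      # whether Priority Flow Control (802.1Qbb) applies

    converged_port = [
        TrafficClass("FCoE/iSCSI storage", priority=3, bandwidth_pct=40, lossless=True),
        TrafficClass("MPI clustering",     priority=5, bandwidth_pct=30, lossless=True),
        TrafficClass("TCP/IP, NFS, CIFS",  priority=0, bandwidth_pct=30, lossless=False),
    ]

    # Sanity check: ETS shares on a single converged port must total 100%.
    assert sum(tc.bandwidth_pct for tc in converged_port) == 100

    for tc in converged_port:
        flow = "lossless (PFC)" if tc.lossless else "best effort"
        print(f"{tc.name:20s} prio {tc.priority}  {tc.bandwidth_pct:3d}%  {flow}")

Priority Flow Control is what lets loss-sensitive storage protocols such as FCoE share a port with ordinary best-effort IP traffic: the storage class can be paused per priority instead of dropping frames.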

Converging and virtualizing all these disparate networks can dramatically reduce IaaS and PaaS infrastructure costs while increasing reliability. Costs fall because fewer adapters are needed per attached server (which also shrinks the physical size of the servers required). That, in turn, means fewer transceivers, cables, conduits, switches and software licenses; less rack space, floor space, power and cooling; and less administration, maintenance and complexity.
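
Here's a back-of-the-envelope sketch of that consolidation, again in Python with hypothetical per-server port counts (four 1 GbE ports plus two FC ports before convergence, two redundant converged ports after):

    # Rough convergence-savings sketch; port counts are illustrative assumptions.
    SERVERS = 40  # a hypothetical rack of 40 servers

    separate_ports_per_server = 4 + 2  # 4x 1 GbE NIC ports + 2x FC HBA ports
    converged_ports_per_server = 2     # two redundant InfiniBand/DCB ports

    def fabric_footprint(ports_per_server, servers=SERVERS):
        """Each server port implies one cable, one transceiver pair and one
        switch port, so the counts scale together."""
        ports = ports_per_server * servers
        return {"server ports": ports, "cables": ports, "switch ports": ports}

    before = fabric_footprint(separate_ports_per_server)
    after = fabric_footprint(converged_ports_per_server)
    for item in before:
        print(f"{item:12s}: {before[item]} -> {after[item]} "
              f"({before[item] - after[item]} fewer)")

Every eliminated port also removes its share of conduit, power, cooling and administration, which is where the larger operational savings accumulate.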

It's a much simpler environment to grow. By virtualizing server I/O into a single high-performance, low-latency fabric, InfiniBand and Ethernet high-speed networks offer greater speed at a cost that may surprise some enterprise IT managers still clinging to the myths that sometimes surround these technologies.

BIO: Marc Staimer is founder and senior analyst at Dragon Slayer Consulting in Beaverton, Ore. The 15-year-old consulting practice focuses on strategic planning, product development and market development. With more than 33 years of marketing, sales and business experience in infrastructure, storage, server, software, database, big data and virtualization, Marc is considered one of the industry's leading experts. He can be reached at marcstaimer@me.com.

This was first published in May 2013

