The Xceptional Blog

Just Go Away! 4 Server Technologies to Ditch in 2019

Written by Natalie | Feb 4, 2019 7:53:20 PM

It’s that time of year again. Another year has gone by, which means you can expect to be bombarded with bold predictions of the technology innovations and trends coming your way in 2019. And for every amazing new technology you’re introduced to, there are many others that just need to go away. Like now. For good.

If you want to move forward, you must be willing to shed the old. Technologies that were once cutting-edge, which brought IT productivity and innovation, can now trap you. They stagger on, slowly, relentlessly consuming your data center. And then your business. Bid them goodbye.

Aging infrastructure can slow performance, require extra IT support, and incur unexpected maintenance costs. It can cause you to miss business objectives. According to the analyst firm IDC, neglecting to upgrade your server infrastructure in a timely fashion can cost IT organizations up to 39 percent of peak performance. IDC also found that it adds up to 40 percent in application management costs and up to 148 percent in server administration costs.[i]


Server technology “unpredictions”  

As thought leaders look to the future and share the things you should start doing, here we’ll focus on the other side of innovation: the things you should leave behind.

  1. Goodbye: Hard Disk Drives (HDDs)

Meeting the challenges of exponential data growth and increasingly diversified applications and infrastructure is nearly impossible with existing, spinning hard disk drive (HDD) technology. HDDs, tailored for traditional enterprise applications and mission-critical needs, have been the workhorse of the storage industry for decades. However, while the capacity of HDDs has increased over the years, their random input/output (I/O) performance has not, creating bottlenecks in data access. They also consume considerable power and require ongoing maintenance. Many of today’s enterprise web, cloud, and virtualized applications require both high capacity and high performance; HDDs cannot deliver that combination cost-effectively.

Hello:  Solid State Drives (SSDs)

Solid State Drives (SSDs) based on NAND flash memory are well suited to today’s modern data center. Compared to HDDs, SSDs offer vastly higher data-transfer rates, higher areal storage density, better reliability, and much lower, more consistent latency and access times. SSDs also consume less power per system than equivalent HDDs, reducing data center power and cooling expenses. And because flash is non-volatile memory (NVM), it retains data when power is removed; no destaging is required. While using SSDs instead of HDDs in any storage platform significantly improves application performance, the gains are even more dramatic when SSDs are connected directly to the server’s PCIe bus via NVMe™ (Non-Volatile Memory Express). And by maximizing CPU core utilization, SSDs let you serve the same workloads with fewer servers and fewer recurring software licenses, stretching your IT budget.
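
If you want to see the random-I/O gap for yourself, a rough measurement is easy to sketch. The Python snippet below times random 4 KiB reads against a large test file; the /mnt/hdd and /mnt/ssd paths are placeholders for files on each drive type, and a production-grade benchmark would use a dedicated tool such as fio with direct I/O instead.

    import os
    import random
    import time

    def avg_random_read_latency(path, block_size=4096, samples=200):
        """Time random 4 KiB reads across a file to approximate random-I/O latency."""
        size = os.path.getsize(path)
        total = 0.0
        # Note: the OS page cache will flatter these numbers; direct I/O
        # (O_DIRECT with aligned buffers) gives a fairer comparison.
        with open(path, "rb") as f:
            for _ in range(samples):
                f.seek(random.randrange(0, size - block_size))
                start = time.perf_counter()
                f.read(block_size)
                total += time.perf_counter() - start
        return total / samples

    # Placeholder paths: one large test file per drive type.
    print("HDD avg latency:", avg_random_read_latency("/mnt/hdd/testfile"))
    print("SSD avg latency:", avg_random_read_latency("/mnt/ssd/testfile"))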

  2. Goodbye: Homogeneous Computing

Moore’s Law is no longer keeping pace with the speed of innovation.[ii] A few years ago, the typical data center was designed around row upon row of off-the-shelf, industry-standard x86 servers, and like clockwork, the hardware, and the software with it, got exponentially better every two years. But that one-size-fits-all homogeneous approach is now running up against the physical limits of scaling transistors while still managing power and thermals. This constraint, combined with the demands of today’s compute-intensive applications, driven by an insatiable appetite for more data, more real-time information, and faster services, means the days when general-purpose microprocessors ran all workloads are gone.

Hello:  Heterogeneous Computing

In heterogeneous computing, conventional CPUs are augmented with specialized processors, or accelerators, improving the performance and power efficiency of servers running the compute- and data-intensive workloads that are becoming commonplace in the modern data center. Accelerators such as GPUs (graphics processing units), FPGAs (field-programmable gate arrays), and ASICs (application-specific integrated circuits) can speed up workloads like virtualization, 3D/2D graphics, and HPC, as well as emerging applications such as artificial intelligence (AI), machine learning (ML), and database analytics.
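
To make the offload concrete, here is a minimal sketch of moving one data-parallel operation from the CPU to a GPU accelerator. It assumes an NVIDIA GPU and the CuPy package (a NumPy-compatible GPU array library); the matrix size is arbitrary, and a fair benchmark would also warm up the GPU before timing.

    import time
    import numpy as np
    import cupy as cp  # assumes an NVIDIA GPU and the cupy package

    n = 4096
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    # General-purpose CPU cores
    t0 = time.perf_counter()
    np.matmul(a, b)
    cpu_s = time.perf_counter() - t0

    # GPU accelerator: same operation, thousands of parallel cores
    a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
    t0 = time.perf_counter()
    cp.matmul(a_gpu, b_gpu)
    cp.cuda.Stream.null.synchronize()  # GPU kernels run async; wait before timing
    gpu_s = time.perf_counter() - t0

    print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s")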

  3. Goodbye: Intelligent Platform Management Interface (IPMI)

Managing cloud- and web-scale data center infrastructures with large numbers of heterogeneous servers can be challenging. Traditional interfaces defined for server-centric environments, like the Intelligent Platform Management Interface (IPMI), are outdated, complex, and vulnerable to security breaches. Administrators, programmers, and DevOps teams must also deal with a multitude of devices that didn’t exist when IPMI was created, making those devices very difficult to integrate into their environments.

Hello:  Redfish

Redfish, an open industry-standard API published by the DMTF (formerly the Distributed Management Task Force), is designed to deliver simple and secure management for converged, hybrid IT and the software-defined data center. Delivering both in-band and out-of-band manageability, Redfish leverages common Internet and web services standards to expose information directly to the modern tool chain.
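
Because Redfish is just HTTPS and JSON, any standard web tooling can talk to it. As a minimal sketch, the Python snippet below (using the requests library; the BMC address and credentials are placeholders, and certificate verification is disabled only for lab convenience) walks the standard /redfish/v1/Systems collection and prints each server’s model, power state, and health.

    import requests  # third-party HTTP library

    BMC = "https://10.0.0.42"     # placeholder BMC address
    AUTH = ("admin", "password")  # placeholder credentials

    # Fetch the standard Systems collection from the Redfish service root
    systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()

    # Each member links to one managed server
    for member in systems["Members"]:
        s = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
        print(s.get("Model"), s.get("PowerState"), s.get("Status", {}).get("Health"))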

  4. Goodbye: Cybersecurity

Security attacks continue to take a substantial toll on organizations across all industries. New threats and vulnerabilities are being identified in real time, while many attacks still slip past an organization’s firewalls. The consequences of a security breach can be staggering, and the financial implications aren’t the only concern: breaches also bring significant operational and reputational risks. Yet while cybersecurity is increasingly top of mind for many IT managers, most of the focus is on preventive mechanisms that protect the OS and applications from malicious attacks. Little thought or planning is devoted to securing the underlying server infrastructure, including the hardware and the firmware.

Hello:  Cyber Resiliency

Cyber resiliency is a mindset that encompasses the full spectrum of issues that occur before, during, and after a system experiences a malicious or adverse event. The objective is not to build an impregnable fortress. Rather, it is to create the capability to anticipate threats, absorb their impacts, and respond rapidly and flexibly so that key systems and processes keep operating. By expanding the scope this way, cyber resiliency offers a longer-term, sustainable approach to managing business risk.
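
One small building block of that broader posture is refusing to run firmware you can’t verify. The sketch below is purely illustrative: the image path and pinned digest are hypothetical, and real servers implement this check in hardware via a silicon root of trust rather than in a script. It simply compares a firmware image’s SHA-256 hash against a known-good value before allowing a flash.

    import hashlib

    # Hypothetical known-good digest, pinned from the vendor's signed release notes
    KNOWN_GOOD_SHA256 = "replace-with-pinned-vendor-digest"

    def firmware_is_trusted(image_path: str) -> bool:
        """Hash the firmware image and compare it to the pinned digest."""
        h = hashlib.sha256()
        with open(image_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
                h.update(chunk)
        return h.hexdigest() == KNOWN_GOOD_SHA256

    if not firmware_is_trusted("/var/firmware/bmc_update.bin"):  # hypothetical path
        raise SystemExit("Firmware image failed integrity check; refusing to flash.")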

Out with the Old, In with the New

If your business is still running on legacy infrastructure, you’re running on borrowed time. So, as you turn the calendar to 2019, shake off the old and make way for innovation. Not sure where to start? It shouldn’t surprise you that our answer is: start with us! We’ll be more than happy to help.

[i] Jed Scaramella, Rob Brothers, and Randy Perry, “Why Upgrade Your Server Infrastructure Now?” (IDC White Paper commissioned by Dell EMC, July 2016)

[ii] Tim Cross, “After Moore’s Law,” Technology Quarterly, The Economist, March 12, 2016, retrieved January 28, 2019

 

 By Deb Sheedy

Published with permission from https://blog.dellemc.com/en-us/