Enterprise storage has been evolving in recent years, as more data, and more kinds of data, enter the digital universe, and enterprises seek the most effective ways to manage and exploit it.
While smaller businesses are increasingly moving to SaaS applications and cloud-based storage in pursuit of lower costs and greater agility, most enterprises need to pursue a hybrid strategy, retaining a significant amount of storage capacity on-premises for reasons of performance (a function of latency and bandwidth), security, regulatory compliance, cost and/or the avoidance of cloud service lock-in.
Typically an enterprise will keep mission-critical data in-house, using the cloud for lower-priority data, and for coping with episodic capacity requirements that fall outside the normal run of business.
The major in-house infrastructure trend among enterprises is away from traditional siloed NAS and SAN solutions (usually from ‘legacy’ providers) towards the kind of software-defined scale-out storage — using commodity rather than proprietary hardware — pioneered by cloud giants such as Amazon, Google and Facebook. Also important, where performance is critical, is the move away from traditional spinning disks towards flash storage (often from ‘emerging’ providers).
These major trends, and their ramifications, are driving the storage sector towards what many observers believe to be a ‘tipping point’ that we’ll see playing out over the next few years.
STORAGE TRENDS AND PREDICTIONS FOR 2015
Around the turn of each year, pundits publish numerous forward-looking articles covering different technology sectors. To get a more detailed picture of the storage sector in 2015 and beyond, we’ve examined 26 such articles and logged the specific trends and predictions made by each author. The resulting frequency chart gives a good indication of what’s currently occupying the pundits:
The clear leader is ‘flash arrays and SSD’ (solid-state drives), followed by ‘backup, disaster recovery & archiving’, ‘hyper-converged infrastructure, software-defined storage’ and ‘big data & hyperscale storage’. Further down the list you’ll find many more familiar subjects, including hybrid and public cloud storage, security and encryption, object storage and unstructured data, and more.
Interestingly, several predictions centered around the likely fortunes of innovative ‘emerging’ vendors versus incumbent ‘legacy’ storage providers — a sure sign of significant market turmoil. The enterprise storage market has certainly seen plenty of startup investment in recent years, as this chart from venture capital database service CB Insights makes clear:
Between 2010 and 2014, funding for enterprise storage startups increased almost three-fold.
Much of this investment is targeted at the number-one trend identified above — the rise of flash arrays and solid-state drives (in servers) for processing ‘hot’ IOPS-centric data that needs to be accessed quickly and frequently. The reason that flash has generally been restricted to the top tier of enterprise data is cost, but a recent analysis by Wikibon suggests that flash is en route to being more cost-effective than hard disk technology for almost all forms of storage from 2016 onwards:
According to Wikibon, three main drivers underlie this transition: (1) Consumer demand for flash that will drive down enterprise flash costs; (2) New scale-out flash array architectures that will allow physical data to be shared across many applications without performance impacts; and (3) New data centre deployment philosophies that allow data to be shared across the enterprise rather than stove-piped in storage pools dedicated to particular types of applications.
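The shape of such a cost crossover is easy to illustrate: if flash prices fall at a faster annual rate than hard disk prices, the $/GB curves must eventually intersect. The sketch below uses invented starting prices and decline rates purely for illustration — they are not Wikibon's figures:

```python
# Hypothetical illustration of a $/GB cost crossover between flash and HDD.
# The starting prices and annual decline rates below are invented for
# illustration only; they are NOT Wikibon's figures.

def cost_per_gb(start_price, annual_decline, years):
    """Project a $/GB price assuming a constant annual percentage decline."""
    return start_price * (1 - annual_decline) ** years

# Assumed 2015 starting points and decline rates (hypothetical):
FLASH_START, FLASH_DECLINE = 0.35, 0.35   # flash starts dearer but falls faster
HDD_START, HDD_DECLINE = 0.04, 0.05       # HDD starts cheaper but falls slowly

for year in range(11):
    flash = cost_per_gb(FLASH_START, FLASH_DECLINE, year)
    hdd = cost_per_gb(HDD_START, HDD_DECLINE, year)
    marker = "  <-- flash now cheaper" if flash <= hdd else ""
    print(f"{2015 + year}: flash ${flash:.4f}/GB, hdd ${hdd:.4f}/GB{marker}")
```

With these particular (made-up) parameters the curves cross after about six years; the real-world timing depends entirely on the actual decline rates, which is exactly what analyses like Wikibon's attempt to pin down.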
Flash arrays and SSDs have been flagged up as an enterprise storage trend for a long time, of course: in Gartner’s latest Hype Cycle for Storage Technologies, for example, solid-state arrays are deemed to be sliding into the post-peak ‘trough of disillusionment’, while enterprise SSDs are entering the ‘plateau of productivity’ (for an interesting analysis of large-scale enterprise flash adoption, see this study by researchers from Carnegie Mellon University and Facebook).
The second most frequently mentioned trend for 2015 concerns ‘colder’ — less frequently accessed but more voluminous — data required for backup, disaster recovery and archiving purposes. In the past, these requirements generally had to be configured for multiple enterprise data silos, and often involved slow and inefficient tape-based workflows. By contrast, modern highly virtualised, software-defined storage infrastructures can build in backup, disaster recovery and archiving workflows in a much more efficient and cost-effective manner. Increasingly, these solutions will include cloud-based storage.
Hyper-converged infrastructure, the third-placed trend, combines compute, network, direct-attached storage (DAS) and virtualisation resources in a single unit, which can either be a physical appliance or a software-only solution ready to run on commodity hardware. As far as storage is concerned, the key feature of hyper-converged infrastructure is the ability to combine the separate DAS components into a software-definable pool of shared storage. Key vendors in this space include Nutanix on the ‘emerging’ side, and incumbents such as EMC and VMware.
The compute and networking components have had their software-defined moment, and now it’s the turn of storage. In general terms, software-defined storage (SDS), fourth in our trends ranking, is about decoupling the hardware from the control software, enabling centralised management and the use of affordable commodity hardware. The SNIA (Storage Networking Industry Association) puts flesh on these bones, stipulating that SDS should include the following:
- Automation – simplified management that reduces the cost of maintaining the storage infrastructure
- Standard interfaces – APIs for the management, provisioning, and maintenance of storage devices and services
- Virtualised data path – block, file and/or object interfaces that support applications written to these interfaces
- Scalability – seamless ability to scale the storage infrastructure without disruption to the specified availability or performance
- Transparency – the ability for storage consumers to monitor and manage their own storage consumption against available resources and costs
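The SNIA criteria above can be made concrete with a small sketch. The class and method names below are hypothetical — they don't belong to any real vendor's API — but they show how a standard provisioning interface (block/file/object) and a transparency-style usage report might look to a storage consumer:

```python
# Hypothetical sketch of an SDS management API: standard provisioning
# interfaces plus a consumption report. Names are illustrative only,
# not any real product's API.

class SoftwareDefinedStoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}  # volume name -> size in GB

    def available_gb(self):
        return self.capacity_gb - sum(self.volumes.values())

    def provision(self, name, size_gb, interface="block"):
        """Provision a volume through a standard interface (block/file/object)."""
        if interface not in ("block", "file", "object"):
            raise ValueError(f"unsupported interface: {interface}")
        if size_gb > self.available_gb():
            raise RuntimeError("insufficient capacity in pool")
        self.volumes[name] = size_gb
        return {"name": name, "size_gb": size_gb, "interface": interface}

    def usage_report(self):
        """Transparency: consumers monitor consumption against resources."""
        used = sum(self.volumes.values())
        return {"used_gb": used, "free_gb": self.capacity_gb - used}

pool = SoftwareDefinedStoragePool(capacity_gb=10_000)
pool.provision("vm-images", 2_000, interface="file")
pool.provision("db-logs", 500, interface="block")
print(pool.usage_report())  # {'used_gb': 2500, 'free_gb': 7500}
```

In a real SDS product the same ideas surface as REST or CLI management APIs; the point of the sketch is that the hardware underneath is invisible to the consumer, who sees only capacity, interfaces and usage.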
As companies gather ever more data — including unstructured data (from social networks, for example), structured data (from business processes) and, increasingly, machine-generated data from IoT (Internet of Things) devices — so the demands on storage systems grow. That’s why ‘big data & hyperscale storage’ makes the top five in our trends ranking. Hyperscale storage was pioneered by web giants like Google, Facebook and Amazon, and involves a combination of low-cost commodity hardware, a highly automated software-defined control layer and the ability to scale rapidly up to petabyte-level capacity (1PB = 1,000TB). Few regular enterprises will operate at such capacities, but hyperscale-style techniques are certainly set to trickle down into terabyte-level deployments.
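The scale involved is easy to underestimate. A quick back-of-the-envelope calculation shows what a petabyte of usable capacity means in commodity drives; the drive size and replication factor below are illustrative assumptions (three-way replication is in the spirit of early GFS/HDFS-style designs, not a quote from any vendor):

```python
# Back-of-the-envelope hyperscale capacity arithmetic, using decimal
# units (1 PB = 1,000 TB). Drive size and replication factor are
# illustrative assumptions, not vendor figures.

PB_IN_TB = 1000
drive_size_tb = 4   # assumed commodity drive capacity
replication = 3     # assumed copies kept for durability

usable_pb = 1
raw_tb = usable_pb * PB_IN_TB * replication   # raw capacity required
drives_needed = raw_tb // drive_size_tb       # whole drives required

print(f"{usable_pb} PB usable at {replication}x replication needs "
      f"{raw_tb} TB raw = {drives_needed} x {drive_size_tb} TB drives")
# -> 1 PB usable at 3x replication needs 3000 TB raw = 750 x 4 TB drives
```

Seven hundred and fifty drives, plus the servers, networking and automation to manage them, is why hyperscale operators lean so heavily on commodity hardware and software-defined control layers.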
RECENT STORAGE SURVEYS
A couple of recent surveys give some useful insights into the current state and future prospects of the enterprise storage industry.
Emerging storage virtualisation specialist Tintri conducted a State of Storage survey in March 2015, canvassing the opinions of 1,020 data center professionals, most of whom (74%) currently use only ‘legacy’ storage providers such as Dell, EMC, Fujitsu, Hitachi, HP, IBM and NetApp.
The primary ‘pain points’ for these IT professionals are familiar enough: performance (cited by 50% of respondents); capital expenses (41%); scaling to manage growth (40%); and manageability (39%). Interestingly, respondents who used only legacy storage experienced significantly more pain in terms of performance (+23%) and manageability (+18%) than those using only storage from emerging providers.
In Tintri’s survey, two out of three respondents work for organisations where over 50 percent of workloads are virtualised, with storage and private cloud reported as likely to see significantly increased spending. This goes some way to explaining respondents’ present and predicted pattern of storage usage among ‘legacy’ and ‘emerging’ vendors:
In late 2014, enterprise data services platform provider CTERA surveyed 300 IT professionals in US organisations (60% of which were in IT services, financial services, government or retail). CTERA’s 2015 Enterprise Cloud Storage Survey delivered four key findings:
- Security and data governance concerns within the enterprise are driving EFSS (Enterprise File Sync and Share) adoption
- A huge opportunity exists for providers of private cloud storage tools and storage infrastructure-as-a-service (IaaS) to reduce costs while maintaining security and control
- Cloud storage gateways are replacing and augmenting traditional file servers and tape storage, particularly in remote or branch offices (ROBO)
- Organisations are under pressure to adopt contemporary cloud storage solutions that provide the visibility and control required to meet enterprise needs and industry regulations
Storage, in common with the rest of enterprise IT, is evolving to become a flexible servant of the business, able to adapt and scale as different needs arise. Key developments include the increasing prevalence of flash as it approaches and then overtakes hard disk in terms of cost-effectiveness, and the maturation of technologies like hyper-converged, software-defined and hyperscale storage. These and the other developments flagged up here, including the emergence of a new breed of storage vendors focused on flash and virtualisation, should bring enterprise storage into the big-data era, and also make traditionally intractable operations like backup, disaster recovery and archiving much more manageable.