According to a new forecast from the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, total spending on IT infrastructure products (server, enterprise storage, and Ethernet switches) for deployment in cloud environments is expected to total $46.5 billion in 2017 with year-over-year growth of 20.9%. Public cloud datacenters will account for the majority of this spending, 65.3%, growing at the fastest annual rate of 26.2%. Off-premises private cloud environments will represent 13% of cloud IT infrastructure spending, growing at 12.7% year over year. On-premises private clouds will account for 62.6% of spending on private cloud IT infrastructure and will grow 11.5% year over year in 2017.
Worldwide spending on traditional, non-cloud IT infrastructure is expected to decline by 2.6% in 2017 but will nevertheless account for the majority, 57.2%, of total end-user spending on IT infrastructure products across the three product segments, down from 62.4% in 2016. This represents a faster share loss than in the previous three years. The growing share of cloud environments in overall spending on IT infrastructure is common across all regions.
In cloud IT environments, spending in all three technology segments is forecast to grow by double digits in 2017. Ethernet switches and compute platforms will be the fastest growing at 22.2% and 22.1%, respectively, while spending on storage platforms will grow 19.2%. Investments in all three technologies will increase across all cloud deployment models – public cloud, private cloud off-premises, and private cloud on-premises.
Long term, IDC expects spending on off-premises cloud IT infrastructure to grow at a five-year compound annual growth rate (CAGR) of 12.0%, reaching $51.9 billion in 2021. Public cloud datacenters will account for 82.1% of this amount, growing at a 12.1% CAGR, while spending on off-premises private cloud infrastructure will increase at a CAGR of 11.7%. Combined with on-premises private cloud, overall spending on cloud IT infrastructure will grow at an 11.7% CAGR and by 2020 will surpass spending on non-cloud IT infrastructure. Spending on on-premises private cloud IT infrastructure will grow at a 10.8% CAGR, while spending on non-cloud IT (on-premises and off-premises combined) will decline at a 2.7% CAGR during the same period.
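For readers less familiar with how a CAGR figure translates into year-by-year values, the short sketch below shows the underlying compounding arithmetic. The 2016 base it derives is back-calculated purely for illustration and is not an IDC figure.

```python
# Illustrative CAGR arithmetic: value_n = value_0 * (1 + r) ** n
def value_after(base, cagr, years):
    """Project a starting value forward at a compound annual growth rate."""
    return base * (1 + cagr) ** years

def implied_base(final, cagr, years):
    """Back-calculate the starting value implied by a final value and a CAGR."""
    return final / (1 + cagr) ** years

# IDC forecasts off-premises cloud IT infrastructure spending of $51.9bn in 2021
# at a 12.0% five-year CAGR; the implied 2016 base works out at roughly $29.4bn.
base_2016 = implied_base(51.9, 0.12, 5)
print(round(base_2016, 1))                        # ~29.4
print(round(value_after(base_2016, 0.12, 5), 1))  # back to ~51.9
```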
"As adoption of public cloud services and private cloud deployments continue to spread around the world replacing traditional on-premises hardware-centric IT settings, overall market spending on servers, storage, and networking will follow this move," said Natalya Yezhkova, research director, Enterprise Storage. "The industry is getting closer to the point when cloud deployments will account for the majority of spending on IT infrastructure, which will be a major milestone embracing the benefits of service-centric IT."
New data from Synergy Research Group shows that just 20 metro areas account for 59% of worldwide retail and wholesale colocation revenues.
Ranked by revenue generated in Q3 2017, the top five metros are Washington, New York, Tokyo, London, and Shanghai, with those five metros alone accounting for 26% of the worldwide market. The next 15 largest metro markets account for another third of the worldwide market. Those top 20 metros include ten in North America, four in the EMEA region and six in the APAC region. Across the 20 largest metros, retail colocation accounted for 72% of Q3 revenues and wholesale 28%. In Q3 Equinix was the market leader by revenue in eight of the top 20 metros and Digital Realty would be the leader in five more if a full quarter of the acquired DuPont Fabros operations were included in its numbers. Other colocation operators that featured heavily in the top 20 metros include 21Vianet, @Tokyo, China Telecom, CoreSite, CyrusOne, Global Switch, Interxion, KDDI, NTT, SingTel and QTS.
Over the last four quarters colocation revenue growth in the top five metros outstripped growth in the rest of the world by two percentage points, so the worldwide market is slowly becoming more concentrated in those key metro areas. Among the top 20 metros, those with annualized growth rates of 15% or more (measured in local currencies) were Shanghai, Beijing, Hong Kong and Washington/Northern Virginia. All four saw strong growth in both the retail and wholesale segments of the market, though growth in wholesale tended to be higher. Chicago also saw very strong growth in wholesale revenues, though in this metro retail colocation growth was relatively weak.
“While we are seeing reasonably robust growth across all major metros and market segments, one number that jumps out is the wholesale growth rate in the Washington/Northern Virginia metro area,” said John Dinsdale, a Chief Analyst and Research Director at Synergy Research Group. “It is by far the largest wholesale market in the world and for it to be growing at 20% is particularly noteworthy. The broader picture is that data center outsourcing and cloud services continue to drive the colocation market, and the geographic distribution of the world’s corporations is focusing the colocation market on a small number of major metro areas.”
Data centre construction market in rival Frankfurt is ‘overheating’.
Brexit may be forcing banks to contemplate moving outside of London but the capital is set to continue to lead demand for data centres in Europe, despite ranking as the third most expensive city to build data centres in the world after New York and San Francisco.
According to the Data Centre Cost Index 2017 from leading professional services company Turner & Townsend, demand for data centres in London is outstripping supply from contractors and has driven up construction costs by 4.1 percent.
The Index analyses input costs – including labour and materials – and compares the cost of both new build and technical fit out projects across 18 established data centre markets worldwide.
Foreign investment and the growth of cloud services providers are key factors fuelling the growth of data centres in London. In the rival city of Frankfurt, the construction market for data centres is classified as ‘overheating’ due to a high number of projects and intense competition for physical resources and labour driving prices up.
As well as being a major financial hub, Frankfurt benefits from a high density of network service providers, is home to the world’s largest internet exchange and is considered a gateway to eastern and central Europe. Ahead of Brexit, the city is seeing increased demand for data centre space.
The Index reveals the significant construction disparities that exist across Europe.
Amsterdam, Paris and Stockholm have new build data centre construction costs which are up to 10 percent lower than London.
A key construction trend throughout Europe – and one set to continue in 2018 – is the prominence of UK and Ireland-based contractors securing projects across mainland Europe, particularly where local data centre expertise is not available on a large scale.
Contractors delivering new build facilities in Frankfurt can expect to command margins of up to 15 percent, while average margins for new build in London are 6.5 percent.
Dan Ayley, Director, Hi-technology and manufacturing at Turner & Townsend, said: “Despite Brexit headwinds, London retains its title as Europe’s leading data centre hub with approximately double the space and usage requirements of rival Frankfurt. This is set to continue in 2018 as high construction costs for new build facilities in both cities reflect the buoyancy of these markets and growing labour shortages.
“Data is one of the world’s most valuable commodities and in European hubs, data centre demand is being driven by the rapid adoption of cloud services and the emergence of business models based around the Internet of Things. This in turn is increasing competition to provide space at a lower cost and investors, developers and operators need certainty that they are building at the most competitive price. Against this backdrop they must proactively review and benchmark cost trends to be able to assess opportunities at a regional and global level.”
New York and San Francisco are the most expensive places to build data centres in the world with construction costs in these cities as much as 25 percent higher than other established data centre markets.
As one of the fastest growing data centre markets in the world, Dublin was ranked as one of the most cost-effective locations to build, but cost inflation in 2018 is anticipated to rise to 8 percent. While tax incentives remain attractive to operators, the city’s shortage of available power continues to present construction challenges.
Full-year combined EMEA ACV reaches €12.2 billion, up 3 percent on prior year.
The EMEA sourcing market rebounded in the final quarter of 2017, with double-digit growth in both traditional and as-a-service contracting values over the previous quarter, according to the findings of the 4Q17 EMEA ISG Index™ released by Information Services Group (ISG) (Nasdaq: III), a leading global technology research and advisory firm.
Data from the EMEA ISG Index™, which measures commercial outsourcing contracts with annual contract value (ACV) of €4 million or more, reveal that the EMEA market, which fell sharply in the third quarter after starting the year strongly, rebounded in the fourth quarter, up 27 percent sequentially. Year over year, however, EMEA was flat, with combined fourth-quarter ACV of €3 billion. Traditional sourcing, at €1.9 billion, was down 11 percent versus the prior year, while as-a-service sourcing, at a record €1.1 billion for the region, was up 27 percent.
For the full year, EMEA generated €12.2 billion in ACV, up 3 percent against 2016. Traditional sourcing ACV of €8.3 billion was down 8 percent, but as-a-service sourcing soared 41 percent, to €3.9 billion, as demand for cloud-based services accelerated across the region.
The rise of as-a-service sourcing was driven by demand for Infrastructure-as-a-Service (IaaS). While Software-as-a-Service (SaaS) ACV of €900 million was flat for the year, IaaS ACV soared 58 percent versus the prior year, reaching €3 billion. In contrast, traditional sourcing ACV declined for the fourth consecutive year, despite a slight increase in the number of contracts. Although ITO values, at €6.5 billion, remained steady, BPO ACV in the region fell to €1.8 billion, its lowest level in a decade.
Globally, the market also saw a strong final quarter, with ACV for the combined market establishing a new record at €34.6 billion. Global trends reflect those in EMEA, with newer technologies continuing to gain momentum. As-a-service continued to reach new highs on a global scale, overtaking traditional sourcing in the Americas and Asia Pacific in the fourth quarter, as businesses spend more on cloud-based technology to drive digital transformation.
Continued strong demand driving investor appetite in Europe.
The data centre sector is seeing record investment levels as investors seek exposure to the record market growth in Europe. This investment is driven by take-up of colocation power hitting a Q3 YTD record of 86MW across the four major markets of London, Frankfurt, Amsterdam and Paris, according to global real estate advisor, CBRE.
There is also a substantial amount of new supply across the major markets as developers look to capture demand in a sector where speed-to-market is still key. The four markets are on course to see 20% of all market supply brought online in a single year. This 20% new supply, projected at 195MW for the full-year, equates to a capital spend of over £1.2 billion.
London has been centre-stage for European activity in 2017: its 41MW of take-up in the Q1-Q3 period represents 48% of the European total and has dampened any concerns over the short-term impact of Brexit on the market. London was also home to two key investment transactions in Q3, as Singapore’s ST Telemedia acquired full control of VIRTUS and Iron Mountain acquired two data centres from Credit Suisse, one of which is in London.
CBRE projects that the heightened market activity seen so far in 2017 will continue into Q4 in three forms:
A continuation of strong demand, including significant moves into the market from Chinese cloud and telecoms companies.
Further new supply: CBRE projects that 80MW of new supply will come online in Q4, including from several brand-new companies such as datacenter.com, KAO and maincubes.
Ongoing investment activity, with at least one major European investment closed out by the year-end.
Mitul Patel, Head of EMEA Data Centre Research at CBRE, commented: “2017 has been a remarkable year for colocation in Europe and, with 2018 set to follow suit, any thoughts that 2016 might have been a one-off have been allayed. We have entered a ‘new norm’ for the key hubs in Europe, where market activity is double what we have been accustomed to in the pre-2016 years.”

Research shows that managed services may be the only chance for growth in the IT industry’s channel; resellers are still switching to this sales model in large numbers, but the sales process and customer relationships are very different, and some of the technology issues need new skills.
Managed services in 2018 will need to deal with a number of issues – some, like security, are external; others, like the changes needed in sales processes and customer engagement, are internal. One of the main pressures will continue to be the availability of skilled resources, both in sales and in the area of security, where GDPR, to be introduced in May 2018, will provide the main impetus for most MSPs to re-analyse their positions.
Research from IT Europa ( http://www.iteuropa.com/?q=market-intelligence/managed-service-providers-europe-msps-top-1000) and others shows that there is a continuing race to scale, as the economics of managed services depend on having a large customer base; but at the same time, because of the expertise needed to deliver specific vertical market applications, many are having to build on their strengths and specialise further.
The changing nature of managed services….
The bigger MSPs are growing fastest, says the research, and in Europe the Netherlands has overtaken Germany in the number of large MSPs. The Netherlands has seen a dramatic acceleration in the number of data centres situated there in recent years. The UK is still the biggest, with 36% of Europe’s largest managed services providers, and remains the largest individual market. The technology is changing as well: when asked about what is on the horizon, MSPs say the Internet of Things (IoT) has started to appear as an MSP solution area.
This latest study of Europe’s managed services providers shows increased consolidation as well as more specialisation by application area. In the 2017 study of the top 1,500 MSPs, the listed companies – 112 in number – saw their sales rise by 7.5% year on year. The smaller independents, by contrast, managed lower growth of 5.5%. One reason for the changes has been the rush for scale among managed services companies, with a high rate of mergers among the small players and acquisitions by larger firms. There seems to be no shortage of available funding, either from the industry itself or from venture capital.
These results and a wider discussion on the changing nature of the MSP will be a feature of the Managed Services & Hosting Summit – Europe, at the Novotel Amsterdam City, Amsterdam, on 29th May 2018 (http://www.mshsummit.com/amsterdam/index.php).
This is an invitation-only executive-level conference exploring the business opportunities for the ICT channel around the delivery of Managed Services and Hosting. Topics for discussion will include sales and marketing processes, GDPR, building value in a business with an eye on the mergers and acquisitions market, and skills development to get into those higher margin areas. This is a timely event as the rapid and accelerating change in the way customers wish to purchase, consume and pay for their IT solutions is forcing the channel to completely redefine its role, business models and relationships.
The Managed Services & Hosting Summit is firmly established as the leading managed services event for channel organisations. Now in its eighth year as a UK event, the Managed Services & Hosting Summit Europe is being staged for the second time in Amsterdam and will examine the issues facing Managed Service Providers, hosting companies, channel partners and suppliers as they seek to add value and evolve new business models and relationships.
The Managed Services & Hosting Summit – Europe 2018 features conference session presentations by major industry speakers and a range of breakout sessions exploring in further detail some of the major issues impacting the development of managed services.
The summit will also provide extensive networking time for delegates to meet with potential business partners. The unique mix of high-level presentations, plus the ability to meet, discuss and debate the related business issues with sponsors and peers across the industry, makes this a must-attend event for any senior decision maker in the ICT channel.
The next Data Centre Transformation events, organised by Angel Business Communications in association with DataCentre Solutions, the Data Centre Alliance, The University of Leeds and RISE SICS North, take place on 3 July 2018 at the University of Manchester and 5 July 2018 at the University of Surrey.
For the 2018 events, we’re taking our title literally, so the focus is on each of the three strands of our title: DATA, CENTRE and TRANSFORMATION.
The DATA strand will feature two Workshops on Digital Business and Digital Skills together with a Keynote on Security. Digital transformation is the driving force in the business world right now, and the impact this is having on the IT function and, crucially, the data centre infrastructure of organisations is something that is, perhaps, not yet fully understood. No doubt this is in part due to the lack of digital skills available in the workplace right now – a problem which, unless addressed urgently, will only continue to grow. As for security, hardly a day goes by without news headlines focusing on the latest high-profile data breach at some public or private organisation. Digital business offers many benefits, but it also introduces further potential security issues that need to be addressed. The Digital Business, Digital Skills and Security sessions at DCT will discuss the many issues that need to be addressed and, hopefully, come up with some helpful solutions.
The CENTRE strand features two Workshops on Energy and Hybrid DC with a Keynote on Connectivity. Energy supply and cost remain a major part of the data centre management piece, and this track will look at the technology innovations that are impacting the supply and use of energy within the data centre. Fewer and fewer organisations have pure-play in-house data centre real estate; most now make use of some kind of colo and/or managed services offerings. Further, the idea of one or a handful of centralised data centres is now being challenged by the emergence of edge computing. So, in-house and third-party data centre facilities, combined with a mixture of centralised, regional and very local sites, make for a very new and challenging data centre landscape. As for connectivity – feeds and speeds remain critical for many business applications, and it’s good to know what’s around the corner in this fast-moving world of networks, telecoms and the like.
The TRANSFORMATION strand features Workshops on Automation and The Connected World together with a Keynote on Automation (AI/IoT). IoT, AI, ML, RPA – automation in all its various guises is becoming an increasingly important part of the digital business world. In terms of the data centre, the challenges are twofold. How can these automation technologies best be used to improve the design, day-to-day running, overall management and maintenance of data centre facilities? And how will data centres need to evolve to cope with the increasingly large volumes of applications, data and new-style IT equipment that provide the foundations for this real-time, automated world? Flexibility, agility, security, reliability, resilience, speeds and feeds – they’ve never been so important!
Delegates select two 70 minute workshops to attend and take part in an interactive discussion led by an Industry Chair and featuring panellists - specialists and protagonists - in the subject. The workshops will ensure that delegates not only earn valuable CPD accreditation points but also have an open forum to speak with their peers, academics and leading vendors and suppliers.
There is also a Technical track where our Sponsors will present 15 minute technical sessions on a range of subjects. Keynote presentations in each of the themes together with plenty of networking time to catch up with old friends and make new contacts make this a must-do day in the DC event calendar. Visit the website for more information on this dynamic academic and industry collaborative information exchange.
This expanded and innovative conference programme recognises that data centres do not exist in splendid isolation, but are the foundation of today’s dynamic, digital world. Agility, mobility, scalability, reliability and accessibility are the key drivers for the enterprise as it seeks to ensure the ultimate customer experience. Data centres have a vital role to play in ensuring that the applications and support organisations can connect to their customers seamlessly – wherever and whenever they are being accessed. And that’s why our 2018 Data Centre Transformation events, Manchester and Surrey, will focus on the constantly changing demands being made on the data centre in this new, digital age, concentrating on how the data centre is evolving to meet these challenges.
Angel Business Communications has announced the categories and entry criteria for the 2018 Datacentre Solutions Awards (DCS Awards).
The DCS Awards are designed to reward the product designers, manufacturers, suppliers and providers operating in the data centre arena, and are updated each year to reflect this fast-moving industry. The Awards recognise the achievements of vendors and their business partners alike, and this year encompass a wider range of project, facilities and information technology award categories, as well as Individual and Innovation categories, designed to address all the main areas of the datacentre market in Europe.
The DCS Awards categories provide a comprehensive range of options for organisations involved in the IT industry to participate. You are encouraged to make your nominations as soon as possible in the categories where you think you have achieved something outstanding, or where you have a product that stands out from the rest, to be in with a chance of winning one of the coveted crystal trophies.
This year’s DCS Awards continue to focus on the technologies that are the foundation of a traditional data centre, but we’ve also added a new section which focuses on Innovation with particular reference to some of the new and emerging trends and technologies that are changing the face of the data centre industry – automation, open source, the hybrid world and digitalisation. We hope that at least one of these new categories will be relevant to all companies operating in the data centre space.
The editorial staff at Angel Business Communications will validate entries and announce the final short list to be forwarded for voting by the readership of the Digitalisation World stable of publications during April and May. The winners will be announced at a gala evening on 24th May at London’s Grange St Paul’s Hotel.
The 2018 DCS Awards feature 26 categories across five groups. The Project and Product categories are open to end-user implementations and services, and to products and solutions that have been available (i.e. shipping in Europe) before 31st December 2017. Company nominees must have been present in the EMEA market prior to 1st June 2017. Individuals must have been employed in the EMEA region prior to 31st December 2017, and Innovation Award nominees must have been introduced between 1st January and 31st December 2017.
Nomination is free of charge and each entry can include up to two supporting documents to enhance the submission. The deadline for entries is 9th March 2018.
Please visit: www.dcsawards.com for rules and entry criteria for each of the following categories:
DCS Project Awards
Data Centre Energy Efficiency Project of the Year
New Design/Build Data Centre Project of the Year
Data Centre Automation and/or Management Project of the Year
Data Centre Consolidation/Upgrade/Refresh Project of the Year
Data Centre Hybrid Infrastructure Project of the Year
DCS Product Awards
Data Centre Power Product of the Year
Data Centre PDU Product of the Year
Data Centre Cooling Product of the Year
Data Centre Facilities Automation and Management Product of the Year
Data Centre Safety, Security & Fire Suppression Product of the Year
Data Centre Physical Connectivity Product of the Year
Data Centre ICT Storage Product of the Year
Data Centre ICT Security Product of the Year
Data Centre ICT Management Product of the Year
Data Centre ICT Networking Product of the Year
DCS Company Awards
Data Centre Hosting/co-location Supplier of the Year
Data Centre Cloud Vendor of the Year
Data Centre Facilities Vendor of the Year
Data Centre ICT Systems Vendor of the Year
Excellence in Data Centre Services Award
DCS Innovation Awards
Data Centre Automation Innovation of the Year
Data Centre IT Digitalisation Innovation of the Year
Hybrid Data Centre Innovation of the Year
Open Source Innovation of the Year
DCS Individual Awards
Data Centre Manager of the Year
Data Centre Engineer of the Year
New innovations that combine and optimise IT at the physical, logical and virtual levels are driving new opportunities – and new challenges – for datacentre operators and owners and their suppliers. This article provides an overview of some of these trends and innovations, as an extract from 451 Research’s 2018 Trends in Datacenters and Critical Infrastructure report.
By Rhonda Ascierto, Research Director for the Datacenters & Critical Infrastructure Channel at 451 Research.
DMaaS, which first entered the market in late 2016, goes beyond DCIM as a service. It is part of a long-term evolutionary change toward integrating physical datacenter infrastructure management with many other services, including – but not limited to – IT workload management, energy management, connectivity and business costing. It is delivering some of the benefits of DCIM but with lower barriers by offering a simplified, low-touch and low-cost (or at least opex-based) alternative.
Based on DCIM software, these services aggregate and normalize monitored data via a gateway on-premises. The data is encrypted and sent to a provider’s cloud, where it is anonymized, pooled and analyzed, often using machine learning technology. In 2018, we expect the application of deep neural networks will begin. DMaaS also ties into remote and on-premises maintenance and fix services, enabling a full-service business model for suppliers.
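To make the data flow described above more concrete, here is a minimal, hypothetical sketch of the first two steps – normalising raw readings and anonymising the site identifier before the data is pooled. It is illustrative only; the field names and functions are invented and do not correspond to any vendor's DMaaS implementation.

```python
# Illustrative sketch (not any vendor's implementation): normalising and
# anonymising facility telemetry before it is pooled for analysis.
import hashlib

def normalise_reading(reading):
    """Convert a raw sensor reading into a common unit set (kW, degrees C)."""
    value, unit = reading["value"], reading["unit"]
    if unit == "W":
        value, unit = value / 1000.0, "kW"
    elif unit == "F":
        value, unit = (value - 32) * 5.0 / 9.0, "C"
    return {**reading, "value": round(value, 3), "unit": unit}

def anonymise_site(reading, salt="example-salt"):
    """Replace the customer-identifying site name with a one-way hash."""
    token = hashlib.sha256((salt + reading["site"]).encode()).hexdigest()[:12]
    return {**reading, "site": token}

raw = [
    {"site": "London-DC1", "sensor": "ups_load", "value": 42000, "unit": "W"},
    {"site": "London-DC1", "sensor": "inlet_temp", "value": 75.2, "unit": "F"},
]

pooled = [anonymise_site(normalise_reading(r)) for r in raw]
print(pooled)
```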
DMaaS does not yet match the breadth of capabilities available from traditional DCIM, but over time, we expect it will go further, enabling peer benchmarking, 'guaranteed' outcomes as a service, and the leasing of assets with maintenance contracts - assets as a service. DMaaS will increasingly take the datacentre world beyond DCIM, and beyond single-site, proprietary management.
Emerging demand for the Internet of Things and distributed cloud architectures will necessitate the build-out of greater IT capacity closer to industrial environments and around the edges of carrier networks, to address increasing data volumes and to mitigate the cost of subpar response times.
The need to optimise IT services and next-generation networks, and to connect myriad machines and sensors, will in many cases drive the need for new datacentre form factors – self-contained, highly integrated and compact micro-modular datacenters, or MMDCs, that can be deployed in small spaces virtually anywhere.
MMDCs will be lights-out, requiring remote IT and datacenter management software, including for monitoring, workload management and energy efficiency. Managing and upgrading many distributed sites will often require a distributed "fabric" of software to manage replication, availability and integrity. This will likely comprise a range of distributed applications that will need to be managed by layers of middleware, orchestration and other automation tools.
While the market for MMDCs is nascent today, that could change with just a few major projects, which are most likely to happen in the telecommunications, financial services, and logistics and transport sectors.
As enterprise IT spreads its work and its data across a diverse, hybrid infrastructure, the interconnected ecosystem of systems and services requires ever more complex software and network engineering, and greater use of third parties. Some of the applications will be cloud native, but others will not be designed to take advantage of cloud architectures.
The result is that enterprises may now enjoy more function, and more availability, and at lower cost – but they may have also traded this for a loss in accountability, predictability and control. And when failures do occur, sometimes in a gradual, partial way, and at other times on a grand scale, stabilisation and recovery can take hours or days.
For 2018, especially as new edge services and systems are deployed, we expect resiliency of complex architectures to be a growing concern, with some operators and providers getting ahead with new orchestration, analysis and monitoring tools, and service providers under pressure to be more open and responsive. Public cloud providers, colocation companies and hosting companies will increasingly be required to offer more information and monitoring systems, as well as build in recovery services, if they are to capture the next stage of cloud deployments.
The way that datacentre operators design and construct facilities is changing, with prefabricated and modular designs in the ascendant. This is being driven by a variety of factors, including the rapid pace of change of IT as well as an overall drive for efficiencies through standardisation and industrialisation. This will shorten build times and result in lower cost facilities supported by novel power and cooling systems.
Different operators use prefabrication in varying ways. At the very top, hyperscalers and multi-tenant datacenter (MTDC) providers will maintain some highly custom engineering efforts but are increasingly standardizing across their new builds to seek both capital and operational efficiencies. The world’s largest MTDC operators and others install datacenter subsystems in a largely repeatable pre-integrated fashion in a bid to cut cost, speed up new capacity installations, and, consequently, to align capital outlays better with demand. Operations across standardised facilities become simplified, more automated (supported by DMaaS) and less prone to human error.
Small- and medium-sized operators, on the other hand, are gravitating toward taking semi-custom prefabricated facilities designs where suppliers have already undertaken most of the engineering effort. An analogy is how commercial aircraft can be configured and built-to-order within the boundaries of standardised design frameworks.
Hyperscalers deploy open source hardware and software designs to drive efficiencies and scale, and this is increasingly putting pressure on service providers, enterprises and MTDC operators to consider similar architectures.
While growth will continue to come mostly from hyperscalers for the foreseeable future, owing to challenges in procurement, test and certification, there have been a number of initiatives to facilitate adoption for non-hyperscalers including: an online ‘Marketplace’ run by OCP to procure equipment, colocation providers offering OCP-compliant space, and the emergence of system integrators, hardware OEM vendors and others building ‘end-to-end’ services around open-source hardware procurement and technical support.
Network service providers are also assessing open architectures as they begin to move core network components and other communication applications into private cloud architectures. Higher performance, increased architectural agility and efficiency and ability to scale-out are key requirements demanded by this industry as it moves toward cloud networks and 5G connectivity, and factors that may make open source designs attractive to this vertical. The proliferation of IoT and distributed workloads is another opportunity for open infrastructure as the industry re-thinks datacenter architectures at the edge.
Operators of mission critical facilities are growing increasingly concerned about their reliance on utility grids to provide highly available power, for reasons ranging from cyberterrorism threats to the effects of climate change, and including economic and sustainability goals. More companies are seeking power assurance beyond what utilities can provide. This will encourage more investment and innovation in new approaches to supplying and distributing power and storing energy in more localized and distributed ways.
Microgrids are an area of increasing activity: multiple sources of electricity generation are connected and controlled by software to maintain a stable supply of power to a set of local users, independent of the utility grid.
Datacentres are also likely to increasingly be viewed as microgrids: they already have backup power generation (typically diesel generators) and are moving toward more software-driven power with more power sources. There is a clear movement inside datacenters toward exploiting existing redundant backup power sources, such as UPSs and generators, as well as adding some auxiliary capacity using renewable energy.
The recent application of battery technologies, such as Lithium-ion and Nickel-Zinc, as well as fuel-cell technologies in datacenters will enable more operators to explore strategies to deploy reserve power on-site for both operational and back-up power needs. Demand response and transactive energy systems are helping to justify investments in local energy generation, a trend that we expect will continue.
By Steve Hone CEO, The DCA
This month’s journal theme is ‘service availability and resilience’. The question I would like to pose is: which is more important to today’s consumer? If you walked into a Starbucks and asked this question, I am pretty sure most people would say service availability. No one really cares how their skinny latte arrives as long as it does arrive, and no one is interested in the technology that provides free in-store Wi-Fi; they just want to check their email or stream the latest Game of Thrones episode.
Having said that, service availability doesn’t just happen by magic. I could easily imagine that most customers, unless they were weird engineers like myself, wouldn’t bother to sneak a peek backstage to see what really goes on behind the scenes to provide the seamless service they are paying for. If they did, I am confident most would be either amazed or appalled by what they found. Let’s face it, no one likes poor service; it’s not good for one’s business or reputation, which in most cases takes years to build and seconds to destroy. Funnily enough, despite what seems to be an obvious risk to one’s business, organisations tend to respond very differently to this threat, and often it’s the larger organisations which seem slower to respond, due to internal politics or processes.
Service availability is also not just about how quickly you can respond; it is also about how you respond, and this means understanding all the risks involved. By that I mean both the risk of doing something and the risk to the business of doing nothing at all. Many organisations tend to avoid the path less travelled, thinking it will never happen to them or that lightning will never strike twice. I’m afraid to say this is nothing more than a myth, and if you have any doubts a quick Google search will return a sorry list of businesses who all thought they were invincible and who all have one thing in common: they resisted or feared change.
In business the drive for change is often resisted. There exists a clear conflict between the desire to progress and reluctance to change which is easy to understand. In the red corner you have the development teams who are always looking at how new technologies can drive the business forward; while in the blue corner you have the IT operations team, tasked with maintaining order and a stable infrastructure where change is often seen as a disaster just waiting to happen, thus creating a default “NO” response to any new ideas. This may well appear to be the safe option, but now more than ever businesses must equally recognise they need to evolve to survive.
I recall Roger Keen former MD of City Lifeline telling me once, if you had asked who would be the world’s largest music company twenty-five years ago it would have been EMI, Virgin or Sony. Twenty five years on and it’s turned out to be none of the above. Apple is now the largest music company in the world and ironically, it’s not even a music company, it’s a technology company.
Please don’t think for one minute that the answer to guaranteed success is just to invest in more and more of the latest cutting-edge technology. You would be missing one vital ingredient, namely “vision”. In the face of rapidly changing times it is the vision organisations have and the agile way they use the technology that ultimately decides who makes it to the finishing line these days.
This is nicely summed up by the line on natural selection often attributed to Charles Darwin: “it is not the strongest or the most intelligent who will survive but those who can best manage change”. If maximum service availability is the ultimate prize in today’s connected world, the winners may NOT be the ones with the best technology but those who adapt best as technology changes.
Thank you again to all the members who contributed articles to this month’s DCA Journal. Next month the theme focuses on ‘Industry trends and innovation’ and the call for papers is already out, so you have until 13th February to contribute and submit your articles.
By Alex Taylor, Anthropology PhD student, University of Cambridge
CNet Training recently welcomed Alex Taylor, an anthropology PhD student from the University of Cambridge, onto its Certified Data Centre Management Professional (CDCMP®) education program. Alex recently researched the practices and discourses of data centres. In this article, he outlines his research in more detail and explains how the education program contributed to his ongoing anthropological exploration of the data centre industry.
Traditionally, anthropologists would travel to a faraway land and live among a group of people so as to learn as much about their culture and ways of life as possible. Today, however, we conduct fieldwork with people in our own culture just as much as those from others. As such, I am currently working alongside people from diverse areas of the data centre industry in order to explore how data centre practices and discourses imaginatively intersect with ideas of security, resilience, disaster and the digital future.
Data centres pervade our lives in ways that many of us probably don’t even realise and we rely on them for even the most mundane activities, from supermarket shopping to satellite navigation. These data infrastructures now underpin such an incredible range of activities and utilities across government, business and society that it is important we begin to pay attention to them.
I have therefore spent this year navigating the linguistic and mechanical wilderness of the data centre industry: its canyons of server cabinet formations, its empty wastelands of white space, its multi-coloured rivers of cables, its valleys of conferences, expos and trade shows, its forests filled with the sound of acronyms and its skies full of twinkling server lights.
While data centres may at first appear without cultural value, just nondescript buildings full of pipes, server cabinets and cooling systems, these buildings are in fact the tips of a vast sociocultural iceberg of ways that we are imagining and configuring both the present and the future. Beneath their surface, data centres say something important about how we perceive ourselves as a culture at this moment in time and what we think it means to be a ‘digital’ society. Working with data centres, cloud computing companies and industry education specialists such as CNet Training, I am thus approaching data centres as socially expressive artefacts through which cultural consciousness (and unconsciousness) is articulated and communicated.
CNet Training recently provided me with something of a backstage pass to the cloud when they allowed me to audit their CDCMP® data centre program. ‘The cloud’, as it is commonly known, is a very misleading metaphor. Its connotations of ethereality and immateriality obscure the physical reality of this infrastructure and seemingly suggest that your data is some sort of evaporation in a weird internet water cycle. The little existing academic research on data centres typically argues that the industry strives for invisibility and uses the cloud metaphor to further obscure the political reality of data storage. My ethnographic experience so far, however, seems to suggest quite the opposite; that the industry is somewhat stuck behind the marketable but misleading cloud metaphor that really only serves to confuse customers.
Consequently, it seems that a big part of many data centres’ marketing strategies is to raise awareness that the cloud is material by rendering data centres more visible. We are thus finding ourselves increasingly inundated with high-res images of data centres displaying how stable and secure they are. Data centres have in fact become something like technophilic spectacles, with websites and e-magazines constantly showcasing flashy images of these technologically-endowed spaces. The growing popularity of data centre photography – a seemingly emerging genre of photography concerned with photographing the furniture of data centres in ways that make it look exhilarating – fuels the fervour and demand for images of techno-spatial excess. Photos of science fictional datacentrescapes now saturate the industry and the internet, from Kubrickian stills of sterile, spaceship-like interiors full of reflective aisles of alienware server cabinets to titillating glamour shots of pre-action mist systems and, of course, the occasional suggestive close-up of a CRAC unit. One image in particular recurs in data centre advertising campaigns and has quickly become what people imagine when they think of a data centre: the image of an empty aisle flanked by futuristic-looking server cabinets bathed in the blue light of coruscating LEDs.
With increased visibility comes public awareness of the physical machinery that powers the cloud mirage. This new-found physicality brings with it the associations of decay, entropy and, most importantly, vulnerability that are endemic to all things physical. As counterintuitive as it may seem, vulnerability is what data centres need so that they may then sell themselves as the safest, most secure and resilient choice for clients.
The combination of the confusing cloud metaphor with the almost impenetrable, acronym-heavy jargon and the generally inward-looking orientation of the data centre sector effectively blackboxes data centres and cloud computing from industry outsiders. This means that the industry has ended up a very middle-aged-male-dominated industry with a severe lack of young people, despite the fact that it’s one of the fastest growing, most high-tech industries in the UK and expected to continue to sustain extraordinary growth rates as internet usage booms with the proliferation of Internet-of-Things technologies. This also makes data centres ripe territory for conspiracy theories and media interest, which is another reason why they increasingly render themselves hyper-visible through highly publicised marketing campaigns. You often get the feeling, however, that these visual odes to transparency are in actual fact deployed to obscure something else, like the environmental implications of cloud computing or the fact that your data is stored on some company’s hard drives in a building somewhere you’ll never be able to access.
Furthermore, while cloud computing makes it incredibly easy for businesses to get online and access IT resources that once only larger companies could afford, the less-talked-about inverse effect of this is that the cloud also makes it incredibly difficult for businesses to not use the cloud. Consider, for a moment, the importance of this. In a world of near-compulsory online presence, the widespread availability and accessibility of IT resources makes it more work for businesses to get by without using the cloud. The cloud not only has an incredibly normative presence but comes with a strange kind of (non-weather-related) pressure, a kind of enforced conformity to be online. It wouldn’t be surprising if we begin to see resistance to this, with businesses emerging whose USP is simply that they are not cloud-based or don’t have an online presence.
And the current mass exodus into the cloud has seemingly induced a kind of ‘moral panic’ about our increasing societal dependence upon digital technology and, by extension, the resilience, sustainability and security of digital society and the underlying computer ‘grid’ that supports it. Fear of a potential digital disaster in the cloud-based future is not only reflected by cultural artifacts such as TV shows about global blackouts and books about electromagnetic pulse (EMP), but is also present in a number of practices within the data centre industry, from routine Disaster Recovery plans to the construction of EMP-proof data centres underground for the long-term bunkering of data.
With the help of organisations like CNet Training I am thus studying the social and cultural dynamics of data-based digital ‘civilisation’ by analysing the growing importance of data infrastructures. Qualitative anthropological research is participatory in nature and, as such, relies upon the openness of the people, organisations and industries with whom the research is conducted. Every industry has its own vocabularies, culture, practices, structures and spheres of activity and CNet Training’s CDCMP® program acted as a vital window into the complexity of data centre lore. It provided me with a valuable insider’s way to learn the hardcore terms of data centre speak and also with the opportunity to meet people from all levels of the industry, ultimately equipping me with a detailed, in-depth overview of my field-site. Interdisciplinary and inter-industry sharing of information like this, where technical and academically-orientated perspectives and skills meet, helps not only to bridge fragmented education sectors, but to enable rewarding and enriching learning experiences. I would like to sincerely thank the CNet Training team for assisting my research.
For more information go to: http://www.cnet-training.com/
By Dave Harper, Mechanical Engineer, DUNWOODY LLP
Availability and resilience are both terms that often arise when looking at data centres but are not necessarily always fully understood. Resilience is considered for each single component of the system required to make a data centre function; a component can be as broad as the site-wide electrical feed or as specific as the extract fans for a single room.
The common levels of resilience and their general meanings, in ascending order of resilience, are:
N – just enough of the service to serve the full room demand and no more.
N+1 – enough of the service to serve the full room demand with one minimal maintainable component (such as a single room cooler or UPS unit) offline.
2N – two independent systems, either of which is capable of serving the full room demand; these should be located so that a single large physical incident (such as someone making a big mistake with a forklift) cannot plausibly affect both.
2(N+1) – two independent systems, either of which is capable of serving the full room demand whilst having one minimal maintainable component offline.
These can all be observed directly from the design of the data centre and, barring shortfalls from the definitions that were missed in the design, are facts known in advance. Availability, on the other hand, is a very simple metric to measure once a data centre is operating, but at best an estimate at the design stage. Availability is generally expressed in 9s of uptime: the percentage of the time that the actual computing load of the data centre is available to its users, starting from 90%, which is one 9. 90% availability indicates an expectation of approximately 36 days of downtime over a year, whilst 99.99% availability indicates approximately 1 hour of downtime in a year – or, more likely, 4 hours of downtime at some point during 4 years.
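As a quick illustration of the arithmetic behind the ‘nines’, the sketch below converts an availability percentage into expected downtime per year (it ignores leap years and planned maintenance, and is purely illustrative):

```python
# Convert an availability percentage ("nines") into expected downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def downtime_hours_per_year(availability_pct):
    """Expected unavailable hours per year at a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100.0)

for pct in (90.0, 99.0, 99.9, 99.99):
    hours = downtime_hours_per_year(pct)
    print(f"{pct}% availability -> {hours:.1f} hours (~{hours / 24:.1f} days) of downtime per year")

# 90%    -> 876.0 hours (~36.5 days) of downtime per year
# 99.99% -> ~0.9 hours (roughly 53 minutes) of downtime per year
```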
There are a number of competing standards that attempt to establish a baseline expectation for the relationship between resilience and availability. The most commonly referenced are the Uptime Institute Tier system, BS 50600, Syska Criticality Levels and the American standard TIA-942. All of these work to a similar concept: the first four tiers/levels of expected availability/criticality look to the weakest link of any given data centre to define its level. The principle is that additional spending in one particular area comes with significantly diminishing returns compared to bringing all aspects of the data centre to the same level. As M&E consultants we can only finalise the HVAC and electrical elements of the design, but must always be aware that those should fit within a system of equivalent resilience levels for telecommunications, site security, staffing, etc.
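The weakest-link principle can be illustrated with a trivial sketch: however highly the other subsystems are rated, the overall level is capped by the lowest-rated one. The subsystem names and levels below are hypothetical, not drawn from any of the standards cited above.

```python
# Weakest-link principle: a facility's overall rating is capped by its
# lowest-rated subsystem, however highly the others are specified.
subsystem_levels = {
    "power_distribution": 3,      # hypothetical ratings, not a real assessment
    "environmental_control": 3,
    "telecoms_cabling": 2,
    "security_systems": 3,
}

overall_level = min(subsystem_levels.values())
weakest = [name for name, lvl in subsystem_levels.items() if lvl == overall_level]
print(f"Overall level: {overall_level} (limited by: {', '.join(weakest)})")
```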
Each of the components of the data centre has different resilience requirements and conceptual failure impacts at each of the tiers. BS 50600 is split, for example, into Building Construction, Power Distribution, Environmental Control, Telecommunications Cabling Infrastructure, Security Systems, and Management and Operational Information, each of which further subdivides into various components with differing requirements. The major question for any given element is how long, in the case of a failure, it takes to impact the IT load. The answers can then be sorted into two simple categories: components which, if a failure is identified as soon as it occurs, could be fixed or replaced before the IT load is affected, and those which could not be. Of the physical infrastructure components that go into building a new data centre, very few fall into the latter category without resilience. Those you might intuit would fall into the former category largely do so by not being required under normal operation – things like generators.
The critical aspect, which is easy to miss without an annotated checking process, is that in most cases failures do not actually manifest until the equipment is called upon to operate under genuine load conditions, making the earliest possible point of detection too late to fix or replace the component without impacting the IT, unless resilience is in place. Whilst it may once have been possible to survive by propping open the hall doors and putting some big fans in (so long as the failure happened on a cold day), with the significantly higher power densities and the security and control requirements of modern data centres this is no longer possible. If the cooling equipment is no longer able to handle the load, the time to heat shutdown is measured in a small number of minutes (and the IT equipment should probably be shut down before that point to avoid permanent damage). This leads to standards establishing what is considered adequate for each category in a broad sense, based on whatever considerations of optimisation the standard writers had in mind for each normal sort of element.
This is all well and good within what has become the standard and common data centre – air cooling within a hall – but becomes more difficult to apply as one looks to some of the more recent developments in high-density data hall solutions. The major consideration is direct liquid cooling of the servers, where it becomes much more ambiguous where the line is drawn between a component being part of the main system, which requires resilience under the standards, and being part of an individual server, which is less directly considered by the standards. This becomes particularly difficult to judge for the purposes of leakage, since the risk is not only of pipework leakage – where, under air-cooling regimes, the system could be shut down and a redundant system allowed to take over – but of individual server leakage, which is much more challenging to shut down without impacting the data hall.
Whilst there are well-established standards used to link resilience to availability, as data centre methodologies develop and potentially become more diverse, their status as guidelines rather than requirements becomes clearer.
For enquiries: mail@dunwoody.uk.com
The Market Directory is now live
The Public Sector has seen a massive increase in the utilisation of digital services in a drive to streamline and improve the efficiency of both internal operations and online services to its citizens.
This demand has resulted in an increased reliance on the data centres underpinning these services and is driving the need to look seriously at both existing infrastructure and the way services are delivered in the most cost-effective and energy-efficient way. Finding the right products and/or solution partners to work with is often a serious challenge and is reported to represent one of the largest barriers to adoption and change.
The EURECA Market Directory is designed to enable procurement departments and ICT specialists within the public sector to search easily and quickly for companies and organisations who provide products and services developed specifically for use within the data centre sector. If you are actively seeking business from the public sector and would like your organisation to be listed in the Public-Sector Market Directory, then please Click Here
To find out more about the activities of the trade association and how to become a member click here.
By Gareth Spinner, Director, Noveus Energy
What do we currently expect and what do we currently get?
The resilience of the power networks in the UK is generally very good, unless you are supplied by an overhead line susceptible to extreme weather conditions, live in a remote part of the country, or are down the end of a lane with no chance of a second back-up supply. There are a number of places which are known to be what is termed “worst served”.
Back in the 1990s the newly privatised, regulated Distribution Network Operators were denounced for poor network performance, both on the duration of breaks in service and on the frequency of events. The penalties they faced forced a big push to improve the analysis of events, to apply critical thinking, to use technology to restore supplies more quickly, and to design or modify networks so that any impact is reduced for the maximum number of affected customers as quickly as possible.
We now have a highly automated high-voltage network that is close to being able to automatically switch loads onto healthy circuits and keep customers on supply quickly, with very little human intervention, or at least with engineers simply observing and checking what the automation proposes.
From a resilience perspective, and with overhead lines no longer favoured, mostly for aesthetic reasons, most new network is installed underground in public highways, where the biggest risk is someone else working in the road or construction work damaging a cable. There is an inevitability that cables now more than 60 years old will start to fail, but this has not materialised just yet.
Availability is subtly different, as power capacity is naturally constrained by how much capacity is accessible and what load exists at any one time. Load customers have historically enjoyed having a known fixed capacity that they can call upon with very little constraint.
The electricity network circuits and transformers have historically been developed to deliver power from a National Grid of very large power stations, distributed across the UK. This is changing with the growth of renewables, which are dispersed and smaller in size, giving rise to other capacity issues as generation export competes at times with load for use of the circuits. The DNOs are having to evolve into System Operators (SOs), managing a dynamic situation in real time.
In network history terms most DCs have been connected in very recent times. Owner/Operators have in the main been very diligent in analysing and risk assessing their grid connection. As a matter of course, most DCs will have the standard N+1 grid connection (or better), and will be acutely aware that this means no loss of supply for a single fault. Beyond this there is a level of generation back up. The Distribution Network Operators (DNOs) under their licence have to offer this level of service under the security of supply standards.
Resilience, however, does not receive the same level of scrutiny, and many Owner/Operators would not know how to approach a review of how resilient their supply is beyond the point of connection.
Resilience is primarily the ability of the network to resist the effects or impacts of “environmental” conditions or its general degradation due to “wear and tear” or aging. The overall resilience of a service connecting a DC is made up of many aspects and attributes of the constituent parts.
Weather Effects
The weather has a huge potential impact: snow and ice on overhead cables, water ingress into underground cable systems, wind blowing debris into the network and third-party damage are the main causes of disruption to power supplies. The network operators have no control over the weather but can design their networks to limit the impacts; what challenges them most is the cost-benefit analysis of the options and judging what level of disruption to power supplies is reasonable to defend to the Regulator.
The DNOs do get a dispensation for extreme events, storms and floods beyond just a normal windy day, so it is the response to extreme weather, and the level of adverse publicity, which dictates how quickly all power supplies are restored.
Aging Effects
The UK electricity network is now over 80 years old. Although most of the older components have been replaced, there are still considerable elements that are over 50 years old, compared with an original design life of 40 years.
It is also the case that factors of safety and “conservatism” have stood the DNOs in good stead. Analysis of network condition through non-intrusive diagnostics has provided better Health Indices (HI) for each major circuit and its components.
Components age through everyday use and, in non-normal situations, through overloading, overheating, repeated operations on switchgear, water ingress into transformers; the list is almost endless. Ideally the DNO would be able to predict the residual life of any component and replace or fix it prior to failure, but as yet there is not enough good data to do this with any great certainty.
The HI work is also evolving and with Smart Grid and more Research and Development going into the DNO “business as usual” we can hope for fewer losses of supply due to aging.
Another aspect of the electricity network is the Regulator’s expectation that DNOs make networks more efficient: lower losses (saving carbon), higher load factors, circuits and transformers operating closer to their design capacity and less investment in reinforcement. This could accelerate the aging of some network elements, especially since commercial pressure on procurement (which operates at European or global level) generally means that equipment is designed and tested tightly to the specifications; there is no evidence as yet on life expectancy, although it is hoped to be at least 40 years!
The UK is witnessing a general warming effect, with more extreme weather happening more frequently. These events may not in the short term have a huge impact on a DC, as the detrimental effects will not cascade to cause too many issues.
Without intervention and over time the network resilience will degrade as the extreme events and higher loads on the networks increase. The impact of higher density loads would suggest the impact will be greater too.
The DC Owner/Operators have to date been conservative in estimating their load requirements and thus overloading under normal operating conditions is not a contributory factor.
However, the changing landscape of localised renewable energy generation and storage and balancing this at local level is driving the change on how power networks are controlled and operated and the change to having System Operations down at a local level within Licence areas or smaller “islands”.
For this to be worthwhile there will need to be a lot more intelligent technology on the power networks. Smart meters will provide load data at the point of supply in real time which, with a few algorithms, will give the SO the information to decide how to move load and adjust running arrangements. In addition, real-time data on small-scale generation can be matched to load requirements, providing some balancing for abnormalities. Time-of-day loads such as EV charging can then be controlled and regulated, if the SO is permitted to do so.
If the SO has also invested in better and more sophisticated monitoring equipment they will be able to assess the load capability of each circuit and transformer and actively manage the load to avoid faults and send resources to fix weaknesses before a fault occurs.
So there is potential for a lot more switching operations on the network which could increase maintenance and repairs; resulting in equipment off-line and more “holes” in the network leading to a slightly higher chance of a loss in supply if the alternative circuit or transformer trips.
The relatively large DC loads will potentially be part of these smaller and locally controlled networks and the DC Owner/Operator can play an active part in balancing loads and generation, and potential for Demand Side Management.
The challenge is what are the Regulation rules for the new SO? Is a large load customer such as a DC affected? Does this level of flexibility have financial implications?
A DC business on a single site will be concerned about resilience even though they have little ability to do anything about it, but can have reactive stand by generation.
A multi-site DC business can determine what their strategic view is for resilience and how they position themselves and have IT resilience across diverse sites and worry less about the individual site. This level of flexibility could work in their favour in managing the ever increasing cost of energy.
In the Smart Network world the multi-site DC could also be an active part of the SO, maximising potential for more efficient network operations and reducing costs.
On the proviso that the SO is mandated to provide the information, it will be possible to get granular data on the performance of each and every circuit and transformer, and the DC Owner/Operator will be better placed to take decisions on resilience. The SO could provide diagnostic information on the health of the network at local level and thus influence the discussions on how resilience and service availability can be improved. If the SO is able to provide real-time information, there is no need for the DC Owner/Operator to be told by phone call, and decisions on energy usage, balancing, generation and so on can be taken accordingly.
The DC has an absolute need for security of data for every customer, for resilience so that information is always available on demand and, essentially, for back-up plans for the “what if” moments when a problem event occurs; the evolution of the electricity market is a long way from providing certainty other than staying as it is.
Given the lack of available capacity, a model based on every consumer having an absolute amount of capacity whenever they need it is not going to be a sustainable proposition in the not too distant future; new connections are getting more and more expensive. If all consumers’ requirements are “pooled” at local level, a high degree of diversity could be possible and, with Smart solutions, electricity could be delivered to where it is needed when it is needed, but without a firm contracted available capacity 24 hours each and every day. Would this be acceptable to the DC?
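As a rough illustration of how pooling helps (all of the figures below are assumptions chosen for the example, not from the article), the coincident peak of a pooled group of consumers is lower than the sum of their individual firm capacities, which is what creates the headroom a Smart SO could work with:
```python
# Illustrative sketch of demand diversity at local level (assumed numbers).
individual_peak_kw = [120, 80, 300, 60, 45]   # assumed peak demands of local consumers
diversity_factor = 1.4                        # assumed ratio: sum of individual peaks / coincident peak

sum_of_peaks = sum(individual_peak_kw)                 # what firm, per-consumer capacity would require
coincident_peak = sum_of_peaks / diversity_factor      # what the pooled network actually needs to carry

print(f"Sum of individual firm capacities: {sum_of_peaks} kW")        # 605 kW
print(f"Likely coincident peak to plan for: {coincident_peak:.0f} kW") # ~432 kW
```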
The DC operators can take action themselves and consider different solutions. The grid connection carries large capacity charges but is generally the prime and reliable source of power, with standby generation as the back-up. Under the SO, interchanging which source is used becomes easier, and the level of available capacity could even be traded. What then for the DC that has an absolute requirement for power when it is needed? Does the DC strategy change for each site, moving from a back-up supply beyond N+1 to a prime generation supply, grid charge avoidance and even duplicate data storage on disparate sites?
Should it not be the case that the SO should be more discerning on how capacity is actively provided and the resilience managed in real time with condition knowledge to communicate in real time with the DC on the network status?
How the DC views generation or storage is a choice. Or is a partnership with the Smart SO the right way ahead? The DC with its own generation could be part of a more socialised community energy scheme and support the regeneration of communities and the ambition for more efficient homes.
For more information go to: https://noveusenergy.com/
By Luca Rozzoni, European Business Development Manager, Chatsworth Products (CPI)
Improving service availability and resilience is a never-ending quest for today’s data centres. However, to make real improvements there must be a change in the traditional approach to rack power distribution and monitoring.
Integrating intelligent products into the data centre’s design is critical and, when selecting rack power distribution units (PDUs) for high-density applications, there are a number of key considerations which can directly affect future levels of service availability and resilience.
First consider the incoming power and the installation of the appropriate input circuit to handle required capacity. Power from the utility to the data centre is typically three-phase. Whilst it is possible to bring either three- or single-phase power to the cabinets, three-phase power allows required power capacity to be delivered at a lower amperage, helping minimise losses and simplify load balancing across all three phases of the incoming power into the data centre.
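To make the lower-amperage point concrete, the short sketch below (not from the article) compares the current needed for the same cabinet load over single-phase and three-phase feeds; the 230V/400V voltages, 11kW load and unity power factor are assumptions chosen purely for illustration.
```python
# Illustrative comparison of single-phase vs three-phase current for one cabinet.
import math

def single_phase_current(power_w: float, volts: float = 230.0, pf: float = 1.0) -> float:
    """Current drawn by a single-phase load: I = P / (V * PF)."""
    return power_w / (volts * pf)

def three_phase_current(power_w: float, volts_ll: float = 400.0, pf: float = 1.0) -> float:
    """Current per phase for a balanced three-phase load: I = P / (sqrt(3) * V_LL * PF)."""
    return power_w / (math.sqrt(3) * volts_ll * pf)

if __name__ == "__main__":
    load_w = 11_000  # an 11 kW cabinet, chosen purely for the example
    print(f"Single-phase feed: {single_phase_current(load_w):.1f} A")          # ~47.8 A
    print(f"Three-phase feed:  {three_phase_current(load_w):.1f} A per phase")  # ~15.9 A
```
The same power arrives over smaller conductors per phase, which is what makes load balancing across the three incoming phases simpler.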
Data centres and cabinets are typically designed and specified prior to deciding which IT equipment will be installed. Selecting an intelligent PDU that provides a good mix of C13 and C19 outlets is advisable, so that it will be able to support a wide range of equipment and densities. In addition, ensuring there is a locking feature on the outlets will help to prevent accidental disconnections of IT equipment.
Intelligent PDUs that draw greater than 20A of current typically have two or more branch circuits protected by an overcurrent protection fuse or breaker. Selecting a breaker over a fuse is highly recommended, as a breaker can easily be reset when tripped. A fuse, in the same circumstance, must be replaced and power will remain out until this has been completed. Replacement requires the PDU to be turned off until it is serviced by an electrician, so the result is a higher Mean Time To Repair (MTTR).
Whilst the breakers can be thermal, magnetic or magnetic-hydraulic, a magnetic-hydraulic breaker is the least susceptible to temperature changes, and will minimise nuisance tripping, making it ideal for high-density applications.
When reviewing branch overcurrent protection, it is also worth considering:
· Slim profile breakers to ensure minimal interference with airflow within the cabinet
· The ability to remotely monitor the status of the circuit breaker or fuse irrespective of the type of PDU selected.
Modern, high-density data centres often increase server inlet temperatures, which translates into higher server exhaust temperatures. This helps to maintain top levels of efficiency and lower energy consumption costs. Many data centres also deploy containment solutions to fully separate hot exhaust air from cooling air to further optimise efficiency.
To ensure that the PDUs operate reliably in these higher temperatures, it is worth selecting a PDU with the highest temperature rating possible. The chosen PDU should also support full load capacity at the rated temperature.
It’s also imperative that the proper environmental conditions and levels are closely monitored and maintained. Fluctuations in air quality, temperature and humidity need to be avoided, while water, dust and other harmful particles can all affect the infrastructure, shorten the operational life of expensive equipment and result in downtime.
Environmental monitoring solutions now have the capacity to help your organisation monitor temperature and humidity, as well as smoke, water and even motion detection. Choosing an intelligent PDU to enable a cabinet level ecosystem and issue proactive notifications or alerts, will help data centre managers ensure service reliability.
Selecting intelligent PDUs with the features mentioned above will help data centre managers ensure service availability and efficiency. Two other underestimated features will also provide significant savings in networking costs and deployment time: IP consolidation and physical security.
Many intelligent PDUs available today have the ability to be arrayed (networked) using one IP address. When selecting a PDU, look for IP consolidation capability that networks the greatest number of PDUs into the array, so that the fewest networking ports need to be deployed. There are PDUs on the market today that can be linked in a 32-PDU array.
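As a rough illustration of the saving (the cabinet count and PDUs-per-cabinet below are assumptions; only the 32-PDU array size comes from the text above):
```python
# Illustrative sketch: network drops and IP addresses with and without PDU arraying.
import math

cabinets = 200            # assumed room size
pdus_per_cabinet = 2      # assumed A/B feed per cabinet
array_size = 32           # maximum PDUs per IP address, as cited above

total_pdus = cabinets * pdus_per_cabinet
without_arraying = total_pdus                        # one switch port / IP per PDU
with_arraying = math.ceil(total_pdus / array_size)   # one switch port / IP per array

print(f"PDUs to monitor: {total_pdus}")
print(f"Ports/IPs without arraying: {without_arraying}")   # 400
print(f"Ports/IPs with 32-PDU arrays: {with_arraying}")    # 13
```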
With information quickly becoming the most valuable asset organisations own, the need to push security and authentication to the cabinet level has become more critical. Selecting intelligent PDUs that have built-in integration with electronic access control (EAC), allows the remote management and control of every cabinet access attempt. This also ensures the data centre and IT department is meeting growing compliance regulations such as PCI-DSS, FISMA and HIPAA.
As the PDU represents the first line of defence, integrating intelligent products into a data centre’s design – and ensuring the careful selection of the most appropriate intelligent PDU for such a high-density environment – is key to ensuring the data centre of the future will be able to continue to offer greater service availability, resilience and security to cope with the growing demands being placed on it.
For more information go to: http://www.chatsworth.com/
By Mick Payne, Group Operations and Purchasing Director, Techbuyer
How to leverage existing technology to maximise storage power, availability and resilience.
Data centre managers are faced with the same problems as all IT professionals this new year: how can I get more performance with fewer resources? And how can I insulate my systems against threat, malicious attack or other disaster?
These are thorny issues in the face of very real threats. At the end of last year, three young American men in their twenties pleaded guilty to the Mirai botnet attack that took down internet services firm Dyn, whose clients include Twitter, Reddit and Spotify, and disrupted internet access for many across the East Coast of the US. The case was huge and proved one thing beyond doubt: some attacks are indefensible.
A Distributed Denial of Service (DDoS) attack like the Mirai botnet attack is just one threat amongst many. Just as dangerous to a data centre is the proverbial tree taking down the power line, for example. The upshot would be the same in both cases: systems go down and data is at risk of being lost. Data centre managers need to mitigate these risks and make sure their facility recovers from them as soon as possible. They need resilient data storage system solutions that maintain service availability.
Sharing data over multiple servers creates an option to have backups in multiple geographical locations. If one data centre is compromised, the other site provides resilience with replica systems. This solution has been attractive to large corporations where loss of functionality affects critical business functions. However, the cost benefit to small and medium sized businesses is impacted by the price of the software and additional hardware.
On the cusp of 2018, there was the usual slew of industry insiders and journalists offering their storage predictions and growth trends for the coming year. High on the list was software-defined storage (SDS), which builds resilience and increases compute power in data centre environments by replicating data (whether object-based or file-based storage) across multiple servers. SDS used to be seen as an expensive solution, but with the right choice of hardware it becomes much more cost-effective.
SDS technology has been in use for around ten years, predominantly as tier-2 storage, and has been popular with large corporations with deep pockets. The attraction is that it enables systems to increase their functionality without losing speed or power by replicating data over multiple servers, which then operate concurrently. This has obvious benefits when it comes to failovers. It is also great for scalability and flexibility, which is particularly important given the fast rate of technology adoption that exists now and is set to continue over at least the next ten years.
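A simple sketch of the trade-off (the three-way replication factor and capacities below are assumptions, not figures from the article): replication multiplies the raw hardware required, which is why the hardware cost discussed next matters so much, but it also leaves spare copies to serve data when a server fails.
```python
# Illustrative sketch of replicated storage capacity and failover headroom.
def raw_capacity_needed(usable_tb: float, replication_factor: int = 3) -> float:
    """With N-way replication every object is stored N times across the cluster."""
    return usable_tb * replication_factor

def surviving_copies(replication_factor: int, failed_nodes: int) -> int:
    """Replicas still available if each failed node held one copy of the data."""
    return max(replication_factor - failed_nodes, 0)

if __name__ == "__main__":
    print(raw_capacity_needed(1000, 3))   # 1 PB usable needs ~3 PB raw with 3-way replication
    print(surviving_copies(3, 1))         # one node lost, two replicas still serving reads
```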
A major-brand legacy system complete with software would cost around £300,000 for 1PB of storage, which is a relatively large capital outlay. Choosing white-label commodity servers could halve that amount to around £150,000. However, many IT professionals are reluctant to choose lesser-known brands because of perceived risk. The ideal scenario is to source branded servers such as HP, Dell and Cisco at a reduced price. This is where the refurbished option comes into its own.
Quality refurbished product from companies like Techbuyer comes with free configure-to-order service and a manufacturer-matching global three year warranty. Organisations large and small are increasingly seeing the value of this in terms of cost savings and competitive advantage. The trend is helped by tech giants using refurbished product in their data centres.
Google released a report in 2016 called “Circular Economy at work in Google data centres”. In it, the company stated that 19% of its servers were remanufactured machines, 75% of components consumed in the spares program and 52% in the machine upgrades program were refurbished inventory during 2015. The company has since publicly stated that this is saving them hundreds of millions of dollars a year.
With companies of this calibre subscribing to refurbishment, the sector is going from strength to strength. Techbuyer’s experience is a steady year on year growth since it began specialising in refurbished stock 13 years ago. 2017 was the best year yet. The company’s revenue rose from £18.2m to £27.4m, the workforce doubled from 50 to 102 and it increased the number of products available from 150,000 to 225,000. The company also opened two new offices: one in Germany and a second site in the US.
A lot of the success is down to good management and a great team of people who pride themselves on the quality of product and service. But much as we would like that to be the whole reason for success, it isn’t. Techbuyer is successful because refurbished product is the smartest solution for data storage needs. More and more public-sector bodies and small, medium and large companies are waking up to this fact, and our business is growing as a result.
There has been a vibrant resale market for many years. Product age varies from just out of the box component parts to systems that have been bought in from large organisations carrying out upgrades. A model that is two years old is a fraction of the cost of the latest generation from the factory and yet offers comparable functionality. Just as many individuals are reluctant to buy a car straight off the production line, many companies no longer see the value in buying the latest generation data storage equipment. All the more so since Techbuyer offers to buy customers’ old equipment at the same time.
The decision to choose refurbished equipment has the added benefit of being more environmentally sustainable than buying new equipment and scrapping old. With data storage under ever closer public scrutiny on its use of resources, this is an attractive add-on for many organisations. Data storage giants Microsoft, Facebook and Amazon have been lauded for their strides towards sustainable energy sources in their data centres. Bodies like the DCA have been taking a proactive role in leading the discussion on making best use of resources.
The choice of refurbished data storage equipment feeds into this movement towards better use of resources. It is worth remembering that the average desktop computer consumes ten times its weight in fossil fuels. The production of one gram of microchips consumes 630 grams of fossil fuels. Although this is dwarfed by the energy used to power data centres, it is not an insignificant amount. This, as much as cost saving, is what is driving Google and others to increase their use of refurbished parts. Savings on the bottom line are good for the planet too.
For more information go to: www.techbuyer.com
There is a clear disparity between the initial drivers used to define digital transformation efforts and how success is subsequently measured in enterprise organisations, according to the latest digital transformation study by the Cloud Industry Forum and Ensono.
By Simon Ratcliffe, Principal Consultant, Ensono.
In the survey of 250 business and IT decision-makers across multiple verticals, 99 per cent of respondents intended to measure the success of their digital transformation efforts in some way.
The study revealed that the achievements, although measured in a different frame of reference to the initially defined success metrics, were in-line with expectations for the majority of enterprises and, in almost half of cases (48 per cent), higher. Business-decision makers were even more positive about project success, with 65 per cent stating that their digital transformation efforts are delivering better results than anticipated.
While it is a positive message that digital transformation efforts are actively being measured, the study found a hitch in the way businesses approach the issue of KPIs for transformation projects. The biggest driver for digital transformation, according to the Cloud Industry Forum and Ensono study, was cost saving, cited by 70 per cent of respondents. This was followed by increased productivity (59 per cent) and increased profitability (58 per cent).
However, despite being very clear about the drivers for digital transformation, when it comes to measuring success a very different story emerges. Gone are the predominantly financial metrics and in come new and very different measures of success, with customer experience the most cited metric, named by 52 per cent of respondents. This inconsistency between the drivers and the measurement of success across organisations suggests that an enterprise’s perception of digital transformation changes during the project, and business decision-makers often become more engaged over time and re-shape the way in which success is measured.
The fact that KPIs and drivers do not align indicates a wider problem and a common short-sightedness in the way businesses are approaching transformation. Given the investment required for successful digital transformation, viewing it as simply a cost reduction function, rather than a revenue generator designed to reach new potential customers or markets, is not only an outdated approach, but will limit transformation efforts. The focus on cost savings limits the impact of transformation efforts and could ultimately have longer-term implications on the business.
Digital transformation is fundamentally about business change and, primarily, an opportunity for growth. For transformation to drive growth in an organisation, it needs to be about driving deep and effective change – facilitated by technology and hybrid IT – as a revenue generator rather than a cost reduction function. Growth through innovation and delivery of the best service, product and experience to customers and through finding new and quicker routes to market is a more valuable return on the investment in transformation.
This must be seen as an on-going programme within any organisation, and not a finite project to which simple financial metrics can be applied at a point in time. As with many technology-led programmes, cost reduction is still the primary mechanism used to gain the attention of the business, but once the programmes are underway and the business becomes more aware of the benefits, the drivers morph into more business centric KPIs.
For digital transformation strategies to succeed, the IT department, the business and the board need to have a clear and shared vision, and that vision needs to focus on people first, with the right technology facilitating. This alignment across the organisation will ensure the success of any business transformation project.
Businesses today are constantly trying to keep the innovation tap flowing. Factors such as rapidly changing competitive landscapes and developing consumer demands mean those firms that fail to innovate are likely to fall behind. To stop this from happening, businesses in all industries are turning to cloud computing technologies and most are now realising that a hybrid - or multi-cloud - approach is the best way forward.
By Mark Baker, Field Product Manager, Canonical.
A hybrid cloud strategy effectively gives organisations the ability to rapidly develop and launch new applications, adopt agile ways of working and run workloads in whichever way best meets their specific needs.
For example, workloads that aren’t business-critical can be deployed externally, while those that hold sensitive customer data can continue to be run in-house in order to meet compliance requirements – an issue especially relevant for organisations in highly regulated sectors such as telecommunications, healthcare and financial services.
A hybrid cloud strategy also gives businesses the option to move workloads between environments based on cost or capacity requirements, and application portability means they are not locked into one platform. However, not all firms are embracing third-party providers. The desire to innovate as fast as possible means that, when it comes to the decision between public and private cloud, many businesses are building when they should be buying. But is that the right move?
There is little doubt that a hybrid approach will be the best bet for most organisations, but the right mix of public and on-premises cloud infrastructure depends on your business requirements.
As previously mentioned, if a business is heavily bound by compliance and regulatory constraints – and if they have the capacity to manage and maintain their own data centres – then a lean towards private cloud is probably the right option. However, for the majority of organisations, the advantages offered by public cloud providers are simply too good to ignore. For example, many businesses tout the cost-effectiveness and flexibility benefits that public cloud provides, while others want to be able to access new services, so are keen to tap into the fast pace of innovation that public cloud providers have become known for.
Outsourcing certain workloads also means organisations don’t have to deal with technical issues that may arise with the underlying platform, instead handing off that responsibility to a team of experts and freeing up time for their own developers to focus on other things. Whilst the benefits of public cloud are clear, many businesses that still need private infrastructure are choosing to build rather than buy from the experts. A common misconception is that just because they can build, they should build. But building infrastructure that works and building infrastructure that really makes a difference to the business are two entirely different things.
Developing, managing and operating bespoke cloud infrastructure is hugely complex and a substantial proportion of organisations simply don’t possess the technical skills to be able to achieve their lofty aims by themselves. In comparison, if your developers are tasked with the main build they will likely spend most of their time on the base layers, and have less opportunity to add value by building new tools and services.
This level of focus is unlikely to be attractive to potential hires. Experienced developers are not easy to come by and will be more attracted to companies that offer them opportunities to be creative and experiment with new services. Speed also needs to be a consideration. Public cloud providers are known for innovating rapidly and rolling out new features on a regular basis, a characteristic which is extremely hard to replicate to the same level of quality with an in-house developed platform.
So, what’s the answer? Should businesses be looking to leverage the innovation and speed-to-market benefits offered by public cloud providers, or should they aim to keep things in-house and maintain control over their workloads?
Unfortunately, the simple truth is that there is no one-size-fits-all approach. Organisations should consider what is the best option for them, depending on factors such as the type of data they hold and their performance needs. But one thing we can be certain of is that attempting to build cloud infrastructure without the appropriate levels of planning, investment and – most importantly – expertise will cause more harm than good in the long run.
If an organisation can’t meet these demands, it makes more sense to outsource the job to third parties that have done this many times before using standardised tools and technologies. Very few businesses actually need to build and operate their own cloud infrastructure. Public cloud providers, or vendors of specialist on-premises cloud platforms, can help to ease the strain, providing the skills, experience, security and innovation to help businesses flourish.
When GDPR legislation comes into force in May 2018, keepers of personal data will no longer have the luxury of taking months to notify customers if a data breach occurs. To comply with Article 33 of GDPR, organisations will need to notify authorities and affected individuals of the breach within 72 hours of the attack being recognised.
By James Barrett, Senior Director EMEA, Endace.
If affected parties, such as customers, are not notified within 72 hours, companies risk fines of up to 4% of their global revenues. This short ‘window of responsibility’ is going to be a big reality check for many companies.
The real risk comes from the potential for significant brand damage if a breach is made public without a true understanding of the nature and impact of the breach. A well-known example of this is the TalkTalk notification a couple of years ago, where executives made the morally responsible decision to make public the loss of customer data, but without a true understanding of the facts of the attack. This led to huge losses in brand value, revenue and market valuation.
To avoid this scenario, but still comply with Article 33, many will need to make serious changes to their systems, operations and procedures to be capable of accurately communicating the details of a breach within the deadline GDPR imposes.
There is no silver bullet that can completely solve the complex problem of meeting GDPR’s breach reporting obligations. However, there are things that companies can do to prepare themselves. Having the necessary tools, and collecting the right information, can ensure that when a breach occurs, analysts can investigate it quickly and conclusively and the organisation can respond appropriately.
A key asset in breach investigation is access to a packet-level history of network activity. Network History is invaluable because it shows, definitively, what happened. Moreover, access to a detailed source of Network History lets companies investigate security events more quickly, reducing the risk that an unexamined threat leads to a more serious breach.
Having access to Network History when you need it requires implementing specialised packet-capture and recording appliances – network recorders - at key points on your network. These network recorders access the network using a tap, or from the SPAN port on a switch or router – which makes them completely invisible on the network being monitored. This means the recorded data can be relied on as tamper-proof evidence of activity on the network – unlike log files and other evidence that could have been tampered with by the attacker.
These network recorders must be deployed and recording in “always on” mode in order to capture evidence of attacks. Deploying network recording after the fact is a little like turning on your CCTV camera after a burglary – it’s too late by then.
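To make the “always on” idea concrete, here is a minimal sketch using the open-source scapy library; it is only a toy stand-in for a purpose-built network recorder such as those described above, and the interface name and rollover size are assumptions.
```python
# Minimal sketch of continuous packet recording from a tap/SPAN interface.
# Rolls over to a new pcap file every N packets so older history stays on disk.
from scapy.all import sniff, PcapWriter  # pip install scapy; requires root to capture
import time

IFACE = "eth1"             # assumed name of the interface attached to the tap or SPAN port
PACKETS_PER_FILE = 100_000  # assumed rollover size

def record_forever():
    while True:
        writer = PcapWriter(f"capture-{int(time.time())}.pcap", sync=True)
        # store=False keeps memory flat; each packet is written straight to disk
        sniff(iface=IFACE, store=False, count=PACKETS_PER_FILE,
              prn=lambda pkt: writer.write(pkt))
        writer.close()

if __name__ == "__main__":
    record_forever()
```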
While preparation is key to preventing the preventable, it’s also key to identifying and minimising loss should a breach occur. Companies that actively monitor their networks use tools such as Intrusion Detection (IDS), Behavioural Anomaly Detection, and Artificial Intelligence (AI) to analyse traffic on the network and raise alerts when potentially malicious activity is detected.
But when it comes to investigating the alerts that are raised, these analytics tools don’t provide the level of detail needed to determine definitively what took place. They’ll tell you something happened, but you need to investigate the event much more deeply to understand what that event was and whether it’s serious or not. In the event of a breach, knowing for certain what happened, how it happened and what the impact is, and doing that quickly, is critical.
This is valuable information that can otherwise take months to compile from log files and metadata – much too slow and inconclusive a process to provide the insight needed for timely and accurate notification of the breach. With GDPR fast approaching, companies need to ensure they can get to the bottom of breaches quickly and communicate the details to affected parties within GDPR’s tight time constraints. Failure to meet these obligations could be extremely expensive – not just in fines, but in legal costs, lost customers and declining share value.
No organisation is impenetrable. Being prepared to respond to a breach is becoming increasingly critical to all businesses. 2017 has been a year of major breaches, and the media attention they have attracted has caused many companies to start to think about whether they could respond to a breach adequately or not. The looming shadow of GDPR, and the obligations it imposes, is raising this issue even higher up the corporate agenda. Which is, of course, exactly what the regulation was designed to achieve – to get companies to take the issue of security more seriously.
With greater visibility into what is happening and what has already happened on their networks, companies are far more breach-ready. The evidence they need to quickly understand a breach and communicate accurately about it is at their fingertips. This will not only help them to stay on the right side of the upcoming GDPR regulations, but also allow them to minimise the losses should they become the victim of a major breach in the future.
New research shows that micro data centres allow a more resilient and cost-effective approach to scaling up IT resources
By Victor Avelar, Director and Senior Research Analyst, Schneider Electric Data Center Science Center.
Network latency is a major issue in delivering services like video on demand and the Internet of Things. This problem is exacerbated given the increasing consumption of digital services through mobile devices. Massive centralised data centres far away from the point of consumption increase the latency for these applications compared to smaller so-called Edge data centres located closer to the point of use.
Micro data centres are becoming a vitally important option for organisations needing to deploy high-performance reliable IT resources at the edge, quickly and cost effectively. For the purpose of the comparisons made in this article we shall consider a micro data centre to be a self-contained, secure computing environment contained in a single rack and including all the storage, processing and networking necessary to run applications. Such a unit comes in a single enclosure with all the associated power, cooling, security and management tools, such as Data Centre Infrastructure Management (DCIM) software.
Although larger prefabricated data centres the size of a small building, which can be delivered on the back of a trailer, also come under the general definition of micro data centre, such units will not be considered for the particular analysis undertaken here.
There are four main factors driving the deployment of micro data centres at the edge of networks: scalability; speed of deployment; reliability; and the growing popularity of outsourcing IT services to the cloud or colocation facilities.
The scalability of micro data centres allows companies to “pay as they grow” deploying only the IT infrastructure they need when they need it. Standardised, prefabricated units can be stepped and repeated in relatively small kW increments to accommodate growth requirements in IT as they arise.
Speed of deployment is a vital attribute in today’s fast-changing business world. With standardised, prefabricated and factory-tested units available “off the shelf” IT management is spared the headache of designing, specifying and integrating from scratch a customised solution with all the attendant troubleshooting and testing iterations such an approach entails. The more standardised a unit is, the quicker it can be deployed. Inevitably there may be some customisation of prefabricated units to meet some very specific demands but a micro data centre will always be faster to deploy than a traditional data centre or server room.
A micro data centre which has been factory tested and qualified is likely to be more reliable than a custom-built one-off facility. A number of small data centres, based on the prefabricated model, can be used in combination to provide failover services to each other, thereby increasing reliability even further. In the event that one such centre suffers an outage or temporarily runs out of capacity, switching its load over to another data centre can reduce or eliminate downtime.
Finally, although outsourcing the bulk of one’s IT requirements to the cloud or to a colocation provider is proving a cost-effective option for many organisations, there is often nevertheless a need to keep a certain amount of processing capacity in house. Reasons for this may include security concerns, or the need to operate a legacy application internally. Micro data centres fit the bill perfectly as they can be installed in spare office space, making use of existing power and switching gear while freeing up much building capacity for other purposes.
We’ve also identified three technological developments that enable micro data centres: ongoing miniaturisation of equipment, hyperconvergence of IT systems, and virtualisation. Silicon-based components continue to increase in complexity and performance for a given amount of space, as does the storage capacity of disk drives, especially with the advent of solid-state drives. In round terms, a single rack of servers in 2016 could process the same IT workload as 13 racks of servers eight years ago.
Hyperconverged IT refers to the integration of compute, storage, and networking resources into a single chassis. When all these parts are integrated as a single solution (sometimes including the software) you improve the speed of deployment for the micro data center.
Along with this physical compaction, the growth of virtualisation allows administrators to harness all of that compute power across various workloads. Virtualisation of servers, storage and networking allows geographically distributed micro data centres to appear as a single physical data centre.
One of the benefits of micro data centres is that several can be combined to produce an IT resource of similar size and complexity to that of a purpose built data centre. There comes a point when the trade off between building a data centre in the traditional manner, and scaling up prefabricated units to meet increasing load demands has to be considered with all of the pertinent costs and benefits carefully analysed. Fortunately, there are tools which allow management to carry out such analysis to ensure that they make the most appropriate investment for their particular needs.
Schneider Electric performed a capital cost analysis of a 1MW data centre architecture built from a combination of 200 5kW micro data centres and compared it with a traditional centralised data centre of identical capacity. Schneider used its own TradeOff Tool to estimate the capital costs of materials, labour and design for each subsystem in a data centre.
The biggest capital expense advantage that micro data centres have over the traditional approach is that they can typically run off a building’s existing physical infrastructure. In many cases, existing buildings have sufficient spare power capacity to support a micro data centre from both utility and emergency generator power, allowing micro data centres to exploit the sunk costs not only in power but also in cooling equipment such as chillers and in core and shell construction.
There may also be tax benefits in using micro data centres as they can be considered as “business equipment” rather than “building improvement” and can therefore be depreciated over a shorter time span.
Using its TradeOff Tool, Schneider made a number of assumptions about the costs of the elements used in each approach. The traditional data centre comprised 200 racks of 5kW power rating each, whereas the micro approach comprised 200 individual micro data centre units, each comprising a single 5kW rack. Each approach used 1N power and cooling redundancy, had its own physical security facilities and fire suppression. The traditional data centre used hot-aisle containment whereas each micro data centre was fully contained front and rear.
Core and shell building costs for the traditional data centre were estimated at $1615 per square meter. In the case of the micro data centre approach, this cost was zero as each data centre was making use of the sunk costs of the facility in which it was located.
The traditional data centre had a total UPS capacity of 1.2MW using a double-conversion online technology. The total UPS capacity needed using the micro data centre approach was 1.6MW. Using the traditional data centre approach the UPS capacity is typically oversized by 1.2 times the IT capacity. However, when the UPS is distributed across 200 racks in the micro data centre approach a similar 1.2 times oversizing factor results in a 6kW UPS which may not be enough. Because of the load diversity effect, the UPS capacity in a distributed architecture should be slightly higher to accommodate occasional rack densities above the 5kW/rack average. In this analysis, an 8kW UPS was used for each micro data centre, resulting in a higher overall UPS power requirement than the traditional approach.
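The sizing arithmetic described above can be reproduced in a few lines; the sketch below simply restates the figures quoted, with the 8kW per-rack UPS taken from the analysis.
```python
# Sketch of the centralised vs distributed UPS sizing described above.
it_capacity_kw = 1000                     # 1 MW of IT load
racks = 200
rack_density_kw = it_capacity_kw / racks  # 5 kW/rack average
oversize_factor = 1.2

# Centralised (traditional) approach: one UPS plant sized against total IT load.
central_ups_kw = it_capacity_kw * oversize_factor        # 1,200 kW (1.2 MW)

# Distributed (micro data centre) approach: 1.2 x 5 kW suggests a 6 kW UPS per rack,
# but load diversity means some racks exceed the 5 kW average, so the analysis
# used an 8 kW UPS in each micro data centre.
per_rack_ups_kw = 8
distributed_ups_kw = racks * per_rack_ups_kw              # 1,600 kW (1.6 MW)

print(central_ups_kw, distributed_ups_kw)                 # 1200.0 1600
```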
The overall capital cost for the traditional 1MW data centre was estimated at $6.98m, whereas the micro data centre approach allowed a similarly rated data centre to be deployed for 42% less. This clearly shows the substantial capital cost savings that are possible when building a data centre using the distributed prefabricated approach rather than building a new facility from scratch.
However there may be situations where the traditional approach is necessary. Even if the 200 micro data centres were not all deployed in separate facilities, with several being deployed in a single location, there would be an inevitable penalty in terms of network latency for applications hosted across several sites. Depending on the requirements of the application, it might be essential to have it hosted in its entirety on a single physical data centre.
Larger organisations might be able to consider the option of building dedicated WAN networks between several facilities to improve latency but smaller organisations may be forced to rely on the public WAN connections.
However, if latency is not a critical factor and one makes use of highly standardised hyperconverged prefabricated equipment, using micro data centres in combination gives companies of modest size the ability to scale up their IT resources in a cost effective and reliable manner paying only for what is needed, when it is needed.
DCS talks to Mark Gaydos, Chief Marketing Officer at Nlyte, about the company’s success to date and how the DCIM market will continue to develop, with artificial intelligence, machine learning and automation having a big role to play.
1. Please can you provide a brief overview of Nlyte’s progress in the data centre industry to date?
Having been founded in 2004 in London, Nlyte Software not only established the data centre infrastructure management (DCIM) industry but has continued to advance the capabilities of the modern DCIM solution, which both increases efficiency in the data centre and bolsters the value of ITSM solutions. It has also consistently featured in Gartner’s Magic Quadrant for DCIM, having been named a ‘leader’ from 2014 onwards.
2. DCIM is Nlyte’s main focus?
DCIM is Nlyte Software’s focus and is what the organisation has built its global client base of both large multinationals and smaller businesses around. However, Nlyte Software is also evolving its offering, building on the industry benchmark DCIM solution to offer data centre service management (DCSM) solutions. Whereas DCIM helps an organisation manage data centre infrastructure, our DCSM solution goes beyond infrastructure to empower data centre professionals to perform their job functions more efficiently – all while synchronising information and activities to other systems that depend on accurate data centre information. In essence, DCSM transcends DCIM to incorporate workflows and integration into other critical systems. As the needs of infrastructure managers change, Nlyte Software has continued to evolve its platform to meet their needs.
3. Can you talk us through the DCIM portfolio, starting with the flagship Enterprise offering?
Our Nlyte Enterprise solution is our flagship offering. It serves mid-sized and large enterprises that want to manage their physical computing infrastructure more closely and efficiently. It supports audit, regulatory and fiscal compliance with regulations such as HIPAA, SOX, SCI and DCOI, and captures changes to the data centre automatically to improve data accuracy and reliability. It also improves the financial management of IT assets and reduces operational costs by delivering an accurate reflection of asset inventory.
Nlyte Software’s core capabilities provide unmatched insight and visibility in near real-time. Additionally, Nlyte Software’s 50 plus integration connectors to key partners such as BMC, ServiceNow and HPE, etc., provide unmatched integration and increased ROI for organisations that use it.
4. And then there’s Energy Optimizer?
Nlyte Energy Optimizer (NEO) provides advanced monitoring and alarming capabilities to track vital energy metrics across data centres.
NEO’s benefits include proactively ensuring better uptime and redundancy, optimum power and cooling efficiency, elimination of inefficient resources – while also confidently raising centre temperatures to reduce costs – easier facility and IT personnel collaboration, reductions in energy costs, flexibility and insight to prove SLAs for tenants/customers, and power and environmental information to ITSM systems for more accurate operations and costing.
5. And Platinum?
Nlyte Platinum combines the breadth of Nlyte Enterprise offerings with the depth of Nlyte Energy Optimizer capabilities.
The modern and secure web-based architecture ensures the solution can scale with the growth of organisations, while providing a third-party tested (Veracode VerAfied) secure environment. The architecture includes a built-in communication framework with pre-built connectors that enable easy integration with other facility and IT investments, ensuring maximum value with the lowest total cost of ownership in the industry.
6. And Colocation?
Nlyte Colocation was created especially for multitenant data centres to securely monitor and report power, space and capacity for hundreds of tenants.
It enables colocation providers, both wholesale and retail, to securely monitor and report power, space and capacity for hundreds of tenants. The multitenant architecture ensures that tenants can securely view their own actual power and space usage within the colocation facility – in real-time and over time.
7. And Hyperscale?
Nlyte Hyperscale was created for large organisations that require a very highly scalable solution that supports massive amounts of change across large numbers of assets.
Hyperscale provides a robust architecture and environment for firms that demand incredible scale. Nlyte Hyperscale Platinum and Enterprise Editions support the management of millions of assets and accommodate tens of thousands of changes a month.
8. And System Utilization Monitoring?
Nlyte System Utilization Monitoring extends real-time monitoring throughout the data centres from the facilities layers and across the IT stack, providing the first end-to-end real-time utilisation view.
It constantly monitors system utilisation at the global, location and room levels to provide complete oversight, identifies specific underutilised servers and ‘ghost servers’ to raise utilisation rates, and functions in conjunction with Nlyte Enterprise Edition to support reaching DCOI server utilisation objectives.
9. Nlyte also offers Data Centre Service Management?
Yes, Nlyte Software offers a solution that goes beyond infrastructure to empower data centre professionals to perform their job functions more efficiently – all while synchronising information and activities to other systems that depend on accurate data centre information.
10. Offering products to work with the BMC, HPE and ServiceNow ITSM solutions?
Yes. Integration is one of the reasons organisations choose Nlyte Software over our competitors’ products. We integrate with BMC, HPE and ServiceNow ITSM solutions, as well as HPE OneView, Cisco EnergyWise, Dell OpenManage Server Administrator and Intel Data Centre Manager (DCM), which not only improves operational efficiency but also improves the ROI of those systems.
11. How does the Nlyte product help to improve ITSM information?
Deploying robust DCIM solutions is only half the job. Our software has pre-built connectors to leading ITSM providers such as BMC, HPE and ServiceNow to enable clients to receive the full benefits of a well-integrated modern DCIM suite. This is the most effective and efficient way to close the gap between IT and facilities management. This enables information and processes to flow bidirectionally between infrastructure teams and ITSM organisations.
12. And ITSM processes?
Nlyte Software supports processes such as keeping CMDB information updated and accurate. It also ties in with change management processes so organisations get seamless changes and more accurate information about who, how and how long those changes are taking.
13. And ITSM measurement?
With integration between Nlyte Software and ITSM, organisations can measure the real cost of infrastructure maintenance and get better insight into the location of resources and dependencies associated with those assets. Overall this helps in reducing downtime and gives greater transparency to ITSM systems.
14. Nlyte also offers a range of services to support the product portfolio?
Support and services are a key reason why organisations choose Nlyte Software and why the company has a 98% customer retention rate. Nlyte Software offers a broad range of services to ensure the success of our customers, from deployment services through to ongoing support; customers always have service resources available to them when they choose Nlyte Software.
15. More generally, how can Nlyte help customers address the hybrid data centre infrastructure model (on-premise, Cloud, edge etc.)?
Strategically, we see the world going more hybrid all the time and, in the future, we will be rolling out solutions that help our clients and prospects understand their hybrid approach and strategy. We also see edge getting bigger, so we’re going to be investing a great deal of time in our existing edge solution to ensure it stays at the forefront of change. Ultimately, if organisations are going to keep up with the public cloud providers, they need to have modern technology like Nlyte Software supporting their operations, whether in their own data centres or in colocation facilities.
16. And where do IoT and AI fit with the Nlyte offering?
IoT and AI are the future of the data centre and we are planning some big things in 2018. Our DCIM solutions already provide the capabilities to handle the growing number of devices in the workplace. We’ll be building on this in the future to handle the increase in devices coming next year, while our integrated AI/machine learning offerings will soon come to market. Stay tuned.
17. And how do you see DCIM fitting into the overall data centre automation and optimised management objective?
One of the biggest evolutions the data centre industry will experience will be the integration of AI and machine learning and automation. We are already taking steps towards a more automated data centre future with AI being tested across a number of different tasks and automation taking over from human processes in data centres across the world. As we enter 2018 we will see further integration, and will see these technologies leading the industry for the next century.
18. Is it fair to say that DCIM 1.0 left many end users underwhelmed, whereas the latest DCIM software is receiving a much more positive response?
Most technologies go through a typical adoption curve. Many technology companies jumped on the DCIM wave along the way, with varying levels of value and success. Given the market is now over 14 years old, many providers have left the market, and companies like Nlyte Software, which is used by hundreds and hundreds of companies daily, are delivering solid, positive experiences and ROI to organisations around the world.
19. And how do you see Nlyte’s position in the overall data centre market developing over time?
We see Nlyte Software as continuing to lead the DCIM market, providing solutions that will give organisations the tools to maximise their physical computing infrastructure investments. We will also be moving further into the telecommunication and fintech spaces offering organisations that battle tight budgets and growing IT estates solutions to best conquer their challenges.
20. Specifically, what can we expect from Nlyte product-wise during 2018 and beyond?
2018 is going to be an exciting year. We’re investing in research and development to evolve the computing infrastructure market to reduce complexity and give back control. We will be working to give our customers full control of their IT assets. We will also be working with some of the largest IT vendors to deliver unique first of a kind machine learning and operational capabilities for infrastructure managers.
21. And what about in terms of company growth – a bigger channel, different, expanded routes to market and the like?
Our main goal is to deliver value to organisations using Nlyte Software in the way that serves them best. We are continuing to expand our own presence worldwide, for instance with a new team and office in India, but we will also continue to invest in our partners as they help us bring the value of DCIM to a wider audience worldwide.
22. What one or two pieces of advice would you give to someone evaluating DCIM solutions and their suitability to help on the journey towards digitalisation that seems to be the overriding objective right now?
There are a number of benefits in adopting a DCIM solution, and if a prospective company is looking to implement a DCIM strategy we suggest they talk to references just like themselves. There are many vendors who say they can deliver but, when it comes down to it, don't scale, are too hard to use or simply don't have a rich enough platform to provide value today and into the future. We'd also suggest looking at DCIM as a journey – map out the milestones you have in the near term but also look at where you want to be in the future, whether that's tying into BMS or ITSM systems or implementing workflow. A good DCIM solution like Nlyte Software will continue to provide growing value over time.
Over the last year, software-defined wide area network (SD-WAN) has continued to revolutionise networking across the globe, and across a wide range of industries, including financial, healthcare, retail, manufacturing and transport. With IDC predicting that the SD-WAN market will reach $8 billion in 2021*, it’s clear organisations are realising the benefits of deploying SD-WAN to simplify and consolidate their WAN edge infrastructure – and this will only continue into 2018.
By David Hughes, CEO at Silver Peak.
As enterprise applications move to the cloud, companies are realising that traditional WANs were never architected for a dynamic, internet-based environment. Indeed, IDC indicates that the increase in SaaS for business applications throughout the enterprise disrupts the prominence of conventional MPLS-based WANs. Backhauling traffic from the branch to the headquarters, and then to the internet and back again, is inefficient and impairs application performance. As such, employees often find that their business applications run faster at home or on their mobile devices than in the office. Historically, the internet was not secure or reliable enough to meet business needs; it didn't perform well enough to support latency-sensitive or bandwidth-intensive business applications.
With internet access redefining the economics of networking, the time is now for companies to revisit deploying broadband in the WAN – as long as concerns over performance, reliability and security can be addressed. An SD-WAN frees enterprises to embrace broadband and directly connect users to cloud applications from branch locations. This results in a superior user experience, assured application performance, and the security, reliability and visibility required to comply with business policies and intent, and centrally orchestrate and address issues as they arise.
Before looking ahead to 2018, it's important to consider the SD-WAN predictions made for this year. In 2017, it was predicted that as SD-WAN adoption gained ground and went mainstream, initial enterprise deployments would be hybrid, leveraging both MPLS and a complement of broadband connectivity. 2017 was also tipped to be the year that would introduce SLAs for SD-WAN over pure broadband. Indeed, it was forecast that enterprise adoption of SD-WAN over pure broadband would accelerate dramatically as organisations realised that it's possible to deliver MPLS-equivalent quality of service and availability across any combination of transport, including consumer broadband connectivity.
When it comes to SD-WAN vendors and their offerings, 2017 was the year that the differences between SD-WAN vendors' approaches were predicted to become more apparent. While SD-WAN offerings look similar, they are fundamentally quite different and at different maturity levels. Additionally, it was thought that 2017 would see a separation between the providers that deliver mature, scalable offerings and those that rushed to market with a minimum viable product in response to demand. It was also predicted that some vendors would seek to deliver rudimentary support for nearly every function imaginable, while others would focus on core competencies and on building partnerships and ecosystems to service-chain best-of-breed functions, capabilities and services.
In 2017, it was also forecast that geographically distributed enterprises would continue to require WAN optimisation – perhaps not for all locations and offices, but for a significant portion of their WAN traffic. It was predicted to become attractive to purchase WAN optimisation by-the-drip, as an integrated service in an SD-WAN solution, versus buying it as a stand-alone product deployed at every location. Geographically distributed enterprises were also expected to seek to steer different types of internet traffic in different directions or along different paths based on business intent – and deep visibility and control over internet traffic were expected to become a critical requirement for SD-WAN deployments. Additionally, 2017 was predicted to bring early instances of machine learning and artificial intelligence innovations to networking.
In 2018, it’s predicted that the following trends will impact enterprises and vendors in networking:
1. Enterprises adopt cloud-first WAN architectures
Today, most WAN traffic, to and from branch and remote sites, is destined for the cloud, either to SaaS services or applications hosted in an IaaS environment. The traditional WAN was architected for branch-to-data-centre traffic flows, not to efficiently support new cloud-driven traffic patterns. Starting in 2018, most enterprises will adopt a “cloud-first” SD-WAN architecture designed to efficiently and effectively support the ongoing evolution in their application mix
2. The new WAN edge replaces the traditional branch router
Traditional routers are no longer the default choice for branch deployments. Routers are burdened by three decades of complexity and a cumbersome “CLI-first” device-by-device configuration paradigm. With SD-WAN as a foundation, a new class of centrally-orchestrated, application-driven WAN edge devices will replace traditional routers in the branch
3. Cisco is no longer the safe, default choice for routing and switching
The move away from traditional Cisco router and switch architectures will not be confined to branch office deployments. In 2018, we will continue to see big share shifts in the data centre. Enterprises will be more inclined to deploy innovative networking technologies from vendors beyond Cisco, marking a favourable change for the overall networking ecosystem
4. The new WAN edge enables improved security architectures
For years, enterprises have been hamstrung, forced to choose between backhauling all branch office traffic to one of several next gen firewall-equipped hubs or deploying a firewall at every branch site. In 2018, the new WAN edge empowers enterprises to make this decision on an application-by-application basis. Network managers can elect to breakout trusted traffic locally at the branch, divert to a cloud-based firewall service, which is sometimes called a web services gateway, or backhaul to a full security stack at a central location. Enterprise-grade SD-WAN solutions also enable enterprises to micro-segment traffic across the WAN, thereby containing the impact of any breach
5. Machine learning enables the self-driving network
Machine learning will be used to complement automation and enable networking to move beyond traditional device-by-device CLI configuration toward intent-driven service orchestration. We will see this evidenced in new application classification techniques, learning and adaptive networking functions and powerful data analytics that turn terabytes of data into actionable insights and actions for network operators
6. Cloud-based management becomes the default
For the last few years, the number of devices under cloud management has grown steadily, expanding beyond Wi-Fi to switching, and now to the new WAN edge. Cloud-based management and orchestration simplify initial deployment, provide better ongoing availability, and most importantly, are backed with web-scale storage and compute resources that enable analytics and machine learning-based techniques that would be difficult to support in most private enterprise environments
7. One Virtual Network Function (VNF) is better than four
2018 will see more carriers rolling out universal CPE offerings – commodity x86 appliances with the ability to host virtual network functions from multiple vendors. While much of the hype in this market has been around service chaining an arbitrary mixture of these VNFs, as carriers and their customers gain more experience, it will become apparent that the fewer VNFs required, the better. Ultimately, one advanced SD-WAN-based VNF will provide all the networking services required in a typical branch site deployment. The value of universal CPE will centre on the ability for enterprises to select their preferred technology stack without switching out hardware, enabling them to move on from building an arbitrary franken-mixture of services
8. More SD-WAN deployment options, spanning from DIY to fully managed
The initial wave of SD-WAN deployments has been spearheaded by early adopters willing to go it alone with DIY SD-WAN deployments which they configure, deploy and manage in-house. While an SD-WAN dramatically simplifies the branch compared with the traditional router-centric WAN, some enterprises prefer to outsource networking and will seek a fully managed solution. Others want deployment assistance, but then want to manage the network themselves. And others will want co-management. In 2018, enterprises will have more options to choose from, with traditional VARs, system integrators and service providers all bringing to market new SD-WAN services that fill out the spectrum from DIY to fully managed
9. The first SD-WAN-driven IPO
As SD-WAN hits the mainstream, the largest independent SD-WAN vendors will likely be in a position to pursue an initial public offering. 2018 is likely to see one or more S1 filings in preparation for IPOs. For smaller players, it will be time to wrap up and find a home under the wing of a larger and more established company. For those technologies acquired in 2017, it remains to be seen which aspects of the original products will be incorporated into the parent companies' offerings, and which will be abandoned
Ultimately, 2018 will see the SD-WAN revolution show no signs of losing steam. It’s clear the changing requirements of the cloud-first enterprise will drive market growth, as well as enterprises moving away from traditional branch routers and towards an application-driven WAN edge. At the same time, underlying SD-WAN technologies will continue to innovate in terms of machine learning and security. Whether 2018 will be the year of the acquisition in the SD-WAN market remains to be seen, but organisations will continue to streamline WAN connectivity, employ broadband and dramatically simplify IT operations and lower costs in the year ahead.
When Bring Your Own Device (BYOD) first took off, security concerns drove companies to take measures to endeavour to counteract the risks of allowing remote access to company data from employee devices. Many believed they had shut the door to cybercrime.
By Mike Simmonds, managing director, Axial Systems.
In reality, data breaches continued to soar. A report from the Identity Theft Resource Center and CyberScout, the data risk management company, found that the number of data breaches in the US alone jumped 29 percent in the first half of 2017 compared to the corresponding period the year before. Across the world we have seen a raft of organisations fall foul of extensively publicised security breaches, the consequences of which are often severe, not to mention the ongoing reputational damage.
The possibility of a fine being levied is always a significant concern to business leaders, and often focuses minds on the issue in question. In the case of the pending EU General Data Protection Regulation (GDPR), for instance, the most severe penalty available for non-compliance is a fine of €20 million, or up to 4% of the preceding financial year's total worldwide annual turnover, whichever is greater.
The reputational consequences of a data breach can be just as damaging: serious fines attract negative media coverage and may deter prospective customers. The inability of the business to recover what has been lost in the breach can further compromise credibility. After all, while some cyber criminals steal data, others, notably the propagators of ransomware, corrupt it and render it potentially worthless, even after a ransom has been paid.
In light of the above, how can businesses best go about protecting themselves against the consequences of cyber security breaches? Part of the answer is ensuring the right security practices and protocols are in place and are adhered to as standard behaviour.
Sometimes it is as simple as applying and testing the latest security patches from vendors across the company as soon as possible. Other times it is more complex: making certain, for example, that sensitive or personal data transitioned to the care of a cloud service provider is encrypted in transit and from the moment it lands, rather than at some later point.
However, best practice data protection always needs to be about more than just applying a technological fix. While implementing the right systems is important, organisations must also instil a culture of security within the business, so that employees understand the importance of data security and are less likely to put the organisation at risk through the way they manage and handle data.
In an age where immediate and easy access to data is the norm, that is not straightforward. Businesses must ensure that employees never compromise security in exchange for being able to access the information they want, when they want it.
There is a need for education here. Consider the manager who must deliver a presentation the next day and decides to store it in multiple accessible locations to ensure access: on the company laptop, on a file-sharing application in the cloud and perhaps on a memory stick, with the rationale that if one location fails, the others can serve as back-up.
Such an approach creates its own problems, however – and users need to be made aware of the issues and concerns. If the laptop is left on a train, or left accessible whilst in use, it could be easy prey for anyone with the skill and inclination to break into it. The cloud-based file-sharing application could also be compromised (or a free service may give the provider itself the right to access the data), while USB sticks are frequently mislaid or subsequently shared as a convenient "hand-o-matic" file-sharing tool. Simply by taking the data outside of the corporate infrastructure, you are bypassing all the security measures and potentially putting sensitive information at risk.
It’s a clear demonstration of how so many businesses can make themselves vulnerable by effectively sleepwalking into data breaches. So, what’s the solution?
Technology should always be part of it. Anti-virus and anti-malware software needs to be implemented and kept up to date. Data leakage protection can also be put in place, providing electronic tracking of files, or stopping users arbitrarily dropping data out to cloud services. Adaptive authentication, in which risk-based multi-factor authentication helps ensure the protection of users accessing websites, portals, memory sticks or applications, also has an increasingly key role to play.
While technology is important, countering data breaches is also about education. Businesses need to reinforce the message that employees must take a personally responsible approach to managing and protecting data over which they have control. They must be aware of the potential security threats and do all they can to mitigate them – from taking good care of the devices they use at work to making sure their passwords are strong. The battle against the cyber criminals will continue, but if businesses are to fight back effectively, they need their employees onside and focused.
The majority of modern data centres are extremely complex. Very few organisations have the luxury of starting with a clean slate, so data centres typically contain many generations of technology, often managed in separate silos and by different teams.
By Francis O’Haire, Director, Technology and Strategy, DataSolutions.
Each new "next-generation" technology comes along with the promise of replacing or fixing everything that came before. However, things have rarely turned out this way in reality. In the 1990s, client/server computing did not fully replace the mainframe, web applications did not then replace client/server applications, and cloud is not predicted to replace all of our on-premise infrastructure any time soon. For many companies, the adoption of newer technologies just adds another layer to be integrated, secured and managed, and ultimately increases the complexity within the data centre. With increased complexity comes a higher risk of system failure, a lower level of flexibility and much higher ongoing costs.
Much of this complexity has been exposed by the rise in popularity of server virtualisation. Widespread virtualisation of computing resources has given organisations a glimpse of what is possible when applications are no longer tethered to individual servers. Server virtualisation introduced the world to a software-defined computing model where virtual machines are managed like files and can be moved around on demand, copied, backed up and restored with ease, and with little care for the computing hardware underneath. However, this flexibility comes at a price. A complex storage area network (SAN), comprised of storage arrays and storage networking switches, is required to enable many of server virtualisation's best features, such as virtual machine migration and high availability. This combination of compute, storage and storage networking hardware came to be known as Three-Tier Architecture and is prevalent in almost all modern corporate data centres. While the compute layer is software-defined, the storage-related layers are not. The compute layer can be scaled easily by just adding more servers and balancing the virtual workloads across them. A SAN cannot be scaled so easily. The initial purchase of a SAN typically requires a complex sizing exercise and a good deal of foresight to make sure it can accommodate both the business's capacity and performance requirements for years to come. When either of these limits is reached, an expensive fork-lift upgrade is required, and the cycle starts again. As new applications come on board, these limits can easily be reached well ahead of the expected life of the SAN. Specialist skills are also required to implement and maintain a SAN, and these teams often operate within a separate IT management silo.
The public cloud providers such as Amazon Web Services, Microsoft Azure and Google Cloud Platform learned a long time ago that they could not build their services using this traditional and complex three-tier data centre architecture. These hyper-scale cloud companies do not rely on Storage Area Networks at all. Instead they build up their infrastructure using small x86 servers with locally attached disks. These small servers contribute their compute, memory and storage resources into the overall pool which is then consumed by the applications running on top. Individual server nodes can and do fail regularly but the entire software-defined system is designed to expect failures and automated to work around them without affecting the availability of applications or the quality of service to customers. Failed or obsolete nodes can easily be replaced with new hardware and over time the entire system can evolve and grow in small increments and without disruption.
The obvious way for businesses to benefit from the simplicity and efficiencies that these cloud giants have achieved is to subscribe to their services and move their applications onto one of these platforms. Many organisations are already doing this but are finding that not all of their infrastructure can be migrated to public cloud. In some cases this is for technical or compliance reasons and in others it is a commercial decision. While great for applications that need to scale up or down rapidly, for more predictable workloads, the rental model of cloud will be more expensive in the long run. In the same way that one would rent a car for a short visit to a foreign country, it would be better to buy a car if moving to that country for several years. Thankfully there is now a solution for businesses aspiring to reach similar levels of simplicity and flexibility as the cloud providers within their own data centres. Hyperconverged Infrastructure (HCI) is that solution and it came about several years ago when software engineers from the likes of Google and Amazon realised the techniques they were using to build the infrastructure for their cloud platforms could be adapted to corporate data centres. They developed a fully software-defined solution which runs on any commodity x86 hardware and supports any common hypervisor to create cloud-like infrastructure for use in on-premise data centres. By starting out with as little as three server nodes, businesses can create a scale-out virtualisation platform which has no complex SAN to manage and includes automation and self-healing features that revolutionise how the data centre is architected. Instead of having to predict their future infrastructure needs, organisations can start small and grow in small increments adding a little more compute, storage or both as needed.
While combining compute and storage like this using HCI forms the foundation for running a simple and highly scalable private cloud on-premise, the real benefits, both now and into the future, come from the fact that it is fully software defined. New capabilities come from software updates and do not rely on specialised hardware. At the same time, as hardware does evolve, new nodes will bring those innovations as they are added and older nodes can be decommissioned or re-purposed. No more expensive fork-lift upgrades are required.
The next evolution of this technology is in moving beyond the convergence of compute and storage in the data centre to the convergence of on-premise private cloud with any number of public clouds to create a hybrid environment which is managed as a single entity; a true Enterprise Cloud. As I mentioned previously, organisations need to be able to run their applications wherever makes the most sense at a particular time whether for technical, compliance or commercial reasons. By just adopting public cloud while maintaining a legacy three-tier architecture on-premise, the overall complexity of IT is increasing. Each public cloud provider also has its own strengths and one may be better suited to a use case than another. Enterprise Cloud is an extension of the software-defined capabilities of HCI which will allow applications to migrate between private hyperconverged data centres and any public cloud as business requirements dictate. Machine learning based predictive analytics services will monitor the overall hybrid ecosystem and based on commercial, governance or security rules will make sure that applications are running in the right place at the right time.
None of this would be possible, of course, if IT did not evolve away from the hardware-centric approach of the past towards a fully software-defined, hyperconverged Enterprise Cloud.
2017 was a turbulent year for IT, but this is business as usual now. 2018 promises to be even more turbulent, with increasing levels of consolidation, regulation and global competition. No matter whether you strive to be at the bleeding edge of IT or not, our 2018 storage predictions and IT infrastructure trends are a good starting point to get a competitive edge and make your life easier.
By Boyan Ivanov, CEO, StorPool Storage.
Storage devices - NVMe on the rise
NVMe SSDs are going mainstream, superseding SATA and SAS SSDs. Server OEMs are finally delivering good server platforms for NVMe drives, which will help their adoption tremendously. Since the price of NVMe drives is just 5-10% higher than that of datacenter-grade SATA SSDs, we expect adoption to pick up strongly during 2018: for a small premium, customers will get 2-3x the performance and half the latency.
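As a rough illustration of that price/performance argument, the back-of-the-envelope Python sketch below uses invented drive figures that merely echo the percentages above (they are assumptions, not vendor pricing) and compares dollars paid per thousand IOPS:

```python
# Illustrative comparison of SATA vs NVMe SSDs.
# All figures are assumptions chosen to match the rough ratios quoted above.

sata = {"price_usd": 1000, "iops": 90_000, "latency_ms": 0.20}
nvme = {"price_usd": 1100, "iops": 225_000, "latency_ms": 0.10}  # ~10% dearer, ~2.5x IOPS, half latency

def price_per_kiops(drive):
    """Dollars paid per 1,000 IOPS of performance."""
    return drive["price_usd"] / (drive["iops"] / 1000)

print(f"SATA: ${price_per_kiops(sata):.2f} per 1k IOPS")
print(f"NVMe: ${price_per_kiops(nvme):.2f} per 1k IOPS")
print(f"NVMe delivers ~{nvme['iops'] / sata['iops']:.1f}x the IOPS for "
      f"{(nvme['price_usd'] / sata['price_usd'] - 1) * 100:.0f}% more money")
```

On these assumed numbers, the cost per unit of performance roughly halves, which is the economic case behind the prediction.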
Flash and DRAM shortage to alleviate
During 2017 the Flash and DRAM shortage led to increased costs for computer components which use Flash and DRAM, i.e. SSDs and DIMM modules. Delivery times were extended and varied significantly between vendors - from a couple of days to over a month, sometimes longer for larger quantities.
During 2018 new production capacities are coming into operation and we expect the shortage to be relieved. During Q2 and Q3 2018 we expect the price of NAND Flash to drop moderately. This will further lower the cost for SSDs and Flash-based storage systems and accelerate their adoption.
Hybrid storage systems will continue to decline
With the decreasing cost of Flash, SSD-HDD hybrids will make less and less sense. HDDs will be moved down to serve strictly capacity use-cases (where $/GB is more important than performance). Performance use-cases will be satisfied by Flash-based systems. Flash-based storage systems will also grow in size and scope, utilizing the newly-found efficiency of low cost NAND Flash.
High performance storage systems based on NVMe SSDs will start to gain wider adoption. With the right storage system partner it now becomes possible to have a shared storage system as fast as or faster than local SSDs.
The divergence of storage systems into two distinct categories will continue. The two categories are systems which focus only on performance (disregarding other aspects) and systems (like StorPool) which provide high performance, but also end-to-end data integrity and the full spectrum of data features (HA, scalability, snapshots, multi-site, etc).
SDS (Software-defined storage) and DS (Distributed Storage) - accelerated adoption
While in 2016 SDS was the second option in new projects (the first was either a traditional SAN or an All-Flash Array), in 2017 SDS became the first option companies reviewed. We now see spill-over of SDS and DS into the datacenters and we expect it to account for 30-50% of new deployments in 2018, with accelerating growth in 2019.
Also SDS is usually adopted in relatively larger projects (50 TB+), so we expect more than 50% of deployed capacity in 2018 to be SDS powered.
SDS is also the fundamental storage layer of HCI (Hyper-Converged Infrastructure) solutions, and there it replaces low- to mid-range SANs and all-flash arrays.
SDS consolidation
The concept of “unified storage” (block, file and object, in one product) is being demystified. As good as this concept sounds, it is an elusive promise. In practice, storage software needs very different architecture and implementation to be very good in each one of these use cases.
In 2017 we saw initial segmentation, where many vendors started to be more precise about what they actually do and to focus on the use cases they are strong at - be it block, file or object. Customers and solution providers now seem to understand that they are likely to end up with 2 or 3 best-of-breed solutions, one for each storage layer, to get the job done well.
We expect this search for "best of breed" to continue in 2018. And as the technology matures, we will start to see some clear leaders on a per-use-case and per-technology-stack basis.
DR (Disaster Recovery) and workload mobility in growth mode
Integrated multi-site capabilities for DR, workload mobility and backup are set to boom in 2018. This was somewhat of a surprise, since most companies have backup figured out and one would imagine that DR was part of this plan. However, increasing application complexity and business demands are driving a new wave of DR requirements, where storage systems can deliver nearly zero RTO and RPO.
Compute
Heterogeneous computing - the concept of using general-purpose CPUs alongside compute resources better suited to the task at hand, such as GPUs and FPGAs - might start gaining wider adoption outside of the traditional applications it has been used for: video and image processing (and now, increasingly, bitcoin and cryptocurrency mining). The main driver for this is machine learning and analytics over large pools of structured and unstructured enterprise data.
Machine learning, AI (Artificial Intelligence), IoT (Internet of Things) and the analytics applications they require will become a major driver for new IT infrastructure.
CPUs
Intel Xeon Scalable is being used in nearly all new IT infrastructure projects (public clouds, on-prem clouds and others). It brings approximately 50% higher compute density (cores per server, GB RAM per server) and considerable power efficiency gains over the previous generation. This in turn brings even better overall economics to virtualized infrastructure.
The AMD Zen microarchitecture (Epyc) puts AMD back in the game, with the recently announced deployment in Microsoft Azure. We expect to see further adoption in 2018.
Some companies are experimenting with ARM and Power architecture servers. However, we do not consider this mass market yet, so this 2017 prediction of ours fell short.
Our expectation is that the use of ARM and Power architecture CPUs will remain very limited to specific applications. ARM servers are better suited for batch processing (throughput-driven) applications than for interactive/online (latency-driven) applications.
Qualcomm has recently entered the server CPU market with their 48-core CPU.
Virtualization stacks
The virtualization stack is ripe for reinvention. On the one hand, KVM is gaining market share at everyone else's expense. We expect this trend to accelerate during 2018, with small and large vendors offering HCI and rack-level architecture cloud solutions based on KVM. On the other hand, containers, especially those managed by Kubernetes, are spreading fast. We see potential for reinvention of the entire virtualization stack.
We project the decreasing market share of other hypervisors to accelerate in 2018.
Networks
25/50/100G Ethernet is continuing to gain adoption over 10G Ethernet. We predict that during 2018 more than 50% of new datacenter networks will use 25/50/100G instead of the older 10/40G standards.
Intel OmniPath has moved from being seldom used to a market leader in HPC interconnects, because the OmniPath adapter is now integrated into Intel's Xeon CPUs. During 2018, we expect most new RDMA fabrics in HPC to be OmniPath. Some deployments outside of HPC might choose it due to better cost/performance versus Ethernet, but only in large-scale, multi-rack cases where the application demands extreme performance between compute nodes.
Infiniband is not going away. It is continuing to be used in new HPC deployments. Outside of HPC, we don't see a compelling case for Infiniband during 2018.
Data growth and applications
According to a Seagate-commissioned study by IDC, the global datasphere will grow to 163 zettabytes by 2025 (a zettabyte is a trillion gigabytes). That's ten times the 16.1ZB of data generated in 2016. This growth comes mainly from unstructured data, although structured data and core business applications continue to add significantly.
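For context, the two figures quoted above imply a steep but steady compound growth rate. The short Python sketch below simply turns 16.1ZB (2016) and 163ZB (2025) into an annual rate:

```python
# Implied compound annual growth rate (CAGR) from the IDC figures quoted above:
# 16.1 ZB in 2016 growing to 163 ZB by 2025.

start_zb, end_zb = 16.1, 163.0
years = 2025 - 2016  # nine years of growth

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 29% per year
```

In other words, a tenfold increase over nine years works out to data volumes growing by almost a third every year.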
As much of this data is created in specialized platforms (social media, specific hardware manufacturers, etc.), we expect the rise of a new crop of large private clouds, which will become a good target market for IT infrastructure providers.
Consolidation & regulation
We expect a new wave of consolidation in 2018, especially between server vendors and storage players.
Cloud adoption keeps increasing, although, according to a study from IDC, 25.09% of spending currently goes on building public clouds. There is an ongoing trend of cross-regional consolidation between second- and third-tier cloud providers, which we expect to continue.
On the other hand, the General Data Protection Regulation (GDPR), which takes effect on 25 May 2018 in the European Union, will stir the public, private and hybrid cloud markets and will make local markets more segmented. We already see datacenter and cloud providers shifting workloads to newly opened local facilities in bigger markets, so they can store and serve data locally.
Asian vendors are picking up
We already see actions from leading Asian vendors like Alibaba, Huawei, AIC, NEC, etc. to compete head-to-head with their US counterparts. We expect this trend to deepen in 2018, fuelled by significant cash and human resources dedicated to it by Asian vendors and their increasingly global vision.
2018 promises to be an exciting year for storage and IT infrastructure. There are some master trends underway in the industry (regulation with GDPR and the increasing strength of Asian vendors) which have the potential to swirl the markets and make 2018 a year unlike any of the recent ones. So instead of incremental shifts, there may be some tectonic shifts. Let's welcome 2018 and see what happens.
It’s worth investing in your legacy applications, hybrid cloud and in a data acceleration solution. Unlike oil and water, they all offer an optimising mix of solutions that will save you time and money.
By David Trossell, CEO and CTO of Bridgeworks.
Scott Jeschonek, Director of Cloud Solutions at Avere Systems, thinks that while oil and water don't mix, legacy and cloud can. In spite of the hype about moving applications to the cloud and about turning legacy applications into cloud natives, he finds that legacy systems are alive and well, and he believes they aren't going to go anywhere anytime soon:
“Though the cloud promises the cost savings and scalability that businesses are eager to adopt, many organisations are not yet ready to let go of existing applications that required massive investments and have become essential to their workflows.”
He adds that re-writing mission-critical applications for the cloud is often inefficient: the process is lengthy and financially costly. There can also be unexpected issues that arise from moving applications to the cloud, and these will vary from one firm to the next. Top of the list is the challenge of latency. "Existing applications need fast data access, and with storage infrastructure growing in size and complexity, latency increases as the apps get farther away from the data. If there isn't a total commitment to moving all data to the cloud, then latency is a guarantee", he writes.
He mentions other challenges in his article for Data Center Dynamics, 'Unlike oil and water, legacy and cloud can mix well'; these include mismatched protocols and the amount of time required to re-write software applications to conform with cloud standards. With regard to mismatched protocols, he says legacy applications typically use standard protocols such as NFS and SMB for network-attached storage (NAS). These are "incompatible with object storage, the architecture most commonly used in the cloud." To many, this makes migrating to the cloud a daunting prospect, but it needn't be.
The ideal situation would be to keep the familiarity of legacy applications and their associated time efficiencies. Few professionals are keen to embrace new technologies at the expense of these benefits, because they often have limited experience of them. So in many respects it makes sense to use the existing applications. If they aren't broken, then why fix or replace them? Unless there is a dire need to replace existing infrastructure and applications, there is no reason to buy the latest and greatest new technology on the premise that it's the newest must-have on the block.
“That said, nothing is stopping you from moving applications to the cloud as-is”, he says before adding: “While enterprises may still choose to develop a plan that includes modernisation, you can gradually and non-disruptively move key application stacks while preserving your existing workflow.”
To avoid the time and money spent re-writing applications, he recommends cloud-bursting. This can be achieved with hybrid cloud technologies, which permit legacy applications to run on servers "with their original protocols while communicating with the cloud". Often an application programming interface (API) will be used to connect the two for this purpose.
Cloud-bursting solutions can let legacy applications "run in the datacentre or in a remote location while letting them use the public cloud compute resources as needed", he says. The majority of the data can remain on-premise too. This reduces the risk and minimises the need to move large files, saving time. This approach makes life easier for IT, and from an organisational perspective faster times to market can be achieved. As a utility model is used with the cloud, organisations only pay for what they use – allowing them to focus on their core business while gaining financial agility.
Cloud storage can also be used to back up data. Backing up is like an insurance policy: it might seem like an unnecessary expense, but the cost of downtime, as experienced recently by British Airways, can be far more prohibitive. The Telegraph reported on 29th May 2017 that there is 'Devotion to cost-cutting 'in the DNA' at British Airways'. The journalist behind the article, Bradley Gerrard, also wrote: "The financial cost of the power outage is set to cost the airline more than £100m, according to some estimates. Mr Wheeldon expected it to hit £120m, and he suggested there could also be a 'reputational cost'." Some experts claimed that human error was behind the downtime.
In any case, cloud back-up is a necessity. "By using cloud snapshots or more comprehensive recovery from a mirrored copy stored in a remote cloud or private object location, the needed data is accessible and recoverable while using less expensive object storage options," claims Jeschonek. He therefore thinks there are options to use a mixture of legacy systems and cloud solutions to save time and money. The cloud offers additional back-up benefits too.
An article that appears on HP Enterprise’s website, ‘Cloud Control: 4 steps to find your company’s IT balance’, talks about the findings of a report by 451 Research: ‘Best Practices for Workload Placement in a Hybrid IT Environment’. It finds that companies must account for cost, business conditions, security, regulation and compliance. The report also notes that 61% of the respondents anticipated “spending less on hardware because they are shifting from traditional to on-premise clouds.” It adds that organisations are subsequently cutting their spending on servers, storage and networking.
Curt Hopkins, a staff writer for HPE magazine and the author of the article, agrees that huge costs can be incurred when moving non-cloud infrastructure to the cloud. “If you go with the public cloud, you will need to find a provider whose costs are affordable.” He adds that if you wish to “create your own private cloud, the cost of the servers on which to run it is not inconsequential.”
With old workloads you may have to plough through years and years of old documentation too. So before you move anything to the cloud, he advises undertaking a full total cost of ownership assessment. This will require you to factor in capital and operational costs, as well as training and personnel considerations.
At the end of the day, he’s right to suggest that it’s all about finding the right IT balance. Hybrid IT, in the form of hybrid cloud, is the most appropriate way to achieve it. “Finding your IT balance is not a zero-sum game; you don’t have to choose legacy IT, public cloud or private cloud…you can mix those options based on your workloads”, he says. To find the right balance he stresses that there is a need to undertake a cost-benefit analysis, and he thinks balance can be found “in the interplay between your primary business considerations.” This therefore requires you to also evaluate your costs, security, agility and compliance as a whole to gain a complete picture of the costs and benefits.
A report by Deloitte, 'Cloud and infrastructure – How much PaaS can you really use?', argues that the past was about technology stacks. It says the present situation favours infrastructure-as-a-service (IaaS), but the future is about platform-as-a-service (PaaS). It argues that PaaS is the future because many organisations are creating a new generation of custom applications. The key focus therefore seems to be on software developers, and on the ability of organisations to manage risk in new development projects.
Significantly, it says: "The widening gap between end user devices, data mobility, cloud services, and back office legacy systems can challenge the IT executive to manage and maintain technology in a complex array of delivery capabilities. From mobile apps to mainframe MIPs, and from in-house servers to sourced vendor services, managing this broad range requires a view on how much can change by when, an appropriate operating model, and a balanced perspective on what should be developed and controlled, and what needs to be monitored and governed." Unfortunately, the report makes no mention of whether any particular cloud model is right or wrong for managing legacy applications.
Cloud can nevertheless mix well with legacy applications, but there should also be some consideration of what your organisation can do with its existing infrastructure. Cloud back-up is advisable, but increasing your network bandwidth won't necessarily mitigate the effects of latency. Nor, for that matter, will rationalising your networking costs by reducing network performance.
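One way to see why extra bandwidth alone doesn't cure latency is the standard rule of thumb that a single TCP stream is limited to roughly its window size divided by the round-trip time. The illustrative Python sketch below assumes a classic 64KB window and a few example round-trip times, and ignores window scaling and parallel streams; it is a simplification, not a measurement:

```python
# Why adding bandwidth alone does not fix latency: a single TCP stream is capped
# at roughly window_size / round_trip_time, whatever the link speed.
# Figures below are illustrative assumptions.

def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """Approximate per-stream throughput ceiling in Mbit/s."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

window = 64 * 1024          # a classic 64 KB TCP window (no window scaling)
for rtt in (5, 25, 80):     # ms: metro, national, intercontinental paths
    print(f"RTT {rtt:3d} ms -> ~{max_tcp_throughput_mbps(window, rtt):7.1f} Mbit/s ceiling")

# Even on a 10 Gbit/s circuit, the 80 ms path tops out around 6.5 Mbit/s per stream.
```

The ceiling falls as the round trip lengthens, which is why latency mitigation, rather than raw bandwidth, is the lever that matters over distance.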
With machine learning, for example, it becomes possible to offer data acceleration with a product such as PORTrockIT. By using machine intelligence, you can mitigate the effects of data and network latency in a way that can’t be achieved with WAN optimisation. Cloud back-ups, as well as legacy and cloud applications that interconnect with each other can work more efficiently with reduced latency.
More to the point, while this is innovative technology, it enables you to maintain your existing infrastructure and so it can reduce your costs. With respect to disaster recovery, a data acceleration tool can improve your recovery time objectives, enabling you to keep operating whenever disaster strikes. While traditionally data was placed close together to minimise the impact of latency, with a machine learning data acceleration solution your cloud-based disaster recovery sites can be placed far apart from each other to ensure business and service continuity without falling victim to human error. So it's worth investing in your legacy applications, hybrid cloud and a data acceleration solution. Unlike oil and water, they all offer an optimising mix of solutions that will save you time and money.
According to Cisco, 78% of all workload processing will take place in the cloud by the end of 2018. Some two-thirds of that workload will be Software-as-a-Service. In addition, upcoming innovations such as smart energy grids, autonomous driving and healthtech all depend on availability of low-latency, ultra-fast, secure cloud services.
By Dr. Thomas Wellinger, Market Manager Data Centre R&M.
The adoption of cloud computing, driven by a vast uptake of mobile applications, portable computing and remote working, is pushing towards higher bandwidths and port densities. Increasing numbers of users, with diverse requirements, expect uninterrupted service as they access more and more data and applications. This, in turn, is changing the way in which data centres are designed and operated. Data centres need to offer higher bandwidth and more efficient network architectures to provide adequate cloud functionality and manage traffic spikes, which, given the number of concurrent users, can be vast.
To guarantee the high bandwidth, low latency and high uptime that cloud computing requires, networks are being organised in new and different ways. Traditionally, data centre traffic has mainly consisted of client-to-server interactions (north-south), but today network traffic in large data centres is primarily server-to-server (east-west) traffic for cloud computing applications. The three-level tree network architecture commonly used in the past is built to accommodate client-to-server transmission but is not as effective for server-to-server applications, as it introduces latency and uses up bandwidth. Data centres are re-evaluating their architectures, right down to the placement of switches.
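As a simple illustration of why the tree design struggles with east-west traffic, the hypothetical Python sketch below compares worst-case switch hops between two servers in a classic three-tier tree and in the flatter leaf-spine fabrics many operators are moving to. The topologies are illustrative assumptions, not a model of any particular facility:

```python
# Illustrative hop-count comparison for east-west (server-to-server) traffic.

def three_tier_hops(same_access_switch, same_aggregation_pair):
    """Worst case is access -> aggregation -> core -> aggregation -> access."""
    if same_access_switch:
        return 1                  # switched locally at the access layer
    if same_aggregation_pair:
        return 3                  # up to aggregation and back down
    return 5                      # traffic must traverse the core

def leaf_spine_hops(same_leaf):
    """Any two servers are at most leaf -> spine -> leaf apart."""
    return 1 if same_leaf else 3

print("Three-tier tree, worst case:", three_tier_hops(False, False), "switch hops")
print("Leaf-spine fabric, worst case:", leaf_spine_hops(False), "switch hops")
```

Fewer hops between any two servers means lower and more predictable latency for the server-to-server flows that dominate cloud workloads.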
The further users are from metropolitan areas, the more high-bandwidth applications may suffer. 'Edge data centres' move the 'edge' of the internet away from traditional internet hubs. Frequently referenced applications and content are cached on servers closer to less densely networked or 'tier-two' markets. This improves the quality of high-bandwidth applications and improves the user experience for cloud services and mobile computing. However, building an edge network is different to building a 'traditional' network. Edge DCs are generally located in confined spaces, and cabling from servers is often directly connected to a fibre platform in a central network cabinet. Fibres must accommodate cable twisting, moves, adds and changes, and data has to pass through cables at sharp angles without quality loss.
High port density – in excess of 100 ports per rack unit - is essential to accommodating bandwidth-hungry high-density services. Fibres are brought directly from server ports to an ultra-high-density platform, which can accommodate up to 50% more fibre optic connections inside a traditional housing. Although cables need to have a significantly higher fibre count, handling should be no different to handling smaller cables, and termination should be as easy as possible. Poor cable management may result in signal interference and crosstalk, damage and failure. Adhering to good practices is vital to avoiding performance issues, data transmission errors and downtime. Edge data centres can be widely distributed, so automated asset management and tracking is a prerequisite. High port density is also key to a successful rollout – traditional '72 ports per unit' UHD solutions won't suffice. 'Edge' or 'access' switches connect directly to end-user devices. If there's ample port capacity, users can simply - and cost-effectively - re-patch devices themselves.
Today, the average edge data centre surface area is around 800 m2, with thousands of network ports. Still, many network managers carry out inventory and management of physical infrastructure with Excel sheets - or even paper, pencil and post-its. A well-specified DCIM system can help match IT and operational requirements and protocols to capacity planning and needs. Cloud is driving a huge demand for connectivity, availability, storage and server performance. As a result, numerous new systems are being built and existing systems are being retrofitted. That requires a great deal of care, as the foundation for today's and tomorrow's cloud services needs to be as reliable and robust as possible.
Many businesses are yet to understand the sheer scale and breadth of changes their company data processing policies will need to undergo to comply with the general data protection regulation (GDPR). Here, Paul Hanson, director of IT consultancy eSpida, explains how the regulation will impact multinational businesses — and how they must prepare themselves.
Benjamin Franklin once said, “By failing to prepare, you are preparing to fail”. This statement will ring especially true for multinational businesses in the coming months as the GDPR comes into force across the European Union (EU).
By uniting 28 different EU member state laws under one data protection law, GDPR is set to harmonise data protection laws throughout the EU, giving greater rights to individuals.
Taking effect on May 25, 2018, the GDPR means every business will need to alter its existing procedures to ensure the correct compliance mechanisms are in place. Failure to comply with the regulation can result in costly penalties of four per cent of global annual turnover or €20 million, whichever value is greater.
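For illustration only, the "whichever is greater" rule is straightforward to express; the short Python sketch below applies it to a few hypothetical turnover levels (the turnover figures are invented, and any real exposure would depend on the regulator's assessment):

```python
# The GDPR penalty ceiling described above: the greater of EUR 20 million
# or 4% of worldwide annual turnover. Turnover figures are hypothetical.

def max_gdpr_fine_eur(annual_turnover_eur):
    return max(20_000_000, 0.04 * annual_turnover_eur)

for turnover in (100e6, 500e6, 5e9):
    print(f"Turnover EUR {turnover/1e6:>6.0f}m -> maximum fine EUR "
          f"{max_gdpr_fine_eur(turnover)/1e6:.0f}m")
```

The fixed €20 million floor dominates for smaller firms, while the 4% term is what exposes the largest multinationals to far bigger penalties.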
Non-compliant businesses could also be faced with bans or suspensions on processing data, in addition to the risk of class actions and criminal sanctions.
To enforce the regulation, each country will have its own national data protection act (DPA) regulator that will oversee and manage any breaches. Since the announcement of GDPR, businesses operating in multiple EU countries have frequently asked how an authority will be chosen to take enforcement action if they are found non-compliant with the regulation, or whether an authority from each EU affiliate would take action.
If a business has conducted non-compliant cross-border data processing activities, only one national DPA regulator must act on the complaint. Where a business's data controller operates in multiple EU countries, the DPA regulator that takes action must be located in the same country as the organisation's main establishment, or where its central administration takes place.
Non-EU affiliates of a multinational business will also be impacted by the GDPR, depending on whether the data is accessible from one central system to affiliates across the globe. Companies operating on this scale will need to have a clear understanding of how data flows in the company to ensure that cross-border data transfers are compliant.
This is just one example of how GDPR is introducing formal processes for issues not previously covered by the DPA. Another area that the ruling focusses on is when a data breach occurs.
In 2016, it was revealed that Yahoo had suffered a cyberattack that resulted in three billion users having their account details leaked. What was appalling to the public, however, was that the attack had taken place three years prior to the incident being reported.
Unfortunately, this is not an isolated incident. In 2017, Uber revealed that data of its users had been held to ransom by hackers in 2016, prompting similar backlash to the Yahoo breach.
Under GDPR, companies are required to report a breach within 72 hours of its discovery. This includes notifying the country's DPA regulator, which in the UK is the Information Commissioner's Office (ICO), and the people it impacts. Businesses should also consider taking additional steps to avoid the detrimental impact cyber breaches can have on their employees and customers.
Identity management is one example; it allows companies to restrict access to certain resources within a system. Identity management can define what users can accomplish on the network depending on varying factors, including the person's location and device type.
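As a purely hypothetical sketch of the kind of rule such a system might evaluate, the Python snippet below combines resource sensitivity, location and device type into an allow/deny decision. The names, locations and device labels are invented for illustration:

```python
# Hypothetical conditional-access rule: who may reach what, from where, on which device.

TRUSTED_LOCATIONS = {"office", "vpn"}
MANAGED_DEVICES = {"corporate-laptop", "corporate-mobile"}

def allow_access(resource_sensitivity, location, device):
    """Return True if the request satisfies this illustrative policy."""
    if resource_sensitivity == "low":
        return True                       # low-sensitivity data is open to all staff
    if location not in TRUSTED_LOCATIONS:
        return False                      # sensitive data never leaves trusted networks
    return device in MANAGED_DEVICES      # and only on managed devices

print(allow_access("high", "office", "corporate-laptop"))    # True
print(allow_access("high", "coffee-shop", "personal-phone")) # False
```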
With the rise in cloud computing among businesses, extra measures should also be taken to safeguard this data. A survey found that 41 per cent of businesses were using the public cloud for their work, with 38 per cent on a private cloud network. By implementing security measures like encryption software, businesses can prevent unauthorised access to digital information.
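As a minimal illustration of encrypting a record before it is handed to cloud storage, the sketch below uses the Fernet recipe from the open-source cryptography package (assuming the package is installed; a real deployment would manage keys far more carefully, for example in a secrets manager or HSM):

```python
# Minimal sketch of symmetric encryption of a record before it leaves the business.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, keep this in a secrets manager, never alongside the data
cipher = Fernet(key)

record = b"customer_email=jane@example.com"   # invented example record
token = cipher.encrypt(record)       # this ciphertext is what would be written to cloud storage
print(cipher.decrypt(token))         # only holders of the key can recover the original data
```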
Taking these precautionary steps is particularly necessary for businesses with more than 250 employees. This is because a business of this size, following the introduction of GDPR, must detail what information it is collecting and processing. This includes how long the information will be stored for and what technical security measures are in place to safeguard it.
In addition to identity management and encryption software, businesses can also consider various other security tools for their systems, including anti-ransomware, exploit prevention and access management.
Another notable change is that companies which carry out regular and systematic monitoring of individuals' data, or process a vast amount of sensitive personal data, will now be required to employ a data protection officer (DPO). Sensitive data refers to genetic data and personal information such as religious and political views.
GDPR will have a wide-ranging impact on multinational businesses. Although some may be more prepared than others, each business's position on GDPR compliance is different, and no one solution suits all. By investing in GDPR compliance specialists like eSpida, businesses can avoid the costly fines that result from discrepancies with the regulation.
It's fair to say that the GDPR is the most meaningful change in data privacy law since such laws were first established over twenty years ago. Despite currently applying only in the EU, many believe it will spark a revolution across the globe in the protection of individuals' data.
Businesses must prioritise updating their current systems to ensure their processing policies are compliant with the GDPR. Depending on their current position, some businesses will need more preparation than others. For example, not every business will be required to employ a DPO, but some may need to reorganise their HR teams to help enforce GDPR compliance across the company.
With May just around the corner, businesses that have not already started preparing need to act now to avoid financial penalties and reputational repercussions.
There's no denying that we are living in a digital world, one in which Big Data and the Internet of Things (IoT) are impacting, arguably, every aspect of our lives. From being able to see who's ringing your doorbell from wherever you are in the world, to black boxes on car dashboards.
By Darren Watkins, managing director for VIRTUS Data Centres.
However, this isn’t just about what consumers want. Beyond the headlines of connected devices - and customer behaviour analysis - IoT and Big Data are being used to solve increasingly complex business problems. All businesses, whether born digital or going digital, are turning to IoT technology to manage the connections, devices and applications that underpin their organisations. Automated workflows - which have long been a watchword of manufacturing business strategy - are being embraced by many disparate companies.
As we know, all things digital generate massive amounts of data, and IoT and big data are clearly intimately connected: the IoT generates 'big data', which must then be processed to turn all of the information gathered into something useful, actionable and, sometimes, automated. As well as enabling us to do more things, quickly, IoT provides a wealth of data which - with compute processing and intelligence - can generate invaluable insight for organisations to use to improve products, services, efficiencies and, ultimately, revenues.
Of course, everything has a knock-on effect. So, are existing business systems ready to cope with the intense pressure that IoT and big data bring?
Their impact is already being felt all along the technology supply chain and in response, CIOs have placed increasing pressure on IT infrastructure and service providers to help fulfil these mounting business requirements. IT departments need to deploy more forward-looking capacity management to be able to proactively meet the business priorities associated with IoT connections. And big data processing requires a vast amount of storage and computing resources.
Increasing end-user demands have forced data centres to evolve and keep pace with the changing business requirements asked of them, placing them firmly at the heart of the business. Apart from being able to store IoT-generated data, the ability to access and process it as meaningful, actionable information - very quickly - is vitally important, and will give huge competitive advantage to those organisations that do it well.
Historically, for a data centre to meet increasing requirements it would simply add floor space to accommodate more racks and servers. However, the growing needs for IT resources and productivity have come hand in hand with demand for greater efficiencies, better cost savings and lower environmental impact. Third party colocation data centres have increasingly been looked at as the way to support this growth and innovation, rather than CIOs expending capital to build and run their own on-premise capability.
High Performance Computing (HPC) aggregates computing power to deliver much higher performance for solving complex science, engineering and business problems. Once seen as the preserve of the mega-corporation, it is now being looked at as a way to redress this IT budget/performance dichotomy. It requires data centres to adopt High Density and Ultra High Density strategies in order to maximise productivity and efficiency, increasing the available power density and the ‘per foot’ computing power of the data centre.
Industry views around High Density vary widely. Data centres built as recently as a few years ago were designed with a uniform power distribution of around 2 to 4 kilowatts (kW) per IT rack. Some even added ‘high density zones’ capable of scaling up if required, but many of these needed additional footprint around the higher power racks to balance cooling capability, or supplemental cooling equipment that raised the cost of supporting the increased kW density.
Gartner recently defined a high density capability as one where more than 15kW per rack is needed for a given set of rows, but this threshold is being revised upwards all the time, with some HPC platforms now requiring power in the 30-40kW range, sometimes referred to as Ultra High Density.
Ultra High Density gives customers a financial advantage: because the data centre is built to operate at high density without any supplementary support technology, the cost per kW falls as the density within the rack increases. The more densely computing power can be stacked in a rack, the better the data centre space can be optimised and offered to the customer, making a high density deployment significantly more cost effective. High and Ultra High Density is particularly attractive for certain industry requirements:
› Cloud service providers
› Digital media workload processing
› Big data research
› Core telecommunications network solutions.
Making HPC capabilities accessible to a wider group of organisations in sectors such as education, research, life sciences and government requires high density solutions that, through greater efficiency, lower the total cost of the increased compute power. HPC is therefore a vital component in enabling organisations of varying sizes to benefit affordably from greater processing performance. Indeed, the denser the deployment, the more financially efficient it becomes for the customer.
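To illustrate why cost per kW falls as rack density rises, the short sketch below compares the space-related cost per kW of housing the same IT load at traditional, high density and ultra high density rack loadings. All prices and densities are assumed figures invented for the sake of the arithmetic, not quoted VIRTUS or market rates.

```python
import math

# Illustrative comparison of space-related cost per kW at different
# rack densities (all figures are made-up assumptions).

def cost_per_kw(it_load_kw: float,
                rack_density_kw: float,
                cost_per_rack_footprint: float,
                fixed_site_overhead: float) -> float:
    """Fewer racks are needed as density rises, so the same IT load
    spreads the space cost over fewer footprints."""
    racks_needed = math.ceil(it_load_kw / rack_density_kw)
    total_cost = racks_needed * cost_per_rack_footprint + fixed_site_overhead
    return total_cost / it_load_kw

it_load = 400  # kW of IT load to house
for density in (4, 15, 35):  # traditional, high density, ultra high density
    print(f"{density} kW/rack -> {cost_per_kw(it_load, density, 12_000, 100_000):,.0f} per kW")
# 4 kW/rack  -> 3,250 per kW
# 15 kW/rack -> 1,060 per kW
# 35 kW/rack -> 610 per kW
```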
Many data centres will claim to deliver high-density computing – and technically speaking, a lot of them will – but only data centres that have been built from the ground up with high density in mind will be able to do so cost-effectively.
With the Internet of Things and Big Data quickly becoming a reality, organisations across industries will need to ensure that their IT systems are ready and able to deal with the next generation of computing and performance needs if they are to remain competitive and cost efficient. It is more important than ever to conduct due diligence before signing up with a data centre provider, to avoid being tied into costly long-term contracts that meet neither current nor future needs.
And although these innovative technologies can seem expensive, for many organisations the real limits are complexity and capacity. The benefits of IoT and big data will only come to fruition if businesses can run analytics that, as data volumes grow, have become too complex and time critical for ordinary enterprise servers to handle efficiently.
At VIRTUS, we believe that getting the data centre strategy right gives a company an intelligent and scalable asset that enables choice and growth; get it wrong and it becomes a fundamental constraint on innovation. So organisations must ensure their data centre strategy is ready and able to deal with the next generation of computing and performance needs, to remain not only competitive and cost efficient but also ready for exponential growth.