If I had a bitcoin for every time I’ve heard vendors and Channel companies express the desire to become a ‘trusted advisor’ to their customers then, depending on when I’d sold them, I would be either very wealthy or retirement wealthy! It seems that every organisation involved in the data centre and IT supply chain recognises that it can no longer survive as simply a seller/supplier of software, hardware, data centre space and the like. No, customers want someone to hold their hand as they seek to negotiate as pain-free a route as possible through the digital transformation maze.
Unfortunately, long before the digital world reared its exciting, disruptive and slightly scary head, the trusted advisor term had fallen into disrepute, as most of those companies who claimed they wanted to become this to all their customers spectacularly failed to live up to their stated objective.
The temptation to fit square pegs into round holes in return for money was too great, hence many, many end users, still massaging their bruised egos and wallets, are now somewhat sceptical about the spectacular claims being made about anything from colos to Cloud, managed services to software-defined everything and, most recently, IoT and AI.
Anyone running a business has seen how technology-savvy start-ups have made significant in-roads into many industry sectors – leveraging new thinking and new ways of looking at and harnessing data centre and IT infrastructure to their advantage.
So, few if any end users dispute the need for infrastructure change.
However, many feel vulnerable, as their lack of knowledge leaves them prey to unscrupulous suppliers who might pay lip service to the trusted advisor role, but are still rather too heavily driven by the lure of profit, whatever the cost to the customers.
The above might be a slightly harsh summary of the status quo, but there’s little doubt that the majority of end users are desperately seeking one or more suppliers whom they really can trust to help them obtain the business solutions they need, and not simply sell them a bunch of new servers and a stack of storage.
The Channel perfectly reflects this confusing landscape. Many of their staff are struggling to understand how to sell solutions, with monthly payments, as opposed to tin and software with a one-off, front end capital payment. And many Channel companies aren’t sure whether to try and create their own Clouds and managed services, or leverage those available from, say, the hyperscalers, and then add some value.
As with any historical disruption (and I don’t think it’s going too far to place the current digital revolution somewhere in the history books as a significant, global event), there will be winners and losers. The winners will be the ones who understand what digitalisation is all about and who can use it, or help others to use it, to solve real business problems. End users know they have to adapt, and there really is a great opportunity not so much for the vendors (no one organisation can supply the complete, end-to-end digital solution) but the Channel to become the change facilitator.
Words such as ‘trust’ and ‘honesty’ have taken a bit of a battering across various walks of life in recent times (not least on the cricket field), so let’s hope that all those involved in selling data centre and IT solutions can be honest enough to tell potential customers whether they can, or can’t, help them. And let’s hope that switched-on Channel organisations recognise that putting together a portfolio of solutions from different colos, vendors, managed services providers etc. could well put them in a strong position to offer end users the trust, honesty and help they are wanting.
Worldwide spending on public cloud services and infrastructure is forecast to reach $160 billion in 2018, an increase of 23.2% over 2017, according to the latest update to the International Data Corporation (IDC) Worldwide Semiannual Public Cloud Services Spending Guide. Although annual spending growth is expected to slow somewhat over the 2016-2021 forecast period, the market is forecast to achieve a five-year compound annual growth rate (CAGR) of 21.9% with public cloud services spending totaling $277 billion in 2021.
The industries that are forecast to spend the most on public cloud services in 2018 are discrete manufacturing ($19.7 billion), professional services ($18.1 billion), and banking ($16.7 billion). The process manufacturing and retail industries are also expected to spend more than $10 billion each on public cloud services in 2018. These five industries will remain at the top in 2021 due to their continued investment in public cloud solutions. The industries that will see the fastest spending growth over the five-year forecast period are professional services (24.4% CAGR), telecommunications (23.3% CAGR), and banking (23.0% CAGR).
"The industries that are spending the most – discrete manufacturing, professional services, and banking – are the ones that have come to recognize the tremendous benefits that can be gained from public cloud services. Organizations within these industries are leveraging public cloud services to quickly develop and launch 3rd Platform solutions, such as big data and analytics and the Internet of Things (IoT), that will enhance and optimize the customer's journey and lower operational costs," said Eileen Smith, program director, Customer Insights & Analysis.
Software as a Service (SaaS) will be the largest cloud computing category, capturing nearly two thirds of all public cloud spending in 2018. SaaS spending, which is comprised of applications and system infrastructure software (SIS), will be dominated by applications purchases, which will make up more than half of all public cloud services spending through 2019. Enterprise resource management (ERM) applications and customer relationship management (CRM) applications will see the most spending in 2018, followed by collaborative applications and content applications.
Infrastructure as a Service (IaaS) will be the second largest category of public cloud spending in 2018, followed by Platform as a Service (PaaS). IaaS spending will be fairly balanced throughout the forecast with server spending trending slightly ahead of storage spending. PaaS spending will be led by data management software, which will see the fastest spending growth (38.1% CAGR) over the forecast period. Application platforms, integration and orchestration middleware, and data access, analysis and delivery applications will also see healthy spending levels in 2018 and beyond.
The United States will be the largest country market for public cloud services in 2018 with its $97 billion accounting for more than 60% of worldwide spending. The United Kingdom and Germany will lead public cloud spending in Western Europe at $7.9 billion and $7.4 billion respectively, while Japan and China will round out the top 5 countries in 2018 with spending of $5.8 billion and $5.4 billion, respectively. China will experience the fastest growth in public cloud services spending over the five-year forecast period (43.2% CAGR), enabling it to leap ahead of the UK, Germany, and Japan into the number 2 position in 2021. Argentina (39.4% CAGR), India (38.9% CAGR), and Brazil (37.1% CAGR) will also experience particularly strong spending growth.
The U.S. industries that will spend the most on public cloud services in 2018 are discrete manufacturing, professional services, and banking. Together, these three industries will account for roughly one third of all U.S. public cloud services spending this year. In the UK, the top three industries (banking, retail, and discrete manufacturing) will provide more than 40% of all public cloud spending in 2018, while discrete manufacturing, professional services, and process manufacturing will account for more than 40% of public cloud spending in Germany. In Japan, the professional services, discrete manufacturing, and process manufacturing industries will deliver more than 43% of all public cloud services. The professional services, discrete manufacturing, and banking industries will represent more than 40% of China's public cloud services spending in 2018.
"Digital transformation is driving multi-cloud and hybrid environments for enterprises to create a more agile and cost-effective IT environment in Asia/Pacific. Even heavily regulated industries like banking and finance are using SaaS for non-core functionality, platform as a service (PaaS) for app development and testing, and IaaS for workload trial runs and testing for their new service offerings. Drivers of IaaS growth in the region include the increasing demand for more rapid processing infrastructure, as well as better data backup and disaster recovery," said Ashutosh Bisht, research manager, Customer Insights & Analysis.
100% of IT leaders with a high degree of cost transparency sit on the company board, compared with 54% at enterprises with no or only partial cost transparency.
A survey of senior IT decision-makers in large enterprises, commissioned by Coeus Consulting, found that IT leaders who can clearly demonstrate the cost and value of IT have greater influence over the strategic direction of the company and are best positioned to deliver business agility for digital transformation. Consequently, cost transparency leaders are twice as likely to be represented at board level and thus are better prepared for external challenges such as changing consumer demand, GDPR and Brexit.
The survey of organisations with revenues of between £200m and £30bn revealed the importance of cost transparency within IT when it comes to forward planning and defining business strategy. Based on the responses of senior decision-makers (more than half of whom are C-level), the report identifies a small group of Cost Transparency Leaders who indicated that their departments: work with the rest of the organisation to provide accurate cost information; ensure that services are fully costed; and manage the cost life cycle.
88% of respondents indicated that they cannot demonstrate cost transparency to the rest of the organisation.
When compared to their counterparts, Cost Transparency Leaders are:
Twice as likely to be represented at board level (100% v 54%)
1.5x more likely to be involved in setting business strategy (85% v 55%)
Twice as likely to report that the business values IT’s advice (100% v 52%)
Twice as likely to demonstrate alignment with the business (90% v 50%)
More than seven times as likely to link IT performance to genuine business outcomes (38% v 5%)
“This survey clearly reveals that cost transparency is a pre-requisite for IT leaders with aspirations of being a strategic partner to the business. Those that get it right are better able to transform the perception of IT from ‘cost centre’ to ‘value centre’ and support the constant demand for business agility that is typical of the modern, digital organisation. Only those that have achieved cost transparency in their IT operations will be able to deal effectively with external challenges such as Brexit and GDPR” said James Cockroft, Director at Coeus Consulting.
Digital transformation trends mean that businesses are focusing more heavily on their customers and are using technology to improve their experience. However, IT departments remain bogged down in day-to-day activities and the need to keep the lights on, which is preventing teams focusing on how they can help drive improvements to the customer experience.
This is according to research commissioned by managed services provider Claranet, with results summarised in its 2018 report, Beyond Digital Transformation: Reality check for European IT and Digital leaders.
In a survey of 750 IT and Digital decision-makers from organisations across Europe, market research company Vanson Bourne found that the overwhelming majority (79 per cent) feel that the IT department could be more focused on the customer experience, but that staff do not have the time to do so. More generally, almost all respondents (98 per cent) recognise that there would be some kind of benefit if they adopted a more customer-centric approach, whether this be developing products more quickly (44 per cent), greater business agility (43 per cent), or being better prepared for change (43 per cent).

Commenting on the findings, Michel Robert, Managing Director at Claranet UK, said: “As technology develops, IT departments are finding themselves with a long and growing list of responsibilities, all of which need to be carried out alongside the omnipresent challenge of keeping the lights on and making sure everything runs smoothly. Despite a tangible desire amongst respondents to adopt a more customer-centric approach, this can be difficult when IT teams have to spend a significant amount of their time on general management and maintenance tasks.”
Cisco has released the seventh annual Cisco® Global Cloud Index (2016-2021). The updated report focuses on data center virtualization and cloud computing, which have become fundamental elements in transforming how many business and consumer network services are delivered.

According to the study, both consumer and business applications are contributing to the growing dominance of cloud services over the Internet. For consumers, streaming video, social networking, and Internet search are among the most popular cloud applications. For business users, enterprise resource planning (ERP), collaboration, analytics, and other digital enterprise applications represent leading growth areas.
451 Research, a top five global IT analyst firm and sister company to datacenter authority Uptime Institute, has published Multi-tenant Datacenter Market reports on Hong Kong and Singapore, its fifth annual reports covering these key APAC markets.
451 Research predicts that Singapore’s colocation and wholesale datacenter market will see a CAGR of 8% and reach S$1.42bn (US$1bn) in revenue in 2021, up from S$1.06bn (US$739m) in 2017. In comparison, Hong Kong’s market will grow at a CAGR of 4%, with revenue reaching HK$7.01bn (US$900m) in 2021, up from HK$5.8bn (US$744m) in 2017.
Hong Kong experienced another solid year of growth at nearly 16%, despite the lack of land available for building, the research finds. Several providers still have room for expansion, but other important players are near or at capacity, and only two plots of land are earmarked for datacenter use. Analysts note that the industry will face challenges as it continues to grow, hence the reduced growth rate over the next three years.
“The Hong Kong datacenter market continues to see impressive growth, and in doing so has managed to stay ahead of its closest rival, Singapore, for yet another year,” said Dan Thompson, Senior Analyst at 451 Research and one of the report’s authors. However, with analysts predicting an 8% CAGR for Singapore over the next few years, Singapore’s datacenter revenue is expected to surpass Hong Kong’s by the end of 2019.
451 Research analysts found that, while the number of new builds in Singapore slowed in 2017, the market still saw nearly 12% supply growth overall, compared with 19% the previous year. The report notes that the reduced builds in 2017 follow two years when providers had invested heavily in building new facilities and expanding existing ones.
“Rather than seeing 2017 as a down year for Singapore, we see it as a ‘filling up’ year, where providers worked to maximize their existing datacenter facilities,” said Thompson. “Meanwhile, 2018 is shaping up to be another big year, with providers including DODID, Global Switch and Iron Mountain slated to bring new datacenters online in Singapore.”
Analysts also reveal that demand growth in both Hong Kong and Singapore has shifted from the financial services, securities, and insurance verticals to the large-scale cloud and content providers.
451 Research finds that Singapore’s role as the gateway to Southeast Asia remains the key reason why cloud providers are choosing the area. “Cloud and content providers are choosing to service their regional audiences from Singapore because it is comparatively easy to do business there, in addition to having strong connectivity with countries throughout the region. This all bodes well for the country’s future as the digital hub for this part of APAC,” added Thompson.
451 Research finds that Hong Kong’s position as the gateway into and out of China remains a key reason why cloud providers are choosing the area, as well as the ease of doing business there. This is good news for the city as long as providers find creative solutions to their lack of available land.
451 Research has also compared the roles of the Singapore and Hong Kong datacenter markets in detail. The analysts concluded that multinationals need to deploy datacenters in both Singapore and Hong Kong, since each serves a very specific role in the region: Hong Kong is the digital gateway into and out of China, while Singapore is the digital gateway into and out of the rest of Southeast Asia.
Analysts find that these two markets compete for some deals, but surrounding markets are vying for a position as well. As an example, Singapore sees some competition from Malaysia and Indonesia, while Hong Kong could potentially see more competition from cities in mainland China, such as Guangzhou, Shenzhen and Shanghai. However, the surrounding markets are not without challenges for potential consumers, suggesting that Singapore and Hong Kong will remain the primary destinations for datacenter deployments in the region for the foreseeable future.
Growing adoption of cloud native architecture and multi-cloud services contributes to $2.5 million annual spend per organization on fixing digital performance problems.
Digital performance management company, Dynatrace, has published the findings of an independent global survey of 800 CIOs, which reveals that 76% of organizations think IT complexity could soon make it impossible to manage digital performance efficiently. The study further highlights that IT complexity is growing exponentially; a single web or mobile transaction now crosses an average of 35 different technology systems or components, compared to 22 just five years ago.
This growth has been driven by the rapid adoption of new technologies in recent years. However, the upward trend is set to accelerate, with 53% of CIOs planning to deploy even more technologies in the next 12 months. The research revealed the key technologies that CIOs will have adopted within the next 12 months include multi-cloud (95%), microservices (88%) and containers (86%).
As a result of this mounting complexity, IT teams now spend an average of 29% of their time dealing with digital performance problems; costing their employers $2.5 million annually. As they search for a solution to these challenges, four in five (81%) CIOs said they think Artificial Intelligence (AI) will be critical to IT's ability to master increasing IT complexity; with 83% either already, or planning to deploy AI in the next 12 months.
“Today’s organizations are under huge pressure to keep-up with the always-on, always connected digital economy and its demand for constant innovation,” said Matthias Scharer, VP of Business Operations, Dynatrace. “As a consequence, IT ecosystems are undergoing a constant transformation. The transition to virtualized infrastructure was followed by the migration to the cloud, which has since been supplanted by the trend towards multi-cloud. CIOs have now realized their legacy apps weren’t built for today’s digital ecosystems and are rebuilding them in a cloud-native architecture. These rapid changes have given rise to hyper-scale, hyper-dynamic and hyper-complex IT ecosystems, which makes it extremely difficult to monitor performance and, find and fix problems fast.”
The research further identified the challenges that organizations find most difficult to overcome as they transition to multi-cloud ecosystems and cloud native architecture. Key findings include:
76% of CIOs say multi-cloud makes it especially difficult and time-consuming to monitor and understand the impact that cloud services have on the user experience
72% are frustrated that IT has to spend so much time setting up monitoring for different cloud environments when deploying new services
72% say monitoring the performance of microservices in real time is almost impossible
84% of CIOs say the dynamic nature of containers makes it difficult to understand their impact on application performance
Maintaining and configuring performance monitoring (56%) and identifying service dependencies and interactions (54%) are the top challenges CIOs identify with managing microservices and containers
“For cloud to deliver on expected benefits, organizations must have end-to-end visibility across every single transaction,” continued Mr. Scharer. “However, this has become very difficult because organizations are building multi-cloud ecosystems on a variety of services from AWS, Azure, Cloud Foundry and SAP amongst others. Added to that, the shift to cloud native architectures fragments the application transaction path even further.
“Today, one environment can have billions of dependencies, so, while modern ecosystems are critical to fast innovation, the legacy approach to monitoring and managing performance falls short. You can’t rely on humans to synthesize and analyze data anymore, nor a bag of independent tools. You need to be able to auto detect and instrument these environments in real time, and most importantly use AI to pinpoint problems with precision and set your environment on a path of auto-remediation to ensure optimal performance and experience from an end users’ perspective.”
Further to the challenges of managing a hyper-complex IT ecosystem, the research also found that IT departments are struggling to keep pace with internal demands from the business. 74% of CIOs said that IT is under too much pressure to keep up with unrealistic demands from the business and end users. 78% also highlighted that it is getting harder to find time and resources to answer the range of questions the business asks and still deliver everything else that is expected of IT. In particular, 80% of CIOs said it is difficult to map the technical metrics of digital performance to the impact they have on the business.
The new Data Centre Trade Association members portal is now live via the following link www.dca-global.org (as people get used to the new domain name, for the time being the original domain name www.datacentrealliance.org will still also get you to the same place).
Amanda and I can’t relax quite yet as we are still busy populating the new members portal and website. However, you will see the new website includes a complete rebranding; if you see a blue rather than a green logo, don’t panic, you have landed in the right place. Updated media packs will be going out to members over the next few weeks, so you have the very latest collateral, together with secure login details enabling you to amend your business and personal profiles, add new users and upload additional content such as news, PDFs, white papers, spec sheets, case studies and video.
(If you wish to have video embedded into your member profile page then simply send the code to the DCA and we can insert it for you).
2018 is again jam-packed with data centre related conferences and events. To help you plan ahead, The DCA has created a printed events calendar listing all the events that the DCA trade association is either hosting, sponsoring or promoting throughout the year; there are 35 in total. As global event partners for Data Centre World (DCW) we have just returned from DCW at ExCeL London, which continues to go from strength to strength. Next, we are looking forward to greeting members at Data Centres North on 1-2 May, we’ll have a presence at DCW Hong Kong on 16-17 May, and the DCS Awards take place on 24 May in central London.
Details of these and all the events for the whole of 2018 can be found online in the DCA events calendar should you wish to find out more. If you would like a copy of the printed events calendar, one can be posted to you; to receive your free copy just email email@example.com.
The DCA Journal theme this month focuses on updates from some of the many Collaborative Partnerships we have. These partnerships play a vital role in keeping end users and members both informed and connected. The Data Centre Trade Association breaks these partnerships down into three main areas: Strategic, Academic and Media Partners.
The DCA has a growing number of Strategic Partnerships with organisations both directly and indirectly connected to the data centre sector: EMA, BCS, CIF, DCD, ECA, UTI, GITA and techUK, to name a few. Maintaining a trusted and open relationship with fellow trade bodies enables mutual support, combined resources and knowledge, and a unified voice on common issues.
Strategic Alliances can come in many forms, from a simple MOU enabling the exchange of information to more collaborative joint initiatives such as events, workshops and/or research projects at local, EU or international levels.
Academic Partnerships with Universities and Technical Colleges equally play an essential role. The DCA continues to act as a valuable link between the academic and commercial world not just on research and development projects but also on promoting the data centre sector as a career destination for students.
With the continued support of members and Media Partnerships, The DCA publishes over 150 articles every year on a wide range of data centre related topics, all designed to keep business owners up to speed on the latest innovations, market trends, products and services they need to stay one step ahead of the game. The media partnerships we have in place allow The DCA to disseminate thought leadership content to a combined readership which exceeds 120,000 global subscribers on a continual basis. Many of our media partners are also event organisers in their own right, and we are proud to be in a position to support them in the planning of many of these events with everything from promotion and content to the sourcing of speakers. Speakers are often drawn from the wealth of experienced experts and professionals who make up the data centre trade association.
I would like to thank all the members who have contributed thought leadership articles this month. Dr Jon Summers has written a thought-provoking piece reminding us all that the purpose of a data centre is to support IT; fellow trade body The Cloud Industry Forum (CIF) provides its security predictions; John Booth from Carbon3IT discusses whether there is a place for fuel cells and hydrogen in the data centre; and Dr Frank Verhagen of Certios and DCA NL offers some much needed and timely advice on the thorny subject of GDPR.
Next month’s journal theme (May edition) focuses on security, both physical and cyber (copy deadline is 19 April). The theme for the June edition is energy efficiency. If you would like to contribute to either of these topics, please contact firstname.lastname@example.org.
By Dr Jon Summers, Scientific Leader in Data Centres, Research Institutes of Sweden, SICS North
In the world of data centres, the term facility is commonly used to indicate the shell that provides the space, power, cooling, physical security and protection to house Information Technology. The data centre sector is made up of several different industries that purposely have a point of intersection that could loosely be defined as the data centre industry. One very important argument is that a data centre exists to house IT, yet the facility and IT domains rarely interact unless the heat removal infrastructure invades the IT space. This refers to the so-called “liquid cooling” of IT; normally, the facility-IT divide is cushioned by air.
At RISE SICS North we are on a crusade to approach data centres as integrated systems, and our experiments are geared to include the full infrastructure where the facility has IT in it. This holistic approach enables the researchers to measure and monitor the full digital stack, from the ground to the cloud and from the chip to the chiller, so we have built a management system that makes use of several open-source tools and generates more than 9GB of data from more than 30,000 measuring points within our ICE operating data centre, depicted below. Some of these measuring points are provided by in-house designed and deployed wired temperature sensor strips with magnetic rails, allowing them to be easily mounted to the front and back of racks. Recently, we have come up with a way to take control of fans in Open Compute servers, in preparation for a new data centre build project where we will try to marry up the servers’ air requirements with what can be provided by the direct air handling units.
Data centre module, ICE (Infrastructure and Cloud research and test Environment – ice.sics.se).
Before joining the research group in Sweden, I was a full-time academic in the School of Mechanical Engineering at the University of Leeds. At Leeds, our research has been focused around thermal and energy management of microelectronic systems and the experiments made use of real IT, where we were able to integrate the energy required to provide the digital services alongside the energy needed to maintain systems within their thermal envelope. The research involved both air and liquid cooling, and for the latter we were able to work with rear door heat-exchangers, on-chip and immersion systems. In determining the Power Usage Effectiveness (PUE) of air versus liquid systems it is always difficult to show that liquids are more “Effective” than air in removing the heat. However, the argument for a metric that assesses the overhead of heat removal should include all the components whose function is to remove heat. So, for centrally pumped coolants in the case of liquid cooling, the overhead of the pump power is correctly assigned to the numerator of the PUE, but this is not the case for fans inside the IT equipment.
So what percentage of the critical load do the fans consume? Here we can do some simple back-of-the-envelope calculations, but first we need to understand how air movers work. The facility fans are usually large and their electrical power, Pe, can be measured using a power meter. This electrical power is converted into a volumetric flow rate that overcomes the pressure drop, ∆P, caused by ducts, obstacles, filters, etc. between the facility fan and the entrance to the IT. If you look at a variety of different literature on this subject, such as fan curves and affinity laws, you may arrive at 1kW of electrical power per cubic metre per second of flow rate, VF. Therefore, with an efficiency, η, of 50%, the flow rate and pressure follow the simple relationship ηPe = ∆P·VF. Thus, 1kW of power consumption will overcome 500 Pascals of pressure drop at a flow rate of 1 cubic metre per second. The IT fans are then employed to take over this volumetric flow rate of air, overcoming the pressure drop across the IT equipment and exhausting the hot air at the rear of the IT equipment. Again, there is literature on the pressure drop across a server, and we calculated this at Leeds using a Generic Server Wind Tunnel pictured below. For a 1U server, for example, the pressure drop is around 350 Pascals, though this does depend on the components inside the server.
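The relationship ηPe = ∆P·VF can be checked with a few lines of code. This is an illustrative sketch (the function names are mine, not from the article), using the article's assumed figures:

```python
def pressure_overcome(power_w: float, flow_m3_s: float, efficiency: float) -> float:
    """Pressure drop (Pa) a fan can overcome at a given volumetric
    flow rate, rearranging eta * Pe = deltaP * VF."""
    return efficiency * power_w / flow_m3_s

def fan_electrical_power(delta_p_pa: float, flow_m3_s: float, efficiency: float) -> float:
    """Electrical power (W) needed to drive a flow against a pressure drop."""
    return delta_p_pa * flow_m3_s / efficiency

# A 1 kW facility fan at 50% efficiency moving 1 m^3/s:
print(pressure_overcome(1000, 1.0, 0.5))  # 500.0 Pa
```

The same formula, applied to the 350 Pa server pressure drop with the article's 25%-efficient internal fans at 1 cubic metre per second, gives fan_electrical_power(350, 1.0, 0.25) = 1400 W, the 1.4kW of accumulated fan power used below.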
Schematic of the generic server wind tunnel at the University of Leeds (source: Daniel Burdett)
Fans that sit inside a 1U server are typically at best 25% efficient, and it is well known that smaller fans are less efficient than larger ones. We can now use the simple equation ηPe = ∆P·VF again to determine the electrical power that these small, less efficient fans require to overcome the server pressure drop at 1 cubic metre per second, assuming no air has wandered off somewhere else in the data centre. This yields an accumulated fan power of 1.4kW. But just how much thermal power can these fans remove? For this answer, we need to employ the steady-state thermodynamic relationship PT = ρ·cp·VF·∆T, making use of the density of air, ρ (=1.22kg/m3), its specific heat capacity at constant pressure, cp (=1006J/kg/K), and the temperature increase (delta-T), ∆T, across the servers. Now we must make a guess at the delta-T. Trying a range of 5, 10 and 15°C with the same flow rate of 1 cubic metre per second, the thermal power injected into the airstream in passing through racks of servers is 6136W, 12273W and 18410W for the three respective delta-T values. It is then easy to see that, in completely ideal conditions, the small server fans respectively consume 18.6%, 10.2% and 7.1% of the total server power, assuming no losses in the airflow.
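The back-of-the-envelope numbers above can be reproduced with a short script (a sketch using the stated constants; the helper names are mine):

```python
RHO_AIR = 1.22    # air density, kg/m^3
CP_AIR = 1006.0   # specific heat capacity of air, J/(kg*K)

def thermal_power(flow_m3_s: float, delta_t: float) -> float:
    """Heat (W) carried away by an airstream: P_T = rho * c_p * V_F * dT."""
    return RHO_AIR * CP_AIR * flow_m3_s * delta_t

def fan_fraction(fan_power_w: float, flow_m3_s: float, delta_t: float) -> float:
    """Server fan power as a fraction of total server power (IT heat + fans)."""
    return fan_power_w / (thermal_power(flow_m3_s, delta_t) + fan_power_w)

# 1.4 kW of accumulated server fan power at 1 m^3/s, for three delta-Ts:
for dt in (5, 10, 15):
    print(f"dT={dt}K: {thermal_power(1.0, dt):.0f} W, "
          f"fans = {fan_fraction(1400, 1.0, dt):.1%} of server power")
```

This reproduces the roughly 18.6%, 10% and 7% fan-power shares derived above.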
Given that these simple equations are based on a lot of assumptions that would yield conservative approximations, it is not unreasonable to say that IT fan power can consume more than 7% of the power of a typical 1U server. It is now very tempting to add all of these figures together to show how partial PUE is affected by the rack delta-T. Gains in reducing end use energy demand of data centres are clearly best addressed by analysing the full integrated system.
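The whole back-of-the-envelope calculation above can be reproduced in a few lines of Python, using the article's assumed figures (25% server fan efficiency, 350 Pa pressure drop, 1 cubic metre per second of airflow):

```python
# Sketch of the article's fan power estimate. All input values are the
# assumptions stated in the text, not measurements.

RHO = 1.22   # density of air, kg/m^3
CP = 1006.0  # specific heat capacity of air at constant pressure, J/kg/K

def fan_electrical_power(delta_p, flow_rate, efficiency):
    """Electrical power needed so that eta * Pe = delta_p * V_F."""
    return delta_p * flow_rate / efficiency

def thermal_power(flow_rate, delta_t):
    """Steady-state heat carried by the airstream: P_T = rho * cp * V_F * dT."""
    return RHO * CP * flow_rate * delta_t

flow = 1.0                                         # m^3/s
p_fan = fan_electrical_power(350.0, flow, 0.25)    # 1400 W, as in the text

for dt in (5, 10, 15):
    p_t = thermal_power(flow, dt)
    fraction = p_fan / (p_t + p_fan)               # fan share of total server power
    print(f"dT = {dt:2d} K: thermal = {p_t:7.0f} W, fan share = {fraction:5.1%}")
```

Running this reproduces the thermal powers and fan fractions quoted above, and makes it easy to experiment with other efficiencies or pressure drops.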
John Booth Chair DCA Energy Efficiency and Sustainability Steering Group
From a recent techUK update (thanks Emma), it transpires that there are 8 individual compliance requirements for data centres on energy ranging from CRC to MEES (contact us directly if you need any help with them!).
Three of them, the EU Emissions Trading System (EU-ETS), the Industrial Emissions Directive (IED)/Environmental Permitting Regulations (EPR) and the Medium Combustion Plant Directive (MCPD), relate to generators and specifically to SOx, NOx and Particulate Matter (PM). It is thus clear that the sector is facing an increasing compliance burden and, in all probability, higher costs, whether through direct taxation or via CCA carbon offsets, buy-ins, trade-offs, permits etc.
So, faced with these additional costs, the as yet unseen implications of Brexit for the UK data centre sector, the threat from countries with mature renewable energy/carbon-neutral grids (France, the Nordics) and guidance from the EU Code of Conduct for Data Centres (Energy Efficiency), which includes a number of general best practices relating to energy generally and backup power specifically, I thought I would take a look at fuel cell and hydrogen generation options for retrofit and new-build data centres. There are some examples of the use of fuel cells in the industry, and recent press articles on EU projects dealing with this subject, so it is worthy of a review.
The Fuel Cell and Hydrogen show was recently held at the NEC Birmingham and as it is local to me I managed to wrangle an invite. My intention was to ascertain whether the use of fuel cells and potentially Hydrogen are viable options, so I came up with three questions to ask the delegates:
1) Can Fuel Cells provide the necessary power requirements?
2) What is the footprint of the infrastructure?
3) How do the costs/TCO compare to conventional UPS/generator options?
First, a quick look at Hydrogen and Fuel Cells.
Hydrogen is the lightest element in the periodic table and is the most abundant element in the Universe. As I am not a chemist I refer readers to the wiki entry on Hydrogen which can be found here https://en.wikipedia.org/wiki/Hydrogen
Fuel cells are electrochemical cells that convert the chemical energy of a fuel into electricity through an electrochemical reaction of hydrogen fuel with oxygen or another oxidising agent. More information on fuel cells can be found here https://en.wikipedia.org/wiki/Fuel_cell
The energy efficiency of a fuel cell is generally between 40% and 60%; however, if the waste heat is captured in a cogeneration scheme, efficiencies of up to 85% can be obtained.
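A simple energy balance shows how heat recovery lifts the overall figure. The numbers below are illustrative only, chosen to be consistent with the efficiency ranges quoted above:

```python
# Illustrative fuel cell energy balance with cogeneration (CHP).
# All figures are example values, not data from any specific product.

fuel_in_kw = 100.0      # chemical energy in the fuel
electrical_eff = 0.50   # assumed electrical efficiency (within the 40-60% range)

electric_kw = fuel_in_kw * electrical_eff    # 50 kW of electricity
waste_heat_kw = fuel_in_kw - electric_kw     # 50 kW of heat rejected

heat_recovered_kw = 35.0                     # assumed usable heat captured by CHP
combined_eff = (electric_kw + heat_recovered_kw) / fuel_in_kw

print(f"Electrical efficiency: {electrical_eff:.0%}")
print(f"Combined efficiency with heat recovery: {combined_eff:.0%}")
```

With these assumptions, capturing 35kW of the 50kW of waste heat takes the overall efficiency from 50% to 85%, which is where the headline cogeneration figure comes from.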
It would be safe to say that fuel cells are very much an emerging technology, although the principle has been around for some time; indeed, my research indicates that the first mention of hydrogen fuel cells (hydrogen being one of the fuels available) appeared in 1838.
Enough of the theory, let’s move onto practical matters and answer the three questions posed above.
1. Can Fuel Cells provide the necessary power requirements?
Yes, they can. Indeed, at least one systems solution provider offers modular fuel cell systems delivering power from hundreds of kilowatts to many tens of megawatts. The key thing here is that a fuel cell powered data centre is likely to require a big footprint to provide the floorspace for the fuel cell equipment and charging apparatus, plus a connection to either a gas main (some cells can operate from natural gas) or a hydrogen producer in the near vicinity.
Mindsets will also have to change. Fuel cell arrays need to be considered as a primary energy source and not a backup source; they are a replacement for the grid, not for conventional backup solutions such as UPS and generators. Whether the risk-averse data centre sector can accept fuel cells rather depends on cost and reliability.
For reliability, fuel cell arrays can achieve 99.998% availability and can be configured to be extremely robust in the case of failure; if one cell fails, you'll have many more to choose from, and at least one manufacturer can provide hot-swappable field replacement units. I'm sure a fuel cell array can provide service availability equivalent to any grid-based system, certainly in the UK with its strong and (relatively) stable grid, and it will perhaps score even more highly in areas that do not have a stable grid (bearing in mind the fuel!).
As fuel cells will be under your direct control, you might be minded to remove your conventional backup power systems, thus saving costly capex and service charges and reducing any exposure to carbon or emissions taxation regimes.
You could always over provision (for expected growth) and sell surplus energy from your cells back to the grid.
2) What is the footprint of the infrastructure?
This will clearly vary according to your design, but a 300kW system will weigh around 15 tons and, depending on the layout, measure around 6m x 3m x 2.5m high. You might want to factor in some additional footprint for gas connection equipment, access requirements, storage space and hot spares, and surround it with fences etc.
The key thing is that all fuel cells are modular, thus meeting the EU Code of Conduct requirement for modular systems, and it is easy to add additional modules as the need arises.
The nature of a fuel cell solution is that it’s a partnership with the supplier, so remote monitoring and system performance tweaking is included as standard.
3) How do the costs/TCO compare to conventional UPS/generator options?
Simply, no, they do not compare favourably with conventional solutions: the cost is around double what you would expect to pay for an equivalent UPS/generator solution. But, as with solar panels, the cost will come down as more people specify and procure fuel cells for data centre solutions. Add to the mix the reduced carbon, energy and emissions taxation, the elimination of generator tests and possible noise issues, and I personally believe it is a worthwhile solution to consider, especially for new builds in water-stressed areas (fuel cells can produce water!), areas without grid capacity or with unstable connections, and the edge!
So, in summary, fuel cell technologies are ready and waiting for the data centre community to come and have a look; indeed, some organisations have already done so. The solution is modular and can be expanded at will (provided you have the space), and there are significant benefits for the circular economy and the concept of data centre campuses.
A final note, during my day at the show, I made contact with various people working in the industry and it is highly likely that some of them will be speaking about this very subject at some of the summer data centre events. If you’re not able to attend or need any further information, or for some contacts in the fuel cell industry, please contact me on my usual email.
By Frank Verhagen, DPO Certified Data Protection Officer, CDCEP® (Certified Data Centre Energy Professional)
A Cloud provider is a Data Processor according to the definition of the General Data Protection Regulation (GDPR).
We have been looking into some interesting questions on behalf of Cloud providers.
Apart from the last sentence, which I would like to challenge – and which is one of the reasons the GDPR came into being in the first place – the questions are very recognisable and will therefore be addressed in this article. Cloud providers now realise that, under the GDPR, they are data processors; they have no idea of, and no details about, their customers' data, and this is very typical nowadays.
The first question is about the need to appoint a Data Protection Officer (DPO). A DPO may be internal, external, hired full-time or part-time. The role is to have someone (we might think of the DPO as the Data Protection 'Office') who can help solve practical questions concerning compliance with the GDPR when processing data or running projects that involve personal data – all to make sure the organisation remains GDPR compliant. A DPO will also negotiate with the Information Commissioner's Office (ICO) when and if necessary. Public organisations must have a DPO, and private organisations that deal with personal data on a larger scale definitely need one too (and they need to register the DPO with the supervisory authorities).
So, the answer to the question is almost always ‘yes’ – we need a DPO.
Cloud providers and data centres have at least two parallel operations running where they should consider the implications of the GDPR. First, their internal organisation and information data flows are subject to the GDPR (personnel and payroll records, contracts, CVs, appraisals and recruitment records). For this kind of data, every organisation is the Data Controller (having the goals and the means). This first item, however important, will not be in the scope of this article as it applies to almost every existing organisation. Second, there is data that the Cloud provider stores and processes for customers. Following the GDPR, the Cloud provider should have a data processing agreement with each of its customers (the data controllers). In these agreements there will be policies that will stipulate what the data controller (customer) requires from the data processor (Cloud provider) to help meet the requirements of the GDPR.
Can a Cloud provider have any responsibility for the data, in the processing of the data of its tenants? To what extent does a data processor assume responsibility?
Let’s see what a Cloud provider needs to do (this is not a complete list!).
First of all, make a data protection impact assessment (DPIA) in which an assessment of the risks (their evaluation and severity) is documented:
· Describe the nature, scope, context and purposes of the processing.
· Assess necessity, proportionality and compliance measures.
· Identify and assess risks to individuals.
· Identify any additional measures to mitigate those risks.
It is important to document any processes that you have to protect the data whilst it is within your environment, for example: is it encrypted, in motion and at rest, and is access (physical and electronic) restricted?
Second, in order to be GDPR compliant, the processor is (among other things) deemed to have taken 'appropriate measures' and to avoid the risk of data breaches by implementing:
Recital 78: (…) with due regard to the state of the art, to make sure that controllers and processors are able to fulfil their data protection obligations. (…)
These 'measures' will be issued, clarified and further detailed in time by the European Data Protection Board. Based on the DPIA and the risks that have been identified, 'state of the art' would mean implementing modern physical and digital security, implementing the right, secure processes and training staff to be GDPR-aware – and documenting all of this!
The processor agreement should encapsulate the level of responsibility the Cloud provider has versus that of the customers. In that relationship the Cloud provider is the processor, so the controller (the customer) is responsible for their data, while the responsibility of the processor is different (Chapter IV: Art. 28). A policy that customers agree to should suffice.
If you haven’t done this, start doing it now.
This is the core of what you still can do (assuming your organisation hasn’t done it yet) (if you don’t know how, give me a call).
I encountered an interesting discussion recently.
In the light of the GDPR, can a Cloud provider have responsibilities for data processing when the data is not touched or even accessible?
It is and will be a continuous debate.
If it can be made absolutely clear that – even if you could – your organisation/staff cannot copy, alter or see the data, when (for example) all data is encrypted and the Cloud provider doesn't have any keys to decrypt it, can the Cloud provider still be seen as a data processor?
The answer: in that specific, theoretical case the GDPR doesn’t apply.
DPO Certified Data Protection Officer
CDCEP® (Certified Data Centre Energy Professional)
M +31 6 319 937 33
Regulation (EU) 2016/679 of the European Parliament and of the Council
By Bharat Mistry, Principal Security Strategist, Trend Micro
In 2018, digital extortion will be at the core of most cybercriminals’ business model and will propel them into other schemes that will get their hands on potentially hefty payouts. Vulnerabilities in IoT devices will expand the attack surface as devices get further woven into the fabric of smart environments everywhere. Business Email Compromise scams will ensnare more organizations to fork over their money. The age of fake news and cyberpropaganda will persist with old-style cybercriminal techniques. Machine learning and blockchain applications will pose both promises and pitfalls. Companies will face the challenge of keeping up with the directives of the General Data Protection Regulation (GDPR) in time for its enforcement. Not only will enterprises be riddled with vulnerabilities, but loopholes in internal processes will also be abused for production sabotage.
As environments become increasingly interconnected and complex, threats are redefining how we should look at security. Having protection where and when it’s needed will become the backbone of security in this ever-shifting threat landscape.
For 2017, we predicted that cybercriminals would diversify ransomware into other attack methods. True enough, the year unfolded with incidents such as WannaCry and Petya’s rapidly propagated network attacks, Locky and FakeGlobe’s widespread spam run, and Bad Rabbit’s watering hole attacks against Eastern European countries. We do not expect ransomware to go away anytime soon. On the contrary, it can only be anticipated to make further rounds in 2018, even as other types of digital extortion become more prevalent. Cybercriminals have been resorting to using compelling data as a weapon for coercing victims into paying up. With ransomware-as-a-service (RaaS) still being offered in underground forums, along with bitcoin as a secure method to collect ransom, cybercriminals are being all the more drawn to the business model. Attackers will continue to rely on phishing campaigns where emails with ransomware payload are delivered en masse to ensure a percentage of affected users. They will also go for the bigger buck by targeting a single organization, possibly in an Industrial Internet of Things (IIoT) environment, for a ransomware attack that will disrupt the operations and affect the production line. We already saw this in the fallout from the massive WannaCry and Petya outbreaks, and it won’t be long until it becomes the intended impact of the threat.
Users and enterprises can stay resilient against these digital extortion attempts by employing effective web and email gateway solutions as a first line of defense. Solutions with high-fidelity machine learning, behavior monitoring, and vulnerability shielding prevent threats from getting through to the target. These capabilities are especially beneficial in the case of ransomware variants that are seen moving toward fileless delivery, in which there are no malicious payloads or binaries for traditional solutions to detect.
The massive Mirai and Persirai distributed denial-of-service (DDoS) attacks that hijacked IoT devices, such as digital video recorders (DVRs), IP cameras, and routers, have already elevated the conversation of how vulnerable and disruptive these connected devices can be. Recently, the IoT botnet Reaper, which is based on the Mirai code, has been found to catch on as a means to compromise a web of devices, even those from different device makers. We predict that aside from performing DDoS attacks, cybercriminals will turn to IoT devices for creating proxies to obfuscate their location and web traffic, considering that law enforcement usually refers to IP addresses and logs for criminal investigation and post-infection forensics.
Amassing a large network of anonymized devices (running on default credentials no less and having virtually no logs) could serve as jumping-off points for cybercriminals to surreptitiously facilitate their activities within the compromised network. We should also anticipate more IoT vulnerabilities in the market as many, if not most, manufacturers are going to market with devices that are not secure by design. This risk will be compounded by the fact that patching IoT devices may not be as simple as patching PCs. It can take one insecure device that has not been issued a fix or updated to the latest version to become an entry point to the central network. The KRACK attack proved that even the wireless connection itself could add to the security woes. This vulnerability affects most, if not all, devices that connect to the WPA2 protocol, which then raises questions about the security of 5G technology, which is slated to sweep connected environments.
With hundreds of thousands of drones entering the U.S. airspace alone, the prospect of overseeing the aerial vehicles can be daunting. We expect that reports of drone-related accidents or collisions are only the start of it, as hackers have already been found to access computers, grab sensitive information, and hijack deliveries. Likewise, pervasive home devices such as wireless speakers and voice assistants can enable hackers to determine house locations and attempt break-ins. We also expect cases of biohacking, via wearables and medical devices, to materialize in 2018. Biometric activity trackers such as heart rate monitors and fitness bands can be intercepted to gather information about the users. Even life-sustaining pacemakers have been found with vulnerabilities that can be exploited for potentially fatal attacks. What adopters and regulators should recognize now is that not all IoT devices have built-in security, let alone hardened security. The devices are open to compromise unless manufacturers perform regular risk assessments and security audits. Users are also responsible for setting up their devices for security, which can be as simple as changing default passwords and regularly installing firmware updates.
Faster and more accurate decision-making is one of the key promises of machine learning, the process by which computers are trained but not deliberately programmed. For a relatively nascent technology, machine learning shows great potential. Already, however, it’s become apparent that machine learning may not be the be-all and end-all of data analysis and insights identification. Machine learning lets computers learn by being fed loads of data. This means that machine learning can only be as good and accurate as the context it gets from its sources. Going into the future, machine learning will be a key component of security solutions. While it uncovers a lot of potential for more accurate and targeted decision-making, it poses an important question: Can machine learning be outwitted by malware? We’ve found that the CERBER ransomware uses a loader that certain machine learning solutions aren’t able to detect because of how the malware is packaged to not look malicious. This is especially problematic for software that employs pre-execution machine learning (which analyzes files without any execution or emulation), as in the case of the UIWIX ransomware (a WannaCry copycat), where there was no file for pre-execution machine learning to detect and block. Machine learning may be a powerful tool, but it is not foolproof. While researchers are already looking into the possibilities of machine learning in monitoring traffic and identifying possible zero-day exploits, it is not farfetched to conjecture that cybercriminals will use the same capability to get ahead of finding the zero-days themselves. It is also possible to deceive machine learning engines, as shown in the slight manipulation of road signs that were recognized differently by autonomous cars. Researchers have already demonstrated how machine learning models have blind spots that adversaries can probe for exploitation.
While machine learning definitely helps improve protection, we believe that it should not completely take over security mechanisms. It should be considered an additional security layer incorporated into an in-depth defense strategy, and not a silver bullet. A multilayered defense with end-to-end protection, from the gateway to the endpoint, will be able to fight both known and unknown security threats.
To combat today’s expansive threats and be fortified against those yet to come, organizations should employ security solutions that allow visibility across all networks and that can provide real-time detection and protection against vulnerabilities and attacks. Any potential intrusions and compromise of assets will be avoided with a dynamic security strategy that employs cross-generational techniques appropriate for varying threats.
The DCA would like to thank our Strategic Partner CIF for this article drawn from the TrendMicro Report “Paradigm Shifts – TrendMicro Security Predictions for 2018”. The full report can be downloaded from: https://www.cloudindustryforum.org/content/paradigm-shifts-trend-micro-security-predictions-2018
One of the latest buzz words taking Cloud Computing by storm is that of Functions as a Service (FaaS) or serverless computing by Chris Gray, Chief Delivery Officer, Amido.
Serverless is a hot topic in the world of software architecture, and it has been gaining attention from outside the developer community since AWS pioneered the serverless space with the release of AWS Lambda back in 2014. As one of the fastest growing cloud service delivery models, FaaS has fundamentally changed not only the way technology is purchased but also how it is delivered and operated.
The significance of FaaS for businesses could be huge. Businesses will no longer have to pay for the redundant use of servers, but just for how much computing power that application consumes per millisecond, much like the per-second billing approach that containers are moving towards. Instead of having an application on a server, the business can run it directly from the cloud allowing it to choose when to use it and pay for it, per task – making it event driven. According to Gartner, by 2020, event-sourced, real-time situational awareness will be a required characteristic for 80% of digital business solutions, and 80% of new business ecosystems will require support for event processing.
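The billing difference described above is easy to illustrate with a short sketch comparing an always-on virtual machine with a pay-per-use function. Every price below is a hypothetical placeholder for illustration, not any vendor's actual tariff:

```python
# Rough comparison: always-on server cost vs pay-per-invocation FaaS cost.
# All rates are invented placeholders, not real cloud pricing.

HOURS_PER_MONTH = 730

def vm_monthly_cost(hourly_rate):
    """An always-on VM bills for every hour, busy or idle."""
    return hourly_rate * HOURS_PER_MONTH

def faas_monthly_cost(invocations, ms_per_invocation, price_per_gb_s,
                      memory_gb, price_per_million_requests):
    """FaaS bills only for the compute actually consumed, per request."""
    compute_seconds = invocations * ms_per_invocation / 1000.0
    return (compute_seconds * memory_gb * price_per_gb_s
            + invocations / 1e6 * price_per_million_requests)

vm = vm_monthly_cost(0.05)  # placeholder: $0.05/hour
fn = faas_monthly_cost(invocations=2_000_000, ms_per_invocation=120,
                       price_per_gb_s=0.0000167, memory_gb=0.5,
                       price_per_million_requests=0.20)

print(f"Always-on VM:            ${vm:.2f}/month")
print(f"FaaS at 2M invocations:  ${fn:.2f}/month")
```

With these placeholder figures, a function handling two million short requests a month costs a small fraction of an idle-most-of-the-time server, which is exactly the "pay per task" argument made above; of course, at sustained high utilisation the comparison can flip.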
FaaS is a commoditised function of cloud computing and one that takes away the wasted compute associated with idle server storage and infrastructure. “Not every business is going to be right for FaaS or serverless, but there is a real appetite in the industry to reduce the cost of adopting the cloud – so this is a great way to help drive these costs down,” adds Richard Slater, Principal Consultant at Amido. “The thing is, if you’re considering this as an option you are signing up to the ultimate in vendor lock-in as it’s not easy to move these services from one cloud to another (though there are promising frameworks like Serverless JS which claim to resolve this); each cloud provider approaches FaaS in a different way and at present you can’t take a function and move it between vendors. As the appetite for serverless technologies grows, the nature of DevOps will subsequently change; it will still be relevant, although how we go about doing it will be very different. We could say that we are moving into a world of NoOps where applications run themselves in the cloud with no infrastructure and little human involvement. Indeed, humans will need to be there to help automate those services, but won’t be required to do as much coding or testing as they do now. With the advent of AI, the IoT, and other technologies, business events can be detected more quickly and analysed in greater detail; enterprises should embrace ‘event thinking’ and Lambda Architectures as part of a digital landscape.”
With FaaS and serverless gaining momentum, we are seeing fundamental changes to the traditional way in which decisions around technology are made, with roles like the CIO evolving at enterprise level now that there isn’t the same level of vendor negotiations. “Cloud providers are basically the same price across the board, meaning there is little room for negotiation, other than length of contract. However, signing up to long-term single-cloud contracts introduces the risk of having a spending commitment with a cloud that doesn’t offer the features that you need in the future to deliver business value. In this respect, the CIO is still necessary,” adds Richard Slater.
The current industry climate is demanding an increase in specialised IT skills that can cater to serverless digital transformation. If business leaders want to deliver, they need to let go of the ‘command and control’ approach and empower teams to be accountable. Creating the environment, and securing the right skillsets to develop, own and operate applications from within the same team, is demanding a new breed of IT engineer. Organisations wanting to embrace digital transformation and this new breed of cloud service delivery must start to give trust to the individuals closest to the business who are writing code on the ground. “To a certain extent this trust must be earned, but in many of today’s enterprises there is so much governance around technical delivery that it has the effect of slamming the brakes on any transformation,” concludes Richard Slater, Principal Consultant at Amido.
We’ve seen trends come and go over the years, but with global companies like Expedia and Netflix embracing serverless computing, and cloud heavyweights Amazon, Google and Microsoft offering serverless computing models in their respective public cloud environments, FaaS seems here to stay.
We have been talking to managed services providers about what their customers are saying about GDPR – the compliance requirement which will come into force across Europe and affect anyone else holding European data. Other than a few household names, it appears that most smaller businesses are taking a watching and monitoring stance, perhaps looking at reviewing their marketing emails, and waiting to see which household name ends up in the headlines.
A more sophisticated group of managed service providers – generally those with large public sector or major global enterprises among their customer base – are being pushed into a closer alignment. They are being told that they have to sign new contracts which commit them to being GDPR compliance, and in order to continue to do business, they have to make changes. And, as most of them have found out, there is no point solution, no magic bullet for becoming compliant – it is rather a process towards an ideal.
MSPs will find themselves caught in this matrix; the danger is that they could end up making a statement of compliance based on what their vendors have told them about specific products. The vendors, of course have no legal status with the end-user customer, and will not be there behind the MSP should anything wayward emerge in future months.
And ideas on how this will play out are limited; we have spoken with legal experts, and the best advice they can give is that parties show they are aware of the requirements and are able to show some progress towards compliance. MSPs will need to explore in detail how their own provider contracts will work and take suitable advice, not just listen to those selling “solutions”.
The other area of interest emerging in studies on MSPs is marketing – while everyone is aware of the level of competition and the need to differentiate, there seems to be little advice on how to build the MSP business pipeline, except by word of mouth, which necessarily limits the MSP to one geographic region or one vertical or sub-vertical market. Vendors are increasingly able to supply collateral in the form of web downloads or documents, but the MSP needs to be able to wrap a clear message around such material and establish their own right to deliver solutions based on it.
So there are plenty of discussion points on how to build the best MSP business, and we know that the best are doing very well this year. Which is why the Managed Services and Hosting Summit (MSHS) on May 29 in Amsterdam aims to build on best practice, learning from those who are getting it right, and passing on ideas.
The MSHS event offers multiple ways to get those answers: from plenary-style presentations from experts in the field to demonstrations; from more detailed technical pitches to wide-ranging round-table discussions with questions from the floor. There is no excuse not to come away from this with questions answered, or at least a more refined view on which questions actually matter.
One of the most valuable parts of the day, previous attendees have said, is the ability to discuss issues with others in similar situations, and attendees are all hoping to learn from direct experience, especially in the complex world of sales and sales management, where there is a big jump from traditional reselling into annualised revenue models.
In summary, the European Managed Services & Hosting Summit 2018 is a management-level event designed to help channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships. Registration is free-of-charge for qualifying delegates - i.e. director/senior management level representatives of Managed Service Providers, Systems Integrators, Solution VARs and ISVs. More details: http://www.mshsummit.com/amsterdam/register.php
The next Data Centre Transformation events, organised by Angel Business Communications in association with DataCentre Solutions, the Data Centre Alliance, The University of Leeds and RISE SICS North, take place on 3 July 2018 at the University of Manchester and 5 July 2018 at the University of Surrey. Keynote speakers have been booked and the final programme is all but in place. The event website (https://www.dct.events/) is the best place to go to keep up to speed with just what’s happening!
For the 2018 events, we’re taking our title literally, so the focus is on each of the three strands of our title: DATA, CENTRE and TRANSFORMATION.
The DATA strand will feature two Workshops on Digital Business and Digital Skills together with a Keynote on Security. Digital transformation is the driving force in the business world right now, and the impact that this is having on the IT function and, crucially, the data centre infrastructure of organisations is something that is, perhaps, not as yet fully understood. No doubt this is in part due to the lack of digital skills available in the workplace right now – a problem which, unless addressed urgently, will only continue to grow. As for security, hardly a day goes by without news headlines focusing on the latest high profile data breach at some public or private organisation. Digital business offers many benefits, but it also introduces further potential security issues that need to be addressed. The Digital Business, Digital Skills and Security sessions at DCT will discuss the many issues that need to be addressed, and, hopefully, come up with some helpful solutions.
The CENTRE strand features two Workshops on Energy and Hybrid DC with a Keynote on Connectivity. Energy supply and cost remains a major part of the data centre management piece, and this track will look at the technology innovations that are impacting on the supply and use of energy within the data centre. Fewer and fewer organisations have a pure-play in-house data centre real estate; most now make use of some kind of colo and/or managed services offerings. Further, the idea of one or a handful of centralised data centres is now being challenged by the emergence of edge computing. So, in-house and third party data centre facilities, combined with a mixture of centralised, regional and very local sites, makes for a very new and challenging data centre landscape. As for connectivity – feeds and speeds remain critical for many business applications, and it’s good to know what’s around the corner in this fast moving world of networks, telecoms and the like.
The TRANSFORMATION strand features Workshops on Automation and The Connected World together with a Keynote on Automation (AI/IoT). IoT, AI, ML, RPA – automation in all its various guises is becoming an increasingly important part of the digital business world. In terms of the data centre, the challenges are twofold. How can these automation technologies best be used to improve the design, day-to-day running, overall management and maintenance of data centre facilities? And how will data centres need to evolve to cope with the increasingly large volumes of applications, data and new-style IT equipment that provide the foundations for this real-time, automated world? Flexibility, agility, security, reliability, resilience, speeds and feeds – they’ve never been so important!
Delegates select two 70-minute workshops to attend and take part in an interactive discussion led by an Industry Chair and featuring panellists – specialists and protagonists – in the subject. The workshops will ensure that delegates not only earn valuable CPD accreditation points but also have an open forum to speak with their peers, academics and leading vendors and suppliers.
There is also a Technical track where our Sponsors will present 15-minute technical sessions on a range of subjects. Keynote presentations in each of the themes, together with plenty of networking time to catch up with old friends and make new contacts, make this a must-do day in the DC event calendar. Visit the website for more information on this dynamic academic and industry collaborative information exchange.
This expanded and innovative conference programme recognises that data centres do not exist in splendid isolation, but are the foundation of today’s dynamic, digital world. Agility, mobility, scalability, reliability and accessibility are the key drivers for the enterprise as it seeks to ensure the ultimate customer experience. Data centres have a vital role to play in ensuring that applications and supporting services connect organisations to their customers seamlessly – wherever and whenever they are being accessed. And that’s why our 2018 Data Centre Transformation events, in Manchester and Surrey, will focus on the constantly changing demands being made on the data centre in this new, digital age, concentrating on how the data centre is evolving to meet these challenges.
By Chris Brown, Chief Technology Officer, Uptime Institute.
The modern information technology landscape is changing at breakneck pace. By the year 2020, Business Insider Intelligence estimates that over 5.6 billion devices will utilize edge computing. The primary question you may be asking yourself and your IT staff is: what impact will this shift to edge computing have on the broader data center space?
Some experts predict it may push some workloads out of the data center, and other industry authorities hypothesize that edge may ‘blow away cloud.’ In any case, it is important to recognize that edge deployments are highly workload-specific, and will siphon data processing loads away from traditional data centers only where business requirements call for greater computational power closer to the data source.
Before you can identify which workloads can and should be moved to the edge, you need to carefully evaluate a variety of factors. A close look at the requirements involved with each specific workload type will give you tremendous insight into where edge computing will have the greatest impact.
Classifying your workloads by type
The key rationale for migrating any workload to the edge lies within its own data requirements, including speed of access, availability and protection. There are three ways to classify workloads in relation to their potential for data processing at the edge. The first is latency tolerance – you may have an application whose performance is contingent on constant access. Workloads with requirements for low latency and high speeds are excellent candidates for edge. The second defining factor is the criticality of the site and its specific availability and reliability requirements. Carefully evaluate the levels of availability and reliability needed for the application to meet business objectives; the higher the need for available data, the better a candidate it is for its own independent edge site. The third factor is data volume – can you quantify the volume of data originating at the site and traveling to and from it? The higher the data volume originating at the site, the better a candidate it is for an edge setup.
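The three factors above lend themselves to a quick screening exercise. Below is a minimal, hypothetical Python sketch of such a triage; the function name, thresholds and example workload are illustrative assumptions, not figures from the article or from 451 Research.

```python
# Hypothetical edge-suitability triage based on the three factors above:
# latency tolerance, availability/criticality, and on-site data volume.
# All thresholds are invented for illustration.

def edge_suitability(latency_tolerance_ms, availability_pct, daily_data_gb):
    """Return a rough 0-3 score: one point per factor favouring edge."""
    score = 0
    if latency_tolerance_ms < 20:      # little tolerance for latency -> edge
        score += 1
    if availability_pct >= 99.99:      # high availability requirement -> edge
        score += 1
    if daily_data_gb > 500:            # high data volume at the site -> edge
        score += 1
    return score

# e.g. a machine-vision line on a factory floor (Industry 4.0 profile)
print(edge_suitability(latency_tolerance_ms=5,
                       availability_pct=99.99,
                       daily_data_gb=2000))  # -> 3: strong edge candidate
```

A real assessment would of course weigh compliance, security and cost alongside these data-driven factors, as the article notes next.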
Aside from data-contingent aspects of your workloads, there are multiple factors you may be weighing when making a decision about edge, including compliance, security, manageability, and costs of any changes to software architecture. Although these are all demands that will be unique to your organization, data considerations should remain the key driver for a switch to the edge.
The top 7 workloads suitable for edge
Below are the top seven types of workloads that are best-suited for an edge site, based on research from our colleagues at 451 Research. If you are managing one of these types of workloads, it may be a good candidate for an edge transition:
1. Industry 4.0 - Also referred to as the fourth industrial revolution, Industry 4.0 is another name for smart manufacturing. As technology evolves, industrial companies are finding innovative uses for smart, internet-connected technologies. In manufacturing, rich data can be captured from a plethora of tools, labor, cameras and sensors throughout the plant floor, with the potential to revolutionize how business is done. Asset management, machine vision and machine learning are also expected disruptors in this industry. Due to the nature of the data generated on manufacturing plant floors, Industry 4.0 workloads involve high volumes of data and little tolerance for downtime or latency.
2. Carrier Network Functions Virtualization (NFV) - In an ongoing trend, businesses are moving away from fixed-function hardware toward software appliances that run on industry-standard computing and storage equipment. A major, transformational industry trend, NFV will require more IT resources locally and at the edge to support the volume as it grows. This pattern is expected to generate even more data than Industry 4.0, while tolerating little latency or downtime.
3. IoT Gateways - IoT device usage is on the rise for consumers and businesses alike. The vast majority of connected machines and sensors do not have the compute power or hardware for adequate data storage, and need to be in close proximity to a data hub to transmit and store their data. Physical or virtual gateways can reside in an edge stack, where the volume of data into an edge gateway will be massive, but analytics will limit the amount of on-site processing needed. Even though the volume of data is less than for the other workloads mentioned here, criticality and latency tolerance are low, making it an excellent candidate for edge.
4. Remote Data Processing - As data entrenches itself further into everyday life, departmental and branch offices, retail locations, factories and remote industrial sites will require local processing to support the data being generated on site, even when networking is down. Security and compliance are key considerations when installing outposts like this. Remote data processing can forgive a few seconds of downtime, but there’s still a high risk if downtime lasts any significant period of time or if consequential latency takes place.
5. Imaging (e.g., medical, scientific) - Convenience is working its way into every industry, including medicine. And as high-resolution imaging is used in various areas like medical screenings and diagnostics, large data sets will be legally required to remain on file for years. Even clinics and small medical offices will need local IT capacity to analyze and retain imaging data sets for the long term. While imaging can withstand several seconds of latency, the volume of data and high demands for availability make this workload a good opportunity for edge.
6. 5G Cell Processing - In the coming years, 5G is expected to bid for a much larger share of IP traffic, which will bring about an exponential increase in wireless bandwidth. This will require an all-IP-based and standard IT-based architecture that can also be an accelerator for CDNs. Key functions include data caching and real-time transcoding of content. With little to no tolerance for latency, lower demands for reliability and high volumes of data, 5G cell processing is a good opportunity to consider edge deployments.
7. CCTV and Analytics - Closed-circuit television installation and usage is only expected to increase with the onset of smarter cameras and more sophisticated processing power. High definition recording and advanced analytics are two contributing trends in this area. Newer CCTV setups could also include camera preprocessing, biometric and object identification, and cross-camera comparison. All of this points to more sophistication for CCTV deployments and as refinement continues, so will additional IT capacity needs and demand for edge.
Making the move to edge is a big decision, but it shouldn’t be a hard one. Systematically evaluating your workloads along with any potential compliance, security, and manageability demands will set your business up for edge success in the future.
By Richard Stinton, Enterprise Solutions Architect at iland.
I wrote an article recently which centred on Gartner’s prediction that the Disaster Recovery as a Service (DRaaS) market would grow from $2.01B in 2017 to $3.7B by 2021.
In my opinion, one of the main drivers for this rapid level of growth is the fact that it is ‘as a service’ and not the complex and expensive ‘create your own’ environment that it used to be. As a result, this has made DRaaS much more accessible to the SMB market, as well as enterprise customers. But, as the list of DRaaS solutions grows along with adoption rates, it's important for customers to carefully consider how their choice of cloud provider should be influenced by their existing infrastructure. This will help to avoid technical challenges down the road.
The concept of Disaster Recovery
Before I delve into the key considerations for customers when choosing a DR solution, I should, for the sake of the uninitiated amongst us, explain what DR is. It literally means to recover from a disaster, and so encompasses the time and labour required to be up and running again after a data loss or downtime. DR depends on the solution that is chosen to protect the business against data loss. It is not simply about the time during which systems and employees cannot work. It is also about the amount of data lost when having to fall back on a previous version of that data. Businesses should always ask themselves: “how much would an hour of downtime cost?” And, moreover, “is it possible to remember and reproduce the work that employees or systems did in the last few hours?”
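The two questions above can be made concrete with some back-of-envelope arithmetic. This is a hypothetical Python sketch; all revenue, headcount and RPO figures are invented for illustration.

```python
# Back-of-envelope sketch for the two questions above: what an hour of
# downtime costs, and how much rework a recovery point implies.
# All figures are hypothetical.

def downtime_cost(hourly_revenue, staff_count, hourly_wage, hours_down):
    """Lost revenue plus idle labour for the outage window."""
    return hours_down * (hourly_revenue + staff_count * hourly_wage)

def data_loss_hours(rpo_minutes):
    """Worst-case hours of work lost when falling back to the last copy."""
    return rpo_minutes / 60.0

cost = downtime_cost(hourly_revenue=10_000, staff_count=50,
                     hourly_wage=30, hours_down=4)
print(cost)                  # 46000 for a four-hour outage
print(data_loss_hours(240))  # a 4-hour RPO risks 4.0 hours of rework
```

Even rough numbers like these make it much easier to justify (or rule out) the cost of a given DR solution.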
When choosing a DR solution, what are the considerations?
In the past, customers would usually have resorted to building out a secondary data centre complete with a suitably sized stack of infrastructure to support their key production servers in the event of a disaster. They could either build with new infrastructure, or eke out a few more years from older servers and networking equipment. Often, they would even buy similar storage technology that would support replication.
More recently, software-based replication technologies have enabled a more heterogeneous set-up, but one that still requires a significant investment in the secondary data centre, not forgetting its power and cooling, coupled with the ongoing maintenance of the hardware, all of which increases the overall cost and management burden of the DR strategy.
Even recent announcements such as VMware Cloud on AWS are effectively managed colocation offerings, involving a large financial commitment to physical servers and storage running 24/7.
So, should customers be looking to develop their own DR solutions, or would it be easier and more cost-effective to buy a service offering?
Now, with DRaaS, customers need only pay for the storage associated with their replicated and protected virtual machines, and pay for CPU and RAM only when there is a DR test or a real failover.
Choosing the right DR provider for you
When determining the right DR provider for you, I would always recommend working through a disaster recovery requirements checklist, regardless of whether you are choosing an in-house or DRaaS solution. This checklist should include the following points:
Does the DR solution offer continuous replication?
Which RTO and RPO does the solution offer?
DRaaS – Does the Cloud Service Provider offer a reliable and fast networking solution, and does the DRaaS solution offer networking efficiencies like compression?
Does the solution support all of your systems?
Is the DR solution storage agnostic?
How scalable is the solution (up and also down in a DRaaS environment)?
DRaaS – Does it offer securely isolated data streams for business critical applications and compliance?
Is it a complete off-site protection solution, offering both DR and archival (backup) storage?
Is it suited for both hardware and logical failures?
Does it offer sufficient failover and failback functionality?
Can it be tested easily and are testing reports available?
DRaaS – Are there any licence issues or other investments upfront?
DRaaS – Where is the data being kept? Does the service provider comply with EU regulations?
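To illustrate the RTO/RPO item on the checklist above: a solution's achievable RPO is bounded by its replication interval, and its achievable RTO by its failover time. This is a minimal, hypothetical Python sketch; the function name and all figures are illustrative assumptions rather than any vendor's actual numbers.

```python
# Hypothetical check of a DR solution against RPO/RTO targets.
# Worst case, you lose one full replication interval of data (the RPO),
# and recovery takes the failover time (the RTO).

def meets_targets(replication_interval_min, failover_time_min,
                  rpo_target_min, rto_target_min):
    achieved_rpo = replication_interval_min   # worst-case data loss window
    achieved_rto = failover_time_min          # time to be up and running
    return achieved_rpo <= rpo_target_min and achieved_rto <= rto_target_min

# Near-continuous replication against a 15-minute RPO / 1-hour RTO target
print(meets_targets(replication_interval_min=0.25, failover_time_min=10,
                    rpo_target_min=15, rto_target_min=60))   # True
```

The same comparison with, say, hourly replication would fail a 15-minute RPO target, which is why continuous replication sits at the top of the checklist.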
Let’s take VMware customers as an example. What are the benefits for VMware on-premises customers to working with a VMware-based DRaaS service provider?
Clearly, one of the main benefits is that the VMs will not need to be converted to a different hypervisor platform such as Hyper-V, KVM or Xen. Conversion can cause problems: VMware tools will need to be removed (deleting any drivers) and the equivalent tools installed for the new hypervisor, and Network Interface Controllers (NICs) will be deleted and new ones will need to be configured. This results in significantly longer on-boarding times as well as ongoing DR management challenges; these factors increase the overall TCO of the DRaaS solution.
In the case of the hyperscale cloud providers, there is also the need to align VM configuration to the nearest instance of CPU, RAM and storage that those providers support. If you have several virtual disks, this may mean that you need more CPU and RAM in order to allow more disks (the number of disks is usually a function of the number of CPU cores). Again, this can significantly drive up the cost of your DRaaS solution.
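The instance-alignment effect described above can be sketched in a few lines; the instance table, disk limits and prices below are entirely made up for illustration, and real hyperscale catalogues differ.

```python
# Illustrative sketch of the sizing mismatch described above: when the
# number of disks is a function of core count, a small VM with many
# virtual disks is forced onto a larger, costlier instance.
# The instance table is hypothetical.

INSTANCES = [  # (name, vcpus, ram_gb, max_disks, usd_per_hour)
    ("small",  2,  8,  4, 0.10),
    ("medium", 4, 16,  8, 0.20),
    ("large",  8, 32, 16, 0.40),
]

def nearest_instance(vcpus, ram_gb, disks):
    """Return the cheapest instance that fits CPU, RAM and disk count."""
    for name, c, r, d, price in INSTANCES:
        if c >= vcpus and r >= ram_gb and d >= disks:
            return name, price
    raise ValueError("no instance fits")

# A 2-vCPU VMware VM with 6 virtual disks is forced up to "medium",
# doubling the hourly cost despite needing no extra CPU or RAM.
print(nearest_instance(vcpus=2, ram_gb=8, disks=6))  # ('medium', 0.2)
```

This is exactly the kind of hidden cost that a VMware-native DRaaS provider avoids, since the VM keeps its original configuration.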
In some hyperscale cloud providers, the performance of the virtual disks is limited to a certain number of IOPS. For typical VMware VM implementations, with a C: drive and a data disk or two, this can result in very slow performance.
Over the past few years, iland has developed a highly functional web-based console that gives DRaaS customers the same VMware functionality they are used to on-premises. This allows them to launch remote consoles, reconfigure VMs, see detailed performance data, take snapshots while running in DR and, importantly, perform test failovers, in addition to other functions.
For VMware customers, leveraging a VMware-based cloud provider for Disaster Recovery as a Service delivers rapid on-boarding, cost-effectiveness, ease of ongoing management and a more flexible and reliable solution to protect your business.
Hyperconverged infrastructure is essential when embracing the demands of the IoT. By Lee Griffiths, Infrastructure Solutions Manager, IT Division, APC by Schneider Electric.
The explosion in digital data requires, among many other things, an expanded vocabulary just to be able to describe it. Many of us familiar with computer systems as far back as the 1980s – or further – remember what a kilobyte is, in much the same way as we remember the fax machine. However, millennials may have only a vague recollection that such terms or technology ever existed or were relevant.
Now we must become as familiar with terms like the zettabyte (that's 1 x 10²¹ bytes) as we are with products like smart phones and self-driving cars – marvels that were science fiction not so long ago but which have become, or are rapidly becoming, commercially available realities. Industry analyst IDC predicts that by 2020 there will be 44 zettabytes of data created and copied in Europe alone, based on the assumption that the amount will double every two years.
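The doubling assumption behind that projection is easy to sanity-check. In this hypothetical Python sketch, the 11-zettabyte starting figure for 2016 is an illustrative assumption chosen so that two doublings reach the quoted 44 ZB by 2020; it is not a figure from the article.

```python
# Compound growth under a fixed doubling period, as in IDC's assumption
# that data volume doubles every two years. Starting figure is illustrative.

def project_zettabytes(start_zb, start_year, end_year, doubling_years=2):
    periods = (end_year - start_year) / doubling_years
    return start_zb * 2 ** periods

# Roughly 11 ZB in 2016, doubling every two years, reaches 44 ZB by 2020
print(project_zettabytes(11, 2016, 2020))  # 44.0
```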
If that projection sounds like an echo of the famous Moore's Law of semiconductor growth – that the number of transistors able to be constructed on a single silicon chip will double every two years – it may be worthwhile pondering the effect that this simple empirical observation of the 1960s has had on the computer industry and the world. The ever-increasing capability of silicon chips, both to process information more quickly and to store and retrieve it in ever-vaster capacities, has underpinned our information society for decades.
More recently, Cisco has estimated that within five years 50 billion devices and ‘things’ will be connected to the Internet. Digitalisation is driving the economy and, according to IDC, by the year 2020 around 1.7 megabytes of new information will be created every second for each human living on the planet. Incredible numbers, but when you look at new innovations such as Amazon Go, the partially automated grocery store, you can see these predictions becoming a reality. Think of the amount of IT infrastructure that will be needed to support the supermarket industry of the near future if this new way of shopping takes off!
IDC's projections warn those depending on digital services that the amounts of data required are only going to increase for the foreseeable future. How does the data centre industry respond with the necessary capacity?
Speed of response is a vital factor in the data centre industry. Indeed, the very rapid surge in creation of new data and the consequent demand for more data centres is causing the industry to diversify along two routes.
At one level, centralised data centres are becoming bigger, making huge data capacities available. At another, smaller data centres are moving to the edge of the network, bringing data closer to the point of consumption, simplifying Internet traffic and reducing network latency for applications such as video streaming - where speed of response is essential. The rapidly growing Internet of Things (IoT) phenomenon is also leading to a demand for smaller data centres distributed around the edge of the network.
Collaboration enables innovation
The response of OEMs in both the data centre and IT industries has been to realise that no one vendor can provide all the tools necessary to deliver the types and variety of digital services required by today's rapidly growing businesses; that the necessary speed of response requires a collaborative approach between vendors and their channel partners, the systems integrators charged with designing and assembling facilities to customers' specific needs; and that standards and interoperability between vendors' products are essential to such a collaborative approach.
The goal for many skilled systems integrators now is to be able to rack, stack, label and deploy a data centre of any size for any customer, tailored to their particular needs and rolled out in the quickest time frame possible.
At the most specialised end of the market, this comprises localised Edge or micro data centres, delivered in a single rack enclosure with integrated power, uninterruptible power supply (UPS), power distribution, management software (DCIM), physical security, environmental monitoring and cooling. Such infrastructure can be assembled and deployed rapidly to support a self-contained, secure computing environment, in some cases in as little as two to three weeks.
Further up the scale are centralised, hyperscale data centres with purpose-built computer halls comprising air-conditioning equipment, containment enclosures for hot- or cold-aisle cooling configurations, and the necessary power supply and networking infrastructure to drive a multitude of computer-equipment racks. Such installations need to be adaptable, able to accommodate rapid upgrading and the scaling up or down of compute and storage capacity in response to end-user needs.
In either case, the ability to scale rapidly, to rack, stack, label and deploy the solution requires ever greater collaboration between vendors and convergence between their various product offerings, so that the time taken to build complete systems is as short as possible. In many cases Data Centre companies offering critical infrastructure solutions must work closely with IT Vendors who produce rack-mounted servers, storage arrays and networking equipment to ensure their products integrate seamlessly with each other.
Integration is absolutely essential for end-users looking to embrace digital transformation and expand their footprint rapidly. Solutions must be delivered ready to deploy and in excellent working condition, and that requires both focused partnerships and the skills of highly specialised systems integrators, who have become the go-to people in the converged and hyperconverged infrastructure space. The magic, it seems, lies not within the individual pieces of IT and infrastructure equipment, but very much within the way the system is built, tested and deployed.
Additionally, such hardware must be guaranteed to work flawlessly with DCIM (Data Centre Infrastructure Management) and virtualisation software solutions that allow pools of processing or storage resources to be treated as individual isolated systems dedicated to particular customers or applications.
Collaboration; the key to hyperconvergence
By definition, converged infrastructure enables the user to deploy four critical or core components of data centre technology - compute, storage, networking and virtualised servers - within a single, secure, standardised, rack-based solution. Hyperconverged infrastructure differentiates itself by utilising software to enable tighter integration between components and to recognise them as a single stack, rather than as individual products.
In many of today's markets, businesses are adopting hyperconverged solutions as a more collaborative, forward-thinking and customisable approach to their data centre infrastructure requirements. It means that they can strategically hand-pick the core components, which are in many cases used to expand footprint whilst providing both resiliency and connectivity at the Edge of the network. The real beauty of hyperconverged infrastructure is that once a particular solution is chosen, tested and deployed, it can be quickly standardised and replicated to provide faster scalability and reduced costs – both in CAPEX and OPEX.
A recent example of collaboration between vendors is that between Schneider Electric and companies such as Cisco, HPE and Nutanix. In Cisco's case, the two companies have worked together to certify that Cisco's Unified Computing System (UCS) servers can be shipped already packaged in Schneider Electric's NetShelter racks and its portfolio of localised Edge or micro data centre solutions. Nutanix, meanwhile, has certified that Schneider's PowerChute Network Shutdown power protection software will work seamlessly with the ESXi and Hyper-V software used in the management of its own hyperconverged systems.
In addition Schneider Electric has leveraged its Micro Data Center Xpress™ architecture in partnership with HPE on HPE Micro Datacenter, a collaboratively engineered converged infrastructure solution providing end-to-end IT infrastructure, networking, storage and management in a self-contained and easy-to-deploy architecture - ideal for distributed Edge Computing and IT environments.
Collaborations such as these help simplify the task of systems integrators as they specify and assemble bespoke data centre systems of all sizes, giving them the peace of mind that key components of the overall systems they are tasked to build will work together seamlessly, reliably speeding up the delivery time of new data centre deployments and allowing their customers to scale their businesses rapidly as they seek out new markets. It is therefore of paramount importance that Edge data centre solutions work reliably, as promised, from the moment they are connected.
The advance of semiconductor technology, guided by the road map established by Moore's Law, brought in the era of the PC, cellular phone networks and handheld technology. Now, in the era of Cloud Computing and the Internet of Things, the data centre – no matter the size - provides the fundamental technological base that makes all other services possible.
The ability to rack, stack and deploy new IT resources quickly, efficiently and under the guarantee that they will perform to specification, will no doubt have a huge impact on how well companies succeed in the era of Edge Computing and the IoT.
DCS talks to Glenn Fassett, General Manager International and Christo Conidaris, VP of Sales, UK, Curvature, about the company’s development over the past year and its plans for 2018 and beyond as interest in third party maintenance continues to grow.
1. We have to start by asking how has the merging of Curvature and SMS gone over the past year?
Twelve months on, and now fully complete with both companies integrated under the rebranded Curvature – and with the unified vision of Enabling IT Freedom – the merger has been heralded by commentators as a success, both in terms of company integration and customer satisfaction. As the merged Curvature, we can now offer access to over 800 engineers, linked by unified systems across over 100 staffed service centres worldwide.
2. And is there anything left to do when it comes to integrating the two companies?
Successful organisations in IT rarely stand still and that is even more pertinent when two dynamic companies merge. Culturally, the employee integration was relatively straightforward. There was an immediate connection between both employee sets driven by a mutual desire to jointly succeed through enhanced customer satisfaction and cross-pollination of products and services. In a service industry, employees remain the key differentiators. Improvements are continuous, as is ongoing research into what our customers expect from services offered by the merged organisation. To that end, there are always new technologies, factors and processes to embrace and deploy, but the mechanics of the merger are complete.
3. And how are your customers benefitting from the merger?
Our customers have benefitted through an increased portfolio that has customer savings and resultant IT enablement at the very heart of its business model. Third Party Maintenance (TPM) as a philosophy is growing in popularity but we recognise that it has yet to reach mainstream status. The merged Curvature organisation stands at the forefront of that TPM growth curve and our increased geographical reach through the merger can only enhance customer satisfaction and adoption rates.
4. Curvature had a new CEO in the autumn – what’s the thinking behind this appointment and what has been his impact to date?
Yes, the appointment of our new CEO, Peter Weber, in September 2017 not only provided a new, independent face for the merged organisation, but also a wealth of experience in IT, data centre and cloud enablement business success that is now shaping and driving the new Curvature. Prior to Curvature, Peter grew his previous company into a $5 billion organisation. Under his tutelage, customers have already seen stronger Cloud services with new Cloud Migration plans, and the portfolio continues to grow.
5. More recently, Curvature has appointed a storage expert. Again, what’s the thinking behind this move?
Indeed, Christo Conidaris joined in January 2018 as VP of Sales for the UK – a key territory for Curvature, where enabling savings in OpEx will be critical with budgets predicted to be flat and the outlook uncertain in changing economic times. Christo is a well-respected storage veteran, and storage is a growth area for Curvature, with untapped storage partners yet to take full advantage of TPM offerings. For corporate CIOs, Christo understands, and can effectively communicate, how they can sweat storage assets and free up money for innovation by consolidating their storage estate without impacting applications or performance.
6. During 2017, what developments did Curvature introduce in terms of its networking and server portfolio and what can we expect during 2018?
Curvature invests a great deal in understanding the products and services our customers really require, deciphering their needs and technical requirements almost in real time. We achieve this finger-on-the-pulse delivery through our dedicated engineering department, whose purpose is to assess and understand all types of technology in the marketplace and to respond by offering the required next generation of services within our portfolio. A key example is our Cloud Migration services, which help clients migrate seamlessly, backed by professional services, facilitating their impending move into the cloud without risk or exposure, and in the most efficient and effective way.
Looking forward to 2018, both for networking and servers, expect Curvature to be offering leading organisations savings that deliver real impact to the bottom line for budget expenditure. In essence, 2018 will see Curvature become an alternative to allow perceptive data centres to procure more with less. A key example is in the mainframe arena. As servers have become more and more powerful, fewer companies continue mainframe support as those with mainframe skills gradually retire. However, there is still a wealth of essential processing and application delivery coming from mainframes. Curvature’s advanced Server Division bucks the trend of decreased support and pro-actively offers mainframe support, increasing the lifespan and cutting OEM support costs significantly.
7. During 2017, what developments did Curvature introduce in terms of its storage portfolio – both products and/or third party services?
Our response on storage was the same as for the rest of our portfolio last year: delivering to our customers cost-effective, fully tested alternatives to traditional rip-and-replace storage estates. Through the year, we announced Curvature-branded Solid State Drives, joining the growing number of Curvature-brand products already in existence, like optics, NICs, memory and hard drives, all targeted at offering a lower-cost alternative for customers running large workloads that require high reliability, low latency and energy efficiency. Oh, and with next-day delivery, versus lead times (sometimes of weeks) for SSDs.
8. And what can we expect from Curvature during 2018 when it comes to storage?
In the UK, the full benefits of delivering TPM storage alternatives that will lower OpEx to afford digital transformation and cloud enablement will be fully outlined and explained to corporates. Curvature’s Professional Services Team will play a more prominent and publicised role to advise on a Cloud balanced infrastructure without spending a penny extra from the existing IT budget. It’s all afforded, and delivered, through TPM savings, that actually boost service levels and enable freedom of procurement.
9. During 2017, what developments did Curvature introduce in terms of its data centre/IT asset management portfolio and what can we expect during 2018?
For data centres worldwide, Curvature has expanded its field engineering team to over 800 engineers offering true global reach, regardless of country borders or cultural differences, into over 15,000 data centres (including around half of the Fortune 500 companies). In terms of assets, Curvature now supports over 1.25 million devices globally. For data centre services, Curvature offers colocation, data centre consolidation and cloud migration, with a focus on the strong relocation services that SMS provided. Our Professional Services Team uses proven methodologies to offer assessments and optimisation, migration, project management, design, relocation and remote staff augmentation.
10. What are some of the key issues to consider when looking at Third Party Maintenance (TPM)?
There are three key areas that we advise potential customers to consider before they select a TPM partner. The first is geographical coverage – where is the provider located, and how many stocking locations are close enough to support the required service levels? The second is critical but frequently overlooked: consider and examine who owns what stock. Curvature offers a prized 100% wholly-owned inventory. This alleviates the logistics issues often experienced at shared logistics sites and prevents delivery problems at the customer site, meaning that we can meet, and frequently exceed, our promised service levels. The third is to consider the reach required not just for today, but also for tomorrow. Always opt for an organisation that can give you maximum reach and can facilitate further reach as you expand.
11. In particular, how can TPM help end users to optimise infrastructure as part of the digitalisation journey with fixed, or even reducing, budgets?
At Curvature, once CIOs have decided to adopt TPM, we advise them to narrow their selection using buying criteria similar to those they would deploy when selecting a high-value insurance policy. From the longevity and breadth of coverage that the provider demonstrates, you can make a good assessment of the quality of the final service. Curvature’s credentials are first rate: we have been supplying IT services for over 35 years and support over 10 million devices at any one time.
In affording digitalisation and other IT initiatives, the Curvature proposition remains relatively simple: fund increased innovation, productivity and the pillars of digitalisation using the budgets you already have. Enabling digitalisation is at the core of Curvature’s existence, and we propose a 40/60 mix of OEM support for bleeding-edge technology versus TPM support for older technologies. This 60% move to TPM not only delivers a larger ROI to fund the 40% bleeding-edge technology; it also leaves perfectly performing devices intact and maintained, with seven-year lifecycles.
12. How does Curvature help organisations prioritise the infrastructure assets that offer the best ROI to the business when it comes to a refresh programme?
Curvature’s ClearView is a recognised free, online assessment tool that starts the transparent process of identifying areas to lower OpEx without compromising service levels. Organisations access the tool via https://global.curvature.com/ClearView-Maintenance-Optimization-Audit.html. Working through a green/amber/red light process, organisations are able to systematically prioritise the importance of devices and processes ahead of any refresh programme.
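A green/amber/red triage of this kind can be sketched in a few lines. The thresholds, field names and estate data below are purely illustrative assumptions for the sake of the example; ClearView’s actual criteria are its own:

```python
# Hypothetical RAG (red/amber/green) triage for maintenance optimisation.
# Thresholds and fields are illustrative, not ClearView's actual logic.
def rag_status(device):
    if device["oem_support_cost"] > 2 * device["tpm_quote"] and device["age_years"] >= 5:
        return "red"      # prime candidate: move to TPM now
    if device["oem_support_cost"] > device["tpm_quote"]:
        return "amber"    # review at next contract renewal
    return "green"        # leave on current support

estate = [
    {"name": "core-switch-01", "oem_support_cost": 9000, "tpm_quote": 3000, "age_years": 6},
    {"name": "san-array-02",   "oem_support_cost": 5000, "tpm_quote": 4000, "age_years": 3},
    {"name": "router-edge-03", "oem_support_cost": 2000, "tpm_quote": 2500, "age_years": 2},
]
order = {"red": 0, "amber": 1, "green": 2}
prioritised = sorted(estate, key=lambda d: order[rag_status(d)])
for d in prioritised:
    print(d["name"], rag_status(d))
```

Devices surface in priority order (red first), giving a simple worked list to take into a refresh programme discussion.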
13. Can you share one or two customer success stories – the Schneider Electric one sounds particularly interesting?
Schneider Electric was indeed an interesting case study, directly affording the five pillars of digitalisation through the millions of dollars in savings gained from using Curvature as its global TPM partner. It’s also interesting to note that over half of the Fortune 500 global organisations use Curvature in some capacity; so, while our users may be shy of declaring their allegiance to TPM, we see that trend changing this year, with further visionaries like Schneider Electric announcing a route to enablement derived entirely from savings achieved from within.
The right partnership can help fill the skills gap, says Simon Hendy, channel manager at Pulsant.
The growing popularity of the hybrid cloud is driving many channel partners outside their comfort zone. Resellers focusing solely on hardware will be feeling this acutely – but they are not alone. Strangely enough, those that belong firmly within the era of the cloud are being forced to re-examine their business model too.
While the launch of a new breed of hybrid cloud platforms is helping those with feet in both the hardware and software camps, those that focus on only one side of the spectrum are left wondering how they too can make the most of these advances.
Tangible versus assumptions
Many traditional hardware channel members have struggled with how to position cloud-based solutions such as software as a service (SaaS). They have been used to running their business by calculating margins on something tangible and physical, whereas with the cloud they are required to undertake consumption modelling where certain assumptions have to be made. For some, selling the cloud is tantamount to licensing fresh air.
At the same time, the cloud natives are faced with their own challenges. Namely, when a solution such as Azure Stack comes onto the market, they can be put off by the thought of having to deal with the hardware elements of it. They are unlikely to have any existing relationships with hardware manufacturers, and don’t have the operation in place to deal with the logistics of getting hardware to site. In many ways, it would be easier if they could just continue to sell the services that sit upon it, but this means missing out on a potentially lucrative stream of business.
Best of both worlds
Yes, there are some in the channel that can cover both cloud and hardware very well, but at either end of the spectrum it can be a real struggle. But change should be worthwhile. The market in general is gravitating towards a hybrid existence, and hybrid models are becoming more attractive for a variety of reasons.
Take, for example, the growing legislation around data protection. Stringent industry regulations require organisations to be aware of where their data is hosted, and to ensure that permissions and security are in place to safeguard that data. The EU General Data Protection Regulation (GDPR) is due to come into force in May 2018. From then, firms that fall foul of a data breach face a potential fine of €20m or 4% of annual turnover, whichever is greater.
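The "whichever is greater" arithmetic is worth making concrete. A minimal sketch (the turnover figures are invented examples):

```python
# GDPR maximum fine: the greater of EUR 20m or 4% of annual turnover.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a EUR 100m-turnover firm, 4% is only EUR 4m, so the EUR 20m floor applies.
print(max_gdpr_fine(100_000_000))
# For a EUR 1bn-turnover firm, 4% is EUR 40m, which exceeds the floor.
print(max_gdpr_fine(1_000_000_000))
```

The point being that the €20m floor makes the maximum exposure material even for mid-sized businesses, not just the giants.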
Businesses are becoming increasingly concerned about where their most confidential data is being held and for peace of mind prefer an on premise solution. On the other hand, they still want to benefit from the cost and other advantages of the public cloud for certain applications.
Vendors are responding to this trend and, seeing hybrid as the future of the market, are developing solutions to address it. There’s no doubt that the channel needs to sit up and take notice too, or get left behind. Yes, the cloud is important, and the opex-over-capex argument can’t be denied; but with companies wanting to bring some of their assets back on premise, the hybrid environment is being seen as the best of both worlds, and its popularity shows no sign of abating.
Partnerships are the key
Partnerships have always been valued in the IT world, regarded as a way of quickly extending a company’s reach to optimise current opportunities. There are many service providers who can solve this dilemma for both hardware and cloud only resellers by providing the missing skills and experience.
Azure Stack may just be the catalyst here. It is a seamless, single development and delivery platform that allows the channel to deliver Azure services from their own datacentre in a way that is consistent with the public Azure that they, and their customers, will no doubt be familiar with. Services can be developed in public Azure and seamlessly moved over to Azure Stack and vice versa, saving time and expense and making operations a lot more consistent.
Some see Azure Stack as revolutionising the cloud market. It may well play a part in building the hybrid cloud momentum even further. Those without the skills, contacts or experience can’t afford to miss out, but should team up with others that do while the opportunities are in abundance.
This way they can address the needs of their customers, explore new market opportunities, add new revenue streams while taking advantage of their partner’s industry knowledge and exclusive resources too.
As the hype around AI continues, building and executing on an AI strategy that supports market competitiveness will be top of mind for executives. The AI pilots are complete, yet executives are still grappling with what AI means for their organisations.
By Yasmeen Ahmad, Director, Think Big Analytics.
As the use cases develop and capabilities emerge, businesses will look to define an AI strategy for the enterprise to maximise benefit and impact. Core to this strategy will be an understanding of how data is accessed and integrated, as well as a plan for talent and skills development, infrastructure evolution, auditability and governance requirements.
These strategies will ensure organisations are building the capabilities needed to succeed with AI in the long term and transform operational business models.
Deploying Competitive Algorithms at Scale
Simply put, automation is required to harness the multitude of algorithms coming into play, enabled through accessibility of data, technology, tools and frameworks. With the focus on AI, organisations have access to a whole new dimension of analytics, providing algorithms that can operate on complex data types to create a plethora of insight.
As companies work to keep up in a marketplace of change, there will be continuous AI innovation as analytic teams develop new models using a myriad of languages, libraries and frameworks. The potential of these algorithmic solutions will be huge, however, businesses will need a fast path to production allowing automated deployment, monitoring and management.
In this AnalyticOps setup, the concept of deploying a model will become more fluid. As models are innovated at rapid speed, this will necessitate the ability to launch multiple algorithms, potentially using different frameworks and languages, in parallel, at scale. By having numerous versions and evolutions of a model in production, organisations will perform champion-challenger approaches to allow algorithms to compete for first place.
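A champion-challenger setup can be illustrated with a small sketch: several model versions score the same labelled traffic, and the best performer on a chosen metric is promoted. The model names, toy models and accuracy metric here are illustrative assumptions, not a specific product’s API:

```python
# Minimal champion-challenger sketch: competing model versions are scored
# on the same batch; the winner on the chosen metric becomes champion.
def evaluate(model, labelled_batch):
    """Accuracy of a model (a callable) over (features, label) pairs."""
    hits = sum(1 for x, y in labelled_batch if model(x) == y)
    return hits / len(labelled_batch)

def promote_champion(models, labelled_batch):
    scores = {name: evaluate(m, labelled_batch) for name, m in models.items()}
    return max(scores, key=scores.get), scores

# Two toy "models" (simple threshold rules) competing on the same batch
models = {
    "champion_v1":   lambda x: x > 0.5,  # current production model
    "challenger_v2": lambda x: x > 0.3,  # candidate trained on fresher data
}
batch = [(0.2, False), (0.4, True), (0.6, True), (0.9, True), (0.1, False)]
winner, scores = promote_champion(models, batch)
print(winner, scores)
```

In a real AnalyticOps pipeline the same pattern runs continuously in production, with models possibly written in different frameworks and languages behind a common scoring interface.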
Self-Adapting AI to Mass Scale Real-Time Decisioning
With an increased focus on personalisation and effective customer journey interactions, there are potentially tens to hundreds of decisions to be executed per customer, requiring real-time capability. These decisions need to be driven through data that is fresh and up-to-date.
In order to execute on these decisions, a new paradigm to performing and acting upon analytics will emerge. No longer will models be left to execute on their own accord for weeks and months, inevitably becoming stale and leading to sub-optimal results.
Instead, automated processes will continuously feed machine learning and deep learning algorithms with fresh data, allowing models to self-adapt. The humans in the loop, who would traditionally analyse algorithmic outputs and make execution decisions, will be replaced by machine processes that automate real-time decisions, taking up-to-date insight to action at speed.
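The self-adapting idea reduces to an online update: each fresh observation is folded into the model as it arrives, rather than retraining on a stale batch. A minimal sketch, using a running rate estimate with a forgetting factor as a stand-in for a full learning algorithm (the parameter values are illustrative):

```python
# Sketch of a self-adapting model: an online update incorporates each
# fresh observation. The "model" here is just a running estimate of a
# drifting rate (e.g. click-through), with exponential forgetting.
class OnlineRate:
    def __init__(self, alpha=0.1):
        self.alpha = alpha      # higher alpha forgets stale data faster
        self.estimate = 0.0

    def update(self, observation: float) -> float:
        # Move the estimate a fraction alpha towards the new observation
        self.estimate += self.alpha * (observation - self.estimate)
        return self.estimate

model = OnlineRate(alpha=0.2)
for clicked in [0, 0, 1, 1, 1, 1]:   # fresh events streaming in
    model.update(clicked)
print(round(model.estimate, 3))
```

The same shape scales up to gradient-based online learning: replace the scalar estimate with model weights and the update rule with a per-event gradient step.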
Governance of Automated Decisioning
Automated decisioning, involving analytics, will mature and increase to mandate higher levels of governance. As organisations implement an increasingly larger number of self-adapting, AI algorithms into production, they will look to ways and means to manage the algorithms now making instrumental business decisions.
Whether it is an Algorithm Office, a Chief Analytics Officer or an AI Competency Centre, companies will seek the right structures and oversight to combat algorithm bias and ensure sound decisions. Furthermore, as the field matures, regulation will catch up to mandate not just data protection, but also responsible execution of algorithmic insight.
To protect against regulation violations, and also meet consumer acceptability of data use, organisations will look to create frameworks and guidelines for the use of insight. For example, marketing will look for enterprise wide frameworks to govern customer communications that consider factors such as channel, message, timing, environment, competitiveness etc.
Self Service to Enable AI Commoditisation
To date we have primarily seen code driven AI, requiring specialist skills, but this is changing as more AI applications make the techniques accessible across the enterprise. As businesses move to adopt AI in more aspects of decision making, business users will demand self service capability to understand and interact with AI models.
Furthermore, as AI automates certain jobs across the enterprise, the human element of creativity and craft will be ever more important. The business domain knowledge needed to interpret and guide AI efforts will require more transparency and understanding of AI across the wider organisation, outside of the data science elite.
Embracing the Cloud to Take AI to New Heights
Using the cloud allows organisations to tap into the new tools and frameworks supporting AI endeavours. We have seen the pendulum swing to various extremes as organisations battle to understand which programming language, software or platform, open source and proprietary, will emerge as the front runner to become the go-to choice. However, the reality of a balanced ecosystem is emerging.
The most successful organisations will be those that embrace the flexibility that comes with a mix of open source or commercial software, and those that recognise ecosystem architectures that allow for tool and technology choices to be adaptable and swapped with minimal impact: it’s these organisations that will find themselves able to solve a broader range of business problems using the latest AI techniques.
The cloud will be an integral part of this ecosystem, providing organisations with ultimate flexibility to scale and access tools on demand, in extension to on premise capabilities.
Developing Skills Fit for the Enterprise
The hype over data scientists will be replaced with the realism that taking ideas through to production requires a hybrid team of specialists. Companies will look to re-assess their internal capabilities, including the structure of their data and analytic teams. Structures will evolve to bring analysts and data experts closer to business SMEs, enabling more effective execution of business-relevant use cases.
Also, as businesses look to build a pipeline from innovation to production, they will seek to invest in skillsets that go beyond data science to data and software engineering, DevOps and architecture. It is only through hybrid teams that organisations can operationalise analytic endeavours to realise the true value of AI. This will require a mix of hiring and upskilling through training and mentorship.
Ultimately, the future of AI adoption requires organisations to have a strategic plan focused on the variety of elements that enable AI execution to create business value. Organisations need to move beyond capabilities for AI pilots to the right environment, skills and infrastructure to take AI into production.
AnalyticOps frameworks will enable enterprises to deploy, monitor and manage AI applications that self-adapt and commoditise insights across the enterprise. As businesses come to rely on algorithms to make efficient and effective business decisions, organisations will need the agility to have models deployed in competition, with results feeding into business action in real time.
The 2018 Pyeongchang Winter Olympics took place recently and, at these games, technology was front and centre, says Darren Watkins, managing director for VIRTUS Data Centres.
South Korea boasts the fastest broadband in the world (an average of 28.6Mbps, compared to 16.9Mbps in the UK), and connectivity was further boosted at Pyeongchang by the introduction of a 5G mobile network at games venues. Tech giants were racing to live up to organisers’ promises that this would be the ‘most digitally advanced games yet’, with firms such as Samsung debuting groundbreaking technology like ski suits peppered with sensors that feed back live body-position data to coaches.
Of course, this is nothing new. Virtually all athletes now use the Internet of Things (IoT) and big data to gather information about performance, enhance technique and even reduce the risk of injury. And this has been happening for a while. As early as the 1990s, football, rugby and a raft of other professional sporting clubs began installing cameras which enabled match-play monitoring and helped teams review their performance to make changes and improve technique. Digital cameras, video technology and data tools have all added to the sophistication and precision of sports analysis – and the sports technology industry continues its rapid growth.
But, whilst the concept isn’t new, the richness of the data now available and the speed at which it is gathered is. And at this Games, just as in any other sporting event today, managing this data was no easy feat. Extreme spikes in data traffic will always challenge backend infrastructures, and the benefit of IoT and Big Data will only come to fruition with the right processing, power and storage capabilities behind it.
However, the real story is behind the scenes. Technology is helping Harry Lovell, a Ski Cross athlete for the British Academy who is hoping to compete in future Winter Olympics. It plays a huge part in Harry’s training, and his performance is significantly influenced by great technology. Cameras are used to capture information about his performance, helping him develop and refine technique. Harry’s smartwatch also helps him to reduce his resting heart rate, assisting his ability to perform at altitude, and even Harry’s clothing is engineered to give him extra speed.
The pace of change means that nobody can afford to be complacent - and virtually all athletes are relying on technology to help improve performance and take on the competition. Big data and IoT technologies put intense pressure on an organisation’s security, servers, storage and network. Sporting organisations are finding themselves struggling to proactively meet the demands that a tech-first sporting industry requires.
The strategic importance of data in sports means that, ultimately, data centre firms like VIRTUS are centre stage during this sporting revolution. Apart from being able to store IT-generated data, the ability to access and interpret it as meaningful, actionable information – very quickly – is vitally important, and gives huge competitive advantages to those that do it well.
The storage challenge
The most obvious issue for sports teams is simply in storing and analysing swathes of information which comes with IoT and big data. Speed is of course key; one of the primary characteristics of big data analysis is real-time or near real-time responses - and this is never more pertinent than in sports, where the ability to make fast, informed, decisions, is paramount.
In order to manage this crucial requirement, IT departments need to deploy more forward-looking capacity management to be able to proactively meet the demands that come with processing, storing and analysing machine generated data. So perhaps it’s no surprise that on-premise IT is on the decline and colocation facilities are becoming increasingly dominant.
High Performance Computing (HPC) is a good way to meet this challenge – and this requires data centres to adopt high-density innovation strategies in order to maximise productivity and efficiency, and to increase available power density and the ‘per foot’ computing power of the data centre.
Cloud computing offers almost unlimited storage and instantly available and scalable computing resource - offering sports teams the very real opportunity of renting infrastructure that they could not afford to purchase otherwise.
Choosing the right partner
All of these solutions mean that sports organisations must partner with technology providers. The biggest challenge for teams is now in choosing a technology partner which meets their needs. Of course, there are many ways to do this - but we believe that transparency, flexibility and security are the key requirements which should be put at the top of any organisation’s list.
Put simply, cloud computing, colocation or managed services are appealing to many organisations because they’re not technology experts. But, when outages in service are no longer within your own ability to fix, or data leakages aren’t within your remit to control, then trust is paramount. When selecting a partner, we think it’s imperative to ask those tricky questions about redundancy, uptime and reliability – and to make sure that you know they have robust disaster recovery procedures in place should the worst happen.
Technology enthusiasts would have been watching with as much interest as sports fans as the action at the Games unfolded. Technology partners of the Games reported that this Olympics would be the first to see all critical systems in the cloud and managed remotely - and on the ground and in the arenas, technology infused gadgets and systems helped to improve athletes’ performance and significantly enhance the fan experience.
If you run a data centre, you’re likely already stretched for both budget and time. Yet incoming regulations, such as NIS & the EU GDPR, remind us how difficult it is for data centres to run efficient and agile decommissioning processes, with audit controls slowing down operations. In this article, Fredrik Forslund, VP Enterprise & Cloud Erasure, Blancco Technology Group, weighs up the options for data sanitisation at scale whenever servers reach end-of-life or are required for re-provisioning.
Data Erasure in the Enterprise Data Centre
Data erasure is an alternative to physical destruction that involves securely overwriting all sectors of the storage medium. This process sanitises all data stored on the server so it can’t be recovered, even with advanced forensics. Automation and other optimisations for large-scale data erasure can help make it as efficient as possible and reduce the impact on your internal resources.
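The core overwrite-and-verify idea can be sketched at file level. This is purely illustrative: production erasure tools work below the filesystem, issuing device-level commands and handling SSD-specific areas, which a file-level sketch cannot reach:

```python
# Illustrative overwrite-and-verify pass on a single file. Real erasure
# software operates on whole devices below the filesystem; this only
# shows the core idea: overwrite every byte, then read back and verify.
import os

def overwrite_and_verify(path: str, pattern: bytes = b"\x00") -> bool:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(pattern * size)   # single overwrite pass
        f.flush()
        os.fsync(f.fileno())      # force the write to the medium
        f.seek(0)
        return f.read() == pattern * size   # verification pass

# Demo on a throwaway file
with open("demo.bin", "wb") as f:
    f.write(b"sensitive-data")
print(overwrite_and_verify("demo.bin"))
os.remove("demo.bin")
```

Commercial products add multiple passes to recognised standards, device-command-level erasure for SSDs, and the tamper-proof certification discussed later in the article.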
Data erasure for servers typically involves manual, time-consuming processes. Technicians use specialised tools and equipment to access storage media in special-purpose boot environments and run erasure software. Software requirements will vary by organisation.
Effective standards for data erasure contribute to overall data hygiene by ensuring that data is destroyed when it reaches the end of its retention date, is no longer necessary or isn’t adding value to the business. This factor is essential in preventing unauthorised access, whether through a security breach or inadvertent disclosure.
3 Decommissioning Options: From Manual to Fully-Automated to Achieve Data Sanitisation
Option #1: Physical destruction in-house or using a third-party vendor
Your first option for decommissioning servers is to remove each local drive and physically destroy it, either in-house or by using a third-party vendor. While this approach can effectively destroy data, there are significant financial, environmental and security consequences to consider.
From an economic standpoint, it prevents the business from reusing or reselling those assets. Additionally, physically destroying drives, instead of returning them as part of the original lease contract or under RMA warranty when they need replacing, can result in financial penalties. While not captured on the balance sheet, the environmental cost can also be high and contradict CSR policies. Finally, from a logistical standpoint, physical destruction tends to be carried out only when there are many drives ready to be processed. Cost and effort must therefore be expended in ensuring the devices are safely and securely stored until this point is reached.
Addressing the SSD Challenge:
NAND flash SSDs require a different sanitisation process from traditional magnetic media (HDDs). Degaussing, for example, does not work on flash media at all. New digital erase processes and professional software are needed to make sure all of the nooks and crannies are securely and permanently erased. Storage devices based on flash memory require digital erasure at a deeper hardware-command level, including bad blocks. Newer SSDs especially, such as those using NVMe-based access protocols, often require completely updated erasure solutions to communicate with the drives correctly.
When choosing a modern SSD digital erasure solution, look for one that performs verified and complete data erasure at the lowest level. The process of permanent erasure includes accessing hidden data, bad blocks and other areas not reachable by traditional overwriting software and utilities. It means moving beyond tools that only clean the surface, or upper-level file systems, instead of going deeper below the logical or partition level. Other things to look for in a solution include support for older PCIe AiC SSDs, such as Fusion-io drives, which can be remarketed at high values. And, as mentioned above, don’t forget to include support for newer NVMe-accessed SSDs.
Option #2: Manual Data Erasure
Data erasure is a software-based alternative to physical destruction that involves securely overwriting all sectors and blocks of the storage medium. Technicians use specialised tools and equipment to access storage media in special-purpose boot environments and run the erasure software.
This approach overcomes many of the negative impacts of physical destruction, ensuring the hardware maintains its residual value and can be reused, resold or recycled and won’t have a negative impact on the environment. It is particularly effective in scenarios whereby the requirement for server decommissioning is very infrequent and happens in small volumes, or the target is only loose drives that are being replaced from the operational environment. However, manual data erasure is almost impossible to scale cost-efficiently as it relies on time-consuming and resource intensive processes.
Option #3: Server-based Erasure
Server-based erasure requires the data erasure software to be hosted on a laptop, for example, and physically cabled to one or more servers at a time. Full erasure typically takes several hours; however, the ability to automate the activity and run it across multiple pieces of hardware using network boot can save days or weeks of technicians’ time. It is by far the best option when multiple servers and drives need to be decommissioned at the same time.
A good example of this is a major multinational technology company we worked with that was struggling with its decommissioning solution. While it used physical destruction to sanitise some of the hardware that left its data centre, the company needed a solution that would erase all data and software on the remaining servers left in its network. Employing server-based erasure that could work remotely using network booting, it was able to erase close to 900 servers (including 5,117 1TB SATA HDDs). The total time from setup to finish was under ten hours, owing to all erasures having been performed simultaneously and remote-controlled over the network. The process also included a tamper-proof erasure certificate per server, listing all drive serial numbers, as a complete audit trail.
The data centre could also enter custom field information into the reporting to meet the company’s internal security requirements. The erasure process was launched to all network-connected servers at once, thereby removing the need to connect a keyboard and monitor to each server as is the case with manual data erasure.
Simple, Powerful, Scalable Server Decommissioning
Greater automation makes life easier for data centres as they meet internal Service Level Agreements to decommission servers, either at end of life or prior to re-provisioning. These new approaches to automated data erasure offer dramatic workflow efficiencies, with environmental and budgetary advantages compared to physical destruction of decommissioned drives.
Because data erasure is completed on drives without leaving the security of the server room, data centre operators reduce risk associated with transporting storage media that contains sensitive data and provide a tamper-proof audit trail that data sanitisation has occurred.
Guaranteeing that customer data has been destroyed beyond recovery safeguards your reputation and ensures compliance with today’s toughest data privacy regulations. Eliminating low-value tasks such as unscrewing and handling loose drives, or running temporary cables and booting hundreds of servers individually, can make your data centre more effective.
In the face of growing resource limitations and larger amounts of data, these reduced requirements for a common day-to-day responsibility can pay good dividends.
Halloween has long since gone, but the stench of fear around cyber security lingers on, says Jason Howells, Director EMEA of Business at Barracuda MSP.
Recent data breaches have only proven that even those with unlimited security budgets are at risk of falling victim to an attack, causing potential distress among SMBs. It therefore comes as no surprise that 92 per cent of businesses are worried about ransomware attacking their business, according to a recent Barracuda study.
In the past few years, cyber crime has increased dramatically. By some estimates, cybercrime damages will reach a total of $6 trillion annually by 2021. Clearly, there has never been a more pressing time to be finding ways to combat this threat to your business.
On the Defensive
It’s no secret; cyber-attacks have become more rampant in the past year. WannaCry, NotPetya, and the Equifax breach have dominated headlines, showing that a large-scale attack can happen at any time. Here are a few simple things you need to know to shield yourself from an attack.
Size does not matter. Whether you’re big or small, you are at risk of an attack. One way to educate your employees about cyber threats is to use company updates or quarterly reviews to highlight how a certain breach or cyber attack can apply to your organisation. In the case of WannaCry, for example, are all your systems up-to-date and patched? And if Equifax taught us anything, it is that no one is immune. Therefore, it is vital that you’re taking the right steps to proactively protect yourself from falling victim to a similar attack.
Evolution. We’re witnessing a new shift in cyber criminal tactics. It used to be that malware was generally mass-delivered via emails that were – for the most part – poorly crafted, often with telltale signs that they weren’t from who they claimed to be from. In response, most organisations now have some kind of protection in place to either prevent a click on malicious emails or restore from backup if a click occurs. But cyber criminals’ approaches have evolved, and you need to evolve with them. Nowadays, the real danger comes in the form of highly targeted, heavily researched, compelling spear phishing attacks. They work because they are believable: cyber criminals spend a huge amount of time making them look as realistic as possible, and the results can be devastating.
Wearing those winter layers. A layered approach is key to protecting your data. You should have a disaster recovery plan in place, solid backup, and solutions to help mitigate an attack. One of the best ways to protect yourself is with a next-generation firewall and an email security solution, which is more than just a spam filter. You want to secure every threat vector you can. Think of it this way: You wouldn’t just leave your car unlocked; if you do someone can easily get right in. If you lock your car, though, the individual might move on to the next car or at least have a more difficult time getting into yours. Using technical safeguards can help prevent exposure to a variety of attacks, so taking extra precautions, such as encryption, to secure users’ data is a great idea.
Monitoring in real-time. The sooner an attack is detected, the easier it is to minimise the breach and protect your data. For example, if someone was copying files over the network and you’re monitoring it in real-time, you can immediately stop the network connection and mitigate the attack. If you only look at it once a month, you obviously won’t be nearly as effective. Often, attackers are working on the back-end of a network for an extended length of time before they can get in. The sooner this is detected, the safer your data will be.
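To make the real-time point concrete: detection can be as simple as keeping a running count of bytes moved per host within a window and acting the moment a threshold is crossed. The sketch below is illustrative only; the `TransferMonitor` class and its threshold are assumptions for this article, not a reference to any particular monitoring product.

```python
from collections import defaultdict


class TransferMonitor:
    """Flags hosts whose transfer volume in the current window exceeds a threshold."""

    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.window = defaultdict(int)  # bytes observed per host this window

    def record(self, host, nbytes):
        """Record a transfer; return True if the host should be cut off now."""
        self.window[host] += nbytes
        return self.window[host] > self.threshold

    def reset_window(self):
        """Call at the end of each monitoring interval."""
        self.window.clear()


monitor = TransferMonitor(threshold_bytes=1_000_000)
monitor.record("10.0.0.5", 400_000)          # normal activity, no action
alert = monitor.record("10.0.0.5", 700_000)  # cumulative volume trips the alarm
```

The key design point is that the check happens on every event rather than in a monthly review, which is exactly the difference the paragraph above describes.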
The reality is that no one is invincible: anyone, including you, can fall victim to an advanced threat at any time. But putting the right solutions and procedures in place can help reduce the risks and severity of an attack. While no one knows what the next big thing will be when it comes to ransomware, following these best practices could be the difference between a close call and a bullet to the heart.
Digital transformation is the name of the game for the modern business, and Infrastructure-as-a-Service (IaaS) is increasingly seen as the key enabler. Yet for many, security remains a major barrier. Organisations are concerned that the security required to mitigate risk in the public cloud may negatively disrupt business processes, and even diminish those benefits that made migration so attractive in the first place.
By Barry Scott, CTO EMEA, Centrify.
The good news is that organisations don’t have to start from scratch. By focusing on identity and taking a Zero Trust approach – assuming that untrusted actors already exist both inside and outside the network – there are many processes and technologies that they can lift from their on-premises environments.
IaaS drives innovation
Organisations are beginning to view public cloud environments as a viable extension of their own datacentres. Gartner predicted that the IaaS market would grow 37 per cent in 2017 to reach nearly $35bn, and go on to exceed $71bn by 2020. In the mobile- and digital-first enterprise, IaaS is a must-have to drive IT efficiency and scalability.
However, security concerns remain a perennial barrier to adoption. Even Microsoft has admitted this is a top concern with using the public cloud. It’s true that things are changing, and the IaaS providers themselves can take much of the credit. To help customers, for example, Amazon Web Services (AWS) provides tools and security bootstrapping, as well as a Security Best Practices document.
Security in the public cloud is undoubtedly a shared responsibility. AWS articulates it thus: the provider will take care of the lower layers of the infrastructure — what it labels “security of the cloud”. However, the customer is responsible for everything else, up to the application layer — “security in the cloud”. This includes data, operating systems and identity and access management (IAM). Microsoft goes even further, claiming that the only part of the stack it is 100% responsible for securing is the physical cloud infrastructure.
The challenges of IAM
Concerns over security are understandable. Threats are everywhere, and not just in the headlines: one vendor alone blocked over 38 billion threats in the first half of 2017, including 82 million ransomware threats.
The old perimeter security model is no longer effective against an agile and sophisticated enemy, able to exploit new cloud, mobile and IoT-driven architectures. You simply can’t rely on firewalls, VPNs and gateways to filter out untrusted users. In this new reality, users are accessing corporate resources in multiple applications and systems, from multiple devices, in multiple locations, at all times of day and night.
This has made IAM a key element of the security strategy to get right. Block unauthorised access every time and allow legitimate users to do their jobs with minimum interference and you’re well on the way to managing risk to acceptable levels.
The challenges facing security managers in IaaS environments are in many ways the same as those in the on-premises world.
It starts with static passwords — so often the root cause of damaging data security incidents. They led to the Uber breach of 57 million user details; the Target breach that affected over 70 million people; and the US Office of Personnel Management (OPM) breach that spilled hugely sensitive information on nearly 22 million federal employees, potentially to a hostile foreign power. Passwords are easy to crack, guess and phish.
Another major issue is the granting of too much privilege. Attackers are targeting admin accounts with great zeal, knowing that if they can get hold of credentials they’ll have the keys to the cyber-front door. Password sharing and reuse across such accounts only makes them more vulnerable. Recently researchers discovered a massive trove of 1.4 billion breached credentials for sale on the dark web. With so many log-ins available to hackers it’s surely time to switch to multi-factor authentication (MFA).
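To illustrate why MFA raises the bar so sharply: a one-time password changes every 30 seconds, so a stolen or reused static credential is no longer enough on its own. The following is a minimal sketch of a time-based one-time password (TOTP) generator per RFC 6238; the `totp` function name is ours, and a production deployment should rely on a vetted authentication library rather than hand-rolled code.

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, for_time=None, digits=6, period=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of 30-second steps since the Unix epoch
    counter = int((time.time() if for_time is None else for_time) // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Even if an attacker buys the account password from that dark-web trove, the code above produces a value they cannot predict without also holding the shared secret.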
Siloed identity stores, complex hybrid cloud systems, negligent staff, malicious insiders and identity sprawl only further exacerbate these challenges. So how can organisations manage risk, and securely authenticate their users, without disrupting productivity, agility and business processes?
Towards a Zero Trust model
The answer is to take a Zero Trust approach. This demands powerful identity services to securely authenticate each user to apps and infrastructure — but only with just enough privilege necessary to perform the task at hand.
It’s an approach backed not only by US lawmakers, but also by analyst firms like Forrester and by Google’s BeyondCorp project. Ideally it should focus on four key principles: verify the user, validate their device, limit access and privilege, and learn and adapt over time.
Zero Trust in IaaS environments
As mentioned, the good news for organisations looking to migrate to the public cloud is that many of the high-level Zero Trust principles that will keep your on-premises infrastructure secure can also be applied to the IaaS world. In fact, you should apply a common security and compliance model across on-premises and cloud infrastructure.
First, consolidate your identities to avoid the siloes that can lead to identity sprawl, broaden the attack surface and increase costs. Use centralised identities like Active Directory and enable federated log-in instead of using local AWS IAM accounts and Access Keys. Next, focus on least privilege when it comes to AWS Management Console, AWS services, EC2 instances and hosted apps. Privilege management tools can help you do this for AWS Management Console, Windows and Linux instances.
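A practical first step towards least privilege is auditing policy documents for wildcard grants before they reach production. The sketch below is a hypothetical checker for IAM-style policy JSON; the `overly_permissive` function and the sample policy are illustrative assumptions, not AWS tooling.

```python
def overly_permissive(policy):
    """Return the Allow statements that grant wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        wildcard_action = any(a == "*" or a.endswith(":*") for a in actions)
        if stmt.get("Effect") == "Allow" and (wildcard_action or "*" in resources):
            flagged.append(stmt)
    return flagged


policy = {
    "Statement": [
        # Tightly scoped: specific action on a specific bucket prefix
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::audit-logs/*"},
        # Too broad: every EC2 action on every resource
        {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"},
    ]
}
risky = overly_permissive(policy)  # flags only the ec2:* statement
```

Running a check like this in a CI pipeline catches over-privileged roles before an attacker can make use of them.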
Don’t forget accountability. Have users log in with their individual accounts and elevate privilege as required rather than rely on anonymous shared privileged accounts like ‘ec2-user’ and ‘administrator’. Manage entitlements centrally from Active Directory, mapping roles and groups to AWS roles.
It’s also vital to audit everything, including logging/monitoring authorised and unauthorised user sessions to EC2 instances, associating all activity to an individual, and reporting on both privileged activity and access rights.
Finally, MFA should be everywhere in the modern enterprise. Apply it to AWS service management, on log-in and privilege elevation for EC2 instances, when checking out vaulted passwords, and when accessing enterprise apps. To help manage this, consider centralising on a single IAM platform with a trusted provider. Modern solutions use contextual analysis to apply risk scores to sessions and devices, ensuring that strong authentication is only required in risky circumstances, which minimises user friction. Behavioural analytics can also be applied to ensure the system learns and adapts over time, making it more effective.
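The contextual, risk-scored approach described above can be pictured as a small scoring function that only demands step-up authentication when the session looks risky. Everything here (function names, signals and weights) is a hypothetical sketch for illustration, not any vendor’s actual model.

```python
def risk_score(known_device, known_location, privileged_session):
    """Higher score = riskier context. Weights are illustrative only."""
    score = 0
    if not known_device:
        score += 40       # unrecognised device
    if not known_location:
        score += 30       # unusual network or geography
    if privileged_session:
        score += 30       # admin actions always raise the stakes
    return score


def requires_step_up(score, threshold=50):
    """Prompt for MFA only when the context is risky enough."""
    return score >= threshold


# A familiar device in a familiar place sails through without friction;
# an unknown device elevating privilege gets challenged.
low = risk_score(known_device=True, known_location=True, privileged_session=False)
high = risk_score(known_device=False, known_location=True, privileged_session=True)
```

This is the mechanism behind "strong authentication only in risky circumstances": legitimate users rarely see a prompt, while anomalous sessions always do.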
With Zero Trust as your guide, IaaS can truly be an extension of your datacentre, enabling you to minimise unnecessary extra investments and ensure risk is kept under control.
Sustainability is high on the agenda for businesses from all perspectives, be it environmental, operational or cost efficiency related. Increasingly the data centre industry as a whole is facing demand from customers to demonstrate green credentials. Businesses and cloud service providers are turning to data centre providers to help them fulfil their corporate social responsibilities in reducing their own carbon footprints.
By Richard Wellbrock, VP Real Estate at Colt Data Centre Services.
With global internet traffic expected to increase threefold by 2021, the industry’s energy footprint will inevitably rise further to fuel the growing consumption of data and connectivity. The amount of energy used by data centres is doubling every four years, and analysts forecast that data centres will consume roughly three times as much electricity within the next decade. As energy prices rise year-on-year, it makes sense for businesses to look into sustainable approaches for the future as resources become scarcer over time.
One way to cut down on their carbon footprint is to increase the amount of renewable energy used. Digital businesses are already in the race to help build a renewable-powered internet. Apart from helping them fulfil their corporate strategy by demonstrating green credentials, increasing renewable energy use will also help them build a stronger brand identity than competitors on sustainability.
Data centre operators can help these businesses propel their initiatives by building a digital infrastructure powered by clean energy, such as wind, solar or biomass. Faced with customer demand to reduce their carbon footprints, green-conscious data centre owners are finding innovative ways to source renewable energy and recycle waste that can also benefit the local communities.
Energy-efficient Europe: the promised land of renewable energy
The use of renewable energy to power data centres is a development that is already under way in most countries across Europe, but more effort is needed to offset the exponential growth in internet traffic and data consumption.
The European Union has set a target of sourcing 20 per cent of EU energy needs from renewables by 2020, and progress to date in most countries is moving in the right direction. By March 2017, the share of renewables had increased in 22 of the 28 member states.
With governmental backing and a clear EU target to achieve, Europe has now been dubbed the promised land for renewable-powered data centres.
Data centre providers are making the most of renewable resources available when building new sites in Europe. Colt Data Centre Services is one such company that is procuring renewable energy across all its European locations.
Heat waste recycling: Switzerland pioneers innovative approach
Using renewable energy to power servers can be a demanding task for data centres to manage. When done right, however, it can be a significant step towards a more sustainable future. The small town of Schlieren in Zurich, Switzerland is a great example of efficient, sustainable energy in practice.
The town is trialling a new way of recycling waste in partnership with Colt Data Centre Services. The entire operation of the 2,500 sqm data centre is powered by renewable energy sources associated with the way the data centre is cooled. In partnership with the local authority, heat produced from the servers is also used to warm up nearby buildings, public swimming pools and transport facilities.
The energy exchange programme provides a unique opportunity to reduce power usage for the data centre, the town facilities and its local citizens. The Federal Government is also offering a welcoming tax break with the use of renewable energy to further incentivise local businesses to get involved.
These types of agreements offer mutual benefits to everyone and encourage communities to work together to meet the goal of reducing carbon footprints.
Small steps, big changes
As businesses and citizens continue on their digital journeys, more such initiatives are needed across the entire ecosystem to build a greener future. The lack of transparency and reluctance by many businesses to change how energy is sourced and used in data centres will continue to undermine the industry’s long-term sustainability goals.
Businesses looking for data facilities must demand more from their data centre providers. IT leaders and procurement teams should ask their data centre operators to provide concrete evidence on how they can help the business achieve a more sustainable strategy. They should find out how the data centres are sourcing renewable energy and if there are any power recycling initiatives with the local communities to help achieve a greener future.
In-Memory Computing may be the key to the future of your success, as it addresses today’s application speed and scale challenges by Terry Erisman, Vice President of Marketing, GridGain Systems.
The In-Memory Computing Summit Europe, scheduled for June 25 and 26, 2018 in London, may hold the key to how your organization can meet the complex competitive challenges of today’s digital business transformation, omnichannel customer experience or real-time regulatory compliance initiatives. These initiatives take a variety of forms, including web-scale applications, IoT projects, social media, and mobile apps, but they all have one thing in common: the need for real-time speed and massive scalability. To solve this challenge, many leading organizations have turned to in-memory computing, which eliminates the processing bottleneck caused by disk reads and writes. In-memory computing isn’t new, but until recently it was too expensive and complicated for most organizations. Today, however, the combination of lower memory costs, mature solutions, and the competitive demand to achieve the required speed and scale for modern applications means in-memory computing can offer a significant ROI for organizations of any size in a wide range of industries.
The limitations of disk-based platforms became evident decades ago. Processing bottlenecks forced the separation of transactional databases (OLTP) from analytical databases (OLAP), but this required a periodic ETL process to move the transactional data into the analytics database. However, real-time decision-making is not achievable with the delays inherent in ETL processes, and over the last few years, organizations have turned to in-memory computing solutions to enable hybrid transactional/analytical processing (HTAP), which enables real-time analyses on the operational data set.
In-memory computing platforms, which are easier to deploy and use than point solutions, have driven down implementation and operating costs and made it dramatically simpler to take advantage of in-memory-computing-driven applications for use cases in financial services, fintech, IoT, software, SaaS, retail, healthcare and more.
The Next Step in In-Memory Computing: Memory-Centric Architectures
Two important limitations of many in-memory computing solutions are that all data must fit in memory and that the data must be loaded into memory before processing can begin. Memory-centric architectures address these limitations by storing the entire data set on persistent devices, with support for a variety of storage types such as solid-state drives (SSDs), Flash memory, 3D XPoint and other similar storage technologies, and even spinning disks. Some or all of the data set is then loaded into RAM, while processing can occur on the full data set, wherever the data resides. As a result, data placement can be optimized so that all the data resides on disk while the higher-demand, higher-value data also resides in memory, and low-demand, low-value data resides only on disk. This strategy, only available with memory-centric architectures, can deliver optimal performance while minimizing infrastructure costs.
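The hot-in-RAM, everything-on-disk idea can be sketched as a bounded LRU cache sitting in front of an authoritative store. In this toy Python example a dict stands in for the persistent layer; the `MemoryCentricStore` class is an illustrative assumption, not any product’s API.

```python
from collections import OrderedDict


class MemoryCentricStore:
    """Full data set on 'disk'; only the hot subset held in a bounded RAM cache."""

    def __init__(self, capacity):
        self.capacity = capacity        # max entries to keep in memory
        self.disk = {}                  # authoritative copy (stands in for SSD/HDD)
        self.cache = OrderedDict()      # hot subset in RAM, ordered by recency

    def put(self, key, value):
        self.disk[key] = value          # every write lands on the persistent layer
        self._cache_set(key, value)

    def get(self, key):
        if key in self.cache:           # memory-speed hit
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.disk[key]          # miss: fall back to the persistent layer
        self._cache_set(key, value)     # promote to RAM for next time
        return value

    def _cache_set(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the coldest entry
```

Cold reads still succeed because the full data set lives on disk, which is also why such a system can start serving immediately after a reboot, as the next paragraph explains.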
A memory-centric architecture also eliminates the need to wait for all the data to be reloaded into RAM in the event of a reboot. Instead, the system processes data from disk while the system warms up and the memory is reloaded, ensuring fast recovery. While initial system performance will be similar to disk-based systems, it speeds up over time as more and more data is loaded into memory.
The In-Memory Computing Summit Europe 2018
While the benefits of in-memory computing are now well established, many companies don’t know where to begin. Which approach and which solution is best for their particular use case? How can they ensure they are optimizing their deployment and obtaining the maximum ROI? The In-Memory Computing Summit Europe 2018, hosted in London on June 25 & 26, is the only in-memory computing conference focusing on the full range of in-memory computing-related technologies and solutions. Attendees will learn about the role of in-memory computing in the digital transformation of enterprises, with a range of topics from in-memory computing for financial services, web-scale applications, and the Internet of Things to the state of non-volatile memory technology.
At last year’s inaugural event, 200 attendees from 24 countries gathered in Amsterdam to hear keynotes and breakout sessions presented by representatives of companies, including ING, Barclays, Misys, NetApp, Fujitsu and JacTravel. This year’s conference committee includes Rob Barr from Barclays, David Follen from ING Belgium, Sam Lawrence from SFB technology (UK) Ltd, Chris Goodall from CG Consultancy, William L Bain from ScaleOut Software, Nikita Ivanov from GridGain Systems, and Tim Wood from ING Financial Markets.
For organizations wanting to learn more about in-memory computing and how it can help them achieve their technical and business goals, the In-Memory Computing Summit Europe 2018 is the place to hear from in-memory computing experts and interact with other technical decision makers, business decision makers, architects, CTOs, developers and more.
Complexity has very much become the norm for today’s businesses. The rapid rise in the adoption of public, private and hybrid cloud platforms, combined with hugely intricate networks consisting of a growing number of network devices and the rules that govern them, means network architectures are constantly evolving.
By Andrew Lintell, Regional Vice President Northern EMEA, Tufin
This rate of development presents a huge number of opportunities for businesses, including the ability to offer new, innovative services, work in more efficient ways and achieve greater business agility. However, it is also resulting in significantly increased levels of complexity for IT teams, which makes staying secure a real challenge.
Indeed, complexity is now viewed as one of the leading risk factors impacting cybersecurity. According to a recent report from the Ponemon Institute, 83% of respondents believe their organisation is at risk because of the intricacy of business and IT operations, highlighting just how prevalent the issue has become. And, with nearly three-quarters (74%) of respondents citing a need for a new IT security framework to improve their security posture, businesses need to find a way to deal with this complexity and the risks it presents.
Ultimately, it comes down to efficiently managing a complex web of solutions, while also keeping cyber defences intact.
When it comes to maintaining security, one of the biggest issues facing businesses today can be best visualised through a ‘patchwork quilt’ analogy. Not only are networks increasing in size, firms are also being faced with the challenge of figuring out how to patch together several different systems and services from a wide range of vendors, all of which have distinctive features and capabilities.
The sheer quantity of tools and services being used across heterogeneous environments – multi-vendor and multi-technology platforms, physical networks and hybrid cloud – means a larger attack surface. As the attack surface grows, gaps can appear where attackers can find their way inside the network. And, without true visibility across the entire architecture and a clear view of each piece of technology, it’s difficult to find and close those gaps.
The services and applications in these various systems will also likely require different security policies, further adding to the complexity. For example, changing one security policy could have implications elsewhere, and without proper visibility, IT teams aren’t always aware of how one change impacts the entire network. Not only can this have security repercussions, it can also have a negative impact on business continuity. And the technical side is not the only concern: the human factor of security must also be addressed.
It has become clear that the complexity issue is further heightened by the fact that today’s IT security teams are often understaffed and don’t have the required levels of expertise to effectively deal with cyber threats.
The so-called ‘skills gap’ has been a widely discussed topic in cybersecurity and one that is becoming more prevalent as cybercriminals expand their capabilities, and corporate environments become more intricate. As a result, many businesses are lacking skilled information security personnel needed to securely manage their complex networks.
Human error and misconfiguration risks are also more prevalent than ever. The likes of security lapses, improper firewall management and vulnerabilities being overlooked are all very real concerns that, due to the complexity of modern networks, can become commonplace.
To address these challenges, businesses need to be able to streamline the management of security policies. By using a centralised policy management tool that looks across the entire network and automatically flags policy violations, the task for IT teams will be significantly simplified, giving them greater levels of visibility and control.
Furthermore, policy-driven automation can be used to ensure a company’s security strategy is consistent across the whole organisation, while also being able to identify high-risk or redundant rules with a greater degree of accuracy than through manual efforts. This way, businesses can continue to develop their infrastructures and grow their businesses without having to worry about opening themselves up to security risks.
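One of the checks such automation performs is spotting shadowed rules: rules that can never fire because an earlier, broader rule already matches the same traffic. The sketch below uses exact-string matching with a `*` wildcard for brevity; real policy analysers do proper CIDR and port-range containment, and the `shadowed_rules` function here is purely illustrative.

```python
def shadowed_rules(rules):
    """Return rules made unreachable by an earlier, broader rule.

    Each rule is a tuple (src, dst, port, action); '*' matches anything.
    Rules are evaluated first-match-wins, as in a typical firewall.
    """
    def covers(earlier, later):
        # earlier covers later if every match field is a wildcard or identical
        return all(e == "*" or e == l for e, l in zip(earlier[:3], later[:3]))

    shadowed = []
    for i, later in enumerate(rules):
        if any(covers(earlier, later) for earlier in rules[:i]):
            shadowed.append(later)
    return shadowed


rules = [
    ("10.0.0.0/8", "*", "*", "deny"),                  # broad deny comes first
    ("10.0.0.0/8", "192.168.1.5", "443", "allow"),     # can never match: shadowed
    ("172.16.0.1", "*", "22", "allow"),                # reachable
]
dead = shadowed_rules(rules)
```

Finding rules like the second one by hand across thousands of entries is exactly the tedious review task the next paragraph describes; automating it removes both the drudgery and the human error.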
From a people point of view, carrying out reviews of existing rules and policies is a tedious and time-consuming task to do manually, which can easily result in mistakes being made. But, an automated tool can remove the threat of human error. It can also complete this job in a fraction of the time, thereby making IT teams more efficient and freeing them up to perform higher level functions that increase the business’s overall security.
Coping with complexity is a very real problem for IT security teams, but it is one that can be overcome. By embracing automation, organisations can be sure that nothing will fall through the cracks and, even when a new piece of software is introduced, the overall system will remain as secure and agile as possible.
Businesses addressing the technical complexity and the human factor of corporate networks can continue to grow and add new services, safe in the knowledge that their defences are stronger than ever.
Gartner has predicted that by 2020, a corporate 'no-cloud' policy will be as rare as today's 'no-internet' policy. Research vice president Jeffrey Mann says that cloud will increasingly be the default option for software deployment, with the same being true for custom software, which more and more is designed for some variation of public or private cloud. This prediction supports the fact that cloud is here to stay, and organisations that embrace and use it will enable IT to play a more significant role in achieving business goals.
By Paul Mills, managing director, customer business unit, Six Degrees Group.
The cloud is a very exciting prospect. However, for businesses that are hesitant to fully embrace the technology, using a combination of on-premises and public cloud services is a great solution. But what are the key components and advantages of an effective hybrid cloud strategy and what are the features that make it a secure and reliable service?
Advantages of a cloud ‘fusion’ model
A key advantage of using a hybrid cloud model is that it enables an organisation to optimise its assets – balancing the utilisation of internal resources with the private cloud as well as the external services offered by public clouds. Additional advantages can include:
Implementing a hybrid cloud infrastructure successfully can be a major undertaking. A good starting point is to enlist the expertise of a service provider who can help plan and create the hybrid cloud strategy that will best support a company’s unique business needs and objectives.
Key components of the strategy should include:
1. Integration of business services, applications and data inside and outside of the cloud boundaries.
2. Data encryption and sharing within and across cloud infrastructures, and the all-important backup and recovery element. With GDPR coming, the strategy must of course cover the enforcement of regulatory compliance and data sovereignty policies across locations.
3. Security relating to issues such as access control, intrusion detection, monitoring, as well as policies around app-to-app authentication and integrating with identity providers.
4. Management of the hybrid cloud components – people, tools and processes. Operations should also cover integrated alerts and monitoring across the cloud, and make provision for collecting and analysing metrics.
5. A cloud-mapping matrix – the correct choice of cloud per application and identifying which application instances match which customer needs.
6. Provider transparency. Not only is it essential that there is transparency across the cloud providers for ease of access to performance, usage and billing data, it is also crucial for entry and exit strategies – knowing when to use a specific cloud service and when to leave.
7. A customer service strategy that incorporates a support service that is globally scalable, ideally available 24/7, and ensures a consistent customer experience.
8. A plan to ensure business continuity should there be an operational failure in the hybrid cloud system. Agreements should also be in place to guarantee business service level agreements (SLAs) regardless of outages in the cloud environments.
The list is certainly a long one, but working with a knowledgeable service provider who can guide the process will make ticking the boxes much easier and less daunting.
Hybrid as a secure and reliable service
Data that moves between private and public cloud environments must be able to do so reliably and securely. An encrypted connection that facilitates application portability makes it possible for data to be transferred securely between multiple environments. This can be a complicated process in a hybrid environment as the security protocols that work in private cloud might not directly translate to public cloud.
From a security point of view, most hybrid storage cloud solutions provide security measures at multiple levels. These include secure access to the cloud storage provider through authentication mechanisms, encryption of data in transit across the network using specific protocols, and protecting the data while ‘at rest’ inside the provider’s storage environment.
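On the in-transit side, the baseline is straightforward: refuse anything below TLS 1.2 and always validate the provider’s certificate. A minimal sketch using Python’s standard `ssl` module, assuming a client connecting out to the cloud storage provider, might look like this; the `strict_client_context` name is ours.

```python
import ssl


def strict_client_context():
    """TLS context enforcing certificate validation and TLS 1.2+ for data in transit."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocol versions
    ctx.check_hostname = True                     # certificate must match the host
    ctx.verify_mode = ssl.CERT_REQUIRED           # never accept unverified peers
    return ctx


# A connection to the provider would then wrap its socket with this context,
# so data crossing the private/public boundary is always encrypted.
context = strict_client_context()
```

Applying one hardened context everywhere avoids the situation the paragraph above warns about, where security settings that worked in the private cloud quietly degrade in the public one.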
Embarking on the hybrid cloud journey
Once an organisation has made the decision to adopt a hybrid cloud strategy, the role of the IT team leading the transition is to examine the reliability and performance of various approaches, based on the needs of the business. Absolutely crucial is to constantly monitor performance, data protection and overall costs.
The end result should be a blended infrastructure that combines the best technology has to offer, with private cloud, public cloud and dedicated servers working together to achieve the desired business goals and propel the organisation forwards.
IT managers are facing many new challenges as emerging technologies push them to set aside long-held strategies and operations in favour of new ways to get the job done by Evan Kenty, Vice President EMEA, Park Place Technologies.
This is happening in a variety of IT segments, and the movement toward new methodologies is particularly clear in the hardware maintenance industry. IT managers increasingly find themselves in a fiscal or operational climate where traditional OEM extended warranty models are unable to work as a viable option, making third-party hardware maintenance plans a necessary part of operations.
Many technology trends are agnostic when it comes to OEM vs TPM. Here are some topics sure to dominate conversation in 2018:
After years of development, an April 2016 adoption, and a delay period to allow affected parties to prepare, the European Union’s General Data Protection Regulation (GDPR) will go into effect next spring. This sweeping new oversight is designed to strengthen data privacy and security for EU residents and also covers export of data outside the EU.
The GDPR aims to harmonize data protection, ending the “patchwork” of laws across the 28 EU member states. Although there is some debate over the policy’s tension between standardization and member-state flexibility, it is commonly accepted that flexibility will be highly limited.
This may be a win for enterprises, even as they work to understand and comply with the GDPR’s nearly 100 articles. As it will become the world’s most stringent regulation on most counts, it is expected to have global impact, becoming the de facto international standard. “GDPR-compliant” may become a consumer catchword for “safe.”
Once the multinationals—or any business operating or collecting data across borders—can get accustomed to the new rules, the simplicity of a single dominant standard, not to mention the bolstering of consumers’ flagging confidence, may be beneficial.
Caution will remain dominant when it comes to Cloud adoptions.
The research group Gartner believes that “the journey to the Cloud is a slow, controlled process” for many enterprises. It mentions colocation and hosting providers offering private and shared Clouds as safe spaces, of sorts, to supply basic Cloud capabilities and woo reluctant organizations into the arena.
Other sources are confirming that a measured approach is the modus operandi of most IT organizations.
Notwithstanding the “baby steps” approach of many tech leaders, the Cloud is set for big growth. Forrester predicts over 50% of global enterprises will use at least one public Cloud platform. And more and more IT organizations are getting into Cloud-native capabilities, containers and microservices, and other technologies.
Although few businesses expect to ever move Cloud apps back to the data center, reservations about the Cloud mean they aren’t rushing everything off-premises, either.
The big data analytics movement is emerging as a key trend across a diverse range of business sectors. For IT managers, this often means finding ways to resolve major storage challenges. Central to this issue is a need to balance high-performance systems for active data and high-capacity arrays for archived information. Many businesses lack the fiscal resources to pour into new systems in both of these areas, but big data requirements make it extremely difficult to skimp on either. One solution is to use legacy storage systems to support archiving and purchase new arrays to meet ongoing performance needs.
This option can reduce the costs of big data plans without forcing organisations to make sacrifices in terms of quality. In such a set-up, hardware maintenance partnerships can prove invaluable to ensuring reliability for the legacy systems used to archive data.
Sustainability is moving from being a secondary, or even tertiary, concern for IT leaders to a primary issue. The reality is that power and electronics waste issues are growing exponentially as technology demands rise. Green IT strategies are becoming a priority with this problem facing companies in just about every sector. Dealing with waste is a particularly challenging issue because computer, server, storage and network components can feature a combination of hazardous chemicals, materials that require special disposal methods and precious metals that need to be recycled.
A third-party hardware maintenance plan can enable IT managers to deal with waste more effectively through supplementary services that support equipment disposal. This is particularly valuable for storage systems because companies must not only consider environmental factors when retiring equipment, they must also enact data protection strategies to ensure information is inaccessible once hard disks are disposed of. Many hardware maintenance providers offer specialised services in this area, making them a valuable asset for sustainability-minded IT leaders.
Rack Scale Design (RSD), pushed by Intel, isn’t new. In fact, the first international workshop on the topic appeared back in 2014. And the Ericsson Hyperscale Datacenter System 8000, the first commercial implementation, dates back to 2016. But it seems 2018 may be the year RSD finally gains traction.
The idea behind RSD is to free data center administrators to scale and control their assets not as individual servers and storage components, but as full racks of resources and even pools of several racks, or pods. This promises to add substantial efficiency and automation, just in time for the 5G roll-out, Big Data advancements, and other demands coming at telecoms carriers, cloud services providers (CSPs), and business enterprises.
The folks over at Cloud Foundry are among the RSD believers. First of all, they cite the various OEM products now available. The Ericsson system was a start: a set of highly standardized x86 servers inside a specialized rack with shared Ethernet switching and power/cooling. Firmware layers allow administration of the rack for asset discovery and management of switches, server nodes, storage, and more, with open APIs providing control. But now there are a variety of options on the market, including the Dell EMC DSS 9000, the HPE Cloudline, and Huawei's FusionServer E9000 and X6800.
And there is evidence of adoption. CenturyLink, for example, is using the DSS 9000 to offer build-to-order public and private cloud infrastructure for customers.
To spur greater uptake, Intel has been opening resource centers for customers to test rack scale systems, and Dell EMC has the DSS 9000 up for evaluation in its Amsterdam Solution Center. Supermicro in San Jose also permits test drives, and more labs are expected to open soon.
Data centers are looking hard at predictive maintenance to help avoid the costs of unplanned downtime. They’re taking a page from the “Industry 4.0” or industrial IoT (IIoT) playbook, with techniques shown to work in factories, along oil pipelines, and in other industrial settings. Why not in tech facilities, too?
The trend is a departure from reactive maintenance, which responds only when problems arise. It’s also a step above scheduled maintenance, which tries to prevent downtime with regular interventions timed based on historical lifespan data.
Predictive maintenance uses real-time data to anticipate issues before they happen. In fact, a computerized maintenance management system (CMMS) can identify potential issues and launch trouble tickets with no user intervention.
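The core loop is easy to illustrate. The sketch below compares recent sensor readings against a baseline and opens a ticket automatically when drift exceeds a tolerance; the thresholds, the `create_ticket()` helper and the sample data are all hypothetical, not taken from any real CMMS API.

```python
from statistics import mean

# Illustrative predictive-maintenance check: compare the latest sensor
# readings against a known-good baseline and open a trouble ticket
# automatically when the rolling average drifts more than 10% away.
def needs_ticket(readings, baseline, tolerance=0.10):
    """Flag a component whose recent average drifts >10% from baseline."""
    recent = mean(readings[-10:])  # rolling window of the last 10 samples
    return abs(recent - baseline) / baseline > tolerance

def create_ticket(component, readings, baseline):
    """Return a ticket dict if intervention is predicted, else None."""
    if needs_ticket(readings, baseline):
        return {"component": component, "status": "open",
                "reason": "predicted failure: sensor drift"}
    return None

# A CRAC unit trending hot: baseline 22C, recent samples creeping upward.
temps = [22.1, 22.3, 22.8, 23.5, 24.4, 25.2, 26.1, 26.9, 27.4, 28.0]
ticket = create_ticket("CRAC-07", temps, baseline=22.0)
assert ticket is not None and ticket["status"] == "open"
```

A real CMMS would feed such a rule with streaming telemetry and richer models, but the principle is the same: act on the trend before the failure, not after it.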
Of course, what keeps things interesting for IT managers are the trends and topics that cannot be predicted. The challenges are many, the rewards are great … and the mystery keeps it all compelling and moving forward.
There is an emerging trend among nation-states requiring data on citizens be stored in their own country. Commonly referred to as data residency or data localization regulations, these rules are becoming a major challenge to IT operations for a variety of companies doing business across borders.
Under the concept of data sovereignty, digital data is subject to the laws or legal jurisdiction of the country where it is stored. Increasingly, nation-states want oversight of their citizens’ data. Moreover, in the post-Snowden era, many governments are especially keen to ensure that any snooping that does go on is done by them, not by allies or adversaries.
Dozens of countries have enacted data localization/residency rules. They include China, Israel, Switzerland, Turkey, Belgium, Brazil, South Korea, South Africa, Argentina, Mexico, Uruguay, India, Malaysia, and Singapore.
Not all are equally stringent, however. Canada, France, and Germany are known for the strictness of their data residency rules. Australia specifically requires health data to be stored in-country, and the U.S. demands that federal government data be housed domestically.
At present, the requirements are more onerous for companies operating in certain spheres, such as healthcare, finance, and government. Fortunately, some cloud services providers are developing specialized offerings and expertise in these and other key verticals.
Peter Ruffley, Chairman at Zizo, considers the importance of Digital Transformation (DX) and creating a data-driven culture to prepare businesses for an AI enabled world.
Artificial Intelligence (AI) is dividing the world. Heralded as the fourth industrial revolution, is AI a force for good, boosting the UK economy by 10% over the coming decade as PwC predicts, or the beginning of the end for a raft of manual and knowledge workers? Given the catch-all nature of the technologies apparently falling within the AI definition, it is hard to validate any predictions right now.
AI is not some magic wand that will eradicate manual tasks or transform day-to-day operations – it is simply another step forward in harnessing huge data resources to better understand the business and hence drive new efficiencies. And that is where the problem lies. Right now, organisations are still struggling to gain insight from existing data sources – where are the robust and efficient data gathering processes that enable businesses to access insight more quickly and effectively? Where is the implicit trust in data? Organisations need to take huge steps forward in data confidence and trust before any of these intuitive tools, from machine learning onwards, can gain any realistic foothold within day-to-day corporate operations.
The speed with which Artificial Intelligence (AI) has permeated everyday life has taken some by surprise. While we are still some distance from self-driving cars eradicating road traffic accidents, bots leveraging X-rays, MRI scans and medical research to transform diagnostics, or even robot surgeons, advances in machine learning, speech recognition and visual recognition technology are already embedding AI in everyday activities.
From the Internet of Things (IoT) devices used within retail supply chains to minimise food wastage to the translation engines transforming global communication, the concept of intelligent automation is becoming familiar. Look ahead and the promise of AI – if fulfilled – will transform every aspect of life. And that’s before the killer drones rise up!
However, while every headline paints a picture of a machine-dominated future, a world where, by 2025, the work of 150 million knowledge workers will be completed by cognitive robots, right now the majority of organisations have no clear understanding of, or strategy for, effectively using AI in the future. In fact, they are nowhere near.
So how do we get there? How can a business be ready for the intuition led operations that can and will be driven by AI? The fundamental shift will be achieving a cultural willingness to trust and believe the data. The whole premise of AI is that technology is trusted to do the job, based on the information provided. If that information is inaccurate or incomplete, the AI cannot perform effectively – or maybe even at all.
Certainly, complex diagnostic processes will be less than convincing if the AI is provided with a limited subset of essential patient data. Fighter pilots will be reluctant to take off in a plane that is using an AI-based predictive maintenance system if they lack complete confidence in the quality of the information.
But even less groundbreaking developments will depend on high quality data. And that is the challenge. The transition from where we are today to relying implicitly on AI is going to be huge. Quite simply, when an organisation today lacks the confidence in its information resources even to take data-driven decisions, it is hardly going to embrace AI, a model predicated on data-driven activity. Any business that is not already making decisions based on information that is trusted implicitly is going to struggle to embrace AI at any level.
Digital Transformation (DX) Foundation
Those organisations able to follow a digital transformation process and truly embrace data-driven decision making will be well placed to explore the amazing opportunities AI will deliver. From predictive models and regression algorithms that can accurately forecast shopping patterns and enable retailers to put the right store and security staff in place, to fully automated tills, discount rates created automatically based on stock availability, even web analysis and product affinity, AI will transform every aspect of the retail model. Indeed, trials are already underway of checkout-less stores, using sensors and cameras to monitor what customers pick up and automatically debiting accounts via smartphones. The future of retail is advanced and intelligent.
But the contrast to current operations is stark. This is not a one-step evolution. Just consider the wealth of current data that is not being effectively captured and deployed. Real time supply chain optimisation in response to actual consumer demand should be a given considering the existing depth of EPOS information and inventory data. Yet a number of supermarkets still rely on individuals to manually check the shelves several times a day in every single store to highlight any gaps in stock availability. A report is created, sent to head office and, eventually, the stock arrives.
Yet the data exists. Why are these retailers not tracking sales patterns across each store in order to predict local demand? Why are they not factoring in seasonality, demand variation by day of week and restocking accordingly? Improved insight can transform stock accuracy, minimise stock outs and improve the customer experience. But right now, without a willingness to take the next step – namely record stock availability against sales and embrace data driven decision making, the retailer is stuck with an inaccurate, inefficient manual approach and is a long way from exploring a truly automated business model.
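The "next step" the article describes, predicting local demand per day of week from existing sales data, is not exotic. A minimal sketch, assuming an invented sales history and a simple reorder rule (neither taken from any real retailer's system), might look like this:

```python
from collections import defaultdict
from statistics import mean

# Minimal data-driven restocking sketch: estimate expected demand per
# day of week from past sales, then flag SKUs likely to run short.
# The sales figures and the 20% safety buffer are illustrative.
def expected_demand(sales_history):
    """sales_history: list of (weekday, units_sold) -> avg units per weekday."""
    by_day = defaultdict(list)
    for weekday, units in sales_history:
        by_day[weekday].append(units)
    return {day: mean(units) for day, units in by_day.items()}

def should_restock(stock_on_hand, weekday, forecast, safety=1.2):
    """Reorder when stock won't cover forecast demand plus a 20% buffer."""
    return stock_on_hand < forecast[weekday] * safety

history = [("sat", 120), ("sat", 132), ("sat", 126), ("wed", 40), ("wed", 44)]
forecast = expected_demand(history)            # Saturdays average 126 units
assert should_restock(100, "sat", forecast)    # 100 < 126 * 1.2
assert not should_restock(60, "wed", forecast) # 60 >= 42 * 1.2
```

Even this crude day-of-week average beats a manual shelf walk; adding seasonality and promotions is refinement of the same idea, not a different discipline.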
And this is the key. Looking too far ahead is a mistake. Forget the killer drones, for now at least. Put the robot surgeons to one side. AI is on the way, no doubt, but there are several steps that organisations must take to get anywhere near AI in practice. And effective utilisation of existing data sources is step one.
Right now, despite the board level commitment to DX strategies, how many organisations can truly claim to be data driven? To have a user base with implicit trust in the data and a culture that has confidence in data-led decisions? Without that essential foundation, the scope for using any type of AI, predictive analytics or machine learning is going to be vastly reduced. AI may herald the fourth industrial revolution; but successful deployment of AI will demand fundamental cultural change and an entirely different attitude to, and trust in, data.
Digital transformation projects are a key business requirement in today’s corporate enterprise world. Enterprises by their nature constantly seek operational efficiency and increased agility in order to better serve existing customers and help open up new market opportunities. This drive to improve business outcomes is also what drives the pursuit of digital transformation to create long-term value for all stakeholders.
By Delphine Masciopinto, Chief Commercial Officer at France-IX.
The digital transformation process will inevitably include a careful assessment of how applications, databases and computing environments are utilized and how this could be improved so that they perform more efficiently and better support business goals. Sooner or later, connectivity will be assessed, especially if the migration of business critical applications and data to the cloud is part of an enterprise’s digital transformation strategy. This assessment is often triggered when employees with new requirements, such as the need to rent virtual machines, to store databases and new cloud-based applications, to access social media and online video for marketing purposes or online CRM platforms for the sales team, push network teams to multiply their network access methods and to constantly improve their network performance. Another specific reason is the wide availability of Microsoft’s Office 365 portfolio of applications among corporates and the possibility of sending the associated traffic through public peering. Whatever the driver, network teams in the corporate world at first see the public Internet as an obstacle to hosting mission-critical applications. Their second obstacle is that corporate enterprises are not IP companies, and so the rapid adoption of systems such as SaaS, PaaS and IaaS raises all kinds of new questions regarding routing control, quality of service and network visibility.
Connectivity is the Key
There are various strategies and project scopes for digital transformation, and moving at least some applications to the cloud is usually a first move. According to Gartner, organizations save an average of 14% through cloud migration. However, migrating business applications to the cloud whilst maintaining reliable and consistent access to them is a major challenge and can also be very expensive. Business-grade access to applications hosted in the cloud requires adequate links to the cloud-hosting company, and the perceived complexity involved and the step into unknown territory can be barriers for some enterprises.
There are a good number of recommendations for how best to address the challenges of cloud adoption; some favour a hybrid model, some recommend edge computing but all seem to agree that one of the best ways for businesses to use the cloud more efficiently and securely is through direct, dedicated interconnections between network and cloud providers on the one hand, and users and data on the other.
IXPs: a door to cloud migration
If last year’s growing trend was the adoption of digital transformation projects, this year’s seems to be enterprises joining IXPs. An IXP is a rich ecosystem of carrier networks, CDNs, social networks, cloud and IT service providers who choose to interconnect with a high number of others for the benefit of all. Enterprises, seeking to leverage this concentration of connections, are starting to join peering communities. But what exactly is behind this trend, and what do IXPs provide?
An IXP peering connection is the fastest and shortest route to other peering members and greatly reduces latency, enhances speed and reduces the cost of an organisation’s Internet traffic. It also improves bandwidth and routing efficiency.
The good news is that a variety of cloud players are already IXP peering members. By becoming an IXP member themselves, enterprises gain access to business-critical applications, direct access to national and international carriers, and access to non-business-critical traffic such as social media and online video, which is driving the explosive growth of their Internet bandwidth. Companies such as Schneider Electric, Lacoste, Saint-Gobain, Air Liquide, Kering, LVMH and AXA Technology Services have all joined the leading IXP in France.
"Connecting to France-IX has really empowered Schneider in its fast adoption of Cloud technologies,” says Lionel Marie, Network Innovation Leader, Schneider Electric. “From a few Mbps of traffic exchanged 5 years ago, France-IX now delivers around 1Gbps of Cloud and Internet traffic towards Schneider Electric's 40,000+ employees based in Europe. Business critical traffic such as AWS, Office 365, WebEx, OVH, zScaler, Akamai as well as commodity traffic such as Google, Facebook, YouTube and others, is delivered through the France-IX platform, with unbeatable performance, bandwidth flexibility and price."
Benefits of IXP membership
The adoption of the cloud as a digital transformation strategy has brought about a clear trend: more and more enterprises are finding that public peering is a smart business solution. There are a number of excellent reasons for this. Firstly, IXPs offer ongoing value for money: for one monthly subscription, priced in cents per Mbps per month rather than euros per Mbps per month, enterprise members can peer with a myriad of useful service providers, including public cloud providers such as Google Cloud Platform, Amazon Web Services and Microsoft’s Office 365 applications; security apps like Zscaler; collaborative creation environments like Adobe Creative Cloud; SaaS and IaaS providers; and major content and social media networks such as WebEx, Facebook, Twitter, LinkedIn and Dailymotion, whose content can be accessed directly and cost-effectively.
Secondly, public peering offers latency similar to a LAN and therefore improves performance. Routing is also optimized as up to 70% of traffic can be routed by other peering members and, if the IXP has a Marketplace, there is always the option of purchasing additional services such as IP transit, anti-DDoS, Cloud Direct, paid peering and network traffic intelligence solutions to name just a few. Some IXPs also provide optimization of port filling to make connection management easier.
Thirdly, public peering optimizes network resilience. With proven track records of stability and new SLA offers matching enterprises’ expectations, corporate users soon discover that peering can offer the redundancy and resiliency in their IP network that they have been looking for when it comes to reaching the public Internet. The marriage between enterprises wanting business-grade access to the cloud and IXPs, which historically count operators and content providers as members, is a happy one. Enterprises gain resilience for their critical applications, optimized network performance and a means of carrying non-critical Internet traffic at the lowest cost. Internet Exchange Points gain members, which further encourages more to join and contributes to the overall growth and rewards for the community. Win-win indeed.
Lastly, with service fulfilment counted in days instead of weeks, with 70-90% of routes available on day one thanks to route servers, and with freedom from long-term contract commitments, IXPs offer the agility any business is looking for when putting its network through a digital transformation.
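The "cents versus euros per Mbps" economics above lend themselves to a quick back-of-envelope check. The sketch below compares an all-transit connection with a mixed transit-plus-peering one; all prices, the flat port fee and the 70% peerable-traffic share are illustrative assumptions, not published IXP or carrier rates.

```python
# Back-of-envelope comparison of IP transit vs public peering costs.
# All rates here are invented for illustration only.
def monthly_cost(total_mbps, peer_share, transit_eur_per_mbps,
                 peering_eur_per_mbps, port_fee_eur):
    """Blend transit and peering costs for a given traffic split."""
    peered = total_mbps * peer_share
    transit = total_mbps - peered
    return (transit * transit_eur_per_mbps
            + peered * peering_eur_per_mbps
            + port_fee_eur)

# 1 Gbps of traffic; assume 70% is reachable via peers, transit at
# 1.00 EUR/Mbps vs peering at 0.05 EUR/Mbps plus a flat port fee.
all_transit = monthly_cost(1000, 0.0, 1.00, 0.05, 0)    # 1000 EUR
with_peering = monthly_cost(1000, 0.7, 1.00, 0.05, 500) # 835 EUR
assert with_peering < all_transit
```

Under these assumed numbers the peered setup is cheaper even after the port fee, and the gap widens as traffic grows, which is the economic argument the article is making.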