44% of organisations currently have a Digital Transformation strategy and a further 32% plan to implement one.
The latest research from the Cloud Industry Forum (CIF) has revealed that Digital Transformation is climbing up UK businesses’ agendas as fears of digital disruption mount. However, the industry body has warned that organisations will need to focus on upskilling the workforce if their Digital Transformation efforts are to be a success.
Conducted in February 2017, the research polled 250 IT and business decision-makers in large enterprises, small to medium-sized businesses (SMBs) and public sector organisations, and found that a significant proportion of UK organisations are keenly aware of the threat of disruption. Two in five respondents (40%) expect their organisation’s sector to be significantly or moderately disrupted within the next two years, and a similar proportion (37%) expect the same for their organisation’s business model.
Against this backdrop, it should come as little surprise that many organisations have transformation in their sights. The results revealed that 44% of organisations have already implemented, or are in the process of implementing, a Digital Transformation strategy, and a further 32% expect to have done so within the next two years.
However, 94% reported facing barriers to their organisation’s Digital Transformation. Over half (55%) stated that their organisation did not have the skills needed to adapt to Digital Transformation, 48% cited privacy and security concerns, while 47% were worried about legacy IT systems. Worryingly, just 17% were completely confident that their senior leadership team would be able to deliver Digital Transformation.
Commenting on the findings, Alex Hilton, CEO of CIF, said: “It is clear that UK-based organisations can see some big changes on the horizon and Digital Transformation strategies are gaining traction as a result. This is certainly encouraging, but the results from our research indicate that many organisations lack the strategic thinking, direction, and support needed to make a success of Digital Transformation. UK business and technology leaders need to consider the digital imperatives and look at how they support their businesses with technology to meet them. Moreover, they need to invest in skills development and training schemes for staff to help drive digital initiatives further.
“Digital Transformation is about more than just turning legacy processes into digital ones; it looks at how an organisation interacts and engages with its employees, partners and customers. Having the right skills in the broader workforce to deliver Digital Transformation is critical and the research revealed that just 45% of respondents believe that their organisation has the skills required to adapt to Digital Transformation. Looking to the future, UK organisations need to focus on plugging the digital skills gap if they are to enjoy the full benefits of Digital Transformation. To this end, the Cloud Industry Forum has built and launched our Professional Membership and eLearning scheme, which provides the means to aid the development of key digital and cloud computing skills,” concludes Alex.
Enterprise server rooms will be unable to deliver the compute power and IT energy efficiency required to keep up with fast-changing technology trends, driving higher uptake of hyperscale cloud and colocation facilities.
Citing the latest IDC research, which predicts an accelerating fall in the number of server rooms globally, Roel Castelein, Customer Services Director at The Green Grid, argues that legacy server rooms are failing to keep pace with new workload types, causing organisations to seek alternative solutions.
“It wasn’t too long ago that the main data exchanges going through a server room were email and file storage processes, where 2-5kW racks were often sufficient. But as technology has grown, so have the pressures and demands placed on the data centre. Now we’re seeing data centres equipped with 10-12kW racks to better cater for modern-day requirements, with legacy data centres falling further behind.
“IoT, social media, and the number of personal devices now accessing data are just a handful of the factors pushing up demand for compute power and energy, placing further pressure on the legacy server rooms used within the enterprise. As a result, more organisations are now shifting to cloud-based services, dominated by the likes of Google and Microsoft, and to colocation facilities. This trend is not only reducing carbon footprints, but also guarantees that the environments organisations are buying into are both energy efficient and equipped for higher server processing.”
IDC’s latest report, ‘Worldwide Datacenter Census and Construction 2014-2018 Forecast: Aging Enterprise Datacenters and the Accelerating Service Provider Buildout’, claims that while the industry is at a record high of 8.6 million data centre facilities, after this year there will be a significant reduction in server rooms. This is due to the growth and popularity of public cloud-based services, dominated by the large hyperscalers including AWS, Azure and Google, with the number of hyperscale data centres expected to reach 400 globally by the end of 2018. Roel continued:
“While server rooms are declining, this won’t affect the data centre industry as a whole. The research identified that data centre square footage is expected to grow to 1.94bn square feet, up from 1.58bn in 2013. And with hyperscale and colo facilities offering new services in the form of high-performance computing (HPC) and the Open Compute Project (OCP), more organisations will see the benefits of having more powerful, yet energy efficient, IT solutions that meet modern technology requirements.”

Migration to Windows 10 is expected to be faster than previous operating system (OS) adoption, according to a survey by Gartner, Inc. The survey showed that 85 percent of enterprises will have started Windows 10 deployments by the end of 2017.
Between September and December of 2016, Gartner conducted a survey in six countries (the U.S., the U.K., France, China, India and Brazil) of 1,014 respondents who were involved in decisions for Windows 10 migration.
"Organizations recognize the need to move to Windows 10, and the total time to both evaluate and deploy Windows 10 has shortened from 23 months to 21 months between surveys that Gartner did during 2015 and 2016," said Ranjit Atwal, research director at Gartner. "Large businesses are either already engaged in Windows 10 upgrades or have delayed upgrading until 2018. This likely reflects the transition of legacy applications to Windows 10 or replacing those legacy applications before Windows 10 migration takes place."
When asked what reasons are driving their migration to Windows 10, 49 percent of respondents said that security improvements were the main reason for the migration. The second most-often-named reason for Windows 10 deployment was cloud integration capabilities (38 percent). However, budgetary approval is not straightforward.
"Windows 10 is not perceived as an immediate business-critical project; it is not surprising that one in four respondents expect issues with budgeting," said Mr. Atwal.
"Respondents' device buying intentions have significantly increased as organizations saw third- and fourth-generation products optimized for Windows 10 with longer battery life, touchscreens and other Windows 10 features. The intention to purchase convertible notebooks increased as organizations shifted from the testing and pilot phases into the buying and deployment phases," said Meike Escherich, principal research analyst at Gartner.
Top-performing organizations in the private and public sectors, on average, spend a greater proportion of their IT budgets on digital initiatives (33 percent) than government organizations (21 percent), according to a global survey of CIOs by Gartner, Inc. Looking forward to 2018, top-performing organizations anticipate spending 43 percent of their IT budgets on digitalization, compared with 28 percent for government CIOs.
Gartner's 2017 CIO Agenda survey includes the views of 2,598 CIOs from 93 countries, representing US$9.4 trillion in revenue or public sector budgets and $292 billion in IT spending, including 377 government CIOs in 38 countries. Government respondents are segmented into national or federal, state or province (regional) and local jurisdictions, to identify trends specific to each tier. For the purposes of the survey, respondents were also categorized as top, typical and trailing performers in digitalization.
Rick Howard, research vice president at Gartner, said that 2016 proved to be a watershed year in which frustration with the status quo of government was widely expressed by citizens at the voting booth and in the streets, accompanied by low levels of confidence and trust about the performance of public institutions.
"This has to be addressed head on," said Mr. Howard. "Government CIOs in 2017 have an urgent obligation to look beyond their own organizations and benchmark themselves against top-performing peers within the public sector and from other service industries. They must commit to pursuing actions that result in immediate and measurable improvements that citizens recognize and appreciate."
Government CIOs as a group anticipate a 1.4 percent average increase in their IT budgets, compared with an average 2.2 percent increase across all industries. Local government CIOs fare better, averaging 3.5 percent growth, which is still more than 1 percentage point less, on average, than IT budget growth among top-performing organizations overall (4.6 percent).
The data is directionally consistent with Gartner's benchmark analytics, which indicate that average IT spending for state and local governments in 2016 represented 4 percent of operating expenses, up from 3.6 percent in 2015. For national and international government organizations, average IT spending as a percentage of operating expenses in 2016 was 9.4 percent, up from 8.6 percent in 2015.
"Whatever the financial outlook may be, government CIOs who aspire to join the group of top performers must justify growth in the IT budget by clearly connecting all investments to lowering the business costs of government and improving the performance of government programs," Mr. Howard said.
Looking beyond 2017, Gartner asked respondents to identify technologies with the most potential to change their organizations over the next five years.
Advanced analytics takes the top spot across all levels of government (79 percent). Digital security remains a critical investment for all levels of government (57 percent), particularly in defense and intelligence (74 percent).
The Internet of Things will clearly drive transformative change for local governments (68 percent), whereas interest in business algorithms is highest among national governments (41 percent). All levels of government presently see less opportunity in machine learning or blockchain than top performers do. Local governments are slightly more bullish than the rest of government and top performers when it comes to autonomous vehicles (9 percent) and smart robots (6 percent).
The top three barriers that government CIOs report they must overcome to achieve their objectives are skills or resources (26 percent), funding or budgets (19 percent), and culture or structure of the organization (12 percent).
Drilling down into the areas in which workforce skills are lacking, the government sector is vulnerable in the domain of data analytics (30 percent), which includes information, analytics, data science and business intelligence. Security and risk is ranked second for government overall (23 percent).
"Bridge the skills gap by extending your networks of experts outside the agency," Mr. Howard said. "Compared with CIOs in other industries, government CIOs tend not to partner with startups and midsize companies, missing out on new ideas, skills and technologies."
The concept of a digital ecosystem is not new to government CIOs. Government organizations participate in digital ecosystems at rates higher than other industries, but they do so as a matter of necessity and without planned design, according to Gartner. Overall, 58 percent of government CIOs report that they participate in digital ecosystems, compared with 49 percent across all industries.
As digitalization gains momentum across all industries, the need for government to join digital ecosystems — interdependent, scalable networks of enterprises, people and things — also increases. "The digital ecosystem becomes the means by which government can truly become more effective and efficient in the delivery of public services," Mr. Howard said.
A look at recent developments in the delivery of IT Services
Tony Lock, Director of Engagement and Distinguished Analyst, Freeform Dynamics Ltd, April 2017
For much of the past decade, many vendors and researchers suggested that cloud would become the only sensible model for delivering most IT services. Indeed, in the early days of public cloud marketing they evangelised that it would be the dominant model. Today things have settled down, and the majority now accept that most organisations will consume IT services via a range of models, combining systems running inside their data centres as well as others taken from external clouds.
But how are enterprises transforming the way they deliver IT to their users, and what factors are driving that evolution? Recent research carried out by Freeform Dynamics (see http://www.freeformdynamics.com/fullarticle.asp?aid=1923 and http://www.freeformdynamics.com/fullarticle.asp?aid=1922) provides a good indication (Figure 1).
It is interesting to note that while much is said about the need for IT to become more responsive to rapidly shifting business requirements, a matter on which there is little dispute, a few other factors jump out. It is especially worth considering the “business alignment” results, as these point to factors with the potential to dramatically impact IT’s responsiveness to business change, or even to get ahead of it.
Adopting more collaborative relationships with business stakeholders is essential if IT, and hence the data centre, is to deliver new services rapidly, but it is subtly different from simply reacting to change requests quickly. And if it is combined with IT becoming more proactive, i.e. actually suggesting how IT can help the business move forwards, the potential impacts could be valuable.
One other point from the results also has the potential to modify how many projects will be financed and undertaken going forwards. Over half of the respondents taking part in the survey already see or foresee a shift away from big IT projects towards more continuous improvement models. For many, the days of massive projects taking years to show results are coming to an end. The way forward is to do little and often, always keeping the big picture in mind. This approach is helped by recent advances to make the core IT infrastructure inherently more flexible and rapidly reconfigurable, as ‘software defined’ IT becomes a reality.
While there is plenty of talk about IT and the business working together more effectively, what IT projects are taking place? The answer, perhaps unsurprisingly, is that there are developments across both data centres and public cloud (Figure 2).
The results show a wide range of infrastructure project activity, despite the steady increase in expected life of individual items of hardware in data centres. Indeed, the ever-increasing dependence of nearly all business operations on IT requires more IT services to be available without interruption than in the past. This in turn makes itself felt in projects to modernise core data centre facilities such as power systems, power management and cooling.
It is worth noting the slow, but steady, implementation of virtual desktop solutions. In terms of user awareness, desktops, whether physical or virtual, are a focal point for users, who will notice should anything happen to impact service quality or availability. This is therefore yet another critical service that data centre managers must keep an eye on.
Overall, these research results reaffirm a key finding from many other studies executed by Freeform Dynamics in recent years, which is that while public cloud services are growing, the data centre is not heading for extinction. As ever in IT, pragmatism rules. Consequently, the public cloud is simply another delivery option to be considered in the light of all the relevant business, legal and IT requirements, and of how well it fits in with the corporate mind-set (Figure 3).
This pragmatism is well reflected by deeper analysis of the survey results, where an index of the progressiveness of organisations was built based upon the answers respondents provided to questions around their organisation’s personality/culture (including their attitude to investment, leadership style, and response to external events and developments).
The results revealed some interesting differences, with the more progressive organisations (those with an above average personality/culture score) being more committed in their use of various cloud and data centre solutions compared to the less progressive group. What is an eye-opener is the difference in scale of usage and commitment of more progressive organisations to almost every data centre and cloud solution listed.
Interestingly, the analysis also revealed the importance of executive support in the more progressive organisations to assist in the fast and broad adoption of new IT technologies and ultimately overall performance with respect to meeting evolving business needs and expectations (Figure 4).
The Bottom Line
The shape of IT infrastructure is changing faster than at any time in history. Data centres are being forced to evolve rapidly to keep up with increasing demands to house expanding numbers of systems and to do so with an expectation that the systems will run 24 by seven, without fail. Getting the technology right is only part of building IT solutions that can drive the enterprise forwards. The need to communicate clearly and effectively with senior business management about why IT investments are valuable to them is just as important as being proactive in IT solution advice to business stakeholders.
By Steve Hone, CEO, DCA Data Centre Trade Association
Making predictions about the future is never easy, especially when it comes to technological advances or the impact these might have on the data centre sector as it attempts to keep up with demand and change.
We live in a fast-moving world whose insatiable appetite for digital services is both rapidly growing and evolving at an alarming rate, leaving Moore’s Law in its wake. In this month’s edition of the DCA Journal, Dr Jon Summers’ article, titled “Standing on the shoulders of giants”, touches on this very point. Additional contributions from Ian Bitterlin, David Hogg from ABM (formerly 8 Solutions) and Laurens van Reijen from LCL provide further insight into what might lie ahead and the impact this could have, both positively and negatively, on the data centre sector.
You would be wise not to ignore the past when peering into the crystal ball to predict what’s likely to be around the next corner. It is also healthy to review some previous forecasts and predictions to see whether they proved correct or were wildly over- or under-estimated. As the American author Robert Kiyosaki said, “if you want to predict the future, study the past”.
There is no denying that we are using far more digital services than we ever predicted. To put this into perspective, in 2012 IDC’s Digital Universe Study*1, sponsored by EMC, calculated on the basis of historical data collected since 2010 that the world’s data usage would rise from a modest 10,000 exabytes to 40,000 exabytes by 2020. A Cisco white paper*2 confirmed that we had reached and exceeded that forecast by 1st January 2016 in mobile data alone, so it’s anyone’s guess where we go from here. Up, and on a very steep curve, would be a very safe prediction! Remember, the concept of ‘Smart Cities’ and the ‘Internet of Everything’ is only just warming up.
Now, I’m not suggesting for one minute that this explosion in demand for digital services is something to be frowned upon, or that we should try in some way to slow it down; quite frankly, any attempt to do so would be utterly futile. Now the genie is out of the bottle, it is simply unstoppable.
I was still in a highchair throwing food at my parents in 1969 when an American flag was first planted on the moon. That was nearly 50 years ago; today it is reported that the computing power it took to put man on the moon can now be found in my son’s Xbox 360! If you need more statistics, take a look at the infographic below, produced in 2015: 2 million Google searches and 204 million emails sent every 60 seconds, together with over 4,300 hours of YouTube video uploaded every 60 minutes. These figures are staggering, and remember they are now two years old and were compiled before the likes of on-demand TV, Netflix and Now TV streaming services took off.
It is only when you take these sorts of statistics into consideration that you realise how far we have travelled in such a short amount of time, and why the future is proving so hard to predict. As uncomfortable as it may be, the undeniable fact is that we are now completely reliant, and if you take my kids as a good (or bad) example, utterly dependent, on the IT-based technology and online digital services we use every day without thinking twice about them.
We have also become completely intolerant when it fails us; you would think the world was about to end if you couldn’t get 3G or Wi-Fi. We expect access to these services 24x7, forever, and to make that happen an unbelievable amount of work goes on behind the scenes at an infrastructure level to ensure you are not let down.
The data centre industry represents the beating heart of any digital infrastructure and is arguably now just as important to the health of our nation as water, gas and electricity - ironically, the supplies of which are all controlled by servers located in data centres.
If the revised statistics coming out are to be believed, the demands on data centre operators, and the important role they play in supporting our digital world, are probably going to grow five times more quickly than originally predicted. Like the Enterprise in Star Trek, life is now moving at warp speed and we need to find the right solutions to keep up with this voyage into the unknown.
The DCA plays a vital role as the Trade Association for the Data Centre Sector in ensuring the industry remains on the ball and fit for purpose. It was created with the express purpose of both supporting existing business leaders attempting to address the many challenges faced today and to collaborate with suppliers on R&D, training and skills development programmes to ensure we meet future demand. This is a team effort and we are here to help.
Next month’s DCA Journal theme is Energy Efficiency, with a copy deadline of 16th May; this is followed by Education, Skills and Training, with a copy deadline of 13th June. If you would like to submit articles for either of these editions please contact info@datacentrealliance.org; full details are on the DCA website http://data-central.site-ym.com/page/DCAjournal.
By David Hogg, Managing Director, ABM Critical Solutions
With the advent of the UK’s forthcoming departure from the European Union, much has been made of the ‘uncertainty’ that threatens to dog future trading relations.
But of all the challenges the data centre industry faces over the next few years, Brexit is, perhaps surprisingly, the least of our concerns. British firms and the UK Government are hopefully leaning towards pragmatism and avoiding unnecessary complexity. It is highly likely, therefore, that the UK will adopt the same data control laws as exist in the EU currently, meaning that there will be no difference for the major US tech players in where their data is stored.
It is true to say, of course, that restrictions on the free movement of labour may drive up the cost (and reduce the availability) of labour, but this tends to impact the lower skilled ‘commodity’ roles (e.g. within the hospitality sector) and will therefore have little or no impact on data centres.
Indeed, for all of our predictions for the future, most are likely to have a positive impact on the data centre industry of tomorrow. Consolidation of data centres, for example, continues apace, and this will include more pan-European deals as the market matures and clients continue to expand. This in turn will add to an increased focus on achieving best practice.
Whilst there is yet to be a ‘one-size-fits-all’ set of standards, best practice levels have risen markedly across the UK in the past few years. Bodies such as the Data Centre Alliance have been key to promoting the need for best practice, complemented by the commercial imperative for the colocation segment to find a differentiator in order to attract new clients. The updated European standard for data centres (EN 50600 – Information Technology – data centre facilities and infrastructures) will also drive an increase in best practice adoption rates.
Leading on from this, the increasing density of IT equipment is similarly prompting closer attention to performance against best practice. As a case in point, ABM Critical Solutions recently upgraded one of its clients’ data centres (the customer is a major high street retailer) following the client’s investment in new IT. The retailer is now able to generate the same IT computing power using only 25% of the space previously occupied by its ageing IT infrastructure.
Best practice is similarly enabling a focus on insurance, and driving down the cost of premiums. There is already evidence that insurance companies recognise that data centres that adopt best practice inherently contain less risk, and therefore adjust premiums directly. Allianz, for example, appears to be taking the lead in this area, and we predict that others are likely to follow once the idea fully takes hold.
From a technology perspective, ‘Edge’ data centres will become increasingly important as high-speed networks such as 5G are rolled out (5G is currently expected in 2020). We predict a real growth opportunity in this type of facility, especially where the big data centre players don’t have a presence, or there is demand from a specific market niche. The Internet of Things (IoT), Virtual Reality (VR) and 5G will lead to a massive growth in the world’s data centre volume, and a key opportunity for data centre providers and service companies alike.
We also predict a period of evolution in the way services to data centres will be delivered. There is already an increased demand for companies to provide a full complement of services based around a core expertise or skilled workforce. This is being driven, in part, by a desire by data centre operators to manage a smaller number of external suppliers, wherever possible, to reduce costs.
By way of example, ABM Critical Solutions is now using the same teams that complete its technical cleans to also undertake simple, additional tasks such as cleaning CRAC units and changing filters. AC engineers are expensive, and this allows their time on site to be more productively utilised in areas where their skills can be better deployed. In this way, service providers will be able to deliver greater value, while supporting their clients’ need for greater operational efficiency.
By Ian Bitterlin, Consulting Engineer & Visiting Professor, Leeds University
Most of us like a bit of speculation and making lists, so this month, when asked for ‘predictions’, I have decided to make a list of my top three.
The first in my list, ‘in no particular order’, concerns ASHRAE and their Thermal Guidelines for microprocessor-based hardware. We, the data centre industry in Europe, are very lucky to have ASHRAE. OK, they are a purely North American trade association that serves its members, but they are the sole global source for the limits of temperature, humidity and air quality for ICT devices and have proven themselves to be far more progressive for the environmental good than anyone could have expected. If you follow their guidelines from the first to the latest you could be saving more than 50% of your data centre power consumption, since the members of TC9.9, who include all the ICT hardware OEMs, have consistently and regularly updated their Thermal Guidelines, widening the temperature/humidity window to enable drastic improvements in cooling energy. So, what is my prediction? Well, it certainly is not that they will make the same improvements in the future that they have made in the past, since the latest iteration leaves server inlet temperature warmer than most ambient climates where people want to build data centres and requires almost zero humidity control. My prediction is, in fact, that the conservative nature of our data centre users will keep the constant lag in ASHRAE adoption at a lackadaisical and slightly unhealthy five years. What I mean by that is simple – the 2011 ‘Recommended’ Guidelines are, in 2017, just about accepted by mainstream users as ‘risk free’, whilst many users still regard the 2011 ‘Allowable’ limits as avant-garde. So, I predict that ‘no humidity control’ and inlet temperatures of 28-30°C will be mainstream by 2022…
The second prediction in my trio concerns the long-forecast, but now clearly closer, demise of Moore’s Law. When Gordon Moore, chemical engineer and co-founder of Intel, wrote his Law, it was clear to him that the photo-etching of transistors and circuitry into silicon wafer strata doubled in density every two years. That was soon revised by his own company to a doubling of capacity every 18 months to take account of increasing clock speed and, more recently, by Raymond Kurzweil (sometimes nominated as the successor to Edison) to 15 months when considering software improvements. It lost its simple ‘transistor count per square mm’ basis long ago, but Koomey’s Law took up the baton and converted the 18-month capacity doubling to computations per watt. Effectively that explains why it is so beneficial to refresh your ICT hardware every 30 months (or less) and more than halve your power consumption for the same ICT load. To make a little visualisation experiment in ‘halving’, take a piece of paper of any size and fold it in half, and again, and again... You will not get to seven folds, since you will have reached the physical limit.
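As a rough back-of-the-envelope illustration of that halving argument, the sketch below takes the article’s own assumption of a doubling in computations per watt roughly every 18 months and shows why a 30-month refresh more than halves power for the same ICT load (illustrative arithmetic only, not a measurement):

```python
# Sketch: relative power for the SAME ICT workload as hardware ages,
# assuming computations per watt double every 18 months (the article's figure).
DOUBLING_PERIOD_MONTHS = 18

def relative_power(age_months, doubling_period=DOUBLING_PERIOD_MONTHS):
    """Power needed on hardware bought `age_months` ago, relative to new kit (= 1.0)."""
    return 2 ** (age_months / doubling_period)

for age in (18, 30, 36):
    print(f"{age}-month-old hardware needs ~{relative_power(age):.1f}x the power of a fresh refresh")
# 30 months -> ~3.2x, i.e. refreshing more than halves consumption for the same load
```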
So why have data centres been growing if Moore’s Law and its derivatives have been providing a >40% capacity compound annual growth rate (CAGR)? The explanation is simple – our insatiable hunger for IT data services (notably including social networking, search, gaming, gambling, dating and any entertainment based on HD video, such as YouTube and Netflix et al) has been growing at 4% per month (close to 60% per year compound) for the past 15 years. The delta between Moore’s Law’s 40-45% and the data traffic rise of 60% gives us the 15-20% growth rate in data centre power. The problem comes when Moore’s Law runs out, which it surely will with a silicon base material, as then we will have to manage the 60% traffic growth per year without any assistance from the technology curve. Moore’s Law probably has five years left without a paradigm shift away from silicon (to something like graphene), but that is unlikely to happen ‘in bulk’ within the five-year time frame. Looking at one of the major internet exchanges in Europe shows that peak traffic is running at 5.5TB/s against a reported capacity of 12TB/s – but if we consider even a slight slowing of the annual growth rate to 50%, it will be less than two years before peak traffic is pushing the present capacity limits. I predict a couple of years of problems during the dual event of a paradigm shift away from silicon and a sea change in network photonics capacity.
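The growth-rate arithmetic above can be checked with a few lines (a sketch that simply re-uses the figures quoted in the paragraph, which are the author’s assumptions rather than fresh data):

```python
import math

# Figures quoted above, taken as given.
traffic_monthly_growth = 0.04                        # 4% per month
traffic_cagr = (1 + traffic_monthly_growth) ** 12 - 1
print(f"traffic growth ~{traffic_cagr:.0%} per year")          # ~60%

for moore_cagr in (0.40, 0.45):                      # Moore's Law-derived capacity growth
    print(f"Moore {moore_cagr:.0%} -> data centre power growth ~{traffic_cagr - moore_cagr:.0%}")
# reproduces the 15-20% data centre power growth cited in the text

# Years until exchange peak traffic (5.5) reaches reported capacity (12)
# if annual growth slows to 50%.
peak, capacity, growth = 5.5, 12.0, 0.50
years = math.log(capacity / peak) / math.log(1 + growth)
print(f"peak traffic hits capacity in ~{years:.1f} years")      # just under 2 years
```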
The last of my trio of predictions concerns the reuse of waste heat from data centres and is simply stated: by 2027 waste heat will not be ‘wasted’ from a huge array of ‘edge’ facilities, and they will become close to net-zero energy consumers. From my perspective, there is a gathering ‘perfect storm’ of drivers converging to push infrastructure designers towards liquid-based cooling.
The solution is simple and within our grasp today – liquid cooling of the heat-generating components, particularly the microprocessors. With liquid-immersed or encapsulated hardware and heat exchangers pushing out 75-80°C into a local hot-water circuit with 94% efficiency, the data centre will have a net power draw of just 6%. Just five cabinets (a micro-data centre by today’s definition), equivalent to 80x today’s ICT capacity, will be able to offer the building 100kW of continuous hot water. Consider embedded 100kW micro-facilities in offices, hotels, sports centres, hospitals and apartment buildings. Indeed, could this be the ‘major’ future? Could giant, remote, air-cooled facilities become obsolete? Probably not for twenty years, but then…
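Taking the figures in that paragraph at face value, the arithmetic works out roughly as follows (a sketch using the article’s numbers, not a design calculation):

```python
# Heat-reuse arithmetic using the figures above: 94% of the IT power is
# recovered into the hot-water circuit, and five cabinets offer 100 kW of heat.
RECOVERY_EFFICIENCY = 0.94
HOT_WATER_KW = 100.0
CABINETS = 5

it_load_kw = HOT_WATER_KW / RECOVERY_EFFICIENCY          # total IT load implied
net_draw_kw = it_load_kw * (1 - RECOVERY_EFFICIENCY)     # energy not recovered as heat

print(f"IT load ~{it_load_kw:.0f} kW (~{it_load_kw / CABINETS:.0f} kW per cabinet)")
print(f"unrecovered draw ~{net_draw_kw:.1f} kW, i.e. the ~6% 'net' figure quoted above")
```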
By Dr Jon Summers, University of Leeds
How much do you trust the weather predictions for tomorrow? If you are observant you may have noticed that such predictions have improved over time. This is in fact a direct consequence of Moore’s law, which I am sure you have heard much about, but suffice it to say it has been a self-fulfilling prophecy for successful growth of the ICT industry for nearly 50 years. Weather predictions become more accurate with faster supercomputers, which can then provide predictions in time for the broadcast weather forecast.
Talking about Moore’s law and making predictions and forecasts, it is interesting to ask whether there are physical limits that prevent Moore’s law from continuing as it has done for many decades. Recently, the academic and technical literature has abounded with indications that manufacturing transistors with gate lengths of only a couple of atoms is limited for two main reasons, namely fabrication cost and quantum effects. The former is likely to be the main limitation, as indicated in the 2016 Nature News article “The chips are down for Moore’s law”, which included the quote “I bet we run out of money before we run out of physics”.
In 1961, IBM employee Rolf Landauer published a paper that highlighted a relationship between energy and information and, amongst other things, reinforced the point that digital information is not ethereal. What the paper implied was that there is an ultimate minimum amount of energy dissipated (as heat) in a transistor (switch) at room temperature of 3 zeptojoules (0.000000000000000000003 joules), due to the erasure of information as part of the logical steps in digital processes, which leads to the notion of the “physics of forgetting”. This minimum energy became known as the Landauer limit, but if information is never erased it would be theoretically possible to build switches that do not adhere to this limit. In fact this was discussed by a colleague of Landauer, Charles Bennett, in 1973, who suggested that if computer logic were made reversible, so that information could flow both ways without digital erasure, it would be possible to compute with far less energy. This was recently achieved in the laboratory using what is called an adiabatically clocked microprocessor.
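The 3 zeptojoule figure quoted above can be checked directly from Landauer’s expression for the minimum energy to erase one bit, E = kB·T·ln 2 (a quick sanity check using standard constants):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, kelvin

landauer_limit = k_B * T * math.log(2)   # minimum heat dissipated per erased bit
print(f"Landauer limit at {T:.0f} K: {landauer_limit:.2e} J "
      f"(~{landauer_limit / 1e-21:.1f} zeptojoules)")
# ~2.87e-21 J, i.e. roughly the 3 zJ quoted in the text
```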
The question you may be asking yourself now is how the demise of Moore’s law affects data centres. The answer is probably ‘not much’, since heat removal and power requirements will continue to be an issue for facility management, but it is worth trying to understand how ICT may change as we march into the next decade. Analysing the literature, there are a number of interesting developments in building a replacement for Field Effect Transistors (FETs), the switch that creates the logic necessary for processing digital information. The immediate approach with today’s technology is to keep heat dissipation down while continuing to increase transistor count, by lowering voltages, introducing three-dimensional features, switching at lower speeds and making use of new materials. There is also a range of activities being pursued in the laboratory, namely computers that use reversible logic, superconducting switches, quantum processes, approximation and neuromorphic processes, which are not ready for mainstream data centres. However, the issue of dark silicon, i.e. the part of the “silicon chip” that cannot be powered simultaneously with other parts, is likely to grow. This in effect already happened when multicore microprocessors were introduced in 2005, but rather than having more “general purpose” processing cores, future chips will have cores specific to certain functions, such as encoding, encryption and compression, a development that is already occurring in the smartphone. ICT hardware is likely to become heterogeneous, and application software development will then become the main focus for energy efficiency.
Our ability to make predictions based on scientific theory has only been possible thanks to the development of calculating machines, but predicting how ICT will develop in the future may be a question that we really need to ask the machines themselves.
“If I have seen further, it is by standing on the shoulders of giants” was the phrase that Isaac Newton used in a letter to his rival, Robert Hooke, in 1675.
LCL Survey of Belgian Companies By Laurens van Reijen, Managing Director of LCL Belgium
Belgium's listed companies have a false sense of security when it comes to data storage. 97% do not test power back-up systems and 50% plan to outsource activities. CIOs and IT managers of listed companies incorrectly assume that their corporate data is stored safely and securely. According to a survey of Belgian listed companies carried out by LCL Data Centers, they underestimate risks such as power cuts and fire, they fail to test their protective systems and they do not invest sufficiently in redundancy.
The survey of Belgian listed companies commissioned by LCL shows that data security is not seen as essential within IT governance, not even among listed companies. For instance, with only one data center, in the case of a disaster you risk losing absolutely all your data. After your power shuts down, your company does too. If you really want to be safe, at least 30 km should separate the two data centers. Moreover, best practices dictate that the development environment should be separated from the production systems.
The CIOs and IT managers of 168 Belgian quoted companies took part in the survey. Of these companies, 87.5% felt they were protected from disasters such as fire or lengthy power cuts. Surprisingly, these respondents said that this was ‘because power cuts rarely happen’. The fact that they also have a disaster recovery service also added to their sense of security. Just 5% of respondents indicated that their organization was ‘reasonably protected', while 7.5% said that their organization had inadequate protection. This final group stated that in the event of a disaster it would not be possible to guarantee the continuity of the organization.
However, when asked whether their systems are also tested by switching off the electricity supply, only 3% of respondents answered yes. This means that a full 97% of respondents will effectively ‘test’ their backup systems for the first time when a disaster occurs.
“Our conclusion is that Belgian listed companies have a false sense of security,” Laurens van Reijen, LCL's Managing Director, said. “Many of the smaller listed companies, and some of the larger organizations, are not adequately equipped to deal with power cuts or other risks. They don't even know how well-protected their systems are, as they don't test their power backup systems. All organizations, and quoted companies in particular (in the context of corporate governance), should have all the protective systems they need to guarantee that the servers are dependable 24 hours a day, 7 days a week and they should actually test these systems on a regular basis.”
More than half of the listed companies store their data internally at the head office. One tenth of them rely on their own server room or a data center at another location owned by the company. A total of 44% of the respondents have a server room that is less than 5 m² in area. In this kind of set-up it is clearly impossible to include appropriate protective measures or specialized staff.
That said, most of the respondents (53%) do not have a second data center, meaning they have no backup in the case of fire or theft of the servers. At the same time, half of the listed companies included in the survey have plans to outsource activities. At one third of the listed companies that do have a second data center, it is located less than 25 km from the first. A major power cut is therefore likely to affect both data centers, which means the back-up plan will not be very effective.
“And yet business continuity is a must for virtually every business today,” Laurens van Reijen added. “The rise of digital technology has led to more and more business processes being digitized. Digital technology is being adopted in new, disruptive business models more than ever before, and these business models are thus dependent on the availability of the IT infrastructure. Shutting down servers in order to carry out maintenance work is no longer an option, as customers also need to be able to visit the website at night to submit orders. And as we have seen recently in Belgium at Delta Airlines, Belgocontrol and the National Register, a server breakdown can cause serious problems.”
“What are the odds that the current mentality – we all trust that all will go well - will change in the short term? Only a minority of companies interviewed said they were planning to set up a second data center. If we really want change, it will have to be directed by the Belgian stock exchange control body: FSMA. So in the best interest of our Belgian quoted companies, for the sake of their business continuity and employment - not to mention the shareholders who want return on their investment; data loss will almost certainly cause share devaluation - we call upon FSMA to issue a new guideline for quoted companies. A guideline pushing quoted companies to have a second data center, and to either thoroughly test all back-up systems, including power backup, or to confide in a party that does just that for them. It’s a pain in the lower back part, but people will not move unless they have to”, Laurens van Reijen concluded.
LCL has many years' experience and know-how in data centers and colocation. The company has three independent data centers: in Brussels East, Brussels West and Antwerp. The Belgian IDC-G member is ideally located in the center of Europe. At 4 milliseconds from Amsterdam and 5 milliseconds from London and Paris in terms of round-trip latency, LCL is a vital link in IDC-G’s international data center network.
LCL has clients in a wide range of sectors. Multinationals and small and medium-sized enterprises, government bodies, internet companies and telecom operators all call upon the services of LCL. The company is ISO 27001 and Tier 3 certified. LCL also opts resolutely in favor of sustainability and is ISO 14001 certified.
Laurens van Reijen, LCL’s CEO, is a seasoned data center professional. He was a founder and Operations Director at Eurofiber before founding data center company LCL in 2002.
For more information:
http://www.lcl.be
How the Cloud changes storage
By John Kim, Chair, SNIA Ethernet Storage Forum.
Everyone knows cloud is growing. According to analysts, cloud and service providers consumed between 35% and 40% of servers in 2016, while enterprise data centers consumed 60% to 65%. By 2018, cloud will deploy more servers each year than enterprise.
This trend has challenged traditional storage vendors, because more storage has also moved to the cloud each year, following the servers and applications. But it also challenges storage customers, the IT departments who buy and manage storage, because they are now expected to offer the same benefits as cloud storage at the same price.
The appeal of cloud storage is four-fold:
1) Price: Cloud storage might be cheaper than on-premises storage, as public cloud providers leverage economies of scale and frequently lower prices.
2) Rapid deployment: Application users can rent cloud storage capacity in a few hours, using a credit card, whereas traditional enterprise storage often requires weeks to acquire, provision and deploy.
3) Flexibility and automation: Cloud allows rapid increases or decreases in the amount and performance of storage, with no concerns about hardware management or refreshes, while changes and monitoring can be automated with scripts or management tools.
4) Cost structure: Cloud storage is billed as a monthly operating expense (OpEx) instead of an upfront capital expense (CapEx) that turns into a depreciating asset. You only pay for what you use and it’s typically easy to charge storage costs to the application or department using it.
Despite this appeal, many enterprise users are against moving all their storage to the public cloud for various reasons. Security: they might not trust their data will be sufficiently private or secure in the cloud. Regulations: government regulations might prevent them from using shared cloud infrastructure. Or from a performance standpoint, they might have locally-run applications that cannot get sufficient performance from remote cloud storage. (This can be resolved by moving applications to run in the same cloud as the storage.)
Other times, hardware is already purchased and the IT team strives to prove they can deliver on-premises storage solutions at a lower price than the public cloud. Either way, in the face of public cloud storage that is easy to consume and always falling in price, enterprise IT departments need to make storage cheaper and more flexible, either with a private cloud deployment or more efficient enterprise storage.
One way to “cloudify” the enterprise is software-defined storage (SDS). This separates the storage hardware from the software, and in some cases separates the storage control plane from the data plane. The immediate benefit is the ability to use commodity servers and drives to reduce storage hardware costs by 50%. Other benefits include increased agility and more deployment flexibility. You can choose different types and amounts of CPU, RAM, drives (spinning and/or solid-state), and networking for different projects, and refresh or upgrade the hardware when you want rather than on the storage vendor’s schedule. If you buy some of the fastest servers and SSDs, they can be your fast block/database storage today with one SDS solution, then be converted to archive/object storage three years from now using a different SDS solution.
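To make the control plane/data plane split a little more concrete, here is a minimal, purely illustrative sketch (not any particular SDS product’s API; all names are hypothetical). The control plane only decides where a volume should live on the commodity nodes; clients then read and write directly to those nodes, which is the data plane:

```python
from dataclasses import dataclass, field

@dataclass
class StorageNode:
    name: str
    media: str       # e.g. "ssd" or "hdd"
    free_gb: int

@dataclass
class ControlPlane:
    """Hypothetical control plane: decides WHERE data lives, never touches the data itself."""
    nodes: list = field(default_factory=list)

    def place_volume(self, size_gb, tier, replicas=2):
        candidates = [n for n in self.nodes if n.media == tier and n.free_gb >= size_gb]
        if len(candidates) < replicas:
            raise RuntimeError("not enough suitable nodes for the requested tier/replica count")
        chosen = sorted(candidates, key=lambda n: n.free_gb, reverse=True)[:replicas]
        for n in chosen:
            n.free_gb -= size_gb
        return [n.name for n in chosen]   # the data path then goes straight to these nodes

cp = ControlPlane(nodes=[StorageNode("node-a", "ssd", 2000),
                         StorageNode("node-b", "ssd", 1500),
                         StorageNode("node-c", "hdd", 8000)])
print(cp.place_volume(size_gb=500, tier="ssd"))   # e.g. ['node-a', 'node-b']
```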
Some SDS solutions let you choose between scale-up, scale-out and even hyper-converged deployments, and you can deploy different SDS products for different workloads. For example, it’s easy to deploy one SDS product for fast block storage, a second for cheap object storage, and a third for hyper-converged infrastructure. Compared to traditional arrays, SDS products are more likely to be scale-out and based on Ethernet (rather than on Fibre Channel or InfiniBand), but there are SDS products that support nearly every kind of storage architecture, access protocol, and connectivity option.
Other SDS vendors include more automation, orchestration, monitoring and charge-back/show-back (granular billing) features. These make on-premises storage seem more like public cloud storage, though it’s important to note that many enterprise storage arrays have also been adding these types of management features to make their products more cloud-like.
The benefits of SDS are appealing but not “free”, because it requires integration and testing work. Achieving the five or six nines (99.999% or 99.9999% availability) desired for enterprise storage typically requires careful qualification and testing of many aspects, including server BIOS, drive firmware, RAID controllers, network cards, and of course the storage software. Enterprise storage vendors do all this in advance with rigorous qualification cycles and develop detailed plans for each model covering support, upgrades, parts replacement and so on (the downtime arithmetic behind those availability targets is sketched below).
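For reference, those availability levels translate into allowable downtime as follows (simple arithmetic, independent of any vendor):

```python
# Downtime budget per year implied by each availability level.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999, 0.99999, 0.999999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.6f} -> ~{downtime:.1f} minutes of downtime per year")
# five nines ~= 5.3 minutes/year; six nines ~= 0.5 minutes (about 32 seconds)/year
```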
This integration work makes the storage more reliable and easier to support and service, but it takes a significant effort for an enterprise to do it all itself. It could easily require a few months of testing for the first rollout, followed by more months of testing every time the server model, software, drive model, or network speed changes. Cloud providers (and very large enterprises) can easily invest in this hardware and software integration work and then amortize the cost over their thousands of servers and customers. The larger ones customize the hardware and software, while the huge hyperscalers typically design their own hardware, software, and management tools from scratch. Enterprises need to determine if the savings of SDS are worth the cost of integrating it themselves.
Customers who want the cost savings and flexibility of SDS without the testing and integration requirements often turn to SDS appliances or bundles created by server vendors and system integrators who do all the testing and certification work. These appliances may cost more to buy, and be less open to hardware choices, than a “raw” SDS solution that is 100% integrated by the end user. But they still cost less, and offer more frequent hardware refreshes, than a traditional enterprise storage array. For these reasons SDS appliances offer a good solution to customers who want the benefits of SDS but don’t want to do their own testing and integration work.
In the end choosing between SDS and traditional enterprise arrays usually comes down to a tradeoff between time and money. SDS lets you save money on hardware by investing a lot of time up-front for qualification and testing, while traditional arrays cost more to buy but don’t require the upfront time investment. Generally speaking, larger customers find SDS more appealing than smaller customers, but choosing a pre-integrated SDS appliance—which can include hyper-converged or hypervisor-based solutions—can make SDS accessible and affordable to customers of any size.
For more perspective on how the cloud changes storage, see the following SNIA resources on Hyperscaler Storage at www.snia.org/hyperscaler
At times we focus so much on one specific topic and its nitty-gritty details that we miss the big picture: this is what people mean when they say someone can’t see the wood for the trees. Today the IT world is shaped by the ongoing debate around cloud. People tend to visualise the cloud as a location on some sort of geographical map. The cloud can be within your datacentre or outside of it, at times in some unspecified location. In reality the cloud is not a place, it’s a paradigm: it is a consumption model and an expectation of instant delivery. It has opened everyone’s eyes to the possibilities for greater agility, automation, efficiency and simplicity. And those attributes can be found in all flavours of cloud.
By Fausto Vaninetti, SNIA Europe.
So should enterprises embrace private, public or hybrid cloud? The question is so strongly connected to datacentre technologies that one aspect is often forgotten: how will users reach their cloud services? The answer is easy: through the wide area network (WAN). This is why the debate around the alternative cloud deployment models should start from the wide area network options. And this is also the reason the IT crew should be talking to the network team as a first step when approaching the cloud.
Nobody would have even considered building skyscrapers if Otis had not invented the elevator a couple of centuries ago. That was a key enabling technology and also a differentiator among builders. When you look at cloud IT in a broader sense, it is apparent that users can access the required services in the cloud only by connecting via the WAN. You can have a fantastic application running in whatever form of cloud, but it will look pretty nasty if you have poor connectivity. Bandwidth and latency are clearly very important, but packet drops and security will also play a key role to make users happy.
In the past the most adopted solution for enterprise connectivity was to deploy a virtual private network on top of MPLS technology from a service provider. More recently, with the increasing adoption of Internet as a viable enterprise WAN transport solution and the move of applications to either the public cloud (typically for SaaS) or hybrid cloud (typically for PaaS and IaaS), customers find themselves at the cusp of a WAN evolution. A top priority now becomes uncompromised application performance and availability regardless of the application type and how it is consumed. Customers are expecting that the evolving landscape of WAN solutions will incorporate more and more of the WAN optimisation capabilities available on the market, even better if they are designed specifically for application and cloud-access optimisation. Bandwidth reduction, handling of packet drops and high jitter, high throughput even on long distance links, encryption and application specific quality of service are just a few features that come to mind. The focus should be on user experience, application performance, visibility, control and security. The success of a cloud service will strongly depend on how well and easily users will be able to connect to it through the WAN.
Software-defined WAN solutions are becoming a hot topic these days and they have to thank the wide adoption of IT services from the cloud as a major driver behind this momentum. Moreover, SD-WAN connectivity can be controlled with software living in the cloud space, making the marriage even better. SD-WAN is a flavour of software-defined networking as applied to long distance connections, outside the datacentre. Using SD-WAN technology, enterprises connect their branch offices to their datacentre networks and to the cloud, crossing geographical distances.
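The sort of decision an SD-WAN controller makes continuously from live path measurements can be pictured with a small, hypothetical sketch; the per-application thresholds and link figures below are illustrative assumptions, not any vendor’s defaults or API:

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str           # e.g. "mpls", "internet", "lte"
    latency_ms: float
    loss_pct: float
    jitter_ms: float

# Illustrative per-application thresholds (assumed values).
POLICY = {
    "voice":  {"latency_ms": 150, "loss_pct": 1.0, "jitter_ms": 30},
    "saas":   {"latency_ms": 300, "loss_pct": 2.0, "jitter_ms": 100},
    "backup": {"latency_ms": 800, "loss_pct": 5.0, "jitter_ms": 500},
}

def pick_link(app, links):
    """Choose the lowest-latency link that meets the app's thresholds,
    falling back to the best available link if none do."""
    limits = POLICY[app]
    ok = [l for l in links
          if l.latency_ms <= limits["latency_ms"]
          and l.loss_pct <= limits["loss_pct"]
          and l.jitter_ms <= limits["jitter_ms"]]
    return min(ok or links, key=lambda l: l.latency_ms).name

links = [Link("mpls", 40, 0.1, 5), Link("internet", 25, 1.5, 20), Link("lte", 60, 2.5, 40)]
print(pick_link("voice", links))   # -> "mpls": the internet link's loss breaks the voice threshold
print(pick_link("saas", links))    # -> "internet": lowest latency among links within the SaaS limits
```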
Branch routers are incorporating WAN optimisation and SD-WAN capabilities, alleviating the need for dedicated hardware appliances. Products in the Network Function Virtualisation Infrastructure category are also emerging as an appealing new approach for enterprises to combine multiple services (like security, load-balancing, WAN optimisation) on a single x86 server engine. WAN optimisation is also being included within data replication software tools, facilitating backup toward a cloud environment.
Hybrid cloud is definitely under the spotlight as the ideal candidate for an affordable, secure and flexible delivery of IT services. When combined with the ongoing trends of virtualisation, mobility, desktop virtualisation and analytics, this leads to an increase in bandwidth and management complexity. SD-WAN solutions seem to be the answer, driving a swift shift in enterprise WAN architectures where there is a need to unify management of WAN application performance across Internet, MPLS and cellular links. Integrated platform offerings are top of mind within large enterprises with complex networks and requirements. Virtualised solutions, as part of the broader NFVI approach, are considered best for simpler branch and datacentre environments.
For organisations that are considering adopting some form of private, public or hybrid cloud, it would be a savvy move if they would start from the WAN. Just like elevators enabled skyscrapers, SD-WAN technology enables cloud IT. A famous song in the 70s suggested a lady was buying a “stairway to heaven”. Now it could be the time for IT managers to buy an “elevator to the cloud IT”.
A key revelation to some at the first European Managed Services and Hosting Summit in Amsterdam on 25th April was that, outside of the managed services industry, no-one is calling it that. With a strong focus on customers and how they engage with managed services, the event discussed how the model had become mainstream in the last year, and was now the assumed way of working for many industries.
Over 150 attendees from nineteen different European countries met to review the state of the market and the ways to take the industry forward. Bianca Granetto, Research Director at Gartner, set the scene with a keynote on how Digital Business redefines the Buyer-Seller relationship. In this she showed how customers are using more and more diverse IT suppliers, while still looking for a trust relationship with those suppliers, and that this process will continue in coming years. “The future managed services company will look very different from today’s,” she concluded.
This was reinforced by TOPdesk’s CEO Wolter Smit who, in a discussion on the new services model, said that MSPs were actually in the driving seat as the larger IT companies could not reach their level of specialisation. Dave Sobel, SolarWinds MSP’s partner community director also pointed out that many of the existing IT services companies were decades old and, with management due for replacement, new thinking among the providers was inevitable.
The top trends affecting the market were outlined by several speakers, with IoT, user experience and smart machines among them – and IoT will be profitable for suppliers, according to Dave Sobel, with MSPs top of the list of beneficiaries.
IT Europa’s editor John Garratt highlighted the differences between the US and European managed services markets, with the US more focused on financial returns. Price was apparently less important to European customers, who were more focused on gaining control of their IT resources. Autotask’s Matthe Smit said that price indeed mattered less than a good supportive relationship. But, he said, less than half of providers actually measured customer satisfaction, and this would have to change.
If anyone was in any doubt of the impact of the new model, Robinder Koura, RingCentral’s European channel head, showed how cloud-based communications had pushed Avaya into bankruptcy, and the new force was cloud-based and more flexible.
Security was never going to be far from the discussions, and Datto’s Business Development Director Chris Tate shook up the meeting with some of the latest statistics on ransomware. MSPs are in the firing line in the event of an attack like this, and he gave some sound advice on responses and precautionary measures. Local MSP Xcellent Automatisering’s MD Mark Schoonderbeek also revealed how he launched new services using a four-layered security offering: “First we'll search for vendors through our existing partnerships. When we find a good product, we'll R&D it from a technical standpoint. If the product meets our quality standards we will roll it out within our own production environment. Then we'll go to one of our best customers at a very early stage; we tell them it's a test phase and we'll implement the service for free, but in return we want the customer's feedback – what went well, what didn't go so well, and what the perceived value of the service is. Then we'll make a cost calculation and ask the customer what the service is worth. We'll put a price on the product and deliver it at a fixed price. The next step is to sell the product to all existing customers.”
The impact of the new EU General Data Protection Regulation (GDPR) was starting to be felt, but there were many unknowns, not least how the various regulators across Europe would react to its provisions, warned legal expert and Fieldfisher partner Renzo Marchini. Meanwhile, the opportunities and the general strong confidence in the European IT market were illustrated by Peter van den Berg, European General Manager of the Global Technology Distribution Council (GTDC).
Finally, a well-received analysis of what was going on in the tech M&A sector showed attendees where to make their fortunes and how to do so quickly. Perhaps unsurprisingly the key to creating value within a company turns out to be generating highly repeatable revenues – which is what managed services is all about.
For further information on the European Managed Services and Hosting Summit visit www.mshsummit.com/amsterdam.
Many of the issues debated during the European Managed Services event will be further discussed at the UK Managed Services and Hosting Summit, which will be staged in London in September – www.mshsummit.com
The IoT is changing the workplace in many ways, altering working patterns, driving new cultural trends and even creating new job roles. Whether it’s influencing business decisions, enhancing organisational efficiency, or creating more informed employees, the technology is undoubtedly a disruptive force. However, to achieve its full potential, we must get the security dimension right. The expansion in data unfortunately presents more opportunities for cyber-attackers.
By Manfred Kube, Head of M2M Segment Marketing, Gemalto.
While there are understandable concerns over the impact of technology on jobs, we think the future is positive. The IoT is going to create more intelligent employees, allowing them to access huge volumes of data to make more informed decisions. They will need help in this task from advanced machine learning and Artificial Intelligence techniques, which will help make sense of the data and discover patterns. Furthermore, the IoT is going to make work more flexible, rendering the traditional office space increasingly obsolete.
The rise in IoT-enabled devices, such as smart glasses, wristbands and powerful tablets, will equip workers with the tools to perform better. Just imagine how much more successful an aircraft engineer would be wearing connected smart glasses, enabling information on the plane to be fed into their field of vision while they fix a problem. We are also seeing products such as industrial smart gloves emerge, which can speed up assembly line processes by enabling workers with hands-free scanning and documentation of goods. The same principles apply to corporate leaders; think about how much more productive a boardroom meeting might be if directors had a constant stream of real-time data flowing to them from around the business which can be used to the company’s advantage.
The expansion in data can bring benefits for organizations, allowing them to better understand customers and make more informed decisions. Take banks as an example. The proliferation of connected devices means that financial institutions have more information at their disposal, allowing them to conduct more rigorous market analyses.
With IoT and M2M technology, banks can access data from across customers’ value chain. M2M sensors are set to enhance underwriting processes, since banks can better track physical performance of individuals, the shipping of goods and manufacturing quality control. Better informed lending decisions are also possible, since powerful IoT sensors can monitor the condition of retail, agricultural and manufacturing businesses.
While we’re optimistic about the future of work, there is a major obstacle – and that’s getting the cyber security right. The IoT is going to lead to an expansion in data, while the rise of mobile working is going to place more pressure on company networks to deliver cloud-based systems. Vulnerabilities could allow hackers to cripple organizations, potentially seizing control of organizational AI systems and wreaking havoc. We’ve covered real life examples of this on our blog before.
With cybercrime projected to cause losses of $2 trillion by 2019, companies need to develop strong identity management systems, as well as deploying tools like encryption and tokenization to combat cybercriminals. In addition, businesses running IoT projects will have to do more to ensure the identity of their connected machines and the sensors they are attached to, and the integrity of the data they are producing. After all, if a business will be using this data to make informed business decisions, they had better be sure the data is correct.
Clearly, the IoT is set to radically change the way we work, encouraging employees to make better use of data and pushing cyber security to the top of the agenda in the boardroom.
Hundreds of millions of cyber threats travel the internet every day and businesses of all sizes are at serious risk. For example, did you know that between 2010 and 2014, successful cyber-attacks on businesses of all sizes increased by 144%? On top of this, the National Cyber Security Alliance reports that approximately 60% of all businesses who experience a loss due to a security breach go out of business within six months.
By Atif Ahmed, Vice President EMEA Sales, Cyren.
This is quite staggering and clearly cyber-crime is big business—we’re talking large-scale organised crime and billions of dollars to be made from corporate and personal data. And, the Web is the primary highway for these attacks. Sophisticated phishing emails, malware, and even spam target more than just servers and desktops; laptops, smartphones, and tablets can also be the focus of a cyber-attack, from any location around the globe.
We recently conducted research to explore how smaller businesses are coping with escalating cyber threats, and to look at whether it is just larger organisations being attacked or whether small to medium sized enterprises and businesses are bearing the brunt. The research, conducted by Osterman Research and sponsored by Cyren, highlighted that security problems in small to medium sized businesses are rampant. Conducted in February 2017, it surveyed IT and security managers at 102 UK companies with between 100 and 5,000 employees. We say security problems are rampant because 75% of organisations surveyed reported a security breach or infection in the last 12 months, rising to 85% for businesses with 1,000 or fewer employees.
In terms of frequency and the types of breach causing organisations anguish, the average number of known breaches reported was 2.1. The threats rated of greatest concern were data breaches, ransomware, targeted attacks and zero-day exploits. Interestingly, ransomware infections were reported at twice the rate among organisations with fewer than 1,000 employees compared with organisations with 2,500-5,000 employees – 6 per cent versus 3 per cent respectively.
The greatest security gaps, where IT managers’ level of concern most outstrips their evaluation of their security capabilities, are in dealing with targeted and zero-day attacks. The threat of data breaches, botnet activity, and malicious activity from insiders were also cited. Only 19% of the respondents said that their web security is inspecting SSL traffic for threats.
The research also showed that IT managers are far more concerned about the costs of infection than the cost of protection. The initial cost of web or email security solutions and their total lifecycle cost were ranked much lower as decision criteria than features like ease of administration, visibility, and advanced security protection (the top three categories). The research also suggests that IT managers are far more concerned with stopping malware than with controlling employee web behaviour, with the exception of preventing access to pornography from business networks.
“Shadow IT” is a moderate concern for larger companies, but a low priority for those with 1,000 employees or less, with only 9% considering it of concern. The largest organisations surveyed, with 2,500 to 5,000 employees, are currently rating application control as the most important capability in evaluating new solutions, with 73% rating it extremely important. This compares to just 43 and 41 percent of organisations in the two smaller employee size categories.
Data Loss Prevention is highly utilised in the UK, ranking as the second-most-deployed capability for both web security (64%) and email security (62%), among the capabilities evaluated. Less than 25% say they protect company-owned or BYOD mobile devices, and less than 30% of remote offices and Guest Wi-Fi networks have gateway security. The vast majority of organisations rely on endpoint protection for traveling employees’ laptops and to protect use of the web at remote offices.
2017 has started with some major developments in cybersecurity. The UK’s National Cyber Security Centre opened its doors and the work to make UK businesses and citizens more aware of cybersecurity intensified. This is hardly surprising, as the country, and the rest of Europe, is just 12 months away from tough new legal regulations on cybersecurity going live.
By Aaron Miller, Systems Engineering Manager, Palo Alto Networks.
So, there have been plenty of opportunities to hear about cybersecurity in the media, even in Parliament. But what is the lie of the land as the half-year mark gets closer? Having attended both major public and private events on security, my prognosis is that the industry is becoming more mature and less manic, but some challenges still remain to be addressed.
Cybersecurity vendor collaboration is becoming a real benefit for customers. The Cyber Threat Alliance (CTA), of which my company is a founding member, has brought more vendors into the fold to share vital threat intelligence and apply it to tackle cyber threats much more effectively. As a result, every major security vendor is now a member of the alliance, working together to help our joint customers with the challenges they face.
However, what is really ground breaking about how the alliance has grown is how the CTA has committed itself to the ongoing development of a new, automated threat intelligence sharing platform. This could be transformative for how threat intelligence sharing delivers a real rather than theoretical blow to threat actors and their exploits.
This new platform automates information sharing in near real-time to solve the problems of isolated and manual approaches to threat intelligence. It better organises threat information into “adversary playbooks” focused on specific attacks, increasing the value and usability of collected threat intelligence. This innovative approach turns abstract threat intelligence into real world action and lets users speed up information analysis and deployment of the intelligence in their respective products. This kind of collaboration strengthens the industry and makes cyberattackers’ jobs more difficult.
Awareness that legacy antivirus approaches do not work has arrived, and more organisations are actively seeking alternatives. This is hardly surprising when endpoint security generated such a buzz at the events earlier this year and there are plenty of approaches being presented. The most intriguing alternative to me is one that not only ticks the compliance box for antivirus replacement, but is also natively integrated with the rest of the network security stack. As 2017 rolls on and organisations realise the magnitude of responding to cyber threats and complying with the tougher data protection requirements set out by GDPR and NIS, there is going to be a trend towards solutions with the native ability to integrate newly discovered threat intelligence into the platform with a minimum of human intervention. This is the only way to deal with both the flood of threat alerts most organisations receive and the growing number of endpoints connecting to networks.
There is a varied ecosystem of security products targeting new threat vectors and techniques. This is no surprise, but while new thinking and innovation are vital, an ad hoc approach to building a cybersecurity infrastructure doesn’t give organisations the complete visibility into their risk posture they need to prevent attacks. The feedback that I get from CISOs and others is that point solutions have some value but they don’t interact.
Orchestration is a term that’s going to be heard more frequently in 2017, so expect more vendors to claim they have found ‘THE’ solution for managing a mixed-vendor cybersecurity environment. While each company claims to support heterogeneous security, as an industry we must do better at delivering natively engineered security platforms in which many of the capabilities delivered by point products have been integrated into the greater whole. Done well, this is a much more beneficial solution.
As threats become more common and damaging, and the legal requirements on organisations to protect their users from cybercrime become more exacting, we are exposing a shortage of ready-to-go cybersecurity expert talent at all levels.
If you boil down much of the current debate about cybersecurity, finding ways to identify, hire and budget for more staff is the number one concern for government and business alike. This nut has to be cracked, and there is a twin-track approach that needs to be followed.
On one hand, we must encourage more cybersecurity learning within the education system. People are interested in these kinds of jobs – indeed, almost 1,250 people applied for the UK government’s 23 cybersecurity apprentice positions. Therefore, we need to fund more of these initiatives, whether within universities or through more practical training in the workplace. The new T-level proposals could also be a vehicle for getting more cybersecurity into the school curriculum and technical education system.
Although training the next generation of cybersecurity experts is vital, we need more cybersecurity capability today to enable the preventative strategies best able to protect organisations from cyberattacks. So, expect more organisations to evaluate how machine learning and artificial intelligence can be used alongside greater automation of cybersecurity processes to drive effective prevention strategies.
Over the last ten or more years we have seen tremendous changes as our societies and economies have become more digitised, and threats to these new ways of working and living have been anything but unusual. So maybe one of those past years felt significant too but, three months in, 2017 has a strong claim to be a transformative year for my industry. Or, at least until 2018 begins.
DCSUK talks to Leo Craig, Managing Director of Riello UPS about recent developments at the company and some of the industry issues that are already having an impact on the way power plant is being designed and used across the data centre industry.
1. Recently, Riello introduced the Netman 204 firmware rev 2.03 – what’s new?
The upgraded firmware in the Netman 204 communications card now gives more interaction between the Riello UPS product and the user. We have redesigned the web interface, and the same network card is now upgraded to use on our new Multi Power Modular UPS as well as all ranges within the Riello UPS family. There is also an improved setup wizard to add extra environmental sensors and contacts that can be monitored as well as the UPS.
2. And can you tell us a little bit about Riello’s new Riello UPS Hub?
The Riello Hub is a new platform we have developed to offer customers, resellers, consultants and Riello certified engineers a one-stop shop for all information about products and services. This includes content tailored to specific customer needs, such as data sheets, drawings, stock availability, order progress, firmware updates, training videos, price lists and much more. In essence, it’s about making life easier for all our clients so that they can find all relevant information in one place.
3. And the company recently opened a new subsidiary – where was that?
Riello UPS, part of the Riello Elettronica group, is proud to announce the opening of Riello UPS America. Riello UPS America will take care of marketing, engineering, pre and post-sales of Riello UPS products, particularly those with UL certification, for field applications such as Data Centers, Automation, Medical / Hospitals and for all applications where continuous and reliable power quality is essential.
4. Looking ahead, what can we expect from Riello’s Sentinel product line over the next 12 to 18 months?
At Riello, we continuously look to improve our range, and we listen to our clients and what they ask for, so in the coming months we will be upgrading some of the range to introduce unity power factor and a paralleling capability, so that multiple units can be paralleled together to increase capacity or add redundancy.
5. And what about the Multi product line – any developments here?
We have recently expanded the Multi Power modular range with the Multi Power Combo, which offers 126kVA of redundant power and batteries in a single rack. Soon there will be another addition to the range, but I am keeping that under wraps at the moment – as they say, watch this space over the coming weeks!
6. And then there’s the Master product line?
On our large UPS ranges we will have some very exciting new products on the horizon, which we hope to launch in the Autumn, but again I really cannot say much yet.
7. And your Solutions offering?
Our solutions offerings are as strong as ever, encompassing best practice in resilience, efficiency and total cost of ownership (TCO), which is no mean feat. The combination of highly efficient modular products and being the only major UPS manufacturer to use open protocols means we really can achieve the best power solutions for our clients.
8. And, finally, what might we see in the software/connectivity space?
We have just launched our Riello Connect cloud service to enable our service team and the client to monitor their UPS’s performance in the cloud via PC, laptop or smartphone. This replaces our old TeleNetGuard service, and existing clients are being migrated across to the new service throughout 2017.
9. And how is the service side of the business developing?
Service and maintenance is always an important element of the life of any UPS, and the maintenance contract is at the heart of this. Over the years, I have heard horror stories of clients selecting a third-party service company that, on the face of it, offers superb SLAs such as a two-hour response. The problem is that, yes, they do respond within two hours – but not with a qualified engineer, rather with a tradesperson such as a plumber! So the SLA is met, but the UPS is still down, and it could stay like that for days.
Riello is looking to change this by giving clear and achievable SLAs for response time but also for fix time, with penalties for Riello if we fail to meet our own SLAs. Riello’s new contracts will benefit the client more than they benefit Riello, which demonstrates another area where the business is striving to offer outstanding customer service.
10. Moving on to less product-specific topics, what are the main challenges facing today’s UPS manufacturer as the data centre design changes and the IT workload is also evolving?
The main challenge for any UPS manufacturer is to have an efficient product across all possible loads. When technology such as virtualisation was first introduced, a UPS that was sitting at a 70 per cent load would suddenly drop to 30 per cent. Quite rightly, the client wants their UPS to be efficient at low loads as well as high loads, which is where the modular UPS comes into its own because of the flexibility to power up or power down individual modules.
11. Is energy efficiency beginning to move up the agenda?
Energy efficiency has never really been off the agenda. At the end of the day, no data centre owner or manager wants an inefficient product because it wastes money and hits the bottom line.
The real issue is ensuring that the product you buy is efficient across all load levels. You can’t rely on the Carbon Trust’s Energy Technology List (ETL) because it allows lower UPS performance at low loading, which is not right. A recent survey of 3,000 customers conducted by Riello revealed that more than 75 per cent were running under 40 per cent load, with the majority around the 20 per cent mark.
12. And how will IoT and artificial intelligence impact on Riello’s business – both in terms of how the company is/will use it, and also how it is changing the data centre environment, and hence the UPS requirement?
The IoT and AI are great technical advances which will no doubt impact the way we do business and run our lives, but their implementation at Riello will be carefully considered before we jump in. The reason is customer service. If you automate too much, you lose customers. People like to deal with people – for example, how many times have you called a utility or telecoms company, pressed so many buttons, listened to so many automated messages and been left on hold for so long that you want to scream and never deal with that supplier again? At Riello UPS, we recognised this pitfall, and when someone calls our office they speak to a person after the first department selection. We do not have voicemail; we want to speak to our clients. That’s why we will consider new technologies and ensure we only implement the ones that enhance the customer experience.
13. Everything seems to be going modular in the data centre – is this true for UPS as well?
Absolutely, modular is very popular because of the flexibility and reliability with built in redundancy and lower TCO. Our R&D department are busy developing new additions to our modular range.
14. And how does Riello address the growing trend towards edge computing and micro data centres?
Data centres and computing technologies have always changed, but whether it is an old IBM mainframe from years gone by or a modern micro datacentre, the common thread is that they need secure, clean power. The question is how much power, and as a UPS manufacturer our range covers from 400VA to 6.4MVA, so we will have a solution for whatever happens in the IT industry.
15. With all that’s going on in the data centre space, do you think that the definition of ‘power resilience’ will have to change from the standard definitions as found in the Uptime Institute’s Tiering levels?
In short, no. The tier structure gives clients a basis to work from and an understanding of the risks. In essence, a single UPS offers some protection, but if your application is more critical, then you need redundancy; if it is even more critical, then a higher tier is required. Resilience is really risk mitigation: it’s the level of ‘insurance cover’ you need to keep your business or data centre going. You could look at the tier structure as insurance categories, Tier 1 being third party only with an excess and Tier 4 being fully comprehensive with no excess.
16. Can you share a recent data centre customer success story with us?
Success stories are hard to identify or quantify within the UPS industry: if UPSs are installed and work efficiently, that’s great; when they fail, it’s a problem. I think success, from Riello’s perspective, is when we go beyond a customer’s expectations. Last week we got a call about a small but very critical application involving a utility. Their old 15kVA UPS had failed and was beyond repair, so they called Riello to see how quickly we could get a 15kVA UPS to site. We took the call at 10am, the UPS was despatched an hour later, and it was onsite and installed at 2.30pm the same afternoon - that’s four and a half hours from start to finish, and that is what we look at as a success: a very happy customer.
Our lives are increasingly spent online, both personally and professionally. As each day passes our dependence on immediate connectivity grows, and the traffic generated by the communication between devices and machines is mounting. Gartner has predicted that in 2017 there will be 8.4 billion connected things in use worldwide. By 2020 this figure will reach 20.4 billion. With businesses and consumers expecting these connections to happen instantaneously, this will put enormous strain on the data centre load, particularly as many of the services we use most often, for everyday business IT continuity, industrial applications and streaming TV series, are all cloud-based.
By Appal Chintapalli, vice president of integrated rack systems for Vertiv.
As a result, cloud storage centres – which are based hundreds, often thousands of miles away from the end user – are becoming laden with complex data processing requests between devices and machines. Putting such pressure on the system means a slower service or as many call it, “latency” or “lag”. This unprecedented amount of data being generated has pushed organisations to rethink how they’re using computing power and whether or not using a centralised system to interact with data is the best possible approach.
Over the last 10 years, the process of interacting with data has involved it moving from the end user all the way back to the central data centre facility – a process that is both time and resource heavy.
To relieve this pressure, and to maintain a seamless connected experience as data grows, the industry is moving towards a distributed data centre infrastructure which brings computing capabilities much closer to the end user, a model which is known as edge computing.
Edge computing is the process of bringing computing power closer to the “edge” of the network, where digital transactions and machine-to-machine communication takes place. Within edge computing, each device on the network is able to perform basic processing, storing and control at the local level. As a result, shorter geographical distances mean that data can be managed and interacted with locally and therefore, more quickly. Edge computing also reduces the amount of resource needed to deliver data to the end user.
It will come as no surprise that edge computing is rapidly being deployed. With close proximity, response times and latency are minimised thanks to data being received and processed in situ. It allows remote sites to function irrespective of failings or delays in the core infrastructure, and enables devices that previously suffered from limited storage capacity to have access to far more data-intensive content, such as rich media.
Secondly, data transmission costs are lower, as the amount of data transferred back to a central location for storage is reduced. Edge computing also enables local processing of the information, so that only data that meets certain criteria is sent back for further processing or storage. There are fewer points of failure, too, as data processing and control occur at the device level without relying on a LAN.
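To make the local-filtering idea concrete, the short sketch below (written in Python purely for illustration; the threshold, field names and forwarding function are hypothetical and not taken from any vendor's product) shows an edge node summarising sensor readings on site and forwarding data back to the core only when it meets a defined criterion.

```python
# Minimal sketch of edge-side filtering: process readings locally and
# forward only those that meet a criterion to the core data centre.
# The threshold and forward_to_core() destination are illustrative assumptions.
from statistics import mean

TEMP_THRESHOLD_C = 75.0  # hypothetical alert threshold

def summarise_locally(readings):
    """Aggregate raw readings at the edge so only a summary leaves the site."""
    return {"count": len(readings), "mean_c": mean(readings), "max_c": max(readings)}

def forward_to_core(payload):
    """Stand-in for sending data back to the central facility (e.g. over HTTPS or MQTT)."""
    print(f"forwarding to core: {payload}")

def process_batch(readings):
    summary = summarise_locally(readings)
    # Only data meeting the criterion travels back to the core,
    # reducing transmission volume and round-trip latency.
    if summary["max_c"] > TEMP_THRESHOLD_C:
        forward_to_core(summary)
    return summary

if __name__ == "__main__":
    process_batch([68.2, 70.1, 77.9, 69.4])  # exceeds the threshold, so the summary is forwarded
```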
But the advantage of edge computing extends further than relieving immediate pressure on the network: it also improves business continuity and industrial operations. By nature, the micro data centres involved in edge computing are located geographically away from the core data centre facility. This means that should anything happen to the central data centre – such as a power failure or security breach – a business’ service can continue safely and seamlessly through the edge computing systems.
In the years to come, secure device-to-device communication and the remote deployment of software will increasingly depend on the availability of edge computing. Without local data processing power, connected devices will not be able to reach their full potential. Edge computing will also be instrumental in creating a future where IoT is mainstream in smart homes, smart cities and industry 4.0.
In the entertainment sector, the recent phenomenon of Pokémon Go is a good example of the growing importance of edge computing. During gameplay, the app itself collects a significant amount of data – location, player movement and internet connectivity. Simultaneously, the app syncs to your Google account, meaning it also has access to your Google data – that’s a huge amount of information being obtained and processed by the core game servers. It’s no wonder that reports of server crashes and security breaches are happening. This is the perfect scenario where edge computing becomes critical – moving data transfer away from core data centres, and towards the edge.
Edge computing is the answer for small-to-medium sized companies that are looking to bring data closer to businesses and their end users. The decision to place computing power at the edge is driven primarily by the type of application served. Banks, logistics centres or hospital facilities often require bespoke data management, making them the ideal type of business to adopt edge solutions. These may be made up of one micro integrated infrastructure or an aisle of racks, which would typically be housed in a room not specifically designed for it. The size of the infrastructure and its exact location will therefore depend on the computing needs in relation to the ambient environment. Increasingly, we will see data centre infrastructures coexisting as part of other buildings and implemented as standard office or store equipment.
Edge applications and micro data centres have become critical to many businesses, but building and equipping facilities fast enough to meet growing data demands is a real issue. The need for innovative pre-engineered solutions has never been more immediate. To combat this, earlier this year Vertiv announced the rollout of Vertiv SmartCabinet, a complete IT infrastructure solution in a fully-integrated enclosure. It combines all of the necessities for a micro data centre into one single unit, and eliminates the need to build complex computer rooms while enhancing system deployment.
Prefabricated solutions are a simple option for micro data centres with limited space, but also for companies looking to quickly upgrade their server rooms and local nodes to support increased data processing for machine-to-machine communication. In addition, integrated systems which are simplified and rapidly deployable are vital for edge applications. Common in education, telecoms, government and healthcare, our SmartRow and SmartAisle integrate racks, power, cooling and infrastructure management into a holistic data centre solution – maintaining productivity in a cost effective way.
There is an unprecedented amount of data being generated which will prompt organisations to rethink how they’re using computing power. This is because data centre systems don’t only store and process data, they generate it too – with an established IoT framework businesses will be required to put computing further away from the core. The future of successful enterprise computing will consist of a healthy, bespoke combination of core and edge sites, thus it’s essential to count on reliable and efficient solutions for both central and micro data centres.
The simplicity of edge computing means that businesses can now build integrated and flexible solutions that are tailored to the demands of an application and customer. Micro data centres – in some cases, as small as one rack – are working their way into rooms that were never designed for servers.
The evolution of machine communication, coupled with the impact of IoT and Industry 4.0, has already had a massive effect on our IT foundations, and without addressing underlying infrastructure issues, the connected devices of the future will struggle to ever reach their full potential.
As a top six Premier League club, with European football on their 2016/2017 fixture list, Southampton FC wanted to create an all new digital platform to reflect their success on the field. Their ambition was to significantly raise the standard of digital fan engagement in the Premier League, and as a result, the club appointed creative and technology partners who could deliver on that vision.
Leading the project was Delete, an award-winning digital creative and marketing agency based in London, Leeds and Munich specialising in the delivery of digital transformation strategies for major brands.
As a business focused on digital campaign design and delivery, Delete regularly requires managed hosting services to support its creative efforts. They needed an experienced, specialist service partner to provide the club with high performance, reliable hosting services backed up by exceptionally strong service levels. That partner was Hyve Managed Hosting.
Delete has worked on a number of strategic campaigns with Hyve, and turned to them to support the delivery of Southampton’s new strategy. “We are very strong both creatively and technically, but the idea of also becoming a web host with the expertise to support our campaigns was not part of our planning,” explained James Carrington, Partner and Chief Technology Officer at Delete. “We needed a specialist partner who could work directly with us and clients such as Southampton FC to deliver the strategic technical advice, services and support they require.”
Ultimately, the site needed to be resilient to spikes in web traffic, highly secure and offer maximum service uptime.
Delete designed a brand and promotional channel for the club, driven by a number of key objectives. These included promoting tickets, hospitality products and club merchandise, creating a digital advertising channel for partners and sponsors, and building the fan database to provide an experience fit for a top six club.
Delete turned to Hyve to help the club understand their particular hosting needs, and then to provide the advice and guidance to optimise the performance of their site. Hyve’s approach is to remove the stress and complexity of deployment, migration, and evolution from online platform deployment by employing the following continuous processes: Consult, Design, Deploy, Maintain.
The consult phase seeks to understand client needs in order to architect a cost-effective solution, within budget and to a project timeline that fits the client’s planning precisely.
Project deployment focuses on key technical milestones ranging from server build, migration and content delivery, to platform configuration, fine-tuning and launch. Hyve maintains and delivers 24/7 monitoring and support, backed up by ongoing performance tuning, giving every client the ability to scale Hyve’s services according to their requirements.
Experience and track record are key for Delete and their clients. “Some clients need a hosting company with capabilities and expertise around specific platforms such as Kentico and Sitecore,” explains Carrington. “As a result they put Delete and our hosting partner through a lot of due diligence – Hyve’s experience in areas such as these is vital to winning business and overall project delivery.”
Service and support is also central to the overall approach. Each Delete client is allocated a dedicated Hyve Technical Account Manager, who has detailed knowledge of the priorities and challenges they face, and who provides direct, personal support.
For clients such as Southampton FC, Delete and Hyve have been able to deliver a creative, reliable online presence, backed up by the highest levels of customer service. “We love the confidence and flexibility Hyve provides,” explains Carrington. “Quite a lot of the work we do is complex, and Hyve play a vital role in creating smarter ways of doing things that will save money, or do more with the same budget.”
"We have a responsibility to our supporters locally and around the world to ensure that our website is a high performance, always available resource, so needed a secure, stable hosting platform able to cope with high demand,” said James Kennedy, Head of Marketing. “Hyve not only fully understand this but they have exceeded all of our expectations. They have been agile enough to meet our exact architecture requirements and deliver a hosting solution that allows us to bring the best experience to our fans."
In a world of intellectual property theft, data breaches, and other cybercrimes, businesses are under intense pressure to protect sensitive data. This has only been heightened by the EU’s introduction of General Data Protection Regulation (GDPR). Worryingly, a fifth of UK organisations don’t understand the impending regulatory requirements.
By Iain Chidgey, general manager and vice president, EMEA at Delphix.
We asked UK organisations about their GDPR preparations and found that 21 per cent have no understanding of the impending regulation. A further 42 per cent in the UK have considered some aspects of the GDPR but not the pseudonymisation tools that the legislation recommends, and approximately one in five of those that have studied the pseudonymisation requirements admit that they are having trouble understanding them.
As a result, the panic is upon us and it is guaranteed that there is a senior executive losing sleep somewhere over the impending regulation. However, while the GDPR will force organisations to ensure compliance and reduce the risk of a data breach, it will also help them to usher in a new wave of IT innovation. In fact, as organisations look at how they store, manage and secure data as part of compliance demands, there is a real opportunity to think about how data can be better used.
So, what are the steps that businesses need to take to comply by 2018 and hone innovation?
While data masking provides organisations with a tool that fits key challenges emerging from the GDPR, businesses must apply it with a “data first” approach that involves greater awareness of how data changes and moves over time, and how to better control it. Specifically, businesses will be most effective in achieving pseudonymisation through masking if they address the following questions:
Enterprises create many copies of their production data for software development, testing, backup, and reporting. This data can account for up to 90 per cent of all data stored and is often spread out across multiple repositories and sites. Businesses that understand where their data resides – including sensitive data located in sprawling non-production environments – will be better equipped to allocate protective measures.
Although many large organisations now have Chief Data Officers, there is confusion over who owns data protection, with the responsibility often shared between compliance, risk, security and IT executives. Even those that do have them may not have adequate control over how data is moved and manipulated. That’s because individual business units – each with their own administrators, IT architects and developers – often define data-related processes at the project level, with little or no corporate policy enforced or even available. Businesses addressing the GDPR must take steps to regain data governance and introduce tools that drive greater visibility and standardisation into processes such as data masking. The GDPR does recommend the appointment of a Data Protection Officer, which will go some way towards providing a consistent view across an enterprise. However, without the right tools, it’s an overwhelming task.
Current approaches to delivering data are highly manual and resource-intensive, involving slow coordination across multiple teams. Adding pseudonymisation to already cumbersome data delivery processes only adds to this burden and enterprises often end up abandoning efforts to make technologies like data masking work. One way this is being solved is by combining data masking with new data delivery platforms. Using these, businesses can simplify and automate the management and delivery of data, placing data masking into that automated workflow to ensure that masking is repeatable and an integrated part of the delivery process.
The GDPR contains an express legal definition of ‘pseudonymisation’, describing it as: “the processing of personal data in such a way that the data can no longer be attributed to a specific data subject without the use of additional information, if such additional information is kept separately and subject to technical and organisational measures to ensure non-attribution to an identified or identifiable person.”
Put more simply, the GDPR explains that pseudonymised data is data held in a format that does not directly identify a specific individual without the use of additional information, such as separately stored mapping tables.
For example, “User ABC12345” rather than “James Smith” – to identify “James Smith” from “User ABC12345”, there would need to be a mapping table that maps user IDs to usernames. Where any such matching information exists, it must be kept separately and subject to controls that prevent it from being combined with the pseudonymised data for identification purposes. Data masking and hashing are examples of pseudonymisation technologies.
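As a rough illustration of that principle (a conceptual sketch only, not a description of any particular pseudonymisation product), the Python snippet below replaces a name with a generated user ID while the mapping table that would allow re-identification is held in a separate, access-controlled store. Kept apart from the working data set and under its own controls, that mapping store is the “additional information” the regulation refers to.

```python
# Illustrative pseudonymisation sketch: the identifier replacing the name is
# meaningless on its own; the mapping table enabling re-identification is kept
# in a separate store, as the GDPR definition requires.
import secrets

class SeparateMappingStore:
    """Stands in for a separately held, access-controlled mapping table."""
    def __init__(self):
        self._map = {}

    def register(self, real_name):
        pseudonym = "User " + secrets.token_hex(4).upper()
        self._map[pseudonym] = real_name
        return pseudonym

    def reidentify(self, pseudonym):
        return self._map[pseudonym]

mapping = SeparateMappingStore()          # held apart from the working data set
record = {"name": "James Smith", "balance": 1200}
record["name"] = mapping.register(record["name"])
print(record)                             # e.g. {'name': 'User 3F9A2C1B', 'balance': 1200}
```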
Data masking essentially means the ability to replace a company's sensitive data with a non-sensitive, "masked" equivalent while maintaining the quality and consistency needed to ensure that the masked data is still valuable to operational analysts or software developers. Although vendors have provided this technology for some time, the GDPR, which becomes law in 2018, dramatically elevates its relevance and importance.
Data masking represents the de facto standard for achieving pseudonymisation, especially in so-called non-production data environments used for software development, testing, training, and analytics. By replacing sensitive data with fictitious yet realistic data, masking solutions neutralise data risk while preserving the value of the data for non-production use.
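By contrast with the reversible mapping above, masking for non-production environments typically substitutes fictitious but realistic values and retains no mapping at all. A minimal sketch, with invented substitute values, might look like this:

```python
# Minimal data-masking sketch for a non-production copy: sensitive fields are
# replaced with fictitious but realistic values and no mapping is retained,
# so the masked data stays useful for testing while the real identities are removed.
import random

FAKE_NAMES = ["Alex Taylor", "Sam Patel", "Chris Evans", "Jo Murphy"]  # invented substitutes

def mask_customer(row):
    masked = dict(row)
    masked["name"] = random.choice(FAKE_NAMES)
    masked["email"] = masked["name"].lower().replace(" ", ".") + "@example.com"
    return masked

production_row = {"name": "James Smith", "email": "james.smith@bank.example", "postcode": "NN1 5BD"}
print(mask_customer(production_row))
```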
There is a falsely held view that encryption satisfies GDPR requirements. Encryption is certainly highly valuable for data in transit and is complementary to data masking. However, for data to be processed it has to be decrypted, exposing sensitive data to anyone who can access it. The Kiddicare breach is a good example: real customer data was used on a test website, which was subsequently breached and the data stolen.
When tackling these issues, we can take one of two approaches: either wallow in the complexity of imminent legislation or ride the wave of innovation. Take the millennium bug, for example, which also forced companies to review their systems to a deadline. Many used it as an opportunity to transform legacy applications, and the same can apply to the GDPR, especially when CEOs consider the potential cost of GDPR fines. A choice between 4 per cent of global turnover or investing a fraction of that in modernising data platforms suddenly drives the speed of change. Yes, the GDPR may mean new levels of compliance, but it will also provide the opportunity to excel in a data-led economy.
The advantage is that compliance, however strict, will undoubtedly result in a more comprehensive approach to data. The new data-driven landscape will pave the way for data-led innovation which has the potential to make businesses more secure, robust and resilient, accelerate business initiatives and ultimately, birth new competitive strategies. Not only will the enterprise be able to make more sense of its data, by default, it will also be able to use those data insights to deliver more value.
Ultimately, we can’t shy away from the consequential shift around GDPR but we can view it as an opportunity to reconstruct and refine our businesses to be better aligned with the digital era. If we think of this as a positive advancement, look closely, and squint with one eye, there’s almost a light at the end of the tunnel.
As with many NHS organisations, Northampton General Hospital NHS Trust’s infrastructure has evolved over time, and the Trust has seen an unprecedented increase in its reliance on the IT systems and applications that support the clinical and operational aspects of the hospital, all of which play a critical role in delivering excellent patient care.
This is a challenge that many hospitals are experiencing and Northampton’s existing data centre was not fit for the demands being placed upon it, with little spare capacity or disaster recovery facility. This had become an unpalatable prospect for the trust, especially as it could have left them without IT services for extended periods of time.
“We had to increase capacity and start to future proof our data centre infrastructure for the increased demands of mobile computing and data storage,” said Christina Malcolmson, Deputy Director of ICT at Northampton General Hospital NHS Trust. “We didn’t want to replace the existing DC as it is in constant use and the risks would have been huge – we strategically decided to build a new DC to share the load and bring new features.”
Secure IT Environments was commissioned to design and build a second, purpose-built data centre as part of a programme to improve security and disaster recovery for the whole hospital IT infrastructure. The project aimed to help deliver the NHS vision of enabling robust, resilient access to all patient data, quickly and efficiently, whilst lowering the risk of outages.
The project comprised an external modular room build housing 20 x 19” cabinets, ground works, hot aisle containment, N+1 UPS, raised access flooring, Novec fire suppression, VESDA detection and energy-efficient LED lighting. Security doors rated to LPS 1175 Security Rating 3 were installed across access points, with perimeter security fencing, CCTV and access control for added protection.
The new data centre was built to the rear of an existing car park. As a result, the area was cordoned off for the safety of hospital staff and visitors with early morning deliveries specifically arranged to keep disruption to a minimum. Secure IT Environments was responsible for the full build including planning applications, ground works, bringing HV power supplies to the new data centre, commissioning and testing.
The ground works presented some of the most challenging aspects of the project due to the data centre’s position on the site, slope of the ground and the trenching required to bring essential supplies to the new data centre, in a constantly busy area.
Special attention was paid to water drainage for the room, trenching and back-filling, as well as the power supply, which included a generator hook-up in the unlikely event that the mains should ever fail. Northamptonshire has large areas of porous soil, which leads to high levels of radon gas being emitted into the atmosphere. Outside, the gas dissipates quickly, but it can build up indoors, where it is a serious health hazard, so a radon protection system was incorporated into the concrete base.
The project was completed over a 14-week period and now provides a secure and safe environment with full disaster recovery back-up. The data centre measures approximately 80m² and contains 20 cabinets. As with all public sector organisations, keeping ongoing costs to a minimum is key, and one of the ways Northampton achieves this is through energy efficiency: the configuration of the cooling systems and the efficiency of the hardware mean the site is achieving a power usage effectiveness (PUE) of 1.15.
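For readers less familiar with the metric, PUE is simply the ratio of total facility energy to the energy consumed by the IT equipment itself, so a PUE of 1.15 implies roughly 0.15 units of cooling, power-distribution and other overhead for every unit of IT load (the kW figures below are illustrative, not taken from the Northampton project):

```latex
\mathrm{PUE} \;=\; \frac{\text{total facility energy}}{\text{IT equipment energy}},
\qquad \text{e.g.}\ \frac{115\,\text{kW}}{100\,\text{kW}} = 1.15
```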
Malcolmson concluded, “We are in a great place now, and better able to meet the needs of clinical and back office staff at the hospital. We are to start moving forward with an exciting list of projects to improve the experience of all at the hospital, including the most important of all, the patient experience.”
Owned by Walmart, Asda’s 160,000 colleagues serve up to 18 million customers each week across over 620 stores across the UK as well as online. As a retailer committed to “exceeding customer needs”, Asda wanted to go beyond offering affordable goods to their customers and provide a new service that would bring convenience to their daily lives.
The answer was toyou, an innovative end-to-end parcel solution. The service works through Asda’s extensive logistics and retail network, enabling consumers to return or collect purchases from third-party online retailers across its stores, petrol forecourts and Click & Collect points. This means that a consumer who buys a non-grocery product from another retailer can pick it up or return it at an Asda store rather than waiting for home or office delivery.
In order to implement the service, Asda required a far more agile warehouse and supply chain management system. This new system needed to be hosted off-site so that there was no reliance on a single store or team.
“We were launching a new service, in a new field, on a scale we had never undertaken before. We needed a pedigree IT solutions provider that could support the full scale of our full end-to-end implementation, so CenturyLink stood out to us,” explains Paul Anastasiou, Senior Director toyou in Asda.
Despite the scale of the project, Asda could not risk putting extra pressure on its existing legacy IT system – a seamless and secure transition was required. In addition, as a brand dedicated to making goods and services more affordable to customers, keeping costs low as the business model expanded was crucial. As such, Asda chose to outsource the administration of the operating system and application licences to manage costs, whilst still maintaining a high level of customer service.
Asda chose a warehouse management software platform from CenturyLink's partner Manhattan Associates. The multi-faceted solution was deployed as a hosted managed solution across two data centres. CenturyLink Managed Hosting administered Asda's operating systems, and Oracle and SQL databases on a full life cycle basis as part of the solution. Asda created its complete development, test certification and production environments for the Manhattan Associates platform on that dedicated infrastructure.
Asda used CenturyLink Dedicated Cloud Compute, which provided Compute and Managed Storage with further capacity to house the data flowing through Asda's business on-demand. The security requirement was accomplished with a dedicated cloud firewall protecting the entire solution.
CenturyLink instituted Disaster Recovery services between the two data centres at the application and database level, as well as managed firewalls to secure the data. CenturyLink also implemented Managed Load Balancing to manage the entire virtual environment and interface to all the linked warehouses.
Despite the scale of the operation, speed was also an important factor. The project took 18 months from concept to implementation.
Asda’s launch of toyou, with the support of CenturyLink, has greatly boosted the retailer’s customer service. By hosting the new platform with CenturyLink, Asda has effectively launched a huge-scale new venture while maintaining the same level of service and value to customers.
Asda and CenturyLink are continuing to develop the working relationship and discuss what opportunities could be available as toyou grows and develops to provide a better experience for customers.
There is a seemingly constant influx of news regarding cloud adoption trends, but what seems to be somewhat missing from industry discussion is the trend towards multi-cloud adoption.
By Oliver Pinson-Roxburgh, EMEA director at Alert Logic.
Analysts and industry experts, including Gartner, recommend standardising on multiple IaaS cloud service providers as a security and availability best practice. For securing workloads in public clouds, their top recommendation is a hierarchical list starting with foundational items that fall under operations hygiene (access control, configuration, change management) and then focusing on core workload protection such as vulnerability management, log management, network segmentation and whitelisting. Organisations should also be careful not to place too much trust in the traditional endpoint protection platforms commonly used in physical/on-premise deployments.
Most advice on best practice in this area tends to focus on workload security, but what are the likely consequences for security operations (SecOps) professionals who have a solid understanding of what success looks like in traditional enterprise environments? What do they secure first? What security technology should they choose? The criteria for answering these questions should be informed by the cloud service provider’s “shared responsibility model” as well as common compliance mandates, as a start. The next step is to identify the most critical assets. The security of access control at the application layer (think databases or other data-driven controls) is equally important, and often overlooked. Every CSP is different, and sometimes these models overlap or conflict with existing best practices and corporate security mandates.
I can understand how intimidating this approach can seem to enterprise professionals, but it is necessary to point out that simply installing software is not enough of a security deterrent. Businesses should never be afraid to ask for help and seek the aid of security professionals who are subject matter experts and can work with enterprises throughout all phases of a successful security plan. Beyond seeking quality assistance, there are several foundational steps every enterprise should take.
Securing the cloud workload must be the first priority, and access controls are the basic foundational requirement. Who or what has access should be determined by the server workload, which means tighter control over administrative access and the use of multi-factor authentication. Once proper access control is established, the configuration should have all unnecessary components removed, be hardened strictly in line with the enterprise’s standard guidelines, and be patched regularly to close potential security holes.
Network isolation and segmentation is another foundational component of workload security. Limiting a server’s ability to communicate with external sources can be done either via the host firewalls built into Windows or Linux or via external firewalls. While this segmentation is important, enterprises should also closely examine the logging capabilities of their systems, as logging allows security managers to keep a close eye on the overall health of a security plan.
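As an illustration of segmentation in a public cloud, and again assuming AWS only by way of example, a security group can restrict administrative traffic to a known management subnet; the VPC ID, port and CIDR range below are placeholders.

```python
# Minimal sketch (AWS assumed): allow SSH to a workload only from the
# management subnet. The VPC ID and CIDR range are placeholders.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="workload-admin-access",
    Description="Allow SSH only from the management subnet",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "10.0.0.0/24"}],  # management subnet only
    }],
)
```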
A final point of concern when securing cloud workloads is secure code and application control. Applications are a popular avenue of attack, so they should be made as secure as possible, with security kept in mind from the very beginning of an application’s lifecycle. Whitelisting should be used to limit which executables are allowed to run within a system. This simple step is a powerful security tool, as any malware delivered as an unapproved executable is immediately prevented from running.
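The whitelisting idea can be sketched in a few lines: only executables whose hash appears on an approved list are allowed to run. This is a simplified illustration of the principle, not a substitute for a proper application control product, and the hash list is hypothetical.

```python
# Minimal sketch of executable whitelisting: run a binary only if its
# SHA-256 hash appears on an approved list (the list here is hypothetical).
import hashlib
import subprocess
import sys

APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def run_if_whitelisted(path: str) -> None:
    if sha256_of(path) in APPROVED_HASHES:
        subprocess.run([path], check=True)
    else:
        sys.exit(f"Blocked: {path} is not on the whitelist")
```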
Developing a solid workload protection scheme should be a top priority for any enterprise utilising cloud infrastructure services, but it is not enough on its own to constitute a full security plan. Having considered workload protection, enterprises should go on to evaluate a number of other aspects of their security posture. It is also important to remember that cloud security is a shared responsibility: whatever cloud platform you are using, it is essential to be crystal clear about who is responsible for which aspect of security. Addressing all of these factors will allow an enterprise to stop worrying about its security plan and gain the peace of mind it deserves.
In less than 12 months the EU’s Markets in Financial Instruments Directive (MiFID) will be replaced by MiFID II. The legislation regulates firms who provide services to clients linked to ‘financial instruments’ and the venues where they are traded. MiFID II will come into force on 3rd January 2018 and for firms impacted by the regulation, benefits are to be had by choosing data centre colocation and the adoption of ‘as-a-service’ tools as part of the adherence strategy.
By Bill Fenick, Strategy and Market Director for Financial Services at Interxion.
It is certainly the case that there is a lot to distract firms right now. If preparing for MiFID II wasn’t enough, the financial services sector is facing the ongoing uncertainty surrounding the UK’s planned departure from the EU and likely withdrawal from the single market. Furthermore, there is speculation about the potential unwinding of existing regulation (most notably the US Dodd-Frank Act) following the new administration taking office in the White House.
However exciting or daunting (depending on your point of view) the changing political landscape is, it does not affect the requirement for MiFID II compliance. Brexit is not the regulatory ‘get out of jail free card’ that some have mooted in the past. Even if Article 50 is invoked tomorrow, UK organisations will still need to abide by new EU regulation, whether it be MiFID II or the GDPR (General Data Protection Regulation), as both will come into force before the UK leaves, which will be 2019 at the earliest.
The scale of the impact MiFID II will have should also not be underestimated. The regulation will directly affect a firm’s trading infrastructure considerations on several levels. Most notably, it establishes a new category of execution venue, the Organised Trading Facility (OTF), which aims to level the playing field for the trading of non-listed, non-equity instruments alongside the Regulated Markets (exchanges) and Multilateral Trading Facilities (MTFs) established under the preceding MiFID I (introduced in 2007). From January 2018, any firm wishing to participate in these markets must be able to connect to the new platforms and apply rigorous best execution policies to comply with the new rules, which put simply include…
The good news is that independent research from the A-Team Group mirrors our own anecdotal experiences, through the ongoing work we are doing with our capital markets customers. It would appear the marketplace is by and large in a state of readiness for MiFID II.
What’s more, as these organisations firm up their plans for MiFID II adherence (for those already in compliance with the Dodd-Frank Act it will prove less of an upheaval), many are choosing not to burden themselves with yet more on-site technology, instead leveraging existing infrastructure and expertise by colocating and taking advantage of the specialist services offered by data centres with expertise in the field.
Recently, Interxion has seen a considerable surge in demand at our London campus. Firms in London are attracted by the proximity of its locations to their base of operations, providing the low latency they require for their execution infrastructure, as well as the appeal of implementing key aspects of MiFID II compliance ‘as-a-Service’. Firms are also expressing a keen interest in various auxiliary services specifically related to MiFID II, namely highly granular time-stamping of trade data (firms must retain records relating to the entire trade lifecycle and be able to reconstruct transactions on request) and additional connectivity to further data sources and exchanges.
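To illustrate what granular time-stamping means in practice, the sketch below records a trade lifecycle event with a microsecond-precision UTC timestamp in an append-only log so that the transaction could later be reconstructed. The event fields are illustrative and do not represent a MiFID II reporting schema.

```python
# Minimal sketch: log a trade lifecycle event with a granular UTC timestamp
# so the transaction can later be reconstructed. Fields are illustrative.
import json
from datetime import datetime, timezone

def record_trade_event(order_id: str, event_type: str, log_path: str) -> None:
    event = {
        "order_id": order_id,
        "event": event_type,
        # ISO 8601 UTC timestamp with microsecond precision.
        "timestamp_utc": datetime.now(timezone.utc).isoformat(timespec="microseconds"),
    }
    # Append-only log retains the full trade lifecycle.
    with open(log_path, "a") as log:
        log.write(json.dumps(event) + "\n")

record_trade_event("ORD-0001", "order_received", "trade_events.log")
```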
By making the decision to access as-a-service tools for MiFID II, the costs associated with the hardware and related infrastructure needed for system monitoring, time-stamping, testing and so on, can be greatly reduced. Meanwhile, some shrewd firms are actively exploring how to optimise their execution processes and ensure MiFID II compliance, to create competitive advantage in the marketplace.
Again, the traction we are seeing echoes that of the A-Team Group survey, in which 60% of respondents saw at least some value in their efforts to comply, while 30% are expecting the cloud to contribute ‘significantly’ to their MiFID II solutions. It is evident that the message from the regulator is being heeded, but more firms need to take a closer look at the approach of their peers. As the saying goes ‘There is doing the right thing and doing things right’!
To learn more, download Interxion’s free whitepaper: ‘Countdown to MiFID II: Best Execution Brexit and Trading Infrastructure Best Practices’
For the past few years a regulation has been plaguing the minds of businesses everywhere: the EU GDPR. Now, almost three years later, we know what the regulation will be and when it will come into effect (25th May 2018). But many business heads are still left wondering what it really means and how it will affect them and their business. Let’s take a look at the GDPR and some of the myths that surround it.
By Nathaniel Wallis, Security Account Manager at Axial Systems.
Many UK firms are still trying to figure out how the shock result of the 2016 vote to leave the European Union will affect them. But one thing that is clear is that, regardless of where you stood on the result or where your business is based, if you want to trade with the EU you will have to meet its regulations. This includes the EU GDPR. So anybody who thought they were going to get away with not having to follow the GDPR now has to wake up to the fact that they still need to prepare for its imminent arrival. If you have been putting off preparations to see the result, all you have done is give yourself an even shorter timescale in which to be prepared.
The regulation provides that each institution or body must appoint at least one person as a Data Protection Officer (DPO) to enforce the regulations internally. This means that any organization that carries out regular and systematic monitoring of data subjects on a large scale, or processes sensitive personal data on a large scale, will need a person or persons to fill this role. These criteria cover both internal staff and external customers/vendors, meaning that almost every medium to large enterprise will need to have this role filled.
The GDPR places accountability obligations on data controllers to demonstrate compliance. This includes requiring them to: (A) maintain certain documentation, (B) conduct a data protection impact assessment for riskier processing (DPAs should compile lists of what is caught), and (C) implement data protection by design and by default. This places all of the pressure on organizations to ensure that all future processes are designed with the above in mind. But what about existing processes? What of all the current data stores that an organization has? These will all have to be evaluated and the new regulations applied to their functions and processes.
Businesses must notify most data breaches to the DPA. This must be done without undue delay and within 72 hours of becoming aware of the breach. In some cases, the data controller must also notify the affected data subjects without undue delay. Additionally, the UK ICO, for example, already expects to be informed about all “serious” breaches. Research has shown that most organizations have no formal incident response plan and, being ill prepared for a breach, would be unable to meet the 72-hour requirement. Under the GDPR, this is not a valid reason to miss the required timeline.
If any of the above requirements are not met and a data breach occurs, heavy fines will result. A two-tiered approach will apply. Breaches of the provisions that lawmakers have deemed most important for data protection could lead to fines of up to €20 million or 4% of global annual turnover for the preceding financial year, whichever is greater, being levied by data watchdogs. For other breaches, the authorities could impose fines of up to €10 million or 2% of global annual turnover, whichever is greater. Fines at this level could force many companies out of business altogether.
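For a sense of how the two tiers work in practice, the sketch below applies the “whichever is greater” rule to an illustrative turnover figure; it is a simple arithmetic illustration, not legal guidance.

```python
# Minimal sketch of the two-tier GDPR fine calculation described above.
# The turnover figure is illustrative only.
def max_fine(annual_turnover_eur: float, top_tier: bool) -> float:
    """Return the maximum fine: the greater of the fixed cap and the
    percentage of global annual turnover for the relevant tier."""
    if top_tier:
        return max(20_000_000, 0.04 * annual_turnover_eur)
    return max(10_000_000, 0.02 * annual_turnover_eur)

# A business with EUR 1bn global turnover faces a top-tier cap of EUR 40m.
print(max_fine(1_000_000_000, top_tier=True))
```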
Merger and acquisition activity is set to reshape the multi-tenant data centre market: The popularity of colo data centres is on the rise. At the same time, the industry demands greater levels of operational efficiency backed by service level agreements, transparency and reporting. Data Centre Infrastructure Management (DCIM) offers the required transparency and performance measurement of activities in colo data centres.
By Philippe Heim, Global Portfolio Manager DCIM, Siemens Building Technologies.
Typically, a colo data centre provides the building, cooling, power, bandwidth and physical security while the customer provides servers and storage. Space in the facility is often leased to customers by the rack, cabinet, cage or room. Colo offerings reflect the pulse of the market, as evidenced by the growth forecasts of 451 Research: The market researchers calculated how much space data centres worldwide will occupy by 2020. Their findings indicate that the colo market is outperforming other types of data centres, reaching annual growth rates of 8 percent.
Many colo data centres have extended their offerings, driven by customer demand, to include Data Centre Infrastructure Management (DCIM). DCIM tools offer asset management, central data centre management (a “single pane of glass”), improved forecasting and an extended lifecycle. For colo data centre customers, DCIM provides complete visibility to track and monitor the performance of their assets.
For colo data centres DCIM forms a bridge between facilities management and IT operations by providing a single, comprehensive view of both areas. It monitors the use and energy consumption of IT-related equipment and facility infrastructure components. The decision by colo data centres to adopt DCIM tools marks a significant shift in attitudes. Three to five years ago, quantifying the value of DCIM was difficult, and the tool itself was seen as more of a luxury than a necessity for small- to mid-size businesses.
Today, DCIM represents a win-win for colo data centres and customers alike. For customers, operating with a colo data centre provider allows them to customize their data centre processes to meet specific workflow needs. This ensures that DCIM solutions provide maximum benefit and reduce the risk of roadblocks in data access and use. DCIM tools give companies the flexibility to adjust operations quickly, scaling up analytics and workflows substantially when required and scaling back operations when maximum analytics power is not needed.
As increased competition is driving market consolidation, data centre managers are recognizing the importance of tools such as DCIM to their own organizations and the benefits it can bring to their customers. DCIM is now seen by many as being business critical not only to remotely manage and benchmark multiple colo data centre sites but also for the security of those sites. Datacenter Clarity LC, offered by Siemens, is a good example of DCIM software and how it can enhance the customer experience as it accurately and efficiently manages the colo data centre infrastructure. The software provides real-time visualization of asset attributes and powerful tools to determine the most efficient data centre configuration. It is based on proven industry-leading software in lifecycle management and features real-time monitoring that offers global coverage and stability.
The real-time monitoring of assets is an essential feature of Datacenter Clarity LC. Sensors and powerful data collector software are used to acquire operational and environmental data throughout the data centre. Real-time monitoring combined with asset management ensures that energy, equipment and floor space are used as efficiently as possible. With DCIM, colo data centres gain live, actionable data that allows immediate response and better control of their facility and their customers’ IT assets.
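As a simplified illustration of the monitoring loop, the sketch below polls environmental readings and flags racks that breach a temperature threshold. The read_sensors() data source and the threshold are hypothetical and do not represent the Datacenter Clarity LC API.

```python
# Minimal sketch of real-time environmental monitoring: poll sensor data
# and flag racks that exceed a temperature threshold. read_sensors() is a
# hypothetical stand-in for the DCIM data collectors.
import time

TEMP_LIMIT_C = 27.0  # illustrative upper bound

def read_sensors() -> dict:
    # Placeholder readings; in practice these come from the data collectors.
    return {"rack-01": 24.5, "rack-02": 28.1}

def monitor(poll_seconds: int = 60) -> None:
    while True:
        for rack, temp_c in read_sensors().items():
            if temp_c > TEMP_LIMIT_C:
                print(f"ALERT: {rack} at {temp_c:.1f} C exceeds {TEMP_LIMIT_C} C")
        time.sleep(poll_seconds)
```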
With the data centre market consolidating, the fusion of multiple business systems is commonplace, and there is an ever-growing need for DCIM to integrate with third-party vendors. Having an open protocol interface to facilitate such integration is important, as data centre managers increasingly recognize that this kind of DCIM integration can provide competitive advantages. Today, colo data centres are readily investing in DCIM systems, seen by both data centres and customers as one of the most important tools to complement any data centre facility infrastructure. DCIM can be described as the command centre, with the software providing a comprehensive, transparent overview for making decisions based on real facts.
The integration of multiple business systems, a frequent requirement during mergers and acquisitions, also means that the system capacity of colo data centres must be scalable and effectively managed and delivered on a global basis. Critical functions such as rack space, power, cooling and network connectivity require close oversight, which is achieved through a KPI-based client dashboard. However, asset moves, adds and changes (MACs) can create an imbalance in capacity utilization and result in stranded assets: unidentified and underutilized capacity on some servers while resources are exceeded on others. Without clear insight into available capacity, operators could invest in unneeded assets while others sit underutilized, with a negative impact on profitability at a time when capital is needed for growth. To better manage such capacity issues, colo data centre operators can use DCIM tools to reduce costs and improve capacity utilization. With sophisticated tracking and reporting capabilities, DCIM allows colo data centres to accurately assess the capacity levels of all of their assets. IT managers can better control capacity and workflow to maximize usage, and facility managers can optimize environmental conditions to help prevent costly downtime.
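One simple way to picture stranded capacity is to compare measured power draw against provisioned capacity per rack, as in the sketch below; the figures and thresholds are illustrative, not drawn from any real facility.

```python
# Minimal sketch: flag possible stranded capacity by comparing measured
# power draw with each rack's provisioned capacity. Figures are illustrative.
racks = {
    "rack-A1": {"provisioned_kw": 10.0, "measured_kw": 2.1},
    "rack-A2": {"provisioned_kw": 10.0, "measured_kw": 9.4},
}

LOW_UTILISATION = 0.30   # below this, capacity may be stranded
HIGH_UTILISATION = 0.90  # above this, the rack is near its limit

for name, rack in racks.items():
    utilisation = rack["measured_kw"] / rack["provisioned_kw"]
    if utilisation < LOW_UTILISATION:
        print(f"{name}: {utilisation:.0%} used - possible stranded capacity")
    elif utilisation > HIGH_UTILISATION:
        print(f"{name}: {utilisation:.0%} used - approaching capacity limit")
```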
DCIM systems offer another important advantage: real-time billing. Real-time billing improves customer transparency and is a tremendous advantage for accurate billing and budgeting. By tracking customer usage in real time, operators can invoice customers based on actual usage. This may sound counter-intuitive compared with the traditional method of billing customers at a set fee, but for customers it provides access to new levels of data that allow improved planning and perhaps expanded usage. Most importantly, customer satisfaction increases.
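In its simplest form, usage-based billing is a matter of pricing metered consumption at an agreed tariff, as the sketch below shows; the tariff and readings are placeholders rather than real commercial terms.

```python
# Minimal sketch of usage-based billing: price metered energy consumption
# at an agreed tariff. Tariff and readings are placeholders.
RATE_PER_KWH = 0.15  # placeholder tariff per kWh

def invoice(customer: str, kwh_readings: list) -> float:
    """Sum the metered consumption and price it at the agreed tariff."""
    total_kwh = sum(kwh_readings)
    charge = total_kwh * RATE_PER_KWH
    print(f"{customer}: {total_kwh:.1f} kWh -> {charge:.2f} at {RATE_PER_KWH}/kWh")
    return charge

invoice("tenant-042", [12.4, 11.9, 13.2])
```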
While the cost of a DCIM system will vary depending on the capabilities required to address the colo data centre’s particular challenges, it is perhaps more relevant to examine the added value that DCIM can provide. The ideal scenario is one in which the colo data centre agrees with its customer the level of access and visibility given into the data centre, for example to monitor power consumption or gather real-time data. It is also important for colo data centres to understand that they need a process for DCIM; without one, DCIM as a tool will not create a process for the business. This is key, as colo data centres expect to achieve a full return on investment (ROI) on their DCIM system in two to three years or even less.