I’ll keep this month’s comment brief. I’m no travel expert, but it would appear that the demise of Thomas Cook is largely due to the fact that the business failed to modernise quickly enough. By not taking sufficient notice of the digital world, the travel company joins a growing list of household names, in the UK at least, that have all failed to grasp the need to embrace technology and its potential.
Much of the content of Digitalisation World might seem a world away from the business that you run right now. However, as I’ve stated in the past, very few companies in very few industries can afford to continue doing things the way that they have always done.
At the most basic level, you might be lucky enough to have a unique business model that means your customers can’t go anywhere else. Don’t be complacent, though. You could have a rival quicker than you think – thanks to the ability to start a company almost overnight and buy in the technology required to run it.
Then again, however loyal your customers, if they find your lack of technology (a poor website, continued reliance on post rather than email, etc.) a major obstacle, they could well look around for another supplier.
Cost is often cited as the major obstacle to modernising or digitalising a business. Thanks to the rise of the cloud and managed services, this should no longer be an issue. In any case, doing nothing can exact the ultimate price: extinction.
All a bit gloomy, maybe, but the good news is that you can revolutionise your business and your wider industry by selective use of digital technology. It’s not too late.
Whilst few would dispute that technology makes our lives easier, new global research from Lenovo has found that a large proportion of those surveyed also believe it has the power to make us more understanding, tolerant, charitable and open-minded.
The survey polled more than 15,000 people from the US, Mexico, Brazil, China, India, Japan, UK, Germany, France and Italy, and revealed that more than eight out of ten UK respondents (84 per cent) think that technology plays a large role in their day-to-day lives. Meanwhile, 78 per cent of UK respondents said that technology improves their lives.
Although we might presume that the main way technology impacts our lives is by helping us with daily tasks such as email and streaming, Lenovo’s research has found that in many cases technology is actually having a strong impact on our human values. For example, 30 per cent of UK respondents believe smart devices such as PCs, tablets, smartphones and VR are making people more open-minded and tolerant.
It is likely that the rise of social media and video-sharing platforms is key to this, allowing people to connect with those from other countries and cultures and gain an insight into their lives through social posts, blogs, vlogs, video and other content. This window into other people’s lives is also a key contributor for the 66 per cent of UK respondents who believe technology makes us more ‘curious’.
The study also found that nearly a third (30 per cent) of people in the UK are of the opinion that technology makes us more charitable. This is likely the result of the increased prevalence of charitable ‘giving’ platforms, which allow people to make donations online as well as share their charitable endeavours via social platforms.
Psychologist Jocelyn Brewer comments:
“Technology is often blamed for eroding empathy, the innate ability most humans are born with to identify and understand each other’s emotions and experiences. However, when we harness technological advances for positive purposes, it can help promote richer experiences that develop empathetic concern and leverage people into action on causes that matter to them.
“Developing the ability to imagine and connect with the experiences and perspectives of a broad range of diverse people can help build mental wealth and foster deeper, more meaningful relationships. Technology can be used to supplement our connections, not necessarily serve as the basis of them.”
Respondents to the survey also believe emerging technologies could have an even more significant impact on our values in the future. Indeed, nearly three in five UK respondents (58 per cent) say that VR has the potential to cultivate empathy and understanding, and to help people feel more emotionally connected with others across the globe by allowing them to see the world through their eyes.
The global nature of this survey means that interesting differences can be found around the world. In particular, the developing BRIC economies surveyed (Brazil, India and China) appear to have the most faith in the ability of technology to positively impact our values. For example, 90 per cent of respondents in China think VR has the ability to increase human understanding, whilst the figure is 88 per cent in India and 81 per cent in Brazil. By comparison, just 51 per cent believe this to be true in Japan, a country that is highly technologically advanced, whilst the lowest figure is in Germany (48 per cent).
One respondent in the study remarked: “VR would give those who think the world is perfect an insight into other people’s world and make them realise the pain and suffering some people have to endure in their daily lives.”
There are of course two sides to every coin, and some respondents do feel that technology can instead divide people. For example, 62 per cent of UK respondents in the survey agreed that tech makes people more judgmental of others, especially through the lens of social media.
The ‘immediate’ nature of the internet also has some side-effects. For example, 48 per cent of UK respondents believe that technology is making us less ‘patient’, whilst 65 per cent said it can make us ‘lazy’ and 54 per cent said that it can make us ‘selfish’.
As a global technology company, Lenovo believes it is important to lead by example, using technology for good and actively promoting qualities like empathy and tolerance as its products are developed for widespread adoption.
Dilip Bhatia, Vice President of User and Customer Experience, Lenovo, comments:
“In many ways society is becoming more polarised as many of us are surrounded by those who share similar views and opinions. This can reinforce both rightly and wrongly held views and lead to living in somewhat of an echo chamber. We believe smarter technology has the power to intelligently transform people’s world view, putting them in the shoes of others and allowing them to experience life through their eyes – leading to a greater understanding of the world and the human experience.
“This could be through using smart technology to connect people from diverse backgrounds or allowing you to literally see their world in VR. The more open we are to diversity in the world around us, the more empathetic we can be as human beings – and that is a good thing.”
Companies of all sizes have trouble competing and disrupting in a software-driven world, due to a persistent shortage of developers as well as IT funding that falls short of supporting strategic goals.
Mendix has published key findings on the readiness of companies to successfully implement digital transformation initiatives. The survey shines a spotlight on the rise of “shadow IT” — when business teams pursue tech solutions on their own, independently of IT — in an effort to deal with the persistent shortage of software developers and inadequate budgeting to meaningfully advance a digital agenda. The research, conducted by California-based Dimensional Research, surveyed more than a thousand business and IT stakeholders in medium and large companies in the United States and Europe.
Clear communication between business and IT is often highlighted by leading analysts as imperative for successfully launching innovative technology solutions and user experiences. The survey, “Digital Disconnect: A Study of Business and IT Alignment in 2019,” did find a few strong areas of agreement, including a broad foundation of organizational respect for IT. There was near-unanimous agreement (95%) by all respondents that IT’s involvement in strategic initiatives is beneficial. Additionally, 70% of those surveyed give IT high marks as a value driver versus cost center for the enterprise; for example, 66% believe in IT’s potential to enable rapid, competitive responses to market changes and to increase employee productivity (65%). However, survey respondents identified significant hurdles to realizing that potential.
Where Business-IT Communication Breaks Down
The alignment between business and IT fades quickly when the topic turns to budgeting and operational priorities. In the survey, 50% of IT professionals believe IT budgets are insufficient to deliver solutions at scale. Conversely, 68% of business respondents do not see any challenges with the level of funding. Nearly three in five IT respondents (59%) cite the need to support legacy systems as a drain on resources and an impediment to innovation, and nearly half (49%) report difficulties in achieving stakeholder agreement on important business initiatives.
This impasse is clearly reflected in the shared belief that a huge pipeline exists of unmet requests for IT solutions (77% IT and 71% business), requiring many months or even years for completion. More than three in five business stakeholders (61%) say that fewer than half of their requests rise to the surface for IT implementation. Not surprisingly, both sides strongly agree (78%) that business efforts to “go it alone” or undertake projects in the realm of “shadow IT” — without official IT support or even knowledge — have greatly increased over the last five years.
“While business and IT users agree on the urgent need to advance the enterprise’s digital agenda, they are worlds apart on how to eliminate the backlog and take proactive steps to develop critically important solutions at scale,” says Jon Scolamiero, manager, architecture & governance, at Mendix. “For many years, IT has been budgeted and managed like a cost center, leading to a tremendous increase in shadow IT. Business leaders say they want IT’s help in achieving strategic goals and ROI, but only a small percentage (32%) grasp that current IT budgets are insufficient and inflexible. This disconnect is difficult for enterprises to surface, yet resolving it is a necessary first step in changing the calculus. The research findings point out the barriers that are impeding successful, cross-functional collaboration.”
Drilling Down into the Disconnect
The survey data highlights a number of nuanced differences in perceptions. For example, nearly all respondents (96%) agree there is business impact when IT doesn’t deliver on solutions. But IT staff feels that impact largely as frustration and a loss of staff morale. Business staff, on the other hand, believes these delays lead to missed key strategic targets and revenue reductions, loss of competitive advantage, and other ROI impacts.
“We also see a disconnect on the wish list of digitalization priorities from business, centering on innovations and new technologies that impact ROI,” says Scolamiero. “However, IT feels its hands are tied because the lion’s share of its resources go to ongoing support issues, including having to support legacy systems, back-end integrations, maintenance, and governance.”
The study’s sharpest contrast is around business users’ preferred solution of undertaking digital projects without IT’s help — going it alone as “citizen developers” or “shadow IT.” Business respondents overwhelmingly (69%) believe such action to be mostly good, while an almost identical number of IT professionals (70%) believe it to be mostly bad.
IT is strongly united in its fear that business professionals tackling application development on their own will create new support issues that IT will have to clean up (78%), and open core systems to security vulnerabilities (73%).
To underscore their concerns, 91% of IT respondents say it is dangerous to build applications without understanding the guardrails of governance, data security, and infrastructure compatibility. However, there is virtually unanimous agreement (99%) by all respondents that capabilities such as easy integrations, fast deployment, easy collaboration — which are inherent in low-code application development — would benefit their organizations.
Scolamiero said, “IT can be expensive, and there’s a reason that people believe most IT projects fail. IT involves the reasoned application of science, logic, and math, and requires a high level of expertise to get things done right. This level of expertise is viewed as extremely costly. But when the business side sees IT as a revenue driver instead of a cost center, and empowers IT to come up with creative ideas for solving problems, important and positive changes ensue. These changes drive an ROI that can dwarf the perceived cost. Low-code, when done right, solves much of the shadow IT problem by enabling and empowering intelligent and motivated people on the IT side and the business side to come together and make valuable revenue-generating solutions faster than ever before.”
Mending the Gap with Low-Code Platforms
While many business experts are still unfamiliar with the term “low-code,” business users who are familiar with its capabilities have aggressive adoption plans, with 90% citing plans to adopt low-code. Interestingly, more business executives (55%) than IT executives (38%) believe their organization would achieve significant business value by adopting a low-code framework.
“Enabling meaningful collaboration to bridge the gap between business and IT is one of the main reasons we founded Mendix and developed our unified low-code and no-code platform,” says Derek Roos, Mendix CEO. “Years of budgeting and managing IT as a cost center has led to a crisis in business. Business and IT leaders must redefine teams and empower new makers to drive value and creativity and become top-performing organizations. Every iteration of Mendix’s application development platform has advanced the tools needed for successful collaboration between business users and IT experts, enabling greater participation by all members of the team to produce creative, value-delivering, transformative solutions that advance their enterprise’s digital agenda. Because that’s absolutely required to succeed in the marketplace today. This research identifies for leaders the pain points and communication gaps they must address in order to break out of the bonds of legacy systems and thinking, and chart a meaningful digital future.”
Despite rising optimism, 86 percent of organizations have been prevented from pursuing new digital services or transformation projects.
Despite rising optimism around digital transformation, the vast majority of organizations are still suffering failures, delays or scaled-back expectations in their digital transformation projects, research from Couchbase has found. In the survey of 450 heads of digital transformation in enterprises across the U.S., U.K., France and Germany, 73 percent of organizations have made “significant” or better improvements to the end-user experience in their organization through digital innovation. Twenty-two percent say they have transformed or completely “revolutionized” the end-user experience, representing a marked increase over Couchbase’s 2017 survey (15 percent). However, organizations are still experiencing issues meeting their digital ambitions.
At the same time, transformation is not slowing down. Ninety-one percent of respondents said that disruption in their industry has accelerated over the last 12 months, 40 percent “rapidly.” And organizations plan to spend $30 million on digital transformation projects in the next 12 months, compared to $27 million in the previous 12.
“Digital transformation has reached an inflection point,” said Matt Cain, CEO, Couchbase. “At this pivotal time, it is critical for enterprises to overcome the challenges that have been holding them back for years. Organizations that put the right people and technology in place, and truly drive their digital transformation initiatives, will benefit from market advantages and business returns.”
Organizations are well aware of the risks of failing to digitally innovate. Forty-six percent fear becoming less relevant in the market if they do not innovate, while 42 percent say they will lose IT staff to more innovative competitors, in turn making it harder to innovate in the future. As a result, organizations are pressing forward with projects, perhaps recklessly. Seventy-one percent agree that businesses are fixated on the promise of digital transformation, to the extent that IT teams risk working on projects that may not actually deliver tangible benefits.
One key to delivering tangible benefits is ensuring that digital transformation strategy is set with the needs of the business in mind. The majority of organizations (52 percent) still have digital transformation strategy set by the IT team, meaning the C-suite is not guiding projects and strategy that should have a major impact on the business. At the same time, the primary drivers for transformation are almost all reactive – responding to competitors’ advances, pressure from customers for new services and responding to changes in legislation were each reported by 23 percent of respondents. Conversely, original ideas from within the business only drive eight percent of organizations’ transformations.
“In order for companies to succeed with their digital projects and overcome the inherent challenges with these new approaches, they have to attack the projects in a comprehensive and systemic way,” continued Matt Cain. “Transformation is ultimately achieved when the right combination of organizational commitment and next-generation technology is driven across the entire enterprise as a true strategic imperative, not left in the sole hands of the IT team. The best technology will then help companies enable the customer outcomes they desire.”
Digital transformation, migration to the enterprise cloud and increasing customer demands have tech leaders looking to AI for the answer.
Software intelligence company Dynatrace has published the findings of an independent global survey of 800 CIOs, which reveals that digital transformation, migration to the enterprise cloud and increasing customer demands are creating a surge in IT complexity and in the associated costs of managing it. Technical leaders around the world are concerned about the effect this has on IT performance and, ultimately, their business. The findings are detailed in the 2019 global report, “Top Challenges for CIOs in a Software-Driven, Hybrid, Multi-Cloud World.”
CIO responses captured in the 2019 research indicate that lost revenue (49%) and reputational damage (52%) are among the biggest concerns as businesses transform into software businesses and move to the cloud. And, as CIOs struggle to prevent these concerns from becoming reality, IT teams now spend 33% of their time dealing with digital performance problems, costing businesses an average of $3.3 million annually, compared to $2.5 million in 2018; an increase of 34%. To combat this, 88% of CIOs say AI will be critical to IT’s ability to master increasing complexity.
Findings of the 2019 global report include:
Software is transforming every business
Every company, in every industry, is transforming into a software business. The way enterprises interact with customers, assure quality experiences and optimize revenues is driven by applications and the hybrid, multi-cloud environments underpinning them. Success or failure comes down to the software supporting these efforts. The pressure to keep this “run-the-business” software performing properly has significant ramifications for IT professionals.
Enterprise “cloud-first” strategies increase complexity
Underpinning this software revolution is the enterprise cloud, allowing companies to innovate faster and better meet the needs of customers. The enterprise cloud is dynamic, hybrid, multi-cloud, and web-scale, containing hundreds of technologies, millions of lines of code and billions of dependencies. However, this transformation isn’t simply about lifting and shifting apps to the cloud, it’s a fundamental shift in how applications are built, deployed and operated.
The age of the customer increases pressure to deliver great experiences
We are squarely in the age of the customer, where high quality service is paramount due to the ease with which customers will try competitive offerings and share their experiences instantly via social media.
The research highlights the extent to which businesses are struggling to combat IT complexity that threatens the customer experience.
IT teams are feeling the strain
Digital transformation, migration to the enterprise cloud and increasing customer demands are collectively putting pressure on IT teams, who continue to feel the strain, especially as it relates to performance. The research’s key findings reveal the extent of this dilemma.
CIOs look to AI for the answer
Exploring the potential antidote to these challenges, the research further reveals that 88% of CIOs say that they believe AI will be critical to IT’s ability to master increasing complexity.
“As complexity grows beyond IT teams’ capabilities, the economics of throwing more manpower at the problem no longer works,” said Bernd Greifeneder, founder and CTO, Dynatrace. “Organizations need a radically different AI approach. That’s why we reinvented from the ground up, creating an all-in-one platform with a deterministic AI at the core, which provides true causation, not just correlation. And, unlike machine learning approaches, Dynatrace® does not require a lengthy learning period. The Dynatrace® Software Intelligence Platform automatically discovers and captures high fidelity data from applications, containers, services, processes, and infrastructure. It then automatically maps the billions of dependencies and interconnections in these complex environments. Its AI engine, Davis™, analyzes this data and its dependencies in real-time to instantly provide precise answers – not just more data on glass. It’s this level of automation and intelligence that overcomes the challenges presented by the enterprise cloud and enables teams to develop better software faster, automate operations and deliver better business results.”
Harvard Business Review Analytic Services, in association with Cloudera, reveals that the enterprise IT mandate to protect data and minimize risk is at odds with business needs for speed and agility.
Cloudera has published the findings of a global research report, created in association with Harvard Business Review Analytic Services, which examines the trends, pain points, and opportunities surrounding Enterprise IT challenges in data analytics and management in a multicloud environment.
The report, “Critical Success Factors to Achieve a Better Enterprise Data Strategy in Multicloud Environment,” is based on insights from over 150 global business executives representing a wide range of industries, with almost half coming from organizations with $1 billion or more in revenue. The study found that the majority (69%) of organizations recognize a comprehensive data strategy as a requirement for meeting business objectives, but only 35% believe that their current analytics and data management capabilities are sufficient to do so.
“This report reveals specific obstacles modern enterprises must overcome to realize the true potential of their data and validates the need for a new approach to enterprise data strategy,” said Arun Murthy, Chief Product Officer of Cloudera. “Cloudera is committed to helping our customers with the data analytics their people need to quickly and easily derive insights from data anywhere their business operates, with built-in enterprise security and governance and powered by the innovation of 100% open source. We call that an enterprise data cloud.”
The future is hybrid and multicloud
The report confirms that the future of analytics and Enterprise IT data management is multicloud, with businesses managing data across private, hybrid and public cloud environments — but there is still progress to be made. Over half (54%) of the organizations surveyed have plans to increase the amount of data they store in the public cloud over the next year, but the majority still manage much of their data on-premises. As enterprises create cloud strategies that are customized to their needs, the ability to securely access data no matter where it resides and to seamlessly migrate workloads has never been more imperative.
Functions are diversifying
Most organizations are leveraging their data to support traditional functions like business intelligence (80%) and data warehousing (70%). Newer functions are less common but on the rise, with half of the organizations surveyed planning to implement artificial intelligence and machine learning in the next three years. To fully extract the business value embedded in data, an enterprise data strategy must support a full buffet of functions, from real-time analytics at the edge to artificial intelligence.
A deeper understanding of regulatory compliance and security
The introduction of new regulations and increasingly complex governance processes means that every single person in an organization must understand the importance of keeping data secure and compliant. Ten percent of those surveyed did not know whether they were required to secure data within a regulatory framework, a small knowledge gap that could result in serious risk.
An open foundation
A commitment to open source is essential for an effective enterprise data strategy, yet half of those surveyed agreed that current cloud service providers are unable to meet their need for access to open-source software. Open compute, open storage, and open integration are baseline capabilities that a comprehensive enterprise data platform must provide.
More than two thirds (69%) of business decision-makers in senior management positions say that AI speeds up the resolution of customer queries, according to a recent survey commissioned by software provider, Enghouse Interactive.
Further underlining support for AI-driven customer service, more than a quarter (27%) of the sample claim that implementation of new technologies such as robots and automation would be the single thing that would help them serve customers the most during 2019, while a further 27% said better use and business-wide visibility of customer data (again often driven by intelligent technology) would be the top thing.
Jeremy Payne, International VP Marketing, Enghouse Interactive, said: “We are living in a digital age and the use of AI and robotics are on the increase in the customer service environment. This ongoing drive to automation has the potential to bring far-reaching benefits to organisations and the customers they serve. Yet, in making the journey to digital, businesses must guard against leaving their own employees behind and failing to allay their concerns about the digital future and their role in it.”
Three-quarters (75%) of respondents said they have had a positive experience of technological change within a contact centre or customer service environment, while just 6% said their experience had been negative. Moreover, 85% of the sample agree with the statement: ‘my employer gives me the tools and technologies to deliver the best possible service to customers’.
However, the survey results do highlight some scepticism among customer service teams as to the motives their business has for implementing solutions. When asked to name what they thought their company was hoping to do by moving to robotics and automation technology, more than a third of respondents (37%) thought ‘cutting costs’ was among their company’s top two priorities and 34% said ‘deliver a competitive edge’. ‘Empower employees and add value to their jobs’ was lower down the list (referenced by 31%).
“That’s an area organisations will need to look at,” said Payne. “After all, the implementation of bots and AI typically helps them deal more efficiently with routine enquiries, thereby relieving contact centre employees from having to deal with standard questioning all the time and adding greater value to the interactions with which they are involved.”
Some enterprises lack the understanding and the mature data infrastructures needed to make Artificial Intelligence (AI) a success.
Mindtree has revealed the findings of its recent survey of AI usage across enterprises. The study found most businesses are well underway with AI experimentation, but many still lack an understanding of the use cases that deliver business value, as well as the data infrastructures needed to make AI a success across the enterprise on a sustainable basis.
The survey, which gathered data from 650 global IT leaders from key business markets, found 85% of organisations have a data strategy and 77% have implemented some AI-related technologies in the workplace, with 31% already seeing major business value from their AI efforts. Organisations are achieving their vision to industrialize AI, but many can do more to gain real business value.
Greater focus is needed on use cases that deliver business value
When implementing an AI strategy, there’s a pressing need for use cases that demonstrate business value. The survey revealed that only 16% of enterprises globally focus on a pain point first and then define a use case, with smaller organisations (13%) being less likely to focus on the business impact than their larger counterparts (18%). With all the pressure to harness AI, many organisations are experimenting, but not all have found the formula to deploy at scale and add significant value.
The survey found that certain business functions, such as sales (35%) and marketing (32%), are gaining the most value from AI, as it accelerates the delivery of improved customer experiences. The most popular technologies deployed by global organisations are machine learning (34%), chatbots (34%), and robotics (28%).
Success with AI means merging rapid experimentation, organisational agility and skills
AI is already delivering measurable business benefits, but the majority of enterprises have yet to find a formula for repeatable success. An important requirement for enterprises to successfully start their AI journey is to experiment with different use cases and technologies using agile and rapid innovation methodologies. Nearly three in ten (29%) of the enterprises surveyed said they are agile enough to rapidly experiment with AI, with large organisations (39%) having an edge over their smaller counterparts (19%).
Progressive enterprises are spending 25% of their IT budget on digital innovations, with a focus on use case definition, experimentation, and operationalization for scale. Businesses also understand the need to upgrade and refresh their skills to capitalize on AI: 44% say they will hire the best talent available externally, 30% have partnerships with academia, and 22% run hackathons to solve data challenges and identify potential candidates.
The AI-led enterprise – it’s all about data
Finding the right use cases and building alignment and support for AI initiatives are critical, but data is the make-or-break variable when it comes to scaling AI across the enterprise. Businesses need to modernize their data infrastructure, architectures and systems, along with an overarching data strategy and robust data governance processes. The survey revealed more than half (51%) of large enterprises say they don’t fully understand the data infrastructure needed to implement AI at scale, while 6 out of 10 organisations believe their data infrastructure and architectures are immature and not well positioned to deliver business value.
“The potential of AI to disrupt, transform and rebuild businesses is clearly felt in the C-Suite, even if it is not yet fully understood,” said Suman Nambiar, Head of Strategy, Partners and Offering for Digital at Mindtree. “Business and technology leaders are increasingly expected to prove business value, unlock the power of their data, and define their AI strategy and roadmap. To thrive in the Digital Age, businesses must be agile and unafraid of failure. They must also constantly refine their understanding of how AI will give them a competitive edge and deliver real and measurable business value to maximize their investment in these disruptive and powerful technologies.”
New survey reveals only 12% of today’s enterprises have fully transitioned to modern tools.
A new IT Operations Management survey has found that enterprises depending exclusively on legacy monitoring tools are falling behind in business agility and operational efficiency. The commissioned study, conducted by Forrester Consulting, uncovered that organizations with disjointed and outdated IT offerings that utilize legacy tools and strategies are trapped in a perpetual survival mode and unable to innovate. An alarmingly low 12% of respondents report having fully transitioned to modern monitoring tools, with 37% still relying exclusively on legacy tools that keep them stuck in a digital deadlock.
Respondents revealed that legacy toolsets remain prevalent in their IT ecosystems, and relayed the negative implications of legacy IT vendors and tools, which undermine enterprises’ service resilience, fast mean time to resolution, and the ability to automate at scale. A full 86% said they are still using at least one legacy tool, actively exposing their business to negative consequences, primarily high costs of IT support, service degradation, and increased security risks.
The Opportunity Ahead
Mature enterprises are attempting to match their digital-native counterparts by adopting cloud-based architectures, but continue to fall short, as many modern tools are unable to manage outdated legacy systems. To address IT visibility and remediation challenges, over two-thirds (68%) of companies surveyed plan to invest in AIOps-enabled monitoring solutions over the next 12 months. These solutions apply AI/ML-driven analytics to business and operations data to make correlations and provide real-time insights that allow IT operations teams to resolve incidents faster, and to avoid incidents altogether. IT decision-makers reported that the major benefits of AIOps solutions include increased operational efficiency and business agility, as well as a reduced cost of downtime.
“Enterprises that operate on dozens of legacy vendor tools are siloing the view of their IT environment, leading to prolonged service disruptions, issues with incident resolution, and ultimately, providing for a poor customer experience. These ‘survival mode enterprises’ have little chance of getting ahead of the agility curve and are in real danger of being left behind,” said Dave Link, founder and CEO of ScienceLogic. “As the adoption of newer technologies like containers and microservices continues to rise, forward-thinking companies will drive extensive automation with artificial intelligence and machine learning algorithms. This study shows that companies will need to adopt innovations like AIOps to ensure a successful modernization and automation journey.”
“These enterprises are starting to take the leap to modernize their IT environment; however, survival will require a cultural shift in how people and organizations understand the flow and impact of clean data as part of a broader strategy towards automation,” said Link. “The reality is that those who have not started are already behind, but it is not too late to future-proof your IT systems and teams so they may focus on innovative advancements to propel your enterprise to market success.”
Veracode has released results of a global survey on software vulnerability disclosure, “Exploring Coordinated Disclosure,” that examines the attitudes, policies and expectations associated with how organisations and external security researchers work together when vulnerabilities are identified.
The study reveals that, today, software companies and security researchers are near universal in their belief that disclosing vulnerabilities to improve software security is good for everyone. The report found 90% of respondents confirmed disclosing vulnerabilities “publicly serves a broader purpose of improving how software is developed, used and fixed.” This represents an inflection point in the software industry – recognition that unaddressed vulnerabilities create enormous risk of negative consequences to business interests, consumers, and even global economic stability.
“The alignment that the study reveals is very positive,” said Veracode Chief Technology Officer and co-founder Chris Wysopal. “The challenge, however, is that vulnerability disclosure policies are wildly inconsistent. If researchers are unsure how to proceed when they find a vulnerability, it leaves organisations exposed to security threats, giving criminals a chance to exploit these vulnerabilities. Today, we have both tools and processes to find and reduce bugs in software during the development process. But even with these tools, new vulnerabilities are found every day. A strong disclosure policy is a necessary part of an organisation’s security strategy and allows researchers to work with an organisation to reduce its exposure. A good vulnerability disclosure policy will have established procedures to work with outside security researchers, set expectations on fix timelines and outcomes, and test for defects and fix software before it is shipped.”
Key findings of the research include:
·Unsolicited vulnerability disclosures happen regularly. The report found more than one-third of companies received an unsolicited vulnerability disclosure report in the past 12 months, representing an opportunity to work together with the reporting party to fix the vulnerability and then disclose it, thus improving overall security. For those organisations that received an unsolicited vulnerability report, 90% of vulnerabilities were disclosed in a coordinated fashion between security researchers and the organisation. This is evidence of a significant shift in mindset that working collaboratively is the most effective approach toward transparency and improved security.
·Security researchers are motivated by the greater good. The study shows security researchers are generally reasonable and motivated by a desire to improve security for the greater good. Fifty-seven percent of researchers expect to be told when a vulnerability is fixed, 47% expect regular updates on the correction, and 37% expect to validate the fix. Only 18% of respondents expect to be paid and just 16% expect recognition for their finding.
·Organisations will collaborate to solve issues. Promisingly, three in four companies report having an established method for receiving a report from a security researcher and 71% of developers feel that security researchers should be able to do unsolicited testing. This may seem counterintuitive since developers would be most impacted in having their workflow interrupted to make an emergency fix, yet the data show developers view coordinated disclosure as part of their secure development process, expect to have their work tested outside the organisation, and are ready to respond to problems that are identified.
·Security researchers’ expectations for remediation time aren’t always realistic. After reporting a vulnerability, the data shows 65% of security researchers expect a fix in less than 60 days. That timeline might be too aggressive and even unrealistic when weighed against the most recent Veracode State of Software Security Volume 9 report, which found more than 70% of all flaws remain one month after discovery and nearly 55% remain three months after discovery. The research shows that organisations are dedicated to fixing and responsibly disclosing flaws and they must be given reasonable time to do so.
·Bug bounties aren’t a panacea. Bug bounty programs get the lion’s share of attention related to disclosure but the lure of a payday actually is not driving most disclosures, according to the research. Nearly half (47%) of organisations have implemented bug bounty programs but just 19% of vulnerability reports come via these programs. While they can be part of an overarching security strategy, bug bounties often prove inefficient and expensive. Given that the majority of security researchers are primarily motivated by seeing a vulnerability fixed rather than money, organisations should consider focusing their finite resources on secure software development that finds vulnerabilities within the software development lifecycle.
Half of professionals also admitted concerns around their current cloud providers.
An overwhelming number of cybersecurity professionals (89%) have expressed concerns about the third-party managed service providers (MSPs) they partner with being hacked, according to new research from the Neustar International Security Council.
While most organisations reported working with an average of two to three MSPs, less than a quarter (24%) admitted to feeling very confident in the safety barriers they have in place to prevent third-party contractors from violating security protocols. Along with this concern, security professionals expressed a desire to switch cloud providers, with over half (53%) claiming they would if they could.
These threat levels were also apparent in Neustar’s International Cyber Benchmarks Index, through which NISC has been mapping threat trends since 2017; the most recent index revealed an 18-point increase over the two-year period.
Aside from third-party threats, security professionals ranked distributed denial of service (DDoS) attacks as their greatest concern (22%), closely followed by system compromise (20%) and ransomware (19%). Insider threat remained bottom of the list, with 29% seeing it as the least concerning.
In light of continued fears around DDoS attacks, organisations outlined their most recent security focus to be on increasing their ability to respond to DDoS threats, with 57% admitting to having been on the receiving end of an attack in the past. In the most recent data set collected and analysed by NISC, enterprises were most likely to take between 60 seconds and 5 minutes to initiate DDoS mitigation.
“Regardless of size or sector, every organisation relies on third-party service providers to support and enable their digital transformation efforts. Whether it’s a business intelligence tool, cloud platform or automation solution, the number of MSPs businesses work with is only set to increase, as enterprises continue to chase agility and find new ways to attract customers,” said Rodney Joffe, Chairman of NISC, Senior Vice President, Security CTO and Fellow at Neustar. “However, by multiplying the number of digital links to an organisation, you also increase the potential for risk, with malicious actors finding alternative ways to infiltrate your networks.”
“While businesses should adopt their own, always-on approach to security, it’s essential that they are also questioning the security of their whole digital network, including the third parties they work with. Missed connections or weak links can cause lasting damage to an organisation’s bottom line, leaving no room for error,” Joffe added.
The smart money is very interested in software at present, driven by a booming SaaS sector. That gives the managed services sector more to think about as it wrestles with the rate of mergers and acquisitions among its suppliers. It also pushes consolidation among the MSPs themselves, as they find themselves alongside similar businesses competing for the same limited resources and see M&A as a solution.
Overall, the M&A landscape in enterprise software is thriving, with volumes, value and valuations currently peaking as target firms continue to receive interest from established trade buyers and investors. This comes from new research by Hampleton Partners, experts in tech M&A, who will be speaking at the Managed Services Summit North on October 30 in Manchester.
In its latest analysis of global Mergers & Acquisitions activity in Enterprise Software, Hampleton Partners reports the highest deal count ever recorded, with 651 transactions inked over the first half of 2019. Multiples were also sky-high; the trailing 30-month median EV/EBITDA multiple peaked at 17.5x, inching its way up from last year’s levels.
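For readers less familiar with the valuation metric mentioned above, here is a minimal sketch of how an EV/EBITDA multiple is derived. The company figures are hypothetical assumptions for illustration only; just the 17.5x median itself comes from Hampleton's analysis.

```python
# Illustrative only: how a trailing EV/EBITDA multiple is calculated.
# The company figures below are hypothetical, not Hampleton data.

def ev_ebitda_multiple(enterprise_value: float, ebitda: float) -> float:
    """Enterprise value divided by earnings before interest, tax,
    depreciation and amortisation."""
    return enterprise_value / ebitda

# A hypothetical software target with an enterprise value of $175m
# and trailing EBITDA of $10m changes hands at the median multiple:
print(ev_ebitda_multiple(175_000_000, 10_000_000))  # 17.5, i.e. 17.5x
```

In other words, at the median multiple a buyer is paying 17.5 times a target's trailing earnings before interest, tax, depreciation and amortisation.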
And this boom in demand shows no signs of slowing, it says. Miro Parizek, founder and principal partner, Hampleton Partners, says: “In spite of any economic slowdown and public market volatility, companies are sprinting towards SaaS and software targets to cover all areas of digitalisation and digital transformation. Financial buyers, with access to cheap debt, are eager to capitalise on this drive for digital transformation.”
“Although we expect the sector’s record-breaking metrics to ease away from their current peak, the sector is set to remain stable and strong over the long-term. The need to improve business insights and resource allocation will remain imperative for every business and be spurred by further advances in machine learning and artificial intelligence for SaaS and cloud-based software.”
All of which is good news for managed service providers, although it does mean more technology-based changes to consider at a strategic level. The fear of being left behind, which has driven change among customers in recent years, is now finding its way to their suppliers and the MSPs, who find themselves in a race to keep up with advances.
And it does not mean that the MSPs themselves can rely on M&A action to drive their businesses forward or give them access to the limited resources they need. As a veteran of nine M&A deals, MSP IT Lab CEO Peter Sweetbaum told the London Managed Services Summit on September 18: “M&A is not a proxy for a poor sales and marketing strategy; if you cannot grow organically and don’t have that sales and marketing strategy fueling your growth, you need to rethink. M&A is additive, not a solution. We’ve seen buy-and-builds in this space that have been crushed by the weight of debt – there was no vision and they paid the price.”
And don’t underestimate the costs of growth, he added: “What we are seeing and what we all need to realise, is that vendors are changing the model. We [at IT Lab] have to keep up with Azure, Dynamics and Office365. Our clients do not have the skills and capacity to keep up with the likes of Microsoft and its monthly, weekly, sometimes daily, release cycles. At IT Lab we have to keep up with that - it is a challenge for us and we have to invest to keep up.”
Not every MSP can do that, he concluded. The rapid change in the enterprise software market and the application of large sums to its valuations may create even more pressure on MSP strategies.
Peter Sweetbaum will be presenting again at the Managed Services Summit North, in Manchester on October 30, as will Jonathan Simnett from Hampleton Partners.
Worldwide shipments of devices — PCs, tablets and mobile phones — will decline 3.7% in 2019, according to the latest forecast from Gartner, Inc.
Gartner estimates there are more than 5 billion mobile phones used around the world. After years of growth, the worldwide smartphone market has reached a tipping point. Sales of smartphones will decline by 3.2% in 2019, which would be the worst decline the category has seen (see Table 1).
“This is due to consumers holding onto their phones longer, given the limited attraction of new technology,” said Ranjit Atwal, senior research director at Gartner.
The lifetimes of premium phones — for example, Android and iOS phones — continue to extend through 2019. Their quality and technology features have improved significantly and have reached a level today where users see high value in their device beyond a two-year time frame.
Consumers have reached a threshold for new technology and applications: “Unless the devices provide significant new utility, efficiency or experiences, users do not necessarily want to upgrade their phones,” said Mr. Atwal.
Table 1
Worldwide Device Shipments by Device Type, 2018-2021 (Millions of Units)
Device Type | 2018 | 2019 | 2020 | 2021 |
Traditional PCs (Desk-Based and Notebook) | 195.3 | 188.4 | 177.9 | 169.2 |
Ultramobiles (Premium) | 64.4 | 67.3 | 71.8 | 76.4 |
Total PC Market | 259.7 | 255.7 | 249.7 | 245.6 |
Ultramobiles (Basic and Utility) | 149.6 | 140.9 | 137.3 | 135.7 |
Computing Device Market | 409.3 | 396.6 | 387.0 | 381.3 |
Mobile Phones | 1,813.4 | 1,743.1 | 1,768.8 | 1,775.5 |
Total Device Market | 2,222.7 | 2,139.7 | 2,155.8 | 2,156.8 |
Source: Gartner (September 2019)
5G-Capable Phones Continue to Be on the Rise
The share of 5G-capable phones will increase from 10% in 2020 to 56% by 2023.
“The major players in the mobile phone market will look for 5G connectivity technology to boost replacements of existing 4G phones,” said Mr. Atwal. “Still, less than half of communications service providers (CSPs) globally will have launched a commercial 5G network in the next five years.
“More than a dozen service providers have launched commercial 5G services in a handful of markets so far,” said Mr. Atwal. “To ensure smartphone sales pick up again, mobile providers are starting to emphasize 5G performance features, like faster speeds, improved network availability and enhanced security. As soon as providers better align their early performance claims for 5G with concrete plans, we expect to see 5G phones account for more than half of phone sales in 2023. As a result of the impact of 5G, the smartphone market is expected to return to growth at 2.9% in 2020.”
5G will impact more than phones. The recent Gartner IoT forecast showed that the 5G endpoint installed base will grow 14-fold between 2020 and 2023, from 3.5 million units to 48.6 million units. By 2028, the installed base will reach 324.1 million units, although 5G will make up only 2.1% of the overall Internet of Things (IoT) endpoints.
“The inclusion of 5G technology may even be incorporated into premium ultramobile devices in 2020 to make them more marketable to customers,” said Mr. Atwal.
PC Device Trends in 2019
While worldwide PC shipments totaled 63 million units in the second quarter of 2019, growing 1.5%, unresolved external economic issues continue to cast uncertainty over PC demand this year. PC shipments are estimated to total 256 million units in 2019, a 1.5% decline from 2018.
The consumer PC market will decline by 9.8% in 2019, reducing its share of the total market to less than 40%. The collective increase in consumer PC lifetimes will result in 10 million fewer device replacements through 2023. With the Windows 10 migration peaking, business PCs will decline by 3.9% in 2020 after three years of growth.
“There is no doubt the PC landscape is changing,” said Mr. Atwal. “The consumer PC market requires high-value products that can meet specific consumer tasks, such as gaming. Likewise, PC vendors are having to cope with uncertainty from potential tariffs and Brexit disruptions. Ultimately, they need to change their business models to one based on annual service income, rather than the peaks and troughs of capital spending.”
The blockchain effect – five to ten years away?
The 2019 Gartner, Inc. Hype Cycle for Blockchain Business shows that the business impact of blockchain will be transformational across most industries within five to 10 years.
“Even though they are still uncertain of the impact blockchain will have on their businesses, 60% of CIOs in the Gartner 2019 CIO Agenda Survey said that they expected some level of adoption of blockchain technologies in the next three years,” said David Furlonger, distinguished research vice president at Gartner. “However, the existing digital infrastructure of organizations and the lack of clear blockchain governance are limiting CIOs from getting full value with blockchain.”
The Hype Cycle provides an overview of how blockchain capabilities are evolving from a business perspective, and how mature they are across different industries (see Figure 1).
Figure 1. Gartner’s Hype Cycle for Blockchain Business, 2019
Source: Gartner (September 2019)
Key Industries
Banking and investment services industries continue to experience significant levels of interest from innovators seeking to improve decades-old operations and processes. However, only 7.6% of respondents to the CIO Survey suggested that blockchain is a game-changing technology. That said, nearly 18% of banking and investment services CIOs said they have adopted or will adopt some form of blockchain technology within the next 12 months, and nearly another 15% within two years.
“We see blockchain in several key areas in banking and investment services, primarily focused on permissioned ledgers,” said Mr. Furlonger. “We also expect continued developments in the creation and acceptance of digital tokens. However, considerable work needs to be completed in nontechnology-related activities such as standards, regulatory frameworks and organization structures for blockchain capabilities to reach the Plateau of Productivity, the point at which mainstream adoption takes off, in this industry.”
Blockchain in gaming. In the fast-growing esports industry, blockchain natives are launching solutions that allow users to create their own tokens to support the design of competitions, as well as to enable the trading of virtual goods. These tokens give gamers more control over their in-game items, making them more portable across gaming platforms.
“High user volumes and rapid innovation make the gaming sector a testing ground for innovative application of blockchain. It is the perfect place to monitor how users push the adaptability of the most critical components of blockchain: decentralization and tokenization,” said Christophe Uzureau, research vice president at Gartner. “Gaming startups provide appealing alternatives to the ecosystem approaches of Amazon, Google or Apple, and serve as a model for companies in other industries to develop digital strategies.”
In retail, blockchain is being considered for “track and trace” services, counterfeit prevention, inventory management and auditing, any of which could be used to improve product quality or food safety, for example. Whilst these examples have value, the real impact of blockchain for the retail industry will depend on supporting new ideas, such as using blockchain to transform or augment loyalty programs. Once it has been combined with the Internet of Things (IoT) and artificial intelligence (AI), blockchain has the potential to change retail business models forever, impacting both data and monetary flows and avoiding centralization of market power.
As a result, Gartner believes that blockchain has the potential to transform business models across all industries — but these opportunities demand that enterprises adopt complete blockchain ecosystems. Without tokenization and decentralization, most industries will not see real business value.
Self-service isn’t serving the customer
Only 9% of customers report resolving their issues completely via self-service, according to Gartner, Inc. Many companies have added more customer service channels, but this creates complex resolution journeys, as customers switch frequently between channels.
“The idea behind providing customers with more channels in order to give them what they ‘want’ and in an attempt to offer more choice in their service experience sounds like a great idea, but in fact it has unintentionally made things worse for customers,” said Rick DeLisi, vice president in Gartner’s Customer Service & Support practice. “This approach of ‘more and better channels’ isn’t living up to the promise of reduced live call volume and is only leading to more complex and costly customer interactions to manage. That becomes a ‘lose-lose’ for customers and the companies that are trying to serve them.”
When customers can’t solve their issues via self-service, they resort to live customer service calls in order to solve their problem — thereby driving up operating costs as a result. Gartner’s 2019 Customer Service and Support Leader poll identified that live channels such as phone, live chat and email cost an average of $8.01 per contact, while self-service channels such as company-run websites and mobile apps cost about $0.10 per contact.
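To see why Gartner is pushing organisations toward self-service, it helps to run the arithmetic on those per-contact costs. The sketch below uses the $8.01 and $0.10 figures quoted above; the annual contact volume and the self-service resolution rates are hypothetical assumptions, not figures from the research.

```python
# Illustrative only: blended cost-to-serve at different self-service
# resolution rates, using Gartner's quoted per-contact costs.
# Contact volume and resolution rates are hypothetical assumptions.

LIVE_COST = 8.01          # average cost of a live contact (phone, chat, email)
SELF_SERVICE_COST = 0.10  # average cost of a self-service contact

def blended_cost(total_contacts: int, self_service_rate: float) -> float:
    """Total cost when a given share of contacts is resolved entirely
    in self-service and the remainder is handled by live channels."""
    live = total_contacts * (1 - self_service_rate)
    self_serve = total_contacts * self_service_rate
    return live * LIVE_COST + self_serve * SELF_SERVICE_COST

# For a hypothetical one million contacts a year:
print(blended_cost(1_000_000, 0.09))  # ~$7.3m at today's 9% resolution rate
print(blended_cost(1_000_000, 0.40))  # ~$4.85m if 40% were resolved in self-service
```

Even this rough model shows why shifting a meaningful share of resolutions into self-service matters more than simply adding channels.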
“As customer behavior in self-service continues to evolve, we are learning that most people have become used to the idea of using more than one channel (i.e., phone, chat, text/SMS, online videos, review sites in-store visits) during the resolution of one problem or issue with a given company,” added Mr. DeLisi. “The bad news is that it is definitely forcing a higher cost-to-serve for companies with no significant increase in the overall quality of the customer experience.”
In a survey of 8,398 customers, the top five preferred channels for issue resolution were: phone (44%), chat (17%), email (15%), company website (12%) and search engine (4%). Gartner research shows that neither customer effort nor customer satisfaction differs statistically between channels, and customer loyalty is not affected by the use or availability of a preferred channel.
Yet, many service organizations continue to add more and more channels even though access to these channels does not produce the expected customer experience benefits. The addition of digital channels often results in varying levels of maturity and an inconsistent experience. To make matters worse, live call volume and associated costs for issue resolution aren’t decreasing.
Given the varying costs associated with each channel and customers’ willingness to use any channels available to them, service organizations must rethink their overall service strategy to move toward a more self-service dominant approach. This requires a thoughtful approach to channel offerings — one where channels can no longer be “bolted on” after the fact. Instead, customer service and support leaders must consider the following four imperatives to move to a more self-service dominant approach:
Global spending on artificial intelligence (AI) systems is expected to maintain its strong growth trajectory as businesses continue to invest in projects that utilize the capabilities of AI software and platforms. According to the recently updated International Data Corporation (IDC) Worldwide Artificial Intelligence Systems Spending Guide, spending on AI systems will reach $97.9 billion in 2023, more than two and a half times the $37.5 billion that will be spent in 2019. The compound annual growth rate (CAGR) for the 2018-2023 forecast period will be 28.4%.
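As a quick sanity check on those figures, the standard compound-growth formula can be applied to the numbers quoted above. The sketch below uses only the 2019 and 2023 values; the 2018 baseline is not quoted, so the implied 2018 figure is worked backwards from the stated 28.4% CAGR.

```python
# Sanity-check sketch for IDC's compound annual growth rate (CAGR) figures.
# Only the 2019 and 2023 values are quoted above; the 2018 baseline is not,
# so the implied 2018 spend is derived here from the stated 28.4% CAGR.

def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

spend_2019 = 37.5   # $ billion (quoted above)
spend_2023 = 97.9   # $ billion (quoted above)

# Growth from the quoted 2019 figure to the 2023 forecast (4 years):
print(f"2019-2023 CAGR: {cagr(spend_2019, spend_2023, 4):.1%}")   # ~27.1%

# The headline 28.4% CAGR is stated for 2018-2023 (5 years); the 2018
# baseline this implies is roughly:
implied_2018 = spend_2023 / (1 + 0.284) ** 5
print(f"Implied 2018 spend: ${implied_2018:.1f}B")                # ~$28B
```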
"The AI market continues to grow at a steady rate in 2019 and we expect this momentum to carry forward," said David Schubmehl, research director, Cognitive/Artificial Intelligence Systems at IDC. "The use of artificial intelligence and machine learning (ML) is occurring in a wide range of solutions and applications from ERP and manufacturing software to content management, collaboration, and user productivity. Artificial intelligence and machine learning are top of mind for most organizations today, and IDC expects that AI will be the disrupting influence changing entire industries over the next decade."
Spending on AI systems will be led by the retail and banking industries, each of which will invest more than $5 billion in 2019. Nearly half of the retail spending will go toward automated customer service agents and expert shopping advisors & product recommendation systems. The banking industry will focus its investments on automated threat intelligence and prevention systems and fraud analysis and investigation. Other industries that will make significant investments in AI systems throughout the forecast include discrete manufacturing, process manufacturing, healthcare, and professional services. The fastest spending growth will come from the media industry and federal/central governments with five-year CAGRs of 33.7% and 33.6% respectively.
"Artificial Intelligence (AI) has moved well beyond prototyping and into the phase of execution and implementation," said Marianne D'Aquila, research manager, IDC Customer Insights & Analysis. "Strategic decision makers across all industries are now grappling with the question of how to effectively proceed with their AI journey. Some have been more successful than others, as evidenced by banking, retail, manufacturing, healthcare, and professional services firms making up more than half of the AI spend. Despite the learning curve, IDC sees higher than average five-year annual compounded growth in government, media, telecommunications, and personal and consumer services."
Investments in AI systems continue to be driven by a wide range of use cases. The three largest use cases – automated customer service agents, automated threat intelligence and prevention systems, and sales process recommendation and automation – will deliver 25% of all spending in 2019. The next six use cases will provide an additional 35% of overall spending this year. The use cases that will see the fastest spending growth over the 2018-2023 forecast period are automated human resources (43.3% CAGR) and pharmaceutical research and development (36.7% CAGR). However, eight other use cases will have spending growth with five-year CAGRs greater than 30%.
The largest share of technology spending in 2019 will go toward services, primarily IT services, as firms seek outside expertise to design and implement their AI projects. Hardware spending will be somewhat larger than software spending in 2019 as firms build out their AI infrastructure, but purchases of AI software and AI software platforms will overtake hardware by the end of the forecast period with software spending seeing a 36.7% CAGR.
On a geographic basis, the United States will deliver more than 50% of all AI spending throughout the forecast, led by the retail and banking industries. Western Europe will be the second largest geographic region, led by banking and discrete manufacturing. China will be the third largest region for AI spending with retail, state/local government, and professional services vying for the top position. The strongest spending growth over the five-year forecast will be in Japan (45.3% CAGR) and China (44.9% CAGR).
Cloud IT infrastructure revenues decline
According to the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, vendor revenue from sales of IT infrastructure products (server, enterprise storage, and Ethernet switch) for cloud environments, including public and private cloud, declined 10.2% year over year in the second quarter of 2019 (2Q19), reaching $14.1 billion. IDC also lowered its forecast for total spending on cloud IT infrastructure in 2019 to $63.6 billion, down 4.9% from last quarter's forecast and changing from expected growth to a year-over-year decline of 2.1%.
Vendor revenue from hardware infrastructure sales to public cloud environments in 2Q19 was down 0.9% compared to the previous quarter (1Q19) and down 15.1% year over year to $9.4 billion. This segment of the market continues to be highly impacted by demand from a handful of hyperscale service providers, whose spending on IT infrastructure tends to have visible up and down swings. After a strong performance in 2018, IDC expects the public cloud IT infrastructure segment to cool down in 2019, with spend dropping to $42.0 billion, a 6.7% decrease from 2018. Although it will continue to account for most of the spending on cloud IT environments, its share will decrease from 69.4% in 2018 to 66.1% in 2019. In contrast, spending on private cloud IT infrastructure has shown more stable growth since IDC started tracking sales of IT infrastructure products in various deployment environments. In the second quarter of 2019, vendor revenues from private cloud environments increased 1.5% year over year, reaching $4.6 billion. IDC expects spending in this segment to grow 8.4% year over year in 2019.
Overall, the IT infrastructure industry is at a crossing point in terms of product sales to cloud versus traditional IT environments. In 3Q18, vendor revenues from cloud IT environments climbed over the 50% mark for the first time, but have fallen below this important tipping point since then. In 2Q19, cloud IT environments accounted for 48.4% of vendor revenues. For the full year 2019, spending on cloud IT infrastructure will remain just below the 50% mark at 49.0%. Longer term, however, IDC expects that spending on cloud IT infrastructure will grow steadily and will sustainably exceed the level of spending on traditional IT infrastructure in 2020 and beyond.
Of the three technology segments in cloud IT environments, only Ethernet switches are forecast to grow in 2019, while compute and storage platforms are expected to decline. Spending on Ethernet switches is expected to grow 13.1%, while spending on storage platforms will decline 6.8% and compute platforms 2.4%. Compute will remain the largest category of spending on cloud IT infrastructure at $33.8 billion.
Sales of IT infrastructure products into traditional (non-cloud) IT environments declined 6.6% from a year ago in 2Q19. For the full year 2019, worldwide spending on traditional non-cloud IT infrastructure is expected to decline by 5.8%, as the technology refresh cycle that drove market growth in 2018 winds down this year. By 2023, IDC expects that traditional non-cloud IT infrastructure will represent only 41.8% of total worldwide IT infrastructure spending (down from 52.0% in 2018). This share loss and the growing share of cloud environments in overall spending on IT infrastructure is common across all regions.
Most regions grew their cloud IT infrastructure revenues in 2Q19. Middle East & Africa was the fastest growing at 29.3% year over year, followed by Canada at 15.6% year-over-year growth. Other growing regions in 2Q19 included Central & Eastern Europe (6.5%), Japan (5.9%), and Western Europe (3.1%). Cloud IT infrastructure revenues were down year over year in Asia/Pacific (excluding Japan) (APeJ) by 7.7%, Latin America by 14.2%, China by 6.9%, and the USA by 16.3%.
Top Companies, Worldwide Cloud IT Infrastructure Vendor Revenue, Market Share, and Year-Over-Year Growth, Q2 2019 (Revenues are in Millions)
Company | 2Q19 Revenue (US$M) | 2Q19 Market Share | 2Q18 Revenue (US$M) | 2Q18 Market Share | 2Q19/2Q18 Revenue Growth |
1. Dell Technologies | $2,355 | 16.7% | $2,565 | 16.4% | -8.2% |
2. HPE/New H3C Group** | $1,749 | 12.4% | $1,748 | 11.2% | 0.1% |
3. Cisco | $1,101 | 7.8% | $1,029 | 6.6% | 7.0% |
4. Inspur/Inspur Power Systems* *** | $820 | 5.8% | $690 | 4.4% | 18.9% |
4. Lenovo* | $670 | 4.8% | $813 | 5.2% | -17.5% |
ODM Direct | $4,059 | 28.9% | $5,349 | 34.1% | -24.1% |
Others | $3,307 | 23.5% | $3,473 | 22.2% | -4.8% |
Total | $14,061 | 100.0% | $15,655 | 100.0% | -10.2% |
Source: IDC Worldwide Quarterly Cloud IT Infrastructure Tracker, Q2 2019 |
Notes:
* IDC declares a statistical tie in the worldwide cloud IT infrastructure market when there is a difference of one percent or less in the vendor revenue shares among two or more vendors.
** Due to the existing joint venture between HPE and the New H3C Group, IDC reports external market share on a global level for HPE as "HPE/New H3C Group" starting from Q2 2016 and going forward.
*** Due to the existing joint venture between IBM and Inspur, IDC will be reporting external market share on a global level for Inspur and Inspur Power Systems as "Inspur/Inspur Power Systems" starting from 3Q 2018.
Long-term, IDC expects spending on cloud IT infrastructure to grow at a five-year compound annual growth rate (CAGR) of 6.9%, reaching $90.9 billion in 2023 and accounting for 58.2% of total IT infrastructure spend. Public cloud datacenters will account for 66.0% of this amount, growing at a 5.9% CAGR. Spending on private cloud infrastructure will grow at a CAGR of 9.2%.
Converged Systems market grows 10.9%
According to the International Data Corporation (IDC) Worldwide Quarterly Converged Systems Tracker, worldwide converged systems market revenue increased 10.9% year over year to $3.9 billion during the second quarter of 2019 (2Q19).
"The value proposition of converged infrastructure solutions has evolved to align with the needs of a hybrid cloud world," said Eric Sheppard, research vice president, Infrastructure Platforms and Technologies at IDC. "Modern converged solutions are driving growth because they allow organizations to leverage standardized, software-defined, and highly automated datacenter infrastructure that is increasingly the on-premises backbone of a seamless multi-cloud world."
Converged Systems Segments
IDC's converged systems market view offers three segments: certified reference systems & integrated infrastructure, integrated platforms, and hyperconverged systems. The certified reference systems & integrated infrastructure market generated nearly $1.5 billion in revenue during the second quarter, which represents 10.5% year-over-year growth and 37.5% of total converged systems revenue. Integrated platforms sales declined 14.4% year over year during the second quarter of 2019, generating $626 million worth of sales. This amounted to 16.0% of the total converged systems market revenue. Revenue from hyperconverged systems sales grew 23.7% year over year during 2Q19, generating $1.8 billion worth of sales. This amounted to 46.6% of the total converged systems market.
IDC offers two ways to rank technology suppliers within the hyperconverged systems market: by the brand of the hyperconverged solution or by the owner of the software providing the core hyperconverged capabilities. Rankings based on the branded view of the market can be found in the first table below, and rankings based on the owner of the hyperconverged software in the second table. Both tables include the same software and hardware, summing to the same market size.
As it relates to the branded view of the hyperconverged systems market, Dell Technologies was the largest supplier with $533.2 million in revenue and a 29.2% share. Nutanix generated $258.8 million in branded revenue, which represented 14.2% of the total HCI market during the quarter. Cisco was the third largest branded HCI vendor with $114.0 million in revenue representing a 6.2% market share.
Top 3 Companies, Worldwide Hyperconverged Systems as Branded, Q2 2019 ($M)
Company | 2Q19 | 2Q19 Market Share | 2Q18 | 2Q18 Market Share | 2Q19/2Q18 Revenue Growth |
1. Dell Technologiesa | $533.2 | 29.2% | $418.7 | 28.4% | 27.4% |
2. Nutanix | $258.8 | 14.2% | $275.3 | 18.7% | -6.0% |
3. Cisco | $114.0 | 6.2% | $77.7 | 5.3% | 46.8% |
Rest of Market | $919.1 | 50.4% | $703.8 | 47.7% | 30.6% |
Total | $1,825.2 | 100.0% | $1,475.4 | 100.0% | 23.7% |
Source: IDC Worldwide Quarterly Converged Systems Tracker, September 24, 2019 |
Notes:
a – Dell Technologies represents the combined revenues for Dell and EMC sales for all quarters shown.
From the software ownership view of the market, systems running VMware hyperconverged software represented $694.1 million in total 2Q19 vendor revenue, or 38.0% of the total market. Systems running Nutanix hyperconverged software represented $522.0 million in second quarter vendor revenue or 28.6% of the total market. Both amounts represent the value of all HCI hardware, HCI software and system infrastructure software, regardless of how it was branded.
Top 3 Companies, Worldwide Hyperconverged Systems Based on Owner of HCI Software, Q2 2019 ($M)
Company | 2Q19 | 2Q19 Market Share | 2Q18 | 2Q18 Market Share | 2Q19/2Q18 Revenue Growth |
1. VMware | $694.1 | 38.0% | $497.7 | 33.7% | 39.5% |
2. Nutanix | $522.0 | 28.6% | $497.7 | 33.7% | 4.9% |
3. Cisco | $114.0 | 6.2% | $77.7 | 5.3% | 46.8% |
Rest of Market | $495.1 | 27.1% | $402.4 | 27.3% | 23.0% |
Total | $1,825.2 | 100.0% | $1,475.4 | 100.0% | 23.7% |
Source: IDC Worldwide Quarterly Converged Systems Tracker, September 24, 2019 |
Taxonomy Notes
Beginning with the release of 2019 results, IDC has expanded its definition of the hyperconverged systems market segment to include a new breed of systems called Disaggregated HCI (hyperconverged infrastructure). Such systems are designed from the ground up to only support distinct/separate compute and storage nodes. An example of such a system in the market today is NetApp's HCI solution. They offer non-linear scaling of the hyperconverged cluster to make it easier to scale compute and storage resources independent of each other while offering crucial functions such as quality of service. For these disaggregated HCI solutions, the storage nodes may not have a hypervisor at all, since they don't have to run VMs or applications.
IDC defines converged systems as pre-integrated, vendor-certified systems containing server hardware, disk storage systems, networking equipment, and basic element/systems management software. Systems not sold with all four of these components are not counted within this tracker. Specific to management software, IDC includes embedded or integrated management and control software optimized for the auto discovery, provisioning and pooling of physical and virtual compute, storage and networking resources shipped as part of the core, standard integrated system. Numbers in this press release may not sum due to rounding.
Certified reference systems & integrated infrastructure are pre-integrated, vendor-certified systems containing server hardware, disk storage systems, networking equipment, and basic element/systems management software. Integrated platforms are integrated systems that are sold with additional pre-integrated packaged software and customized system engineering optimized to enable such functions as application development software, databases, testing, and integration tools. Hyperconverged systems collapse core storage and compute functionality into a single, highly virtualized solution. A key characteristic of hyperconverged systems that differentiates these solutions from other integrated systems is their scale-out architecture and their ability to provide all compute and storage functions through the same x86 server-based resources. Market values for all three segments include hardware and software but exclude services and support.
IDC considers a unit to be a full system including server, storage, and networking. Individual server, storage, or networking "nodes" are not counted as units. Hyperconverged system units are counted at the appliance (aka chassis) level. Many hyperconverged appliances are deployed on multinode servers. IDC will count each appliance, not each node, as a single system.
Server revenue declines 11.6%
According to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker, vendor revenue in the worldwide server market declined 11.6% year over year to just over $20.0 billion during the second quarter of 2019 (2Q19). Worldwide server shipments declined 9.3% year over year to just under 2.7 million units in 2Q19.
After a torrid stretch of prolonged market growth that drove the server market to historic heights, the global server market declined for the first time since the fourth quarter of 2016. All classes of servers were impacted, with volume server revenue down 11.7% to $16.3 billion, while midrange server revenue declined 4.6% to $2.4 billion and high-end systems contracted by 20.8% to $1.3 billion.
"The second quarter saw the server market's first contraction in nine quarters, albeit against a very difficult compare from one year ago when the server market realized unprecedented growth," said Sebastian Lagana, research manager, Infrastructure Platforms and Technologies. "Irrespective of the difficult compare, factors impacting the market include a slowdown in purchasing from cloud providers and hyperscale customers, an off-cycle in the cyclical non-x86 market, as well as a slowdown from enterprises due to existing capacity slack and macroeconomic uncertainty."
Overall Server Market Standings, by Company
The number one position in the worldwide server market during 2Q19 was shared* by Dell Technologies and the combined HPE/New H3C Group with revenue shares of 19.0% and 18.0% respectively. Dell Technologies declined 13.0% year over year, while HPE/New H3C Group was down 3.6% year over year. The third position went to Inspur/Inspur Power Systems, which increased its revenue by 32.3% year over year. Lenovo and IBM tied* for the fourth position with revenue shares of 6.1% and 5.9%, respectively. Lenovo saw revenue decline by 21.8% year over year while IBM saw its revenue contract 27.4% year over year. The ODM Direct group of vendors accounted for 21.1% of total revenue and declined 22.9% year over year to $4.23 billion. Dell Technologies led the worldwide server market in terms of unit shipments, accounting for 17.8% of all units shipped during the quarter.
Top 5 Companies, Worldwide Server Vendor Revenue, Market Share, and Year-Over-Year Growth, Second Quarter of 2019 (Revenues are in US$ millions)
Company | 2Q19 Revenue | 2Q19 Market Share | 2Q18 Revenue | 2Q18 Market Share | 2Q19/2Q18 Revenue Growth |
T1. Dell Technologies* | $3,809.0 | 19.0% | $4,375.8 | 19.3% | -13.0% |
T1. HPE/New H3C Groupa* | $3,607.4 | 18.0% | $3,743.5 | 16.5% | -3.6% |
3. Inspur/Inspur Power Systemsb | $1,438.8 | 7.2% | $1,087.9 | 4.8% | 32.3% |
T4. Lenovo* | $1,212.0 | 6.1% | $1,549.1 | 6.8% | -21.8% |
T4. IBM* | $1,188.6 | 5.9% | $1,637.5 | 7.2% | -27.4% |
ODM Direct | $4,232.7 | 21.1% | $5,488.2 | 24.2% | -22.9% |
Rest of Market | $4,536.1 | 22.7% | $4,764.5 | 21.0% | -4.8% |
Total | $20,024.6 | 100% | $22,646.3 | 100% | -11.6% |
Source: IDC Worldwide Quarterly Server Tracker, September 4, 2019 |
Notes:
* IDC declares a statistical tie in the worldwide server market when there is a difference of one percent or less in the share of revenues or shipments among two or more vendors.
a Due to the existing joint venture between HPE and the New H3C Group, IDC will be reporting external market share on a global level for HPE and New H3C Group as "HPE/New H3C Group" starting from 2Q 2016.
b Due to the existing joint venture between IBM and Inspur, IDC will be reporting external market share on a global level for Inspur and Inspur Power Systems as "Inspur/Inspur Power Systems" starting from 3Q 2018.
Top 5 Companies, Worldwide Server Unit Shipments, Market Share, and Growth, Second Quarter of 2019 (Shipments are in units)
Company | 2Q19 Unit Shipments | 2Q19 Market Share | 2Q18 Unit Shipments | 2Q18 Market Share | 2Q19/2Q18 Unit Growth |
1. Dell Technologies | 479,942 | 17.8% | 576,956 | 19.4% | -16.8% |
2. HPE/New H3C Groupa | 438,060 | 16.3% | 464,962 | 15.7% | -5.8% |
3. Inspur/Inspur Power Systemsb | 232,885 | 8.7% | 203,225 | 6.8% | 14.6% |
4. Lenovo | 181,166 | 6.7% | 224,138 | 7.6% | -19.2% |
T5. Super Micro* | 139,289 | 5.2% | 175,092 | 5.9% | -20.4% |
T5. Huawei* | 116,994 | 4.3% | 187,356 | 6.3% | -37.6% |
ODM Direct | 678,940 | 25.2% | 732,643 | 24.7% | -7.3% |
Rest of Market | 424,675 | 15.8% | 403,124 | 13.6% | 5.3% |
Total | 2,691,952 | 100% | 2,967,496 | 100% | -9.3% |
Source: IDC Worldwide Quarterly Server Tracker, September 4, 2019 |
Table Notes: Please refer to the Notes following the first table in this press release.
Top Server Market Findings
On a geographic basis, Canada was the fastest growing region in 2Q19 with 13.4% year-over-year revenue growth. Europe, the Middle East, and Africa (EMEA) grew 2.0% on aggregate while Japan declined 6.7% and Asia/Pacific (excluding Japan) contracted 8.1%. The United States was down 19.1% and Latin America contracted 34.2%. China saw its 2Q19 vendor revenues decline 8.7% year over year.
Revenue generated from x86 servers decreased 10.6% in 2Q19 to $18.4 billion. Non-x86 servers contracted 21.5% year over year to $1.6 billion.
But who is really in charge of what tech is used?
asks Tony Lock, Freeform Dynamics Ltd.
Some writers love a battle – or at least, the idea of one. So they exaggerate perfectly normal differences of opinion, spinning debates into fierce zero-sum conflicts where there can be only one winner.
It is almost as true in IT as it is in politics. In recent years – perhaps even for the last decade – much has been written about how data centre managers and IT professionals in general are losing out, as business managers and departmental heads seize the authority to acquire systems and solutions. Some have even postulated that in the not too distant future, IT inside the business will dwindle in importance as ‘the business’ decides instead what IT to use and how it should be consumed.
Such an extreme is obviously unlikely to occur in any sizeable organisation, but the question remains, who is influencing IT decision making and to what extent are data centre professionals in control?
An online study by Freeform Dynamics of over 1,100 IT professionals involved in IT acquisitions (link: https://freeformdynamics.com/for-vendors-service-providers/content-priorities-in-the-it-decision-cycle/) throws some light on matters which many of us take for granted, but which could probably benefit from greater attention. The study highlights how broad the community involved in IT decision making is and how their thoughts are shaped over time. And the main finding is that the “war between business and IT” is fake news.
Influencers of IT acquisitions
There is a second common assumption that we ought to question, as well. This time it is one that is held dear by many large IT vendors and salespeople, and it’s the belief that “it’s the CIO we need to influence” (Figure 1).
Figure 1
Again, our survey finds that the truth is rather different. The results here highlight that except in small and very small organisations, the head of IT is likely to be heavily involved only in decisions of a very strategic nature. Indeed, in the largest of organisations they are more likely to not be involved directly in even the most important IT decisions. So who is helping shape such decisions?
Hybrid decision making
Any reader who works in the data centre knows that IT decision making can be complex, especially when it comes to selecting what systems, software, management and security tools to acquire. Even decisions about the very basics of a data centre can be influenced by non-IT stakeholders, particularly if the subject under discussion is the data centre itself – should we rebuild, should we move to a CoLo, should we migrate to “The Cloud”? Clearly these are often just as much about politics as they are about cost and technology. But what about more “routine” decisions (Figure 2)?
Figure 2
Figure 2 illustrates that decision making in IT really can be a team sport, but as might be expected, everything depends on context. As some commentators say, there are situations where technology-related investment decisions are controlled entirely by the line of business. Most of these are likely to be applications or services that the business unit can acquire easily to meet a particular need, and that (hopefully at least) don’t need to integrate with any data centre systems.
But, and it is a big “but”, the same can still be said about IT controlling and making decisions unilaterally. In some areas of data centre technology acquisition, it is obvious that IT is likely to be in the driving seat. After all, there are few business managers who want to understand the intricacies of storage, compute and networking. All they want to know is that the infrastructure provides them with the service levels they require.
However, many investment decisions are made, and indeed have long been made, by business and IT staff working together. In some cases, the business may identify a need and, perhaps, even a solution and ask IT to make it happen. This is entirely logical.
Perhaps the most interesting facet of this chart concerns the IT Driven result. Here we can see that there are many occasions, perhaps more than ever before, where IT itself is identifying opportunities where technology investment could help the business and then selling the idea to business stakeholders to get their buy-in. This is something many IT teams have struggled with until recently. Equally encouragingly, it is clear that IT and business stakeholders can work, and are working, closely together to define needs and find solutions.
But don’t think that everything always runs smoothly, especially if an IT investment project runs into trouble. Another study carried out by Freeform Dynamics (link: https://freeformdynamics.com/for-vendors-service-providers/politics-practicalities-procurement/) shows that people issues can cause problems (Figure 3).
Figure 3
The results here do not say that these issues happen all the time, or even that they happen often. But when they do occur, the consequences are serious. This shows that IT and business people still need help working together, especially to ensure that both sides fully understand each other. Interpersonal relationship training can be a fruitful investment in and of itself.
It’s not always straightforward
While the idea of IT and business teams working together is great in theory, some practical issues keep cropping up that can derail investment decisions and stop such collaboration running smoothly. Another result from the report mentioned above illustrates some of the ways in which investment decisions can still go off track (Figure 4).
Figure 4
The two biggest issues remain cost-related: senior business managers may want to overrule some technology acquisitions in order to drive down the associated costs, or the dreaded “Procurement” team can have a similar impact, usually because the KPIs on which they are judged have nothing to do with the business value delivered by an acquired solution, IT or any other, but simply with how much they can reduce the project’s cost base.
The other challenge can arise when the organisation has a preferred supplier system in place and the solution being sought is from a vendor not on the existing list. All three of these are process and internal politics issues which, as we all know, are not the simplest to fix quickly.
The IT press and some vendor marketing departments have expended considerable energy over the years trying to convince us that IT and the business are at war with each other. This is plainly untrue, and always has been. IT and business managers have always cooperated to get new projects into use, and things are getting better all the time as both sides accrue more experience and as personal relationships deepen.
Business needs IT solutions, and it needs them to be available more quickly than ever before and to change rapidly as the market environment shifts. This demands that IT and business work together effectively and efficiently, but as Figure 3 above shows, some human skills development on both sides would not go amiss. In times when misleading headlines and publicity are common, don’t let anyone tell you there is a war going on between IT and the business when it comes to tech procurement. Sure, there may be the odd skirmish, but by and large everyone wants the same thing: for the business to win.
Tim Bandos, VP of Cybersecurity at Digital Guardian, looks at the impact of GDPR on the number of DPOs worldwide, whether the available figures paint a true picture and, importantly, what the overall impact on data security and compliance has been as a result.
The General Data Protection Regulation (GDPR) recently marked one year since its official implementation. In the build-up, a key aspect of the regulation which came under a lot of scrutiny was the requirement for public bodies, or organisations that carry out certain types of data processing activities, to appoint a Data Protection Officer (DPO). This was to ensure these organisations had a designated employee in place to lead the monitoring of internal compliance and inform/advise on data protection obligations, amongst other things.
Prior to the implementation of GDPR, Germany and the Philippines were the only countries in the world with mandatory DPO laws, making the role pretty niche up until that point. Back in 2017, the International Association of Privacy Professionals (IAPP) estimated that the introduction of GDPR would create the need for around 75,000 new DPOs worldwide, across private- and public-sector organisations, with roughly 28,000 needed in Europe.
However, the latest figures collected by the IAPP suggest there are now a staggering half a million DPOs registered in both private and public-facing organisations across the European Economic Area (EEA), more than six times the original estimates.
These figures are based on information obtained by the IAPP from data protection authorities in Austria, Bulgaria, Denmark, Finland, France, Germany, Ireland, Italy, the Netherlands, Spain, Sweden and the United Kingdom, a large portion of the EEA. There were 376,306 DPO registrations across these countries, and the IAPP extrapolated from that figure to estimate the number of DPOs in the remaining EEA members.
Do the figures tell the complete story?
While the 500,000 figure is certainly encouraging, it's difficult to pinpoint the exact number of DPOs installed in the EU, let alone worldwide. For a start, some organisations use external DPOs, meaning the same individual can serve multiple organisations simultaneously, skewing the figures. A great example of this is pointed out by Caitlin Fennessy, a Certified Information Privacy Professional with the IAPP. She notes that in France there are almost 52,000 organisations with a registered DPO, but the actual number of individual DPOs in the country is closer to 18,000. As such, these figures need to be taken with a fairly hefty pinch of salt.
Is all this having an impact on data protection?
Another big question is whether the influx of new DPOs (and the GDPR itself) is having any impact on data security and protection. Well, a survey back in February 2019 found that there had been over 59,000 data breaches reported to data protection authorities across the EEA since the regulation went into effect. The Netherlands, Germany and the UK topped the table with approximately 15,400, 12,600, and 10,600 reported breaches respectively, all of which were significant increases on previous years and evidence that the GDPR is proving a success from a breach reporting perspective.
However, over the same period, only 91 actual fines were recorded as a result of these breaches. The highest of those fines was €50 million, against Google on 21 January 2019, which alone accounts for almost 90% of the total monetary value of fines handed out (€55,955,871). Google’s fine was a French decision in relation to the processing of personal data for advertising purposes without valid authorisation, rather than a personal data breach. So while the GDPR and the influx of DPOs appear to be having a significant positive impact on the number of breaches being reported, it’s doing less well when it comes to holding those responsible accountable for their actions.
Since GDPR came into force, the number of registered DPOs has risen far more quickly than anyone could have imagined, even when figures are adjusted for key discrepancies. This shows that the majority of organisations out there are taking GDPR seriously, as evidenced by the significant rise in breaches being reported since the regulation came into force. However, it appears that there’s still some way to go before many of those companies unable (or unwilling) to adequately protect their customers’ data are punished appropriately.
‘Data is the new oil’ is old hat. Data is so plentiful these days that it’s more like the new air. Our society runs on digital information in ways that would have been utterly unimaginable just decades ago. Data runs through everything we do, and the amount in circulation is only going to increase.
By Spencer Young, RVP, Imperva.
To put the sheer amount of data that’s whizzing around in perspective, we currently generate about ten zettabytes of information globally every year. That’s ten trillion gigabytes, which equates to about two trillion DVDs or 62.5 billion iPod classics. If you stacked those iPods on top of each other, the tower would be 812,500 kilometres high – more than double the distance to the moon. That’s a lot of data.
Give it six years, though, and we’ll be producing eighteen times that amount – 180 zettabytes for every trip round the sun. Now you’ve got 1.2 trillion iPods. The stack is 14.6 million kilometres high. At that rate, it’d only take three and a half years to fill enough iPods to build a bridge to Mars – and Mars is a really long way away.
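For anyone who wants to check the back-of-envelope maths, the short sketch below reproduces the figures above. It assumes a 160 GB iPod classic roughly 13 mm thick and one zettabyte equal to a trillion gigabytes; those device assumptions are mine, chosen to match the article's numbers.

```python
# Back-of-envelope reproduction of the figures above (a sketch, not a forecast).
# Assumptions: a 160 GB iPod classic roughly 13 mm thick; 1 zettabyte = 1e12 GB.

ZB_IN_GB = 1e12           # one zettabyte expressed in gigabytes
IPOD_CAPACITY_GB = 160    # assumed iPod classic capacity
IPOD_THICKNESS_M = 0.013  # assumed device thickness in metres

def ipod_stack_km(zettabytes):
    """Return (number of iPods needed, stack height in km) for a data volume."""
    ipods = zettabytes * ZB_IN_GB / IPOD_CAPACITY_GB
    return ipods, ipods * IPOD_THICKNESS_M / 1000

for zb in (10, 180):
    ipods, height_km = ipod_stack_km(zb)
    print(f"{zb} ZB -> {ipods:.3e} iPods, stack ~{height_km:,.0f} km")
# 10 ZB  -> 6.250e+10 iPods, stack ~812,500 km
# 180 ZB -> 1.125e+12 iPods, stack ~14,625,000 km
```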
Using data effectively
The relatively sudden availability of all this information begs the question: what on Earth (or Mars, for that matter) are we doing with it all? For businesses, the rise in available data should be a strategic benefit – the more you know, the more intelligent your decision-making should be. That’s the theory, but is it the reality?
We pay a lot of money for data collection, hosting and maintenance, so how do we go about creating a data-driven organisation that reaps the rewards of data analysis and insights? By going through the process of discovering where your most valuable data is stored, you can actually use compliance mandates as an opportunity to put data at the centre of your business.
Compliance programs, such as preparing for GDPR, provide the right level of impetus for your organisation to investigate and locate sensitive data so that you can efficiently protect these assets. By getting a better handle on where your sensitive data sets reside, and the best practice processes for overall security, you’re creating a data governance program that provides greater control over your data assets.
Practical steps
In any data governance and compliance prep program, the first step will involve an assessment of your current data environment. This should include a few different steps, starting with a discovery process and comprehensive inventory of all of your known and unknown data repositories. You’ll then need to look at how data flows within your organisation, including all of your touch points and sub-processors, before mapping out your current security and compliance technology to see where any gaps might be hiding.
For most organisations, this level of data discovery and inventory of sensitive data isn’t a process they can realistically perform manually. Many will have a combination of large and disparate database environments, so the first technology investment to look into should be in data discovery and classification.
The other important point is that data is increasingly dynamic by nature. That’s why your discovery and classification process should be occurring on a regular basis, as the nature of your data will continue to change.
But going through this process then provides you with actionable results for ongoing audits and compliance reporting. If you can leverage automation within these processes, you can also transition from basic discovery and classification to policy application, activity monitoring and user rights management – the next step in developing a robust, layered security posture.
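As a rough illustration of what automated discovery and classification can look like in practice, the sketch below samples values from a discovered data store and flags columns that match sensitive-data patterns. The patterns, threshold and sample data are illustrative assumptions only; commercial classification engines use far richer detection than simple regular expressions.

```python
import re

# Illustrative sensitive-data patterns; a real classification engine would use
# far richer detection (dictionaries, checksums, ML models). This is a sketch.
PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_column(sample_values, threshold=0.5):
    """Label a column as sensitive if enough sampled values match a pattern."""
    labels = []
    for name, pattern in PATTERNS.items():
        hits = sum(bool(pattern.search(str(v))) for v in sample_values)
        if sample_values and hits / len(sample_values) >= threshold:
            labels.append(name)
    return labels

# Hypothetical sampled rows from a discovered data store:
sample = ["alice@example.com", "bob@example.com", "n/a"]
print(classify_column(sample))   # ['email'] -> flag this column in the inventory
```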
Business benefits
When implemented effectively, layered security allows businesses to significantly reduce the amount of private data they manage, improving overall business efficiency. Layered security also drastically reduces the risk of a data breach, while facilitating a rapid incident response and reporting process to ensure compliance with the breach notification requirements.
To ensure they get real value out of their compliance projects, companies need to ensure that their data privacy solutions meet key components of these regulations out of the box and provide the most effective automated data protection available. The best security systems help companies to understand where databases are located and what type of information they hold by automatically scanning enterprise networks – making the process of regulatory compliance smoother and giving companies a deeper understanding of what’s going on inside their systems.
With so much data now in play, businesses must prioritise the creation and maintenance of a detailed, real-time inventory of data scattered across their organisations and enable automated, scheduled scans and holistic identification of sensitive data. Once these processes are in place, the company will not only benefit from a lighter compliance admin load, but also from a more joined-up, data-centric approach to running the business.
Like all digital services, cybersecurity has a key role to play in building an effective data-handling strategy for the new data age. Businesses must take action now to ensure their defence systems are supporting the overall effort to make the most of data – as securely as possible.
In any one enterprise organisation right now, there may be four or five separate asset management tracking systems or, more likely, several separate spreadsheets with different naming conventions and probably some overlaps and outdated information. Many data centers are still managing their assets on spreadsheets, which is, in our opinion, sub-optimal.
By Venessa Moffat, Head of Product Marketing, RiT Tech.
In an article earlier this year in Digitalisation World, Marcus Doran said: “…the average organization can expect an error rate of 15 percent or more when manually tracking IT assets.”[1]
This article shows you 10 reasons why great asset management can significantly and positively impact your bottom line in data center operations.
Have a look at the following questions and if you answer no to any of them, then keep reading….
- Do you have a common asset management system across your business?
- Are you sure you don’t have any ghost / comatose servers / assets in your inventory?
- Are you positive that you are tracking all assets – e.g. tracking blades as well as the chassis?
- Do you have flexible architectures allowing integration across your IT systems for asset tracking and management?
- Are you 100% sure that you know exactly what you have in your DC? Do you have an up-to-date CMDB?
To start, you need to know the scope of requirements for asset management within your business. Start from the top – your business objectives, and work down from there. Then read on to see the benefits that you could realise in a relatively short time frame.
1. Good processes and procedures
To ensure successful moves / adds / changes, good processes and procedures are required, based on solid asset management – knowing where the assets are located, who owns them, what maintenance there is against them and so on. Solid asset data shortens these internal change cycles immediately. Without good business processes, however, this could all be undone very quickly.
2. Efficient resource / IMAC management
If you know where everything is, then you don’t have to send resources out to run an audit. The entire change project takes fewer resources, and is lower cost and risk to complete.
3. Single source of the truth through the entire stack, and business
Capacity planning runs through the full stack, from the physical infrastructure to the application layer, so it makes sense to manage it all in one place. Integration with your company ERP / CMDB systems ensures one single database. No more pointless reconciliation projects.
4. Essential foundation for DCIM and SDDC
If a data centre area has high weekend bandwidth and compute utilization, then ensuring that the appropriate power is available is critical to keeping the service alive. So the very foundation of implementing DCIM or a Software Defined Data Centre is ensuring you have both IT Asset Management and DC Asset Management in the same system. You can’t make decisions or program automated actions without it.
5. ISO9001 (and other standards) compliance
If you have a successfully implemented Asset Management system, ticking the boxes for a compliance audit will be a doddle – at least from an asset and category perspective.
6. Purchase, lease, and warranty information
All of this information is stored and tracked within most asset management systems – the collated information ensures that you can get the best service / price possible from your vendors.
7. Comatose / Zombie servers
Comatose, or zombie, servers frequently occur when they are orphaned – e.g. their owner leaves the company. If you are tracking ownership, then you can reassign that ownership when someone leaves the organisation and utilize the assets appropriately.
8. Open-source APIs, easy access
With the Internet of Things (IoT) maturing and the digital mesh becoming more of a reality, it rarely makes sense nowadays to create standalone software that doesn’t talk to anything else. Make sure your environment has a flexible architecture, and the vendors you choose should have an open-source mindset when it comes to integrating with other systems. Some of the more forward-thinking providers are catching on to this and making their applications very easy to access.
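As a hypothetical illustration of the kind of integration an open API makes possible, the sketch below pulls asset records from an imagined asset-management endpoint and filters out those with no recorded owner, ready for reconciliation with a CMDB. The URL and field names are assumptions for illustration, not any particular vendor's API.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint and field names -- purely illustrative of the kind of
# open, integrable API argued for above; substitute your vendor's actual API.
ASSET_API = "https://dcim.example.com/api/v1/assets"

def fetch_orphaned_assets(api_token):
    """Return assets with no recorded owner, e.g. candidate zombie servers."""
    req = Request(ASSET_API, headers={"Authorization": f"Bearer {api_token}"})
    with urlopen(req) as resp:
        assets = json.load(resp)
    return [a for a in assets if not a.get("owner")]

# Example usage (assumes a valid token and a reachable endpoint):
# for asset in fetch_orphaned_assets(token):
#     print(asset["id"], asset.get("rack"), asset.get("last_seen"))
```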
9. Real-time reporting instead of constantly assessing or auditing
A transparent environment is the most efficient. Yes it takes a little bit more effort up front to keep the information up-to-date, but if your staff are strict with it, you will create real agility within your data center. Imagine being able to answer the queries from the business instantly!
10. Migrations / mergers
As with the above, the auditing part of the project is already complete, saving around 30% of the resource cost you would otherwise incur auditing before a migration.
My final word on this is that you must have your internal processes and procedures clear, in order to successfully implement great asset management and realise all the above benefits. If not, you may fall at the first hurdle.
Today’s businesses are experiencing significant change as they increasingly migrate business applications from corporate data centres to the cloud. While every enterprise’s digital transformation journey is different, we have already seen some enterprises move 100 percent of their applications to Software-as-a-Service (SaaS) and Infrastructure-as-a-Service (IaaS), and cease operating their own data centres.
By Simon Pamplin, EMEA technical director at Silver Peak.
The problem is that today’s router-centric wide area network (WAN) approaches weren’t designed for the cloud, limiting enterprises from realizing the true transformational promise of the cloud. This is because network traffic patterns have shifted, the fundamental nature of applications has changed, and security needs are different when everything is open and connected in the cloud. It’s likely that this challenge will only get worse, driven by the ever-changing needs of business, the continuous evolution of every cloud application, and the uniqueness of every cloud and application environment.
On top of this, the business network has grown ever more complex, with distributed enterprises operating over multiple geographies and regions. At these branch sites, nuances in the way they operate, applications they favour, and local policies they must adhere to, can add to the complicated network management structure IT already has to deal with. As such, IT organisations have to take into account that all of this must be managed across thousands of locations, which are all different.
In order to stay on top of the new global cloud era, enterprises must regain control and have complete observability over the network.
The router-centric WAN approach has hit the wall
The traditional router-centric WAN architecture was designed when all applications were hosted in enterprise data centres; there was no cloud. In a router-centric model, all traffic is routed from branch offices to the data centre using private MPLS circuits. With the emergence of the cloud, applications are no longer centralised. Traditional routers require enterprises to inefficiently route cloud-destined applications from branch offices back to the data centre instead of directly to SaaS and IaaS from branch sites, and this impairs application performance and the end user quality of experience.
Enterprises struggle trying to stretch the old router-centric WAN; it’s simply too cumbersome and complicated. Now, companies can choose to deploy an SD-WAN, a virtual network overlay that uses any underlying transport – MPLS, 4G LTE or broadband internet – to connect users efficiently, effectively and securely with applications and other users. However, not all SD-WANs were created equal. Basic SD-WAN solutions have emerged as an alternative and are a step in the right direction, but they fall well short of the fully automated, business-driven networks that enterprises require in today’s cloud era. There is a better way forward that will help enterprises operate aided by, rather than hindered by, their network infrastructure.
Total control: orchestration and the shift to a business-first networking model
A business-first networking model is a top-down approach that’s business-driven. It is one in which the network conforms to the business, in contrast to the legacy router-centric approach where applications – and the business – are forced to conform to the constraints imposed by the network. In this approach, the WAN is turned into a business accelerant that is fully automated and continuous, giving every application the resources it truly needs while delivering ten times the bandwidth for the same budget – ultimately achieving the highest quality of experience for users and IT alike.
Using a business-first networking approach with the best orchestration available, IT can centrally orchestrate Quality of Service (QoS) and security policies for groups of applications based on business intent. This means that from a single, central pane of glass, IT can tell the network what it wants to do or change, and through automation, the SD-WAN solution and orchestrator will do it. Business-driven orchestration can also provide the highest levels of visibility. This includes historical and real-time dashboards displaying a wealth of metrics for network health, as well as application, network and WAN transport service performance. It provides complete observability – or visibility – of a business’s WAN ecosystem enabling faster troubleshooting and comprehensive reporting when enterprises need it most.
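To make the idea of business intent concrete, the sketch below shows how such policies for groups of applications might be expressed declaratively and handed to an orchestrator. The policy structure, field names and orchestrator call are invented for illustration; real SD-WAN orchestrators define their own schemas and APIs.

```python
# A sketch of how business intent might be expressed declaratively and pushed
# to an SD-WAN orchestrator. The policy fields and the push call are invented
# for illustration; real orchestrators expose their own schemas and clients.

business_intent_policies = [
    {
        "app_group": "voice-and-video",
        "priority": "high",
        "transport": ["mpls", "internet"],   # preferred transport order
        "sla": {"loss_pct": 1, "latency_ms": 150, "jitter_ms": 30},
        "security_zone": "trusted",
    },
    {
        "app_group": "saas-productivity",
        "priority": "medium",
        "transport": ["internet"],           # break out locally to the cloud
        "sla": {"loss_pct": 3, "latency_ms": 250, "jitter_ms": 50},
        "security_zone": "cloud-dmz",
    },
]

def push_policies(orchestrator, policies):
    """Hand the intent to the orchestrator, which programs every site from it."""
    for policy in policies:
        orchestrator.apply(policy)   # hypothetical orchestrator client call
```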
Distributed organisations and the advantages of centralised orchestration
SD-WAN orchestration provides centralised management and an aggregated view of multiple independent SD-WAN fabrics. Each SD-WAN fabric – which is deployed over specified regions – is a deployment of the overall SD-WAN edge platform, which can be centrally managed by IT using a secure, dedicated instance of the orchestrator. This is advantageous to businesses that operate at multiple locations across the world, as the SD-WAN can be fine-tuned to meet the specific needs of the region. An intuitive graphical user interface (GUI) provides IT organisations with unprecedented levels of visibility, control and management across these large enterprises with thousands of sites.
Business-driven SD-WANs come with centralised orchestration that can be programmed to automatically configure hundreds or thousands of locations across the network, ideal for large businesses that need flexibility. From that point onwards, the network automatically and continuously connects users directly and securely to applications delivering optimum performance. Through real-time monitoring of applications and WAN services, a business-driven network can automatically learn of any changes in network conditions that might impact application performance, like packet loss, latency, or jitter. It then automatically adapts to give every application the network and security resources it needs to deliver the best quality of experience to users, freeing up time for IT to concentrate on mission critical tasks.
Orchestration and security in the enterprise
When it comes to security in the cloud-first enterprise, knowing what’s what and what’s where on the network is key. With so many threats on the network and stringent data regulations, such as GDPR, businesses can’t be too careful. On top of this, large global enterprises with multiple business units or subsidiaries often span multiple regions (administrative domains), each of which may require different regional QoS or security policies. Each individual SD-WAN fabric can be centrally configured and managed in alignment with business and regulatory requirements, while also enabling each fabric to be managed independently.
When choosing a networking solution with security in mind, network managers should look out for an SD-WAN approach with an orchestrator that allows businesses to easily define end-to-end zones to segment applications, WAN services or users in order to adhere to business or compliance mandates. Enterprises should look for orchestrators that provide an intuitive matrix view that clearly displays allowed, disallowed and exception-based zone-to-zone communications. This segmentation keeps branches safe from threats and assists IT in meeting enterprise compliance mandates. Enterprises should also have the ability to service-chain applications to next-generation firewall infrastructure or to cloud security services in accordance with pre-defined security policies.
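A minimal sketch of the zone-to-zone matrix idea described above: traffic between zones is denied by default unless a pairing is explicitly allowed or flagged as an exception. The zone names and rules are illustrative only.

```python
# Minimal sketch of a default-deny zone-to-zone segmentation matrix of the
# kind described above. Zone names and rules are illustrative assumptions.

ALLOWED = {("branch-users", "saas"), ("branch-users", "datacentre-apps")}
EXCEPTIONS = {("guest-wifi", "saas"): "via cloud security service"}

def evaluate(src_zone, dst_zone):
    """Return the action for traffic between two zones."""
    if (src_zone, dst_zone) in ALLOWED:
        return "allow"
    if (src_zone, dst_zone) in EXCEPTIONS:
        return f"allow-with-exception ({EXCEPTIONS[(src_zone, dst_zone)]})"
    return "deny"   # anything not explicitly permitted is blocked

print(evaluate("guest-wifi", "datacentre-apps"))  # deny
print(evaluate("branch-users", "saas"))           # allow
```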
Indeed, in this new cloud era, businesses need full control over their applications and they need their network architecture to support the business rather than define it. With a host of complexities on the modern network, like keeping users constantly connected, secure and compliant across multiple geographies and locations, using the best orchestrator technology in partnership with SD-WAN can ensure businesses get the best out of the network so that IT can focus on being more responsive to the business.
AIOps is an emerging technology which offers the promise of helping IT and data centre teams get to grips with the growing complexity of their respective infrastructure environments – with the ultimate objective of ensuring application performance optimisation. In this issue of Digitalisation World, you’ll find a variety of thoughts and opinions as to just what AIOps offers, and why it matters. Part 1.
Michael Allen, VP and EMEA CTO at software intelligence company Dynatrace, looks at why companies have started turning to AIOps and what they need to be doing to make it successful:
“Over the past few years, companies have been rapidly transitioning to dynamic, hybrid cloud environments to keep up with the constant demand to deliver something new. However, whilst the cloud offers the agility that enterprises crave, the ever-changing nature of these environments has generated unprecedented levels of complexity for IT teams to grapple with. Over the last two years, IT teams have identified hope on the horizon in the form of the emerging market of AIOps tools. This new breed of solution uses artificial intelligence to analyse and triage monitoring data faster than humans ever could, helping IT teams to make sense of the endless barrage of alerts by eliminating false positives and identifying which problems need to be prioritised. However, AIOps is not a silver bullet, and there’s a risk that enterprises will fail to realise its potential if it simply becomes just another cog in the machine alongside the array of monitoring tools they already rely on.”
“AIOps tools are only as good as the data they’re fed, and to radically change the game in operations, they need the ability to provide precise root cause determination, rather than just surfacing up alerts that need looking into. It’s therefore critical for AIOps to have a holistic view of the IT environment so it can pull in any pertinent data and contextualise alerts using performance metrics from the entire IT stack. Integration with other monitoring capabilities is therefore key when adopting AIOps, ensuring there are no gaps in visibility and issues can be understood and resolved faster. It’s this potential for simplifying IT operations and delivering a more efficient organisation that should be the end goal of AIOps. When done right, the software intelligence enabled by AIOps can be used to drive true efficiencies, through automated business processes, auto-remediation and self-healing. Ultimately, this can enable the transition towards autonomous cloud operations, where hybrid cloud environments can dynamically adapt in real-time to optimise performance for end-users, without the need for human intervention. As a result, problems can be resolved before users even realise there’s been a glitch.”
“This AI-driven automation will fuel the next wave of digitalisation and truly transform IT operations. However, reaching this nirvana can’t be achieved by cobbling together a mixed bag of monitoring tools and an AIOps solution into a Frankenstein’s monster for IT. Companies need a new, holistic approach to performance management that combines application insights and cloud infrastructure visibility with digital experience management and AIOps capabilities.”
“Taking this approach will help to deliver the true promise of AIOps, providing IT with answers as opposed to just more data. As a result, IT teams will be freed up to invest more time in innovation projects that set the business apart from competitors, instead of focussing their efforts on keeping the lights on.”
Wes Cooper, global product marketing manager, IT Operations Management, Micro Focus discusses automated monitoring and management, powered by AIOps:
“IT teams don’t tend to change their monitoring and management tools without an outside trigger to push them into it. Sometimes, the need arises out of mergers and acquisitions, which pose significant integration problems for teams and can lead to lost productivity, outages and downtime. Today, however, a number of significant shifts are happening across businesses which necessitate a change in approach to IT operations. Hybrid IT solutions are increasingly becoming the default for enterprises. And with this, more organisations are moving storage and processing capacity to the cloud and rebuilding corporate networks on the foundations of software-defined networking.
“At the same time, the heightened adoption of the internet of things (IoT) and AI-based tools in diverse business functions, from customer experience to logistics to HR management, demands massive data collection and processing. In the context of an ongoing drive towards lower costs and increased demonstrable value to the business, IT teams can easily be overwhelmed by the scale of the challenge.
“AIOps can offer solutions which stand up to the pressure of these modern IT demands. Having massive volumes of operational data spread across a diverse IT ecosystem can make the process of identifying the root cause of a problem and remediating the incident painfully slow. By bringing big data analytics and machine learning into the management of IT, teams can automate many of the most time-consuming aspects of that work, freeing them up to apply their expertise where it’s really needed. This will only become more important as companies increasingly make cloud and IoT technologies a core part of their businesses, which will massively increase the volume and complexity of their data.
“Of course, teams cannot simply flip a switch and expect AIOps to just magically take over; systems must be prepared in order to be addressable by AIOps. First, data in the enterprise must be consolidated into a data lake in order to make it possible for AIOps to infer conclusions from across the available source material. Second, teams must consider the diversity of their data, ensuring that both structured analytics data such as dwell time and unstructured interaction data such as voice input are interpretable by the chosen AIOps solution. Third, a careful assessment of what monitoring information is available – including systems’ native monitoring tools – must be made to avoid missing potentially valuable insights.
“Against a background of teams using many different monitoring tools for many different systems and manually bridging the gaps between them, AIOps can provide a view of services which is unified and business-focused, operating across multiple clouds, applications and networks. Integrating it can require significant labour upfront, but the long-term upside is significant.”
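The consolidation step Cooper describes – funnelling structured and unstructured monitoring sources into one queryable store before any analysis runs – can be illustrated with a deliberately simple sketch. The tool names and record shapes below are hypothetical and are not Micro Focus product behaviour.

```python
import json
from datetime import datetime, timezone

def normalise(source: str, record: dict) -> dict:
    """Map one source's record into a common event shape for the data lake."""
    return {
        "timestamp": record.get("ts") or datetime.now(timezone.utc).isoformat(),
        "source": source,
        "kind": "structured" if "metric" in record else "unstructured",
        "payload": record,
    }

def ingest(batches: dict) -> list:
    """Consolidate per-tool batches into a single, time-ordered event stream."""
    lake = [normalise(src, rec) for src, records in batches.items() for rec in records]
    return sorted(lake, key=lambda e: e["timestamp"])

# Illustrative batches from three hypothetical monitoring tools
batches = {
    "network_monitor": [{"ts": "2019-10-01T09:00:00+00:00", "metric": "link_util", "value": 0.82}],
    "app_logs": [{"ts": "2019-10-01T09:00:05+00:00", "text": "checkout timeout for user 42"}],
    "voice_assistant": [{"ts": "2019-10-01T09:00:07+00:00", "transcript": "my payment failed"}],
}
print(json.dumps(ingest(batches), indent=2))
```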
With a need to deliver business and customer value faster than before, many enterprises are taking their agile software development practices to the next level and adopting DevOps methodologies as well as microservices. Critical to the success of these initiatives are platforms that can support these ways of working whilst still keeping efficiency and utilisation high.
By Matthew Beale, Modern Datacentre Architect, Ultima.
This is where the datacentre becomes crucial as the central repository for data. Not only are datacentres required to manage increasing amounts of data and more complex machines and infrastructures, but we also expect them to generate better information about our data more quickly.
In this article, Matthew Beale, Modern Datacentre Architect at automation and infrastructure service provider Ultima, explains how RPA and machine learning are today paving the way for the autonomous datacentre.
Why do we need an autonomous datacentre?
As we enter this new revolution in how businesses operate, it’s essential that every piece of data is handled and used appropriately to optimise its value. Without cost-effective storage and increasingly powerful hardware, digital transformation and the new business models associated with it wouldn’t be possible.
Experts have been predicting for some time that the automation technologies that are applied in factories worldwide would be applied to datacentres in the future. The truth is that we’re rapidly advancing this possibility with the application of Robotic Process Automation (RPA) and machine learning in the datacentre environment.
The legacy datacentre
Currently, businesses spend too much time and energy dealing with upgrades, patches, fixes and monitoring of their datacentres. While some may run adequately, most suffer from three critical issues: a lack of speed when it comes to increasing capacity, migrating data or updating apps; human error, which is by far the most significant cause of network downtime; and hardware failures and breakdowns.
With little to no oversight of how equipment is working, action can only be taken once the downtime has already occurred. The cost impact is then much higher, as focus is pulled away from other work to manage the cause of the issue, on top of the impact of the network downtime itself. Stability, cost and time management must be tightened to provide a more efficient datacentre. Automation can help achieve this.
The journey to a fully automated datacentre
Full datacentre automation is rather like moving from a manual drive car to driver assistance and then to a fully autonomous ‘driverless’ car. Currently, humans manage, monitor and operate the datacentre, which requires manual tooling and thresholding. This is extremely labour intensive and often requires tweaking infrastructure to deal with unexpected issues.
The journey to a fully automated datacentre varies according to the type and the individual intricacies of an organisation. However, within the next two years we can expect to see many businesses, especially in fast moving sectors, either already there or on their way to having a fully autonomous datacentre.
There are various levels of achievable automation that can occur in the datacentre to move it on from current manual systems:
Assisted action: The first step along the journey provides information for administrators to take action in a user-friendly and consumable way, such as centralised logins. It can also ensure high availability by retrieving backups if something fails. The process essentially replaces the administrator hitting the ‘go’ button.
Partial automation: This step moves to a system that provides recommendations for administrators to accept actions based on usage trends. Using Dynamic Resource Scheduling (DRS) the system looks at trends on performance and which areas are getting particularly busy so that it can distribute resources to ensure an even balance, resulting in better performance. This can be especially effective for billing or HR payroll systems which tend to peak at the end of the month.
Conditional automation: This leads to a system using modern technology that will automatically take remediation actions and raise tickets based on smart alerts. For example, the system looks at security information and event management to collate information from many different data points, such as user logins and the data being accessed. Machine learning algorithms will take this information and compare it with historical usage data to identify trends. Based on these metrics it will take action if it believes an account has been compromised (a minimal sketch of this step follows this list).
Fully autonomous: Utilising Artificial Intelligence (AI) and Machine Learning (ML), the autonomous datacentre determines the appropriate steps and can self-learn and adjust thresholds when needed to allow for efficient storage that delivers cost savings. It can plan ahead by modelling scenarios based on current and future usage patterns and make changes depending on how much storage a particular project needs.
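As a rough illustration of the conditional automation step referenced above, the pattern of comparing current behaviour against a historical baseline and remediating only on a significant deviation might look like the following sketch. The account names, thresholds and remediation call are invented for illustration.

```python
from statistics import mean, pstdev

def is_anomalous(history: list, current: float, sigma: float = 3.0) -> bool:
    """Flag the current value if it sits more than `sigma` standard deviations from history."""
    mu, sd = mean(history), pstdev(history)
    return sd > 0 and abs(current - mu) > sigma * sd

def conditional_remediation(account: str, logins_per_hour_history: list, logins_now: float):
    """Lock the account and raise a ticket only when behaviour deviates from its baseline."""
    if is_anomalous(logins_per_hour_history, logins_now):
        # Both actions are placeholders for whatever SIEM / ITSM integration is in use
        print(f"locking {account} and raising ticket: {logins_now} logins/hour vs baseline")
    else:
        print(f"{account}: within normal range, no action")

conditional_remediation("svc-backup", [2, 3, 2, 4, 3, 2], 40)  # anomalous -> remediate
conditional_remediation("jdoe",       [5, 6, 4, 5, 7, 6], 6)   # normal -> no action
```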
Benefits of the fully automated datacentre
One major benefit of automation is the introduction of the self-healing datacentre. Robotics and machine learning restructure and optimise traditional processes, meaning that humans are no longer needed to perform patches to servers at 3 am. Issues can be identified and flagged by machines before they occur, eliminating downtime. Automation minimises the amount of time that human maintenance of the datacentre is required.
Another benefit is efficient resource planning and capacity management. As the lifecycle of an app across the business changes, resources need to be redeployed accordingly. With limited visibility, it’s extremely difficult, if not impossible, for humans to distribute resources effectively without the use of machines and robotics. Automation can increase or decrease resources accordingly towards the end of an app’s life to maximise resources elsewhere. Ongoing capacity management also evaluates resources across multiple cloud platforms for optimised utilisation.
AI-driven operations start with automation
In the next two years we’ll begin to see datacentres supporting traditional and next-generation workloads which can be automated in a self-healing, optimum way at all times. This means that when it comes to migration, maintenance, upgrades, capacity changes, auditing, back-up and monitoring, the datacentre takes the majority of actions itself, with little or no human intervention required.
Whatever the process within the datacentre, automation robots ensure that it is consistent and accurate, meaning that every task will be much more efficient. Ultima calculates that the productivity ratio of an automation ‘cobot’ to human is 6:1, empowering teams to intervene only to make decisions in exceptional circumstances. This means that the type of operational effort required from humans changes from ensuring that something happens and fixing problems, to querying the business and spending time developing applications and platforms.
Similar to autonomous vehicles, the possibilities for automated datacentres are never-ending; it’s always possible to continually improve the way work is carried out.
The huge popularity of cloud services has seen organisations outsource applications, data and infrastructure to third party providers with the result that security perimeters have widened significantly. This has brought with it unwanted attention from cybercriminals, not only because cloud offers a potential route into corporate IT networks, but also due to the scope to evade detection from traditional security solutions.
By Anurag Kahol, CTO at Bitglass.
As recent research revealed, 45% of organisations now store customer data in the cloud, 42% store employee data in the cloud, and 24% store intellectual property in the cloud. But when organisations move workloads and data into the cloud, they increase the likelihood of data leakage if proper security is not employed. Adopting appropriate security measures, therefore, is critical.
Working with any type of cloud provider means handing over important responsibilities to an external third party. No matter what their level of expertise, track record, or number of security accreditations they have, if their security fails, then everyone fails. So, how should businesses be updating their security strategies and methodologies for the cloud era?
A new approach to new vulnerabilities
The cloud era means organisations need to look afresh at both external and internal vulnerabilities. The cyber kill chain, for example, was developed by Lockheed Martin as a threat model that represents the anatomy of a cyberattack. It sets out that attacks arrive in phases and defences can be organised at each specific phase. As a model focused primarily on perimeter security, the well-established steps of the chain (Reconnaissance, Weaponisation, Delivery, Exploitation, Installation, Command & Control, and Action on Objectives) are still valid, but they differ in the cloud.
As businesses outsource much or all of their infrastructure to the cloud, the potential for insider attacks grows. According to recent research, organisations are at significant risk from insider threats, with 73% of respondents revealing that insider attacks have become more frequent over the past year. When asked the same question in 2017, that figure was 56%.
According to just over half (56%) of organisations, it is more challenging to detect insider threats after migrating to the cloud. Additionally, 41% said that they hadn’t been monitoring for abnormal user behaviour across their cloud footprints, and 19% were unsure if they did. To underline the point, four of the top five reasons for the growing difficulty in detecting insider attacks are related to data moving off premises and into a growing number of applications and devices.
So, what’s the answer? When infrastructure changes, security must change with it. Any organisation that has migrated to the cloud, to whatever extent, needs to update its definition of security across the cyber kill chain. Relying solely upon legacy security technologies from the on-premises era will increase the chance of security blind spots being exploited once organisations begin to move to the cloud.
While research has shown that access control (52%) and anti-malware (46%) are the most-used cloud security capabilities, these and others (like single sign-on (26%) and data loss prevention (20%)) are still not deployed often enough. Additionally, as 66% of respondents said that traditional security tools don’t work or have limited functionality in the cloud, adopting appropriate cloud security solutions becomes even more critical. Fortunately, cloud access security brokers (CASBs) can provide many of these essential capabilities.
For example, successfully defending against malware requires organisations to implement a three-point strategy that encompasses devices (endpoint protection), the corporate network (secure web gateways), and the cloud. A few cloud apps provide some built-in malware protections, but most do not. This means a combination of tools is necessary; neglecting one of them, such as a CASB, leaves the gap that enables infection.
While the cloud has broadened the security perimeter, the risks are manageable when the right tools and processes are put in place. Everyone with an interest in leveraging cloud technologies should take steps to ensure they don’t put security at risk.
Alex Blake, Business Development Director for ABM Critical Solutions, explores some of the challenges the financial services sector – one of the early adopters of data centres – faces today and in the future.
With every swipe of a bank card, tap onto the tube or post on Facebook, a data centre is hard at work behind the scenes, embedded in everyday transactions. This hasn’t always been the case; the financial services sector was the first adopter of the concept over 70 years ago, paving the way for the robust critical frameworks we have today. However, with legacy sometimes comes a hangover, and as new and old processes collide, how should the industry navigate this?
In the 1950s and 60s, data centres, or mainframes as they were known, were a different beast. Running these facilities was labour intensive and demanded enormous expense.
Pitt Turner, Executive Director of the Uptime Institute, summed this up nicely, when recalling how the process worked historically at a large regional bank: “In the evening, all trucks would arrive carrying reams of paper. Throughout the night the paper would be processed, the data crunched and print outs created. These print outs would be sent back to bank branches by truck in the morning.”
These previously cutting-edge mainframes are a far cry from where we are today and, frankly, with pace and accuracy at the heart of how all industries run, they wouldn’t cut the mustard – especially in the financial sector, which has grown exponentially and relentlessly, demanding speed and efficiency. What’s come with this growth is a trend towards mixed processes: the sector uses a combination of outsourced data centres via co-location services, alongside operating out of original sites, to manage huge data footprints.
For financial institutions working under a cloud of uncertainty and risk, these centres need constant investment but often there’s an unwillingness for this to come from CAPEX, so building their own data centres or updating and maintaining legacy systems isn’t a priority. Instead, data centres expand with more racks and hardware, making monitoring a constantly evolving job. At what risk though?
Downtime
Downtime is the biggest risk factor in legacy data centres, regularly driven by air particle contamination. Unlike new facilities which restrict airflow and the ability for particles to contaminate equipment, legacy centres are often more open to threat, require expert cleaning teams and take constant management from specialists.
It’s hard to equate cleaning to serious financial risk, but in the financial sector there’s pressure for online banking, payment processing and the protection of personal information to work around the clock. Failure to deliver means fines and reputational damage – which can be avoided with the right technical cleaning services and expert infrastructure management.
Preventative cleaning measures
Frequent air particle testing – which must be carried out by specialist engineers and cleaners – is fundamental in identifying issues ahead of time, especially in legacy centres. Companies shouldn’t wait for issues to occur; as the saying goes, prevention is better than cure. A preventative cleaning regime comes at a cost, but it will help manage issues before they threaten service.
Some specialists can determine the cause of contamination on surfaces, but often the real damage can be caused by airborne particles not visible to the eye. The solution is to implement an annual preventative technical cleaning programme to ensure ISO Class 8 standards are maintained in critical spaces. ABM Critical Solutions carries out air particle tests as part of our technical cleaning programme, being one of just a few companies in the UK that simultaneously carries out particle tests on surfaces as well as zinc whisker testing and remediation.
The right infrastructure
Downtime can also be successfully avoided in a legacy data centre by implementing data centre infrastructure management (DCIM). Relying on older, outdated solutions can be a gamble, given how susceptible legacy centres are to building degradation and contamination.
DCIM can enable smart, real-time decision making, and has the ability to introduce fail-safes, meaning an issue doesn’t have to disrupt services to catastrophic effect. For example, a custom alarm can be developed and installed, which works to alert a specific team or contact as soon as an error occurs. This ensures that technology and people work together; a problem will always be flagged immediately and attended to by an expert, who can assess and remedy the issue quickly.
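The custom alarm pattern described above boils down to a simple rule: compare a live reading against its limit and notify the mapped contact on a breach. The following sketch is purely illustrative – the subsystems, contacts and thresholds are invented, and a real deployment would call the DCIM platform’s own notification API.

```python
# Hypothetical DCIM-style alarm rule: route a sensor breach to the right on-call contact.
ALARM_ROUTES = {
    "cooling": "facilities-oncall@example.com",
    "power":   "electrical-oncall@example.com",
}

def check_reading(subsystem: str, name: str, value: float, limit: float):
    """Compare a live reading against its limit and notify the mapped contact on breach."""
    if value > limit:
        contact = ALARM_ROUTES.get(subsystem, "noc@example.com")
        # Placeholder for the DCIM platform's notification mechanism
        print(f"ALERT to {contact}: {name}={value} exceeds limit {limit}")

check_reading("cooling", "cold_aisle_temp_c", 27.5, 25.0)  # breach -> alert facilities on-call
check_reading("power", "ups_load_pct", 61.0, 80.0)         # within limit -> no alert
```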
The future
Monitoring technology
Monitoring technology will continue to grow and expand its remit, becoming more intelligent, precise and affordable. This will benefit legacy sites and, with the right measures in place, will limit vulnerabilities. I see a time in the not-too-distant future whereby advanced monitoring technology will help to drive efficiencies that lead to more remote and cost-effective offsite management models. Used correctly, it would ultimately provide users with data that will guide their decisions and ensure they are one step ahead.
The role of sensors
Sensor technology, managed remotely, will play a huge role in flagging areas of concern. ABM Critical Solutions is currently trialling a new sensor technology in a new-build data centre. We’ve implemented sensors into our maintenance cleaning routine and hope to share our findings soon.
New locations
Last year, we saw a data centre submerged into the sea off the coast of Scotland. As technology increasingly helps us identify and fix issues remotely, I expect we’ll see more non-traditional data centre locations come in to play.
We’re at a very exciting inflection point in the industry; infrastructure, technology and artificial intelligence are working together in ways we didn’t think possible. There are more options than ever before to get it right, and while I believe we’ll continue to see a shift towards utilising colocation services, legacy centres will be more protected than ever, owing to advancements across the board.
Alex Rinke, co-founder and co-CEO, Celonis, talks robotic process automation – the benefits, when it is, or isn’t the right fit, and how to ensure successful implementation.
AIOps is an emerging technology which offers the promise of helping IT and data centre teams get to grips with the growing complexity of their respective infrastructure environments – with the ultimate objective of ensuring application performance optimisation. In this issue of Digitalisation World, you’ll find a variety of thoughts and opinions as to just what AIOps offers, and why it matters. Part 2.
Giving a data injection to business
By Richard Smith, SVP Technology at Oracle.
We have entered a truly data-driven world. According to Domo’s Data Never Sleeps 6.0 report, there are 473,400 tweets made, 49,380 photos posted on Instagram, and almost 13 million text messages sent every minute of every day. Data like this holds immense value for those able to collect, organise and leverage it.
For businesses today, data is the new battleground, and being able to master it through sound management has a direct impact on an organisation’s success. This makes the lowly database, whether on-premises or in the cloud, the beating heart of the enterprise. If the database is left unoptimised and running slowly, employee productivity and customer experience will suffer, and the data within cannot be leveraged for innovation. Furthermore, if the database is not secure, the company faces a greater risk from security breaches.
Data management is clearly integral to the successful running of an organisation, but the task only gets harder as databases grow in size and complexity. Enterprises must look to new solutions driven by automation and machine learning to unlock their data potential.
To err is human
Traditionally, the task of data management has fallen to the database administrator (DBA). Their role is to create, modify and tune enterprise databases for maximum performance and security. It’s a role that is far more complex than it perhaps initially appears, and it should not be underestimated in terms of importance.
When an employee or client wants to retrieve data, the process is complex and can consume a great deal of time, as well as compute and disk-access resources. This is especially true at peak times when thousands or potentially millions of hosts are trying to access the database.
This manual approach is beginning to crumble under the weight of organisational data. Traditional database management has become extremely time-consuming and expensive: 72 per cent of IT budgets are spent simply maintaining existing information systems. Time and resources could be better spent elsewhere, such as on IT innovation.
With DBAs often finding themselves managing 50 or more databases a day, human error can often result – such as failing to apply a security update or being unable to keep a database fully optimised.
These errors can be disastrous for uptime and security, but there’s a more fundamental problem. A company unable to keep its databases performant is less able to utilise its data effectively. Employees will struggle to get the data they need and will be slower to make decisions, while customers will suffer from an unsatisfying user experience. Every second of every day the company is running slower than the competition and, gradually but surely, falling behind the pace of the market.
The benefits of the autonomous approach
To remain relevant and competitive in the long run, companies must explore new ways to reduce the effort needed to maintain databases, limit downtime and, above all, accelerate performance and as a result innovation.
While putting databases in the cloud has already gone some way towards taking human effort and responsibility out of active database management, a revolution is taking place behind the scenes. Increasingly, with the advent of emerging technologies like artificial intelligence, machine learning and automation, autonomous systems are being born.
The resulting ‘autonomous database’ is self-driving, self-securing and self-repairing, making it both easy and cost effective to adopt, while freeing IT up to focus on innovation and value-adding tasks.
A data shot in the arm
Above all else, an autonomous database gives organisations a data shot in the arm – the ability to access and utilise their data faster and more efficiently, enabling greater productivity and a more seamless, competitive customer experience.
Oscar Jalón, IT director, Santiveri, a leading producer of organic food, beverages and beauty products in Spain, has experienced exactly this. “To stay ahead of the rising competition in the organic food and beverage market, we need a granular understanding of our business, but our existing IT systems were struggling to keep up. With Oracle Autonomous Database, we can execute queries 75 to 80% faster. This means we can see, not only which products are selling well, but exactly where the sales are happening, right down to the individual store and time of day, and then work out the optimum extra resources to put behind a push. The sooner you can make that happen, the sooner you sell more,” said Jalón. “We are now applying this speed and smarter decision-making right across the business, to areas like R&D and customer service, and can do so with less resources. As the technology is self-securing and repairing it is giving our people the freedom to focus on innovation and business improvement.”
The direction of travel is clear – databases are only going to get larger, more complex and more important to business success. The companies that will be successful are those that fully utilise the benefits of the cloud and other emerging technologies arriving on the scene, while marrying their human talent with the self-learning capabilities of the machines. The resulting data injection will enable these companies to become truly data-driven, able to boost areas critical for success and drive the customer and employee experience to new heights.
Tim Flower, Global Director of Business Transformation at Nexthink, warns that IT services in the enterprise are now interconnected to such an extent that an isolated failure can start a chain reaction causing customer frustration, employee dissatisfaction and revenue shrinkage:
To address this challenge, more strategically-minded CIOs and their IT departments are electing to use advanced analytics and artificial intelligence (AI). The technology helps to comprehensively monitor and assess the user experience of devices and applications across the organisation in real-time. Leveraging the power of intuitive algorithms, companies can take a more advanced approach to IT in the enterprise, by using end-user performance data to identify IT-related issues automatically and generate IT tickets. Additionally, AI-based tools can remediate minor issues as they pop up without any human intervention — freeing up IT staff to tend to the most pressing concerns – or better yet, more strategic and value-add assignments.
For example, service desk and IT operations staff can solve problems more quickly than it takes users to notice the issue and contact support, thanks to alerts and recommended actions about endpoint and user experience issues, coupled with the ability to resolve them automatically in one click. Meanwhile, IT staff win the support of end users when they empower them to resolve their own issues quickly, or even fully automate the remediation process, so that neither employees nor IT resources are tied up in getting systems back to their desired state.
AI and machine learning (ML) algorithms can sift through reams of data and identify issues faster than a human ever could, which is why successful organisations are choosing to add AI into the mix to bolster the abilities of their IT staff to tackle tech problems and take the end-user out of the process. By letting the algorithms do what they're best at, businesses can reduce the overall number of IT support tickets, eliminate employee frustration related to IT disruptions and most importantly, ensure employees across every department have devices and applications consistently available so they can do their jobs properly.
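As a hypothetical sketch of this triage pattern – auto-remediate known minor issues, ticket the rest – the logic might look like the following. The issue names, severity scale and fixes are invented for illustration and are not Nexthink’s product behaviour.

```python
# Score endpoint telemetry, auto-remediate known minor issues, open a ticket only for the rest.
KNOWN_FIXES = {
    "browser_cache_bloat": "clear_browser_cache",
    "stale_vpn_session":   "restart_vpn_client",
}

def triage(device: str, issue: str, severity: int):
    """severity is a 1-5 score derived from end-user experience data (illustrative scale)."""
    if issue in KNOWN_FIXES and severity <= 2:
        print(f"{device}: auto-remediating via {KNOWN_FIXES[issue]}")
    else:
        print(f"{device}: opening IT ticket for '{issue}' (severity {severity})")

triage("laptop-0142", "browser_cache_bloat", 1)     # fixed silently, no ticket raised
triage("laptop-0099", "disk_failure_predicted", 4)  # escalated to the service desk
```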
This integrated approach to digital experience management, which combines real-time endpoint monitoring and analytics with both end-user engagement and automatic remediation of incidents, even before they occur, can truly revolutionise the role of IT managers. And employees have a job to do; notifying IT about issues shouldn't be bolted on to that.
This article will outline how to master the most common misconceptions surrounding AI and automation so your business can start reaping the many benefits of Robotic Process Automation (RPA).
By Alice Henebury, Head of Marketing, Engage Hub.
Oxford Economics’ latest report, estimating that robots are expected to ‘replace up to 20 million factory jobs’ by 2030, opened up a heated debate about what the future of the workforce will look like. One of the most worrying potential outcomes is the impact this could have on already stagnating wages. According to the report, each new industrial robot will wipe out 1.6 manufacturing jobs, with the least-skilled regions being most affected. For the most part, however, these are just scary soundbites used to stoke fear. Businesses need to shift their attention away from the misconceptions of AI and automation and focus on the many positives these technologies can bring to their employees and the workplace overall. Automation will not only boost jobs but encourage economic growth. If deployed correctly, AI and automation could be the perfect digital workforce assistant (the secret weapon every business wishes it had up its sleeve). Instead of replacing jobs, these developments can help your business strike the right balance between technology and that all-important human touch.
Misconception 1 – Customer service will be impersonal
There is no denying that a human touch is the cornerstone of customer experience; however, we are starting to see AI and automation customer service success stories from big brands such as Sainsbury’s. Thanks to automation, Sainsbury’s has improved its call delivery procedure by successfully integrating an Interactive Voice Response (IVR) system, automated by Synapse, the AI-powered logic-building software. This facilitated intelligent call routing, so that Sainsbury’s customers were diverted to the most appropriate resolution, whether that be an automated response from a virtual assistant or a live advisor.
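Synapse’s routing logic is proprietary, but the general pattern – map a recognised caller intent either to an automated response or to the most appropriate live team – can be illustrated with a deliberately simple, hypothetical sketch (the intents and team names are invented):

```python
# Simplified illustration of intelligent call routing based on a recognised caller intent.
AUTOMATED_INTENTS = {"opening_hours", "order_status"}
TEAM_FOR_INTENT = {"refund": "payments_team", "delivery_issue": "logistics_team"}

def route_call(intent: str) -> str:
    """Send simple queries to the virtual assistant, everything else to the best-matched team."""
    if intent in AUTOMATED_INTENTS:
        return "virtual_assistant"
    return TEAM_FOR_INTENT.get(intent, "general_advisor")

for intent in ("order_status", "refund", "complaint"):
    print(intent, "->", route_call(intent))
```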
By freeing up time previously spent on monotonous tasks, workers can dedicate more of their resources, time and skills to the evolving needs of today’s customers. One element that is proving particularly effective is AI’s predictive capabilities, which can estimate which customer requests can be diverted to contact centres – freeing up in-store resources before those requests are even submitted – as well as improving the efficiency of the supply chain process.
What’s exciting is that automation gives brands more opportunity to provide high-quality, more engaged and memorable customer experiences. This is helping brands to retain the loyalty of their existing customers as well as attract new ones. In today’s climate, brands must keep their finger on the pulse of technological developments, otherwise they risk losing their customers to competitors.
Misconception 2 – Workforce morale will deteriorate
One of the most common misconceptions about robots replacing jobs is the negative impact this will have on workforce morale. However, with the UK’s productivity record being less than favourable – rising by only 0.5% last year – Britain is in need of new solutions to keep employees engaged. This is where AI and automation come in. Not only can these new technologies help to reduce the workload of employees, they can take away the repetitive, mundane tasks, allowing workers to focus on the human element of work that requires more creativity and innovative thinking. Ultimately, when employees are able to focus more time on the interesting and rewarding aspects of their roles, their productivity and satisfaction levels are bound to rise. In turn, this will also increase employee retention rates within companies.
One area where we are already seeing evidence of this is in companies’ HR departments. HR teams are using AI-enabled automation to streamline the recruitment process and improve workers’ productivity, whether that’s sorting through CVs submitted online or through third parties to identify potential candidates. This just goes to show how automation allows teams to focus on tasks that require a human touch, such as interviewing candidates face to face, rather than devoting resources and people to a repetitive, time-intensive job.
Misconception 3 – AI and automation will overshadow employees
When the topic of AI and automation is raised, what is often forgotten are the various opportunities that will arise as a result. As these new technologies are deployed more widely, employers will have to upskill and reskill their employees so they’re prepared for a rapidly changing workplace. This provides the perfect opportunity for workers to develop and hone their technological skills. According to the World Economic Forum (WEF), organisations need to act now in equipping their staff so they can take full advantage of the opportunities the “Fourth Industrial Revolution” has to offer.
However, although employers may be the facilitators of this workforce transformation, the government and individuals also have crucial roles to play. For instance, the government should look to embed more technical and “soft skills” training throughout the education system, while individuals should take a proactive approach to developing their own skills when it comes to technology. At the end of the day, a lot of these new jobs will be centred around understanding and managing the technology. What must be remembered is that AI and machine learning are still in their infancy and still rely on human intervention to guide their deployment. Ultimately, the more awareness and understanding there is about AI and automation, the more successful their implementation – and our collaboration with them – will be.
Looking to the future
Once businesses dispel the negative rumours surrounding AI and automation and start to understand their capabilities and substantial benefits, there is no doubt that companies will be lining up to build them into their strategies. Looking to the future, there is no preventing the workplace from changing. Businesses need to realise that successful implementation is all down to proper preparation.
There is no denying that global levels of data traffic are surging at a previously unseen rate, as the world’s internet population grows significantly. The internet now reaches 56.1% of the world’s population, or 4.39 billion people – a figure 9% higher than in January 2018. According to cloud software firm Domo, the USA alone uses 4,416,720 GB of internet data every minute. This includes 188 million emails being sent and 4,497,420 Google searches.
By David Hall, Senior Director, Technology Innovation, Equinix.
These astounding statistics are only set to increase as new technologies such as 5G are deployed and utilised by both consumers and enterprises alike. With huge data loads moving around the world every second of every day, the need for a robust and secure digital infrastructure that can handle this web traffic is ever more critical. With this in mind, it is important to consider the data centre of the future, and the cutting-edge technology required to continue powering the global digital boom as efficiently and sustainably as possible.
Efficiency is key
Data centre efficiency is about optimising the equipment within the building so that it can match digital demand to the highest degree possible. For companies like Equinix, which operate a global portfolio of data centres, each with different customers running their own IT and their own patterns of demand, this is a more nuanced challenge: the number of variables impacting the data centre increases exponentially, and some of the tools available to hyperscalers in their own locations, such as load shedding, are not available to Equinix. Differing climates, varied weather patterns and possible natural disasters make it difficult to plan for optimal efficiency. But these challenges make an ideal use case for machine learning – an artificial intelligence (AI) approach that enables machines to self-learn over time, without the need for human input.
Machine learning, particularly deep learning, can examine a large data set and find patterns within it. It can also predict patterns that will repeat in the future. Modern data centres are highly instrumented: they already contain sensors that provide a significant amount of real-time and historical data, which can be utilised to improve efficiency. This is particularly useful when dealing with one of the most complex challenges a data centre faces – power management.
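A deliberately simple stand-in for that kind of pattern-finding is a trailing-average forecast of IT load, so that cooling can be staged ahead of demand. Real deployments would use trained models on far richer sensor data; the figures below are invented for illustration.

```python
# Naive trailing-average forecast of the next hour's IT load (illustrative only).
def forecast_next(load_history: list, window: int = 3) -> float:
    """Average the last `window` readings; real systems would use trained models."""
    recent = load_history[-window:]
    return sum(recent) / len(recent)

hourly_kw = [310, 320, 335, 360, 390, 410]
print(f"forecast for next hour: {forecast_next(hourly_kw):.0f} kW")
```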
Growing energy costs and environmental responsibility have placed the data centre industry under increased pressure to improve its operational efficiency. Indeed, some predictions state that data centres are positioned to use 4.5% of global energy by 2025, which means that even slight improvements in efficiency yield sizable cost savings and cut millions of tonnes of CO2 emissions. As a company, Equinix is committed to operating its global footprint of data centres as efficiently and sustainably as possible, which is why we are augmenting our entire platform with intelligence to help us make better decisions. Ground-breaking technology such as this will become increasingly commonplace in data centres of the future, as industry players look to pioneer solutions that will streamline and optimise operations.
Maintaining momentum
AI networks can also help improve equipment maintenance. Scheduling maintenance according to manufacturer guidelines is effective but comes at significant cost. Many data centre engineers have enough experience to tell when a piece of equipment is faulty or coming to the end of its lifespan. But by efficiently utilising AI networks, there is the potential to predict these failures long before they are detectable to engineers.
At Equinix we are testing deep learning networks, known as ‘AI personas’, which are being optimised for specific equipment so that they can monitor all of the critical infrastructure within the data centre. As we learn new things about that type of equipment, the AI persona is updated with knowledge to be leveraged across our entire global platform, helping to improve the availability of the equipment with fewer and faster maintenance sessions, and a better lifetime return on investment.
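Stripped right down, the predictive element can be thought of as flagging a unit whose readings drift steadily away from a fleet baseline before it actually fails. The following sketch is purely illustrative – the tolerance, readings and baseline are invented, not the behaviour of an actual AI persona.

```python
# Flag a unit whose recent readings sit well above the fleet baseline (illustrative rule).
def drifting(readings: list, baseline: float, tolerance: float = 0.15) -> bool:
    """True if the last three readings are all more than `tolerance` above the baseline."""
    return all(r > baseline * (1 + tolerance) for r in readings[-3:])

fleet_baseline_temp = 42.0
crac_unit_temps = [42.1, 42.4, 43.0, 49.5, 50.2, 51.1]
if drifting(crac_unit_temps, fleet_baseline_temp):
    print("schedule maintenance for this CRAC unit ahead of its normal interval")
```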
Green by nature
Power usage effectiveness (PUE) is an important industry metric for measuring the energy efficiency of a data centre’s infrastructure under normal operating conditions. PUE helps track power usage trends in an individual facility over time, and measures the effects of different design and operational decisions. The PUE of a data centre is affected by site-specific variables such as design, construction and age of the facility, the nature of the IT housed within the facility, and the utilisation rate of the data centre.
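Because PUE is simply the ratio of total facility energy to the energy delivered to IT equipment, a quick worked example makes the metric concrete (the figures are invented for illustration):

```python
# PUE = total facility energy / IT equipment energy (dimensionless; lower is better, 1.0 is ideal)
total_facility_kwh = 1_200_000   # illustrative annual figure: IT load plus cooling, lighting, losses
it_equipment_kwh   = 1_000_000
pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")  # 1.20 - comparable to the Amsterdam campus figure cited later in this piece
```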
As part of our commitment to improve the efficiency of our data centres, we recently adopted more stringent regional PUE design targets for new sites as well as major expansions. Our intention is to demonstrate industry best practice and lower our PUE through a mix of capital investments, improved design standards, best practices and operational discipline.
But this isn’t a new commitment. Back in 2017, we introduced Bloom Energy fuel cells in 12 of our North American data centres to help decrease the overall impact of the growing digital economy. Since we implemented this technology, we have installed more than 37 megawatts of capacity, which has helped to reduce stress on the local energy grid and significantly reduced our CO2 emissions.
Energy efficiency and sustainability is something that underpins the design process of all our data centres, whether they are new build or retrofitted. All Equinix sites are built to global environmental standards, and we consciously choose building materials that complement our overall corporate sustainability goals across our entire portfolio. In fact, our commitment to implementing market-leading technology has helped to make our Amsterdam data centre campus class-leading, by running on a PUE of about 1.2 (that’s pretty low!).
In recent years, particularly since the Paris climate agreement was signed in 2016, there has been a noticeable shift in the number of our customers that are looking to reduce their carbon footprint. This is something that is only set to increase as enterprises become more environmentally conscious and look to hit their own sustainability targets – it is imperative data centres of the future are built with this in mind.
A global agenda
The global explosion of data may mean huge progress for the digital economy, but it poses a new set of challenges for data centre operators that are looking to provide sustainable, dynamic and efficient solutions. Today’s “always on” society has created a rapidly changing landscape that has forced the data centre to intelligently and seamlessly handle increasingly complex workloads.
At Equinix, we believe our position as a global leader in the data centre and colocation space, means we have a duty to show leadership in creatively tackling the biggest issues our industry faces. With over 200 data centres around the world, we can consistently swap information around optimisation and best practice between our dispersed locations. This is something that will become increasingly important as global levels of data surge with the advent of 5G. It is crucial that data centres of the future are designed and constructed with innovative solutions that not only cater to the data demands of today, but also the future demands of a sector that is continually being disrupted.
The workplace is changing. Not since the Industrial Revolution have we seen such a shift in pace as we have witnessed over the last couple of years. In my view, this evolution is only set to accelerate in the coming years – you only have to read the daily headlines to see stories filling column inches about automation, robotics and the skillset of the workforce, and how these will all influence the workplace of the future.
By Mike Walton, CEO and Founder, Opsview.
As an employer myself, I’m no stranger to the conundrum about the make-up of the workforce. Whilst it’s undeniable that technology and robotics will have a significant impact on how we do our jobs in the future, it must be said that sectors will be affected in different ways. For example, whilst the Bank of England predicts that 15 million jobs could be at risk from automation - it’s retail and hospitality that seem to be most at risk.
But the future employee make-up does not stop at which roles can be automated. Having skilled employees is one of the biggest challenges we face – if not the biggest – both now and in years to come. Gartner continues to cite the lack of skilled talent as one of the biggest concerns for firms, but again this is not exclusive to IT. Manufacturing, engineering and other highly skilled sectors are struggling too – so what can we do to change the status quo?
When it comes to IT, we are at the forefront of the pace of digital change. Therefore, we have to take a proactive approach to help manage the transition to the workplace of the future. We hear a lot about digital transformation and the connected enterprise – but to make these a reality we can’t just sit back and hope the correctly skilled talent comes to us. We need to be on the front-foot not only in terms of external recruitment, but upskilling current employees too.
This means deciding what a business will look like both in the short-term (one to three years), but also in five to ten years. This is the only way we can sculpt whom we hire. Of course, we can’t predict exactly what a business will look like as you can’t accurately pinpoint future events, but deciding on automation levels or the scale of technological support within a business can help decide whom you may need to hire.
For example, automation within IT has already changed the sector. It’s influenced software and development, alongside creating new specialities. Even in its most basic form, automation is able to work alongside the foundations of infrastructure – such as IT monitoring – to help implement automatic responses to issues such as low disk space, freeing up employees as a result. Thanks to this support, technologists have needed to master innovative skills for building, deploying and monitoring large, dynamic business services; this wasn’t the case just a few years ago and demonstrates the rapidity of change within the sector. With machine learning becoming increasingly influential and AI eventually set to revolutionise service provision, the skills required in our sector will keep developing in a relatively short time-frame as jobs and best practices continue to evolve.
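The low disk space example is easy to make concrete: a monitoring check that triggers an automated response rather than paging an engineer. The threshold and clean-up action in this minimal sketch are placeholders, not any particular monitoring product’s behaviour.

```python
import shutil

# Minimal sketch of monitoring plus automation: check disk usage and trigger an
# automated response instead of waking an engineer.
def check_disk(path: str = "/", threshold: float = 0.9):
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction > threshold:
        # Placeholder for an automated response, e.g. rotating logs or expanding a volume
        print(f"{path}: {used_fraction:.0%} used - triggering automated clean-up")
    else:
        print(f"{path}: {used_fraction:.0%} used - no action needed")

check_disk("/")
```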
Due to this continuous evolution, I believe IT workforces will eventually start to decrease in size. This smaller, more compact make-up will make them more agile – which is ideal when going through transformation projects, as quick reactions can help drive ventures forward and prevent delay. The make-up of teams will also see more strategic thinking, as automation, machine learning and AI will pick up many manual tasks which are still being completed by employees today. Whilst some may be concerned about a loss of control, it should actually liberate employees, as they will be exposed to more exciting projects which can expand their horizons more quickly. Juniors will be able to test and learn freely, driving a culture of innovation and fast-tracked development.
If you think about how things stand now, this isn’t always the case – often it’s all hands on deck to fix issues within the IT estate, due to tool sprawl and no single version of the truth, leaving little time for the culture of learning and experimentation that is needed for innovation. As I mentioned earlier, IT monitoring via a single pane of glass, combined with automation, could solve that issue instantly, freeing up employees to do what they do best. But imagine what employees could do if other manual jobs could be digitised?
I’d also hope that in the future, issues which plague us now will be eradicated. Take IT outages – an issue which consistently hits the headlines for all the wrong reasons. Issues will be detected and fixed before they cascade (providing the right IT monitoring and resolution programmes are deployed), and employees can be freed up to enjoy more strategic, engaging work than firefighting.
Upskilling will also become more prevalent, as whilst the issue of the skills shortage continues to bite, the best way to overcome this is to retrain existing talent as they will know the business and can implement new skills immediately. Obviously external hires will continue to play a valuable part, but due to the fast-paced nature of the industry, investing in people who are then reinvested in the company will be the best way forward.
Addressing current and future workforce needs should be a priority for IT departments. The Public Sector is struggling to implement transformation projects due to not having the right talent, and Accenture has estimated that by failing to realise the potential of digitalisation, the UK could miss out on £140 billion. The growing remit of the IT department and its importance to business operations, combined with an ageing population, means we need to invest in talent now and using technology to free them up is certainly a good way to plug any holes and drive innovation.
This new breed of IT teams will change the perception that IT is a cost centre, and liberating teams from manual toil will go a long way in achieving this. Granted, this will not be possible for all but giving employees the chance to grow and hone their skills will help drive them and the business forward in tandem – leaving a culture of consistency and excellence in its wake.
Excitement for the arrival of 5G has been building for a number of years. It was almost a decade ago that the first commercial LTE (Long-Term Evolution – a 4G upgrade) network was launched in Sweden, and it took mobile operators another three years to launch it in the UK.
By Dipl. Ing. Karsten Kahle, Corning Services, responsible for the SpiderCloud-E-RAN system.
In 2019, the world is a very different place, underpinned by the movement of massive quantities of data each day, both individually and industrially. The new 5G standard is critical to supporting the continued development of the data economy.
This is as true of private 5G networks as it is of public ones. Implemented effectively, 5G can allow Internet of Things (IoT) capabilities to be deployed across significantly superior ranges. Within industrial environments, applications with extremely high availability and low latency are also possible, as well as applications with high bit rates driven by the use of new technologies such as augmented reality or Ultra-HD video.
Graphic 1: Application examples for private LTE and 5G networks
5G for local applications
As technology infiltrates industries across almost every conceivable vertical, 5G will become a cornerstone of a full range of new applications. Take the Smart Cities phenomenon, for example. The use of data and new technology is enabling better control of urban environments in many areas, from sustainable waste management and infrastructure planning to parking and traffic control. In manufacturing, ‘Industry 4.0’ is driving new implementations of automation, artificial intelligence and robotics within the industrial process. However, as you look closer at the many applications (Graphic 1), it becomes clear that these are often limited to local, non-public areas, such as private properties, grounds and commercial buildings. In these cases, it is not necessary to have connections provided by mobile network operators.
The UK Telecoms Regulator Ofcom proposes offering local licenses for private 5G networks
At the 5G World Summit at London’s ExCeL in June this year, Mansoor Hanif, the CTO of the UK telecoms regulator Ofcom, presented the body’s latest proposal for offering local licenses for such applications: a licensing scheme with frequencies in the range between 3.8 GHz and 4.2 GHz that would open prime 5G spectrum for local applications by owners, tenants or leaseholders. Ofcom is currently assessing the consultation responses and, this summer, aims to announce its decision with precise guidelines for applying for the license bands. The allocation of a frequency band for local use enables private users to set up and operate their own private mobile radio network.
As the allocation of frequencies is independent of technology, the private operator is free to choose whether to build on the established and powerful LTE technology or to move straight to the emerging 5G technology.
Increase of data supply within buildings
One of the major considerations for 5G is that it occupies a transmission frequency that is significantly higher than legacy network standards. This can be problematic, as the higher a transmission frequency is, the less able it is to penetrate obstacles. For mobile networks, this means buildings and commercial structures.
With studies showing that more than 80% of mobile phone traffic is generated inside buildings, there is a danger that users will encounter increasing issues with connectivity as 5G becomes more prevalent. Even before 5G implementation, around 75% of mobile phone users complain of poor mobile phone services indoors.
Data usage will only increase from here, which means that there will likely be an increased demand for solutions that tackle this problem. In future, it could be virtually impossible to supply buildings with the highest network frequencies from the outside.
Graphic 2: Modern buildings attenuate mobile communication signals
In-house solutions could become indispensable
With external solutions expected to reach breaking point, we can conclude that in-building mobile coverage will be indispensable in the future. Systems that can perform this function are already available. For example, Corning’s SpiderCloud E-RAN (Enterprise Radio Access Network) can already provide an in-house mobile network via LTE, with 5G soon to follow.
Technically, this type of solution is organised around three components: the Radio Node, the Services Node and the Evolved Packet Core (EPC).
1. The Radio Node
This is a radio base station the size of a standard WLAN access point. It operates in the designated frequency range for private mobile communications networks and can simultaneously serve up to 128 active subscribers, while the number of registered subscribers can be many times higher. A Radio Node is powered via PoE+ over the structured building cabling. The connection between Radio Node and Services Node can be made both on LAN/WAN infrastructures and via the Internet.
2. The Services Node
The Services Node manages and coordinates the connected Radio Nodes. For example, handover – the transfer of a connection from one radio cell to another as a device moves – takes place between the Radio Nodes. The Services Node also delivers connectivity to the higher-level EPC or to the Internet, and coordinates the encryption of transmissions on all sections.
3. The EPC
A private mobile network also includes an Enterprise EPC, to which several Services Nodes, with their Radio Nodes, can be connected. The EPC service is used for SIM card authentication and for authorising subscribers and services, as well as for checking and ensuring data integrity. Quality of service (QoS) parameters are also assigned here, and compliance with them is monitored.
Graphic 3: Simple in-house supply with existing LAN infrastructure
Optimal reception performance within the building
The SpiderCloud E-RAN system is also referred to as a self-organising network (SON), in which individual Radio Nodes recognise and measure each other in order to continuously self-optimise the radio parameters. This ensures the best possible reception within the building.
Since the system is a distributed mobile communications system, a single building can be served via in-house cabling, several buildings on a campus via WAN connections, and buildings spread over a larger area via WAN links or the Internet.
This system can be set up by owners or tenants of properties. In the course of the trend towards outsourcing, however, more and more companies are being commissioned to provide these services for customers.
A new business model for local carriers and infrastructure operators
Local carriers and infrastructure operators can host the two central components – the Services Node and EPC – in one location and only have to set up the Radio Nodes at a customer's premises, which are connected to the Services Node via WAN routes. This opens up the possibility of a new business model. Private network operators can expand their portfolio and extend their value chain by providing infrastructure for private mobile communications. The private mobile network is a logical extension of the fibre-optic infrastructure and enables supply not only to the building, but also to the mobile device. Operating this infrastructure becomes an essential building block of the local carrier's offering.
Graphic 4: Provision of a private mobile network by local carriers and service providers
The security and integrity of the data is critical, which is why stringent security measures are defined in the LTE/5G standards. The ability to assign QoS classes at a very granular level allows a uniformly high user experience across all services.
Furthermore, the customer's private applications can be hosted in the carrier's data centres and made available to the user directly via the fibre optic infrastructure and the private mobile network. This leads to an all-in-one solution for the customer provided by the local carrier.
A 5G landscape changes more than just quality of signal
In conclusion, it is, of course, no surprise that 5G is going to change the landscape for mobile networks and infrastructure providers. However, looking closer at the technical impact of the new standard, it also becomes clear that there are going to be obstacles, and with them opportunities, for those same stakeholders. As public networks are rolled out, and private networks become more prevalent, we will see who can seize these opportunities.
AIOps is an emerging technology which offers the promise of helping IT and data centre teams get to grips with the growing complexity of their respective infrastructure environments – with the ultimate objective of ensuring application performance optimisation. In this issue of Digitalisation World, you’ll find a variety of thoughts and opinions as to just what AIOps offers, and why it matters. Part 3.
The benefits of AIOps
By Paul Mercina, Director of Product Management, Park Place Technologies.
AI technologies are all the rage and in many cases the capabilities as they exist today are over-hyped. One area where AI is delivering on its promise, however, is in AIOps.
Originally coined to mean “algorithmic IT operations” and then updated to “artificial intelligence for IT operations,” AIOps nonetheless remains securely rooted in the mature fields of big data analytics and machine learning. When applied to increasingly complex IT infrastructure, these tools tame the barrage of information and provide real-time and predictive insights to improve operations.
Humans Can’t Keep Up
AIOps has arrived just in time to save data center teams from manual oversight of IT systems and facilities and the overwhelming volume of alerts and information delivered across a variety of monitoring systems. Separating signal from noise and highest impact issues from less critical problems has become extremely difficult for any IT organization. Root cause analysis is, similarly, reaching beyond mere human capabilities.
By aggregating log files and monitoring system data, AIOps moves the data center toward the coveted single-pane-of-glass visibility. Automation features can take over the burgeoning variety of administrative tasks, reducing costs and stretching staff resources further. And machine learning systems, armed with large volumes of data, can perform pattern analysis and outlier identification, resulting in increasingly accurate and predictive fault identification and recommended interventions.
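As a simple illustration of the outlier identification described above (and only that – no particular vendor's implementation), the sketch below flags any metric whose latest reading sits several standard deviations away from its own recent history.

```python
# Minimal z-score outlier check across aggregated monitoring metrics.
# Illustrative only: real AIOps platforms use far richer models.
from statistics import mean, stdev

def find_outliers(metric_history, threshold=3.0):
    """metric_history: {metric_name: [recent readings, newest last]}.

    Returns metrics whose newest reading deviates from the mean of the
    preceding readings by more than `threshold` standard deviations.
    """
    outliers = {}
    for name, readings in metric_history.items():
        history, latest = readings[:-1], readings[-1]
        if len(history) < 2:
            continue  # not enough data to judge
        spread = stdev(history)
        if spread == 0:
            continue
        z = abs(latest - mean(history)) / spread
        if z > threshold:
            outliers[name] = z
    return outliers

history = {
    "db01.cpu_util": [41, 39, 44, 40, 42, 43, 95],   # sudden spike
    "db01.disk_iops": [1200, 1180, 1250, 1210, 1230, 1190, 1220],
}
print(find_outliers(history))   # flags db01.cpu_util
```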
These are powerful advantages, but the downstream impacts for the business and its customers are what’s truly driving adoption. AIOps is fast becoming indispensable for organizations determined to keep pace with customers’ high demands for availability, reliability, and performance. In fact, AIOps is among the most promising—and proven—options for boosting uptime. Interventions shift from after-the-fact repairs to real-time and even proactive solutions implemented before systems go down in the first place.
Where to Apply AIOps Now
The applications for AIOps are quickly multiplying. Google, in a well-publicised example, has developed a system to monitor temperature, cooling system, and other information from hundreds of sensors around the data center and recommend changes. The company recorded a 40 percent reduction in cooling-related energy requirements, and commercial AIOps-based facilities management systems are seeking similar impacts.
Park Place Technologies deploys AIOps for hardware monitoring and has achieved a 31 percent reduction in mean time to repair across thousands of customer sites. We’re benefiting from proactive fault detection and the automation of triage and trouble ticket generation to give engineers more timely and complete information to effect repairs and prevent downtime.
Additional use cases range from resource utilisation to storage management to cyberthreat analysis. Fortunately, there are off-the-shelf AIOps tools available for a variety of purposes, as well as managed services providers integrating AIOps applications. These third-party solutions not only offer turnkey opportunities to engage AIOps, they also achieve faster time to value.
As progress continues, we can expect increasingly integrated, end-to-end AIOps systems capable of analysing language and other complex inputs, leveraging deep neural networks, and automating more of the adjustments recommended by the machine learning algorithms. Data center leaders will need to get used to the idea of turning over more functions to these powerful autonomic solutions, so they themselves can make better use of their precious time.
AIOps: do you really need it?
By Nigel Kersten, Field CTO, Puppet.
AIOps is the natural evolution of IT operations analytics, where we employ big data and machine learning to do real-time analysis of our IT data sources in order to automatically identify and remediate issues.
Everyone is dealing with the fact that their environments have become too complex and vary far too frequently to manage manually, and that even a small amount of automation is insufficient to keep up with business demands. AIOps sounds great! Let’s get the robots to do all of that menial, manual and mundane work that most IT departments are suffering under! It’s a seductive proposition – just rub some machine learning on a problem and it will go away!
The reality is that, as we say in computing, “garbage in, garbage out”. Most companies don’t have consistent enough data to make automated decisions on, or if they do, then that data is stuck inside an organizational silo and isn’t easily accessible to machine learning tools.
To get to a position where you can choose to take advantage of AIOps, you need consistent data about your infrastructure and release processes. To get consistent data about infrastructure state, invest in an automation platform, stop making changes manually, and adopt an infrastructure-as-code solution. To get consistent data about process times and change releases, invest in automating as much of your software delivery lifecycle as you can and minimise human interaction in your change management processes. Build robust automation and try to get rid of your change committees.
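To make "consistent data about infrastructure state" a little more tangible, here is a minimal, hypothetical drift check: the state declared in code is compared with the state actually observed, and every mismatch becomes data you can report on or remediate automatically. It is an illustration of the principle, not Puppet code, and the hosts and settings are invented.

```python
# Illustrative drift detection: compare declared infrastructure state
# (what your infrastructure-as-code says) with observed state.
# Hypothetical data; not tied to any specific automation platform.

declared = {
    "web01": {"nginx_version": "1.24", "tls": "enabled", "ntp": "enabled"},
    "web02": {"nginx_version": "1.24", "tls": "enabled", "ntp": "enabled"},
}

observed = {
    "web01": {"nginx_version": "1.24", "tls": "enabled", "ntp": "enabled"},
    "web02": {"nginx_version": "1.22", "tls": "enabled", "ntp": "disabled"},
}

def drift(declared, observed):
    """Return {host: {setting: (wanted, actual)}} for every mismatch."""
    report = {}
    for host, wanted in declared.items():
        actual = observed.get(host, {})
        diffs = {
            key: (value, actual.get(key))
            for key, value in wanted.items()
            if actual.get(key) != value
        }
        if diffs:
            report[host] = diffs
    return report

print(drift(declared, observed))
# {'web02': {'nginx_version': ('1.24', '1.22'), 'ntp': ('enabled', 'disabled')}}
```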
Many of the problems I see people trying to solve with AIOps could be more easily solved by taking an incremental approach to automation and addressing the underlying root causes. If your teams are drowning under a constant wave of menial and low-value tickets, don’t look to adopt an AIOps tool to tell you which issues are actually important so they can work on them – apply systems thinking, look for the underlying causes, automate away the inconsistency in your environments, automate your release processes and empower your teams to operate with minimum viable bureaucracy.
Once you’ve got to that point, then investigate whether AIOps can help. You might find that the problems no longer exist.
The new paradigms for acquiring storage have enabled flexible financial models which allow a company to use its financial assets, both CapEx and OpEx, to dynamically manage infrastructure to fit within the company’s current budget.
By Mike Jochimsen, Director of Alliances, Kaminario and SNIA CSTI Member.
Financial accounting isn’t normally a topic for discussion in the SNIA Cloud Storage Technologies Initiative (CSTI) because we are all about technology, right? After all, why would a storage architect care about this. Isn’t that why we have accountants? Yes, sort of. However, it is the need to identify the most effective use of the company’s financial position during a cloud transition which drives this discussion. Disclaimer: the author is not an accountant and is especially not YOUR accountant, so all information contained herein should be considered opinion and you should consult your own accountant for accurate interpretation and application to your company’s strategy. Further, this article does not address the question of how your company should allocate its capital and operational expenditures, but simply points out some options to consider when making that decision. Having said that…
With the advent of the cloud and software defined storage (SDS), the question for business today is whether to categorize a storage purchase as capital expense (CapEx) or operating expense (OpEx). And the paradigm behind that question so far has been that storage purchased for a private data center is usually CapEx and storage capacity rented in the public cloud is usually OpEx. Recent advances in how cloud storage can be acquired for use in a private data center are changing the paradigm behind the question again.
Traditionally, when a business acquired computer storage for use in its operation it was generally considered a fixed or long-term asset, meaning the company needed to account for it as a capital expense. Of course, there were exceptions to this, mainly regarding whether the acquisition cost exceeded the capital limit defined by the company or whether the useful life exceeded one year. With data center storage costs in the tens to hundreds of thousands of dollars and minimum life expectancies generally at least 3-5 years, it was safe to assume that purchasing storage fell under the definition of CapEx. On the other hand, the cost to manage and maintain that storage was generally considered to be an operating expense, which covered those expenses deemed necessary for the day-to-day running of the business.
This approach has been standard in business for many years. However, as more businesses either began to migrate workloads into public cloud environments, or build private clouds based on software-defined storage, the lines between CapEx and OpEx expenses became more blurred. At the same time, off-balance sheet financing became popular to keep debt from appearing on the company balance sheet making leverage ratios look more appealing to potential investors or lenders. Operating leases became a popular tool, allowing businesses to treat expensive computer equipment as an operating expense by leasing the equipment for less than its useful life with the right to purchase it at fair market value or return it at the end of the lease. Recently, the Financial Accounting Standards Board (FASB) and the International Accounting Standards Board (IASB) have released new guidelines for accounting which require leases spanning more than one year to be accounted for on the balance sheet to more accurately reflect company leverage.
The fundamental question to ask when determining if a storage expense should be CapEx or OpEx is whether the company can or does take ownership of the asset, either at the time of acquisition or at the termination of a lease. Generally, in rental or subscription agreements, the company does not and cannot take ownership of the asset. Off premises public cloud infrastructure is by default a subscription and the company does not take ownership of the asset. It stays in the cloud data center under control of the cloud provider. However, in the spirit of ultimate flexibility, public cloud providers have begun to offer contracts which include a license for the infrastructure that the company is using rather than a subscription, which then qualifies the acquisition cost as a capital expense.
On the other hand, storage brought into a private data center, including its software and maintenance cost, has traditionally been capitalized. As with the flexibility of acquiring public cloud storage, companies can also rent hardware and sign subscription agreements for the accompanying software. In this scenario, even though the storage is physically in the company’s data center the company has not taken ownership of the asset and it is returned at the end of the agreement, qualifying the expense as OpEx. In addition, utility-based storage acquisition models, which combine the hardware and software into a single subscription price which can be billed based on capacity utilized are now becoming de rigueur.
Why should this be a consideration for a company, especially its technical staff? The different instruments described above can be used to optimise the company's financial position in pursuit of its business and financial goals. Technical staff should therefore be aware of the various models so they can present the appropriate options to their financial team. The following table recaps the various storage acquisition methods described above and their potential CapEx/OpEx impact.
Storage Acquisition Method | OpEx | CapEx
Purchase storage appliance with a perpetual license and term maintenance from a single vendor | | X
Purchase storage hardware from an OEM, purchase storage software perpetual license from an SDS vendor with term maintenance from both | | X
Purchase storage hardware from an OEM or distributor, sign subscription agreement for a software license from an SDS vendor | X (SW) | X (HW)
Sign utility-based storage license from vendor and pay a monthly fee for HW and SW based on actual consumed storage – don’t take ownership of the storage asset | X |
Use public cloud storage resources and pay only for the storage consumed | X |
Use public cloud storage resources with a license for the software from the public cloud provider | | X
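As a purely illustrative piece of arithmetic – invented figures, no tax, leasing or discounting considerations, and certainly not accounting advice – the sketch below shows the kind of back-of-the-envelope comparison that sits behind these options: an upfront purchase spread over its depreciation period versus a capacity-based subscription.

```python
# Back-of-the-envelope comparison of an upfront purchase (CapEx,
# straight-line depreciation) against a utility subscription (OpEx).
# All figures are invented for illustration; consult your accountant.

def capex_annual_charge(purchase_price, useful_life_years, annual_maintenance):
    """Annual P&L impact of a purchase: depreciation plus maintenance."""
    return purchase_price / useful_life_years + annual_maintenance

def opex_annual_charge(price_per_tb_month, consumed_tb):
    """Annual cost of a utility model billed on consumed capacity."""
    return price_per_tb_month * consumed_tb * 12

capex = capex_annual_charge(purchase_price=250_000,
                            useful_life_years=5,
                            annual_maintenance=20_000)
opex = opex_annual_charge(price_per_tb_month=35, consumed_tb=150)

print(f"Purchase model:     ~${capex:,.0f} per year")
print(f"Subscription model: ~${opex:,.0f} per year")
```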
As you can see, acquiring storage for your company’s use is not quite the straightforward process it used to be. The new paradigms enable flexible financial models which allow a company to use its financial assets, both CapEx and OpEx, to dynamically manage infrastructure within its current budget. As a customer looking at OpEx-based offers from multiple vendors, it can often be hard to compare and understand the value each offer would bring to your company. The following questions can be a starting point for that comparison.
These are just a few questions which can help identify the strategy used to construct the offer price. As you dive deeper into the analysis, ask your vendors plenty of questions, and be sure to ask the same question in different ways if you are not satisfied with the answers you get.
About the SNIA Cloud Storage Technologies Initiative
The SNIA Cloud Storage Technologies Initiative (CSTI) is committed to the adoption, growth and standardization of storage in cloud infrastructures, including its data services, orchestration and management, and the promotion of portability of data in multi-cloud environments. To learn more about the CSTI’s activities and how you can join, visit snia.org/cloud.
Chris Wellfair, Projects Director at Secure I.T. Environments Ltd, outlines the key maintenance areas that should be covered in a maintenance programme, so data centre managers can think about whether it’s time to update their regime.
Ask any data centre manager whether they maintain their data centre and you’ll get a reflex reaction “Yes!” and probably a funny look. The majority of companies have an ongoing maintenance schedule, but does it really cover everything needed? Sometimes it's a good idea to stop and review your service level agreement and perform some maintenance on the maintenance regime itself!
We all know why ongoing maintenance is needed: after all, it is our job to keep the business running, minimise downtime and ensure that our cooling and general energy efficiency are at an optimum. A maintenance regime also gets you ‘closer’ to the equipment. What do I mean by that? It gets you more familiar with it: just as if you owned a bicycle, a classic car or a small boat, getting your hands dirty gives you a better idea of what works, what doesn’t, where an upgrade might be needed, or whether there are persistent problem areas.
There is an endless amount of best-practice maintenance information available covering every distinct piece of equipment or cabling in your data centres, and therefore no way we could cover it all here. However, there are key areas that should be part of every regime, regardless of the make-up of your data centre or its size. Here is my take on what those areas should be:
Part of the nature of maintenance regimes is that they are unique to a business, so many will be looking at this list and saying “that point is not relevant for me” or “Chris has left X, Y and Z out”. The whole point of this piece was to get you thinking about the regime itself, not just performing the checklists that were put in place when the DC was handed over. Unless it is very new, the DC will have evolved, but has your regime evolved too? If not, then it is probably time to take a step back and consider whether your data centre is underperforming as a result, or whether the risks of downtime are higher than they should be.
How a new network infrastructure increases sustainability and network reliability.
In a growing organisation, IT network capacity and reliability are essential, as is access speed. The modular system that Groupe SMA implemented for its data centre and network infrastructure, together with enhanced support, provides an easy-to-install, easy-to-manage solution with the scalability for future growth.
The Groupe SMA is a French mutual insurance company primarily serving the construction and civil engineering sector. With more than 3,200 employees serving its 150,000 members, the company has a turnover of more than two billion Euro. As a result, Groupe SMA is the leading French insurance company for construction industries.
For over 160 years, the organisation has constantly adapted to different market developments by trusting its values of expertise and innovation. Previously, 1,300 head office employees were spread among five buildings with outdated network infrastructures. This inevitably led to challenges in scalability and capacity, as well as increasing operational expenses which it needed to control.
To support the continued growth of its overall business and to drive operational efficiencies, the organisation decided to migrate to a new 380,000 square foot building in Paris. This move would allow the company to manage a facility with full IP capabilities, allowing for more communication points than the previous facilities and supporting its critical voice, data and video needs, along with security surveillance, access control, clocks, and AV systems and controls. The consolidation would create opportunities to expand the business and maximise space utilisation while reducing operational expenses.
The new building would involve merging 1,300 people, two data centres, more than 7000 drops and 39 telecom rooms. In addition, the new facility needed to highlight Groupe SMA’s commitment to building quality structures to stay relevant within the construction industry.
“Upgrading our facility would provide an optimised environment for employees to collaborate freely, enhancing productivity. We also want the ability to quickly add new business applications and upgrades as they occur,” said David Marais, DSI, Groupe SMA.
Increased Resource Utilisation
The merged facilities would provide better use of existing resources and increase manpower productivity. Consolidating all offices into one facility would increase manageability and eliminate the operating cost of multiple facilities. Implementing a complete upgrade of its IT infrastructure would increase its support for business innovation and help maintain its competitive advantage. Specifically, supporting 5,400+ public IPs for access and use of centralised corporate resources would allow the company to realise this goal.
“We needed a solutions provider that could offer a one-stop shopping experience where our business could run smoothly and quickly throughout the transition and into the future,” said Marais.
As part of the project, the company chose system integrator SPIE for its expertise in infrastructure cabling while Eiffage Energie handled the building cabling. Cisco Catalyst 3850 switches were chosen and needed a maintenance solution compatible with the Cisco equipment. SPIE and distributor CCF Sonepar, which managed the global products logistics, recommended Panduit for its quality, experience and expertise.
Considering the building’s specific design and significant size – 10 levels split into five geographical zones, served by four telecom rooms per level – the client planned to pre-connect the switches to all the network access points. This would reduce access and handling operations in the telecom rooms, make the 7,000 ports easier to manage and ensure a clean aesthetic in the operational areas.
The company deployed Net-Access S-Type Server Cabinets to house the Cisco core switches and network fabric devices. These cabinets provide effective thermal management to ensure optimised equipment operation and increased uptime.
The highly flexible modular system provides for future expansion of the data centre, as more cabinets can be added to the system and as network upgrades occur. The open-rack accessibility contributes to data centre aesthetics with properly routed cables which improves network accessibility and availability. Cabinet and door design has been optimised for airflow management to ensure maximum cool air passes across the live equipment.
Cabling to 39 Telecom Rooms
After finding the demo results from another supplier to be unsatisfactory, the company decided to consider jumper cabling and met with Panduit during the RFP (request for proposal) process. Panduit implemented the QuickNet Plug Pack Assembly in each of the 39 telecom rooms to help facilitate quick and easy connections and to provide a rapid patching application for the Cisco Catalyst 3850 switches. The mass plugging capability reduces the time and cost associated with installing and maintaining structured cabling links.
Category 6A copper cabling connects several of Cisco’s higher-band switches internally for maximum bandwidth, network reliability, and a migration path for future applications in Groupe SMA’s data centre and across the enterprise. Panduit Mini-Com modular jacks, patch panels, and adapters were deployed across the installation, providing flexibility to simplify moves, adds and changes (MACs) and reduce operational expenses.
Groupe SMA successfully moved from multiple locations it had outgrown to one with a larger and more reliable data centre in a very short time. The features of the new data centre will also help reduce the cost of MACs in the future and maintain an optimised topology across the enterprise.
“The result meets our expectations and the solution is flexible enough to plug cable jumpers in various colours. Within the framework, where there is a specific need, instead of one cable for a jumper, the QuickNet connector is on every port of the switch,” said Marais.
Automation is becoming a vital component of business survival, according to the majority of respondents to a recent Harvard Business Review Analytic Services survey sponsored by Oracle.
By Neil Sholay, VP of Digital, Oracle EMEA.
Little wonder that some forward-looking organisations are already embedding artificial intelligence (AI) and machine learning (ML) technologies into their critical business systems and processes, with key areas of the business predicted to benefit the most from this type of automation being operations, customer service, decision support, IT and finance.
Yet, according to the report, few have made the move to any significant extent – for a number of reasons. The combination of fear and a natural resistance to change means time is essential – companies need time to get their heads around how these emerging technologies can fit into their current enterprise systems, and how to do that within their existing budgets, skills and culture.
While this change certainly won’t happen overnight, respondents are realising the potential and expect to see rapid integration of intelligent automation over the next three years. That being the case, business and IT leaders need to start considering how to move along their automation journey from basic adoption to full intelligent automation. But how?
One way to do this might be to consider a model devised by Gartner around the ‘six levels of automation’. While specific to the supply chain, it in fact has much wider business applications.
The six levels can be seen as a path which organisations can consider, helping them decide where they want to be on that scale – both as an enterprise as a whole, but also around particular use cases or business processes. The phases gradually ramp up, starting with early, low-hanging fruit for intelligent automation around data-intensive and repetitive tasks that machines can do better and faster than humans, and moving up through stages where the system itself gains more and more autonomy before it becomes fully autonomous.
By looking at their business and its core activities as a whole, organisations can then determine what it means to them to operate as an intelligent enterprise, and how they can move from pockets of automation to a strategic, enterprise-wide approach.
So what are the six stages?
Level one starts with a very basic form of automation – ‘give me the facts’ – essentially asking the system to take general data around business areas like sales and production and analyse it before presenting the results.
The next stage, ‘give me a suggestion’, progresses things a bit further, with the system moving on to making recommendations from specific information, typically in the form of statistical forecasting.
Then we have an initial level of having the system thinking for itself where it starts to ‘help me as I go’. Here, automated alerts add a level of advisory guidance to the user as they plan and execute tasks.
This then progresses on to ‘do this task for me’, where we rely on the system to take the information at hand, make an assessment and then have the capacity, if told via an ‘opt-in’, to act on it. This could apply to tasks such as recommending the best approach.
Level five focuses on the system actually having responsibility for a task, such as automatic reordering ‘until it is told otherwise’. While at this fifth level there is still some form of human input, the final, sixth level is far more sophisticated – and the one that really pushes the boundaries.
Here the systems use a mix of machine learning and automation to display more human-like traits, so they can begin thinking ‘autonomously’ for themselves. An example of this would be the Oracle Autonomous Database, which is essentially a self-driving, self-securing and self-repairing database that automates key management processes, including patching, tuning and upgrading to deliver unprecedented availability, performance and security—at a significantly lower cost.
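For readers who prefer to see the model as a structure, the sketch below is one illustrative way (my own framing, not Gartner's definitions) of expressing how the same decision escalates as a system is allowed to operate at higher levels.

```python
# Illustrative mapping of the six automation levels to system behaviour.
# The level names paraphrase the stages described above; the logic is a toy.
from enum import IntEnum

class AutomationLevel(IntEnum):
    GIVE_ME_THE_FACTS = 1
    GIVE_ME_A_SUGGESTION = 2
    HELP_ME_AS_I_GO = 3
    DO_THIS_TASK_FOR_ME = 4     # acts only with explicit opt-in
    OWN_THE_TASK = 5            # acts until told otherwise
    FULLY_AUTONOMOUS = 6

def handle_low_stock(level, opt_in=False):
    """Toy example: how a reordering decision escalates with level."""
    if level == AutomationLevel.GIVE_ME_THE_FACTS:
        return "Report: stock is below 100 units."
    if level == AutomationLevel.GIVE_ME_A_SUGGESTION:
        return "Forecast suggests reordering 500 units."
    if level == AutomationLevel.HELP_ME_AS_I_GO:
        return "Alert: reorder soon or you risk a stock-out next week."
    if level == AutomationLevel.DO_THIS_TASK_FOR_ME:
        return "Order placed." if opt_in else "Awaiting your opt-in to order."
    # Levels 5 and 6: the system owns the decision itself.
    return "Order placed automatically."

for level in AutomationLevel:
    print(level.name, "->", handle_low_stock(level, opt_in=True))
```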
The implementation of autonomous databases will revolutionise data management: rapid data access and enhanced insights will drive significant increases in productivity, while manpower can be optimised and resources deployed to higher-value tasks.
While this final level is by far the most sophisticated, even at the early stages of automation businesses can gain tremendous benefit. In addition to improving the efficiency and quality of processes, intelligent automation enhances decision making by automating the routine and learning from the large data sets enterprises increasingly have access to. This was named a primary goal of intelligent automation efforts by nearly two thirds of respondents.
In fact, with the massive explosion in data, it is increasingly clear that some decisions can’t be executed without automation, one respondent to the report says. For example, “any kind of real-time, next best offer, next best action kinds of things like what ads appear on which customers’ screens in digital marketing—that’s got to be done in milliseconds. Humans can’t think that fast and digest the information required.”
And it is these benefits that are likely to drive the rapid adoption of intelligent automation. As a result, it is hardly surprising that the number of companies describing their enterprise’s use of AI and automation today as sophisticated and extensive will jump from just 10 percent now to well over half within three years.
To get there, a substantial amount of change will need to happen, not least in the digital transformation of data, skills, processes, and culture. Fortunately, most companies are now travelling well along their digital journey, so that with the help of tools like the six levels they can then take best advantage of this emerging area and reap the benefits.
As a parting thought, it’s worth bearing in mind that machines should not be used for everything just because they can be. Whilst they are excellent at taking on dull, repetitive tasks, they lack creativity and flair – a gift exclusive to the human workforce.
FORGET Casablanca and It’s A Wonderful Life in the 1940s or Lawrence of Arabia and The Great Escape in the 1960s. As any film buff worth their salt will tell you, the 1980s was the real golden age for cinema – especially if you were a child.
By Gary Lessels, General Manager at HotDocs, Powered by AbacusNext
Movies like Back to the Future, The Goonies and the Indiana Jones series captured the imaginations of a generation of young viewers, allowing them to explore weird and wonderful worlds from the comfort of their local fleapit. For the price of a ticket and a bucket of popcorn, parents could keep their children entertained throughout a Saturday matinee.
One of the most memorable adventures from that era came in the form of Flight of the Navigator, a 1986 science fiction romp in which Joey Cramer plays David Freeman, a 12-year-old boy abducted from 1978 and returned in 1986 by an alien spacecraft piloted by a robot known as “Trimaxion Drone Ship”, or “Max” for short. David is the “navigator”, with his brain containing the star charts that Max needs to fly the ship, with Max responding to David’s instructions with the film’s famous catchphrase – “Compliance”.
Fast-forward 30 years and, while we may not yet have spaceships that can fly 560 light years in just over 2.2 hours, a whole host of technologies from analytical tools and automation through to artificial intelligence (AI) and machine learning – all with echoes of Max – are helping to make our working lives easier. Indeed, those technologies are aiding companies with compliance and, even more importantly, with the underlying reason for compliance – minimising risk.
Risks lie all around us. Whether it’s crossing the road, buying a used car or sniffing the pint of milk that’s been in the fridge for just a wee bit too long, we must assess risks all the time during our daily lives.
And it’s exactly the same when it comes to business, with risks requiring assessment each and every day. Assessing the credit worthiness of a new customer, weighing up the pros and cons of switching suppliers, getting the wording for a contract finalised – the list goes on and on and on.
Such risks are being brought into sharp focus by Brexit, with businesses unsure what the coming months or years will bring for them, their customers and their suppliers. For all the talk of political wranglings at Westminster and Brussels, it’s the hard-pressed companies on the ground that will have to try to make sense of the risks – and, indeed, the opportunities – that the UK’s exit from the European Union (EU) will present to them and their industries.
Complying with regulations is a major consideration in that risk mix. If a company doesn’t meet the requirements of its industry regulator then it simply won’t be able to operate.
Brexit raises hundreds if not thousands of questions when it comes to regulatory compliance. Few sectors of the economy are untouched by EU law and if the UK leaves the union without a deal in place then government officials will be inundated with a flood of questions – not all of which will have immediately obvious answers.
Which piece of paperwork will I need to fill in to export my goods to the continent? Which forms will I need to complete online to carry out internet transactions through my retail website? What permits will I need for the workers in my factory if they’re citizens of other countries?
It’s not simply about Brexit either. Across a whole range of industries – from the Financial Conduct Authority through to Ofgem – we’re seeing regulators stepping in to protect consumers and tighten the rules with which companies need to comply in order to retain their ability to trade, whether they’re selling personal loans, gas and electricity, or a host of other products and services.
Using automation and other forms of technology helps to minimise risk via two main routes. The first involves minimising the chances of humans making mistakes when they’re putting together important documents.
We’ve seen this over the past 20 years at HotDocs. Our clients – whether they’re law firms, banks or other important cogs in the economy – use document automation software to ensure that complex, business-ready paperwork is created without human error creeping in.
For example, having automated documents allows senior lawyers in a bank’s legal department to create watertight paperwork that can be used by non-legal staff out in their network of branches. A loan officer can then have confidence filling in a document because they know that an expert has kicked off the automated process.
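As a toy sketch of that principle – not the HotDocs product or its API – the example below has an "expert" define a template and its validation rules once, so that anyone filling it in later cannot produce a document with missing or malformed fields.

```python
# Toy document automation: an expert-authored template plus validation,
# so non-expert users cannot generate paperwork with missing fields.
# Illustrative only; not the HotDocs product or API.
from string import Template
import re

LOAN_TEMPLATE = Template(
    "This loan agreement is made between $bank and $borrower "
    "for the amount of GBP $amount, repayable by $due_date."
)

RULES = {
    "bank": r".+",
    "borrower": r".+",
    "amount": r"\d+(\.\d{2})?",          # e.g. 15000 or 15000.00
    "due_date": r"\d{4}-\d{2}-\d{2}",    # ISO date
}

def generate(fields):
    """Validate the supplied fields, then render the document."""
    errors = [
        name for name, pattern in RULES.items()
        if not re.fullmatch(pattern, str(fields.get(name, "")))
    ]
    if errors:
        raise ValueError(f"Missing or invalid fields: {', '.join(errors)}")
    return LOAN_TEMPLATE.substitute(fields)

print(generate({
    "bank": "Example Bank plc",
    "borrower": "A. Customer",
    "amount": "15000.00",
    "due_date": "2025-06-30",
}))
```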
This helps to minimise the legal risk; after all, it’s when members of staff who don’t have the appropriate training get involved that legal exposure begins to creep into agreements. Equally important, document automation helps to minimise the financial risk too – legal disputes cost money and entering into the wrong agreement with the wrong customer is bad for the bottom line.
Technology can also help to stop the experts from making mistakes too. When the regulatory environment is changing quickly – as is likely to happen no matter what the outcome of Brexit might be – it’s difficult for experts to keep on top of each and every single change in the legal landscape.
By harnessing technology, experts can be helped to keep up to date on the very latest changes. AI and machine learning can also be used to analyse related documents, highlighting changes or discrepancies that prompt an expert to cast another eye over their paperwork.
Such technology is being utilised around the world. In the past few months, Bizibody in Singapore has incorporated HotDocs into its LawCloud suite, while docQbot is using them to power its legal contract drafting service in China.
The second route via which technology can help to minimise risk is by supporting human insights, work and judgement to bolster their effectiveness and efficiency. Knowing that a document is already strong gives an expert the confidence to make changes using the inherent insight unavailable to a computer system.
Ultimately, automation and other technology allow businesses to free up their workforces to focus on the job at hand by concentrating on the tasks that require the human touch – such as creativity, innovation and client support. There are so many processes in which technology can make a difference, but when it comes to listening to customers’ needs and creating the products and services that they need then only talented members of staff are up to the task.
Spoiler alert: at the end of Flight of the Navigator, David orders Max to take him back to 1978. While the idea of travelling back in time might be attractive to some businesses – going back to a period before the Brexit vote, before the global banking crisis, before the internet – the reality is that all companies must face the future, whatever it might bring.
It may not contain Max’s robotic voice merrily cheeping “Compliance”, but businesses can use document automation and risk minimisation software to help them to navigate their way through the challenges and opportunities that lie ahead.
AIOps is an emerging technology which offers the promise of helping IT and data centre teams get to grips with the growing complexity of their respective infrastructure environments – with the ultimate objective of ensuring application performance optimisation. In this issue of Digitalisation World, you’ll find a variety of thoughts and opinions as to just what AIOps offers, and why it matters. Part 4.
Today, most IT teams find themselves facing a number of challenges presented by the new and increasingly complex infrastructure that accompanies digitisation, including an exponential increase in data volumes and types. In fact, Gartner estimates that the data volumes generated by IT infrastructure and applications are increasing two- to three-fold every year (and that’s compounding growth). There’s clearly too much data for the humans on the IT team to sort through on their own.
By Vijay Kurkal, COO at Resolve.
Expanding infrastructure also results in thousands of events streaming in every day to overtasked admins on the front lines. Given the immense volumes and the high rate of false alarms, IT teams are forced to simply ignore many of these alerts. On top of that, teams are tasked with tracking a dynamic, ever-morphing infrastructure that is heavily virtualised and spread across hybrid environments in the cloud and multiple data centres. Despite these challenges, IT is still expected to resolve requests, incidents, and performance issues in seconds, not days – without introducing more people to their already overburdened teams.
AIOps, particularly when combined with automation, can help IT teams survive and thrive in this new era of increasing complexity. AIOps is a term coined to describe the use of artificial intelligence (AI) to aid in IT operations. AIOps technologies harness AI, machine learning, and advanced analytics to aggregate and analyse immense volumes of data collected from a wide variety of sources across the IT infrastructure. In doing so, AIOps quickly identifies existing or potential performance issues, spots anomalies, and pinpoints the root cause of problems. Through machine learning and advanced pattern matching, these solutions can even effectively predict future issues, enabling IT teams to automate proactive fixes before issues ever impact the business.
AIOps technologies also offer advanced correlation capabilities to determine how alarms relate to one another. This separates the signal from the noise and ensures IT teams focus their attention in the right place, streamlining operations. Additionally, many AIOps solutions can automatically map the dependencies between dynamic, changing infrastructure components to provide real-time visualisation of the relationships between applications and underlying technology. This makes it much easier to see how things are connected when troubleshooting and significantly reduces the time to solve problems.
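To illustrate the correlation and dependency-mapping idea (with an invented topology and invented alerts, not Resolve's implementation), the sketch below walks each alerting component's dependency chain upstream and attributes it to the most upstream component that is also alerting, so one probable root cause surfaces instead of a wall of symptoms.

```python
# Toy alert correlation: walk each alerting component's dependency chain
# upstream and attribute the alert to the most upstream component that
# is also alerting. Topology and alerts are invented for illustration.

# component -> the component it depends on (None = nothing upstream)
DEPENDS_ON = {
    "checkout-app": "payments-api",
    "payments-api": "db-cluster",
    "db-cluster": "storage-array",
    "storage-array": None,
    "search-app": "search-index",
    "search-index": None,
}

def probable_root(component, alerting):
    """Follow dependencies upstream to the furthest alerting component."""
    root = component
    current = DEPENDS_ON.get(component)
    while current is not None:
        if current in alerting:
            root = current
        current = DEPENDS_ON.get(current)
    return root

def correlate(alerting):
    grouped = {}
    for component in alerting:
        grouped.setdefault(probable_root(component, alerting), []).append(component)
    return grouped

alerts = {"checkout-app", "payments-api", "db-cluster", "search-app"}
print(correlate(alerts))
# -> 'db-cluster' groups the three related alerts; 'search-app' stands alone
#    (list ordering may vary, since the input is a set)
```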
While AIOps on its own drives tremendous value, the magic really happens when it is combined with robust automation capabilities that can take immediate and automated actions on the insights powered by the AI. When these technologies come together, they deliver a closed-loop system of discovery, analysis, detection, prediction, and automation, bringing IT closer to achieving the long-awaited promise of truly “self-healing IT.”
For IT teams faced with managing increasing complexity, AIOps and automation are the key to improving operational efficiency, reducing mean time to resolution (MTTR), and increasing the performance of business-critical infrastructure. It’s finally IT’s turn to harness powerful AI-driven and automation technologies – and it must do so to ensure the success of its digital transformation efforts.
The fundamental promise of AIOps is to enhance or replace a range of IT operations processes by combining big data with AI or machine learning, explains Justyn Goodenough, International Area VP at Unravel Data:
This promise holds appeal across industries as enterprises recognise the potential for AIOps to solve expensive, challenging and time-consuming problems in their big data deployments. Not only does AIOps have the potential to drastically reduce the cost of deployments, it can do so while improving performance.
These increases in performance are largely achieved by automating or enhancing processes across an extensive range of use cases. For almost all workloads, AIOps has the potential to automate several integral tasks, including workload management, cloud cost management, performance optimisation and remediation, and other processes. While running these tasks would typically be the responsibility of the data team, where they are time-consuming and repetitive, AI or ML can perform them independently and at much greater speed. This allows data teams to focus on high-value initiatives instead of constantly firefighting.
However, getting AIOps right is a challenge – you only get out what you put in. That is to say, these AI solutions need quality data inputs in order to generate useful outputs. These inputs need to be relevant, accurate, timely and comprehensive. To ensure their data inputs satisfy these criteria, enterprises need to measure the business outcomes they care about (time to insight, transaction response time, job completion time and so on) accurately, frequently and consistently.
That being said, integrating AIOps into big data workloads should be addressed on a case-by-case basis. What works for one organisation may not work for another. In recognition of this, several distinct approaches to AIOps have been developed:
For enterprises looking to optimise their big data deployments, AIOps is a necessary consideration. While a daunting prospect, evaluating which of these approaches is most pertinent to enterprise needs, and whether there are sufficient data inputs available, is a good first step on the path to AIOps.
An Interview with Dr. Thomas King
Thomas, where do we stand with AI for computer networks today?
From my point of view, we stand at the very beginning with AI in computer networks. AI is a very hot topic today in general, but when we think about AI we mainly talk about autonomous driving, Industry 4.0 and things like that. It is very rare for us to talk about AI in the context of computer networks. However, I'm pretty sure that this will change, because I can see a lot of applications for AI algorithms in computer networks. There are so many things to do more intelligently, with more AI support, which could drive a lot of value.
What is DE-CIX's experience with the topic?
So far, we have used a basic AI algorithm in our monitoring tools. We monitor our network infrastructure very heavily. To give you an example of the successful use of an AI algorithm in our work: we had an issue with a batch of optics. Optics are the parts of our switches that transform electrical signals into light and the other way around. Optics can easily be replaced, and there was a problem with a batch from a particular optics vendor, which meant that they died very early in their projected lifetime. All of the optics from this batch had the issue. When we realized this, and realized that we could detect the problem from the decreasing light levels that the optics measure and report, we came up with a very simple AI algorithm that warns us that we have to replace an optic before it dies. We need to do this in advance in order to make sure the optic can be replaced without any service interruption. This is an example of a simple but effective AI-supported activity which helps network operations to be very effective – and reduces the impact of service degradation.
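That light-level warning can be illustrated with a very small trend check: fit a line to recent receive-power readings and warn if the trend is projected to cross a minimum usable level within a planning window. The figures and thresholds below are assumptions for illustration, not DE-CIX's actual algorithm.

```python
# Illustrative early-warning check for a degrading optic: fit a linear
# trend to recent receive-power readings and estimate when it will cross
# the minimum usable level. Thresholds and readings are assumptions.
from statistics import linear_regression   # Python 3.10+

MIN_RX_POWER_DBM = -14.0      # assumed minimum usable receive level
WARN_WITHIN_DAYS = 30         # replace the optic if failure is this close

def days_until_failure(days, rx_power_dbm):
    slope, intercept = linear_regression(days, rx_power_dbm)
    if slope >= 0:
        return None           # not degrading
    crossing_day = (MIN_RX_POWER_DBM - intercept) / slope
    return crossing_day - days[-1]

# Daily readings over two weeks, steadily dropping.
days = list(range(14))
rx = [-9.0 - 0.2 * d for d in days]

remaining = days_until_failure(days, rx)
if remaining is not None and remaining <= WARN_WITHIN_DAYS:
    print(f"Replace optic: projected to fail in ~{remaining:.0f} days")
else:
    print("Optic trend within tolerance")
```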
Another example is our patch robot, which we have dubbed “Patchy McPatchbot”. The robot allows the interconnection of different dark fiber cables in a data center without any manual work involved. It is able to handle many hundreds of these connections. In daily operations, new connections are installed, and existing ones are changed or removed. The robot uses AI technology to make sure that the cabling does not end up in spaghetti-like knots – making sure that we have very clean and simple cabling. Without AI technology, such cabling would not be possible and would require us to do it like we did in the old days – by handling each connection manually, step by step.
We had a lot of fun implementing Patchy, and it was great to get acknowledgement from the team at Capacity Media for our work, honoring it as the “Best Internet Exchange Innovation” at their Global Carrier awards in late 2018. So far, he’s one of a kind – the only patch robot in action at an Internet exchange anywhere in the world. But he’s not going to be alone much longer. We’re planning some family members for him at some of the other Frankfurt sites – and we’ve been getting some great name suggestions on our Twitter channel.
What do you expect in the future?
I expect that AI technologies may well be used heavily in computer networks, because there are some really good use cases for AI. I have already touched on two of them: monitoring and routing. With monitoring, you can already derive some information from the data in your systems – and this means you can provide better service quality. With network optimization and routing, especially if there is an outage somewhere, you can use AI technologies to more intelligently route traffic through areas where there are no issues, and keep the traffic out of the areas where there is a bottleneck. The bottleneck might be caused by errors or equipment failure, for example.
Another area where it probably makes sense to use AI technologies, and we already see that today, is for security – mainly for detecting malicious activities. It is similar to monitoring: with security, you have patterns and you have behavior in your network, and through this you can recognize if a DDoS is occurring, or if somebody is scamming you, or trying to hack into your network. AI algorithms make it very easy to recognize these patterns and warn you that something weird or dangerous is going on.
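A stripped-down version of that pattern idea might look like the sketch below: compare each source's current request rate against its own recent baseline and flag anything that has jumped far beyond it. The data and thresholds are invented, and production DDoS detection of course looks at many more signals.

```python
# Toy volumetric anomaly check: flag sources whose current request rate
# is many times their own recent baseline. Invented data and thresholds.
from statistics import mean

def suspicious_sources(history, current, multiplier=10.0, min_rate=1000):
    """history: {source: [recent requests/sec]}, current: {source: rate}."""
    flagged = []
    for source, rate in current.items():
        baseline = mean(history.get(source, [0.0])) or 1.0
        if rate >= max(min_rate, multiplier * baseline):
            flagged.append((source, rate, baseline))
    return flagged

history = {
    "198.51.100.7": [120, 130, 110, 125],
    "203.0.113.9": [80, 95, 90, 85],
}
current = {"198.51.100.7": 140, "203.0.113.9": 9500}

for source, rate, baseline in suspicious_sources(history, current):
    print(f"Possible attack from {source}: {rate}/s vs baseline ~{baseline:.0f}/s")
```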
Dr. Thomas King is Chief Technology Officer at DE-CIX, the world’s leading Internet Exchange operator. The Frankfurt-based company runs 18 IXs globally. DE-CIX Frankfurt is the world’s leading IX by peak traffic, with a data throughput of more than 6.8 Terabits per second.
Andrew Fray, Managing Director at Interxion UK
Think about some of the busiest cities in the world. What sort of picture springs to mind? Are you thinking about skyscrapers reaching into the atmosphere, competing for space and attention? Perhaps you’re thinking about bright lights, neon signs and the hustle and bustle of daily life. Chances are that whichever city you’re thinking about – New York, Tokyo, Singapore – is actually a smart city.
Smart cities are so-called based on their performance against certain criteria – human capital (developing, attracting and nurturing talent), social cohesion, economy, environment, urban planning and, very importantly, technology. Speaking specifically to this last point, real-life smart cities are less about flying cars and more about how sensors and real-time data can bring innovation to citizens.
And the key to being ‘smart’ is connectivity: ensuring that whatever citizens are trying to do – stream content from Netflix, drive an autonomous vehicle or save money through technology use in the home – they can do so with ease, speed and without disruption. Crucial to this, and the often-overlooked piece in the connectivity puzzle, is the urban data centre. Urban data centres are the beating heart of all modern-day smart cities, and we’ll explore why here.
Becoming smart
Just as Rome wasn’t built in a day, neither was a smart city. Time for some number crunching. According to the United Nations, there will be 9.8 billion of us on Earth by 2050. Now, consider the number of devices you use on a daily basis, or within your home – smartphones, laptops, wearables, smart TVs or even smart home devices. Now multiply that by the number of people there will be in 2050, and you get a staggering number of devices all competing for the digital economy’s most precious commodity – internet access. In fact, Gartner predicted that by 2020 there would be 20.4 billion connected devices in circulation. At a smart city level, this means being able to meet the escalating demand for the fastest connection speeds with unparalleled supply. Connectivity is king, and without it cities around the world will come screeching to a halt. To keep up with the pace of innovation, we need a connectivity hub that will keep our virtual wheels turning – the data centre.
Enter the urban data centre
In medieval times, cities protected their most prized assets, people and territories by building strongholds to bolster defenses and ward off enemies. These fortresses were interconnected by roads and paths that would enable the exchange of people and goods from neighbouring towns and cities. In today’s digital age, these strongholds are data centres; as we generate an eyewatering amount of data from our internet-enabled devices, data centres are crucial for holding businesses’ critical information, as well as enabling the flow of data and connectivity between like-minded organisations, devices, clouds and networks. As we build more applications for technology – such as those you might find in a typical smart city – this flow needs to be smoother and quicker than ever.
Consider this – according to a report by SmartCitiesWorld, cities consume 70% of the world’s energy, and by 2050 urban areas are set to be home to 6.5 billion people worldwide, 2.5 billion more than today. With this in mind, it’s important that we address areas such as technology, communications, data security and energy usage within our cities.
This is why urban data centres play a key role in the growth of smart cities. As organisations increasingly evolve towards and invest in digital business models, it becomes ever more vital that they house their data in a secure, high-performance environment. Many urban data centres today offer a diverse range of connectivity and cloud deployment services that enable smart cities to flourish. Carrier-neutral data centres even offer access to a community of fellow enterprises, carriers, content delivery networks, connectivity services and cloud gateways, helping businesses transform the quality of their services and extend their reach into new markets.
The ever-increasing need for speed and resilience is driving demand for data centres located in urban areas, so that there is no disruption or downtime to services. City-based data centres offer businesses close proximity to critical infrastructure, a richness of liquidity and round-the-clock maintenance and security. Taking London as an example, the city is home to almost one million businesses, including three-quarters of the Fortune 500 and one of the world’s largest clusters of technology start-ups. An urban data centre is the perfect solution for these competing businesses to access connectivity and share services, to the benefit of the city’s inhabitants and the wider economy.
The future’s smart
London mayor Sadiq Khan recently revealed his aspirations for London to become the world’s smartest city by 2020. While an ambitious goal, London’s infrastructure can more than keep pace. Urban data centres will play a significant role in helping the city to not only meet this challenge, but become a magnet for ‘smart tech’ businesses to position themselves at the heart of the action. The data centre is already playing a critical role – not just in London, but globally – in helping businesses to innovate and achieve growth. And as cities become more innovative with technological deployments, there’s no denying that smart cities and urban data centres are a digital marriage made in heaven.
Mat Jordan, Head of EMEA - Procurri
No matter the industry, tech capability or type of business your customers fit, there’s no doubt that people are asking about and looking to operate within the Cloud – public, private or otherwise. The common belief that a giant satellite hosts a myriad of secure internet services is, of course, false: we know that when it comes to IT, Clouds don’t float.
The ongoing appetite for Cloud computing is fuelling data centre installation and growth rates worldwide. The convenient on-demand nature of the Cloud – and the number of IT providers now excelling at Cloud service offerings – means that the requirement for robust data centres is at an all-time high. Where businesses already run their own data centres, this often means they need to expand existing facilities or even set up new ones to satisfy the demand for their services.
Here at Procurri, we see the increase in data centres as a good thing. Greater freedom of access to information, to connection and to new possibilities through online (and offline) services is exciting, and the further that can go, the better.
Where we’re able to be involved in data centre expansion, we can work with our customers to help extend the lifespan of the IT equipment within them, as well as meeting growth requirements quickly, even where the equipment needed is hard to find or discontinued. The growth of physical IT capability need not mean an increase in environmental impact or unnecessary waste. There’s lots that can be done to increase physical capacity in a way that’s cost-effective, quick, and maintains a seamless service to the end user throughout changes.
There’s no debate to be had: data centres contain bulky and unwieldy equipment, and any work done on that equipment (for maintenance purposes, to increase or decrease capacity, or otherwise) has historically been difficult to complete without resulting in service disruption. This is where Procurri can offer its expertise and help avoid interruption, allowing business-as-usual to continue whilst servicing takes place. The world has moved on from the necessity of having to close or switch data centres for weeks as engineers work around the clock to wire up new servers and program them all in – work can be done quickly without interruption to end users.
Procurri’s maintenance support services extend to end-of-life and out-of-warranty equipment, and our around-the-world presence allows us to tap into the knowledge of IT specialists globally who have expert knowledge in a wide variety of hardware and software – so no matter how niche or rare a data centre setup, Procurri can offer practical support both on the phone and in person. This is an unrivalled service only accessible through a large global network of knowhow and superiorly trained staff.
What’s more, should rapid expansion of existing data centre facilities be required, Procurri’s global distribution network is able to offer a range of flexible options for the purchase or consignment of hardware. Acting in a completely vendor-agnostic way, Procurri can source a range of enterprise servers, storage and networking products from the world’s leading IT brands, as well as using a robust inventory management system to deploy the equipment hassle-free, wherever the customer is in the world. Procurri’s international locations allow it to stock an extensive inventory in three key regions, with fast deployment and delivery available. However fast a centre needs to expand, Procurri works with the client to make it happen – ensuring end users remain delighted at the efficiency and efficacy of the services they receive and, best of all, don’t even notice any changes. Such services aren’t commonplace in the IT industry, so this is a disruption of sorts – to the market itself! It is clear to see, however, that in this case it is a positive one.
This allows businesses not just to recycle and reuse equipment where they can, but also to gain new equipment for expansion even where it’s not readily available. Procurri isn’t tied into vendor contracts or affiliations, which gives us the freedom to work with any brand of equipment without any financial commitment to recommend one item over another. Clients are guaranteed complete transparency and genuinely unbiased advice.
When the time does come for data centre equipment to be disposed of, upgraded or replaced, Procurri can manage that too. A comprehensive suite of IT asset disposition (ITAD) services takes a holistic approach, managing the process end-to-end, from assessment and verification right through to disposal or reconfiguration for resale. This allows customers to work to the highest corporate social responsibility standards and to reap the benefits of ethical business practice.
With sustainability rightly becoming more of a focus than ever, we at Procurri believe that businesses should strive to act in the most eco-friendly and ethical way possible. Corporate social responsibility is no longer a ‘nice to have’ policy tucked away at the back of an HR guide, but a must-do, whatever the size or shape of your company. Whilst this has historically meant that more responsible recycling, reuse and disposal services cost a premium compared with standard offerings, Procurri has instead embedded ethical and environmentally friendly best practice in everything we do: these standards are integrated throughout our service offerings rather than reserved for a superior product package – all of our services are ‘premium’ in this respect.
There is no doubt that demand for data centres will continue to grow worldwide as Cloud computing increases, but when managed properly, expansion need not be the ‘big job’ we so often consider it to be. There are, for the first time, manageable options for physical data centre expansion and change – and even if the computing isn’t quite walking on air, the happy customers will be!
Matthew Edgley, Commercial Director, TeleData
We live in exciting times. The Fourth Industrial Revolution is bringing innovation at a rate of knots, with emerging technologies impacting everything from business and communication to industry and education.
As the backbone of the internet, data centres collect, store, process and distribute data around the clock, and across the globe. This of course means that as our use of the internet and internet-enabled devices grows, so does the amount of data we’re creating – and data centres are having to adapt.
In June this year, there were over 4.4 billion internet users. This is an 83% increase over the past five years [1]. Every time we run a search in Google, speak to Alexa, fire up our smart TVs, post on social media, or host a Skype call, we are contributing to the 2.5 quintillion bytes of data created daily [2].
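To put those figures into perspective, a quick back-of-the-envelope calculation (treating the cited numbers purely as order-of-magnitude estimates, not precise measurements) suggests over half a gigabyte of new data per internet user, every day:

    # Illustrative only: a rough scale check using the figures cited above.
    daily_bytes = 2.5e18        # ~2.5 quintillion bytes of data created each day
    internet_users = 4.4e9      # ~4.4 billion internet users (June 2019)

    bytes_per_user_per_day = daily_bytes / internet_users
    print(f"~{bytes_per_user_per_day / 1e6:.0f} MB of new data per user per day")
    # Output: ~568 MB of new data per user per day

However rough, that figure helps explain why storage and processing capacity has become such a pressing concern for the industry.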
The Internet of Things (IoT) is responsible for a huge amount of this data, and Gartner estimates that the IoT will include 25 billion connected units by 2021, with global spending touching $1.7 trillion. This giant global network of web-enabled devices – including anything and everything from washing machines and fitness trackers to sensors on oil rigs and astronauts orbiting Earth – generates an enormous and growing amount of data, which needs to be stored and processed in data centres where it can be analysed and used by the organisations collecting it.
Edge computing brings another element of change to the industry. It enables data to be analysed in real time, as it’s collected, at the edge of the network – creating the need for new, innovative ways to store and manage data.
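As a simple illustration of that pattern (a minimal sketch with entirely hypothetical names and thresholds, not any particular vendor’s API), an edge node might analyse readings locally the moment they arrive and forward only a compact summary to a central data centre:

    # Minimal sketch of edge-then-central processing; names and values are hypothetical.
    from statistics import mean

    def process_at_edge(readings, alert_threshold=90.0):
        """Analyse raw sensor readings in real time, at the edge of the network."""
        alerts = [r for r in readings if r > alert_threshold]  # act on these locally, with low latency
        summary = {                                            # forward only this compact payload upstream
            "count": len(readings),
            "mean": round(mean(readings), 1),
            "max": max(readings),
            "alerts": len(alerts),
        }
        return alerts, summary

    alerts, summary = process_at_edge([71.2, 88.5, 93.1, 76.4, 95.0])
    print(alerts)   # [93.1, 95.0] handled at the edge immediately
    print(summary)  # sent back to a central data centre for storage and further analysis

The raw stream stays close to where it is generated, while the central facility still receives what it needs for longer-term storage and further analysis.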
These types of technology are not only changing the way we live, work and communicate, but they're also changing the way the modern data centre approaches architecture, storage, resilience and security.
Evolving technology trends also throw new security challenges into the mix. More internet-enabled devices means more networks, which means more vulnerabilities to cyber attacks. Data centres need to adapt their security landscapes to mitigate risk, designing facilities to combat the increasing range of threats and ensuring the highest levels of compliance and data privacy, whilst maintaining guarantees of uptime, speed and resilience.
Compliance can be very industry-specific, and as industries evolve through the advent of new technologies, compliance can become increasingly complex. This means that data centres are having to move away from the traditional ‘power and pipe’ approach and take a more consultative approach to business, building a deeper understanding of customers’ needs and challenges. Alternatively, an industry-specific approach could be taken, leading data centres into vertical markets; however, this is not generally a realistic design principle for a multi-tenant DC operator.
As technologies such as IoT and edge computing continue to advance, they will drive an increase in smaller data centres and edge-driven systems that work alongside existing data centres. However, the traditional data centre model will continue to thrive, maintaining its position as the backbone of the internet for essential data processing and storage requirements. Deployments that lean heavily on edge computing will likely also stream data back to a central data centre for storage and further analysis, meaning a heavier workload for larger, centralised data centres.
New and evolving technologies will also spur businesses to speed up their transition to the cloud as they innovate with advanced emerging technologies to stay ahead of the curve. These companies will need to rely on the speed, agility and expertise of data centres and managed hosting providers to develop their offerings and handle the increasing amounts of data. So the modern data centre also needs to think about the speeds at which these growing volumes of data can be processed.
This means that the global network of data centres could start to become more distributed, with regional micro data centres appearing in smaller cities and towns, supported by bigger, modern data centres that together serve as the network connecting the billions of devices that make up the Internet of Things. We’re likely to see an increase in the development and deployment of solutions such as Azure Stack as businesses take the cloud to the edge, using ruggedised servers to leverage modern technologies in disconnected environments – mobile data centres, if you like – which will be supported by larger, more traditional data centres for longer-term data storage and analysis.
Taking all of this into account, we see that the modern data centre plays a key role in innovation and emerging technologies, providing core infrastructure for everything from IoT and edge computing to AI, blockchain and quantum computing. The businesses running on, and building upon, these platforms depend completely on the data centre facilities that house their data, making data centres the key facilitator of Industry 4.0.
Technology is an enabler, and data centres enable the technology. They are a crucial part of an ever-evolving, fast-changing industry that is shaping the world as we know it. We have never demanded more of our IT environments than we do today, and the modern data centre needs to constantly and consistently adapt to provide the agility, scalability, speed and efficiency required to keep up with the increasing demands of an ever-changing industry.
Within a very physical industry, where infrastructure is often sized, designed and installed to last for 10–15 years or longer, this poses a constant challenge to data centre operators as they attempt to keep up with the fast-paced world that relies on them.
[1] https://blog.microfocus.com/how-much-data-is-created-on-the-internet-each-day/#
[2] https://blog.microfocus.com/how-much-data-is-created-on-the-internet-each-day/#