Fairly sure I’ve already written about the second coming that is 5G. But various recent incidents have caused me to think about this technology again and, especially, the claims being made for it. A break in Corfu was a timely reminder that there are many places in the world where technology has yet to establish a firm foothold. The friends who were with us were horrified at the absence of Uber (!) and it was clear from the local shopping experience that Amazon and its cohorts have much to do before they establish world domination.
A visit to a London sports stadium, where, presumably, there wasn’t enough 4G bandwidth available to the 30,000+ crowd, meant that checking websites and various other applications before the game, at half-time and afterwards was a frustrating experience.
And, of course, the return to the west country, where 4G gives way to 3G and, not infrequently, GPRS, is a reminder that the technology split can be added to the many have/have-not divides which threaten to cause long and lasting damage to the UK. Live in a large town or city and chances are connectivity and a digital lifestyle are very much achievable (desirable – who knows?!); live in a rural area and we are apparently expected to exist not quite on candlelight and parchment, but a long way from digital transformation.
In such a world, are autonomous vehicles a serious proposition? Are zillions and zillions to be spent on ensuring that every road up and down the land is autonomous-vehicle ready, in the belief that accidents and deaths will be eliminated? Or will there be the eventual realisation that, after a detailed cost/risk analysis, spending these zillions will not be worthwhile, as there will still be the odd technology glitch which will cause accidents and deaths – so better still to spend the money elsewhere where it will do more for human safety/happiness, and allow us to keep driving, albeit with seriously smart cars that do improve road safety dramatically?
Similar decisions will be made across every aspect of the worlds of work and leisure. What is possible, and what is sensible, when it comes to technology deployment, are often two quite separate things.
For example, a recent video all about smart cities started out with someone waking up and being advised of their best route to work, taking into account the traffic, transport reliability and cost, and being asked if they wanted their regular coffee order to be ready for them at the usual café, just around the corner from work. This set me thinking.
Allowing for the possibility that even the recommended journey to work might encounter unforeseen delays, would the coffee shop receive a constant update as to the arrival of this individual, and hence make the coffee just in time, or would they make the coffee for the estimated time of arrival – risking it being ready too early or, god forbid, too late?
Additionally, did this worker really value his/her time so much that they could not afford to wait for a few minutes to go into the coffee shop, order their drink, chat to the staff and, maybe even other waiting customers, and maybe even be tempted by a pastry or two?
So, the simple purchase of a cup of coffee becomes an incredibly (unnecessarily?) complicated process. On the one hand, the customer is always right; on the other, the customer risks becoming anti-social and unable to indulge in an impulse purchase. What’s best for the customer may well not be the best outcome for the coffee shop and, on a rather metaphysical level, has implications for the society in which we wish (or not) to live.
Cynics will say that whatever suits big business will end up happening. Optimists will say that, with or even because of technology, individual businesses will still be able to make a real difference when it comes to customer service.
Morality and philosophy are not frequent topics within the pages of Digitalisation World, but consumers, corporations and governments all need to have a long, hard think about the world in which they, and their descendants, wish to live, before they wake up one day and discover that Orwell’s 1984 is no longer a work of fiction.
Findings show global organisations are making significant progress with digital transformation projects: 39 percent of global respondents say projects are completed or close to completion, and satisfaction levels are at over 90 percent.
New Relic has published findings from a global survey evaluating the success of digital transformation projects. According to the survey, progress has been made despite obstacles; however, technology leaders are finding that running their digitally transformed organisations is challenging, and they are under increased pressure to prove business value.
Key findings from the survey include:
● 1 in 2 tech leaders are challenged in managing and monitoring their digitally transformed organisations
● Almost 50 percent of respondents admit that their customers are more likely to uncover problems before them
● 89 percent of the survey respondents believe AI and ML will become important for how organisations run their digital systems
The study, commissioned by New Relic in partnership with Vanson Bourne, surveyed 750 global senior IT decision makers at enterprises with 500 to 5,000-plus employees in the U.K., Australia, France, Germany, and the U.S.
“The next phase of digital transformation will focus on making sense of all the data so that organisations can move faster, make better decisions, and create best-in-class digital experiences,” said Buddy Brewer, GVP and GM Client-Side Monitoring, New Relic. “As indicated in our research, observing and acting on insights from data collected will play a critical role in helping digitally transformed organisations truly scale and realise the benefits of modern technological advances.”
The Challenges of Digital Transformation
Global organisations claim to be significantly progressing their digital transformation projects, with 39 percent of global respondents saying these are completed or close to completion. Satisfaction levels also seem to be high, with 91 percent of respondents saying results met or exceeded expectations. However, respondents shared that the top five challenges to successfully sustaining digital transformation are:
1. Separate parts of the organisation are moving at different speeds to embrace digital transformation, which holds back collective progress;
2. A shortage of skilled employees;
3. Restricted budgets;
4. Understanding and measuring business benefits;
5. Continued resistance to shutting down legacy systems.
Factors Contributing to These Challenges
● Increased complexity: More than 50 percent of respondents say they find their complex new software and infrastructure hard to manage and monitor for performance issues. Most (63 percent) say that the pressure to respond to business needs means they are having to work longer hours to observe and manage software performance correctly.
● Higher expectations: Most respondents (79 percent) agree that the rest of the business has higher expectations of how digital systems perform, and expects the technology team to deliver more and more innovations and updates (72 percent).
● Lack of visibility: 48 percent of respondents admitted that their end users or customers tell them about a problem with digital apps before they know about it; and a further 46 percent say they are told about these issues before they know how to fix them.
● Accountability: 46 percent of respondents say their C-suite executives want daily updates about how software systems are performing for staff and customers (54 percent of US respondents reported this trend). A further 40 percent say their CEOs want more answers when outages or performance problems happen.
● Challenges analysing data: A root cause of teams’ struggles to manage modern software may be the rapidly rising volume of machine-generated data. More than half (56 percent) of all respondents acknowledged that it is humanly impossible to properly assess this data. Notably, larger organisations agree more strongly that this is a problem: 58 percent of respondents from businesses of 3,000 to 4,999 employees and 55 percent of those with more than 5,000 staff.
● Determining business metrics: 1 in 3 respondents report that they are challenged on business benefit metrics for their digital transformation projects.
Looking Ahead: Harnessing the power of cloud and AI to fuel digital transformations
● Moving to public cloud: The majority of respondents agreed that migration to public cloud (e.g., Amazon Web Services, Azure, Google) is at the core of their digital transformation journey – 82 percent in the U.S., 75 percent in the U.K., 75 percent in Australia, 66 percent in France, and 63 percent in Germany.
● Efficient way of using resources: Many (46 percent) agree that while migration to cloud is great, they don’t have a clear way of knowing what their cloud bill is going to be every month. More than half (54 percent) also say that while cloud computing promises more efficient usage of resources, the promise of greater control is not a given.
● Expectations around AI and ML replacing jobs: Overall, 37 percent of the global respondents agree that AI and ML will replace their job in a decade, while 41 percent disagreed. These numbers were highest in France, with 55 percent of respondents confirming that they expect their current jobs to be replaced by these advanced technologies. These numbers were lowest in the US (32 percent) and U.K. (23 percent).
● Promise of AI and ML: Interestingly, more than 92 percent of U.S. respondents agree that artificial intelligence (AI) and machine learning (ML) will become important for how they run their digital systems. Globally, almost 84 percent of the respondents believe that AI and ML will make their role easier.
Flexera has published the findings of its first annual 2020 State of Tech Spend Report, which provides insight into current and future technology spend from the perspective of enterprise CIOs and IT executives. The report highlights how companies are shifting spending to support their critical IT initiatives, how they’re tracking and managing IT spend, and the challenges they face in optimising spend.
Survey respondents are IT executives working in large enterprises with 2,000 or more employees, headquartered in North America and Europe, encompassing industries such as financial services, retail, e-commerce and industrial products. More than half are C-level executives.
“With this survey, we wanted to gain more insight into how enterprise organisations are embracing digital transformation, cybersecurity, cloud computing and other initiatives,” said Jim Ryan, President and CEO of Flexera. “By doing so, we can then see if/how these initiatives are providing a competitive advantage for their particular industries, as they require sizeable investments in technology.”
“The survey found that 8.2 percent of respondents’ revenue is being spent on IT,” Ryan continued, “but the return is dubious at best. It’s likely that many of these investments aren’t attaining maximum ROI, and we identified that 30 percent or more of technology spend is actually being wasted. With the increasing uncertainty of U.S. and global economies, enterprises need to be prepared to manage their operations and finances when—not if—the next downturn kicks in. Proactively being able to dial IT spend up or down is a tremendous opportunity, and there is never a better time to address a potential problem than right now.”
According to survey respondents, increasing spend efficiency and cutting waste are challenging because of the difficulty of gaining visibility into costs and managing IT spend effectively. The biggest obstacle to visibility, cited by 61 percent of respondents, is reporting on IT spend by business service. The top challenge to managing spend effectively, cited by 86 percent of respondents, is the large number of manual processes. Considering the magnitude of potential savings, tackling these challenges can have a major impact on the bottom line.
A few key highlights from the Flexera 2020 State of Tech Spend Report:
AppDynamics has released the latest report in its App Attention Index research series, revealing the emergence of ‘The Era of The Digital Reflex’ - a seismic shift in the way consumers interact and engage with digital services and applications. The global study, which examines consumers’ reliance on applications and digital services, also identified how these digital dependencies impact consumers’ expectations of the businesses and brands they engage with, their increasing intolerance of performance problems and the urgency with which brands must take action in order to remain relevant and competitive in a world where application loyalty is the new brand loyalty.
Growing Reliance on Applications and Digital Services
Modern technology has transformed the way we live, work and play, making digital experiences a fundamental part of everyday life. However, many consumers are unaware of how much their use of digital services has evolved. While the average person estimates that they use seven digital services each day, in fact they are using more than 30 digital services on a daily basis. While 68 percent recognize they use many more digital services than they are consciously aware of, they also acknowledge the positive impact digital services have on many aspects of their daily lives.
●70% say digital services have helped reduce stress.
●68% claim digital services have improved their productivity at home and work, an increase from 43% in 2017.
The Era of The Digital Reflex
The use of digital services has evolved to become an unconscious extension of human behavior - a ‘Digital Reflex.’ While consumers used to make a conscious and deliberate decision to use a digital service to carry out a task or activity, these interactions now happen spontaneously, with the majority (71%) of consumers admitting that digital services are so intrinsic to their daily lives that they don’t realize how much they now rely on them. As these digital reflexes become habitual, consumers are becoming increasingly dependent on devices and digital services, relying on them to complete many of their daily tasks.
●55% can only go without a mobile device for up to 4 hours before they find it difficult to manage tasks in their everyday life.
●61% admit they reach for their mobile phone before talking to anyone else when they wake up.
Poor Digital Performance Impacts Daily Life and Buying Decisions
The Era of the Digital Reflex sees more than three quarters of consumers (76%) reporting that their expectations of how well digital services should perform are increasing, compared to only 62 percent in 2017. This marked increase in consumer expectations is further evidenced with the majority (70%) of respondents claiming to be less tolerant of problems with digital services than they were two years ago. This increasing intolerance for problems is prompting consumers to demand a better and higher performing digital customer experience from the brands they engage with:
●50% would be willing to pay more for an organization’s product or service if its digital services were better than a competitor’s.
●Over the next three years, 85% of consumers expect to select brands on the variety of digital services (web, mobile, connected device, etc.) they provide.
●More than half of consumers (54%) now place a higher value on their digital interactions with brands over the physical ones.
Application Loyalty is the New Brand Loyalty
Businesses need to pay attention because consumers now have a zero-tolerance policy for anything other than an easy, fast and exceptional digital experience. The research shows that in the event of performance issues, consumers will take decisive action such as turning to the competition (49%) and actively discouraging others from using a service or brand (63%) without notifying the brand and giving them a chance to make improvements.
“In The Era of the Digital Reflex, consumers will no longer forgive or forget poor experiences. A great digital performance is now the baseline for any business, but the real winners will be those that consistently exceed customer expectations by delivering a flawless experience,” said Danny Winokur, General Manager, AppDynamics. “Cisco and AppDynamics help the world’s best companies achieve greatness with their applications by providing critical real-time data on application and business performance to pinpoint bottlenecks and enable immediate action."
How Brands Can Survive in the Era of The Digital Reflex
Many businesses are already investing heavily in digital innovation to drive customer loyalty and revenue, but failure to monitor the performance of those applications and digital services puts brands at significant risk of unhappy customers, or even losing those customers to another brand. However, there are simple steps that brands can take to meet these challenges and in turn, exceed increasing digital customer experience expectations:
●Focus on application performance - implementing a robust application performance management solution enables organizations to safeguard the performance of mission critical applications and user experience in production.
●Align performance to business outcomes - measuring and analyzing the performance of applications and correlating this to business performance ensures that digital services are always aligned to business objectives, such as customer experience and revenue.
●Make decisions and take action based on factual insight - delivering exemplary digital experiences requires real-time monitoring of the full technology stack, from the customer’s device to the back-end application to the underlying network. However, it's critical that enterprises choose solutions that take an AIOps approach, turning monitoring data into meaningful insights quickly or automatically using machine learning and AI (a minimal sketch of this idea follows below).
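By way of illustration only, and not as a description of AppDynamics’ or Cisco’s products: the sketch below shows the kind of automated check an AIOps approach implies, flagging response-time anomalies in monitoring data against a rolling baseline. The endpoint name, sample values and thresholds are hypothetical.

```python
from statistics import mean, stdev

def flag_latency_anomalies(samples_ms, window=30, sigma=3.0):
    """Flag response-time samples that sit far outside a rolling baseline.

    samples_ms: list of response times in milliseconds, oldest first.
    window:     number of preceding samples used as the baseline.
    sigma:      how many standard deviations above the mean counts as anomalous.
    """
    anomalies = []
    for i in range(window, len(samples_ms)):
        baseline = samples_ms[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and samples_ms[i] > mu + sigma * sd:
            anomalies.append((i, samples_ms[i]))
    return anomalies

# Hypothetical monitoring data for a checkout endpoint: steady ~120 ms, then a spike.
checkout_latency = [120 + (i % 7) for i in range(60)] + [480, 510, 495]
print(flag_latency_anomalies(checkout_latency))
```

A real platform applies this kind of baselining across the whole stack and correlates it with business metrics, but the principle of baseline, detect, act is the same.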
Brands must integrate disparate applications, data sources and devices to deliver connected experiences and earn loyalty.
MuleSoft has released a new global study that reveals four out of five consumers continue to receive disconnected experiences from organisations. As a result of this continued frustration, consumers are more willing than ever to seek out new service providers that can deliver connected, personalised experiences. Based on the findings from the Customer Experience and the Connectivity Chasm report, it is clear that organisations must deliver the connected experiences consumers expect or risk losing their loyalty and business.
“Globally, consumers are feeling the effects of data silos that create disconnected experiences,” said Simon Parmett, CEO, MuleSoft. “To meet consumer expectations, organisations must integrate disparate data sources to better understand their customers and make every touchpoint an opportunity to earn loyalty and add value. With the help of APIs and API-led integration, brands can position for future innovation, create more meaningful relationships and earn customer trust.”
Frustration with disconnected experiences continues
Globally 82% of consumers believe organisations in at least one of the five sectors surveyed – banking, insurance, retail, healthcare and public sector – provide a disconnected experience, failing to recognise preferences across touchpoints and provide relevant information in a timely manner. This figure indicates a lack of improvement in customer experience (81% in 2018) and is pushing consumers to consider new service providers.
Consumers are conflicted with sharing data to fuel connected experiences
To receive a more personalised experience, 61% of global consumers would be willing for service providers in at least one of the sectors surveyed – banking, insurance, retail, healthcare and public sector – to share relevant personal information with partners and trusted third parties.
Across industries, consumers want organizations to nail the basics
The report shows that across industries, consumers’ expectations continue to evolve, but getting the basics right is vital to maintain customer satisfaction and loyalty.
The Coherence Economy takes off among millennials
Consumers are starting to become more familiar with engaging multiple service providers through one application or experience. Common experiences like using a music streaming service via a ride hailing app and integrating multiple financial accounts into a planning app are part of the broader Coherence Economy – a new approach to customer engagement where multiple brands partner to add value through an ecosystem approach.
“Organisations must cultivate partnerships to surprise and delight consumers. In the Coherence Economy, organisations need to develop strategies to collaborate with partners in a digital ecosystem and orchestrate personalised experiences for consumers,” said Uri Sarid, CTO, MuleSoft. “In order to innovate at scale and accelerate the delivery of products and experiences to customers, organisations will likely need to leverage a majority of third party services. By leveraging an API-led approach to integration, brands across all industries can easily connect their applications, data and devices to provide a holistic view of the consumer and easily empower new, connected experiences.”
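As a rough illustration of what an API-led approach can look like in code, here is a minimal sketch, not a MuleSoft API or product example: an “experience” service aggregates two hypothetical system APIs into a single customer view. The URLs and field names are invented for illustration.

```python
import requests

# Hypothetical internal system APIs exposing customer data held in separate silos.
CRM_API = "https://crm.example.internal/api/customers/{id}"
ORDERS_API = "https://orders.example.internal/api/customers/{id}/orders"

def customer_360(customer_id: str) -> dict:
    """Aggregate profile and order history into one connected view.

    In an API-led design this 'experience API' calls reusable system APIs
    rather than reaching into each application's database directly.
    """
    profile = requests.get(CRM_API.format(id=customer_id), timeout=5).json()
    orders = requests.get(ORDERS_API.format(id=customer_id), timeout=5).json()
    return {
        "id": customer_id,
        "name": profile.get("name"),
        "preferences": profile.get("preferences", {}),
        "recent_orders": orders[:5],
    }
```

The point of the layering is reuse: the same system APIs can back a web app, a mobile app and a partner integration without each of them touching the underlying silos directly.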
Global research highlights how AI is changing the relationship between people and technology at work.
People have more trust in robots than their managers, according to the second annual AI at Work study conducted by Oracle and Future Workplace, a research firm preparing leaders for disruptions in recruiting, development and employee engagement. The study of 8,370 employees, managers and HR leaders across 10 countries found that AI has changed the relationship between people and technology at work and is reshaping the role HR teams and managers need to play in attracting, retaining and developing talent.
AI is Changing the Relationship Between People and Technology at Work
Contrary to common fears around how AI will impact jobs, employees, managers and HR leaders across the globe are reporting increased adoption of AI at work and many are welcoming AI with love and optimism.
Workers Trust Robots More Than Their Managers
The increasing adoption of AI at work is having a significant impact on the way employees interact with their managers. As a result, the traditional role of HR teams and the manager is shifting.
AI is Here to Stay: Organizations Need to Simplify and Secure AI to Stay Competitive
The impact of AI at work is only just beginning and in order to take advantage of the latest advancements in AI, organizations need to focus on simplifying and securing AI at work or risk being left behind.
Majority of organisations neglect due diligence during the artificial intelligence development phase as they struggle with data issues, skill shortages and cultural resistance.
O’Reilly, the premier source for insight-driven learning on technology and business, has revealed the results of its 2019 ‘AI Adoption in the Enterprise’ survey. The report shows that security, privacy and ethics are low-priority issues for developers when modelling their machine learning (ML) solutions.
Security is the most serious blind spot. Nearly three-quarters (73 per cent) of respondents indicated they don’t check for security vulnerabilities during model building. More than half (59 per cent) of organisations also don’t consider fairness, bias or ethical issues during ML development. Privacy is similarly neglected, with only 35 per cent checking for issues during model building and deployment.
Instead, the majority of developmental resources are focused on ensuring artificial intelligence (AI) projects are accurate and successful. The majority (55 per cent) of developers mitigate against unexpected outcomes or predictions, but this still leaves a large number who don’t. Furthermore, 16 per cent of respondents don’t check for any risks at all during development.
This lack of due diligence is likely due to numerous internal challenges and factors, but the greatest roadblock hindering progress is cultural resistance, as indicated by 23 per cent of respondents.
The research also shows 19 per cent of organisations struggle to adopt AI due to a lack of data and data quality issues, as well as the absence of necessary skills for development. The most chronic skills shortages by far were centred around ML modelling and data science (57 per cent). To make progress in the areas of security, privacy and ethics, organisations urgently need to address these talent shortages.
“AI maturity and usage has grown exponentially in the last year. However, considerable hurdles remain that keep it from reaching critical mass,” said Ben Lorica, chief data scientist, O’Reilly.
“As AI and ML become increasingly automated, it’s paramount organisations invest the necessary time and resources to get security and ethics right. To do this, enterprises need the right talent and the best data. Closing the skills gap and taking another look at data quality should be their top priorities in the coming year.”
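For teams wondering where to start, a basic fairness check during model building does not need to be elaborate. The sketch below computes a disparate-impact ratio, the positive-outcome rate for one group divided by that of a reference group, over a set of model predictions. The data and the 0.8 rule-of-thumb threshold are illustrative only and are not drawn from the O’Reilly report.

```python
def disparate_impact(predictions, groups, protected, reference):
    """Ratio of positive-outcome rates between a protected group and a reference group.

    A common rule of thumb treats a ratio below 0.8 as a warning sign that the
    model may be treating the two groups very differently.
    """
    def positive_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return positive_rate(protected) / positive_rate(reference)

# Invented example: 1 = approved, 0 = declined.
preds  = [1, 1, 1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact(preds, groups, protected="b", reference="a")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 here, below the 0.8 threshold
```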
Other key findings include:
Global market study and independent survey show companies are leveraging interconnection to compete in the digital economy as the data explosion becomes further distributed at the edge.
The latest Global Interconnection Index (GXI), an annual market study published by Equinix, predicts private connectivity at the edge will grow by 51% compound annual growth rate (CAGR), and exceed a total bandwidth capacity of more than 13,300 Tbps, equivalent to 53 zettabytes of data exchanged annually. This is enough to support every person on earth simultaneously downloading a complete season of Game of Thrones in ultra-high definition resolution in less than a single day.
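As a back-of-the-envelope check on those headline figures (an illustration only, not part of the GXI methodology), sustained bandwidth multiplied by the seconds in a year gives the annual volume, and the 51% figure compounds in the usual CAGR fashion:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600           # ~31.5 million seconds

# 13,300 Tbps of provisioned interconnection bandwidth, assumed fully utilised for a year.
bits_per_second = 13_300e12
zettabytes = bits_per_second * SECONDS_PER_YEAR / 8 / 1e21
print(f"{zettabytes:.0f} ZB per year")       # ~52 ZB, in line with the quoted 53 ZB

# Compound annual growth: capacity multiple after n years at 51% CAGR.
def compound(base, cagr, years):
    return base * (1 + cagr) ** years

print(f"{compound(1.0, 0.51, 4):.1f}x in four years at 51% CAGR")  # ~5.2x
```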
The GXI market study finds interconnection bandwidth – the capacity for direct and private traffic exchange between key business partners – is an essential component to digital business and validates that to compete in the digital economy, companies must address growing data volumes and increasing data exchange velocity across a rising number of clouds and business ecosystems. In fact, according to a separate independent survey commissioned by Equinix of more than 2,450 global senior IT professionals, almost half (48%) of global IT decision-makers believe interconnection is a key facilitator of digital transformation. 4 in 10 IT decision-makers in EMEA feel the same and, in the UK, over a third (33%) of IT decision-makers cite interconnection as being key to the survival of their business. And companies are continuing to invest in this critical infrastructure, with almost half (49%) of IT decision-makers in the UK stating that any uncertainty around the final Brexit deal has not impacted their company’s decision to invest in IT infrastructure.
“People, software and machines are creating and consuming data faster and in all the places where we work, play, and live,” said Rick Villars, Research Vice President, Datacenter & Cloud, IDC. “The significant increase in data created, aggregated and analysed in these new locations is contributing to a major shift away from deploying IT in traditional corporate data centres. Enterprises need access to robust, modern data centre facilities near the edge locations where businesses want to deploy dedicated infrastructure and interconnect to the increasing number of clouds, customers and partners that are at the core of digital transformation efforts.”
Strong data compliance regulations across Europe are unlocking data exchange and growth of interconnection bandwidth in Healthcare & Life Sciences, Government & Education, and Business & Professional Services. This is leading Europe (51% CAGR) to overtake North America (46% CAGR) in the race to digital growth. Latin America is leading the charge with a 63% CAGR, with Asia-Pacific not far behind (56% CAGR). Expansion plans across the world, according to the survey, tell a slightly different story, with 55% of EMEA businesses – and 42% of UK businesses, specifically – planning to expand into new metros, versus more aggressive expansion plans in other regions (Americas 69%, Asia-Pacific 65%). 6 out of 10 (62%) IT decision-makers globally, and over half (53%) in the UK, are utilising virtual connections to support these growth plans.
Key Findings:
The GXI Vol. 3 delivers insights by tracking, measuring and forecasting growth in interconnection bandwidth—the total capacity provisioned to privately and directly exchange traffic, with a diverse set of partners and providers, at distributed IT exchange points inside carrier-neutral colocation data centers. The GXI finds:
The ability to exchange large volumes of data through interconnection is essential to compete in the digital economy
·In response to rapidly growing volumes of data, enterprise consumption of interconnection bandwidth will grow at a 64% CAGR globally, outpacing other forms of business data exchange. This is due to be even higher for EMEA, with consumption growing at a 67% CAGR, leading enterprises to account for 60% of total interconnection bandwidth in 2022. And, by 2022, London alone will account for over a third (34%) of European traffic, with leading European cities – Frankfurt, London, Amsterdam and Paris – together accounting for almost 78% of European traffic.
·To manage increasing volumes of data, enterprises are on average deploying in nine locations, with a total of 340 interconnections to networks, clouds and business partners. The survey shines more light on this – IT decision-makers in the UK are utilising interconnection to connect to other enterprises (20%), network service providers (23%) and cloud service providers (39%).
·The independent survey found that over a third (34%) of IT decision-makers in the UK, believe interconnection can help their business to gain competitive advantage within the marketplace. This assertion was true for almost half (46%) of global IT decision-makers and for 4 out of 10 (39%) IT decision-makers in EMEA.
Distance is the biggest performance killer for digital business
·Deploying direct, private connections at the edge propels both application performance and user experience.
·Today’s latency-sensitive workloads require response times ranging from <60 to <20 milliseconds, forcing IT infrastructure closer to the points of consumption (the edge).
·According to the survey, a quarter (25%) of IT decision-makers in the UK are using interconnection to increase speed of connectivity. This compares to a finding of almost a third (31%) in EMEA and over a third (34%) globally. To add to this, 6 out of 10 (60%) IT decision-makers in the UK are using interconnection to improve security and half (50%) are using it to reduce the cost of connectivity.
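To see why distance dominates, consider propagation delay alone. The sketch below uses the approximate speed of light in optical fibre (roughly two-thirds of c, an assumption rather than a GXI figure): every 100 km between user and workload adds about a millisecond of round-trip time before any processing happens, so a sub-20-millisecond budget evaporates quickly when infrastructure sits far from the edge.

```python
# Light travels through optical fibre at roughly two-thirds of c.
FIBRE_KM_PER_MS = 200_000 / 1000   # ~200 km of fibre per millisecond, one way

def round_trip_ms(distance_km: float) -> float:
    """Idealised round-trip propagation delay over fibre, ignoring routing and processing."""
    return 2 * distance_km / FIBRE_KM_PER_MS

for km in (50, 500, 2000, 5000):
    print(f"{km:>5} km -> {round_trip_ms(km):5.1f} ms round trip")
# 5,000 km already costs ~50 ms of a <60 ms budget on propagation alone.
```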
Leading businesses are gaining competitive advantage using a combination of key interconnection deployment models
·Interconnecting to multiple network providers across multiple edge locations is the most prominent use case for interconnection bandwidth and is expected to grow 4x by 2022. According to the survey, optimising the performance of networks is a key priority for almost half (46%) of IT decision-makers in the UK.
·Interconnecting to multiple clouds and IT services across multiple edge locations and cloud regions represents the next largest and fastest use of interconnection bandwidth and is predicted to grow 13x by 2022. The move to multi-cloud strategies is cited by respondents to the survey as a priority for over a third (36%) of IT decision-makers in the UK.
·Interconnecting to digital business partners for financial services, content and digital media and supply chain integration makes up the remainder of interconnection bandwidth use cases and is forecasted to grow 5x by 2022.
Ping Identity has released results from its 2019 Consumer Survey: Trust and Accountability in the Era of Breaches and Data Misuse. The results expose how today’s environment—ripe with data misuse and large-scale security breaches—is impacting consumer behavior and relationships with service providers around the world.
Data security is a legitimate worry for today’s consumers around the world. Approximately half (49%) of respondents report that they are more concerned about protecting their personal information than they were one year ago. This is evident in the lack of confidence consumers around the world have in brands’ ability to safeguard personal information.
● A data breach could be game over for a brand. A significant number of respondents (81%) would stop engaging with a brand online following a data breach.
● Consumers expect companies to protect them. The expectation from 63% of consumers is that a company is always responsible for protecting data. This includes when users fall victim to phishing scams or use an unencrypted Wi-Fi connection.
● Sharing of personal data is a problem for consumers. More than half of respondents (55%) say a company sharing their personal data without permission is even more likely than a data breach (27%) to deter them from using that brand’s products.
● Social media companies don’t instill trust. Social media companies are the least trusted among sectors, with only 28% of respondents reporting they feel confident in these platforms’ ability to protect their personal information.
● Poor login experiences lead to cancelled service. Almost two-thirds of consumers (65%) are frustrated by login experiences and one-third (33%) have stopped using a device, app or service, or have left a bad review following an inconvenient login experience.
“There’s no question, businesses risk losing customers and damaging their brands if they lack strong, transparent data protection practices,” said Richard Bird, chief customer information officer, Ping Identity. “With a large percentage of consumers holding companies responsible for data protection, there is a competitive advantage for organizations that deliver secure and convenient experiences through identity management—and with that, a danger for those who don’t.”
Younger employees anxious about company’s ability to tackle growing security threats.
According to a new report on behaviour and attitudes to cybersecurity among different age groups, employees over the age of 30 are more likely to adopt cybersecurity best practice than younger colleagues who have grown up around digital technology. The report – ‘Meeting the expectations of a new generation. How the under 30s expect new approaches to cybersecurity’ - also indicates that the younger generation is more anxious about cybersecurity and their company’s ability to tackle the growing number of security threats.
Launched by the Security division of NTT Ltd., a leading global technology services company, the report reveals that while the over-30s demonstrate better cybersecurity behaviour in the UK, US, Nordics and Hong Kong, it is under-30s who are cybersecurity leaders in France and Brazil.
NTT’s report identified good and bad practice for global organisations researched as part of its Risk:Value 2019 report, scored across 17 key criteria. It reveals that, on average, under-30s score 2.3 in terms of cybersecurity best practice, compared to 3.0 for over-30s. In the UK, the scores for under-30s (4.3) and over-30s (5.5) are among the highest globally.
The data suggests that just because Millennials and Generation Z workers were born in the digital age, it does not necessarily mean they follow cybersecurity best practice. In fact, employees who have spent longer in the workplace gaining knowledge and skills, and who have acquired ‘digital DNA’ during that time, sometimes have an advantage over younger workers.
Overall, under-30s expect to be productive, flexible and agile at work using their own tools and devices, but half of respondents think responsibility for security rests solely with the IT department. This is 6% higher than for respondents in the older age categories.
Azeem Aleem, VP Consulting (UK&I) Security, NTT, comments: “It’s clear from our research that a multi-generational workforce leads to very different attitudes to cybersecurity. This is a challenge when organisations need to engage across all age groups, from the oldest employee to the youngest. With technology constantly evolving and workers wanting to bring in and use their own devices, apps and tools, business leaders must ensure that security is an enabler and not a barrier to a productive workplace.
“Our advice for managing security within a multi-generational workforce is to set expectations with young people and make security awareness training mandatory. Then execute this training to test your defences with all company employees involved in simulation exercises. Finally, team work is key. The corporate security team is not one person, but the whole company, so cultural change is important to get right.”
Adam Joinson, Professor of Information Systems, University of Bath, an expert on the intersection between technology and behaviour, adds: “There is no ‘one size fits all’ approach to cybersecurity. The insights from the NTT study demonstrate that treating all employees as posing the same risk, or having the same skills, is problematic for organisations. We do need to be careful not to assume that the under-30s simply don’t care so much about cybersecurity. While this may be true in some cases, in others it is more likely that existing security policies and practices don’t meet their expectations about ‘stuff just working’.
“If we want to harness the fantastic creativity and energy of younger workers, we need to think about security as something that enables their work, not something that blocks them from achieving their tasks. This is likely to mean security practitioners having to fundamentally rethink the way security policies operate, and finding ways to improve the fit between security and the tasks employees are required to undertake as part of their core work.”
NTT’s six cybersecurity best practice tips for a multi-generational workforce:
The typical enterprise expects the threat to arrive within three years.
A new study from DigiCert reveals that 71 percent of global organizations see the emergence of quantum computers as a large threat to security. Most anticipate tangible quantum computer threats will begin arriving within three years. The survey was conducted by ReRez Research in August 2019 among 400 enterprise organizations in the U.S., Germany and Japan from across critical infrastructure industries.
Quantum Computing Threat is Real and Quickly Approaching
Quantum computing is on the minds of many and is impacting their current and future thinking. Slightly more than half (55 percent) of respondents say quantum computing is a “somewhat” to “extremely” large security threat today, with 71 percent saying it will be a “somewhat” to “extremely” large threat in the future. The median prediction for when post-quantum cryptography (PQC) would be required to combat the security threat posed by quantum computers was 2022, which means the deadline to prepare for quantum threats is nearer than some analysts have predicted.
Top Challenges
With the threat so clearly felt, 83 percent of respondents say it is important for IT to learn about quantum-safe security practices. Following are the top three worries reported for implementing PQC:
● High costs to battle and mitigate quantum threats
● Data stolen today is safe if encrypted, but quantum attacks will make this data vulnerable in the future
● Encryption on devices and applications embedded in products will be susceptible
95 percent of respondents reported they are discussing at least one tactic to prepare for quantum computing, but two in five see this as a difficult challenge. The top challenges reported include:
● Cost
● Lack of staff knowledge
● Worries that TLS vendors won’t have upgraded certificates in time
“It is encouraging to see that so many companies understand the risk and challenges that quantum computing poses to enterprise encryption,” said Tim Hollebeek, Industry and Standards Technical Strategist at DigiCert. “With the excitement and potential of quantum technologies to impact our world, it's clear that security professionals are at least somewhat aware of the threats that quantum computers pose to encryption and security in the future. With so many engaged, but lacking good information about what to do and how to prepare, now is the time for companies to invest in strategies and solutions that will help them get ahead of the game and not get caught with their data exposed when the threats emerge."
Preparing for PQC
Enterprises are beginning to prepare for quantum computing, with a third reporting they have a PQC budget and another 56 percent working on establishing a PQC budget. In terms of specific activities, not surprisingly, “monitoring” was the top tactic currently employed by IT. Understanding their organization’s level of crypto-agility came next. This reflects the understanding that when the time comes to make a switch to PQC certificates, enterprises need to be ready to make the switch quickly and efficiently.
Rounding out the top five current IT tactics were understanding the organization’s current level of risk, building knowledge about PQC and developing TLS best practices.
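In practice, crypto-agility usually means that the choice of algorithm is configuration rather than code. The sketch below is a simplified illustration of that idea, not DigiCert guidance: the classical entry uses the widely available `cryptography` Python package, while the post-quantum entry is a deliberate placeholder to be swapped in once standardised PQC implementations are deployed.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def ecdsa_p256_sign(data: bytes) -> bytes:
    """Classical signature in use today.

    A real system would load a persistent key; one is generated here only to
    keep the sketch self-contained.
    """
    key = ec.generate_private_key(ec.SECP256R1())
    return key.sign(data, ec.ECDSA(hashes.SHA256()))

def pqc_sign(data: bytes) -> bytes:
    """Placeholder: swap in a standardised post-quantum scheme when available."""
    raise NotImplementedError("PQC implementation not yet deployed")

# Crypto-agility: the algorithm is looked up from configuration rather than
# hard-coded at every call site, so switching to PQC later is a config change.
SIGNERS = {"ecdsa-p256": ecdsa_p256_sign, "pqc-placeholder": pqc_sign}

def sign(data: bytes, algorithm: str = "ecdsa-p256") -> bytes:
    return SIGNERS[algorithm](data)

signature = sign(b"inventory manifest")
```

Because call sites ask for a named algorithm rather than constructing one, moving to PQC certificates later becomes a registry and configuration change instead of a code rewrite.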
Recommendations
The DigiCert 2019 Post Quantum Crypto Survey points to three best practices for companies ready to start planning their strategies for securing their organizations for the quantum future:
More than 300 executives surveyed to determine the frequency and severity of people-centric data breaches.
Proofpoint has announced the availability of a new survey from The Economist Intelligence Unit to help organizations gauge the frequency and severity of people-centric data breaches, and the steps companies are taking to address them. The study, entitled “Cyber Insecurity: Managing Threats From Within,” surveyed more than 300 corporate executives, including CIOs and CISOs, from North America, Europe, and Asia/Pacific. Respondents overwhelmingly identified people-centric threats as the cause for the most detrimental cybersecurity breaches, which include socially-engineered attacks and human errors, rather than failure of technology or process.
“More than 99 percent of targeted cyberattacks depend on human interaction to be successful,” said Ryan Kalember, executive vice president of Cybersecurity Strategy for Proofpoint. “The Economist Intelligence Unit findings reinforce just how important it is for organizations to take a people-centric approach to their security strategy. Security teams need to know exactly who within their organization is being targeted and why—and educate their people on best security practices. Cybersecurity has clearly evolved into a human challenge as much as a technical challenge.”
The Economist Intelligence Unit findings highlight how more than 300 respondents are addressing today’s top threats, the major obstacles that impede implementing best practices, and how organizations are moving forward. Key insights include:
· The majority of executives surveyed (85%) agree that human vulnerabilities cause the most detrimental cybersecurity breaches rather than failure of technology or process.
· Eighty-six percent of executives surveyed have experienced at least one data breach in the past three years, with well over half (60%) having experienced at least four.
· Nearly half (47%) say it’s very or extremely likely that they will face a major data breach in the next three years. Only 56% of healthcare executives are confident their organization can prevent, detect or respond to a data breach.
· The top three ways a data breach disrupted their businesses include: loss of revenue (33%), especially at large companies (38%); loss of clients (30%); and termination of staff involved (30%).
· 91 percent agree that their organization needs to better understand which cybersecurity measures work best—their focus needs to shift from quantity to quality. Almost all respondents (96%) say the board and C-suite strongly support efforts to control cybersecurity risks and 93% say the board and C-suite are regularly updated on cybersecurity risks.
· Addressing data breaches at the organizational level and altering human behavior within the organization are critical steps to mitigating data breaches. 82% agree that data breach risk is an essential C-suite priority.
The Managed Services Summit North provided a useful snapshot of the state of the industry outside the dominating London market. Regional MSPs at the event in Manchester reported a lively market, while those with experience of life inside the M25 said that while the North was less frenetic as a sales environment, the customers were no less demanding.
And scale might be a problem in the future, MSPs at the Managed Services Summit North were told. Peter Sweetbaum, CEO of IT Lab Group – an acquisitive MSP and now one of the largest global managed service providers – told them that changes from Microsoft and others were causing all MSPs to think hard about resourcing just to keep up. If his organisation, with 750 staff, recognises the challenge of the changes this brings, how much more challenging must it be for the small supplier to keep up?
Panels of MSPs in the afternoon were keen to point out both similarities and issues in the North – there was a general concern about the ability of MSPs to market themselves and to develop the softer skills of customer engagement. For too many, it seemed that they were relying on the technology selling itself without a true understanding of the difference it would make to a customer.
For many smaller MSPs the issue was about breaking out of their local and specific markets to increase their scale, but without incurring extra costs. This drive for efficiency was a topic which came up many times, along with the realisation that the skills needed for expansion were not going to become available in the short term, and an element of sharing of limited resources would be desirable, even if the implications of shared responsibility are not easy.
Having a group of knowledgeable partners around to help is a good idea and does not increase costs overmuch, said Scott Tyson from Auvik. “There is a very big security skills gap,” added Kyle Torres of Sophos. It is in the vendor’s interest to help MSPs skill up, not necessarily with specific accreditations.
MSPs are also talking to more customers about digital transformation. “We have seen a shift from journeys to the cloud, then business strategy; now transformation. And when you add customers on a monthly basis, the management issue is demanding,” says Nigel Church from MSP First Solution.
Keith Halford, now with Acert Associates, but previously with Unisys’ MSP division, sympathised with the plight of the small MSP, but added that resources to help with bid strategies, marketing and other issues, were available on a short term basis if needed.
“I am speaking with a number of small MSPs looking for consultant resources - there is a desire to grow and budgets are an issue, so there is a real focus on returns,” he said.
Adam Clements from Zen Internet, a local hosting and MSP specialist provider, says value comes from having a real understanding of the customer and where technology can make a difference.
Jason Fry from PAV Services: “We’ve been a lot of things in our 30-plus years; now we are an MSP, having adapted to the market.” On the question of value, he says it has been important to add what he called “operational maturity” with good tooling, a good structure and management to deliver the services. This also helped when the time came to put the company up for sale, he added.
Getting the cost base right and keeping business processes as lean as possible is one solution, but this is not necessarily a skill that comes naturally for a sales-driven operation.
It may be a question of planning: Nigel Church says the value of an MSP changes during its lifetime and management needs to reflect on where it is in its evolution, and decide what the plan is.
Gartner, Inc. has revealed its top strategic predictions for 2020 and beyond. Gartner’s top predictions examine how the human condition is being challenged as technology creates varied and ever-changing expectations of humans.
“Technology is changing the notion of what it means to be human,” said Daryl Plummer, distinguished vice president and Gartner Fellow. “As workers and citizens see technology as an enhancement of their abilities, the human condition changes as well. CIOs in end-user organizations must understand the effects of the change and reset expectations for what technology means.”
Augmentations, decisions, emotions and companionship are the four aspects that are forging a new reality for human use of technology. “Beyond offering insights into some of the most critical areas of technology evolution, this year’s predictions help us move beyond thinking about mere notions of technology adoption and draw us more deeply into issues surrounding what it means to be human in the digital world,” said Mr. Plummer.
By 2023, the number of people with disabilities employed will triple due to AI and emerging technologies, reducing barriers to access.
“People with disabilities constitute an untapped pool of critically skilled talent,” said Mr. Plummer. “Artificial intelligence (AI), augmented reality (AR), virtual reality (VR) and other emerging technologies have made work more accessible for employees with disabilities. For example, select restaurants are starting to pilot AI robotics technology that enables paralyzed employees to control robotic waiters remotely. Organizations that actively employ people with disabilities will not only cultivate goodwill from their communities, but also see 89% higher retention rates, a 72% increase in employee productivity, and a 29% increase in profitability.”
By 2024, AI identification of emotions will influence more than half of the online advertisements you see.
Artificial emotional intelligence (AEI) is the next frontier for AI development, especially for companies hoping to detect emotions in order to influence buying decisions. Twenty-eight percent of marketers ranked AI and machine learning (ML) among the top three technologies that will drive future marketing impact, and 87% of marketing organizations are currently pursuing some level of personalization, according to Gartner. Computer vision, which allows AI to identify and interpret physical environments, is one of the key technologies used for emotion recognition and has been ranked by Gartner as one of the most important technologies in the next three to five years.
“AEI makes it possible for both digital and physical experiences to become hyper personalized, beyond clicks and browsing history but actually on how customers feel in a specific purchasing moment. With the promise to measure and engage consumers based on something once thought to be intangible, this area of ‘empathetic marketing’ holds tremendous value for both brands and consumers when used within the proper privacy boundaries,” said Mr. Plummer.
Through 2023, 30% of IT organizations will extend BYOD policies with “bring your own enhancement” (BYOE) to address augmented humans in the workforce.
The concept of augmented workers has gained traction in social media conversations in 2019 due to advancements in wearable technology. Wearables are driving workplace productivity and safety across most verticals, including automotive, oil and gas, retail and healthcare. Although wearables are only one example of the physical augmentations available today, humans will look to additional physical augmentations that will enhance their personal lives and help them do their jobs.
“IT leaders certainly see these technologies as impactful, but it is the consumers’ desire to physically enhance themselves that will drive the adoption of these technologies first,” said Mr. Plummer. “Enterprises need to balance the control of these devices in their enterprises while also enabling users to use them for the benefit of the organization. This means embracing and exploiting the benefits of physical human augmentation through the implementation of a BYOE strategy.”
By 2025, 50% of people with a smartphone but without a bank account will use a mobile-accessible cryptocurrency account.
Major online marketplaces and social media platforms will start supporting cryptocurrency payments by the end of next year. At least half the globe’s citizens who do not use a bank account will instead use these new mobile-enabled cryptocurrency account services offered by global digital platforms by 2025. This will open trading opportunities for buyers and sellers in growing economies like sub-Saharan Africa and Asia/Pacific.
By 2023, a self-regulating association for oversight of AI and machine learning designers will be established in at least four of the G7 countries.
“Regulation of products as complex as AI and ML algorithms is no easy task. Consequences of algorithm failures at scale that occur within major societal functions are becoming more visible. For instance, AI-related failures in autonomous vehicles and aircraft have already killed people and attracted widespread attention in recent months,” said Mr. Plummer.
Public demand for protection from the consequences of malfunctioning algorithms will in turn produce pressure to assign legal liability for the harmful consequences of algorithm failure. The immediate impact of regulation of process will be to increase cycle times for AI and ML algorithm development and deployment. Enterprises can also expect to spend more for training and certification for practitioners and documentation of processes, as well as higher salaries for certified personnel.
By 2023, 40% of professional workers will orchestrate their business application experiences and capabilities like they do their music streaming experience.
The human desire to have a work environment that is similar to their personal environment continues to rise — one where they can assemble their own applications to meet job and personal requirements in a self-service fashion. The consumerization of technology and introduction of new applications have elevated the expectations of employees as to what is possible from their business applications.
“Applications used to define our jobs. Nowadays, we are seeing organizations designing application experiences around the employee. For example, mobile and cloud technologies are freeing many workers from coming into an office and instead supporting a ‘work anywhere’ environment, outpacing traditional application business models,” said Mr. Plummer. “Similar to how humans customize their streaming experience, they can increasingly customize and engage with new application experiences.”
By 2023, up to 30% of world news and video content will be authenticated as real by blockchain, countering deepfake technology.
Fake news represents deliberate disinformation, such as propaganda that is presented to viewers as real news. Its rapid proliferation in recent years can be attributed to bot-controlled accounts on social media, attracting more viewers than authentic news and manipulating human intake of information.
By 2021, at least 10 major news organizations will use blockchain to track and prove the authenticity of their published content to readers and consumers. Likewise, governments, technology giants and other entities are fighting back through industry groups and proposed regulations. “The IT organization must work with content production teams to establish and track the origin of enterprise-generated content using blockchain technology,” said Mr. Plummer.
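The basic building block of such provenance schemes is simple to sketch: publish a cryptographic fingerprint of each piece of content at publication time so that any copy can later be re-hashed and compared. The example below shows only that fingerprinting step using SHA-256; how the record is anchored to a particular blockchain is omitted and varies by implementation, and the field names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, source: str) -> dict:
    """Fingerprint a piece of content so its integrity can be verified later.

    The returned record is what would be anchored to a ledger; verifying a copy
    means re-hashing it and comparing against the published digest.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source": source,
        "published_at": datetime.now(timezone.utc).isoformat(),
    }

article = b"Full text of the published news item..."
record = provenance_record(article, source="example-newsroom")
print(json.dumps(record, indent=2))
```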
Through 2021, digital transformation initiatives will take large traditional enterprises on average twice as long and cost twice as much as anticipated.
Business leaders’ expectations for revenue growth are unlikely to be realized from digital optimization strategies, due to the cost of technology modernization and the unanticipated costs of simplifying operational interdependencies. Such operational complexity also impedes the pace of change along with the degree of innovation and adaptability required to operate as a digital business.
“In most traditional organizations, the gap between digital ambition and reality is large,” said Mr. Plummer. “We expect CIOs’ budget allocation for IT modernization to grow 7% year over year through 2021 to try to close that gap.”
By 2023, individual activities will be tracked digitally by an “Internet of Behavior” to influence benefit and service eligibility for 40% of people worldwide.
Through facial recognition, location tracking and big data, organizations are starting to monitor individual behavior and link that behavior to other digital actions, like buying a train ticket. The Internet of Things (IoT) – where physical things are directed to do a certain thing based on a set of observed operating parameters relative to a desired set of operating parameters — is now being extended to people, known as the Internet of Behavior (IoB).
“With IoB, value judgements are applied to behavioral events to create a desired state of behavior,” said Mr. Plummer. “Within Western countries, the most notable example of a usage-based and behaviorally based business model is in property and casualty insurance. Over the long term, it is likely that almost everyone living in a modern society will be exposed to some form of IoB that melds with cultural and legal norms of our existing predigital societies.”
By 2024, the World Health Organization will identify online shopping as an addictive disorder, as millions abuse digital commerce and encounter financial stress.
Consumer spending via digital commerce platforms will continue to grow over 10% year over year through 2022. The ease of online shopping will cause financial stress for millions of people, as online retailers increasingly use AI and personalization to effectively target consumers and prompt them to spend discretionary income that they do not have. The resulting debt and personal bankruptcies will cause depression and other health concerns caused by stress, which is capturing the attention of the WHO.
“The side effects of technology that promote addictive behavior are not exclusive to consumers. CIOs must also consider the possibility of lost productivity among employees who put work aside for online shopping and other digital distractions. In addition, regulations in support of responsible online retail practices might force companies to provide warnings to prospective customers who are ready to make online purchases, similar to casinos or cigarette companies,” said Mr. Plummer.
Global IT spending to grow 3.7 percent in 2020
Worldwide IT spending is projected to total $3.7 trillion in 2019, an increase of 0.4% from 2018, according to the latest forecast by Gartner, Inc. This is the lowest growth forecast Gartner has issued for 2019 so far. Global IT spending is expected to rebound in 2020 with forecast growth of 3.7%, primarily due to enterprise software spending.
“The slowdown in IT spending in 2019 is not expected to stretch as far into 2020 despite concerns over a recession and companies cutting back on discretionary IT spending,” said John-David Lovelock, research vice president at Gartner.
Today’s complex geopolitical environment has pushed regulatory compliance to the top of organizations’ priority list. Overall spending on security increased 10.5% in 2019, with cloud security projected to grow 41.2% over the next five years. “This is not just about keeping the ‘bad guys’ out,” said Mr. Lovelock. “It is also about the expanding need to be compliant with tariffs and trade policy, intellectual property rights, and even with the multiple and sometimes overlapping privacy laws.”
Despite the ongoing tariff war, U.S. IT spending is forecast to grow 3.5% in 2019, but IT spending in China is expected to grow only 0.1%. “Tariffs do not have a direct effect on IT spending, yet,” said Mr. Lovelock. “Should tariffs extend to devices like PCs and mobile phones, we will likely see manufacturers switch supply routes to minimize costs and have their technology made outside of China.”
The device market will see the sharpest spending decline among all segments in 2019, down 5.3% from $713 billion in 2018 (see Table 1). However, the market is expected to see modest growth of 1.2% in 2020. “Similar to how consumers have reached a threshold for upgrading to new technology and applications, technology general managers and product managers should invest only in the next generation of products that will push them closer to becoming a true technology company,” said Mr. Lovelock.
Table 1. Worldwide IT Spending Forecast (Billions of U.S. Dollars)
Segment | 2019 Spending | 2019 Growth (%) | 2020 Spending | 2020 Growth (%) | 2021 Spending | 2021 Growth (%) |
Data Center Systems | 205 | -2.5 | 210 | 2.6 | 212 | 1.0 |
Enterprise Software | 457 | 8.8 | 507 | 10.9 | 560 | 10.5 |
Devices | 675 | -5.3 | 683 | 1.2 | 685 | 0.4 |
IT Services | 1,031 | 3.7 | 1,088 | 5.5 | 1,147 | 5.5 |
Communications Services | 1,364 | -1.1 | 1,384 | 1.5 | 1,413 | 2.1 |
Overall IT | 3,732 | 0.4 | 3,872 | 3.7 | 4,018 | 3.8 |
Source: Gartner (October 2019)
IT spending growth is being driven by the rest of the world catching up on cloud spending. The U.S. is leading cloud adoption and accounts for over half of global spending on cloud. In some cases, countries that Gartner tracks lag one to seven years in cloud adoption rates. “For perspective, the country directly behind the U.S. on cloud spending is the United Kingdom, which only spends 8% on public cloud services. An interesting outlier is China, which has the highest growth of cloud spending out of all countries. While China is closing the spending gap, it still will not reach U.S. levels by 2023,” said Mr. Lovelock.
Gartner predicts that organizations with a high percentage of IT spending dedicated to the cloud will become the recognized digital leaders in the future. “Most companies are caught trying to either cut costs or invest for growth, but the top-performing enterprises are doing both. A core challenge facing the industry is how organizations can operate as both a traditional company and a technology company at the same time,” said Mr. Lovelock. “These ‘and’ dilemmas will drive future IT spending trends.”
Digitalisation anxieties top business leaders’ concerns
Concerns about digitalization misconceptions, as well as the pace of their organizations’ digitalization efforts, topped business leaders’ concerns in Gartner, Inc.’s latest Emerging Risks Monitor Report.
Gartner surveyed 144 senior executives across industries and geographies, and the results showed that “digitalization misconceptions” had risen to become the top emerging risk in the 3Q19 Emerging Risks Monitor survey. “Lagging digitalization” remains in second position (see Table 1). Last quarter’s top emerging risk, “pace of change,” has now become an established risk after ranking on four previous emerging risk reports.
“While the threat of external macro risks such as the U.S.-China trade war are an increasing source of concern for executives, it’s notable that the top three risks in this quarter’s report are all centered around internal operations,” said Matt Shinkman, vice president with Gartner’s Risk and Audit Practice. “Business leaders are most concerned with their strategies around digital and having the resources in place to execute these plans.”
Table 1. Top Five Risks by Overall Risk Score: 4Q18-3Q19
Rank | 4Q18 | 1Q19 | 2Q19 | 3Q19 |
1 | Talent Shortage | Accelerating Privacy Regulation | Pace of Change | Digitalization Misconceptions |
2 | Accelerating Privacy Regulation | Pace of Change | Lagging Digitalization | Lagging Digitalization |
3 | Pace of Change | Talent Shortage | Talent Shortage | Strategic Assumptions |
4 | Lagging Digitalization | Lagging Digitalization | Digitalization Misconceptions | Data Localization |
5 | Digitalization Misconceptions | Digitalization Misconceptions | Data Localization | U.S.-China Trade Talks |
Source: Gartner (October 2019)
Digitalization Anxiety Driven by Business Model Change
Digitalization misconceptions was cited as a top risk by 52% of respondents in the Emerging Risks Monitor Report, with lagging digitalization close behind with a 51% frequency rate. Executives from the IT and telecom sectors were clearly most concerned with digitalization misconceptions, with 75% of executives surveyed indicating this as a risk. The banking and energy industries were most concerned with lagging digitalization, with nearly seven in 10 executives indicating this area as a top risk.
Additional data collected by Gartner from corporate strategists reveals the extent of concerns with organizations’ digital strategies. Slow strategy execution and insufficient digital capabilities were tied among the top concerns of corporate strategists for 2019, with 60% of respondents indicating both areas as top concerns.
Adding to the unease is the large percentage of organizations undergoing some form of digital business change. A large majority of strategists indicated that digitalization was impacting four distinct areas of their business model: business capabilities, profit models, value propositions and customer behavior.
While these changes are underway, uncertainty persists about a clear path from current to future business models. Only 35% of strategists said they felt confident about which investments and initiatives were needed to drive their future business model, while only 20% said they had clarity on how those changes would positively impact their organizations. Just 8% of strategists felt confident that senior leadership agreed with these changes.
“Our data suggests that executives lack confidence in transformational business model changes, even as many are already underway,” said Mr. Shinkman. “To help mitigate the risks of digitalization missteps, organizations should take an incremental approach to business model transformation, with each step in the process building knowledge for future initiatives. Organizing business model transformation into discrete projects lessens the chance of major disruptions.”
Mr. Shinkman also noted that risk executives need to ensure they have a seat at the table during digitalization projects. Gartner research indicates that while two-thirds of risk executives said their organization currently has a digital transformation project underway, only 35% report that enterprise risk management teams are playing a role in the project.
The top 10 strategic technology trends for 2020
Gartner, Inc. has highlighted the top strategic technology trends that organizations need to explore in 2020. Analysts presented their findings during Gartner IT Symposium/Xpo.
Gartner defines a strategic technology trend as one with substantial disruptive potential that is beginning to break out of an emerging state into broader impact and use, or which is rapidly growing with a high degree of volatility reaching tipping points over the next five years.
“People-centric smart spaces are the structure used to organize and evaluate the primary impact of the Gartner top strategic technology trends for 2020,” said David Cearley, vice president and Gartner Fellow. “Putting people at the center of your technology strategy highlights one of the most important aspects of technology — how it impacts customers, employees, business partners, society or other key constituencies. Arguably all actions of the organization can be attributed to how it impacts these individuals and groups either directly or indirectly. This is a people-centric approach.”
“Smart spaces build on the people-centric notion. A smart space is a physical environment in which people and technology-enabled systems interact in increasingly open, connected, coordinated and intelligent ecosystems. Multiple elements — including people, processes, services and things — come together in a smart space to create a more immersive, interactive and automated experience,” said Mr. Cearley.
The top 10 strategic technology trends for 2020 are:
Hyperautomation
Hyperautomation is the combination of multiple machine learning (ML), packaged software and automation tools to deliver work. Hyperautomation refers not only to the breadth of the palette of tools, but also to all the steps of automation itself (discover, analyze, design, automate, measure, monitor and reassess). Understanding the range of automation mechanisms, how they relate to one another and how they can be combined and coordinated is a major focus for hyperautomation.
This trend was kicked off with robotic process automation (RPA). However, RPA alone is not hyperautomation. Hyperautomation requires a combination of tools to help replicate the pieces of a task where a human is involved.
Multiexperience
Through 2028, the user experience will undergo a significant shift in how users perceive the digital world and how they interact with it. Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in both perception and interaction models leads to the future multisensory and multimodal experience.
“The model will shift from one of technology-literate people to one of people-literate technology. The burden of translating intent will move from the user to the computer,” said Brian Burke, research vice president at Gartner. “This ability to communicate with users across many human senses will provide a richer environment for delivering nuanced information.”
Democratization of Expertise
Democratization is focused on providing people with access to technical expertise (for example, ML, application development) or business domain expertise (for example, sales process, economic analysis) via a radically simplified experience and without requiring extensive and costly training. “Citizen access” (for example, citizen data scientists, citizen integrators), as well as the evolution of citizen development and no-code models, are examples of democratization.
Through 2023, Gartner expects four key aspects of the democratization trend to accelerate, including democratization of data and analytics (tools targeting data scientists expanding to target the professional developer community), democratization of development (AI tools to leverage in custom-developed applications), democratization of design (expanding on the low-code, no-code phenomena with automation of additional application development functions to empower the citizen-developer) and democratization of knowledge (non-IT professionals gaining access to tools and expert systems that empower them to exploit and apply specialized skills beyond their own expertise and training).
Human Augmentation
Human augmentation explores how technology can be used to deliver cognitive and physical improvements as an integral part of the human experience. Physical augmentation enhances humans by changing their inherent physical capabilities by implanting or hosting a technology element on their bodies, such as a wearable device. Cognitive augmentation can occur through accessing information and exploiting applications on traditional computer systems and the emerging multiexperience interface in smart spaces. Over the next 10 years increasing levels of physical and cognitive human augmentation will become prevalent as individuals seek personal enhancements. This will create a new “consumerization” effect where employees seek to exploit their personal enhancements — and even extend them — to improve their office environment.
Transparency and Traceability
Consumers are increasingly aware that their personal information is valuable and are demanding control. Organizations recognize the increasing risk of securing and managing personal data, and governments are implementing strict legislation to ensure they do. Transparency and traceability are critical elements to support these digital ethics and privacy needs.
Transparency and traceability refer to a range of attitudes, actions and supporting technologies and practices designed to address regulatory requirements, preserve an ethical approach to use of artificial intelligence (AI) and other advanced technologies, and repair the growing lack of trust in companies. As organizations build out transparency and trust practices, they must focus on three areas: (1) AI and ML; (2) personal data privacy, ownership and control; and (3) ethically aligned design.
The Empowered Edge
Edge computing is a computing topology in which information processing and content collection and delivery are placed closer to the sources, repositories and consumers of this information. It tries to keep the traffic and processing local to reduce latency, exploit the capabilities of the edge and enable greater autonomy at the edge.
“Much of the current focus on edge computing comes from the need for IoT systems to deliver disconnected or distributed capabilities into the embedded IoT world for specific industries such as manufacturing or retail,” said Mr. Burke. “However, edge computing will become a dominant factor across virtually all industries and use cases as the edge is empowered with increasingly more sophisticated and specialized compute resources and more data storage. Complex edge devices, including robots, drones, autonomous vehicles and operational systems will accelerate this shift.”
Distributed Cloud
A distributed cloud is the distribution of public cloud services to different locations while the originating public cloud provider assumes responsibility for the operation, governance, updates to and evolution of the services. This represents a significant shift from the centralized model of most public cloud services and will lead to a new era in cloud computing.
Autonomous Things
Autonomous things are physical devices that use AI to automate functions previously performed by humans. The most recognizable forms of autonomous things are robots, drones, autonomous vehicles/ships and appliances. Their automation goes beyond the automation provided by rigid programing models, and they exploit AI to deliver advanced behaviors that interact more naturally with their surroundings and with people. As the technology capability improves, regulation permits and social acceptance grows, autonomous things will increasingly be deployed in uncontrolled public spaces.
“As autonomous things proliferate, we expect a shift from stand-alone intelligent things to a swarm of collaborative intelligent things where multiple devices will work together, either independently of people or with human input,” said Mr. Burke. “For example, heterogeneous robots can operate in a coordinated assembly process. In the delivery market, the most effective solution may be to use an autonomous vehicle to move packages to the target area. Robots and drones aboard the vehicle could then effect final delivery of the package.”
Practical Blockchain
Blockchain has the potential to reshape industries by enabling trust, providing transparency and enabling value exchange across business ecosystems, potentially lowering costs, reducing transaction settlement times and improving cash flow. Assets can be traced to their origin, significantly reducing the opportunities for substitutions with counterfeit goods. Asset tracking also has value in other areas, such as tracing food across a supply chain to more easily identify the origin of contamination or track individual parts to assist in product recalls. Another area in which blockchain has potential is identity management. Smart contracts can be programmed into the blockchain where events can trigger actions; for example, payment is released when goods are received.
“Blockchain remains immature for enterprise deployments due to a range of technical issues including poor scalability and interoperability. Despite these challenges, the significant potential for disruption and revenue generation means organizations should begin evaluating blockchain, even if they don’t anticipate aggressive adoption of the technologies in the near term,” said Mr. Burke.
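As a concrete illustration of the “payment is released when goods are received” pattern mentioned above, the following toy Python sketch models the escrow logic of such a smart contract. It is not an actual on-chain contract (real smart contracts execute on a blockchain platform, not in local Python), and all names and fields are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    """Toy escrow: funds are locked on creation and released only when
    delivery of the goods is confirmed. Illustrative only; a real smart
    contract would run on a blockchain platform rather than locally."""
    buyer: str
    seller: str
    amount: float
    delivered: bool = False
    paid_out: bool = False
    events: list = field(default_factory=list)

    def confirm_delivery(self, confirmed_by: str) -> None:
        # In practice this trigger might come from an IoT sensor or a carrier's API.
        self.delivered = True
        self.events.append(f"delivery confirmed by {confirmed_by}")
        self._maybe_release_payment()

    def _maybe_release_payment(self) -> None:
        # The "event triggers action" rule: pay the seller once delivery is confirmed.
        if self.delivered and not self.paid_out:
            self.paid_out = True
            self.events.append(f"released {self.amount} to {self.seller}")

contract = EscrowContract(buyer="RetailerA", seller="SupplierB", amount=10_000.0)
contract.confirm_delivery(confirmed_by="carrier scan at warehouse")
print(contract.events)
# ['delivery confirmed by carrier scan at warehouse', 'released 10000.0 to SupplierB']
```

The value blockchain adds over this local sketch is that the rule and its execution are shared and tamper-evident across all parties, rather than trusted to any one of them.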
AI Security
AI and ML will continue to be applied to augment human decision making across a broad set of use cases. While this creates great opportunities to enable hyperautomation and leverage autonomous things to deliver business transformation, it creates significant new challenges for the security team and risk leaders with a massive increase in potential points of attack with IoT, cloud computing, microservices and highly connected systems in smart spaces. Security and risk leaders should focus on three key areas — protecting AI-powered systems, leveraging AI to enhance security defense, and anticipating nefarious use of AI by attackers.
The four phases of the Blockchain spectrum
Blockchain is forecast to generate $3.1 trillion in new business value worldwide by 2030, half of it by 2025 with applications designed for operational improvement, according to Gartner, Inc. However, enterprises can make missteps that leave them poorly positioned to capitalize fully on blockchain, expose them to competitive threat through the wrong strategy, and lull them into a false sense of progress and capability.
“Beyond operational improvements and increased efficiency, fully mature blockchain complete solutions will allow organizations to re-engineer business relationships, monetize illiquid assets and redistribute data and value flows to more successfully engage with the digital world. That is the real business of blockchain,” said David Furlonger, distinguished research vice-president and Gartner Fellow. “To unlock this potential, CIOs should use a framework to help their organizations better understand the timing of investments and the value proposition for blockchain usage based on different solution archetypes.”
Gartner created the Blockchain Spectrum to examine the phased evolution of blockchain solutions and how this path aligns to the anticipated value businesses can derive. In the book “The Real Business of Blockchain: How Leaders Can Create Value in a New Digital Age,” launched this month, Mr. Furlonger and co-author Christophe Uzureau, research vice president at Gartner, use the spectrum as one of several analytics models to reveal how blockchain will evolve from what it is today to what it will be by 2030.
The Blockchain Spectrum is made up of four evolutionary phases that segment solution offerings and characteristics, some of which won’t fully develop for years, but will have critical implications for the future of business and society. Each of these phases offers opportunities and risks, but CIOs should begin experimenting at some level based on a clear understanding that the choices they make will have significant consequences for their enterprise competitiveness and respective industries.
Blockchain-Enabling Technologies
These technologies provide the foundation upon which existing and future blockchain solutions and business models can be created. This foundation can also be used as part of nonblockchain solutions; for example, to improve operational efficiency. The foundational technology includes cryptography, distributed computing, peer-to-peer networking and messaging.
Blockchain-Inspired
In 2012, business leaders, primarily in financial services, started exploring blockchain through proofs of concepts and pilots. This phase will last through the early 2020s. Blockchain-inspired solutions leverage the foundational technologies but use only three of the five elements of blockchain — distribution, encryption and immutability. While some of these solutions make use of tokenization, they are not sufficiently decentralized to use such tokens to create new value exchange systems as part of the internet of value. As a result, these solutions often aim to reengineer existing processes specific to an individual organization or industry while maintaining centralized controls.
Blockchain-Complete
Blockchain-complete solutions deliver the full value proposition of blockchain using all five elements — distribution, encryption, immutability, tokenization and decentralization. Blockchain-complete solutions will feature tokenization enabled by smart contracts and decentralization, two components blockchain-inspired solutions lack. These solutions enable trade in new forms of value (such as new asset types) and unlock monopolies on existing forms of value and processes (such as digital commerce or digital advertising).
“Few mainstream organizations are building blockchain-complete solutions yet. However, many startups providing blockchain-native solutions are doing so, and some will gain market momentum by the early 2020s, with more scale apparent after 2025,” said Mr. Uzureau. “Though not immediate, the proliferation of blockchain-complete solutions will push organizations to explore new ways of operating with greater degrees of decentralization than they have now.”
Enhanced Blockchain
After 2025, complementary technologies such as the Internet of Things (IoT), artificial intelligence (AI) and decentralized self-sovereign identity (SSI) solutions will converge and become more integrated with blockchain networks. The resulting enhanced blockchain solutions will expand the types of customer value that can be tokenized and exchanged, and will enable a large number of smaller transactions to occur that would not be possible with traditional mechanisms.
“The evolution of blockchain cannot be ignored,” said Mr. Furlonger. “Blockchain-complete solutions will begin to gain traction in about three years. Only slightly further out lies a future business and societal environment that includes IoT and AI in which autonomous and intelligent things own assets and trade value. Business leaders who fail to do scenario planning or experiment with the technology, and delay consideration of the two fundamental blockchain components, decentralization and tokenization, risk being unable to adapt when blockchain matures.”
Over the past five years, International Data Corporation (IDC) has been documenting the rise of the digital economy and the digital transformation that organizations must undergo to compete and survive in this economy. As the current decade comes to an end, the digital economy is approaching a critical tipping point. In just a few short years, IDC predicts that half of all GDP worldwide will come from digitally enhanced products, services, and experiences.
As the digital economy becomes responsible for a growing share of enterprise revenue, it has also become an increasingly important item on the CEO's agenda. IDC believes that CEOs should focus on four key areas when thinking about the digital economy.
"Organizations face a challenging timeline over the next three years," said Meredith Whalen, Chief Research Officer at IDC. "By the end of 2019, IDC believes 46% of organizations will be set up for a successful digital transformation– these are the 'digitally determined.' In 2020, these organizations will spend $1.3 trillion on the technologies and services that enable the digital transformation of their business models, products and services, and organizations. But investor fatigue will start to set in by 2021 and the organizations that have been working toward transformation for four or five years will be expected to show results. Organizations that are lagging will be acquired, out of business, or subjected to new management. The organizations that are succeeding at digital transformation will deliver 'multiplied innovation' with new business models driving a significant increase in products, services, and experiences."
To help guide CEOs, key decision makers, and technology suppliers to meet the demands of the digital economy, IDC will launch nine new research practices in 2020 that bridge its traditional technology market view with a business outcomes view. The objective of the practices will be to provide context to what is happening in the digital economy – explaining the desired business outcomes, such as engendering trust or becoming an intelligent organization, and how technology can be used to achieve these outcomes. Each practice will bring together multiple IDC products under a common theme that relates back to the business outcomes organizations are pursuing. The practices will be global, cross-functional, and organized as a virtual team.
The practices will also be aligned with the new CEO agenda. To address new customer requirements, three practices will focus on: developing relationships with customers at scale; pivoting operations from efficiency and resiliency to meet market demand for customization; and developing digital trust programs. To help the C-suite develop new capabilities, three more practices will examine how to: add intelligence to business processes; become a software developer that creates and distributes digital services at scale; and create a work model that fosters human-machine collaboration and enables new skills and worker experiences. To overcome legacy thinking about what constitutes critical infrastructure, two practices will show that: business metrics are tied to reliable digital services and experiences which depend on digital infrastructure; and orchestrating connectivity across the workforce, customers, operations, and partners is critical to creating pervasive experiences. The final practice will help CEOs to define new value in the digital economy.
Worldwide digital transformation spending to reach $2.3 trillion in 2023
Worldwide spending on the technologies and services that enable the digital transformation (DX) of business practices, products, and organizations is forecast to reach $2.3 trillion in 2023, according to a new update to the International Data Corporation (IDC) Worldwide Semiannual Digital Transformation Spending Guide. DX spending is expected to steadily expand throughout the 2019-2023 forecast period, achieving a five-year compound annual growth rate of 17.1%.
"We are approaching an important milestone in DX investment with our forecast showing the DX share of total worldwide technology investment hitting 53% in 2023," said Craig Simpson, research manager with IDC's Customer Insights and Analysis Group. "This will be the first time DX technology spending has represented the majority share of total worldwide information and communications technology (ICT) investment in our forecast, which is a significant milestone and reflective of the larger commitment to enterprise-wide digital transformation."
"Worldwide DX technology investments will total more than $7.4 trillion over the next four years," said Eileen Smith, program vice president with IDC's Customer Insights and Analysis Group. "Industries have achieved varying levels of maturity to date and continue to pursue their DX objectives. The financial services sector will see the fastest overall growth with the banking, insurance, and security and investment services industries each delivering CAGRs of more than 19% over the forecast period. The distribution and services sector which includes industries like retail and professional services will also outpace the overall market with an 18.0% CAGR while public sector spending growth will match the overall market at 17.1%."
Discrete and process manufacturing will deliver the largest DX spending amounts throughout the forecast, accounting for nearly 30% of the worldwide total. The leading DX use cases – discretely funded efforts that support a program objective – in these industries are autonomic operations, robotic manufacturing, and root cause. Retail will be the third largest industry for DX spending with omni-channel commerce platforms and omni-channel order orchestration and fulfillment the leading DX use cases. Professional services and transportation will be close behind retail in terms of overall DX spending. The top DX use cases for these two industries are intelligent building energy management and freight management, respectively.
Of the 219 DX use cases identified by IDC, three will see the largest investment amounts throughout the forecast. Autonomic operations will be the largest use case in 2019 but will be overtaken by robotic manufacturing, which will more than double in size by 2023. The third largest use case will be freight management, followed by root cause, self-healing assets and automated maintenance, and 360-degree customer and client management. The use cases that will see the fastest spending growth will be virtualized labs (109.5% CAGR), digital visualization (49.9% CAGR) and mining operations assistance (41.6% CAGR).
"In the current competitive business world, digital transformation is the topmost strategic priority for every organization. Nevertheless, the concept is confusing and intricate. Digital transformation involves managing the existing business and building for the future at the same time, something like changing the engine of the plane while in flight," said Ashutosh Bisht, senior research manager for IDC’s Customer Insights & Analysis Group. "Enterprises across Asia/Pacific are adopting emerging technologies to enhance their operational excellence and connect more efficiently with their customers."
The United States will be the largest geographic market for DX spending, delivering roughly one third of the worldwide total throughout the forecast. The U.S. industries that will lead the way are discrete manufacturing, professional services, transportation, and process manufacturing. Western Europe will be the second largest geographic market in 2019, followed closely by China, which is forecast to move into the number 2 position by the end of the forecast. The leading industries in Western Europe will be discrete manufacturing, retail, and professional services. In China, DX spending will be led by discrete manufacturing and process manufacturing. In all three regions, the top DX spending priorities will be smart manufacturing and digital supply chain optimization.
CIO agenda 2020 predictions
Time for action is growing short for CIOs in the digital era. Many continue to struggle with siloed digital transformation initiatives, leaving them adrift and buffeted by competition and market forces. To support CIOs with guidance on complex, fast-moving environments and prescriptive, actionable recommendations, IDC has published IDC FutureScape: Worldwide CIO Agenda 2020 Predictions (IDC #US45578619). The predictions provide a strategic context that will enable CIOs to lead their organizations through a period of hyperscale, hyperspeed, and hyperconnectedness over the next five years. They also lay out IDC's vision for the ten most important shifts that will happen in IT organizations over the next 60 months and will help senior IT executives form their strategic IT plans.
"While there has been no single 'year of reckoning' for CIOs in the digital era, time for action is growing short, as competitors are accelerating their digital efforts. The ten predictions in this study define the concrete actions that CIOs can and must take to create digital-native 'future enterprises,'" said Serge Findling, vice president of Research for IDC's IT Executive Programs (IEP). "In this hyperspeed, hyperscale, and hyperconnected phase of digital transformation, CIOs must rapidly transform their organizations to become the Future IT."
The predictions from the IDC FutureScape for Worldwide CIO Agenda are:
Prediction 1: By 2024 the IT strategy for 80% of digitally advanced organizations will evolve to a broad, flexible, self-service mashup of digital tools to replace the "walled-garden" IT-as-an-Enabler model.
Prediction 2: By 2023, 65% of CIOs will be entrepreneurial leaders who evolve their organizations into centers of excellence that engineer enterprise-wide collaboration and innovation.
Prediction 3: Driven both by escalating cyber threats and needed new functionality, 65% of enterprises will aggressively modernize legacy systems with extensive new technology platform investments through 2023.
Prediction 4: By 2023, as a pillar of their IT multicloud approach, 70% of IT organizations will implement a strategic container/abstraction/API playbook to enhance application portability and hosting flexibility.
Prediction 5: By 2022, 70% of IT organizations will have transitioned from builders and operators to designers and integrators of digital solutions that come to define every product, service, or process.
Prediction 6: Through 2023, 80% of IT organizations will accelerate software development to enable them to deploy at least weekly code updates/revisions and business value delivery.
Prediction 7: By 2022, as innovation becomes synonymous with disruption, 40% of CIOs will co-lead innovation, articulating digital visions and infusing intelligence enterprise-wide.
Prediction 8: Through 2022, deployment of artificial intelligence to augment, streamline, and accelerate IT operations will be a principal IT transformation (ITX) initiative for 60% of enterprise IT organizations.
Prediction 9: Through 2024, 75% of CIOs will reshape all IT resources, including budgets, assets, and talent, to support real-time resource allocation and enterprise agility, dramatically reducing fixed costs.
Prediction 10: By 2023, driven by the mandate to deliver engaging, agile, continuous learning fueled workspaces, 60% of CIOs will implement formal employee experience programs.
The importance of proactive performance monitoring and analysis in an increasingly complex IT landscape. Digitalisation World launches new one-day conference.
The IT infrastructure of a typical organisation has become much more critical and much more complex in the digital world. Flexibility, agility, scalability and speed are the watchwords of the digital business. To meet these requirements, it’s highly likely that a company must use a multi-IT environment, leveraging a mixture of on-premise, colocation, managed services and Cloud infrastructure.
However, with this exciting new world of digital possibilities comes a whole new level of complexity, which needs to be properly managed. If an application is underperforming, just how easily can the underlying infrastructure problem be identified and resolved? Is the problem in-house or with one of the third party infrastructure or service providers? Is the problem to do with the storage? Or, maybe, the network? Does the application need to be moved?
Right now, obtaining the answer to these and many other performance-related questions relies on a host of monitoring tools. Many of these can highlight performance issues, but not all of them can isolate the cause(s), and few, if any, of them can provide fast, reliable and consistent application performance problem resolution – let alone predict future problems and/or recommend infrastructure improvements designed to enhance application performance.
Application performance monitoring, network performance monitoring and infrastructure performance monitoring tools all have a role to play when it comes to application performance optimisation. But what if there was a single tool that integrated and enhanced these monitoring solutions and, what’s more, provided an enhanced, AI-driven analytics capability?
Step forward AIOps. A relatively new IT discipline, AIOps provides automated, proactive (application) performance monitoring and analysis to help optimise the increasingly complex IT infrastructure landscape. The four major benefits of AIOps are:
1) Faster time to infrastructure fault resolution – great news for the service desk
2) Connecting performance insights to business outcomes – great news for the business
3) Faster and more accurate decision-making for the IT team – great news for the IT department
4) Helping to break down the IT silos into one integrated, business-enabling technology department – good news for everyone!
AIOps is still in its infancy, but its potential has been recognised by many of the major IT vendors and service and cloud providers and, equally important, by an increasing number of end users who recognise that automation, integration and optimisation are vital pillars of application performance.
Set against this background, Angel Business Communications, the Digitalisation World publisher, is running a one-day event entitled AIOps – enabling application optimisation. The event will be dedicated to AIOps as an essential foundation for application optimisation, recognising the importance of proactive, predictive performance monitoring and analysis in an increasingly complex IT landscape.
Presentations will focus on the role of AIOps in proactive performance monitoring, analysis and application optimisation.
To find out more about this new event – whether as a potential sponsor or attendee, visit the AIOPS Solutions website: https://aiopssolutions.com/
Or contact Jackie Cannon, Event Director:
Email: jackie.cannon@angelbc.com
Tel: +44 (0)1923 690 205
Reflecting the transformational nature of the enterprise technology world which it serves, this year’s 10th edition of Angel Business Communications’ premier IT awards has a new name. The SVC Awards have become... the SDC Awards!
Ten years ago, SVC stood for Storage, Virtualisation and Channel – and the SVC Awards focused on these important pillars of the overall IT industry. Fast forward to 2019, and virtualisation has given way to software-defined, which, in turn, has become an important sub-set of digital transformation. Storage remains important, and the Cloud has emerged as a major new approach to the creation and supply of IT products and services. Hence the decision to change one small letter in our awards; but, in doing so, we believe that we’ve created a set of awards that are of much bigger significance to the IT industry.
The SDC (Storage, Digitalisation + Cloud) Awards – the new name for Angel Business Communications’ IT awards, which are now firmly focused on recognising and rewarding success in the products and services that are the foundation for digital transformation!
Make sure to vote for companies in as many categories as you like as listed below (voting closes on the 15th November), and don’t forget to book your place at the awards evening, 27 November in London. www.sdcawards.com
Sponsor success
The new name for our awards has already attracted a range of sponsors. Lightbits have signed up as the awards’ entertainment sponsor, with Schneider Electric, EBC Group, NinjaRMM and Efficiency IT sponsoring key award categories.
Cloud-scale data centres require a different approach
Cloud-scale data centers require disaggregation of storage and compute, as evidenced by the top cloud giants’ transition from inefficient Direct-Attached SSD architecture to low-latency shared NVMe flash architecture.
The Lightbits team, who were key contributors to the NVMe standard and among the originators of NVMe over Fabrics (NVMe-oF), now bring you their latest innovation: the Lightbits NVMe/TCP solution.
In stark contrast to other NVMe-oF solutions, the Lightbits NVMe/TCP solution separates storage and compute without touching the network infrastructure or data center clients. With NVMe/TCP, Lightbits delivers the same IOPS as direct-attached NVMe SSDs and up to a 50% reduction in tail latency.
Schneider Electric provides energy and automation digital solutions for efficiency and sustainability. We combine world-leading energy technologies, real-time automation, software and services into integrated solutions for homes, buildings, data centres, infrastructure and industries. We make process and energy safe and reliable, efficient and sustainable, open and connected.
EBC Group is an award-winning managed service provider of IT services and solutions, telecommunications, print solutions and document management services. As an integrated provider of managed services we plan, implement and support our clients’ IT and technology, enabling them to run their business smoothly and securely.
Having provided managed IT services for 30 years, we have grown from strength to strength, with the business now encompassing three geographical locations and offering private cloud solutions from our privately owned data centre housed at our head office in Halesowen, near Birmingham and fully replicated at our Northampton office.
NinjaRMM was founded in 2013 to help MSPs and IT professionals simplify their workday with an intuitive and user-friendly RMM. Five years later, the company has grown to support over 3,000 customers across the globe.
Headquartered in Silicon Valley, NinjaRMM has just surpassed 100 employees and we continue to grow at a rapid rate. We remain one of the fastest-growing SaaS companies in the IT management space.
We hate to brag but it’s true. When it comes to technology infrastructure we know IT. We like to challenge the status quo of what is acceptable versus what is achievable within traditional datacentre and IT infrastructures. We chose a business name that reflects our reputation and we live up to IT. Get rid of the background noise and get down to business, that’s our ethos. If time waits for no man then technology certainly doesn’t.
The shortlist
We’ve now reached the all-important voting stage of the SDC Awards. Below you’ll find listed all of the finalists. If you go to the awards website: www.sdcawards.com, then you’ll be able to read the entries in full and then get voting.
IT teams are overwhelmed with the constant roll-out of new digital services, while simultaneously being tasked with keeping existing business-critical apps and infrastructure performing optimally. On the business side of the organisation, C-suite executives are working to find new ways of streamlining operations. It is time both sides’ needs were met by employing tactics which free up both time and capital for innovation. To solve these challenges, IT leaders can now turn to automation in the form of AIOps.
By Paul Cant, Vice President EMEA at BMC.
Last year IDC predicted that 70% of CIOs would apply a mix of data and artificial intelligence (AI) to tools, processes and IT operations by 2021. A few algorithms shouldn’t be assumed to be a quick fix for an organisation’s worldwide IT operations; however, if these innovative technologies are implemented with a strategic course of action, they can help the business thrive.
The ‘what’ & ‘how’
AIOps is now a market category which has gained serious momentum in the last few years. The concept brings together machine learning (ML) and data analytics in several different contexts. The technology enables quicker, simpler and more efficient IT operations management by enhancing processes such as application performance monitoring, behavioral learning (dynamic baselining), predictive event management, probable cause analysis and log analytics. As more and more businesses go digital, requirements and infrastructures become more complex. IT must in turn respond as effectively as possible, and AIOps can enable this.
When AIOps is implemented, IT operations teams are able to move from rule-based, manual analysis to machine-assisted analysis and machine learning systems. Human agents can only complete a certain amount of analysis in a given time frame, depending on the volume and complexity of the task; AIOps lifts that constraint by handing much of the analysis to machines.
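As a rough illustration of the behavioral learning (dynamic baselining) capability described above, the sketch below shows one simple way such machine-assisted analysis can work: learn a rolling baseline from recent metric samples and flag only the readings that fall well outside it. This is a deliberately minimal example under assumed thresholds, not how any particular AIOps product is implemented.

```python
from collections import deque
from statistics import mean, stdev

class DynamicBaseline:
    """Toy version of AIOps-style behavioral learning: keep a rolling window
    of recent metric samples and flag readings far outside the learned norm."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold  # number of standard deviations

    def observe(self, value: float) -> bool:
        """Record a sample and return True if it should raise an alert."""
        anomalous = False
        if len(self.samples) >= 10:  # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous

baseline = DynamicBaseline(window=30, threshold=3.0)
latencies_ms = [20, 22, 19, 21, 20, 23, 18, 22, 21, 20, 19, 250]  # last value is a spike
for t, latency in enumerate(latencies_ms):
    if baseline.observe(latency):
        print(f"sample {t}: {latency} ms is outside the learned baseline")
```

Real AIOps platforms layer far more on top of this (seasonality, event correlation, probable cause analysis), but the principle of learning what “normal” looks like and only surfacing deviations is the same.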
AIOps in action
One example of AIOps at work is the use case of Park Place Technologies (PPT). They are the world’s largest post-warranty data center maintenance organisation, helping thousands of customers manage their data centres globally. When customers have performance issues, costs escalate for Park Place if manual triage is required. To improve customer satisfaction and drive uptime, Park Place decided that an AIOps platform was the right solution for automating the support process, and began their implementation with 500 customers, with plans to support more.
“Using AIOps helps us move from a reactive service model to proactive, and ultimately to predictive. We’re able to see signs that there’s an impending failure and remediate it before it happens, really saving our customers a lot of downtime,” said Paul Mercina, Director of Product Management, Park Place Technologies.
Since applying its AIOps solution, Park Place has reduced the number of times a ticket is touched by a human by 80 percent. Not only that, the AIOps solution has also allowed them to automate the triage process and optimize the customer experience – in fact, they’ve seen a 10 percent faster time to resolve incidents by leveraging advanced anomaly detection. These figures exemplify what AIOps can deliver in practice at an organisation and the wide variety of benefits it brings.
Holistic monitoring
For all organisations, IT services can use AIOps to produce holistic monitoring strategies. These allow the IT team to see patterns across various channels and make predictions based on the combined data, events, logs and metrics. AIOps also provides a behavioral learning capability, which uses the recognised patterns of operational normalcy as a baseline, suppressing routine event noise and surfacing the unusual events that fall outside those patterns.
Lastly, AIOps can centralise the way organisations address and respond to IT incidents at different locations around the world. It can produce insights to identify problems affecting operations between services and send automated notifications when a problem has been identified. This helps not only to speed up incident response, but also to resolve issues more effectively when they take place.
These are just some of the many ways enterprises are using AIOps today to cut costs, improve customer experience, avert problems, and free IT staff to focus their time on innovations their organisations need. It might not work miracles, but AIOps elevates the strategic importance and visibility of IT to the business by delivering the performance and availability needed no matter how complex environments become.
With or without realising, we’ve encountered the benefits of digital transformation in various parts of our lives – from everyday tasks such as shopping, working and socialising with friends and family, to how we stream films, and pay for things using our phone or watch.
By Hubert Da Costa, VP and GM EMEA, Cybera.
Within the last couple of decades, the digital world has changed beyond belief thanks to digital disruptors like Amazon, Netflix and Uber, raising the expectations of today’s connected, smartphone-wielding consumers. Once upon a time, these consumers waited for dial-up internet to load; now the rules of engagement have changed, with ease and convenience becoming a ‘must have’ whenever they enter a store, head online, or both.
It’s no surprise, then, that organisations of all sizes and in various sectors have put digital transformation at the top of the agenda, in order to support their operations and stop them falling behind. For small businesses and franchise owners, this is a crucial aspect that could make or break their business.
From a commercial perspective, adapting to digital trends is vital. Failing to keep up risks missing out on important opportunities, or being left by the kerbside in an era that’s characterised by fast-paced disruption.
Here are some of the top digital transformation challenges they face.
1. Transferring data to the cloud
The widespread presence of digital real-time interactions means modern businesses are moving to digital-first strategies to support them with innovation, drive growth, and improve efficiency.
In a bid to connect people, things and locations, they’re turning to the cloud to access a rich choice of applications that reduce operational expenditure, while enabling new and agile ways of working.
Cloud computing delivers numerous benefits – especially for small business and franchise owners. It provides a low-cost way to access the infrastructure and IT resources needed to drive digital business advantage. British SMBs have been quick to seize the opportunity. According to the British Chamber of Commerce, 69 percent of SMBs now harness some form of cloud computing service.
As the cloud becomes the gateway to utilising cutting edge technologies such as analytics, AI and automation, small businesses are set to benefit as these capabilities become democratised and productised. Able to compete on a level playing field with much larger enterprises, they’ll leverage these advanced technologies to develop new improved products and elevate how they engage with customers.
But all this adds up to a growing volume of business-critical network workloads that will need to be protected from any potential failure in network availability, performance or reliability.
2. Bringing digitalisation to every channel
Nowadays, with consumers expecting convenience at their fingertips, businesses are expected to respond by providing an integrated omnichannel experience, whether it’s through an online website or through an app on their phone. Moving seamlessly between the physical and digital worlds, customers expect services like click and collect, reserve in-store, and more.
It’s not just the retail sector that’s been impacted by this trend. Just about every consumer-facing business or brand needs to connect, serve, support and deliver services in the channels their customers are using.
Enabling consistent and personalised interactions across every channel and touchpoint will prove critical for small businesses and franchise owners looking to gain a competitive edge.
That can prove particularly challenging for those based in remote or geographically dispersed locations looking to deliver the same level of digital sophistication as larger, metropolitan based commercial entities. To enable exceptional web and mobile experiences, they’ll need a high-performance network that ensures the internet doesn’t slow down their business.
3. Staying secure and compliant
With ‘data leaks’ constantly hitting the headlines, it’s more imperative than ever to gain and maintain customer trust. Small businesses and franchises must balance these seamless digital experiences with the provision of fail-safe security that keeps customer personal and payment data safe and protected.
Today’s small business owners have to comply with a slew of regulations that extend from GDPR to PCI security standards. The proliferation of the Internet of Things (IoT) means the amount of contextual customer data is set to explode, compounding the cybersecurity and compliance challenge.
Add a rapidly changing cyber-threat landscape into the mix and digital transformation can seem like an uphill task for the smaller enterprise.
Fortunately, cybersecurity technology for the digital age is rapidly becoming available to small businesses. With security measures such as multi-factor authentication, multi-layer security and advanced encryption, delivering differentiated and secure digital experiences is at last within the grasp of businesses of any size.
These new adaptive approaches and cyber technologies make it possible to easily separate traffic into separate networks, so that each application and its associated data has its own space and doesn’t infringe on other applications in the system.
All of which makes it easier to keep attackers away from valuable customer data, stay compliant, and protect vital business infrastructure and assets.
4. Managing the customer experience
Managing a small business in the age of technology isn’t a walk in the park. It is increasingly hard to please consumers, who expect a superior experience that’s connected, intelligent and personalised.
To stay relevant in a modern marketplace, small businesses need to listen and respond to customer needs. That means using digital tools to gain a clear picture of who their customers are, what turns them on and off, and engage in personalised relationship marketing that resonates with existing and new customers.
For example, loyalty programmes can be integrated with a customer’s mobile device and point-of-sale systems, including mobile point-of-sale. It’s an approach that makes it possible to track customer preferences, monitor which discounts and offers incentivise purchases, and can even be integrated with a customer’s preferred digital wallet app.
For local businesses that want to create an affinity with their brand and extend their awareness and reach, taking advantage of such tools produces a sense of trust and connection that goes beyond the physical location alone.
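To make the loyalty and point-of-sale integration described above slightly more concrete, here is a minimal, hypothetical Python sketch of the kind of analysis it enables: joining loyalty IDs to POS transactions to see which offers actually drive redemptions and what each customer spends. All data, identifiers and field names are invented for illustration.

```python
from collections import Counter

# Hypothetical point-of-sale transactions, each tagged with the loyalty ID
# and the offer (if any) redeemed at checkout.
transactions = [
    {"loyalty_id": "C001", "total": 12.50, "offer": "coffee-2for1"},
    {"loyalty_id": "C002", "total": 8.00,  "offer": None},
    {"loyalty_id": "C001", "total": 15.75, "offer": "pastry-10off"},
    {"loyalty_id": "C003", "total": 9.20,  "offer": "coffee-2for1"},
]

# Which offers are actually incentivising purchases?
redemptions = Counter(t["offer"] for t in transactions if t["offer"])
print(redemptions.most_common())  # [('coffee-2for1', 2), ('pastry-10off', 1)]

# Simple per-customer spend view that could feed personalised relationship marketing.
spend_by_customer = Counter()
for t in transactions:
    spend_by_customer[t["loyalty_id"]] += t["total"]
print(spend_by_customer)
```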
Next steps
For small businesses and franchise owners, digitising their businesses is a turning point. By uniting their existing strengths and local market knowledge with digital tools that take their offer online and across channels, they will be well placed to challenge their competitors and to win and retain customers.
Ahead of 2020, DW asked a whole host of individuals working in the tech space for their thoughts and observations on the business trends and technology developments likely to be major features of the year ahead. No one gave us the kitchen sink, but otherwise it seems that we’ve uncovered a broad spectrum of valuable, expert opinion to help end users plan their digital transformations over the next 12 months or so. Part 1.
What’s next for the data centre? – five key trends
Enterprise IT looks with envy at the “hyperscale” data centers powering Internet giants like Amazon Web Services, Facebook, Microsoft and Alibaba. These contain hundreds of thousands of powerful computers (servers) connected by thousands of miles of network cable through tens of thousands of networking devices such as switches and routers. Google is estimated to have 900,000 servers across its 13 data centers worldwide, using enough electricity to power 200,000 homes. Do more modestly sized companies have any chance of competing in such a world?
Trend 1 – a smarter network
There are many elements to building a smart network, but the focus here is on NICs.
In a traditional data center the servers centralize the intelligence; one trend today is to make the network itself more intelligent by installing “smart” network cards called SmartNICs. According to Michael Kagan, CTO of Mellanox Technologies: “It's not just about how fast you move the data along the wires. The secret is to process the data while it moves… in our most advanced network products we have computational units within every switch, so we can do data aggregation on the fly”. In his keynote address at NetEvents 2019 EMEA IT Spotlight he compared this to the distributed intelligence of a termite colony. Although individual termites may be programmed for specific roles like foraging, building or defence, they can rapidly regroup to repair, say, a collapsed mound. This is self-healing group behaviour – not driven by instructions from the centralised intelligence of the colony queen – and that puts it way ahead of traditional data center architecture.
Trend 2 – faster storage
The revolution in storage capability is well advanced, with Flash solid state storage revenue exceeding sales of traditional hard disk drives, and the arrival of integrated single-chip storage controllers.
While massive archives will remain on traditional lower cost media for many years, as solid state storage becomes cheaper it is increasingly first choice for high speed access. Again, intelligent networks have a role to play, as Michael Kagan explains: “SmartNIC Virtualization presents anything on your cloud as a true local device… On your machine with its legacy operating system, you see a local device that can touch everything… we can allocate resources from different machines on the network and make them available as local storage devices, or local storage services on the local machine.”
Faster storage also needs faster networks…
Trend 3 – faster networks
High speed Ethernet (25G and faster) went mainstream last year. It is no longer just the privilege of hyperscale giants, as more medium-sized enterprises upgrade from 10G to 40G Ethernet and find up to 50-fold performance gains from more efficient use of resources.
The hyperscalers are still leading the trend, however. One giant social media company, upgrading to cutting-edge Rome generation servers, found that their 25Gb/s networks were no longer sufficient and jumped to 100Gb/s Ethernet to get the full performance benefits. According to Kevin Deierling from Mellanox: “This is the perpetual game of technology leap-frog… Whenever there is a leap forward in one element of the processing-storage-network triad, another element falls behind. With the newest CPUs making big performance leaps, it’s now the network’s turn to leap”.
Trend 4 – Intent-Based Networking (IBN)
IT historically developed up from the bottom: from lengthy, unintelligible machine code, through a hierarchy of ever higher-level languages, to today, when a single screen swipe conveys our “intent” to erase a file.
IBN takes networking that way. In the case of an enterprise network, the business intent might be to accelerate the system’s response to customers. Ongoing benefits of the agile IBN include reducing network outages and grey failures as well as reducing operating costs. According to Apstra CEO, Mansour Karam: “With Intent Based Analytics network operators can quickly detect and prevent a wide range of service level violations – including security breaches, performance degradations, and traffic imbalances – bringing the industry closer to the vision of a Self-Operating Network”.
Trend 5 – Application Specific Networking (ASN)
ASN extends the software-defined networking principle, building virtualised network layers on top of a physical network for specific application needs. It turns the ordinary public internet into a secure and performant enterprise-class network by adding next-generation zero-trust cybersecurity, while at the same time boosting "best effort" internet resilience and performance. Because it is abstracted and virtualised, it is completely vendor agnostic, service-provider agnostic (in the wide area network) and internet agnostic.
According to Philip Griffiths, NetFoundry’s Head of EMEA Partnerships: “By extending permission-less innovation to networking, we are putting programmable app connections in the hands of the practitioners. We're partnering with ISVs and developers so that they can use APIs and SDKs to connect their applications in minutes regardless of where they are hosted, accessed and consumed from – essentially they get private network benefits without private network hardware, wires, complexity and costs.” He compares this agility to the change cloud brought to application roll-outs – since the advent of cloud, deployment has gone from nine months to nine minutes – and ASN does the same for networks and security.
Conclusion
We’ve focused on “What’s next” – ie new and upcoming trends in the data center. But don’t forget that there are many equally significant trends already well established: “Software defined everything” according to IDC analyst Ksenia Efimova; “If you're not looking at hyper-converged, then you're building yourself into a problem for the future” says Joe Baguley from VMware. And, of course, there will be ever more automation.
2020 will be the year of the hybrid cloud, according to Vicky Glynn, Product Manager, Brightsolid:
Brightsolid believes that in 2020, hybrid cloud technology use (meaning cloud computing that uses a mix of on-premises, private cloud and public cloud) will grow significantly amongst Scottish businesses. This will happen as organisations increasingly realise the benefits they can gain across cost, efficiency and flexibility by using the right cloud for the right workload – one size doesn’t fit all when it comes to cloud computing.
Larger organisations in Scotland working across finance, public sector and oil and gas will be the biggest winners and see the most success over the next two years as they embrace the hybrid cloud to help transform their business.
But it won’t just be large organisations that will benefit from the hybrid cloud. The technology sector in Scotland is thriving. Typically made up of fast-growing SMEs – from fintech to gaming companies – the industry is flourishing, and is more than ready to compete with technology businesses based around the Silicon Roundabout or along the M4 corridor. However, growing from a start-up to a scale-up organisation requires an evolving technical environment that can respond dynamically to changing needs. We believe that for those organisations about to reach that tipping point, utilising hybrid cloud technology will be pivotal in the next stage of their evolution.
Technology and hybrid cloud will underpin Scottish businesses even more in 2020, says Elaine Maddison, CEO, Brightsolid:
2019 will be remembered as the year of uncertainty as businesses prepared for not one but two Brexit deadlines. However, 2020 will be the year that organisations will need to adapt their businesses to deal with the long-term effects of Brexit – whatever happens – and other uncertainties, our own included. But adapt we must, and in doing so the technology that an organisation sits on must be able to provide the agility and flex it needs to adapt to whatever the business landscape may throw at it. We believe the answer to this will lie in the cloud; and specifically, hybrid cloud, due to the flexibility and efficiency benefits this offers. While this need for flexibility will impact businesses of all shapes and sizes across the whole of the UK, Scotland will define its own path with regard to technology, which will see its use of cloud computing continue to increase exponentially into 2020 and beyond.
This shift towards hybrid cloud will also impact economic growth as a whole. As a home to start-ups, scale-ups and global organisations, Scotland is in a unique position to drive economic growth. With a legacy of innovation and entrepreneurship, there is a fire in the belly of businesses based here and a significant desire to grow. However, while budgets might prove restrictive, organisations should look within to drive this growth. We believe that this will be done in two ways: firstly through people, encouraging talent within the organisation to innovate; and secondly through smart use of technology. For these organisations, adopting the right cloud strategy will ultimately support and enable this environment of growth.
Company culture and change arising from hybrid cloud in 2020 - Linda King, CMO, Brightsolid, explains:
The impact of culture within a business is fundamental to how successful it is. Over the last five years, this has become even more prevalent as organisations have considered not only the environments that their teams work in to create that culture, but have also worked hard to ensure that their staff feel bought in to the future success of the organisation.
Change is a constant though, and as we move into 2020, more and more organisations will be transforming the way they work by migrating to hybrid and public cloud environments in order to take advantage of the flexibility and efficiencies that this offers. This isn’t just coming from the IT department – the rise of new cloud models means decision making has evolved to the whole business owning and managing their technology landscape, including finance, marketing and more. Thanks to hybrid approaches, all departments now have the opportunity to use the right cloud to meet the business needs, instead of an unsuitable or outdated environment. But this will drive significant change.
Programmes of change related to cloud transformation aren’t just about IT – they’re also about people and culture.
When implementing cloud related transformations, leaders must keep this in mind and remember to start with the ‘why’. They should explain the vision, including the justification and rationale. It’s really important, in order to overcome cultural barriers, that all stakeholders are on the same page regarding why the change is being instituted, and exactly what the transformation is, before they consider how to undertake it. Cloud computing can be complex enough, so clear communications and clarity around this will undoubtedly increase the likelihood of success. This will also mean that staff will feel they have more of a personal stake in the business, and this growing loyalty may also benefit your organisation in the future.
Forrester has produced a major report on the future of technology. Here we highlight some of the key talking points:
Forrester’s 2020 Cybersecurity predictions specifically include:
Forrester’s 2020 Privacy and Data Ethics predictions specifically include:
Forrester’s 2020 Automation predictions specifically include:
Forrester’s 2020 AI predictions specifically include:
The volumes of data that now need to be processed are soaring as a consequence of digital transformation, so companies need a quick and easy solution for establishing new data centres directly where that data is generated. Modular edge data centres offer the ideal solution here. The example of retail shows how the point of sale can be optimised using data-assisted processes and why this calls for decentralised IT resources.
By Clive Partridge, Rittal technical manager for IT Infrastructure.
Edge data centres are decentralised IT systems that deliver computing power directly to the location where the data is generated. They are situated in the immediate vicinity of the data sources – which helps ensure exceptionally fast initial data processing – and are also linked to cloud data centres for downstream processing.
Software applications in connected data centres ultimately use this up-to-the-minute data to perform analyses that require high levels of computing power.
Data is becoming increasingly important in physical retail spaces
Additional computing power enables companies to evaluate data relating to customer behaviour and enterprise resource planning more quickly and precisely. For example, a retailer can compare the sales in its nationwide branches using surveys from social media platforms in order to identify new trends. Alternatively, on entering a shop, customers – provided they have consented beforehand – can be identified via their smartphone and greeted with offers that are tailored specifically to them. This, too, requires an IT system that responds in real time and can access large volumes of data.
In general terms, edge data centres help companies to evaluate all customer data and thus optimise sales. They intelligently network branches that are spread out geographically with regional warehouses and a central data centre so as to optimise product availability at the point of sale (POS). Retailers can thus harness networked edge computing to increase the availability of products, optimise logistics and use customer preferences to regularly improve product displays at the POS, for example. The continuous and rapid availability of data gathered via edge computing makes it possible to manage customer behaviour more effectively. If necessary, this can be done as often as every day based on up-to-date data.
Retailers can also use the additional computing power and real-time stock tracking to optimise their supply chain management. In this case, long-term analyses help identify patterns in sales and thus provide plenty of notice regarding when specific products may be affected by bottlenecks. Without predictive analyses of this kind, there is the risk of losing customers, who switch to the competition because they can’t get what they want.
Networked IT infrastructure at the POS
To track goods and customers, retailers are installing networked sensors or using cameras to analyse patterns of movement. This is creating an Internet of Things that utilises a large number of sensors and data sources to generate a continuous data stream. Retail chains use sensors, for example, to identify the positions on the shelf where each product sells best. This also involves developing the supply chain to the extent that a shop reorders new products in an automated process. In the future, a growing number of companies will be using edge data centres to expand the requisite IT infrastructure at the POS. According to market analysts from IDC, edge IT systems could be processing and analysing 40 per cent of data from the Internet of Things throughout industry by 2019.
What types of edge data centres are available?
An edge data centre is designed so that companies can adapt it to the required performance level using preconfigured, standardised modules. Climate control and power supply modules, stable IT racks and robust security components are already aligned with each other – this is particularly important for sites that do not have a specific security concept at building level, such as access controls or airlocks.
If factors such as dust, humidity or dirt also pose problems at the site – because industrial production is carried out there, for instance – then the IT racks should have a high protection category, such as IP 55.
Edge systems come in a wide range of output classes depending on the requirements and area of application. Edge gateway systems, for example, consolidate data directly on site and then initiate its transfer to downstream cloud data centres.
However, initial evaluations can also be carried out close to the data source. For instance, smaller systems for retail can perform tasks such as the initial aggregation of sensor data in a department store, supermarket or shopping centre, while powerful edge data centres can also be utilised that significantly increase the computing power at the relevant location. The latter may be necessary if retailers want to offer their customers elaborate product presentations based on virtual and augmented reality.
The technology used in these edge designs can vary greatly – from a basic server rack to a specially secured IT rack with an additional protective cover. If more power is required, a high-performance edge data centre based on a modular data centre container with weather-resistant and fire-resistant covering is the answer. The solution is then installed in the immediate vicinity of the location where the data is generated, either inside or outside buildings. With appropriate cooling technology, it will support an output of up to 35 kW per IT rack.
Thanks to their steel walls, IT containers are both stable and secure. Their excellent mobility also makes them highly flexible and means powerful data centres can be installed anywhere on company grounds or inside warehouses.
Requirements determine the configuration
If edge systems are being used to boost on-site computing power, the first step is to specify the associated business objectives. Technical and IT experts use this information to define the necessary software applications, and it’s then possible to determine the configuration of an edge data centre based on this list of requirements. A number of criteria need to be taken into account during this process. For example, edge systems must be quick and easy to deploy and use in order to meet technical requirements promptly. The ideal scenario is for the manufacturer to supply a turnkey, ready-assembled system, complete with cooling technology, for plug-and-play connection to the power supply and network technology.
Edge system operation should also be automated and largely maintenance-free to minimise running costs. This requires comprehensive monitoring that covers the power supply, cooling, fire detection and extinguishing. The necessary protection category is determined by factors such as location and how fail-safe the system needs to be. It is also important to use a monitoring system that covers enclosure/rack doors as well as side panels; electronic door locks have the added benefit of making it easier to ascertain which staff had access to the IT and when.
During remote maintenance or emergencies, it may be necessary to completely power down the system, which means having to interrupt the power supply. Switchable PDUs (power distribution units) are required for this purpose.
Enhanced security with edge
Edge data centres can be installed in a room-in-room environment for the toughest security demands and a security room of this kind offers maximum protection in the event of fires or highly contaminated surroundings. Outdoors, it should also be ensured that the protection category supports reliable IT operation across a wide range of temperatures, for example from -20 °C to +45 °C.
Suppliers such as Rittal have developed a modular concept for these varying requirements and companies can use a modular system to create the ideal solution for their needs.
Rittal adopts a holistic approach when seeking a solution, working with partners such as ABB, HPE, IBM and the German cloud provider iNNOVO so that customers get all the services they need from a single source. The resulting pre-defined, standardised all-in-one edge system can be augmented with active IT components and “as-a-Service” options in a turnkey solution. The retail sector is therefore able to use continuously updated data to optimise the customer journey, and thus secure customer loyalty on a long-term basis.
The world today is getting faster and as consumers, we have increasingly high expectations.
By Jennifer Gill Didoni, Head of Digital Services Creation, Cloud and Security, Vodafone Business.
We want issues to be resolved right away, our goods to arrive the next day and we are accustomed to communicating with anyone, anywhere, at any time. For enterprises, this “always on” mindset is mirrored in the increasing demand for real-time data processing. While cloud computing continues to play a central role in the modern network architecture, the potential of IoT and other emerging technologies is forcing companies to rethink their approach to IT infrastructure.
This is where Multi-access Edge Computing (MEC) plays a critical role. Edge computing is a form of network architecture that brings cloud computing closer to the end user. By moving compute to the edge of the operator network, MEC creates faster, more efficient and more intelligent networks. Both AI and IoT require the processing of vast amounts of data, most of which is not generated in the cloud but at the edge. Therefore, edge computing capabilities will catalyse the innovation of these next-generation technologies by streamlining the way they handle data.
However, the potential of edge remains mainly untapped. Many businesses are still behind on adopting this technology, choosing to stick with cloud applications that offer a mixed user experience. But businesses need to understand that MEC is an essential component for unlocking the benefits and potential of 5G, IoT, SDN and other emerging technologies. Ultimately, MEC is key to creating truly intelligent communications solutions that drive tangible business outcomes.
Improving digital services
There is a growing number of sensors and devices sending more and more data – and the introduction of 5G will only see this rise further. 5G will transform the way enterprises manage their networks and will make automation a critical component of any network management strategy. Real-time monitoring of networks and datacentres will become imperative. The sheer number of computations and connections will demand automated traffic management that considers conditions beyond geography, such as real-time datacentre performance, traffic volume and speed. These considerations will be key to optimising application performance.
The physical constraints of today’s cloud and network infrastructure mean that latencies can range from 50ms to more than 200ms. Many use cases require latencies of less than 50ms to be viable; these will be unlocked by the advent of edge computing in the network.
Through edge computing, data can be streamed to the cloud in batches when bandwidth needs aren’t as high. This improves performance and saves money. For different industries, this technology can be used to transform the products and services they offer consumers and partners. For instance, autonomous vehicles can use the capabilities of edge to automatically decelerate when they sense a human within a set distance, improving the safety of our roads.
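To make the batching idea concrete, here is a minimal sketch in Python, assuming a hypothetical upload_batch() cloud endpoint and a local stream of sensor readings; an edge node buffers readings and forwards them only once a size or time threshold is reached.

```python
import time
from collections import deque

BATCH_SIZE = 100          # flush once this many readings are buffered
FLUSH_INTERVAL_S = 60.0   # ...or once this much time has passed

buffer = deque()
last_flush = time.monotonic()

def upload_batch(readings):
    """Hypothetical cloud upload; in practice this would be an HTTPS call."""
    print(f"Uploading {len(readings)} readings to the cloud")

def handle_reading(reading):
    """Called for every new sensor reading processed at the edge."""
    global last_flush
    buffer.append(reading)
    expired = (time.monotonic() - last_flush) > FLUSH_INTERVAL_S
    if len(buffer) >= BATCH_SIZE or expired:
        upload_batch(list(buffer))   # send one batch instead of 100 requests
        buffer.clear()
        last_flush = time.monotonic()

if __name__ == "__main__":
    for i in range(250):             # simulate a stream of edge readings
        handle_reading({"sensor": "s1", "value": i, "ts": time.time()})
```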
MEC also provides more flexibility for data privacy and local hosting of data. With local decisions made at the edge, the cloud may not need all the data generated immediately – or even at all.
Unpicking the potential of edge
To keep pace in today’s digital age, edge computing gives businesses and developers more choice of where to run their workloads, depending on their specific latency, bandwidth and data sovereignty requirements. Certain applications, such as gaming, augmented reality (AR), virtual reality (VR) and video analytics, perform much better when processing and rendering are performed immediately. In fact, a joint neuroscience study by Vodafone and Ericsson found that even relatively quick round-trips over the internet from a device to the public cloud can introduce enough latency to make an AR user feel motion sick.
Another benefit of MEC is that it will enable lighter and more cost-efficient devices in the future. Today, to render AR services on a mobile device, the processing must be done on the device itself to prevent the motion-sickness effect. This limits the potential of the AR experience to the processing capability of the device. Furthermore, the more processing happens on the device, the higher the device cost, the lower the battery life and the clunkier the form factor. By sending the processing to the edge of the network, we can deliver a better experience on lighter, more affordable devices.
MEC’s low latency could optimise production in an IoT-enabled smart factory and improve safety and efficiency. By using data analysis and machine learning at the edge, engineers will also be able to pinpoint and resolve faults on equipment in minutes, rather than hours, improving productivity and efficiency on the production line.
Despite this, it’s important to remember that edge will never completely replace the public cloud or even on device computers. To take full advantage of the technology, businesses should consider how to orchestrate and distribute workloads, ensuring that the right data is always in the right place at the right time to meet specific requirements.
Edging ahead
Edge computing is becoming an increasingly popular and faster option for businesses needing to keep pace with the changing demands of customers and consumers. As the world of technology continues to expand, and the reach of IoT grows, businesses need to be able to manage devices and optimise them to provide the best services and solutions to customers.
Edge computing is more than a technology, it is a design framework that will redefine the way IoT systems are built and the way they function. Although the combination of other solutions will still be needed to expedite the widespread adoption of IoT, edge computing will prove to be the key mechanism to bring digital services to the next frontier.
While widescale adoption may be a way off, developers will soon start to embrace and create new use cases for distributed low latency applications. Ultimately, MEC helps businesses to place their intelligence where it’s needed, to drive better business outcomes. In fact, this technology marks a step forward for businesses looking to streamline processes for efficiency and effectiveness.
Ahead of 2020, DW asked a whole host of individuals working in the tech space for their thoughts and observations on the business trends and technology developments likely to be major features of the year ahead. No one gave us the kitchen sink, but otherwise it seems that we’ve uncovered a broad spectrum of valuable, expert opinion to help end users plan their digital transformations over the next 12 months or so. Part 2.
10 security mistakes that should stay in 2019
Cyber attacks are inevitable, regardless of the size of a business or the sector it operates in. Cyber criminals will try their luck with any business connected to the internet. But as Andy Pearch, Head of IA Services, CORVID explains, there are steps that businesses can take to keep them as safe as possible from danger. As we stand in the last quarter of 2019, it's time for businesses to address 10 common security mistakes.
1. Assuming an attack won’t happen
Any business could be attacked. It’s important for businesses to prepare their IT estate for compromise, so in the event of an attack, they’re able to limit the damage that can be done to their operations, finances and reputation. There’s an assumption that cyber security is a problem to be dealt with by the IT department but in reality, every user is responsible. The more aware users are of the risks, the more resilient a business can become.
2. Poor password management
Passwords aren’t going away any time soon, but there are additional measures that can be taken to avoid them being compromised. Use strong, unique passwords and ensure all users do the same – the NCSC’s guidance encourages using three random words. Additionally, implement two-factor authentication (2FA) on internet-facing systems and all remote access solutions, and for privileged users and requests to sensitive data repositories. For both professional and personal life, making use of a password manager requires remembering only one strong, unique password instead of lots of them.
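As one hedged illustration of adding a second factor, the sketch below uses the pyotp library (an assumed tooling choice, not something prescribed by the NCSC guidance) to enrol a user with a TOTP secret and verify a one-time code at login.

```python
# pip install pyotp
import pyotp

def enrol_user():
    """Generate a per-user TOTP secret, stored server-side and provisioned
    into the user's authenticator app (the URI is usually shown as a QR code)."""
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(name="alice@example.com",
                                              issuer_name="ExampleCo")
    return secret, uri

def verify_login(secret, submitted_code):
    """Second-factor check: accept only if the code matches the current TOTP window."""
    return pyotp.TOTP(secret).verify(submitted_code)

if __name__ == "__main__":
    secret, uri = enrol_user()
    print("Provisioning URI:", uri)
    current = pyotp.TOTP(secret).now()   # what the authenticator app would display
    print("Code accepted:", verify_login(secret, current))
```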
3. Inadequate backup
If the IT estate is compromised and data lost, can it be retrieved? Implement a rigorous backup regime to ensure business-critical data can be recovered if the business is attacked. Store this backed up data in multiple secure locations, including an ‘offline’ location where infected systems can’t access it. Regularly test that backups are being done correctly and that data restoration procedures work as intended.
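To make the “test your backups” advice concrete, here is a minimal sketch using only the Python standard library; the backup directory and manifest path are hypothetical, and a real regime would also rehearse full restores rather than relying on checksums alone.

```python
import hashlib
import json
from pathlib import Path

BACKUP_DIR = Path("/mnt/backups/latest")       # hypothetical backup location
MANIFEST = Path("/mnt/backups/manifest.json")  # checksums recorded at backup time

def sha256(path: Path) -> str:
    """Stream the file so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup() -> bool:
    """Compare every file's checksum against the manifest; report drift."""
    expected = json.loads(MANIFEST.read_text())
    ok = True
    for name, digest in expected.items():
        candidate = BACKUP_DIR / name
        if not candidate.exists():
            print(f"MISSING: {name}")
            ok = False
        elif sha256(candidate) != digest:
            print(f"CORRUPT: {name}")
            ok = False
    return ok

if __name__ == "__main__":
    print("Backup verified" if verify_backup() else "Backup verification FAILED")
```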
4. Reactive rather than proactive strategies
Some attacks bypass firewalls and anti-virus programmes, so businesses need to proactively hunt their systems for signs of compromise that haven’t been picked up by these traditional methods. The longer an adversary sits on a network undetected, the more damage they can do. Email is the single biggest attack vector, so implement the same level of proactive security for the email client too. Firewalls and email security solutions can block known malicious senders and strip certain types of file attachments that are known to be malicious before they have the chance to reach a user's inbox.
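As a small, hedged example of the attachment-stripping point, the sketch below uses Python’s standard email library to flag messages carrying attachment types commonly abused by attackers; the blocked-extension list is illustrative only, not a complete gateway policy.

```python
import email
from email import policy

# Illustrative list only; real gateways maintain far richer rules.
BLOCKED_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat", ".jar"}

def risky_attachments(raw_message: bytes) -> list[str]:
    """Return filenames of attachments whose extension is on the block list."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    flagged = []
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    with open("sample.eml", "rb") as f:   # hypothetical message on disk
        hits = risky_attachments(f.read())
    print("Quarantine" if hits else "Deliver", hits)
```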
5. Generic user privileges
Users should only be permitted access to the information they need to do their job. Limit the number of privileged user and admin accounts. For IT admins, adopt a least-privilege approach and consider using a privileged access management solution to restrict access throughout the network. The more users who have access to privileged information, the more targets there are for cyber criminals, and the more likely they are to succeed as a result.
Additionally, all accounts should be monitored for unusual activity. If a user is accessing files or drives they have no reason to be interacting with or have never interacted with before, such activity should prompt a review. Keep a record of all accounts each user has access to, and remove their permissions as soon as they leave the company.
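A minimal sketch of the kind of review trigger described above: it compares each user’s file-share accesses against a simple historical baseline and flags anything new. The baseline and log format are assumptions; in practice this logic would live in a SIEM or UEBA tool.

```python
from collections import defaultdict

# Hypothetical baseline: shares each user has touched historically.
baseline = {
    "alice": {r"\\fs01\finance", r"\\fs01\reports"},
    "bob":   {r"\\fs01\engineering"},
}

def review_access_log(events):
    """events: iterable of (user, share) tuples from the access log.
    Returns accesses that fall outside each user's historical baseline."""
    anomalies = defaultdict(list)
    for user, share in events:
        if share not in baseline.get(user, set()):
            anomalies[user].append(share)
    return dict(anomalies)

if __name__ == "__main__":
    log = [("alice", r"\\fs01\finance"),
           ("bob", r"\\fs01\hr")]          # bob touching HR is unusual
    for user, shares in review_access_log(log).items():
        print(f"Review required: {user} accessed {shares}")
```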
6. Poorly configured, out of date systems
Environments that are not configured securely can enable malicious users to obtain unauthorised access. It’s therefore imperative to ensure the secure configuration of all systems at all times. Regular vulnerability assessments should be scheduled to identify weaknesses in the IT infrastructure that would leave an organisation open to exploitation. The results should be used to define detection and response capabilities and ascertain if an outsourced managed security provider is needed. To avoid allowing malicious access through unpatched vulnerabilities, apply security patches regularly and keep all systems and applications up-to-date.
7. No remote working policy
If users in the business work on the move or from home, it's important to have policies in place that will protect any sensitive corporate or personal data in the event of a mobile device being lost, stolen or compromised. Many corporate mobile devices – laptops, phones and tablets – not only contain locally saved sensitive data but are also connected to the company's internal network through VPNs and workspace browsers, giving attackers a direct route to the heart of a business. To enforce secure remote working practices, employ a suitable and robust enterprise mobile management solution and policy, applying your secure baseline and build to all devices.
8. Inconsistent monitoring
By not monitoring their systems, businesses could be overlooking opportunities that attackers won’t miss. Continuously monitor all systems and networks to detect changes or activities that could lead to vulnerabilities. Consider setting up a security operations centre (SOC) to monitor and analyse events on computer systems and networks.
9. Creating an incident response when it’s too late
There is a simple answer for businesses that don’t have an incident response plan: write one! Make it specific and ensure it accurately reflects the company’s risk appetite, capabilities and business objectives. Being adequately prepared for a security breach will go a long way towards minimising the business impact. This incident response plan should be tested on a regular basis, using a variety of different scenarios, to identify where improvements can be made.
10. Relying on users as the first line of defence
Humans make mistakes, and no amount of training will negate that. Most users can’t be trained in complex IT processes, simply because they’re not IT experts. It’s unrealistic and unfair to expect otherwise. Invest in cyber security solutions that remove the burden of being on the frontline of email security defence, allowing users to get on with their day jobs.
Conclusion
These ten cyber security mistakes might be common, but they don’t have to be accepted as the norm. By taking the first step of assuming that all organisations are vulnerable to an attack, businesses can consequently focus on putting cyber security strategies in place that are proactive and consistent and that use technology to keep the business resilient against a backdrop of a constantly evolving cyber landscape.
Some thoughts from Chris Roberts, Head of Datacentre & Cloud at Goonhilly:
1. Supercomputers on the Moon and Mars. We’re edging ever closer to getting compute power out to the ultimate edge - space. HPE’s Spaceborne computer recently spent almost two years on the ISS and plans are now underway for Spaceborne 2 systems, initially focused on HPC and machine learning/AI applications. There is also talk of combining these supercomputers with additional hardware for communications processing, which ushers in the possibility of flying helicopters on Mars.
2. AI using open space data. We’ll see a surge in the use of processing power for deep learning (AI) to extrapolate patterns from visual satellite data being captured from high-quality infrared, SAR (Synthetic Aperture Radar) and other space data sources.
We’re all familiar with analysing space data to predict and reduce the impact of accidents and natural disasters – for example, wildfire burn scars in California.
Other applications in the works or already in use include: predicting retailers’ and restaurant chains’ commercial health by counting cars in car parks as a proxy for revenues; monitoring crop yields to predict prices; detecting shadows in fuel tankers to estimate the world’s oil inventory; and monitoring the economic activity and poverty levels of countries by analysing car density, construction rates, and electricity consumption alongside traditional statistics.
3. A green power surge. We will see more organisations redesigning data centres to slash carbon emissions.
At Goonhilly, our new data centre uses an innovative liquid immersion cooling system from Submer to mitigate our carbon footprint, as well as an onsite array of solar panels that can support the data centre’s full power requirements of 500 kW; local wind power will be added to the mix shortly.
We are also close to seeing the world’s first supercomputer powered solely by solar energy, which will need an acre of solar panels to generate the required 1 MW of energy.
4. Managed HPC hubs for faster innovation. We predict a growth in businesses and academia collaborating on new AI and ML-fuelled approaches and applications in sectors such as automotive, life sciences and space/aerospace.
For example, our NVIDIA-based managed HPC platform for AI and ML compute on demand is designed to be a marketplace that brings academia and enterprises together to share ideas. It dramatically reduces the cost of deployment and helps to accelerate time to market.
Traceability will come of age, according to Richard Hoptroff, Founder of Hoptroff London:
“There is an ever-increasing demand for data storage, but it is becoming an increasingly crowded market where data centres will have to start to differentiate themselves from the competition. Operators will have to start offering additional services that make them stand out from the crowd, like Traceable Time as a Service (TTaaS®). Precision timing is a very attractive service to data centre customers – it offers the opportunity to verify their data in place and time as well as being compliant with regulations affecting industries such as the financial services. These add-ons will help improve the data centre’s stickiness and customer retention, a looming problem in the rapidly expanding marketplace for data storage.
I would predict that traceability will come of age, enabled by distributed ledgers and traceable timing. Regulations such as GDPR and PECR are being flouted in industries such as real-time buying in advertising, and regulatory authorities are going to clamp down. Anyone who handles personal data will have to prove when consent was given, who they shared it with, and when it was deleted. All of this can be enabled by an accurate, traceable, verifiable time feed. The growth of smart cities will also necessitate the adoption of synchronised timing to ensure automation runs smoothly and safely, integrated services can be relied upon and monitoring, such as CCTV or traffic cameras, can be verified.
Standards for storage and security will have to rise to accommodate for our increasing reliance on integrated technology as a society, and to minimise our vulnerability to hacking or abuses of data privacy.
Expectations for the management of data – from sharing permissions to storage to blockchain – are being levied from all directions. Regulators expect more transparency from companies and customers expect more integrity from those trusted with their sensitive personal data. Digitalisation and the expansion of IoT, edge and the cloud are a runaway train of technological progress. However, there must be ethical and security considerations that come along with rapid advancement. While the growth of the digital world will likely make our lives easier, more efficient and more convenient, it will also bring the possibility of the manipulation and mishandling of our data.”
Four out of five enterprises are now running containers, and 83% are running them in production. Given that only 67% were doing so in 2017, it’s clear that containers are more than a fad.
By Anthony Webb, EMEA Vice President at A10 Networks.
With containers’ newfound popularity, some companies are struggling to establish an efficient traffic flow and effectively implement security policies within Kubernetes, one of the most popular container-orchestration platforms.
As a container orchestrator and cluster manager, Kubernetes focuses on providing fantastic infrastructure, and has been adopted by countless companies as a result. Companies that use a microservices architecture (MSA) for developing applications tend to find that Kubernetes offers a number of advantages when it comes time to deploy those applications.
For all those reasons, it’s essential that organisations understand the unique traffic flow and security requirements that Kubernetes entails.
What Is Kubernetes?
Kubernetes is an open-source container-orchestration system. It’s a portable and extensible program for managing containerised workloads and services and provides a container-centric management environment.
A Kubernetes cluster has a master node and one or more worker nodes. The master node tells the worker nodes what to do, and the worker nodes carry out the instructions they are given. Additional Kubernetes worker nodes can be added to scale out the infrastructure.
Another primary function of Kubernetes is to package containers into what are known as “pods,” multiples of which can run inside the same node. This way, if an application consists of several containers, those containers can be grouped into one pod that starts and stops as one.
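To make the node and pod model tangible, here is a minimal sketch using the official Kubernetes Python client (an assumed tooling choice) to list the nodes in a cluster and the pods scheduled onto them; it assumes a reachable kubeconfig.

```python
# pip install kubernetes
from kubernetes import client, config

def summarise_cluster():
    config.load_kube_config()              # reads ~/.kube/config
    v1 = client.CoreV1Api()

    # Control-plane and worker nodes registered with the cluster.
    for node in v1.list_node().items:
        print("node:", node.metadata.name)

    # Pods: one or more containers scheduled and scaled as a unit.
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(f"pod: {pod.metadata.namespace}/{pod.metadata.name} "
              f"on {pod.spec.node_name}")

if __name__ == "__main__":
    summarise_cluster()
```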
Challenges in Kubernetes Environment
Like all other container-orchestration systems, Kubernetes comes with its own set of obstacles.
The networking of Kubernetes is unconventional in that, despite the use of an overlay network, the internal and external networks are distinct from one another.
Plus, Kubernetes intentionally isolates malfunctioning or failing nodes or pods in order to keep them from bringing down the entire application. This can result in frequently changing IP addresses between nodes. Services that rely on knowing a pod or container’s IP address then have to figure out what the new IP addresses are.
When it comes to access control between microservices, it’s important for companies to realise that traffic flowing between Kubernetes nodes is also capable of flowing to an external physical box or VM. This can both eat up resources and weaken security.
Kubernetes and Cloud Security Requirements
There are many requirements of Kubernetes and cloud security:
1. Advanced Application Delivery Controller
Companies already use advanced Application Delivery Controllers (ADCs) for other areas of their infrastructure, and it’s necessary to deploy one for Kubernetes as well. This allows administrators to do more advanced load balancing than what’s available with Kubernetes by default.
Kubernetes is equipped with a network proxy called kube-proxy. It’s designed for simple use and works by adjusting iptables rules at Layer 3. However, it’s very basic and is different from what most enterprises are used to.
Many people will place an ADC or load balancer above their cloud. This provides the ability to create a virtual IP that’s static and available to everyone, and to configure everything dynamically.
As pods and containers start up, the ADCs can be dynamically configured to provide access to the new application, while implementing network security policies and enforcing business data rules. This is usually accomplished through the use of an “Ingress controller” that sees the new pods and containers start up, and either configures an ADC to provide access to the new application or informs another “Kubernetes controller” node about the change.
2. Keep the Load Balancer Configuration in Sync With the Infrastructure
Since everything can be constantly shifting within the Kubernetes cloud, there is no practical way for the box that’s sitting above it to keep track of everything – unless you have something generally referred to as an Ingress controller. When a container starts or stops, that creates an event within Kubernetes. The Ingress controller identifies that event and responds to it accordingly.
This takes a great burden off of administrators and is significantly more efficient than manual management.
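The sketch below illustrates, in heavily simplified form, the event-driven pattern an Ingress controller relies on: it watches the Kubernetes API for pod lifecycle events and calls a placeholder reconfiguration hook. It is not any vendor’s controller, and reconfigure_load_balancer() is hypothetical.

```python
# pip install kubernetes
from kubernetes import client, config, watch

def reconfigure_load_balancer(event_type, pod):
    """Placeholder for pushing updated config to an ADC/load balancer."""
    print(f"{event_type}: {pod.metadata.namespace}/{pod.metadata.name} "
          f"-> {pod.status.pod_ip}")

def run_controller(namespace="default"):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Each ADDED/MODIFIED/DELETED event triggers a reconfiguration,
    # mirroring how an Ingress controller reacts to containers starting
    # and stopping rather than polling the cluster.
    for event in w.stream(v1.list_namespaced_pod, namespace=namespace):
        reconfigure_load_balancer(event["type"], event["object"])

if __name__ == "__main__":
    run_controller()
```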
3. Security for North-South Traffic
North-south and east-west are both general terms to describe the direction of traffic flow. In the case of north-south traffic, traffic is flowing in and out of the Kubernetes cloud.
As mentioned before, companies need traffic management above the Kubernetes cloud to watch and catch malicious traffic.
If there’s traffic that needs to go to specific places, this is the ideal place to do that. If enterprises can automate this kind of functionality with a unified solution, they can achieve simplified operations, better application performance, point-of-control, back-end changes without front-end disruption and automated security policies.
4. Central Controller for Large Deployments
Scaling out is something else that enterprises need to take into account, especially in terms of security.
The Ingress controller is still there, but at this scale it’s handling multiple Kubernetes nodes and observing the entire Kubernetes cluster. Above the Ingress controller sits a central controller such as the A10 Networks Harmony Controller. Such a controller allows for efficient load distribution and can quickly send information to the appropriate location.
With a central controller like this, it’s imperative to choose one that can handle scaling in and scaling out with little to no additional configuration on existing solutions.
5. Access Control Between Microservices
East-west traffic flows between Kubernetes nodes. When traffic flows between Kubernetes nodes, this traffic can be sent over physical networks, virtual or overlay networks, or both. Keeping tabs on how traffic flows from one pod or container to another can become quite complex without some way of monitoring those east-west traffic flows.
Plus, it can also present a serious security risk: attackers who gain access to one container can gain access to the entire internal network.
Luckily, companies can implement something called a “service mesh” like the A10 Secure Service Mesh. This can secure east-west traffic by acting as a proxy between containers to implement security rules, and is also able to help with scaling, load balancing and service monitoring.
With this type of solution, companies like financial institutions can easily keep information where it should be without compromising security.
6. Encryption for East-West Traffic
Without proper encryption, information can flow unencrypted from one physical Kubernetes node to another. This presents a serious problem, especially for enterprises that handle particularly sensitive information.
When evaluating a cloud security product, it’s important for enterprises to select one that encrypts traffic when it leaves a node and decrypts it when it enters another.
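As a hedged illustration of encrypting east-west traffic, this sketch builds a mutual-TLS server context with Python’s standard ssl module; the certificate paths are hypothetical, and in a real deployment a service mesh would typically issue and rotate these certificates automatically.

```python
import socket
import ssl

# Hypothetical certificate material issued to this node/service.
CA_CERT = "/etc/mesh/ca.pem"
NODE_CERT = "/etc/mesh/node.pem"
NODE_KEY = "/etc/mesh/node-key.pem"

def mtls_server_context() -> ssl.SSLContext:
    """TLS context that both presents a certificate and requires one from
    the peer, so east-west traffic is encrypted and mutually authenticated."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=NODE_CERT, keyfile=NODE_KEY)
    ctx.load_verify_locations(cafile=CA_CERT)
    ctx.verify_mode = ssl.CERT_REQUIRED      # reject peers without a valid cert
    return ctx

def serve(host="0.0.0.0", port=8443):
    ctx = mtls_server_context()
    with socket.create_server((host, port)) as sock:
        with ctx.wrap_socket(sock, server_side=True) as tls_sock:
            conn, addr = tls_sock.accept()   # handshake fails for untrusted peers
            print("encrypted east-west connection from", addr)
            conn.close()

if __name__ == "__main__":
    serve()
```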
7. Application Traffic Analytics
Lastly, it’s of vital importance that enterprises understand the details of traffic at the application layer.
With controllers in place to monitor both directions of traffic, there are already two ideal points to collect traffic information.
Doing so can aid in both application optimisation and security and allows for several different functions. Organised from the simplest to the most advanced, those functions can allow for:
So, when companies are talking to vendors, it’s essential that they determine which of those benefits their products can offer.
Additional Considerations for Dev and Ops Simplicity
Companies should be looking for: a simple architecture with a unified solution; central management and control for easy analytics and troubleshooting; common configuration formats; no change to application code or configuration to implement security and gather analytics information; and automated application of security policies. If companies prioritise these items, they can enjoy streamlined, automated and secure traffic flow within Kubernetes.
Every industry, and every company, is transforming itself with software to deliver new, improved digital services that capture new markets and reduce operational costs.
By Michael Allen, VP and EMEA CTO, Dynatrace.
In this quest for constant improvement, organisations are set to spend $1.2 trillion this year. However, despite the goal to provide customers and business users with better, more seamless experiences, rarely does a week go by without performance problems causing disruption. In today’s always on, always connected digital world, mere milliseconds of downtime can cost millions in lost revenue and, as we become ever more reliant on software, there’s increasingly less margin for error.
To protect their organisation against the chaos that performance problems can cause, the root cause must be found quickly, so IT teams can get to work resolving it before users are impacted. However, as the software landscape evolves to drive faster innovation, enterprise applications, and the hybrid cloud environments they run in, are becoming increasingly dynamic and complex. Organisations are now reliant on thousands of intricately connected services, running on millions of lines of code and trillions of dependencies. A single point of failure in this complex delivery chain can be incredibly difficult to pinpoint accurately. If this complexity goes unchecked, digital performance problems will increase in frequency and severity, creating an unacceptable risk for the business.
The flip side to agility
This escalating complexity is largely being driven by the accelerating shift towards the cloud. In modern, cloud native IT stacks, everything is defined by software. Applications are built as microservices running in containers, networks and infrastructure are virtualised, and all resources are shared among applications. This has been a key part of many businesses’ digital transformation strategy, enabling them to drive greater agility and faster innovation. However, the downside to all this is that complexity is off the charts. To understand their apps, IT teams now need to understand the full stack, with visibility into every tier, not just the application layer. This has made it impossible for humans to quickly identify where problems originate, leaving IT teams desperately trying to put out a growing number of fires, with little to no visibility into where and why they’re occurring.
As digital services and technology environments become increasingly defined by software, being unable to quickly detect and resolve performance problems will have wider ramifications for businesses and their revenues. While currently it’s frustrating when, say, an online banking website is down, glitches in the code of the driverless cars or drones that will dominate our roads and skies in the future could have catastrophic consequences. Businesses must act now if they are to relegate performance problems to the past before they have a devastating impact on our future.
Anyone call for some AI assistance?
It should come as a comfort that there is hope on the horizon for IT teams, in the form of a new breed of AI that has emerged over the past few years: AIOps. AIOps tools can automatically identify and triage problems to prevent IT teams from drowning in the deluge of alerts from their monitoring solutions. The global AIOps market is expected to grow to reach $11bn by 2023, which demonstrates a real appetite for these capabilities. However, these solutions have their limitations, which is why we’re now seeing the emergence of more holistic, next-generation approaches to monitoring that combine AIOps capabilities with deterministic AI. This provides access to software intelligence based on performance data that’s analysed in real-time, with full-stack context that gives IT teams instant answers, so they can fix performance issues before users feel any impact. This type of 20:20 vision will help teams combat modern software complexity and gain clearer insight into their hazy cloud environments.
Taking it one step further, AI will eventually be capable of stopping performance degradations in their tracks, before they begin to develop into a real problem. For this to become reality, AI-powered monitoring solutions will need to be fully integrated with the enterprise cloud ecosystem, with access to metrics and events from other tools in the CI/CD pipeline, such as ServiceNow and Jenkins. AI capabilities will then be able to pull all monitoring data into a single platform, analyse it in real-time and deliver instant and precise answers that trigger autonomous problem remediation without the need for human intervention – something often referred to as application self-healing.
Smooth sailing into the future
It’s no secret that user experience is absolutely crucial for all companies operating today. While it may sound like a pipedream, AI is fast becoming the key to helping businesses ensure these experiences remain seamless, by relegating performance problems to the history books. Whether you look at it in the long or short term, AI capabilities will ultimately give companies total peace of mind that performance problems will be dealt with quickly and efficiently – minimising impact on user experience and protecting revenues and reputations against the devastation they can cause.
Ahead of 2020, DW asked a whole host of individuals working in the tech space for their thoughts and observations on the business trends and technology developments likely to be major features of the year ahead. No one gave us the kitchen sink, but otherwise it seems that we’ve uncovered a broad spectrum of valuable, expert opinion to help end users plan their digital transformations over the next 12 months or so. Part 3.
5G set to make an impact in the enterprise, says Russ Mohr, Engineering Director at MobileIron:
Extremely rapid 5G connectivity is already beginning to appear in a few major metropolitan areas in Europe, Asia and the US. The same can be said of the UK, where 5G is slowly rolling out across major cities in 2019. Although the promise of up to 10 or even 20 Gbps download speeds and low latency is incredibly enticing, we shouldn’t get too excited just yet. It could in fact take years for 5G to become ubiquitous, since the technology requires a very high number of antennas or “small cells”. These small cells can also serve as points of ingress for hackers to gain unauthorised access to the network, which is one of 5G’s main caveats when it comes to cybersecurity.
Despite the extremely high demand for higher connectivity speeds among consumers, it’s the enterprise that may reap the initial benefits of 5G. Network providers are banking that “network slicing” is going to be a hot new product in their portfolio, and there are some good reasons for this. Just like fibre in the ground can be divided into wavelengths and sold as virtual networks, 5G frequency can also be “sliced” into different segments to be sold to different industries. Those network slices can have different characteristics. For instance, a low latency slice might be sold to a hospital performing robotic surgeries. Or, a less-expensive, slower pipe could be sold for an IoT application like smart vending machines that only need to report inventory every so often.
As 5G evolves beyond the traditional boundaries of corporate perimeters, the enterprise can no longer be certain of the integrity of the network. Organisations need to shift their focus to embrace an approach of never trusting and always verifying the devices that access these networks. That means taking a layered approach to security. Is the device on the 5G network compromised? If the device is untrusted, should it be able to access company content, whether it resides on the device, on the company network, or in the cloud?
While we don’t know a lot about the risks that will emerge with 5G yet, a mobile-centric zero trust approach to security can mitigate many of the risks. Zero trust assumes that bad actors are already in the network and secure access is determined by the “never trust, always verify” approach. A zero trust approach validates the device, establishes user context, verifies the network and makes sure traffic is originating from an approved and vetted application. It’s capable of rooting out and remediating threats before granting a device access to any company resources. Zero trust ensures only authorised users, devices, apps, and services can access business resources, making it a suitable approach to cybersecurity in the age of 5G.
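Purely as an illustration of the “never trust, always verify” checks listed above, here is a minimal sketch in Python; the signals and the all-must-pass policy are assumptions for the example, not MobileIron’s implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    device_compliant: bool      # e.g. managed, patched, not jailbroken
    user_authenticated: bool    # identity established, ideally with MFA
    network_trusted: bool       # known/verified network, not a rogue cell
    app_approved: bool          # traffic originates from a vetted application

def grant_access(req: AccessRequest) -> bool:
    """Zero trust: every signal must pass on every request;
    nothing is trusted just because it is 'inside' the network."""
    return all([req.device_compliant,
                req.user_authenticated,
                req.network_trusted,
                req.app_approved])

if __name__ == "__main__":
    req = AccessRequest(device_compliant=True, user_authenticated=True,
                        network_trusted=False, app_approved=True)
    print("Access granted" if grant_access(req) else "Access denied")
```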
Some 2020 telecoms predictions, courtesy of Niall Norton, CEO, Openet:
2019 has been the year of 5G, launching onto the scene sooner than many expected, but reluctantly in the half-baked form of non-standalone, with 5G antennas on 4G cell sites and backhaul. Today South Korea, the U.S., the UK and Switzerland have switched on their 5G, with a clear focus on consumer services. Investment has revolved around boosting consumer broadband with faster speeds and more capacity, but surprisingly operators globally aren’t charging a 5G premium. With the consumer use case still at the helm of 5G deployment, but more obvious revenue opportunities in enterprise services, what will change in 2020?
5G monetisation for operators splits into three categories: enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC) and massive machine-type communications (mMTC). Importantly, enhanced mobile broadband, including fixed wireless access, can be delivered in a non-standalone 5G environment, with 4G backhaul. However, both mMTC and URLLC cannot function without dedicated standalone 5G networks. Problematically, standalone 5G networks are unlikely to come to fruition next year. The race to deploy 5G for consumers means operators are in an ROI limbo, waiting for infrastructure development before enterprise use cases can be deployed and monetised.
The buzz of the industry is around URLLC, which requires careful planning and significant technological innovation, but also has significant revenue potential. Virtual reality and gaming will likely dominate the initial consumer use cases for URLLC. Operators could create specific plans offering a seamless online gaming experience, in a global multiplayer environment. These types of bundles will help operators target a younger demographic and develop more digital, cutting-edge brands – a key goal of many digital transformation projects.
Where the real money will be made with URLLC, however, is in enterprise services. Network slicing will create expansive, dedicated, performance-guaranteed virtual networks which operators can offer exclusively to enterprises. Analyst firm CCS Insight predicts broadband media companies will thrive in this environment; in fact, by 2023 they will start taking up network slices to radically change the feasibility and affordability of live broadcasting.
Perhaps the most unpredictable use case for 2020 is mMTC. Will enterprises wait for operators to deploy 5G, or will they bid for their own spectrum to power private IoT projects? This year has seen enterprises partner with operators to deploy private networks to drive productivity, but what about enterprises that look to unlicensed 5G spectrum and deploy private networks alone? Given the number of global digital power brands this is an inevitability, so operators need to capitalise on the massive machine opportunity now.
The wave of 5G deployment will surely gain momentum in 2020. The pressure to on-board 5G subscribers as quickly as possible will continue for operators, convincing people to upgrade and pay more. There is an urgency to deploy and monetise new services, and to react quickly enough, operators need to prioritise flexibility and agility. It’s about speed and reactivity, regardless of the use case, so who will take the lead?
Pascal Geenens, security evangelist at Radware, says that there are several emerging threats that will become normal occurrences in the coming 12 months. They will influence how we develop technology to fight back and how we strategically plan security mitigation. There is also a real possibility that the technology we develop to fight back could be used against us:
1- AI and Fake data/disinformation
Fake data and disinformation will become an important tool in the cyber arsenal of nation states and will impact businesses and organisations. AI will be a catalyst for generating targeted and individualised fake data to influence individuals. This tactic will be used to influence major political and economic events such as elections.
2- Privacy vs Security imbalance
The balance is turning in favor of privacy over security: cyber defense is getting exponentially harder and more expensive because of the pervasive growth in dark data and privacy measures (cf. QUIC, DoH, TLS 1.3, …). This means organisations will need to reconsider how they manage the complexities of complying with legislation while keeping networks secure, in a world where anonymity is growing.
3- Data breaches through stupidity or ignorance will fade out
Data breaches caused by storing data publicly or bad password management will fade out as cloud and service providers put technology in place to prevent this from happening. For many, the first experience in the cloud was a bad one, but now there is more assurance and better automation to facilitate a safer journey to the cloud.
4- Attack surface of the Cloud and Distributed enterprise
The attack surface of organisations is expanding exponentially as they move to hybrid, multi- and edge clouds. Add to this the complexities associated with privacy and managing ever more dark data, and it becomes harder to secure the enterprise and keep visibility of the threats.
5- Automation is a double-edged sword
AI and automation are breaking through as we look for new ways to fight back. Coupled with deep learning, they have become the go-to strategy for many ground-breaking technologies and solutions. We are only now starting to see the value they bring, but also that they pose a threat too. Fooling automation will likely lead to the next disaster – like fooling a car’s autonomous driving system by slightly altering traffic signs or road markings, which has already happened. Imagine the impact on cyber defence systems, weapons of (physical) warfare, or planes. As new ways to poison or influence the decision making of deep learning algorithms are discovered, a new attack surface is forming.
6- Quantum computing
Quantum computing will become an important part of the security policy of organisations that handle trade secrets and highly valuable information. Quantum key generation and distribution, as well as quantum encryption, will start to be applied. We’ll likely face a ‘better safe than sorry’ scenario as the first nations develop quantum computers with enough qubits to crack the planet’s encrypted communications.
Technology experiences a throwback, says Simon Johnson, UK General Manager, Freshworks:
Consumers and businesses alike have been somewhat mesmerised by all-singing, all-dancing technologies for the last couple of years. But that’s soon to change – and has already started to. From devices recording our conversations to the Cambridge Analytica scandal, we’ve certainly not been short of reasons to justify the ‘techlash’ movement that we have started to experience this year. Such events have acted as catalysts for businesses to refocus on what really matters: technology that does what it says on the tin.
When it comes to technology in the workplace, we’re finding that employees are fed up with complex and inefficient software. So much so that nearly one in four (24%) people say using software they hate makes them want to quit their jobs, rising to 30% amongst millennials. Whether in customer service, marketing or sales, in 2020 we’ll see companies going back to basics when it comes to the tools they select: choosing technologies that are simple, reliable, functional and support business objectives – and improve employees’ morale too.
Voice interfaces become sophisticated
Customer engagement is a key differentiator for brands. But the big difference for 2020, is that consumers will stop thinking in terms of communication channels. They will simply want to engage with a brand the way they want to, have their questions answered, their queries resolved, and their comments acknowledged – irrespective of the channel. They’ll also expect engagement to be relevant and contextual, with brands using behavioural data to know how to best communicate with them in terms of channel and content, both reactively and proactively.
As customer demand for voice interaction grows, and with most businesses now adept with phone, email, social and chat, attention will turn to voice interfaces – such as Siri and Alexa. Going beyond simple commands, we’ll see companies incorporate more sophisticated voice engagement into their products and services. The big learning for next year is that if brands want to win customers for life, they must make engagement easy, relevant and on their customers’ terms.
When it comes to technology, every business, large or small, faces the same common problem: keep up, or be left behind. For many, however, the prospect of digitally transforming their business can seem costly, complex and disruptive.
By Richard Lane, Group Managing Director, EBC Group.
Although your systems may meet your business needs now, any future growth or success could be hindered by an inability to adopt new technologies. Replacing your current systems when they grow tired or break may seem like an easy short-term fix, but in the long run businesses will only suffer as a result of avoiding a digital transformation plan.
The right digital transformation plan can provide a host of benefits, from reducing costs and time, to improving efficiencies across the business. However, digital transformation is not a single ‘out of the box’ solution and takes time, careful planning and the right resources. There are plenty of mistakes that can be made when embarking on a transformation journey. Here are five steps to consider:
1. A lack of clear strategic direction will not only make the whole process longer but could also result in spiralling costs. A transformation plan requires strong collaboration and should consider all business needs, with a clear set of goals as the aim. Breaking every process and task into manageable steps will make the overall implementation much simpler.
2. Whilst the technology plays a vital part, it is important that everyone is involved in the process, not just the IT department. When it comes to the day-to-day usability, the technology must work across all departments and at all levels. If the technology is starting to become too complex for the business, be prepared to pause or roll back on steps where serious issues occur.
3. For most businesses that are performing successfully, digital transformation may not seem like a necessity. However, a digital transformation should be used to make existing functions and processes more efficient and effective; this enables the business to enhance current systems, add new features and, over time, change the business step by step.
4. Having the right level of knowledge is imperative to the success of your digital transformation project. Identify any missing skills and address them accordingly; investing in the right skills early in the process will ensure your strategy and plans stay on target. If the right skills aren’t available, you could look to outsource select resources.
5. When planning, it is important to identify what needs to be upgraded, replaced or reconstructed. Once you’re into your transformation, any ‘unexpected surprises’ could be both costly and disruptive. Simplifying your infrastructure could be as straightforward as removing redundant or duplicate services and systems, or moving to cloud-based solutions.
Digital transformation is not a journey that takes place alone. For those with limited resources, managed service providers (MSPs) can help make the process easier and provide valuable insights into your project. With help from MSPs like EBC Group, businesses can be assured that they are getting the best technical services and support.
Programs used to be built as large monolithic scripts; however, a lot has changed in the last two decades. There are now prominent methods in manufacturing applications that use small, self-contained programs in tandem to add extra functionality to hardware. Here Florian Froschermeier, technical sales manager for industrial router manufacturer INSYS, explains what Linux containers are and how manufacturers are using them to transform applications.
Linux containers (LXC) are an operating system (OS) level virtualization method that allows multiple isolated Linux systems to run on the single Linux kernel of a control host. This means the programs are isolated in individual user spaces and operate at the OS level. These containers are self-contained and lightweight, holding very few components, making them a powerful tool for adding applications to a system without worrying about dependency errors.
Developers can use containers to package an application with the libraries, dependencies and other files it needs to run, without the host needing to install extra assets. In this way, containers can be installed and work on any Linux system that supports container functionality regardless of configuration.
For example, if a developer is working on a program on their laptop while travelling, they may encounter issues if their office computers have a different configuration, such as a missing library. Applications in development rely on the system configuration and are dependent on specific libraries, dependencies and files being available to work.
Containers provide a way of bypassing these issues. Because the programs are self-contained, they can be ported to different Linux environments regardless of configuration, allowing developers to continue working anywhere, at any time.
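For readers who have not used LXC directly, the sketch below shows (via Python’s subprocess module) the standard lxc-* command-line lifecycle: create a container from a downloaded image, start it, run a command inside its isolated user space, then tear it down. The container name and Ubuntu image are illustrative, and the commands assume an LXC-enabled Linux host with suitable privileges.

```python
import subprocess

def run(cmd):
    """Run a command and fail loudly so a broken step is obvious."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

NAME = "demo-app"  # hypothetical container name

# Create a container from the public image server, start it, run a command
# inside its isolated user space, then stop and remove it.
run(["lxc-create", "-n", NAME, "-t", "download", "--", "-d", "ubuntu", "-r", "focal", "-a", "amd64"])
run(["lxc-start", "-n", NAME])
run(["lxc-attach", "-n", NAME, "--", "uname", "-a"])
run(["lxc-stop", "-n", NAME])
run(["lxc-destroy", "-n", NAME])
```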
On the other hand, in the example of a Linux system that has been stripped back and hardened to create a secure OS for a narrow use-case, containers can add in extra functionality. At INSYS this is a key feature for our industrial routers that run our Linux based icom OS, designed specifically for this purpose.
Plant managers can use the icom Smartbox, which comes preinstalled on any INSYS industrial router, to enable LXCs and develop their own application, or choose from an array of off-the-shelf applications. These containers can be used to connect legacy machinery, including legacy software designed to run on Raspberry Pis.
Some of our customers have already used these devices to add edge computing to their network, as well as benefit from data analysis and reporting functions that send messages about anomalies immediately to users. This shows that containers are a great way to bring machinery into the present and push it into the future.
Containers greatly increase the value that end-users can extract from industrial hardware. LXCs have the potential to achieve this with a wide range of products. In some cases, the LXCs can completely redefine the function of a piece of hardware, giving it a new lease of life for use on the network.
Another benefit of containers is increased system security. Because containers are isolated from one another and from the host, a single compromised container does not undermine the integrity of the others or of the host itself.
All these benefits are leading to continuous developments in the realm of containers. In fact, some developers are beginning to create new programs by stitching together containers. This method allows the programs to become more flexible as individual containers can be swapped in and out easily allowing programs to be updated in line with user requirements.
Containers such as LXCs are proving to be an incredibly strong and versatile tool for developers and end-users. They have the potential to extend the life of hardware by redefining functions and giving old pieces of technology new functions. Their use is a gateway to continuous development.
Ahead of 2020, DW asked a whole host of individuals working in the tech space for their thoughts and observations on the business trends and technology developments likely to be major features of the year ahead. No one gave us the kitchen sink, but otherwise it seems that we’ve uncovered a broad spectrum of valuable, expert opinion to help end users plan their digital transformations over the next 12 months or so. Part 4.
By now we were all supposed to be more connected, but instead we’re getting more fragmented and siloed. “Likes” in social media polarize us, as algorithms favor inflammatory content that evokes stronger reactions and keeps us hooked longer. We’ve seen fragmentation when it comes to local laws, regulations and privacy. In the private sector, business schools, strategy heads and activist investors preach divesting anything that’s not a core competency, but in a fragmented world, with digital giants lurking around the corner, do we need to think differently?
For regulations, business models and data – which increasingly are the same thing – we can turn a fragmenting landscape into an opportunity. But analysis isn’t enough. We need synthesis AND analysis to connect distributed data to the analytic supply chain, with catalogues as the connective tissue. The tech is there today, but it also needs to be followed by the right processes and people. Synthesis and analysis are critical to make use of pervasive data and facilitate the evolution towards what we call “laying the data mosaic.” Below is a curated subset of the top 10 trends we see being most important in the coming year, a full version of which we will publish in January 2020.
1. Big Data is Just Data. Next up – “Wide Data”
Big Data is a relative term, and a moving target. One way to define big data is as anything beyond what you can achieve with your current technology: if you need to replace, or significantly invest in, extra infrastructure to handle your data volumes, then you have a big data challenge. With infinitely scalable cloud storage, that shortcoming is gone. It’s easier now than ever to do in-database indexing and analytics, and we have tools to make sure data can be moved to the right place. The mysticism of data is gone - consolidation and the rapid demise of Hadoop distributors in 2019 are signals of this shift.
The next focus area will be very distributed, or “wide data.” Data formats are becoming more varied and fragmented, and as a result different types of databases suitable for different flavors of data have more than doubled – from 162 in 2013, to 342 in 2019.* Combinations of wide data “eat big data for breakfast” and those companies that can achieve synthesis of these fragmented and varied data sources will gain an advantage.
2. DataOps + Analytic Self-Service Brings Data Agility Throughout the Organization
Self-service analytics has been on the agenda for a long time and has brought answers closer to business users, enabled by “modern BI” technology. That same agility hasn’t happened on the data management side – until now. “DataOps” has come onto the scene as an automated, process-oriented methodology aimed at improving the quality and reducing the cycle time of data management for analytics. It focuses on continuous delivery by leveraging on-demand IT resources and automating the testing and deployment of data. Technologies like real-time data integration, change data capture (CDC) and streaming data pipelines are the enablers. Through DataOps, 80% of core data can be delivered in a systematic way to business users, with self-service data preparation as a standalone area needed in fewer situations. With DataOps on the operational side and analytic self-service on the business user side, fluidity across the whole information value chain is achieved, connecting synthesis with analysis.
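As a simple illustration of the change-data-capture idea, the sketch below (hypothetical, in Python, against an in-memory SQLite table) polls for rows modified since the last cycle and hands only those deltas to the analytics layer. Production CDC tools typically read the database’s transaction log rather than polling, but the flow, continuous delivery of changes instead of bulk reloads, is the same.

```python
import sqlite3

def publish_downstream(event):
    """Stand-in for a streaming pipeline: in practice this would write to a topic or queue."""
    print("delivering change to analytics layer:", event)

def capture_changes(conn, last_seen_ts):
    """Poll-based CDC: emit only rows modified since the previous cycle."""
    rows = conn.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ? ORDER BY updated_at",
        (last_seen_ts,),
    ).fetchall()
    for row_id, amount, ts in rows:
        publish_downstream({"id": row_id, "amount": amount, "updated_at": ts})
        last_seen_ts = max(last_seen_ts, ts)
    return last_seen_ts

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, updated_at INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [(1, 9.99, 100), (2, 24.50, 105)])
watermark = capture_changes(conn, last_seen_ts=0)     # both rows flow downstream
conn.execute("INSERT INTO orders VALUES (3, 5.00, 110)")
watermark = capture_changes(conn, watermark)          # only the new row flows downstream
```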
3. Active Metadata Catalogues - the Connective Tissue for Data and Analytics
Demand for data catalogues is soaring as organizations continue to struggle with finding, inventorying and synthesizing vastly distributed and diverse data assets. In 2020, we’ll see more AI-infused metadata catalogues that will help shift this gargantuan task from manual and passive to active, adaptive and changing. This will be the connective tissue and governance for the agility that DataOps and self-service analytics provide. Active metadata catalogues also include information personalization, an essential component for generating relevant insights and tailoring content. But for this to happen, a catalogue needs to work not just “inside” one analytical tool, but across the fragmented estate of tools that most organizations have.
4. Data Literacy as a Service
Connecting synthesis and analysis to form an inclusive system will help drive data usage, but no data and analytic technology or process in the world can function if people aren’t on board. And dropping tools on users and hoping for the best is no longer enough. A critical component for overcoming industry standard 35% analytics adoption rates is to help people become confident in reading, working with, analyzing and communicating with data. In 2020, companies expect data literacy to scale, and want to partner with vendors on this journey. This is achieved through a combined software, education and support partnership – as a service – with outcomes in mind. The goal could be to drive adoption to 100%, helping combine DataOps with self-service analytics, or to make data part of every decision. For this to be effective, one needs to self-diagnose where the organization is and where it wants to get to, and then symbiotically work out how those outcomes can be achieved.
5. “Shazaming” Data, and Computer/Human Interactions
The effects of analyzing vast amounts of data have now reached a tipping point, bringing us landmark achievements. We all know Shazam, the famous music service that identifies a song from a short recording. More recently, the same idea has been extended to other use cases, such as shopping for clothes simply by analyzing a photo, or identifying plants and animals. In 2020, we’ll see more use cases for “shazaming” data in the enterprise, e.g. pointing to a data source and getting telemetry such as where it comes from, who is using it, what the data quality is, and how much of the data has changed today. Algorithms will help analytic systems fingerprint the data, find anomalies and insights, and suggest new data that should be analyzed with it. This will make data and analytics leaner and enable us to consume the right data at the right time.
We will see this combined with breakthroughs in interacting with data – going beyond search, dashboards and visualization. Increasingly we’ll be able to interact sensorially, through movements and expressions, and even with the mind. Facebook’s recent acquisition of CTRL-labs, maker of a mind-reading wristband, and Elon Musk’s Neuralink project are early signals of what’s to come. In 2020, some of these breakthrough innovations will begin to change the experience of how we interact with data. This holds great human benefits for all, but can also be used for ill, and must be used responsibly.
Turn fragmentation to your advantage by connecting synthesis and analysis to form a dynamic system. DataOps and Self-Service will be the process and method. Data Literacy and Ethics will guide people to do the right thing. Innovative technologies powered by AI will facilitate throughout the entire chain to enhance and accelerate data use. These trends form tiles for laying a data mosaic in a complex, fragmented world, making the use of data pervasive across the enterprise and ushering us into the next phase of success in the digital age.
People are becoming more comfortable disrupting with technology as low-priced, powerful cloud technology starts to reach the hands of technologically minded individuals. Every business is a 'technology business', and for those who don’t think they are, it may be too late to join in.
Nathan Thomas, Digital Product & Software Engineering Lead, Ricoh explains:
The rate of technological disruption has increased exponentially over the last few years and we expect this to continue.
Businesses will start to pay for enterprise business software and hardware in a similar way they currently do for mobile phones and personal technology. This will result in rapid updates and continuous innovation, and businesses will expect the latest hardware & software updates regularly.
Cloud
Cloud has enabled us to be more technologically minded. It’s easier than ever to learn the basics, like coding, by utilising the power of cloud technology. Businesses embrace this by creating a ‘culture of innovation’ and enabling people to contribute to the future of the business. Research suggests that in the next couple of years over half of enterprise-generated data will be created and processed outside the centralized cloud, showing a real evolution in cloud technology.
The Human-Machine Interface
People are seriously underestimating the impact that human-machine interfaces (HMIs) will have on the world and how quickly they’ll become commonplace. Augmented reality is becoming standard. Not too long ago it was hard to find AR applications; they now come as default in common apps such as Google Maps, Google Camera and Measure. This encourages people to imagine, 'What else could I do with this technology?'
Brain-machine interfaces (BMIs) are an important consideration for the future and hold promise for the restoration of sensory and motor function and the treatment of neurological disorders. We’re only a year or two away from seeing practical applications for this technology in everyday life, changing the perception of what is possible when technology and humans come together.
Trust
The trust crisis is something that has come into being off the back of the current climate with massive organisations like Facebook losing the public’s faith. Legislation and regulations are being implemented to help guide organisations and encourage transparency around privacy and data. The implementation and encouragement of transparency is something that we will continue to see in the future, now that people are becoming more comfortable in our modern IT world.
Operators will need to shift their 5G messaging, says Gavin Hayhurst, Product Marketing Director, TEOCO:
Much of 5G marketing is as we might expect it to be. One operator is hyping it as offering speed “like you’ve never seen before”, with an “instant connection” to the content you love most. Another touts “ultra-fast speeds and ultra-low latency”. But the reality is that the main benefits of 5G are complex and not best communicated with such simple messages.
Network slicing remains one of the most compelling uses of 5G technology.
By creating multiple end-to-end virtual networks that run on top of a physical infrastructure, operators can tune different network slices for specific needs to ensure they can meet the demands of different applications—for example, IoT applications such as connected vehicles. It’s a potential game-changer for several use cases, but not exactly a compelling consumer message.
In fact, to date, there’s no real evidence that consumers are terribly interested in 5G, and the current rhetoric claiming that the technology offers faster speeds than ever before is unlikely to change that. Instead, we’ll see operators focus on enterprise customers as early adopters of the technology, and rather than emphasising the speed of 5G, we’re likely to see marketing focus on 5G benefits for specific use cases.
Developing 5G network opportunities
If operators are to make the most of the 5G opportunity, they must look at what it enables. Building a better network is only part of the story—the world may beat a path to the operator’s door for access to 5G networks, but it will be those who build on top of these networks that stand to gain the most.
For example, smart building management requires integrating data from connected devices, sensors, services and applications. Operators have an opportunity here. Given that they will be required to manage masses of data to keep their 5G networks optimised, they will also be well-placed to assist with other areas where crunching similarly large data sets is necessary.
Operators should therefore be focused on building the means to handle and manage networks that produce unprecedented amounts of data, and if they’re smart they will use this technology to manage, orchestrate and analyse data for others. Smart building management is predicted to be worth nearly $20bn in a few years, and it is those platforms that can scale to billions of data records per day and use open APIs to collect data from anywhere that will be best placed to take advantage.
Today’s IT infrastructure is a hybrid environment on many levels: a cacophony of legacy architecture, coupled with new technologies and architectures across private and public clouds. It is almost impossible for operations teams to effectively manage all of these disparate elements without the business at some point suffering from performance issues or worse, outages.
By David Cumberworth, MD EMEA, Virtana.
The rise of AI and machine learning has opened up a whole new world of possibilities for dealing with this level of complexity. The global data centre automation software market is predicted to reach a value of $9.42 billion between 2019 and 2023, with a CAGR of 22%*. Data centres of the future will increasingly augment human management; the industry is on a path towards more automation of these environments. And the journey from the status quo to that automated future is being fast-tracked by the application of artificial intelligence for IT operations: “AIOps”, around which there has lately been much hype.
What is AIOps?
Gartner’s definition** for AIOps is:
“Platforms that utilise big data, modern machine learning and other advanced analytics technologies to directly and indirectly enhance IT operations (monitoring, automation and service desk) functions with proactive, personal and dynamic insight. AIOps platforms enable the concurrent use of multiple data sources, data collection methods, analytical (real-time and deep) technologies, and presentation technologies.”
The drivers for AIOps
The future of IT operations is autonomous: enterprises need IT operations that never fail to deliver mission-critical services, that adapt to business innovation, and consume resources ever more efficiently.
Achieving this is difficult because the enterprise data centre today is made up of multiple infrastructure silos with disparate element managers that cannot adequately correlate or communicate with each other. The amount of complexity and moving parts in today’s data centre is staggering and IT operations and infrastructure teams simply do not have the visibility, capability or tooling to orchestrate across silos and private/public clouds.
As newer technology is deployed, the need for visibility across the entire environment becomes ever greater. The ability to have cross silo transparency and automation coupled with unifying and streamlining this mass of data, starts to tackle the numerous challenges that have arisen as a result of this limited silo structure.
Why AIOps is different
If we compare running a data centre to driving a car, most monitoring tools provide a post-process analysis of why a problem occurred across a certain layer of infrastructure, or, to use the car analogy, why your car crashed! AIOps, by contrast, delivers alerts in advance of the impending accident so that incidents can be anticipated and, through machine learning, intelligent action can be taken and the course corrected before an accident ever takes place. The massive expense, disruption to customers and damage to an enterprise’s reputation should an outage occur make AIOps capabilities essential in today’s hybrid data centre.
For a data centre to be truly autonomous, a massive amount of information must be collected about its operations and infrastructure. This is where the years of machine learning underpinning AIOps come to the fore, enabling AIOps to discern and prioritise among thousands of alerts, essentially cutting out the noise. That same machine-learning intelligence also provides a performance baseline and delivers vital analytics for applications and for meeting business SLAs.
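A crude but illustrative version of that baselining idea is sketched below in Python: a rolling window of recent metric values forms the baseline, and a reading that drifts several standard deviations away raises a flag before it becomes an outage. Real AIOps platforms correlate many metrics and events at once; this shows only the core principle.

```python
from collections import deque
from statistics import mean, stdev

class MetricBaseline:
    """Keep a rolling baseline of a metric and flag values that drift far from it."""
    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        anomaly = False
        if len(self.history) >= 10:  # wait for a minimal baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomaly = True
        self.history.append(value)
        return anomaly

baseline = MetricBaseline()
for latency_ms in [12, 11, 13, 12, 14, 13, 12, 11, 13, 12, 95]:
    if baseline.observe(latency_ms):
        print(f"anomaly: storage latency {latency_ms} ms is far outside its learned baseline")
```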
AIOps benefits and use cases
AIOps significantly reduces, or even eliminates, mean time to resolution, preventing outages and enabling massive cost savings for the business through increased uptime.
For operations teams, AIOps provides visibility and insight across silos and allows IT to become more business focussed, allowing them to innovate rather than simply operate, as has previously been the case.
True AIOps can enable end-to-end, real-time visibility, allowing for utilisation and capacity planning that helps eliminate infrastructure overprovisioning, delivering massive CAPEX and OPEX savings.
Moreover, the detailed insights enabled by AIOps can help increase the profiles and credibility of IT teams and application owners, shifting from a situation of finger pointing and “war room” scenarios, to providing valuable intelligence and reporting on the capacity, health and utilisation of the hybrid infrastructure.
AIOps’ ability to simplify data centre operations is vital; with cloud migration, there is a perceived risk in placing infrastructure in the hands of public cloud vendors. With AIOps, cloud vendor choice becomes less of an issue, as the entire heterogeneous infrastructure is controlled by the owners of IT, rather than a faceless public cloud vendor.
Barriers to AIOps adoption
AIOps enables strategic monitoring and control, ensuring application and infrastructure SLAs are met. One of the major barriers to its adoption is lack of trust and resistance to change. Organisations do not like moving away from the technology and tools they are familiar with. But simply put, without automation, managing the complexity and multiple moving parts of the hybrid data centre is becoming overwhelming for operations and infrastructure teams.
One of the core propositions of AIOps is “handing the keys” (and the control) of the infrastructure back to operations teams. Much as with a commercial aircraft, the flight controls can be set to automatic, but the pilot is free to take over at any time. It is ultimately up to the operations teams how much they wish to automate.
AIOps is more than an insurance policy against risk; it is a pivotal enabler of digital transformation, allowing companies to shift from reactive troubleshooting to proactive optimisation, streamlining operations to better serve the business and deliver organisational agility.
*Global Data Centre Automation Software Market – Technavio, June 2019
Ahead of 2020, DW asked a whole host of individuals working in the tech space for their thoughts and observations on the business trends and technology developments likely to be major features of the year ahead. No one gave us the kitchen sink, but otherwise it seems that we’ve uncovered a broad spectrum of valuable, expert opinion to help end users plan their digital transformations over the next 12 months or so. Part 5.
Container and orchestration adoption will accelerate and spread, comments Ben Newton, Masters of Data, Podcast Host, Director of Product Marketing, Sumo Logic:
According to Gartner, by 2020, more than 50% of global organizations will be running containerized applications in production, up from less than 20% today. I expect to see container adoption, in general, and Kubernetes adoption, in particular, accelerate as companies try to leverage the platform for new architectures and multi-cloud initiatives. I expect to see that adoption trend move beyond first adopters into the mainstream with late adopters, though it will be years yet before enterprises are able to move large portions of their applications to new, containerized architectures.
A new category will emerge: Quality Engineering
As engineering teams take more responsibility for their applications from code to the grave, we are seeing more specialized teams arising to fulfill specific platform needs for those development organizations. Like Site Reliability Engineering (SRE) teams before them, we should see a sharp increase in the creation of Quality Engineering (QE) teams and roles that focus on driving agile and DevOps practices into testing and quality control - including a move away from traditional QA teams to a shared responsibility model with developers (again similar to SRE models).
Kubernetes will continue to evolve as the “Linux of the cloud”
Kubernetes has won the cloud orchestration war. Today, all the major cloud providers - Amazon, Google and Microsoft - offer a managed Kubernetes service. As multi-cloud adoption continues to accelerate, it has been highly correlated with higher Kubernetes adoption. Kubernetes will remain an orchestration tool of choice as it offers broad multi-cloud support and can be leveraged by many organizations to run applications across on-prem and cloud environments. Additionally, we should see the maturation, and increased adoption, of fully managed Kubernetes-based platforms, like Fargate on AWS, that will be very attractive for organizations that don't want the management headache of Kubernetes.
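One reason Kubernetes works as a “Linux of the cloud” is that the same API surface is exposed whether the cluster is managed by Amazon, Google, Microsoft or runs on-prem. The short Python sketch below uses the official Kubernetes client to list pods; it assumes a kubeconfig is already in place and works unchanged against any conformant cluster.

```python
# pip install kubernetes -- the official Python client
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; the same code works against EKS, GKE, AKS or on-prem
v1 = client.CoreV1Api()

# List every pod in the cluster, regardless of which cloud (or data centre) hosts it.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```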
The role of European developers will continue to evolve, according to Colin Fernandes, Director of Product Marketing (EMEA), Sumo Logic:
European developers want to use the latest application deployment technologies - container and serverless adoption is just as high as in other regions of the world. However, we can see from the data on cloud implementations and adoption of best practices that they are not as far ahead in using tools for monitoring and security. In 2020, I think developers will take on more responsibility for ensuring security across their pipelines, as they want to roll out their code more efficiently but without risking what they put out.
How will they achieve this in practice? For some developers, this will be added to their roles and responsibilities, just as they have taken on more operational support. For others, it will mean working more efficiently with existing security and operations teams on their DevSecOps processes. Getting this right will be an important aim for 2020.
Serverless will dominate developer conversations - and IT operations headaches - in 2020, says Mark Pidgeon, VP, Technical Services (EMEA), Sumo Logic:
2019 was the year of containers and orchestration - in our Continuous Intelligence Report for this year, we saw Kubernetes adoption going up massively in line with multi-cloud adoption. In 2020, we’ll see more adoption of serverless alongside containers to meet specific needs within applications.
AWS Lambda is already one of the top ten most used services on AWS, and more than 36 percent of companies use Lambda in their production applications. This use of serverless will continue to grow over time.
What impact will this have on IT and on developers? I think we’ll see adoption grow as people figure out how they can apply serverless in their own environments, but we will also see developers face up to how serverless is billed and paid for. Alongside this, IT operations and security teams will have to get to grips with serverless architecture and make sure that all the necessary security, compliance and backup rules are being followed. This will be a big change of mindset, and data will be essential to make this work.
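For readers new to serverless, the sketch below is a minimal AWS Lambda function in Python (the handler name and event shape are illustrative). There is no server to patch or scale; what remains for operations and security teams is the code itself, its configuration, its per-invocation billing and the compliance rules around the data it touches.

```python
import json

def handler(event, context):
    """Invoked per event (HTTP request, queue message, schedule); billed per invocation and duration."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```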
In 2020, I believe we’ll see the accelerated adoption of finer granular objects to drive efficiencies, comments Steve Cohen, Security Services Manager at Synopsys:
“As developers adopt these finer granular objects within their cloud applications, such as containers, microservices, micro-segmentation, and the like, security testing tools will need to be object aware in order to identify unique risks and vulnerabilities introduced by utilizing these objects.
I anticipate that new approaches to collecting security related data may become necessary in the cloud. In addition to application logs, cloud API access will be seen as necessary. There will also be a growing focus on centralized logging in the upcoming year.
In addition to application security, the cloud management plane will become an additional security layer that needs addressing in 2020. Developers, for example, will require access to the management plane to deploy applications. Incorrect settings here could expose the application to security risks as sensitive information flows through it.
Reduced transparency around what’s going on within a given application will likely be a growing trend. A cloud provider doesn’t necessarily tell you what security controls exist for the PaaS services they expose to you. Businesses will therefore need to make some assumptions about their security considerations and stance.
In terms of data security and integrity in the cloud, there will be more of a need to have proper policies in place to prevent improper disclosure, alteration or destruction of user data. Policies must factor in the confidentiality, integrity and availability of user data across multiple system interfaces.
In 2020, the adoption of PaaS and serverless architecture will provide even more of an opportunity to dramatically reduce the attack surface within the cloud.”
Tim Mackey, Principal Security Strategist, CyRC suggests that:
“Cyber-attacks on 2020 candidates will become more brazen. While attacks on campaign websites have already occurred in past election cycles, targeted attacks on a candidate’s digital identity and personal devices will mount.
With digital assistants operating in an “always listening” mode, an embarrassing “live mic” recording of a public figure will emerge. This recording may not be associated directly with a device owned by the public figure, but rather with them being a third party to the device. For example, the conversation being captured as “background noise”.
With the high value of healthcare data to cybercriminals and a need for accurate healthcare data for patient care, a blockchain-based health management system will emerge in the US. Such a system could offer the dual value of protecting patient data from tampering while reducing the potential for fraudulent claims being submitted to insurance providers.”
Emile Monette, Director of Value Chain Security at Synopsys, comments:
“In the year to come, I anticipate that we’ll see continued developments in software transparency (e.g., NTIA Software Component Transparency efforts). Additionally, a continued need for software testing throughout the software development life cycle (SDLC) will also persist as a focus in 2020—most assuredly a positive step in terms of firms understanding the criticality of proactive security maturity. I also have reason to believe we’ll see increased efforts to secure the hardware supply chain, and specifically efforts to develop secure microelectronic design and fabrication will come into focus in the upcoming year.”
Ransomware Will Evolve from Smash & Grab to Sit & Wait, according to Brian Vecci, Field CTO at Varonis:
Ransomware isn’t the most pervasive or common threat; it’s simply the noisiest. In 2020, attacks will become more targeted and sophisticated. Hackers will pivot from spray-and-pray tactics. They will instead linger on networks and home in on the most valuable data to encrypt. Imagine an attacker that encrypts investor information before a publicly traded bank announces earnings. This is the type of ransomware attack I expect we’ll see more of in the coming year, and organizations that can’t keep up will continue to get hit.
Fake News Will Become Fake Facetime
Forget fake news: 2020 will be the year of the deepfake and at least one major figure will pay the price. Thanks to leaky apps and loose data protection practices, our data and photos are everywhere. It will be game-on for anyone with a grudge or a sick sense of humour. It raises the ultimate question: What is real and what is fake?
A Political Party Will Cry Wolf
In 2020, one or both of our political parties will claim a hack influenced the elections to delegitimize the results. Foreign influence has been an ongoing theme, and few prospects are more enticing than affecting the outcome of a U.S. presidential election. With so much at stake, a nation state attack is practically inevitable. The federal government has failed to pass meaningful election security reform. Even if an attack doesn’t influence the results, it’s likely that those who don’t like the outcome will claim interference, and this scenario will discredit our democracy and erode trust in the electoral process. If we want to maintain the integrity of our elections and avoid political upheaval, real change needs to happen in how we store and protect our data.
CCPA...Cha-Ching!
Once January hits, the fines will roll in. A recent report released by California’s Department of Finance revealed that CCPA compliance could cost companies a total of $55 billion - and this isn’t even taking into consideration the firms that fail to comply. In 2019, we saw GDPR’s bite finally match its bark, with more than 25 fines issued to offenders, totalling more than $400M, and the same is likely to happen in the U.S. under CCPA. In 2020, at least 5 major fines will be issued under CCPA, racking up upwards of $200M in fines. While a federal regulation is still a ways off, at least 3 other states will begin to adopt legislation similar to California’s, though none will be as strict.
In a world of ever-increasing apps, websites, and digital experiences, there are immense benefits to deploying microservices. By dividing an application into smaller, stand-alone services, each performing a single function and communicating together to work collaboratively, microservices offer increased modularity, making applications easier to scale and faster to develop.
By Martin Henley, Senior Vice President, Technology Services, Globality.
The limits of AI and potential pitfalls
The use of microservices has its own limitations and challenges—scale, complexity, and constant change are the new realities of the modern enterprise when it comes to deploying microservices. Microservices architecture introduces complexity with respect to network latency, network communication, load balancing, fault tolerance, and message formats. But with the greater flexibility also comes a higher risk of something slipping through the cracks. Managing many services requires understanding deep structures, dependency graphs, and complex systems.
Process scalability is another significant challenge. An enterprise can run one cloud-based service with relative ease, but simultaneously running dozens of services across multiple clouds quickly becomes a complicated juggling act. In fact, with all the moving parts that come with microservices, it is becoming increasingly difficult to manage these processes manually.
No wonder that, along with the rise of microservices, we have seen the emergence of auto-remediation/self-healing capabilities. To the untrained eye, the terms auto-remediation and self-healing would seem to be the answer, but don’t be fooled. Many of these features, although making the management of microservices easier, use static thresholds for data points such as waiting time, throughput, message queue size, and more. When the thresholds are passed, the system will correct or self-heal as necessary. Although helpful, they’re not necessarily smart or intelligent enough to get by without a person behind the scenes—that’s where AI can help.
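The sketch below shows the kind of static-threshold remediation rule being described, with hypothetical names: a fixed queue-depth limit triggers a scale-out action, regardless of trend or context. It is exactly this rigidity, where a slow drift stays invisible while a harmless one-off spike triggers action, that smarter, learning-based approaches aim to replace.

```python
QUEUE_DEPTH_LIMIT = 1_000  # static threshold chosen up front

def scale_out(service: str) -> None:
    print(f"self-heal: adding a replica of {service}")

def remediate(service: str, queue_depth: int) -> None:
    """Threshold-based 'self-healing': act only when the fixed limit is crossed."""
    if queue_depth > QUEUE_DEPTH_LIMIT:
        scale_out(service)

remediate("payments", 990)    # a steady upward drift to 990 never trips the rule
remediate("payments", 1_050)  # a momentary spike does, even if it is harmless
```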
Problem-solving using AI
AI can help solve some of the hardest problems facing microservices today. Below are a few achievements we’re already seeing AI accomplish:
Benefits aside, using AI is not without some pitfalls. For example, despite all their potential, so-called state-of-the-art AI models are still quite primitive compared with what they could eventually achieve. Another issue is how to manage the possibility of bias creeping into AI models, to avoid unintended consequences. Technology professionals are already working on ways to solve these complications, but it will take time to develop the right models, data sets and algorithms to create the perfect solution.
The reality is that microservices are simply becoming far too complex for humans to effectively manage on their own. Because of the dynamic nature of their structure, their complex dependencies, and their sheer scale, machine learning must be applied to assist with monitoring and management tasks.
Digital transformation has seen application teams embrace a broad range of new technologies and associated approaches, from mobile-first to cloud-native architecture. These deliver greater agility, end-user focus and more innovation than ever before. They also create significant volumes of complex data.
By Conor Molloy, Senior Vice President at Riverbed Aternity International Division at Riverbed Technology.
In order to provide an optimum digital experience for your internal and external users, it’s vital to gain visibility into this data so you can track the performance of your technology, to identify and resolve issues before they negatively impact users. This is referred to as application performance monitoring (APM). The following top learnings will help you create an effective APM strategy to deliver the high-quality digital experiences that will set you apart from competitors.
1. Despite reliance on digital applications, few are monitored
Concerns around cost, scale and integration with legacy architecture have all impeded traditional APM adoption. However, in today’s end-user orientated climate, businesses cannot afford to allow this visibility gap – the space between what your tools are telling you and what your users are actually experiencing – to persist. Doing so could decrease organisational productivity, negatively affect customer satisfaction, and ultimately damage revenue.
Relying on employees to close this gap, by flagging poor system performance, is untenable. Instead, companies need to monitor end-user experience from the device to the application, through to the infrastructure and into the network. This approach provides IT teams with direct visibility into performance so they can find and fix issues as they occur, in some cases even before users become aware of them.
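As a trivial illustration of measuring what the user actually experiences rather than what the server reports, the Python sketch below times a full page fetch from the client side. The URL is hypothetical; a real deployment would collect such measurements continuously from real devices and correlate them with application, infrastructure and network telemetry.

```python
import time
import urllib.request

def measure_experience(url: str, timeout: float = 10.0) -> float:
    """Return the end-to-end response time in milliseconds, as seen from the user's side."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read()
    return (time.perf_counter() - start) * 1000

# Hypothetical internal application URL
print(f"page served in {measure_experience('https://intranet.example.com/'):.0f} ms")
```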
2. Monitor everything, including services out of your control
Your business relies on the end-to-end experience of all your applications, regardless of whether you or a third-party are delivering the service. Given the explosion of cloud adoption – 80 per cent of organisations will have migrated to the cloud and colocation by 2025, according to Gartner – it’s more important than ever that companies ensure they have visibility into these solutions. This visibility will empower businesses to hold vendors accountable and keep an eye on service-level agreements.
3. Application technologies are multiplying, increasing the need for APM
Cloud-native application development was designed with modern hyperscale infrastructure in mind. This approach sees traditional monolithic application infrastructure broken up into microservices – smaller, functional components – that can be scaled individually to deliver large-scale efficiencies, particularly for the enterprise-scale businesses that are driving their adoption.
The typical cloud-native technology stack is made up of a wide variety of commercial and open-source platforms. As the number of vendors and technologies grows, so too does the complexity and need for a strategy that delivers simplified visibility – APM.
4. APM’s big data challenge
Businesses create a significant volume of varied data on a daily basis. For instance, a credit card processing company will create petabytes of data every day by executing millions of application transactions. To make matters more challenging, the transactions will have been processed by tens of thousands of distributed application components in the cloud, making them subject to state changes. All of this data needs to be processed and stored; as a consequence, while the complexity is great, so is the need for high-definition data.
5. Detailed data is key to troubleshooting
Detailed, second-by-second visibility into the systems your applications run on is crucial in today’s high-paced business environment. Checking the health of your systems at one-minute intervals is acceptable for infrastructure monitoring, but to troubleshoot business transactions it must be done in a matter of seconds. After all, five seconds is considered critical for revenue-impacting transactions, so application performance monitoring data must be equally high definition.
6. It’s time to implement big data for application monitoring
In an attempt to consistently monitor device, application, infrastructure and network performance, many companies fall into the trap of only sampling a small fraction of executed transactions. This leaves sizeable blind spots, creating the possibility for critical information to easily go unanalysed. In turn, this can compromise the overall quality of the data set.
However, it can be avoided with big data technology, which has already been proven to be capable of handling large-scale data collection and analysis. This allows businesses to deliver the lightning fast responses consumers expect from their applications and, in doing so, maintain brand credibility.
7. APM will quantifiably improve your bottom line
The driving force behind the majority of digital transformation projects is a company’s desire to improve their bottom line. The value of transformations can only be quantified, and their positive impacts amplified, through monitoring the end-user experience new technology is delivering. As such, a systematic approach to APM is crucial to delivering tangible financial results.
Take Riverbed as an example: its APM technology monitors the digital experience of every kind of application in an enterprise’s portfolio from the point of consumption – the user’s device – and upgrades the approach to a big data practice by delivering high-definition data quality at scale. As a result, customers enjoy five times the return on investment over three years for large-scale APM deployments.
APM has a proven ability to deliver optimum digital experiences that translate into business results, so what’s stopping you from embracing it?
Ahead of 2020, DW asked a whole host of individuals working in the tech space for their thoughts and observations on the business trends and technology developments likely to be major features of the year ahead. No one gave us the kitchen sink, but otherwise it seems that we’ve uncovered a broad spectrum of valuable, expert opinion to help end users plan their digital transformations over the next 12 months or so. Part 6.
Four retail trends set to drive change in 2020 – thoughts from Roy Reynolds, Technical Director at Vodat International:
The pace of digital change within bricks-and-mortar stores is accelerating rapidly as retailers search for new and innovative ways to protect their margins, increase footfall and boost customer experience.
Here we consider four in-store digital trends set to drive change during 2020, how fast, reliable and safe networks make them achievable and also the steps retailers need to take to ensure their network cybersecurity solutions are up-to-date and secure.
The retail cloud will come of age
The retail industry’s adoption of cloud computing continues at speed. But during 2020, cloud integration will step up a gear, with retailers starting to do much more than reduce the cost of computing and data storage.
Instead retailers will increasingly use cloud computing to tackle data-heavy tasks such as managing their pricing and margin strategies in real time.
This task involves the analysis of a long list of complex variables, including competitors’ prices, sales history at a granular store level, predicting repricing opportunities, analysis of margin and sales and then using all this information to decide pricing in-store. For most retailers this process is long, laborious and mostly manual. Data is often scattered across multiple channels with no single source of the truth, making valuable real-time insight virtually impossible to achieve.
Pricing and margin management is just one example of a data-heavy retail task for which cloud data platforms offer a compelling solution.
The beauty of cloud is that it can prepare multiple sources of data for analysis and leverage machine learning and artificial intelligence to provide the real-time insights retailers need to make decisions on pricing and margins.
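As a toy example of the kind of rule such a platform might automate once the data is consolidated, the Python sketch below suggests a shelf price from cost, competitor prices and recent sales. The inputs and thresholds are invented for illustration; a real system would learn them from granular, store-level history.

```python
def suggest_price(cost: float, competitor_prices: list, units_sold_last_week: int,
                  min_margin: float = 0.15) -> float:
    """Pick a price that undercuts the cheapest competitor without breaching the margin floor."""
    floor = cost * (1 + min_margin)           # never go below the target margin
    target = min(competitor_prices) * 0.99    # undercut the cheapest competitor slightly
    if units_sold_last_week == 0:             # slow mover: reprice more aggressively
        target *= 0.95
    return round(max(floor, target), 2)

print(suggest_price(cost=6.00, competitor_prices=[9.49, 8.99, 9.20], units_sold_last_week=3))  # -> 8.9
```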
When it comes to retail, however, cloud adoption is a double-edged sword; on one hand it’s a great step forward, but on the other it can be an infinite opportunity for cybercriminals.
And it is important to realise that the job of securing your cloud data and workloads is the responsibility of the retailer…not the cloud service provider.
Retail is already a top target for hackers due to the sheer volume of customer data that organisations handle, and networked, access-from-anywhere cloud systems offer a tempting target. A big issue in 2019, which will continue into 2020, will be an increase in credential compromises, so retailers need to carefully manage cloud users’ log-in details, operate a mindset of ‘when, not if’ an account compromise will occur, and have a robust contingency plan ready for any data loss or web-based threats.
Internet of things to unlock personalisation
Internet of things (IoT) devices are by no means new to retail. Tesco, for example, has been using RFID labels to track stock from its F&F range for years. In 2020, however, we’ll see lots more IoT devices used in fresh and innovative ways to inject personalisation and improve in-store customer experience.
This trend follows recent research showing that 75% of consumers are more likely to buy from a company that knows them by name, recommends options based on past purchases or knows their preferences.
Grocery stores are among those leading the charge, with Tesco and Amazon planning their first forays into cashier-less scan-and-go outlets next year.
Bluetooth-enabled beacon technology also promises to inject personalisation into the store experience, enabling retailers to send promotional push notifications or text messages to shoppers’ smartphones as they shop in-store.
Several big-name brands have already found success with beacon deployments. US department store Lord & Taylor has seen a 60% engagement rate with customers as a result of the beacon technology it installed in one of its Boston locations.
As with any internet-connected device, security must be a priority if retailers are to stop hackers using IoT as a weak access point to their wider digital networks. Ensuring IoT devices are on their own protected network is a wise move as it helps to secure the data transmitted from any external attack.
In-store customer connectivity will become even more important
Rather than splashing out on expensive in-store hardware, more and more retailers will be leveraging shoppers’ mobile phones to deliver high-quality branded content.
To deliver this content and protect consumers’ precious data allowances, however, retailers will have to ensure their in-store Wi-Fi is fast, reliable and safe.
A great example of this trend is US clothing retailer Tillys' use of in-store treasure hunts, which encouraged customers to explore its stores and interact with hero products using tech such as augmented reality (AR).
Tillys' use of incentives such as discount codes and instantly shareable social media content, fronted by famous YouTuber 'Shonduras', was a huge hit with customers: 55,000 scans, with users spending an average of three minutes per visit connecting digitally with the Tillys brand.
The US convenience store 7-Eleven has also harnessed the power of branded content delivered to mobile to drive in-store footfall. It used a Deadpool-themed campaign featuring in-store games, redeemable rewards and sharable selfie opportunities.
One of the most effective ways of ensuring Wi-Fi cyber security is to use a wireless intrusion prevention system. Leading systems provide automatic and comprehensive protection from all types of wireless threats, including rogue APs, soft APs, honeypots, Wi-Fi DoS, ad-hoc networks, client mis-associations, and mobile hotspots.
Product customisation
One-of-a-kind products are a classic symbol of luxury - having something that nobody else in the world has. In 2020 more brands and retailers are expected to tap into the customer’s demand for customised products.
For example, Levi’s offers personalised embroidery on jeans and denim jackets, while NIKEiD lets shoppers completely customise their sneakers for a truly unique pair.
Iconic footwear brand Dr Martens has its own take on the customisation trend, recently embarking on a 47-stop European tour with two talented tattoo artists — not to tattoo its customers, but to uniquely customise their favourite pair of DMs.
The cybersecurity risk for retailers here lies in integrating customisation software and IoT hardware onto their networks. It's easy to connect these assets to your system and assume that they're hacker-proof, but this would be a big mistake. This is especially the case with third-party plug-in software, which is constantly updated by its creators and so constantly adds new ways for hackers to breach retailer networks. Due diligence is absolutely essential when integrating any software or hardware into a retail system.
Keeping your software delivery in peak condition
By Jeff Keyes at Plutora
As security becomes an ever-more pressing challenge for businesses, ensuring that software is secure enough to prevent a cyber-criminal taking advantage of any weaknesses should be a priority. IT teams may primarily focus on making sure that their application is flawless and runs smoothly for users, but the threat of data breaches is now causing developers to reconsider how they introduce security features into software at every stage of the delivery pipeline. Finding issues at the end of this process is better than the bug going out to users, but the ideal solution is to find problems earlier, so that developers can reduce the amount of backtracking through the code that is required.
DevSecOps is a philosophy that streamlines these processes by incorporating them alongside the development process to help ensure that breaches are prevented. It improves the collaboration between development and operations teams by placing security at the heart of the process and creating faster, more efficient ways to safely deliver code in an agile architecture. Essentially, DevSecOps involves adding security to the existing DevOps process, whereby automated tests, non-functional requirements and compliance gating are incorporated into the standard DevOps cycle.
So how can organisations put a fully-functional DevSecOps philosophy into practice?
Share the load across every team
Shifting the focus of security to the left in the development cycle essentially means that identifying vulnerabilities should be an integral part of the development process from the beginning. To do so, security cannot be the responsibility of a single team or person, but rather a shared initiative across IT operations, security and development teams. By making this shift in the software development lifecycle, the process will run both more quickly and more securely.
If it’s a shared responsibility, then it requires a shared knowledge of what and how to watch and implement. To be able to move left in the cycle with this shared knowledge, pipeline phases and gates need to be incorporated. By breaking down delivery into phases and gates, teams can include threat analysis as an iteration to make sure it happens, and they can incorporate non-functional requirements into the product features.
By adopting this “shifting left” philosophy, teams will not only accelerate development but also limit potential security threats in the future, while addressing existing threats at the least cost and with minimal damage to the platform.
Weave automation in from the start
Applying continuous and focused automation such as linting is essential to the success of the DevSecOps environment. Automation, when woven into the software development lifecycle from the start, can reduce the friction that occurs between security and development teams by quickly addressing existing and potential concerns at the lowest cost.
Adding automated security checks earlier in the process enables developers to work on code that is current, rather than doing a final threat push on three sprints' worth of code, where they are looking back at code written more than six weeks ago, which is a difficult switch of context. By eliminating this challenge, both quality and hardening are built into the code far more effectively than by adding them at the end of the process.
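As a rough illustration of what weaving automation in from the start can look like in practice, the sketch below runs a linter and a static security scanner on every commit and fails the build if either flags a problem. The tool choices (flake8 and bandit) and the src/ path are example assumptions, not a prescription from the author.

```python
# Minimal sketch of an automated "shift left" gate: run a linter and a static
# security scanner on each commit and fail the pipeline if either reports
# problems. Assumes flake8 and bandit are installed; paths and tools are
# illustrative choices only.

import subprocess
import sys

CHECKS = [
    ["flake8", "src/"],          # style and lint issues
    ["bandit", "-r", "src/"],    # common Python security issues
]

def run_gate() -> int:
    failed = False
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    # A non-zero exit code fails the CI job, so issues are caught on current
    # code rather than weeks later in a pre-release "threat push".
    sys.exit(run_gate())
```

The same pattern extends naturally to dependency scanning or secret detection; what matters is that the gate runs on every change, not just before release.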
Get your governance in gear
Governance and DevOps are often at odds over how they make sure that there are no security issues before they go to release. Release orchestration tools can be introduced to solve this conflict, and criteria gates can be added to make sure that governance and DevOps work together.
The point at which security testing is conducted in the development process is also an important consideration in terms of lessening impact. Addressing security issues in completed code is much more cumbersome and expensive than addressing them while still coding. To combat this, governance also needs to be added at the beginning of the process so that it can be tracked throughout the entire lifecycle, with security teams able to audit, monitor and coach progress along the way.
Monitor your microservices for better security
In the world of legacy software, the number of interactions with other sources is not very high. In microservices, it is the complete opposite, and there is an added need to make sure all of these interactions are communicating with each other in a secure way.
Single-function modules that contain well-defined operations and interfaces are essential for successfully implementing a comprehensive DevSecOps approach. By constantly monitoring, upgrading and tweaking the microservice-based infrastructure, organisations will be better equipped for new developments.
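By way of illustration only, a monitoring loop of this kind can start very small: the sketch below checks that each (hypothetical) service endpoint responds over TLS with a valid certificate. A real deployment would use a proper observability stack, but the principle of continuously verifying that interactions are secure is the same.

```python
# Purely illustrative: a tiny monitoring loop that checks each microservice
# endpoint answers over TLS with a valid certificate.
# The service URLs are hypothetical placeholders.

import requests

SERVICES = {
    "orders": "https://orders.internal.example/healthz",
    "payments": "https://payments.internal.example/healthz",
}

def check_services() -> dict:
    status = {}
    for name, url in SERVICES.items():
        try:
            # verify=True (the default) rejects invalid or self-signed certs,
            # surfacing insecure service-to-service links early.
            resp = requests.get(url, timeout=5)
            status[name] = "ok" if resp.status_code == 200 else f"http {resp.status_code}"
        except requests.exceptions.SSLError:
            status[name] = "tls failure"
        except requests.exceptions.RequestException as exc:
            status[name] = f"unreachable ({exc.__class__.__name__})"
    return status

if __name__ == "__main__":
    print(check_services())
```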
There needs to be a concerted effort to stop leaving technical debt in the form of insecure computing. If you don’t have time to do it securely now, when will you? By going down the road of fully implementing DevSecOps philosophies, organisations will be armed with massive economic and technical advantages over less secure organisations.
Adopting a DevSecOps approach is beneficial for all parties involved, from the CEO to the end-user, as everyone can be sure that the security of the software is as tight as it can be. By integrating security much earlier in the pipeline and checking for issues throughout the process, developers can prevent vulnerabilities from ever going out to market – and in today’s competitive age, that could be the difference that makes your business come out on top.
Ahead of 2020, DW asked a whole host of individuals working in the tech space for their thoughts and observations on the business trends and technology developments likely to be major features of the year ahead. No one gave us the kitchen sink, but otherwise it seems that we’ve uncovered a broad spectrum of valuable, expert opinion to help end users plan their digital transformations over the next 12 months or so. Part 7.
2020: the era of distributed cloud services, according to Ankur Singla, CEO and founder of Volterra:
Where applications reside is changing. Indeed, in its 2018 report ‘How Edge Computing Redefines Infrastructure,’ Gartner predicted that, “by 2022, more than 50 percent of enterprise-generated data will be created and processed outside the data centre or cloud.”
Digital transformation programmes, industrial automation initiatives and the wide scale deployment of IoT devices in public safety, retail locations, power plants, automotive, and factories are all driving this trend. However, all this innovation creates new challenges for enterprises, which must now find a way to manage all these increasingly distributed apps.
The first challenge enterprises face is around connecting their distributed applications and clouds together, and doing that securely. Carrying all this data created at the edge to a centralised cloud or data centre – where it can be processed and stored – can be prohibitively expensive. It can also clog bandwidth, killing the performance of other business-critical apps such as voice or ERP, which must share the same infrastructure and connectivity. Secondly, some of these distributed apps – live video surveillance data or machine control data, for example – are latency sensitive. They can’t handle any sort of delay as their data is transferred to the cloud or data centre.
There’s also another challenge to consider. Enterprises aren’t just generating more data at the edge; they are also making use of multiple clouds, cherry-picking different environments that are optimised for particular applications or workloads. After all, compute-intensive deep learning applications have very different requirements from business process applications like ERP or HR management systems. Enterprises are also migrating to multi-cloud architectures to maximise availability and geographic compliance.
The result is enterprise applications are getting increasingly distributed, not just across the edge and the cloud, but also between multiple clouds. The differences and inconsistencies between these environments can make it a major challenge to efficiently operate applications across different cloud providers – a problem that is only set to get worse as the volume of most enterprises’ applications continues growing.
To counter this in 2020, expect to see new mechanisms designed to help enterprises manage all these siloed applications in a centralised, more consistent way.
One solution is to create and implement a ‘distributed cloud’. In essence, this is a consolidated cloud-native environment that spans all applications, regardless of where they are located: at the edge, within the traditional data centre, or in one or more clouds.
Under this model, the edge acts as a series of micro-clouds, while different central cloud environments – with all their nuances – can also be managed as a single, uniform entity. Enterprises can pick and choose which applications reside where, deploying, securing and managing them from a central console using a consistent set of services and policies.
This approach will bring many operational, security, and cost advantages. For enterprises, it will mean that they’ll be able to control all of their applications in a consistent, efficient and centralised way – adhering to the same security policies and benefitting from the same cloud-native functionality – no matter where they physically sit.
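Purely as an illustration of the ‘single console’ idea, the sketch below models a handful of hypothetical sites – an edge location, a public cloud region and an on-premise data centre – and renders the same baseline policy set for each. The site names and policy fields are invented for the example.

```python
# Illustrative only: how a "single console" view might model applying one
# consistent policy set across edge micro-clouds and central clouds.
# Site names and policy fields are made up for the example.

COMMON_POLICIES = {
    "tls_required": True,
    "log_retention_days": 30,
    "allowed_registries": ["registry.internal.example"],
}

SITES = [
    {"name": "edge-factory-01", "type": "edge"},
    {"name": "aws-eu-west-1", "type": "public-cloud"},
    {"name": "on-prem-dc", "type": "data-centre"},
]

def render_site_config(site: dict) -> dict:
    """Every site gets the same baseline, regardless of where it runs."""
    return {"site": site["name"], "type": site["type"], **COMMON_POLICIES}

for site in SITES:
    print(render_site_config(site))
```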
In short, wherever enterprise apps need to be, in 2020, the cloud will start to be there too.
Predictions for cloud technology and data demands, courtesy of David Friend, CEO & Co-Founder, Wasabi:
Companies moving to the cloud is as inevitable as the adoption of any essential infrastructure that has come before. In the same way people don't generate their own electricity anymore or run their own in-house phone networks or travel departments, companies have come to realise the benefits of outsourcing their data storage and computing. And with cloud technologies now beyond the early-adopter stage, data centre providers can expect a period of consistent year-on-year growth throughout the next half decade.
The rollout of 5G networks and related technologies is a key contributor to this growth. The associated increase in last mile bandwidth will enable industries with the highest data storage demands - such as automotive, scientific research (genomics), surveillance and video production - to grow.
Cars are already capturing troves of data via in-built cameras, radar, light detection and ranging (LIDAR) and other types of sensor – Teslas, for example, already generate gigabytes of data every day. At present there isn’t enough bandwidth to send all that data back to manufacturers, but 5G will change this.
Video production is another huge source of data, with more people using their smartphones as mini video production studios that have huge data storage requirements. The need for quick iteration in media and entertainment production is key, and VFX (visual effects) and content producers who need to create, iterate and feedback in real-time throughout the production cycle, can only do so by leveraging cloud storage. Data forecasting consultancy Coughlin currently estimates that cloud storage for the M&E industry alone will grow about 13.3x between 2017 and 2023 (from 5.1 to 68.2 exabytes). Trends like these explain why we’re seeing cloud storage becoming essential.
As a result of cloud storage adoption and the proliferation of cheap, fast and reliable wireless networks, we’ll see an increase in IT service outsourcing in 2020. With computing no longer tied to specific locations, services such as backups and media and entertainment editing demand a more rapid evolution of the cloud. Unless you are physically in the business of running a data centre, there is no logical reason to burden yourself with this extra hassle (in the same way you wouldn’t supply your own water or electricity).
Lastly, within the evolution of the cloud comes the evolution of multi-cloud, an increasingly popular approach as companies grow mindful of the dangers of vendor lock-in. The big three cloud providers (Google, Amazon and Microsoft) want to own all of the cloud, and will continue to battle it out. But considering their proprietary APIs and expensive egress fees, they will likely start to lose some of their market share. Meanwhile other providers are picking off smaller pieces of the cloud infrastructure and providing specialist services that will eventually force the hyperscalers to adopt standards of interoperability.
In the context of the growth of 5G and increase in demand for data storage facilities, the case for multi-cloud solutions will be made more and more in businesses that value strategic flexibility.
2020 tech trends in procurement, from Pete Kinder, chief technical officer at Wax Digital:
Procurement is one of the key business functions that benefits from digital technology. eProcurement systems that digitise this largely manual function enable businesses to manage their buying process and supply chain far more effectively, making spending activities quicker and more efficient.
Here are four trends that we think will be important in 2020:
· Multi-layered catalogues: eProcurement systems are moving away from only using a single catalogue with pre-approved suppliers. Instead, web scraping will help software to track online retailers such as Amazon to find the cheapest possible deal among a bigger pool of businesses.
· Artificial intelligence: Traditionally, eProcurement systems have processed data and informed users of key statistics, giving them the information to make decisions. This will change as big data and artificial intelligence enable the software to learn from data and patterns and base its own decisions on them. This includes the procurement system anticipating the business’ demand for a certain product based on factors such as spend history, seasonality and environmental conditions (a simple sketch of this idea follows the list below).
· Integration: When it comes to integration, eProcurement systems are usually led by other business processes and systems. But that will change as technologies such as Integration Platform as a Service (iPaaS) become more popular. Procurement won’t only link to the finance system, it will become embedded into a broad range of business processes such as planning and budgeting, supply chain forecasting and human capital management.
· Commercial planning and delivery: eProcurement has the potential to steer the business’ commercial delivery too. There are benefits of governing the organisation around factors such as real time supply chain risks, as well as financial and market forces and eProcurement can tap into this data and act as an advisor.
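As promised above, here is a deliberately simple sketch of the kind of demand anticipation described in the artificial intelligence trend: recent spend history combined with an assumed seasonal uplift. All figures are invented for illustration; a real eProcurement system would use far richer models and data sources.

```python
# A deliberately simple sketch of demand anticipation: recent purchasing
# history plus an assumed seasonal uplift factor. Figures are invented.

from statistics import mean

monthly_units_bought = [120, 135, 128, 140, 150, 145]   # last six months
seasonal_uplift = {"december": 1.4, "january": 0.8}      # assumed seasonality

def anticipated_demand(history, month: str) -> int:
    baseline = mean(history[-3:])                # weight recent months
    return round(baseline * seasonal_uplift.get(month, 1.0))

print(anticipated_demand(monthly_units_bought, "december"))  # roughly 203 units
```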
2020 will be all about making AIoT a reality, according to Mark Lippett, CEO, XMOS:
2020 will be the year that IoT and edge AI come together to form AIoT, the Artificial Intelligence of Things, unlocking a whole host of opportunities and applications.
IoT is well known as the phenomenon of machines that can ‘talk’ to one another; edge AI, while currently less hyped, is no less exciting. Embedded artificial intelligence holds the key to maximising the potential of the IoT. By localising intelligence, devices can collect and analyse swathes of data quickly, and then act on the result without the need to send it elsewhere.
Processing data in this way outperforms what’s possible with the cloud-based alternative, and paves the way for a more streamlined, private and cost-effective IoT environment where futuristic technologies like medtech and smart cities can become a reality.
Head (not) in the clouds
But without a platform that can economically deliver the performance AIoT needs, these applications are a pipe dream. The most common obstacle here is the IoT’s current reliance on the cloud.
A centralised cloud model performs adequately for many enterprise applications, but cloud networks pose systemic security risks. Furthermore, they aren’t designed to scale to the requests of tens of billions of AIoT devices – eventually latency and poor connectivity would render devices impossible to use.
The trend towards processing data at the edge — and often on the device itself — is set to continue in 2020. Integrating processors capable of acting on collected data before it leaves the device represents a major performance and security upgrade.
Many relevant applications cannot afford high-end CPUs to enable this – new approaches are required. 2020 will see a growing appetite for chips that can enable high-performance edge processing for the AIoT at an acceptable price to the manufacturer.
2020 vision
New classes of hardware solution will serve as the platform for a wide variety of use cases. For example, the next generation of smart speakers could support connected healthcare, monitoring biometric data — heart rate, breathing rate, temperature – to predict and prevent an illness before the user is even aware.
As we grow more comfortable with such devices, the rise of the smart appliance will incorporate similarly accessible tech. Voice-controlled thermostats, white goods and security locks could give you total control of your home from the second you arrive at your front door.
Extrapolate this further and smart technology will become ubiquitous in our smart cities. This isn’t just smart homes plus; imagine sensors capable of delivering you to the only empty parking space in a busy city, or of conserving energy by turning off streetlights that aren’t needed.
2020 will be another pivotal year in the evolution of our future smart environments. The success of AIoT is dependent on driving innovation in many areas, but its foundation is in new and economical approaches to delivering processing performance at the very edge. Success is not just about delivering smart, it is about delivering value.
Keeping your data centre clean is something nobody thinks about, or in most cases even notices you do, until you don’t do it.
By Steve Hone, CEO and Co-founder, The DCA
This month’s theme and the articles kindly submitted by DCA members all focus on this very subject and clearly highlight both the benefit and value of doing something well along with the potential cost and risk to one’s business of choosing to cut corners or do nothing at all.
We have a report from Future Hygiene on how to mitigate risk and enhance IT ROI. Toby Hunt of Guardian Water Treatment highlights the fact that contamination can occur within a system, not just on it. 2bm, one of the UK’s leading companies for bespoke design, build and refurbishment, provides an insightful client case study on why keeping things shipshape is so vital. Finally, Mike Meyer from Critical Facilities Solutions (CFS) wraps things up by explaining exactly why flawless performance can only come from a flawless facility.
Some time ago the DCA formed an Anti-Contamination and Cleaning Special Interest Group (SIG), chaired by Gary Hall from CFS. The group now has 14 participating members from several leading cleaning and filtration companies, all of whom work in the data centre sector. The group is working on the latest edition of the Anti-Contamination Report for publication and release in early 2020. “There is never a bad time to clean house”, so if you or your organisation would like to be a contributing partner in this group, please contact the DCA team about becoming a member; the best ideas come from first-hand experience and collaboration.
On the subject of insight and collaboration, it’s worth noting that November is always a crazy month, not just in terms of sales but also in terms of the sheer number of conferences and events that take place. The November conference season is an exciting and busy time for the Data Centre Trade Association, as the DCA is an official event partner for many of these conferences, including those in Frankfurt, Paris, London and Ireland. While the sales teams are busy closing deals for the end of the year, organisations should already have their sights firmly set on the year ahead; this is where the real value of attending these conferences lies, as they can assist with strategic planning.
These days we seem to spend far too much time looking down at the devices in the palms of our hands. I am sure that being continually fixated on that tiny screen can’t be good for you. Not only does it result in you walking into lamp posts, it also means we seldom take the time to look up and see the bigger picture. Industry events provide business owners and decision makers with a unique opportunity to disconnect from their smartphones for a short while and reconnect with hundreds of like-minded professionals who all share the same challenges. Many of these conferences are free to attend, and details of all the events and conferences can be found on the Data Centre Trade Association website, www.DCA-Global.org/events.
Next month is the last publication of the year and we have decided to re-publish the most popular articles from 2019, as voted for by our readers, so look out for these and for the release of the confirmed themes for 2020.
Mike Meyer, Managing Director at Critical Facilities Solutions UK
As home to mission-critical equipment, it’s easy to see why you’d want your facility to be as contaminant-free and well maintained as possible. Yet, even with the necessary procedures in place, contamination still occurs. Everyday activities such as running cooling systems, employees opening and closing doors, and installing new equipment all introduce various levels of contamination.
Maintaining your data centre should follow a "minimise, regulate and maintain" approach to contamination control and cleaning, but how do you find a cleaning schedule and programme that is in line with your operational goals?
Hardware manufacturers such as IBM, EMC and Dell recommend you maintain your environment to ISO 14644-1:2015 Class 8, utilising professional data centre cleaners. In fact, failing to do so may void your warranty where preventable airborne contamination is found to be a cause of device failure. ASHRAE recommends an annual sub-floor clean and quarterly floor and equipment surface cleaning. Many of the ‘standards’ and ‘recommendations’ seemingly contradict one another.
Nevertheless, a clean data centre is essential… and here’s why. Airborne contaminants are the unnoticed threat: the source (or sources) isn’t always easy to identify, and harmful build-up can occur over the course of days, months or even years.
You might not see the source, but airborne and contact-based contaminants build up on equipment. Even solid-state storage media can be compromised by build-up on heat sinks, bearings and vents. There’s no such thing as an airproof data centre. Therefore, contamination from airborne sources is — for all intents and purposes — unavoidable. Electrostatic dust, corrosive oxides, volatile organic compounds, solvents and other contaminants put equipment at risk. Even seemingly mundane, everyday sources of contamination such as pollen, dust, hair and carpet fibres can prove problematic.
Periodic indoor air quality testing, otherwise known as air particle testing, has long been the best, and only, method for ascertaining and confirming compliance with the ISO standard for machine room and data centre air cleanliness. The faults with this method are twofold: firstly, it is a snapshot in time; secondly, it only measures contaminants that are airborne, not those that have already settled.
There have been significant new advancements in the equipment and methods used to test air quality and the volume of particulate in the air. At Critical Facilities Solutions we are introducing new methods of testing. While we still use hand-held, snapshot air particle testing where necessary and relevant, we are also installing robust, cost-effective alternatives that measure the air quality on a constant or predetermined basis. We’ve coined the phrase Constant Air Monitoring. The product and system we supply and install can operate as a standalone system or be integrated into any BMS system.
While Continuous Particulate Air Monitors (CPAMs) have been used for years in nuclear facilities to assess airborne particulate radioactivity (APR) and in pharmaceutical cleanrooms to measure air particulate (AP), they have typically been extremely costly to install in other environments, especially when taking the test parameters of the ISO standard and integration into data centre systems into account.
Settled contaminants cause decreased performance and thermal clogging. When airborne or touch contaminants build up on the surface of equipment, this is known as "settled contamination." These tiny particles make their way onto (and into) delicate equipment, resulting in thermal clogging, data loss and performance bottlenecks due to thermal throttling. Contamination-related failures can even occur with solid-state drives (SSDs).
Densely packed racks are more susceptible to contamination. Servers and drives continue to shrink and become ever more compact. This is great for reducing floorspace, but it also means equipment is packed in tightly, creating opportunities for settled contaminants to go unnoticed. It’s important to note that the more contamination accumulates on equipment and in air filters, the less efficient equipment becomes, leading to performance bottlenecks and wasted energy, largely down to the additional cooling required, which in turn leads to further environmental impact.
On to a lesser-known risk: for those who have experienced it first-hand, the threat of zinc whiskers — and how they cripple essential equipment — is very real. Several factors are making this once-rare phenomenon all the more common.
So, what are zinc whiskers and how do you know if your server room or data centre is at risk?
Zinc whiskers are microscopic, crystalline slivers of zinc that form through corrosion. Whiskering can originate from any number of sources; flooring panels, ductwork, ceiling hangers, server racks, electrical components and virtually any source galvanised with this brittle metal — even bolts, nuts and washers may exhibit signs of whiskering.
While it is now fairly well understood how whiskering occurs, tracing the source isn’t always so easy. For one, these "whiskers" are incredibly light, which means they can easily travel through HVAC systems and subfloor voids.
These metallic, fibre-like "whiskers" are highly conductive and can bridge circuit board traces, corrupt data, compromise hardware and cause extensive downtime. PCBs and other pieces of electronic equipment (servers, SSDs and so on) are all at risk of being affected by zinc whiskers.
To neutralise the risks associated with zinc whiskers, Critical Facilities Solutions offers a complete solution.
Getting started with professional cleaning doesn’t have to be difficult. If you’re new to the concept of hiring specialists to clean your critical facility, a professional data centre cleaner can walk you through the entire process, explaining each step and making recommendations along the way. Since no two facilities are alike, it’s highly recommended that a thorough inspection and survey be commissioned before you set out to create a service profile and schedule.
Following a consultation, it’s highly likely that a full deep clean will be recommended as the starting point for any ongoing maintenance cleaning (especially if your facility has never received a professional service, or if there has been a lapse in cleaning). A deep clean may include every square inch of the data hall: equipment surfaces as well as flooring, stringers, pedestals and the sub-floor voids. These aren’t "precautionary steps" but essential parts of preventing recontamination and ensuring your facility is as dust- and contamination-free as possible.
Selecting the best ‘starting point’ for your data centre’s maintenance regime can prove challenging. The Data Centre Alliance (DCA), the Data Centre Trade Association, has, in consultation with the leading UK data centre cleaning authorities and companies, produced and distributed an Anti-Contamination Guide which focuses on overall best practice and should be considered a great resource in determining the starting point for any maintenance schedule.
Toby Hunt, Key Account Director at Guardian Water Treatment
What has water got to do with data centres? A factor in almost all types of data centre cooling, water is an essential part of the operational jigsaw puzzle - every gigabyte of outbound data has a water footprint of up to 250 litres. Toby Hunt, of Guardian Water Treatment, discusses the ways in which we can reduce the volumes of water used in data centre processes, while ensuring that these essential powerhouses of the modern world avoid downtime and improve efficiency.
Data is an inescapable factor of modern life. With our phones smart, our homes becoming increasingly automated and every type of business relying on computing and the internet in one way or another, we need more data than ever before. Disconnection simply won’t be tolerated.
The next big thing set to further improve our connectivity is 5G, which will rely on a larger fleet of smaller data centres to deliver on its promise of ultra-low latency and high levels of interconnection, pushing the move towards edge computing and seeing data centres pop up around towns and cities throughout the UK.
While essential, the downside of data centres is the amount of energy they consume. The carbon footprint of this industry will soon overtake that of aviation, with much of this energy used for cooling – 38% of a data centre’s running costs go towards this task. A constant temperature of around 20 to 24 degrees is needed to prevent equipment from overheating; if it is not maintained, servers will start to perform inefficiently and unreliably. Eventually, if temperatures are not kept at safe levels, services will shut down completely.
Cooling, therefore, is one of the most significant challenges facing the data centre sector, and in most instances this cooling uses water, with hundreds of thousands of gallons of water used by the UK’s bigger centres in a single day.
Water worries
For data centre owners and the teams looking after their safe operation, there are two main issues with water that need to be addressed. In closed-circuit HVAC systems, such as those usually found in support of air cooled data centres, it is essential that corrosion is kept at bay; pipe fouling can lead to reduced efficiencies, as a system has to work harder to do its job. If left unchecked for too long, breakdown and downtime may follow, leading to great expense, disruption and reputation damage.
Secondly, there is the issue of water wastage. As mentioned at the start of this article, data centres use a lot of water, particularly the large ones which tend to rely on cooling towers. With sustainability in mind, reducing water consumption should be a priority, one which will save money long term.
Out with the old…
Closed-circuit water systems, commonly found in smaller data centres relying on air cooling (a popular choice in edge computing), have historically relied on decidedly old-fashioned methods when it comes to checking water condition and preventing corrosion.
Despite the high-tech nature of the data centre, the industry in charge of water management and treatment has remained unchanged for many years. Sampling, where a small amount of water is sent to a lab for testing, has been the main way of checking whether water is likely to cause corrosion. This method has a number of issues: results take days, if not weeks, to come back, by which time conditions may have changed; sampling focuses on bacteria, which, if present, may be a case of too little too late; and the results themselves only represent a snapshot of the past.
Relying on something which is neither instantaneous nor free of flaws is at odds with the fast-paced, technology-driven world of the data centre. If a system is left to foul due to corrosion, energy efficiency will be compromised, increasing carbon footprints and exacerbating the risk of expensive repairs and downtime. In Guardian’s opinion, sampling alone just isn’t fit for purpose in this environment.
...And in with the new
So, what is the answer? We believe that real-time, 24/7 water monitoring is the way forward, a method being adopted by many of our data centre clients. By taking live readings on a wide range of parameters, including temperature, pressure, pH and inhibitor levels, building owners and maintenance teams can have a true picture of what’s actually going on, allowing them to act quickly when conditions change, preventing expensive repairs and breakdown.
Importantly, by using Hevasure’s real-time monitoring system, dissolved oxygen is checked for – something that sampling fails to identify. Oxygen is the key indicator when it comes to corrosion potential, either directly or by creating the conditions for bacteria to thrive, which can lead to Microbial Induced Corrosion (MIC). By detecting oxygen levels as soon as they rise, problems can be nipped in the bud before they even start.
Equally, this approach can actually lead to reduced maintenance requirements, fewer call-outs and a reduction in flushing and chemical dosing. When changes in a system occur, following routine maintenance for example, disruption in water condition, such as an oxygen spike, is normal. 24/7 monitoring can track how quickly conditions return to base levels, meaning intervention can be avoided if this happens within a reasonable time frame. A sample taken following disruption would most likely show issues that lead to flushing and dosing.
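To illustrate the principle (and only the principle), the sketch below checks a handful of live readings against thresholds, flags an oxygen spike and recognises when conditions have returned to baseline. The parameter values are placeholders rather than Hevasure's actual limits.

```python
# Illustrative sketch: checking live readings against thresholds, flagging an
# oxygen spike, then watching for a return to baseline. Threshold values are
# placeholder assumptions only.

BASELINE_DISSOLVED_O2 = 0.5   # assumed baseline, arbitrary units
O2_ALERT_THRESHOLD = 2.0

readings = [
    {"hour": 0, "dissolved_o2": 0.5, "ph": 8.2, "pressure_bar": 2.1},
    {"hour": 1, "dissolved_o2": 3.4, "ph": 8.1, "pressure_bar": 2.0},  # works on system
    {"hour": 6, "dissolved_o2": 0.6, "ph": 8.2, "pressure_bar": 2.1},  # back to normal
]

for r in readings:
    if r["dissolved_o2"] > O2_ALERT_THRESHOLD:
        print(f"hour {r['hour']}: oxygen spike ({r['dissolved_o2']}) - investigate ingress")
    elif abs(r["dissolved_o2"] - BASELINE_DISSOLVED_O2) < 0.2:
        print(f"hour {r['hour']}: conditions at baseline - no intervention needed")
```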
For responsible parties, such as the facilities management team, this ability to pinpoint and track disruption and resolution provides peace of mind, avoiding misplaced blame. In one example, we prevented an FM team from being charged £300,000 for an issue that was believed to have happened following handover. By using Hevasure, we could see that the problem was in fact caused by another party whose works caused major oxygen ingress. We could identify exactly when the issue occurred, easily tracking back to the situation that caused it. Historically, the FM team would have had to foot the bill.
Reuse and recycle
Another potential benefit of real-time monitoring is that water can be saved by avoiding unnecessary flushing – a process which uses gallons of water and chemicals. In larger-scale data centres reliant on cooling towers there are even greater savings to be made by recycling process water and utilising rainwater.
Water recycling systems can recover around 60-70% of bleed water, as well as collecting rainwater. If 65% of the cooling tower bleed-off can be recovered using an 8m³/hr recycling system, up to £120,000 can be saved every year, quickly recouping the £70,000 installation cost.
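Taking those quoted figures at face value (and assuming the installation cost is £70,000), the payback works out at roughly seven months:

```python
# Worked version of the payback figures quoted above, taken at face value.
annual_saving = 120_000        # up to £120,000 saved per year
installation_cost = 70_000     # assumed installation cost

payback_months = installation_cost / annual_saving * 12
print(f"Payback in roughly {payback_months:.0f} months")   # about 7 months
```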
To ensure purity, a combination of media filtration, ion exchange and membrane filtration means that a high percentage of solids are removed, as well as bacteria, algae and viruses. The result is water that is often purer than when it originally entered the cooling system.
The ‘scale’ of the problem
Another major issue in cooling towers is scale. Just 1.6mm of scale can reduce thermal transfer by up to 12%; left unchecked, it can block heat exchangers, increase pumping costs, and lead to corrosion and Legionella risk. As cooling towers are open to the elements, bacteria become an issue for humans as well as for the plant itself, so preventing their proliferation is doubly important. An outbreak of Legionnaires’ disease could lead to data centre closure, hefty fines and, crucially, risk to human life.
A simple fix is to install a water-softening plant, which can reduce the labour costs associated with de-scaling by around £20,000 per month; the investment can be recouped in less than a year.
Sympathetic growth
We need more data; it’s an unavoidable fact of the modern world. We also need to make the way we live more sustainable, so all industries must improve their carbon footprints and prevent wastage. In the data centre sector, water monitoring and recycling are a key part of this process, activities which will ultimately improve reliability and efficiency while reducing the need for expensive repairs. With more data centres popping up throughout the UK, there is a huge opportunity to change the status quo and move away from practices which are at odds with a modern, technology-driven world.
Data Centre Cleaning and Decontamination: Minimise Downtime and Enhance your IT Return on Investment
Author: Natalie Coleman, Owner and Director at Future Hygiene Services
With IT departments often facing an uphill battle when it comes to arguing the case for more spend compared with other areas of a business’s budget (advertising, public relations, research and development, etc.), it is essential that every aspect of IT investment is maximised and protected.
As defined by the GTA (1), an IT investment is meant to be all-inclusive of an information technology solution, in that it can consist of a single project or of several logically related projects. The same authority defines downtime as the time when a configuration item or IT service is not available during its agreed service time.
It therefore stands to reason that every minute your IT Solution is unavailable, the investment is compromised.
You may be asking yourself: What is the relevance of IT Cleaning in the myriad of operational and cost complexities associated with running my Data Centre or enterprise computer network?
An uncleaned and contaminated data centre will slowly build up dirt and debris until it starts to impact equipment reliability and customer perception, and eventually creates a fire risk, gravely undermining your IT investment.
Invariably, your hardware and environment may also be exposed to further risks, and all of the above is dwarfed in comparison to the reputational risk!
The Relative Cost of Data Centre Cleaning
According to recent research, the average value per square metre of high-quality Data Centre space in the U.K. is approximately £3,300 per annum. (2)
Maintaining a clean computer room or data centre is often overlooked and regarded as an excessive expense for a seemingly unnecessary or “nice to have” service. However, let us consider the average cost of an annual data centre cleaning project:
Whilst every data centre environment is unique and a full survey should be carried out prior to compiling costs, it is safe to say that the indicative market rate for computer room cleaning is around £17.00 per m² (3). Applied to an average space of 600m², with a potential annual revenue of just under £2 million, the cost of the annual clean equates to £10,200: a mere 0.51% of the value of the data centre space. In every good sense of the phrase, this is rather eye-opening!
© Capitoline Ltd 2019
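For readers who like to see the working, here is the same comparison as a quick calculation using the figures above:

```python
# The comparison above as a quick calculation, using the quoted figures.
space_m2 = 600
value_per_m2 = 3_300          # £ per m² per annum
clean_rate_per_m2 = 17.00     # £ per m²

space_value = space_m2 * value_per_m2         # £1,980,000 - "just under £2 million"
annual_clean = space_m2 * clean_rate_per_m2   # £10,200
print(f"Annual clean is {annual_clean / space_value:.1%} of the space's value")  # ~0.5%
```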
Not only is the above comparison quite refreshing in today’s cost-competitive market but, as mentioned earlier, we would like to highlight the implied and far greater cost of failing to invest in a well-managed preventative cleaning and decontamination programme.
Capitoline, probably the most experienced independent data centre consultancy, offering data centre audits, a wide range of data centre certifications and data centre design, has seen numerous operational problems arising from dirty computer rooms and other technical areas. Two of its many case studies are illustrated below:
Case study 1 – Europe: A computer room was extremely dusty, and nobody gave much thought to how problematic this could be. A very minor fire set off the fire suppression gas. The resulting dust and grit thrown up entered dozens of items of IT equipment; €1.8 million of disk drives were destroyed and the business was disrupted for six weeks.
Case study 2 – Europe: An electrical plant room was left very dusty, made worse by the storage of old cardboard boxes and papers. Cleaning was not seen as necessary. The filters on the UPS (Uninterruptible Power Supply) air intakes became so clogged that the UPS overheated and caught fire. The entire electrical room was destroyed, and the business ran with no electrical backup power for two months.
In addition, in a recent statement, a multinational financial services company, said: "We’re experiencing system issues due to a power shutdown at one of our facilities, initiated after smoke was detected following routine maintenance. We’re working to restore services as soon as possible. We apologise for the inconvenience."
Full story at https://stocknews.com/news/wfc-wells-fargo-wfc-confirms-outage-on-mobile-app-and-online/
https://www.datacenterdynamics.com/news/widespread-wells-fargo-outage-blamed-data-center-fire/
The Solution
The best way to eliminate downtime is to prevent problems before they start.
Ensure that the professional service provider you select not only has a thorough understanding of the technical aspects of safe cleaning methods in your critical area, but also a comprehensive understanding of your business, your investment and the dynamics within your IT environment.
It is crucial that a cleaning and decontamination regime is included in your annual preventative maintenance programme. With clear communication and an understanding of your periodic objectives, your service provider should be able to schedule the critical cleaning at a time which is least disruptive, but most appropriate to your data centre business activities.
New projects must be handed over with a certificate of cleanliness as part of the O&M (Operations & Maintenance) documentation and this must also happen after any building works affecting the computer room.
In addition, a reputable service provider should, as standard procedure, provide you with evidence and supporting documentation that international standards of cleanliness have been achieved. Certification to ISO 14644-1:2015 Class 8 is essential, as this also validates the care and maintenance of your server and telecommunications equipment, which ultimately may have insurance implications.
Preserving the integrity of your IT environment has never been so important, and technological advancements are setting the pace for securing your business’s growth. At the heart of today’s modern business is that out-of-sight IT backbone, and the end products of seamless IT delivery would appear quite miraculous to the untrained eye. However, with an understanding of the stakes at play in these business- and mission-critical areas, a technical cleaning service provider who understands the risks and does not compromise on training can help avoid the proverbial “downtime sting” before it has time to strike.
References:
(1) Glossary of Terms and Definitions Supporting Policies, Standards and Guidelines for Information Technology and Information Security (Georgia Technology Authority)
(2) “According to 2016 statistics from Telecoms Pricing published in January 2017 the average price per rack in London and the South East is £1000 per month.”
“While the UK Data Centre market remains the largest European Data Centre market by space and power the number of data centres and providers isn’t driving down costs. The average UK market pricing is now higher than Germany, France & the Netherlands with the average UK Data Centre retail rack space pricing currently at €939 (£665) per month, but there is a wide spread of pricing available in the market, ranging from Euro €500 (£354) per month up to Euro €1,600 (£1133) per month – with the highest pricing available in the London area.
TCL forecasts that UK average Data Centre pricing will increase by around three per cent over the five-year period from the end of 2015 to the end of 2020. Key UK Data Centre providers have been reducing their pricing and users are now faced with a wide range of choice from regional Data Centre clusters using different price points.” (Tariff Consultancy 2015)
(3) Cost to carry out a full annual deep clean, based on fully occupied DC space of 600m². All costs inclusive of enhanced wage rates, materials, equipment, air particulate reporting and management/supervision. Please note that there may be marginal variation in this rate, dependent on the size and any extraordinary features of the environment.
Comment on Data Centre Cleaning and Case Study
Whilst Nottingham-based 2bm is widely regarded as the UK’s leading company for bespoke design, build, refurbishment and upgrade of server rooms and data centres, they also have a growing reputation for tailored audits and health checks for all types of data centre.
Neil Roberts, sales director of 2bm, commented: “As there is often nothing obvious or visible to the naked eye that needs immediate attention, clinical cleaning is an area of maintenance that is consistently overlooked by many data centre operators.”
“Clinical cleaning schedules are vital to not only eliminate dirt but also discover the potential sources of contamination and the depreciation and malfunction of company assets. Modern-day computer equipment is often sensitive to environmental conditions, highlighted in recent studies that show 75% of storage and hardware failures are a result of factors in the environment,” added Neil.
Whilst temperature and humidity accounted for most of the issues, carbon and concrete dust, together with zinc whiskers are often stated as key areas of concern.
Clinical cleaning, which involves the removal of dust particles, static and other contaminants, also incorporates the use of anti-static solutions within rooms. Regular clinical cleaning of a server room helps prevent the build-up of static electricity, dust and contaminants which cause overheating, reduced filter life and additional wear to components.
Certain hardware manufacturers actually specify that equipment should be housed in a clean environment to ISO14644/8 standard, with some warranties deemed void if data centres or server rooms are not regularly cleaned and decontaminated.
The clinical cleaning process includes ceiling voids, subflooring, raised floor surfaces, wall surfaces, frames and windows, high-level trunking, ducting, tray work and lighting systems. It also includes an internal clean of cabinets and servers (where required), cabinet exteriors, wall-mounted equipment and CRAC units.
Neil Roberts said: “With simple things like dust and dirt having a major impact on your operation and, ultimately, the efficiency of a data centre, a regular schedule of preventative clinical cleaning is essential. At 2bm, we offer flexible working hours 24 hours a day, seven days a week, which means there are no interruptions to a normal working day or the running of a data centre.”
“It’s possible to increase the lifespan of server equipment through proper cleaning procedures that improve air quality. The frequency of cleaning depends on the size and type of data centre or server room,” added Neil.
2bm’s services have been designed to work around the day-to-day operations of a data centre, ensuring a business can run as usual. They also include an initial audit to establish the extent of any problem and recommend a solution which varies from a one-off deep clean to ongoing preventative cleaning on a monthly, quarterly or annual basis.
The key is to ensure that the temperature and airflow remain constant; removing mould, dust, pollen and other contaminants not only helps maintain flooring, hardware and equipment but also reduces the build-up of static electricity.
2bm provide data centre and server room audits in line with corporate governance and industry best practice, including safety, health, environment, security, quality and energy. They also offer auditing capability for a wide range of International and British Standard accreditations as well as providing ongoing support to assist with meeting set targets.
As an endorser of the EU Code of Conduct, 2bm is recognised as an organisation that promotes the Code’s best practice and has been implementing best practice for many years and understands the need to develop industry-wide guidelines for designers and operators.
The Code of Conduct can be used as a specific reference document for existing facilities as well as for the design of new facilities. With an unrivalled knowledge of the EUCoC, 2bm can work with customers to bring them in line with the Code as well as to become participants in the scheme.
“Our wider passion for innovation in our industry drives us to be engaged with emerging technologies and use them to achieve the very best results for our clients,” commented Neil Roberts.
“Every data centre is different in terms of its content and environment, so we pay attention to the finest details to make certain of optimum outcomes.”
“Our team thrive on offering solutions which improve efficiency and reduce running costs and importantly are within budgets. We only recommend an action which is right for a business,” said Neil.
CASE STUDY: CLINICAL CLEANING
2bm has worked in partnership with ExcelRedstone for almost ten years within London’s financial sector, and today is regarded as the go-to company for all of ExcelRedstone’s clinical cleaning projects.
Over three decades, ExcelRedstone has become one of the foremost companies in delivering IT infrastructure and support services to clients in some of the UK’s landmark buildings and offices.
ExcelRedstone makes spaces, which are critical to organisations, work smarter and harder. Its smart building technology solutions are developed in partnership with customers, from the design phase through to operation, implementation, delivery and management.
2bm was asked to provide clinical cleaning for a prestigious fit-out in the financial sector within the City of London. Complex and thorough cleaning was needed in in-service equipment rooms to ensure no dust and dirt remained.
The scope of works for cleaning the data centres and comms rooms included intensive cleaning of floors and walls, together with everything above the line of lighting, such as cables, trunking, pipework and fire/smoke detectors.
Left unaddressed, the build-up of dirt and/or static discharge could potentially have led to the loss of critical data. By improving the circulation of the air, the computer equipment will function more efficiently, contributing to long-term energy cost savings.