Many years ago, Volkswagen advertised a new vehicle which, if memory serves, could do 800 miles on one tankful of fuel. Now, even though closer inspection of this claim revealed the ‘laboratory-like’ conditions under which this amazing feat had been achieved, all these years later, I can still remember the claim, and it still impresses me.
At a time when gaslighting, at least in the world of politics, appears to be the new normal, I sincerely hope that the business world does not follow suit. The reason I bring this subject up? Well, much of the news I receive is in the form of survey-based research. And the headline findings of many such news releases are arresting, to put it mildly. And many of them contradict one another. So, one report will ‘reveal’ that, say, 90 percent of all businesses are already well on their way to digital transformation. Another might suggest that digital transformation projects are absent from a similar number of organisations. And then there will be the reports that either suggest digital transformation projects are, or are not, meeting expectations.
Look closely, and there’s often a correlation between the research findings and the company which has carried out and published them. If your business is persuading everyone that the flexible, hybrid working model is the future, you won’t be publishing survey results that suggest everyone wants to go back to the office. Similarly, if your technology solution requires everyone to be in the office, you’ll make sure to publish statistics that downplay the working from home trend.
All of which leads me to conclude that research surveys are far from worthless – indeed, many do uncover some fascinating trends and provide much food for thought; but they do come with a slight warning: be aware of where they come from. Just as the daily newspapers push their own right/left (not much interest in the middle ground these days!) political views with carefully crafted articles, so vendors are not immune from lobbying potential customers with carefully selected, perhaps incomplete, data sets. So, read as many surveys as you can, and you should end up with a balanced view!
Fifty-five percent of CIOs plan to increase their total number of full-time employees (FTEs) in IT over the course of 2021, according to a recent survey from Gartner, Inc. They will predominantly focus staffing growth in the areas of automation, cloud and analytics platforms, and support for remote work.
“The critical role IT played across most firms’ response to the pandemic appears to have had a positive impact on IT staffing plans,” said Matthew Charlet, research vice president at Gartner. “The initial pessimism around the 2021 talent situation that many CIOs expressed mid-2020 has since dwindled.”
Among the CIOs surveyed, the need to accelerate digital initiatives is, by a large margin, the primary factor driving IT talent strategies in 2021. This is followed by the automation of business operations and increase in cloud adoption.
Overall, CIOs are much more likely to expand FTEs in newer, more-emerging technology domains. Growth in security personnel is necessary to reduce the risks from significant investments in remote work, analytics and cloud platforms. Data center, network, systems administration and applications maintenance are the most likely areas to see staffing decreases due to the shift towards cloud services (see Figure 1).
Figure 1. Expected 2021 IT FTE Change in IT Activities
Source: Gartner (March 2021)
“While CIOs plan to hire more staff in several areas critical to meeting changed consumer and employee expectations, most will not be able to meet their planned talent strategy goals without also upskilling or refocusing their existing teams,” said Mr. Charlet.
Global devices installed base to reach 6.2 billion units
The number of devices in use globally (PCs, both laptops and desk-based, plus tablets and mobile phones) will total 6.2 billion units in 2021, according to Gartner, Inc. In 2021, 125 million more laptops and tablets are expected to be in use than in 2020.
“The COVID-19 pandemic has permanently changed device usage patterns of employees and consumers,” said Ranjit Atwal, senior research director at Gartner. “With remote work turning into hybrid work, home education changing into digital education and interactive gaming moving to the cloud, both the types and number of devices people need, have and use will continue to rise.”
In 2022, the global device installed base is on pace to reach 6.4 billion units, up 3.2% from 2021 (see Figure 2). While the shift to remote work exacerbated the decline of desktop PCs, it boosted the use of tablets and laptops. In 2021, the number of laptops and tablets in use will increase 8.8% and 11.7%, respectively, while the number of desk-based PCs in use is expected to decline from 522 million in 2020 to a forecasted 470 million in 2022.
Figure 2. Installed Base of Devices, Worldwide, 2019-2022 (Thousands of Units)
Source: Gartner (April 2021)
Smartphone Installed Base Set for Upturn in 2021
User confidence is returning to the smartphone market. Although the number of smartphones in use declined 2.6% in 2020, the smartphone installed base is on pace to return to growth with a 1% increase in 2021. “With more variety and choice, and lower-priced 5G smartphones to choose from, consumers have begun to either upgrade their smartphones or upgrade from feature phones,” said Mr. Atwal. “The smartphone is also a key tool that people use to communicate and share moments during social distancing and social isolation.”
The integration of personal and business lives, together with a much more dispersed workforce, requires flexibility of device choice. Workers are increasingly using a mix of company-owned devices and their own personal devices running on Chrome, iOS and Android, which is increasing the complexity of IT service and support.
“Connectivity is already a pain point for many users who are working remotely. But as mobility returns to the workforce, the need to equip employees who are able to work anywhere with the right tools will be crucial,” said Mr. Atwal. “Demand for connected 4G/5G laptops and other devices will rise as business justification increases.”
Security and risk management leaders must address eight top trends to enable rapid reinvention in their organization, as COVID-19 accelerates digital business transformation and challenges traditional cybersecurity practices, according to Gartner, Inc.
In the opening keynote at the recent Gartner Security & Risk Management Summit, Peter Firstbrook, research vice president at Gartner, said these trends are a response to persistent global challenges that all organizations are experiencing.
“The first challenge is a skills gap. 80% of organizations tell us they have a hard time finding and hiring security professionals and 71% say it’s impacting their ability to deliver security projects within their organizations,” said Mr. Firstbrook.
Other key challenges facing security and risk leaders in 2021 include the complex geopolitical situation and increasing global regulations, the migration of workspaces and workloads off traditional networks, an explosion in endpoint diversity and locations, and a shifting attack environment, in particular the challenges of ransomware and business email compromise.
The following top trends represent business, market and technology dynamics that are expected to have broad industry impact and significant potential for disruption.
Gartner Top Security and Risk Management Trends, 2021
Source: Gartner, March 2021
Trend 1: Cybersecurity Mesh
Cybersecurity mesh is a modern security approach that consists of deploying controls where they are most needed. Rather than every security tool running in a silo, a cybersecurity mesh enables tools to interoperate by providing foundational security services and centralized policy management and orchestration. With many IT assets now outside traditional enterprise perimeters, a cybersecurity mesh architecture allows organizations to extend security controls to distributed assets.
Trend 2: Identity-First Security
For many years, the vision of access for any user, anytime, and from anywhere (often referred to as “identity as the new security perimeter”) was an ideal. It has now become a reality due to technical and cultural shifts, coupled with a now majority remote workforce during COVID-19. Identity-first security puts identity at the center of security design and demands a major shift from traditional LAN edge design thinking.
“The SolarWinds attack demonstrated that we’re not doing a great job of managing and monitoring identities. While a lot of money and time has been spent on multifactor authentication, single sign-on and biometric authentication, very little has been spent on effective monitoring of authentication to spot attacks against this infrastructure,” said Mr. Firstbrook.
Trend 3: Security Support for Remote Work is Here to Stay
According to the 2021 Gartner CIO Agenda Survey, 64% of employees are now able to work from home. Gartner surveys indicate that at least 30-40% will continue to work from home post COVID-19. For many organizations, this shift requires a total reboot of policies and security tools suitable for the modern remote workspace. For example, endpoint protection services will need to move to cloud delivered services. Security leaders also need to revisit policies for data protection, disaster recovery and backup to make sure they still work for a remote environment.
Trend 4: Cyber-Savvy Board of Directors
In the Gartner 2021 Board of Directors Survey, directors rated cybersecurity the second-highest source of risk for the enterprise after regulatory compliance. Large enterprises are now beginning to create a dedicated cybersecurity committee at the board level, led by a board member with security expertise or a third-party consultant.
Gartner predicts that by 2025, 40% of boards of directors will have a dedicated cybersecurity committee overseen by a qualified board member, up from less than 10% today.
Trend 5: Security Vendor Consolidation
Gartner’s 2020 CISO Effectiveness Survey found that 78% of CISOs have 16 or more tools in their cybersecurity vendor portfolio; 12% have 46 or more. The large number of security products in organizations increases complexity, integration costs and staffing requirements. In a recent Gartner survey, 80% of IT organizations said they plan to consolidate vendors over the next three years.
“CISOs are keen to consolidate the number of security products and vendors they must deal with,” said Mr. Firstbrook. “Having fewer security solutions can make it easier to properly configure them and respond to alerts, improving your security risk posture. However, buying a broader platform can have downsides in terms of cost and the time it takes to implement. We recommend focusing on TCO over time as a measure of success.”
Trend 6: Privacy-Enhancing Computation
Privacy-enhancing computation techniques are emerging that protect data while it’s being used — as opposed to while it’s at rest or in motion — to enable secure data processing, sharing, cross-border transfers and analytics, even in untrusted environments. Implementations are on the rise in fraud analysis, intelligence, data sharing, financial services (e.g. anti-money laundering), pharmaceuticals and healthcare.
Gartner predicts that by 2025, 50% of large organizations will adopt privacy-enhancing computation for processing data in untrusted environments or multiparty data analytics use cases.
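To make the idea concrete, one technique commonly grouped under the privacy-enhancing computation umbrella is differential privacy, in which an aggregate result is shared with calibrated noise rather than the raw records. The following Python sketch is purely illustrative and is not drawn from any Gartner reference design; the data and parameters are hypothetical.

```python
# Minimal sketch: differential privacy, one technique often grouped under
# privacy-enhancing computation. An aggregate statistic is released with
# calibrated Laplace noise so that individual records in the underlying
# data set cannot be inferred from the shared result.
import numpy as np

def dp_sum(values, lower, upper, epsilon):
    """Release a noisy sum with epsilon-differential privacy.

    `lower`/`upper` clip each record so one individual's contribution is
    bounded; the Laplace noise scale is sensitivity / epsilon.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = upper - lower                     # max change one record can cause
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.sum() + noise

# Hypothetical use: share the total of a sensitive metric with another party
transactions = np.array([120.0, 85.5, 310.0, 47.2])
print(dp_sum(transactions, lower=0.0, upper=500.0, epsilon=1.0))
```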
Trend 7: Breach and Attack Simulation
Breach and attack simulation (BAS) tools are emerging to provide continuous defensive posture assessments, challenging the limited visibility provided by annual point assessments like penetration testing. When CISOs include BAS as a part of their regular security assessments, they can help their teams identify gaps in their security posture more effectively and prioritize security initiatives more efficiently.
Trend 8: Managing Machine Identities
Machine identity management aims to establish and manage trust in the identity of a machine interacting with other entities, such as devices, applications, cloud services or gateways. Increased numbers of nonhuman entities are now present in organizations, which means managing machine identities has become a vital part of the security strategy.
Gartner, Inc. has identified the top 10 data and analytics (D&A) technology trends for 2021 that can help organizations respond to change, uncertainty and the opportunities they bring in the next year.
“The speed at which the COVID-19 pandemic disrupted organizations has forced D&A leaders to have tools and processes in place to identify key technology trends and prioritize those with the biggest potential impact on their competitive advantage,” said Rita Sallam, distinguished research vice president at Gartner.
D&A leaders should use the following 10 trends as mission-critical investments that accelerate their capabilities to anticipate, shift and respond.
Trend 1: Smarter, Responsible, Scalable AI
The greater impact of artificial intelligence (AI) and machine learning (ML) requires businesses to apply new techniques for smarter, less data-hungry, ethically responsible and more resilient AI solutions. By deploying smarter, more responsible, scalable AI, organizations can translate learning algorithms and interpretable systems into shorter time to value and higher business impact.
Trend 2: Composable Data and Analytics
Open, containerized analytics architectures make analytics capabilities more composable. Composable data and analytics leverages components from multiple data, analytics and AI solutions to rapidly build flexible and user-friendly intelligent applications that help D&A leaders connect insights to actions.
With the center of data gravity moving to the cloud, composable data and analytics will become a more agile way to build analytics applications enabled by cloud marketplaces and low-code and no-code solutions.
Trend 3: Data Fabric Is the Foundation
With increased digitization and more emancipated consumers, D&A leaders are increasingly using data fabric to help address higher levels of diversity, distribution, scale and complexity in their organizations’ data assets.
A data fabric uses continuous analytics of data assets to monitor data pipelines and to support the design, deployment and utilization of diverse data, which Gartner estimates can reduce time for integration by 30%, deployment by 30% and maintenance by 70%.
Trend 4: From Big to Small and Wide Data
The extreme business changes from the COVID-19 pandemic caused ML and AI models based on large amounts of historical data to become less relevant. At the same time, decision making by humans and AI has become more complex and demanding, requiring D&A leaders to have a greater variety of data for better situational awareness.
As a result, D&A leaders should choose analytical techniques that can use available data more effectively. D&A leaders rely on wide data, which enables the analysis and synergy of a variety of small and large, unstructured and structured data sources, as well as small data, which is the application of analytical techniques that require less data but still offer useful insights.
“Small and wide data approaches provide robust analytics and AI, while reducing organizations’ large data set dependency,” said Ms. Sallam. “Using wide data, organizations attain a richer, more complete situational awareness or 360-degree view, enabling them to apply analytics for better decision making.”
Trend 5: XOps
The goal of XOps, including DataOps, MLOps, ModelOps, and PlatformOps, is to achieve efficiencies and economies of scale using DevOps best practices, and to ensure reliability, reusability and repeatability. At the same time, it reduces duplication of technology and processes and enables automation.
Most analytics and AI projects fail because operationalization is only addressed as an afterthought. If D&A leaders operationalize at scale using XOps, they will enable the reproducibility, traceability, integrity and integrability of analytics and AI assets.
Trend 6: Engineering Decision Intelligence
Engineering decision intelligence applies not just to individual decisions, but to sequences of decisions, grouping them into business processes and even networks of emergent decisions and consequences. As decisions become increasingly automated and augmented, engineering decisions gives D&A leaders the opportunity to make decisions more accurate, repeatable, transparent and traceable.
Trend 7: Data and Analytics as a Core Business Function
Instead of being a secondary activity, D&A is shifting to a core business function. In this situation, D&A becomes a shared business asset aligned to business results, and D&A silos break down because of better collaboration between central and federated D&A teams.
Trend 8: Graph Relates Everything
Graphs form the foundation of many modern data and analytics capabilities to find relationships between people, places, things, events and locations across diverse data assets. D&A leaders rely on graphs to quickly answer complex business questions which require contextual awareness and an understanding of the nature of connections and strengths across multiple entities.
Gartner predicts that by 2025, graph technologies will be used in 80% of data and analytics innovations, up from 10% in 2021, facilitating rapid decision making across the organization.
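As an illustration of the kind of multi-entity question graphs answer well, the short Python sketch below uses the open-source networkx library to trace the chain of connections between two entities; the entities and relationships are invented for the example and do not reflect any particular Gartner case study.

```python
# Illustrative sketch only: a tiny graph query with networkx, showing the kind
# of contextual, multi-entity question the trend describes (how does a customer
# complaint connect to a specific supplier?). Entity names are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edge("Complaint#901", "Product:Pump-A", relation="about")
G.add_edge("Product:Pump-A", "Batch:2021-03", relation="built_from")
G.add_edge("Batch:2021-03", "Supplier:Acme Castings", relation="sourced_from")
G.add_edge("Supplier:Acme Castings", "Plant:Leeds", relation="ships_to")

# Trace the chain of connections between two entities of interest
path = nx.shortest_path(G, "Complaint#901", "Supplier:Acme Castings")
print(" -> ".join(path))

# Nodes that link many entities are natural hubs for further investigation
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:3])
```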
Trend 9: The Rise of the Augmented Consumer
Most business users are today using predefined dashboards and manual data exploration, which can lead to incorrect conclusions and flawed decisions and actions. Time spent in predefined dashboards will progressively be replaced with automated, conversational, mobile, and dynamically generated insights customized to a user’s needs and delivered to their point of consumption.
“This will shift the analytical power to the information consumer — the augmented consumer — giving them capabilities previously only available to analysts and citizen data scientists,” said Ms. Sallam.
Trend 10: Data and Analytics at the Edge
Data, analytics and the technologies that support them increasingly reside in edge computing environments, closer to assets in the physical world and outside IT’s purview. Gartner predicts that by 2023, over 50% of the primary responsibility of data and analytics leaders will involve data created, managed and analyzed in edge environments.
D&A leaders can use this trend to enable greater data management flexibility, speed, governance, and resilience. A diversity of use cases is driving the interest in edge capabilities for D&A, ranging from supporting real-time event analytics to enabling autonomous behavior of “things”.
A new forecast from International Data Corporation (IDC) shows that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide (CO2) from 2021 through 2024.
The forecast uses IDC data on server distribution and cloud and on-premises software use along with third-party information on datacenter power usage, carbon dioxide (CO2) emissions per kilowatt-hour, and emission comparisons of cloud and non-cloud datacenters.
A key factor in reducing the CO2 emissions associated with cloud computing comes from the greater efficiency of aggregated compute resources. The emissions reductions are driven by the aggregation of computation from discrete enterprise datacenters to larger-scale centers that can more efficiently manage power capacity, optimize cooling, leverage the most power-efficient servers, and increase server utilization rates.
At the same time, the magnitude of savings changes based on the degree to which a kilowatt of power generates CO2, and this varies widely from region to region and country to country. Given this, it is not surprising that the greatest opportunity to eliminate CO2 by migrating to cloud datacenters comes in the regions with higher values of CO2 emitted per kilowatt-hour. The Asia/Pacific region, which utilizes coal for much of its power generation, is expected to account for more than half the CO2 emissions savings over the next four years. Meanwhile, EMEA will deliver about 10% of the savings, largely due to its use of power sources with lower CO2 emissions per kilowatt-hour.
While shifting to cleaner sources of energy is very important to lowering emissions, reducing wasted energy use will also play a critical role. Cloud datacenters are doing this through optimizing the physical environment and reducing the amount of energy spent to cool the datacenter environment. The goal of an efficient datacenter is to have more energy spent on running the IT equipment than cooling the environment where the equipment resides.
Another capability of cloud computing that can be used to lower CO2 emissions is the ability to shift workloads to any location around the globe. Developed to deliver IT service wherever it is needed, this capability also enables workloads to be shifted to enable greater use of renewable resources, such as wind and solar power.
IDC's forecast includes upper and lower bounds for the estimated reduction in emissions. If the percentage of green cloud datacenters today stays where it is, just the migration to cloud itself could save 629 million metric tons over the four-year time period. If all datacenters in use in 2024 were designed for sustainability, then 1.6 billion metric tons could be saved. IDC's projection of more than 1 billion metric tons is based on the assumption that 60% of datacenters will adopt the technology and processes underlying more sustainable "smarter" datacenters by 2024.
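IDC does not publish the model behind its headline figure, but the number is consistent with a simple linear interpolation between the two bounds quoted above. The short calculation below assumes that relationship purely for illustration.

```python
# Back-of-the-envelope check (assumption: linear interpolation between the
# bounds IDC states; IDC's actual model is not published). Figures are in
# million metric tons of CO2 avoided over 2021-2024.
lower_bound = 629      # green datacenter share stays where it is today
upper_bound = 1600     # all 2024 datacenters designed for sustainability
adoption = 0.60        # IDC's assumed share of "smarter" sustainable datacenters

estimate = lower_bound + adoption * (upper_bound - lower_bound)
print(f"~{estimate:.0f} million metric tons")   # ~1212, i.e. "more than 1 billion"
```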
"The idea of 'green IT' has been around now for years, but the direct impact of hyperscale computing can have on CO2 emissions is getting increased notice from customers, regulators, and investors and it's starting to factor into buying decisions," said Cushing Anderson, program vice president at IDC. "For some, going 'carbon neutral' will be achieved using carbon offsets, but designing datacenters from the ground up to be carbon neutral will be the real measure of contribution. And for advanced cloud providers, matching workloads with renewable energy availability will further accelerate their sustainability goals."
The International Data Corporation (IDC) Worldwide Artificial Intelligence Spending Guide estimates that spending on artificial intelligence (AI) in Europe will reach $12 billion in 2021 and will continue to experience solid double-digit growth through 2024. Automation needs, digital transformation, and customer experience continue to support spending on AI, even at a time when COVID-19 has negatively impacted revenues for many companies.
"COVID-19 was a trigger for AI investments for some verticals, such as healthcare. Hospitals across Europe have deployed AI for a variety of use cases, from AI-based software tools for automated diagnosis of COVID-19 to machine learning-based hospital capacity planning systems," said Andrea Minonne, senior research analyst at IDC Customer Insights & Analysis. "On the other hand, other verticals such as retail, transport, and personal and consumer services had to contain their AI investments, especially when AI was used to package personalized customer experiences to be delivered in-store."
The COVID-19 pandemic did not end in 2020 and will have effects throughout 2021 and the years to come. COVID-19 has revolutionized the way many industries operate, changing their business processes but also the products, services, and experiences they deliver. Many non-essential retailers are still closed today due to strict lockdowns, forcing them to shift their focus from in-store AI toward AI-driven online experiences and services. Customers also had to adapt to a new reality, and that changed their behavior. Shopping online is the new normal. For that reason, retailers are looking closely at use cases such as chatbots, pricing optimization, and digital product recommendations to guarantee customer engagement and secure revenues from digital channels.
The same is the case for transportation, an industry that has been heavily affected by COVID-19. With travel restricted to essential reasons only and quarantine measures widely in place, many travelers have stalled or cancelled their plans, which has had a strong impact on transportation companies' revenues. In 2020, transportation's focus shifted from AI-driven innovation to cost containment, at least until the industry recovers. For that reason, AI investments across transportation companies will grow below average this year.
According to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker, vendor revenue in the worldwide server market grew 1.5% year over year to $25.8 billion during the fourth quarter of 2020 (4Q20). Worldwide server shipments declined 3.0% year over year to nearly 3.3 million units in 4Q20.
Volume server revenue was up 3.7% to $20.4 billion and midrange server revenue increased 8.4% to $3.3 billion, while high-end server revenue declined 21.8% to $2.1 billion.
"Global demand for enterprise servers was relatively flat during the fourth quarter of 2020 with the strongest increase to demand coming from China (PRC)," said Paul Maguranis, senior research analyst, Infrastructure Platforms and Technologies at IDC. "From a regional perspective, server revenue within PRC grew 22.7% year over year while the rest of the world declined 4.2%. Blade systems continued to decline, down 18.1% while rack optimized servers grew 10.3% year over year. Similar to the previous quarter, servers running AMD CPUs as well as ARM-based servers continued to grow revenue, increasing 100.9% and 345.0% year over year respectively, albeit on a small but growing base."
Overall Server Market Standings, by Company
HPE/New H3C Group (a) and Dell Technologies were tied* for the top position in the worldwide server market based on 4Q20 revenues. Inspur/Inspur Power Systems (b) finished third, while IBM (c) held the fourth position. Huawei and Lenovo tied* for the fifth position in the market.
Notes:
* IDC declares a statistical tie in the worldwide server market when there is a difference of one percent or less in the share of revenues or shipments among two or more vendors.
(a) Due to the existing joint venture between HPE and the New H3C Group, IDC is reporting external market share on a global level for HPE and New H3C Group as "HPE/New H3C Group" starting from 2Q 2016. Per the JV agreement, Tsinghua Holdings subsidiary, Unisplendour Corporation, through a wholly-owned affiliate, purchased a 51% stake in New H3C and HPE has a 49% ownership stake in the new company.
(b) Due to the existing joint venture between IBM and Inspur, IDC is reporting external market share on a global level for Inspur and Inspur Power Systems as "Inspur/Inspur Power Systems" starting from 3Q 2018. The JV, Inspur Power Commercial System Co., Ltd., has total registered capital of RMB 1 billion, with Inspur investing RMB 510 million for a 51% equity stake, and IBM investing RMB 490 million for the remaining 49% equity stake.
(c) IBM server revenue excludes sales of Power Systems generated through Inspur Power Systems in China, starting from 3Q 2018.
In terms of server units shipped, Dell Technologies held the top position in the market, followed closely by HPE/New H3C Group (a) in the second position. Inspur/Inspur Power Systems (b), Huawei, and Lenovo finished the quarter in third, fourth, and fifth place, respectively.
Top Server Market Findings
On a geographic basis, China (PRC) was the fastest growing region with 22.7% year-over-year revenue growth. Latin America was the only other region with revenue growth in 4Q20, up 1.5% in the quarter. Asia/Pacific (excluding Japan and China) decreased 0.3% in 4Q20, while North America declined 6.2% year over year (Canada down 23.7% and the United States down 5.5%). Both EMEA and Japan declined during the quarter, at rates of 1.1% and 6.3%, respectively.
Revenue generated from x86 servers increased 2.9% in 4Q20 to around $23.1 billion. Non-x86 server revenue declined 9.0% year over year to around $2.8 billion.
The worldwide Unified Communications & Collaboration (UC&C) market grew 29.2% year over year and 7.1% quarter over quarter to $13.1 billion in the fourth quarter of 2020 (4Q20), according to the International Data Corporation (IDC) Worldwide Quarterly Unified Communications & Collaboration QView. Revenue for the full year 2020 was also up an impressive 24.9% to $47.2 billion.
How business was conducted changed dramatically in 2020 due to COVID-19, driving companies of all sizes to consider and adopt scalable, flexible, cloud-based digital technology solutions (e.g., Unified Communications as a Service or UCaaS) as part of their overall integrated UC&C solution. Vendors and service providers also saw exponential growth in the number of video and collaboration end users in 2020. In 2021 and beyond, IDC expects worldwide UC&C growth will be driven by customers across all business size segments (small, midsize, and large) with interest especially in video, collaboration, UCaaS, mobile applications, and digital transformation (DX) projects.
"In 2020, COVID-19 caused many businesses and organizations to re-think their plans for leveraging digital technologies and accelerated interest in and adoption of solutions such as team collaboration, team messaging, videoconferencing, and UCaaS, among other technologies," said Rich Costello, senior research analyst, Unified Communications and Collaboration at IDC. "In 2021, IDC expects positive growth numbers across these key UC&C segments to continue, albeit at slightly more modest rates."
From a regional perspective, the UC&C market saw positive numbers across the board in 4Q20 and for the full year 2020.
Everyone is petrified of ransomware attacks right now, and with good reason. The attacks have penetrated every sector, from academia to local government organizations, to manufacturing, healthcare, high tech and every other sector.
By Bill Andrews, President and CEO of ExaGrid
The ransoms that hackers demand have increased drastically in recent years, with the most audacious at over $12 million (10 million euros). Ransomware attacks occur all the time; studies estimate that a ransomware attack is carried out every 14 seconds.
Ransomware disrupts the functionality of an organization by restricting access to data through encrypting the primary storage and then deleting the backup storage. Ransomware attacks are on the rise, becoming disruptive and potentially very costly to businesses. No matter how meticulously an organization follows best practices to protect valuable data, the attackers seem to stay one step ahead. They maliciously encrypt primary data, take control of the backup application and delete the backup data.
The challenge is how to protect the backup data from being deleted while at the same time allowing backup retention to be purged when retention points are hit. If you retention-lock all of the data, you cannot delete the retention points and the storage costs become untenable. If you allow retention points to be deleted to save storage, you leave the system open for hackers to delete all data.
How Do Hackers Get Control of Backed Up Data?
Often, hackers are able to gain control of a server on a network and then work their way into critical systems, such as primary storage, and then into the backup application and backup storage. Sometimes hackers even manage to access the backup storage through the backup application. The hackers encrypt the data in the primary storage and issue delete commands to the backup storage, so that there is no backup or retention to recover from. Once the backup storage is deleted, organizations are forced to pay the ransom, as their users cannot work.
How Can Organizations Recover from a Ransomware Attack?
One of the best practices for data protection is to implement a strong backup solution, so that an organization can recover data whenever it is deleted, overwritten, corrupted or encrypted.
However, even standard backup approaches, such as backing up data to low-cost primary storage or to deduplication appliances, are vulnerable to ransomware attacks. To eliminate this vulnerability, a backup solution needs a second, non-network-facing storage tier, so that even if the hacker deletes the backups they cannot reach the long-term retention data.
If an organization is hit with a ransomware attack but their backup data remains intact, then the organization can recover the data without paying a ransom.
ExaGrid’s Unique Feature: Retention Time-Lock for Ransomware Recovery
ExaGrid has always utilized a two-tiered approach to its backup storage, called Tiered Backup Storage, which provides an extra layer of protection to its customers. Its appliances have a network-facing disk-cache Landing Zone Tier where the most recent backups are stored in an undeduplicated format, for fast backup and restore performance. Data is deduplicated into a non-network-facing tier called the repository, where deduplicated data is stored for longer-term retention. The combination of a non-network-facing tier (virtual air gap) plus delayed deletes and immutable data objects guards against the backup data being deleted or encrypted.
As ExaGrid monitored the growing trend of ransomware attacks, the backup storage company worked on a new feature to further safeguard its repository tier: Retention Time-Lock for Ransomware Recovery. This feature allows for “delayed deletes” so that any delete commands that might be issued by a ransomware attack are not processed for a period of time determined by the ExaGrid customer, with a default of 10 days that can be extended by policy. ExaGrid released this feature in 2020 and many of its customers have already successfully recovered from ransomware attacks.
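ExaGrid has not published its implementation, but the principle of a delayed delete is straightforward to sketch: delete requests against retained backups are queued rather than executed, and only purged once a policy-defined time lock has expired. The Python below is a minimal illustration of that idea, not ExaGrid's code.

```python
# Minimal sketch of the "delayed delete" idea described above -- not ExaGrid's
# actual implementation. Delete commands against retained backups are queued and
# only honored after a policy-defined time lock (default here: 10 days), giving
# administrators a window to cancel deletes issued by a ransomware attack.
from datetime import datetime, timedelta

class DelayedDeleteQueue:
    def __init__(self, lock_days=10):
        self.lock = timedelta(days=lock_days)
        self.pending = {}                      # object_id -> time delete was requested

    def request_delete(self, object_id, now=None):
        self.pending[object_id] = now or datetime.utcnow()

    def cancel_delete(self, object_id):
        self.pending.pop(object_id, None)      # admin rescinds a suspicious delete

    def purge_expired(self, now=None):
        """Return only the objects whose time lock has expired."""
        now = now or datetime.utcnow()
        expired = [o for o, t in self.pending.items() if now - t >= self.lock]
        for o in expired:
            del self.pending[o]
        return expired                         # these may now be physically deleted

queue = DelayedDeleteQueue(lock_days=10)
queue.request_delete("backup-2021-03-01")
print(queue.purge_expired())                   # [] -- nothing is deleted immediately
```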
Don’t pay the ransom! Implement a solution that is designed to help your organization recover.
Now more than ever, digital transformation (DX) has become a strategic priority for every organisation.
By Ash Finnegan, digital transformation officer, Conga
Navigating what is currently the most complex business landscape to date, leaders have had to rapidly transform their operations in order to deliver their services remotely. In their panic, business leaders have invested heavily in the latest technological solutions to keep their organisations running, from artificial intelligence (AI) to wider automation such as robotic process automation (RPA), natural language processing (NLP) and machine learning (ML). Chaos has been the driver of change.
And over the past few months, whole departments have undergone the most intense and complicated transformation programmes their senior leadership teams have ever delivered, but that does not necessarily mean they have been well executed.
How to approach automation – it is a process, not a race
Even before the pandemic, most companies would approach digital transformation projects all wrong. Many companies aspire to be disrupters, picking a technology and implementing it at speed. They want to adopt the latest AI programme and automate their business as fast as possible, with no real idea of how this will improve their services. These projects rarely result in success. In fact, according to Conga research, only half of all digital transformation initiatives of this kind are considered even somewhat successful. In Europe the success rate is lower still: only 43 percent of these programmes result in success.
The issue lies with how businesses approach automation in the first place. Many strategies are driven by the desire to use and incorporate the latest technology as opposed to identifying clear business goals or reconsidering their current operational model and where AI would be best suited. Covid-19 has only accelerated this issue. Whilst AI and automation offer many competitive advantages, that does not necessarily mean they are easy to implement or deliver as part of a wider digital transformation programme. Too many business leaders are prioritising technology over strategy and simply do not understand what AI, or digital transformation for that matter, really is, can achieve and should drive.
Without stepping back and reviewing their current operational model, what works or what needs improving, how can leaders really understand whether AI is best suited to automate their business? Companies have essentially adopted ‘transformational’ technology without having any clear business objectives in mind or considering where this technology may be better placed to improve overall operability. If there are bad processes in place that fail to deliver real business objectives or real commercial outcomes, automation will only accelerate the problem.
In reality, businesses need to establish clear commercial objectives, before adopting any disruptive or automation technology. It is crucial that companies first establish where they currently stand in their own digital transformation journey, by considering their own digital maturity.
The digital maturity model – how to adopt AI
‘Digital maturity’ refers to where a business currently stands in its digital transformation journey. Before adopting any new technology or starting any transformation programme, and most importantly, automating areas of the business, companies need to take a step back and reconsider their current operational model. Given the speed at which businesses transformed last year, teams may have stumbled across a number of roadblocks and bottlenecks; the ‘older’ operational model likely did not translate well in the switch to remote working. Complicated or unnecessary processes have more than likely limited the business’ performance and stunted any commercial growth.
By taking a step back and reviewing their business, leaders will have a clear picture of the current state and what the next stage of their company’s digital transformation journey should be, as opposed to simply guessing, or learning through trial and error. Only by identifying areas where there are operational issues or room for improvement, can businesses establish clear objectives and a strategy – then leaders can incorporate new technology such as AI and automation, to streamline their services and help them achieve these goals.
Once they have assessed their current maturity, leaders can accelerate the processes that work well and can add value to their business, instead of speeding up flawed processes or legacy systems. Automating a bad process doesn’t stop it from being a bad process and, by this same logic, AI isn’t a silver bullet or a ‘quick fix’ – rushing a transformation programme won’t bolster company growth. By assessing their digital maturity and approaching automation in this methodical way, companies can improve their overall operability and streamline the processes that matter – that is, improving the customer experience, generating revenue, and managing key commercial relationships.
As companies progress along their digital transformation journeys, they will streamline processes, break down silos and enable cross-team working across departmental boundaries. The maturity model framework does not prescribe a linear change programme. It is vital that every stage, from foundation and core business logic, to reevaluating current systems, fine-tunes basic workflows to ensure any inefficiencies are removed from the overall business process. By the next stage, leaders can consider the possibility of further integration between systems, such as customer lifecycle management (CLM) or enterprise resource planning (ERP) to deliver more multi-channel management.
Only then can organisations enter the next stage of transformation. As processes are streamlined, cross-team collaboration increases and leaders will begin to break down any departmental silos, establishing true data intelligence. Their operations will be seamless, with end-to-end processes that inform decision-making. Following this, leaders can then consider further integrating their systems and extending automation and AI into other areas of their businesses, because the value of doing so is now clear to them.
AI is only as good as the data provided
Businesses will proceed through these stages of digital maturity at different rates depending on the complexity of their structures, and how many roadblocks they encounter across the business cycle. No doubt some will have to go back several stages to tackle any issues regarding the business operability or efficiency. But by no means can leaders prioritise technology over strategy. If organisations think technology has all the answers and AI will solve all their problems, they are approaching transformation all wrong. Organisations need to optimise the business process; it needs to be frictionless from end to end before they consider adopting AI or any form of automation technology.
It is vital that businesses understand their digital maturity – where they are and where they need to get to – to create a transformation programme that actually aligns teams and departments. It’s important to ensure that systems, teams and processes are working together smoothly. After all, it is about establishing cross-functional collaboration, not fine-tuning a process for a particular department – whether sales, legal or finance – but improving the overall business process. By reviewing every stage of the maturity model for their organisation, from foundation (data transparency and business logic) to full system integration, leaders can take their business to a truly intelligent state, where they are actually using data to make decisions to allow for further business growth. Companies can create a seamless enterprise and a fully connected customer and employee experience, which automation can then accelerate even further. From here, AI can actually add real value.
Invention is the creation of technology; innovation is how you use that invention to extract value.
By Dr. Colin Parris, Senior Vice President and Chief Technical Officer, GE Digital
As industrial companies move forward with Digital Transformation, the first question they need to ask is, “How do I build and implement solutions that provide the best and most lasting value?”
Wherever you are on your path to digital transformation, the goal is the same: make your business smarter, leaner, and more profitable. We can create new technologies; we can build powerful solutions and scale them like never before. However, if there is no clear path to value, no clear return on investment, all we are building is barriers. This is especially true in the world of industrial IoT, where innovation is often seen more as evolution than revolution. The risks are tremendously high.
One of the technologies that can make a difference for businesses in this transformation journey is the Digital Twin. A Digital Twin is a software representation of a physical asset, system, or process that is designed to detect, prevent, predict, and optimize through real-time analytics. These industrial assets, systems, and processes cost millions to build and to fix, and if they stop working, they can cost millions in unplanned downtime. You don’t get to fail fast.
Digital Twins are used to give us early warnings on equipment failures so companies can take action early to maintain availability targets. These Twins provide continuous predictions so that we can get estimates of the ongoing damage to a part, and then have these parts ready when we do maintenance events. This is crucial in the industrial world, as the lead times for some of the more critical parts are six to 18 months. Imagine the business impact if you did not have that critical part when you needed it. This is why many industrial companies stockpile these expensive parts and, therefore, carry high inventory costs.
But what if these critical pieces of your business could help you even more? What if they could help you in a way that aided business adoption and reduced your business risk? What if they could protect themselves from outside threats? What if they could talk to each other? This is where Digital Twin technology is headed and what makes software mission critical today and tomorrow.
As we look to the future, one of the software technologies we will look to is called Humble AI. Humble AI is a Digital Twin that optimizes industrial assets under a known set of operating conditions but can relinquish control to a human engineer or safe default mode on its own when encountering unfamiliar scenarios to ensure safe, reliable operations. Humble AI affords a zone of data competency, which pinpoints where the digital twin model is most accurate and in which it is comfortable making normal operational decisions. If a situation is outside of the zone of competency, the Humble AI algorithm recognizes and redirects the situation to a human operator or reverts back to its traditional algorithm. Just as a trainee engineer might call their supervisor when faced with a new challenge, Humble AI escalates anything outside of its comfort zone.
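GE has not disclosed the internals of Humble AI, but the ‘zone of competency’ concept can be illustrated with a toy controller that only optimizes when the current operating point resembles the data it was trained on, and otherwise falls back to a safe default or a human operator. The sketch below, in Python with invented sensor values, is an assumption-laden illustration rather than GE's algorithm.

```python
# Toy sketch of the "zone of competency" idea -- not GE's Humble AI algorithm.
# The controller only issues an optimized action when the current operating point
# looks like the data it was trained on; otherwise it defers to a safe default
# (or escalates to a human operator).
import numpy as np

class HumbleController:
    def __init__(self, training_data, tolerance=3.0):
        self.mean = training_data.mean(axis=0)
        self.std = training_data.std(axis=0) + 1e-9
        self.tolerance = tolerance               # max z-score considered "familiar"

    def in_competency_zone(self, x):
        z = np.abs((x - self.mean) / self.std)
        return bool(np.all(z < self.tolerance))

    def act(self, x, optimize, safe_default):
        if self.in_competency_zone(x):
            return optimize(x)                   # familiar conditions: optimize
        return safe_default(x)                   # unfamiliar conditions: fall back

# Hypothetical usage with made-up operating history and sensor readings
history = np.random.normal(loc=[50.0, 1.2], scale=[5.0, 0.1], size=(1000, 2))
ctrl = HumbleController(history)
print(ctrl.act(np.array([52.0, 1.25]),
               optimize=lambda x: "optimized setpoint",
               safe_default=lambda x: "safe default / call operator"))
```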
A second technology is something we call Digital Ghost, a new paradigm for securing industrial assets and critical infrastructure from both malicious cyber-attacks and naturally occurring faults in sensor equipment. Much attention is given to a company’s external firewalls, but some viruses can be accidentally spread by doing something as innocuous as using a compromised USB drive inside the network. Digital Ghost uses a Digital Twin that understands both the physics associated with the asset and its operational states, given the data from its operators and environment, and can use this to detect when the conditions reported by its sensors seem to be malicious. In this way it can provide early detection of any security issues that might occur, along with a path to neutralization of the fault.
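Again purely as an illustration, and not GE's Digital Ghost itself, the idea of checking sensor readings against a physics-based twin can be sketched as a residual test: if a measurement strays far outside the band the twin predicts for the current operating state, the sensor is flagged as faulty or potentially spoofed. The sensor names, predictions and tolerances below are hypothetical.

```python
# Illustrative sketch only (not GE's Digital Ghost): compare each sensor reading
# against what a physics-based twin predicts for the current operating state; a
# residual far outside the expected noise band flags a faulty or spoofed sensor.
def flag_suspect_sensors(readings, twin_predictions, sigmas, threshold=3.0):
    suspects = []
    for sensor, measured in readings.items():
        residual = abs(measured - twin_predictions[sensor])
        if residual > threshold * sigmas[sensor]:
            suspects.append((sensor, residual))
    return suspects

# Hypothetical turbine data: the twin expects ~600 C exhaust temperature at this load
readings    = {"exhaust_temp_C": 655.0, "shaft_speed_rpm": 3001.0}
predictions = {"exhaust_temp_C": 600.0, "shaft_speed_rpm": 3000.0}
sigmas      = {"exhaust_temp_C": 5.0,   "shaft_speed_rpm": 2.0}
print(flag_suspect_sensors(readings, predictions, sigmas))
# -> [('exhaust_temp_C', 55.0)] : a reading the physics model cannot explain
```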
A third interesting and forward-thinking technology is called “Twins that Talk.” This is an emerging, and exciting, technology that gives machines the ability to mimic human intuition and react to evolving situations.
We’re at a place where we’ve generated enough data and harnessed artificial intelligence applications to the point that Digital Twins can now “talk” to each other, learn from each other, and “educate” other assets. Using neural network technologies (sponsored by DARPA), we have pilots that allow machines to generate their own language for communications. A turbine or an engine can “inform” other like machines by communicating the prior problems it has experienced and the symptoms associated with the issue. This information gives field engineers potential root causes even before the problem has been diagnosed.
Here’s an example of machines learning and communicating in the field: Imagine a number of wind turbines in the same geographical area being able to detect icing on their blades and understanding that this reduces the amount of electricity they can create. Not only would they be able to flag potential disruption, they’d be able to predict icing in the future by amalgamating contextual cues, weather patterns, and prior experience.
These exciting new Digital Twin inventions allow us to build our capacity for business transformation. Humble AI helps create value by leveraging artificial intelligence technology. Digital Ghost and Twins that Talk help accelerate value. These innovative technologies all provide insight that compels businesses to take actions that deliver the most value.
Gurpreet Purewal, Associate Vice President, Business Development, iResearch Services, explores how organisations can overcome the challenges presented by AI in 2021.
2020 has been a year of tumultuous change and 2021 isn’t set to slow down. Technology has been the saving grace of the waves of turbulence this year, and next year as the use of technology continues to boom, we will see new systems and processes emerge and others join forces to make a bigger impact. From assistive technology to biometrics, ‘agritech’ and the rise in self-driving vehicles, tech acceleration will be here to stay, with COVID-19 seemingly just the catalyst for what’s to come. Of course, the increased use of technology will also bring its challenges, from cybersecurity and white-collar crime to the need to instill trust in not just those investing in the technology, but those using it, and artificial intelligence (AI) will be at the heart of this.
1. Instilling a longer-term vision
New AI and automation innovations have brought additional challenges, such as the big data requirements that must be met before the value of these new technologies can be effectively demonstrated. For future technology to learn from the challenges already faced, a comprehensive technology backbone needs to be built, and businesses need to take stock and begin rolling out priority technologies that can be continuously deployed and developed.
Furthermore, organisations must have a longer-term vision of implementation rather than the need for immediacy and short-term gains. Ultimately, these technologies aim to create more intelligence in the business to better serve their customers. As a result, new groups of business stakeholders will be created to implement change, including technologists, business strategists, product specialists and others to cohesively work through these challenges, but these groups will need to be carefully managed to ensure a consistent and coherent approach and long-term vision is achieved.
2. Overcoming the data challenge
AI and automation continue to be at the forefront of business strategy. The biggest challenge, however, is that automation is still in its infancy, in the form of bots, which have limited capabilities without being layered with AI and machine learning. For these to work cohesively, businesses need huge pools of data. AI can only begin to understand trends and nuances by having this data to begin with, which is a real challenge. Only some of the largest organisations with huge data sets have been able to reap the rewards, so other smaller businesses will need to watch closely and learn from the bigger players in order to overcome the data challenge.
3. Controlling compliance and governance
One of the critical challenges of increased AI adoption is technology governance. Businesses are acutely aware that these issues must be addressed but orchestrating such change can lead to huge costs, which can spiral out of control. For example, cloud governance should be high on the agenda; the cloud offers new architecture and platforms for business agility and innovation, but who has ownership once cloud infrastructures are implemented? What is added and what isn’t?
AI and automation can make a huge difference to compliance, data quality and security. The rules of the compliance game are always changing, and technology should enable companies not just to comply with ever-evolving regulatory requirements, but to leverage their data and analytics across the business to show breadth and depth of insight and knowledge of the workings of their business, inside and out.
In the past, companies struggled to get access to and oversight of the right data across their business to compile the vast quantities of management information (MI) needed for regulatory reporting. Now they are expected to not only collate the correct data but to be able to analyse it efficiently and effectively for regulatory reporting purposes and strategic business planning. There are no longer the time-honoured excuses of not having enough information, or data gaps from reliance on third parties, for example, so organisations need to ensure they are adhering to regulatory requirements in 2021.
4. Eliminating bias
AI governance is business-critical, not just for regulatory compliance and cybersecurity, but also for diversity and equity. There are fears that AI programming will lead to natural bias based on the type of programmer and the current datasets available and used. For example, computer scientists are predominantly male and Caucasian, which can lead to conscious or unconscious bias, and datasets can be unrepresentative, leading to discriminatory feedback loops.
Gender bias in AI programming has been a hot topic for some years and came to the fore again in 2020 within wider conversations on diversity. Narrow representation among AI programmers leads to their own biases being programmed into systems, which will have huge implications for how AI interprets data, not just now but far into the future. As a result, new roles will emerge to try to prevent these biases and build a more equitable future, alongside new regulations being driven by companies and specialist technology firms.
5. Balancing humans with AI
As AI and automation come into play, workforces fear employee levels will diminish, as roles become redundant. There is also inherent suspicion of AI among consumers and certain business sectors. But this fear is over-estimated, and, according to leading academics and business leaders, unfounded. While technology can take away specific jobs, it also creates them. In responding to change and uncertainty, technology can be a force for good and source of considerable opportunity, leading to, in the longer-term, more jobs for humans with specialist skillsets.
Automation is an example of helping people to do their jobs better, speeding up business processes and taking care of the time-intensive, repetitive tasks that could be completed far quicker by using technology.
There remain just as many tasks within the workforce and the wider economy that cannot be automated, where a human being is required.
Businesses need to review and put initiatives in place to upskill and augment workforces. Reflecting this, a survey on the future of work found that 67% of businesses plan to invest in robotic process automation, 68% in machine learning, and 80% in perhaps more mainstream business process management software. There is clearly an appetite to invest strongly in this technology, so organisations must work hard to achieve harmony between humans and technology to make the investment successful.
6. Putting customers first
There is growing recognition of the difference AI can make in providing better service and creating more meaningful interactions with customers. Another recent report examining empathy in AI saw 68% of survey respondents declare they trust a human more than AI to approve bank loans. Furthermore, 69% felt they were more likely to tell the truth to a human than AI, yet 48% of those surveyed see the potential for improved customer service and interactions with the use of AI technologies.
2020 has taught us about uncertainty and risk as a catalyst for digital disruption, technological innovation and more human interactions with colleagues and clients, despite face-to-face interaction no longer being an option. 2021 will see continued development across businesses to address the changing world of work and the evolving needs of customers and stakeholders in fast-moving, transitional markets. The firms that look forward, think fast and embrace agility of both technology and strategy, anticipating further challenges and opportunities through better take-up of technology, will reap the benefits.
As the fallout from the COVID-19 pandemic continues to disrupt the majority of industries, its impact on supply chains has been nothing short of seismic. With teams facing ever-increasing pressure to make the right decisions at the right time, squeezing every last drop of insight and information out of vast lakes of data is now more important than ever.
By Will Dutton, Director of Manufacturing, Peak
The phrase ‘data is the new oil’ has framed a large amount of discourse in the twenty-first century. The statement, although contestable, raises the question that should always follow: exactly what data are we talking about? These tricky times call for a new approach to data-driven decision making. There’s now a real need for supply chains to focus on the greater data ecosystem, accessing wider sources of data and utilising it to its fullest capacity. Making effective decisions based on data from current systems, or by joining up a few previously siloed sources across the organisation, is becoming easier than ever – but there’s potential to go even further. The more data there is to play with, the more informed supply chain decisions will be. Here are four data sources that can help accelerate smart supply chain decisions:
1. Linking the supply chain with customer systems
The more systems that talk to each other, the better. Linking data from supply chain systems with customer systems, including behavioural data points, can help identify the pain points that arise. For instance, this could be the customer's ERP system or even the logistics systems between the business and the customer. Taking consumer-packaged goods businesses and manufacturers as an example: with a better handle on Electronic Point of Sale (EPOS) and any other sell-out data from customers’ systems, the business can better predict what demand is going to be like, and use customers’ stock levels to help anticipate its own. Receipts data can also be factored in – which baskets are shoppers generally buying together, and how can this help anticipate the groups of products that will sell together? This closer relationship with customers’ systems allows the business to better serve them, increasing efficiency and anticipating demand fluctuations. Inherently it’s all about creating more competitive supply chains which are more cost-effective, with better service levels and a more accurate view of demand.
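To make this concrete, here is a minimal Python sketch (with hypothetical file and column names) of how sell-out data from a customer's EPOS feed might be joined with the business's own stock positions to flag SKUs that will need replenishing soon:

```python
# Minimal sketch (hypothetical files and columns): joining customer EPOS
# sell-out data with our own stock positions to anticipate replenishment needs.
import pandas as pd

# Daily sell-out reported by the customer's EPOS feed: sku, date, units_sold
epos = pd.read_csv("customer_epos.csv", parse_dates=["date"])
# Our current stock position per SKU: sku, units_on_hand
stock = pd.read_csv("own_stock.csv")

# Average daily sell-out per SKU over the last 28 days as a crude demand signal
recent = epos[epos["date"] >= epos["date"].max() - pd.Timedelta(days=28)]
demand = (recent.groupby("sku")["units_sold"].mean()
                .rename("avg_daily_sellout").reset_index())

# Days of cover: how long current stock lasts if sell-out continues at this rate
view = stock.merge(demand, on="sku", how="left")
view["days_of_cover"] = view["units_on_hand"] / view["avg_daily_sellout"]

# Flag SKUs likely to need replenishment within a week
print(view[view["days_of_cover"] < 7].sort_values("days_of_cover"))
```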
2. Supplier data for efficiency
By leveraging data points from suppliers’ systems, businesses can plan ahead in the most efficient way and execute an effective just-in-time (JIT) inventory management strategy, holding minimal assets to save cash and space whilst still fulfilling customer demand. By employing this methodology, businesses are able to understand when a supplier is going to deliver, to what location, and anticipate the arrival of goods and raw materials whilst also better understanding the working capital implications.
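As an illustration of that JIT logic, the short Python sketch below applies the classic reorder-point calculation to supplier lead-time data; all the figures are invented for the example:

```python
# Minimal sketch of a just-in-time reorder check using supplier lead-time data
# (all figures are illustrative assumptions, not from the article).

def reorder_point(avg_daily_demand: float, lead_time_days: float,
                  safety_stock: float = 0.0) -> float:
    """Classic reorder point: demand expected during the supplier's lead time,
    plus a safety buffer."""
    return avg_daily_demand * lead_time_days + safety_stock

on_hand = 1_200          # units currently held
daily_demand = 150       # average units consumed per day
supplier_lead_time = 5   # days the supplier typically takes to deliver
buffer = 200             # safety stock to absorb variability

rop = reorder_point(daily_demand, supplier_lead_time, buffer)
if on_hand <= rop:
    print(f"Raise a purchase order now (on hand {on_hand} <= reorder point {rop:.0f})")
else:
    print(f"No order needed yet (on hand {on_hand} > reorder point {rop:.0f})")
```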
3. Using environmental data
Don’t underestimate the power hidden away in external, non-industry-related data sources and the impact they can have on supply chain decision making. Think about the ways a business can utilise, say, macroeconomic data to understand what could be driving issues connected to supply and demand. Yes, we immediately think of things like GDP, or maybe even exchange rates, but there is now a plethora of data out there, often more industry- and company-specific, that helps predict demand or implications for business performance. In recent months, a relevant data feed affecting the supply chain could be an increase in Covid-19 cases near a supplier, hampering its ability to operate as normal. Connecting these data points up to technology such as Artificial Intelligence (AI) could help understand the impact of such incidents on supply performance and create accurate forecasts of the trends.
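By way of illustration only, the following Python sketch (using scikit-learn and entirely hypothetical data and column names) shows how an external signal such as Covid-19 cases near a supplier could be folded into a simple demand model alongside a macroeconomic indicator:

```python
# Minimal sketch (hypothetical data and column names): folding an external
# signal -- Covid-19 case counts near a supplier -- into a simple demand model.
import pandas as pd
from sklearn.linear_model import LinearRegression

history = pd.read_csv("weekly_history.csv")  # columns: week, units_shipped,
                                             # cases_near_supplier, gdp_index

X = history[["cases_near_supplier", "gdp_index"]]
y = history["units_shipped"]

model = LinearRegression().fit(X, y)

# Score next week's expected shipments under an assumed external scenario
next_week = pd.DataFrame({"cases_near_supplier": [450], "gdp_index": [101.3]})
print(f"Forecast shipments: {model.predict(next_week)[0]:.0f} units")
```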
4. Sharing data across the network with co-opetition
For many, the rule of thumb is not to give the game away to competitors, so this may seem a little pie-in-the-sky at first. However, the benefits of sharing data with the industry and accessing competitor data sources can be enormous. The data of those providing similar products is harmless in itself – but using it in the right way, to make intelligent decisions, will allow the business to gain a unique view of what is happening across the rest of the market. This ultimately leads to a better understanding of wider trends and the ability to make smarter decisions. With a mutually beneficial relationship with the wider network, a business can understand supply issues and work with competitors or neutral parties to deliver better products and services to customers, creating a form of ‘co-opetition.’
Accessing the ecosystem requires digital transformation
Tapping into the greater data ecosystem and utilising it in decision making will be essential for supply chain teams to run smooth operations in disruptive climates. However, to truly unlock the potential this offers, a central AI system is needed.
In the same way that business functions have their own systems of record, the ability to power decision making based on a wide range of data sources hinges on the introduction of a new, centralised enterprise AI system. Using AI gives teams the ability to leverage unlimited data points at scale and speed, making decisions that are both smarter and faster and supercharging teams. At Peak, we call this Decision Intelligence (DI).
Decision Intelligence means being able to connect the dots between data points with AI, prescribing recommendations and actions that lead to more informed commercial decisions across the entire supply chain.
By feeding external data from the points above into both demand and supply planning systems, and leveraging it with AI, enterprises can optimise the connection between these two core areas. Not only does it allow a better sense of demand with a higher degree of accuracy, it also enables a better understanding of how supplier and operations constraints are affecting supply – automatically making micro-adjustments to optimise the way demand is fulfilled.
One of the most critical steps in any operational machine learning (ML) pipeline is artificial intelligence (AI) serving, a task usually performed by an AI serving engine.
By Yiftach Schoolman, Redis Labs Co-founder and CTO
AI serving engines evaluate and interpret data in the knowledge base, handle model deployment, and monitor performance. They represent a whole new world in which applications will be able to leverage AI technologies to improve operational efficiencies and solve significant business problems.
AI Serving Engine for Real Time: Best Practices
I have been working with Redis Labs customers to better understand their challenges in taking AI to production and how they need to architect their AI serving engines. To help, we’ve developed a list of best practices:
Fast end-to-end serving
If you are supporting real-time apps, you should ensure that adding AI functionality in your stack will have little to no effect on application performance.
No downtime
As every transaction potentially includes some AI processing, you need to maintain a consistent standard SLA, preferably at least five-nines (99.999%) for mission-critical applications, using proven mechanisms such as replication, data persistence, multi-availability-zone/rack deployment, Active-Active geo-distribution, periodic backups, and auto-cluster recovery.
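For a sense of scale, a five-nines SLA leaves only a few minutes of downtime per year, as the quick calculation below shows:

```python
# Quick arithmetic: what a given availability SLA allows in downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for sla in (0.999, 0.9999, 0.99999):
    downtime = (1 - sla) * MINUTES_PER_YEAR
    print(f"{sla:.5%} availability -> ~{downtime:.1f} minutes of downtime per year")
```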
Scalability
Driven by user behavior, many applications are built to serve peak use cases, from Black Friday to the big game. You need the flexibility to scale-out or scale-in the AI serving engine based on your expected and current loads.
Support for multiple platforms
Your AI serving engine should be able to serve deep-learning models trained by state-of-the-art platforms like TensorFlow or PyTorch. In addition, machine-learning models like random-forest and linear-regression still provide good predictability for many use cases and should be supported by your AI serving engine.
Easy to deploy new models
Most companies want the option to frequently update their models according to market trends or to exploit new opportunities. Updating a model should be as transparent as possible and should not affect application performance.
Performance monitoring and retraining
Everyone wants to know how well the model they trained is executing and to be able to tune it according to how well it performs in the real world. Make sure the AI serving engine supports A/B testing so you can compare a new model against a default model. The system should also provide tools to monitor and rank AI execution performance across your applications.
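The sketch below illustrates one generic way such A/B testing can work at serving time – routing a small share of traffic to a candidate model and logging which arm produced each prediction. It is a simplified illustration, not any particular engine's API:

```python
# Minimal sketch of A/B testing a candidate model against the default one at
# serving time. Model objects, routing share and requests are all assumptions.
import random
from collections import defaultdict

def ab_serve(request, default_model, candidate_model, log, candidate_share=0.1):
    """Route a small share of traffic to the candidate model and log which
    arm produced each prediction, so quality can be compared offline."""
    arm = "candidate" if random.random() < candidate_share else "default"
    model = candidate_model if arm == "candidate" else default_model
    prediction = model(request)
    log[arm].append((request, prediction))
    return prediction

# Usage with two stand-in "models" (plain callables for the sketch):
log = defaultdict(list)
default = lambda x: x >= 0.5      # current production model
candidate = lambda x: x >= 0.4    # newly trained model under evaluation
for score in (0.2, 0.45, 0.7, 0.9):
    ab_serve(score, default, candidate, log)
print({arm: len(entries) for arm, entries in log.items()})
```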
Deploy everywhere
In most cases it’s best to build and train in the cloud and be able to serve wherever you need to, for example: in a vendor’s cloud, across multiple clouds, on-premises, in hybrid clouds, or at the edge. The AI serving engine should be platform agnostic, based on open source technology, and have a well-known deployment model that can run on CPUs, state-of-the-art GPUs, high-performance compute engines, and even Raspberry Pi devices.
In a lot of ways, the economy is like a road; businesses are the drivers and customers are the passengers. And the vehicle propelling those organisations forward? Their IT architecture.
By Nick Ford, Chief Technology Evangelist at Mendix
For most of the journey so far, businesses have been able to take a traditional approach to their vehicle maintenance: they would drive hard for a few years until the parts were well and truly worn down, by which point there were probably a few new upgrades available. Only then would that business pull their architecture in for a massive tune-up.
That’s when the IT department would conduct a complete overhaul, bringing the infrastructure into the present day so it could chug along for another few years. This would take a lot of time and money, but since it only needed to happen once or twice a decade, the system worked.
But not anymore. These days, business is high-speed – and new competitors are joining industries at ever more disruptive rates.
It’s time to consider a new, more modifiable motor – and that’s the composable enterprise.
Speeding IT up
Any organisation that wants to stand a chance at continuing to successfully navigate the pot-hole ridden path of the post-COVID world needs to become incredibly adaptable.
The pandemic may have wreaked havoc on the economy, but it has also accelerated the digitalisation of society even further. It has led 54% of businesses to accelerate their digitalisation in a bid to support newly remote workforces and keep up with customers’ rapidly changing needs, according to Mendix research.
The scope of these digital transformation projects is just too large and time-sensitive for IT teams to build solutions from scratch like they used to – which is what makes the shift to a composable model so attractive.
If a firm can tack on new services and features and improve its customer experience when it needs to, one application at a time, by reusing automated, vetted functionalities, it can dramatically lower its IT costs while also accelerating its time to value.
This is what it means to become a composable enterprise – having the ability to build solutions from best-of-breed components, assembled from a variety of vendors and all working seamlessly together.
Low code path to high performance
This is where low-code platforms come into the picture.
On a low-code platform, all the functionalities needed to build out a new application can be pulled from an existing library. These components can be dragged and dropped into a visual workflow, meaning app development doesn’t even require coding experience. This reduces the burden placed on IT to be responsible for all digital transformation projects (as we can all probably agree, they have more than enough on their plates).
So, businesses can rapidly assemble new business applications like they’re kitting out a custom car – an accessory at a time, each one a different functionality. And even though IT doesn’t have to be directly involved in this app development by way of assembly, they still have an element of oversight, as every part is made up of tried and tested functionalities that they’ve already approved or built.
But more than that, a low-code platform can be a bridge connecting non-technical staff and IT departments and giving them a common language. So, these teams can move closer than ever before and collaborate in new, more efficient and creative ways.
Once a business is able to inspire collaboration at every level, empowering workers to create new innovations to both make their work lives easier and improve customer experience, it will have a chance of staying on track.
Because the iterative changes lead to incredibly adaptable IT infrastructures – perfectly geared for whatever conditions the world throws at them.
From old banger to hot hatch
The age of the massive overhaul is over. The businesses holding pole position use all the information they can get their hands on to make incremental and iterative changes – without even needing to slow down.
So, if you’re a driver of the old style, struggling to switch from your static IT architecture into a dynamic, composable enterprise, try starting with a shift to low code development. It may be the boost of nitro your organisation needs.
Here, Matt Parker, CEO of Babble, explains the inherent differences between adaptability and agility and why both are vital.
Believe it or not, we’re no longer tied to our desks, hooked up to our workspace via wires and telephone lines. The world of work is transforming, and smart businesses are making sure they’re ahead of the curve. If business is booming, your staff are working tirelessly and phones are ringing off the hook, you might not notice a problem. But, before you know it, your old-fashioned way of working will be overtaken by swift, forward-thinking competitors, and, if 2020 showed us anything, it’s that an ability to adapt is invaluable. The world around us isn’t static – it’s constantly evolving, and businesses must learn to change with the times.
Agile working connects people to technology that helps to improve effectiveness and productivity. Therefore, agility is defined by the way a business proactively evolves in order to thrive. Agility allows a business to realise its full potential through implementing new systems. As technology evolves and the way we work changes, businesses can’t just stand still. Continuing a way of working because “this is how we’ve always done it” will get mediocre results and hinder a business’s progress.
For example, one company that has embraced agile working is Aquavista, a leading leisure and mooring supplier that needed to implement a scalable cloud technology solution aligned to its rapid growth. To enable employees to operate efficiently and remotely, Babble deployed a fibre-grade connection and a hosted telephony platform. These solutions facilitated agile working, reduced operational downtime, boosted productivity and ultimately enhanced customer experience.
Adaptability, however, is a business’s ability to respond to these changes – how prepared it is to change or evolve its practices to overcome challenges or align with environmental shifts. Businesses must recognise the importance of both of these competencies. They should be able to adapt to new changes in a productive, positive way – with company culture emerging unscathed and business processes improved as a result. Maintaining business effectiveness through times of change will ensure a business can thrive, no matter what life throws at it.
Whilst “agile working” in a software development sense isn’t wholly aligned with the term used when discussing business, there are some shared principles and key takeaways. Fundamentally, viewing change positively and approaching new processes with flexibility helps businesses adapt. Improving internal processes benefits the business, its staff and ultimately customers as the service provided is more effective.
Businesses need a unified vision and passionate leaders to ensure agile adoption. Problem solving with a positive, productive mindset will ensure successful agile transformation. It’s all about understanding what currently works but being open to exploring what could work better.
Businesses must be both agile and adaptable in order to weather new storms and safeguard their future. However, achieving agility and adaptability is a fine balancing act. Constantly striving for change without strategic rationale may cause a business to flounder. You should always aim for the final goal, with a strong end result in mind and a clear idea of how you’re going to get there.
Implementing change within a business can be a challenge. It’s true that leaders can sometimes be too close to their own operational processes to be able to clearly see better ways to work, but partnering with a specialist such as Babble offers a unique advantage.
Flexible, scalable solutions ensure that a business can adapt to new challenges or changes in circumstance. As we’ve seen from the COVID-19 pandemic, there’ll be times when businesses must adapt to new ways of working almost overnight. Think ahead by utilising an optimised network and deploying cloud-based communication tools to allow your business to work in an agile way. This results in maximum efficiency, maximum benefit to your customers and maximum business productivity.
Being agile requires business leaders to identify change and understand how it could impact the business. Seeking new opportunities and deploying the resources needed to secure these opportunities futureproofs a business and helps it succeed. Although it’s worth noting that there’s no set formula for agile working. Businesses can’t follow a ‘how to’ guide for agility. Every business has its own way of working; its own processes. Identifying where there’s need for change is unique across each business. However, ultimately, businesses that don’t adopt agile working will stagnate as proactive, forward-thinking competitors take the lead. Don’t just wait for new ways of working to become a necessity – take the initiative now.
Brian Johnson, ABB Data Center Segment Head, explores how digitalisation is shaping the evolution of green data centres, and provides a range of sustainable growth tactics for delivering cost and energy efficiencies.
It is a common misconception that data centers are responsible for consuming vast amounts of the world’s energy reserves. The truth is that data centers are leading the charge to become carbon free and are supporting members of their supply chains in achieving the same.
Even in the face of rapid digital acceleration, the vast proliferation of smart devices and an upward surge in demand for data, the data center sector remains a force for positive change on climate action and is well on course to fulfil its ‘green evolution’ masterplan.
To be more specific, data centers are estimated to consume between one and two percent of the world’s electricity according to the United States Data Center Energy Usage Report (1). A recent study confirmed that, while data centers’ computing output jumped six-fold from 2010 to 2018, their energy consumption rose only six percent (2).
To better envisage just how energy efficient data processing has become, imagine that if the airline industry was able to demonstrate the same level of efficiency, a typical 747 passenger plane would be able to fly from New York to London on just 2.8 liters of fuel in around eight minutes.
How has the data center sector reduced total energy consumption?
Using a range of safe, smart and sustainable solutions, ABB is helping its data center customers to reduce CO2 emissions by at least 100 megatons by 2030. That is equivalent to the annual emissions of 30 million combustion-engine cars.
Solutions range from more energy-efficient power system innovations to moving entirely to large-scale battery energy storage systems, which ensure reliable grid connectivity in case of prolonged periods of power loss.
These are obviously big budget changes, with impressive yields, but there is much more that can be done to harness the power of digitalization on a smaller scale.
Making every watt count
The need for additional data from society and industry shows no sign of stopping, and it is the job of the data center to meet this increased demand, without consuming significantly more energy. Unlike many industries which wait for regulation before forcing change, the desire to offer a more environmentally conscious data center comes from within the industry, with many big players and smaller facilities too, taking an “every watt counts” approach to operational efficiency.
By digitalizing data center operations, data center managers can react to increased demand without incurring significant additional emissions. Running data centers at higher temperatures, switching to variable-frequency drives instead of dampers to control fan loads, adopting the improved efficiency of modern UPS systems, and using virtualization to reduce the number of underutilized servers are all strong approaches to improving data center operational efficiency.
To understand this further, let us explore some key sustainable growth tactics for delivering power and cost savings for green data centers:
Digitalization of data centers
One of the most recent developments has been the implementation of digitally enabled electrical infrastructure. Data center operators can take advantage of techniques to make their equipment more visible, efficient and safer. One development has been the use of sensors instead of traditional instrument transformers; these communicate digitally via fiber optic cables, reducing the total number of cables by up to 90% compared with traditional analog, and also use low-energy circuits, which increases safety.
The resultant digital switchgear can then be manufactured, commissioned and repaired much more easily thanks to the far smaller number of cables and the intelligent nature of the connections. Other innovations allow circuit protective devices to be configured wirelessly, and even to change their settings when alternate power sources are connected. Visibility into electrical consumption is much easier with digital signals, and analytics are enabled by this “democratization” of the data stream. From this, insights can be gained both to increase efficiency and to tailor consumption based on specific business goals.
Minimizing idle IT equipment
There are several ways data centers can minimize idle IT equipment. One popular course of action is distributed computing, which links computers together as if they were a single machine. Essentially, by scaling-up the number of data centers that work together, operators can increase their processing power, thereby reducing or eliminating the need for separate facilities for specific applications.
Virtualization of servers and storage
Undergoing a program of virtualization can significantly improve the utilization of hardware, enabling a reduction in the number of power-consuming servers and storage devices. In fact, it can improve server utilization by around 40 percentage points, increasing it from an average of 10 to 20 percent to at least 50 or 60 percent.
A server cannot tell the difference between physical storage and virtual storage, so it directs information to virtualized areas in the same way. In other words, this process allows for more information storage, but without the need for physical, energy consuming equipment. More storage space means a more efficient server, which saves money and reduces the need for further physical server equipment.
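The arithmetic behind that claim is straightforward; the back-of-the-envelope Python sketch below (with invented workload figures) shows how raising average utilization shrinks the number of physical servers required:

```python
# Back-of-the-envelope view of why virtualization shrinks the server estate:
# the same total workload served at higher average utilization needs fewer
# physical hosts (figures below mirror the 10-20% vs 50-60% range in the text).
import math

total_workload = 100          # arbitrary units of compute demand
capacity_per_server = 10      # units one physical server could deliver flat out

def servers_needed(avg_utilization: float) -> int:
    effective = capacity_per_server * avg_utilization
    return math.ceil(total_workload / effective)

print("Before virtualization (15% util):", servers_needed(0.15), "servers")
print("After virtualization  (55% util):", servers_needed(0.55), "servers")
```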
Consolidating servers, storage, and data centers
Blade servers can help drive consolidation as they provide more processing output per unit of power consumed. Consolidating storage provides another opportunity, which improves memory utilization while reducing power consumption. Some consolidation methods can use up to 90% less power once fully operational (3).
Big savings also come from moving from traditional hard disk drives (HDD) to solid-state drives (SSD). While a little more expensive, they are much smaller and more energy efficient, and the switch can be made during an IT “refresh” cycle every three to five years or so.
Managing CPU power usage
More than 50 percent of the power required to run a server is used by its central processing unit (CPU). Most CPUs have power-management features that optimize power consumption by dynamically switching among multiple performance states based on utilization.
By dynamically ratcheting down processor voltage and frequency outside of peak performance tasks, the CPU can minimize energy waste.
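On Linux servers, for example, the active power-management policy can be inspected through the cpufreq interface in sysfs; the short, read-only Python sketch below assumes a typical Linux system with cpufreq support:

```python
# Minimal, Linux-specific sketch: reading each core's cpufreq scaling governor
# and current frequency from sysfs (paths exist on most Linux systems with
# cpufreq support; run as an ordinary user, read-only).
from pathlib import Path

for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    gov = cpu / "cpufreq" / "scaling_governor"
    freq = cpu / "cpufreq" / "scaling_cur_freq"
    if gov.exists() and freq.exists():
        khz = int(freq.read_text().strip())
        print(f"{cpu.name}: governor={gov.read_text().strip()}, "
              f"current={khz / 1000:.0f} MHz")
```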
Distribution of power at different voltages
Virtually all IT equipment is designed to work with input power voltages ranging from 100V to 240V AC, in accordance with global standards, and the general rule is, of course, the higher the voltage, the more efficient the unit.
That said, by operating a UPS at 240/415V three-phase four wire output power, a server can be fed directly, and an incremental two percent reduction in facility energy can be achieved.
Adopting best cooling practices
Traditional air-cooling systems have proven very effective at maintaining a safe, controlled environment at rack densities of two kW to three kW per rack, all the way to 25 kW per rack. But operators are now aspiring to create an environment that can support densities in excess of 30-50 kW as demand for AI and Machine Learning increases, and at these levels, air cooling technologies are no longer effective.
That shouldn’t be seen as a barrier though, with alternate cooling systems such as rear door heat exchangers, providing a suitable solution.
Plugging into the smart grid
Smart grids enable two-way energy and information flows to create an automated and distributed power delivery network. Data center operators can not only draw clean power from the grid, they can also install renewable power generators within their facility in order to become an occasional power supplier back into the grid.
For further advice on safe, smart and sustainable digitalization tactics which support the green data center masterplan, visit https://new.abb.com/data-centers
1 United States Data Center Energy Usage Report | LBL Sustainable Energy Systems
2 Cloud Computing Is Not the Energy Hog That Had Been Feared - The New York Times (nytimes.com)
3 Servers: Database consolidation on Dell PowerEdge R910 servers
Automation is disrupting the very nature of how humans work. AI, cloud computing, bots, robots, cybersecurity and real-time data appear in headlines every day and have built a dual reputation as both a villain and a saviour; they disrupt industries, eliminate jobs and – yet – are also responsible for creating tremendous value.
By Sherif Choudhry, Managing Director, BCG Platinion.
According to the World Economic Forum’s Global Risk Report, automation – accelerated by COVID-19 – may displace 85 million jobs in the next five years. The report went on to warn that accelerating automation could even damage our efforts to build a digitally inclusive society, thus reinforcing the suspicion that automation may not be a force for good or improve lives as is often promised.
Perhaps the reality lies somewhere in between: automation is at its best when we put human needs at the heart of innovation, using technology as an enabler to improve society and create value. With creativity and ingenuity focused on behavioural change, technology becomes an enabling tool to create new value. And it is only through a clear demonstration of this value that we can mitigate the fear that automation exists simply to eliminate jobs.
Making the case for automation
Alex Garland, writer and director of fiction AI film, Ex-Machina put it best when he said: “any major breakthrough, whether nuclear power or industrial revolution, contains latent danger and latent benefit, but it’s up to us how we contain that.” The same can be applied to technology.
When automation revolutionised the textile industry, new industries and jobs were created. Today, we engage with AI when talking to chatbots while we shop or pay our bills. And while chatbots have replaced call centre agents, help can now be offered to more clients around the clock and in many languages. For employees, the use of chatbots can create new and higher-value roles, such as working with data to develop new customer offerings and support. This is evidenced by a 2020 report from the World Economic Forum forecasting an upsurge in global AI jobs, with new jobs per 10,000 opportunities jumping from 78 to 123 by 2022.
When viewed through this lens, automation presents opportunities, but there is a need for organisations and governments to help employees and citizens understand what automation might mean for them and to slowly build trust. Organisations can be instrumental in this process, leading the way with strong change management programmes and investment into upskilling and reskilling.
The role of the organisation
It is critical for organisations to communicate what the re-imagined future looks like before adopting these technologies. The ‘why’ and the future benefits need highlighting at the very beginning of the change journey. Too often, the story is lost in IT or marketing reports. After all, our behaviours inform the technology we use at work, and over time, the technology we use informs our behaviours in turn. Only with that understanding can organisations begin to overcome the barrier to trust, and how they carefully navigate this change journey is vital to mitigating disruption to society.
Here, organisations must recognise that the Chief Digital Officer has a key role: taking responsibility for the vision of a re-imagined future with new technologies and communicating its impact to the workforce.
The solution is bionic
It might feel impossible to design and implement new technology, but a bionic approach changes this. Bionic combines the best of human expertise, data and technology to create a resilient organisation that’s able to take advantage of technology - in an ethical way - and win in the new reality.
Of course, this kind of planning is easier said than done: technology changes in significant ways every six to twelve months – and realising how those changes will impact ways of working and society is a key strategic function of businesses.
Think of all the emerging technologies: Robotic Process Automation, the Internet of Things and Artificial Intelligence (AI). For organisations to adopt, design, implement and operate these technologies requires new talent and updated skills. They need people who can use these technologies to develop new areas of value for the business – an act that will subsequently create additional new roles and bring in yet more people.
Clearly, businesses need to take a more human-centred design approach to their services and products. Any strategy must be built on a blend of human expertise and technology capabilities. This is something that only comes from clear strategy and purpose, one that creates a roadmap to an outcome via a connective tissue of people and IT. This bionic approach is the backbone of future success – one that allows an organisation to respond to change and uncertainty quickly and, ultimately, to grow.
Aligning ambitions and driving change
The responsibility to ensure automation benefits employees and society at large, in a safe but effective way, sits with governments and businesses.
At a grassroots level, digital literacy in schools is largely happening in the developed world, but it is also important for emerging countries if the digital divide is to be countered. Business and governments, for example, could work together to better support education systems across all parts of the world, ensuring future generations are taught how to harness continually evolving technology and are energised by its potential to create new societal value. For those entering the workforce, the case for automation should be based on the tangible returns for business, employees and society.
Yet those already within organisations must not be left behind. Investment in digital upskilling, reskilling and training, while consistently communicating the benefits of technology and automation, is vital. Not only will this help to build trust, but it also ensures that the benefits of technology are realised and the risks mitigated.
When automation does inevitably render jobs redundant, businesses should never lose sight of the human factor, being ready to step in and re-skill employees in new areas. Indeed, we suggest that the future roadmap of an organisation should look not only at shareholder value, but employee and societal value driven by human-centric design thinking.
2020 was a difficult year for everyone, with a huge number of industries suffering thanks to the pandemic. This was certainly the case for the sports industry, which saw reliable sources of revenue, such as ticket sales and food and drink at stadiums, decimated.
By Lars Rensing, CEO of enterprise blockchain service provider Protokol
As we move into 2021, the situation seems to be easing, but teams will still find it hard to recover from the losses of 2020 without investing in digital transformation.
To recuperate lost revenue from the past year, digital innovation and transformation is going to be essential for teams who want to create new streams of income and increase engagement of fans in the absence of live game attendance. There are a number of ways clubs can do this, and those that innovate sooner will be more likely to survive in the long run.
Although live games have started up again for most sports, attendance is still limited or prohibited. This means that although fans can watch their teams from home, they may feel disconnected from the action. Sports teams can help to improve their engagement with fans watching from home by exploring new viewing experiences enhanced by digital transformation. For example, clubs can use virtual reality technology to create ‘cheer’ and other reaction buttons for supporters, which can be used during the match to celebrate with the players. These reaction buttons allow fans to feel included in the action, as if they were watching the game at the venue. Clubs can also offer fans a whole new range of viewing angles - putting cameras on the players and even the balls - to further engage them remotely. This allows fans to feel more engaged with the action, letting clubs leverage their remote fanbase and encourage viewing numbers.
Just improving game viewing isn’t enough though. Teams need to be able to engage their fans without games, in order to make up for the losses of last year. Teams also need to be able to attract a younger, digital-native audience, as well as tap into eSports’ audience. This is where blockchain technology comes in. Blockchain can be used to create alternative revenue streams for sports teams, something which will be crucial this year. Perhaps the most popular of blockchain’s offerings for sports is in the creation of fan tokens. A number of prominent clubs, such as FC Barcelona and Dynamo Kyiv, had been investing in these tokens even before the pandemic to broaden their offerings and create innovative ways for fans to engage digitally. Blockchain-based fan tokens can be purchased by fans or earned when they complete certain actions, such as interacting with their club or other fans on social media. The tokens can then be exchanged for exclusive rewards, from VIP events to early merchandise access. Fan tokens solutions not only engage fans in the absence of game attendance, but also allow sports teams to tap into a larger and more global fanbase, even when game attendance returns. These tokens have already been proving extremely popular, and more and more clubs are looking to adopt them. For instance, FC Barcelona’s first round fan tokens sold out in under two hours, at a value of $1.3 million.
Another innovative way that blockchain can be used to engage fans and make up for the losses of 2020 is in the creation of digital collectibles and trading cards. Like fan tokens, digital trading cards and collectibles can be purchased by fans, creating a new stream of revenue for teams hit hard by the pandemic. Digital trading cards also let fans from all over the world play with each other, establishing a more connected global fanbase that teams can capitalise on. In the same way, unique digital collectibles are an inventive way to simultaneously engage younger fans and increase revenue. This is something that has already been successful - the NBA’s Top Shot digital collectibles sold $2.3 million worth of NFTs in 30 minutes this year. What’s more, these digital trading cards and collectibles are underpinned by blockchain technology, ensuring that they cannot be replicated, forged or destroyed, making them a better investment for teams and more attractive to dedicated fans.
We may certainly see the situation for the sports industry improve this year, but that in itself won’t be enough to make up for 2020’s losses. Teams that need to recuperate from last year need to diversify their offerings through digital transformation. Only through this kind of innovation will teams be able to get back on track. Investment in digital transformation and creative thinking around fan engagement will not only allow clubs to survive the fallout of 2020, but will also earn the participation of a younger fanbase, diversify revenues and successfully monetise international fans.
Schrems II enforcement is getting off the ground in Germany, highlighting the serious and urgent need for companies to begin steps towards compliance.
By Gary LaFever, CEO & General Counsel at Anonos: Lawful Borderless Data.
A discussion between German Data Protection Authorities (DPAs) at their joint Datenschutzkonferenz (DSK) meeting highlighted the next steps of a Schrems II Task Force: DPAs, led by Hamburg and Berlin, will begin initiating enforcement measures.
Most notably, the Hamburg DPA will conduct random checks on companies to determine whether or not they are in compliance with Schrems II requirements. This highlights the high priority of Schrems II concerns for Boards and C-Suite Executives, as investigations and enforcement actions in other jurisdictions are likely to follow soon.
Another indicator of increased pressure in other jurisdictions comes from NOYB – European Center for Digital Rights, the non-profit privacy organisation founded by Max Schrems. In a questionnaire sent to numerous companies in 2020, NOYB asked:
“If you send personal data to the US, which technical measures are you taking so that my personal data is not exposed to interception by the US government in transit?”
Thirty-three companies received this questionnaire as part of NOYB’s “Opening Pandora’s Box” investigation, but very few were able to respond satisfactorily. It is clear that enforcement actions and compliance pressures are coming from both regulators and privacy organisations, highlighting the urgency of Schrems II compliance. In a recent webinar, “Briefing the C-Suite & Board of Directors on Schrems II Risk Exposure”, 83% of respondents answered “No” to the following question:
Would your company be able to answer a similar question from NOYB regarding the technical measures you have in place to comply with Schrems II?
This response indicates a high level of unpreparedness for Schrems II compliance. However, other than invalidating the Privacy Shield treaty for EU-US transatlantic data flows, the Schrems II ruling does not represent “new law”, but rather clarifies requirements under the EU General Data Protection Regulation (GDPR) passed in 2016. Under the GDPR, the fundamental rights of individual data subjects must be protected. The Schrems II ruling clarifies GDPR requirements for protecting EU personal data by leveraging technical measures when data is in use. Until now, most organisations have focused on protecting data when it is at rest or in transit, but that approach is no longer sufficient. Organisations that are found not to be in compliance with Schrems II may therefore not be in compliance with the GDPR generally.
The court in Schrems II ruled that the appropriate relief for noncompliance is injunctive termination of processing, rather than the assessment of penalties – highlighting the potential for immediate material disruption to business operations. This shifts the burden of proof onto data controllers in order to regain the right to process their data. Since there is no grace period, compliance became mandatory immediately on 16 July 2020, the date of the Schrems II court ruling. Now, over six months later, organisations must evaluate whether the technical controls they have in place will be sufficient to overcome claims of non-compliance. Given that the European Data Protection Board (EDPB) has already released preliminary recommendations on how to comply with Schrems II, not taking action is a high-risk strategy.
Action Plan
In Germany, the recent Data Protection Report from law firm Norton Rose Fulbright recommends that “companies with headquarters in Germany or with affiliates operating from Germany should be aware that they might receive a questionnaire from their regulator [and] should prepare for how they might respond”. More specifically, it notes that German DPAs engaging in random questionnaires or compliance checks will expect companies to already be taking steps towards complying with EDPB recommendations for Schrems II.
For those outside of Germany, companies should also take steps to comply with EDPB recommendations before DPAs in other jurisdictions begin to take stronger enforcement measures or privacy organisations initiate new investigations. Finalisation of the EDPB guidelines and new Standard Contractual Clauses (SCCs) is projected to occur near the end of March 2021, leaving companies with few options if they are investigated and found to be non-compliant. Briefing Boards and C-Suite Executives and reviewing and procuring relevant technology may take several months at a minimum; even companies that have already started the work necessary to comply with Schrems II may be found to have responded too slowly.
Taking steps to implement technical measures to protect data is critical, and companies with lower risk tolerances should take steps immediately. Companies electing not to take action now should document their decision-making process for evaluating the risk of noncompliance as well as the consequences of terminated data flows and interruptions to business operations.
Schrems II webinar participants were also asked about this potential outcome, namely:
“If your company was told to halt processing and/or data transfers, what would be the immediate impact to your business?” 89% of respondents in the “Briefing the C-Suite & Board of Directors on Schrems II Risk Exposure” webinar characterised the results of terminated processing as “catastrophic” or “serious” to their operations. All companies are urged to consider the potential impacts on their own businesses in the face of potential enforcement action.
It is critically important that, throughout this process, companies understand that they must implement new technically-enforced “Supplementary Measures” to support Standard Contractual Clauses (SCCs) to comply with Schrems II requirements. Merely updating SCCs without implementing new technically enforced “Supplementary Measures” is not enough. Without appropriate technical measures to protect data when in use - not just when at rest and during transit - compliance will not be achieved. As enforcement actions draw increasingly near, companies should not wait to find out what happens in Germany before taking action themselves.
Why is it that the security industry talks about network security, but data breaches? It’s clear that something needs to change, and according to Paul German, CEO, Certes Networks, the change is simple.
For too long now, organisations have been focusing on protecting their network, when in fact they should have been protecting their data. Paul outlines three reasons why the security industry has been protecting the wrong thing and what they can do to secure their data as we move into 2021.
Reason one: They’re called data breaches, not network breaches, for a reason
Looking back on some of the biggest data breaches the world has ever seen, it’s clear that cyber hackers always seem to be one step ahead of organisations that seemingly have sufficient protection and technology in place. From the Adobe data breach way back in 2013 that resulted in 153 million user records stolen, to the Equifax data breach in 2017 that exposed the data of 147.9 million consumers, the lengthy Marriott International data breach that compromised the data from 500 million customers over four years, to the recent Solarwinds data breach at the end of 2020, over time it’s looked like no organisation is exempt from the devastating consequences of a cyber hack.
When these breaches hit the media headlines, they’re called ‘data breaches’, yet the default approach to data security for all these organisations has been focused on protecting the network - to little effect. In many cases, these data breaches have seen malicious actors infiltrate the organisation’s network, sometimes for long periods of time, and then have their pick of the data that’s left unprotected right in front of them.
So what’s the rationale behind maintaining this flawed approach to data protection? The fact is that current approaches mean it is simply not possible to implement the level of security that sensitive data demands as it is in transit without compromising network performance. Facing an either/or decision, companies have blindly followed the same old path of attempting to secure the network perimeter, and hoping that they won’t suffer the same fate as so many before them.
However, consider separating data security from the network through an encryption-based information assurance overlay. This means organisations can seamlessly ensure that even when malicious actors enter the network, the data remains unattainable and unreadable, keeping its integrity, authentication and confidentiality intact without impacting the overall performance of the underlying infrastructure.
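As a simple illustration of the principle – not of any particular vendor's product – the Python sketch below uses the cryptography package to encrypt the payload itself before it ever touches the network, so an intruder who breaches the perimeter sees only ciphertext:

```python
# Illustrative sketch of the principle (not any vendor's product): encrypt the
# payload itself before it crosses the network, so an intruder who reaches the
# network sees only ciphertext. Uses the 'cryptography' package (Fernet).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, managed by a key-management system
cipher = Fernet(key)

payload = b'{"customer_id": 1234, "card_last4": "9876"}'
ciphertext = cipher.encrypt(payload)     # what actually travels over the network

# Only holders of the key can recover the data, wherever it was intercepted
assert cipher.decrypt(ciphertext) == payload
print("payload protected independently of the transport:", ciphertext[:24], b"...")
```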
Reason two: Regulations and compliance revolve around data
Back in 2018, GDPR caused many headaches for businesses across the world. There are numerous data regulations businesses must adhere to, but GDPR in particular highlighted how important it is for organisations to protect their sensitive data. In the case of GDPR, organisations are not fined based on a network breach; in fact, if a cyber hacker were to enter an organisation’s network but not compromise any data, the organisation wouldn’t actually be in breach of the regulation at all.
GDPR, alongside many other regulations such as HIPAA, CCPA, CJIS or PCI-DSS, is concerned with protecting data, whether it’s financial data, healthcare data or law enforcement data. The point is: it all revolves around data, but the way in which data needs to be protected will depend on business intent. With new regulations constantly coming into play and compliance another huge concern for organisations as we continue into 2021, protecting data has never been more important, but by developing an intent-based policy, organisations can ensure their data is being treated and secured in a way that will meet business goals and deliver provable and measurable outcomes, rather than with a one-size-fits-all approach.
Reason three: Network breaches are inevitable, but data breaches are not
Data has become extremely valuable across all business sectors and the increase in digitisation means that there is now more data available to waiting malicious actors.
From credit card information to highly sensitive data held about law enforcement cases and crime scenes, to data such as passport numbers and social ID numbers in the US, organisations are responsible for keeping this data safe for their customers, but many are falling short of this duty. With the high price tag that data now has, doing everything possible to keep data secure seems like an obvious task for every CISO and IT Manager to prioritise, yet the constant stream of data breaches shows this isn’t the case.
But what can organisations do to keep this data safe? To start with, a change in mindset is needed to truly put data at the forefront of all cyber security decisions and investments. Essential questions a CISO must ask include: Will this solution protect my data as it travels throughout the network? Will this technology enable data to be kept safe, even if hackers are able to infiltrate the network? Will this strategy ensure the business is compliant with regulations regarding data security, and that if a network breach does occur, the business won’t risk facing any fines? The answer to these questions must be yes in order for any CISO to trust that their data is safe and that their IT security policy is effective.
Furthermore, with such a vast volume of data to protect, real-time monitoring of the organisation’s information assurance posture is essential in order to react to an issue, and remediate it, at lightning speed. With real-time, contextual meta-data, any non-compliant traffic flows or policy changes can be quickly detected on a continuous basis to ensure the security posture is not affected, so that even if an inevitable network breach occurs, a data breach does not follow in its wake.
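Conceptually, that continuous checking can be as simple as comparing observed flow metadata against the set of intended flows; the Python sketch below is an illustrative, simplified version of the idea, with invented flow records and policy:

```python
# Minimal sketch of continuous policy checking over flow metadata: compare each
# observed flow against an allow-list of intended flows and surface anything
# non-compliant. Flow records and the policy itself are illustrative.
ALLOWED_FLOWS = {
    ("app-server", "db-server", 5432),
    ("app-server", "payments-api", 443),
}

observed_flows = [
    {"src": "app-server", "dst": "db-server", "port": 5432},
    {"src": "app-server", "dst": "unknown-host", "port": 8080},  # not in policy
]

for flow in observed_flows:
    key = (flow["src"], flow["dst"], flow["port"])
    if key not in ALLOWED_FLOWS:
        print("ALERT: non-compliant flow detected:", flow)
```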
Trusting information assurance
An information assurance approach that removes the misdirected focus on protecting an organisation’s network and instead looks at protecting data is the only way that the security industry can move away from the damaging data breaches of the past. There really is no reason for these data breaches to continue hitting the media headlines; the technology needed to keep data secure is ready and waiting for the industry to take advantage of it. Just as no one would leave their finest jewellery on display in the kitchen window, or leave their passport out for the postman to see, organisations must safeguard their most valuable asset and protect themselves and their reputation from suffering the same fate as the many organisations that have not protected their data.
Armed with significant enforcement powers, regulators continue to grow bolder and more confident in taking enforcement action in support of data regulation and the protection of citizens’ digital information.
By Mark Keddie, Global Director of Privacy, Veritas Technologies.
While some data regulators have adopted a more sympathetic tone on account of businesses’ continued struggle with COVID-19, others have shown no such leniency, issuing multi-million-euro fines. For multinational businesses, the challenge of knowing what data you have, and how to manage it compliantly so as to avoid becoming the next data-disaster headline, remains a Board-level risk.
Data breaches are often viewed as sensationalist and frequently contentious in nature, and yet the scope of what constitutes a data breach remains poorly understood. Consequently, the most seemingly innocuous risks, such as data retention, can emerge as the unexpected root cause of regulatory failure.
Consider the dark data the business didn’t realise it had, or the personal data it had forgotten to delete after selling off a part of the business. Maybe it’s the poorly enforced data retention strategy that’s resulted in personal data being stolen by cybercriminals from an unsecured, forgotten server. A data breach that originates from poor data retention practices can be just as difficult and costly to manage as the more headline-grabbing cybersecurity incidents that we’ve become all too familiar with.
Businesses need clarity across their entire data estate to be confident that they are meeting their regulatory obligations, but without the tools that automate data classification and deletion policies, the process of classifying and deleting data can be extremely resource-intensive.
An unknown quantity
A combination of historic best practice, a fear of deleting data and the growing availability of large-scale cheap storage means employees, IT staff and data managers can mistakenly believe that they’re doing the right thing by storing ‘everything’, without a real understanding of the data that they are retaining. The resulting data bloat can have real consequences when it comes into conflict with newer regulatory and legal obligations.
Many industries, like the banking sector for example, have established requirements to retain data for set periods of time, which have become ingrained in the consciousness of long-term employees. However, regulations like GDPR dictate that data should only be held for as long as required for its original purpose and offer individuals the ‘right of erasure’. It is unsurprising, then, that businesses are confused about what they can and should keep – and for how long – often choosing to ignore the issue, keep all their data and quietly forget about the risk.
With data sets becoming more complex and increasingly challenging to manage and secure, the risk from data retention is becoming omnipresent. The growing popularity of hybrid multi-cloud environments – where data is stored across both private on-premises networks and a range of cloud environments – means data can exist in multiple, often disparate, locations in an organisation for years to come. It’s a situation that’s exacerbated if deletion or categorisation of that data is delayed or ignored, with much of it simply forgotten and going ‘dark’. The most recent research from Veritas found that half (52%) of an organisation’s data, on average, can be classified as dark – meaning that the person who’s managing it doesn’t know what it is, or may not know it exists.
Dark data quickly loses its strategic value and evolves instead into a data risk. Unknown volumes of dark data mean an increasing likelihood of data incidents, with the potential for breaches, fines and reputational damage. Just how confident can a business be that it knows where all its data is and how that impacts its compliance obligations?
Insight and automation
Organisations need a fresh approach to data management. It can no longer be treated as a low-priority, back-office function. A new approach requires both operational and cultural changes across organisations, but it also demands ownership and accountability if the compliance risk is to be effectively mitigated.
Every board member or departmental head today is, in their own way, a chief data officer, accountable for their business unit’s data. That means setting a proactive tone from the top: every business leader and data owner should take a principal role in defining the data deletion strategy, resolving the management challenges that frustrate them and providing employee education to meet data retention policies.
To enable this, businesses must maintain their focus and improve data visibility by adopting tools that help organisations see what data they have and where. With these in place, better-informed decisions can be made on what data to keep and what to delete. Deploying tools that automatically label data on upload limits error and improves future accuracy, reducing risk by ensuring data is set to expire after a pre-defined period of time in line with regulatory obligations. This prevents unclassified and vulnerable ‘dark’ data from building back up again over time.
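As a minimal illustration of automated retention enforcement (with a hypothetical data store and an assumed three-year policy), the Python sketch below flags files that have outlived their retention window so they can be reviewed or deleted:

```python
# Minimal sketch (illustrative path and retention period): find files older
# than a retention window so they can be reviewed or deleted, rather than
# quietly going 'dark'. A real deployment would act on classification labels,
# not just file age.
import time
from pathlib import Path

RETENTION_DAYS = 365 * 3          # assumed three-year retention policy
cutoff = time.time() - RETENTION_DAYS * 24 * 3600

for path in Path("/data/archive").rglob("*"):     # hypothetical data store
    if path.is_file() and path.stat().st_mtime < cutoff:
        print(f"past retention: {path}")
        # path.unlink()  # deletion left commented out in this sketch
```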
Careful data management, clear policies and tools for classification and deletion are all central to meeting regulatory compliance. To execute it effectively, a business must give its employees the insight, confidence and control over the data they handle, and enable them with the right tools and technologies. By encouraging data responsibility and implementing new automation capabilities, they can cut through the fog and find a safe path through the regulatory landscape.
Innovation can be difficult to judge, but a strategic framework will help track progress.
By Peter Skyttegaard, Senior Research Director, Gartner
Demonstrating the value of innovation is challenging. The exploratory nature of innovation means that organisations have to measure something where business outcomes are not fully known or guaranteed and where the path to success is often unclear. In many cases, innovation also has auxiliary goals such as stronger employee engagement, culture change or increased organisational energy by doing something new and different. These results are also difficult to measure.
There is also a timing challenge. The time lag between the original innovation investment or activity and its eventual outcome can be very long. This can make it virtually impossible for executive leaders to assess the value of innovation and intervene with adjustments to the program when things don’t go as planned. To better assess the business value of innovation, executive leaders should set clear goals and select a combination of input, process and output measures to track progress while the innovation program is being undertaken.
Determine Business and Innovation Goals to Create Clarity in Your Innovation Initiative
Designing good innovation metrics starts with an understanding of the organisation’s business goals. To demonstrate business value, organisations need to be clear on what they are innovating for. Are they pursuing incremental innovation to improve existing business? Are they looking for radical ‘moonshot’ innovation in entirely new business models? Or are they simply validating the possibility or potential of an idea before the organisation invests heavily in it?
For example, if an organisation’s business goals are related to better customer experiences, it may set an innovation goal to improve customer involvement in ideation and set a “benchmark” key performance indicator (KPI) against which to track this engagement. This can be measured much earlier by, for instance, counting customer-submitted innovation ideas.
Executive leaders can also measure “failure rate.” This can include good outcomes that were not originally planned, or endpoints which advise the business not to invest in an innovation idea because early exploration disproves the perceived benefits. A lesson from an experiment that didn’t go as expected is a good outcome that can be measured much earlier than the realised value of a successful new product.
For even earlier measures, executive leaders should look for leading indicators on “inputs” to the innovation process. Some of these are easily quantifiable inputs, such as budget spend or time dedicated, but executive leaders should not limit themselves to just these kinds of metrics. More “fuzzy” factors such as organisational culture are at least as important an indicator of innovation success as the hard numbers. When deciding on innovation metrics, organisations should consider factors of their current culture that can inhibit or nurture innovation efforts, such as the level of risk acceptance in the organisation. Then, include these measures when setting KPIs for the innovation program, even if they are more subjective.
Defining good innovation metrics is not just about aligning to business goals and demonstrating value – innovation measurement has many jobs. It helps you determine resource allocation, prioritise activities, signal importance and tell the team what they can be held accountable for.
Setting a goal for customer involvement in innovation is a strong signal to the innovation team and stakeholders that the organisation is focusing on customer value in its innovation program. Defining KPIs for “good failures” demonstrates to the team that failure is acceptable and that it is OK to take risk.
Step 1: Innovation inputs
This category covers tangible, objective inputs such as employees assigned and funding. But these hard measures cannot stand alone. Large organisations sometimes kill innovation unconsciously because their steady-state, business-as-usual activities – those that drive the current money-making campaign – are generally hostile to doing things differently, even if doing so could achieve a superior outcome.
As a result, it is beneficial to anyone interested in innovating to create a culture that is open to innovation – one of the key themes of this year’s IT Symposium/Xpo. And since culture is important, it should be monitored and measured.
Cultural metrics measure the:
· Amount of creative space given to individuals (time)
· Individuals’ innovation inclination (interest)
· Level of predictability the organisation is comfortable giving up (risk)
· Ability of the organisation to change its ways in the name of innovation (learning)
An uncomplicated 1- to 10-point scale (where 10 is high and 1 is low) to assess these four dimensions can yield a good picture of the culture’s health. By nature, these measures are subjective, not objective, but that is OK. They act as surrogates early in the game for a more objective measurement of results.
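A minimal sketch of how these four subjective scores might be recorded and rolled up, assuming a simple unweighted average is an acceptable summary (the example values are invented):

```python
# Four cultural dimensions scored on a 1-10 scale (10 = high, 1 = low).
# Scores are subjective survey responses; the average is a rough health indicator.
culture_scores = {
    "time":     6,  # creative space given to individuals
    "interest": 7,  # individuals' innovation inclination
    "risk":     4,  # predictability the organisation is comfortable giving up
    "learning": 5,  # ability to change its ways in the name of innovation
}

culture_health = sum(culture_scores.values()) / len(culture_scores)
print(f"Culture health: {culture_health:.1f} / 10")  # 5.5 / 10 for these example values
```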
Step 2: Innovation process
The health of the innovation process is measured along with the team’s productivity. Innovation process metrics are a leading indicator of innovation outcomes. They take into consideration the number of ideas, the time taken for the ideas to get through the process and the number of innovations that resulted in improvement over the past 12-month period.
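A minimal sketch of how these process metrics might be computed from a hypothetical idea log (the dates and outcomes below are invented purely for illustration):

```python
from datetime import date

# Hypothetical idea log: (submitted, completed-or-None, resulted_in_improvement)
ideas = [
    (date(2020, 4, 1),  date(2020, 7, 15), True),
    (date(2020, 6, 10), date(2021, 1, 20), False),
    (date(2020, 9, 5),  None,              False),   # still in the pipeline
    (date(2021, 1, 12), date(2021, 3, 1),  True),
]

finished = [(s, c, ok) for s, c, ok in ideas if c is not None]
avg_cycle_days = sum((c - s).days for s, c, _ in finished) / len(finished)
improvements_last_12_months = sum(1 for _, c, ok in finished
                                  if ok and (date(2021, 3, 31) - c).days <= 365)

print(f"Ideas submitted: {len(ideas)}")
print(f"Average cycle time: {avg_cycle_days:.0f} days")
print(f"Improvements delivered in the last 12 months: {improvements_last_12_months}")
```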
Step 3: Innovation outputs
The most important measure is the quality of the innovation itself. It is, however, a lag indicator, and it can often take years to correlate the innovation project, its deployment and its ultimate benefit to the business.
Measurement criteria include the level of success of the delivered innovation after three years. If this isn’t possible, quantitative analysis should be done, such as the number of people who have adopted the innovation, financial benefits or the achievement of operational efficiencies.
It’s also important to include metrics that measure what was learned from failures. Many times, a failed outcome points to a leading indicator such as governance, team, culture or process, and provides insight into what can be changed for the next time.
Conclusion
Even though innovation measurement is challenging, do it anyway. Measure in three areas: innovation inputs, innovation process (a valuable leading indicator of outcome) and innovation outcomes (a lag indicator, but what pays for the whole innovation process).
When doing so, be prepared to mix objective hard numbers with subjective measures such as feeling and morale. Before starting the process, be clear about the reason for innovating, how the organisation will know that it has innovated and what will be done to change as a result of what has been measured.
An automation CoE is the essential foundation needed to successfully launch automation initiatives and, in doing so, to become a business that thrives within the ever-changing and challenging business environment.
By Vijay Kurkal, CEO, Resolve
Automation has been at the heart of technological progress as far back as the first industrial revolution. In 1733, John Kay’s revolutionary invention of the flying shuttle increased weaving speeds and changed manufacturing forever. The fourth industrial revolution, also known as Industry 4.0, began in 2010 and ushered in an era focused on digitisation and virtualisation fuelled by IT automation. In fact, digital transformation and IT automation are inextricably linked, and success with both requires organisations to embrace a digital-first mindset.
Businesses have turned to digital transformation over the last few decades to cope with a variety of challenges. The last twelve months in particular have seen an incredible acceleration in these initiatives in response to the challenges posed by the pandemic. IT teams faced a new set of demands as workforces became remote and digital channels became the lifeblood of our personal and professional lives.
Digital transformation has also been driven by astronomical growth in data volumes. According to a recent IDC report, data volumes increased from 2 zettabytes in 2010 to 59 zettabytes in 2020, with claims that this figure will reach 175 zettabytes by 2025. More data means more insights, better predictive capabilities and further business optimisation, yet these volumes of data are overwhelming and require new solutions, like automation and artificial intelligence (AI), to make sense of all that data. Businesses that invest in automation and AI will reap the rewards with rich analytics that drive agility and innovation, enabling them to not only endure constant global changes, but to thrive in our current environment.
Not surprisingly, strategic leaders are looking for ways to quickly advance their organisations’ automation maturity. It is no longer enough to simply automate repetitive tasks. Companies must be innovative in what and how they choose to automate. As organisations move from automating simple processes and achieving quick wins, to hyperautomation scenarios that can handle more complex use cases and advanced integrations, part of the challenge is properly assessing the full scope of automation’s capacity. With so many promising opportunities for optimisation, it is difficult to know where to begin. Introducing an automation Centre of Excellence (CoE) is not only an essential part of the journey, but the very foundation from which to start building.
What is an automation Centre of Excellence?
A Centre of Excellence brings together a cross-functional group of experts who are dedicated to deploying, scaling, and leveraging a specific technology (or group of technologies). Their sole focus is to improve business outcomes by leveraging that technology to its maximum potential, and ensuring that it is being utilised in an innovative and successful way, which also complies with corporate governance and compliance standards. To form an automation Centre of Excellence is to guarantee this technology is tested, analysed and brought into operation effectively and efficiently.
In practice, a CoE establishes the overarching automation strategy and framework, develops the operating model, and ensures the appropriate skills are in place to support the strategy. The very existence of the CoE also signifies an organisation’s dedication to automation and can foster the culture required for automation to succeed.
Scaling and optimising automation with metrics
Measuring, sharing, and communicating the value of automation is critical for the long-term success of your automation initiatives and is a core responsibility of the Centre of Excellence. Illustrating automation ROI secures the cross-functional support required to scale, from the C-suite members who control the purse strings to the practitioners who are in the automation trenches.
Adopting a data-driven approach also enables a CoE to identify where automation can be leveraged to provide the best business benefits, including areas for quick wins. Tracking and reporting on key metrics, such as hours saved, cost savings, and improvements in service delivery, can be done at a granular level for each automated process. These metrics offer visibility into which automation candidates should be prioritised next for the greatest gains. And, of course, aggregating these metrics across the entire automation ecosystem measures the overall impact of automation on business outcomes.
There are many additional KPIs that can be tracked as indicators of automation success, going well beyond costs and hours. Examples include mean time to repair (MTTR) problems, call volumes, the number of incidents or requests handled solely by automation, or the time required to complete service requests. Customer satisfaction and employee happiness are also important metrics for many organisations. An automation Centre of Excellence should develop a committee to determine which KPIs are most important based on business goals and dynamics, and then develop a baseline for each one in order to show demonstrable improvements post-automation.
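As a purely hypothetical illustration of the baseline-versus-post-automation comparison described above (every figure below, including the hourly cost, is invented):

```python
# Hypothetical KPI baselines and post-automation values for one process.
baseline = {"mttr_minutes": 95, "monthly_tickets_manual": 1200, "hours_per_ticket": 0.5}
post_automation = {"mttr_minutes": 40, "monthly_tickets_manual": 300, "hours_per_ticket": 0.5}

HOURLY_COST = 45.0  # assumed fully loaded cost of an engineer hour

hours_saved = (baseline["monthly_tickets_manual"] - post_automation["monthly_tickets_manual"]) \
              * baseline["hours_per_ticket"]
monthly_saving = hours_saved * HOURLY_COST
mttr_improvement = 1 - post_automation["mttr_minutes"] / baseline["mttr_minutes"]

print(f"Hours saved per month: {hours_saved:.0f}")
print(f"Estimated monthly saving: £{monthly_saving:,.0f}")
print(f"MTTR improvement: {mttr_improvement:.0%}")
```

The same per-process calculation, aggregated across the whole automation ecosystem, gives the overall business-impact figure mentioned above.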
Cultivating collaboration and culture
Fostering a culture of automation is critical for automation initiatives to succeed, and every CoE member should be prepared to take an active role in driving the cultural shift. The CoE bears responsibility for creating an environment where new ideas can flourish and that prioritises innovation, cultivates collaboration, inspires participation, and offers employees new opportunities.
Within a CoE, collaboration and unity are key tenets. Siloed thinking and competitive departmental rivalries must be avoided at all costs; working together towards a positive result should be the central focus. From here, it is possible to achieve a company-wide sense of collaborative culture and ensure there are enthusiastic automation ambassadors within every department.
An automation CoE is the essential foundation needed to successfully launch automation initiatives and, in doing so, to become a business that thrives within the ever-changing and challenging business environment. Gartner predicts that 69% of routine work currently completed by managers will be fully automated by 2024, making it clear that automation is here to stay and will continue to play an integral role in digital transformation. Do not get left behind.
Living through ‘unprecedented times’ is nothing new for organisations. Businesses have always had to deal with highs and lows and adapt accordingly. The pandemic has shown that the flexibility to maximise or minimise the impact of planned or unplanned opportunities or threats is vital.
By Chris Huggett, Senior Vice President, Europe & India.
Hybrid working and hybrid socialising will be in place for the foreseeable future, and many predict that a hybrid working model will remain even when life goes back to ‘normal’. Hybrid IT, on the other hand, isn’t new, but adoption, which has been increasing for the last few years, has seen a boost as companies look for ways to safeguard against future disruption.
Put simply, Hybrid IT is the mixture of IT infrastructure platforms – legacy on-premise and private/public hybrid clouds – that an enterprise uses to satisfy its application workload and data needs. It is the perfect solution for companies that demand agility, scalability and an OPEX cost model of the cloud, but want consistent performance and control of security, compliance, and costs long term. Hybrid IT is about deciding which workloads should be deployed to the cloud, and which should run on a company’s infrastructure.
So how do organisations decide what goes where?
Gaps in cloud compliance
The move towards greater use of the cloud has been accompanied by growing concerns over the management and protection of data. Cyber threats continue to evolve and accelerate, and the skills required to defend against them are becoming more complex.
Regulations such as the GDPR bring additional rights and safeguards for individuals, but the move towards cloud IT could expose a compliance gap – especially for organisations that handle personal data. Organisations that host their data on-premise in local storage systems should be able to identify the location of most, hopefully all, of their data quite quickly, whereas those that host data elsewhere could have concerns about not knowing where that data is stored.
However, one of the challenges with public cloud adoption is the skills required to build and maintain it. Equally, do organisations have the skills to ensure that data stored on-premise is secure and compliant? For many organisations, meeting compliance and regulatory requirements can be easier to achieve using private clouds. Just because organisations have outsourced their data storage, however, it doesn’t mean they can outsource responsibility for compliance. Organisations must ensure third-party cloud providers meet current standards and demonstrate due diligence. Responsibility for complying with laws such as the GDPR – and the penalties for breaches – falls squarely on the organisation, so assessing any gaps in compliance is key.
Performance enhancing infrastructure, but at what cost?
As an organisation considers infrastructure options, it doesn’t need to choose only one model. The best approach is often a combination of clouds and infrastructures to best meet the requirements of the business.
One approach, often referred to as ‘own the base and rent the spike’, addresses cost and performance requirements. A common scenario for organisations is spikes in demand, such as sales events that drive increased traffic but still require consistent performance. A public cloud Infrastructure-as-a-Service (IaaS) environment provides the agility and scalability to rapidly accommodate these demand spikes, traditionally by scaling out a web or application tier during the spike and contracting outside of such periods.
Public cloud, as well as some multi-tenant private cloud offerings, is usually charged on a pay-per-use model, meaning an organisation pays for the infrastructure it rents only for the time it is being rented. Renting IaaS capacity can be very cost-efficient compared with simply purchasing more hardware, as you are not investing in capacity that sits unused outside of demand spikes.
Owning the base, on the other hand, is about calculating the capacity needed to securely support the steady state outside of demand spikes, and procuring that capacity and the associated hosting – buying servers, networking and storage to be hosted on-premise or in a co-located data centre. This is a relatively simple exercise for existing applications, but when there are new applications to be deployed, how much capacity does an organisation need? One simple way of addressing this is to rent capacity in the cloud, evaluate the utilisation and performance needs of the application, then procure the resulting requirements and deploy them on company-owned infrastructure. This approach can very quickly identify the true capacity and performance needs prior to committing to a large capital outlay. Not applying this approach, however, can result in significant overprovisioning: analysis of purchased capacity and performance, compared with what is actually used, often shows significant levels of over-provision, which is all wasted investment. Buying the base provides organisations with the peace of mind that they are meeting their predictable needs, while renting the spike accommodates the unpredictable.
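The trade-off can be sketched with some entirely hypothetical numbers, comparing the cost of owning enough capacity for the peak against owning the base and renting only the spike from an IaaS provider:

```python
# Hypothetical monthly capacity profile (units of capacity needed per month).
demand = [100, 100, 110, 100, 105, 100, 100, 100, 180, 100, 100, 160]  # two seasonal spikes

OWNED_COST_PER_UNIT_MONTH = 8.0    # amortised cost of owned capacity (paid every month)
RENTED_COST_PER_UNIT_MONTH = 20.0  # IaaS pay-per-use rate (paid only when used)

base = 110  # steady-state capacity the organisation chooses to own

# Option 1: own enough for the peak -- pay for idle capacity most of the year.
own_peak_cost = max(demand) * OWNED_COST_PER_UNIT_MONTH * len(demand)

# Option 2: own the base, rent only the spike above it.
own_base_cost = base * OWNED_COST_PER_UNIT_MONTH * len(demand)
rented_spike_cost = sum(max(0, d - base) for d in demand) * RENTED_COST_PER_UNIT_MONTH
hybrid_cost = own_base_cost + rented_spike_cost

print(f"Own the peak:          {own_peak_cost:,.0f}")
print(f"Own base + rent spike: {hybrid_cost:,.0f}")
```

With these invented figures the hybrid option comes out cheaper even though the rented units cost more per month, precisely because the expensive capacity is only paid for during the spikes.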
When adopting clouds, the on-premise IT footprint is reduced. However, not all workloads are suitable for cloud deployment, meaning there are often legacy systems that need accommodating, and maintaining on-premise hosting for a smaller footprint does not always make economic sense. Many organisations recognise the benefits of using a co-location facility to meet the needs of legacy workloads, because such facilities offer advanced hosting and support capabilities, with resilient power, cooling and networking that would simply be overkill to deliver on-premise for a smaller set of workloads. Many organisations can then repurpose on-premise hosting environments into more productive space.
The best of both worlds
The benefit of the ‘own the base, rent the spike’ scenario is that it ensures maximum cost-efficiency, performance, security, and reliability, with the least risk of lost revenues and customers. In essence, it’s the best of both worlds, which, of course, is what hybrid IT is all about.
The success of a business isn’t just about one application, one workload or one environment. It’s the ability for all applications and workloads to reliably work together within or across multiple environments and to always be available. Today, that’s what keeps a business moving forward.
As the Trade Association for the Data Centre sector, the DCA understands that it is imperative that key issues affecting the sector have a point of focus. The DCA’s SIGs (Special Interest Groups) and Working Groups regularly come together over shared interests to discuss issues, resolve problems and make recommendations. Outcomes can result in best practice guides, collaboration between group members, participation in research projects and much more. Members find these groups are a great way to ensure their opinions and views are considered in a positive and cooperative environment.
The DCA currently facilitates nine Special Interest or Working Groups. DCA members can join any of the groups and contribute.
The purpose of the DCA Colocation Working Group is to provide a unified voice for the UK Plc data centre colocation and Data Centre Provider community. The Group is chaired by Dan Scarbrough, with Leon O'Neill acting as Deputy Chair.
The Group’s objectives:
To request to join this group, please contact the DCA: mss@dca-global.org
Ashish Moondra – Senior Product Manager, Power, Electronics & Software at Chatsworth Products (CPI)
As the transition from the Information Age to the Age of Artificial Intelligence places heightened significance on connectivity, cloud service providers and the IT industry work around the clock to sustain the life most of us know today: high-speed internet, mobile connectivity, self-driving cars and machine-to-machine (M2M) learning. A recent Cisco Annual Internet Report confirms this reality.
By 2023, for example, nearly two-thirds of the global population is expected to have Internet access – that is about 5.3 billion users. Meanwhile, the number of devices connected to IP networks is projected to be more than three times the global population.
Within the data centre space, the colocation market may see the most growth, with an estimated CAGR of almost 11% from 2020 to 2025. Faster time to market – as opposed to undertaking an on-premise data centre project that may take months to complete – is the primary reason for the attention this segment is receiving. Needless to say, delays in bringing up a new customer within a multitenant environment translate directly into lost revenue. Therefore, it is no surprise that colocation providers are challenged to scale up with solutions that are quick to deploy, manage and service.
The following are two key points for colocation vendors to consider when looking to quickly get new customers up and running.
Vendor Selection
Within colocation environments, end customer requirements generally vary based on budgets, functionality required and the IT equipment that will be housed within the cabinets. Service-level agreements (SLAs) require colocation facilities to be able to quickly provide the infrastructure equipment that meets the needs of their end customer. Partnering with equipment vendors that have local manufacturing capabilities and a build-to-order model provides colocation vendors with the ability to quickly procure products aligned with end customer requirements. In-region manufacturers typically have a wide breadth of standard solutions and the ability to create and deliver custom solutions in a short timeframe.
While evaluating equipment vendors for their ability to deliver products in short lead times, it is critical that data centre professionals ask questions related to location of the supply chain as well as their risk mitigation plans. With the booming demand for more things to be connected to the Internet, some electronic components as well as populated, printed circuit board assemblies can have lead times spanning several months.
Equipment manufacturers in North America that rely on in-region sources for long lead time components will have a better ability to scale quickly to meet demands of larger projects.
The common denominator within the data centre white space is the equipment cabinet. Dealing with vendors who can preinstall all infrastructure solutions within the cabinet – including power distribution equipment, cable management solutions, access control and environmental monitoring per the end customer’s needs – will save colocation vendors significant time, effort and money. Additionally, preconfigured solutions that are tested together before they leave the factory minimize any surprises that could otherwise delay schedules when multivendor equipment is received separately. Finally, consider that preinstalled solutions require minimal packaging, helping reduce waste and the time required to deal with it.
Product Considerations
To allow remote manageability of off-premises equipment, colocation vendors provide intelligent hardware solutions that allow monitoring and control of power and environmental parameters within the cabinet. Growing regulatory and security demands also require end customers to control physical access to the cabinet and maintain an audit log of all access attempts.
While these solutions provide significant advantages to the end customer, the challenge is to deploy them speedily over the network and quickly configure them to be fully operational. Intelligent power distribution units (PDUs) that also integrate environmental monitoring and access control provide a unified solution that requires just a single network connection. The speed of deployment can be further enhanced by utilizing intelligent PDUs with Secure Array IP Consolidation, which allows up to 48 intelligent PDUs to share one primary IP address, with an alternate address for failover capability. This setup allows the white space infrastructure for complete rows of cabinets to be managed by one or two ports on a network switch. The alternative, inefficient approach would be to first install, wire and configure extra network switches purely for infrastructure monitoring, connect them to every monitored device and then take a crash cart to each device to perform its IP setup.
Once the PDUs are deployed on the network, the next step that can take a considerable amount of time is the configuration of every monitored device, including network access, threshold and notification settings. In this scenario, choose PDUs with bulk configuration capabilities over the network. However, the preferences of end customers for mass configuration can differ.
For example, while a data centre operations group may prefer bulk configuration through a data centre infrastructure management (DCIM) software solution, network professionals or developers may prefer automated configuration using a Command Line Interface (CLI) or Application Programming Interface (API). This means colocation vendors that deal with a multitude of end customers will be ahead of the competition if they provide a solution that supports most types of bulk configuration methods. All these capabilities not only make initial deployment and configuration easier, but also simplify ongoing management.
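As a rough illustration of what API-driven bulk configuration might look like – the hosts, endpoint, payload and token below are purely hypothetical, and a real deployment would use the PDU or DCIM vendor’s documented interface – here is a short Python sketch that pushes the same threshold and notification settings to a list of PDUs:

```python
import requests  # assumes the requests library is installed

# Entirely hypothetical hosts, endpoint and payload; substitute the vendor's documented API.
PDU_HOSTS = ["10.0.10.11", "10.0.10.12", "10.0.10.13"]
API_TOKEN = "replace-with-real-token"

COMMON_SETTINGS = {
    "thresholds": {"inlet_current_amps": {"warning": 12, "critical": 14}},
    "notifications": {"smtp_server": "mail.example.com", "recipients": ["dc-ops@example.com"]},
}

def configure(host: str) -> bool:
    """Push the shared settings to one PDU; returns True on success."""
    resp = requests.put(
        f"https://{host}/api/config",            # hypothetical endpoint
        json=COMMON_SETTINGS,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
        verify=False,                             # many appliances ship with self-signed certs
    )
    return resp.ok

if __name__ == "__main__":
    for host in PDU_HOSTS:
        print(host, "ok" if configure(host) else "failed")
```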
Another important and usually overlooked aspect to consider is the serviceability of the products. The most common maintenance performed on intelligent PDUs is timely firmware upgrades. The products chosen should allow these upgrades to be easily performed over the network or through USB ports on the equipment. A field-replaceable controller on the unit also allows for seamless serviceability and upgradability. These upgrades should be capable of being performed while the units continue to provide basic power distribution to connected equipment. Finally, intelligent products such as PDUs should include warranties with advance replacement coverage as the norm rather than the exception.
With data consumption growing faster than ever, speed of deployment and delivery is the most pressing challenge for colocation providers. Those who consider the two recommendations above will gain a competitive edge that will ultimately allow them to grow their top-line revenue faster and stay ahead in the race.
Ashish Moondra has a total of 20 years of experience developing, managing and selling rack power distribution, uninterruptible power supply (UPS), energy storage and Data Centre Infrastructure Management (DCIM) solutions. Ashish has previously worked with American Power Conversion, Emerson Network Power and Active Power, and has been an expert speaker at various data centre forums.
Anna Nicholls, Head of Marketing, TeleData
When you’re choosing a colocation provider, you need to think about a lot more than just the location. Sure, location is important. You’ll need to be able to access the data centre fairly regularly, so it’s helpful if your provider is within commuting distance for your technical engineers - although a decent data centre provider should offer a remote hands service, making location less of a deal breaker - but there are other points to consider when you make the decision on which colocation provider is right for you.
What is colocation?
Colocation (also known as colo) is when you put your equipment - servers, storage, switches, software - into somebody else’s data centre. You provide the kit, they provide the space, power, rack and connectivity. That’s usually where the provider’s involvement ends. Upgrades, monitoring and backups will be handled by you and be the responsibility of your IT team, while the data centre provider concentrates on keeping the lights on, and the buildings secure and connected. Basically, you’re renting space in a data centre.
Why colocation?
So why would a business choose colocation? What are the benefits? Well, powering and cooling servers is expensive. With colo, you’re using the data centre’s power at a much lower cost due to economies of scale. Data centres give you access to a wide range of connectivity options, offering both resilience and competitive choice, so ultimately you’ll have increased availability compared with an on-premise set-up. You still maintain complete control of your hardware and network, but with a reduced TCO (Total Cost of Ownership) compared with on-premise.
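As a back-of-the-envelope illustration of that TCO comparison – every figure below is invented purely for the sake of the arithmetic:

```python
# Hypothetical annual costs for hosting the same rack of equipment (GBP).
on_premise = {
    "power_and_cooling": 18000,
    "space_and_facilities": 9000,
    "connectivity": 7000,
    "physical_security": 5000,
}
colocation = {
    "rack_rental_inc_power": 15000,  # economies of scale on power and cooling
    "connectivity": 4000,            # carrier-neutral options tend to be cheaper
    "remote_hands": 1500,
}

print(f"On-premise TCO: £{sum(on_premise.values()):,} per year")
print(f"Colocation TCO: £{sum(colocation.values()):,} per year")
```

The real numbers will vary enormously by workload and region; the point is simply to compare like-for-like line items before deciding.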
So other than location, what else do you need to think about when choosing a colocation data centre?
Access
We’ve talked about access from a location perspective, but check whether the data centre will be accessible to your engineers at the times they need it. Will they need to make appointments in advance, will access be available out of hours - evenings, overnight and weekends - without an appointment in emergency situations? What about bank holidays? Are there any restrictions on access which might impact your team’s ability to maintain your network?
Security
In a world of increasing threats to digital data, this is probably one of the biggest decision points when choosing a colocation provider. Your colo provider will be responsible for keeping your data physically secure, so it’s critical that whichever data centre you choose takes appropriate measures to protect itself. Look for a facility that goes above and beyond. From the obvious perimeter fences, access cards and security guards, to the higher levels of security and access control such as mantraps, virtual tripwires, SOCs (Security and Operations Control Centres) and links to police control centres. If compliance is a requirement, check that your data centre provider is ISO accredited.
Connectivity
Connectivity is king, and a data centre is only as good as its connectivity. Some data centres are carrier neutral, which will give you both choice and resilience. TeleData is carrier neutral, with multiple carriers offering diverse points of entry plus dark fibre availability. We also offer direct connections to major Manchester and London data centres, giving customers a broad range of options and a wide reaching, robust connectivity network.
Resilience
We’ll start by talking about power - but resilience covers a wide range of eventualities which need to be considered. It’s the data centre provider’s job to keep the lights on, so you need to make sure you’re happy with their procedures for keeping the facility running in the event of a power outage. Power outages should never take a data centre down, but they do happen, so what processes are in place to make sure the cogs keep whirring? Ask about UPS, backup generators and battery storage options, and be absolutely certain that you’re confident your colocation provider will not suffer an unexpected power down.
The same goes for other events and disasters - floods, fires, attempted break-ins and anything in between. What has your provider done to pre-empt these situations and therefore, provide contingencies in case the worst happens?
Choosing a colocation provider is a big decision for any business, and if you’re going to be tied into lengthy contracts, you need to make sure your decision is a good one. Not all data centres are created equal, but what’s important is that the one you choose meets the needs of you and your customers, hits your SLAs and offers the right level of resilience, at the right price.