It’s difficult to know where to begin, or what to say that won’t sound insignificant or inconsequential in the current business environment. I’m fairly certain that even the most expert risk assessment professionals, combined with the best business continuity brains in the world, would have struggled to imagine the global chaos which seems to have engulfed us all. No matter how well thought out the contingency plans are, if many businesses are closed, and many, many people are not working, then it’s difficult to see how many organisations can continue to trade effectively.
Of course the banks (yes, those banks which brought the world to its knees in 2008) smell an opportunity to ‘help out’, although, judging by the terms on which they are prepared to hand out money now, many do not appear to remember the debt of gratitude they owe the millions and millions of taxpayers who bailed them out a decade or so ago. And anyone who does most of their business in the autumn and beyond may well be rubbing their hands with glee, even if cash flow is a struggle right now. And the food sector must think it’s Christmas come early, along with the makers of toilet rolls and hand sanitisers!
But for the vast majority of individuals and businesses, there’s a nasty, deep precipice on which we’re all uncomfortably balanced, aka the final scene of the Italian Job.
If there is any kind of silver lining to be found in these dark days, then let’s hope that the realisation that:
a) it is actually possible for many of us to work from home will lead to a massive change in work culture, giving employees more flexibility and giving the environment a break as far less work-related travel takes place;
b) we can all exist without having to ‘consume’ 24x7x365 will lead to a slightly less selfish society, one where community and people matter rather more than possessions.
Of course, I could take the alternative view that, forced to stay at home, we’re becoming even more used to absolutely everything being delivered to our door, so there’s no need to go out any more: we can live, work and enjoy our leisure without engaging with the outside world, except to ‘fight’ for vital supplies.
Usually, I’m a glass half-empty kind of person, but on this one I’m optimistic that, no matter how bad it gets for individuals, businesses and whole countries, there might just be a recognition as to what truly matters in life.
One thing is for sure: recovery from the pandemic of 2020 will take rather longer than picking up the pieces after the financial crash of 2008 did.
Please note: Pretty much all of the articles in this issue were written ‘pre-coronavirus’, so bear that in mind when it comes to their content.
The choice of Healthcare as a focus for some articles in this issue was made back in the mists of 2019 – so, we were planning to cover this industry in any case – we’re not jumping on the bandwagon!
New research from IDC and Micro Focus forecasts that global business is approaching a digital transformation (DX) tipping point.
By 2023:
While DX has at times been seen as something of a buzzword for IT, it is now regarded by most organisations as an essential business mandate and will soon become the default operating mode. With the data being created globally growing at a CAGR of 25.8%, businesses pursuing DX are developing systems to manage and exploit the opportunity: modern systems of record to ensure that data is accessible and AI/ML to efficiently utilise it.
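As a rough illustration (our arithmetic, not a figure from the research), a 25.8% CAGR implies that global data volumes roughly double every three years: \( t_{\text{double}} = \ln 2 / \ln(1.258) \approx 0.693 / 0.230 \approx 3 \) years.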
The new competitive advantage identified by this research is found in turning data into intelligence through a digital transformation platform which underpins all business systems. This stands in contrast to existing DX approaches which address specific functionalities. By 2025, it is expected that 80% of organisations will measure customer outcomes in terms of their interaction with a whole ecosystem, taking the strength of information flow through the network into account.
The key differentiator for businesses in this landscape, according to Joe Garber, Global Head of Strategy & Solutions at Micro Focus, will be ‘digital determination’: the capacity and dedication to see through end-to-end DX projects. “Although core business systems are the lifeblood of the organisation, only five percent of the organisations we studied have built enduring strategies of DX success. Rather than adopting a rip-and-replace strategy that can yield unacceptable risk, we are seeing that most businesses need to instead pursue an assertive modernisation strategy.”
While DX is rapidly becoming the new norm, only 5.1% of the organisations studied had already realised sustainable performance excellence competencies around new digital technologies and business models. “Businesses cannot afford the risky and time-consuming proposition of digitally transforming from scratch,” continued Garber. “They need to run and transform at the same time. We expect that this will not be a transition stage, but the way that IT will continue to operate – and getting to that point demands determination.”
According to a survey from global research and thought leadership organisation Leading Edge Forum (LEF) and DXC Technology, companies have overdue homework to complete if digital transformation is to succeed.
The study, “Connecting Digital Islands: Bridging the Business Transformation Gap,” explores where global businesses are in their transformations, and what benefits, challenges and opportunities the next decade may bring. While 79% of respondents say that they’re leveraging technology to become market leaders, two-thirds (66%) say their mission-critical systems are so complex they are wary of changing them. A further 62% responded that they lack a common set of tools and platforms across the organization, resulting in a collection of “digital islands”: units working with the right technologies but independently of each other.
The survey also found that over half of the respondents (52%) believe staff are not sufficiently using analytics to make decisions based on data insights, despite 79% claiming they are effectively using technology to grow, compete, and drive market leadership. This means insights are being missed, given 77% say the collection and use of data is built into how they compete and operate.
While enterprises have been focused on data- and technology-driven transformation for some time, survey respondents say they must get the “people and culture” aspects right for effective, long-term change in the 2020s. Significantly, 65% of the business leaders surveyed reported that employee reluctance to change work habits is a barrier to technology-enabled organizational change, while only 14% rank improving employee engagement and empowerment as their top internal priority.
More notably, 70% say more effective leadership is needed across the organization.
In the survey, respondents also prioritised the emerging technologies they believe will most positively impact their organisations:
● High-speed 5G networks (60%);
● Artificial intelligence and machine learning (59%);
● Sensors and the Internet of Things (55%);
● Robots and robotic process automation (54%);
● Virtual and augmented reality (53%);
● Voice interfaces such as Alexa, Siri and Google Assistant (52%).
Commenting on the research findings, Richard Davies, VP, Strategic Advisory, DXC Technology and MD at LEF, said: “This is a strong reminder that getting the right combination of people, culture and technology is essential for making effective, long-term change. Employees will also need to embrace technology-infused work cultures more strongly, and leaders must have the same priority. To close the gap, companies have a lot of work to do — work that should have happened some time ago. This ranges from building effective leadership and internetworked teams, to modernizing IT and moving up the stack for data-driven insights, to establishing an ecosystem of partners and suppliers and instilling a culture of collaboration, learning and agility.”
The full report concludes by offering enterprises a five-step plan to help create the conditions for change if they want the 2020s to be the decade in which the cultural, technological and market barriers finally come down:
Organisations across Europe are facing a major skills challenge caused by digital transformation, with many struggling to keep pace with learning and development (L&D) needs, according to research from Skillsoft. Carried out in the UK, France and Germany, the research found that reskilling in the face of changing and increasingly digital working environments is the biggest single issue for L&D professionals across all three countries (42 percent of respondents on average).
However, only 22 percent of respondents from across the three countries said their organisations are fully prepared to provide the new skills required by digital transformation. The UK sits below this average, with just 14 percent of organisations saying they have fully prepared employees with new skills, with France only slightly higher than the average, reporting 33 percent of organisations are fully prepared. France had the most respondents saying they are doing nothing to build digital transformation skills (20 percent), compared to Germany at three percent and the UK at one percent. In each country, most respondents believe their organisations need to do more to keep pace with digital transformation.
Despite the overwhelming response that organisations are not preparing employees for digital transformation, only half of the organisations in each country have increased investment in skills to keep pace with digital transformation (UK – 56 percent, France – 54 percent, Germany – 56 percent).
“It’s clear that across these three major territories, digital transformation is severely testing the planning, implementation and spending strategies for L&D professionals. Despite a clear trend of increasing and targeted investment, the industry faces a challenge to keep pace with the changes that digital transformation is bringing,” explained Steve Wainwright, Managing Director EMEA at Skillsoft. “It’s vital, therefore, that we also look to technology to help build successful L&D strategies so we can all reap the benefits of this exciting era of disruptive change in the way we learn, widen our skillsets and give employees the greatest opportunity to develop,” concluded Wainwright.
Additional Research Highlights
Customer experience suffers; business opportunities missed as IT teams firefight unplanned work, finds PagerDuty research.
PagerDuty has released new research suggesting that EMEA IT professionals are stressed by growth in time-critical, unplanned work: for 75% of respondents, it has increased by more than 100 hours per person, per year. The increase is impacting their ability to deliver on business priorities and assure a positive experience for customers.
In the EMEA State of Unplanned Work Report 2020, 53% of respondents in EMEA said unplanned work caused stress and anxiety for them personally (versus 61% in Asia Pacific (APAC) and a massive 80% in North America (NA)).
Disruptive, unplanned work redirects IT resources away from key responsibilities and into fire-fighting mode, making it harder for teams to deliver on business priorities and take advantage of opportunities to innovate, according to 81% of EMEA respondents (compared to 81% in APAC and 86% in NA). Adding to the pressure, 73% of EMEA respondents said they typically find out about major issues from dissatisfied customers rather than their own systems (70% in APAC and just 47% in NA) so are on the back foot before they even start resolution.
Steve Barrett, Vice President, EMEA at PagerDuty, commented, “In an always-on world, customers expect companies to deliver a perfect digital experience every time. Anything less can severely damage the bottom line. Yet in increasingly complex IT environments, it can be difficult for responders to cut through the noise and get to the issues that matter, fast. Machine learning and automation can help bring together the right people with the right information in real-time, enabling resolution in seconds or minutes rather than hours.”
Unfortunately, for many companies in EMEA, opportunities to mitigate the impacts of unplanned work through automation are being missed. 81% of EMEA respondents said their organisations have little or no automation for IT issue resolution (77% in APAC and 92% in NA).
29% of EMEA respondents said they had considered leaving their job as a result of unplanned work, creating recruitment and retention challenges for employers (31% in APAC and 38% in NA).
Barrett continued, “Companies can help improve the satisfaction and wellbeing of IT professionals by altering how, when and why individual responders are notified of IT incidents and benchmarking IT team health to ensure the effectiveness of their wellbeing strategies.”
Other EMEA findings include:
EMEA incident recovery plans are least likely to be fit for purpose
70% of respondents in EMEA said they had experienced issues that their response plans failed to account for (versus 68% in APAC and just 53% in NA).
EMEA is the worst communicator when it comes to major IT incidents
While most companies in EMEA are quick to engage IT responders in the event of a major incident, many fail to keep key stakeholders informed:
• Only 50% notify all those on relevant teams (54% in APAC and 69% in NA)
• Only 31% notify customers (43% in APAC and 47% in NA)
• Only 36% notify employees (40% in APAC and 53% in NA)
EMEA teams dismiss postmortems
EMEA is also the least likely region to maintain a constructive dialogue after the event. Only 40% of respondents in EMEA said they use incident postmortems to continuously hone their incident response (compared with 47% in APAC and 67% in NA).
52% of remote workers say they’re less likely to travel, and respondents see themselves as more productive (52%) and efficient (48%) when working remotely.
GitLab, the single application for the DevOps lifecycle and the world’s largest all-remote company, has published findings from its inaugural Remote Work Report, which surveyed 3,000 professionals from the United States, Canada, United Kingdom and Australia who work remotely or have the option to work remotely. The survey highlights the ever-increasing value employees and employers place on remote work as an alternative to traditional, in-office practices. In an era with increasing recognition and understanding that mental health and physical health directly impact employee performance, it’s undeniable that the future of work will be remote.
"We believe all-remote is the future of work, as it delivers extraordinary benefits to businesses and employees,” said Sid Sijbrandij, CEO and co-founder of GitLab. “For companies, there are unique operational efficiencies, huge cost savings on office space and a broader pool of job applicants. For employees, this structure enables off-peak lifestyles, family-friendly flexible schedules, and improved work/life harmony. We believe that a world with more all-remote companies will be a more prosperous one, with opportunity more equally distributed."
“The reality is that almost every company is already remote, whether you're working across floors or across continents,” said Darren Murph, Head of Remote at GitLab. “GitLab believes all-remote is the purest form of remote, and we're working to empower other companies to implement great remote practices. This year's report breaks down preconceived notions about remote and sheds light on its power to plant opportunities in underserved regions, make communities less transitory, and create authentically diverse teams.”
Contrary to popular belief, remote employees aren’t all traveling nomads. In fact, 52% of survey respondents reported they traveled less as remote employees. Those who view the lack of a commute as a top benefit (38%) spend the time earned back with family (43%), working (35%), resting (36%) and exercising (34%). Additionally, employees find themselves overall to be more productive (52%) and efficient (48%) when working remotely.
Fourteen percent of remote workers surveyed reported they have a disability or chronic illness. Eighty-three percent of those cite remote work as a key factor in their ability to contribute to the workforce. Remote work also empowers all employees to move their organization forward. Fifty-six percent of the respondents said they feel everyone in their company can contribute to process, values, and company direction, and 50% say they default to shared documents and rely on meetings only as a last resort. Remote work levels the playing field by fostering a better sense of work-life balance and creating opportunities for everyone to contribute in the workplace.
Remote work enables employees to focus on their families without having to sacrifice their careers. Thirty-four percent of those surveyed say the ability to care for family is a top benefit of remote work, while 52% cite schedule flexibility and 38% say the lack of commute is a major benefit. Additionally, 43% reported that the absence of a commute freed up time to spend quality time with their families. Fifty-five percent of respondents have children under 18.
All-remote is the purest form of remote work, with each team member on a level playing field. Forty-three percent of remote workers surveyed feel that it is important to work for a company where all employees are remote. Currently, more than 1 in 4 respondents belong to an all-remote organization, with no offices, embracing asynchronous workflows as each employee works in their own native time zone. A further 12% work all-remote, with each employee synced to a company-mandated time zone.
The benefits of remote work are only going to increase, and survey respondents agree. Eighty-six percent believe remote work is the future, but it’s also the present, as evidenced by 84% who say they are already able to accomplish all of their tasks remotely. As technology continues to improve how we communicate and how businesses operate across the globe, the need for brick-and-mortar offices and consistent, on-site attendance will continue to decrease. Remote work is here to stay, and use cases for it will only multiply.
According to new research from Globalization Partners Inc., more than 90 percent of employees who work for a global organisation describe their companies as diverse. However, a lack of understanding by the organisations themselves around how to manage this increasingly disparate and diverse workforce means that three out of ten respondents don’t feel a sense of inclusion or belonging. This negatively impacts employee engagement, trust and happiness, and increases staff turnover.
Globalization Partners’ 2020 global employee survey, Examining the Impact of Diversity on Distributed Global Teams, asked 1,725 randomly selected employees about their experience as distributed global team members. The research also uncovered that global teams are struggling to make communications work for them, with 46 percent of employees still relying most frequently on email, but only 31 percent finding it effective.
Other key findings include:
● Two thirds of companies are finding it challenging to align with and be sensitive to local culture and communication styles, especially when a company spans multiple geographies.
● Organisations that embrace multilingualism are seeing better team results across the board.
● Nearly nine out of ten employees (89 percent) say their company would benefit from the assistance of outside experts to help with cultural training and cross-organisation education.
“Gartner predicts that by the end of 2022, 75 percent of organisations with frontline decision-making teams in distributed geographies and with diverse mindsets will exceed their financial targets. As a company with offices in every corner of the globe, I’ve seen the incredible business advantage in hiring and building diverse teams,” said Nicole Sahin, CEO of Globalization Partners. “I also know that now more than ever, in these trying times it is critical to have the right communication tools in place to ensure success. The employee experience matters now more than ever, and collaboration tools like Slack can make a positive impact on the ability for teams to collaborate on projects and provide a social connection for employees around the world.”
Nearly half (45 percent) say their cloud optimization is still at the ‘initial’ or ‘opportunistic’ stage.
Rising technology debt, low levels of cloud maturity and lack of in-house skills are stifling global competitiveness, innovation and speed to market, according to Avanade. A new global study has found organizations could earn an extra $1B per year in revenue and reduce operational costs by more than 11 percent through adopting a holistic approach to cloud technology, apps and modern engineering techniques – what Avanade calls being ‘ready by design.’
Avanade’s survey of more than 1,600 C-level executives revealed that just 12 percent of respondents describe their cloud capability as optimized and 45 percent say they’re still at the ‘initial’ or ‘opportunistic’ stage of their journey. Only 27 percent believe that their cloud strategy will be fully optimized by 2023. Technology debt, the costs and challenges of maintaining and integrating legacy technology, was also predicted to increase over the same period, from 17 percent to 19 percent. Around three quarters of business leaders say this drain on budgets, and the associated security concerns over legacy platforms, are impacting speed to market (76 percent), innovation (74 percent) and their ability to retain technical professionals (74 percent).
“We know that digital disruption is pervasively impacting all industries, and it poses a very real threat, even to well established businesses,” says Adam Wengert, global applications and infrastructure lead, Avanade. “Cloud maturity is creating a two-tier business landscape with digital laggards finding themselves stagnating while their digital-native competitors are steaming ahead. The point of differentiation comes from the natives’ ability to pivot towards opportunity and away from threats within days, not months and years, thanks to their cloud-based architecture.”
One of the main causes of the debt, according to the research, is that only one fifth of applications have been rebuilt for the cloud. However, with 88 percent of respondents agreeing that innovative applications have a direct impact on growth and 94 percent identifying increasing sales and revenue as a priority over the next 12 months, it’s not surprising to see a consensus from the study around the value of cloud-based custom-built applications. Nearly 9 in 10 (89 percent) executives acknowledged that human-centred apps, those that place human needs as a higher priority, are vital for growing or defending an existing market position.
“For organizations to be successful they need to be able to move quickly to deliver customer value that can increase sales and revenue. Mobile applications are just one example of where digital natives are using human-centred apps to disrupt established markets. Mobile apps present a powerful channel to bring businesses closer to their customers, providing a more personalized service and facilitating their journeys. That is just the beginning though. Successful businesses will be those that are not afraid to experiment, that truly drive enterprise innovation and get new ideas into market faster, and more often than their competitors,” explains Wengert.
However, one obstacle that hinders an organization’s ability to seize opportunities and experiment is the need to free up talent to use proven modern software product engineering approaches. Around half (52 percent) of respondents cited people and skills issues as the biggest obstacle to adopting modern software product engineering practices in-house.
“Agile, DevOps and other modern approaches will have a significant impact on improving reaction times and speed to market, making an organization more product and customer centric. However, with the lack of in-house skills, enterprises can find themselves further paralysed, despite recognizing the need to be ‘ready by design.’ To accelerate digital maturity, enterprises need to take a holistic approach – keeping apps, cloud and people at the forefront of their business strategy; otherwise the outlook for cloud maturity and acceleration – and indeed enterprises themselves – is not going to improve,” concludes Wengert.
A survey of 400 data professionals across the US and Europe by Dataiku has revealed significant challenges in building trust in enterprise AI projects, as well as differing perceptions of AI across roles within the organisation.
Only 52 per cent of respondents said that their organisations have processes in place to ensure data projects are built using quality, trusted data. With topics like trust, explainability, responsibility, and ethics at the forefront of discussions in AI uptake, Dataiku asked respondents about how their organisations are managing these challenges.
When asked whether their organisation had processes in place to ensure data science, machine learning and AI are leveraged responsibly and ethically, 57 per cent said no or didn’t know, although 35 per cent said they were working on it.
The perception of the impact of AI on people’s roles seems to differ greatly from CEO to non-management, suggesting AI projects struggle with inclusivity within organisations. Managers and C-suite executives were significantly more likely to respond that AI would “completely” (i.e., a 5 on the scale) change their company than non-managers.
On the other hand, despite the fact that non-managers in non-technical roles (business professionals in marketing, risk, operations, etc.) should see - or at least see the potential for - AI impact in their jobs, in practice, only 11 percent of non-managers in non-technical roles responded that they thought AI would “completely” change their role - a much lower percentage than the other more senior roles.
“Trust in AI projects will continue to present significant challenges if we are still tackling fundamental issues such as data quality, as well as more complex problems associated with ethics,” said Florian Douetteau, CEO at Dataiku. “Building internal trust will provide the foundation for external trust; this starts with trust in the data itself that is being used in AI systems. Data quality is one of the most basic but most important hurdles to overcome in the path to building sustainable AI that will bring business value, not risk.”
Inclusive AI encompasses the idea that the more people are involved in AI processes, the better the outcome (both internally and externally) because of a diversification of skills, points of view, or use cases. Practically within a business, it means not restricting the use of data or AI systems to specific teams or roles, but rather equipping and empowering everyone at the company to make day-to-day decisions, as well as larger process changes, with data at the core. The model today for traditional businesses leveraging AI seems to lean more toward data democratisation, or inclusive AI, for its larger potential to scale.
“It goes without saying that AI will impact individual roles, enterprises and industries, yet there are clearly some questions around trust, responsibility and inclusivity which need addressing before AI can have the optimal result,” added Douetteau.
AI Adoption in the Enterprise 2020 report finds growth in mature AI adoption; weak investment in data governance.
O’Reilly has published the results of its 2020 artificial intelligence (AI) survey, “AI Adoption in the Enterprise 2020.” The benchmark report uncovers trends in the evaluation, implementation, and outcomes of AI enterprise adoption over the past year.
Findings reveal that more than half of respondents are in the “mature” phase of AI adoption – defined as those currently using AI for analysis or in production – while about one third are evaluating AI, and 15% report not doing anything with AI. These numbers demonstrate growth when compared with O’Reilly’s 2019 AI Adoption in the Enterprise report, which found just 27% of organisations in the “mature” adoption phase and 54% in the evaluation phase.
When it comes to data governance, more than 26% of respondents say their organisations plan to institute formal data governance processes and/or tools by 2021 and nearly 35% expect this to happen in the next three years. Currently, just one-fifth of respondent organisations report having formal data governance processes and/or tools to support and complement their AI projects, similar to findings uncovered in the O’Reilly Data Quality Survey.
Difficulties in hiring and retaining people with AI skills were once again noted as a top barrier to AI adoption in the enterprise, down slightly from 18% in 2019. As in 2019, the biggest bottleneck to AI adoption was reported to be a lack of institutional support (22%), followed by “Difficulties in identifying appropriate business use cases” at 20%.
“AI practices are maturing, and adopters are experimenting with sophisticated AI techniques and tools, which bodes well for the future advancement of AI in the enterprise,” said Rachel Roumeliotis, O’Reilly Strata Data & AI conference co-chair and strategic content director at O’Reilly. “However, organisations will continue to struggle to expand and scale their AI practices if they don’t address the importance of data governance and data conditioning in ML and AI development.”
Other notable findings include:
Latest research from Neustar reveals across-the-board growth in attacks of all sizes.
Neustar says that its Security Operations Center (SOC) saw a 168% increase in distributed denial-of-service (DDoS) attacks in Q4 2019, compared with Q4 2018, and a 180% increase overall in 2019 vs. 2018. According to Neustar’s latest cyber threats and trends report, released today, the company saw DDoS attacks across all size categories increase in 2019, with attacks sized 5 Gbps and below seeing the largest growth. These small-scale attacks made up more than three quarters of all attacks the company mitigated on behalf of its customers in 2019.
DDoS attacks taking varied forms
In 2019, the largest threat Neustar mitigated, at 587 gigabits per second (Gbps), was 31% larger than the largest attack of 2018, while the maximum attack intensity observed in 2019, 343 million packets per second (Mpps), was 252% higher than that of the most intense attack seen in 2018. However, despite these higher peaks, the average attack size (12 Gbps) and intensity (3 Mpps) remained consistent year over year. The longest single, uninterrupted attack experienced in 2019 lasted three days, 13 hours and eight minutes.
Though the number of attacks increased significantly across all size categories, small-scale attacks (5 Gbps and below) again saw the largest growth in 2019, continuing the trend from the previous year. The combination of DDoS-for-hire and botnet rental services has made DDoS attacks much easier to execute, but the fact that perpetrators seem to be in many cases choosing to engage in small-scale attacks suggests that their goal may often be something other than taking a site completely offline.
“Large, headline-making DDoS attacks do still take place, but many cybersecurity professionals believe that smaller attacks are being used simply to degrade site performance or as a smokescreen for other forms of cybercrime, such as data theft or network infiltration, which the perpetrator can execute more easily while the target’s security team is busy fighting a DDoS attack,” said Rodney Joffe, senior vice president, senior technologist and fellow at Neustar. “Furthermore, with the current move of the bulk of the workforce globally to a work from home model, we expect to see a significant increase in DDoS attacks against VPN infrastructure. This risk makes an ‘always on’ DDoS mitigation service even more critical.”
In addition to conventional DDoS attacks, which seek to exhaust bandwidth, in 2019 Neustar also observed an increase in network protocol or state exhaustion attacks, which target network infrastructure directly. Volumetric attacks continued to proliferate as well, with attackers using new DDoS vectors such as Apple Remote Management Services, Web Services Dynamic Discovery, Ubiquiti Discovery Protocol and the Constrained Application Protocol.
Said Joffe, “During the shift to teleworking at scale, we would not be surprised to see the VPN protocol ports added to these targeted attacks.”
Two- and three-vector attacks ‘just right’ for attackers
In 2019, approximately 85% of all attacks used two or more threat vectors. That number is comparable to the 2018 figure; however, the number of attacks involving two or three vectors rose from 55% to 70%, with correspondingly fewer simple single-vector attacks and complex four- and five-vector attacks, suggesting that attackers have settled into the Goldilocks zone for attacks.
Security professionals continue to view DDoS attacks as a growing threat. According to the most recent Neustar International Security Council (NISC) survey, when asked which vectors they perceived to be increasing threats during November and December 2019, senior-level cybersecurity decision-makers cited social engineering via email most frequently (59%), followed by DDoS (58%) and ransomware (56%).
Web attacks increasing
2019 saw web attacks on the rise as well. Most companies recognise the danger that slow-loading websites pose to their business and attempt to protect them with web application firewalls (WAFs). In the most recent NISC survey, 98% of respondents agreed that a WAF was an essential component of their security infrastructure. However, as more and more enterprises use multiple cloud providers, often involving a mix of public and private clouds, the need for consistent security across applications and platforms is growing.
“Web attacks can be difficult to track because some variation in the performance of websites is to be expected, but they are increasingly critical for businesses to address. One survey found 45% of consumers are less likely to make a purchase when they experience a slow loading website, and 37% are less likely to return to a retailer if they experience slow loading pages,” added Joffe.
A vendor-neutral cloud WAF, coupled with DDoS protection, can eliminate a large portion of threats, allowing enterprise application experts to focus their attention on the more specialised attacks. Continuous updates from a reliable threat feed can also deliver information on bad IPs and botnet command and control (C&C) sites before they are able to damage the network.
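To make the threat-feed idea concrete, here is a minimal sketch (not from the Neustar report) of how such a feed might be consumed: a script periodically downloads a published list of malicious IPs and refreshes a local blocklist that a WAF or firewall reads. The feed URL, file path and integration details are hypothetical placeholders.

# Minimal sketch, assuming a plain-text threat feed with one IP address per line.
# The feed URL and blocklist path are hypothetical placeholders, not real endpoints.
import ipaddress
import requests

FEED_URL = "https://example.com/threat-feed/bad-ips.txt"   # hypothetical feed
BLOCKLIST_PATH = "/etc/waf/blocklist.txt"                  # file the WAF/firewall reads

def fetch_bad_ips(url):
    """Download the feed and keep only entries that parse as valid IP addresses."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    valid = set()
    for line in response.text.splitlines():
        candidate = line.strip()
        if not candidate or candidate.startswith("#"):
            continue  # skip blank lines and comments
        try:
            ipaddress.ip_address(candidate)
            valid.add(candidate)
        except ValueError:
            pass  # ignore malformed entries rather than blocking by accident
    return valid

def update_blocklist(path, bad_ips):
    """Rewrite the blocklist; the WAF or firewall re-reads it on its own schedule."""
    with open(path, "w") as handle:
        handle.write("\n".join(sorted(bad_ips)) + "\n")

if __name__ == "__main__":
    update_blocklist(BLOCKLIST_PATH, fetch_bad_ips(FEED_URL))

Run on a schedule (for example via cron), something along these lines keeps a blocklist current without manual intervention; the real integration would depend on whatever update mechanism the chosen WAF or firewall vendor provides.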
Dell Technologies Global Data Protection Index 2020 Snapshot shines light on key challenges impacting data protection readiness according to 1,000 IT decision makers across 15 countries.
The Dell Technologies Global Data Protection Index 2020 Snapshot reveals that organisations on average are managing almost 40% more data than they were a year ago. With this surge in data comes inherent challenges. The vast majority (81%) of respondents reported their current data protection solutions will not meet all of their future business needs. The Snapshot, a follow-on to the biennial Global Data Protection Index, surveyed 1,000 IT decision makers across 15 countries at public and private organisations with 250+ employees about the impact these challenges and advanced technologies have on data protection readiness. The findings also show positive progress as an increasing number of organisations – 80% in 2019, up from 74% in 2018 – see their data as valuable and are currently extracting value or plan to in the future.
“Data is the lifeblood of business and the key to an organisation’s digital transformation,” said Beth Phalen, president, Dell Technologies Data Protection. “As we enter the next data decade, resilient, reliable and modern data protection strategies are essential in helping businesses make smarter, faster decisions and combat the effects of costly disruptions.”
Costly disruptions rise at alarming rates
According to the study, organisations are now managing 13.53 petabytes (PB) of data, nearly a 40% increase since the average 9.70PB in 2018, and an 831% increase since organisations were managing 1.45PB in 2016. The largest threat to all this data seems to be the growing number of disruptive events, from cyber-attacks to data loss to systems downtime. The majority of organisations (82% in 2019 compared to 76% in 2018) suffered a disruptive event in the last 12 months. And, an additional 68% fear their organisation will experience a disruptive event in the next 12 months.
Even more concerning is the finding that organisations using more than one data protection vendor are approximately two times more vulnerable to a cyber incident that prevents access to their data (39% of those using two or more vendors versus 20% of those using only one vendor). But, the use of multiple data protection vendors is on the rise with 80% of organisations choosing to deploy data protection solutions from two or more providers, up 20 percentage points since 2016.
The cost of disruption is also increasing at an alarming rate. The average cost of downtime surged by 54% from 2018 to 2019, resulting in an estimated total cost of $810,018 in 2019, up from $526,845 in 2018. The estimated cost of data loss also increased from $995,613 in 2018 to $1,013,075 in 2019. These costs are significantly higher for those organisations using more than one data protection vendor – nearly two times higher downtime-related costs and almost five times higher data loss costs, on average.
Emerging technologies challenge data protection solutions
As emerging technologies continue to advance and shape the digital landscape, organisations are learning how to use these technologies for better business outcomes. The study reports that almost all organisations are making some level of investment in newer or emerging technologies, with the top five being: cloud-native applications (58%); artificial intelligence (AI) and machine learning (ML) (53%); software-as-a-service (SaaS) applications (51%); 5G and cloud edge infrastructure (49%); and Internet of Things/end point (36%).
Yet, nearly three-quarters (71%) of respondents believe these emerging technologies create more data protection complexity while 61% state that emerging technologies pose a risk to data protection. More than half of those using newer or emerging technologies are struggling to find adequate data protection solutions for these technologies, including:
The study also found that 81% of respondents believe their organisations’ existing data protection solutions will not be able to meet all future business challenges. Respondents shared a lack of confidence in the following areas:
Data protection joins forces with cloud
Businesses are taking a combination of cloud approaches when deploying new business applications and protecting workloads such as containers and cloud-native and SaaS applications. The findings show that organisations prefer public cloud/SaaS (43%), hybrid cloud (42%) and private cloud (39%) as deployment environments for newer applications such as these. Also, 85% of organisations surveyed say it is mandatory or extremely important for data protection providers to protect cloud-native applications.
As more data moves to, through and around edge environments, many respondents say cloud-based backups are preferred, with 62% citing private cloud and 49% citing public cloud as their approach for managing and protecting data created in edge locations.
“These findings prove that data protection needs to be central to a company’s business strategy,” said Phalen. “As the data landscape grows more complex, organisations need nimble, sustainable data protection strategies that can scale in a multi-platform, multi-cloud world.”
New data from Synergy Research Group shows that hyperscale operator capex in the fourth quarter was well over $32 billion, setting a new record for quarterly spending as it was marginally higher than Q4 of 2018 – the previous recordholder.
For the full year, total hyperscale operator capex was up just 1% from the prior year. However, their capex that was specifically targeted at data centers increased substantially, growing by 11% in 2019, reflecting ongoing strength in their core business operations. The top five hyperscale spenders in 2019 were Amazon, Google, Microsoft, Facebook and Apple, whose capex budgets far exceed the other hyperscale operators. In 2019 capex growth at Amazon, Microsoft and Facebook was particularly strong, while Apple’s capex dropped off sharply, dragging down the overall figures. Outside of the top five, other leading hyperscale spenders include Alibaba, Tencent, IBM, JD.com, Baidu and Oracle.
Much of the hyperscale capex goes towards building, expanding and equipping huge data centers, which grew in number to 512 at the end of Q4. The hyperscale data is based on analysis of the capex and data center footprint of 20 of the world’s major cloud and internet service firms, including the largest operators in IaaS, PaaS, SaaS, search, social networking and e-commerce. In aggregate these twenty companies generated 2019 revenues of almost $1.4 trillion, up 13% from 2018.
“As expected there was a significant boost in hyperscale operator capex in the second half of 2019, which helped to counter a relatively soft start to the year. Most notable was that annual spending on data centers grew at a double-digit rate despite total capex being somewhat flat,” said John Dinsdale, a Chief Analyst at Synergy Research Group. “How will coronavirus impact this trend going forwards? While there are many unknowns, what is clear is that the hyperscale operators generate well over 80% of their revenues from cloud, digital services and online activities. The radical shifts we are seeing in social and business behavior will actually provide some substantive tailwinds for many of these businesses. These hyperscale firms are much better insulated against the current crisis than most others and we expect to see ongoing robust levels of capex.”
Just 12% of more than 1,500 respondents believe their businesses are highly prepared for the impact of coronavirus, while 26% believe that the virus will have little or no impact on their business, according to a recent survey by Gartner, Inc. In a Gartner business continuity webinar on March 6, Gartner experts asked participants how prepared they are for the impact of COVID-19.
“This lack of confidence shows that many organizations approach risk management in an outdated and ineffective manner,” said Matt Shinkman, vice president in the Gartner Risk and Audit practice. “The best-prepared organizations will manage the disruption caused by the coronavirus far better than their less-prepared peers.”
Most respondents (56%) rate themselves somewhat prepared, and 11% said they were either relatively or very unprepared. Just 2% of respondents believe their business can continue as normal, highlighting the huge range of businesses that could be affected by the outbreak. Twenty-four percent of respondents expect little disruption, while the majority expect business to continue at a reduced pace (57%), to be severely restricted (16%) or to be discontinued altogether (1%).
The challenge lies partly in the ambiguity inherent to managing an emerging risk such as coronavirus. Organizations often have policies in place to deal with most risks, but they don’t activate them until it’s too late because no one is owning the risk or taking it seriously until it is fully manifested. The threshold for a risk to generate executive action is often too high to enable an effective response.
“Board members tend to deal with emerging risks by just assuming they will go away and instead focus their attention on what is most important today,” said Mr. Shinkman. “In good times this methodology is reinforced because sometimes emerging risks really do just go away. It’s when they don’t that problems inevitably emerge.”
Having an enterprise risk management (ERM) function in place means that an organization is more likely to see risks coming and then mitigate the impact of those emerging risks more swiftly and effectively. Gartner’s view is that a focus on impacts rather than specific scenarios is best practice for ERM.
“It’s nearly impossible to predict exactly if or how a particular scenario will unfold or even when,” said Mr. Shinkman. “That’s what creates the ambiguity and often inaction around emerging risks. It’s much more effective to focus on potential impacts and how to mitigate them.”
A pandemic provides a perfect example of how this approach works – companies that wait until the emerging risk is already impacting operations and/or many employees will likely find themselves playing catch up and losing ground to companies that were better prepared.
Companies can get better prepared by considering what interim events could occur that would suggest that a pandemic, or similar emerging risk, is about to sharply increase in terms of its impact or likelihood. By using an ERM approach to identify and prepare for those specific events – and setting up mechanisms to monitor for them – the best companies are better positioned to avoid major disruption.
Those dealing with a crisis response to the coronavirus in their organization should have planned responses to specific impacts. For example, what will the company do if one employee gets sick? Ask all employees to self-isolate? Are work-from-home procedures sufficiently mature to support that, or will work have to stop? Do suppliers or clients need to be notified? Is finance able to support operations in the event of anticipated losses?
Using an impacts-based method makes it very clear when to trigger a response plan and to start mitigating the effect of specific impacts on an organization. Also having response plans that react to specific impacts means it is simpler to communicate the plan to staff, so that all employees can play a part in managing risk. In fast-moving situations such as this, the more people who are owning risk, the more likely it is that an organizational response will be timely.
“Avoid constructing elaborate ‘what if?’ scenarios and focus on what is known,” said Mr. Shinkman. “Many organizations likely already have plans in place to deal with the types of disruption they are facing because of the coronavirus. The job of risk management is to ensure the right plans exist and make sure they get used at the appropriate moment.”
A five-phase strategic and systematic approach to strengthen the resilience of organizations’ current business models is key to continuity of operations during the coronavirus pandemic, according to Gartner, Inc.
“Companies tend to have traditional strategies and plans that focus on the continuity of the resources and processes but omit the business model,” said Daniel Sun, research vice president at Gartner. “However, the business model itself can be a threat to continuity of operations in external events, such as the global outbreak of COVID-19.”
CIOs can play a key role in the process of raising current business model resilience to ensure ongoing operations, since digital technologies and capabilities can influence every aspect of business models.
Phase 1 — Define the business model: Facing the contingency of COVID-19 outbreaks, companies should first focus on their core customers that are essential to their continuity of operations, and then refer to a process of defining their current business models by asking questions focused on their customers, value propositions, capabilities and financial models.
Although CIOs do not normally lead the process of defining business models, they should proactively engage with senior business leaders to run through 10 key questions regarding current business models. This is foundational for CIOs to actively participate in modifying current business models.
Phase 2 — Identify uncertainties: This step can be carried out through a strength, weakness, opportunity and threat (SWOT) analysis, or by brainstorming. Given the wide range of uncertainties and threats, this step can benefit from a heterogeneous group of participants with diverse backgrounds and interests, particularly where IT is normally involved. Companies should focus on the risks that the uncertainty poses to the components of the business model.
“CIOs should participate in, or coordinate, the brainstorming sessions to identify any uncertainties from COVID-19 outbreaks,” said Mr. Sun. “CIOs can share some of IT’s potential uncertainties and threats, such as issues with IT infrastructure, applications and software systems.”
Phase 3 — Assess the impact: Multidisciplinary members should form a project team to assess, or even quantify, the impact of the identified uncertainties. CIOs can provide the potential impacts from an IT perspective.
Phase 4 — Design changes: At this point in the process, the emphasis is to develop tentative strategies rather than estimate their feasibility. Selecting and executing changes will follow in the next phase. CIOs and IT should leverage digital technologies and capabilities to facilitate the designed changes.
Phase 5 — Execute changes: The decision on which changes to execute is principally a decision for senior leadership teams. The strategies for changes defined in Phase 4 provide essential input for this decision process. Senior leadership teams should select the strategies they find most compelling to implement, which is often based on both economic calculations and intuition.
“Once senior leadership teams select the business and IT change initiatives, CIOs should apply an agile approach in executing the initiatives. For example, they can form an agile (product) team of multidisciplinary team members, enabling the alignment between business and IT and ensuring delivery speed and quality,” said Mr. Sun. “In crises such as the COVID-19 outbreak, agility, speed and quality are crucial for enabling the continuity of operations.”
Coronavirus exposes outdated risk management practices
Organizations’ current approach to risk governance is not sufficient to tackle the complex risk environment organizations are facing today, according to Gartner, Inc. The COVID-19 pandemic is just the latest in a line of recent risk events showing how organizations are not properly set up to manage risk, especially fast-moving ones.
Gartner research showed that 87% of audit departments say their organization uses a “three lines of defense” (3LOD) model for risk governance. This model states that line management should act as the first line of defense, identifying risks and implementing controls. Risk and assurance functions such as legal, compliance and enterprise risk management (ERM) should act as a second line, overseeing and monitoring risk management processes. Finally, internal audit should act as a third line, taking a bird’s-eye view of the effectiveness of controls and risk management.
“The response to the coronavirus pandemic is a perfect example of when the 3LOD and traditional risk governance don’t work very well,” said Malcolm Murray, vice president and fellow, research for the Gartner Audit and Risk practice. “Traditional approaches fail because they can’t effectively deal with fast-moving and interconnected risks. Pandemic is a rapidly developing type of risk that needs a dynamic risk governance (DRG) set-up.”
“The coronavirus pandemic demonstrates why organizations need a new approach for governing the management of the many complex risks they face in today’s world,” said Mr. Murray. “Adopting the DRG principles helps organizations ensure they have the appropriate governance for different kinds of risks, with the right kind of risk management activities and the right people involved.”
Dynamic Risk Governance
The effectiveness of DRG was measured in a Gartner survey of over 200 organizations, looking at whether traditional or dynamic approaches to governing risk management led to better risk management behaviors and better risk outcomes. The three pillars of DRG each increased the occurrence of high-quality risk management behaviors:
· Risk-tailored governance (18% increase)
The governance model should depend on the risk’s speed, the organization’s risk tolerance and internal constraints rather than relying on a one-size-fits-all level of scrutiny, such as centralized oversight for all risks or models based on industry norms. Corporate leaders should have the final say here, because the governance model should be determined based on the company strategy. A benefit of placing this authority with senior management rather than with the board and the assurance functions is a more rapid response: these top executives can take faster action.
· Activity-based risk governance (22% increase)
This means dispensing with the idea that only the first line owns all risk activities, and assigning accountability for risk management tasks without regard for the borders between the first, second and third lines. Senior management – not assurance functions – should determine who will decide the task owners for a particular risk. For some risks, it will not matter which exact function is accountable for each activity – as long as there is specific accountability assigned.
· Digital-first risk governance (18% increase)
This means considering digital solutions during creation of the governance framework for the risk, not as an afterthought. For instance, if large parts of the risk management can be automated, then fewer functions need to be involved.
When looking at the risks related to the coronavirus pandemic specifically, adopting the DRG principles is beneficial at all three stages of dealing with the risk – response, recovery and restoration. For the first stage, adopting DRG means quickly identifying who in senior management should own the governance of the risk and quickly setting up an initial governance model that considers the fast speed of the risk. It means identifying the key risk management activities for this stage of the risk and assigning clear accountability for these to appropriate parties.
In subsequent stages, when attention shifts towards recovery and restoration, applying the DRG principles allows organizations to regularly revisit whether the risk is governed in the right way. Once there is more visibility into the path of the risk, additional risk management activities can be added, such as a focus on monitoring the risk and assessing longer-term impact.
“This isn’t just about risk managers, this is about the board of directors and senior management making risk governance a key consideration so that organizations become more resilient against fast-emerging risks, such as coronavirus,” said Mr. Murray. “The DRG methodology applies equally to the many fast-emerging risks presented by digitalization.”
As COVID-19 coronavirus spreads globally, Gartner, Inc. has identified three impact areas for customer service and support leaders to focus on to manage risk and ensure continuity of operations.
“Though service leaders are familiar with business continuity and disaster recovery planning, pandemic planning is very different because of its wider scope and the uncertainty of impact,” said John Quaglietta, senior director analyst in Gartner’s Customer Service and Support Practice. “The global and dynamic impact of COVID-19 requires planning for longer recovery times and many scenarios because pandemic events are so fluid, and things can change quickly without notice.”
The three impact areas that Gartner recommends that service leaders focus on include:
Impact Area 1: Operational Continuity
Continuity of operations in service and support organizations is largely delivered by agents, operations staff and management. However, this is being threatened by increased absence due to quarantines.
“Since service and support are labor-intensive, having large numbers of staff miss work due to pandemic-related issues can severely impact delivery,” said Deborah Alvord, senior director analyst in Gartner’s Customer Service and Support Practice. “Delivery impairment has both short- and long-range effects on organizations’ ability to deliver service to customers and meet related service goals according to customer expectations.”
Service leaders should maintain continuity of business operations by completing a workforce planning assessment and determining outsourcing and work-from-home options. Additionally, they should implement and promote digital and self-service channels.
Impact Area 2: Staff Morale
Unpredictable work conditions create additional pressure and demands on employees, fueling anxiety and creating morale and retention issues. Gartner recommends service leaders establish programs that promote employee well-being, focus on employee engagement, and include employees in business continuity and disaster recovery planning.
Impact Area 3: Customer Demand
Communicating with customers during the life cycle of the pandemic is critical. It is important for service leaders to consistently provide updates on developing events and how those events affect the organization’s ability to provide service and support. If customers should expect delays, let them know in advance to reduce unneeded contact volume. Where possible, service leaders should use a multichannel strategy to communicate updates, process or policy changes, and changes in service caused by COVID-19. This can be done via inbound and outbound channels such as SMS, Interactive Voice Response (IVR) and phone.
Worldwide server revenue update
The worldwide server market continued to grow in the fourth quarter of 2019 as revenue increased 5.1% and shipments grew 11.7% year over year, according to Gartner, Inc. In all of 2019, worldwide server shipments declined 3.1% and server revenue declined 2.5% compared with full-year 2018.
"The market returned to growth with a very strong fourth quarter result, largely driven by a return of demand from hyperscalers,” said Adrian O’Connell, senior research director at Gartner. “However, the outlook for the worldwide server market in 2020 is subject to great uncertainty. The impact of the coronavirus (COVID-19) outbreak is expected to temper forecast growth. Although demand from the hyperscale segment is expected to continue through the first half of the year, other buying organizations’ reactions will vary.”
Dell EMC secured the top spot in the worldwide server market based on revenue in the fourth quarter of 2019 (see Table 1). Despite a decline of 9.9% year over year, Dell EMC secured 17.3% market share, followed by Hewlett Packard Enterprise (HPE) with 15.4% of the market. IBM experienced the strongest growth in the quarter, growing 28.6%.
Table 1
Worldwide: Server Vendor Revenue Estimates, 4Q19 (U.S. Dollars)
Company | 4Q19 Revenue | 4Q19 Market Share (%) | 4Q18 Revenue | 4Q18 Market Share (%) | 4Q19-4Q18 Growth (%) |
Dell EMC | 3,986,574,446 | 17.3 | 4,426,376,226 | 20.2 | -9.9 |
HPE | 3,551,891,310 | 15.4 | 3,887,881,501 | 17.8 | -8.6 |
IBM | 2,294,258,503 | 10.0 | 1,783,691,221 | 8.1 | 28.6 |
Inspur Electronics | 1,831,676,801 | 8.0 | 1,801,622,141 | 8.2 | 1.7 |
Huawei | 1,488,740,004 | 6.5 | 1,815,071,726 | 8.3 | -18.0 |
Others | 9,860,302,857 | 42.8 | 8,186,405,788 | 37.4 | 20.4 |
Total | 23,013,443,922 | 100.0 | 21,901,048,604 | 100.0 | 5.1 |
Source: Gartner (March 2020)
In server shipments, Dell EMC maintained the No. 1 position in the fourth quarter of 2019 with 14.2% market share (see Table 2). HPE secured the second spot with 10.8% of the market. Both Dell EMC and HPE experienced declines in server shipments, while Lenovo experienced the strongest growth with a 22.4% increase in shipments in the fourth quarter of 2019.
Table 2
Worldwide: Server Vendor Shipment Estimates, 4Q19 (Units)
Company | 4Q19 Shipments | 4Q19 Market Share (%) | 4Q18 Shipments | 4Q18 Market Share (%) | 4Q19-4Q18 Growth (%) |
Dell EMC | 549,552 | 14.2 | 580,580 | 16.7 | -5.3 |
HPE | 417,699 | 10.8 | 424,422 | 12.2 | -1.6 |
Inspur Electronics | 296,934 | 7.7 | 293,702 | 8.5 | 1.1 |
Huawei | 267,157 | 6.9 | 260,193 | 7.5 | 2.7 |
Lenovo | 233,893 | 6.0 | 191,032 | 5.5 | 22.4 |
Others | 2,113,167 | 54.5 | 1,723,032 | 49.6 | 22.6 |
Total | 3,878,402 | 100.0 | 3,472,961 | 100.0 | 11.7 |
Source: Gartner (March 2020)
Full-Year 2019 Server Market Results
In 2019, both worldwide server shipments and revenue declined, with shipments falling 3.1% and revenue down 2.5%.
As for vendor performance, Dell EMC took the top spot in both revenue and shipments with 20.5% market share and 16.3% market share, respectively. HPE secured the No. 2 position with market share of 17.3% in revenue and 12.3% in shipments. Inspur Electronics is the only vendor in the top five that grew in both revenue and shipments in 2019.
Strongest demand for AI talent comes from non-IT departments
For the past four years, the strongest demand for talent with artificial intelligence (AI) skills has not come from the IT department, but rather, from other business units in the organization, according to Gartner, Inc.
Gartner Talent Neuron data shows that although the IT department’s need for AI talent has tripled between 2015 and 2019, the number of AI jobs posted by IT is still less than half of that stemming from other business units (see Figure 1).
Figure 1: Total AI Jobs Posted in Top 12 Countries by GDP, July 2015 Through March 2019
Note: The top countries are derived from the IMF 2019 ranking of countries by total GDP, excluding Italy, Spain and South Korea due to limited time series data.
Source: Gartner Talent Neuron (March 2020)
“High demand and tight labor markets have made candidates with AI skills highly competitive, but hiring techniques and strategies have not kept up,” said Peter Krensky, research director at Gartner. “In the recent Gartner AI and Machine Learning Development Strategies Study, respondents ranked ‘skills of staff’ as the No. 1 challenge or barrier to the adoption of AI and machine learning (ML).”
Departments recruiting AI talent in high volumes include marketing, sales, customer service, finance, and research and development. These business units are using AI talent for customer churn modeling, customer profitability analysis, customer segmentation, cross-sell and upsell recommendations, demand planning, and risk management.
A significant portion of AI use cases are reported from asset-centric industries supporting projects such as predictive maintenance, workflow and production optimization, quality control and supply chain optimization. AI talent is often hired directly into these departments with clear use cases in mind so that data scientists and others can learn the intricacies of the specific business area and remain close to the deployment and consumption of their work.
“Given the complexity, novelty, multidisciplinary nature and potentially profound impact of AI, CIOs are well-placed to help HR in the hiring of AI talent in all business units,” said Mr. Krensky. “Together, CIOs and HR leaders should rethink what skills are truly necessary for an AI-focused employee to have on Day 1 and explore candidate criteria adjacent to hiring specifications. CIOs should also think creatively about IT’s role in governing and supporting diverse AI initiatives and the evolving teams driving this activity.”
European total ICT spending growth for 2020 revised down from 2.8% to 1.4% in the most probable IDC European research scenario.
The coronavirus outbreak across European countries and the necessary containment measures put in place by governments will substantially affect European ICT markets, accelerating the impact already felt from the shocks in Asia. In this extremely fluid scenario, International Data Corporation (IDC) expects to see a significant slowdown in technology spending in 2020 across European organizations, with regional ICT spending growth rates for 2020 expected to halve from 2.8% to 1.4% compared to the December 2019 forecast, as the crisis seeps into virtually all European economies.
"European Technology vendors and buyers are rapidly adapting to the disruption and the extremely fast-moving market conditions," said Thomas Meyer, general manager and GVP research at IDC Europe. "In such a fluid scenario, it is still early to fully assess the overall European ICT impact picture. IDC recommends that all technology leaders recalibrate their strategies. In use cases such as patient care as well as customer, citizen, student or employee experience and proximity, we expect to see accelerated adoption of digital solutions."
"To help technology providers and buyers with their short-term business and technology investment planning, we have developed two scenarios for Europe: a probable one in which the extent of the coronavirus is broadly contained in the next few weeks, and a pessimistic one that considers a less controlled 'domino' effect on a global scale," said Philip Carter, chief analyst at IDC Europe.
In the most probable IDC scenario, European ICT spending is projected to grow by 1.4% in constant currency terms this year, down from the 2.8% forecast published at the end of 2019 in the IDC Worldwide Black Book Live Edition.
"When taking a broad historical view of European ICT spending across the past decade, the impact of the COVID-19 crisis has not reached the levels of the 2007–2008 financial crisis yet. However, it does represent the first strong deceleration in spending growth since the European debt crisis in 2013-2014," said Giorgio Nebuloni, AVP at IDC Europe.
The new outlook is shaped primarily by lower expectations in the hardware and services markets.
Impacts on the software and telecoms markets are less evident, and some positive factors are expected to largely offset the natural downturn. While the decrease in hardware spending will also negatively impact the overall software market to a degree, difficulties prompted by COVID-19 across industries will impact total telco connections. At the same time, the increasing need for remote collaboration will boost demand for telco services and drive new opportunities in collaborative applications and platforms, as well as in the security technologies that enable them.
New Use Cases for Technology Will Emerge Quickly
"Factors weighing on investment will range from a decrease in customer demand to supply chains breaking up," said Carla La Croce, Senior Research Analyst at IDC Europe. "Nevertheless, there are areas in which spending will grow. There are specific solutions and use cases, such as videoconferencing, intelligent supply, chatbots, and elearning platforms among others, highlighting how technology can help businesses and societies face (and hopefully surmount) these new challenges."
One of the most pertinent examples is the ability to contain the COVID-19 outbreak itself with the use of artificial intelligence (AI). The report by the WHO-China Joint Mission on COVID-19 highlights how big data and artificial intelligence technologies were applied "to strengthen contact tracing and the management of priority populations." Researchers have also started to use deep learning techniques to support COVID-19 detection when analyzing CT scans and patient records. IDC believes some of these use cases could be observed in Europe over the next few weeks, albeit at a smaller scale.
A Pessimistic Scenario Depicting No Growth
In the most pessimistic scenario, IDC expects European ICT spending to drop to a near-flat 0.2% growth in 2020, with all technology domains but software showing negative trends for the remaining part of the year. A series of domino effects, including oil price changes, currency depreciation, the inability of governments to make timely payments, delays in the supply chains and lay-offs in both public and private sectors would lead to a much more dramatic impact on the overall ICT European market and an exponential increase in the downside risk in IDC Market Forecast assumptions.
As restrictions on movement bite, supply-chain disruption becomes commonplace and demand drops, European IT spend in manufacturing, personal and consumer services, transportation, and hospitality will be strongly curbed, as these industries are the most exposed to the impact of the COVID-19 crisis in the short-, mid-, and long-term view. At the same time, other industries, such as healthcare and government, will be forced to accelerate investments in these circumstances. IDC expects this will drive additional IT investments for the public sector, pushing hard on infrastructure and collaboration tool deployments, but not before the second half of 2020.
The pre-existing digital maturity of industries will also affect their capacity to invest in technologies, regardless of their actual budgets. Limited face-to-face business relationships between vendors and end users will inevitably reduce investment in significant digital transformation projects in less mature industries. This is particularly true for projects involving more advanced technologies. This reduced social contact (the duration of which is hard to predict) will also have significant consequences for the purchasing options of a good portion of consumers. Those consumers, especially in less digitally savvy countries, will be gradually excluded from access to technological innovations.
Server and storage markets to decline
End user spending on IT infrastructure (server and enterprise storage systems) will decline in 2020 as a result of the widespread coronavirus pandemic. According to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker and Worldwide Quarterly Enterprise Storage Systems Tracker, under the current probable scenario server market revenues will decline 3.4% year over year to $88.6 billion and external enterprise storage systems (ESS) revenues will decline 5.5% to $28.7 billion in 2020. The server market is expected to decline 11.0% in Q1 and 8.9% in Q2 and then return to growth in the second half of the year. The external ESS market is forecast to decline 7.3% in Q1 and 12.4% in Q2 before returning to slight growth by the end of 2020 with further recovery expected in 2021.
IDC developed three forecast scenarios (optimistic, probable, and pessimistic) for the impact of COVID-19 on the IT infrastructure markets. The probable scenario assumes a broad negative impact starting in China and spreading into other regions before slowing toward end of the year. Elements of the impact include changing demand expectations from various groups of IT buyers, supply chain shortages and logistical delays, short-term component price increases, and a suppressed economic and social climate. The current forecast is based on the probable scenario as of March 26, 2020. However, as the situation continues to unfold, the forecasts might be adjusted further.
The fast-changing environment has revealed some remarkable differences in how the pandemic has affected various segments of the market. As the first to be hit by the coronavirus, China will see the greatest negative impact in the first quarter of 2020, while other regions will start to experience the impact in the second quarter. Similarly, some industries (transportation, hospitality, retail, etc.) are facing significantly reduced consumer activity and business closures, while others are being hit by an unexpected wave of demand for services, including video streaming, web conferencing, and online retail. Facing economic uncertainty, many businesses are being forced to consider more expedited adoption of cloud services to fulfill their compute and storage needs. This spike in demand has put unplanned pressure on the IT infrastructure in cloud service provider datacenters, leading to growing demand for servers and system components. As a result, the IT infrastructure market has two submarkets going in different directions: decreasing demand from enterprise buyers and increasing demand from cloud service providers. This dynamic is affecting the server market the most, resulting in just a moderate decline for the overall market in 2020. The external storage systems market, with a higher share of enterprise buyers, will experience a deeper decline in 2020.
Worldwide End User Spend on Servers, 2019, 2020 and 2024, and Five-Year CAGR (value in $ billions)
IT Infrastructure Market | Market Segment | 2019 Value | 2020 Value | 2020 Growth | 2024 Value | 2019-2024 CAGR |
Servers | x86 | $83.8 | $81.9 | -2.2% | $109.8 | 5.6% |
Servers | non-x86 | $8.0 | $6.7 | -16.0% | $6.8 | -3.3% |
Servers | Total Server | $91.7 | $88.6 | -3.4% | $116.6 | 4.9% |
Source: IDC Worldwide Quarterly Server Tracker, March 26, 2020
Worldwide End User Spend on External Enterprise Storage Systems, 2019, 2020 and 2024, and Five-Year CAGR (value in $ billions)
IT Infrastructure Market | Market Segment | 2019 Value | 2020 Value | 2020 Growth | 2024 Value | 2019-2024 CAGR |
External ESS | External RAID | $30.0 | $28.3 | -5.7% | $32.0 | 1.3% |
External ESS | Storage Expansion* | $0.4 | $0.5 | 9.6% | $0.4 | -2.4% |
External ESS | Total External ESS | $30.4 | $28.7 | -5.5% | $32.4 | 1.3% |
Source: IDC Worldwide Quarterly Enterprise Storage Systems Tracker, March 26, 2020
* Note: Storage Expansion category includes OEM and ODM Storage Expansions.
"The impact of COVID-19 will certainly dampen overall spending on IT infrastructure as companies temporarily shut down and employees are laid off or furloughed," said Kuba Stolarski, research director, IT Infrastructure at IDC. "While IDC believes that the short-term impact will be significant, unless the crisis spirals further out of control, it is likely that this will not impact the markets past 2021, at which point we will see a robust recovery with cloud platforms very much leading the way."
In the longer term both markets will return to growth. The server market is expected to deliver a compound annual growth rate (CAGR) of 4.9% over the 2019-2024 forecast period with revenues reaching $116.6 billion in 2024. Meanwhile the external ESS market will see a five-year CAGR of 1.3% growing to $32.4 billion in 2024.
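As a quick sanity check on those figures, the five-year CAGR can be reproduced with the standard compound growth formula; the short Python sketch below is purely illustrative, using only the rounded revenue values quoted in the tables above.

# Illustrative check of the five-year CAGR figures quoted above,
# using the rounded 2019 and 2024 revenue values from the IDC tables.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

print(f"Server CAGR:       {cagr(91.7, 116.6, 5):.1%}")   # ~4.9%
print(f"External ESS CAGR: {cagr(30.4, 32.4, 5):.1%}")    # ~1.3%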
"The IT infrastructure markets are already going though a transformation and shifts in end user spending will bring an even faster changing IT buyer landscape," said Natalya Yezhkova, research vice president, IT Infrastructure. "While the current crisis brings tensions and uncertainty to the market, it also will push organizations to expedite adoption of technologies and IT delivery models that help with optimization of IT infrastructure resources."
Enterprises move to augmented and virtual reality
According to the latest IDC Worldwide Augmented and Virtual Reality Spending Guide, Asia/Pacific spending on augmented reality and virtual reality (AR/VR) is forecast to be $3.7 billion in 2019, an increase of 34.0% from the previous year. Asia/Pacific spending on AR/VR products and services will continue this strong growth throughout the forecast period (2018-23), with a five-year compound annual growth rate of 62.0%. This growth is primarily driven by commercial industries, which together are forecast to outspend the consumer segment by more than $11 billion by the end of the forecast period (2018-23). Despite this, the consumer segment (currently at $1.7 billion in 2019) remains larger than any single commercial industry segment over the forecast period.
The high growth in the commercial segment is primarily due to AR/VR's capability to solve complex business problems and streamline operations. The two industries seeing the most activity and implementation in Asia/Pacific, and spending the most on this technology, are education (US$495.3 million in 2019) and retail ($244.4 million in 2019).
“Specialized training programs in the education sector – VR pilot training through simulations, learning human anatomy, and so on – provide an opportunity to develop specific skill sets in a virtual environment, where mistakes made during training do not have fatal consequences. This has proved a significant transition for institutes, saving time for distance learning purposes and reducing the travel costs incurred by students. Similarly, high-end retailers have developed improved customer engagement programs using this technology, helping them deliver products customised to a specific customer's choice in the same or less time and effort. The technology has seen growing consideration and solutions around online retail showcasing, retail showcasing, and virtual test drives,” says Ritika Srivastava, Associate Market Analyst at IDC India.
Although these two industries have the highest market spend, other industries have high potential to grow at a faster pace over the forecast period (2018-23), with new use cases in the pipeline. Retail (94.8% CAGR), followed by utilities, securities and investment services, and process manufacturing, are the industries gaining momentum in exploring new use cases, and they are attractive in terms of investment. Use cases that apply augmented reality to operational tasks such as assembly, maintenance, and repair have particular impetus within these industries.
"With increasing number of enterprises embracing AR/VR technologies for diverse use cases such as retail showcase, assembling, maintenance, indoor navigation in airports, and training, the market outlook for AR/VR will continue to remain strong. As organizations prepare for Future of Work, AR/VR will play a critical role in augmenting workforce capabilities. The ongoing advancements in hardware and software – initiatives and as Google (ARCore), Apple (ARKit 3), and Microsoft (HoloLens) – will open more transformational opportunities and will further push the wide-spread adoption of AR/VR,” says Deepan Pathy, Research Manager for Future of Work, AR/VR, and Mobility at IDC efforts by tech giants such Asia/Pacific.
Hardware will account for more than half of all AR/VR spending throughout the forecast, followed by software and services. Services spending will see the strongest CAGR (94.8%) in systems integration, followed by consulting services and custom application development, while software spending will grow at a 70.0% CAGR.
Of the two reality types, spending in VR solutions will be greater than that for AR solutions initially. However, strong growth in AR hardware, software, and services spending (135.5% CAGR) will push overall AR spending well ahead of VR spending by the end of the forecast period (2018-23).
On a geographic basis, the China market will account for the largest share of AR/VR spending in the APEJ region, with more than 81% share in 2019, and that spending is projected to grow at a five-year CAGR of 63.8% over the forecast. Meanwhile, AR/VR technology has started to gain prominence in the ASEAN countries of Asia/Pacific*, where organizations have partnered with AR/VR enterprises to improve the industry experience.
Converged systems market grows
According to the International Data Corporation (IDC) Worldwide Quarterly Converged Systems Tracker, worldwide converged systems market revenue increased 1.1% year over year to $4.2 billion during the fourth quarter of 2019 (4Q19).
"Hyperconverged system sales remained robust during the fourth quarter and carried overall converged systems market growth despite annual declines of other product types," said Greg Macatee, research analyst, Infrastructure Platforms and Technologies at IDC. "The hyperconverged system growth picture was largely consistent across the globe with growth in every region in the low to mid double-digit range as these types of systems continue to provide value to a wide variety of businesses in both hybrid and multicloud environments given their easy-to-deploy and automated software-defined nature."
Converged Systems Segments
IDC's converged systems market view offers three segments: certified reference systems & integrated infrastructure, integrated platforms, and hyperconverged systems. The certified reference systems & integrated infrastructure market generated nearly $1.3 billion in revenue during the fourth quarter, which represents a contraction of 18.5% year over year and 30.7% of all converged systems revenue. The integrated platforms segment grew 0.1% year over year in 4Q19, generating $620 million worth of revenue. This amounted to 14.8% of the total converged systems market revenue. Revenue from hyperconverged systems grew 17.2% year over year during the fourth quarter and totaled $2.3 billion. This amounted to 54.5% of the total converged systems market revenue.
IDC offers two ways to rank technology suppliers within the hyperconverged systems market: by the brand of the hyperconverged solution or by the owner of the software providing the core hyperconverged capabilities. Rankings based on a branded view of the market can be found in the first table of this press release and rankings based on the owner of the hyperconverged software can be found in the second table within this press release. Both tables include all the same software and hardware, summing to the same market size.
As it relates to the branded view of the hyperconverged systems market, Dell Technologies was the largest supplier with $760.0 million in revenue and a 33.3% share. Nutanix generated $312.9 million in branded hardware revenue, representing 13.7% of the total HCI market during the quarter. There was a three-way tie* for third between Cisco, Lenovo, and Hewlett Packard Enterprise, generating $138.0 million, $121.8 million, and $115.5 million in revenue, which represents 6.0%, 5.3%, and 5.1% of the market, respectively.
Top 3 Companies, Worldwide Hyperconverged Systems as Branded, Q4 2019 (revenue in $M) | |||||
Company | 4Q19 Revenue | 4Q19 Market Share | 4Q18 Revenue | 4Q18 Market Share | 4Q19/4Q18 Revenue Growth |
1. Dell Technologies (see table notes) | $760.0 | 33.3% | $552.4 | 28.4% | 37.6% |
2. Nutanix | $312.9 | 13.7% | $284.3 | 14.6% | 10.0% |
T3. Cisco* | $138.0 | 6.0% | $75.0 | 3.8% | 84.1% |
T3. Lenovo* | $121.8 | 5.3% | $85.0 | 4.4% | 43.2% |
T3. Hewlett Packard Enterprise* | $115.5 | 5.1% | $92.8 | 4.8% | 24.4% |
Rest of Market | $833.9 | 36.5% | $858.0 | 44.1% | -2.8% |
Total | $2,282.2 | 100.0% | $1,947.6 | 100.0% | 17.2% |
Source: IDC Worldwide Quarterly Converged Systems Tracker, March 19, 2020 |
Table Notes:
Dell Technologies represents the combined revenues for Dell and EMC sales for all quarters shown.
* IDC declares a statistical tie in the worldwide converged systems market when there is a difference of one percent or less in the share of revenues or unit shipments among two or more vendors.
Numbers in this press release may not sum due to rounding.
From the software ownership view of the market, new systems running VMware hyperconverged software represented $938.0 million in total 4Q19 vendor revenue, or 41.1% of the total market. Systems running Nutanix hyperconverged software represented $616.4 million in fourth quarter vendor revenue or 27.0% of the total market. Both amounts represent the value of all HCI hardware, HCI software, and system infrastructure software sold, regardless of how it was branded at the hardware level.
Top 3 Companies, Worldwide Hyperconverged Systems Revenue Attributed to Owner of HCI Software, Q4 2019 (revenue in $M) | |||||
Company | 4Q19 Revenue | 4Q19 Market Share | 4Q18 Revenue | 4Q18 Market Share | 4Q19/4Q18 Revenue Growth |
1. VMware | $938.0 | 41.1% | $742.6 | 38.1% | 26.3% |
2. Nutanix | $616.4 | 27.0% | $576.5 | 29.6% | 6.9% |
3. Cisco | $138.0 | 6.0% | $75.0 | 3.8% | 84.1% |
Rest of Market | $589.8 | 25.8% | $553.6 | 28.4% | 6.5% |
Total | $2,282.2 | 100.0% | $1,947.6 | 100.0% | 17.2% |
Source: IDC Worldwide Quarterly Converged Systems Tracker, March 19, 2020 |
Table Notes:
Numbers in this press release may not sum due to rounding.
A recent International Data Corporation (IDC) survey of IT and business personnel responsible for quantum computing adoption found that improved AI capabilities, accelerated business intelligence, and increased productivity and efficiency were the top expectations of organizations currently investing in cloud-based quantum computing technologies.
Initial survey findings indicate that while cloud-based quantum computing is a young market, and allocated funds for quantum computing initiatives are limited (0-2% of IT budgets), end-users are optimistic that early investment will result in a competitive advantage. The manufacturing, financial services, and security industries are currently leading the way by experimenting with more potential use cases, developing advanced prototypes, and being further along in their implementation status.
Complex technology, skillset limitations, lack of available resources, and cost deter some organizations from investing in quantum computing technology. These factors, combined with broad interdisciplinary interest, have forced quantum computing vendors to develop quantum computing technology that addresses multiple end-user needs and skill levels. The result is greater availability of cloud-based quantum computing technology that is more easily accessible and user friendly for new end users. Currently, the preferred types of quantum computing technologies employed across industries include quantum algorithms, cloud-based quantum computing, quantum networks, and hybrid quantum computing.
"Quantum computing is the future industry and infrastructure disruptor for organizations looking to use large amounts of data, artificial intelligence, and machine learning to accelerate real-time business intelligence and innovate product development. Many organizations — from many industries — are already experimenting with its potential," said Heather West, senior research analyst, Infrastructure Systems, Platforms, and Technology at IDC. "IDC's quantum computing survey provides insight into the demand-side of cloud-based quantum computing, including preferred technologies and end-user investment and implementation strategies. These insights should guide the product and service offerings being developed by quantum computing vendors, independent software vendors, and industry partners."
DW talks to Ali Siddiqui, Chief Product Officer at BMC, about all things AIOps and the importance of three key areas - observe, engage, act – on the journey towards a fully integrated, proactive and predictive solution.
1. AIOps - what's all the fuss about? In other words, what is it and why does it matter?
In essence, AIOps combines AI, ML, and big data analysis to improve IT operations (IT Ops). It does this by intelligently and autonomously spotting issues - in some cases fixing them in real time. This greatly supports a business’ need for speed, agility, and increased efficiency, while also ensuring performance and improving customer experience.
Why do IT Ops teams need this? They face mounting and varied challenges: managing the huge increase in operational data volumes, which have scaled far beyond any human capacity to handle; the increasing complexity of IT environments; and contending with the speed and agility pressures posed by digital transformation itself. High-frequency app releases may come from Development, but the performance and management responsibilities fall on IT Ops.
In short, for IT Ops to stand any hope of succeeding in the future there needs to be an evolution toward intelligent autonomy, hence AIOps.
2. AIOps – does it replace existing technologies and approaches used to monitor and manage IT, or is it more of an add-on to what an organization is already doing?
Businesses’ digital infrastructures and requirements are becoming ever more complex. AIOps is a tool to help keep track of, manage, streamline, and automate these disparate and expanding workflows – modernizing, speeding up, and automating those existing processes with ML and analytics. AIOps can assist in many ways, spanning event noise reduction, predictive alerting, probable cause analysis, and capacity analytics. Yes, some legacy tools may become obsolete but it is certainly an additional – and vital – function rather than a replacement.
3. In other words, are we talking evolution or revolution?
It is the natural evolution and convergence of IT operations infrastructure that will cause a business revolution once it reaches maturity.
4. Bearing this in mind, how much is AIOps about the new breed of monitoring and management technology solutions and how much is it about an organization’s mindset and willingness to change?
It's really about both: a mindset and willingness to change in terms of modernizing those traditional monitoring and event management processes, and also adopting these new breeds of technology solutions to do that.
Some IT organizations have been scared off by thinking that they need to invest in data scientist skill sets or staffing up teams with people with data science degrees. This isn’t the case - all the intelligence should be in the solution, it should be built in. They just need the operational skill sets in order to manage it and strategically take advantage of the rich actionable insights AIOps can deliver.
Part of the responsibility here actually lies with the industry. We need to make it more tangible and the value realization clearer. At the moment, it often comes across as quite abstract and esoteric, and companies just don’t have the IT budget to invest where there is not a clear path to realizing tangible benefits.
5. Is it right to break down AIOps into separate network monitoring/management, infrastructure monitoring/management and application performance monitoring disciplines, or should AIOps be considered as one integrated monitoring and management solution?
At BMC we think an integrated solution is best. For something to be seen as a ‘true’ AIOps solution it needs to cover the three key areas of Observe (monitoring), Engage (linking ITSM and ITOM processes) and Act (for Automation). It needs to be able to detect, analyze, and act all in one solution rather than piecemeal. This holistic approach is better for AIOps as IT organizations are working across extremely complex, hybrid environments – so it’s not only more expensive to be piecemeal, but it can quickly become unmanageable too. Additionally, the value of the solution increases with the amount of cross-silo data that you can observe.
6. AIOps seems to cover a whole range of tools and solutions, ranging from the passive – this is what’s happened, and maybe why; right through to the predictive or proactive – this is about to happen and here’s what you need to do about it. What are the relative merits and drawbacks of the range of the available AIOps approaches?
Often companies will start with the passive (i.e. what can we learn), but to get the full value you need to become proactive and predictive. A good AIOps platform should support that. Yes, the historical data is certainly part of it – it’s essential to know what happened, what is the normal course of action, what is normal behavior, and what is abnormal behavior. But where you see the true value is when you move to becoming predictive and proactive. After the machine has been trained to monitor and predictively alert, you can proactively trigger automated remediation. This way you can address issues in your environment before any service impact, or before the end user even knows about it or experiences a decrease in availability or performance of their systems. That remediation part is how you close the loop in AIOps.
7. In other words, how would you characterize the relative value in working through historical data as opposed to working with streaming, live data?
You really do need both. In order to do ML you need the historical data for pattern learning. Once you’re able to identify patterns and understand system performance you can identify anomalies in current real time data and respond in a timely way.
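As a rough illustration of that principle (a minimal sketch, not BMC's implementation), the Python snippet below learns a simple baseline from historical samples of a single metric and then flags live readings that deviate strongly from it; real AIOps platforms build far richer, seasonal models across many metrics.

import statistics

# Minimal baseline-plus-anomaly sketch for one numeric metric
# (e.g. response time in ms). Purely illustrative.

def learn_baseline(history: list[float]) -> tuple[float, float]:
    """Learn mean and standard deviation from historical samples."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomaly(value: float, mean: float, std: float, threshold: float = 3.0) -> bool:
    """Flag a live reading more than `threshold` standard deviations from the mean."""
    return abs(value - mean) > threshold * std

history = [102, 98, 105, 99, 101, 97, 103, 100]   # historical samples
mean, std = learn_baseline(history)

for live in (104, 99, 240):                        # simulated live readings
    print(live, "anomaly" if is_anomaly(live, mean, std) else "normal")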
8. AIOps – primarily, it seems to be about the optimization of an organization’s likely hybrid IT operations through better monitoring and management, but can it also offer valuable business insights at a more strategic level?
Yes, as AIOps adoption grows and evolves in maturity, it does have the potential to offer strategic insights. A good example of this is in the capacity optimization area. When looking at things like capacity management optimization, the system is analyzing historical data then making projections and forecast models. By understanding capacity metrics and workload patterns, you can predict resource saturation points, perform what-if simulations, and recommend and perform optimization actions that lower overall IT infrastructure costs.
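To make that more concrete, here is a deliberately simplified sketch of capacity forecasting (an illustration only, not the actual product capability): it fits a linear trend to historical utilisation samples and estimates how many periods remain before a hypothetical saturation threshold is reached.

# Simplified capacity-forecasting sketch: fit a linear trend to historical
# utilisation and project when a hypothetical saturation threshold is hit.
# Real capacity analytics use richer models (seasonality, what-if simulation).

def linear_fit(samples: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(samples)
    x_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    return slope, y_mean - slope * x_mean

def periods_until(samples: list[float], threshold: float):
    """Estimate how many periods until utilisation crosses `threshold`."""
    slope, intercept = linear_fit(samples)
    if slope <= 0:
        return None          # no upward trend, so no projected saturation
    return (threshold - intercept) / slope - (len(samples) - 1)

weekly_cpu_utilisation = [52, 55, 57, 61, 63, 66, 70]        # percent, illustrative
print(periods_until(weekly_cpu_utilisation, threshold=90))   # ~7 weeks to 90%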
9. So far, we’ve talked about what AIOps is, and isn’t, and the value it offers to organisations which embrace this new approach to IT operations. Before we finish, let’s look at how an organisation goes about acquiring AIOps technology. For example, what are some of the key questions to ask an AIOps vendor?
There are a number of questions. First and foremost, “what parts of the AIOps value chain do you cover?” We’ve spoken a lot about piecemeal vs. holistic approaches so understanding which (if not all) elements - observe, engage, and act - the AIOps solution covers is vitally important.
The second step is investigating which use cases are supported. IT teams need real tangible value from an AIOps strategy – they’re not just going to invest in a science experiment. So understanding and prioritizing use cases such as event noise reduction, predictive alerting, root cause analysis, and even remediation is essential.
Also you need to understand how easily these new analytics and automation tools can integrate across existing IT Ops processes and cover the entire IT environment across on-prem, cloud, and even containers. And then lastly – how immediately actionable these new automation capabilities will be.
10. And are there integrated, single vendor AIOps solutions available today, or is it more about acquiring two or three key pieces of software which together form the basis of an AIOps implementation?
Yes, there are certainly vendors (BMC being one of them) that cover the whole value chain, as well as other solutions providers which may cover only part of it. But bearing in mind the compute complexity, various hybrid environments, and huge increases in data, we see it as far more strategic to go for a holistic approach.
11. Bearing in mind that we've established the value of AIOps, where does an organization start in terms of introducing AIOps into the business? With previous technologies such as virtualization and cloud, it was possible to start with a single application in a test environment, before going more mainstream. AIOps would appear to be a bit more 'all or nothing'?
It doesn’t have to be all or nothing. There are steps to get started, and it comes back to aligning it to use cases or identifying the areas of friction within existing IT Ops processes that need to be addressed – pre-determining what the success criteria are beforehand. For example, one of the use cases we help a lot of customers with is event noise reduction. Large enterprises can have thousands of events per hour, far beyond any human scale to manage, so here AIOps can be deployed to suppress the ‘normal’ events, flag the abnormalities, and quickly provide root cause analysis and remediation guidance.
Part of the initial process is simply establishing data sources and models, and making data available and centralized in a single solution so that it can be analyzed.
Organizations certainly need to take a planned approach, but each business will have different priority use cases.
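Taking the event noise reduction use case mentioned above as an example, a much-simplified sketch of the idea (illustrative only, not the BMC product) might collapse duplicate events and surface only those outside a known-normal suppression list.

from collections import Counter

# Illustrative event noise reduction: collapse duplicates and surface only
# events that are not on a known-normal suppression list.

NORMAL_EVENTS = {"heartbeat_ok", "backup_completed", "login_success"}

def reduce_noise(raw_events: list[str]) -> Counter:
    """Return abnormal events with occurrence counts, duplicates collapsed."""
    return Counter(e for e in raw_events if e not in NORMAL_EVENTS)

stream = ["heartbeat_ok"] * 500 + ["disk_latency_high"] * 3 + ["backup_completed"]
print(reduce_noise(stream))   # Counter({'disk_latency_high': 3})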
12. Are you able to share one or two examples of customers who are already benefitting from implementing AIOps?
One example is a global manufacturer for the medical industry. A key part of the AIOps value chain is acting with and engaging the service desk. This customer has a central command center that monitors over 40 critical applications across its IT infrastructure and handles events worldwide. Part of its function is ensuring the availability and performance of its global IT infrastructure as well as issue resolution. With the BMC solutions, this customer is able to monitor the entire IT environment, predictively alert before thresholds are breached, and proactively remediate more than one-third of the critical incidents. This saves the customer time by reducing the mean-time-to-identification (MTTI) and mean-time-to-repair (MTTR), and it saves the customer money by automating analysis and remediation tasks.
13. And how do you see AIOps developing over the next 12 to 18 months, both in terms of the products/solutions available and the adoption rate amongst end users?
We recently did a customer poll which found that 70% of those surveyed are currently in the “exploring options and use cases” phase. Over this period we'd therefore expect to see a natural migration toward “planning to” or “actively deploying” AIOps, as well as far more tangible value coming through from AIOps deployments.
We’re also going to see more of a push from the data itself. Data volumes will just keep on increasing and become less and less manageable, especially with IoT and 5G adoption. We’ll see AIOps demand increase simply for businesses to keep meeting performance demands and SLAs in the face of this tsunami of data.
We’d also expect a strengthening of the link between DevOps and IT Ops, using the rich insights from AIOps in the app development process to ensure that performance management of applications is at the forefront.
And finally, we know cloud adoption is booming; Gartner predicts that by 2025 80% of organizations will have completely shut down their data centers in favor of the cloud. AIOps will begin to take a central role in managing these cloud-based apps and services.
14. Finally, are there one or two key pieces of advice you’d like to offer individuals and organizations who are approaching AIOps for the first time?
At the core of a successful AIOps strategy and deployment lies the data. The solution needs to be able to learn patterns and apply those learnings to become more predictive and proactive. So you need to be able to ingest and consolidate diverse data sets, metrics, and logs into a single view for analysis and action.
It’s also important to be able to understand the service impact – creating a link between the events and the end services they are affecting, and getting better at analyzing the data to understand how performance is affected.
And finally prioritize your use cases – identify those areas of pain in existing processes and use specific AIOps capabilities to remove friction, increase agility, and improve service quality.
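On the first of those points – consolidating diverse data sets, metrics, and logs into a single view – the sketch below is a minimal illustration; the record shape and field names are assumptions made for the example, not a real product schema.

from dataclasses import dataclass
from datetime import datetime, timezone

# Minimal sketch of consolidating diverse telemetry (metrics and logs) into a
# single common record shape so it can be analysed in one view.
# Field names are illustrative assumptions, not a real product schema.

@dataclass
class UnifiedEvent:
    timestamp: datetime
    source: str
    kind: str        # "metric" or "log"
    name: str
    value: str

def from_metric(source: str, name: str, value: float) -> UnifiedEvent:
    return UnifiedEvent(datetime.now(timezone.utc), source, "metric", name, str(value))

def from_log(source: str, line: str) -> UnifiedEvent:
    return UnifiedEvent(datetime.now(timezone.utc), source, "log", "log_line", line)

unified = [
    from_metric("web-01", "cpu_pct", 87.5),
    from_log("web-01", "ERROR payment service timeout"),
]
print(unified)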
Prior to the outbreak and rapid spread of the COVID-19 pandemic, one of the most severe challenges in delivery of Healthcare Services was having sufficient skilled resources in terms of doctors, nurses and consultants to manage the perpetually increasing demand for healthcare.
By Dave O'Shaughnessy, Healthcare Practice Leader – Avaya EMEA & APAC.
The demand was mainly from an aging population that is living longer, but not necessarily healthier, lives in part due to growth in non-communicable chronic diseases such as obesity, diabetes, and heart disease. The COVID-19 pandemic has only exacerbated the strain on the availability and best usage of those precious healthcare resources. But could there be a more efficient, more effective method of leveraging these healthcare resources as we look towards a post COVID-19 world? What part does quality engagement and communication with the ‘customer’, or patient, play in provision of quality healthcare services?
In more commercial customer-facing businesses, a crucial way to maintain and help improve business outcomes is to enhance the customer experience as this leads to increased satisfaction, repeat sales, growing loyalty and a positive response from society. When it comes to consumer satisfaction, good communications using a variety of methods are key to ensuring the customer experience is joined-up, relevant and tailored to the individual.
If business can improve the customer experience by digitally transforming its communications systems, will the NHS, in steadier times, be able to do the same to help improve patient engagement and positive outcomes? It's a moot question, especially when you consider a report published by Marie Curie, which found that inadequacies in communication are damaging medical care and wasting NHS resources – it said that the total cost, in England alone, was likely to be far in excess of £1 billion a year.
Unique challenges in healthcare
The healthcare sector has some unique challenges when it comes to embracing digital transformation. As one of the very few vertical industries where making a mistake truly could be a matter of life or death, it is understandably risk-averse and cautious when it comes to change. Even outside the NHS, there is a cautious attitude to change when it comes to digital communications and health: although there has been a steady decline in the number of us classified as ‘non-internet users’, last year's GP Patient Survey 2019 found that most people – 77.8 percent – prefer to use the phone to book a doctor's appointment. Only one in ten, 11.6 percent, booked online or using an app.
Internally, NHS staff are very aware of what is possible when it comes to mobile communications, online chat and messaging, because they use these tools with each other and when interacting with banks and retail businesses in their everyday lives. Younger members of staff especially are likely to be frustrated by the lack of multiple and flexible ways to communicate. Within the NHS, there is a definite demand for transformative digital communications, as evidenced by teams turning to applications such as WhatsApp groups to keep each other informed. At the same time, there is an awareness that without the correct solutions in place to ensure reliability and security, there is a risk of GDPR non-compliance, data breaches and, ultimately, fines – but what's the alternative? Staff need to communicate in a smarter, more integrated fashion, but should they rely on consumer technology to do so?
The situation in most hospitals is that their existing communications technology is often both analogue and siloed from clinical IT services, preventing managers from easily offering staff integrated, fit-for-purpose alternatives to the type of smart communications tools they use with friends and family. Something needs to change, and a good place to start is evolving the traditional desk phone into a linked-up communications system that includes mobile integration, voice, video, chat and messaging. After all, checking a situation using an instant and secure photo or video is quicker, more convenient, and a lot cheaper than sending an ambulance, so long as these communication mediums are fully compliant with the appropriate data protection regulations.
Patient communications continuum
When a healthcare organisation chooses to undergo digital transformation, it opens up the opportunity to look at how processes and workflows can be made to work in a more holistic and therefore efficient way. This enables more timely and contextual one-to-one communications and multiparty collaborations: for example, a traditional person-to-person phone call doesn't provide the carer with any information from the start – everything must be asked for verbally and assumes the person calling can answer as necessary, a classic analogue communication. The digital transformation of communications provides the ability to move away from such opaque transactions, helping to accelerate the patient-carer workflow. If carers are freed up from the more prosaic communications tasks, or at least if these tasks flow faster, they can spend more of their time on caring rather than on time-consuming administrative overheads.
If secure, reliable, and integrated voice, video, chat, and messaging can be made available for staff and also extended to patients, it would be possible to implement a complete ‘patient care continuum’ strategy. This would mean that if a person has a fall in their home (for example), all their associated patient data and health intervention workflows begin at that point, and their patient history and contextual information could be communicated along with them on their patient journey from that very first call to emergency services, through to the ambulance, acute hospital, GP/community care and beyond – just like a customer experience journey in retail or other customer-facing sectors.
One area that might particularly benefit from a joined-up, flexible communications approach is mental health. When it comes to symptoms like depression, stress or feelings of shame and confusion, instant collaboration services can be especially important as sufferers don’t always want to talk via voice or video. Providing alternative ways to communicate, share feelings and extend support such as text or online chat can save lives.
NHS Glasgow and Clyde
A great example of how the digital transformation of communications is benefiting healthcare is happening now at NHS Greater Glasgow and Clyde (NHSGGC). The largest health board in the UK, it serves 1.2 million people, employs 38,000 staff, and is spread across 170 sites. It is also the largest Health Board in Europe based on the area covered. Planning for its new Queen Elizabeth University Hospital (QEUH), and the adjoining Royal Hospital for Children (RHC), allowed for an examination of new ways to improve the overall patient experience.
A key focus was how best to equip staff with business communications tools in a large-scale environment with some 6,000 phone extensions. Due to the size of the site, it is not uncommon for secretaries to be in one building, consultants in another, and wards in yet another. “Previously, to reach a consultant, you had to phone them or page them, then wait for them to find a phone. Missing calls was not an unlikely possibility but in our new model, consultants carry their extension in their pocket,” said Karen McSweeney, Telecommunications General Manager, NHS Greater Glasgow and Clyde.
NHSGGC consultants not only travel on campus, but also across the Board sites, across Scotland and the UK. By using ‘presence’ and instant messaging tools, secretaries, operators handling public calls, and others can see at a glance the availability of physicians. Providing mobility tools for the nursing staff has also been important. “Nurses are no longer tied to fixed stations and allied healthcare professionals are much more mobile on the campus,” says McSweeney. “They’re equipped with agile mobile technology enabling them to stay in contact with patients, colleagues and patient information systems throughout their working day.”
Where NHS Greater Glasgow and Clyde is leading, other NHS trusts may follow when they are able. Digital unified communications solutions provide patients with multiple ways to connect with their healthcare providers, and carers with each other; outdated manual processes can be replaced with automated systems to free-up teams so they can focus more on delivering care.
By connecting people, resources, data, and solutions, healthcare providers can optimise operations and reduce risk while increasing operational efficiency and keeping costs down. When it comes to the digital transformation, a flexible, integrated communications solution goes a long way.
As both society and business become more reliant on the everyday use of technology, public sector organisations have been set the challenge to integrate new technology into their operations and systems to benefit their users and clients. However, introducing digital technologies into large, legacy organisations whilst ensuring that cyber security is not compromised is a fine line to walk.
By Justin Day, CEO of Cloud Gateway.
This is particularly pertinent in the NHS, which operates on a huge scale, running 24-hour critical healthcare services and producing vast quantities of data daily. Incorporating new technology into such a large public institution is a complicated project reliant on multiple parties, budgets and strategies. As more technology and systems go live, the number of entry points for cyber attacks also increases. The challenge now facing the NHS is how to deploy an agile network that supports digital technologies whilst also prioritising cyber security across the healthcare system to mitigate potential risks.
The leap to cloud
Demand on healthcare services continues to grow year-on-year and the NHS is struggling to cope with the ever-increasing workloads. To relieve this pressure, the Government launched multiple policies to help healthcare organisations utilise new technologies such as AI and cloud and drive digital innovation throughout the institution.
One of the key policies is ‘Cloud First’, introduced in 2013 to formally prioritise the use of cloud technology in the public sector, aiming for cost savings and improved efficiency. Over the last several years there has been a marked increase in the understanding of the potential of cloud platforms to transform the public sector and the NHS. Cloud platforms will allow organisations to host applications and systems where they are best suited, instead of forcing them into the only available data centre. They will allow the NHS to operate an agile network, meaning it can be more flexible and scale up or down as needed.
However, the key to harnessing the benefits of the cloud is ensuring that the organisations pick the right cloud platform for the business need, not just the brand they like, or the cheapest solution. Only then will the NHS be able to utilise an agile network, in turn allowing itself to better deploy other new technologies.
The key role of an agile network
Making the leap to cloud isn't as simple as a few quick steps. Challenges include the historic use of the shared private network N3, which was designed before the advent of cloud, and questions around whether and how legacy applications will continue to function. The use of cloud applications will also complicate cyber security boundaries.
NHS Digital has since led the introduction of the Health and Social Care Network (HSCN) as the successor to N3 and as part of the wider NHS strategy ‘Paperless 2020’ that focuses on the use of technology infrastructure to improve services and outcomes for patients. As a unified private network for health and social care providers and third parties, the HSCN also provides public connectivity to the internet allowing users and entities on the data network to access and share information more efficiently and easily.
Historically, organisations including the NHS were tied to just one data centre with the risk being that if one area goes down, the whole network will as well. Cloud platforms will allow the NHS to diversify that data centre, hosting applications and data in more appropriate locations. However, with many entities still connected to N3, many systems are not quite internet-ready and able to make the jump to cloud. Instead, the HSCN is providing a suitable stepping stone whilst systems and services are prepared to be internet facing.
Mitigating cyber risks
At the same time, more technology means more entry points which means more risks to the cyber security of the organisation. Whilst it can be easy to focus on the improvements and results of new technologies, significant attention must also be paid to the risks associated with the increased use of digital tools. From data breaches to unauthorised system access, cyber criminals will be looking for weaknesses to exploit.
The WannaCry cyber attack in 2017, which affected the NHS and other organisations, showcased how damaging a cyber threat can be. The ransomware locked users out of IT systems, in turn disrupting critical healthcare services and resulting in a bill of nearly £100m. While diversifying the network will help prevent such an issue occurring again, investment in cyber security resilience must be prioritised and a strategy established for both proactive and reactive policies. This will allow cyber security professionals to address core issues such as unclear governance, vulnerable security architecture and poor practice, and to root out potential risks. As we saw with WannaCry, cyber security is not just about preventing data breaches; it also helps maintain the safety and operation of services affecting NHS patients.
Juggling technology priorities
Achieving a secure agile network is a huge challenge that can't be completed quickly or easily, especially in a legacy organisation such as the NHS. While introducing new technologies will bring important benefits to the healthcare system, they can only be adopted in a stage-by-stage process to ensure that corners aren't cut; prioritising cyber security is one stage and choosing the correct technology solutions for the organisation is another. The NHS operates at scale and, once a secure, agile network has been established, it will pave the way for the increasing use of digital tools to further improve healthcare services and patient outcomes.
Built into the mountains of Stavanger, Norway, in what was formerly a NATO ammunition storage facility, is one of the world’s most secure and energy-efficient data centres.
For site owner Green Mountain, the transition from ammunition storage facility to data centre was not without challenges. The company was faced with confined spaces, existing structures and the need to install a reliable piping system to cool the server racks. The site, however, was also not without benefits – not least the cold Norwegian climate to keep data cool at lower costs and the vast supply of hydroelectric power. It has therefore proved a sensible investment, according to Green Mountain CEO Kristian Gyland.
Together with Victaulic, a world-leader in mechanical pipe joining systems, and pipe installation contractor, Sig Halvorsen, Gyland overcame the many challenges that the site presented, constructing a truly unique data centre.
From storing weapons to storing data
The facility was constructed by NATO in 1964, at the height of the Cold War. The site initially spanned three halls and, in 1994, was extended to double its storage capacity to house mines and torpedoes.
Following the de-escalation of geopolitical tensions, NATO no longer required the facility and decided to sell it in 2009. What NATO no longer needed became a golden opportunity for Green Mountain’s first data centre.
“We opened this facility in 2013, with our first three customers. Since then we have continued to expand the site as we’ve grown. Today, we are covering 22,000 m² under the mountain and expect to reach full site capacity in 2023,” said Gyland.
Secure and sustainable in Stavanger
When they learned of the location, it was obvious to Green Mountain that the site offered several opportunities that were too good to pass up. Although the construction design would be challenging, the business potential was there.
In the data centre industry, storing data in a secure environment is of utmost importance. Green Mountain had a vision that if a mountain could keep NATO’s weapons secure, then it could also keep data secure; and they were right. According to Gyland, the Stavanger site is one of the most secure data centres in the world, a feature which can be largely attributed to the centre’s location within the mountain.
As well as being secure, the Stavanger site is located within close proximity to the fjord, home to one of Europe’s lowest-priced sources of hydroelectric power. The data centre could therefore operate at lower costs, as well as with a sustainable power source. Together with a vast supply of hydroelectric power, the site benefits from a cold Norwegian climate, providing a cost-effective means to cool data.
“We are located close to a fjord and the water we are collecting for our cooling system has a constant temperature of 8 degrees, meaning that we can use the outside to cool the data centre on the inside. So, in addition to the site being the most secure data centre in the world, it is also the most energy efficient data centre in the world,” stated Gyland.
Looking for versatility and durability
Designing and installing a piping system in an existing structure is never without its challenges and turning an existing structure into a data centre adds an additional level of complexity.
Faced with confined underfloor space, existing structures, and the need to have a system that is reliable and easily maintainable, Green Mountain needed an efficient pipe joining system, in addition to an installer that could do the job. Because Norway is heavily involved in the oil and gas business, welding is a common method of joining pipe in the region. However, the site needed more flexibility than welding could offer. It was important to Green Mountain that they had the ability to easily expand the facility as their business grew. Victaulic offered the ideal solution, but with one concern for Green Mountain; would it last?
“When we were presented with the mechanical pipe joining solution in 2012, it was only natural for us to use this technology for our piping systems. Using a system that is not welded provided us with the flexibility we needed and was a huge cost saver since it allowed us to build in phases. We didn’t have to make assumptions on where future data racks were going to be placed, and where the cooling system should run; we were simply able to build as we grew and add customers,” commented Gyland.
“The only concern we had when we installed a grooved system in our facility was whether the joints would last over time,” continued Gyland. “But having operated for close to 6 years now, we’re confident that this is a solution we will continue to work with, and we are happy with the way it’s working here at Green Mountain Data Centre.”
Beyond the flexibility and reliability of mechanical pipe joining solutions, Gyland was also pleased to partner with a manufacturer that aligned with their values on sustainability.
“As one of the most energy efficient data centres in the world, it is also evident that one of the key areas of prioritization for Green Mountain is sustainability. As the installation of the grooved system avoids toxic fumes and gases and is produced in a production facility which uses 90% recycled material, the solution fitted our company’s sustainability efforts seamlessly,” noted Gyland.
Overcoming installation challenges
Sig Halvorsen was the contractor tasked with installing the cooling system in the Green Mountain facility. Their employees are familiar with grooved pipe joining solutions, particularly within data centres. Frode Horpestad, Operations Manager at Sig Halvorsen, who worked on the Green Mountain installation, stated that the pipe joining system allowed his team to overcome some of the site’s installation challenges and even cited these solutions as an aid in winning additional projects.
“It is a simple system to learn and has simple check methods to ensure proper installation. Our employees also received close follow-up from the manufacturer’s representatives on the construction site if required – so it has worked very well. Using a flexible and robust technology has many advantages; for example, utilising this pipe joining system allows us to use the same crew on every construction site, and we do not require certified welders for joining the pipes.”
Considering the internet wasn’t invented until several years after the Stavanger NATO facility was built, it is safe to assume the engineers of the time never foresaw a day when the location would become a world-renowned data centre. But as it turned out, the safety and security of a mountain, with a nearly unlimited hydro-power supply and cool climate, made for the perfect location.
We all know that protecting the environment is important. Our growing awareness of the damage that humans are collectively doing to the planet will only be beneficial to the cause. It is heartening to see consumers and businesses alike taking on small changes in their daily behaviour in an attempt to minimise the harmful effects on the environment.
By Carmen Ene, CEO of 3stepIT.
However, many businesses today view sustainability simply as doing their bit to help the environment: recycling in the office, using as little paper as possible, and turning off lights when they are not needed. Sustainability in business should be about much more than opening windows rather than using the air con.
A more complete approach to sustainability considers the longevity of a company as well as its impact on the planet. A sustainable outlook should include employment policies and financial performance as well as the environment. This is also known as the triple bottom line: people, planet, profit.
Getting With the Times
Historically, businesses have tended to look only at their financial results as a barometer of progress. This is completely understandable; the key to every successful business is the ability to turn a profit. However, today, as awareness continues to grow – not only around the environment, but around mental health as well – more and more companies are beginning to sit up and take notice of the vast benefits they can bring about by paying closer attention to their employees and their impact on the planet. Not just as an added bonus, but within their business model.
A lot more needs to be taken into account now. In years past, companies would do business with each other based purely on economic benefits and the ability to follow through on promises. However, as industries grow, so too does competition for business, and organisations need to find different ways to set themselves apart from competitors. Company reputation, employee wellbeing, and environmental impact are all huge factors that are now taken into account.
The Bigger Picture
There is absolutely nothing wrong with making small changes to benefit areas that aren’t necessarily business driven - such as the planet or your staff. Recycling is great, so too is donating to charity, and allowing employees to take the day off on their birthday is a nice gesture that they will appreciate once a year. These are all beneficial to the planet, your staff and even in some cases, the local community - but only marginally.
Considering the Triple Bottom Line means that decisions made for the wider business as a whole also factor in the people involved with the business - whether that’s staff, the local community, or those that might otherwise be affected - and the environment, as well as finances.
In the long run, this means that as well as growing profits, companies are also benefiting people and the planet. An organisation’s reputation can improve drastically. It could introduce environmental initiatives which mean that it will only partner with companies that boast certain sustainability credentials, or it might implement a circular economy model that lessens demand for brand new IT devices, saving huge amounts of CO2 in the process. Or perhaps the company introduces flexible working for its employees, allowing them to improve their work/life balance, and enjoy their jobs a little bit more than they already do. In this day and age, where so much focus is placed on helping the environment and mental health, these changes can be the difference in winning a new business pitch, or selling to customers.
Genuine Business Sense
Because these decisions are being taken at C-level or by board members, they’re likely to be having a much stronger impact than any recycling or birthday initiative, simply because it starts to affect the entire company - not just a couple of offices.
Sustainability is about more than just the planet. It is about the welfare of the people affected by a business, and about being able to remain profitable while still helping the environment. If the triple bottom line model is followed correctly, the long-term future of a business remains in good hands, society benefits, and the business meets the needs of the present without compromising the ability of future generations to do the same. True sustainability gives businesses no reason to fail, and the triple bottom line facilitates that.
The current Covid-19 pandemic has served to focus attention on the health of the healthcare sector, as never before. Information technology has a vital role to play. Here we cover a range of viewpoints and news, much of it relating to the ongoing battle with the coronavirus. Part 1.
When 2020, Florence Nightingale’s bicentennial year, was designated the first ever Year of the Nurse and the Midwife by the World Health Organisation, no one could possibly have imagined the extraordinary demands facing nursing staff this year. Celebrating nurses is hugely important given the well documented levels of burn out and stress reported globally, but recognition is just the start. What can be done to improve the day to day nursing experience?
The Institute for Healthcare Improvement (IHI) believes a big part of the solution is to focus on restoring joy to the healthcare workforce. But this is not just about big strategic changes; as the success of the 15s30m social movement (@15s30m on Twitter) is showing, an individual making one small change can transform the day to day experience of one or more colleagues and patients, reducing frustration and improving joy. As Dan Wadsworth, Transformation Manager at TeleTracking International explains, small changes can have very significant results, not least in enabling far more time to care.
Staff Burn Out
Florence Nightingale’s heroic efforts on the battlefields of the Crimean War seem particularly poignant during the Covid-19 pandemic. Yet while medical advances over the past two centuries have transformed our ability to care, in recent years, time to care has been compromised by hugely frustrating administrative distractions. In celebrating the Year of the Nurse, it is time to refocus energies on reducing unnecessary, extraneous and often stressful activities and enable nurses to concentrate on delivering patient care.
This shift is essential if health services around the world are to address the damage caused by staff burn out. Long before the Covid-19 pandemic placed unimaginable pressure on nurses throughout the world, 50% more staff in the NHS were already reporting debilitating levels of work stress compared with the general working population, stress that affects both their well-being and patient outcomes.
According to the King’s Fund, the primary cause of that stress is excessive chronic workload; it is also the number one reason clinicians give for saying they will quit. So what can be done to address this situation and, from a nursing perspective, release essential hours back to delivering patient care?
Small Actions, Big Impact
The IHI’s focus on joy in work has inspired a number of NHS-wide and Trust-wide initiatives; but small, local actions can have hugely significant results. One of the most important aspects of reducing frustration and achieving joy in work is enabling people to take control and make small, positive, immediate improvements that affect both their working lives and those of their colleagues.
Anyone on the front line of NHS services – from nurses to junior doctors, porters, healthcare assistants, cleaners and receptionists – is well placed to make such changes. They know the job inside out and understand day to day frustrations. They are in the position to step outside regimented processes and enable an immediate, if small, change that feels empowering and delivers real benefits to others.
This latter point is key, as research into clinical burnout suggests, firstly, that it is important to focus on just one or two things in a day that can be impactful for other people and, secondly, that connection, belonging and support matter. Undertaking local activities that support colleagues and help to create a team is proven to lower stress levels.
Embracing TARDIS
This is very much the theory espoused by the 15 seconds, 30 minutes (15s,30m) social movement which uses the TARDIS model: something that can be done Today, takes only A little extra time, Reduces frustration, Doesn’t require permission, Increases joy and is easy to Share.
For example, one paramedic now asks every patient, where possible, whether they have their glasses with them before being taken to hospital. It takes seconds and sounds simple. But the impact for both patients and their clinicians will be significant throughout the stay in hospital – patients can keep themselves occupied reading books and take themselves to the bathroom rather than requiring assistance.
A similar small change within ED also saves time and reduces a patient’s wait for a bed. ED nurses requesting a porter to collect a patient have taken a couple of extra seconds to note not just the ward but also the assigned bed. As a result, rather than leaving the patient at a ward, where a nurse then has to take the time to check which bed has been assigned and move the patient, the porter can take the patient directly to the correct bed – better for both patients and nurses.
Reducing Idle Bed Time
Bed management is a prime example of extraordinary change that can be achieved with small actions. For the vast majority of nurses one of the biggest causes of interruption and frustration throughout the day is fielding constant phone calls and visits from bed management teams. With a live bed state across the entire health system and an automated process for updating patient discharges it can take a nurse just 15 seconds to mark five patients as potential or confirmed discharges.
The impact is huge – from saving time within the control centre when pre-assigning beds to waiting patients, to ensuring every patient is placed in the right bed, first time, every time. And with no phone calls or visits to disrupt essential caring activity, nurses also enjoy the benefits of reduced frustration and more time to care.
When patients are admitted and badged using a real-time locating system, a Trust can track all patients from admission to discharge. Taking 15 seconds to ensure all patient badges are placed in a drop box on discharge can have a very marked impact, especially if a Trust has a dedicated bed cleaning team in place. As soon as a badge is placed in the drop box, an alert is triggered that automatically marks the bed as ‘dirty’ in the system and a notification is sent to the bed cleaning team. Once cleaning is completed, the cleaners can use an app to confirm the bed’s availability for the next patient. The simple process of dropping the badge in the drop box has saved huge amounts of time, calls and chasing for other people and, by cutting idle bed time, has reduced patient delays.
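This badge-to-bed-status flow is, at heart, a small event-driven state machine: badge dropped, bed marked dirty, cleaning team notified, bed confirmed clean. The Python sketch below illustrates the idea only; the class and function names are hypothetical and do not represent TeleTracking’s actual software.

# Hypothetical sketch of the badge drop-box workflow described above.
# Names (BedTracker, notify_cleaning_team) are illustrative, not a vendor API.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Bed:
    ward: str
    bed_id: str
    status: str = "occupied"          # occupied -> dirty -> available
    last_updated: datetime = field(default_factory=datetime.utcnow)

class BedTracker:
    def __init__(self):
        self.beds = {}                # (ward, bed_id) -> Bed

    def register_bed(self, ward, bed_id):
        self.beds[(ward, bed_id)] = Bed(ward, bed_id)

    def badge_dropped(self, ward, bed_id):
        """Triggered when a patient badge lands in the discharge drop box."""
        bed = self.beds[(ward, bed_id)]
        bed.status = "dirty"
        bed.last_updated = datetime.utcnow()
        self.notify_cleaning_team(bed)

    def cleaning_confirmed(self, ward, bed_id):
        """Called from the cleaning team's app once the bed is ready."""
        bed = self.beds[(ward, bed_id)]
        bed.status = "available"
        bed.last_updated = datetime.utcnow()

    def notify_cleaning_team(self, bed):
        # In a real deployment this would push to a messaging queue or pager app.
        print(f"Cleaning requested: ward {bed.ward}, bed {bed.bed_id}")

tracker = BedTracker()
tracker.register_bed("Ward 7", "B3")
tracker.badge_dropped("Ward 7", "B3")      # alert fires, bed turns 'dirty'
tracker.cleaning_confirmed("Ward 7", "B3")  # bed becomes available again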
Conclusion
It may seem strange to talk about joy in work at a time of huge Covid-19-related stress, but the fact is that small acts can have an enormous impact on the day to day experience of colleagues throughout the system. Indeed, as the NHS has flexed to adapt to these unprecedented demands, amazing changes are being achieved, changes that will have far reaching, positive implications for the way clinicians work in the future. From video consultations as standard to the transformation of outpatient services, reform and process change is being fast-tracked.
This is an ideal time for change – it is a time to eradicate frustrating and unnecessary administrative burden and it is also time to build better collaboration between teams, collaboration that improves joy in work and, critically, increases time to care. That, surely, should be the outcome of the Year of the Nurse.
Powered by storage technology supplied by Microway, the Northeast Storage Exchange is changing the way Boston-area universities approach research data storage.
Born out of a groundbreaking regional high-performance computing project, the Northeast Storage Exchange (NESE) aims to break further ground—to create a long-term, growing, self-sustaining data storage facility serving both regional researchers and national and international-scale science and engineering projects.
To achieve these goals, a diverse team has built a regional technological achievement: New England’s largest data lake.
The story of creating this data lake is a lesson in cross-organizational collaboration, the growth of oceans of research data, changes in storage technology, and even vendor management.
Finding the right technology – hardware, firmware, and software – for such a large-scale project meeting a diverse range of data storage needs is challenging. Now that the project has launched, though, both the NESE team and industry partners like Microway are confident in the project’s capacity to meet growing research computing data storage demands in a way that facilitates end-user buy-in and unprecedented collaboration.
The Beginnings of MGHPCC and NESE
The Massachusetts Green High Performance Computing Center, or MGHPCC for short, is among the most innovative large-scale computing projects in the country. This project brings together the major research computing deployments from five Boston-area universities into a single, massive datacenter in Holyoke, Massachusetts.
The 15 megawatt, 780-rack datacenter is built to be an energy- and space-efficient hub of research computing, with a single computing floor shared by thousands of researchers from Boston University, Harvard University, Massachusetts Institute of Technology, Northeastern University, and the entire University of Massachusetts system. Because the datacenter runs on hydroelectric and nuclear power, its carbon footprint is virtually zero. By joining together at the Holyoke site, all of the member institutions gain the benefits of lower space and energy costs, as well as the significant intangible benefits of simplified collaboration across research teams and institutions.
As of 2018, the facility was more than two thirds full, at 330,000 computing cores total. The facility currently holds the main research computing facilities for the five founding universities, as well as those of teams of national and international collaborative data science researchers.
It follows, naturally, that an innovative research computing project like MGHPCC would require an equally innovative corresponding data storage solution. Enter NESE, the Northeast Storage Exchange project, supported by the National Science Foundation. The institutions involved are Boston University, MGHPCC, Massachusetts Institute of Technology, Northeastern University, and the entire University of Massachusetts system. NESE has a dedicated Storage Engineer within the 25-person Harvard Faculty of Arts and Sciences Research Computing team, and Scott Yockel and his colleagues there lead development, deployment, and operations of NESE for the whole collaboration. NESE is already New England’s largest data lake, with over 20 PB of storage capacity and rapid growth both planned and projected.
NESE doesn’t rely on traditional storage design. Its architects have instead chosen Ceph: an innovative object storage platform that runs on non-proprietary hardware.
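Because Ceph exposes an S3-compatible object gateway, researchers can read and write data with familiar, standard tooling. The sketch below is a minimal, hypothetical example using the boto3 library; the endpoint, bucket name and credentials are placeholders and do not reflect NESE’s actual configuration.

# Minimal sketch: talking to a Ceph cluster through its S3-compatible
# RADOS Gateway using boto3. Endpoint, bucket and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ceph-gateway.example.org",   # hypothetical RGW endpoint
    aws_access_key_id="RESEARCHER_KEY",
    aws_secret_access_key="RESEARCHER_SECRET",
)

# Create a bucket for a lab, upload a dataset, then read it back.
s3.create_bucket(Bucket="genomics-lab-raw")
s3.upload_file("sample_run_001.fastq.gz", "genomics-lab-raw",
               "runs/sample_run_001.fastq.gz")

response = s3.get_object(Bucket="genomics-lab-raw",
                         Key="runs/sample_run_001.fastq.gz")
print(response["ContentLength"], "bytes retrieved")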
The project design has attracted notice: NESE was launched with funding from the National Science Foundation’s Data Infrastructure Building Blocks (DIBBs) program, which aims to foster data-centric infrastructures which accelerate interdisciplinary and collaborative research for science, engineering, education and economic development.
In addition, NESE has attracted major industry partners who help the team to achieve the goals of both individual projects and the NSF as a whole. Microway, which designs and builds customized, fully-integrated turn-key computational clusters, servers, and workstations for demanding users in HPC and AI, has supplied NESE’s hardware and will continue to partner with NESE as it grows. Additionally, Red Hat, the company behind Ceph, has been working with the NESE team from design and testing through to implementation.
Building NESE
Of course, such a large storage infrastructure has challenges that needed to be met when designing and building out the solution. Building such an immense data lake requires knowledgeable project management and partners committed to delivering a solution tailored to research computing users.
First, the research done at MGHPCC and each of its member institutions is highly diverse in terms of its storage demands. From types and volume of data to retrievability and front-end needs, the NESE team has needed to account for many different users in building out the new storage infrastructure. What’s more, the system needed to be easily scalable; while the initial storage capacity is large, the NESE team expects it to grow rapidly over the next several years. Finally, with such a huge volume of data storage and a large number of users, the system needed to be relatively failproof, so that outages do not affect huge swaths of data.
With these challenges in hand, the NESE team, including Saul Youssef of Boston University and Scott Yockel, reached out to Microway for help in designing the ideal solution. Yockel and others at Harvard had previously worked with Microway on dense GPU computing solutions. Based on this trust, they gave Eliot Eshelman, Vice President, Strategic Accounts and HPC Initiatives, and the rest of the Microway team the task of helping them design and deploy the right data storage solution for the NESE project’s unique challenges. The team went through multiple rounds of consultation and possible iterations before selecting the final system design.
Originally, Yockel explained, the NESE team was interested in both dense and deep hardware systems, with 80-90 drives per node. After learning from the extended Ceph community that this kind of configuration could lead to backlogs, failures, and ultimately system outages, they instead selected single-socket, 1U, 12-drive systems. He noted that the smaller, though still dense, systems are far more resilient to complete filesystem failures than the initial design, and can still support the flexibility of storage that NESE needs.
The Microway team then made this type of hardware available for testing. Youssef and Yockel were able to validate both performance and reliability of the hardware platform. Only then did they commit their ambitious project’s reputation to this validated architecture.
“Microway understands our particular approach and needs. They provided us quotes that we could use throughout the consortium to gain significant buy-in, and worked with us to iterate design based on Ceph best practices and this project’s specific demands,” Youssef said. “Our relationship with them has been straightforward in terms of purchasing, but the systems we’ve created are really at the edge of what’s possible in data storage.”
The initial NESE deployment has five racks, each with space for 36 nodes; as of September 2019, it includes roughly 100 nodes in total. All nodes are dual 10GbE connected to MGHPCC’s data network, and contain high-density storage in a mix of traditional and high-speed SSDs.
The net result is over 20 PB of overall capacity, which can seamlessly expand even as much as 10X as required in the future.
The overall solution also provides the diversity of storage that NESE needs, enabling a mix of high-performance, active, and archival storage across users. This has allowed for cost optimization, while the use of Ceph has ensured that all of that data is easily retrievable, regardless of a user’s storage use type.
Impact of an Innovative Data Storage Solution
With the implementation of NESE within MGHPCC, Massachusetts data science researchers now have a data storage resource that is large, with the ability to grow over time, and no need to migrate data across physical data storage over time. The project’s use of a distributed Ceph architecture will enable the NESE team to add new resources or decommission old ones while the system is active.
Data storage management by a single team within the consortium lowers administration labor effort and costs, adds greater flexibility for backups, and makes it easy to double storage for a lab or project.
The NESE team has elected to begin (relatively) “small,” with the 20 PB of storage currently used by a small portion of the consortium’s labs and researchers. Even so, the project has significant buy-in from throughout the MGHPCC consortium. “It’s not unreasonable to expect our storage capacity to grow five-fold in the next few years,” Youssef said.
Harvard’s overall data storage needs alone have grown by 10 PB per year for each of the last four years; other member institutions have seen similarly skyrocketing data storage needs. That’s because research is creating vast amounts of data, and the growth isn’t linear. New generations of instrumentation in the life sciences mean increases in data production of 5-10x every few years; even the social sciences and humanities, areas that once needed little by way of data storage, have begun to generate data through new research methodologies and other projects like library digitization.
Future Pathways
Though Youssef and Yockel aren’t sure exactly how large NESE will become, they’re certain it will – and has significant capacity to – grow. The current racks were provisioned for more nodes than they currently house, with about a third of the space free for buy-in. While the capacity has served Harvard research teams most to date, it will be allocated among all of the member universities as shared project space in the future. The initial NESE storage is mainly used as Globus endpoints across the collaboration, storage for laboratories across the Harvard campus, and storage for the Large Hadron Collider project at CERN. With buy-in across the consortium, usage is expected to grow steadily.
As adoption grows, it opens the door to significant collaboration across institutions that has previously been too unwieldy to attempt. Shared data storage makes sharing data sets across research teams and universities far easier: there are no more challenges of data locality. A researcher at Harvard can simply point a researcher at BU to a dataset already sitting on the same storage.
The effects of such data-locality could be transformative: they could open a pathway to more innovative, collaborative research that spans some of the nation’s top universities.
Universities are not the only avenue for further collaboration made possible by NESE. Already, Red Hat has conducted large-scale Ceph testing using the NESE system that was impossible with its in-house systems. The performance testing has driven changes to Ceph and been contributed back to the Ceph community. Youssef and Yockel noted that the NESE team is open to finding other such opportunities for collaboration with industry as the project expands.
For now, what’s certain is this: NESE will remain at the heart of the MGHPCC’s innovative research computing space. The team will continue to collaborate with Microway on the project’s expansion, as well.
The current Covid-19 pandemic has served to focus attention on the health of the healthcare sector, as never before. Information technology has a vital role to play. Here we cover a range of viewpoints and news, much of it relating to the ongoing battle with the coronavirus. Part 2.
The free, analytics-ready data set collates epidemiological data from COVID-19 cases as reported by public health authorities worldwide.
Starschema, a data services and technology company, has listed a free-of-charge, public data set that serves as a single source of truth on the incidence and mortality of COVID-19 cases on Snowflake’s free-to-join marketplace, the Snowflake Data Exchange. The data can help organizations assess contingency plans and make informed, data-driven decisions in real time as they respond to the global health emergency.
Snowflake’s cloud data platform enabled Starschema to amass the epidemiological data from multiple sources into a cohesive single source of truth, while also allowing for the enrichment of that data with relevant information such as population densities and geolocation.
The new Starschema data set also eliminates the need for, and the challenges associated with, cleaning and preparing the data. Public and private sector data consumers will have access to the data in an easy-to-use, analytics-ready format, so they can quickly build new models and applications.
The Snowflake Data Exchange, a secure, fully-governed platform for sharing and exchanging data, allows Starschema to easily and seamlessly share data on COVID-19 in near real-time with organizations in the public and private sectors. Organizations can connect to the Data Exchange from within their Snowflake account for seamless integration of the COVID-19 incidence data set and fast query processing.
With Starschema’s COVID-19 incidence data set, public health authorities have data they can reference in phylogenetic studies to identify whether particular strains of SARS-CoV-2, the virus that causes the COVID-19 disease, carry a higher risk. Governments will also be able to make informed, data-driven decisions for civil contingency planning based on data from neighboring states. In the private sector, enterprises can use the data to support business contingency operations and analyze supply chains for possible vulnerabilities.
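To give a flavour of how a data consumer might work with such a shared data set, the sketch below uses the snowflake-connector-python package to run a simple aggregation. The database, table and column names are assumptions for illustration only; the actual schema of the Starschema data set may differ.

# Illustrative sketch: querying a shared COVID-19 data set in Snowflake.
# Account, warehouse, database, table and column names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # your Snowflake account identifier
    user="analyst",
    password="********",
    warehouse="ANALYTICS_WH",
    database="COVID19_SHARED",   # hypothetical database mounted from the exchange
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT country_region, report_date, SUM(confirmed) AS confirmed
        FROM covid19_cases              -- hypothetical table name
        WHERE report_date >= '2020-03-01'
        GROUP BY country_region, report_date
        ORDER BY report_date
        """
    )
    for country, report_date, confirmed in cur.fetchmany(10):
        print(country, report_date, confirmed)
finally:
    conn.close()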
“Everyone is dealing with the effects of COVID-19 in one way or another. Our goal is to deliver the highest quality data sound enough to stake lives on, with the utmost transparency,” Starschema CTO, Tamas Foldi said. “Snowflake is helping us deliver on that goal so public health professionals, contingency planners, and enterprises can best respond to this global epidemic.”
“As the COVID-19 pandemic progresses, we can expect data to play an increasingly important role in both public and private operations,” Snowflake’s Head of Data Exchange, Matt Glickman said. “It’s essential organizations have access to accurate, near real-time data in this rapidly evolving environment and we’re humbled that the platform architecture is uniquely positioned to help democratize access to Starschema’s data in this time of need.”
Starschema plans to further enrich their COVID-19 incidence data set on the Snowflake Data Exchange with data like local emergency measures, demographic information for affected geographies, and additional reporting levels from regions, states, and country resources.
A million people fed with food destined for the bin
Start-up FruPro has released a not-for-profit platform that connects retailers and growers. In one weekend, it saved enough food from the bin to feed one million people.
COVID-19 has highlighted a dichotomy in our food system. Whilst supermarket shelves are empty, thousands of tonnes of fresh fruit and vegetables are spoiling in fields and warehouses across the country. The current situation has exaggerated an already pressing issue: up to 40% of fresh produce grown in the UK is wasted.
Hounslow-based start-up FruPro wants this to stop. The company is developing a communications and trading platform that connects the fruit and veg industry. Its goal is to reduce waste by creating a centralised platform that puts those with produce in touch with those who need it. Eventually, FruPro will take a commission from these trades; however, in response to the disruption caused by COVID-19 it has launched a not-for-profit version.
“The key thing for us is that no food ends up in the bin,” says William Hill, CEO of FruPro. “Despite empty supermarket shelves there is a huge amount of fresh produce available, because many wholesalers sell directly into the hospitality sector – which has closed down. The issue now is getting that food to retailers.”
Reynolds, a leading supplier for the foodservice industry, is one of the platform’s earliest participants. FruPro put Reynolds Catering Supplies into contact with WT Hill, a food marketing company, who have helped divert 180 tonnes of fresh produce to independent retailers and wholesalers across the country. They estimate this could feed a million people for a day. Without FruPro, much of this produce could have been wasted.
“FruPro have been a great help to Reynolds in ensuring that we have been able to divert produce, which might have otherwise gone to waste, to people who really need it,” says Matthew Jones, a Senior Buyer at Reynolds. “Now more than ever it’s important that our industry works together to prevent food shortages and minimise any wastage.”
Currently, FruPro is focused on diverting produce to independent retailers. This includes food markets, corner shops and greengrocers, as well as the growing fresh produce delivery sector. It is also developing a mechanism for transferring stock to food banks.
“Supermarkets do a great job, but we also need to support our independent retailers and charitable organisations,” continues William. “These businesses don’t have the levels of bureaucracy and regulation that you see in supermarkets. This means that we can get stock to them quickly – and then on to the general public. Greengrocers, butchers and fishmongers have been supporting our communities for centuries – and I think the next few weeks are going to remind us how valuable they are.”
FruPro is still some months away from releasing the full version of its platform, which it says must be international to reflect the complexity of supply chains. It is also in talks with Agrimetrics, an artificial intelligence and data platform with the backing of the UK Government and Microsoft, regarding how to scale the solution and integrate valuable information, such as crop disease models and yield predictions.
“In a couple of years, we want everyone to be able to buy fresh fruit that would otherwise have ended up in the bin, and be able to track that piece of fruit back to the farm that produced it: complete traceability,” concludes William. “Over the next few months, we want to make sure that no one goes hungry whilst good food is being wasted.”
The NHS has a huge data gathering and information need in order to better understand and manage the Covid-19 pandemic.
A recent blog, authored by Matthew Gould, Dr Indra Joshi and Ming Tang, explains that the government has commissioned NHS England and Improvement and NHSX to develop a data platform that will provide secure, reliable and timely data to those national organisations that need it. In effect, one single data store is to be created, bringing multiple data sources into a single, secure location.
Once processed, this data will provide live metrics which will help track the spread of the virus and the ability of the healthcare system to deal with it.
Every assurance is being given that the gathered data will be either destroyed or returned according to the strictest legal and contractual agreements in place between the NHS and partners.
Microsoft is supporting NHSX and NHS England’s own technical teams, with the backend data store being built on Azure. Palantir Technologies UK is providing the software – Palantir Foundry – that powers the data platform. Amazon Web Services is providing infrastructure and technologies that are enabling NHSX and its partners to quickly and securely launch the new Covid-19 response platform. Faculty, an AI technology specialist, is developing dashboards, models and simulations as part of the data response strategy. Google’s G Suite tools will be used by the NHS to collect real-time information on hospital responses to Covid-19.
The blog states: “We have chosen these organisations in particular because of their knowledge in data and the skills they have for working in complex environments and delivering at pace in this time of crisis.”
The Royal College of Emergency Medicine has partnered with 87%’s team of psychologists to build a fully customised mental wellbeing app specific to the needs of emergency physicians.
RCEM approached 87% to offer this custom platform for their 9,400 members in an effort to help them manage their mental wellbeing throughout these especially difficult times, supporting a significant portion of Britain’s front-line emergency medicine community. All Fellows and Members will receive free access to the 87% app for the rest of the year, allowing them to monitor and track their wellbeing.
87% shareholders and the Aviva Foundation have agreed to cover all costs associated with distributing the platform to emergency workers, supporting them through these toughest of times. Members and staff will gain full access to the RCEM/87% app containing custom content, daily interactions, weekly mental fitness reports and a podcast series.
The rapid spread of Covid-19 worldwide is creating unprecedented levels of uncertainty and change in both business and personal lives. We can see this is already having materially adverse effects on mental health. Members of RCEM are literally on the front line putting their lives and mental wellbeing at risk every day for the benefit of society.
It is critical that during this crisis, mental wellbeing is prioritised alongside physical wellbeing. The 87% vision is simply to improve employee mental wellbeing resilience and indirectly benefit that of society in general.
In addition to emergency services, 87% is playing its part by offering tangible support to small enterprises, namely those businesses and their employees who are particularly feeling the impact of the current pandemic and often have constrained resources. More than 70 small businesses have already been granted free access to the mental wellbeing platform, and the opportunity to register is still open*.
The 87% business comprises a team of experts in psychology, technology and business. Their focus is on improving mental wellbeing in the workforce whilst maintaining the absolute privacy and trust of the individual. The benefits from using the platform accrue not only directly to end users but also to each organisation through improved engagement and productivity.
“It’s crucial that our colleagues have access to good mental wellbeing support during this time. We are working hard to get some great resources on resilience and mental health into the hands of our Members.” Dr Katherine Henderson, President, The Royal College of Emergency Medicine.
“We’re pleased to be able to offer this to our members. Their mental health and wellbeing is a priority for the College, and is especially important in these challenging times. We are confident that this app will help support them.” Gordon Miles, CEO, Royal College of Emergency Medicine
“We believe that business has a pivotal role to play at this time. This sudden change in working conditions has highlighted the need for employees’ mental and physical wellness to be top of the agenda. Resilient mental health can help individuals and therefore their families and society cope better with the uncertainty and worry that unforeseen crises like the Covid-19 bring.” Andy Bibby, CEO, 87%
“We are proud to be supporting RCEM at this very difficult time. The front-line members of the global community are selflessly and tirelessly putting their own wellbeing and indeed health at risk for the benefit of society. It is our duty to support the College and its members in any way possible” Richard Glynn, Chairman, 87%
The build versus buy conundrum is pervasive across business transformation projects.
By Gary Richardson, MD for Emerging Technology at 6point6.
Choosing where to focus your in-house data scientists and engineers and when to look for external support from third-party vendors or consultants is a crucial decision, and one which your market edge could depend upon.
An added layer of complexity in this debate is the increasing demand for effective DataOps processes in an age of rapid transformation. Company-generated data is growing exponentially thanks to the growth of digital-first policies and connected devices, meaning businesses are finding it increasingly vital to have successful DataOps processes in place to manage such data and ensure value is derived from it.
Putting effective DataOps in place, however, is a complex process. Below, we consider the build versus buy debate as it applies to DataOps, taking into account the crucial considerations on each side of the argument so you can be fully informed about how successful DataOps processes can be implemented without any detrimental impact on profit margins, paving a route to innovation.
Considering DataOps
DataOps has come to the fore in an era of innovation as a workflow for data management, analytics and artificial intelligence. It sits comfortably alongside the ultimate goal of DevOps: enabling businesses to deliver new features to end users, meet increasing consumer demands and maintain a fast time to market in the face of uncertainty. DataOps means uniting all the data points across a business so as to better make sense of that organisation’s world. It demands big decisions from the top down to implement restructuring, dismantling traditional and established areas of data analytics. This is crucial to understanding how the organisation can operate more efficiently.
Data gives you the ability to compete with increasing consumer demand while maintaining a fast time to market in the face of uncertainty; DataOps streamlines the data sources, data pipelines and analytics workflows behind it. In this way, DataOps incorporates elements of machine learning, enabling businesses to integrate their data with strategic decision-making.
DataOps includes many solutions, from data warehousing automation and data science platforms, to orchestration toolsets and data engineering platforms. Certain DataOps solutions are implemented by in-house teams, whereas others can be brought in by third-party vendors. Another route is to purchase a solution and orchestrate the flow of code and data by utilising the analytical tools and data engineers already present in a company, together with the advice of an external expert.
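Whatever route is chosen, the heart of DataOps is treating data pipelines the way DevOps treats code: automated, tested and repeatable. The vendor-neutral sketch below illustrates one such pipeline step in Python, with data quality checks as a first-class stage; the file paths, column names and rules are purely illustrative.

# Minimal, vendor-neutral sketch of a DataOps-style pipeline step:
# ingest -> validate -> publish, with quality checks as an automated stage.
# All file paths and column names are illustrative.
import pandas as pd

def ingest(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Fail fast if the data does not meet basic quality rules."""
    required = {"customer_id", "order_value", "order_date"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"Missing columns: {missing}")
    if df["order_value"].lt(0).any():
        raise ValueError("Negative order values found")
    return df.drop_duplicates(subset="customer_id", keep="last")

def publish(df: pd.DataFrame, path: str) -> None:
    df.to_parquet(path, index=False)   # hand off to the analytics layer

if __name__ == "__main__":
    publish(validate(ingest("orders.csv")), "orders_clean.parquet")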
Consequently, the build versus buy debate for DataOps is a complex and crucial decision to make. However, it can be solved by considering a series of questions about the business objectives and project you are undertaking.
The nature of the problem
Before making any decisions between build or buy, business leaders should test the market first and build later, taking a design-thinking approach before committing to a build. This saves resources and energy for building the right things. If the problem is not related to your business’ core value proposition or will not directly help with growing revenue, then committing money, time and resources to building a specialised DataOps solution is not the right route to take.
You must also take into consideration whether improved DataOps will enhance your customers’ experience of your product or service, thereby making such a solution a competitive advantage for your business. Though most commonly this will be the case, it is usually only good business sense to construct your own custom DataOps solution from scratch if your organisation is big enough to spread the cost of such a proprietary solution across a high volume of clients.
If you are seeking software to directly support revenue generation, then building DataOps solutions from scratch in-house might be a good decision. If the solution is tightly bound to your business’ USP, then owning the rights to such technology is an advantage over competitors.
Collaborating with third-party experts can be useful since this includes an outsider’s perspective, helping to shine a spotlight on issues you may not have known you had. Such third-party experts can assist with bringing experience-based processes and strategies to your team that may otherwise not have been considered.
The importance of time-to-market
A reduced time to market can result in faster growth in market share, which is a distinct advantage over competitors, making this factor increasingly vital for the success of a DataOps programme during a period of fast innovation and fierce competition.
Although in-house solutions are tailored uniquely to an organisation, they need an in-house team to dedicate months to building, testing and deploying them. Smaller businesses are less able to commit that much resource to a project for such a period. In this scenario, working with third-party experts can be useful since they can craft a roadmap that enables you to finalise your solution as quickly as possible, while delivering a high-quality project. They are also able to handle project management and any communication or collaboration with other parties. This means you need only allocate smaller amounts of your business’ resource to the project.
Additionally, large in-house projects have a tendency to finish later than expected, with unforeseen problems likely to occur given a team is building a customised solution that hasn’t been built before. Ahead of committing to building a DataOps solution in-house, you should consider when you need the project to be completed by, and if the timeframe you require is reasonable.
Building and buying should also be considered in terms of the talent you will attract to the company. Off-the-shelf solutions need no in-house development but do require some management. Consequently, you will be less likely to attract relevant talent, as such employees would look to involve themselves in the actual building of the solution.
Choosing to outsource a DataOps build means you will make more time and resource available to allocate to the business’ central output. Vendors occasionally provide their core technology as an off-the-shelf solution, and then go on to work directly with clients to adapt the solution to their unique needs. As a result, functionality may be incorporated when and where required, and integration is smoother and more efficient. Managed service is also appropriate if there are short term growth goals and ROI requirements.
How likely is your project to succeed?
A DataOps project’s likelihood of success is crucial given the effect on the business in terms of cost and resource commitment. Upon implementing a new DataOps programme, the business benefits garnered must be front and centre, as well as thorough transparency around all costs involved.
Any lack of clarity around cost and resource demands can result in outsourcing being viewed as a more suitable route. Vendors are often better positioned to provide more detail about cost up-front and can implement best-in-class solutions effectively.
Will it be build or buy?
In truth, the question for DataOps is far more about build, buy or rent, or a combination of all three, than the traditional build versus buy binary.
Thus, whether or not a build or buy approach is chosen, the final product needs to centre around a comprehension of the interrelated nature of data engineering, data management, data quality and data security. If this is not understood, businesses will be unable to deliver data that gives them useful insights for the organisation.
Fundamentally, DataOps should be considered, not as a particular method or even tool, but instead as a method of working at an organisational, technological and cultural level. Successful deployment of DataOps demands key changes at many levels within an organisation, whether solutions are bought or built in-house. However, by taking into consideration the above key questions about specific projects, you can make a decision on the best way forward for your business. Though the path towards the completion of successful DataOps may be complicated, you will reap the rewards once you have facilitated a more efficient use of data and analytics and set yourself in the direction of success.
For most of us, when looking for any kind of information, we think: ‘Google it’. If we don’t Google it, we ask Siri, ask Alexa or use an alternative web search tool to match us to exactly what we need. As consumers, it’s as simple as that.
By Ronen Schwartz, Senior Vice President and General Manager, Data Integration, Data Engineering, and Cloud, Informatica.
But when it comes to the workplace, accessing information isn’t as simple. Compared with our lives as consumers, data has offered us relatively little benefit in enterprise digital transformation – until now.
More organisations are now focused on identifying and accessing secure, high-quality enterprise data that can support revenue growth and problem solving.
Enabling data-led insights
Data is typically scattered across hundreds or even thousands of cloud and on-premises systems, from legacy transactional databases and spreadsheets to cloud-based marketing systems and data lakes. Adding to the complexity is the influx of newer data sources and applications, such as the internet of things (IoT) and artificial intelligence (AI).
If data is an essential element of digital transformation, the difficulty employees face in finding the right data when they need it is a major reason why so many organisations fall short in their digital transformation initiatives.
Whether the goal is improving the customer experience, delivering analytical insights for decision-making, or migrating operations to the cloud, success depends on the ability of employees to track down relevant data and understand its quality and provenance. And, with experts forecasting that the amount of enterprise data will double every two years (if not more), this challenge is getting more complex.
The result is that much of the data that could be valuable in launching ambitious digital transformation efforts is vastly underutilised, if it’s used at all.
“Most organisations use only a small percentage of the data they have access to — in my experience less than 5% — even though they continue to collect and store terabytes of data,” says Shervin Khodabandeh, partner and managing director at Boston Consulting Group.[1]
Introducing: Data Catalogue
Ideally, business and IT users could search for enterprise data as easily as running a Google search, with access to ratings and reviews on the data from other users to guide them, just as we use Yelp to guide our personal choices.
To achieve this, enterprise information needs to be catalogued and classified in a logical fashion — “democratised” for use by business users, data scientists, application developers, and other stakeholders across the organisation. Nontechnical business analysts would have self-service access via semantic search, similar to the way consumers filter retail products by brand, color, and other attributes. They would have the context needed to understand and trust the data — where the data is coming from, who uses it, what other data is it related to, and the quality of the data.
That search effort would deliver relevant results no matter where the data resides in the enterprise, because it is powered by an intelligent data catalogue: a technology layer that inventories data across cloud and on-premises systems to make it accessible. AI and machine learning capabilities make data catalogues “intelligent”, enabling highly accurate auto-tagging, data similarity analysis and lineage definition and, above all, the speed, scale, automation and insight that enterprise data management needs in the digital era.
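As a rough illustration of what such a catalogue records, the toy sketch below models a catalogue entry with its location, lineage, a crowd-sourced quality rating and tags, plus a simple keyword search. The field names are illustrative and are not tied to any particular catalogue product.

# Toy illustration of the metadata an intelligent data catalogue keeps per
# data set, plus a naive keyword search over it. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    name: str
    location: str            # where the data physically lives
    owner: str
    lineage: list            # upstream sources the data was derived from
    quality_rating: float    # e.g. crowd-sourced 0-5 rating from other users
    tags: list = field(default_factory=list)

class DataCatalogue:
    def __init__(self):
        self.entries = []

    def register(self, entry: CatalogueEntry):
        self.entries.append(entry)

    def search(self, term: str):
        term = term.lower()
        return [e for e in self.entries
                if term in e.name.lower()
                or any(term in t.lower() for t in e.tags)]

catalogue = DataCatalogue()
catalogue.register(CatalogueEntry(
    name="customer_orders_2019",
    location="s3://warehouse/orders/2019/",
    owner="sales-analytics",
    lineage=["crm.orders", "erp.invoices"],
    quality_rating=4.5,
    tags=["orders", "revenue", "customer"],
))
print([e.name for e in catalogue.search("revenue")])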
“Managing data in today’s world without a data catalogue is ill advised and impractical,” says a report by Eckerson Group, a research and consulting firm.[2] “We’re moving rapidly to an era where communication, collaboration, and crowdsourcing are the mainstays of data management.”
Overcoming the hurdles
If data isn’t consistent, comprehensive, and accurate, digital transformation efforts may fall short of objectives in a wide range of areas.
Whether we’re talking about IoT and AI, new and robust customer-centric business models or culture initiatives within the workplace – they all rely on digital transformation. The key takeaway here is that catalogue-based, smart data management sits at the very core of a thriving digital transformation project. Whilst change can be scary, adapting to it can be make or break for your business.
The current Covid-19 pandemic has served to focus attention on the health of the healthcare sector, as never before. Information technology has a vital role to play. Here we cover a range of viewpoints and news, much of it relating to the ongoing battle with the coronavirus. Part 3.
Engagements with government agencies, healthcare organizations and academic institutions around the world, including in Arkansas, California, Georgia, New York, Texas, the Czech Republic, Greece, Poland, Spain and the UK.
With COVID-19 affecting 206 countries, areas and territories, IBM is helping government agencies, healthcare organizations and academic institutions throughout the world use AI to put critical data and information into the hands of their citizens.
With a flood of information requests from citizens, wait times in many areas to receive answers can exceed two hours. Available at no charge for at least 90 days, and available to clients’ citizens online or by phone, IBM Watson Assistant for Citizens on the IBM public cloud brings together Watson Assistant, Natural Language Processing capabilities from IBM Research, and state-of-the-art enterprise AI search capabilities with Watson Discovery, to understand and respond to common questions about COVID-19.
“While helping government agencies and healthcare institutions use AI to get critical information out to their citizens remains a high priority right now, the current environment has made it clear that every business in every industry should find ways to digitally engage with their clients and employees,” said Rob Thomas, general manager, IBM Data & AI. “With today’s news, IBM is taking years of experience in helping thousands of global businesses and institutions use Natural Language Processing and other advanced AI technologies to better meet the demands of their constituents, and now applying it to the COVID-19 crisis. AI has the power to be your assistant during this uncertain time.”
Watson Assistant for Citizens leverages currently available data from external sources, including guidance from the U.S. Centers for Disease Control & Prevention (CDC) and local sources such as links to school closings, news and documents on a state website. IBM is already delivering this service across the United States, as well as engaging with organizations globally in the Czech Republic, Finland, Greece, Italy, Poland, Spain, the UK and more.
Here are examples where IBM is engaging with government and healthcare agencies on Watson Assistant for Citizens:
· ARKANSAS: University of Arkansas for Medical Sciences – In 9 days, deployed a virtual agent so citizens can get their questions answered quickly about testing, symptoms or resources. Information is automatically sent to a mobile COVID-19 triage clinic electronically to help speed response. Average registration time has been reduced by fifty percent for those using the agent.
· CALIFORNIA: City of Lancaster in Los Angeles County – COVID-19 information for citizens on common questions such as symptoms and recommended procedures to follow in case of infection.
· GEORGIA: Children's Healthcare of Atlanta – The “COVID-19 Pediatric Assessment Tool” walks parents through a series of questions and results in suggested next steps that a parent should take. Recommendations on next steps are made according to the healthcare system’s established protocols.
· NEW YORK: County of Otsego – COVID-19-related information will be available within the next few days for citizens to help them quickly get their health and non-health-related questions answered regarding the pandemic. Otsego County’s COVID-19 virtual agent will be able to answer citizens’ questions like: “How do I apply for unemployment?”
· TEXAS: City of Austin – COVID-19-related information will soon be available for citizens with interactive conversation on where to get testing and other information.
· CZECH REPUBLIC: Czech Ministry of Health – COVID-19 virtual agent called "Anežka" advises citizens about prevention, treatment and other related topics on the coronavirus.
· GREECE: Hellenic Ministry of Digital Governance – COVID-19-related information for citizens and interactive conversation on preventive and precautionary measures issued by the Greek Government.
· POLAND: Polish Ministry of Health – COVID-19 information for Polish citizens on common questions such as symptoms and recommended procedures to follow in case of infection.
· SPAIN: Andalusian Government – A virtual agent to help respond to citizens’ queries about COVID-19 is available through the app “Salud Responde” and the Public Agency for Health Emergencies (EPES) website, built also in collaboration with the Andalusian Health Service (SAS).
· UK: National Health Service Wales: Cwm Taf Morgannwg University Health Board – CERi, an English- and Welsh-speaking virtual assistant, will soon go live to support healthcare workers and the general public in Wales who need information or have questions on the prevention and treatment of COVID-19, along with general information about the virus.
“Austin residents are counting on us to provide timely updates on COVID-19 response,” said Tauseef Khan, Chief Technology Officer, City of Austin, Texas. “The City is pleased to use artificial intelligence technology to respond to that need, with a tool that quickly and easily helps residents find the information they need 24/7.”
“The AI solution from IBM will be a great resource for the county’s residents and will help alleviate call center volume to allow county employees to dedicate efforts elsewhere,” said Brian Pokorny, Director of Information Technologies, County of Otsego, New York.
Using information provided by clients, Watson Assistant for Citizens automates responses to frequently asked questions about COVID-19 that come in via phone call or text, such as “What are symptoms?,” “How do I clean my home properly?” and “How do I protect myself?”
State and local government agencies, hospitals or other healthcare organizations can choose to customize the solution to address questions specific to their area or region, including “What are cases in my neighborhood?,” “How long are schools shut down?,” and “Where can I get tested?”
IBM is offering Watson Assistant for Citizens for no charge for at least 90 days and will assist with initial set up, which can typically be done in a few days. The initial solution is available in English and Spanish and can be tailored to 13 languages.
The offer includes access to 15 pre-trained COVID-19 “intents” or queries. “Intents” are purposes or goals that are expressed in a customer's input, such as answering a question. By recognizing the intent expressed in a customer's input, the Watson Assistant service can choose the correct dialog flow for responding to it.
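To illustrate the “intent → dialog flow” idea described above, here is a toy sketch of intent recognition routing a user’s question to a canned response. This is not the Watson Assistant API; the intents, example phrases, responses and matching logic are all invented, and a simple string-similarity score stands in for a trained classifier.

```python
# Illustrative only: a toy intent classifier routing user input to a response flow.
# Not IBM's API; intents, phrases and responses are invented examples.
from difflib import SequenceMatcher

INTENTS = {
    "symptoms": ["what are symptoms", "how do i know if i have covid"],
    "testing":  ["where can i get tested", "how do i get a test"],
    "cleaning": ["how do i clean my home properly", "how do i disinfect surfaces"],
}

RESPONSES = {
    "symptoms": "Common symptoms include fever, a new continuous cough and fatigue.",
    "testing":  "Testing locations vary by region; check your local health authority's website.",
    "cleaning": "Clean frequently touched surfaces daily with a standard household disinfectant.",
}

def classify(text: str) -> str:
    """Return the intent whose example phrases best match the input text."""
    text = text.lower()
    best_intent, best_score = "unknown", 0.0
    for intent, examples in INTENTS.items():
        score = max(SequenceMatcher(None, text, ex).ratio() for ex in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score > 0.5 else "unknown"

print(RESPONSES.get(classify("What are symptoms?"), "Sorry, I don't have an answer for that yet."))
```

In a production assistant the classifier would be a trained NLP model and each intent would trigger a dialog flow rather than a single canned string, but the mapping from recognised intent to response path is the same idea.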
Clients can also work with IBM to customize the offering on top of the base model and intents to include information related specifically to a city or region that is pertinent to those citizens or constituents, as well as integrate with clients’ back-end ERP systems.
IBM is also working with global businesses in other industries to apply AI to help them respond to COVID-19 and reimagine the way work will get done in this new operating environment.
A new strategic partnership between myGP software creator iPLATO and GP service GPDQ has been launched to boost the capacity for NHS patient video consultations across UK General Practice as one in four GPs are forced into isolation.
After seeing a 1,451 percent spike in GP video consultations being conducted during the SARS-CoV-2 pandemic, iPLATO and GPDQ have joined forces to provide surge capacity for UK-based General Practices, enabling more patients to be seen via video across the UK.
The new collaboration will provide medical practices with a team of remote doctors, pharmacists and nurses who are all able to conduct video or remote consultations via the myGP platform, in this move to provide a scalable and cost-effective solution during an unprecedented national emergency.
iPLATO currently connects 24.5m patients in the UK with their own NHS GP practice and provides the patient facing app myGP. The myGP platform is operational across over 6,500 GP practices and launched its NHS remote consultation service in December 2019. However, due to 25 percent of GPs now taking sick leave, practices are struggling to meet demand.
As the leading GP-on-demand service, GPDQ has been using its technology platform to connect its extensive network of clinicians with NHS and private patients through home visits, in-clinic or via video since 2015. By partnering with iPLATO, any practice using myGP can access experienced and highly trained GMC-registered clinical staff to act as a remote extension of their practice team.
The partnership will also enable the 9,000 GPs who are currently on sick leave to log in when they feel well enough and help patients while continuing to follow self-isolation requirements.
Professor Mike Lewis, Chairman at iPLATO Healthcare comments on the new partnership:
“With the rapidly increasing need for online and remote consultations within primary care it seemed obvious to partner with GPDQ, which is leading the way in using digital technology to help patient demand meet GP supply.
“We know that many practices are operating beyond capacity due to illness and GPs and the primary care workforce heading into isolation. By offering additional shared resource to a group of practices we can help to relieve pressure and patients can continue to receive the vital care that they need from the safety of their homes.”
Dr Anshumen Bhagat, Chief Medical Officer at GPDQ comments on the new partnership:
“GPs have an innate sense of responsibility for their patients’ health and will always want to see and help as many patients as they can. We naturally want to do everything in our power to help. By fusing the powerful myGP platform with a national team of passionate, digitally-connected clinicians we can get GPs, nurses and pharmacists in front of the patients who need them most, when they need them, without delay.
“As the SARS-CoV-2 pandemic applies additional pressure on primary care, partnering with iPLATO to be a part of the myGP platform is a logical thing to do to maximise our impact. iPLATO have the network of relationships with commissioners, and we have the experts to deliver the service. We are really looking forward to deploying our workforce to help our NHS colleagues throughout iPLATO’s network.”
Paul Roberts, CEO at GPDQ comments on the opportunity to match GP supply and patient demand more effectively:
“The current pandemic has forced everyone to behave differently. This includes an openness to try new ways of doing things, creating positive case studies to show how innovation can be a force for good in primary care.
“For example, we know that there are hundreds of portfolio GPs out there today who want to help but aren’t currently part of a practice team or able to work in the usual way as a locum. Our service enables all NHS-registered GPs to sign up to work with immediate effect, and myGP’s platform helps us to get them to where they need to be – seeing patients on the front line.
“Our service directly harnesses a portfolio workforce to remove the need for practices to manage variable staffing themselves, with the associated time and expense this entails. The power of collaborating with myGP is that we were able to immediately make this service available across whole groups of practices to capture further efficiencies.”
myGP, which can be accessed via a smartphone app by patients and through a secure web interface by clinicians, is the UK’s largest independent medical app with 1.6 million active users. Currently used in over 6,500 GP practices, myGP was the most downloaded medical app in the UK in 2019.
A controlled study of 750,000 appointments has shown that 26 percent of appointments booked could be met in an alternative setting, such as a remote or video consultation, rather than a face-to-face GP appointment.
This strategic partnership follows the launch of iPLATO’s Remote Consultation Enterprise, which was rapidly deployed to enable PCNs (Primary Care Networks), CCGs (Clinical Commissioning Groups) and STPs (Sustainability and Transformation Partnerships) to optimise digital health services across their populations during the current pandemic. Acting as a hub-and-spoke service, GPDQ will clinically triage and treat patients across multiple practices, with the same access to patient records as a staff member, GP or locum working at the surgery.
Redscan has released an analysis of the most searched security and technology terms during the COVID-19 pandemic. The findings demonstrate the technology priorities of UK businesses, the potential security threats they face, and the extent to which many were unprepared for such an event.
Key findings, based on Google Trends global search history data, include the following (a sketch of how such comparisons can be reproduced appears after the list):
· Coronavirus-related phishing scams are currently more searched for in the UK than those linked to many big brands, including Apple and Amazon. HMRC phishing scams were also widely searched for, coinciding with the introduction of unprecedented financial support for employees and businesses most affected
· Searches for “Business continuity plan” saw a huge spike between 8th and 21st March 2020, significantly higher than at any other time in Google’s history - revealing the extent to which the pandemic has triggered panic amongst businesses, many of which would not have had such a plan in place
· Search interest in “remote working”, “collaboration tools” and “remote access” reached record highs in March, as organisations sought solutions to facilitate employee home working.
· “VPN” searches also saw a significant spike in March. Since 8th March, VPN has been more searched for in the UK than the Chancellor of the Exchequer, Rishi Sunak, who has also seen an increased number of searches during the virus pandemic
· “Antivirus” also saw increased searches in March, but searches for this term over the last 10 years remain on a steady decline
· Zoom is currently the most searched online collaboration technology, ahead of GoToMeeting, WebEx, Slack, and Microsoft Teams. Despite reported security and privacy concerns (including the rise of “Zoombombing”), all of these collaboration-related tools generated a significant spike in online search interest during the month
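As a rough indication of how comparisons like these can be reproduced, here is a minimal sketch using the open-source pytrends library (pip install pytrends), which wraps Google Trends. The terms and timeframe are illustrative, and the relative-interest figures returned will not match Redscan’s exact analysis.

```python
# A minimal sketch of comparing UK search interest in early 2020, assuming the
# open-source pytrends library. Terms, timeframe and region are illustrative.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)

terms = ["VPN", "remote working", "business continuity plan"]
pytrends.build_payload(terms, timeframe="2020-01-01 2020-03-31", geo="GB")

interest = pytrends.interest_over_time()   # relative interest, indexed 0-100 per term
print(interest[terms].max())               # peak relative interest for each term
print(interest[terms].idxmax())            # the week in which each term peaked
```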
“Google’s search data tells a clear story of businesses trying to adapt to remote working and related security and technology challenges of greatest concern,” said Mark Nicholls, Redscan CTO. “A spike in business continuity plan searches is hardly a surprise, but it is also troubling to think that so many are Googling the term now. It suggests that many businesses did not already have a continuity plan in place, and now is hardly an ideal time to implement one. But better late than never.
“Ensuring that employees have the tools in place to work from home has been a priority of IT teams but it’s important that organisations are vigilant about the increased security risks and put appropriate controls and processes in place to mitigate them – such as ensuring that cloud platforms are appropriately configured and monitored.
“At this moment, search traffic is so high for COVID-19-related phishing scams that it exceeds search volumes for phishing attacks imitating major brands like Apple. Cybercriminals are treating the pandemic as a unique opportunity to target remote employees, who may be more vulnerable to social engineering away from the protection of an office network. During this difficult time, employee cyber awareness training and proactive network and endpoint monitoring are more important than ever.”
Domo has updated its free, interactive Coronavirus (COVID-19) Global Tracker with county-level infection statistics, stay-at-home orders and testing-by-state data.
To support the worldwide effort to keep communities informed, healthy and safe, this free resource uses the Domo platform to help anyone see and understand COVID-19 data, and embed any of the visualisations in their own websites or operations. Updating every 10 minutes, the tracker aggregates and cross-checks data from sources including the Centers for Disease Control and Prevention (CDC), the World Health Organisation (WHO), Johns Hopkins University, Worldometer and Enigma.
“We’ve seen incredible interest in this free resource as organisations of all kinds seek to quickly understand how the virus is impacting the world in which they operate. Easy access to consumable data can help inform critical decisions and actions that help navigate through this crisis,” notes Josh James, founder and CEO of Domo. “We’re seeing hundreds of customers – healthcare organisations, grocers, national retailers, logistics firms and many others – combine the underlying data sets with their own operational data to help them respond more quickly to the changing environment.”
According to the recently published DLA Piper GDPR Data Breach Survey 2020, more than 160,000 data breach notifications have been reported across Europe since the General Data Protection Regulation (GDPR) came into force in May 2018. The survey also found that data protection regulators have imposed €114 million in fines under the GDPR regime for a wide range of GDPR infringements. It is clear, therefore, that many businesses are still facing challenges when it comes to meeting and maintaining compliance.
By Mike Sanders, General Manager, Unitrends.
Backup and recovery can play a key role in helping ensure organisations remain compliant with GDPR at all times, avoiding a breach and the costly fines associated with it. But in today’s increasingly cloud-based environments, how can businesses best manage their data retention policies and control the location and replication of their data in the cloud? Here are some top tips to help them achieve these goals.
Make use of geo-controlled cloud data
Articles 45-47 of GDPR govern the location and privacy of EU citizens’ user data in the cloud. Organisations should consider cloud solutions that let them choose the geographic region where their cloud data is based and contain data replication within a selected region, such as the EU, unless a different geography is specifically requested.
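As a generic illustration of what “choosing the geographic region” can look like in practice — not a description of any particular backup vendor’s product — here is a short boto3 sketch that pins a backup bucket to an EU region on AWS S3. The bucket name is hypothetical.

```python
# Illustrative only: pinning a backup bucket to an EU region using AWS S3 via boto3.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Create the backup bucket in an EU region so the stored data stays within that
# geography unless replication elsewhere is explicitly configured.
s3.create_bucket(
    Bucket="example-gdpr-backups-eu",                                # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```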
Make sure you can access automated compliance reporting
Under GDPR, organisations are responsible for how they manage and protect the privacy of EU citizens’ user data (Article 5). Make sure you choose backup, recovery and cloud software solutions that provide robust compliance reporting built into the user interface, including outage impact predictions and comprehensive data recoverability reports that are available in formats that can be shared with leadership or auditors.
Ensure your backup and recovery is ’state-of-the-art’
As part of its commitment to protecting users and their data, GDPR encourages companies to implement backup and recovery that is State of the Art (SOTA, Articles 25 and 32). It is important to seek out solutions that feature advanced ransomware protection and machine learning-based predictive analytics.
Put in place data retention policies that are easy to manage
Article 6 of GDPR requires a strategic plan for storing data about EU citizens that includes a mechanism to delete data once its use case is complete. It is therefore key to seek out solutions that simplify the process of defining and managing data retention policies for both on-premises backups and data in the cloud.
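A simple retention rule can be sketched with a cloud provider’s lifecycle policy. The example below again uses AWS S3 via boto3 as a generic stand-in (not any specific backup product); the bucket name, prefix and 365-day retention period are illustrative.

```python
# A sketch of an automated retention rule: backups under a given prefix are
# deleted automatically after 365 days. Bucket, prefix and period are illustrative.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-gdpr-backups-eu",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-citizen-data-after-one-year",
            "Filter": {"Prefix": "citizen-data/"},
            "Status": "Enabled",
            "Expiration": {"Days": 365},
        }]
    },
)
```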
Implement intuitive search and delete
One of the most talked-about articles of GDPR is Article 17, the Right to be Forgotten. In order to address this, it is important to ensure your chosen backup and recovery tools include intuitive search functionality that enables your administrators to find specific files. Administrators can then choose to delete data as needed, though it should be noted that, depending on the data source, deletion may require erasing a block of data, and administrators should also be aware of how other compliance regulations might be impacted.
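Below is a minimal “search and delete” sketch for right-to-be-forgotten requests, assuming backups are stored as objects keyed by a data-subject identifier. This is a simplification of real backup formats (which, as noted above, often store data in blocks, making deletion coarser-grained); the bucket, prefix and identifiers are hypothetical.

```python
# Hypothetical sketch: find and delete all backup objects belonging to one data subject.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

def forget_subject(bucket: str, subject_id: str) -> int:
    """Delete every object stored under the data subject's prefix; return the count deleted."""
    deleted = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=f"citizen-data/{subject_id}/"):
        for obj in page.get("Contents", []):
            s3.delete_object(Bucket=bucket, Key=obj["Key"])
            deleted += 1
    return deleted

print(forget_subject("example-gdpr-backups-eu", "subject-12345"))  # identifiers are illustrative
```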
Instigate role-based access control
As another way of controlling the privacy of EU citizens’ data, Article 23 of GDPR mandates that organisations restrict access to personal data whenever possible. Role-based access control can be instrumental here in helping administrators meet this requirement by letting them manage and restrict data access levels within their team.
Provide secure encryption
GDPR Article 32 mandates that all data is securely processed and stored. With the latest high-quality backup solutions, data can be encrypted in flight and at rest using military-grade encryption.
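For a sense of what at-rest encryption of a backup payload involves, here is a deliberately simplified sketch using the Python cryptography package’s Fernet scheme (authenticated symmetric encryption). In-flight protection would normally come from TLS, and real deployments keep keys in a key management service rather than generating them in application code.

```python
# Simplified illustration of encrypting a backup payload at rest, assuming the
# "cryptography" package. Key handling is intentionally naive for brevity.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, fetch this from a KMS or vault; never hard-code
fernet = Fernet(key)

backup_bytes = b"example backup payload containing personal data"
encrypted = fernet.encrypt(backup_bytes)   # ciphertext safe to store at rest
restored = fernet.decrypt(encrypted)       # recovery path
assert restored == backup_bytes
```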
Instant recovery
In addition to security, GDPR Article 32 also requires the ability to quickly restore data. A high-quality backup and data recovery solution makes it easier to recover lost data within seconds.
Running the risk
By failing to ensure compliance with GDPR, any business is running some significant risks. In addition to substantial penalties that can quickly eat into the bottom line of the business, the biggest risks are to the company’s reputation and the goodwill shown to it.
Any company that fails to follow the stipulations of GDPR and experiences a data leak as a result opens itself up to severe monetary penalties and fines, loss of future business, network downtime, ongoing legal fees, loss of customer trust and confidence, unhappy shareholders and poor employee morale.
That’s food for thought for every business when they decide whether or not they should put the necessary measures in place to comply. GDPR is not going to go away and every organisation needs to ensure it has put its own house in order. Complying with the regulation means putting in place a multi-faceted approach that is as much about implementing new processes as it is about installing new technologies. The latest backup and recovery solutions will typically have a key part to play in delivering best practice data management and by extension helping to meet GDPR. The Unitrends Fifth Annual Cloud and Disaster Recovery Report found, however, that 30 percent of its respondents are experiencing data loss, and 40 percent suffered downtime, showing that many organisations still need support to leverage backup and disaster recovery solutions and best practices.
Moving forwards, organisations that implement these tools will often be better placed to comply with GDPR and other regulations. Ultimately, there is no time like the present to start putting them in place.
James Hamilton, VP and Distinguished Engineer at Amazon Web Services (AWS), on ARM processors.
AWS Graviton2
In November 2018, AWS announced the first ARM-based AWS instance type (AWS Designed Processor: Graviton). For me this was a very big deal because I’ve been talking about ARM-based servers for more than a decade, believing that massive client volumes fund the R&D stream that feeds most server-side innovation. In our industry, the deep innovations are expensive, and “expensive” only makes sense when the volumes are truly massive.
For someone like me who focuses on server-side computing, this is a sad fact. But the old days of server-only innovation started to die with the mainframe, and the process completed during the glory years of the Unix super-servers. Today, when I’m placing a bet on a server-side technology, the first thing I look for is which technology is fueled by the largest volumes and, most of the time, it’s the massive client market, and especially the consumer market, that drives these volumes. For more than a decade, I’ve been watching client computing, and especially the mobile device market, for new technologies that can be effectively applied server-side. The most obvious example is the Intel x86 processor family, which started its life as a client processor but ended up taking over the server market. Other examples include most power-management innovations and newer technologies such as die stacking that showed up first in client devices.
Understanding this dynamic, my prediction back in 2008 that ARM processors would end up powering important parts of the server market was an obvious one. If you agree that volume drives innovation in our business, it’s hard to argue with far more than 90B ARM parts shipped.
But, server-side success for ARM processors has been far from instant. Some very well-funded startups like Calxeda ended up running out of money. Some very large, competent and well-known companies have looked hard at the market, made significant investments, but ended up backing away for a variety of reasons, often completely unrelated to technical problems with what they were doing. AMD and Qualcomm are amongst the companies that have invested and then backed away, but the list is far longer. I saw the details behind some of this work and much of it was excellent. But new technology is hard. All companies, even very successful ones, need to focus their resources where they see the most value and often where they see short-term value.
I understand this, but it’s been difficult to watch so many projects fail. Some of these projects were massive investments and some of the work was very good. Nonetheless, as fast as projects were shut down, the opportunity remained obvious and, as a consequence, new investments were always being started. After nearly a decade, that’s still true. Many projects have started, almost the same number have been shut down, but the common element is that there are always many ARM server investments underway.
In some ways it’s good that there continues to be deep investments in ARM server processors, but producing a winning part requires deep investment and patience. Much of the modern corporate world is only just “ok” at deep investments, and most are absolutely horrible at patience. Server processor development takes time, the ecosystem needs time to develop, and customers need time to adopt new technologies. Big changes never happen overnight and, without patience, they simply don’t happen at all.
Back in 2014 I was quoted as saying “the development of ARM-based chips for data center servers wasn’t progressing fast enough … to consider using them over Intel processors.” Like many quotes, it’s not exactly what I said but the gist was generally correct. In my opinion, at that time there were no ARM server parts under development that looked like they could win meaningful market segment share. All these investments were just slightly too incremental, and a part that is only “about as good as what is currently in market” isn’t going to attract much attention, isn’t going to cause the ecosystem to spring to action, and customers won’t go to the effort to port to it. Unless the new part is notable or remarkable in some dimension, it’s going to fail.
This was the backdrop to why I was almost giddy with excitement in the front row when Peter DeSantis announced the AWS Graviton processor during his keynote at the AWS re:Invent conference. Here’s what I posted at the time: AWS Designed Processor: Graviton. I was excited because what Peter announced was a good part with good specs that raised the price/performance bar for many workloads. But I was even more excited knowing that AWS has a roadmap for ARM processors, is patient, and specializes in moving quickly. The first Graviton part was good but, as I enjoyed the first Graviton announcement back in 2018, I knew what many speculated at that time: another part was underway.
The new part is Graviton2 and this is an exceptional server processor that will be a key part of the EC2 compute offering, powering the M6g (general purpose), M6gd (general purpose with SSD block storage), C6g (compute optimized), R6g (memory optimized) and R6gd (memory optimized with SSD block storage) instance families. This 7nm part is based upon customized 64-bit ARM Neoverse N1 cores and it is smoking fast. Rather than being offered as an alternative instance type that will run some workloads with better price/performance, it’s being offered as a better version of an existing, very heavily used EC2 instance type, the M5.
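In practical terms, these instance families are requested like any other EC2 instance type. The sketch below uses boto3 with a placeholder AMI ID; any arm64 image for the chosen region would be substituted, and the instance size shown is just one example.

```python
# A minimal sketch of launching a Graviton2-based (arm64) EC2 instance via boto3.
# The AMI ID is a placeholder; use an arm64 image available in your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder arm64 AMI, e.g. Amazon Linux 2 for arm64
    InstanceType="m6g.large",          # Graviton2-based general-purpose instance
    MinCount=1,
    MaxCount=1,
)
instance = response["Instances"][0]
print(instance["InstanceId"], instance["Architecture"])   # architecture reports as arm64
```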
Here’s comparative data between M6g and M5, the previous generation instance type, from an excellent Forbes article by Patrick Moorhead:
This is a fast part and I believe there is a high probability we are now looking at what will become the first high volume ARM Server. More speeds and feeds:
The Annapurna team at AWS is doing amazing work. I wish I could show you all the work they currently have underway but only some of it is public. Even with multiple difficult, competing projects concurrently underway, they delivered Graviton2 on an unusually short schedule, seldom seen in the semiconductor world. It’s a great team to work with and Graviton2 is impressive work.
ARM servers have been inevitable for a long time, but it’s great to finally see them here and in customers’ hands in large numbers.
The current Covid-19 pandemic has served to focus attention on the health of the healthcare sector, as never before. Information technology has a vital role to play. Here we cover a range of viewpoints and news, much of it relating to the ongoing battle with the coronavirus. Part 4.
Fever screening is used to detect infectious diseases at an early stage, to cut off routes of spread and to enable treatment of infected persons as early as possible. Modern fever-screening thermal cameras can detect elevated body temperatures of people with an accuracy of up to 0.1 degrees.
Where can fever screenings be used sensibly?
The use of body-heat-measuring cameras makes sense wherever large gatherings of people are encountered, such as at the entrances or waiting areas of office buildings, public amenities and transport hubs. Measuring forehead temperature using handheld devices from a distance of one metre is time consuming and can threaten the safety of the personnel. Thermal cameras can therefore help wherever manual measurement is not safe or efficient due to time constraints. If a certain limit value is exceeded, the camera sounds an alarm and sends a notification to the remote operator or the relevant department.
At many Asian airports, fever screening has long been part of everyday life. To stem the spread of coronavirus, more and more countries are introducing fever screening at their airports, including the USA. At border crossings, for example from Germany to the Czech Republic, fever screening is now part of everyday security measures against COVID-19.
Even critical infrastructures such as the Vienna Trauma Hospital rely on thermal imaging cameras. The Robert Koch Institute (RKI), in its options for separate treatment of COVID-19 suspected or confirmed cases and other patients in the health care system, states that the health of medical staff is crucial for enough treatment capacity and patient safety. Among other things, fever screening is recommended.
The technology does not stop at shopping malls and office complexes. Chempark, whose three sites in Leverkusen, Dormagen and Krefeld-Uerdingen are home to 70 companies and around 48,600 employees, introduced fever screening with distance scanners in March 2020 to protect its people and sites.
The solution: fever measurement from a distance
unival group® and oneclick™ help companies and public institutions to improve the safety of their employees and society in general, especially in the wake of the coronavirus outbreak. Automated fever screening will become widely accepted for prevention reasons alone. While unival group® provides the thermal cameras and software for fever measurement, the oneclick™ platform enables the connection to the cloud, so that the software can be operated by security staff at different locations via a digital workspace in the browser.
An important criterion for the use of screening measures - including security screenings as part of admission controls - is the speed with which the largest possible number of people can be screened, such as employees during a shift change. Recently, Apple had to incur considerable costs to compensate its workers for overtime due to lengthy admission checks at the points of entry.
The opportunity, and at the same time the challenge, lies in the automation of processes and the AI-based evaluation of large amounts of data. unival group® specializes in the operation of multi-sensor solutions, such as the automated combination of fever and person screenings, by means of which alarms can be generated in real time while significantly higher flow rates are achieved. In addition, both employees and security personnel can carry out their duties in a more respectful manner and from a safer distance. In the event of a high volume of checks, employees from a central control centre can take over additional tasks and intervene directly in the processes via oneclick™. Thanks to the streaming technology of oneclick™, the future of carrying out similar remote security checks safely is literally within reach.
Company looking to work with governments and organisations across the UK and Europe in a bid to give citizens accurate information around global pandemic.
Yext has announced its collaboration with the State of New Jersey to deliver residents accurate, up-to-date information about COVID-19 with a new online information hub. Yext is looking to work with governments, NGOs, charities and other businesses and organisations across the UK & Europe to combat the growing concern around misinformation.
The State of NJ’s new website, covid19.nj.gov, uses natural language processing (NLP) to understand the questions people ask about the coronavirus, then delivers answers using data collected from multiple NJ state agencies, federal resources like the Centers for Disease Control and Prevention (CDC) and the Federation of American Scientists “Ask a Scientist” interactive tool.
As the World Health Organisation categorises misinformation as contagious and as dangerous as the diseases it helps to spread, it’s increasingly important for people to have access to the right answers and advice.
“Ensuring people have the right information during this pandemic can help save lives, it’s really as simple as that,” said Wendi Sturgis, EMEA CEO at Yext. “Unfortunately, misinformation is rife and can have dire consequences, particularly during difficult times like these; providing accurate and timely information is therefore critical in the fight against the COVID-19 pandemic. We want to be able to replicate the great work done in the State of New Jersey across the UK and elsewhere, and we’re committed to supporting everyone during this unprecedented time.”
“Over the last few weeks, we’ve been able to use our platform for good, and our collaboration with the State of New Jersey highlights how Yext can help any local, state or federal government agency in a moment of crisis,” said Brian Distelburger, Co-Founder and President of Yext. “In just a few days we were able to implement Yext Answers on the State’s new information hub, and together we now help give New Jersey residents the information they need to make important, potentially life-saving decisions about their health.”
Users of a new health and wellbeing app are contributing to a publicly-available heat map of people with COVID-19 symptoms, providing a national picture of the outbreak and its spread over time.
The app was developed by UK company, Evergreen, in collaboration with data and health scientists. As more people add their responses to the data, the accurate information provided will help fight the virus. The data is being shared with leading universities, including Manchester, who will analyse it with the NHS.
The University of Manchester’s Professor Tjeerd Van Staa and Dr Ian Hall are among those analysing the data. Dr Hall, a Reader in Mathematical Statistics, said: “Evergreen Life users are supporting a better understanding of the local experience of COVID-19 disease through sharing their data, which will be incredibly useful to national and local planning.”
The data, based on 25,548 responses, shows that as of March 27, 10.4% of respondents had reported having symptoms consistent with COVID-19, up from 8.1% in the initial survey on Sunday 22nd, before lockdown was announced.
Before lockdown, 53% of those with symptoms were staying at home; after lockdown, 89% of those with symptoms report that they are staying at home - showing that the overwhelming majority of those with symptoms are now acting on government advice to self-isolate.
Dr Hall added: “This is an exciting emerging data stream and I look forward to helping interpret the data, with colleagues in Manchester and Liverpool, as it provides situational awareness to users and policy makers alike.”
Dr Hall is also one of a special task force of statisticians who have been analysing data models from the beginning of the COVID-19 outbreak to help inform UK Government policy and response. They specialise on risk to communities that are in enclosed places, such as prisons or large vessels like a cruise ship; as well as analysing highly social communities found in schools to much smaller social environments, like our family homes.
Evergreen Life CEO Stephen Critchlow says: “We’ve asked our 750,000 users to help build a heat map of those with symptoms of COVID-19 to help the NHS and researchers better understand how the virus is moving and spreading around the UK. We’ve already heard from over 25,000 people and the questionnaire has been completed over 40,000 times.
“We have compared the situation before and after lockdown. It shows that while many more people are now staying at home, the number of people reporting symptoms has risen from 8.1% to 10.4%.”
Users of the app, available from app stores, are being asked to report whether they are self-isolating and whether they have a dry cough or a temperature. The anonymised data is being used to create a national picture of those reporting symptoms. People will also be asked to report when they recover, to enable further data analysis as the outbreak progresses. App users are also sent personalised information on national guidance to support them and optimise their wellbeing. The platform will also be used to deliver the NHS’s specific advice to users who are among the 1.5m people at greatest risk of complications.
Digital health is one of the most vibrant research areas at The University of Manchester, building on an exceptionally strong track record of more than 40 years of interdisciplinary research. The University has world-leading capabilities in engineering and research methodology for digital health technologies, as shown through the bio-health informatics research programme at the School of Computer Science.
UK startup OpenSpace, whose digital twin platform measures real-time passenger movement, has discovered a new use for its technology in the fight against coronavirus. The pioneering system, currently deployed at St Pancras station in London, can monitor social distancing by detecting and visualising the distance between passengers in real time. It can also compare historical weekly and daily information for trend analysis.
With social distancing measures due to the coronavirus pandemic likely to continue for six months or longer, this data provides a useful indicator of public adherence to Government guidelines, especially when lockdown measures are lifted in stages. OpenSpace have been using Artificial Intelligence and Internet of Things (IoT) technology to detect and predict overcrowding at St Pancras since 2019 as part of a government-funded innovation project, and will be able to show how well the public respond to future changes to the lockdown.
OpenSpace CEO Nicolas LeGlatin explained: “Our technology is designed to detect real-time passenger separation to alert station managers to current and future overcrowding, and suggest interventions. But the unexpected events of the past few months have revealed a new application - monitoring social distancing. If our data can help better inform government strategy on COVID-19 to help save lives, then we want to do our bit.”
“The platform detected a 90 per cent drop in passenger numbers after lockdown measures were introduced on Monday 23rd March, compared with a weekday in January this year. This represents the scale of travel demand change due to COVID-19 in one of the UK’s busiest stations.”
“Our purpose is to use technology to help make the passenger experience better for everyone, including protecting privacy. Cameras with computer vision technology are key to measuring passenger flow rate, travel patterns and social distancing. The data collected is anonymous and doesn’t use facial recognition. Through the use of Virtual Reality (VR) headsets, operators can put themselves in the shoes of passengers in real time - for example, those approaching a crowded area - to see and feel some of what customers are feeling, driving improvement strategies.”
The project at St Pancras was funded by the Department for Transport through the First of a Kind Round 2 competition, delivered by InnovateUK. Other project partners include High Speed 1, Govia Thameslink Railway, Network Rail - High Speed, and Birmingham Centre for Railway Research and Education.
As a CEO you are always conscious of the arrow of disruption heading towards your business, and if you’re lucky you will see it before it’s too late.
By Nick Earle, CEO of Eseye.
Whether it’s Kodak failing to keep up with the change in photography habits, Nokia and Blackberry’s rapid downfall in the mobile phone market, or Blockbuster’s destruction at the hands of on-demand streaming services, history is littered with examples of big-name brands that failed to evolve and keep up with changes in consumer behaviour or technology.
The challenge is, in this rapidly changing world, the next arrow could come from anywhere. In fact, most disruption comes from companies that did not even exist 5 years ago – with many notable recent examples such as Airbnb and Lyft demonstrating how a small start-up can quickly transform an entire marketplace. So what could that next arrow look like and how does the modern CEO get their retaliation in first?
Having led the cross-company global transformation teams of two multi-billion-dollar companies, HP and Cisco, from hardware-led to software-led business models, I’ve been at the forefront of significant technology led business transformation. Part of my role was to predict the next disruption and I believe the next one will be enabled by the Internet of Things (IoT). The very nature of global business will be transformed.
Why? Because IoT enables products to be turned into experiences by feeding back data about how your customers are using them day in day out, enabling new disruptive business models. In short, information about how a product is being used will become far more valuable than the product itself.
The companies that get this data will provide a radically personalised experience to customers. Increasingly, products will be given away in return for this valuable data. Every product will undergo this change. Can your company compete with free?
But now that you can see this arrow; how do you prepare your business to not only defend the attack but get on the front foot to lead the disruption?
Embracing disruption
Before getting started with your own business disruption, you must identify your disruptive change agent. Remember, this isn’t all about the technology. You’ve got to first look at it from a business case. What radical customer experience would truly transform your sector? Which product area is the most vulnerable? Are there some start-up companies already releasing solutions and signing up early adopters? These are all the vital questions that are central to the beginning of every successful journey into disruption.
Once you’ve set the objectives you want to achieve, the next decision is how you’ll make it happen. Will you outsource the strategic plan creation to third-party consultants, or will you trust your current Exec team?
Unfortunately, the problem with option one is that outsourcing to a third-party can be incredibly expensive and far too often, once they have completed their strategy report, you are then left on your own to try and implement it. Although very impressive, many of these projects remain theoretical masterpieces that fail to be properly deployed throughout the organisations that they were created for.
Likewise, by giving the responsibility of disruption to existing managers, option two also presents some issues. Ultimately, it’s tough to factor disruption into the time of current management teams, who already have enough on their plate. It often breeds unease because fundamentally, if successful, it will entirely change their roles, change how they are measured and how they are paid. So, the reality is that you can’t look outside the organisation for help because you’re left with an expensive slide-deck with no implementation, and you can’t go inside because you’ll only get ‘lip-service’. Now what?
Creating your own rebel force
My experience is that the most effective way to create a successful programme of disruption is with option three - by creating an internal change management group. This internal group of rebels encourages change from within and is the key to successful disruption.
To truly maximise its capabilities, this band of insurgents must be led by the right person. Specifically, it must be driven by a visionary leader that has the respect of the C-suite. After all, this person must be an evangelist who can sell the value of change to the very people who will be the most disrupted – your current Exec team!
And they must be thick-skinned. After all, they’ll likely get complaints and be challenged by many at the start of the journey, because they’re going against the grain. To allow the project to properly come to fruition, CEOs must also give these change management teams air cover and a ring-fenced budget to offer the financial freedom needed to make change happen. They’ll need it.
To put it into an analogy – think of your organisation as a caterpillar. How do you get a caterpillar to move? You can’t get it to budge by pushing it from behind or pulling it from the front, as it will get scared and curl into a ball. Instead, to avoid startling it, all you need to do is get the front feet going by attracting it towards a juicy lettuce. The concept is the same for a large organisation. The whole company doesn’t need to be shaken-up at once, just identify the people who are the most likely to want change and offer them the ‘lettuce’ – whatever that may be. Once the first people have been convinced, the next set of feet will follow and so on. The caterpillar will begin to move.
Finally, you must kill this rebel group after 18 months. By then you will know whether you’ve been successful or not: either the disruption will have become operational and the group will naturally no longer be needed, or it will have failed. It must be said, however, that the first six months will always be the worst. That’s the time when not much happens, everything seems pointless, and nobody sees the value of the group. You must persevere.
By embedding the philosophy of disruption at the heart of your organisation you can dodge the disruption arrow and turn the threat of IoT into a business opportunity. After all, the possibilities of this technology are truly incredible. Over to you.
UK grocery chain Tesco generates $150 million in supply chain efficiencies via analytics. Oil and gas giant BP saves $7 billion a year with Internet of Things (IoT) sensors. Hershey saves $500,000 per batch of candy with machine learning and IoT. These examples of bottom-line benefits share a common thread — a maturing approach to digital business.
By Dave Aron, Distinguished Research VP, Gartner.
These businesses show that instead of simply translating a traditional process into a digital equivalent, they changed their process by optimising it (e.g. supply ordering) or doing something that wasn’t possible before (e.g., IoT-enabled predictive maintenance). However, the problem we face is that too many businesses set their aims too low when it comes to digital ambition.
While 82% of CEOs have plans to transform, only 22% understand the need to make significant changes to their business model - and that leaves 60% of organisations digitally vulnerable. One source of digital vulnerability is conventional thinking, and this mindset prevents leaders from embracing a transformative approach.
We outline below six outdated ideas that are hampering digital growth:
Outdated Idea 1: IT alone is responsible for digital
C-suite leaders tend to look to their CIOs for guidance on how to integrate digital approaches throughout the organisation. While the CIO has a critical role to play, IT can’t drive digital transformation alone any more than marketing can be solely responsible for the customer. Senior leadership determines corporate strategy, and then IT — along with its peer functions — pursues departmental priorities to support it.
Digital is not a self-contained project or initiative within IT, or elsewhere. Digital technology plays a role in all business activities, from who makes decisions and how they make them to the resources employees can use to collaborate and do their jobs.
We should promote holistic ideas of what digital means for the organisation and encourage business leaders to consider digital as part of every decision or initiative.
Outdated Idea 2: Global roles are fixed
Geographic stereotypes like “East Asia is the place to go for manufacturing and India for business process outsourcing” are outdated and limiting. Global economic shifts are spreading wealth, talent and industrial capabilities around the world. Successful digital businesses will think creatively about location. They’ll reach across geographic boundaries and transcend geographic stereotypes to access the talent, resources and partnerships that drive success.
We need to adopt a mindset for a multipolar world, promote broad diversity in teams and invest in multicultural awareness for all employees.
Outdated Idea 3: Growth evolves from core positions
Strategists in the past have pursued organic growth through product or brand extensions that leveraged existing core competencies, but with digital capabilities evolving every day and data expanding the possibilities, we can go much further.
Just as Amazon Web Services grew out of the company’s in-house data centre, so too can legacy firms build capabilities that evolve into new business lines.
To do this, teams should look for new markets where in-house digital capabilities and data resources unlock potential opportunities. Exploring partnerships with organisations that have complementary skills for these new markets is a good place to start.
Outdated Idea 4: CX happens inside the organisation’s boundaries
Customer experience (CX) has long focused on customer interactions with a product or service; however, boundaries are becoming blurred as people interact with both physical and digital platforms as part of a holistic customer experience.
Take transportation as an example: A self-driving car shared by multiple owners will someday charge by distance travelled, calling for each owner to pay their share of insurance, tolls, fuel and wear-and-tear. And fees will transfer automatically via digital currency exchange. To the customer, it’s all part of getting to work. To companies, the intersection of mobility with insurance and banking requires an expanded view of the customer experience. Such cross-industry experiences will be the norm for digital businesses of the future.
The need here is to think of CX as an integrated cross-market effort. Explore opportunities through the lens of customer behaviour and accept sector overlap as a natural consequence of how customers operate in the real world, then look for opportunities to exploit it.
Outdated Idea 5: Business success is only about processes
Once upon a time, efficient execution of core business processes determined strong, or weak, performance - think automotive manufacturing or book printing. Companies that still operate with such a process-focused mindset regard good business activities as predictable and repeatable.
A digital mindset requires openness to spontaneity and sometimes one-off opportunities to solve customer problems. Process thinking is not totally irrelevant in a digital context and has its much-needed place, but it has to serve products capable of flexibly serving customer needs and behaviours.
Leaders must communicate how placing too much emphasis on process can lead to rigid approaches incapable of capturing new opportunities. Integrating process and product teams to design digital products and services that take both sides into account will be a good first step.
Outdated Idea 6: Agile practices make for agile organisations
Agile development enables tech teams to deliver new functionality and still pivot quickly when new needs arise. The proven benefits have encouraged ‘agility practices’ to spread to non-IT departments like marketing and operations. However, agility itself can’t operate solely at the level of process or function. Digital businesses require agility to be applied as a mindset across strategy, culture, investments and other areas.
Overall, for agility, you must embrace a product management approach that encourages fast, incremental deliverables and the ability to shift and adapt as necessary.
The current Covid-19 pandemic has served to focus attention on the health of the healthcare sector, as never before. Information technology has a vital role to play. Here we cover a range of viewpoints and news, much of it relating to the ongoing battle with the coronavirus. Part 5.
Kaleyra supports the Italian Red Cross (Croce Rossa Italiana, CRI) with a free text-message service designed to help volunteers and citizens dealing with the emergency caused by the spread of COVID-19. Through a single number, 4353535, the CRI can recruit health workers in the affected areas, effectively manage queries from citizens, and communicate more quickly with all its volunteers in urgent situations. The toll-free number can be reached via all local operators, facilitating the booking of essential medical services through text messages.
Apart from these specific use cases, the CRI will also be able to use Kaleyra's versatile platform for other needs to raise awareness and provide necessary care to those affected by the Coronavirus outbreak.
"There are many challenges that this pandemic poses to the community we live in, and we want to do everything we can to help. In these trying times, the Red Cross is doing an incredible job, and we thought of putting our platform at their disposal, to support them in their fight against the Coronavirus," explains Dario Calogero, CEO, and founder of Kaleyra. "If there are other organizations or associations out there who need a similar solution, we want them to know that we are here to help. We are prepared to use our global presence to support their initiatives to control the spread of the Coronavirus disease."
The number, 4353535, is already active and usable through all national telephone carriers. The Red Cross is also using it to support doctors, nurses and health workers to enhance the emergency response system in the Lombardy region. Doctors and nurses who want to offer their availability simply send a text message reading “immediate doctor availability” or “immediate nurse availability”, and the system will automatically send them the instructions to follow.
The text-message solution lends itself to a number of other use cases, such as the decongestion of the toll-free number, which recently faced an exceptional call load. The Kaleyra platform, in this case, helps to manage such high traffic volumes better. If a citizen finds the toll-free number to be busy, they can send a request via text-message. The text-message will be immediately processed by Kaleyra's CPaaS platform that will then direct the request to the helpline organization. The organization, in turn, contacts the user as soon as an operator is available.
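To make the call-deflection flow above easier to picture, here is a hypothetical sketch of an inbound-SMS webhook that queues a callback request when the voice line is busy. The endpoint, payload fields and in-memory queue are all invented for illustration; this is not Kaleyra’s actual CPaaS API.

```python
# Hypothetical sketch of the routing described above: an inbound-SMS webhook that
# files a callback request for the helpline. Not Kaleyra's actual API.
from flask import Flask, request, jsonify

app = Flask(__name__)
callback_queue = []   # stand-in for whatever system the helpline organisation actually uses

@app.route("/inbound-sms", methods=["POST"])
def inbound_sms():
    payload = request.get_json(force=True)
    callback_queue.append({
        "from": payload.get("from"),        # citizen's phone number
        "text": payload.get("text", ""),    # e.g. "immediate doctor availability"
    })
    # An operator works through the queue and calls the citizen back when free.
    return jsonify({"status": "queued", "position": len(callback_queue)})

if __name__ == "__main__":
    app.run(port=8080)
```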
The service can also be used in particularly critical situations by the Red Cross, such as to send urgent communication instantly to all its 160,000 volunteers in the Italian territory.
The idea was conceptualized by Dario Calogero, CEO and founder of Kaleyra, and his son Pietro while watching the press conferences given by the authorities about the Coronavirus emergency. The Italian Red Cross at the time was facing high call volumes with the rising number of Coronavirus cases, and its systems were overburdened.
Calogero, his son Pietro and key project contributors from the Kaleyra team developed the solution during the days immediately after the Italian Coronavirus lockdown. In just a couple of days, the Kaleyra and CRI teams worked relentlessly to create the pro-bono text-message service that allows the Italian population to receive services from CRI.
"The first was created to request services such as the temporary home presence of a volunteer or for the delivery of groceries or medicines," commented Pietro Calogero. "The second service works as an auto-responder to requests for collaboration addressed to doctors and nurses, and the third is an emergency communication platform between the CRI and its volunteers in the area."
The service, activated on Sunday, March 15th, 2020, has already seen more than 7,500 text messages passing through the platform in the first two weeks of its operation. Having coordinated the entire project from his residence in New York City, Dario added, "As a former ambulance volunteer, I am particularly proud of how our entire team in Milan is working smart, volunteering for this project, and doing an amazing job. The strength always lies in the crew. This initiative might be a small contribution, but when many businesses come together for the greater good, society really benefits. It is time for all of us to work together and use every solution we have to win this battle against Coronavirus."
Exscientia to screen nearly every known approved and investigational drug - 15,000 clinical molecules - against key COVID-19 drug targets to progress rapid treatments.
Exscientia, the leading artificial intelligence (AI) driven drug discovery company, has announced a joint initiative with Diamond Light Source, the UK's national synchrotron facility and Calibr, a division of Scripps Research (USA), to progress compounds that could rapidly become viable drugs for the rapid treatment of COVID-19.
Through this alliance, Exscientia has gained access to Calibr’s world-leading collection of 15,000 clinically-ready molecules. This collection includes launched drugs, additional compounds that have already been shown to be safe in humans, and promising compounds that have passed pre-clinical safety studies. The collection, which has been funded by the Bill & Melinda Gates Foundation, will be shipped from Scripps Research in La Jolla, California to Exscientia in Oxford, UK.
Exscientia will first apply its advanced biosensor platforms to rapidly screen the complete collection against key viral drug targets of SARS-CoV-2, the virus responsible for COVID-19. The three prioritised targets are the 3CL protease, the NSP12-NSP7-NSP8 RNA polymerase complex (both vital components of viral replication) and the virus's spike protein, which interacts with the human cell receptor ACE2 in order to gain entry to human cells.
Dr Martin Redhead, Head of Quantitative Pharmacology at Exscientia, who will carry out the analysis, commented: “Given the ever-expanding scale and rapid speed at which COVID-19 is spreading, the initial priority is to search for any existing drug that can be repurposed to protect the human population. Then, as we move forward, we can design superior molecules with our AI-Design systems to work even more effectively against the virus. The Scripps Research collection allows us to explore both of these important objectives.”
Professor Dave Stuart FRS, Director of Life Sciences at Diamond Light Source and MRC Professor of Structural Biology at the University of Oxford, is co-ordinating the efforts along with his colleagues, Dr Martin Walsh and Dr Jonathan Grimes. Prof. Stuart commented: “The drugs we are testing have either been approved by the FDA for other diseases or have been extensively tested for human safety. By being able to start with existing high-quality molecules, we can move more rapidly to clinical trials, and potentially an initial treatment for patients”. The three UK collaborators (Diamond Light Source, Oxford University and Exscientia) have been working together since January, firstly to produce viral proteins and initiate drug screening, and then to investigate how anti-viral drugs bind at atomic detail in order to provide high-quality seed data for Exscientia's AI drug design algorithms.
Exscientia's mission is to make safer, more sophisticated drugs available to all - more quickly and efficiently than ever before - using its AI drug discovery platform. At this critical time globally, the company believes that a dual strategy of first identifying opportunities among existing known drugs and then working on new optimised molecules is the optimal path to start protecting the human population from this invasive disease.
Robots are playing an important role in fighting the coronavirus SARS-CoV-2 around the globe. The UVD disinfection robot, for example, has been in high demand since the outbreak of the COVID-19 pandemic. Chinese hospitals have ordered more than 2,000 UVD robots from Danish manufacturer Blue Ocean Robotics. They started destroying viruses in Wuhan, where the global pandemic began, and the units now operate in more than 40 countries - in Asia, Europe and the United States. UVD uses ultraviolet light (UV-C) to kill harmful microorganisms, and the robot is the current holder of the IERA innovation award from the IEEE and the International Federation of Robotics (IFR).
“We are now helping solve one of the biggest problems of our time, preventing the spread of viruses and bacteria with a robot that saves lives,” says Claus Risager, CEO of Blue Ocean Robotics. “The immediate demand has increased a lot with the outbreak of COVID-19. Existing customers buy many more units than before, and many new customers are ordering the UVD robots to fight coronavirus and other harmful microorganisms.” This is an ongoing success story for the IERA award-winning robot: Blue Ocean Robotics has seen growth in sales of more than 400 percent annually over the last two years.
Robot moves autonomously
The Danish robot moves autonomously around patient rooms and operating theatres, covering all critical surfaces with the right dose of UV-C light in order to kill specific viruses and bacteria. The more light the robot delivers to a surface, the more harmful microorganisms are destroyed; in a typical patient room, 99.99% of all viruses and bacteria are killed within 10 minutes.
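The underlying dose-response arithmetic is straightforward. The sketch below estimates the exposure time needed for a 99.99% (4-log) reduction from an assumed surface irradiance and an assumed per-log dose; both numbers are placeholders for illustration, not Blue Ocean Robotics specifications.

```python
# Illustrative UV-C dose calculation: exposure time needed for a 99.99%
# (4-log) reduction. The irradiance and D90 values below are assumed
# placeholders, not manufacturer specifications.

def required_exposure_seconds(irradiance_w_m2: float,
                              d90_j_m2: float,
                              log_reduction: float = 4.0) -> float:
    """Time needed so that dose = irradiance * time achieves the target
    log reduction, where each D90 of dose removes one log (90%)."""
    required_dose = log_reduction * d90_j_m2      # J/m^2
    return required_dose / irradiance_w_m2        # seconds

if __name__ == "__main__":
    # Assumed values for illustration only.
    irradiance = 0.2   # W/m^2 at the surface
    d90 = 30.0         # J/m^2 per 1-log reduction (organism-dependent)
    t = required_exposure_seconds(irradiance, d90)
    print(f"~{t / 60:.1f} minutes for a 99.99% reduction at this irradiance")
```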
Robot helps at airports, schools or office spaces
“UVD is a supplementary device which assists the cleaning staff,” said Claus Risager. For safety reasons, it works on its own and automatically disengages the UV-C light if someone enters the room. The collaborative robot can be used in various enclosed spaces – not only in hospitals. The technology also works in environments such as office spaces, shopping malls, schools, airports and production facilities.
“Robots have a great potential of supporting us in the current severe corona pandemic,” said Dr Susanne Bieller, General Secretary of the International Federation of Robotics.
“They can support us in healthcare environments, but also in the development, testing and production of medicines, vaccines and other medical devices and auxiliaries. Disinfection tasks performed by UVD units, or the contact-free distribution of hospital material in quarantine zones by Photoneo’s mobile robot Phollower, are just two of many examples.”
Medical robots now represent a well-established service robot market with considerable growth potential. Sales of medical robots increased by 50% to 5,100 units in 2018, according to statistics published in World Robotics by the IFR.
Anaqua’s ideaPoint, a provider of medical affairs and innovation management solutions, has announced a strategic partnership with clinical data company WideTrial to facilitate greater access to investigational medicines for physicians and patients dealing with COVID-19 worldwide.
The project’s objective is to establish a single centralized hub through which creators of potentially effective COVID-19 therapeutics can make their products available to large numbers of people under applicable Expanded Access regulations. Expanded Access Programs (EAPs) are specially authorized clinical trials for the “treatment-use” of pre-market products by patients and their doctors who cannot participate in research trials. The new platform is designed to allow interested healthcare providers (including non-traditional trial sites, community hospitals, and clinics) to sign up online to participate in the EAPs of their choice.
Supporting the Expanded Access platform is ideaPoint, a leader in web-based inquiry management software for life science and healthcare uses. The company has delivered physician engagement and clinical data collaboration tools in several therapeutic areas including Immuno-Inflammation, Infectious Disease, Oncology, HIV/AIDS, Respiratory, Vaccines and Rare Diseases through global EAPs for large and mid-size pharmaceutical companies.
“With our expertise in providing software to manage and track all aspects of a global EAP and WideTrial’s clinical operations and regulatory expertise, we hope to help and support companies with potentially promising investigational medicines to fight COVID-19 and make medicines available to large portions of the global patient population as quickly as possible,” said Scott Shaunessy, Founder and CEO of ideaPoint. “We have combined forces to offer a unified resource for rapid deployment of an EAP.”
Bob Romeo, CEO of Anaqua, added: “As a global company, we deeply understand and are committed to supporting initiatives that may alleviate the effect of this pandemic worldwide. Putting people’s health first, we want to support our global pharmaceutical clients and other pharmaceutical companies in our collective efforts to urgently fight COVID-19 through supporting this modernized Expanded Access solution.”
WideTrial is a specialized clinical trial company created exclusively to sponsor large-cohort EAPs under agreements with the originating life science companies. WideTrial’s innovation includes an in-house clinical operations team, Expanded Access regulatory specialists, a unitized per-patient cost-recovery structure, and CDASH/CDISC compliant data capture tools that help streamline the workflow of busy clinicians.
Commenting on the importance of effective EAPs, Jess Rabourn, CEO of WideTrial said: “Applying traditional clinical trial business models to Expanded Access is a colossal mistake and has served only to frustrate healthcare delivery systems, drug companies, and patients. We built WideTrial to re-arrange that landscape into one that frees Expanded Access to function as it was intended, for the vast numbers of patients and doctors who cannot access a new medicine through its research trials.”
New technology is being introduced faster than ever before and this is fundamentally recalibrating elements of our society.
By Daniel Ball, business development director, Wax Digital.
Before the cloud really took off, implementations of procurement software involved on-premise, standalone packages that were costly and time-consuming. Digital transformation helped speed up the implementation of many of these processes, increasing business efficiency and productivity. Yet, this also presented additional challenges that were overlooked because of the perceived value of change.
A key challenge posed by the rapid development and application of technology is the necessity to make sure employees within the business have the skills and knowledge to use it effectively. According to the Centre for Economics and Business Research, 12% of the UK population will lack digital literacy by 2028, which will equate to approximately seven million people. However, this issue can be addressed by investing in this area; upskilling, training and nurturing staff could see UK businesses gain around £1.5 billion.
Embedding digital technology within the organisation
We recently surveyed 200 senior professionals in IT, HR, finance and procurement to understand how they felt about digital transformation within their respective organisations. A huge 85% of respondents believed their job role had changed because of new technology, and over half claimed they hadn't received enough training on how best to utilise it. Yet most of these professionals did recognise the value of the technology, with 70% saying it has made their business more efficient.
To effectively carry out the digital transformation process, businesses need a change of culture. This could mean re-evaluating how your organisation operates as well as advocating an inclusive strategy to help everyone get the right training, support and understanding of how you intend to use technologies.
Here are five tactics that you can use to make sure everyone in the business is on the same page when it comes to embracing digital technology:
1. Enable digital confidence - Whether the goal is to increase sales or boost productivity, all employees should be aware of the benefits and impacts of new technology. Employers should communicate this in a range of ways, from face-to-face meetings to electronic reminders. Employees should feel comfortable asking questions or raising concerns, as this will establish clear lines of communication and empower staff to fully understand how to make the most of digital technologies.
2. Promote collaboration – By getting people to work closely together, businesses will encourage a productive culture and mindset. This is imperative for the successful implementation of a digital transformation project. For instance, procurement and finance can work more closely through eProcurement technology, helping finance professionals clearly define what procurement can spend, or where cuts are needed to meet profit margins for the year.
3. Build training packages – Training is imperative to ensure that staff can adopt new technology effectively. This can be done by designing a comprehensive training and development programme that accommodates multiple levels of expertise, and by assessing staff needs on a role by role basis. After you have done this, the next consideration is how to deliver the training. This can be achieved through big show and tell sessions, one-to-one workshops or using a range of resources such as video or written materials to teach staff about digital technology and how it will add value to their roles.
4. Drive an innovative culture – Setting up idea-generation competitions or allowing creative time for teams to think of new ideas will promote an innovative culture within businesses. Though this will not happen overnight, it encourages staff to think differently about ways of doing things.
5. Digital transformation champion – Having an expert to help guide the process will contribute to successful implementation of digital technologies. This person could be appointed internally or externally, and there are pros and cons to both: someone from outside the business will be more objective, whereas someone from within will better understand the ins and outs of the organisation.
Digital transformation continues to be important in helping businesses evolve and grow. It's important that all employees understand the benefits associated with new technology, as this will maximise productivity and increase efficiency. However, it's imperative that businesses do all they can to ensure everyone within the organisation is in a good position to take advantage of these digital technologies.
Modern businesses are under pressure to deliver when it comes to the experience they offer their employees and customers when accessing applications in the cloud. Digital-savvy users expect seamless and consistent access to applications and services, regardless of where they are connecting from or which device they use. While the emergence of the internet as the new corporate network has myriad benefits for all concerned, it requires a fundamental overhaul of traditional network security.
By Nathan Howe, director of transformation strategy at Zscaler.
Coined by Gartner, secure access service edge (SASE) is a new security framework that has been designed with the requirements of the digital workplace in mind. Put simply, SASE is about making sure traffic is secured throughout its entire journey from a device to the requested destination application, regardless of where the user is or what network they are on. And this is the crucial point: the ‘edge’ – where services are provided – is where the user is going, rather than where they are. Formerly, security was provided at the corporate network or the data centre, or via the extension of an MPLS connection. In contrast, with the SASE model, digital businesses must provide security at all times regardless of the location of the user.
With growing staff mobility and the adoption of the cloud, boundaries for users have steadily broken down in recent years, and there will be fewer and fewer borders going forward as the traditional data centre and corporate network structure becomes obsolete. The SASE concept can be seen as a reaction to cloudification as it becomes critical in modern times to provide a secure path on the way from the user to the service they wish to consume, no matter where the applications are located.
The application landscape continues to grow and become ever more complex with multicloud scenarios being adopted. As such, the ability to deliver a simplified, streamlined service will become a key competitive differentiator for businesses. This service must be consumable by anyone from anywhere, regardless of the device used to connect, without compromising on security in the process. Indeed, at the heart of the SASE concept is the idea that it’s the security of the journey - not just the destination - that’s most important. And that’s where the notion of a transit security cloud comes in. As opposed to a destination cloud, which is being consumed to access the desired application, a transit cloud provides a security service along the journey and security policies are applied to the traffic between the user and the application. Rather than security being located in a physical location, a cloud service is always on and can be anywhere to secure the mobile user.
As Gartner stated: “In a modern cloud-centric digital business, users, devices and the networked capabilities they require secure access to, are everywhere. As a result, secure access services need to be everywhere as well.”
These days, it's often the apps themselves that dictate the type of cloud service used based on specific requirements, which is why multicloud scenarios are increasingly common in enterprises. A top priority for companies is to offer simplicity for users as a reaction to the growing complexity of the application landscape. With applications moving to the cloud, the network-less network can only become a reality when the access path becomes seamless for the user. Ideally, users won’t even realize where an application they are trying to access is being hosted.
Once the transition to the cloud is made, the network set-up should become easier to manage rather than more complex. A transit cloud for security control provides the route to greater simplicity. Physical infrastructure requirements are reduced when security controls are based purely on the identity of the user and implemented in a transit layer. The transit cloud validates the user based on his or her identity, confirms the access is secure and lets the user through to the application, regardless of the location of the cloud or data centre. Such a transit cloud service provides the security controls in-path, enabling the cloud-based future for enterprises in which the user can follow the path of least resistance on the way to their destination.
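To make the idea concrete, here is a minimal sketch of an identity-first access decision of the kind a transit layer might apply. The policy table, attributes and function names are illustrative assumptions, not any vendor's implementation.

```python
# Illustrative sketch of a transit-layer access decision: policy is applied to
# the user's identity and the requested application, never to a network
# location. All names and the policy table are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    user_groups: set
    device_trusted: bool
    application: str          # e.g. "hr-portal", wherever it happens to be hosted

# Policy: which groups may reach which applications (assumed example data)
POLICY = {
    "hr-portal":   {"hr", "it-admins"},
    "finance-app": {"finance"},
}

def decide(req: AccessRequest) -> bool:
    """Allow only if the identity is entitled to the app and the device is trusted.
    Note that nothing here depends on the user's network or the app's location."""
    allowed_groups = POLICY.get(req.application, set())
    return req.device_trusted and bool(req.user_groups & allowed_groups)

if __name__ == "__main__":
    req = AccessRequest("alice", {"hr"}, device_trusted=True, application="hr-portal")
    print("forward to application" if decide(req) else "deny")
```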
The current Covid-19 pandemic has served to focus attention on the health of the healthcare sector, as never before. Information technology has a vital role to play. Here we cover a range of viewpoints and news, much of it relating to the ongoing battle with the coronavirus. Part 6.
We’re all in the midst of an extraordinary moment—not only for our teams, colleagues, and customers, but for the world at large. The impact of the novel coronavirus (COVID-19) has created many new challenges, and for many of us, has required that we adopt new ways of working.
All over the world, businesses and users depend on Google Cloud to help them stay connected and get work done. And we take this responsibility very seriously. Today, I want to share many of the ways we’re working to support businesses, government institutions, researchers and one another.
How we’re helping workers stay safe and productive
Empowering remote workers to stay connected
As more and more businesses rely on connecting an at-home workforce to maintain productivity, we’ve seen surges in the use of Google Meet, our video conferencing product, at a rate we’ve never witnessed before. Over the last few weeks, Meet’s day-over-day growth surpassed 60%, and as a result, its daily usage is more than 25 times what it was in January. Despite this growth, the demand has been well within the bounds of our network’s ability.
Because we know how critical keeping colleagues connected and engaged is for business continuity, we’ve made the advanced features in Google Meet free to all G Suite and G Suite for Education customers globally. We’ve also made Meet Hardware available in additional markets, including South Korea, Hong Kong, Taiwan, Indonesia and South Africa, to ensure customers have the right hardware to complement their Meet solution.
We’ve heard from a number of enterprises that G Suite has helped them make the transition to remote work. The MACIF Group, a leading French mutual insurance provider, was able to ensure business continuity and maintain the link between its employees with G Suite, already deployed to more than 8,000 employees. MACIF staff shifted from in-person meetings to more than 1,300 Google Meet video meetings daily, and the use of collaborative virtual rooms facilitated important human contact and responsiveness in an unexpected period of remote work.
Korean gaming company Netmarble told us G Suite helped them make the company-wide transition to working from home smoothly, saying, “With video conferencing through Google Meet, collaboration via Google Docs, and all data accessible on Google Drive, there's really no difference when working from the home or the office.”
Providing training opportunities to upskill employees
As people transition to remote work and learning in response to COVID-19, many are looking to build their skills and increase their knowledge while at home. To help, we’re offering our portfolio of Google Cloud learning resources, including our extensive catalog of training courses, hands-on labs on Qwiklabs, and interactive Cloud OnAir webinars at no cost until April 30. Anyone can gain cloud experience through hands-on labs no matter where they are—and learn how to prototype an app, build prediction models, and more—at their own pace. Teams can also build their skills through our on-demand courses on Pluralsight and Coursera. Our most popular learning paths, including Cloud Architecture and Data Engineering, are now available for all.
How we’re helping public sector agencies and educational institutions
Supporting government efforts to fight COVID-19
We’re working with governmental organizations around the world on projects such as developing AI-based chat technology to help overtasked agencies respond more quickly to citizen requests; bolstering government websites that get critical information to the public with free content delivery network (CDN) and load-balancing services; and providing services and tools to track the spread of the virus.
In the U.S., we are working with the White House and supporting institutions to develop new text and data mining techniques to examine the COVID-19 Open Research Dataset (CORD-19), the most extensive machine-readable coronavirus literature collection to date.
We’re also working with state agencies like the Oklahoma State Department of Health on solutions for medical staff to engage remotely with at-risk people who may have been exposed to the coronavirus. Within 48 hours, the department deployed an app that allowed medical staff to follow up directly with people who reported symptoms and direct affected citizens to testing sites. We worked with our partner MTX Group to create the app and are now deploying it with governments in Florida, New York, and many other states so they can use our tools for insights into how the virus’s spread is affecting citizens and state healthcare systems.
Internationally, we’re working with a number of governments to provide collaboration solutions and tools to track the spread of COVID-19. For example, in Spain, we’ve set up an app for the regional government in Madrid to help citizens perform self-assessments of coronavirus symptoms and offer guidance, easing the demands on the healthcare system. The Spanish national government is also planning to deploy this app across other regions in the country in the coming days. In Italy, the more than 70,000 employees working in the Veneto region’s healthcare system are relying on G Suite to maintain their high level of service and patient care during the COVID-19 crisis. This week, the Australian Government Department of Health launched its Coronavirus Australia App. Built on Google Cloud, the app offers real-time information and advice about the fast changing COVID-19 pandemic.
And in Peru, the judiciary is using Google Meet to continue operating during the nationwide quarantine. Through video conferences, it is carrying out both internal meetings and hearings. This means attorneys, lawyers and judiciary clerks don't have to physically attend court, helping to keep the virus from spreading while maintaining the administration of justice in the country.
Assisting educational institutions with content, tools, and distance learning
Educational institutions have been particularly impacted by the coronavirus, and we’re undertaking a number of initiatives to support them, ranging from providing free content and educational tools to supporting distance-learning initiatives that help educators continue teaching students who are at home.
For example, in recent weeks, we rolled out Google Classroom to more than 1.3 million students in New York City so they can continue their school year virtually at home. And we continue to provide critical infrastructure for nonprofit educational organization Khan Academy, which supported 18 million learners per month before the crisis. Since school closures began, Khan Academy is seeing record growth across all metrics: Time spent on the site is approximately 2.5 times normal, student and teacher registrations are up roughly six times from this period last year, and parent registration is up 20 times normal.
In Malaysia, where schools are closed in response to COVID-19, we've been hosting daily webinars for teachers, bringing them up to speed on how they can leverage Google tools to teach from home.
And in Indonesia, we provided the technology infrastructure for online education services platform Ruangguru, which opened a free online school service in response to school closures in Indonesia and was tapped by more than a million learners on day one.
In Italy, we worked with the Italian Ministry of Education—the governing body accountable for millions of Italian schoolchildren—to rapidly shift students entirely to remote learning. Our teams banded together, and engineers worked around the clock to speed up the enrollment process, even making a virtual help desk available for timely activation and support. As a result, the Ministry of Education was able to help bring millions of students online in a matter of days.
How we’re helping other organizations
Supporting researchers, hospitals, and more
Healthcare is the industry most impacted by the pandemic, and technology can be a critical tool to help. We're providing solutions for the health research community to identify new therapies and treatments, and assisting hospital systems with tracking the pandemic and providing telehealth and remote patient monitoring solutions.
In health research, we’re making several COVID-19 public datasets free to query like Johns Hopkins Center for Systems Science and Engineering COVID-19 data, the U.S. Census Bureau's American Community Survey data, and OpenStreetMaps data. We’re also providing $20 million in Google Cloud credits to academic institutions and research organizations as they study potential therapies and vaccines, track critical data, and identify new ways to combat COVID-19. Researchers at accredited academic institutions can submit a proposal to the COVID-19 High Performance Computing Consortium, while other researchers who need Google Cloud capacity for work on COVID-19 can submit proposals directly to us.
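As a rough illustration of what querying one of these public datasets looks like, the snippet below uses the standard BigQuery Python client; the dataset and table name are assumptions for illustration and should be checked against the current public dataset catalogue.

```python
# Sketch: querying a COVID-19 public dataset with the BigQuery Python client.
# The dataset/table and column names below are assumptions for illustration;
# consult the public dataset catalogue for the tables actually available.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
    SELECT country_region, SUM(confirmed) AS confirmed
    FROM `bigquery-public-data.covid19_jhu_csse.summary`   -- assumed table name
    GROUP BY country_region
    ORDER BY confirmed DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.country_region, row.confirmed)
```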
Last week, we joined the COVID-19 Healthcare Coalition, a group of healthcare, technology, and research organizations who have come together to share resources in order to fight the virus. Coalition members include athenahealth, Mayo Clinic, University of California Health System, and others. As part of the coalition, we're helping build a data exchange that allows coalition members to safely and securely share and analyze data, ultimately enabling many of the world's top researchers to work together on shared data.
We're also supporting hospitals in several ways. In Asia, since the COVID-19 outbreak, more people have been turning to Doctor Anywhere's telemedicine services, opting for video consultations with locally-registered doctors and medication delivered to their doorstep. According to Rishik Bahri, Chief Technology Officer of Doctor Anywhere, “We've seen a more than 70% increase in traffic on our telehealth application since the coronavirus outbreak, and it's more important than ever to deliver frictionless access to users and partners alike on the Doctor Anywhere app.”
In the UK, the NHS is exploring the use of G Suite to allow them to collect critical, real-time information on hospital responses to COVID-19, such as hospital occupancy levels, and accident and emergency capacity.
Helping retailers, manufacturers, and other businesses handle demand
Businesses globally are facing unprecedented challenges in terms of forecasting demand from customers and the impact of COVID-19 on their overall supply chains. To help on the demand side, we’ve activated our Black Friday/Cyber Monday Protocol for retailers and other businesses seeing exponential traffic increases—bringing professional services, technical account managers and Customer Reliability Engineering resources together to support, plan and react to user demand during these peak times.
One of Canada's largest retailers, Loblaw, asked for our help to support an increase in traffic to its PC Express grocery delivery and pickup platform. The Google Cloud team provided them with the resources to ensure they could scale, helping people get food and other critical goods during this time. As Hesham Fahmy, GM at Loblaw Companies Limited, put it: “The Google Cloud team has been a fantastic partner during this ever changing time. We truly appreciate the level of ownership, care and help Google has been providing. It is for a great cause, to make sure Canadians don't have to stress about their essential needs in these uncertain times.”
German luxury fashion retailer Breuninger employs 5,000 staff and decided to temporarily close its 11 department stores and focus on its online shop only. With all staff suddenly working remotely, the company faced big challenges, as its existing video conferencing tools proved unable to deal with the sudden increase in usage. Google Cloud helped to get more than 1,100 Breuninger employees live on G Suite within 48 hours, with more employees to be added over the next few days. In addition, Breuninger is exploring how to interact with customers through new digital services enabled by G Suite.
Providing a stable platform for telecom, media and entertainment
Communications and entertainment companies are facing challenges as varied as the companies themselves. While the telecommunications industry is working hard to keep people connected, the media industry has seen demand increase as people look for news and entertainment, and the video game industry has experienced a large spike in usage as more people stay home. We are working with some of the largest news agencies and game publishers so that people can stay informed and have some fun during this challenging time.
Telecommunications providers are leveraging our technology to deliver services as seamlessly as possible. For example, Vodafone is using GCP to analyze both network traffic and traffic prioritisation to direct bandwidth to users that need it most.
In media, we helped the broadcast team at Yahoo Finance transition 150 reporters, producers, anchors and technicians from a legacy TV studio to a 100 percent work from home model overnight. Within the span of a few hours, our team worked with them to set up a seamless eight hours of live broadcast, via Google Meet, on air from locations across the U.S. and London, providing people with critical news and information in this particularly uncertain time.
In gaming, Unity Technologies, which recently partnered with the World Health Organization (WHO) on a new #PlayApartTogether initiative, has seen player demand for online games significantly increase due to COVID-19 social distancing mandates. Despite these huge spikes in gaming activity, Unity’s Multiplay server hosting solution has so far not seen any downtime. Unity's partnership with Google Cloud has helped them ensure real-time online games stay up and running and continue to deliver great player experiences, regardless of demand surges.
Looking ahead
Although we’re all facing an extraordinary moment of uncertainty, I’m proud to report that at Google Cloud, we’re prepared—we’ve activated remote customer service agents and our enhanced support protocol for peak periods, we’ve detailed plans to manage our capacity and supply chain, and we’ve rigorously tested the resilience of our infrastructure and processes. All of these preparations have been put in place to ensure we can best support our customers during a time like this.
We’ll continue to work tirelessly on these and other initiatives to support our users, customers, and communities in this time of need. I’m so grateful to the many extraordinary Cloud Googlers that have worked so hard to provide so many capabilities for our customers.
Edge computing is taking the sector by storm, notably for its capabilities in creating new ways to maximise operational efficiency in addition to improving safety and ‘always on’ performance and availability.
By Jeremy Kivi, Senior Portfolio Marketing Manager at AVEVA.
For many, edge computing is credited as the technology driving the automation of many core business processes. From speed and latency to cost savings, the edge is bringing greater scalability and reliability to industrial and enterprise organisations, supported by the cloud.
Alongside the flexibility of the edge comes a degree of complexity, so it is often adopted as part of a wider enterprise solution. From IIoT architectures to cloud-driven remote edge management, each component of an ‘edge to enterprise’ solution plays an integral part in daily operations. However, many organisations still fail to recognise that it is the seamless integration of these edge computing elements that form a holistic overview of an operations lifecycle and, in turn, can improve visibility and agile scalability.
The edge as we currently know it is already bringing demonstrable benefits, alongside several challenges. But as it undergoes increasing levels of collaboration with the cloud, what is edge computing set to bring in 2020 and beyond?
Enabling different architectures
As with any new technology, increased adoption of the edge will bring the introduction of new hardware that is unfamiliar to many organisations. Alongside this comes a range of new communication protocols that take time and, often, significant investment in staff training and the implementation of new resources.
The key to optimising edge technologies is embracing their flexibility to communicate with different architectures and hardware. Given the current balance between new and innovative technologies and legacy hardware systems, we're likely to see big-name equipment manufacturers such as Schneider Electric, Siemens and Allen Bradley help customers bridge the software gap created by edge and cloud infrastructure.
Data insights and new applications
Edge computing is both praised for and characterised by its flexibility and its use across a number of industries. Organisations are able to bring cloud capabilities to the very edge in order to reduce latency, and can do so in a number of customisable ways - either by using existing hardware to run custom software that emulates the cloud, or by extending public cloud services to local edge locations.
However, businesses will only find edge computing effective if they are equally effective with the data they source from it. With tools in the cloud such as AI and machine learning, users can add context to data and identify both positive and negative trends. For example, if sensors at the edge record the number of times a valve opens and closes, and compare that to the number of cycles a valve can typically complete before failure, the edge data can warn managers that it's time to change the valve before it fails.
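Staying with the valve example, the sketch below shows the kind of counter-and-threshold logic an edge device might run; the cycle rating and warning margin are placeholder assumptions rather than real equipment figures.

```python
# Illustrative edge logic for the valve example: count open/close cycles and
# warn before the expected cycles-to-failure is reached. The limit and margin
# below are placeholder assumptions, not real equipment ratings.
class ValveMonitor:
    def __init__(self, expected_cycles_to_failure: int, warn_fraction: float = 0.8):
        self.expected = expected_cycles_to_failure
        self.warn_at = int(expected_cycles_to_failure * warn_fraction)
        self.cycles = 0

    def record_cycle(self) -> None:
        """Called by the edge device each time the valve opens and closes."""
        self.cycles += 1

    def status(self) -> str:
        if self.cycles >= self.expected:
            return "REPLACE NOW: expected life exceeded"
        if self.cycles >= self.warn_at:
            return "WARN: schedule replacement soon"
        return "OK"

if __name__ == "__main__":
    monitor = ValveMonitor(expected_cycles_to_failure=100_000)  # assumed rating
    for _ in range(85_000):
        monitor.record_cycle()
    print(monitor.status())   # -> WARN: schedule replacement soon
```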
It’s important that all data is useful. Operators already risk data overload, where alarms are missed or ignored, or where trends aren’t followed because no one knows where they might be leading. Data must be used in a meaningful way, using the tools available, to offer insights that weren’t available before. Furthermore, these same organisations – whether in the enterprise or in industry – must understand that edge computing and all associated data underpins a number of additional technologies that are making their way into the working environment.
For example, predictive maintenance is not a product that can be plucked off a shelf, nor is it a set of solutions. In fact, it is a goal, and the ability to do computing on the edge is an enabler for technologies like predictive maintenance. This is because the edge effectively draws together a range of data sources – both new and historical – that allow predictive maintenance systems to do their job. This year, we’re likely to see more businesses recognise that the edge cannot be implemented for technology’s sake, but as a way to successfully implement other technological innovations that may be desperately required.
Increased collaboration
As the centralised point for IIoT technologies, the demands placed on the edge are set to grow. With these demands come complex decisions that need to be made and often, a degree of uncertainty. Because of this, we’re likely to see an uptake in the number of industry and enterprise organisations choosing to work with vendor partners to then adopt customised, packaged solutions.
The flexibility and bandwidth of the cloud, and subsequently the edge, mean that maintaining and utilising these systems will push organisations to outsource for support. As the edge becomes critical infrastructure that interacts with masses of data, organisations of all types will have to be increasingly careful with their security measures, because the edge relies on constant data movement and interaction with the cloud. With third-party support, this risk is greatly minimised and controlled.
An edge revolution
In 2020, we are set to see many organisations use the edge interchangeably with the cloud, and they are likely to do so in a vendor partnership. With this shift, we are likely to witness the market refine its definition of what constitutes the 'edge'.
Business needs are now reaching the point where there is an expectation that most online systems will incorporate some level of edge computing. Companies must now look at how they can participate if they are to keep up with the 'edge to enterprise' revolution.
As our world becomes increasingly interconnected, businesses and individuals are realising benefits they could once only dream of. Thanks to this digital revolution, our critical infrastructure can be operated with increased efficiency and reliability, helping to provide consumers with the services they demand and businesses with more insight into their operations. However, as we come to rely much more upon digital technologies, there will be a shift in the types of risk we face.
By David Sylvester, digital trust and cyber security expert at PA Consulting.
Aviation is one industry currently undergoing a significant digital transformation. It is learning to adapt to and exploit new and emerging technologies. Everything from aircraft to airports is becoming much smarter.
On board aircraft, complex electronics and computer algorithms are helping reduce aircrew workload and assisting with safety critical functions. In airports themselves, digital technologies are appearing in more and more places and passengers are interacting directly with this critical infrastructure. Self-service baggage drops and automated boarding gates are two examples of how airports have embraced the digital revolution to improve efficiency and customer experience.
As these more advanced technologies emerge, we can expect our critical national infrastructure to become increasingly connected. Before adopting these new technologies and connecting the new with the old, businesses must first seek to understand the challenges and risks posed to their operations in order to realise maximum benefit from their investment. Historically, the aviation industry has had to focus on ensuring safety and physical security; however, recent digital transformation has meant that the sector has had to learn to adapt to a new type of threat - cyber-attack.
News outlets and government agencies alike are constantly warning us of new and evolving cyber threats. Vulnerabilities in interconnected systems are already being exploited for malicious purposes, having the potential to cause significant disruption to the businesses affected. Whilst the most damaging incidents are more often the result of a highly targeted cyber-attack, recent events in other sectors show this isn’t always the case. In early 2020, a ransomware attack that spread from within the corporate network caused a two-day outage at a US gas pipeline facility. The operational network in this case can be considered collateral damage, but it demonstrates how an incident can affect business operations if risks are not fully understood and mitigated.
Unsurprisingly, aviation operations have previously been impacted by cyber-attacks; in 2018, departure boards at Bristol Airport were taken offline for two days by a ransomware attack, whilst Polish airline LOT cancelled several flights from Warsaw’s Chopin Airport after becoming unable to file flight plans in 2015.
Given that many of the systems associated with processes as diverse as security screening and controlling aerodrome ground lighting are digital, even our critical aviation infrastructure is not immune to such threats. As we become increasingly reliant upon digital systems, we must ensure our critical assets remain resilient and that incidents cannot compromise activities, safety or security. The key cyber security challenges for aviation lie in understanding the changing threat and adopting best practice.
Cyber-security is a business enabler. By securing our critical infrastructure, we can be confident that it is also safe and resilient to new and emerging threats, allowing society to realise the full benefit of a connected, digitalised world. To do so, organisations should think holistically about safety, operations and security. When looking to adopt new digital technologies, we encourage businesses to ensure that:
Security, like safety, is built into everything from the start. By building security into a system from the outset, and ensuring it will be maintained over its lifetime, it’s possible to realise maximum benefit and maintain safety, without having to retrofit. For example, if an airport was looking to invest in a new terminal - and the associated systems and services - it is more cost-effective to include security at the requirements stage. This enables operators to build confidence that they meet all applicable legal and regulatory requirements (such as the NIS Regulations). By thinking about the overall security architecture of new systems and services, it is possible to ensure they are scalable, and that effective segregation is in place between operational systems and the standard IT environments.
Security is incorporated into day-to-day processes, business continuity and incident response plans. As outlined above, cyber security incidents do have the potential to affect the day-to-day operations of both airports and airlines. It is important that cyber incident response is fully integrated with the existing emergency, crisis and business continuity plans. The potential effects of a cyber incident include physical impacts, service disruption, or a loss of data. Such impacts not only require an immediate response but also involve Public Relations, Legal, Regulatory, Human Resources and, potentially, Health and Safety. By ensuring response plans are comprehensive, it is possible for businesses to manage potential operational consequences.
Security risk should be understood and managed throughout the supply and support chain. Supply and support chains within the critical national infrastructure space are extremely complex and operators are often heavily reliant upon third party service providers. By making sure supply chain partners have suitable processes in place which take into account cyber and information security, it is possible to manage the risk. In all cases, the approach to supplier assurance should be proportionate, but practical steps include asking potential suppliers about their approach to cyber security (to understand risk levels) and ensuring their networks and equipment are operated/developed according to recognised good practice. For suppliers of less sensitive goods and services, accreditations such as Cyber Essentials may suffice, whereas those providing more operationally critical services may need ISO 27001 certification and development processes aligned to relevant international standards (such as ISO/IEC 62443).
Training and development of personnel is key to operating safely and securely. Ensuring operations and maintenance staff are appropriately trained and have access to clear operating instructions means they can operate safely and securely. So that they are aware of the aims, objectives and expectations of the business, all personnel should undergo regular security awareness training. Where personnel hold more sensitive roles, more regular, specific training and accreditations may be required. One practical step to reduce risk to operational systems is the introduction of a passport process, whereby personnel must first complete the requisite training package in order to complete certain tasks. By keeping accurate training records, and ensuring personnel are suitably qualified and experienced, businesses can be confident that the human side of security is appropriately managed.
What it means to be secure varies from sector to sector and between businesses. Regardless, security should be considered a business enabler, directly supporting the achievement of organisational aims and objectives. In all cases, businesses should take a flexible approach to understanding and managing the cyber-security risks posed to their systems and implement proportionate measures that enable them to respond to the changing threat.
The current Covid-19 pandemic has served to focus attention on the health of the healthcare sector, as never before. Information technology has a vital role to play. Here we cover a range of viewpoints and news, much of it relating to the ongoing battle with the coronavirus. Part 7.
Vidyo has announced a program to help its telemedicine clients dramatically scale to combat the Coronavirus pandemic.
The program allows for new or existing clients to increase the time or bandwidth they use by multiples of up to 10 to meet needs as they arise. The new program is available as an on-premise, hybrid or cloud-based solution.
“As the world responds to the COVID-19 coronavirus outbreak, Vidyo is committed to doing its part to support health systems’ essential efforts to maintain services,” said Reuben Tozman, General Manager, Enghouse Vidyo. “We have put a unique program in place for our clients that provides them increased access to our technology to combat the disruptions many are experiencing.”
In recent weeks, many organisations have exponentially increased their demand for video communication platforms. Vidyo’s proven track record of supporting large health organisations, as well as major financial corporations, government agencies and educational institutions, positions it as an ideal solution for rapidly evolving contingency plans.
Specifically, the Vidyo Telehealth solution enables clinics and hospitals to protect front-line staff and patients by supporting self-isolation and quarantine scenarios, and ensures clinicians, nurses and physicians can provide remote diagnoses and treatments, often with existing technology.
“While we hope the impact of the COVID-19 outbreak will be short lived, the potential strain on health systems could be severe. We know many health organisations are evaluating how best to deliver patient care under difficult circumstances over potentially extended periods of time,” said Tozman. “Vidyo excels at delivering resilient virtual care services. As health systems prepare for the next few months, Vidyo is here to support them with our program so they can effectively integrate various forms of telehealth into their patient management strategies.”
GDm-Health enables remote monitoring of pregnant women in light of UK government guidelines to comply with social distancing and care away from hospitals & clinics.
Sensyne Health announces that in response to new UK government guidelines for all pregnant women to avoid face-to-face contact for three months, it will be providing its GDm-Health digital therapeutic product free to NHS Trusts for one year.
GDm-Health is a digital therapeutic for the remote management of women with diabetes during pregnancy by their clinical care team. It comprises a smartphone application connected to a wireless blood glucose monitor; the patient's near real-time data is prioritised by algorithms and communicated directly to the hospital team supervising care, enabling this high-risk group to monitor their condition safely at home.
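Purely to illustrate what algorithmic prioritisation of readings might look like in the abstract, the sketch below flags out-of-range glucose readings for earlier clinician review. The thresholds are arbitrary placeholders and do not represent GDm-Health's clinical logic.

```python
# Purely illustrative triage of glucose readings for clinician review.
# The thresholds below are arbitrary placeholders; they are NOT GDm-Health's
# clinical rules and must not be used for real patient care.
from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    mmol_per_l: float
    tag: str          # e.g. "fasting" or "post-meal"

def priority(reading: Reading,
             low: float = 3.5,
             high: float = 8.0) -> str:
    """Assign a simple review priority to a reading (illustrative only)."""
    if reading.mmol_per_l < low or reading.mmol_per_l > high:
        return "review-first"
    return "routine"

if __name__ == "__main__":
    readings = [
        Reading("p1", 5.2, "fasting"),
        Reading("p2", 9.6, "post-meal"),
    ]
    for r in readings:
        print(r.patient_id, priority(r))
```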
GDm-Health is regulated, CE marked, clinically validated and can be rapidly deployed. The product is already commercially available and in use at 16 NHS Trusts, and Sensyne Health will work collaboratively with the NHS to broaden its use.
Lord (Paul) Drayson, CEO of Sensyne Health, said: “In light of the UK government’s guidelines around ‘social distancing’ to combat the ongoing COVID-19 pandemic, there is now a greater focus than ever before on the use of remote patient monitoring to reduce the burden on limited NHS resources and help high-risk people stay at home.
“Providing this product free of charge during the pandemic will enable the NHS to adopt GDm-Health quickly and provide remote care for more pregnant women during this crucial time and hence reduce hospital visits for this high-risk group.
“We are also working closely with our NHS partners as well as our industry collaborators on modifying our existing technologies to aid remote patient monitoring and provide the authorities with additional relevant information on the pandemic.”
Lucy Mackillop, Chief Medical Officer of Sensyne Health plc, consultant obstetric physician at Oxford University Hospitals NHS Foundation Trust and Honorary Senior Clinical Lecturer, Nuffield Department of Women's and Reproductive Health, University of Oxford, said: “I am pleased that GDm-Health is being deployed more broadly during this crisis to help this high-risk group receive their diabetes care safely at home. GDm-Health has made the transition to a commercial product and is already available across the NHS. This product is the result of an enormous amount of work by the clinical and academic teams at Oxford University Hospitals NHS Foundation Trust and the University of Oxford, and by Sensyne Health, to take our prototype and transform it into a sustainable, scalable product.”
GDm-Health has demonstrated a positive impact for mothers-to-be since its launch in August 2018. At present, 20 NHS Trusts have adopted the product, with the system now live in 16 of those Trusts. The system has helped to avoid an estimated 1,312 caesarean sections and 532 pre-term births, and 780 mothers have avoided transitioning to further pharmacological treatment[1]. It has also demonstrated the potential for cost savings to the NHS through improved patient outcomes.
Diabetes during pregnancy refers to glucose intolerance arising in pregnancy. The condition is increasing in prevalence worldwide, driven by demographic and lifestyle changes. In the UK, prevalence is predicted to rise to over 16%, from a baseline of around 4% in 2008[2]. The traditional method of patients manually recording glucose levels on paper is time-consuming, open to the risk of transcription errors, and does not give clinicians the opportunity to review changes in symptoms in real time in order to prioritise the patients most in need.
GDm-Health began as a collaboration between Oxford University Hospitals NHS Foundation Trust and the University of Oxford’s Institute of Biomedical Engineering. The system was invented, and clinical development led, by Dr Lucy Mackillop, a consultant obstetric physician at Oxford University Hospitals NHS Foundation Trust and Honorary Senior Clinical Lecturer in women’s and reproductive health at the University of Oxford, and VP of Medical Affairs at Sensyne Health.
An easy-to-use natural language web platform built to help global health organizations, clinical experts, researchers and scientists accelerate their search of structured and unstructured information vital to mitigating the COVID-19 pandemic.
Element AI, a global developer of artificial intelligence (AI) solutions, is releasing a beta of a free search platform to help clinical and scientific researchers, public health authorities, and frontline workers find answers and patterns in research papers by locating relevant work across thousands of published papers.
Information on COVID-19 is evolving fast and this AI-powered platform leverages a semantic search model that will allow users to quickly connect disparate information. The platform can execute searches based on specific inquiries, along with critical paragraphs copied from a relevant paper. Unlike keyword searches, the queries do not need to be specifically structured, and actually perform better in longer form. This initial version is configured to work with the COVID-19 Open Research Dataset (CORD-19) corpus. Element AI is looking for users and organizations from various groups to test the platform and suggest other data sets and features that could best fit their needs.
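The embedding-based retrieval idea behind such a platform can be illustrated in a few lines of Python. The sketch below is a generic example using the open-source sentence-transformers library, not Element AI's Knowledge Scout; the model name and example texts are assumptions.

```python
# Generic illustration of semantic (embedding-based) search over paper
# abstracts, in the spirit of the platform described above. This is NOT
# Element AI's implementation; the model name and texts are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed general-purpose model

abstracts = [
    "ACE2 receptor binding of the SARS-CoV-2 spike protein.",
    "Ventilation strategies for patients with acute respiratory distress.",
    "Repurposing approved protease inhibitors against coronaviruses.",
]
query = ("Which existing drugs might be repurposed to block "
         "viral protease activity in SARS-CoV-2?")

# Long, natural-language queries are encoded the same way as documents.
doc_emb = model.encode(abstracts, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]
best = int(scores.argmax())
print("Most relevant abstract:", abstracts[best])
```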
The groups Element AI is looking to work with include:
Clinical researchers who need to incorporate many phenomena to make a rich model of the pandemic and its impacts.
Government, Public Safety and Public Health authorities looking to find best practices across different countries.
Pharmaceutical companies working on new therapies or vaccine trials, as well as identifying existing therapies that could provide immediate help.
Scientific researchers and data scientists who are working on novel ways to connect research across the body of knowledge already available for COVID-19.
“Research data and reports are being published at an unprecedented pace as organizations scale up their efforts to respond to COVID-19. We want to contribute, and this free platform is our way to help the community locate and gather knowledge to find answers and patterns,” said Jean-François (JF) Gagné, CEO and Co-founder of Element AI. “We encourage the scientific and healthcare community to use this free platform and engage with our team to quickly ramp up and collaboratively meet the needs of the people working to slow down and contain COVID-19. We hope that their feedback and collaboration will help us quickly add features and datasets on top of what we already have made available” added Gagné.
The COVID-19 platform leverages technology from the Element AI Knowledge Scout product, which uses natural language techniques to tap into structured and unstructured sources of information. The first version will be progressively updated in coming weeks as additional datasets emerge. The site can be accessed at: https://www.elementai.com/covid-research.
Enterprise office solutions provider Y Soft uses its fleet of 3D printers to contribute face shields to hospitals.
To help address the lack of protective equipment for doctors and other workers who are exposed to the risk of viral infection on a daily basis, Y Soft Corporation, a leading enterprise office solutions provider, is using its 3D printers to produce protective face shields and binder strips for hospitals across the world.
The face shields – which are being produced using the Y Soft be3D eDee printers, designed for the education sector – are being donated free of charge to hospitals to help protect medical workers. Y Soft is currently producing around 500 shields per day, each of which can be re-sterilised. Each shield consists of a piece of shaped clear plastic covering the face, a 3D-printed frame that holds the plastic in place, and a simple rubber strap to affix it to the wearer's head.
In the UK, Y Soft is currently supplying three hospitals with the face shields: University Hospital Bristol, Devon Partnership NHS Trust and UHCW NHS Trust.
“The world was not prepared for the current situation. While we cannot handle massive production, it turns out that everyone can voluntarily contribute their own efforts. That's why we got involved as a company. Our office workers around the world also use affordable 3D printers to print protective equipment for healthcare professionals. It is a way to help people in hospitals. I'm extremely proud of how YSofters have come together to do this kind of work without prompting from anyone. We are in this together,” said Y Soft founder Václav Muchna.
He continues, “Our manufacturing isn’t outfitted for ongoing mass production, but clearly a little bit from everyone can help. We see the contribution as a small and simple step to helping the global pandemic.”
The design of the frame itself comes from another 3D printer manufacturer and has evolved over time based on feedback and on how hospitals are asking for the shields to be delivered. Y Soft is already working on version 3.0 of the face shields and is now focusing on optimising production so that they can be produced even faster.
The consequences of sensitive data getting into the wrong hands can be significant, and a considerable source of risk and anxiety for organisations. But a bigger problem logistically can be determining where such data exists across the business, so companies can implement protective measures.
James Paton, CEO of SynApps Solutions, explains.
Since its introduction two years ago - and indeed even before - the EU’s General Data Protection Regulation (GDPR) has shone a bright light on the risk of having sensitive data strewn across an organisation, and on companies not quite knowing where versions or copies of this data might exist. As a result of this lack of visibility, locking down sensitive information so that it doesn’t get into the wrong hands becomes very difficult.
Security assessments and tightening of controls, and even initiatives to move data to the cloud as part of digital transformation programmes, are further drivers for organisations to get a better handle on where all of their sensitive data currently resides.
And of course GDPR isn’t the only regulatory driver for organisations to increase their understanding of where and how they manage and use sensitive data. The Payment Card Industry (PCI) data security standard affects any merchant handling branded credit cards from the major card schemes. Listed companies, meanwhile, must keep track of market-sensitive information and be able to report on where it is under market abuse regulations. And public sector and health organisations must be vigilant about sensitive citizen/patient data. The list goes on.
‘Find my sensitive data’ services
It is in response to many of these challenges that there has been new innovation in the form of ‘sensitive data discovery’ on demand: that is, managed services that any organisation can tap into if they need to trace and report on where particular types of data exist.
Run securely in the cloud, or in a company’s own data centres, and fully resourced with highly qualified engineers, such hosted services remove a great burden from IT/compliance departments. Instead, it becomes possible for them to scan for instances of sensitive data across whole IT estates, and dynamically generate board-level reports, without having to allocate dedicated internal resources.
For organisations that want to go further, there are value-added services that can analyse the findings at a more detailed level, and suggest ways to bring sensitive data under more effective control.
By overcoming previously poor visibility to provide comprehensive sensitive data discovery, this kind of service can empower businesses to progress their bigger projects, such as digital transformation and cloud migration, fulfilling the CxO strategic agenda.
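To give a flavour of what such a scan does at its very simplest (the managed services described above are, of course, far more sophisticated and cover many more data types), the sketch below walks a directory tree and flags files that appear to contain payment card numbers, filtered with the standard Luhn checksum. The directory path and patterns are assumptions made purely for illustration.

# Illustrative sketch only: a naive scan for likely payment card numbers (PCI-relevant),
# nothing like a full sensitive-data discovery service.
import os
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number):
    # Standard Luhn checksum, used to filter out random runs of digits.
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0 and len(digits) >= 13

def scan(root):
    # Yield (path, match) pairs for files that appear to contain card numbers.
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            for match in CARD_PATTERN.findall(text):
                if luhn_valid(match):
                    yield path, match

if __name__ == "__main__":
    for path, match in scan("/data/shared"):   # hypothetical file share to scan
        print("Possible card number found in", path)

In a real discovery service this kind of detection would run across file shares, databases, mailboxes and cloud stores, and would feed the board-level reporting described above rather than simply printing to a console.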
Knowledge is power: driving better user behaviour
The potential of a sensitive-data discovery service becomes even more significant where end users are engaged and involved in the remediation process if sensitive data is found to exist where it shouldn’t - for example, unprotected on someone’s laptop. Alerts to individual users can prompt them to take appropriate remedial action in line with company policy.
Where all such activity is recorded and monitored, this alleviates the pressure on internal compliance teams to interpret and react to all of the findings from a data scan – which could run into thousands of information policy contraventions that need to be addressed. This also has the added benefit that, if an audit is launched, the organisation is fully covered by a comprehensive record of all steps that have been taken.
Beyond board-level HERO reports and information for internal governance purposes, data discovery services can also report on organisations’ exposure to risk, with associated values and ROI metrics – so companies can see issues that are still outstanding, what it would take to remediate them, and what intrinsic value that would have.
One of the most persuasive arguments in favour of using such services is the speed of deployment and of getting actionable results - this could be within just a few hours, for instance - which means IT teams could very efficiently and sustainably scan their organisations’ entire digital estate, across multiple systems and operating environments, on a quarterly or annual basis.
Data discovery as-a-service, and the ‘sensitive data’ variety in particular, are a potential game-changer for organisations seeking to regain control of their diverse information assets - especially in the light of digital transformation programmes and cloud ambitions. After all, the first step in optimising what you do with something is scoping what you’re dealing with.
Following the announcement that the new WiFi standard 802.11ax is on its way, businesses will be looking forward to sitting back and basking in the faster connection speeds that await. However, they could be missing out on a host of additional advantages.
By Patrick Hirscher, EMEA Wireless Market Development Manager at Zyxel.
More commonly and pithily referred to as ‘WiFi 6’, the impending upgrade will of course bring speed as an advantage in itself, and upgrading alone is a necessary and positive step. Speed increases of up to 40% have been mooted in comparison with previous technologies and, as such, enterprise operations will accelerate into new territory.
Why stop there though? As a new differentiating factor, businesses should be looking beyond the obvious to extract peripheral advantages from the advent of WiFi 6, and to inject a little more acceleration into their own development.
Mobile enhancement
The first mode of potential differentiation actually taps into a trend simultaneously engulfing industry at the moment. Mobile, remote and flexible working has been a natural consequence of the rise of IoT and BYOD, but companies – especially SMEs – have often had to proceed into these territories with caution so far.
Will employees be extracting the same performance from their own systems away from their desk? Will interconnectivity be affected?
Well, it’s safe to say that with WiFi 6, performance in crowded or fragmented areas will no longer be a concern. Not only that, but the duration available to employees looking to get away from their desks and to ‘go mobile’ will also be enhanced with the promise of longer battery life under the watch of WiFi 6.
Essentially, speed facilitates mobilisation – a notion not to be ignored when so much job attractiveness is attached to employee flexibility at present. You’ll not only be diversifying your own office landscape, but making yourself more appealing to prospective hires in the process.
Be the most valuable player in your own value chain
Increased speed, efficiency and performance across your internal systems will also be hugely appealing to business partners and fellow peers along your supply chain.
Range of deployment is greatly improved by increased network speeds, creating an equally impressive range of opportunity for subscribers, and indeed business partners. Tapping into a new WiFi experience will bring both sets of relationships what they seek above all else during business dealings – unprecedented efficiency, quality and security with reduced risk.
Reducing OpEx
With improved speed, efficiency and quality inevitably comes less maintenance. At face value this contributes to reduced downtime, heightened confidence in your company’s operations and the technologies running them, and a more content workforce.
At actual value, the benefits are even more tangible. Operational expenditure is inherently intertwined with digitisation these days, and the one thing you can’t afford to have falter in your daily workings is your technology, which is often driven by your connection speeds.
Data storage and management, internal and external administration, file preservation, research and real-time knowledge building, and even social activity through social media and chat applications are all now pivotal to a company’s daily operations, and the success and speed of all of them are dictated by network strength.
When any of these strands breaks down, the maintenance required can be costly, but it is certainly necessary. Moving forward in the knowledge that the likelihood of such breakdowns has been greatly reduced goes beyond instilling confidence; it actually impacts overheads.
For one package deal to upgrade to WiFi 6, companies will potentially be offsetting a host of future outlays; and this is before taking into account the financial spikes that will occur as a result of aforementioned benefits including an improved and enriched workforce, a more loyal and high-performing supply chain, and more seamless administrative procedures.
Don’t get left behind
As is always the case with a new technological breakthrough or unveiling, WiFi 6 will soon become the norm. 5G is on the verge of having a similar impact as we enter 2020, while IoT, big data and AI are already separating businesses from their competitors through the way they’re being utilised.
WiFi 6 is likely to have a similar impact, and perhaps the biggest benefit of all will be not to get left behind. Finding differentiators in its application and nuances is important, but upgrading in general will become pivotal. If even one of your closest industry competitors beats you to the punch in terms of both adoption and application then all other strands you may have been able to differentiate with could fall by the wayside. You’ll already be beaten to improved administrative performance; to enhanced supply chain and HR attractiveness; to reduced maintenance and subsequent downtime; and to cost savings at the end of it all.
Step one is keeping at least in line with, and ideally ahead of, the curve. Prepare for the speed that WiFi 6 will bring by picking up the pace yourself.
The current Covid-19 pandemic has served to focus attention on the health of the healthcare sector, as never before. Information technology has a vital role to play. Here we cover a range of viewpoints and news, much of it relating to the ongoing battle with the coronavirus. Part 8.
Asks Mohua Sengupta, Co-founder, Ventures.
In November 2018, purely out of my keen interest in Blockchain and a little influence from my daughter’s MUN (Model United Nations) preparation in the WHO committee, I wrote an article, ‘Unleashing Blockchain for cross-border surveillance & reporting of Communicable Diseases.’ It was pure research at that point and seemed like a perfect use case for Blockchain. Today, the COVID-19 crisis suddenly makes me wonder whether, if we had blockchain-based real-time reporting of communicable diseases, we might have avoided a pandemic.
Biggest challenges for Communicable Diseases
There are two major challenges with communicable diseases.
How did COVID 19 start, spread and then become a pandemic?
While there are various opinions on this matter and many conspiracy theories are floating around, let’s for now just consider the proven cause. The disease started in the wet markets of Wuhan, China, where live and dead animals and birds are sold daily. The virus most likely originated in bats, but since bats are not sold in that market, scientists think that a bat must have bitten a bird or animal that was then sold in the Wuhan wet market.
We have seen this much many times when a new disease has hit us. So what is different this time? How has this virus brought the world to a standstill? We have seen its more dangerous cousins, MERS, SARS and others, but they did not bring the world to a standstill. So why COVID-19? The difference is that it is highly contagious, much more so than its close cousins. When one person was infected, he started infecting many others within a day, each of whom infected many more, and very soon there were thousands of infected people in Wuhan. Since the symptoms typically resemble simple flu, people continued to travel; China missed sending a warning to the rest of the world and did not restrict travel to and from the country. So international travellers kept coming to China and vice versa. More than a month went by before the world woke up to realise that this was a serious threat, and by then the disease had already spread to many countries, including Italy, Spain, the UK, the US, Iran and a few others.
And even as news of the Wuhan epidemic, and of the challenges in other countries, came out, travel was not restricted. It took a couple more weeks for travel restrictions to start, and by then the virus had travelled to twenty-odd countries. Only then did the world really wake up, with every country starting to take measures: locking down, restricting travel and taking various other steps to reduce the spread of the virus within its borders. But by then it was a pandemic. The entire world was introduced to a very different challenge: business suffered, stock markets tanked across the world, many people lost their jobs, to say nothing of the enormous loss of life and the pressure on the healthcare systems of all affected countries.
Could Blockchain have helped the situation?
As we all know, with Blockchain we can share any transaction or piece of information, in real time, between the relevant parties present as nodes in the chain, in a secure and immutable fashion. In this case, had there been a blockchain connecting the WHO, the health ministry of each country and perhaps even the relevant nodal hospitals of each country, sharing real-time information about any new communicable disease, the world might have woken up much earlier. We might have seen travel restrictions imposed sooner, quarantine policies set sooner and social distancing implemented faster. And maybe fewer countries would have been impacted.
What every country is now doing to fight this pandemic would have been restricted to fewer countries and on a much smaller scale. The use of a blockchain to share the information early on might have saved the world a lot of pain.
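As a toy illustration of the ‘secure and immutable’ property being relied on here (and emphatically not a design for an actual WHO reporting system), the sketch below chains case reports together with hashes, so that any participant holding a copy of the ledger can detect tampering with an earlier report. The field names and reporting bodies are invented for the example.

# Toy hash-chained ledger of communicable-disease case reports (illustrative only).
# A real system would add consensus between nodes, digital signatures and access control.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Block:
    index: int
    timestamp: float
    report: dict          # e.g. {"country": "...", "disease": "...", "new_cases": 0}
    prev_hash: str

    def hash(self):
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class ReportChain:
    def __init__(self):
        self.blocks = [Block(0, time.time(), {"genesis": True}, "0" * 64)]

    def add_report(self, report):
        prev = self.blocks[-1]
        block = Block(len(self.blocks), time.time(), report, prev.hash())
        self.blocks.append(block)
        return block

    def is_valid(self):
        # Every block must reference the hash of the block before it.
        return all(b.prev_hash == p.hash() for p, b in zip(self.blocks, self.blocks[1:]))

chain = ReportChain()
chain.add_report({"country": "CN", "disease": "novel coronavirus", "new_cases": 27})
chain.add_report({"country": "IT", "disease": "novel coronavirus", "new_cases": 3})
print(chain.is_valid())                    # True
chain.blocks[1].report["new_cases"] = 1    # tamper with an earlier report
print(chain.is_valid())                    # False - the change is detectable

A production system would of course add digital signatures, consensus between nodes and strict access control, but the principle - early, shared, tamper-evident reporting - is the one argued for above.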
Why Blockchain?
The top five advantages of Blockchain technology are:
Aren’t they all supremely important for reporting cross-border Communicable Disease cases?
The world had not seen anything like this COVID-19 pandemic before. Today we need to take a hard look at the reporting infrastructure available for communicable diseases, both the technology and the regulations, and improve upon them, so that we do not have to face another pandemic like this in the future.
Of course, Blockchain in this case, like any technology, is not a solution in itself; it is just an enabler. An enabler that would ensure the security and efficiency needed for sharing something so sensitive. Ultimately it would depend on the goodwill of people and governments.
Let’s hope we come out of this crisis soon and as unscathed as possible!
Many organisational leaders talk about continual service improvement (CSI), but they fail to execute the strategy. Why? Because regardless of industry, size, workforce, goals or motivations, there’s no one size fits all for CSI, and that’s scary.
By Sumit De, head of consultancy, TOPdesk UK.
Within the service management industry, CSI is in the spotlight. Organisations know they should be doing it, but they don’t really understand its true value. Being bogged down in the mentality of firefighting day-in-day-out, resetting passwords, granting access to reports, issues with a firewall, or a broken printer, is exhausting.
Changing mindset, taking the time and energy to consider how things could be better can seem unattainable. But the long-term benefits of CSI should be motivation enough to stop thinking and talking about CSI and start doing it.
The why’s and what’s of CSI
CSI is working in a way where the only constant is change. Why should you take this approach? Well, users’ demands are constantly adapting to new technologies and customer experience ideals; it’s only natural. To ensure you’re meeting those demands, the delivery of a service must be prepared to change whenever required.
The value of CSI goes beyond that service excellence experience. If time and effort are put into this approach, it will keep organisations relevant in their industry, not just for a few months, or a year, or a couple of good years, but continuously. As an organisation, you’ll embrace the fact that today’s service may be good but staying stagnant with the same approach for the next five years isn’t, and turn that to your advantage.
Staying ahead, being a leader in your industry, is important. From this perspective, investing focus, resource and time in CSI is critical for organisations and service providers to ensure they don’t drop onto the back foot and end up having to play catch-up. That’s never a place you want to be.
Challenges: truth or myth?
When it comes to CSI, there are often two barriers that act as deterrents for organisations. The first is the mindset that CSI is ‘just a stage’. The second is that CSI comes across as a resource-heavy approach.
Those with the mindset that CSI is just another stage are wrong. The first word is continual. CSI is always happening, or at least it should always be happening. This doesn’t mean 24 hours a day, 7 days a week; it means a habit instilled in your organisation. On a regular, continuous basis, you’re checking in with yourself, the organisation and your customers, looking at what’s good, what’s bad, and what can be improved.
Within organisations, there’s confusion about the best use of the workforce. On one side, best practice defines roles that organisations are encouraged to embody and perform. On the other, companies are striving to become lean, efficient and balanced in terms of how many different hats employees wear in their organisation.
This ongoing battle means implementing CSI seems unrealistic for organisations. When do we make time for CSI? Who takes ownership? What does ownership look like? How can we be successful when we’re constantly bombarded with emails and requests?
The key to implementing CSI
Implementing CSI has been overblown for a long time. It has the reputation of a tall hurdle: something you must spend months or even years building up to before you can hope to clear it. In reality, CSI should be a series of small, achievable hurdles.
Put it this way, if you were competing in the Olympics, you would participate in the 100-meter hurdle race, not the pole vault.
The key to CSI success? Aim to make small, bitesize improvements, continuously. This way, each little goal you achieve will deliver satisfaction for yourself, and your user base, while providing the motivation to reach your next goal. Remember the three K’s: keep it simple, keep it small, keep it achievable.
Ready to start?
We’ve discussed the theory behind CSI, common challenges, and how to implement this way of working. Now, let’s look at one way you can make an instant impact within your organisation using CSI for knowledge management.
Your service desk is often bombarded with basic questions about the way your service functions. What are the opening hours? Who should I contact about this problem? What are the terms and conditions of a service?
Now the problem has been recognised, how can you improve it? Simply create a knowledge item and make it visible to your customer base through the self-service portal. This empowers your users to find answers to their questions and frees up your already busy operators.
It might sound like a small, insignificant step, but by making this improvement you’ve flipped from reactive firefighting to being proactive and providing information for the future.
It’s not a solo job!
To truly succeed in CSI, everybody within the organisation needs to be open to playing their part. While it’s a good idea to have a part of the organisation or an individual to oversee CSI, keeping an eye on the progress, it’s not the responsibility of one person. Everybody has the responsibility to be involved.
Collectively as an organisation you must create a service improvement routine to measure the success of your bitesize improvements. For example, facilitate weekly, monthly or quarterly drop-in sessions for users and staff. This will allow you to receive feedback on the work you’ve been doing and your current development programmes.
Stop thinking about CSI, start doing it
Going forward, stop considering CSI as something you should be doing but that will take too much time and effort to implement. Start trying it out. It doesn’t have to be a big project; small incremental improvements that are properly communicated are far more effective.
Treating CSI not as a part of the process but as an underlying habit that we all share is going to be key to organisations staying ahead of the curve, keeping up with the pace of change, and leading the way for service excellence.
As the electric and digital worlds converge, talent acquisition and team management are among the biggest challenges faced by data centre and technology businesses today.
By Marc Garner, VP, Secure Power Division, UK&I, Schneider Electric.
According to the Harvey Nash/KPMG CIO Survey, “The single highest cause of stress is being short of staff”, and this has become a major issue. The survey found that the UK's tech industry is experiencing the highest skills shortage for more than a decade, with almost two thirds of CIOs (64%) reporting a shortfall of talent.
As technologies such as edge computing, AI and 5G continue to evolve, identifying and attracting new skills into the industry should be simple, but often, it can mean dispelling myths that have built up over time.
Efforts to expand the talent pool through diversity agendas are making inroads and traditional biases are fast being overcome, but in the era of digital transformation, more needs to be done to address the endemic skills shortage. This, however, raises new questions: what are the best tactics for recruiting, retaining and nurturing talent?
Furthermore, are diversity and inclusion efforts proving effective at bridging the skills gap, and when it comes to attracting new recruits, does the data centre industry have an image problem?
As a culturally diverse and global industry, our sector is open to engineers, marketers and salespeople from all walks of life. So why are we experiencing such a skills shortage, and what can be done to attract the next generation of talent?
New Perspectives
In 2020, people entering the world of employment are often seeking more than a thirty-year career in return for a steady salary and pension. Increasingly young people (though not exclusively younger people) state that they want to become part of something bigger than themselves and will examine prospective employers with questions such as, “does this organisation relate to me and do they share my values?”
One such individual is Thomas Morgan, a mechanical engineer who joined Schneider Electric’s Secure Power Division as part of the graduate programme intake in 2016.
Recently named within Data Economy’s ’30 under 30’ list for leading talents in the data centre sector, Thomas was first attracted to the company because of its core message around sustainability, energy efficiency and reducing impact on the environment.
As a business, Schneider Electric is today committed to improving energy efficiency and reducing CO2 emissions across all areas of its operations. This includes reaching carbon neutrality across all of our company sites by 2025; goals to achieve net-zero operational emissions by 2030; and most importantly, to have net-zero emissions throughout our entire supply chain by 2050.
Thomas knew that after graduating from Sheffield University he wanted to start a career in tech and what he found at Schneider Electric was a role that would help him advance his business knowledge while utilising all of his technical expertise.
In his first job as a Tender Engineer, Thomas was provided with exposure to customer facing teams. Here he began to build his business skills.
The role is technically detailed and contractually complex, involving learning and sharing of engineering and business knowledge - something he relishes as a challenge. As an engineer it presented the opportunity to learn about customer needs and the complex financial aspects of large-scale capital investments.
From there, Thomas moved into a new customer team servicing a major client, eventually leading them to successfully secure a major tender win with a value of more than €13m. It was this success that helped Thomas to be named as one of Data Economy’s “30 under 30” people to watch in the data centre industry and he is someone I’m proud to have worked with directly.
As well as enjoying business success Thomas is a passionate advocate of promoting science, technology, engineering and mathematics (STEM) as a career to young people and expanding the opportunities to as broad a catchment as possible. He regularly hosts events and presents on the importance of inclusion, diversity and balance, making STEM available to everyone.
Core Values
Both within the business and through his extracurricular activities Thomas reflects Schneider Electric’s core values, which are our company’s platform for success. These span a customer-first experience, daring to disrupt, embracing cultural differences, taking ownership, embracing an open mind and committing to learn every day.
As a business, we know there is much work to be done to make technology available to all. We believe access to energy is a basic human right and that life should be on for everyone, everywhere.
But to address the skills shortage within the technology industry itself, our goal is to build a culture where skills, ambition and application are rewarded equally. One where the industry stands for advancement for all no matter their culture, religion, race or gender.
If you are studying as an undergraduate, or are a recently qualified postgraduate, and are looking to build a fulfilling and empowering career in technology, energy management and sustainability, please click here to find out more about the Schneider Electric graduate programme.
Join us as we continue to innovate new technologies that drive digital transformation and reduce the impact of carbon emissions on the environment.
The current Covid-19 pandemic has served to focus attention on the health of the healthcare sector, as never before. Information technology has a vital role to play. Here we cover a range of viewpoints and news, much of it relating to the ongoing battle with the coronavirus. Part 9.
The new app is dedicated to providing people across Northern Ireland with immediate advice and links to vital trusted information. As the situation with the pandemic evolves, the app will be kept up to date.
The app includes guidance on the symptoms of the coronavirus infection and supports individuals to identify whether they might potentially have the infection. It will also provide advice on what actions people should take if they think they may have coronavirus. The information provided will also help people decide if they need advice from a health or care professional and how best to access that advice should they need it.
People can ask specific questions through the app with an Advice Search ‘chatbot’ that automatically reviews all the guidance to find a response to match individual queries.
By providing early and easily accessible advice to the public, and information on whether someone may need to speak to a healthcare professional the app should also help with easing pressure on GP surgeries, pharmacies and other community services.
Once downloaded, people who use the app will also receive push notifications, which will include the latest public health advice.
Health Minister Robin Swann praised the unparalleled speed of delivery of this important resource by the Department of Health’s Digital Team and local companies Civica and Big Motive, which were also involved in developing the app.
He said: “I recognise that people in Northern Ireland need access to up to date information at this worrying time and this vital initiative will complement all the other work we are doing across our health and care services.”
“The pressure on our services is going to be unprecedented and this is an important resource for those who need local information quickly.
“We have made this investment knowing that trusted local health guidance is critically important. We expect the app will also help us track the impact of Covid-19, which will assist us in planning and directing services and supports to the best of our ability.”
Who can use this app?
· The app is available to all individuals who are currently residing in Northern Ireland.
· You can use this app to get advice for yourself, or on behalf of someone else that you would like to help.
Data security
· The app does not collect any personally identifiable information.
· We will collect information related to the postcode and age of the user to help us track the impact of Covid-19 in Northern Ireland.
· This will help us to plan services and ensure that resources are directed to the areas of greatest need.
The Department of Health has launched a new Covid-19 information app which was commissioned and delivered in just two weeks by locally based cloud software specialists CIVICA in partnership with Big Motive.
“Covid-19 NI is one of the first dedicated tools of its kind in the UK and Ireland and demonstrates that our local Department of Health and the HSC were well ahead of the game in knowing exactly what they wanted and securing it in record time,” explained Mark Owens, Managing Director of CIVICA Northern Ireland.
“The App is simple to download and use and has three core functions: firstly, an in-app symptom checker for Coronavirus (COVID-19) leading the user to appropriate medical advice and onward links to further information; secondly, a chatbot with a natural language interface to answer users’ questions, which will be enhanced and expanded with time, usage and feedback; and thirdly, a push notification service with the ability to push messages out to users’ devices, very similar to text messages, alerting people to things like significant changes to official status or advice.
“Our brief was to provide a trusted single source of information dedicated to providing the population of Northern Ireland with immediate advice that should ease pressure on GP surgeries, pharmacies and of course Hospitals.
“I cannot stress how unprecedented it is in our industry to produce an App of this nature so quickly and we could not have done it without the skills and dedication of our developers who were determined to do their best to support our amazing NHS.
“We all know the pressure on our medical services is going to be extraordinary and we believe this is going to be a critical tool for those who need local information immediately.
“It is the duty of every business in Northern Ireland to do their bit to support the Executive, the Government Departments and to realise that it is vital we all work together to beat this Pandemic, use our skills, our ingenuity and retool where necessary to support each other and our economy,” concluded Mark.
Combating COVID-19 with technology to resume social and economic operations.
In accordance with its new strategic direction “From Things to Life,” Dassault Systèmes is working with China’s Central-South Architectural Design Institute (CSADI) to support the simulation and evaluation of virus dispersal in the confined environment of Leishenshan Hospital in Wuhan, China. The largest hospital for infectious diseases and COVID-19 patients, the modular Leishenshan Hospital was created with “China Speed,” surprising the world with its 14-day construction. CSADI and Dassault Systèmes are using the 3DEXPERIENCE platform’s simulation capabilities to simulate virus contamination and diffusion within the hospital’s ventilation system and to counteract the negative effects from unplanned ventilation risks.
As a strategic partner of Dassault Systèmes in China, CSADI undertook the design of Leishenshan Hospital. Avoiding contamination of nearby environments is a key consideration for CSADI, especially the minimisation of cross-infection in the hospital and any impacts on external communities, crowds and surroundings. To this end, Dassault Systèmes has donated SIMULIA XFlow software, powered by the 3DEXPERIENCE platform, to CSADI to simulate indoor and outdoor fluids, virus dispersal in ventilation systems, as well as other projects within Leishenshan Hospital.
“Dassault Systèmes technology focuses on life and the future. CSADI showed ‘China Speed’ during the Leishenshan Hospital Project and will use Dassault Systèmes’ advanced SIMULIA XFlow software to simulate indoor air distribution schema and optimise suggestions on better contamination discharge in negative pressure wards to protect medical personnel,” said Zhang Shen, Director of Engineering Digital Technology Center, CSADI. “SIMULIA XFlow will also simulate outdoor exhaust emission impacts on nearby surroundings to help the design and site selection of the modular hospital.”
“Dassault Systèmes is committed to helping Chinese enterprises combat COVID-19 with the aid of technology, focusing on restoration and development of enterprises after the pandemic,” said Ying Zhang, Managing Director, Greater China, Dassault Systèmes. “As a company, we have extended our focus from things to life. It is our concern about human life that drives us to make positive contributions to the environment during the pandemic and in future hospitals with CSADI. Every day the power of the 3DEXPERIENCE platform to collaborate and exploit the value of a 3D virtual twin experience is shown. Applied systematically to design, engineering and manufacturing, it provides seamless remote collaboration at any time, anywhere, and enables all users to understand, experience and communicate. In the midst of today’s crisis, our 3D cloud-based collaborative platform approach replaces an older, slower document-based approach.”
Over 440 medical institutions from 104 countries and regions have applied to learn and share experiences in battling COVID-19 through the International Medical Expert Communication Platform.
The platform, a centrepiece of the Global MediXchange for Combating COVID-19 (GMCC) programme, was jointly established by the Jack Ma Foundation and Alibaba Foundation. It is designed for medical experts around the world to communicate seamlessly with each other, sharing their invaluable experience of fighting coronavirus disease 2019 (COVID-19) and asking and answering each other’s questions. To date, most applications have come from medical institutions in the U.S., Turkey, the U.K., Pakistan, Spain and Germany.
Medical staff need to apply and be approved to join the platform. Once they’re accepted, they’re free to participate in individual or group discussions and sessions.
“Knowledge is power! We launched an online platform for doctors and nurses around the world to exchange ideas, lessons and know-how to fight the virus. We welcome all hospitals to join Chinese hospitals on this open platform https://covid-19.alibabacloud.com. One world, one fight!”, Jack Ma wrote in a tweet on Wednesday.
Tapping Alibaba Group’s DingTalk messaging and communications functions, the digital platform provides free audio and video conference functionality, along with live broadcast functions for more-complex scenarios. Medical workers from different countries can choose to communicate with their fellow doctors individually, or they can participate in live-sharing group sessions to interact with multiple participants, using real-time artificial intelligence translation to overcome communication barriers. The First Affiliated Hospital, Zhejiang University School of Medicine (FAHZU), for example, has been using the platform to share their valuable experience with 92 medical institutions from 44 countries and regions.
A playback function allows sessions to be shared further or posted on official websites for consumption at any time, for those who cannot join the live broadcasts because of time differences or because they are occupied by their duties.
To date, the International Medical Expert Communication Platform has attracted numerous Chinese medical institutions, including Zhongnan Hospital of Wuhan University (Wuhan Leishenshan Hospital), The First Affiliated Hospital, Zhejiang University School of Medicine (FAHZU), and others.
Through video conferencing and AI translation from and into 11 languages (Arabic, Chinese, English, French, Indonesian, Japanese, Russian, Spanish, Thai, Turkish, and Vietnamese), the platform aims to build a virtual community.
DingTalk, which powers this platform, has also been tabbed by UNESCO as facilitating distance learning during the coronavirus outbreak.
Also as part of the GMCC programme, the Handbook of COVID-19 Prevention and Treatment, authored by the First Affiliated Hospital, Zhejiang University School of Medicine and compiled according to clinical experience, is now available at no cost and in seven languages: Chinese, English, French, Italian, Japanese, Spanish and Turkish. This handbook provides comprehensive guidelines and best practices by China's top experts for coping with COVID-19. More languages will be added soon, as the GMCC programme continues to launch more anti-epidemic resources and provide more practical suggestions and references for medical staff worldwide.
GPs are using the latest digital phone and video technology from their own homes, helping them address remotely the huge care demands created by the coronavirus pandemic.
GPs and returning healthcare workers can now work remotely from home to minimise the spread of coronavirus through new ‘breakthrough’ digital telephone and video technology that will help protect the frontline workforce from infection.
Following close collaboration with GPs, cloud communications provider X-on has developed the GP@Home service, which allows doctors to provide patients with the same level of phone and video care from their own home as they would from their surgery. The award-winning technology provider has also developed Video Connect, which enables GPs to switch from phone to video consultation in a single click.
The product suite integrates with major clinical systems such as EMIS and TPP’s SystmOne, which are used by doctors’ surgeries to hold patient information. Being able to access this directly whilst on a phone call helps save time for GPs and surgery staff, which is vital now as the country is gripped by the Covid-19 pandemic.
GP IT expert Dr Neil Paul, who practises in Cheshire, has been trialling GP@Home, and has been ‘really impressed’ with how it can help doctors work more safely in light of the pandemic, and allow home workers to answer the phone as if they were sat in the surgery. "It solves the ‘I don't want to use my own phone to make hundreds of calls to patients’ problem, which is affecting thousands of practices across the country,” he said.
GPs across London are some of the first in line for the technology, which has been developed by X-on by listening to the needs of general practice.
Six hundred practices already use X-on’s Surgery Connect to rapidly provide the public with access to information about the virus, and many are using the pioneering companion technology Video Connect so they can switch from phone to video call with a simple click on a link. Unlike many other systems, the patient does not need to download an app, and the doctor does not have to schedule when a video consultation will take place.
Dr Barry Sullman, Newham GP and Clinical Commissioning Group clinical lead, said: “This is a game changing piece of software that has allowed my practice to perform normally through the Covid-19 crisis. It’s business as usual for me - and Video Connect plays a key role in delivering that.
“Video call quality is extremely high and supports remote diagnosis. By being able to diagnose a diverse range of conditions such as skin rashes reliably, and by being able to assess the clinical condition of a patient confidently through high quality video, face to face appointments at the surgery are saved, which in turn increases the access capacity of the surgery. The technology also records the phone or video call, and links it to the clinical record, so that doctors can refer back to the information if required.”
X-on is helping thousands of staff in general practice deliver on the ‘digital-first’ initiative put forward by NHS England and NHSX in their rapid response to the situation. This encourages GPs to use the phone and digital means for triage and consultation, so as to help manage increased demand and to reduce the risks of infection.
X-on managing director Paul Bensley said: “The digital transformation of your local family doctor is taking place at an incredible pace and scale. It is vital that we leave no-one behind, and so it is essential that we make sure that the phone can meet the current needs of patients and professionals, but also those for the long term as they make the move to ‘digital-first’ primary care. Digital telephony is enabling hundreds of practices to put in an equitable, future proof digital front door to primary care.”
X-on is also offering free teleconferencing services to regional health providers as they collaborate around their response to the pandemic, with multiple primary care networks and commissioning groups having taken up the offer.
Whilst Nottingham-based 2bm is widely regarded as the UK’s leading company for bespoke design, build, refurbishment and upgrade of server rooms and data centres, the company also has a growing reputation for tailored audits and health checks for all types of data centre.
Neil Roberts, sales director of 2bm, commented: “As there is often nothing obvious or visible to the naked eye that needs immediate attention, clinical cleaning is an area of maintenance that is consistently overlooked by many data centre operators.”
“Clinical cleaning schedules are vital to not only eliminate dirt but also discover the potential sources of contamination and the depreciation and malfunction of company assets. Modern-day computer equipment is often sensitive to environmental conditions, highlighted in recent studies that show 75% of storage and hardware failures are a result of factors in the environment,” added Neil.
Whilst temperature and humidity accounted for most of the issues, carbon and concrete dust, together with zinc whiskers are often stated as key areas of concern.
Clinical cleaning, which involves the removal of dust particles, static and other contaminants, also incorporates the use of anti-static solutions within rooms. Regular clinical cleaning of a server room helps prevent the build-up of static electricity, dust and contaminants which cause overheating, reduced filter life and additional wear to components.
Certain hardware manufacturers actually specify that equipment should be housed in a clean environment to ISO14644/8 standard, with some warranties deemed void if data centres or server rooms are not regularly cleaned and decontaminated.
The clinical cleaning process includes ceiling voids, subflooring, raised floor surfaces, wall surfaces, frames and windows, high-level trunking, ducting, tray works and lighting systems. It also includes internal cleaning of cabinets and servers (where required), cabinet exteriors, wall-mounted equipment and CRAC units.
Neil Roberts said: “With simple things like dust and dirt having a major impact on your operation and, ultimately, the efficiency of a data centre, a regular schedule of preventative clinical cleaning is essential. At 2bm, we offer flexible working hours 24 hours a day, seven days a week, which means there are no interruptions to a normal working day or the running of a data centre.”
“It’s possible to increase the lifespan of server equipment through proper cleaning procedures that improve air quality. The frequency of cleaning depends on the size and type of data centre or server room,” added Neil.
2bm’s services have been designed to work around the day-to-day operations of a data centre, ensuring a business can run as usual. They also include an initial audit to establish the extent of any problem and recommend a solution which varies from a one-off deep clean to ongoing preventative cleaning on a monthly, quarterly or annual basis.
The key is to ensure that the temperature and airflow remain constant, and by removing mould, dust, pollen and other bacteria, it not only helps maintain flooring hardware and equipment but also reduces the build-up of static electricity.
2bm provide data centre and server room audits in line with corporate governance and industry best practice, including safety, health, environment, security, quality and energy. They also offer auditing capability for a wide range of International and British Standard accreditations as well as providing ongoing support to assist with meeting set targets.
As an endorser of the EU Code of Conduct, 2bm is recognised as an organisation that promotes the Code’s best practice and has been implementing best practice for many years and understands the need to develop industry-wide guidelines for designers and operators.
The Code of Conduct can be used as a specific reference document for existing facilities as well as for the design of new facilities. With an unrivalled knowledge of the EUCoC, we can work with customers to bring them in line with the Code as well as become participants in the scheme.
“Our wider passion for innovation in our industry drives us to be engaged with emerging technologies and use them to achieve the very best results for our clients,” commented Neil Roberts.
“Every data centre is different in terms of its content and environment, so we pay attention to the finest details to make certain of optimum outcomes.”
“Our team thrive on offering solutions which improve efficiency and reduce running costs and importantly are within budgets. We only recommend an action which is right for a business,” said Neil.
CASE STUDY: CLINICAL CLEANING
2bm has worked in partnership with ExcelRedstone for almost ten years within London’s financial sector, and today they are regarded as the go-to company for all their clinical cleaning projects.
Over three decades, ExcelRedstone has become one of the foremost companies in delivering IT infrastructure and support services to clients in some of the UK’s landmark buildings and offices.
ExcelRedstone makes spaces, which are critical to organisations, work smarter and harder. Its smart building technology solutions are developed in partnership with customers, from the design phase through to operation, implementation, delivery and management.
2bm was asked to provide clinical cleaning for a prestigious fit-out in the financial sector within the City of London. Complex and thorough cleaning was needed in in-service equipment rooms to ensure no dust and dirt remained.
The scope of works for cleaning the data centres and comms rooms included intensive cleaning of floors and walls, together with everything above the line of lighting, such as cables, trunking, pipework and fire/smoke detectors.
Left unaddressed, the build-up of dirt and/or static discharge could potentially have led to the loss of critical data. By improving the circulation of the air, computer equipment will function more efficiently, contributing to long-term energy cost savings.
Mike Meyer, Managing Director at Critical Facilities Solutions UK
Your facility is home to mission-critical equipment, so it’s easy to see why you’d want it to be as contaminant-free and well maintained as possible. Yet, even with the necessary procedures in place, contamination still occurs. Everyday activities such as running cooling systems, employees opening and closing doors, and installing new equipment all introduce various levels of contamination.
Maintaining your data centre should take a "minimize, regulate and maintain" approach to contamination control and cleaning, but how do you find a cleaning schedule and program that is in line with your operational goals?
Hardware manufacturers such as IBM, EMC and Dell recommend you maintain your environment to ISO14644-1:2015 Class 8 utilizing professional data centre cleaners. In fact, failing to do so may void your warranty in instances where preventable airborne contamination was found to be a cause of the device failure. ASHRAE recommends having an annual sub-floor clean and quarterly floor and equipment surface cleaning. Many of the ‘standards’ and ‘recommendations’ seemingly contradict one another.
Nevertheless, a clean data centre is essential… and here’s why! Airborne contaminants are the unnoticed threat. The trouble with airborne contaminants is that the source (or sources) isn’t always easy to identify and harmful buildup can occur over the course of days, months, or even years.
You might not see the source, but airborne and contact-based contaminants build up on equipment. Even solid-state storage mediums can be compromised by buildup on heat sinks, bearings and vents. There’s no such thing as an airproof data centre. Therefore, contamination from airborne sources is — for all intents and purposes — unavoidable. Electrostatic dust, corrosive oxides, volatile organic compounds, solvents and other contaminants put equipment at risk. Even seemingly mundane, everyday sources of contamination such as pollen, dust, hair and carpeting fibers can prove to be problematic.
Periodic indoor air quality testing, otherwise known as air particle testing, has long been the best, and only, method for ascertaining and confirming compliance with the ISO standard for machine room and data centre air cleanliness. The faults with this method are twofold: firstly, it is a snapshot in time; secondly, it only measures contaminants that are airborne, not those that have already settled.
There have been significant new advancements in the equipment and methods used to test air quality and the volume of particulate in the air. At Critical Facilities Solutions we are introducing new methods of testing. While we still use hand-held, snapshot air particle testing where necessary and relevant, we are also installing robust, cost-effective alternatives that measure air quality on a constant or predetermined basis. We’ve coined the phrase Constant Air Monitoring. The product and system we supply and install can operate as a standalone system or be integrated into any BMS system.
While Continuous Particulate Air Monitors (CPAMs) have been used for years in nuclear facilities to assess airborne particulate radioactivity (APR), and in pharmaceutical cleanrooms to measure air particulate (AP), CPAMs have typically been extremely costly to install in other environments, especially when taking into account the test parameters of the ISO standard and integration into data centre systems.
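As a simple illustration of what constant monitoring against the ISO standard can involve (a sketch under stated assumptions, not the product described above), the snippet below compares periodic particle-counter readings with the ISO 14644-1:2015 Class 8 concentration limits and flags any exceedance that a BMS could then raise an alarm on. The sensor-reading function and the alerting hook are placeholders invented for the example.

# Sketch of constant air-quality monitoring against ISO 14644-1:2015 Class 8 limits.
# The sensor read function and alert hook are placeholders; a real installation
# would feed a BMS rather than print to a console.
import time

# ISO 14644-1:2015 Class 8 maximum concentrations (particles per cubic metre),
# keyed by particle size in micrometres.
CLASS_8_LIMITS = {0.5: 3_520_000, 1.0: 832_000, 5.0: 29_300}

def read_particle_counts():
    # Placeholder for a real particle-counter reading, keyed by particle size (um).
    return {0.5: 1_200_000, 1.0: 300_000, 5.0: 31_000}

def check_reading(counts):
    # Return the particle sizes whose concentration exceeds the Class 8 limit.
    return [size for size, limit in CLASS_8_LIMITS.items() if counts.get(size, 0) > limit]

def monitor(interval_seconds=300, cycles=3):
    for _ in range(cycles):
        exceeded = check_reading(read_particle_counts())
        if exceeded:
            # In practice this would raise an alarm in the BMS / facilities system.
            print("Class 8 limit exceeded for particle sizes (um):", exceeded)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    monitor(interval_seconds=1, cycles=1)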
Settled contaminants cause decreased performance and thermal clogging. When airborne or touch contaminants build up on the surface of equipment, this is known as "settled contamination." These tiny particles make their way onto (and into) delicate equipment, resulting in thermal clogging, data loss and performance bottlenecks due to thermal throttling. Contamination-related failures can even occur with solid-state drives (SSDs).
Densely packed racks are more susceptible to contamination. Servers and drives continue to shrink and become ever more compact. This is great for reducing floorspace, but it also means equipment is packed in tightly, creating opportunities for settled contaminants to go unnoticed. It’s important to note that the more contamination accumulates on equipment and in air filters, the less efficient the equipment becomes, leading to performance bottlenecks and wasted energy, largely because of the additional cooling required, which in turn leads to further environmental impact.
On to lesser known risks, but for those that have experienced it firsthand, the threat of zinc whiskers — and how they cripple essential equipment — is very real. But, there are several factors that are making this once-rare phenomenon all the more common.
So, what are zinc whiskers and how do you know if your server room or data center is at risk?
Zinc whiskers are microscopic, crystalline slivers of zinc that form through corrosion. Whiskering can originate from any number of sources; flooring panels, ductwork, ceiling hangers, server racks, electrical components and virtually any source galvanised with this brittle metal — even bolts, nuts and washers may exhibit signs of whiskering.
While it is now fairly well understood how whiskering occurs, tracing the source isn’t always so easy. For one, these "whiskers" are incredibly light, which means they can easily travel through HVAC systems and subfloor voids.
These metallic, fiber-like "whiskers" are highly conductive and can cross circuit board traces, corrupt data, compromise hardware and cause extensive downtime. PCBs and other pieces of electronic equipment (servers, SSDs, etc.) are all at risk of being affected by zinc whiskers.
To neutralise the risks associated with zinc whiskers, Critical Facilities Solutions offers a complete solution that includes:
• Sample collection and analysis
• Laboratory testing
• Remediation
• Specialist cleaning
• Testing and consultancy
Getting started with professional cleaning doesn’t have to be difficult. If you’re new to the concept of hiring specialists to clean your critical facility, a professional data centre cleaner can walk you through the entire process, explaining each step and making recommendations along the way. Since no two facilities are alike, it’s highly recommended that a thorough inspection and survey be commissioned before you set out to create a service profile and schedule.
Following a consultation, it’s highly likely that a full deep clean will be recommended as the starting point for any ongoing maintenance cleaning (especially if your facility has never received a professional service, or if there has been a lapse in cleaning). A deep clean may include cleaning every square inch of the data hall and equipment surfaces, as well as flooring, stringers, pedestals and the sub-floor voids. These aren’t "precautionary steps," but essential parts of preventing recontamination and ensuring your facility is as dust- and contamination-free as possible.
Selecting the best ‘starting point’ for your data centre’s maintenance regime can prove challenging. The Data Centre Alliance (DCA), the data centre trade association, has, in consultation with the leading UK data centre cleaning authorities and companies, produced and distributed an Anti-Contamination Guide which focuses on overall best practice and should be considered a great resource in determining your starting point for any maintenance schedule.
Gary Hall, Operations Director at Critical Facilities Solutions UK
The anti-contamination guide is a document that outlines best practices for controlling contamination in mission-critical IT spaces. The document has been revised for 2020, and now features additional information on the changes to the EU Code of Conduct relating to data centre cleaning, expanded information on zinc whiskers (remedial actions), guidance on conducting air particle testing and the apparatus used, and anti-contamination products.
New additions and revisions to the document include the following:
Please follow the link to the full report: https://dca-global.org/file/view/6373/dca-anti-contamination-guide-2020-edition