The current excitement over digital transformation shows no signs of slowing down. Just when it seems everyone is comfortable with, and actually deploying, Cloud and managed services, along come AI, ML and RPA to bolster the rising profile of IoT. Rest assured, VR and AR are waiting just around the next bend in the technology road.
With the supply chain offering a whole portfolio of clever, elegant, cost-effective, useful technology solutions, and end users recognising that they must embrace the future if they are not to go the way of the various high-profile businesses that have shut their doors in the past year or so, what could possibly go wrong with the digital transformation story?
Skills, or rather a lack of them, is the answer. Vendors and the Channel are struggling to identify the individuals who combine the necessary levels of problem-solving and technology attributes to become the trusted advisors that customers need; and most industry sectors, with the exception, perhaps, of fintech, have a distinct lack of knowledge when it comes to specifying and deploying digital technology.
The starting point for all this is that end users do not really want to have conversations about IT, but about business solutions. This is what I want to do…how can you help me achieve it?
Traditionally, the supply chain has approached sales by trying to convince customers that the newest server or storage box is twice as fast as the previous one, at half the price, so why wouldn’t you buy it? And, once upon a time, customers would have. Now, however, customers respond to this ‘compelling’ sales story with a ‘So what?’ What they want is someone telling them that they can help improve the speed and reliability of their ecommerce website, boost the performance of their mobile devices, and speed up resolution of customer complaints…solutions, not technology for technology’s sake!
The sales skills required to do this are thin on the ground right now. So, there is a very real danger that the headlong rush towards digitalisation will slow down because there aren’t enough people with the right combination of skills and knowledge to specify and/or supply business solutions.
It will be fascinating to see whether this gap can be addressed successfully and quickly so that digitalisation’s momentum is not checked. But it’s not immediately obvious how and when, with rather too many current Channel and IT specialists still trying to hold on to what they’ve always done and reluctant to embrace the change that their bosses are demanding.
Worldwide spending on cognitive and artificial intelligence (AI) systems will reach $19.1 billion in 2018, an increase of 54.2% over the amount spent in 2017. With industries investing aggressively in projects that utilize cognitive/AI software capabilities, the International Data Corporation (IDC) Worldwide Semiannual Cognitive Artificial Intelligence Systems Spending Guide forecasts cognitive and AI spending will grow to $52.2 billion in 2021 and achieve a compound annual growth rate (CAGR) of 46.2% over the 2016-2021 forecast period.
"Interest and awareness of AI is at a fever pitch. Every industry and every organization should be evaluating AI to see how it will affect their business processes and go-to-market efficiencies," said David Schubmehl, research director, Cognitive/Artificial Intelligence Systems at IDC. "IDC has estimated that by 2019, 40% of digital transformation initiatives will use AI services and by 2021, 75% of enterprise applications will use AI. From predictions, recommendations, and advice to automated customer service agents and intelligent process automation, AI is changing the face of how we interact with computer systems."
Retail will overtake banking in 2018 to become the industry leader in terms of cognitive/AI spending. Retail firms will invest $3.4 billion this year on a range of AI use cases, including automated customer service agents, expert shopping advisors and product recommendations, and merchandising for omnichannel operations. Much of the $3.3 billion spent by the banking industry will go toward automated threat intelligence and prevention systems, fraud analysis and investigation, and program advisors and recommendation systems. Discrete manufacturing will be the third largest industry for AI spending with $2.0 billion going toward a range of use cases including automated preventative maintenance and quality management investigation and recommendation systems. The fourth largest industry, healthcare providers, will allocate most of its $1.7 billion investment to diagnosis and treatment systems.
"Enterprise digital transformation strategies are increasingly including multiple cognitive/artificial intelligence use cases," said Marianne Daquila, research manager, Customer Insights & Analysis at IDC. "Business transformation is occurring across all industries as successful companies embrace the array and potential impact of these solutions. Automated customer service agents, increased public safety, preventative maintenance, reduction of fraud, and improved healthcare diagnosis are just the tip of the iceberg driving spend today. With double-digit year-over-year spending growth forecast, IDC expects to see an increase in general use cases, as well as a refinement of industry-specific use cases."
The cognitive/AI use cases that will see the largest spending totals in 2018 are: automated customer service agents ($2.4 billion) with significant investments from the retail and telecommunications industries; automated threat intelligence and prevention systems ($1.5 billion) with the banking, utilities, and telecommunications industries as the leading industries; and sales process recommendation and automation ($1.45 billion) spending led by the retail and media industries. Three other use cases will be close behind in terms of global spending in 2018: automated preventive maintenance; diagnosis and treatment systems; and fraud analysis and investigation. The use cases that will see the fastest spending growth over the 2016-2021 forecast period are: public safety and emergency response (75.4% CAGR), pharmaceutical research and discovery (70.5% CAGR), and expert shopping advisors and product recommendations (67.3% CAGR).
A little more than half of all cognitive/AI spending throughout the forecast will go toward cognitive software. The largest software category is cognitive applications, which includes cognitively-enabled process and industry applications that automatically learn, discover, and make recommendations or predictions. The other software category is cognitive platforms, which facilitate the development of intelligent, advisory, and cognitively enabled applications. Industries will also invest in IT services to help with the development and implementation of their cognitive/AI systems and business services such as consulting and horizontal business process outsourcing related to these systems. The smallest category of technology spending will be the hardware (servers and storage) needed to support the systems.
On a geographic basis, the United States will deliver more than three quarters of all spending on cognitive/AI systems in 2018, led by the retail and banking industries. Western Europe will be the second largest region in 2018, led by retail, discrete manufacturing and banking. The strongest spending growth over the five-year forecast will be in Japan (73.5% CAGR) and Asia/Pacific (excluding Japan and China) (72.9% CAGR). China will also experience strong spending growth throughout the forecast (68.2% CAGR).
"The latest iteration of the Cognitive/AI Spending Guide is a roadmap for the journey of organizational digital transformation through the use of AI, deep learning, and machine learning," added Schubmehl. "Organizations should be evaluating and starting to use AI throughout their systems and the Cognitive/AI Spending Guide is an indispensable resource in that effort."
Forty-eight percent of organizations that are implementing the Internet of Things (IoT) said they are already using, or plan to use, digital twins in 2018, according to a recent IoT implementation survey by Gartner, Inc. In addition, the number of organizations using digital twins will triple by 2022; the survey covered 202 respondents across China, the U.S., Germany and Japan.
Gartner defines a digital twin as a virtual counterpart of a real object, meaning it can be a product, structure, facility or system. Gartner predicts that, by 2020, at least 50 percent of manufacturers with annual revenues in excess of $5 billion will have at least one digital twin initiative launched for either products or assets.
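Gartner's definition can be made concrete with a minimal sketch. The Python below is purely illustrative (the asset name and sensor fields are invented for this example): a twin holds the latest mirrored state of its physical counterpart and accumulates a history of readings over time.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Virtual counterpart of a real object (hypothetical example)."""
    asset_id: str
    telemetry: dict = field(default_factory=dict)  # latest mirrored sensor state
    history: list = field(default_factory=list)    # accumulated readings over time

    def sync(self, reading: dict) -> None:
        """Update the twin's state from a sensor reading on the physical asset."""
        self.telemetry.update(reading)
        self.history.append(dict(reading))

# A twin mirroring an industrial pump
pump = DigitalTwin(asset_id="pump-42")
pump.sync({"temperature_c": 71.5, "vibration_mm_s": 2.3})
pump.sync({"temperature_c": 74.0})
print(pump.telemetry)  # latest reading wins; earlier values are kept in history
```

Maintaining that history, and keeping it in sync with the physical asset, is exactly the part Gartner warns is "not for the faint hearted".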
"There is an increasing interest and investment in digital twins and their promise is certainly compelling, but creating and maintaining digital twins is not for the faint hearted," said Alexander Hoeppe, research director at Gartner. "However, by structuring and executing digital twin initiatives appropriately, CIOs can address the key challenges they pose."
Gartner has identified four best practices to tackle some of the top challenges posed by digital twins:
1- Involve the entire product value chain
Digital twins can help alleviate some key supply chain challenges, such as a lack of cross-functional collaboration and a lack of visibility across the supply chain. Digital twin investments should therefore be value-chain driven, enabling product and asset stakeholders to govern and manage products and assets (such as industrial machinery and facilities) across their supply chain in much more structured and holistic ways.
The value of digital twins can be an extensible product or asset structure that enables addition and modification of multiple models that can be connected for cross-functional collaboration. It can also be a common reference with comprehensive content for all stakeholders to access and understand the current status of the physical counterpart. When engaging the supply chain in digital twin initiatives, CIOs should incorporate access control based on the sensitivity of the content and the role of the supplier.
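The access-control recommendation above can be sketched as a simple clearance check. The roles, sensitivity labels and policy table below are hypothetical illustrations, not taken from Gartner's guidance; real deployments would use their platform's own identity and access management.

```python
# Gate access to digital twin content by content sensitivity and supplier role.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}

# Maximum sensitivity each value-chain role may read (illustrative policy)
ROLE_CLEARANCE = {
    "tier2_supplier": SENSITIVITY["public"],
    "tier1_supplier": SENSITIVITY["internal"],
    "oem_engineer": SENSITIVITY["confidential"],
}

def can_access(role: str, content_sensitivity: str) -> bool:
    """Unknown roles get no access; otherwise compare clearance levels."""
    return ROLE_CLEARANCE.get(role, -1) >= SENSITIVITY[content_sensitivity]

print(can_access("tier1_supplier", "internal"))      # True
print(can_access("tier2_supplier", "confidential"))  # False
```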
2- Establish well documented practices for constructing and modifying the models
Best-in-class modeling practices increase transparency on often complex digital twin designs and make it easier for multiple digital twin users to collaboratively construct and modify digital twins. They attempt to minimize the amount of effort to enable changes within the digital twin or between the digital twin and external, contextually important content. When modeling practices are standardized, one user is more likely to understand how another user created a digital twin. This enables the downstream user to modify the digital twin in less time and with less need to destroy and recreate portions of the digital twin.
3- Include data from multiple sources
It is difficult to anticipate the nature of the simulation models, data types and sensor data analysis that might be necessary to support the design, introduction and service life of a digital twin's physical counterpart. While 3D geometry is sufficient to communicate the digital twin visually and show how parts fit together, the geometric model alone may not be able to simulate the behavior of the physical counterpart in use or operation, nor analyze data, unless it is enriched with additional information. CIOs can expand the utility of digital twins by recommending that IT architects and digital twin owners define an architecture that allows access to, and use of, data from many different sources.
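One way to picture such an architecture is a twin that starts from a geometry reference and lets owners attach named external data sources later. This is a sketch under assumed names (the CAD file, source names and readings are invented); the point is only that analyses can combine data the geometric model alone could not provide.

```python
class ExtensibleTwin:
    """A digital twin whose data sources can be added after creation."""

    def __init__(self, geometry_ref: str):
        self.geometry_ref = geometry_ref  # e.g. a reference to a CAD file
        self.sources = {}                 # named external data sources

    def attach_source(self, name, fetch):
        """Register a callable that returns data from an external source."""
        self.sources[name] = fetch

    def query(self, name):
        return self.sources[name]()

twin = ExtensibleTwin(geometry_ref="turbine_v3.step")
twin.attach_source("vibration", lambda: [2.1, 2.4, 9.8])          # IoT sensor feed
twin.attach_source("fatigue_sim", lambda: {"cycles_to_failure": 1.2e6})

# An operational question the 3D geometry alone cannot answer:
print(max(twin.query("vibration")) > 5.0)  # True: flag the asset for inspection
```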
4- Ensure long access life cycles
Digital twins with long life cycles include buildings, aircraft, ships, factories, trucks and industrial machinery. The life cycles of these digital twins extend well beyond the life spans of the formats for proprietary design software that most likely were used to create them and the means of storing data.
"This means that digital twins created in proprietary design software formats have a high risk of being unreadable throughout their service life," said Mr. Hoeppe.
Additionally, the digital twin evolves and accumulates growing historical data, such as geometric models, simulation data and IoT data. As a result, the digital twin owner risks becoming increasingly locked into the vendor with the authoring tools. "CIOs can guard against this if they increase the viable life of digital twins by setting a goal for IT architects and digital twin owners to plan for the long-term evolution of data formats and data storage," Mr. Hoeppe added.
Study shows firms must first master interconnected disciplines of customer experience, operational excellence and business innovation to achieve stronger digital transformation maturity.
Virtusa has published the findings of The Digital Transformation Race Has Begun, a September 2017 study commissioned by Virtusa and conducted by Forrester Consulting, that reflects the digital maturity of firms worldwide. The study evaluates the state of digital transformation across six key industries – retail, banking, healthcare, insurance, telco, and media.

Internet of Things (IoT)-based attacks are already a reality. A recent CEB, now Gartner, survey found that nearly 20 percent of organizations observed at least one IoT-based attack in the past three years. To protect against those threats Gartner, Inc. forecasts that worldwide spending on IoT security will reach $1.5 billion in 2018, a 28 percent increase from 2017 spending of $1.2 billion.
"In IoT initiatives, organizations often don't have control over the source and nature of the software and hardware being utilized by smart connected devices," said Ruggero Contu, research director at Gartner. "We expect to see demand for tools and services aimed at improving discovery and asset management, software and hardware security assessment, and penetration testing. In addition, organizations will look to increase their understanding of the implications of externalizing network connectivity. These factors will be the main drivers of spending growth for the forecast period with spending on IoT security expected to reach $3.1 billion in 2021 (see Table 1)."
Table 1
Worldwide IoT Security Spending Forecast (Millions of Dollars)
Segment | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 |
Endpoint Security | 240 | 302 | 373 | 459 | 541 | 631 |
Gateway Security | 102 | 138 | 186 | 251 | 327 | 415 |
Professional Services | 570 | 734 | 946 | 1,221 | 1,589 | 2,071 |
Total | 912 | 1,174 | 1,506 | 1,931 | 2,457 | 3,118 |
Source: Gartner (March 2018)
Despite the steady year-over-year growth in worldwide spending, Gartner predicts that through 2020, the biggest inhibitor to growth for IoT security will come from a lack of prioritization and implementation of security best practices and tools in IoT initiative planning. This will hamper the potential spend on IoT security by 80 percent.
"Although IoT security is consistently referred to as a primary concern, most IoT security implementations have been planned, deployed and operated at the business-unit level, in cooperation with some IT departments to ensure the IT portions affected by the devices are sufficiently addressed," explained Mr. Contu. "However, coordination via common architecture or a consistent security strategy is all but absent, and vendor product and service selection remains largely ad hoc, based upon the device provider's alliances with partners or the core system that the devices are enhancing or replacing."
While basic security patterns have been revealed in many vertical projects, they have not yet been codified into policy or design templates to allow for consistent reuse. As a result, technical standards for specific IoT security components in the industry are only now just starting to be addressed across established IT security standards bodies, consortium organizations and vendor alliances.
The absence of "security by design" comes from a lack of specific and stringent regulations. Going forward, Gartner expects this trend to change, especially in heavily regulated industries such as healthcare and automotive.
By 2021, Gartner predicts that regulatory compliance will become the prime influencer for IoT security uptake. Industries having to comply with regulations and guidelines aimed at improving critical infrastructure protection (CIP) are being compelled to increase their focus on security as a result of IoT permeating the industrial world.
"Interest is growing in improving automation in operational processes through the deployment of intelligent connected devices, such as sensors, robots and remote connectivity, often through cloud-based services," said Mr. Contu. "This innovation, often described as Industrial Internet of Things (IIoT) or Industry 4.0, is already impacting security in industry sectors deploying operational technology (OT), such as energy, oil and gas, transportation, and manufacturing."
Worldwide shipments for augmented reality (AR) and virtual reality (VR) headsets will grow to 68.9 million units in 2022 with a five-year compound annual growth rate (CAGR) of 52.5%, according to the latest forecast from the International Data Corporation (IDC) Worldwide Quarterly Augmented and Virtual Reality Headset Tracker. Despite the weakness the market experienced in 2017, IDC anticipates a return to growth in 2018 with total combined AR/VR volumes reaching 12.4 million units, marking a year-over-year increase of 48.5% as new vendors, new use cases, and new business models emerge.
The worldwide AR/VR headset market retreated in 2017 primarily due to a decline in shipments of screenless VR viewers. Previous champions of this form factor stopped bundling these headsets with smartphones and consumers have shown little interest in purchasing such headsets separately. While the screenless VR category is waning, Lenovo's successful fourth quarter launch of the Star Wars: Jedi Challenges (Lenovo Mirage AR headset)—a screenless viewer for AR—showed the form factor may still have legs if paired with the right content. Other new product launches during the quarter included the first Windows Mixed Reality VR tethered headsets with entries from Acer, ASUS, Dell, Fujitsu, HP, Lenovo, and Samsung.
"There has been a maturation of content and delivery as top-tier content providers enter the AR and VR space," said Jitesh Ubrani, senior research analyst for IDC Mobile Device Trackers. "Meanwhile, on the hardware side, numerous vendors are experimenting with new financing options and different revenue models to make the headsets, along with the accompanying hardware and software, more accessible to consumers and enterprises alike."
Looking ahead, IDC also expects the VR headset market to rebound in 2018 as new devices such as Facebook's Oculus Go, HTC's Vive Pro, and Lenovo's Mirage Solo with Daydream ship into the market with new capabilities and new price points. Meanwhile, with the exception of screenless viewers, AR headsets are likely to remain largely commercially focused until later in the forecast due to the technology's high cost and complexity.
"While there's no doubt that VR suffered some setbacks in 2017, companies such as Google and Facebook continue to push hard toward making the technology more consumer friendly," said Tom Mainelli, program vice president, Devices & AR/VR research. "Meanwhile, Lenovo's success with its first consumer-focused AR product shows that consumers are beginning to understand what augmented reality is and the experiences it can provide. This bodes well for the category long term."
Category Highlights
Augmented Reality head-mounted displays will see market-beating growth over the next five years as standalone and tethered devices grow to account for more than 97% of the market by 2022. IDC expects AR screenless viewers, the overall market leader in 2017, to peak in 2019 as standalone and tethered products become more widely available at lower price points. The rise of screenless viewers geared toward consumers tilted shipment volumes away from commercial viewers in 2017 and that's likely to continue in 2018; by 2019 the segment will shift back toward more commercial shipments.
Virtual Reality head-mounted displays will see a shift in product mix. Screenless viewers, once the overall leader, will see share erode quickly over time. Meanwhile, standalone and tethered devices – in the minority in 2017 – will comprise 85.7% of total shipments by 2022. Consumers will account for a majority of headset shipments throughout the forecast, but commercial users will slowly occupy a larger share, growing to nearly equal status in 2022.
AR/VR Headset Market Share by Form Factor, 2018 and 2022
Technology | Form Factor | 2018* | 2022* |
Augmented Reality | Screenless Viewer | 6.7% | 1.1% |
| Standalone HMD | 2.4% | 19.1% |
| Tethered HMD | 1.0% | 17.9% |
Virtual Reality | Screenless Viewer | 34.9% | 8.8% |
| Standalone HMD | 11.7% | 29.8% |
| Tethered HMD | 43.3% | 23.3% |
Total | | 100.0% | 100.0% |
Source: IDC Worldwide Quarterly AR and VR Headset Tracker, March 19, 2018
* Note: Forecast values.
Just 5% of board members in non-tech organisations have digital competencies.
The global pressure of digital disruption is failing to register in the boardroom: only five per cent of board members in non-tech organisations have digital competencies. What’s more, despite the central role digital transformation plays in building a more competitive and efficient business, non-tech organisations have shown little or no effort to improve their digital capabilities in the last two years. This is according to a new landmark study from global executive search firm, Amrop.
The report, “Digitisation on Boards: Are Boards Ready for Digital Disruption?”, examines how listed organisations are addressing digitisation. Worryingly, it revealed that the digital picture remains fragmented, and multiple questions still surround its role.
Amrop’s Digitisation report found that most non-tech listed organisations have shown little or no effort to improve digital competencies on their boards since the previous study in 2015. The latest report revealed that the five per cent figure remains virtually unchanged since the study’s first edition, despite digital transformation supposedly occurring in organisations all across the globe.
The study, which examined the profiles of 3,000 board members in listed organisations across 14 countries, did find some places where digital board representation had improved. This included the UK, where non-tech companies raised their digital competencies from three to eight per cent since 2015. Finland is leading the way and in growth-mode, tripling its representation between 2015 and 2016 from four to twelve per cent. In the Netherlands, however, there has been a decline in digital competency, with representation dropping from seven to five per cent in the last two years.
Moreover, the report found that traditional committees are still dominating. Just nine of the 300 boards analysed have an official technology committee. That’s just three per cent of organisations that have a committee dedicated to technical strategy on their business agenda.
The report analysed the digital competencies on boards for a number of sectors. Unsurprisingly, it found that representation for tech companies far outstrips the non-tech equivalents. Compared to the five per cent representation in non-tech companies, the study found that 43 per cent of board members in technology companies have digital competencies.
What’s more, the gap between tech and non-tech is widening — in 2015 the previous study found board-level digital competencies were seven times higher in the tech industry than in others. In this study, the representation now sits at nine times higher.
One explanation for the sluggish movement is the length of time needed to nominate and integrate board members. The wheels of board governance turn slowly and in some sectors, regulation clogs them even further.
As Mikael Norr, Leader of Amrop’s Global Technology & Media Practice, explains, “the core answer must lie in the difficulty of positioning the right talent in the top layer of organisations: people who can help fellow board members develop a robust strategy based on a strong business case and a clear view of the purpose of the organisation, its culture, its structure, and its markets.
“Once this is done, change must be stimulated in the right place, at the right pace, with strong management of resources and risk. Engaging in education programmes for boards, before setting out, can help prepare the road.”
Whilst examining the profiles of 3,342 board members, the study revealed a significant correlation between boards with tech profiles and a higher degree of female representation. According to the findings, women hold 35% of all digital/technology positions in the boards surveyed.
What’s more, the report found there to be a significant increase in female representation in these roles over the past year. For an industry that is often criticised for its lack of diversity, these findings suggest that digital/technology jobs may in fact now be a catalyst for change.
France and Italy lead the field with around 60% of digital/tech profiles represented by women. The UK however is worryingly at the bottom of the list, alongside the USA, Turkey and Spain, having named no new female digital/tech board members among the 133 new non-exec directors appointed in 2016.
Discussing the importance of diversity in these roles, Amrop Partner Bo Ekelund, who authored the diversity section of the report, comments: “Every day, working with clients worldwide, we see successful enterprises adopting the powerful trends of diversity and gender balance. Yet we also know that only 20% of all the people working in the IT sector are women. Those who are in place are severely under-represented in decision-making positions.
“Diversity drives innovation. To ensure new voices are properly heard, boards may need to ask tough questions about how they interact as a group. A structured and anonymous board assessment and rigorous onboarding for new members is strongly advised”.
The shortlist has been confirmed and online voting for the 2018 DCS Awards has just opened. Make sure you don’t miss out on the opportunity to express your opinion on the companies, products and individuals that you believe deserve recognition as the best in their field.
Following assessment and validation by the panel at Angel Business Communications, the shortlist for the 21 categories in this year’s DCS Awards has been put forward for online voting by the readership of the Digitalisation World portfolio of titles. The Data Centre Solutions (DCS) Awards reward products, projects and solutions, as well as honouring companies, teams and individuals operating in the data centre arena.
The winners of this year’s awards will be announced at a gala ceremony taking place at London’s Grange St Paul’s Hotel on 24 May.
All voting takes place online and voting rules apply. Make sure you place your votes by 11 May, when voting closes, by visiting: http://www.dcsawards.com/voting.php
The full 2018 shortlist is below:
Data Centre Energy Efficiency Project of the Year
New Design/Build Data Centre Project of the Year
Data Centre Consolidation/Upgrade/Refresh Project of the Year
Data Centre Power Product of the Year
Data Centre PDU Product of the Year
Data Centre Cooling Product of the Year
Data Centre Facilities Automation and Management Product of the Year
Data Centre Safety, Security & Fire Suppression Product of the Year
Data Centre Cabling & Connectivity Product of the Year
Data Centre ICT Storage Product of the Year
Data Centre ICT Security Product of the Year
Data Centre ICT Management Product of the Year
Data Centre Cabinets/Racks Product of the Year
Data Centre ICT Networking Product of the Year
Data Centre Hosting/co-location Supplier of the Year
Data Centre Cloud Vendor of the Year
Data Centre Facilities Vendor of the Year
Excellence in Data Centre Services Award
Data Centre Energy Efficiency Initiative of the Year
Data Centre Innovation of the Year
Data Centre Individual of the Year
Voting closes: 11 May
One of the latest buzzwords taking cloud computing by storm is Functions as a Service (FaaS), or serverless computing. By Chris Gray, Chief Delivery Officer, Amido.
Serverless has been a hot topic in the world of software architecture since AWS pioneered the space with the release of AWS Lambda back in 2014, and it has increasingly been gaining attention from outside the developer community. As one of the fastest growing cloud service delivery models, FaaS has fundamentally changed not only the way in which technology is purchased but how it is delivered and operated.
The significance of FaaS for businesses could be huge. Businesses will no longer have to pay for redundant servers, but just for the computing power an application consumes per millisecond, much like the per-second billing approach that containers are moving towards. Instead of having an application on a server, the business can run it directly from the cloud, choosing when to use it and paying for it per task – making it event-driven. According to Gartner, by 2020, event-sourced, real-time situational awareness will be a required characteristic for 80% of digital business solutions, and 80% of new business ecosystems will require support for event processing.
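The per-use billing model can be sketched as back-of-envelope arithmetic: providers typically charge for compute time (memory allocated multiplied by execution duration, in GB-seconds) plus a small per-invocation fee. The rates below are illustrative defaults, not any particular provider's price list.

```python
def faas_monthly_cost(invocations, avg_ms, mb_memory,
                      price_per_gb_s=0.0000166667,   # illustrative compute rate
                      price_per_request=0.0000002):  # illustrative request fee
    """Cost = compute time billed in GB-seconds + a per-request charge."""
    gb_seconds = invocations * (avg_ms / 1000) * (mb_memory / 1024)
    return gb_seconds * price_per_gb_s + invocations * price_per_request

# One million invocations, 120 ms each, at 128 MB: the business pays only
# for execution time, with no idle-server cost in between events.
print(round(faas_monthly_cost(1_000_000, 120, 128), 2))  # → 0.45
```

At these rates a million short invocations cost well under a dollar, which is why taking away "wasted compute" on idle servers is such a large part of the FaaS pitch.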
FaaS is a commoditised function of cloud computing and one that takes away wasted compute associated with idle server storage and infrastructure. “Not every business is going to be right for FaaS or serverless, but there is a real appetite in the industry to reduce the cost of adopting the cloud – so this is a great way to help drive these costs down,” adds Richard Slater, Principal Consultant at Amido. “The thing is, if you’re considering this as an option you are signing up to the ultimate in vendor lock-in as it’s not easy to move these services from one cloud to another (though there are promising frameworks like Serverless JS which claim to resolve this); each cloud provider approaches FaaS in a different way and at present you can’t take a function and move it between vendors. As the appetite for serverless technologies grows, the nature of DevOps will subsequently change; it will still be relevant, although how we go about doing it will be very different. We could say that we are moving into a world of NoOps where applications run themselves in the cloud with no infrastructure and little human involvement. Indeed, humans will need to be there to help automate those services, but won’t be required to do as much coding or testing as they do now. With the advent of AI, the IoT, and other technologies, business events can be detected more quickly and analysed in greater detail; enterprises should embrace ‘event thinking’ and Lambda Architectures as part of a digital landscape.”
With FaaS and serverless gaining momentum, we are seeing fundamental changes to the traditional way in which decisions around technology are made, with roles like the CIO evolving at enterprise level now that there isn’t the same level of vendor negotiations. “Cloud providers are basically the same price across the board, meaning there is little room for negotiation, other than length of contract. However, signing up to long-term single-cloud contracts introduces the risk of having a spending commitment with a cloud that doesn’t offer the features that you need in the future to deliver business value. In this respect, the CIO is still necessary,” adds Richard Slater.
The current industry climate demands an increase in specialised IT skills that can cater to serverless digital transformation. If business leaders want to deliver, they need to let go of the ‘command and control’ approach and empower teams to be accountable. Creating the environment and securing the right skillsets to develop, own and operate applications from within the same team demands a new breed of IT engineering. Organisations wanting to embrace digital transformation and this new breed of cloud service delivery must start to trust the individuals closest to the business who are writing code on the ground. “To a certain extent this trust must be earned, but in many of today’s enterprises there is so much governance around technical delivery that it has the effect of slamming the brakes on any transformation,” concludes Richard Slater, Principal Consultant at Amido.
We’ve seen trends come and go over the years, but with global companies like Expedia and Netflix embracing serverless computing, and cloud heavyweights Amazon, Google and Microsoft offering serverless computing models in their respective public cloud environments, FaaS seems here to stay.
With the General Data Protection Regulation (GDPR) on the horizon, businesses that wish to operate in the European Union are having to spend more time than ever thinking about compliance.
By Mark Baker, Field Product Manager, Canonical.
Not only does all personally identifiable customer data need to be accounted for – a task that is easier said than done for many organisations – internal processes also have to be updated and employees need to be educated to ensure the compliance deadline of 25th May 2018 is met.
Of course, GDPR is just one legislative challenge facing businesses. Financial services firms, for example, have a revamped version of the Markets in Financial Instruments Directive (also known as MiFID II) to respond to, while the UK telco industry is facing the prospect of new legislation being enforced after Brexit.
And as falling foul of industry regulations has the potential to result in massive financial penalties, as well as the threats of reputational damage and a loss of customers, organisations simply can’t afford to be complacent.
However, fear of the complexity of managing compliance in new infrastructure, as well as the effort already involved in ensuring existing systems are ready, is prompting many businesses to shy away from the cloud, despite the many benefits such services offer. Concerns are primarily due to a misconception that cloud platforms, with data held by third parties on shared systems, will be a more difficult undertaking than traditional in-house systems and potentially less secure, but the truth is very different.
Public cloud services can be extremely secure and often can be a more secure option than in-house systems. So, what exactly is behind this misconception and why should businesses be trusting public cloud services with their compliance needs?
Privacy please
On the face of things, it’s easy to see why many people would assume on-premise infrastructure is more secure and easy to manage. In theory, businesses know exactly where their data is being stored and who has access to it, both of which provide comfort for organisations.
They can also design the architecture to suit their own specific needs and preferences, as well as reducing the risk of data loss if a public cloud provider goes out of business. One could argue that such a setup would be particularly appealing to businesses operating in highly regulated industries, such as healthcare and financial services, which need to have greater visibility and control over how their data is managed.
However, firms would be wise to remember that operating their own private cloud places the responsibility of security and compliance squarely on their shoulders. Businesses are at the mercy of the whims of nature and the resilience of their local power grid, potentially leaving them helpless if something goes wrong.
It also leaves them vulnerable to disgruntled employees and internal data theft. Employees may have easy access to confidential data, sometimes with very little to stop them from stealing corporate information simply by pulling a disk from a server and leaving the building with it. Often employees can also connect USB drives which have been used in home systems and may contain malware or viruses. Huge faith is placed in the firewall as an effective means of keeping intruders out, yet backdoors may well exist in the form of legacy and unsecured modem connections, as well as poor access control processes that leave user credentials in place long after the relevant employee has left the company.
So just because infrastructure is in your data centre doesn’t mean it is inherently more secure, resilient or suitable to meet the needs of regulatory compliance than public cloud.
Going public
While some businesses may feel more comfortable knowing their data is being stored within their own walls, data location is only one small aspect of security and compliance.
Along with the provision of innovative new services to enable business growth, it is the job of public cloud providers to protect their customer’s data. A central component of their value proposition, therefore, is the delivery of systems, tools and continuity plans that make their cloud infrastructure safe and secure.
This applies to both virtual and physical means of protection. Corporate data will be stored in a secure facility with multiple layers of physical security that are often not present if businesses opt to run their cloud infrastructure in-house.
And, with competition in the market continuing to increase at a rapid rate, ensuring compliance is not only a valuable competitive advantage for those businesses offering public cloud services, but also essential to gaining customer trust and, in turn, loyalty. In this respect, smart cloud providers such as City Cloud are leading the way with a value proposition focused very much around regulatory compliance.
Public cloud providers are also likely to carry out software patching on a more regular basis, which is essential to managing compliance. Businesses running their own private clouds will generally be slower to patch security gaps, leaving themselves exposed to potential data breaches and compliance holes. The recent Spectre and Meltdown vulnerabilities are a great example of this, with Google, Microsoft and Amazon all patching their systems quickly after the problems became public. Meanwhile, many businesses will still be trying to determine which systems they need to patch and how to go about doing it.
Furthermore, public cloud providers tend to have highly skilled and experienced IT teams, which isn’t something that can be said for all businesses. The skills gap issue is an extremely prevalent one in the cloud world and businesses are finding it harder than ever to attract talented developers. This is causing problems when it comes to addressing the more technical compliance challenges, which could be solved using third-party infrastructure.
Add in the fact that businesses will not be alone when defending against attacks and the skills argument provides compelling support for the merits of using third-party providers to ensure legislative compliance.
The combination of these factors means that in many cases public cloud can actually be a better option than a private cloud for systems with high security and compliance requirements. It can certainly be a less complicated option for businesses and help to give them peace of mind amidst shifting regulatory landscapes.
As end users become far more sensitive to the security of their personal data and initiatives like Open Banking come into effect, the challenges are only going to grow. That’s why organisations today, rather than shying away from public infrastructure, should be embracing it as part of a hybrid cloud offering on their journey to compliance.
However, the importance of investing varies when viewed from the perspective of senior level executives compared with those on the frontline of an organisation.
This digital transformation has resulted in businesses becoming far more reliant on technology to manage the day to day operations and for efficient decision making. To be successful in this environment, organisations must evolve to be smarter, stronger and faster than they are today, enabled by changes across three dimensions: organisation and talent; business and customer; and process and technology.
Those leading the change will have a distinct advantage over their competitors — a performance gap that will continue to grow as emerging technologies and digital channels offer further insight using data from both within and outside the enterprise.
However, only 55 per cent of those at entry level agree; as the evidence shows, the investment strategy is not filtering through from the top to the frontline. If a business invests well even in basic technology, it can instantly boost staff motivation and productivity as well as appeal to prospective customers.
Employees expect an easy, quick and efficient experience whenever they use a device, whether onsite or remotely. For most businesses, the workforce is its largest investment. From motivating and developing talent to creating an engaged workforce, the effectiveness of employee management has a direct impact on business results and competitiveness. The role of IT has evolved from being a historical provider of technology to challenging processes and leading the digital changes across organisations on a global scale. Reflecting this, 69 per cent of employees believe their company is investing in mobile devices to differentiate themselves from the competition.
The market has shifted, and businesses are now embracing mobility as a game-changer, lowering the overall cost of IT investment by reducing device costs, making business operations more efficient and making employees more productive - key elements in growing revenue and increasing profitability. Leveraging a mobile strategy has many distinct advantages including a higher level of employee responsiveness and engagement.
Today, digital transformation is infiltrating every aspect of an organisation and as a result, technology demands across departments have skyrocketed. It is now critical for senior executives to effectively manage those out in the field.
Moscow IT Department has completed the implementation of a unified IoT platform monitoring and controlling over 22,000 communal vehicles.
The Department of Information Technologies of Moscow (DIT) has completed the implementation of a unified IoT platform for monitoring and controlling over 22,000 communal vehicles: street sweepers, snowplows, waste trucks, water carts, etc. With the launch of the project, millions of Moscow citizens now have a chance to experience the benefits of living in an IoT-dominated landscape.
All communal vehicles have been integrated into a unified network and equipped with special sensors which allow the city authorities to track routes and to monitor speed, fuel consumption and operation mode. AI generates a daily scope of work for each vehicle based on the weather forecast. The system automatically sets the period of work and calculates the optimal route, selected from a database of route patterns. The GLONASS system pinpoints the precise location of each vehicle so that its route can be planned in the most efficient way, while special fuel sensors contribute to reducing fuel consumption.
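The article does not describe how the route selection works internally, but the idea of picking the cheapest pattern from a database given a weather forecast can be sketched in a few lines. Everything below (route names, distances, the fuel-per-kilometre figure and the snow penalty) is invented for illustration; it is not Moscow's actual algorithm.

```python
# Hypothetical database of pre-defined route patterns.
ROUTE_PATTERNS = [
    {"id": "R1", "km": 42.0, "suits_snow": True},
    {"id": "R2", "km": 35.5, "suits_snow": False},
    {"id": "R3", "km": 47.3, "suits_snow": True},
]

def estimated_cost(route, snowfall_cm, litres_per_km=0.6):
    """Rough fuel estimate, penalising snow-unsuitable routes in heavy snowfall."""
    penalty = 1.0 if route["suits_snow"] or snowfall_cm < 5 else 1.5
    return route["km"] * litres_per_km * penalty

def pick_route(snowfall_cm):
    """Return the id of the cheapest pattern for today's forecast."""
    return min(ROUTE_PATTERNS, key=lambda r: estimated_cost(r, snowfall_cm))["id"]

if __name__ == "__main__":
    print(pick_route(snowfall_cm=0))   # R2: light weather, shortest route wins
    print(pick_route(snowfall_cm=12))  # R1: heavy snow, a snow-ready route wins
```

The real system will of course weigh many more factors, but the shape of the problem (score each stored pattern against the forecast, take the minimum) is the same.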
AI calculates the daily scope of work for each vehicle and its period of work, and thus plans the route in the most efficient way.
The IoT platform helps to optimize the routes for municipal vehicles and thus avoid overconsumption of fuel. The savings amount to 9.1 million rubles (162,000 USD) per month, equivalent to the cost of two brand-new snowplow trucks.
Special sensors installed on vehicles transmit data on speed and fuel consumption.
Moscow has recently suffered a month's worth of precipitation in a matter of days, a record for the city: 47 centimetres (17 inches) of snow covered the capital during the past weekend and at the start of the week. This weather cataclysm turned into a real crash-test for the newly implemented system, and the system passed it. Over 15,500 equipment units controlled by the IoT platform were involved in removing an unprecedented 1.2 million cubic metres of snow. Despite the extreme weather conditions, no public transport breakdowns were recorded.
"Navigation and telemetric data accumulated via the unified IoT platform allows the relevant departments of the Moscow Government to monitor and control the activities of communal vehicles operating in Moscow. Thus we have full online access and 24/7 real-time control of each of the 22,000 vehicles," commented CIO of Moscow Artem Ermolaev. "The first days of February brought record-breaking snowfalls to Russia’s capital, but thanks to the coordinated work of the city authorities and the unified IoT platform we could react quickly and resolve the crisis," he added.
Telemetric and navigation data from the vehicles' transmission systems is sent to the dispatch center of the Department of Housing and Communal Services and Improvement of Moscow. All the data is transmitted over secure encrypted channels. The center collects, processes, transfers and stores the data, enabling communal vehicles to be managed and controlled in real time.
The IoT platform provides access to the data according to the position and level of the requesting official. For instance, the Mayor of Moscow and the executives of the Department of Housing and Communal Services and Improvement of Moscow have full access to the database, while the access of municipal authorities and service providers is limited: they receive only the data on the districts which fall under their responsibility.
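This tiered access model is a familiar pattern and can be sketched simply: executives see everything, everyone else sees only their assigned districts. The role names, district names and records below are all invented for the example; the real platform's access rules are not public.

```python
# Hypothetical vehicle records and roles for illustration only.
VEHICLE_DATA = [
    {"vehicle": "plow-001", "district": "Tverskoy", "status": "moving"},
    {"vehicle": "plow-002", "district": "Arbat", "status": "on hold"},
    {"vehicle": "sweeper-003", "district": "Tverskoy", "status": "moving"},
]

FULL_ACCESS_ROLES = {"mayor", "housing_department_executive"}

def visible_records(role, districts):
    """Full database for executives; only assigned districts for everyone else."""
    if role in FULL_ACCESS_ROLES:
        return list(VEHICLE_DATA)
    return [r for r in VEHICLE_DATA if r["district"] in districts]

if __name__ == "__main__":
    print(len(visible_records("mayor", set())))                    # all 3 records
    print(len(visible_records("district_service", {"Tverskoy"})))  # 2 records
```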
The IoT platform identifies the precise location of each of the 22,000 vehicles in real time. Vehicles marked “green” are on the move; those marked “orange” are on hold.
Telemetric data allows housing and communal services to provide round-the-clock technical support and maintenance, as well as the timely replacement of vehicles. The system monitors engine performance metrics, warning about potential breakdowns and equipment wear. Experts say the next step could be monitoring drivers’ performance: the installed sensors would be able to detect the first signs of fatigue and recommend that a driver stop for a rest, thus increasing road safety.
IoT is going to become one of the primary drivers of digital transformation in 2018 and beyond. Smart cities worldwide are quickly moving to implement IoT solutions into daily life. The telemetry technology in smart cities worldwide is thriving. Vehicles are monitored in real time by thousands of sensors. Telemetry thus forms a solid basis for a smart infrastructure that is going to become a next step to self-driving vehicles which are to shape our transport future.
Large organisations are caught between a rock and a hard place, as agile companies continually disrupt established markets and industries – their only option to avoid failure is to innovate. Just look at how Netflix’s digital transformation has rendered Blockbuster redundant. This highlights the very real issue facing large legacy enterprises today.
By Bob Davis, CMO of Plutora.
The new reality is that every company needs to be a digital, software-powered company. IT projects and software functionality have permeated every aspect of every business. From backend processes all the way to customer-facing innovation, and application development is at the heart of many of these processes. Staying competitive requires swift digital transformation capable of keeping pace with the ever-increasing demand for faster app releases and updates. While small businesses and start-ups can thrive on this demand, large enterprises face many obstacles to speeding up software delivery.
There are key drivers behind the need to innovate and release software faster, but the question business leaders are left with this year is, “how do we actually increase speed and be more responsive without losing control?” To answer this question, instead of looking for small changes to project management or operations that could increase productivity, enterprises should look to automation to overhaul their release management processes and provide the means to bridge the gap between engineering speed and project management visibility, specifically:
1. Necessary acceleration
There has been plenty written about the advantages of adopting agile DevOps at the engineering level to help automate and streamline technical processes. However, less is mentioned of the need for project management teams to handle all of the different automation solutions the engineering teams deploy. With manual processes, there’s no way to accelerate support for the engineering side, so embracing automation where possible, and sensible, within release management will enable teams to keep pace with their counterparts in development.
2. Managing risk
Every software release is accompanied by a certain level of risk for the business. Whether it’s a bug in a customer-facing system that leads to revenue loss, or an internal IT system that crashes and diminishes productivity, software release risk must be managed. As delivery demand increases, manual release management also brings with it some additional risk as project management teams work to keep pace with increased release cycles. Automated release management provides additional visibility into the total risk landscape – something manual processes can’t provide.
3. Juggling delivery pipelines
When release schedules were based on a 12-month or bi-annual pace, it was much easier for project managers to manually track delivery pipelines. However, daily or weekly cycles are creating major overlap between delivery pipelines – and the sheer volume of delivery pipelines increases on a near-daily basis. Additionally, one failed process or pipeline conflict can compromise many other applications and projects, causing a ripple effect throughout an organisation. Managing these pipelines with spreadsheets simply won’t work anymore, driving the need for delivery management change, and automation is here to answer the call.
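The pipeline-overlap problem described above is exactly the kind of check a tool can run continuously where a spreadsheet cannot. The sketch below flags releases whose deployment windows collide; the project names and dates are invented for illustration.

```python
from datetime import date

# Hypothetical release calendar: (project, window start, window end).
RELEASES = [
    ("payments-api", date(2018, 3, 1), date(2018, 3, 5)),
    ("web-frontend", date(2018, 3, 4), date(2018, 3, 6)),
    ("data-platform", date(2018, 3, 8), date(2018, 3, 9)),
]

def overlapping(releases):
    """Return pairs of releases whose deployment windows overlap."""
    conflicts = []
    for i, (name_a, start_a, end_a) in enumerate(releases):
        for name_b, start_b, end_b in releases[i + 1:]:
            # Standard interval-overlap test.
            if start_a <= end_b and start_b <= end_a:
                conflicts.append((name_a, name_b))
    return conflicts

if __name__ == "__main__":
    print(overlapping(RELEASES))  # [('payments-api', 'web-frontend')]
```

With daily or weekly cycles across hundreds of pipelines, automating this check (and raising it before a conflicting deployment starts) is what replaces manual tracking.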
Uniting your teams
One of the main challenges businesses will continue to face in 2018 when trying to speed up the IT delivery process is the widening gap between the management portions of IT – delivery and operations teams. Project teams are largely focused on the development and delivery functions, and generally accept and facilitate improvements to increase the speed of delivery. With a deep concern for making delivery faster, better, and cheaper, the delivery teams are usually very responsive to the adoption of automated technology and the acceleration necessary for digital transformation.
The operations side, on the other hand, is concerned with securing and protecting the integrity of production environments. While this governance and quality assurance is vital to the delivery process, manual approaches to operations are a hindrance to the modern rate of releases. When engineering teams followed relatively slow and methodical release cycles, the gap between project management and operations was manageable. Business demand for greater agility necessitates a more streamlined and automated approach to quality and risk.
Deeper down the value stream, engineering teams will use numerous tools and platforms to build new software and release it. For example, there could be hundreds of projects going on at once, each one supported by tools such as Puppet, Jenkins, Jira, Rally, and more. With so many tools, individual engineering teams have plenty of resources to speed up their processes – but only in isolated pockets. To break down the silos, both development and operations need a way to coordinate across multiple tools to gather the necessary information to manage delivery processes holistically.
Automated enterprise release management is the bridge between project delivery and operations. It connects the two management components to the underlying engineering level – no matter the tools they choose to use. Enterprise release management sits between project management and operations, and builds a comprehensive view of all information related to IT delivery throughout the organisation.
Why automated enterprise release management is a bonus
Streamlining the release management, test environment management, and deployment processes enables changes to be made more quickly. By keeping up with release demand, companies can deliver a differentiated customer experience necessary for survival in increasingly competitive digital markets.
Traditional manual spreadsheet strategies often lead to rework for IT delivery teams – especially when demands increase to meet digital transformation needs. Automated enterprise release management eliminates the need to reconcile inconsistent information on a day-to-day basis.
Similarly, unifying release processes throughout the organisation allows management teams to absorb more information about changes while controlling IT service quality even under heavy speed demands.
For most organisations, digital transformation is either up and running or on the horizon. This year more companies will be looking to implement an automated enterprise release management solution that enables them to increase their speed and responsiveness to market changes by addressing the risk management element of IT delivery. When quality management and governance are also built into automated enterprise release management solutions, digital transformation will become a much less risky venture.
"Never waste a good crisis," as Churchill once said. And the next big crisis that's about to hit digital businesses, in many people's eyes, is the General Data Protection Regulation (GDPR). This is a once in a lifetime "crisis" that the best firms will "not waste" by overhauling their business processes and their culture of compliance.
By Vivek Dodd, Chief Compliance Officer, Skillcast.
Whilst many will see compliance with GDPR (and other legislation) as a burden on business, it is better to accept it as something that is part and parcel of business life. GDPR and other regulatory measures create a level playing field that benefits the ethical business, restores the trust of consumers, and provides a clearer environment for companies to operate in.
Compliance with regulations applies to all businesses. So, the companies that can imbibe the regulations and create a culture of compliance will have a competitive advantage and will thrive over time.
Now with GDPR upon us, it's the perfect opportunity for you to make your employees sit up, take notice and change their behaviour to act with more care, integrity and compliance. The way to do this quickly and inexpensively is with the use of digital training and compliance apps that provide instant access, streamlined logistics and people analytics for you to effectively monitor and manage compliance in your organisation.
Of course, in many arenas classroom training is a favourite, but it is increasingly looking ill-equipped and outdated for the demands of the modern world - particularly for the cutting-edge digital market.
Mobile phones are now more powerful than many laptops were just a few years back, and high bandwidth, video and data analysis are all available to engage with and train staff in imaginative ways.
Whilst digital learning is a great solution, it is also really worth taking the time with compliance training to carefully look at the detail and work out who really needs to know what. The training has to be relevant and there is nothing more frustrating than learning new information, which will not be applied. We can all recollect times when we’ve taken time away from our day job to be on courses where much of the material has been a waste of time and not appropriate to the job we do.
So, it’s vital to be precise with the content - after all employees should be engaged and then have the option to choose how and when they will undertake their learning. This allows the workforce to buy in.
The content required for compliance programmes can take many diverse forms - from professionally developed micro-learning videos, interactive scenarios, e-books, podcasts, articles and research reports to the less polished informal content generated by blogs and vlogs.
Also, it’s worth adding that recent research by Deloitte reveals that companies are investing considerably in programmes that use data for workforce planning, talent management and operational improvement.
Such analysis of data can build knowledge and competency maps that show areas for improvement for an individual and/or even the entire organisation, whilst also providing a measure of success for learning interventions and/or the entire learning programme.
This also paves the way for adaptive learning, whereby the learning platform recommends learning based on an employee's job role, experience and previous knowledge of the subject.
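Stripped to its essentials, that kind of adaptive recommendation is a filter over a module catalogue: match on role, skip what is already completed. The module names and roles below are invented for illustration; a real platform would also weight by prior assessment scores and experience.

```python
# Hypothetical compliance module catalogue.
MODULES = [
    {"name": "GDPR Essentials", "roles": {"all"}},
    {"name": "MiFID II for Traders", "roles": {"trading"}},
    {"name": "Anti-Money Laundering", "roles": {"finance", "trading"}},
]

def recommend(role, completed):
    """Modules relevant to the role that the employee has not yet completed."""
    return [
        m["name"]
        for m in MODULES
        if ("all" in m["roles"] or role in m["roles"]) and m["name"] not in completed
    ]

if __name__ == "__main__":
    print(recommend("marketing", completed=set()))
    # ['GDPR Essentials']
    print(recommend("trading", completed={"GDPR Essentials"}))
    # ['MiFID II for Traders', 'Anti-Money Laundering']
```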
Within digital learning, a trend that's taking hold is that organisations furthest up the maturity curve are aligning learning much more closely to job performance. It's no longer standard to conduct training in isolation in the hope that some of it will be retained by employees when they go back to their daily roles. Instead, they want training to be integrated with performance support apps and suitable job aids - for instance, integrating RegTech tools with digital learning to reduce operational complexity and improve compliance with laws and regulations.
Also, what cannot be emphasised highly enough is that compliance training, and indeed all training, has to be enjoyable. Like everything in life, when something is fun it is so much easier to take on board information and retain it.
A good way to do this is by adding elements of gamification, such as interesting storylines, non-linear pathways, timers and elements of risk taking. Another is by noting and rewarding employees for their achievements, both online and offline.
In short, the demands of compliance will not go away, so adopt an across the board, enthusiastic approach, and then go about the task knowing that the days of sitting at a desk and forgetting what you’ve learnt have finally gone.
Again, it is worth remembering that it has always been the case that employees are at their happiest learning when they can exercise control over how they access their training and have a selection of exciting pathways that they can take to complete it.
The digital industry needs no convincing that technology will help them adapt how they train their staff and no doubt they will see much higher employee engagement, staff retention and productivity, plus the added bonus that compliance will hopefully be no longer viewed as a burden.
The learning solutions are here. It’s up to you to harness them to create that culture of compliance and get a head start on your competitors.
As we settle into 2018, Europe has a fantastic opportunity to firmly establish itself as a hive for digital innovation. We will see significantly more forging of cross-industry partnerships as organisations strive to deliver on their promise of a fully digital offering. In addition, augmented reality (AR), blockchain and robotics will move out of pilots into mainstream adoption.
By Euan Davis, European Lead for Cognizant’s Center for the Future of Work.
Here are five key trends to look out for in 2018 and recommendations on how businesses can make the most of upcoming opportunities:
1. Brands begin to blend the physical with the virtual - AR edges into the mainstream
Over the course of this year, we will see AR weave its way into more serious B2B and B2C interactions, improving internal processes and business models. Immersive technology will underpin brands of the future thanks to its ability to make stronger connections with people. AR will quickly appear within experiences and transactions in both our personal and business lives, in the form of what we call ‘journeys’ – blending people, places, time, space, things, and events. Organisations will also begin weaving these immersive technologies into customer, employee, supplier and partner physical interactions, increasingly using them in training or as a sales tool, for example.
The most successful businesses will be those using the technology to create captive moments that draw in people’s attention by creating personalised stories for each consumer. This is a trend that will see consumers continue to favour spending money on experiences, such as visiting a pop-up virtual reality (VR) game in their city, over buying ‘things’ such as the latest TV.
2. Europe continues to build-out the economic foundations for a digital future
Traditionally, the USA is the world leader in software and Asia leads in hardware. In a global context, while full of innovative digital businesses, Europe still lacks the scale of the existing global tech companies. Yet, the basis of modern day Europe – as an open, collaborative and pluralist continent – will form the foundations for a successful digital economy in 2018. Furthermore, the impact of new technologies applied to all aspects of business and society is so large that there is no way to escape its gravitational pull.
Expect to see regional leadership emerge around the innovation needed to build the industries of the future, such as next generation education, smart cities, connected healthcare and autonomous vehicles. As Europeans successfully master the digital economy, we shall see a rise in employment, improved productivity and greater social cohesion. In this vein, the four freedoms that underpinned the 1990s single market – goods, services, capital and people – could gain a fifth, as Europe builds the foundation for a digital single market.
This year, European economies will continue to make smart bets on their future, and the start-up scene will see new investment funds open up for the likes of big data, artificial intelligence (AI) and VR. Additionally, the region’s leadership on digital regulation will stand it in good stead to establish itself as a long term sustainable leader in industries of the future.
3. Ecosystems accelerate the breakdown of traditional industry boundaries
Cross-industry partnerships are nothing new, but in 2018 these relationships will begin to blur the lines between industries to an even greater degree as sectors begin to “cross-source” functional areas of expertise from one another. As opportunities to integrate additional functions into existing products or services multiply, organisations will realise that collaborating with other industry experts will speed up the process, while at the same time exposing them to a more diverse customer base, across multiple industries.
For instance, last year Ford integrated Amazon Alexa to run its in-car infotainment systems. Whilst Ford is diluting its direct exposure to its captive customer base, the quality of its digital work is greatly improved through its collaboration with Amazon and therefore boosts the overall customer experience. Equally, Amazon gains increased exposure to Ford’s customers. A win-win situation for both.
4. Blockchain begins to truly influence consumers and business
The stratospheric rise of Bitcoin in 2017 revealed a growing public awareness of crypto-currencies and the blockchain technology that underpins them. Blockchain, over the last couple of years, has been largely confined to proof of concept projects, but this year we will begin to see a flood of live blockchain-enabled projects aimed at transforming B2B and B2C transactions.
Following a year of high profile data breaches in 2017, blockchain technology has come at a time when consumer trust has reached its lowest ebb. Successful blockchain rollout will begin to positively influence consumer trust in organisations. We will see new business models emerge as companies begin to understand and take advantage of secure data transfer. Elsewhere, cross-industry collaborations will take a leap forward as smart contract technology, based on blockchain, will allow for the secure transfer of consumer data. Regulators across the world will be working with businesses to keep up with the momentum of blockchain adoption.
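The trust property the article leans on comes from a simple structural idea: each block commits to the hash of its predecessor, so altering any historical record breaks the chain and is immediately detectable. The toy below illustrates that idea only; a real ledger adds distribution, consensus and signatures on top.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(records):
    """Build a chain where each block stores the hash of the previous one."""
    chain, prev = [], "0" * 64  # genesis predecessor
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain):
    """Recompute every link; any tampered block breaks all links after it."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

if __name__ == "__main__":
    chain = make_chain(["tx: A pays B", "tx: B pays C"])
    print(is_valid(chain))                    # True
    chain[0]["data"] = "tx: A pays Mallory"   # tamper with history
    print(is_valid(chain))                    # False
```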
5. People begin to embrace, rather than fear, the machine
New types of work are now emerging thanks to machines, despite the doom and gloom around automation and AI from some commentators. Machines still need humans and we are beginning to get a realistic vision of how they will work together. From ‘data detective’ to ‘personal memory curator’, new jobs will start to emerge as business leaders evaluate the working relationship between people and machines.
As outlined in a recent Cognizant whitepaper, ‘21 Jobs of the Future’, based on the major macroeconomic, societal, business and technology trends observable today, new types of work are emerging. In the future, we are likely to see more accurate forecasts about the impact of technology on jobs, focusing on how technology will continue to improve the human experience, not rob us of our humanity.
Machine learning may sound expensive and out of reach, but it needn’t be. Nearly every major machine learning framework has been made openly available. Open source platforms include offerings from Amazon, Google, Microsoft, Baidu, and many more.
By Chris Adams, President and COO, Park Place Technologies.
These “kits” substantially reduce the knowledge base required to apply machine learning, to the point it can be nearly turnkey. In fact, machine learning has become so accessible, the most technically unsavvy of reporters are trying their own experiments.
Thus the barrier is no longer obtaining the PhD scientists required to build a bespoke machine learning application—although some big, moneyed players like Uber are doing just that—it’s about using the tools on the market in ways that truly drive value for the business.
The advice from experts to those embarking on the machine learning mission is remarkably consistent. In short, machine learning is more accessible than many believe, and requires less data than many IT pros might expect. Nor is it a whole lot different from other technological tools: to work for the business, it needs to be built into the existing processes, culture, and oversight/governance structures within the enterprise.
Need another bit of encouragement? You won’t be far behind most in the field.
We may all be talking about neural networks, but “[t]he state of the practice is less futuristic,” opines TechCrunch. Most applications of machine learning, even among the tech leaders, are using the same algorithms and engineering tools from years ago. Regression analysis, decision trees, and similar methods are driving ad targeting, product recommendations, and search results ranking to a greater degree than sexy “deep learning” advancements.
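Those “years-old” methods really are that simple. As a minimal illustrative sketch, ordinary least-squares regression — the kind of workhorse behind much ad targeting and ranking — can be fitted in a few lines of plain Python (the ad-spend figures below are invented):

```python
# Ordinary least-squares fit of y = a*x + b, computed from the
# textbook formulas: slope = cov(x, y) / var(x).

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

if __name__ == "__main__":
    # Hypothetical click counts against ad spend
    spend = [1.0, 2.0, 3.0, 4.0, 5.0]
    clicks = [2.1, 3.9, 6.2, 7.8, 10.1]
    a, b = fit_line(spend, clicks)
    print(f"clicks ~= {a:.2f} * spend + {b:.2f}")
```

Production systems add regularisation, feature engineering and scale, but the underlying mathematics is exactly this.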
What’s more, there are infrastructure issues yet to be solved. The majority of time devoted to machine learning is spent preparing and monitoring the learning tools. Building the AI is a relatively small part of the picture.
Unfortunately, preparing data is a hassle, and the “bigger” the data, the worse the problems. Using scripts to consolidate duplicates, normalize metrics, and so on, can involve days of manual labor for a single run.
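The scripted clean-up described above can be sketched in a few lines of Python; the record layout and field names here are hypothetical, and real pipelines must cope with far messier inputs:

```python
# Two common data-preparation chores: consolidating duplicate
# records and min-max normalising a metric to the range [0, 1].

def consolidate(records):
    """Keep one record per id, summing the metric of duplicates."""
    merged = {}
    for rec in records:
        key = rec["id"]
        if key in merged:
            merged[key]["value"] += rec["value"]
        else:
            merged[key] = dict(rec)
    return list(merged.values())

def normalise(records):
    """Min-max scale each record's value to [0, 1]."""
    values = [r["value"] for r in records]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid divide-by-zero if all values match
    return [dict(r, value=(r["value"] - lo) / span) for r in records]

raw = [
    {"id": "a", "value": 10},
    {"id": "b", "value": 30},
    {"id": "a", "value": 10},  # duplicate of "a"
]
clean = normalise(consolidate(raw))
print(clean)  # [{'id': 'a', 'value': 0.0}, {'id': 'b', 'value': 1.0}]
```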
Big data can also lead to big machine learning errors, so monitoring production models is essential. Again we reach an impasse: When moving into unsupervised machine learning, where the correct output isn’t known in advance, traditional testing and validation tools no longer work. So how is IT to determine if the model is making “good” predictions? Dashboards and program alerts fill the gap at the development level, and more capable and specific tools are finally being developed by a few innovators.
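As a rough sketch of the dashboard-and-alert approach, one pragmatic check when no ground truth is available is to watch the distribution of the model’s own predictions and flag any drift from the baseline observed during development. The threshold and figures below are illustrative assumptions:

```python
# Flag drift when the mean of recent predictions moves more than
# z_threshold standard errors away from the development baseline.

import statistics

def drift_alert(baseline, recent, z_threshold=3.0):
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    stderr = sigma / (len(recent) ** 0.5)
    z = abs(statistics.mean(recent) - mu) / stderr
    return z > z_threshold

baseline_scores = [0.45, 0.50, 0.55, 0.50, 0.48, 0.52]
print(drift_alert(baseline_scores, [0.90, 0.88, 0.92, 0.91]))  # drifted
print(drift_alert(baseline_scores, [0.49, 0.51, 0.50, 0.50]))  # stable
```

This catches a model whose outputs have shifted wholesale; subtler failure modes need the more capable, specific tools mentioned above.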
The point is that machine learning isn’t breaking any moulds for rapid adoption. To the contrary, it has experienced a slow rise. Neural networks joined the scientific literature in the 1940s, the underlying mathematics was largely complete by the 1990s, and it has taken the intervening decades for computers to catch up.
The next obstacle will be developing end-to-end solutions, which will accelerate the transition from the rudimentary machine learning dominating business today to the more futuristic possibilities still mostly dormant in neural networking laboratories. How long such a transition will take is still up for debate.
If any more evidence were needed that a fire has been lit under digital transformation, then the prediction that the enterprise mobility market will explode to $500bn by 2020 is it.
By Nick Pike, VP UK and Ireland, OutSystems.
Fuelling this blaze is customer and employee demand for a seamless mobile digital experience in all aspects of their home and working lives. For businesses, it’s a case of adapt or be left in the dust, as they face challenges from disruptive industry entrants and innovators. IDC predicts that, by 2020, half of the Global 2000 enterprises will see the lion’s share of their businesses depending on their ability to create digitally-enhanced products, services and experiences. The deadline is short and the opportunities are huge, so why are some corporate organisations stuck at the starting gate?
Reason 1: The fear factor
Digital transformation means just what it says: fundamentally changing the way enterprises do business from top to bottom. That’s a daunting prospect for many executive teams in large corporations, despite the promised rewards. While workers at the coalface of the business may be crying out for streamlined mobile business processes and apps that will make them more efficient, the drive for large scale strategic change has to come from the top. Human and financial resources need to be allocated and the whole business lined up in support of the process so that digital transformation is viewed as a strategic investment in the future competitiveness of the company, rather than an expense.
Fear can also arise from concerns that already overstretched IT departments will struggle to cope with the new demands of application development and rollout. In fact, Gartner fed this particular fear in 2015, when predicting that by the end of 2017 the demand for enterprise-grade mobile apps would have grown at least five times faster than the ability of internal IT departments to deliver them. However, in the two years since that prediction was made, the rise in rapid application development platforms has reduced the burden on IT departments and shortened the time to launch. So, this particular fear can now be faced with a degree of confidence.
Reason 2: User resistance
Large companies with employees that are used to a slower pace of life can find it hard to adapt to the speed of digital transformation. They can struggle to align vital employee education programmes with the rollout timeframe that can be achieved. It’s no good having a fantastically efficient new system if users are still hankering after the legacy technology – warts and all – and struggling to embrace their new environment. Therefore, user education is a critical part of the transformation process.
OutSystems customer Aravind Kumar encountered this challenge when migrating his consulting company from a collection of 50 IBM Lotus Notes applications to a suite of new applications that were developed using the OutSystems low-code platform. He told us: “Getting people to shift their thinking was one of the greatest hurdles we had to overcome. In fact, even as we were building new applications, people were saying we should try to recreate them just as they were in Lotus Notes!”
Fortunately Aravind was able to bring his users with him on a journey to discover the efficiency and accessibility of the new applications his team had developed. The key point is that, to be successful in digital transformation, businesses need to invest in the human factor as well as in the technological expertise in order to realise the full benefits and mitigate resistance.
Reason 3: Letting the past dictate the future
Every large enterprise has a past. Unlocking information and freeing business processes from legacy IT systems can be one of the biggest stumbling blocks when it comes to digital transformation. However, it is important to recognise that, as I’ve said before, legacy systems exist because they work - they just don’t have the agility that the digital world requires. Establishing when legacy systems should be retired and when they should be incorporated into mobile business processes is a key challenge and evidence suggests that enterprises are mixing it up. A recent report by VDC research found that 53% of large organisations (organisations with >1000 employees) said that the most common development projects they worked on involved building net-new applications from the ground up; however 43% stated that they were modernising legacy applications.
The elegant solution is to find a way to leverage the legacy systems of the past without letting them crush the ambition and potential of the future. An advantage of the OutSystems low-code platform is its ability to integrate with legacy systems, even if they are unique to the customer, meaning that prior investment is not wasted.
As IDC neatly put it “Digital transformation starts with mobility. Organisations with untethered business processes and ubiquitously accessible IT resources will be better positioned to compete and thrive in the digital economy.” This is why organisations need to address the challenges of fear, user resistance and integrating legacy systems to get out of the starting gate onto the competitive racetrack of digital transformation.
It has been predicted that, by 2020, there will be more than 200 billion devices connected by the Internet of Things (IoT). Whilst the IoT has been discussed in technology circles for many years, it is only recently that businesses have started to embrace it properly. Essentially, it consists of billions of devices all connected by an invisible network; combined with faster wireless speeds and smart technology, it is slowly reshaping how entire industries function.
By Syed Ahmed, CEO at SAVORTEX.
One of the biggest challenges businesses face is the constant pressure to reduce costs while improving their carbon footprint. These two goals often pull in opposite directions, because reducing CO2 emissions tends to require additional spending. Businesses are therefore increasingly looking for cost-effective solutions that will improve efficiency and help them meet green targets, and many are turning to the IoT for an answer.
In a recent survey conducted by Forbes, two thirds of the companies questioned believed that the IoT is important to their current business, and over 90% stated that it will play a crucial role in the future of their company. This new reliance on the IoT has provided manufacturers and technology companies with an opportunity to innovate and develop products that will help businesses achieve their goals.
For example, we partnered with Intel to create a revolutionary hand dryer that would harness the IoT and bring substantial savings and benefits to the businesses that installed them. The adDryer is an IoT-enabled smart dryer which includes a digital screen that can deliver tailored, high-definition video messages to users, which can be used for internal marketing or as an additional revenue stream.
The sleek and sustainable design means the adDryer has a depth of only 134mm and the lowest carbon footprint per dry, as it runs off just 500W. With super-fast drying times from 11 seconds – 2.7 times faster than other dryers on the market – and a patented energy-recovery and curved-air-delivery technology, it delivers a 66% energy saving compared with any other dryer on the market, and a 97% saving compared with using paper towels. This ensures the hand dryer meets BREEAM and LEED compliance.
In addition, due to the adDryer’s unique data capture technology, we can create an audience value of the estate based on the users of the building, for example gender, age and occupation. The company’s media buyers can then use this information to set and agree a cost per view rate that will be paid to the building owner each time someone uses an adDryer and sees an advert, thus creating revenue.
Moving forward, manufacturers now have a responsibility to make the IoT more accessible to smaller businesses. In recent years, the IoT has mainly been designed with bigger companies in mind. The fact that the technology is still fairly advanced means there are not many off-the-shelf solutions, and few SMEs have the resources for this sort of complexity. Therefore, simpler and more straightforward products must be developed to enable companies of all sizes to benefit from the savings and efficiencies the IoT can bring.
Whilst the washroom is only one part of the office space, by introducing the IoT, businesses can achieve substantial energy and cost savings by simply updating their hand dryers, and this is a solution that can be adopted by businesses of any size. If the same innovation is applied to all industries, companies will be able to surpass their sustainability targets.
Hyperconverged infrastructure is essential when embracing the demands of the IoT.
By Lee Griffiths, Infrastructure Solutions Manager, IT Division, APC by Schneider Electric.
The explosion in digital data requires, among many other things, an expanded vocabulary just to be able to describe it. Many of us familiar with computer systems as far back as the 1980s—or further—remember what a kilobyte is, in much the same way as we remember the fax machine. However, millennials may have only a vague recollection that such terms or technology ever existed or were relevant.
Now we must become as familiar with terms like the Zettabyte (that's 10^21 bytes) as we are with products like smartphones and self-driving cars—marvels that were science fiction not so long ago but which have become, or are rapidly becoming, commercially available realities. Industry analyst IDC predicts that by 2020 there will be 44 Zettabytes of data created and copied annually, based on the assumption that the amount will double every two years.
If that projection sounds like an echo of the famous Moore's Law of semiconductor growth, that the number of transistors able to be constructed on a single silicon chip will double every two years, it may be worthwhile pondering the effect that simple empirical observation of the 1960s has had on the computer industry and the world. The ever-increasing capability of silicon chips, both to process information more quickly and to store and retrieve it in vaster capacities, has underpinned our information society for decades.
More recently, Cisco has estimated that within five years 50 billion devices and ‘things’ will be connected to the Internet. Digitalisation is driving the economy and, according to IDC, by the year 2020 around 1.7 megabytes of new information will be created every second for each human living on the planet. Incredible numbers, but when you look at new innovations such as Amazon Go, the partially automated grocery store, you can see these predictions becoming a reality. Think of the amount of IT infrastructure that will be needed to support the supermarket industry of the near future if this new way of shopping takes off!
IDC's projections warn those depending on digital services that the amounts of data required are only going to increase for the foreseeable future. How does the data centre industry respond with the necessary capacity?
Speed of response is a vital factor in the data centre industry. Indeed, the very rapid surge in creation of new data and the consequent demand for more data centres is causing the industry to diversify along two routes.
At one level, centralised data centres are becoming bigger, making huge data capacities available. At another, smaller data centres are moving to the edge of the network, bringing data closer to the point of consumption, simplifying Internet traffic and reducing network latency for applications such as video streaming - where speed of response is essential. The rapidly growing Internet of Things (IoT) phenomenon is also leading to a demand for smaller data centres distributed around the edge of the network.
Collaboration enables innovation
The response of OEMs in both the data centre and IT industries has been to realise three things: that no one vendor can provide all the tools necessary to deliver the types and variety of digital services required by today's rapidly growing businesses; that the necessary speed of response requires a collaborative approach between vendors and their channel partners, the systems integrators charged with designing and assembling facilities to customers' specific needs; and that standards and interoperability between the various vendors' products are essential to such a collaborative approach.
The goal for many skilled systems integrators now is to be able to rack, stack, label and deploy a data centre of any size for any customer, tailored to their particular needs and rolled out in the quickest time frame possible.
At the most specialised end of the market this comprises localised Edge or micro data centres, delivered in a single rack enclosure with integrated power, uninterruptible power supply (UPS), power distribution, management software (DCIM), physical security, environmental monitoring and cooling. Such infrastructure can be assembled and deployed rapidly to support a self-contained, secure computing environment, in some cases in as little as two to three weeks.
Further up the scale are centralised, hyperscale data centres with purpose-built computer halls comprising air-conditioning equipment, containment enclosures for hot- or cold-aisle cooling configurations, and the necessary power supply and networking infrastructure to drive a multitude of computer-equipment racks. Such installations need to be adaptable, accommodating rapid upgrades and the scaling up or down of compute and storage capacity in response to end-user needs.
Simplified integration
In either case, the ability to scale rapidly, to rack, stack, label and deploy the solution requires ever greater collaboration between vendors and convergence between their various product offerings, so that the time taken to build complete systems is as short as possible. In many cases, data centre companies offering critical infrastructure solutions must work closely with IT vendors who produce rack-mounted servers, storage arrays and networking equipment to ensure their products integrate seamlessly with each other.
Integration is absolutely essential for end-users looking to embrace digital transformation and expand their footprint rapidly. Solutions must be delivered ready to deploy and in excellent working condition, and that requires both focused partnerships and the skills of highly specialised systems integrators, who have become the go-to people in the converged and hyperconverged infrastructure space. The magic, it seems, lies not within the individual pieces of IT and infrastructure equipment, but very much within the way the system is built, tested and deployed.
Additionally, such hardware must be guaranteed to work flawlessly with DCIM (Data Centre Infrastructure Management) and virtualisation software solutions that allow pools of processing or storage resources to be treated as individual isolated systems dedicated to particular customers or applications.
Collaboration: the key to hyperconvergence
By definition, converged infrastructure enables the user to deploy four critical or core components of data centre technology - the compute, storage, networking and virtualised server - within a single, secure, standardised, rack-based solution. Hyperconverged infrastructure differentiates itself by utilising software to enable tighter integration between components and to recognise them as a single stack, rather than as individual products.
In many of today's markets, businesses are adopting hyperconverged solutions as a more collaborative, forward-thinking and customisable approach to their data centre infrastructure requirements. It means that they can strategically hand-pick the core components, which are in many cases used to expand footprint whilst providing both resiliency and connectivity at the Edge of the network. The real beauty of hyperconverged infrastructure is that once a particular solution is chosen, tested and deployed, it can be quickly standardised and replicated to provide faster scalability and reduced costs – both in CAPEX and OPEX.
A recent example of collaboration between vendors is that between Schneider Electric and companies such as Cisco, HPE and Nutanix. In Cisco's case, the two companies have worked together to certify that Cisco's Unified Computing System (UCS) servers can be shipped already packaged in Schneider Electric's NetShelter racks and its portfolio of localised Edge or micro data centre solutions. Nutanix, meanwhile, has certified that Schneider's PowerChute Network Shutdown power protection software will work seamlessly with the ESXi and Hyper-V software used to manage its own hyperconverged systems.
In addition Schneider Electric has leveraged its Micro Data Center Xpress™ architecture in partnership with HPE on HPE Micro Datacenter, a collaboratively engineered converged infrastructure solution providing end-to-end IT infrastructure, networking, storage and management in a self-contained and easy-to-deploy architecture - ideal for distributed Edge Computing and IT environments.
Collaborations such as these help simplify the task of systems integrators as they specify and assemble bespoke data centre systems of all sizes. They give integrators the peace of mind that the key components of the systems they are tasked to build will work together seamlessly, reliably speed up the delivery time of new data centre deployments, and allow their customers to scale their businesses rapidly as they seek out new markets. It is therefore of paramount importance that Edge data centre solutions work reliably, as promised, from the moment they are connected.
The advance of semiconductor technology, guided by the road map established by Moore's Law, brought in the era of the PC, cellular phone networks and handheld technology. Now, in the era of Cloud Computing and the Internet of Things, the data centre – no matter the size - provides the fundamental technological base that makes all other services possible.
The ability to rack, stack and deploy new IT resources quickly, efficiently and under the guarantee that they will perform to specification, will no doubt have a huge impact on how well companies succeed in the era of Edge Computing and the IoT.
Advanced Computer Vision applications are becoming increasingly pervasive and are used to enable autonomous-driving modes of today’s cars, as well as in augmented reality, security surveillance systems, healthcare, industrial inspection equipment, robotics and more. Users’ expectations are rising, as familiarity inevitably brings demands for higher performance, such as faster response times, greater accuracy, or recognition of extra objects or features.
By Giles Peckham, Regional Marketing Director at Xilinx and Adam Taylor CEng FIET, Embedded Systems Consultant.
Recognition and classification of images uses deep machine learning techniques such as convolutional neural networks. Before they can be deployed these networks need to be trained for the application they are to be deployed in. High-performing embedded application-processing engines are capable of running deep neural networks that are trained in the Cloud for deployment on an autonomous edge device. This approach is suitable for systems like self-driving vehicles, where low latency is critically important and a reliable, high-bandwidth connection to the Cloud may not always be available.
Other use cases, such as security surveillance or medical imaging, may be less demanding of outright speed, and instead require complex analyses, extremely high accuracy to help specialists make informed decisions, and the ability to allow multiple parties access to results. In such situations, where compute-intensive tasks and flexible storage and access policies are required, the image-processing application can be more effectively and economically hosted in the Cloud.
Hardware-Accelerated Cloud Computing
Compared with the embedded vision systems discussed previously, hosting the application-processing algorithms in the Cloud creates a different set of challenges. It is here that compute-intensive deep machine learning, data analytics and image processing are implemented. Often the applications are required to stream processed data in near real-time and without dropouts to the client.
Cloud data centres are increasingly unable to fulfil the demands of today’s most intensive processing workloads using conventional CPU-based processing alone. Some have adopted FPGA-accelerated computing to achieve the throughput needed for these workloads and others such as complex data analytics, H.265 encoding and SQL functions.
Historically, an FPGA has been teamed with a CPU to provide acceleration, but a new model is emerging based on arrays of FPGAs such as Xilinx Virtex® UltraScale+ devices. These arrays deliver extremely high peak compute capability, with the added advantage of rapid runtime reconfigurability to repeatedly re-optimise for subsequent workloads.
Stack Streamlines FPGA Development
To fully leverage the capabilities provided by programmable logic, an ecosystem is needed that enables development using current industry-standard frameworks and libraries. The Xilinx Reconfigurable Acceleration Stack (RAS) answers this need, by streamlining FPGA-hardware creation, application development and integration. Hyperscale data centres can use these tools to jump-start development: several major operators are currently working with Xilinx to boost performance and service agility by introducing FPGA acceleration in their server farms, making extreme high-performance compute capacity available to customers as a web service.
Like the reVISION™ stack for embedded vision development, which was described in the previous article in this series, the RAS leverages High Level Synthesis (HLS) for efficient development of programmable logic in C / C++ / OpenCL® and System C. This HLS capability is then combined with library support for industry-standard frameworks and libraries such as OpenCV, OpenVX, Caffe, FFmpeg and SQL, creating an ecosystem that can be extended in the future to add support for new frameworks and standards as they are introduced.
Also like the reVISION stack, the RAS is organised in three distinct layers to address hardware, application and provisioning challenges. The lowest layer of the stack, the platform layer, is concerned with the hardware platform comprising the selected FPGA or SoC upon which the remainder of the stack is to be implemented. The RAS includes a single slot PCIe® half‐length full-height development board and a reference design, which are created specifically to support machine learning and other computationally intensive applications like video transcoding and data analytics.
The second level of the RAS is the application layer. This uses the Vivado® Design Suite and SDAccel™ development environment, leveraging HLS to implement the application. SDAccel contains an architecturally optimising compiler for FPGA acceleration, which enables up to 25 times better performance per Watt compared with typical processing platforms comprising conventional x86 server CPUs and/or Graphics Processing Units (GPUs). The environment is featured to deliver CPU/GPU-like development and run-time experiences by ensuring easy application optimisation, providing CPU/GPU-like on-demand loadable compute units, maintaining consistency throughout program transitions and application execution, and handling the sharing of FPGA accelerators across multiple applications.
For machine learning applications, DNN (deep neural network) and GEMM (general matrix multiplication) libraries are available on the Caffe framework, as shown in figure 1. Libraries for other deep learning frameworks, such as TensorFlow, Torch and Theano, are expected to be added later. It is worth noting at this point that the scope of the RAS is not limited to machine vision or deep learning: as figure 1 shows, other libraries are included that support MPEG processing using FFmpeg, as well as data movers and compute kernels for data analytics on the SQL framework.
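For readers unfamiliar with the term, GEMM computes C := alpha*A*B + beta*C, and is the computational core of most neural-network workloads. A naive pure-Python sketch makes the operation concrete; accelerator libraries implement the same arithmetic with heavy tiling, pipelining and parallelism:

```python
# Naive GEMM: out = alpha * (A @ B) + beta * C, for list-of-lists
# matrices. A is n x k, B is k x m, C is n x m.

def gemm(alpha, A, B, beta, C):
    n, k = len(A), len(B)
    m = len(B[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = sum(A[i][p] * B[p][j] for p in range(k))
            out[i][j] = alpha * acc + beta * C[i][j]
    return out

# With alpha=1 and beta=0 this reduces to plain matrix multiplication.
result = gemm(1.0, [[1.0, 2.0], [3.0, 4.0]],
                   [[5.0, 6.0], [7.0, 8.0]],
              0.0, [[0.0, 0.0], [0.0, 0.0]])
print(result)  # [[19.0, 22.0], [43.0, 50.0]]
```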
The third level of the RAS is the provisioning layer, and uses OpenStack to enable integration within the data centre. OpenStack is a free, open-source software platform that comprises multiple components for managing and controlling resources such as processing, storage and networking equipment from multiple vendors.
Figure 1. RAS addresses hardware development, application development, and integration.
Performance Boost, with Power Savings
By using the RAS to streamline the creation of Cloud-class FPGA-based computing, a significant increase in compute capability can be achieved, compared with processing on conventional CPUs. Image processing algorithms can be accelerated by as much as 40 times, while deep machine learning can be up to 11 times faster. In addition, hardware requirements are reduced, which lowers power consumption thereby resulting in a dramatic increase in performance per Watt. Moreover, the FPGA-based engine has the important advantage of being reconfigurable and so can be quickly and repeatedly re-optimised for different types of algorithms as they are called to be executed.
Conclusion
Automatic image analysis and object recognition applications can benefit from the increased performance and reduced power consumption offered by highly optimised, reconfigurable FPGA-based processing engines. Whether the application is to run on an embedded system or in the Cloud, using an acceleration stack enables developers to overcome design and integration challenges, reduce time to market and maximise overall performance. Both the reVISION stack for embedded development and the Reconfigurable Acceleration Stack for building Cloud-based FPGA compute engines assemble the necessary hardware and software resources and can adapt to support frameworks and standards as they are introduced.
Fast-paced advancements in technology have led to a shift in today’s consumer preferences. As consumers utilise virtual assistants via smart speakers and Amazon Dash buttons to order products quickly and easily, it’s clear the demand for integrated services is growing.
By Navdeep Sidhu, Product Marketing, Integration and API Management, Software AG.
Consequently, companies must adapt to cater to these new consumer behaviours, which underlines businesses’ need for greater integration. In the near future, we can expect to see robots, microservices and DevOps integrated into services to drive further innovation and make life easier for both businesses and customers.
Traditional companies are quickly realising that without integration there can be no digital transformation and, with the advent of machine learning, hybrid platforms and Robotic Process Automation (RPA), integration will itself evolve rapidly in 2018. Here are some key trends that I expect to see in the year ahead.
1. Say goodbye to being put on hold - RPA, chat bots and virtual assistants will become the number one driver for customer experience.
Customer-facing representatives will be assisted by RPA and robotic assistants to ensure that their customers receive the fastest, most efficient service possible. Integration providers, partnering with RPA vendors, will enable organisations to provide a seamless customer experience, ensuring that you never have to place an incoming customer call on hold. This will be backed up by human contact to quickly offer more specialised information and support.
2. Convergence of APIs and B2B - APIs and B2B integration will converge to drive the next generation of B2B interactions.
B2B platforms married to APIs (application programming interfaces) will enable organisations not only to conduct transactions but also to make every step of the process transparent. Having already invested in B2B platforms, companies will add APIs to create a smoother supply chain and provide instant results for customers making queries. This will be particularly relevant in healthcare and retail, ensuring the best and fastest customer response possible.
3. “Mapping bots” - AI and machine learning will make integrations smarter.
Integration projects are complex and time-consuming. Developers have to understand the information coming into the platform and map it to its logical place, where this information can be made useful. In the case of manufacturing, where there are many different suppliers in the supply chain, this means translating, mapping and orchestrating each supplier’s information. “Mapping bots” will become commonplace for simple data such as names and currencies, while human beings will still be needed as integration specialists.
4. Hybrid Integration Platforms - Hybrid Integration Platforms (HIP) will drive convergence of integration platform as a service (iPaaS) and internet of things (IoT) cloud platforms.
Instead of IoT devices sending information to on-premises integration platforms, they will become an edge-based extension of integration. IoT cannot function without integration. Hybrid integration platforms will create a convergence of IoT with iPaaS to offer scale and complex integrations for IoT projects.
5. Introducing Microservices - DevOps will go mainstream and Microservices will bridge the chasm.
Microservices are the buzzword for 2018: everyone wants to try them for scalability and super-fast apps. Technology architectures add new features every day, but users cannot afford any downtime while they are deployed. Microservices offer the ability to ship those features independently, but DevOps is needed for fast deployment. Microservices and DevOps go hand-in-hand.
6. Avoiding information silos - Multi-cloud finally becomes a reality and integration can stitch them all together.
There are hundreds of cloud providers in the market, and many organisations use more than one for hosting their application architectures. If the clouds are not stitched together, via integration, and able to “talk” to each other, organisations risk having information silos. This creates errors and wastes time.
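Stitching clouds together at the integration layer essentially means normalising each provider's records into one shared view, so the same customer does not end up siloed in two places. The sketch below is a minimal illustration; the provider names and field layouts are hypothetical.

```python
def normalise(provider, record):
    # Each provider stores the same fact in a different shape.
    if provider == "cloud_a":
        return {"customer": record["cust_id"], "email": record["mail"]}
    if provider == "cloud_b":
        return {"customer": record["customerId"], "email": record["emailAddress"]}
    raise ValueError(f"unknown provider: {provider}")

def stitch(sources):
    """Merge per-provider records keyed on customer, avoiding duplicate silos."""
    view = {}
    for provider, record in sources:
        row = normalise(provider, record)
        view.setdefault(row["customer"], {}).update(row)
    return view

view = stitch([
    ("cloud_a", {"cust_id": "C1", "mail": "a@example.com"}),
    ("cloud_b", {"customerId": "C1", "emailAddress": "a@example.com"}),
])
print(len(view))  # 1 -- one customer record, not two silos
```

Without the `normalise` step, the two records would sit in separate systems in incompatible shapes, which is exactly the error-prone, time-wasting silo the article describes.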
To keep pace with the most successful organisations, businesses need to integrate effectively in order to implement a robust digital transformation project, before the market evolves further and they can no longer keep up.
If there’s one phrase that’s currently being uttered by IT departments the world over, it’s digital transformation. It’s a hot topic of debate for businesses across a plethora of industries, and is deemed by many as a necessity for future success.
By Stuart Nielsen-Marsh, Microsoft Cloud Strategy Director, Pulsant.
However, there is a marked difference in what many businesses believe digital transformation consists of. For some, it is simply a matter of replacing old, outdated technology with state-of-the-art alternatives in order to keep pace with the rest of the competition — and this is indeed a valid view. Whether an organisation chooses to adopt Office 365 to increase efficiency and collaboration among employees, or decides to move their servers off-premise and instead locate their data in the cloud, technology has a huge role to play in future-proofing operations.
But digital transformation is a much bigger picture than many might think, with wider implications to consider. Successful digital transformation requires a marked change in attitude that enables businesses to remain relevant and competitive by keeping up with the fast-changing industry landscape, while also helping to retain existing customers and attract new ones. Technology undoubtedly forms part of this, but it also requires a significant cultural shift which will involve the cooperation of everyone within the workplace.
It is not entirely surprising that so many hold a limited view on what digital transformation is. There are plenty of emerging technologies — from artificial intelligence (AI) and machine learning to big data and the Internet of Things (IoT) — that are constantly being touted as the next big thing, and so it is easy to become overwhelmed and distracted by all of this public discourse and eventually develop a blinkered view of the potential for transformation.
Rather than get caught up in differing opinions, businesses should instead take a step back and adopt a more holistic approach to digital transformation that triggers a total rejuvenation of their business.
Specifically, this new approach requires reviewing and updating three areas in particular: first, technology, commercial and business strategies; second, the customer engagement approach; and finally, and perhaps most importantly, the organisational culture, with an emphasis on digital transformation being driven from the top down.
If digital transformation at this scale is to prove successful, it requires strong leadership, and there are several tips that can help to successfully achieve this change.
Consider the bigger picture
As with so many hot topics in IT, it can be all too easy to get lost in the sea of buzzwords that surrounds the digital transformation process. While of course there will be a certain amount of jargon that organisations must contend with, the process is much more high-level than some of us might first imagine: it effectively involves rebuilding your business to adapt to the digital landscape, while interacting with customers in a modern, innovative way.
Migrating towards a cloud-based infrastructure or adopting new technologies for the sake of it will not magically deliver successful digital transformation. Rather, it’s about enabling both your business and your customers to be a part of the wider paradigm shift, and to benefit from the changing industry around them. Technology will always form the backbone of this, but it should never be the sole focus of digital transformation initiatives.
In order to unlock the value of technology and have it form part of the modus operandi of your business, it’s imperative to get the culture and change programme right to begin with. However, many currently struggle with this because they don’t realise the full extent of the changes required. If the human elements of change aren’t addressed, then successful digital transformation is unlikely to happen.
This wider shift in approach involves rejigging all departments and areas within the business accordingly. While the specifics of this will vary depending on the business, it often involves reconsidering and redeveloping the sales approach, retraining sales staff, adjusting to new revenue and remuneration models and ensuring compliance.
Once this cultural and organisational shift has been finalised, businesses can then focus on the inevitable: technology.
Looking ahead to 2018 and beyond, there are likely to be three main technology components of transformation to focus on: hybrid, security and compliance, and data transformation. These are all particularly hot topics of the moment, and will play a big part in defining the majority of digital transformations moving forward.
Map out your unique transformation journey
With this in mind, organisations must ask themselves three questions that will help them to further guide their own digital transformation journeys…
· What is the compelling business reason for digital transformation? — As previously discussed, this involves taking a step back and defining the reason why transformation is necessary in the first place. For example, an independent software vendor might need to do so to start selling more modern, cloud-based and SaaS offerings
· What is the technology challenge that the organisation must overcome? — In other words, what issues need to be addressed? To continue with the example of an ISV, they need to be able to deliver these new cloud and SaaS offerings to clients while remaining compliant with new regulatory changes, such as the General Data Protection Regulation (GDPR) and the revised Payment Services Directive (PSD2)
· What technical solution can be put in place to solve it? — This final stage is self-explanatory. For the compliance-conscious ISV, they might consider adopting a modern, hybrid technology solution that is suitable for cloud consumption, while also maintaining compliance. It also allows them to take the data they use regularly, consolidate it and make it useful in a modern application sense.
Conclusion
The business and IT worlds are renowned for their use of buzzwords, and to many, digital transformation is just another one to add to the ever-growing list. But the potential of successful digital transformation should never be underestimated. If businesses can effectively combine an overhauling of technology with a significant shift in organisational mindset, they will surely benefit from a more efficient, motivated workforce and a competitive advantage.
Data defines the world as we know it today. Don’t believe me? Take this into consideration – by the time you have finished reading this article, over 14 million queries will have been performed on Google alone, and Facebook users will have sent an average of 186 million messages and viewed 16 million videos. Staggering? I agree.
By Jackson Lee, Vice President of Corporate Development at Colt Data Centre Services.
As data volumes continue exploding, the brunt of the impact will be felt by data centre providers across the globe. We are already seeing digital powerhouses such as Amazon, Google, IBM and Microsoft using more compute capacity, networking and storage through hyper-scale server farms to meet growing data demands and workloads.
Hyper-scale computing is definitely not a new development – in recent times, we have seen a prolific rise across a number of verticals. From banking to manufacturing, more and more industries are adopting these cost-effective digital technologies as a means to scale up and improve agility to meet customer demands.
Winning with a competitive edge
On the other side of the same coin is the evolution of edge computing and micro-data centres. Speed is everything. Latency can make or break a business, especially when dealing with financial transactions and trading.
These days, the entertainment and automotive industries are also demanding faster connectivity with the lowest possible latency. Media streaming services cannot afford buffering while viewers are watching a movie or video online. The same goes for connected car manufacturers. A split second of connection downtime could result in unforeseen accidents.
When applications and data are moved from centralised points to the outer layers of a network, the distance between users and that data narrows. It makes delivering the right information at the right time to the user or the device quicker and more efficient. The increase in interconnectivity between machines, applications and other IoT-based devices using cloud providers is directly tied to this trend. As virtual reality (VR), the connected home and driverless cars emerge as mainstream products and services, a latency-centred product that sits closer to the user is key.
Today, almost every company and user requires near-instant access to data in order to be successful. This might explain why edge computing has been publicised as the next multibillion-dollar tech market. Organisations across the board are increasingly looking to double-down on customer experience through the delivery of services, content and data in real-time.
Enter the hybrid generation
The growing adoption of digitalisation has given rise to new forms of competition and lifestyle improvement for end users. However, more digitalisation also presents significant resource and data processing challenges.
Firstly, a data centre strategy that conflates hyper-scale and edge computing, or chooses one over the other, is neither cost-effective nor competitive. It is no longer practical for every connected device or application to use the cloud in the same way smartphones do. Consider the millions of connected artificially intelligent devices, medical equipment, manufacturing robots and VR headsets in use today, and the strain these devices can place on network bandwidth and speed soon becomes clear. In short, it is highly likely that the user experience of such devices will rapidly deteriorate if congestion and latency are not addressed.
This is why a hybrid strategy – one that welcomes both full hyper-scale (centralised) and edge (decentralised) computing – is so important. If the type of product or service offered is not latency or bandwidth-driven (for example, the billing process after a transaction has been made on Amazon) it makes more sense to host it in the server farm that sits out of town away from the user. Low-level processing, backup or storage are other examples to mention.
However, technologies such as drones, driverless cars and connected fridges are latency-driven products that require more “edges” so that the information can be distributed quicker and the distance between device and data narrowed, thus improving the end user experience. These products produce too much data for it to be processed in a location far away. In order to function effectively and meet the demands of the user, the products need immediate results. This is particularly true of driverless cars.
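The hybrid placement rule described in the last two paragraphs reduces to a simple decision: latency- or bandwidth-driven workloads go to the edge, everything else to centralised hyper-scale. A minimal sketch, with workload names and flags invented purely for illustration:

```python
def place_workload(name, latency_sensitive, bandwidth_heavy):
    """Return 'edge' or 'hyper-scale' under the simple hybrid rule."""
    if latency_sensitive or bandwidth_heavy:
        return "edge"        # e.g. driverless cars, drones, VR headsets
    return "hyper-scale"     # e.g. billing, backup, low-level processing

print(place_workload("driverless-car telemetry", True, True))    # edge
print(place_workload("post-transaction billing", False, False))  # hyper-scale
```

Real placement decisions would weigh cost, data volume and regulatory constraints too, but the latency/bandwidth test is the first cut.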
The best of both worlds
“Connected things” will continue to grow in popularity over the coming years, resulting in an abundance of data being created every single day. Data centre providers will play an important role in helping organisations meet new demands as the data boom continues.
To effectively match user expectations, edge computing and hyper-scale technology must work in tandem. This will give organisations the ability to leverage the best of both worlds, meeting customer needs effectively while lessening the resulting IT workload and operating costs.
The journey towards a fully hybrid future is inevitably on its way. Organisations and data centre providers alike must be ready to embrace the change, the hybrid change.
In-Memory Computing may be the key to the future of your success, as it addresses today’s application speed and scale challenges. By Terry Erisman, Vice President of Marketing, GridGain Systems.
The In-Memory Computing Summit Europe, scheduled for June 25 and 26, 2018 in London, may hold the key to how your organization can meet the complex competitive challenges of today’s digital business transformation, omnichannel customer experience or real-time regulatory compliance initiatives. These initiatives take a variety of forms, including web-scale applications, IoT projects, social media, and mobile apps, but they all have one thing in common: the need for real-time speed and massive scalability. To solve this challenge, many leading organizations have turned to in-memory computing, which eliminates the processing bottleneck caused by disk reads and writes. In-memory computing isn’t new, but until recently it was too expensive and complicated for most organizations. Today, however, the combination of lower memory costs, mature solutions, and the competitive demand to achieve the required speed and scale for modern applications means in-memory computing can offer a significant ROI for organizations of any size in a wide range of industries.
The limitations of disk-based platforms became evident decades ago. Processing bottlenecks forced the separation of transactional databases (OLTP) from analytical databases (OLAP), but this required a periodic ETL process to move the transactional data into the analytics database. However, real-time decision-making is not achievable with the delays inherent in ETL processes, and over the last few years, organizations have turned to in-memory computing solutions to enable hybrid transactional/analytical processing (HTAP), which enables real-time analyses on the operational data set.
In-memory computing platforms, which are easier to deploy and use than point solutions, have driven down implementation and operating costs and made it dramatically simpler to take advantage of in-memory-computing-driven applications for use cases in financial services, fintech, IoT, software, SaaS, retail, healthcare and more.
The Next Step in In-Memory Computing: Memory-Centric Architectures
Two important limitations of many in-memory computing solutions are that all data must fit in memory and that the data must be loaded into memory before processing can begin. Memory-centric architectures address these limitations by storing the entire data set on persistent devices with support for a variety of storage types such as solid-state drives (SSDs), Flash memory, 3D XPoint and other similar storage technologies, and even spinning disks. Some or all of the data set is then loaded into RAM while allowing processing to occur on the full data set, wherever the data resides. As a result, data can be optimized so all the data resides on disk but the higher-demand, higher-value data also resides in memory, while low-demand, low-value data resides only on disk. This strategy, only available with memory-centric architectures, can deliver optimal performance while minimizing infrastructure costs.
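The tiering idea can be modelled in miniature: the full data set always lives on a persistent tier, a bounded RAM tier holds the higher-demand data, and reads fall back to disk when a key is cold, promoting it so hot data migrates into memory. This is an illustrative model only; the class and method names are assumptions, not any in-memory computing product's actual API.

```python
from collections import OrderedDict

class TieredStore:
    """Toy memory-centric store: full data on 'disk', hot subset in 'RAM'."""

    def __init__(self, ram_capacity):
        self.disk = {}                    # full data set always resides here
        self.ram = OrderedDict()          # hot subset, LRU-evicted
        self.ram_capacity = ram_capacity

    def put(self, key, value):
        self.disk[key] = value            # persist first
        self._promote(key, value)

    def get(self, key):
        if key in self.ram:               # fast path: served from memory
            self.ram.move_to_end(key)
            return self.ram[key]
        value = self.disk[key]            # slow path: served from disk
        self._promote(key, value)         # hot data migrates into RAM
        return value

    def _promote(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        while len(self.ram) > self.ram_capacity:
            self.ram.popitem(last=False)  # evict least-recently-used

store = TieredStore(ram_capacity=2)
for k in ("a", "b", "c"):
    store.put(k, k.upper())
print("a" in store.ram, "a" in store.disk)  # False True -- cold data on disk only
print(store.get("a"))                        # A -- read from disk, then promoted
```

The same mechanism explains the warm-up behaviour described below: after a reboot the RAM tier starts empty, reads are served from disk, and performance improves as promotion refills memory.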
A memory-centric architecture also eliminates the need to wait for all the data to be reloaded into RAM in the event of a reboot. Instead, the system processes data from disk while the system warms up and the memory is reloaded, ensuring fast recovery. While initial system performance will be similar to disk-based systems, it speeds up over time as more and more data is loaded into memory.
The In-Memory Computing Summit Europe 2018
While the benefits of in-memory computing are now well established, many companies don’t know where to begin. Which approach and which solution is best for their particular use case? How can they ensure they are optimizing their deployment and obtaining the maximum ROI? The In-Memory Computing Summit Europe 2018, hosted in London on June 25 & 26, is the only in-memory computing conference focusing on the full range of in-memory computing-related technologies and solutions. Attendees will learn about the role of in-memory computing in the digital transformation of enterprises, with a range of topics from in-memory computing for financial services, web-scale applications, and the Internet of Things to the state of non-volatile memory technology.
At last year’s inaugural event, 200 attendees from 24 countries gathered in Amsterdam to hear keynotes and breakout sessions presented by representatives of companies, including ING, Barclays, Misys, NetApp, Fujitsu and JacTravel. This year’s conference committee includes Rob Barr from Barclays, David Follen from ING Belgium, Sam Lawrence from SFB technology (UK) Ltd, Chris Goodall from CG Consultancy, William L Bain from ScaleOut Software, Nikita Ivanov from GridGain Systems, and Tim Wood from ING Financial Markets.
For organizations wanting to learn more about in-memory computing and how it can help them achieve their technical and business goals, the In-Memory Computing Summit Europe 2018 is the place to hear from in-memory computing experts and interact with other technical decision makers, business decision makers, architects, CTOs, developers and more.
Terry Erisman serves as the Vice President of Marketing for GridGain Systems. An industry veteran with more than 25 years of experience, Erisman has initiated and driven high revenue growth for a multitude of award-winning companies in the SaaS, open source, and enterprise software sectors.
Darren Turner, General Manager at Carelink, discusses the drivers for cloud in healthcare, why the sector has been slow to migrate to date and how the new Health and Social Care Network (HSCN) could be a catalyst for driving adoption as multi-agency collaboration develops. More importantly, Darren highlights that clouds come in different shapes and sizes – from private to public cloud and hybrid and specialist cloud – and one size doesn’t fit all. He makes the case that rather than thinking Cloud First, healthcare providers should think first and then make an informed cloud choice based on their individual needs.
Think Cloud First? Or think first and then Cloud?
The Government’s Cloud First policy states that when procuring new or existing services, public sector organisations should consider and fully evaluate potential cloud solutions before considering other options. We know that the driver for this is more efficient use of computing resources, through higher utilisation and flexible provisioning. And, alignment with this policy and the perceived cost savings associated with cloud are the primary motivators for public sector organisations to adopt the technology. This is particularly true in the healthcare sector.
With an estimated funding gap of £30Bn expected by 2020, there is immense pressure on the NHS to increase efficiency while simultaneously cutting costs.
But how much could cloud migration save?
Figures of 35% cost savings are quoted, based on a comparison between an on-premise, self-managed platform and a public cloud platform with outsourced management. But further examination shows that the bulk of the savings are associated with outsourcing the management – a compelling argument for a managed service platform but not necessarily the best measure of cloud versus on-premise solutions.
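The decomposition matters: a rough worked example shows how most of a headline saving can come from outsourcing the management rather than from the cloud platform itself. All figures below are invented for illustration; they are not Carelink's or NHS data.

```python
# Illustrative cost index (arbitrary units per year).
on_prem_platform = 100.0   # self-managed, on-premise platform
in_house_mgmt = 40.0       # staff cost of managing it yourself
cloud_platform = 95.0      # public cloud platform
outsourced_mgmt = 10.0     # management bundled into the managed service

total_on_prem = on_prem_platform + in_house_mgmt   # 140
total_cloud = cloud_platform + outsourced_mgmt     # 105

saving = 1 - total_cloud / total_on_prem
mgmt_share = (in_house_mgmt - outsourced_mgmt) / (total_on_prem - total_cloud)

print(f"headline saving: {saving:.0%}")                  # headline saving: 25%
print(f"from management outsourcing: {mgmt_share:.0%}")  # from management outsourcing: 86%
```

With these (hypothetical) numbers, the platform itself accounts for only a small slice of the headline saving, which is why a like-for-like comparison of cloud versus on-premise needs to separate the two.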
While it’s difficult to quantify savings without comparing like for like, if we consider hardware resilience, resource scalability and the removal of the need for life cycle management, along with the wider benefit of higher, more efficient resource utilisation, the benefits of cloud become clearer.
But, most importantly, investment in virtual technologies is not just about saving money or boosting technical performance in a healthcare setting. The end goal of all digital initiatives in the NHS should be about delivering improved quality of service and better patient outcomes by accelerating the way the sector uses, stores and shares information.
Healthcare still in the clouds
The most recent figures available from G-Cloud (up to December 2016) suggest that less than a quarter of technology spend in healthcare was cloud related. So why has healthcare been slower than many other sectors to embrace and migrate to cloud?
This is partly due to perceived security risks, particularly when it comes to public cloud providers, and complexity and constraints around location and sharing of Patient Identifiable Data (PID).
I also think there is a lack of trust in cloud performance and resilience, as well as concerns around physical location of hardware. Added to this, there is a shortage of the right skills, willingness to embrace the necessary cultural shift and the budget to cover the cost of migration.
However, what we’re finding when it comes to trust is that comfort levels increase with the amount of information that is available about the platform. Because these details tend to be more readily available when we’re talking about private and specialist cloud solutions versus public cloud, healthcare providers are more willing to explore these options.
Clouds come in different shapes and sizes
When we think cloud we tend to think about hyper-scale public cloud providers – indeed, the Cloud First policy is primarily directed at these suppliers. But it’s important to remember that clouds come in different shapes and sizes and one size doesn’t fit all, especially when it comes to the varying needs and requirements of health and social care organisations.
In addition to public cloud providers like Amazon Web Services (AWS) and Azure there are private clouds, with dedicated physical server hosting in UK data centres, as well as specialist cloud platforms powered by the likes of UKCloud. Specialist cloud providers combine some of the benefits of hyper-scale public cloud providers with the advantage of delivering secure, government assured infrastructure.
Should we really be thinking Cloud First?
Perhaps it’s the Cloud First diktat that presents the biggest hurdle. Despite these obligations, I’d argue that healthcare providers shouldn’t start from the basis that everything must go in the cloud or that public cloud is the only option. Instead, they should work with a trusted and technology agnostic supplier to identify the best solution or combination for their organisation’s specific needs.
At Carelink, we advocate Cloud First only where appropriate. I certainly wouldn’t encourage healthcare providers to simply push everything onto the public cloud without first assessing and critiquing the various options.
We know that most healthcare providers don’t want to go 100% cloud and that, in many cases, they are cautious or hesitant about cloud. There is still a strong desire to physically see where data is held and there will always be a need for more assured solutions, in terms of security and performance.
Cost and pressure to meet policy requirements will continue to drive adoption of virtual technologies, but I don’t foresee a day when everything is running on public cloud even in the longer term.
What’s the best approach?
While there are no hard and fast rules, we find a common approach amongst healthcare providers is to keep sensitive data or PID on our CCSP or a private cloud in our UK data centre or with UKCloud. Anything else can go on the public cloud.
Many healthcare providers favour a hybrid architecture, where hardware servers hosting legacy or resource hungry applications are mixed with virtual machines running less intensive services on the cloud.
With a hybrid approach, organisations can realise the efficiencies of virtualisation, through increased utilisation of compute resources, while being able to more closely control the availability of those resources across the estate.
For larger estates, particularly those with high storage volumes, the cost of a private cloud platform can compare favourably to the cost of a hyper-scale public cloud. Healthcare providers should, therefore, work with their network and infrastructure supplier to explore this cost comparison to ensure they get the best value for their money.
Whether opting for private or public cloud, multi-cloud or a hybrid solution, when it comes to entrusting a supplier with an incredibly valuable and irreplaceable asset - data – healthcare providers need to be sure it’s secure. Buyers should seek accredited suppliers with a proven track record in providing secure solutions and protecting mission critical environments, ideally in a healthcare environment.
Cloud supporting joined up health and social care
With the roll out of the Health and Social Care Network (HSCN), the data network for health and social care organisations that replaces N3, cloud services are being made easier to provision.
Health and social care organisations will increasingly be able to access a full range of HSCN compliant network connectivity and cloud services from one supplier, effectively a one stop shop, which simplifies the supplier and procurement environment. With assurance that HSCN obligations and standards have been met, this will likely drive greater adoption of cloud services in the sector.
Ready access to HSCN compliant cloud services helps simplify the process and provides assurance for healthcare procurement. Indeed, we’re seeing the likes of UKCloud, as well as hyper-scale cloud providers, setting up healthcare divisions and actively seeking suppliers to deliver HSCN connectivity.
Furthermore, HSCN could actually be the catalyst for driving cloud adoption as multi-agency collaboration develops. We can expect to see cloud services become an increasingly important feature in delivering more joined-up health and social care, enabling distinct organisations to operate together in a seamless way and in doing so, providing better care to the patients they serve.