Regular readers of my DW column might think that I have some kind of an obsession with IT security. I'd like to state here and now that it's not IT security itself with which I'm obsessed. There is not, nor will there ever be, any such thing as 100% total security, so my obsession, if that is what it is, is actually about what happens when things go wrong.
A recent news story suggested (once again) that banks would not be footing the bill for any online fraud or theft if, in their opinion, a customer had 'inadequate' security – ie an easy-to-guess password, or maybe even the gullibility to hand over important security details in response to an email that was pretty obviously not from the bank. So, sack all your staff, force your customers to do their banking online, and then wash your hands of any subsequent security issues… not a bank of which I'd care to be a customer, but will any of us, long term, have any choice?
Clearly, much rests on the interpretation of the word ‘inadequate’. Folks who work in the IT industry will be familiar with most of the scams out there, but plenty of intelligent people who are not well-versed in the intricacies of cybersecurity could quite easily fall prey to innocent-looking emails requesting certain banking details. I suspect that, once a month, I lecture my family on the importance of never ever sending any banking details to anyone, ever, in response to an email request. As I summarise to them, if someone really does want to get hold of you/certain details legitimately, then they will find a way that doesn’t involve email and one which you can trust. However, in the digital world we are rapidly entering, it may well be that the choice of communication is reduced to just email and it’s more than possible that the emailer may be a robot. Yes, a robot in charge of your banking details.
Lest anyone think I have it in for only the banks (and I’m happy to admit that I believe that the vast majority, in the UK at least, are very poor indeed at customer relations), there are plenty of other organisations who are heading the same way and, in most cases, the customer who refuses to sign up to an electronic/online relationship is made to feel like an inconvenience, or some kind of luddite.
Of course, there are some signs that the headlong rush to the digital world is not all sunshine and smiles – I'm sure most of you have read the very entertaining stories of how robots of various shapes and sizes, in all cases with not quite enough intelligence(!), fail to do what is required of them, being either rude or just plain incompetent.
As ever, the issue is not the technology, which is often breathtaking in both its complexity and ability, but the decision as to when and where to use it. Let one small example, definitely not from the banking industry, serve for many. Having purchased a new car recently, I am discovering many wonderful features each time I drive it (no one reads the manual, do they?!). However, there is one feature that is already annoying me immensely: in order to release the electronic handbrake I must have my seatbelt on. A great safety feature… but if I just want to move the car in the drive, to make space for another vehicle, I have to get in the car, start the engine and engage first gear ready to move, say, four or five metres, only to remember that I must put my seatbelt on before the car will move. Of course, some will argue that this is a minor inconvenience compared to the fact that I will always be wearing a seatbelt; others, myself included, will argue that there are a surprising number of occasions when one only wants to move the car a small distance, and having to put on a seatbelt in order to do so is very annoying.
Now I’ve got all of the above off my chest, I’ll end with a simple observation: whatever your role in the IT industry – vendor, Channel or end user – think very carefully as to how your customers will react to anything you might be planning to do. You might have come up with a great idea to save your organisation vast amounts of money, but if implementing this idea risks alienating your customers, then what is the true value of the cost-saving?
A combination of device releases, price reductions, and company rationalizations marked the first quarter of 2016 (1Q16) in the worldwide wearables market.
According to data from the International Data Corporation (IDC) Worldwide Quarterly Wearable Device Tracker, total shipment volumes reached 19.7 million units in 1Q16, an increase of 67.2% from the 11.8 million units shipped in 1Q15.
The first quarter saw its fair share of significant events to entice customers, with multiple fitness trackers and smartwatches introduced at the major technology shows; post-holiday price reductions on multiple wearables, including Apple's Sport Watch; and greater participation within emerging wearables categories, particularly clothing and footwear. Conversely, several start-ups announced headcount reduction or shut down entirely, underscoring how competitive the wearables market has become.
"The good news is that the wearables market continues to mature and expand," noted Ramon Llamas, research manager for IDC's Wearables team. "The wearables that we see today are several steps ahead of what we saw when this market began, increasingly taking their cues from form, function, and fashion. That keeps them relevant. The downside is that it is becoming a crowded market, and not everyone is guaranteed success."
Still, there are two areas where the market shows continued growth: smartwatches and basic wearables (devices that do not run third-party applications).
"There's a clear bifurcation and growth within the wearables market," said Jitesh Ubrani senior research analyst for IDC's Mobile Device Trackers. "Smart watches attempt to offer holistic experiences by being everything to everyone, while basic wearables like fitness bands, connected clothing, or hearables have a focused approach and often offer specialized use cases."
Ubrani continued, "It's shortsighted to think that basic wearables and smart watches are in competition with each other. Right now, we see both as essential to expand the overall market. The unique feature sets combined with substantial differences in price and performance sets each category apart, and leaves plenty of room for both to grow over the next few years."
Fitbit began 2016 the same way it finished 2015: as the undisputed leader in the wearables market. The launch of its new Alta and Blaze devices resulted in million-unit shipment volumes for each, pointing to a new chapter of fashion-oriented fitness trackers. It also points to significant declines for its previously successful Surge, Charge, Charge HR, and Flex product lines. Still, with a well-segmented portfolio, a clear pricing strategy, and a strong brand, Fitbit's position is well established.
Xiaomi supplanted Apple in 1Q16 and captured the number 2 position. The company expanded its line of inexpensive fitness trackers to include heartrate monitoring and also recently launched a kids' watch to help parents track their children. It should be pointed out that its success is solely based on China, and expanding beyond its home turf will continue to be its largest hurdle.
According to Apple CEO Tim Cook, the Watch has met the company's expectations. Still, its total volumes and revenue trailed far behind the iPhone, iPad, and Mac product lines, and did little to stem their declines. Until the next version of the Watch comes out, it would appear that Apple will continue to update its watch bands to keep the product relevant.
Garmin finished slightly ahead of Samsung on the strength of its wristbands and watches appealing to a wide range of athletes, most especially golfers, runners, and fitness tracker enthusiasts. Along with adding two fitness trackers, the vivoactive HR and the vivofit 3, Garmin launched its first eyeworn device, the Varia Vision In-Sight Display, for cyclists.
Samsung landed in the number 5 position on the success of its Gear S2 and Gear S2 Classic smartwatch. What sets the Gear S2 apart from most other smartwatches is that it is among the very few with a cellular connectivity version, forgoing the need to be constantly tethered to a smartphone. It is also compatible with Android smartphones beyond Samsung's own, broadening its reach. However, its application selection trails behind what is available for Android Wear and watchOS.
BBK tied* with Samsung for fifth place worldwide. This is the second time that BBK finished among the top five vendors worldwide, having debuted in 3Q15 with its Y01 phone watch for children. The company returns with another phone watch for children, the Y02 with improved water resistance and durability.
Top Five Wearables Vendors, Shipments, Market Share and Year-Over-Year Growth, Q1 2016 (Units in Millions)

Vendor | 1Q16 Unit Shipments | 1Q16 Market Share | 1Q15 Unit Shipments | 1Q15 Market Share | Year-Over-Year Growth
1. Fitbit | 4.8 | 24.5% | 3.8 | 32.6% | 25.4%
2. Xiaomi | 3.7 | 19.0% | 2.6 | 22.4% | 41.8%
3. Apple | 1.5 | 7.5% | N/A | 0.0% | N/A
4. Garmin | 0.9 | 4.6% | 0.7 | 6.1% | 27.8%
5. Samsung* | 0.7 | 3.6% | 0.7 | 5.8% | 4.5%
5. BBK* | 0.7 | 3.6% | N/A | 0.0% | N/A
Others | 7.3 | 37.2% | 3.9 | 33.1% | 87.9%
Total | 19.7 | 100.0% | 11.8 | 100.0% | 67.2%

Source: IDC Worldwide Quarterly Wearables Tracker, May 16, 2016
* IDC declares a statistical tie in the worldwide wearables market when there is less than one tenth of one percent (0.1%) difference in the unit shipment share of two or more vendors.
Top Five Basic Wearables Vendors, Shipments, Market Share and Year-Over-Year Growth, Q1 2016 (Units in Millions)

Vendor | 1Q16 Unit Shipments | 1Q16 Market Share | 1Q15 Unit Shipments | 1Q15 Market Share | Year-Over-Year Growth
1. Fitbit | 4.8 | 29.4% | 3.8 | 38.7% | 25.4%
2. Xiaomi | 3.7 | 22.8% | 2.6 | 26.6% | 41.8%
3. Garmin | 0.8 | 5.0% | 0.6 | 6.0% | 36.5%
4. BBK | 0.7 | 4.3% | N/A | 0.0% | N/A
5. Lifesense | 0.7 | 4.1% | N/A | 0.0% | N/A
Others | 5.7 | 34.5% | 2.9 | 28.7% | 98.2%
Total | 16.4 | 100.0% | 9.9 | 100.0% | 65.1%

Source: IDC Worldwide Quarterly Wearables Tracker, May 16, 2016
Top Five Smartwatch Vendors, Shipments, Market Share and Year-Over-Year Growth, Q1 2016 (Units in Millions)

Vendor | 1Q16 Unit Shipments | 1Q16 Market Share | 1Q15 Unit Shipments | 1Q15 Market Share | Year-Over-Year Growth
1. Apple | 1.5 | 46.0% | N/A | 0.0% | N/A
2. Samsung | 0.7 | 20.9% | 0.5 | 29.8% | 40.5%
3. Motorola | 0.4 | 10.9% | 0.2 | 11.0% | 98.2%
4. Huawei | 0.2 | 4.7% | N/A | 0.0% | N/A
5. Garmin | 0.1 | 3.0% | 0.1 | 7.2% | -17.3%
Others | 0.5 | 14.5% | 0.8 | 52.0% | -44.2%
Total | 3.2 | 100.0% | 1.6 | 100.0% | 100.2%

Source: IDC Worldwide Quarterly Wearables Tracker, May 16, 2016
Rapidly changing business models are causing businesses to reconsider their approach to IT, as they look to capitalise on the huge opportunity for differentiation through the effective use of technology.
However, this is posing real difficulties for IT departments across Europe according to the latest research from managed services provider Claranet. As business leaders across Europe seek to transform their organisations, the incremental ‘business as usual’ approach to IT improvement is increasingly unfit for purpose.
The research, which surveyed 900 IT decision-makers from the UK, France, Germany, Spain, Portugal and the Benelux from a range of mid-market organisations, discovered that 46 per cent of European IT decision makers find supporting fast changing business models a key challenge, up from 35 per cent in 2015. In the UK, the number of IT departments struggling to support a rapidly transitioning business is even higher, with 54 per cent of respondents citing it as a problem.
For Michel Robert, Claranet’s UK managing director, the results highlight the growing imperative for IT departments to embrace more dynamic approaches to keep up with changing business needs: “Our research shows that the IT department needs a fundamental rethink of how it approaches innovation and development. Traditionally, IT departments have incrementally upgraded their capabilities, adding new features sequentially. While this pace of development was acceptable in the past, and often the only pace permitted by infrastructure and software development limitations, it is now no longer agile enough to satisfy the needs of modern businesses.
“If IT departments are to empower their organisations, and keep pace with the desired rate of change, they need to adopt more progressive approaches to IT management, focusing on practices which will boost their applications, such as public cloud and DevOps. The flexibility and agility brought by public cloud services enable IT departments to spin up new services which scale on demand, without heavy investments in additional infrastructure. DevOps, meanwhile, can increase the frequency of updates, and speed to market, ensuring the application estate can support changing business conditions.”
Robert concluded: “IT services providers have a central role in supporting their customers in handling this level of significant change. Strong partnerships are able to ease the pressure on IT leaders, giving them the tools they need to respond effectively to the needs of their organisations. Companies should look to partner with services providers who promote an application first approach. This means addressing the hosting, management and development needs of individual applications and prioritising the availability, performance and security of applications which will make the most difference to the business.”
Progress has published the results of its recent global survey, “Are Businesses Really Digitally Transforming or Living in Digital Denial”.
Businesses today are faced with pressures to optimise customer experience and improve business outcomes across all channels, connecting the dots between people, information and systems. The survey, conducted in Q1 2016 by Loudhouse, the specialist research division of Octopus Group, aimed to better understand how business leaders view digital transformation and learn their plans to address its challenges.
Survey respondents included a mix of more than 700 geographically dispersed C-Level/VP decision makers; heads of marketing, digital and IT; as well as developers, IT architects, directors, engineers and line of business managers. These individuals represent organisations ranging from SMBs through large global enterprises.
While most businesses recognise the inherent benefits of “going digital,” the majority of respondents are hitting roadblocks—lack of internal alignment, lack of adequate skills and plenty of cultural resistance. Coupled with technology constraints and an overall inability to execute, the result is a growing state of anxiety about embarking on digital transformation, with some fearing it may already be too late.
The survey's key findings underscored this growing state of digital anxiety.
The worldwide x86 server virtualization market is expected to reach $5.6 billion in 2016, an increase of 5.7 percent from 2015, according to Gartner, Inc.
Despite the overall market increase, new software licenses have declined for the first time since this market became mainstream more than a decade ago. Growth is now being driven by maintenance revenue, which indicates a rapidly maturing software market segment.
Michael Warrilow, research director at Gartner, said: "The market has matured rapidly over the last few years, with many organizations having server virtualization rates that exceed 75 percent, illustrating the high level of penetration."
The market remains dominated by VMware; however, Microsoft has worked its way in as a mainstream contender for enterprise use. There are also several niche players, including Citrix, Oracle and Red Hat, in addition to an explosion of vendors in the domestic China market.
While server virtualization remains the most common infrastructure platform for x86 server OS workloads in on-premises data centers, Gartner analysts believe that the impact of new computing styles and approaches will be increasingly significant for this market. This includes OS container-based virtualization and cloud computing.
The trends are varying by organization size more than ever before. According to Gartner, usage of server virtualization among organizations with larger IT budgets remained stable during 2014 and 2015. It continues to be an important and heavily used technology for these businesses, but this market segment is approaching saturation. In contrast, organizations with smaller IT budgets expect a further decline in usage through to at least 2017. This is causing an overall decline in new spending for on-premises server virtualization.
Gartner believes that organizations are increasing their usage of "physicalization," choosing to run servers without virtualization software. More than 20 percent of these organizations expect to have less than one-third of their x86 server OSs virtualized by 2017 — twice the amount reported for 2015. However, the underlying rationales remain varied.
The rise of software-defined infrastructure (SDI) and hyperconverged integrated systems (HCIS) is providing new options. It has put pressure on best-of-breed virtualization vendors to add more out-of-the-box functionality and provide a better experience and faster time to value.
"What was considered as the best approach to greater infrastructure agility only a few years ago, is becoming challenged by an array of newer infrastructure choices," said Mr. Warrilow.
According to the new Worldwide Semiannual Big Data and Analytics Spending Guide from IDC, worldwide revenues for big data and business analytics will grow from nearly $122 billion in 2015 to more than $187 billion in 2019, an increase of more than 50% over the five-year forecast period. The new Spending Guide expands on IDC's previous forecasts by offering greater revenue detail by technology, industry, and geography.
The services-related opportunity will account for more than half of all big data and business analytics revenue for most of the forecast period, with IT Services generating more than three times the annual revenues of Business Services. Software will be the second largest category, generating more than $55 billion in revenues in 2019. Nearly half of these revenues will come from purchases of End-User Query, Reporting, and Analysis Tools and Data Warehouse Management Tools. Hardware spending will grow to nearly $28 billion in 2019.
The industries that present the largest revenue opportunities are Discrete Manufacturing ($22.8 billion in 2019), Banking ($22.1 billion), and Process Manufacturing ($16.4 billion). Four other industries – Federal/Central Government, Professional Services, Telecommunications, and Retail – will generate revenues of more than $10 billion in 2019. The industries experiencing the fastest revenue growth will be Utilities, Resource Industries, Healthcare, and Banking, although nearly all of the industries profiled in the new Spending Guide will see gains of more than 50% over the five year forecast period.
Large and very large companies (those with more than 500 employees) will be the primary driver of the big data and business analytics opportunity, generating revenues of more than $140 billion in 2019. However, small and medium businesses (SMBs) will remain a significant contributor with nearly a quarter of the worldwide revenues coming from companies with fewer than 500 employees.
"Organizations able to take advantage of the new generation of business analytics solutions can leverage digital transformation to adapt to disruptive changes and to create competitive differentiation in their markets," said Dan Vesset, group vice president, Analytics and Information Management. "These organizations don't just automate existing processes – they treat data and information as they would any valued asset by using a focused approach to extracting and developing the value and utility of information."
"There is little question that big data and analytics can have a considerable impact on just about every industry," added Jessica Goepfert, program director, Customer Insights and Analysis. "Its promise speaks to the pressure to improve margins and performance while simultaneously enhancing responsiveness and delighting customers and prospects. Forward-thinking organizations turn to this technology for better and faster data-driven decisions."
From a geographic perspective, more than half of all big data and business analytics revenues will come from the United States. By 2019, IDC forecasts that the U.S. market for big data and business analytics solutions will reach more than $98 billion. The second largest geographic region will be Western Europe, followed by Asia/Pacific (excluding Japan) and Latin America. The two regions with the fastest growth over the five year forecast period will be Latin America and the Middle East & Africa.
While the common assumption is that the cloud represents reduced costs and better application performance, many organizations will fail to realize those benefits, according to research by VMTurbo, the only application performance control platform.
A multi-cloud approach, where businesses operate a number of separate private and public clouds, is an essential precursor to a true hybrid cloud. Yet of the 1,368 organizations surveyed, 57 percent had no multi-cloud strategy at all. Similarly, 35 percent had no private cloud strategy, and 28 percent had no public cloud strategy.
“A lack of cloud strategy doesn’t mean an organization has studied and rejected the idea of the cloud; it means it has given adoption little or no thought at all,” said Charles Crouchman, CTO of VMTurbo. “As organizations make the journey from on-premise IT, to public and private clouds, and finally to multi- and hybrid clouds, it’s essential that they address this. Having a cloud strategy means understanding the precise costs and challenges that the cloud will introduce, knowing how to make the cloud approach work for you, and choosing technologies that will supplement cloud adoption. For instance, by automating workload allocation so that services are always provided with the best performance for the best cost. Without a strategy, organizations will be condemning themselves to higher-than-expected costs, and a cloud that never performs to its full potential.”
Above and beyond this lack of strategy, SMEs in particular were shown to massively underestimate the costs of cloud implementation. While those planning private cloud builds gave an average estimated budget of $148,605, SMEs that have already completed builds revealed an average cost of $898,508: more than six times the estimates.
Other interesting statistics from the survey include:
Adopting cloud is not a quick, simple process: Even for those organizations with a cloud strategy, the majority (60 percent) take over a year to plan and build their multi-cloud infrastructure, with six percent taking over three years. Private and public cloud adoption is also relatively lengthy, with 66 percent of private cloud builds, and 51 percent of public cloud migrations, taking over a year.
Growth of virtualization is inevitable and exponential: The number of virtual machines in organizations is growing at a rate of 29 percent per year, compared to 13 percent for physical machines. With virtualization forming a crucial platform for cloud services, this suggests that the technology will favor a cloud approach in the future.
Organizations’ priorities are split: When asked how they prioritize workloads in their multi-cloud infrastructure, organizations were split between workload-based residence policies (27 percent of respondents), performance-based (23 percent), user-based (22 percent) and cost-based (13 percent). Ten percent had no clearly-defined residence policies.
“The cloud is the future of computing – increasingly, the question for organizations is when, not if, they make the move,” continued Charles Crouchman. “However, organizations need to understand that the cloud does not follow the same rules as a traditional IT infrastructure, and adapt their approach accordingly. For instance, workload priorities are still treated as static. Yet the infrastructure housing those workloads, and the ongoing needs of the business, are completely fluid. An organization using the cloud should be able to adapt its workloads dynamically so that they always meet the business’s priorities at that precise time. Without this change in outlook, organizations will soon find themselves squandering the potential the cloud provides.”
The market for hyperconverged integrated systems (HCIS) will grow 79 percent to reach almost $2 billion in 2016, propelling it toward mainstream use in the next five years, according to Gartner, Inc.
HCIS will be the fastest-growing segment of the overall market for integrated systems, reaching almost $5 billion, which is 24 percent of the market, by 2019. Although the overall integrated systems market is growing, other segments of the market will face cannibalization from hyperconverged systems, Gartner analysts said.
Gartner defines HCIS as a platform offering shared compute and storage resources, based on software-defined storage, software-defined compute, commodity hardware and a unified management interface. Hyperconverged systems deliver their main value through software tools, commoditizing the underlying hardware.
Andrew Butler, vice president and distinguished analyst at Gartner, said the integrated systems market is starting to mature, with more users upgrading and extending their initial deployments.
"We are on the cusp of a third phase of integrated systems," said Mr. Butler. "This evolution presents IT infrastructure and operations leaders with a framework to evolve their implementations and architectures."
Phase 1 is the peak period of blade systems (2005 to 2015), Phase 2 marked the arrival of converged infrastructures and the advent of HCIS for specific use cases (2010 to 2020), and Phase 3 represents continuous application and microservices delivery on HCIS platforms (2016 to 2025).
The third phase of integrated systems will deliver dynamic, composable and fabric-based infrastructures by also offering modular and disaggregated hardware building blocks, driving continuous application delivery and continuous economic optimization.
Despite high market growth rates, HCIS use cases have so far been limited, creating silos alongside existing infrastructure, according to Gartner. Its progression will be dependent on multiple hardware and software advances, such as networking and software-defined enterprises.
Ultimately, the underlying infrastructure will disappear to become a malleable utility under the control of software intelligence and automated to enable IT as a service (ITaaS) to business, consumer, developer and enterprise operations.
"HCIS is not a destination, but an evolutionary journey," said Mr. Butler. "While we fully expect the use cases to embrace mission-critical applications in the future, current implementations could still pose constraints on rapid growth toward the end of the decade."
According to the latest figures published by IDC, Western European shipments of ultraslim convertibles and detachables posted positive growth (44.7%) to account for 18.4% of total consumer shipments and 21.9% of commercial devices in 16Q1, up from 9.2% and 16.3% respectively a year ago. This trend is even more significant in the context of a contracting market. The Western European PC and tablet market contracted by 13.7% YoY in 16Q1, with total shipments reaching 18.2 million units. The decline was softer in the commercial segment, where the drop in shipments remained in single digits (-5.2% YoY), while consumer demand fell by 18.6% YoY.
Detachable shipments grew 190.4% on a YoY basis in Western Europe, from about 500,000 to 1.5 million units over the course of a year. On the PC side, despite a 12.9% decline in PCs in Western Europe, convertible notebooks grew by 12%, driven by consumer demand. In the consumer tablet market, the detachable form factor continues to gain popularity, with shipments increasing almost fourfold from the same quarter last year, to just below 1 million units. The performance seen in the convertible and detachable sectors highlights that purchases are driven by the need for portable, mobile, and functional solutions, and that despite the challenging market situation, these form factors have significant growth potential. The market uptake has been limited so far but with more choices in terms of brand and price point, a growing number of end users are being won over by the new value proposition.
"Customers are looking for solutions that allow for flexibility," said Andrea Minonne, research analyst, IDC EMEA Personal Computing. "We want to access information, create content, or communicate without constraints. Addressing such market demand represents an opportunity for IT vendors. Convertible notebooks and detachables are the most suitable device to guarantee functionality and mobility at the same time. Both form factors have been well received in the market and have gained momentum across Western Europe. Interestingly, growth in convertible notebooks and detachables in the first quarter of 2016 was above average in Germany, Italy, and Switzerland."
Detachables' penetration continues to increase among both enterprises and professionals, with half a million devices shipped in the first quarter of the year (up 92.9% YoY).
"Adoption among business users is only just starting," said Marta Fiorentini, research manager, IDC EMEA Personal Computing. "We expect an acceleration in detachable deployments in the coming months as companies evaluate the new and more powerful commercial designs that have recently been introduced. Interest from enterprises is clear and this form factor seems to be a perfect fit with their mobility strategies. In some countries, we also see detachable deployments taking place in the public sector, which is usually more traditional in its form factor choices and often challenged by budget constraints."
This trend is reflected in the evolution of the different ecosystems and, consequently, operating systems. In terms of OS dynamics, Windows continues to account for over half of the combined PC and tablet market — strengthening its overall position thanks to the success of Microsoft's detachable devices and increasing ODM designs for this form factor, which have lifted its tablet market share to 13%. As for PCs, Windows continues to dominate the market but performed slightly below average in the first quarter of the year — despite strong awareness of Windows 10, consumers have been slow to upgrade their hardware, while in the commercial segment most users have only recently renewed their old XP machines.
Android/Chrome OS ranks second with Android maintaining its dominance in the tablet market (over 60%), despite declining annual volumes due to market saturation and weak consumer demand. Chrome OS is still a marginal operating system in volume terms, but it is slowly gaining some ground in certain geographies and in education in particular.
Apple continues to play a significant role in the market, especially in the premium segment. The new 9.7in. iPad Pro only started to ship in the last few days of the quarter, so it has not yet had a major impact on iOS shipments, though it should boost the outlook for the rest of the year. In the PC market, OS X continued to grow its market share from last year, supported by the continued success of the MacBook Pro line. Moreover, the new MacBook launched in April will sustain portable shipments for the rest of the year. Overall, the large installed base of Apple devices in Europe, combined with an extensive number of applications, makes OS X/iOS users among the most attractive for companies like SAP, which recently signed a partnership agreement with Apple.
The "Others" group includes Linux and Ubuntu OS — the latter brought into the tablet market with devices from Spanish vendor BQ.
Western Europe PCs and Tablets: OS
16Q1 (Calendar Year, 000 Units)
OS | Unit Shipments 15Q1 | Unit Shipments 16Q1 | Unit % Share 15Q1 | Unit % Share 16Q1 | Unit Growth 16Q1 vs 15Q1
Windows | 12,074 | 10,793 | 57.4% | 59.4% | -10.6%
Android/Chrome OS | 5,582 | 4,663 | 26.5% | 25.7% | -16.5%
OS X/iOS | 3,389 | 2,706 | 16.1% | 14.9% | -20.2%
Others | 4 | 2 | 0.0% | 0.0% | -50.7%
Grand total | 21,049 | 18,164 | 100.0% | 100.0% | -13.7%
Intelligent Systems at the edge will continue to grow as processors and connectivity are embedded into a plethora of previously unconnected devices, according to a new forecast, Worldwide Embedded and Intelligent Systems 2015-2020 Market Forecast, from IDC. The report forecasts that intelligent systems will log a compound annual growth rate (CAGR) of 7.2% from 2015-2020 with revenues exceeding $2.2 trillion in 2020.
"The vision of intelligent connected systems is real and far-reaching," said Mario Morales, program vice president, Enabling Technologies and Semiconductors at IDC. "The IT (Information Technology) and OT (Operational Technology) industries are moving beyond the Internet of Things (IoT) buzz and are now deploying intelligent systems in specific markets like retail, industrial automation, automotive systems, and directly in our homes. Despite IT vendors and technology suppliers maintaining the initial mind share as the key builders of the IoT world, the vision, implementation, and ecosystem is centered on OT companies like GE, Siemens, ABB, GM, Bosch, Ford, Volvo, Toyota, Samsung, Honeywell, Hitachi, and others that hold decades of system knowledge and an entrenched position in each of the major industries investing in the next wave of embedded and intelligent systems."
To enable and realize the true value of IoT, edge intelligence, which pushes processing for data-intensive applications away from the core to the edge of the network, continues to expand. As processors, microcontrollers, and connectivity are embedded into a plethora of new devices, "edge intelligence" in smart appliances, industrial machines, and automobiles continues to increase. The study identifies the fastest-growing segments of the intelligent systems market, including wearables, advanced driver assistance systems (ADAS), automotive entertainment, drones, smart homes, smart buildings, video surveillance, 3D printers, and transportation telematics.
"A radical transformation is underway from the cloud to the edge of every major system. In an effort to address the opportunity, both edge and cloud infrastructure needs to continue to scale and support trillions of sensors and billions of systems," said Les Santiago, research director, Wireless and IoT Semiconductors at IDC. "Increasing intelligence at the edge will be one of the primary drivers of growth of the overall semiconductor market over the next few years against the backdrop of a maturing smartphone and PC market and a difficult pricing environment in the memory markets. As such, vendors would benefit from structuring their product portfolios to take advantage of this trend."
Digital transformation has been a hot topic for some time and is gaining even greater traction as CEOs and CIOs alike wake up to its benefits.
By Maurizio Canton, TIBCO CTO.
Achieving this transformation, however, is far from simple or straightforward, and a degree of caution is called for if businesses are to avoid being pulled in different directions by competing demands from within. This is particularly true given that line-of-business units are increasingly empowered to source, develop and manage IT solutions themselves, rather than continuing with the age-old approach of relying on a central IT department to do it all for them.
Commonly referred to as the democratisation of IT, this delegation of responsibility is growing in popularity as a means of building more agile applications tailored specifically to the needs of those working at the edge of the business. The dilemma, however, is that those applications still need to access and work with core infrastructure systems, subject to a completely different development mindset and compliance agenda and, in most cases, completely off limits to those outside the core IT team.
Add to that the fact that, unfettered by the rules and prejudices of the past, line-of-business developers are highly likely to turn to third parties and public cloud services as an expedient way of achieving their goals, and you have the recipe for an almost perfect storm. A storm made up of applications which may well deliver in the short term, but which exist in their own self-contained silos, effectively cut off from both core business systems and other edge developments. More alarming still, those siloed applications are subject to completely different compliance and management regimes.
Tackling this perfect storm before it pulls the business apart calls first and foremost for a coherent and comprehensive strategy that addresses the different IT demands of both the core and the edge of the business. And that calls for a real sea change in the way IT strategy has been developed up until now.
It also calls for a good deal more transparency and access to core systems, typically through managed APIs that enable developers at the edge to build applications that can work and share information with core systems (and other edge applications) without loss of control or exposing those systems to unacceptable security risks.
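To make the idea concrete, the sketch below shows what a managed API in front of a core system might look like. It is illustrative only: the endpoint, the key store and the core-system lookup are all hypothetical, and a real deployment would sit behind an API gateway with quotas, audit logging and proper credential management.

```python
# Illustrative managed API fronting a core system (hypothetical endpoint).
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Placeholder key store; in practice an API gateway or secrets manager
# would issue and validate per-consumer credentials.
VALID_API_KEYS = {"edge-app-1-key"}

def lookup_customer(customer_id: str) -> dict:
    # Stub for a controlled call into the core system of record.
    return {"id": customer_id, "status": "active"}

@app.route("/api/v1/customers/<customer_id>")
def get_customer(customer_id):
    if request.headers.get("X-API-Key") not in VALID_API_KEYS:
        abort(401)  # edge applications never touch the core directly
    return jsonify(lookup_customer(customer_id))

if __name__ == "__main__":
    app.run(port=8080)
```

The point of the pattern is that the core IT team decides exactly which operations are exposed and to whom, while edge developers consume a stable, documented interface.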
Beyond that, digital transformation requires the building of an infrastructure able to orchestrate and automate processes across the organisation, regardless of whether the component applications are running in the corporate datacenter, on branch servers, a private cloud or public cloud services. In short, IT in all its guises needs to be joined up, shared and made subject to the same compliance and management agenda.
The good news is that products and technologies to progress digital transformation are appearing thick and fast. The bad news is that most address only specific parts of the problem. Even where vendors claim to offer more comprehensive solutions, few can match the scope of those from TIBCO, where we’ve seen the storm coming from a long way off.
Working in this area since day one has enabled us to develop a unique, comprehensive and highly integrated set of technologies, products and services designed expressly to empower businesses to address the competing IT demands of both the core and the edge. It enables customers to formulate and implement an IT strategy that not only acknowledges but reconciles those demands, one that can be implemented across multiple platforms to deliver the agility needed at the edge together with the stability and control required by the core, and so build a fully “joined-up” digital business.
By Gerardo Dada, SolarWinds
Just a few years ago, few projects were built on open source databases; they were immature and fragmented. MySQL didn’t support transactions, for example. Most applications were built on Oracle, SQL Server, DB2 and others. Remember Pervasive?
In the following years, MySQL evolved into a database engine powering many websites and commercial applications. In 2008, Sun Microsystems acquired MySQL AB and Sun was in turn acquired by Oracle in 2010. The fear of Oracle owning MySQL resulted in a few derivatives such as MariaDB and Percona.
However, despite the ongoing evolution of MySQL and the fact it has proven itself as a serious database engine, using MySQL for mission-critical workloads still poses some challenges that need addressing. Let’s explore them.
Remember that open source can mean various things: access to view the source code, the ability to modify and contribute code, and free software. Whichever attributes are valuable to your organisation will determine whether you want to use a free commercial database engine, such as SQL Server Express, or an open source DBMS for your workload.
Also, don’t forget that just because it’s open source doesn’t mean it’s free: factor in the configuration, customisation and maintenance costs that some open source platforms entail.
Similar to making a business decision about any technology, consider what tools, services, support and talent are available. You want to take a close look at the tools available for MySQL, from performance tuning to backups. The same is true for skills, services and support. Investigate providers such as Percona and MariaDB, who provide support, managed services and consulting.
Open source is not a religion and supporting it should be about making a good business or career decision, not solely aligning with the free software movement. Ask yourself if it’s the right tool for your business needs.
Also consider open source NoSQL database engines, such as Cassandra, MongoDB and Couchbase. A clear definition of the type of data being stored (structured or not, large amounts of text or images, etc), the need for ACID transactions, scaling and other requirements will help guide this decision. Keep in mind that most SQL databases also offer NoSQL access to data.
Compare the different architectural and performance profiles of these databases against your workload requirements. What important design decisions will you need to make? Will there be any trade-offs or limitations to consider? Keep in mind clustering, high availability and scaling architectural considerations. The way you scale and do disaster recovery (DR) in MySQL is very different from how you do it with Oracle or SQL Server, and each model has its trade-offs. Do you plan to move the workload to the cloud in the near future?
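As a small illustration of that difference: MySQL scaling and DR are typically built from replicas wired together with a handful of SQL statements rather than with Data Guard-style tooling. The sketch below assumes classic MySQL 5.6/5.7 asynchronous replication and the mysql-connector-python package; every host name, credential and binlog coordinate is a placeholder.

```python
# Illustrative only: point a MySQL replica at a source server using
# classic asynchronous replication (MySQL 5.6/5.7 syntax). All host
# names, credentials and binlog coordinates are placeholders.
import mysql.connector

replica = mysql.connector.connect(
    host="replica.example.com", user="admin", password="changeme"
)
cur = replica.cursor()

# The binlog file and position would come from SHOW MASTER STATUS on
# the source, captured alongside a consistent backup.
cur.execute(
    "CHANGE MASTER TO "
    "MASTER_HOST='source.example.com', "
    "MASTER_USER='repl', MASTER_PASSWORD='repl-password', "
    "MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4"
)
cur.execute("START SLAVE")
```

Failover, by contrast, is largely a do-it-yourself affair (or one for tooling such as MHA or Galera-based clusters), which is exactly the kind of trade-off to weigh before committing a mission-critical workload.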
What costs will your organisation incur from migrating existing applications to a new database engine? Take a step back and build out a multi-year database strategy with a workload-by-workload plan based on application requirements and potential upside.
Before your company takes the plunge, consider these five unique challenges that adopting an open source database could pose. As with any technology decision, adoption needs to be carefully thought out and executed.
Lawrence Freeman, Operations Director at Mutiny, provides some practical advice for businesses.
Networks and internet connections are the lifeblood of modern businesses. Downtime of these networks can have many effects, and not just financial ones. Demotivated staff and unresponsive customer-facing systems can have a devastating effect on customer service and reputation. Two reports, from Veeam and DevOps, put the financial value of unplanned downtime at $1.25m-$2.5m (£875,000-£1.75m) and the cost of infrastructure failure at $100,000 (£70,000) per hour for enterprises across a range of vertical sectors. This is a considerable sum, which in turn makes the reduction of downtime an obvious priority for IT professionals.
Infonetics Research undertook a survey in February 2016 in which the most common causes of ICT downtime were identified as failures of equipment, software and third-party services; power outages; and human error.
Based on these survey findings, there are obvious steps that businesses can take to reduce the risk of downtime: backup generators and UPS units to keep power up, more redundancy protection, offsite backup solutions, better staff training and greater use of cloud-based software. However, these measures do not stop failures. The most effective way to stop failures is to understand when and where they will occur and take measures to prevent them. Simple network monitoring allows you to view your network and the status of its devices, so you know when they are on or offline, but modern monitoring solutions allow you to extract and monitor far more than that.
Network monitoring solutions can proactively detect issues before they escalate into outages. Even across multiple sites, a centrally monitored solution allows IT teams to understand how their network is performing: the bottlenecks, device load levels, software statuses and resource availability.
Monitoring supplies a 24/7 view of network resources. Centralised dashboards provide IT teams with a single view of their infrastructure and applications, allowing them to see exactly when issues are occurring. Automated alerts via email and SMS warn your IT team when systems are approaching a critical state, allowing them to react immediately and, most importantly, before anything fails.
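The underlying principle is simple enough to sketch in a few lines of Python: probe a service, and raise an alert the moment it stops answering. This is an illustration of the idea, not of any particular product; the host, port, addresses and SMTP server are all placeholders.

```python
# Toy availability check with an email alert. The host, port, addresses
# and SMTP server below are placeholders, not real infrastructure.
import smtplib
import socket
from email.message import EmailMessage

HOST, PORT = "core-switch.example.com", 22  # device and service to probe

def is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_alert(body: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {HOST} unreachable"
    msg["From"] = "monitor@example.com"
    msg["To"] = "it-team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

if not is_up(HOST, PORT):
    send_alert(f"{HOST}:{PORT} failed a TCP probe; investigate before users notice.")
```

A production monitoring platform adds scheduling, escalation, SMS gateways and historical trending on top of this basic loop, but the alert-before-failure principle is the same.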
Furthermore, monitoring does not just supply a view of the infrastructure but can also provide a view of the environments surrounding your devices. A variety of sensors can be configured to work alongside your monitoring solutions: Temperature monitoring can ensure that servers stay cool, humidity sensors ensure that there is not too much moisture in the air, static electricity and other sensors ensure that devices are not under threat from outside influences. You can even monitor who is in the server room.
Unfortunately, downtime can never be avoided entirely; however, there are steps businesses can take to minimise both its occurrence and its impact on the business. By implementing contingency and redundancy, businesses can ensure that the loss of a device has less impact on staff and clients. Awareness is also key. Implementing a proactive monitoring system that alerts your teams is very cost-effective (in some cases, even free), allowing them to react quickly to issues and resolve them before they have any serious impact.
Mark Edge, UK Country Manager at Brainloop, discusses why it’s time that boardrooms operated with the same stringent cyber security measures that they demand elsewhere in their businesses.
Cyber security – it’s becoming a big issue for UK boards. A recent survey by the consultancy PwC reveals that one-in-four UK companies have suffered a data breach in the past two years. And, according to PwC, cyber criminals are becoming ever more ambitious, targeting not just financial information, but also customer data and intellectual property.
The increase in high profile attacks on companies like TalkTalk and JD Wetherspoon has done much to raise awareness of the growing cyber threat. It should come as no surprise that today’s UK boardrooms have become more cyber-savvy. A recent survey by Brainloop, conducted at ICSA 2016, confirms that cyber-security awareness is at an all-time high, with over two-thirds of those organisations surveyed saying that cyber security is now a pressing board issue.
Considering the financial and reputational impact posed by a potential cyber breach, it would be astonishing if boardroom engagement with cyber risk management was not a top priority. But with cybercrime incidents in the UK up 20% since 2014, the board itself also has a major role to play in battening down potential vulnerabilities. That’s especially true when it comes to securing its own communications and board materials.
A data breach at board level would be catastrophic for many UK businesses. According to 81% of respondents participating in the Brainloop survey, the loss or leakage of board papers would prove highly damaging for their business.
While there is a pressing need to secure every aspect of an organisation’s operations against external threat, you’d expect that a ‘belt and braces’ approach to cyber security would begin at the top. In other words, that the majority of UK boards would be utilising a digital board portal to help create and securely share, review and update board materials.
Yet the evidence speaks to the contrary. Almost half (48%) of the organisations surveyed confirmed they still use traditional methods to create board information packs, the majority of which are then distributed as hard copies.
Clearly, boardroom methodologies and workflows remain firmly wedded to the past, despite the fact that many boardrooms say they are well aware of the damage that a data breach could cause. That includes using convenient yet insecure channels – like email – to communicate confidential board information on a day-to-day basis. It’s this approach that puts board communications and company data at risk of interception by criminals.
The Brainloop survey revealed that 57% of organisations said their board had a strong digital culture, including using multiple devices for work, being active on social media and looking for ways that technology can improve their business. But this culture doesn’t appear to be filtering down to where it matters the most. Take, for example, data governance. Of the 52% of organisations using a digital board portal, one-third had no idea in which country their solution provider stores their data. This has significant compliance implications for the business, as data protection regulations, tax laws and security policies can all impact where data can be stored.
There are also major regulatory changes in the pipeline, including Privacy Shield – the new deal that’s set to replace Safe Harbor once EU member states have reviewed and approved it as adequate – and the EU’s General Data Protection Regulation. It is therefore becoming even more important for companies to know in which country their data is being held.
Cyber attacks and data leakage represent a daily threat for UK organisations of every size. With investors and regulators challenging boards to step up their oversight of cyber security, it’s time for the board to take the lead. Especially considering that board members deal with some of the most sensitive company data of all.
While embracing a digital board culture is important, engaging in a real-world cyber-aware culture is now an essential requirement. The good news is that technologies are available to help the board be as secure as possible.
Designed specifically for board directors and executives, these web-based secure workspace solutions make it possible for corporate secretaries to securely create and distribute board packs, notify directors of changes, and update documents in real time, providing a secure environment for board members to gain access to, and collaborate on, confidential documents.
Given the pressing nature of the cyber-threat environment, it’s time that boards took the lead and shored up their own day-to-day practices. Because it’s one thing to demand secure processes across the business – and quite another to practice what you preach.
The Cloud is rapidly becoming a star in the IT strategies of organisations throughout the world. Companies of all sizes are waking up to the benefits of moving some or all of their Oracle applications to the Cloud: reducing the need for up-front capital investment, accelerating return on investment, and gaining greater business insight and an excellent user experience.
By Mark Vivian, Managing Director, Claremont.
The reality of today’s Cloud technology is now matching - or even exceeding - the hype of a few years ago. However, some organisations are still cautious about moving business-critical applications to the Cloud, based on perceived risks that are often overstated and based on outdated concerns. To allay these fears, we can draw on the experiences of early adopters, such as Reading Council, to prove the value of migrating Oracle applications to the Cloud.
When we implemented Cloud ERP for Reading BC they were able to save in excess of 30% of their annual IT spend. This significant saving meant they could protect their critical front office services through an early adoption of Cloud ERP during a period of financial austerity.
Patrick Hopkins, ERP Capability Lead, Claremont

Organisations today are under increasing pressure to embrace new technology, both to demonstrate innovation and to improve performance. Within the Oracle community, organisations are at different stages in migrating their business-critical applications, but more and more are embarking on - or at least planning their route for - the journey to the Cloud. A key driver is that the Cloud reduces the need for specialist skills in-house, liberating IT departments from systems maintenance and empowering them to focus on more strategic initiatives.
Claremont’s recent survey on The Future of Oracle-based Applications delivers a fascinating insight into the state of play in Cloud migration.
The survey asked: Has your business migrated any of its Oracle-based applications to the Cloud?
While only 26% are actively engaged in the Cloud at present, perhaps the most interesting result is in the intentions of the 74% who have not yet migrated. With 60% planning on moving to the Cloud within three years, organisations are clearly excited about the benefits of the Cloud, making late adopters an ever-diminishing minority.
The survey also asked organisations engaged in the Cloud which applications they have moved. HCM/Payroll is currently the most common, with 62.5% of organisations moving it to the Cloud. In future, this looks set to change, with 80% of organisations keen to move ERP once any apprehensions have been dispelled (18% currently host ERP in the Cloud).
Our Cloud customers are well versed in the benefits of this approach. Reduced operating costs are often cited as the main benefit, but customers also place high value on access to around-the-clock support and the ability to remove the need to maintain specialist skills in-house.
Jonathan Stuart, Managed Services Director, Claremont
Moving business-critical applications to the Cloud only makes sense if you see clear benefits. The mass migration planned over the next one to three years demonstrates that the vast majority of organisations see them very clearly indeed. This movement towards the Cloud is surely influenced by the experiences of early adopters. For instance, the survey indicates that 75% of organisations who have moved their Oracle applications have reduced their operating costs as a result, with 63% saying their system is now easier to upgrade and maintain. Perhaps even more impressive, 63% cite their move to the Cloud as enabling new business models for the organisation.
Judging by the smoothness of the transition project and your flexibility, there really wasn’t anywhere where I’d say there was room for improvement. Since go live, my guys have been very pleasantly surprised at the pro-activeness of your support. I think there’s been three occasions over the past few weeks where your guys have suggested something could / should be done and we’ve taken up your suggestions. In eight years with our previous supplier, this never happened once – not once!
Barry Reynolds, Head of IT Services, PPL

The organisations who believed in the promise of the Cloud have blazed a trail and have proven the value of moving some or all Oracle applications to the Cloud. The right hosting partner will be able to work with you to determine if moving to the Cloud is right for you and to help make that journey as seamless as possible.
SURVEY DATA
The figures in this article form part of the Research Survey Report: The Future of Oracle-based Applications, produced and created by Claremont. For further information on the detailed findings of this survey please contact info@claremont.com.

According to the Digital Universe study by IDC, global data volumes will grow from 4.4 zettabytes in 2013 to 44 zettabytes by 2020.
That’s a staggering increase. Managing all of this new data represents several opportunities for businesses, but also significant challenges.
By Ashwin Viswanath, Head of Product Marketing at Talend.
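To put those headline figures in perspective, a tenfold rise over the seven years from 2013 to 2020 implies compound annual growth of roughly 39%, as a quick back-of-the-envelope check shows:

```python
# Implied compound annual growth rate for 4.4 ZB (2013) -> 44 ZB (2020).
cagr = (44 / 4.4) ** (1 / 7) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 38.9%
```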
After all, it’s not just the volume of data so much as the increasing variety of data sources and formats that presents a problem. With mobile apps, machine data, on-premises applications and SaaS all flourishing, we are witnessing the rise of an increasingly complicated information value-chain ecosystem. IT leaders need to adopt a portfolio-based approach and combine cloud and on-premises deployment models to sustain competitive advantage. Improving the scale and flexibility of data integration across both environments to deliver a hybrid offering is vital to providing the right data to the right people at the right time.
The evolution of hybrid integration approaches creates requirements and opportunities for converging application and data integration. The definition of hybrid integration will continue to evolve, but the ‘direction of travel’ is clearly to the cloud.
Gartner is projecting dynamic growth in public cloud spending. According to the research firm, the worldwide public cloud services market is projected to grow 16.5 percent in 2016 to total $204 billion, up from $175 billion in 2015. The highest growth will come from cloud system infrastructure services (infrastructure as a service [IaaS]), which is projected to grow 38.4 percent in 2016.
The increasing focus on the cloud means that customers will need to have an effective hybrid integration strategy. At Talend, we have identified five phases of cloud data integration, starting with the oldest and most mature and going right through to the most bleeding edge and disruptive. Here, we provide a brief overview of each phase of that integration and highlight how businesses can optimize the approach as they move from one step to the next.
The first stage in developing a hybrid integration platform is to replicate SaaS applications to on-premises databases. Companies in this developmental phase typically either need analytics on some of the business-critical information contained in their SaaS apps, or they are sending SaaS data to a staging database so that it can be picked up by other on-premises apps.
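As a rough sketch of this first phase – assuming the simple-salesforce client library and a local SQLite staging database as stand-ins for whatever SaaS API and on-premises database are actually in play – a scheduled replication job might look like this:

```python
# Sketch: replicate a SaaS object (here, Salesforce accounts) into a local
# staging database. simple-salesforce and SQLite are illustrative choices;
# the credentials, object and field names are placeholders.
import sqlite3
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

# Pull the records needed for on-premises analytics.
records = sf.query_all("SELECT Id, Name, AnnualRevenue FROM Account")["records"]

db = sqlite3.connect("staging.db")
db.execute("""CREATE TABLE IF NOT EXISTS account
              (id TEXT PRIMARY KEY, name TEXT, annual_revenue REAL)""")

# Upsert so the job can be re-run on a schedule without duplicating rows.
db.executemany("INSERT OR REPLACE INTO account VALUES (?, ?, ?)",
               [(r["Id"], r["Name"], r["AnnualRevenue"]) for r in records])
db.commit()
db.close()
```

From here, other on-premises applications or BI tools can pick the data up directly from the staging database.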
To increase the scalability of existing infrastructure, it’s best to move to a cloud-based data warehouse service within AWS, Azure, or Google Cloud. The scalability of these cloud-based services means businesses don't need to spend cycles refining and tuning the databases. Additionally, they get all the benefits of utility-based pricing. However, with the broad range of SaaS apps today generating even more data, they may also need to adopt a cloud analytics solution as part of their hybrid integration strategy.
Each line of business has its preferred SaaS app: sales has Salesforce, marketing has Marketo, HR has Workday, and finance has NetSuite. However, these SaaS apps still need to connect to an on-premises back-office ERP system.
Due to the complexity of back-office systems, there isn't yet a widespread SaaS solution that can serve as a replacement for ERP systems such as SAP R/3 and Oracle EBS. Businesses should not try to integrate with every single object and table in these back-office systems – but rather look to accomplish a few use cases really well so that their business can continue running, while benefiting from the agility of cloud.
Databases or data warehouses on a cloud platform are geared toward supporting data warehouse workloads: low-cost, rapid proof-of-value and ongoing data warehouse solutions. As the volume and variety of data grow, enterprises need a strategy to move their data from on-premises warehouses to newer, Big Data-friendly cloud resources.
While they assess which Big Data protocols best serve their needs, they can start by trying to create a Data Lake in the cloud with a cloud-based service such as Amazon Web Services (AWS) S3 or Microsoft Azure Blobs. These lakes can relieve cost pressures imposed by on-premises relational databases and act as "demo areas", giving businesses the opportunity to process information using their Big Data protocol of choice and then transfer it into a cloud-based data warehouse. Once enterprise data is held there, the business can enable self-service with Data Preparation tools, capable of organising and cleansing the data prior to analysis in the cloud.
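As a minimal sketch of landing raw data in such a lake using the boto3 AWS client – the bucket name, prefix layout and file path are all placeholders, not prescriptions:

```python
# Sketch: land a raw on-premises extract in an S3 "data lake" bucket,
# partitioned by date so downstream Big Data tools can locate it.
# Bucket, prefix and file names are placeholders.
from datetime import date
import boto3

s3 = boto3.client("s3")  # credentials resolved from the environment

today = date.today()
key = f"raw/orders/year={today.year}/month={today.month:02d}/orders.csv"

s3.upload_file("export/orders.csv", "example-data-lake", key)
print(f"uploaded to s3://example-data-lake/{key}")
```

A Big Data engine or Data Preparation tool can then process everything under the raw/ prefix in place.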
Businesses today need insight at their fingertips in real time. To benefit commercially from real-time analytics, they need an infrastructure that can deliver this level of rapid data insight. These infrastructure needs may change depending on the use case – whether it is to support weblogs, clickstream data, sensor data or database logs.
It’s best for IT leaders to first assess all their data sources in order to judge which ones must remain on-premises versus those that need to be moved to the cloud. For example, most IoT use cases involving sensors with industrial equipment are on-premises, so it’s best to keep your streaming analytics infrastructure on-premises. However, for use cases where you're collecting streaming data about systems already in the cloud, it’s probably best to keep your infrastructure there also and use existing services within those ecosystems to set up your streaming infrastructure. That way you keep ahead of the game in terms of moving everything to the cloud.
We live in a ‘mobile first’ society, meaning that every experience will be delivered as an app through mobile devices. In providing the ability to discover patterns buried within data, machine learning has the potential to make applications more powerful and responsive. Well-tuned algorithms allow value to be extracted from disparate data sources without the limits of human thinking and analysis. Businesses will need to harness the expertise of skilled developers who understand that machine learning offers the promise of applying business critical analytics to any application in order to accomplish everything from enhancing customer experience to serving up hyper-personalised content.
In order for companies to reach this level of ‘application nirvana’, they will need to have first achieved or implemented each of the four previous phases of hybrid application integration.
That’s where we see a key role for integration platform-as-a-service (iPaaS), which is defined by Gartner as ‘a suite of cloud services enabling development, execution and governance of integration flows connecting any combination of on premises and cloud-based processes, services, applications and data within individual or across multiple organisations.’
The right iPaaS solution can help businesses achieve the necessary integration, and even bring in native Spark processing capabilities to drive real-time analytics, allowing them to move through the phases outlined above and ultimately successfully complete stage five.
Secure code has to underpin everything IoT, from platforms to devices, because just one chink in the armour leaves us all vulnerable
By Amit Ashbel, Cyber Security Evangelist at Checkmarx.
The pace at which the Internet of Things (IoT) is entering our homes and workplaces is phenomenal. This proliferation brings many potential benefits to users, but it also presents numerous security risks. There is currently no common IoT platform; instead, various tech giants are competing to own the IoT platform of choice, with securing that platform seeming to be a lesser consideration. The Open Web Application Security Project (OWASP)'s top ten IoT vulnerabilities list gives recommendations on how to develop IoT applications that can fight off hacking attempts. In the IoT space, releases are generally quick and frequent, so OWASP's top ten is certainly helpful, but its recommendations can only have a positive effect if the underlying application code itself is secure.
There is no doubting the benefits of IoT. For consumers, British Gas' Hive already allows you to control your home heating from wherever you happen to be, while London City Airport is using IoT to keep things moving and thus make the customer's journey easier. Unfortunately, the benefits come at a price. The hacking of IoT devices has become mainstream news. Cars can be hacked to cause accidents, or to access sensitive information on mobile devices connected to the car's WiFi network. IoT devices give hackers another avenue of attack, and one that is perhaps not as secure as it could be. Regardless of how secure some of your devices might be, it only takes one chink in the armour and the hacker is in.
The Smart TV is probably the most prevalent connected device in people's homes today. They are effectively mini-computers with WiFi access and applications that require the user to input information such as email IDs, phone numbers, full names and so on. Unfortunately, unlike smartphone app stores such as Google Play and the App Store, there are no regulations when it comes to Smart TV apps, which means that security protocols are often absent. In fact, manufacturers often give developers software development kits (SDKs) with no real security policy, allowing hackers easy access to the TV's innards, including file I/O and the screen/app control API. In other words, Smart TV apps are running with complete “root” access.
Taking the case of the Smart TV specifically, a flaw in the application layer can give a hacker entry to the TV, which can lead to identity theft and the harvesting of sensitive account information. Hackers can record private footage through the built-in camera and microphone, or work out passwords through key-logging and/or capturing sensitive screenshots. A hacked TV can also be used for commercial espionage by monitoring usage behaviour and patterns.
But of course, it doesn't stop there: defence forces are increasingly going online, and we've all seen the various TV shows and movies where hackers get control of a weaponised drone. The security risks associated with IoT are very real, yet tech giants are ploughing ahead with their IoT platforms: Google's Brillo, Apple's HomeKit, Intel's IoTivity, Qualcomm's AllJoyn and, most recently, Samsung's Artik Cloud Platform. We know that manufacturers within Apple's HomeKit eco-system have to use certified chipsets and specialised firmware developed with security in mind, but the reality is that with no actual regulation or industry standards to speak of, many smart devices end up without password protection, transferring data without adequate encryption.
There are a number of things end users can do to help the fight against hackers, such as changing default passwords (1111, 1234 etc), using strong or complex passwords and changing them regularly, and avoiding open public hotspots in favour of secure local WiFi networks. But of course, it shouldn't be left only to end users to protect themselves; security protocols should be built into these applications. Here are four security measures OWASP says all IoT application developers must use:
1 – Prevent Brute Forcing – Malicious attackers use a wide variety of automated methods to guess passwords in order to hack into systems. Blocking access after a certain number of login attempts will help to prevent this (a minimal sketch follows this list).
2 – Disable Use of Default Passwords – To avoid easily hackable default passwords, IoT hardware should be programmed to enforce a default password change during the initial setup process. The new password should be required to be strong and replaced regularly.
3 – Store Credentials Securely – All private data should be encrypted, stored in a secure manner and never exposed over the network. Cloud systems require strong transport encryption.
4 – Secure Updating Mechanisms – IoT devices need to be updated constantly (new features, security patches, etc). The software should be able to process encrypted update files, validating them – for example via signed files – before they are applied.
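Here is the minimal sketch of the first measure promised above; the attempt threshold, lockout window and in-memory storage are illustrative choices rather than OWASP requirements:

```python
# Sketch: lock an account after repeated failed logins. In a real device
# this state would live in persistent storage, not an in-memory dict.
import time

MAX_ATTEMPTS = 5       # failures allowed before lockout
LOCKOUT_SECONDS = 300  # how long the lockout lasts

_failures = {}  # username -> (failure count, start of current window)

def allow_login_attempt(username: str) -> bool:
    count, since = _failures.get(username, (0, 0.0))
    if count >= MAX_ATTEMPTS and time.time() - since < LOCKOUT_SECONDS:
        return False  # locked out: too many recent failures
    return True

def record_failed_login(username: str) -> None:
    count, since = _failures.get(username, (0, 0.0))
    if time.time() - since >= LOCKOUT_SECONDS:
        count, since = 0, time.time()  # stale window: start counting afresh
    _failures[username] = (count + 1, since)
```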
While the above are good rules to live by, if the application has poor code integrity, there is a limit to how helpful these measures can actually be. And with Gartner claiming that we will see over 25 billion connected things by 2020, the issues around securing these things cannot be ignored any longer. To avoid vulnerabilities such as SQL injection, Cross-Site Scripting (XSS) and more, those in the business of IoT should ensure that secure application code underpins any and all IoT platforms or devices.
The most effective way to avoid these vulnerabilities is to develop applications in a secure Software Development Life Cycle (sSDLC). In many instances, code is written, passed to another team and then checked for security vulnerabilities very close to release time. Checking this late in the process means it takes longer for the coders to re-code in order to fix any bugs or vulnerabilities, as they are not as familiar with the code as they would have been when first writing it. It also puts pressure on the business to release a new version that isn't fully patched due to time constraints. The cost of delay is sometimes too high, and vendors prefer to “take the risk”.
Security implemented in the form of Static Code Analysis (SCA), a SAST methodology, automates the security process and enables early elimination of application-layer vulnerabilities. This saves developers time – which, incidentally, is what they are normally measured on – and helps protect the business from hackers.
At the moment, the hacker storylines portrayed in many TV shows and movies are not as fictional as you might think. Whether by hackers demonstrating their own security concerns around IoT or through malicious attacks, we've already seen compromises of cars, Fitbits, baby monitors, Samsung's smart fridge (via its access to Google Calendar) and even the WiFi-enabled Hello Barbie. And of course, hacking students at the University of Alabama managed to hack the pacemaker in a robot used for training medical students, theoretically enabling them to kill that 'patient'. With former Vice President Dick Cheney having a pacemaker, that episode of Homeland suddenly seems a lot less far-fetched.
This is now our reality but if applications are developed securely with high code integrity, smart devices will become safer. The IoT revolution is nothing to be feared if vulnerabilities are eliminated at the root – the application code.
Making the move to cloud-based services like Office 365 provides organisations with many benefits, from an increase in end user productivity to reduced cost and complexity of maintaining on-site hardware.
The risk of downtime is also substantially reduced because the applications are run across highly available architectures spread over different regions. These benefits have made Office 365 an attractive prospect to businesses of all sizes and industries.
By Stefan Schachinger, Consulting System Engineer – Data Protection, Barracuda Networks.
But cloud service providers have made it easy for IT departments to think that when it comes to disaster recovery, their work has been done. While application downtime is certainly reduced in Office 365, Microsoft cannot protect businesses from themselves. There is no way for Office 365 to distinguish between a malicious employee deleting critical files and another deleting some unneeded items. This means that if data is lost because of human error, there’s often very little that businesses can do to get the files back.
What’s worrying is that data loss through human error isn’t an uncommon occurrence. Recent research from Cloudwards found that 32% of data loss is caused by human error, and there are plenty of ways this can occur.
While Office 365 does provide customers with some protection against loss of data, the window for data recovery is often short and the recovery options limited. For example, in Exchange Online, individual emails that are deleted will remain in the user’s deleted items folder for 30 days by default. While the deleted items folder does provide a layer of protection against end-user errors, if the user chooses to empty the folder, the data will be held for a further 14 days by default, after which it will be gone forever – a maximum recovery window of 44 days in total.
Businesses looking to adopt a Cloud solution for business-critical activities need some additional support to eliminate the risk of items lost due to human error or malicious deletion, as well as retain emails and files indefinitely if users leave the organisation. Here are four tips to ensure that important data is accessible, recoverable and protected:
1. Automate your Cloud backup: By employing an automated backup service, IT departments can save an enormous amount of time compared to running manual backups. This will also minimise the risk of out-of-date backups. Some services include on-demand backups and backup schedules, so the IT team has peace of mind.
2. Keep data retrievable: Making sure that businesses are meeting the compliance demands of their industry is essential. All Office 365 data should be retrievable from anywhere with an internet connection, and restores should be fully flexible, allowing point-in-time recovery as well as restores to either the original or any other user account. Keeping data retrievable is also essential for compliance: if an organisation is placed under legal hold, it must have its records readily available.
3. Fool-proof your data: Mistakes will happen. By implementing a solution that can recover previously edited versions of documents, businesses can ensure that edited documents can be recovered and mistakes rectified.
4. Prepare for the worst: Businesses need to be confident their data is secure in the event of a hack. Make sure that your Office 365 data is encrypted when at rest and in transit. Businesses might also want to assess multi-factor authentication technologies and role-based administration to ensure that cyber attacks don’t hinder productivity.
Most SaaS vendors back up their customers' data to protect against application downtime, but they cannot protect customers from themselves; if data within the application is changed, either accidentally or on purpose, the overwritten data can be lost forever. Companies must implement the same level of data protection for their Cloud services as they have for their existing on-premise applications, so they can rest assured that their business won’t be damaged by data loss.
With increasing interest from the healthcare industry in cloud hosting and services, a recently published infographic from Arxan caught my eye.
It highlights some of the differences between perception and reality around how secure mobile applications are in this sector.
By Monica Brink, EMEA Marketing Director, iland.
The UK healthcare sector, and particularly the NHS, has changed and re-formed more in recent years than ever before. In the face of significantly reduced budgets and alongside huge demands to reduce costs and increase efficiency, radical cuts are being made and new technologies introduced, including mobile apps, in order to reshape the industry from top to bottom.
Many healthcare organisations are now implementing new digital strategies and mobile apps as the organisations and their workforces become more flexible, agile, remote and mobile.
Like many industries, healthcare is seeing more and more employees use mobile applications to perform their jobs more effectively, which means ever more sensitive data is constantly passing through these applications. This is driving a growing need to safeguard the confidential data on employees’ smartphones and tablets.
Organisations in the healthcare industry, however, face many challenges with regard to the safeguarding of data. Firstly, there’s the nature of the kind of information they have access to. This isn’t just financial data that could affect a company’s bottom line; it also includes individuals’ health records: sensitive information that could seriously harm people’s personal lives if it were to get into the wrong hands. In addition, there is the nature of how many healthcare organisations operate. We’re not talking about traditional office structures here. Doctors, nurses and other hospital staff are rarely tied to one workstation all day, more commonly moving around their workplace.
In data access regulation, we often talk about operating on a ‘need to know’ basis, with restrictions being based on the necessity of each individual to do their job. When we’re talking about healthcare, it’s of the utmost importance to get this right, as often ‘need to know’ literally means a question of life or death. Consider the doctor who needs to check his or her patients’ allergies before administering urgent medication. Having that information to hand at the right time and the right place is not just a matter of convenience; getting these restrictions right is crucial. On the one hand, it is imperative that patients’ sensitive data is safeguarded, but on the other it’s of equal importance that the right people have the access necessary to do their job, whenever and wherever they need it.
This is why the healthcare industry is among the most regulated with regards to data security. In the US, healthcare providers must adhere to the federal law of the Health Insurance Portability and Accountability Act (HIPAA).
In the UK, private providers that also operate in the US will need to adhere to HIPAA too, but in the public sector, the National Health Service has security policies for England, Wales and Scotland. While not law, these policies aim to safeguard patient data and ensure organisations within the NHS adhere to the Data Protection Act (DPA). This has recently taken on greater significance since the Information Commissioner’s Office (ICO), which enforces the DPA, was given greater authority by the UK government earlier this year to audit NHS organisations’ data security.
However, the challenge is much broader than simply securing devices on a network. Organisations also need to secure systems and infrastructure right from the server to the end user, no matter where that infrastructure might be - most of which is likely to be in the cloud.
With the growth of IoT, mobile devices and cloud being key IT trends, companies need to ensure that their end-to-end attack surfaces are all fully protected. This is clearly evident from the many infrastructure breaches we have seen recently in the press - from the well-known UK telecoms provider that suffered a well-publicised infrastructure breach at the end of October 2015, to lesser-known small and medium-sized businesses that have been completely levelled by a cyber-attack.
This is one of the reasons why we have invested in making sure that our Cloud Hosting for HIPAA compliance is built from the ground up with security features, reporting, adherence to a BAA (Business Associates Agreement), and professional services to ensure that healthcare companies can leverage all the benefits of cloud computing while meeting the requirements of HIPAA in a hassle-free way. Equally, in the UK we adhere to the Data Protection Act and we will also adhere to the EU General Data Protection Regulation when this eventually takes effect.
With more healthcare companies adopting cloud than ever before, the cloud infrastructure that employees are working from also needs to be just as secure to cope with a security breach and protect all of that data. Making sure your cloud networks, infrastructure, applications and data are as secure as possible is a vital part of leveraging the mobile application trends that have the potential to deliver so many great benefits to the healthcare industry.
Monitoring network traffic is not one size fits all. If your organization uses a virtualization platform like VMware or KVM, some network traffic never leaves the host.
For example, two virtual machines (VMs) on the same host communicate using a virtual switch (vSwitch) which means the software performs network switching. If you use virtualization and you only monitor the physical wire, your application, security, and network monitoring tools may miss all of the virtualization traffic. How much traffic you miss depends on how much of your data center environment is virtualized and how much east-west traffic moves within your data center or the cloud.
By Jason Landry, Senior Solutions Marketing Manager, Ixia.
Server virtualization now accounts for a majority of compute resources: Gartner estimated that 75% of all workloads were virtualized in 2015. Only 20% of businesses are this heavily virtualized, but the figure highlights the continued trend towards more virtual computing. To calculate your organization’s virtualization percentage, add up all of your VMs and divide by the total number of all servers (VMs + non-virtualized servers). For example, if you have 500 VMs running on 50 physical servers and an additional 50 servers with no virtualization, you are 91% virtualized (500 / (500 + 50) = 500/550 = 0.909).
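That arithmetic is trivial to automate; here is the same calculation as a small helper function (a sketch mirroring the worked example above):

```python
# Sketch: virtualization percentage = VMs / (VMs + non-virtualized servers).
def virtualization_pct(vms: int, non_virtualized_servers: int) -> float:
    return 100.0 * vms / (vms + non_virtualized_servers)

# The worked example from the text: 500 VMs plus 50 non-virtualized servers.
print(round(virtualization_pct(500, 50), 1))  # 90.9, i.e. ~91% virtualized
```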
No matter where you are with virtualization, one thing is true – the greater your percentage of virtualization, the more you should rethink network monitoring to maintain application performance, security, and compliance. Since a significant portion of virtual machine traffic never touches a physical link, virtual monitoring can help in analyzing the VM to VM communication buried deep inside of physical hosts. On average, east-west traffic (server to server), accounts for about 75% of the traffic in your data center. How much east-west traffic skips a physical wire depends, in part, on your virtualization percentage.
When you monitor a physical network, the impact is largely isolated to a one-time topology or switch configuration change. Network taps or SPAN ports copy traffic and forward it directly to an analysis tool or a dedicated monitoring network. Dedicated monitoring networks are separate from the inline path of traffic on the production network. This isolation of physical monitoring makes it possible to limit the effect monitoring has on your network resources. However, this isolation does not exist in virtualization.
Virtualization creates hardware functions in software, using physical host resources to perform compute and networking functions. If you want to monitor your virtualized environment, you will need to make a copy of the virtual network traffic, which means consuming CPU, memory, and network resources to do so. This can have consequences. For instance, using resources to copy and filter packets may change the number of VMs you can run on a host. It could also affect the performance of the applications you are monitoring. We have seen host utilization increase by anywhere from 2% to 30%.
There are multiple options to copy virtual network traffic. None of them are perfect and there are trade-offs with each. Some you can run in the public cloud, some you cannot. Some may support VM motion but are not flexible to deploy. Some options may require you use a particular version of VMware or a specific type of vSwitch. Choosing the best option for you depends on the application you are monitoring, where you are running the application, the virtualization solution you are using, and the outcome you want to receive. The beauty is you are not restricted – you can use one option across your entire environment or some combination of all five.
1. Put the vSwitch in Promiscuous Mode
This option copies all vSwitch traffic and forwards it to all attached VMs. A special VM is set up on the same host to collect all the packet copies and perform basic filtering. Placing the vSwitch in promiscuous mode can be a security concern because it essentially turns the vSwitch into a virtual hub, meaning any VM connected to the vSwitch could listen to all traffic. The upside is that it is flexible to deploy, easy to scale, and supported by vMotion. Beware though, copying all traffic moving across a vSwitch can use a lot of host resources.
2. Use vSwitch Port Mirroring
This option copies vSwitch traffic on one port and forwards it to another port. A special VM is set up on the destination port to collect packet copies. This method is similar to configuring a SPAN port on a physical switch. Configuration could become burdensome if you have a large number of vSwitches or applications you want to monitor. However, it can be easy to configure, especially when you need to perform temporary monitoring.
3. Add a tap agent to a monitored VM
This option uses the monitored VM itself to copy packets moving through the virtual NIC. Add a tap agent into an application or into the virtual machine operating system and it will “sniff” the network data moving to and from it. This option natively supports vMotion because configuration happens in the VM, not the vSwitch. It will work in just about any environment, including a public or private cloud. However, it can adversely affect the host resources and the application being monitored, as the VM is responsible for copying and storing all those packets (a minimal sketch of such an agent follows this list).
4. Remote control vSwitch Infrastructure
This option copies select packets on the vSwitch, based on rules you define, and forwards them to a virtual or physical destination. It is similar to option 2, but instead of manual configuration of each vSwitch, an orchestrator VM is deployed that communicates with the vSwitches via an API to automatically configure ports and set the filtering rules you defined. While this option can offer superior configuration control, not all virtualization platform versions support it.
5. Flow Steering
This option uses a control to instruct the virtual switching layer to steer VM to VM traffic to another packet-capture-and-copy VM, first. Flow steering differs from the other four options because it adds an additional failure node in the application data path between the VMs. However, it does offer high flexibility, control, and automation capabilities and works in the public cloud.
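To make option 3 more concrete, here is a minimal sketch of a tap agent running inside a monitored VM. Scapy is an illustrative choice of packet library, and the interface names and forwarding approach are assumptions rather than any vendor's implementation:

```python
# Sketch of option 3: a tap agent inside the monitored VM that copies
# packets from the virtual NIC to an interface facing the analysis tool.
# Requires root privileges; interface names are placeholders.
from scapy.all import sniff, sendp

MONITOR_IFACE = "eth0"  # virtual NIC being tapped
TOOL_IFACE = "eth1"     # interface facing the monitoring tool

def forward_copy(packet):
    # Forward a copy of each observed packet to the analysis tool.
    sendp(packet, iface=TOOL_IFACE, verbose=False)

# store=False keeps memory use flat; this loop runs until interrupted.
sniff(iface=MONITOR_IFACE, prn=forward_copy, store=False)
```

As the text notes, the copying itself consumes VM resources, which is the main trade-off of this approach.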
You have calculated your virtualization percentage. You have explored different ways to copy virtualization traffic for monitoring. You probably have a good idea how much virtualization traffic you have and how much of it never touches a security or network monitoring tool. Your next step is to ask yourself a few key questions.
Your answers will help guide you towards choosing the best visibility strategy for your virtualized environment.
Forget everything you have ever heard about the security of the cloud and let’s start from scratch.
Despite the hype, the cloud does not make it easier to have poor security – in fact, I would go so far as to say the opposite; it may, however, make poor security feel less painful psychologically. The thing many fail to understand is that the cloud is just the same technologies from an on-premise environment running somewhere else. Any danger that there would have been when running a CRM app like Salesforce on premises is still there in the cloud, although the share of risks is much smaller since the provider takes care of some.
By Jonathan Sander, VP of Product Strategy at Lieberman Software.
In a case like Amazon’s EC2 where servers are running in the cloud, the organization would be just as responsible for security from the operating system up as they would be if it were a server in their own datacenter. Many fail to see that clearly and think that there are either unique risks or magical protection afforded by running in Amazon’s world. Of course, when something is “over there” it feels like it’s less your problem. So, many things that would have perhaps been a nagging feeling about a server that is set up in your datacenter may feel way more distant when they are running in the cloud – but it doesn’t mean they don’t exist.
The single most common mistake users of public cloud make is to not read their contracts and understand where their responsibilities truly lie. Often people are unclear as to when and how the creation of a server in the cloud moves from the care and security of the provider to them. I’ve run into folks who mistakenly thought their cloud provider was patching servers through some back door for them. They weren’t; and the servers went unpatched for months. Often organizations will forget that the layer of management given to them by the cloud provider will also need some security. The administrative users and rights used to configure and control the cloud systems will need to be treated just as carefully as any other privileged users in their systems.
Another mistake that is common is to think that the cloud provider will have services that their on-premises systems did, simply waiting for them to use. It’s true that Amazon, Microsoft, and others do build in many services for customers, but before moving to the cloud organizations really must do a full inventory of everything they were doing on-premises to identify gaps. Security is often an area where there are things missed when moving workloads from on-premises to the cloud. Maybe there are different groups involved – the operations folks are spurring the move to the cloud for cost reasons, but the security folks only find out at the last minute and have to scramble to make a change to support the move.
Protecting public cloud resources is no different from securing systems running on-premises. In principle there are no differences, and in operation the differences are minimal. The real trick to appropriate security in the public cloud is to treat it as if it’s just another datacenter. Attempt to build security that’s at least as good as what you had on-premises, or perhaps even take the opportunity of the new build-out to make the improvements you would have made on-premises if you had only had the time. If there are ways that you want to apply security patterns that turn out not to work because things are running in the cloud, then deal with them as exceptions. You won’t find many.
From a security perspective, the cloud has been mature for years. If you look at the intimidating list of security and even compliance certifications that the major cloud providers hold, you can see that no IT shop except the most elite (and well-funded) has ever come close to offering a platform as well secured. They have to. If the cloud providers had a major gap in security, especially considering how much undue attention is being paid to that security, they would be finished overnight. Suffice it to say that if you have very poor security in the public cloud, it’s likely you brought it in with you when you walked through the door.
The worst consequences of cloud security failures are conversations about cloud security failures. In the end, security in the cloud is only as bad as the user makes it. You could argue that the massive investments made by cloud providers to secure the underpinnings of the applications, servers, and other technologies they offer in the cloud actually makes cloud security quite a bit better. But cloud is under a microscope because of its impact and potential. Combine that with the fact that there is this (most false) impression that the cloud is somehow less secure, and you get a multiplier for any cloud security failure that happens to occur.
Although the cloud hasn’t made people more complacent about risks, it doesn’t seem to have made them more attentive to them either. This varies from organization to organization, of course. Some see the very specific language about what duties and risks are theirs in the contracts with their cloud providers, and it wakes them up to all the things that may go wrong that they have forgotten. The complacency comes from the fact that risks are still prioritized for action alongside everything else that pulls on organizations. If fixing a security risk will cost twice as much as a project that increases profit margins by a third, what do you think an organization will do? Organizations will ultimately act to further their main interests, and IT security risks don’t often make the cut.
Security in the public cloud will need to be a team effort, just as it was on premises. There is certainly a need for a security subject matter expert. However, there will be pieces that require a cloud subject matter expert, too. The real trouble here is that most organizations don’t have an appropriate process to manage and disseminate good security information for their current systems, and moving to the cloud won’t magically fix that. However, forward-looking organizations could use the opportunity afforded by a paradigm shift like moving to the cloud to help establish a better process. Long-standing security processes, e.g. those from SANS, are perfectly well suited to the cloud. Taking models that are proven and applying them to the new public cloud operations will definitely result in better outcomes.
You can be absolutely sure that the contract any organization has with its cloud provider will make it crystal clear where its responsibilities begin and the provider’s end. As with other areas of life, don’t make assumptions, especially when it comes to your vital corporate data. Failing to read and comprehend what they are responsible for has gotten many public cloud users into uncomfortable spots, and has thus given cloud security an unjustly bad rap.
With Gartner predicting that the hybrid cloud market will reach £51.4 billion in sales by 2018, it’s clear that this is more than an emerging trend.
In particular, businesses are turning to hybrid cloud telephony software as an agile, cost saving alternative to traditional telephony systems.
By Marcus Kellman, head of technical sales and pre-sales, Solgari.
Its flexibility has seen companies quick to adopt hybrid telephony – a cloud computing service that mixes on premise, private and public cloud services – as it gives cautious companies the opportunity to experience the benefits of the cloud without fully committing their resources. However, while there are benefits for those taking their first steps into the cloud, businesses also need to be aware of the challenges associated with this model, including the added complexity it brings when managing both cloud and legacy systems.
Cost saving is undoubtedly one of the biggest benefits of a hybrid model. By using both cloud and legacy infrastructure, organisations can mix and match services to strike the ideal balance between cost and security, giving them a flexible, agile telephony option. Costs can be separated, whether it’s using on premise telephone equipment or hosted cloud services for archiving and backup, and organisations can select the appropriate option depending on their business needs and purse strings. Rather than having to rip and replace legacy technology, hybrid telephony allows businesses to bridge the gap between old and new systems, therefore enabling them to take the step towards a complete cloud telephony system at a pace that suits them. A monthly subscription fee for cloud services also reduces initial capital expenditure and cuts costs associated with deploying and maintaining additional hardware or software on-site.
Another key benefit of the hybrid model is that it allows businesses to have on premise IT infrastructure that can support the everyday workload, and rely on the cloud for any excess demands. By directing calls or call data where they are best suited, it helps improve overall business efficiency.
The flexibility of hybrid telephony also means it can solve a number of business and communication challenges. For example, if a business has an existing on-site phone system it may find it more convenient to subscribe to a call recording service or a hosted contact centre rather than installing additional hardware or software for these applications. Or, if a larger business has invested heavily in its telephony equipment, it could choose to have a staggered transition with deployments of cloud services for remote sites while continuing to use existing on premise telephony systems.
These benefits are significant, but it’s important that businesses are aware of the challenges they could face when they adopt hybrid telephony. Indeed, on premise, private and public cloud applications are all very different environments and it’s important that they integrate well. Both the cloud and legacy systems need to be in sync, and the nature of the network setup for both the public and private cloud needs to be taken into consideration. Applications, data and services should be able to move between private and public cloud environments seamlessly, so the network needs to be designed to allow for fluid changes.
This leads us onto one of the most widely discussed topics when referring to the hybrid cloud – security and compliance. The hybrid model can cause numerous headaches because of the complications associated with managing multiple security strategies. Indeed, many businesses are concerned that crossing operating boundaries could jeopardise their ability to meet strict regulatory requirements. While it’s true that it does add an element of complexity when managing the rules for each service, whether it’s on premise, or private or public cloud, it’s not as daunting a task as many imagine. By formalising hybrid cloud security strategies that focus on user management, access controls and encryption, businesses can ensure that all channels coexist securely.
It’s also worth noting that utilising the cloud and outsourcing certain IT services to cloud vendors can have a positive impact on security. Indeed, providers must comply with stringent regulations and meet standards that are designed to keep data held externally as safe as possible. Moreover, hybrid telephony gives businesses the option to store data on premise or in the cloud, depending on business requirements. For example, if important data used for audit requirements, such as call recordings, is saved in the cloud, then even if a company loses physical machines or equipment, this data remains safe and easily retrievable.
A hybrid telephony environment does have its challenges, but the flexibility and scalability of the cloud can simplify business communications, save organisations time and money, improve security measures, and leave room for future growth. Ultimately, it’s a transitional time within the telephony industry and the hybrid telephony model will likely be a stepping stone to a fully cloud-based system.
OpenStack has been around just under six years. In that relatively short period of time, it has gained a lot of traction and consistently grown in popularity with deployments in production reaching 65 per cent in 2016 compared to just 32 per cent in 2013, as reported in the OpenStack Foundation’s annual User Survey. The large number of companies adopting OpenStack, including Fortune 100 companies and many well-known U.S. giants such as eBay, Best Buy, Comcast, Disney and Walmart, further drives the technology’s reputation and growing popularity.
The reasons for adoption vary from standardizing on an open platform to deploying cloud-native products and services faster, but OpenStack’s attractiveness is easy to nail down. Because OpenStack is open source, it provides you with the flexibility to explore an orchestrated public or private cloud with little risk. It also reduces time spent on infrastructure maintenance, so that you can focus on business innovation, and eliminates costs associated with proprietary software, including initial purchase, licensing fees and support renewals.
There’s also the freedom to control large pools of compute, storage and networking resources throughout the datacenter so you’re no longer locked into a particular vendor, allowing you to develop your own interfaces to meet your organisation’s individual needs. Overall, OpenStack creates a much more dynamic and flexible tool for the business to use. It can, for instance, be used as a basis for public or private clouds and can power a massive, geographically distributed cloud.
But despite OpenStack’s significant popularity, its deployment has never been easy. In fact, implementations can be complicated, especially if the people you are working with don’t have the right skills, or if ecosystem vendors cannot easily integrate with your datacenter infrastructure to extract the most value from OpenStack solutions.
As one user survey respondent said, “There’s a lot of great things in OpenStack, but it’s a Homercar…It’s designed for consultants and implementation specialists who speak OpenStack 24x7.” And another user responded, “It requires certain skills and knowledge to use and adjust a business.”
Hiring highly skilled and often hard-to-find specialists, or alternatively making substantial investments in consultants to help you achieve your business goals, is one way of addressing this complexity. When planning to implement OpenStack, this has become the go-to approach within the industry. But ironically, success is not about expensive expertise; rather, it’s about keeping things simple. Simply put, both OpenStack vendors and organisations aiming to deploy OpenStack need to go back to basics.
For a successful and easy-to-manage implementation, this is a fundamental tenet. In practice it means removing unnecessary complexities ahead of implementation and focusing on the basics. These basics form the foundation of a solid yet simple storage backend, alongside compute power and infrastructure, to support an OpenStack cloud.
For instance, an all-flash, scale-out system is the best block storage option for OpenStack and can readily meet the demands of production-ready cloud environments. It’s important to get OpenStack storage right. You need to be able to scale non-disruptively, grow block storage resources without impact, guarantee performance to all tenants and applications in your cloud, and ensure all cloud data is available and protected against loss and corruption.
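For example, growing a tenant's block storage non-disruptively might look like this with the openstacksdk Python client – a sketch in which the cloud name, volume name and target size are assumptions, and the backend is assumed to support online extension:

```python
# Sketch: grow an OpenStack (Cinder) block volume without disruption,
# using openstacksdk. Cloud and volume names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials from clouds.yaml

volume = conn.block_storage.find_volume("tenant-data")

# Extend the volume to 200 GB; a capable backend does this online,
# without taking the volume away from the tenant.
conn.block_storage.extend_volume(volume, size=200)
```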
Once a solid foundation is in place, you can translate this simplicity into operations, maintenance and the ability to customise an environment with the use of APIs. For instance, you can focus up the stack on application deployment and business innovation.
It’s worth emphasising the need to ensure a clear path for scaling and growth is outlined for the future. This will help you avoid disruptive migrations or the need to rebalance resource pools further down the line.
It’s also important that you have a full understanding of the OpenStack vendor’s sales model. What you need is a simple and transparent model so you’re not going to be hit by any nasty surprises such as unexpected licensing and add-on fees, ensuring that your investment is protected down the line.
You may find that this back-to-basics approach is initially resource-heavy, but it will ultimately pay off: if you succeed, you’ll have an OpenStack cloud that is simple to operate, easy to use and provides all the benefits you’d expect of OpenStack.
You’ll also be set to evolve to a next-generation automated and software-defined data centre because an OpenStack-powered cloud orchestrates the entire data centre.
This automation takes place without the need for physical intervention and covers tasks such as installing an operating system, configuring a server, provisioning volumes on storage, deploying code and system updates, provisioning users and so on.
This level of automation allows for the deployment of infrastructure on demand and for processes to become more consistent, repeatable and predictable.
But orchestration builds on automation by bringing a series of automated tasks into a single workflow. It’s important to note that automating myriad tasks doesn’t mean orchestrating a cloud. Orchestrating, in its truest sense, means evolving automated tasks into fully automated workflows. These automated workflows enable the deployment of an entire infrastructure on-demand while the processes are consistent and repeatable.
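As a rough illustration of the distinction, using the openstacksdk Python client (the cloud, image, flavor and network names below are placeholders): each call is a single automated task, and chaining them into one repeatable function is the beginning of an orchestrated workflow:

```python
# Sketch: individual automated tasks (network, storage, compute) composed
# into a single repeatable workflow - the essence of orchestration.
# All names, sizes, images and flavors are placeholders.
import openstack

def deploy_app_stack(name: str):
    conn = openstack.connect(cloud="mycloud")  # from clouds.yaml

    # Automated task 1: networking
    net = conn.network.create_network(name=f"{name}-net")
    conn.network.create_subnet(network_id=net.id, ip_version=4,
                               cidr="10.0.0.0/24", name=f"{name}-subnet")

    # Automated task 2: block storage
    conn.block_storage.create_volume(size=10, name=f"{name}-data")

    # Automated task 3: compute, wired to the network created above
    image = conn.compute.find_image("ubuntu-22.04")
    flavor = conn.compute.find_flavor("m1.small")
    server = conn.compute.create_server(name=name, image_id=image.id,
                                        flavor_id=flavor.id,
                                        networks=[{"uuid": net.id}])
    # The workflow is only complete when the server is actually usable.
    return conn.compute.wait_for_server(server)

deploy_app_stack("demo")
```

A full orchestrator (OpenStack Heat, for instance) adds dependency tracking, rollback and repeatability on top of exactly this kind of task chain.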
Follow these back-to-basics guidelines and you’ll have a next-generation data centre orchestrated by OpenStack and all the cost savings, flexibility and operational benefits it delivers.
One of the best practices for secure development is dynamic analysis.
Among the various dynamic analysis techniques, fuzzing has been highly popular since its invention and a multitude of fuzzing tools of varying sophistication have been developed.
By Emily Ratliff, Senior Director of Infrastructure Security for the Core Infrastructure Initiative at The Linux Foundation.
In the context of fuzzing a single application, it can be quite straightforward to spot and rectify flaws in a system. But what if a developer is leading a large project which does not lend itself to being fuzzed easily? In this case, dynamic analysis must be approached in a far more considered fashion.
Deciding on the goals is a fundamental first step. Fuzzing might be used to screen for security issues, or perhaps all types of correctness issues. One of fuzzing’s many benefits is that it detects many low severity issues which may never be encountered in normal use. These may look exactly like security vulnerabilities with the only difference being that no trust boundary is actually being crossed.
For example, fuzzing a tool that only ever expects input to come from the output of a trusted tool may cause crashes that would never be encountered through normal usage. Are there other ways to get corrupt input into the tool? If so, there is a security vulnerability. If not, it may be a ‘low priority’ correctness issue that is never resolved – unless the goals clearly define that all issues will be addressed, rather than only those relating to security. Setting expectations for dynamic analysis up front can save a lot of time and frustration.
Documenting and understanding exactly where the error checking should occur is vital to the efficiency of the process. The most seasoned security professionals find it easy to create a mental model of strong security whereby every function defensively checks every input. However, in the real world it is more complex than that: this type of hypervigilance is hugely wasteful and therefore never survives in production.
By putting in the extra work to establish a correct mental model of the security boundaries for the project, the analysis team will better understand where in the program control flow checks should be made and where they can be omitted, ensuring a more efficient use of available time.
Different ‘fuzzers’ have different specialities. Therefore, the project should be segmented into areas for the different types of fuzzers based on the interface – file, network, API.
New fuzzing tools are being developed all the time and the existing tools are being updated with new capabilities. By staying up to date with these developments, it is possible to unearth new efficiency gains that can make a fuzzing process run that bit smoother – even if it is only helping within a subset of the project.
A situation may arise whereby the requirement is to perform dynamic analysis on a large mixed-language project which does not lend itself to existing tools. In this instance, it is worth considering writing a fuzzer specific to the project’s APIs and generating random inputs based on them – adding lots of assertions that are at least enabled during fuzzing. Creating a fuzzer is relatively straightforward if it is clear what the API is and how it can be introspected – take a random number generator, set up an isolated container or VM and go.
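As a minimal sketch of such a home-grown fuzzer – the target command is hypothetical, and a real harness would add coverage feedback, corpus management and crash triage:

```python
# Sketch: a bare-bones random-input fuzzer for a command-line target.
# The target program is a placeholder for your project's entry point.
import os
import random
import subprocess

TARGET = "./target-tool"  # hypothetical program under test

for i in range(10_000):
    # Generate a random input between 1 byte and 4 KB.
    data = bytes(random.randrange(256) for _ in range(random.randrange(1, 4096)))
    with open("fuzz-input.bin", "wb") as f:
        f.write(data)

    try:
        result = subprocess.run([TARGET, "fuzz-input.bin"],
                                capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        os.rename("fuzz-input.bin", f"hang-{i}.bin")  # keep hang-inducing input
        continue

    # A negative return code means the process died on a signal (e.g. SIGSEGV).
    if result.returncode < 0:
        os.rename("fuzz-input.bin", f"crash-{i}.bin")  # keep crashing input
        print(f"crash on iteration {i}, signal {-result.returncode}")
```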
A common critique of fuzzing tools is that after they have been run for a while, they stop finding bugs. But the obvious flaw in this critique is that not finding bugs is exactly what you want to achieve. Just as it would be illogical to throw away an automated test suite because it finds so few regressions, the same rationale applies to fuzzing. If fuzzing tools are no longer finding bugs, it should be considered a success – before moving on to find the more elusive threats.
The most time-consuming part of the fuzzing process is finding the best tool to run. If a tool works with the project, or even covers a subset of it, then it is simply a case of running it. As the tool picks out issues in the system, decisions about whether to fix the low-priority glitches will be made along the way and will shape the definition of the project’s trust boundaries. There may be a crash for which a patch is generated and submitted, only for it to be rejected because the incorrect input generated by the fuzzer can never reach that part of the project, making the added check too expensive to justify.
Whichever approach is taken, make sure it is well established in written processes, so that future developers can go above and beyond it.