This month, readers will be delighted to know that I’ve parked my own opinions and ideas, as the Green Grid news below reached me just as we were going to press. It sums up the present situation about as elegantly and succinctly as it can be put: plenty of organisations need to understand that they can no longer rely on doing what they’ve always done when it comes to sourcing and running their IT infrastructure, while many others are discovering data centre solutions that address the growing trends of high density, energy efficiency and high-performance compute, networks and storage, as well as the increasing interest in open technologies.
Enterprise server rooms will be unable to deliver the compute power and IT energy efficiency required to meet the demands of fluctuating technology trends, driving higher uptake of hyperscale cloud and colocation facilities. Citing the latest IDC research, which predicts an accelerating fall in the number of server rooms globally, Roel Castelein, Customer Services Director at The Green Grid, argues that legacy server rooms are failing to keep pace with new workload types, causing organisations to seek alternative solutions.
“It wasn’t too long ago that the main data exchanges going through a server room were email and file-storage processes, where 2-5kW racks were often sufficient. But as technology has grown, so have the pressures and demands placed on the data centre. Now, we’re seeing data centres equipped with 10-12kW racks to better cater for modern-day requirements, with legacy data centres falling further behind.
“IoT, social media, and the number of personal devices now accessing data are just a handful of the factors pushing up demand for compute power and energy consumption, which is placing further pressure on the legacy server rooms used within the enterprise. As a result, more organisations are shifting to cloud-based services, dominated by the likes of Google and Microsoft, as well as to colo facilities. This trend is not only reducing carbon footprints, but also guarantees that the environments organisations are buying into are both energy efficient and equipped for higher server processing.”
IDC’s latest report, ‘Worldwide Datacenter Census and Construction 2014-2018 Forecast: Aging Enterprise Datacenters and the Accelerating Service Provider Buildout’, claims that while the industry is at a record high of 8.6 million data centre facilities, the number of server rooms will fall significantly after this year. This is due to the growth and popularity of public cloud services, dominated by the large hyperscalers including AWS, Azure and Google, with the number of hyperscale data centres expected to reach 400 globally by the end of 2018.
Roel continues: “While server rooms are declining, this won’t affect the data centre industry as a whole. The research identified that data centre square footage is expected to grow to 1.94bn square feet, up from 1.58bn in 2013. And with hyperscale and colo facilities offering new services in the form of high-performance compute (HPC) and the Open Compute Project (OCP), more organisations will see the benefits of having more powerful, yet energy efficient, IT solutions that meet modern technology requirements.”
Greater transparency around sustainable energy practice among data industry players will help improve collaboration to tackle the industry’s rising carbon emissions. In Greenpeace’s 2017 green IT report, ‘Clicking Clean: Who is winning the race to build a green internet?’, many hyperscalers scored highly for their adoption of, and initiatives on, renewable energy, but other players in the industry were urged to improve advocacy and transparency, and to work more collaboratively.
Roel Castelein, Customer Services Director at The Green Grid, said: “The Greenpeace report is a good indicator that while there are definite movements towards a more sustainable data centre industry, many organisations have pursued individual goals rather than working together to share best practice and find the best routes to a sustainable future. Google, Facebook and Apple are constantly pushing the boundaries of green innovation, while also working closely with energy suppliers to help achieve sustainable company targets. Their ability to advocate such measures is beginning to influence the rest of the sector, yet more must be done.
“Netflix is one such hyperscaler: while it has one of the largest data footprints of all the companies profiled, it has been urged to increase its adoption of renewable energy and to advocate greater use of renewables across the data centre industry. As the video streaming market continues to grow and produce unprecedented amounts of data, Netflix, or an equally large provider, setting the standard on green policies would create a precedent for others to follow.”
Since 2012, the amount of electricity consumed by the IT sector has increased by six per cent (totalling 21% over the past five years), making the need for a green data centre industry stronger than ever before. With global internet traffic anticipated to triple by 2020, advocacy of renewable energy for data centres will be important in sustaining this growth.
“The growth in the amount of data demands that all data centre providers come together, rather than working in silos, and be clear about their use of renewable energy in creating a more sustainable industry. Whether it’s meeting government sustainability objectives, using renewable energy as a secondary source, or pushing for stronger connections with energy suppliers, it can all contribute to enhanced efforts in tackling carbon emissions.”
Roel continued: “The need for data centre providers and end users to collaborate to ensure our use of data is sustainable has never been greater. Organisations like The Green Grid are providing the space for this to happen and are developing a range of tools to make sure that our growing dependency on technology is sustainable.”
Hybrid IT intensifies demand for comprehensive managed, consulting and professional services, finds Frost & Sullivan’s Digital Transformation team.
The increasing maturity and customer awareness of cloud services in Europe is impelling a phased migration from premises-based data centres to a cloud environment. With this shift, public cloud providers such as AWS and Google have identified a large market for Infrastructure-as-a-Service (IaaS) solutions. The convergence of cloud services with emerging applications such as Big Data and the Internet of Things (IoT) creates further opportunities for growth, encouraging innovation in infrastructure and platforms.
European Infrastructure-as-a-Service Market, Forecast to 2021, the new analysis from Frost & Sullivan’s IT Services & Applications Growth Partnership Service program, analyses the emerging trends, competitive factors and provider strategies in the European IaaS market. Western Europe, specifically Germany, the UK and Benelux, leads the market; in due course, Eastern Europe will also emerge as an influential region.
As the pace of migration of each enterprise depends on its size, type and regional presence, IaaS providers are recognising that a one-size-fits-all solution is not ideal. They are, therefore, developing nuances within their portfolios to meet each customer's unique requirements.
“The varying cloud-readiness of enterprises has fostered a market for hybrid IT, and service providers are tailoring their portfolios to meet this enterprise requirement,” said Digital Transformation Research Analyst Shuba Ramkumar. “Astutely, they are seeking to cement long-term customer relations in this emerging market by offering managed, consulting and professional services to guide customers through the transition period.”
The biggest challenge for IaaS providers in Europe is the regional nature of the market, which compels them to adopt indirect channel strategies to expand their presence across European countries. Partnerships with local providers will give them a stronger foothold in European countries where strict security regulations require data to be housed within national borders.
“Meanwhile, enterprises’ increased familiarity with cloud services and recognition of their benefits will reduce the impact of adoption deterrents such as security,” noted Ramkumar. “The ability of IaaS to improve performance, data migration and management, as well as enhance business agility, will eventually attract investment from data centres of all sizes across Europe.”
By 2018, half of enterprise architecture (EA) business architecture initiatives will focus on defining and enabling digital business platform strategies, according to Gartner, Inc.
"We've always said that business architecture is a required and integral part of EA efforts," said Betsy Burton, vice president and distinguished analyst at Gartner. "The increasing focus of EA practitioners and CIOs on their business ecosystems will drive organizations further toward supporting and integrating business architecture. This is to ensure that investments support a business ecosystem strategy that involves customers, partners, organizations and technology."
The results of Gartner's annual global CIO survey support this development. The responses show that, of CIOs in organizations participating in a digital ecosystem (n = 841), on average, the number of ecosystem partners they had two years ago was 22. Today, it is 42, and two years from now, Gartner estimates that it will have risen to 86. In other words, CIOs in organizations participating in a digital ecosystem are seeing, and expecting to see, their digital ecosystem partners increase by approximately 100 percent every two years.
"EA practitioners must focus their business architecture efforts on defining their business strategy, which includes outlining their digital business platform's strategy*, particularly relative to a platform business model," added Ms. Burton. In addition, EA practitioners will increasingly focus on the business and technology opportunities and challenges by integrating with another organizations' digital platforms and/or by defining their own innovative digital platforms.
Digital innovation continues to evolve, and EA needs to evolve continuously to keep pace. Building on the base of business-outcome-driven EA, which emphasizes the business and the execution of the business, enterprise architects are increasingly focusing on the design side of architecture, which is at the forefront of digital innovation.
Gartner predicts that by 2018, 40 percent of enterprise architects will focus on design-driven architecture. "It allows organizations to understand the ecosystem and its actors, gaining insight into them and their behavior and developing and evolving the services they need," said Marcus Blosch, research vice president at Gartner. "Many leading platform companies, such as Airbnb and Dropbox, use design-driven approaches such as 'design thinking' to build and evolve their platforms. Going forward, design-driven and business-outcome-driven approaches are set to define leading EA practice."
However, the move to design-driven architecture has implications for people, tools and services. "We recommend that enterprise architects develop the design knowledge, skills and competencies of the EA team," concluded Mr. Blosch. "They also need to educate the business on design-driven architecture and identify an area where they can start with a design-driven architect, to not only develop innovation but also to learn, as an organization, how to do design."
A new update to the Worldwide Semiannual Security Spending Guide from International Data Corporation (IDC) forecasts worldwide revenues for security-related hardware, software, and services will reach $81.7 billion in 2017, an increase of 8.2% over 2016. Global spending on security solutions is expected to accelerate slightly over the next several years, achieving a compound annual growth rate (CAGR) of 8.7% through 2020 when revenues will be nearly $105 billion.
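As a quick arithmetic check of these figures (a sketch using only the numbers quoted above, not anything taken from the IDC guide itself), compounding the 2017 total at the stated CAGR lands on the 2020 projection:

# Illustrative only: verifying that an 8.7% CAGR applied to the quoted 2017
# total reproduces the "nearly $105 billion" figure for 2020.
base_2017 = 81.7          # worldwide security revenue in 2017, $ billions (quoted above)
cagr = 0.087              # stated compound annual growth rate through 2020
years = 3                 # 2017 -> 2020
projected_2020 = base_2017 * (1 + cagr) ** years
print(round(projected_2020, 1))   # ~104.9, i.e. nearly $105 billion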
"The rapid growth of digital transformation is putting pressures on companies across all industries to proactively invest in security to protect themselves against known and unknown threats," said Eileen Smith, program director, Customer Insights and Analysis. "On a global basis, the banking, discrete manufacturing, and federal/central government industries will spend the most on security hardware, software, and services throughout the 2015-2020 forecast. Combined, these three industries will deliver more than 30% of the worldwide total in 2017."
In addition to the banking, discrete manufacturing, and federal/central government industries, three other industries (process manufacturing, professional services, and telecommunications) will each spend more than $5 billion on security products this year. These will remain the six largest industries for security-related spending throughout the forecast period, while a robust CAGR of 11.2% will enable telecommunications to move into the number 5 position in 2018. Following telecommunications, the industries with the next fastest five-year CAGRs are state/local government (10.2%), healthcare (9.8%), utilities (9.7%), and banking (9.5%).
Services will be the largest area of security-related spending throughout the forecast, led by three of the five largest technology categories: managed security services, integration services, and consulting services. Together, companies will spend nearly $31.2 billion, more than 38% of the worldwide total, on these three categories in 2017. Network security (hardware and software combined) will be the largest category of security-related spending in 2017 at $15.2 billion, while endpoint security software will be the third largest category at $10.2 billion. The technology categories that will see the fastest spending growth over the 2015-2020 forecast period are device vulnerability assessment software (16.0% CAGR), software vulnerability assessment (14.5% CAGR), managed security services (12.2% CAGR), user behavioral analytics (12.2% CAGR), and UTM hardware (11.9% CAGR).
From a geographic perspective, the United States will be the largest market for security products throughout the forecast. In 2017, the U.S. is forecast to see $36.9 billion in security-related investments. Western Europe will be the second largest market with spending of nearly $19.2 billion this year, followed by the Asia/Pacific (excluding Japan) region. Asia/Pacific (excluding Japan) will be the fastest growing region with a CAGR of 18.5% over the 2015-2020 forecast period, followed by the Middle East & Africa (MEA) (9.2% CAGR) and Western Europe (8.0% CAGR).
"European organizations show a strong focus on security matters with data, cloud, and mobile security being the top three security concerns. In this context, GDPR will drive up compliance-related projects significantly in 2017 and 2018, until organizations have found a cost-efficient and scalable way of dealing with data," said Angela Vacca, senior research manager, Customer Insights and Analysis. "In particular, Western European utilities, professional services, and healthcare institutions will increase their security spending the most while the banking industry remains the largest market."
From a company size perspective, large and very large businesses (those with more than 500 employees) will be responsible for roughly two thirds of all security-related spending throughout the forecast. IDC also expects very large businesses (more than 1,000 employees) to pass the $50 billion spending level in 2019. Small and medium businesses (SMBs) will also be a significant contributor to security-related spending, with the remaining one third of worldwide revenues coming from companies with fewer than 500 employees.
A new update to the Worldwide Semiannual Big Data and Analytics Spending Guide from International Data Corporation (IDC) forecasts that Western European revenues for Big Data and business analytics (BDA) will reach $34.1 billion in 2017, an increase of 10.4% over 2016. Commercial purchases of BDA-related hardware, software, and services are expected to maintain a compound annual growth rate (CAGR) of 9.2% through 2020 when revenues will be more than $43 billion.
"Digital disruption is forcing many organizations to reevaluate their information needs, as the ability to react with greater speed and efficiency becomes critical for competitive businesses," said Helena Schwenk, research manager, Big Data and Analytics, IDC. "European organizations currently active in Big Data programs are now focusing on scaling up these efforts and propagating use as they seek to learn and internalize best practices. The shift toward cloud deployments, greater levels of automation, and lower-cost storage and data processing platforms are helping to reduce the barriers to driving value and impact from Big Data at scale."
Banking, discrete manufacturing, and process manufacturing are the three largest industries to invest in Big Data and analytics solutions over the forecast period, and by 2020 will account for more than a third of total IT spending on BDA solutions. Overall, the financial sector and manufacturing vie with each other for the largest share of spending, with finance just edging out manufacturing, accounting for 21.5% of spending on BDA solutions compared with manufacturing's 21.2%. However, the industries that will show the highest growth over the forecast period are professional services, telecommunications, utilities, and retail.
Western Europe lags the worldwide market in overall growth, with a CAGR of 9.2% for the region, while worldwide spending will grow at a CAGR of 11.9%. The highest growth is in Latin America, while the largest regional market is the U.S. with more than half of the world's IT investment in Big Data and analytics solutions.
"The investments in the finance sector — banking, insurance, and securities and investment services — apply across a wide range of use cases within the industry," said Mike Glennon, associate vice president, Customer Insights and Analysis, IDC. "Examples include optimizing and enhancing the customer journey for these institutions, together with fraud detection and risk management, and these use cases drive investment in the industry. However, the strong manufacturing base in Western Europe will also invest in Big Data and analytics solutions for more effective logistics management and enhanced analysis of operations related data, both of which contribute significantly to improved cost management, and hence profitability."
He added that adoption of Big Data solutions lags that of other 3rd Platform technologies such as social media, public cloud, and mobility, so the opportunity for accelerated investment is great across all industries.
BDA technology investments will be led by IT and business services, which together will account for half of all Big Data and business analytics revenue in 2017 and throughout the forecast. Software investments will grow to more than $17 billion in 2020, led by purchases of end-user query, reporting, and analysis tools and data warehouse management tools.
Cognitive software platforms and non-relational analytic data stores will experience strong growth (CAGRs of 39.8% and 38.6% respectively) as companies expand their Big Data and analytic activities. BDA-related purchases of servers and storage will grow at a CAGR of 12.4%, reaching $4.4 billion in 2020.
Very large businesses (those with more than 1,000 employees) will be responsible for more than 60% of all BDA spending throughout the forecast and IDC expects this group of companies to pass the $25 billion level by 2020. IT spending on Big Data and analytics solutions by businesses with fewer than 10 employees is expected to be below 1% of the total, even though these businesses account for over 90% of all businesses in Western Europe. These businesses need expertise and time to evaluate and adopt Big Data solutions and will rely heavily on solution providers to guide them through implementation of this technology.
From watching movies to reading books, it’s impossible to think of an aspect of our lives that has not been affected by technology. In business, even the most established sectors are adapting.
By Keith Tilley, Executive Vice President & Vice-Chair, Sungard Availability Services.
The banking sector, for example, is undergoing severe disruption due to the rise of plucky fintech start-ups. The UK has recently been hailed as number one in the world for supporting innovation in this area – a positive accolade, but one can’t help but think of the pressure this is placing upon the IT department to keep up with the pace of digital transformation.
For the business, IT holds immeasurable power: offering a competitive advantage, enabling growth and playing a vital role in the market strategy. However, with so much to do, IT is becoming something of a complex beast.
Digital tools are vital for business growth today, from attracting the brightest talent[1] through to entering new markets[2]. This power has not gone unnoticed: in research recently undertaken by Sungard Availability Services, 79% of IT decision makers (ITDMs) stated that digital transformation is vital to remaining competitive.
In tandem, employees are placing growing importance on the power of digital technology, believing it will improve productivity, allow them to develop new skills, and make their jobs easier. With UK businesses facing ongoing market uncertainty following the EU referendum and subsequent vote for Brexit, could this digital revolution be the key to futureproofing organisations during these turbulent times?
The pressure is certainly on for the IT department, and 50% of IT decision makers fear that they cannot drive digital transformation forward at the speed their management team expects. Combine this with the fact that 32% of employees also believe their employers are not driving digital transformation as fast as competitors are doing, and you have the ingredients for a disaster – commercially speaking.
When too much pressure is heaped on IT, the department struggles to deliver the best quality service to end users – impeding businesses from innovating to remain competitive in their fields.
Unsurprisingly, this affects more than just the IT department and is having a knock-on effect upon the whole organisation. Nearly a third of employees (30%) confessed that new digital tools are making their jobs more difficult, while 31% said it made their roles more stressful.
As the demand for digital continues, reining in disruptive IT has never been more critical – and those who fail to do so will risk everything from staff retention[3] and customer satisfaction, through to their very survival.
This pace of change will not abate. With this in mind, here are some top tips for a more manageable IT estate:
Seek out the right skills: Bringing in the right talent – inside and outside of the IT department – who can help drive your digital culture forward will be vital. Remember that soft skills are just as important as technical ability.
Communicate clearly: Keeping a clear and open dialogue with the wider business will not only help the IT department understand exactly where business priorities lie, but will help prevent employee and management expectations from getting out of hand.
Secure adequate resources: Changing a company’s culture understandably requires investment. Communicating the benefits associated with digital transformation – and investing in training for those who need it – is crucial to ensure that the wider business both pays its fair share of the associated costs, and receives due positive outcomes too.
Don’t go it alone: Turning to colleagues and tech champions outside the IT department can help drive change. Additionally, partnering with an appropriately experienced managed services provider can allow the IT department to focus on delivering new, innovative services, rather than getting waylaid by small, fiddly system maintenance tasks, or overwhelmed by the changes undertaken.
Perhaps one of the biggest mistakes lies with the connotations associated with the term ‘digital transformation’. Many have made the mistake of thinking it should be an all-encompassing overhaul, but a revolution doesn’t come out of nowhere; it is built one step at a time. By aligning IT to business outcomes, you can then use it to create new opportunities, establish better working practices, and ultimately improve the competitive strength of your business.
[1] 35% of UK businesses consider digital success vital to attracting graduate talent
[2] 59% of UK IT Decision makers believe digital success results in increased business agility, with 43% stating revenue growth will result
[3] 34% of UK employee respondents said they would leave their current organisation if they were offered a role at a more digitally progressive company
Luckily, things have moved on in the last 10 years around Big Data, but there’s still an awful lot of confusion and frustration when it comes to analytics. By Bob Plumridge, Director and Treasurer, SNIA Europe.
This is not a unique experience. Companies struggle to exploit Big Data - partly because they don’t know how to overcome the technical challenges, and partly because they do not know how to approach Big Data analytics. The most common problem is data complexity. Often, this is self-inflicted, as companies starting out with Big Data analytics try to “boil the ocean.” As a result, IT teams become overwhelmed and the task proves impossible. It is true that data analytics can deliver important business insights, but it’s not a solution for every corporate problem or opportunity.
Complexity can also be a symptom of another problem, with some companies struggling to extract data from a hotchpotch of legacy technologies. The reality is, many companies will be tied to legacy technologies for years to come, and they need to find a way to work within this context, and not try to escape it, as they will most likely fail.
Another source of trouble is setting wrong or poorly planned business objectives. This can result in people asking the wrong questions and interrogating non-traditional data sets through traditional means. Take Google Flu Trends, an initiative launched by Google to predict flu epidemics. It made the mistake of asking: “When will the next flu epidemic hit North America?” When the data was analysed, it was discovered that Google Flu Trends had missed the 2009 US epidemic and consistently over-predicted flu trends, and the initiative was abandoned in 2013. An academic later speculated that if the researchers had asked “what do the frequency and number of Google search terms tell us?”, the project might have proved more successful.
The renowned American poet, Henry Wadsworth Longfellow, once wrote: “In character, in manner, in style, in all things, the supreme excellence is simplicity”. Too often, people associate simplicity with a lack of ambition and accomplishment. In fact, it’s the key to unlocking a great deal of power in business. Steve Jobs once said you can move mountains with ‘simple’.
Over the years, technology has progressed by getting simpler rather than more complex. However, this doesn’t mean the back-end isn’t complicated. Rather, a huge amount of work goes into creating an intuitive user experience. Consider Microsoft Word: every time you type, transistors switch on or off and voltage changes take place across the computer and its storage media. You only see the document, but a lot of technical wizardry is happening in the background.
Extracting meaningful value from data depends on three disciplines: data engineering, business knowledge and data visualisation. To achieve all three, you need a team of superhumans who can code in their sleep, have a nose for business and an expansive knowledge of their industry and adjacent industries, are supreme mathematical geniuses, and possess excellent management and communication skills. Or, you have technology that can abstract away these challenges and create a platform layer which does most of the computation in the background.
However, there is a caveat. Even if you eschew complexity and embrace a simplified data platform, you still need data savvy people. These data scientists won’t have to train for three years to memorise the finer points of Hadoop, but they will need to understand Big Data challenges.
There are companies which provide the method and point businesses in the right direction, but businesses still need to uncover what questions to ask and what kind of answers to expect. How businesses can equip themselves with the right skills for the job is an extremely important issue to consider.
While Big Data projects may stall, or fail for any of the above reasons, we are starting to see more “succeed and transform” businesses, mainly thanks to the huge strides in stripping out complexity in the front-end through layer technology.
Let’s take the Financial Industry Regulatory Authority, Inc. (FINRA), a private self-regulatory organisation (SRO) and the largest independent regulator for all US-based securities firms. Thanks to the methods I was referencing earlier, the financial watchdog has been able to find the right ‘needles’ in its growing data ‘haystack’. Analysts can now access any data in FINRA’s multi-petabyte data lake to identify trading violations in an automated fashion, making the process 10 to 100 times faster: the difference between seconds and hours.
FINRA achieved simplicity and more control of its data as a result. It ordered brokerages to return an estimated €90 million in funds obtained through misconduct during 2015, nearly three times the 2014 total.
Big Data projects don’t have to confound and confuse. They can bring breakthrough lightbulb moments, provided they’re grounded in simplicity. Let the technology do the difficult stuff – in all else, keep it simple.
One of the most important events in the datacenter industry calendar, the Open Compute Project (OCP) Summit, took place in Santa Clara in March 2017. Jeffrey Fidacaro of 451 Research was there and gives us his take on OCP adoption, as well as the hot topics and news from the summit.
The value proposition of OCP-based designs – lower cost, highly efficient, interoperable, scalable – appears to be well understood by the industry even with some healthy scepticism around the magnitude of savings announced by some hyperscalers.
However, despite the maturing hardware ecosystem and the known benefits of open compute, many vendors are still waiting for signs of an anticipated wave of non-hyperscale adoption. We believe this is primarily because of a lack of maturity around the procurement, testing and certification, and support functions that firms are accustomed to with traditional (non-OCP) hardware.
There are some systems integrators and others that are stepping up to fulfil these functions, and to help with integration, but there is more work ahead, before enterprises and other non-hyperscale buyers can confidently overcome their caution. There is clearly a high level of enterprise interest in OCP, and supplier and service provider momentum. We believe broader adoption is a question of when – not if.
The OCP community is working hard on many levels to address these challenges. Most visibly, it has launched an online OCP Marketplace, where users can search for OCP hardware and find where to order it.
Major hyperscale announcements at the OCP event included Microsoft's support of ARM server processors for Azure cloud OCP servers, and Facebook's OCP server portfolio refresh that includes a new server type. Intel announced a collaboration with small software supplier Virtual Power Systems (VPS) to contribute software-defined power technology to the community. Equinix, the largest colocation provider by revenue, announced broad OCP support, while some of the leading OCP hardware makers discussed their plans to drive greater OCP adoption. This year's event also focused on the open software stack, including open network switch software and new software-defined storage announcements from NetApp and IBM.
Since its inception in 2011, the OCP ecosystem has grown to 198 official members. OCP hardware deployments, however, have been primarily in hyperscale environments – Facebook, Microsoft and Google – and a handful of large financial institutions. The maturity of OCP hardware and vendor support was evident at the event, with a vendor show floor that was easily twice the size of last year's.
At this year's Summit, Facebook and Microsoft made significant announcements, while Google was notably quiet (and absent from the keynote roster). Google joined the OCP a year ago, and has since shared its 48V rack designs with the community – a higher voltage than the 'traditional' 12V OCP designs.
A focus of the 2017 Summit was on networking components, as well as the software stacks running OCP gear. A panel discussion highlighted the need for greater interoperability between, for example, network operating systems, software-defined storage, OpenStack and Linux.
One of the major announcements at the Summit was Microsoft's support of ARM-based servers integrating chips from Cavium and Qualcomm (and others) into its OCP-designed servers. While ARM-based servers have been around for some time (with little traction), support by Microsoft presents a significant threat to Intel's monopoly in the world of server processors. Microsoft also announced it will integrate AMD's new x86 server chip, Naples, which is shipping in 2Q 2017. (Look for a separate 451 Research report that delves deeper into the ARM and AMD, versus Intel, server chip battle.)
Microsoft ported its Windows Server operating system to run on 64-bit ARM processors, and is strategically testing the ARM servers for specific (not all) workloads – primarily for cloud services. This includes its Bing search engine, big-data analytics, storage and machine learning. This is significant, because Microsoft claims that these workloads make up nearly half of its cloud datacenter capacity. Microsoft noted that the ARM-based version of Windows Server will not be available externally.
The ARM-based servers are part of Microsoft's Project Olympus platform (first announced in November 2016), which includes OCP designs for a universal motherboard, a universal rack power distribution unit, power supply and batteries, and rack management card. Project Olympus is a new development model whereby designs are 50% complete by intention, and shared with the OCP community for collaboration and to speed innovation. Intel and AMD are also working with Microsoft to have their newest processors (respectively, Skylake and Naples) included as part of the Project Olympus specifications.
At the Summit, Facebook unveiled a full refresh of its OCP portfolio, introduced a seventh OCP server type, and updated its software stack. The new Type VIII server combines two systems – the Tioga Pass dual-socket server and the Lightning storage (JBOF) – to maximize shared flash storage across servers. Its next-generation storage platform, Bryce Canyon, is designed for handling high-density storage (photos and videos) and can support up to 72 hard disk drives in a 4-OpenU chassis.
Facebook also updated its Big Sur GPU server to Big Basin using the latest generation GPU processors, and increased its memory from 12GB to 16GB, allowing it to train machine-learning models that are 30% larger compared to its predecessor. Other refreshes included the Yosemite v2 server (four single-socket compute nodes) and Wedge 100S top-of-rack network switch.
In the storage space, NetApp announced it is offering a software-only version of its ONTAP operating system, ONTAP Select, for use with OCP storage hardware in a private cloud. IBM also released its Spectrum Scale storage software for OCP. These enterprise storage operating systems are now unbundled, and bolster the software-defined storage stack that had been missing in OCP.
A new online OCP Marketplace was launched where products can be reviewed and sourced by the community. Last year, an incubation committee was established that developed two designations: OCP Accepted (full hardware design specifications that are contributed to the OCP community) and OCP Inspired (designs that hold true to an existing OCP specification).
The marketplace currently lists 70 OCP products that are ready to purchase. We believe this is a positive initial step in aggregating available OCP-based hardware in one location, but more curating and certification may be needed to resolve some of the enterprise challenges in OCP hardware procurement.
Schneider Electric, one of the leading datacenter technologies suppliers, and Microsoft announced their co-engineering of a universal rack power distribution unit (UPDU), a unique component of Project Olympus. The UPDU is based on a single PDU reference design along with multiple adapter options to accommodate varying alternating current standards and input power ratings (amps, phases and voltages) across different geographies, as well as different rack densities. The goal is to simplify global procurement, inventory management and deployment.
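To see why adapters for different amps, phases and voltages matter for rack density, here is a brief hypothetical illustration (the feed ratings below are examples for the sake of the arithmetic, not part of the Microsoft/Schneider Electric UPDU specification) of how much apparent power a PDU feed can deliver under different regional AC standards:

import math

def feed_capacity_kva(volts, amps, phases):
    # Apparent power available from a PDU feed; a three-phase feed applies the
    # square-root-of-three factor to the line-to-line voltage.
    if phases == 3:
        return math.sqrt(3) * volts * amps / 1000.0
    return volts * amps / 1000.0

# Hypothetical feeds for illustration only
print(round(feed_capacity_kva(230, 32, 1), 1))   # ~7.4 kVA: 230 V / 32 A single-phase (Europe)
print(round(feed_capacity_kva(400, 32, 3), 1))   # ~22.2 kVA: 400 V / 32 A three-phase (Europe)
print(round(feed_capacity_kva(208, 30, 3), 1))   # ~10.8 kVA: 208 V / 30 A three-phase (North America)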
Also at the datacenter facilities level was the announcement from Intel that it is working with VPS to develop software-defined power monitoring and management. We believe the broader datacenter industry will increasingly move toward the use of software-driven power management, enabling greater efficiencies and utilization.
This is not a new approach, but one that has been slow to take off – Intel's support may help to change that. Intel is integrating VPS's software with its Rack Scale Design open APIs to enable power availability where needed on-demand, as well as peak shaving among other functions. Intel and VPS have committed to contributing the specification to OCP.
The leading OCP server manufacturers (ODMs), Quanta Cloud Technology and Wiwynn, are assessing more integrated OCP rack offerings, in a bid to simplify the procurement process. But in our view, they still need to develop stronger channel and distributor relationships to get closer to resembling the traditional supply chain.
A number of systems integrators at the event shared a common strategic mandate to fulfill the intermediary role between enterprise and non-hyperscale buyers and the OCP supply chain. The availability of OCP-compliant space at colocation providers is likely to also be an enabler. Equinix, the largest colocation supplier by revenue, joined the OCP in January, including the OCP Telco Project that launched a year ago (and has since grown from 20 to over 100 participants).
At the Summit, Equinix announced that it would adopt OCP hardware at its International Business Exchange datacenters to support certain infrastructure services. Equinix discussed with 451 Research its broader intent to support – and in some cases, help facilitate via interoperability testing – OCP adoption among its customers and partners. This could include the top 10 cloud providers, plus hundreds of smaller cloud customers. Equinix's broader OCP strategy will be discussed in greater detail in a forthcoming report.
Several other colocation providers are also paving the way for OCP inside their facilities. Aegis Data made OCP-compliant space available in its HPC-designed datacenter outside of London and, along with Hyperscale IT (a hardware reseller and integrator) and DCPro Development (datacenter training), stood up OCP hardware and held an awareness course in February.
CS Squared, a datacenter consultant, opened an OCP lab in a Volta colo datacenter in London (mirroring Facebook's Disaggregated Lab in California). More recently, the Dutch colo Switch Datacenters announced a data hall in one of its Amsterdam facilities that is suitable for Open Rack systems.
In reality, the 'end to end' service offerings available today for the non-hyperscale buyers of OCP hardware, from procurement to maintenance and support, are still a work in progress. It is moving forward and evolving but, in our opinion, is still too onerous or complicated for most.
Further investment and a concerted effort from the ODMs, integrators, colos and others in the supply chain will be required. But once the issues around procurement, testing and support are fully resolved, it could be a tipping point for broader non-hyperscale OCP adoption.
Jeffrey Fidacaro is a Senior Analyst in the Datacenter Technologies and Eco-Efficient IT practices at 451 Research.
By Steve Hone, CEO
The DCA, Trade Association for the data centre sector
Throughout the year we invite DCA members to submit articles on a variety of subjects related to the data centre sector. Many highlight common challenges which data centre operators face on a day-to-day basis, and these thought leadership articles often provide details and awareness of possible solutions, helping the data centre sector move forward and overcome similar challenges.
In this month’s DCA journal we thought we would give members the opportunity to submit detailed customer/client case studies, providing examples of how innovative solutions have been applied and implemented.
Independent research has shown that the majority of us are fairly risk averse: we do not like change and are reluctant to be the first to introduce something new which may run the risk of backfiring or failing to deliver.
Now, I’m not saying that change is easy or without risk, far from it - nothing worth doing came easy, after all! However, change can be made easier and the risks dramatically reduced if you can be reassured that you are not the first to face a particular challenge, and that it has already been successfully solved by someone else who was in exactly the same position as you. That’s where ‘real life’ case studies come into their own and are worth their weight in gold.
Reading about other businesses that have already implemented what you are considering can be a real confidence boost. Far from imploding, these innovative businesses have come out the other side stronger and more prepared than ever for what lies ahead.
Last month I spent two days at DCW 2017 at ExCeL in London, and while there I took the opportunity to visit various DCA members, many of whom were also exhibiting. During the course of our conversations I raised the subject of case studies, and found that many members had case studies which they felt would be of value, but they also confessed that these were not always easy to find on their own websites.
Over the coming months, with the membership’s support, we intend to gather as many member case studies as possible to build a document library within the new DCA website (due out in the summer). The intention is to make these invaluable case studies provided by DCA members easier to find and refer to.
As always, a big thank you for all the contributions submitted this month. If you would like to participate in this case study initiative please contact the DCA.
Next month the theme will be Predictions and Forecasts (deadline for copy is 11th April). This is a broad title and a great opportunity for members to share their thoughts on what they feel lies ahead, the challenges we might need to overcome, or the innovations we can look forward to.
If you would like to submit an article please contact Amanda McFarlane. Amandam@datacentrealliance.org
Chatsworth Products – Customer Case Study
Basefarm, a leading, global IT hosting and colocation services provider, securely hosts more than 35,000 services and reaches over 40 million end users worldwide in industries ranging from finance and government, to media and travel. Headquartered in Oslo, Norway, Basefarm offers its customers advanced technology solutions, high-end cloud services, application management and colocation from its six data centres located throughout Europe.
In response to growing demand for its colocation services, Basefarm set out in March 2015 to design and build a green, state-of-the-art data centre—Basefarm Oslo 5.
“The brief for the new site was to create the most energy-efficient data centre in Oslo,” said Ketil Hjort Elgethun, Senior System Consultant, Basefarm. “Cooling was a key factor in the design, so we set out to find an airflow containment and cabinet package that could maximise the return on our cooling equipment, and provide a thermal solution for all the racks and cabinets within, whilst being flexible and easy to use.”
What Basefarm sought was a customised cabinet and containment solution, one that would allow it to rapidly respond to the future deployment of integrated cabinets, as well as accommodate a variety of cabinet sizes. With Chatsworth Products’ (CPI) Build To Spec (BTS) Kit Hot Aisle Containment (HAC) Solution and GF-Series GlobalFrame® Gen 2 Cabinet Systems, Basefarm found the optimal solution.
“The cabinet platform on which your enterprise is built is just as critical as the equipment it stores. Using a properly configured cabinet that is designed to fit your equipment and work with your data centre’s cooling system is crucial,” commented Magnus Lundberg, Regional Sales Manager, CPI.
Integral to the design of Basefarm Oslo 5 was a state-of-the-art, ‘Air-to-Air’ cooling system, designed to deliver high levels of cooling with exceptionally low power consumption.
This innovative cooling technology demanded a containment strategy that could effectively manage the separation of hot and cold airflow. The solution needed to be of the highest quality, easy to work with and flexible enough to accommodate a mix of cabinets and equipment from multiple suppliers.
HAC solutions isolate hot exhaust air from IT equipment and direct it back to the CRAC/CRAH through a vertical exhaust duct, which guides the hot exhaust air away from the cabinet to support a closed-return application, resulting in more efficient cooling units. This ability to isolate, redirect and recycle hot exhaust air was exactly what Basefarm was looking for in its new super-efficient data centre design.
Basefarm worked closely with CPI engineers to create a layout that included the BTS Kit, allowing the flexibility to field-fabricate ducts over the contained aisle. This was key to the ongoing needs of Basefarm Oslo 5 because the BTS Kit can be used over a mix of cabinets of different heights, widths and depths in the same row and can be ceiling- or cabinet-supported. BTS Kit also features an elevated, single-piece duct, allowing cabinets to be removed, omitted or replaced as required. With a high-quality Glacier White finish that reflects more light, a durable construction and a maintenance-free design, the BTS Kit has given Basefarm the security of enduring performance in building its new mission critical data centre.
Along with the BTS Kit, CPI supplied Basefarm with a range of GF-Series GlobalFrame Cabinets with Finger Cable Managers installed. The GF-Series GlobalFrame Cabinet System is an industry-standard server and network equipment storage solution that provides smarter airflow management. Also in a Glacier White finish, GlobalFrame Cabinets feature perforated areas on the doors that are 78 percent open to maximise airflow.
“During the design phase, we considered many different options for cabinet and containment solutions. CPI’s GlobalFrame Cabinets and the BTS Kit stood out as the best-in-class to meet our needs in building a super-efficient data centre,” added Elgethun.
With CPI’s cabinets and aisle containment, Basefarm deployed a solution that delivered the flexibility, airflow management and efficiency it was looking for.
The dream of building Oslo’s most reliable and sustainable data centre became a reality, just one year after the design process began, when phase 1 of Basefarm Oslo 5 was completed. In Spring 2016, the new data centre went live, offering Basefarm customers access to more than 10 megawatts of critical capacity and up to 6000 square metres of white space. Located only five kilometres away from Basefarm’s existing data centre, Basefarm Oslo 5 will be used as part of a dual site solution for customers requiring high levels of redundancy.
Finding an innovative and low-cost cooling solution was a top-level priority for the Basefarm design team from the start. In a facility designed to grow over the next five years, and beyond meeting the desired Power Usage Effectiveness (PUE) goal of 1.1, this was a chance to break new ground and become a model for other green data centres to follow.
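For readers unfamiliar with the metric, PUE is simply total facility energy divided by the energy delivered to IT equipment, so a PUE of 1.1 implies roughly 10 percent overhead for cooling, power distribution and other facility loads. A minimal sketch with hypothetical numbers (not Basefarm’s actual figures):

def pue(total_facility_kw, it_load_kw):
    # Power Usage Effectiveness: total facility power divided by IT load.
    return total_facility_kw / it_load_kw

it_load = 1000.0          # hypothetical IT load, kW
facility_total = 1100.0   # hypothetical total facility draw, kW
print(pue(facility_total, it_load))   # 1.1 -> only ~100 kW of overhead per 1,000 kW of IT load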
With the flexibility CPI’s BTS Kit and GlobalFrame Cabinet Systems provided, Basefarm can now quickly scale the new data centre to accommodate its clients’ future colocation needs.
“The combination of the quality and design of CPI’s products and their responsiveness during deployment meant it was a win-win situation for Basefarm. The cost of improving our PUE numbers by 10 percent is small compared to being able to use the space 80 percent more effectively. And it looks great!” Elgethun stated.
Taking a team approach was the key to building the trust needed for Basefarm and CPI to work together to successfully design and build Oslo’s biggest and greenest data centre yet: Basefarm Oslo 5.
At Chatsworth Products (CPI), it is our mission to address today’s critical IT infrastructure needs with products and services that protect your ever-growing investment in information and communication technology. We act as your business partner and are uniquely prepared to respond to your specific requirements with global availability and rapid product customisation that will give you a competitive advantage. At CPI, our passion works for you. With over two decades of engineering innovative IT physical layer solutions for the Fortune 500 and multinational corporations, CPI can respond to your business requirements with unequalled application expertise, customer service and technical support, as well as a global network of industry-leading distributors. Headquartered in the United States, CPI operates from multiple sites worldwide, including offices in Mexico, Canada, China, the United Arab Emirates and the United Kingdom. CPI’s manufacturing facilities are located in the United States, Asia and Europe. For more information, please visit www.chatsworth.com
Brendan O’Reilly, Sales Director – Blygold UK Ltd
For several years there has been increasing interest in the energy consumption of data centres. Increasingly power-hungry servers produce more heat, and more powerful, advanced cooling technology is required to remove that heat and allow the servers to operate in optimal conditions.
Alongside the direct power consumption of the servers, the indirect power consumption of the cooling installation plays a key role in energy-efficient data centres. This has prompted research and the market introduction of new technologies. Some of these developments focus on the process inside the data centre from which the heat must be removed; others focus on the equipment outside the data centre that is essential for releasing the heat to the environment. Together, these developments should result in a well-designed and easy-to-maintain cooling system that constitutes only a small part of the total data centre’s energy consumption.
It is clear that, in the long term, operational costs will far exceed the initial investment. Yet a big part of the operational cost is fixed by choices made in the initial investment process. The choices made in the design phase of data centre cooling equipment will not only affect the initial efficiency and power consumption, but may also have a major impact on these parameters in the future.
A good example of this is the choice of the type and material of the heat exchangers that are used in most installations. In the process industry and in power plants, heat exchangers are considered key elements for optimal process efficiency. Loss of heat transfer in these elements affects the efficiency of the whole installation and is therefore carefully monitored and corrected where necessary. In data centre design and operation, the focus on these heat exchangers seems to be weaker than in other industries.
To understand how corrosion and pollution in heat exchangers can have such an impact on cooling installation efficiency, we can look at the basics of the cooling process. In all cooling installations, the refrigeration cycle uses the evaporation of a liquid to absorb heat. The absorbed heat is then released to the environment at a higher pressure and temperature. The cycle of evaporation, compression, condensing and expansion is shown in simplified form in figure 1.
The white lines show the normal cycle, where the system works in optimal condition. The red lines show the cycle when the condensing temperature rises. This can be due to higher outside temperatures or to a less efficient heat exchanger. The higher condensing temperature requires extra power input while the effective capacity is reduced. Because an increased condensing temperature has this double effect, the efficiency of the cooling equipment drops significantly.
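To put a rough number on that double effect, the idealised (Carnot) coefficient of performance depends only on the evaporating and condensing temperatures. The sketch below uses hypothetical temperatures, not figures from the article, but shows how a few extra degrees of condensing temperature erode efficiency:

def carnot_cop(evap_c, cond_c):
    # Idealised cooling coefficient of performance from absolute temperatures.
    t_evap = evap_c + 273.15
    t_cond = cond_c + 273.15
    return t_evap / (t_cond - t_evap)

cop_clean = carnot_cop(10, 40)    # clean heat exchanger, condensing at 40 C -> ~9.4
cop_fouled = carnot_cop(10, 45)   # fouled/corroded heat exchanger, condensing at 45 C -> ~8.1
print(round(100 * (cop_clean - cop_fouled) / cop_clean, 1))   # ~14% less cooling per kW of input power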
For an acceptable energy consumption of the cooling installation, it is essential to keep the condensing temperature as low as possible. Every degree counts! This is the point where a close look at the heat exchangers in the system becomes vital: they must be kept clean and free of corrosion. The right choices in the design phase determine performance in the long term.
Heat exchangers are designed to exchange heat between media without direct contact between those media. Aluminium and copper are good materials for this purpose as they have high heat conductivity. Standard liquid‐to‐air heat exchangers are made with copper tubes and aluminium fins.
A weakness in this design is the joint between the copper and aluminium. As long as the fins are tightly joined to the copper tube, without gaps or interference of organic layers or corrosive products, the heat transfer will be optimal. Pollution on the fin surface will also influence the heat transfer of a heat exchanger and the airflow through it.
The joint between the copper tubes and aluminium fins is one of the more corrosion-sensitive parts of an air-conditioning unit. With aluminium being less noble than copper, it will be sacrificed in the presence of electrically conducting fluids, which will always be present due to pollution and moisture from the environment.
The accelerated corrosion caused by the presence of different metals is called galvanic corrosion, and it is one of the main problems in copper-aluminium heat exchangers. An example of this galvanic corrosion is given in figure 2. The joint that existed between copper and aluminium is replaced by a copper-aluminium oxide joint. The heat conductivity of aluminium oxide is much lower than that of aluminium, so the heat transfer between copper tubes and aluminium fins is significantly decreased.
If pollution on the fins limits the airflow through the heat exchanger, the temperature of the air passing over the aluminium fins will increase (the same kW carried by fewer kg of air). This reduces the temperature difference between the liquid/gas in the copper tube and the air passing over the fins, and a smaller temperature difference means reduced heat transfer. The only way the system can cope with this loss of heat transfer is an undesirable increase in condensing pressure and temperature.
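A simple energy-balance sketch (hypothetical numbers, not taken from the article) makes the airflow effect concrete: for a fixed heat load, the air-side temperature rise equals the load divided by the mass flow and the specific heat of air, so a fouled coil that passes half the air sees twice the temperature rise:

cp_air = 1.005            # specific heat of air, kJ/(kg*K)
heat_load_kw = 100.0      # heat to be rejected, kW (hypothetical)

for airflow_kg_s in (8.0, 4.0):   # clean coil vs. a fouled coil passing half the air
    delta_t = heat_load_kw / (airflow_kg_s * cp_air)
    print(airflow_kg_s, round(delta_t, 1))
# ~12.4 K air temperature rise for the clean coil, ~24.9 K for the fouled one,
# which pushes the condensing temperature up, exactly as described above.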
In the design phase, engineers must take into account the effect corrosion and pollution will have over the lifetime of a cooling installation. Corrosion can, of course, be controlled by selecting the right materials, but the type of heat exchanger is also important: using indirect adiabatic cooling on a heat exchanger will create massive galvanic corrosion due to the amount of moisture that is brought into it.
Protecting heat exchangers from corrosion and accumulating pollution is essential for cooling installation capacity and energy consumption. The options available in the market can be divided into three categories: metal optimization, pre-coated metals and post-coated metals.
Metal optimization consists of looking into different metals or alloys to reduce the risk of corrosion. Using copper fins instead of aluminium is a good example of this. The resulting copper tube-copper fin heat exchangers no longer suffer from extreme galvanic corrosion. Apart from the price and the weight, the disadvantage is that in industrial environments sulphurous and nitrogenous contamination will cause a high rate of metal loss. Heat transfer will not directly be affected, but the lifetime of the heat exchangers will decrease significantly.
Pre-coated aluminium is often offered as a “better than nothing” solution against corrosion in heat exchangers. The ease of application makes this a cheap and tempting option. In this case the aluminium fin material receives a thin coating layer before being cut and stamped to fit. The result is that cutting and stamping the fins damages the protective layer, creating hundreds of thousands of metres of unprotected cutting edges in every single coil. One must also take into account that the protective layer sits between the copper tube and the aluminium fin, which reduces heat transfer even before any corrosion is present.
Post-coating is a technique where corrosion-protective coatings are applied to heat exchangers after full assembly. If the right coating and the right procedures are applied, the metals will be sealed off from the environment without reducing heat transfer. The disadvantage is that it requires special coatings, and application can only be done by specialized companies.
Applying these special heat-conducting coatings to the complete heat exchanging surface of a coil is difficult and time consuming. This creates extra challenges with respect to pricing and delivery times compared to other solutions that can often be produced or supplied by heat exchanger manufacturers themselves. Even though post-coatings are preferably applied as a preventive measure before installation, it is possible to use them as a corrective measure if choices made in the design phase turn out to be insufficient.
Selecting the right solution for heat exchanger design and corrosion prevention will affect the costs of maintenance, replacement and energy consumption. Investments made in the design phase will show significantly reduced operational costs in the long term. Data centre engineering already places a big focus on electronic equipment that can handle higher temperatures, because increasing the operational temperature can significantly reduce operational costs: every degree counts! With this awareness it is only a small step to look at the heat exchangers of cooling equipment in the same way.
Every degree counts: protect, monitor and maintain these key elements of the cooling process!
How choices in the design phase affect long-term performance: an example from the field
Aluminium-copper air-cooled heat exchangers with an indirect adiabatic cooling system might seem a cost-effective way to lower the initial investment, but they will significantly increase corrosion problems. Within 3-4 years the heat exchanger efficiency is significantly reduced, as the hidden corrosion process is accelerated by the water atomization during hot days (design days). With a less efficient heat exchanger, the installation efficiency is negatively affected the whole year long. Once efficiency drops below a critical point, early replacement of the heat exchanger is inevitable, even though the front of the coil might still look to be in good condition.
As a retailer committed to “exceeding customer needs”, Asda wanted to go beyond offering affordable goods to their customers and provide a new service that would bring convenience to their daily lives.
The answer was toyou, an innovative end-to-end parcel solution. The service takes advantage of Asda’s extensive logistics and retail network, enabling consumers to return or collect purchases from third-party online retailers across its stores, petrol forecourts and Click & Collect points. This means that a consumer who buys a non-grocery product from another retailer can pick it up or return it at an Asda store rather than wait for home or office delivery.
In order to implement the service, Asda required a far more agile warehouse and supply chain management system. This new system needed to be hosted off-site so that there was no reliance on a single store or team.
“We were launching a new service, in a new field, on a scale we had never undertaken before. We needed a pedigree IT solutions provider that could support the full scale of our full end-to-end implementation, so CenturyLink stood out to us,” explains Paul Anastasiou, Senior Director toyou in Asda.
Despite the scale of the project, Asda could not risk putting extra pressure on their existing legacy IT system – a seamless and secure transition was required. In addition to this, as a brand dedicated to making goods and services more affordable to customers, keeping costs low as the business model expanded was crucial. As such, Asda chose to outsource the administration of the operating system and application licenses to manage costs.
Asda chose a warehouse management software platform from CenturyLink's partner Manhattan Associates. The multi-faceted solution was deployed as a hosted managed solution across two data centres. CenturyLink Managed Hosting administered Asda's operating systems, and Oracle and SQL databases on a full life cycle basis as part of the solution. Asda created its complete development, test certification and production environments for the Manhattan Associates platform on that dedicated infrastructure.
Asda used CenturyLink Dedicated Cloud Compute, which provided Compute and Managed Storage with further capacity to house the data flowing through Asda's business on-demand. The security requirement was accomplished with a dedicated cloud firewall protecting the entire solution.
CenturyLink instituted Disaster Recovery services between the two data centres at the application and database level, as well as managed firewalls to secure the data. CenturyLink also implemented Managed Load Balancing to manage the entire virtual environment and interface to all the linked warehouses.
18 months from concept to implementation
Approximately 3 - 4 months of testing
Asda’s launch of toyou, with the support of CenturyLink, has greatly boosted the retailer’s customer relationships. By moving most of the retailer’s operations to CenturyLink, Asda has effectively launched a huge-scale new venture while maintaining the same level of service and value to customers.
Asda and CenturyLink are continuing to develop the working relationship and discuss what opportunities could be available as toyou grows and develops.
About CenturyLink
CenturyLink is a global communications, hosting, cloud and IT services company enabling millions of customers to transform their businesses and their lives through innovative technology solutions. CenturyLink offers network and data systems management, Big Data analytics and IT consulting.
Founded in 1945, KoçSistem is a member of the Koç Group and a leading, well-established, information technology company in Turkey. The company’s history spans seven decades, during which time KoçSistem has introduced leading edge technologies to the market in order to enhance the competitive edge and productivity of enterprises.
KoçSistem plays a major role in the digital transformation of enterprise offerings with a portfolio of smart IT solutions crossing mega industry trends including the Internet of Things, Big Data, Cloud Computing, Corporate Mobility and Smart Solutions. Providing data centre services for some of the biggest enterprises in both the public and private sector in Turkey, KoçSistem is also focused on delivering reliable co-location services.
KoçSistem originally supported its data centre business from two data centres, one in Istanbul and one in Ankara as a disaster recovery site. However, as demand grew, it recognised that it needed to invest in another data centre facility in Istanbul to prepare for, and manage, expansion.
KoçSistem carefully considered the issues related to self-build versus the selection of a data centre operator that would provide the bespoke infrastructure it required. Ultimately, it made the decision to outsource the new data centre requirements in order to maximise service quality. This approach would also enable the company to focus on strategic aspects of its business, instead of attention and resource being diverted to building and managing the infrastructure supporting it.
The ability of the operator to provide maximum uptime and resilience for its growing managed services business was key in the selection of a new data centre partner. Location, connectivity, capacity and the technical specification of the new facility were also important.
An initial review of the market revealed that, whilst approximately 60% of data centre operators in Turkey are located in Istanbul, their size, the capacity available for future growth and their adherence to the international standards required for a Tier III+ level of resilience were extremely limited.
Few of the existing data centres had been built to withstand serious seismic activity, even though the area is geographically ‘at risk’ and in spite of regulatory standards for the construction of critical buildings such as hospitals, government offices, power generation and distribution facilities and telecommunications centres.
Zenium’s decision to enter the Turkish data centre market in 2014 with the development of a state-of-the-art data centre campus in Istanbul coincided with KoçSistem’s search for a partner.
Can Barış Öztok, Assistant General Manager, Sales & Marketing at KoçSistem explained: “KoçSistem manages technology demands for clients across all sectors, particularly from financial to retail so it’s critical to us that all business partners offer exceptional levels of service to the highest standard. We have found without doubt that Zenium meets those expectations.”
“Zenium demonstrated its ability to meet all of our key criteria and must-have features and capabilities,” added Öztok. “The company also shared our belief in the future of the Turkish IT market and supported our main focus; delivering enterprise cloud solutions crucial for digital transformation.”
Solution
KoçSistem was able to provide its customers with IT and cloud services from Istanbul One from day one of the launch of the new facility in September 2015. Its initial requirement was for 500 sq m of data centre space, customised to meet KoçSistem’s day-one power and cooling profile. This was further supported by dedicated mechanical and electrical plant, with the ability to be scaled up as its requirements, for power densities for example, increased.
“Zenium’s decision to design and construct Istanbul One to earthquake code TEC 2007 and the highest importance factor of 1.5 for ‘buildings to be utilised after an earthquake’ provided us with the peace of mind that the facility would provide the business-critical security and resilience that we required,” continued Öztok.
“Its proactive approach to tackling the potential power outages that can occur in the region, by investing in multiple fuel oil storage tanks as an emergency power supply to the generators and stringent SLA contracts with fuel suppliers, also provided the reassurance that maximum uptime will be achieved regardless of the external issues relating to power supply,” said Öztok.
The combination of Zenium’s experience in data centre design, build and management and KoçSistem’s expertise in co-location and managed services has already paid dividends.
Increasing demand for quality data centre space and high level data management services resulted in KoçSistem filling its initial 500 sq m data hall within 12 months and the decision to extend its agreement with Zenium, and take on a second data hall, doubling capacity to 1,000 sq m in early 2017.
Istanbul One is the first data centre in Turkey to provide wholesale data centre solutions, which is an important feature for KoçSistem going forward.
“We provide our customers a whole solution including all layers of cloud starting from co-location and cloud infrastructures to applications,” Öztok explained. “Our enterprise customers have large co-location requirements so the potential to increase capacity with Zenium offered us a future-proofed solution. It is scalable, energy efficient and economical from the outset. A win-win for us all.”
Istanbul has long been the link between east and west. However, consumer demand for mobile and internet services and the need for business class IT infrastructure, coupled with Istanbul’s status as a regional banking hub, is fuelling demand for high speed, low latency connectivity between Turkish and European financial and business centres. Istanbul is now in the unique position of being able to provide a digital bridge that will support communications and growth between neighbouring geographies. It is poised to become the natural regional centre for Internet connectivity.
Located in a well-established Organized Industrial Zone (OIZ) in close proximity to the new International Financial Center (IFC) and within easy reach of the historic business district in Istanbul, the Zenium campus comprises three self-contained, purpose-designed buildings that deliver 22 MW of IT load to 12,000 sq m of high specification technical space.
Constructed from a concrete frame with proprietary steel insulated cladding, the buildings are designed to meet the highest level of earthquake code allowing for immediate use and continued operation following a seismic event. Mains power is supplied via dual diverse HV feeds providing 30MVA, backed up by seismically-rated emergency generators.
As the only carrier-neutral facility in the region, Istanbul One offers tenants diverse connectivity via multiple fibre providers, enabling them to choose the telecommunications operator/ISP that best meets their needs.
As the first global grade data center in Turkey with peering capabilities, Istanbul One also supports the enhanced connectivity increasingly required by international organisations.
Shortlist confirmed.
Online voting for the DCS Awards opened recently and votes are coming in thick and fast for this year’s shortlist. Make sure you don’t miss out on the opportunity to express your opinion on the companies that you believe deserve recognition as being the best in their field.
Following assessment and validation from the panel at Angel Business Communications, the shortlist for the 24 categories in this year’s DCS Awards has been put forward for online voting by our readership. The Data Centre Solutions (DCS) Awards reward the products, projects and solutions, and honour the companies, teams and individuals, operating in the Data Centre arena.
DCS Awards are delighted to be joined by MPL Technology Group as our Headline Sponsor, together with our other sponsors, Eltek, Vertiv, Comtec, Riello UPS, Starline Track Busway and Volta Data Centres and our event partners Data Centre Alliance and Datacentre.ME
Phil Maidment, Co-Founder and Owner of MPL Technology Group said: "We are very excited to be Headline Sponsor of the DCS Awards 2017, and have lots of good things planned for this year. We are looking forward to working with such a prestigious media company, and to getting to know some more great people in the industry."
The winners will be announced at a gala ceremony taking place at London’s Grange St Paul’s Hotel on 18 May.
All voting takes place online and voting rules apply. Make sure you place your votes by 28 April, when voting closes, by visiting: http://www.dcsawards.com/voting.php
The full 2017 shortlist is below:
Data Centre Energy Efficiency Project of the Year
New Design/Build Data Centre Project of the Year
Data Centre Management Project of the Year
Data Centre Consolidation/Upgrade/Refresh Project of the Year
Data Centre Cloud Project of the Year
Data Centre Power Product of the Year
Data Centre PDU Product of the Year
Data Centre Cooling Product of the Year
Data Centre Facilities Management Product of the Year
Data Centre Physical Security & Fire Suppression Product of the Year
Data Centre Cabling Product of the Year
Data Centre Cabinets/Racks Product of the Year
Data Centre ICT Storage Hardware Product of the Year
Data Centre ICT Software Defined Storage Product of the Year
Data Centre ICT Cloud Storage Product of the Year
Data Centre ICT Security Product of the Year
Data Centre ICT Management Product of the Year
Data Centre ICT Networking Product of the Year
Excellence in Service Award
Data Centre Hosting/co-location Supplier of the Year
Data Centre Cloud Vendor of the Year
Data Centre Facilities Innovation of the Year
Data Centre ICT Innovation of the Year
Data Centre Individual of the Year
Voting closes : 28 April
www.dcsawards.com
The successful Managed Services & Hosting Summit series of events is expanding to Amsterdam in April, assessing the impact of market trends and compliance on the MSP sector in Europe. Expert speakers from Gartner and a leading legal firm involved in assessing EU General Data Protection Regulation (GDPR) impact will be providing keynote presentations, speaking as evidence emerges that many MSPs need to “up their game”, particularly in their sales and customer retention.
Customers are demanding more, so Bianca Granetto, Research Director at Gartner, will examine new research into digital business and digital transformation market dynamics and what customers are really asking about.
Another keynote speaker, Renzo Marchini, author of Cloud Computing: A Practical Introduction to the Legal Issues, is a partner in law firm Fieldfisher's privacy and information law group which has over 20 years' experience in advising clients across different sectors and ranging from start-ups to multinationals. He has a particular focus on cloud computing, the “Internet of Things”, and big data.
Finally, for those seeking guidance on the high level of merger and acquisition activity in the sector, David Reimanschneider, of M&A experts Hampleton, will look at where the smart money is going in the MSP business and what the real measures of value and time are, and when to sell.
The European Managed Services & Hosting Summit 2017, staged in Amsterdam on 25th April 2017, builds on the success of the UK Managed Services & Hosting Summit, now in its seventh year. It will bring leading hardware and software vendors, hosting providers, telcos, mobile operators and web services providers involved in managed services and hosting together with channels including Managed Service Providers (MSPs), resellers, integrators and service providers seeking to develop their managed services portfolios and sales of hosted solutions.
The European Managed Services & Hosting Summit 2017 is a management-level event designed to help channel organisations identify opportunities arising from the increasing demand for managed and hosted services, and to develop and strengthen partnerships aimed at supporting sales. The event has attracted a strong line-up, including many of Europe’s leading suppliers to the MSP sector such as Datto, SolarWinds MSP, Autotask, Kingston Technology, RingCentral, TOPdesk, ASG, Cato Networks and Kaseya.
For further information or to register please visit: www.mshsummit.com/amsterdam
FLASH FORWARD - A one-day end-user conference on flash and SSD storage technologies and their benefits for IT infrastructure design and application performance.
1st June 2017 – Hotel Sofitel Munich Bayerpost
Since the very early days of flash storage the industry has gathered pace at an increasingly rapid rate, with over 1,000 product introductions, and today one SSD is sold for every three HDD equivalents. According to Trendfocus, over 60 million flash drives shipped in the first half of 2016 alone, compared to just over 100 million in the whole of 2015.
FLASH FORWARD brings together leading independent commentators, experienced end-users and key vendors to examine the current technologies and their uses and, most importantly, their impact on application time-to-market and business competitiveness.
Divided into four areas of focus, the conference will review the technologies and the applications to which they are bringing new life, and examine who is deploying flash and where the current sweet spots lie in your data centre architecture.
The conference will also examine the best practices that can be shared amongst users to gain the most advantage and avoid the pitfalls that some may have experienced, and will finally discuss the future directions for these storage technologies.
The keynote speakers and moderators delivering the main conference content are confirmed as respected analyst Dr. Carlo Velten of Crisp Research AG; Jens Leischner, founder of the end-user organization sanboard, The Storage Networking User Group; Bertie Hoermannsdorfer of speicherguide.de; and André M. Braun representing SNIA Europe.
Sponsors include Dell/EMC, Fujitsu, IBM, Pure Systems, Seagate, Tintri, Toshiba, and Virtual Instruments and the event is endorsed by SNIA Europe.
Registration is free for IT managers and professionals from end-user organisations via the website www.flashforward.io/de using the promo code FFDEEB1. Resellers and other representatives of channel organisations are also welcome to attend for a small registration fee.
Vendors interested in sponsoring the event should contact paul.trowbridge@evito.net.
Cloud is quickly becoming the “new normal” — according to a recent Forrester report, the ability to “better leverage big data and analytics in business decision-making” tops the priority list for organisations adopting the cloud. The problem? Increased cloud usage means increased complexity, often leading to a kind of infrastructure “blind spot”. This results in analytics data gathered being incomplete, which carries the risk of masking important issues. So how do companies get around this blind spot?
By Mark Boggia, Director Sales Engineering Europe at Nexthink.
Simply put, as cloud networks expand, so does their complexity and existing server monitoring tools aren’t up to the task — they were designed to handle finite internal environments, not the ever-changing perimeter of the cloud. Under these conditions, meaningful analytics become virtually impossible since relevant data lies beyond the visibility of the IT department.
The cloud breeds complexity, which limits visibility. So what’s the solution for multi-cloud companies that need the best of both worlds? Think of it like this: While server-side monitoring tools can capture data from all attached devices, they’re naturally frustrated by the cloud “gap” which exists between on-premises and off-site solutions. What’s more, using centralised data collection automatically puts IT teams behind the curve. For instance end-users experiencing network problems or engaging in risky behaviour — such as the use of unsanctioned cloud applications — often don’t wait around for logs and error reports to reach the IT desk. They frequently try to find their own solution, ask a colleague or download another app. It makes sense therefore to start by looking at end-users first.
Companies are now turning to real user monitoring (RUM) solutions which collect data and metrics at the end-user level directly and in real-time, allowing them to effectively “flip the script” of traditional monitoring techniques. According to the same survey, 77 percent of IT managers believe implementing RUM solutions would be “very effective” or “generally effective” at solving end-user monitoring challenges.
So why the big push for hybrid analytics? Why are companies adopting the cloud so focused on this outcome? The simple answer is data. By adopting hybrid and multiple cloud models, businesses have access to virtually limitless data sources — but this same abundance also creates a natural “blind spot” for IT infrastructure, forcing companies to choose between reduced complexity and better analytics or large-scale cloud adoption and limited big data efficacy. But the emergence of flexible, RUM-based tools may suggest a way for companies to increase their visibility without losing their edge: services, costs and end-users are monitored in real-time — even as the data they provide is used to improve analytics outcomes.
Albie Attias, managing director of IT solutions provider King of Servers, explores how the rise of connected devices will impact the future workplace.
When we think of the Internet of Things (IoT), it can be easy to fall into the trap of assuming that such connectivity will merely make devices and machines more efficient and intelligent. This is an understatement. The IoT will actually greatly improve human efficiency, having an incredibly positive impact on the workplace. Smart devices will help employees to organise their time better and serve them information before they even realise they need it.
Imagine smartphones that plan out a route to a meeting for an employee before they even think about opening Google Maps, simply because the meeting is in their calendar. It will automatically set an alarm to ensure they wake up in time, based on traffic data and GPS movement of the employee. It will also alert said employee that they need to visit the petrol station on the way if they want to avoid breaking down on their journey. It may even ping an email to meeting attendees automatically if they are running late due to traffic.
Aside from assisting staff with their typical working duties, the rise of IoT will make people in general more efficient, which will yield great benefits in the workplace.
A rise in connected devices will increase the adoption of remote working. It will become much easier for employees in different locations to connect and communicate effectively. Video collaboration robots will make it easier for employees to have a virtual ‘presence’ in a meeting, rather than trying to decipher conversations via poor Skype connections.
Not only will this benefit the general population, it will also open up many new employment opportunities for people with disabilities, as their physical presence won’t be required on location.
Although IoT will provide numerous benefits to the modern workplace, it will also throw up some challenges, particularly for IT departments. Undoubtedly, a new variety of security holes will emerge, bringing new types of threats we couldn’t even begin to imagine presently.
The more touchpoints a network has, the more vulnerable it is likely to be. The sheer number of connected devices alone will create vulnerabilities. This will be particularly challenging for smaller IT departments that are restricted by budget, and will require continuous training and learning as well as purchasing the latest IT security devices and applications.
IoT relies on lots of data being processed, analysed and understood in order to run effectively. While many mainstream devices will run off data externally, IT departments wishing to integrate their applications will need to be adept at handling and processing data in larger volumes than ever before. This alone is a specialist skill, with many experts warning that there is a serious skills shortage in this area.
I believe IoT is a very exciting concept for enterprise. It will be interesting to see how different industries and businesses adopt the new technology and how it will alter the current workplace as we know it. We are just at the beginning of a very exciting technology journey.
Cybersecurity breaches are happening at an industrial scale. The unabated volume, scale, and magnitude of these breaches are forcing the entire industry to re-think how security should be managed, deployed, and evaluated.
By Adrian Rowley, Lead Solutions Architect EMEA, Gigamon.
While it is simply unacceptable to be complacent when it comes to cybersecurity, there still seems to be some confusion among many organisations when it comes to their perceived level of security and actual cyber-readiness. In fact, an IT security study last year found that only 55 percent of UK government organisations have an IT budget dedicated to security solutions, which just isn’t enough. This lack of commitment is especially worrying given the increasing number of attacks across vital industry sectors – such as energy, water, telecoms, financial services, transport, defence and government – commonly referred to as Critical National Infrastructure (CNI). So where do investments in security need to be made?
While these types of threats and attacks are not new by any means, an increasing amount of CNI continues to move online and more devices are connecting to networks with questionable levels of security. Previously, the Supervisory Control and Data Acquisition (SCADA) architecture in these CNI systems was isolated from the outside world and therefore more difficult for hackers to infiltrate. Now, however, attackers don’t need to expend nearly as much energy to hack CNI organisations, as the proliferation of the internet in this sector has made their networks much more accessible. One of many examples of the impact of an attack on CNI was seen in Ukraine in December 2015, where the electricity grid was taken down by a cyber-attack, affecting nearly a quarter of a million citizens. It doesn’t take a vivid imagination to envisage how an attack on a country’s construction, finance, telecoms, transport or utilities systems could have devastating consequences. Unfortunately, despite companies’ best defences, increasingly organised hackers are not only getting through, they are staying hidden and undetected on networks for longer.
Organisations and security vendors must avoid complacency and instead fight smarter to identify, isolate and eliminate cyber threats faster. Companies need to constantly examine the way that their data security models are deployed and managed and ensure they are fit for purpose.
There are inherent issues in trying to stem the rise in attacks on CNI systems, and the problem goes back to the fundamentals of the security model itself. The traditional way in which organisations set up their security models has led to cybersecurity systems that are simply inadequate at addressing the level of cyber breaches organisations face today.
For a while, the main focus in keeping out hackers was to bolster the perimeter. There was a simplistic assumption that what was outside the perimeter was unsafe and what was inside was secure. That perimeter security typically consisted of a firewall at the internet edge and endpoint security software, such as an antivirus solution, at the user end. However, most perimeter firewalls and endpoint security software solutions leverage rules and signatures to identify malware. Today, many cyber breaches exploit zero-day vulnerabilities: vulnerabilities that have been discovered but for which no patch yet exists. Consequently, it is increasingly difficult for traditional perimeter-based solutions to prevent malware and threats from breaking in. Ultimately, this means hackers are bypassing perimeters and staying on networks.
Another aspect of the original model was a high reliance on employee trust; employees were considered trustworthy while everyone else was a threat. However, many offices now have employees who use personal computing devices, such as smartphones for business use, or their work force consists of more than just employees, but also consultants, contractors, and suppliers all needing to access the network and IT resources. This creates multiple points of entry for potential hackers and makes the simple trust model unrealistic, as the threat could just as easily come from within.
Furthermore, security appliances were deployed at fixed locations. Typically, these would assume a fixed perimeter or a set of fixed “choke” points at which traffic was expected to traverse and be monitored for threats. However, with the advancement of IoT, BYOD, and the general mobility of users and their devices, the predictability of traffic patterns and these fixed “choke” points has diminished. Additionally, the adoption of the cloud has blurred the edge and perimeter boundaries. This is making the workplace a far more dynamic environment with far less predictability on where the boundaries and choke points lie. Consequently, the ability to consistently and comprehensively identify all threats based on the static deployment of security appliances at fixed locations has been severely impaired.
Despite these issues, and the fact that cybercriminals are becoming much more sophisticated in their approach, many organisations are still using traditional security architectures to prevent network breaches. Criminals have set their sights on bigger targets with much greater fall-out. Today’s threats are far stealthier, more sophisticated and destructive at an industrial scale. Many of them are grouped under the umbrella of Advanced Persistent Threats (APTs), so named because they compromise the network and take up residence there for lengthy periods of time, and they are the source of many of the recent large-scale breaches.
Modern security strategies have to be forged on the assumption that breaches are inevitable. In other words, there must be a growing emphasis on detection and containment of breaches from within, in addition to prevention of breaches. Since the network is the primary medium that bridges the physical, virtual and cloud environments, network traffic is becoming increasingly critical for its role in providing the window to the enterprise for malware and threats. Organisations need to have persistent visibility to analyse network traffic for threats, anomalies, and lateral movement of malware. There needs to be a structured platform-based approach that delivers traffic visibility for a multitude of security appliances in a scalable, pervasive, and cost effective manner. Such a platform would deliver visibility into the lateral movement of malware, accelerate the detection of exfiltration activity, and could significantly reduce the overhead, complexity and costs associated with such security deployments.
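To make the idea of detecting lateral movement from network traffic a little more concrete, the sketch below flags internal, east-west flows that were never seen during a baseline period. The record format, thresholds and host names are illustrative assumptions, not a vendor’s API or data.

# Illustrative sketch only: flagging unusual east-west (internal) flows from
# connection records, in the spirit of the visibility approach described above.

from collections import Counter

# (src_host, dst_host, dst_port) tuples as they might come from flow records;
# counts represent how often each flow appeared during a baseline period.
baseline = Counter({("web-01", "db-01", 5432): 900, ("app-01", "db-01", 5432): 850})

observed = [
    ("web-01", "db-01", 5432),
    ("web-01", "file-07", 445),   # unusual lateral SMB connection
    ("app-01", "db-01", 5432),
]

def suspicious(flows, baseline, min_seen=10):
    """Return internal flows rarely or never seen in the baseline period."""
    return [f for f in flows if baseline.get(f, 0) < min_seen]

for flow in suspicious(observed, baseline):
    print("review lateral movement candidate:", flow)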
With the current threat landscape including industrialised and well-organised cyber threats on a national level, it is no longer sufficient to focus on the security applications exclusively. Focusing on how those solutions get deployed and how they get consistent access to relevant data is a critical piece of the solution. A security delivery platform in this sense is a foundational building block of any cyber security strategy.
The changing cyber security conditions are driving a need for a fundamental shift in security. As organisations accept the inevitability of network breaches, their focus should shift to security architectures that detect malware and threats within the organisation, and respond to mitigate risk. Doing this requires far deeper insight and far greater coverage across the infrastructure than traditionally feasible, and consequently a new model for deploying security solutions. This model must have pervasive reach and insight into the network, and must equip organisations to better protect themselves against the hackers that have raised the stakes.
The server industry is becoming more and more automated. Robots are helping to deploy servers, and clouds are moving away from large racks of blades and hardware managed by teams of administrators towards machines that are deployed and managed with minimal human interaction. The way information is delivered, as well as improvements in automation technologies, is fundamentally changing the central role of IT.
By Mark Baker, OpenStack Product Manager, Canonical.
Data centres are becoming smaller and distributed across many environments, and workloads are becoming more consolidated. CIOs realize there are fewer dependencies on traditional servers and costly infrastructure; however, hardware is not going anywhere. For IT executives, servers will be part of a bigger solution that creates new efficiencies and will make cloud environments quicker and more affordable to deploy.
CIOs wishing to run either cloud on premises (private cloud) or as a hybrid with public cloud, need to master both bare metal servers and networking. This has caused a major transition in the data centre. Big Software, IoT (Internet of Things), and Big Data are changing how operators must architect, deploy, and manage servers and networks. The traditional Enterprise scale-up models of delivering monolithic software on a limited number of big machines are being replaced by scale-out solutions that are deployed across many environments on many servers. This shift has forced data centre operators to look for alternative methods of operation that can deliver huge scale while reducing costs.
As the pendulum swings, scale-out represents a major shift in how data centres are deployed today. This approach presents administrators with a more agile and flexible way to drive value to cloud deployments while reducing overhead and operational costs. Scale-out is driven by a new era of software (web, Hadoop, Mongodb, ELK, NoSQL, etc.) that enables organisations to take advantage of hardware efficiencies whilst leveraging existing or new infrastructure to automate and scale machines and cloud-based workloads across distributed, heterogeneous environments.
For CIOs, one of the most often overlooked components to scale-out are the tools and techniques for leveraging bare metal servers within the environment. What happens in the next 3-5 years will determine how end-to-end solutions are architected for the next several decades. OpenStack has provided an alternative to public cloud. Containers have brought new efficiencies and functionality over traditional Virtual Machine (VM) models, and service modelling brings new flexibility and agility to both enterprises and service providers, while leveraging existing hardware infrastructure investments to deliver application functionality more effectively.
Because each software application has different server demands and resource utilization, many IT organizations tend to over-build to compensate for peak load, or they over-provision VMs to ensure enough capacity years out. The next generation of hardware uses automated server provisioning, so today’s IT pros no longer have to perform capacity planning five years out.
With the right provisioning tools, they can develop strategies for creating differently configured hardware and cloud archetypes to cover all classes of applications within their current environment and existing IT investments. This effectively makes it possible for administrators to get the most from their hardware by re-provisioning systems as the needs of the data centre change. For example, a server used for transcoding video 20 minutes ago is now a Kubernetes worker node, later a Hadoop MapReduce node, and tomorrow something else entirely.
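As a rough illustration of that role-swapping idea, the sketch below reassigns a small pool of bare-metal nodes to whichever workload class currently needs capacity. It is a hand-rolled toy, not Canonical’s tooling; in practice this is the job of provisioning systems such as MAAS and Juju, and the demand figures and node names are assumptions.

# Minimal sketch of automated re-provisioning, assuming a hypothetical scheduler.

from dataclasses import dataclass

@dataclass
class Node:
    hostname: str
    role: str = "unassigned"

# Hypothetical demand: how many nodes each workload class needs right now.
demand = {"kubernetes-worker": 2, "hadoop-mapreduce": 1, "video-transcode": 1}

pool = [Node(f"metal-{i:02d}") for i in range(4)]

def reprovision(pool, demand):
    """Reassign bare-metal nodes to whichever role is currently under-provisioned."""
    free = iter(pool)
    for role, count in demand.items():
        for _ in range(count):
            node = next(free, None)
            if node is None:
                return
            node.role = role  # in reality: re-image and redeploy the node

reprovision(pool, demand)
for n in pool:
    print(n.hostname, "->", n.role)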
These next generation solutions bring new automation and deployment tools, efficiencies, and methods for deploying distributed systems in the cloud. The IT industry is at a pivotal period, transitioning from traditional scale-up models of the past to scale-out architecture of the future where solutions are delivered on disparate clouds, servers, and environments simultaneously. CIOs need to have the flexibility of not ripping and replacing their entire infrastructure to take advantage of the opportunities the cloud offers. This is why new architectures and business models are emerging that will streamline the relationship between servers, software, and the cloud.
In 2007, data centre efficiency was just emerging as a serious issue. Increases in data centre density and capacity were driving up energy bills, while concerns over global warming were spurring conversations and government pressures about energy consumption. Since then, the industry has responded with a number of tactical approaches and has been searching for cohesive strategies to optimise efficiency.
By Mike O’Keeffe, VP Services EMEA, Vertiv.
Let’s take the discussion back to basics. There are a few key needs across the data centre space: one is the need to design a data centre which is as economical and efficient as possible, while another is to provide the highest levels of availability, even in the most critical conditions. With the dynamic changes of the modern world and the rising demands of our hyper-connected lives, fuelled by the explosion of mobile devices, it can be a challenge to manage your IT budget and improve operational efficiency.
So how do we go about designing the right data centre for the future? It’s the question we’re constantly faced with, and the answer is often a tough one. Typically, a data centre facility is designed to last over 20 years. Usually there is a fixed infrastructure from day one, designed with a maximum power and cooling limit. The equipment deployed in the building can change at a minimum every four years, but for hyperscale operators design changes may be made every four months. Overall, no two data centres are the same, which means that the key to designing a facility is an initial building design and deployment that can flex to match fluctuating IT loads and the needs of different organisations.
It’s much easier to factor in cost and energy reductions from day one than to retrofit a facility that’s already been built. For this reason, design, testing and project management are key in the early stages of building design. While the industry may be chasing exciting new designs and build opportunities, there are thousands of data centres already in existence which are more than 10 years old. In reality the task of legacy facility optimisation is equally as important and challenging as building an optimised facility from scratch.
Therefore, understanding how existing data centres can achieve optimum efficiency is crucial. One area of efficiency that is often overlooked is cooling equipment and how it’s being used. Even the slightest temperature increases can affect costs and power output, not to mention the reliability of mission-critical equipment. In an average data centre, cooling accounts for approximately 40 per cent of total energy use – a staggering percentage given the level of equipment used in the building. To ensure availability without breaking the bank, a data centre needs a thermal management solution designed to optimise cooling efficiency while lowering the total cost of operations. This requires smart, flexible technologies that can promptly adapt to changing temperatures. An added bonus of a holistic and integrated thermal management solution is that it can simultaneously prevent critical thermal issues that can disrupt service. A win-win situation.
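A quick back-of-envelope calculation shows why that 40 per cent share matters: even a modest cooling improvement is visible in the total bill. The annual consumption and the 15 per cent improvement below are illustrative assumptions; only the 40 per cent cooling share comes from the figure cited above.

# Back-of-envelope sketch: impact of a cooling improvement on total energy use.
# All figures are illustrative assumptions except the ~40% cooling share.

total_kwh_per_year = 10_000_000   # assumed annual facility consumption
cooling_share = 0.40              # cooling ~40% of total energy use (as cited)
cooling_improvement = 0.15        # assumed 15% gain from better thermal management

cooling_kwh = total_kwh_per_year * cooling_share
saved_kwh = cooling_kwh * cooling_improvement

print(f"Cooling energy:       {cooling_kwh:,.0f} kWh/yr")
print(f"Energy saved:         {saved_kwh:,.0f} kWh/yr")
print(f"Total bill reduction: {saved_kwh / total_kwh_per_year:.1%}")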
Another important area to consider is power. The question here is whether the current power systems in a facility can continue operations under any foreseeable change in circumstances. Selecting a critical power system that optimises availability, efficiency and scalability is key. According to the 2016 Costs of Data Centre Outages report from the Ponemon Institute and Vertiv, UPS system failure was the most frequently cited root cause of data centre outages and accounted for 25 per cent of these events. In fact, the costs associated with these outages increased by more than 42 per cent between 2010 and 2016.
To ensure the most effective delivery of power to a critical infrastructure, making sure a data centre is equipped with reliable equipment, redundant configurations, and a scalable and flexible power management strategy should be at the top of the priority list.
The third area to consider is how you’re managing your data centre infrastructure to account for change and plan for future needs. A data centre infrastructure management (DCIM) solution that includes regular monitoring for real-time information can identify vulnerabilities and inefficiencies, and allow you to make the best decisions to operate and maintain your infrastructure. In addition, your DCIM solution is also very valuable in planning for additional capacity that will support maximum efficiency and future growth in your data centre. A conscious effort should be made to continuously review your data centre’s mission critical equipment and maintenance practices, and get the information you need to make smart changes to improve efficiency and reliability.
Overall, there are a number of changes and considerations you can make from the outset to increase your efficiency, such as thermal management, power and DCIM. When it comes to retaining a competitive advantage and coping with the increasing pressure to achieve optimum efficiency, there is no one-size-fits-all solution. In fact, a full range of efficient solutions, coupled with a knowledgeable team to deploy them effectively, is what may put you ahead of the pack.
Put simply, the key is to use innovative design and service expertise to extend equipment life, reduce operating costs and address your data centre’s unique challenges. The last message here is particularly important. We’re often distracted by the data centres being planned by Google, Amazon and Facebook, and it’s easy to get swept away by the latest innovative technologies for hyperscale deployments. But in reality, not every company has the same needs or budgets as these big players to implement efficient front-end design from day one, or to continue these high levels of investment every time the technology changes or is updated.
Innovation can happen on different levels, because no two data centres have the exact same needs. What’s more, there is no such thing as a standard design or output for a data centre: a hyperscale facility built for an internet giant has totally different needs to a trading bank or hospital data centre. Above all, having a range of solutions that meet all these different requirements is essential.
While best practices in optimising availability, efficiency and capacity have emerged, how they are applied is hugely dependent on the specific site. Although the above techniques can and should be deployed, knowing the boundaries for these is critical, and this is where local expertise from a trustworthy provider can be the difference between success and failure. While following best practice is advised, the particular needs of your facility should always be front and centre when making decisions on data centre optimisation. That is why we must design – or redesign – data centres to prioritise flexibility, using scalable architectures that ultimately minimise your carbon footprint and operating costs.
Growing connectivity, competition and consumer power have left business leaders grappling with unprecedented change.
By Matt Leonard, Solution Engineering & Marketing Director, CenturyLink EMEA.
An environment in which organisations must disrupt others before they themselves are disrupted means that new products, business models and ecosystems are required. Growing competition and consumer power have eroded traditional product-based advantages, forcing companies to shift onto a new battleground – that of customer experience. And this requires an integration of the entire business in order to demonstrate value at each and every customer touchpoint.
As a result, ‘digital transformation’ has not only become the latest buzzword but is now also the cornerstone of many business strategies.
Despite this however, there may still be a lack of complete understanding and agreement around what the term actually means. Fundamentally, digital transformation is about the application and exploitation of technology in all aspects of an organisation’s activity. Success doesn’t necessarily come from the technology itself, but rather from the organisation’s ability to implement that technology innovatively by rethinking its business model, strategy, culture, talent, operating models and processes.
It’s not hard to understand why CIOs at many organisations are changing their IT strategies in an attempt to maximise the benefits of technology. Gone are the days when established enterprises could rely solely on their size as a critical success factor. To be successful today, companies need to be responsive, adaptable, agile, insight-driven, connected AND collaborative.
And in many cases it’s the smaller, more nimble start-ups that are best placed to take this approach to growing their enterprises and better serving their clients.
To keep up with these new, disruptive companies, CIOs should be looking at ways of building a digital transformation strategy that drives genuine value for the enterprise, rather than simply taking a “me too” approach based on what their peers are doing or following a barked order from the CEO to “use the cloud”.
Ten years ago, consumers were used to paying for products and services up front or on an annual subscription basis. Nowadays, though, a technological and material shift has led to a culture of on-demand consumption.
Customers today seek services and products from companies that align with their preferences and values as individuals, and are willing to share information in exchange for more personalised offerings.
This new way of purchasing and consuming products and services has filtered through into the business community, and many businesses are using digital transformation as a vehicle to strive for a “single complete view of the customer” that will allow them to hyper-personalise their interactions.
Beyond ensuring that the data analytics are in place that will provide this insight, businesses - whether serving a B2B or a B2C community – also need to work fast on “SaaSifying” their products, and ensuring their applications are on a sufficiently robust platform to be able to cope with peaks and troughs in demand.
However, when it comes to digitally transforming the portfolio of applications an organisation owns and manages, the most challenging will often be those that aren’t based in the cloud; they are either colocated or housed on on-premises legacy infrastructure. Deciding what to do with these applications in order to gain greater agility or faster response times, while taking requirements such as regulation, data sovereignty, security, integration complexity, contractual certainty and service level agreements into consideration, can be a little like playing three-dimensional chess.
And the more thought a CIO gives to these legacy applications and the challenges they represent, the further away that nirvana of being responsive, adaptable, agile, insight-driven, connected and collaborative starts to feel.
Faced with increasing competition from emerging start-ups, the larger corporates are considering what they can do to behave more like their newer counterparts. After all, no-one wants to be the next business to get ‘Ubered’.
The solution – or so they believe – is to embark on a digital transformation programme. This is not as straightforward as simply investing in more tech though as, historically, major corporations have always been at the forefront of tech investment. Simply adding on more technology layers will not fix a broken process or an inefficient culture.
While major corporations are right in attempting to mirror some of the positive traits that nimble start-ups exemplify, simply embracing ‘digital’ without knowing why they’re doing it, or what they want to achieve, is not the answer.
Focusing on the back-end technology and how to implement it, rather than the desired business outcomes can either result in investment being wasted on technology that drives no specific business value, or can lead to complete inertia – leaving the enterprise at risk of being ‘Ubered’.
The world’s leading digital enterprises aren’t a success simply because they run on flexible, scalable cloud platforms, for example. Rather, their success lies in the fact that they have uncovered a problem and have taken the smartest, most efficient approach to solving it.
Mistakes can frequently be made when organisations follow their competitors feet first into the cloud without proper thought and planning. There’s no “one size fits all” approach to which environment is most suitable for a move to the cloud, so consideration should be given as to how it fits in as part of a broader transformation strategy, and how it meets the demands of the business and its customers.
Public cloud, for example, is best suited to workloads that require flexible or short-term computing resources, that are highly scalable and that can be switched on and off easily. Private cloud, on the other hand, is more suitable for workloads that require predictable levels of resources for at least three years, and for more complex security requirements. Grid computing with three-year workloads can run in a custom security environment using managed hosting, saving money over cloud, supplemented with bare metal for short-term needs. Some older applications at the end of their lifecycle may be best left alone or moved into a colocation environment for integration into other venues. The key to a successful digital transformation with a legacy IT estate is a hybrid IT model with the flexibility to choose the best execution venue for each application and integrate it with other environments.
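A simplified sketch of that placement logic is given below. The rules, thresholds and venue labels are illustrative assumptions drawn loosely from the paragraph above, not a prescriptive framework.

# Illustrative sketch of choosing an execution venue in a hybrid IT estate.

def execution_venue(lifespan_years: float, bursty: bool,
                    strict_security: bool, end_of_life: bool) -> str:
    """Suggest an execution venue for a workload; rules are illustrative only."""
    if end_of_life:
        return "leave in place or move to colocation"
    if bursty or lifespan_years < 1:
        return "public cloud"
    if lifespan_years >= 3 and strict_security:
        return "private cloud or managed hosting (bare metal for peaks)"
    return "private cloud"

print(execution_venue(0.5, bursty=True,  strict_security=False, end_of_life=False))
print(execution_venue(4.0, bursty=False, strict_security=True,  end_of_life=False))
print(execution_venue(8.0, bursty=False, strict_security=False, end_of_life=True))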
It’s becoming increasingly commonplace in today’s landscape for corporations with a strategy encompassing both public and private options to require a number of cloud providers. While this approach can offer a greater level of flexibility and efficiency, it can be a lot for the CIO to juggle. A less stressful option for CIOs of enterprises that rely on multiple cloud providers is to seek a managed services provider to provide a management layer ensuring that promised savings materialise, and that complexity is kept to a minimum.
Change is inevitable and businesses must either take steps to adapt or be overtaken by their more agile, flexible and versatile competitors.
When a business decides to embark on digital transformation, it’s important that consideration is given as to where that business currently is, what it wants to achieve, and how it will address the challenges and opportunities it faces during that transformation.
Customer demands for the way they purchase and consume products and services have changed forever, and the demands that this puts on a business have changed with them.
In ensuring that its customer-facing applications are up to the task, a business must think long and hard about any cloud migration strategy. Rather than simply “putting them in the cloud”, it’s important to ensure that the environment in which these applications are housed, and the way they’re managed, are right for the business.
However, there is something of a mystique surrounding these different data center components, as many people don’t realize just how they’re used and why. In this pod of the “Too Proud To Ask” series, we’re going to be demystifying this very important aspect of data center storage. You’ll learn:
•What are buffers, caches, and queues, and why you should care about the differences?
•What’s the difference between a read cache and a write cache?
•What does “queue depth” mean?
•What’s a buffer, a ring buffer, and host memory buffer, and why does it matter?
•What happens when things go wrong?
These are just some of the topics we’ll be covering, and while it won’t be an exhaustive look at buffers, caches and queues, you can be sure that you’ll get insight into this very important, and yet often overlooked, part of storage design.
Recorded Feb 14 2017 64 mins
Presented by: John Kim & Rob Davis, Mellanox, Mark Rogov, Dell EMC, Dave Minturn, Intel, Alex McDonald, NetApp
Converged Infrastructure (CI) and Hyperconverged Infrastructure (HCI), along with Cluster or Cloud In Box (CIB), are popular trend topics that have gained both industry and customer adoption. As part of data infrastructures, CI, CIB and HCI enable simplified deployment of resources (servers, storage, I/O networking, hardware, software) across different environments.
However, what do these approaches mean for a hyperconverged storage environment? What are the key concerns and considerations related specifically to storage? Most importantly, how do you know that you’re asking the right questions in order to get to the right answers?
Find out in this live SNIA-ESF webcast where expert Greg Schulz, founder and analyst of Server StorageIO, will move beyond the hype to discuss:
· What are the storage considerations for CI, CIB and HCI
· Fast applications and fast servers need fast server storage I/O
· Networking and server storage I/O considerations
· How to avoid aggravation-causing aggregation (bottlenecks)
· Aggregated vs. disaggregated vs. hybrid converged
· Planning, comparing, benchmarking and decision-making
· Data protection, management and east-west I/O traffic
· Application and server I/O north-south traffic
Live online Mar 15 10:00 am United States - Los Angeles or after on demand 75 mins
Presented by: Greg Schulz, founder and analyst of Server StorageIO, John Kim, SNIA-ESF Chair, Mellanox
The demand for digital data preservation has increased drastically in recent years. Maintaining a large amount of data for long periods of time (months, years, decades, or even forever) becomes even more important given government regulations such as HIPAA, Sarbanes-Oxley, OSHA, and many others that define specific preservation periods for critical records.
While the move from paper to digital information over the past decades has greatly improved information access, it complicates information preservation. This is due to many factors including digital format changes, media obsolescence, media failure, and loss of contextual metadata. The Self-contained Information Retention Format (SIRF) was created by SNIA to facilitate long-term data storage and preservation. SIRF can be used with disk, tape, and cloud based storage containers, and is extensible to any new storage technologies.
It provides an effective and efficient way to preserve and secure digital information for many decades, even with the ever-changing technology landscape.
Join this webcast to learn:
•Key challenges of long-term data retention
•How the SIRF format works and its key elements
•How SIRF supports different storage containers - disks, tapes, CDMI and the cloud
•Availability of Open SIRF
SNIA experts that developed the SIRF standard will be on hand to answer your questions.
Recorded Feb 16 10:00 am United States - Los Angeles or after on demand 75 mins
Simona Rabinovici-Cohen, IBM, Phillip Viana, IBM, Sam Fineberg
SMB Direct makes use of RDMA networking, creates a block transport system and provides reliable transport for zettabytes of unstructured data worldwide. SMB3 forms the basis of hyper-converged and scale-out systems for virtualization and SQL Server. It is available for a variety of hardware devices, from printers and network-attached storage appliances to Storage Area Networks (SANs). It is often the most prevalent protocol on a network, with high-performance data transfers as well as efficient end-user access over wide-area connections.
In this SNIA-ESF Webcast, Microsoft’s Ned Pyle, program manager of the SMB protocol, will discuss the current state of SMB, including:
•Brief background on SMB
•An overview of the SMB 3.x family, first released with Windows 8, Windows Server 2012, MacOS 10.10, Samba 4.1, and Linux CIFS 3.12
•What changed in SMB 3.1.1
•Understanding SMB security, scenarios, and workloads
•The deprecation and removal of the legacy SMB1 protocol
•How SMB3 supports hyperconverged and scale-out storage
Live online Apr 5 10:00 am United States - Los Angeles or after on demand 75 mins
Ned Pyle, SMB Program Manager, Microsoft, John Kim, SNIA-ESF Chair, Mellanox, Alex McDonald, SNIA-ESF Vice Chair, NetApp
•Why latency is important in accessing solid state storage
•How to determine the appropriate use of networking in the context of a latency budget
•Do’s and don’ts for Load/Store access
Live online Apr 19 10:00 am United States - Los Angeles or after on demand 75 mins
Doug Voigt, Chair SNIA NVM Programming Model, HPE, J Metz, SNIA Board of Directors, Cisco
The data industry, as the name makes clear, is all about data – its storage, distribution, protection, safety and security. Data is intangible yet extremely valuable: the “new oil” at its very best. Nevertheless, the high-tech industry is embedded in the physical world in many respects, and one of those is the data center. These “houses for data” are real buildings, and the process of creating them presents new challenges and opportunities for the traditional construction industry. The following article provides a general contractor’s perspective on the data center as a construction project.
By Elizaveta Ageeva, Project specialist, NCC Building Finland.
We shape our buildings; thereafter they shape us.
– Winston Churchill
The “location dogma” is an axiom that real estate professionals are taught and repeat to themselves throughout their careers – from the university lecture hall to the highest-level meetings over deals worth millions. Old but true: the choice of location can either provide an excellent foundation for a successful project or devalue and undo all the effort and preparation made earlier.
The search for a data center location can be considered on two levels: country level and regional level. On both levels the location should fulfill certain criteria: sufficient energy and fiber network supply, physical safety and data security, and cost efficiency. It is important to note that all of these aspects play a vital role and should be satisfied in full, without compromise. Additional criteria may include the availability of renewable (green) energy and other sustainable solutions, the availability of an educated workforce, and tax or investment-related incentives.
An investment decision can be made solely at country level – the data center operator or investor decides to locate the facility in a certain country and then starts looking for sites – or, as is more often the case in practice, the decision about the country and a specific land site is made at the same time.
Cost efficiency runs like a golden thread through all the criteria – the data industry is first of all a business, and business is about profitability. The biggest share of data center costs is the consumption of electrical energy. Electricity prices at country level are compared annually by Eurostat, the European Union’s statistical office, which publishes the information on its website. Market players themselves also put huge effort into minimizing energy costs, one way or another. In the Nordic countries, for instance, the colder climate acts as natural cooling, which makes it possible to reduce the amount of energy needed for mechanical cooling. Another possibility is to utilize waste heat produced by data centers by selling it to district heating systems. This has been implemented successfully in data centers in Finland and is gradually becoming a new sustainable industry standard: a Finnish data center with a capacity of 6 MW may sell around 3.6 MW of heat, enough to warm 5,000 private houses in the area.
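As a back-of-envelope check (the recovery fraction and per-house demand below are simply inferred from the figures quoted above, not independent data):

\[
\frac{3.6\ \text{MW recovered heat}}{6\ \text{MW IT load}} = 60\%,
\qquad
\frac{3{,}600\ \text{kW}}{5{,}000\ \text{houses}} \approx 0.72\ \text{kW average heating demand per house}.
\]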
A third option for reducing costs is the use of alternative energy sources, a wide variety of which is available nowadays, from natural gas to sea water, from wind turbines to solar panels. From a construction perspective, energy costs can also be reduced by installing efficient cooling systems and other engineering solutions, as well as by providing life-cycle maintenance services for data centers.
Taking a small step back, it is worth saying that electrical energy should not only be cost-efficient; first of all it should simply be there – available, reliable and affordable. The availability and reliability of electricity is regulated at national level. In Finland, for instance, the public limited liability company Fingrid (whose major shareholder is the state) is responsible for the power system; according to the latest published information, the electric grid covers 100% of the country and was 99.99985% reliable in 2016. In practice, availability also means the possibility of getting access to the electric grid with minimal cost and time.
Fiber network supply is provided by the telecommunication companies operating in the area, which range in scale from national to regional players. Data center operators and investors have different objectives and preferences on which they base their choice of a specific operator, so the general principle is that variety is key: the more providers available in the area, the better.
Safety and security are another cornerstone of the data center, both as a concept and as a major practical aspect to consider. They cover cyber security, i.e. protection of the information kept in the data center, as well as physical safety, i.e. internal and external protection and a safe location. At country level, cyber security is guaranteed by national legislation, so it is important to pay attention to the circumstances in which information may be retrieved and on what terms. In Finland, for example, data privacy is protected by legislation to the extent that even official authorities may be granted access to a third party’s data only against a court decision. Physical safety at country level relates mostly to the natural environment: floods, volcanoes, tsunamis, hurricanes, tornadoes – open a school geography book and name any natural disaster. A data center location should be free of them all, with close to zero tolerance.
The political environment may also be considered an element of safety. In times of general social and economic turbulence and instability, a healthy political environment becomes a significant argument for regarding a stable and reliable country as a good investment target, especially in a business as sensitive and confidential as the data industry.
At the level of a specific land site, similar aspects should be considered. In addition, risks to weigh at regional level include forest fires, the location of flight paths, the proximity of hazardous industries, and sometimes even the proximity of larger cities or natural bodies of water (e.g. a large river).
The right approach to the construction process helps to mitigate physical risks. Preliminary analysis of groundwater and soil, the use of durable, high-end construction materials, the installation of surveillance and alarm systems, backup procedures and other protective measures all make for a safer and more secure data center. Digital construction technologies such as VDC (Virtual Design & Construction) make it possible to visualize and estimate the entire construction process at the design phase, and then to closely manage and evaluate performance (including the need for possible changes) during the execution phase.
Additional criteria to consider are the availability of alternative energy sources, sustainable solutions, an educated workforce in the area, and tax or investment-related incentives at country or regional level.
All the criteria mentioned so far can be characterized as more or less objective factors that can be measured, calculated or estimated in some way. There are, however, more subjective parameters worth taking into account when investing, probably the most essential of which is finding the right partners for the project. If there has to be a human factor (or a “partner” factor in this case), better to make it a successful one and gain the greatest possible benefit from the cooperation. Let’s take a closer look at what kind of partnering will ensure the successful implementation of a data center project.
The construction of a data center brings together specialists and experts from different fields, making the project process complicated and many-sided. To mention just a few participants: the investor (who may combine the roles of owner, operator and tenant, or each of these may participate as an independent party), the general contractor, regional and national authorities, energy providers, the landowner, architects, designers, consultants and subcontractors – and others. Perhaps not quite the construction of the Tower of Babel, but in practice somewhat similar. Executing the project in a foreign country raises the overall level of risk and uncertainty further.
From the general contractor’s practical experience of similar major projects, the solution is to bring the interests and capabilities of all the parties together in a partnering cooperation. A dream team is one in which individual success is ensured by pursuing common targets first and is achieved by reaching win-win decisions. In this scheme the general contractor takes the position of project leader, bringing the participants together and facilitating and ensuring cooperation between them. This makes it possible to offer the client a complete, ready-to-go project package that includes the land plot, pre-negotiated agreements with energy suppliers and authorities, an expert project team, a range of technical solutions, and consulting, marketing and other supporting services.
The rewarding result of this construction business model is a turn-key data center project, tailor-designed and built to the client’s specifications. Throughout the project the general contractor is the client’s partner and direct point of contact, capable of managing and answering questions as they arise and of satisfying the client’s needs either itself or with the involvement of other parties.
Being a recovering perfectionist, the author is tempted to conclude with the recommendation “Aim high, dream big and never settle for less”. However, planned and done is far better than perfect and non-existent. So look for the right team and partners, consider your objectives, evaluate risks and opportunities smartly and … just get started.
About NCC. Our vision is to renew our industry and provide superior sustainable solutions. NCC is one of the leading companies in the Nordics within construction, infrastructure and property development, with sales of SEK 53 billion and 17,000 employees in 2016. The NCC share is listed on NASDAQ Stockholm. More about the NCC Data Center concept: www.ncc.group/datacenter and www.ncc.fi/datacenter
In today’s global marketplace, business intelligence (BI) is not just for senior management. Employees from all levels of organisations and across various departments are using BI to drive decisions. With this influx of users needing to access and interact with data, companies now face new challenges in data governance.
By Tim Lang, Chief Technology Officer at MicroStrategy.
Chief Data Officers (CDOs) set strategies for governance programmes and related employee training, and bridge the gap between IT and business units. Ultimately, given the greater demand to manage data for new users, the CDO works to minimize the risk of contaminated data leaking into business reports.
Oftentimes, the CDO position is overlooked or undervalued when businesses plan to increase BI initiatives. While this is a common mistake, it can be a costly one as well. If an organisation’s data becomes corrupted, then it is of no value – data must be trusted and verified to be useful. Each team or department at an organisation has its own needs for using this data, and traditionally the IT department focused on managing data. However, as more business users interact with data on their own, and as self-service options continue to grow, the likelihood of data corruption increases significantly with each added user into an environment.
Democratising data has its risks, as minor inconsistencies introduced into data sets can have exponential effects across multiple departments. Inconsistencies rapidly multiply as users unknowingly share unverified information with colleagues, clients, and others outside the organisation. Complications originate from issues with ownership, data collection processes, or technology standardisation.
An employee who corrupts company data often does so with no intent, but rather because of a lack of technical knowledge or training. Without the employee even knowing they’ve caused an issue, the data they’ve contaminated can lead to excessive consumption of company resources, increased maintenance costs, and distorted results that end in painful decisions.
‘Reverse engineering’ irrelevant, out-of-date, or erroneous data is a tedious, time-consuming process. It provides an opportunity for the competition to jump ahead because your company resources are diverted to cleaning and restoring data to its pre-contaminated state. To avoid the many pitfalls associated with data contamination, here are five tips to help organisations get data quality and data governance right.
A governance framework sets the parameters for data management and usage, creates guided processes for resolving data issues, and enables businesses to make decisions based on high-quality data. Building this foundation of trust is essential for any organisation that looks to obtain precise insight and business value from data assets. But implementing a data governance framework isn't easy and there isn’t a one-size-fits-all approach. It must be customised according to each organisation to effectively allow collaboration between business and IT departments.
To effectively manage a governance framework, new roles are being created at varying levels within organisations. These data stewards are critical in curating data and fostering communication between teams. To communicate effectively, each business unit should designate representatives who engage in routine cross-team dialogue aimed at keeping everyone in the organisation on the same page. It is the stakeholders’ responsibility to ensure that their team adheres to established processes.
According to the New Vantage Partners 2016 Big Data Executive Survey, the primary factor in ensuring successful adoption of data-driven initiatives is the partnership between business and IT. Open communication and collaboration work to ensure that everyone’s data needs are met. Transparency from both sides is key, and while the analytics platform may be able to provide monitoring capabilities to help bridge that gap, the technology itself can only take you so far. To be more effective, governance processes need to be fluid and open, and IT needs to continuously monitor and prioritise how they promote essential measuring tools into a governed framework.
The easiest part of the process is choosing the right technology and putting it into the hands of both business and IT users. Technology should enable business teams to control the 'who, what, where, when, and why' of data entry – allowing access to information that pertains to them. It should also enable collaboration across the organisation to break down existing data silos.
Data governance cannot be created or fixed overnight. With business needs frequently changing, it takes a continual process of identifying gaps, prioritising applications, and promoting assets, so it’s good to start with small steps. As with any initiative, getting buy-in from key personnel is a crucial first step. The entire organisation needs to recognise the value of having an enterprise-wide data governance initiative—whether it’s through standardised technology, systematic reviews, or appointing data stewards.
The next step is to identify and start with an organisation’s most important application. Try to certify or promote a single application each month, and by year’s end your organisation will have data governance covering twelve critical applications.
As BI adoption becomes more pervasive, and employees increasingly access and interact with data, an effective data governance programme is more critical than ever before. It ensures that a single version of the truth is maintained across all departments of an organisation – protecting the data’s integrity and credibility.
Investing in employee education is a key element to consider when deploying data governance. Preventative measures and early investments work to minimise the likelihood of eventual mistakes or oversights that could damage credibility. Once the right technology is in place and being used correctly, applying a data governance framework will keep a business poised to reap the benefits and gain a competitive edge in today’s climate of continual change and disruption.
Increasing the efficiency of data centres and reducing their overall energy consumption are major concerns for industry, and indeed wider society, today. These concerns are likely to become ever more important as the amount of data being generated, stored and shared continues to increase, in turn causing demand for physical data centre space to grow.
By Stefano D'Agostino, MSc, MBA, PhD, MIET, Software Solutions Business Manager Data Centre, IT Business, Schneider Electric.
By some estimates, the amount of data in the world is doubling every two years and, consequently, the global energy consumption of data centres is increasing by 20% annually. The share of the world’s generated electricity consumed by data centres has risen from 0.5% ten years ago to 3% today, with some expecting the proportion to rise to 25% in 20 years’ time. Financial pressure, regulations and corporate social responsibility (CSR) all demand that data centre operators stay focused on improving efficiency.
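To put those growth rates into perspective, a simple compounding calculation using the figures quoted above (illustrative only, not a forecast) gives:

\[
\text{Data volume: } 2^{1/2} \approx 1.41 \;\Rightarrow\; \text{roughly } 41\%\ \text{growth per year};
\qquad
\text{Energy: } 1.20^{10} \approx 6.2 \;\Rightarrow\; \text{a sixfold rise over ten years at } 20\%\ \text{per annum}.
\]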
Software control of both the IT equipment and the supporting infrastructure such as cooling, security and power backup systems has long been a fundamental tool for monitoring and managing both operational and energy efficiency. Such tools will become increasingly important, and increasingly integrated, in the future. Furthermore, they will make use of the most cutting-edge advances in software technology, including artificial intelligence (AI), to meet the challenges posed by the critical importance of energy efficiency.
Apart from the IT equipment itself, the biggest consumer of power in any data centre is its cooling system. Vital to maintaining operating temperatures at acceptable levels, cooling is nevertheless a major drain on the energy required by a data centre and much effort is now expended in organising it efficiently.
Data centres are now designed from the outset to facilitate the most efficient cooling with careful attention being paid to the choice of cooling equipment for the facility as a whole, the design and alignment of the IT equipment racks to enable the optimum air flows for maintaining efficient operating temperatures and the containment of those racks so that hot and cold air streams are separated to avoid unnecessary duplication of cooling effort.
Perhaps most critical of all is the ongoing monitoring and control of the data centre’s operations so that adjustments can be made in response to inevitable changes in the operating environment. Variations in ambient or external temperature, a sudden change in the IT operating load, the addition or removal of equipment racks, or even the failure of some elements of the cooling apparatus all affect the cooling effort that is needed to maintain optimum performance.
A particular challenge is that the most appropriate measure to take in response to localised rises in temperature may often be counter-intuitive. Take, for instance, a design based around an air-cooled packaged chiller. If the IT temperature set point is increased and the chilled water temperature is increased, the chiller energy decreases for two reasons: first, the data centre can operate in economiser mode for a greater portion of the year, and second, the chiller efficiency increases.
However if the computer room air handler (CRAH) supply air temperature is not increased proportionally to the chilled water temperature, the cooling capacity of the CRAH decreases and the CRAH fans need to spin up to compensate. This in turn increases the energy consumption of the CRAH. Furthermore, the dry cooler energy increases because the number of economiser hours increases.
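The scale of that effect can be illustrated with a deliberately simplified model (an assumption for illustration, not Schneider Electric’s own model): coil capacity is taken to scale with airflow multiplied by the air-to-chilled-water temperature difference, and fan power is taken to follow the cube affinity law.

# Simplified illustration of the CRAH trade-off described above.
# Assumptions: cooling capacity ~ airflow x (air-to-water temperature difference),
# and fan power ~ airflow cubed (fan affinity law). Not a vendor model.

def required_airflow_ratio(dt_old_c: float, dt_new_c: float) -> float:
    """Airflow increase needed to deliver the same cooling when the
    air-to-chilled-water temperature difference shrinks."""
    return dt_old_c / dt_new_c

def fan_power_ratio(airflow_ratio: float) -> float:
    """Fan affinity law: power rises with the cube of flow (and speed)."""
    return airflow_ratio ** 3

# Example: chilled water raised by 2 degC while the CRAH supply-air set point
# is left unchanged, shrinking an assumed 10 degC difference to 8 degC.
airflow = required_airflow_ratio(10.0, 8.0)   # 1.25x more airflow
power = fan_power_ratio(airflow)              # roughly 1.95x more fan energy
print(f"airflow x{airflow:.2f}, fan power x{power:.2f}")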
Clearly the task of cooling an IT room is a complex and demanding one and requires consideration of several tradeoffs between the demands placed on various elements of the cooling system. It requires the adoption of a holistic approach to the cooling of a data centre considering all aspects of the load, support infrastructure and surrounding environment.
There have long been software tools to assist in the management of all aspects of a data centre’s operation. Systems management software from leading IT vendors allows operators to monitor issues like server performance, disk utilisation and network traffic congestion. In a foreshadowing of the Internet of Things, embedded processing elements on individual pieces of equipment give notice of servicing and update schedules and warn of potential malfunctions.
Separate from systems management software suites, Data Centre Infrastructure Management (DCIM) systems assist in the management of supporting infrastructure such as cooling and backup power supply systems. Facilities Management software, like Building and Energy Management systems (BEMS), assists in the monitoring of building or site-level functions such as air conditioning and electricity supply.
As energy management becomes ever more critical, given the growth in data-centre facilities, it will be necessary for all of these systems to work more closely together. Given the complexity of managing all of these systems, self-learning systems or artificial intelligence (AI) will increasingly be deployed in the quest for efficiency.
Such systems are already in place in some established DCIM software suites. For example, Schneider Electric’s Cooling Optimize tool, part of its StruxureWare suite, uses an intelligent algorithm that learns how cooling works in the data centre it is monitoring and automatically makes adjustments to elements of the cooling system in accordance with changes in the IT load. In this way it helps to achieve the optimal trade-off between using sufficient energy to ensure adequate cooling and using so much that efficiency suffers.
Elsewhere, some innovative startups are using intelligent machines with self-learning algorithms to optimise the allocation of the IT load itself so that optimal cooling can be achieved. Load-balancing systems are already a familiar feature of systems management software suites, but their focus is on ensuring an equitable distribution of resources from the point of view of maximising the available IT capacity in a data centre. The new software tools also take account of the effect of loads on the cooling requirements so that the energy implications can be factored into decisions.
The self-learning elements make calculations based on a number of factors including the times of the day, or indeed the year, when spikes in the load can be expected and calculate how such increased loads can be allocated across resources so that overall system and electrical efficiency can be maximised.
The combination of greater integration between the various management-software systems and the use of machine learning will help to reduce energy consumption, maximise hardware efficiency and provide greater visibility of system utilisation to data centre management. The latter feature is important especially in colocation and cloud environments from the point of view of ensuring agreed service levels to each customer.
There are of course other innovative efforts in place to help reduce power consumption in data centres. This includes research into new basic computing elements to replace silicon, especially as the venerable Moore’s Law, which predicts the regular achievement of ever more powerful computing performance from ever smaller amounts of silicon, approaches its physical limits. Such promising developments as quantum computing, superconducting cold logic and optical computing are all being investigated but for the foreseeable future we will still be building servers and data centres around silicon chips.
The better we manage such systems, taking into account all important factors including performance and energy efficiency, the more productive our data centres will be. Integration between management software systems, assisted by artificial intelligence, will be key to that.
Wärtsilä’s new, state-of-the-art energy generator is changing the game for data centres.
Every data center needs extremely reliable, but also affordable power supply. Traditionally this is provided by a combination of grid electricity, ensuring affordability, and emergency diesel generators, guaranteeing reliability. Unfortunately, this mechanism has certain drawbacks: from dependency on increasingly unstable power prices to high local emissions from diesel generators, which sometimes might even lead to problems obtaining environmental permits.
The solution to those challenges are modern gas-fired engines that provide affordable, reliable power supply. State-of-the-art gas engines are capable of starting up just as fast as diesel engines, but unlike those, they are able to competitively generate power not only in emergencies but also to electricity markets, thus recouping their costs and even generating additional profits.
In the vast majority of cases, data centers are expected to operate much more reliably than the power grids that support them. Therefore, practically every data center is provided with an on-site power generation facility to be used in the event of grid failure. This is normally done by installing emergency diesel generators, which are relatively cheap and involve simple and reliable technology that can provide required power almost instantly. At the same time, the generators use fuel that is not environmentally friendly, relatively expensive in most areas of the world, and most of the time the diesel generators just sit there doing nothing except generating costs. This is typically understood as a necessary cost of security, a type of insurance policy.
Traditionally, emergency diesel generation has been the solution of choice for all sensitive facilities, from nuclear power plants to hospitals to chemical factories, and has been for many decades – adopted long before the birth of the IT sector as we know it. It gets the job done just as it did fifty years ago. Yet in the second decade of the 21st century some things can be done smarter. Emergency power supply is exactly one of those things.
How, then, could the cost of diesel insurance be replaced with a solution that not only meets the same operational needs but is also a profitable and environment-friendly component of a data center project? The solution is actually very simple: it involves replacing engine-generator sets suitable only for emergencies with ones able to operate efficiently whenever it makes economic sense – even continuously, if desirable. For this, of course, a change of fuel is needed: from oil to gas.
Despite the technical advancement in many power generation technologies, a reciprocating engine still remains the only solution capable of meeting emergency power supply requirements. But such an engine does not have to run on diesel fuel anymore – now there is a cleaner and more economically effective alternative: natural gas.
Industrial reciprocating engines running on natural gas are not a new concept. Early machines of this type were used at the beginning of the 20th century, but those were huge, bulky, low-speed machines with nothing in common with modern agile and flexible units. The development of gas engine technology as we know it today started somewhere in the 1980s. In the 1990s, it was already becoming quite popular in local distributed power generation, as it attained efficiency levels impossible to match for any other small-scale technology. In the early years of the 21st century gas engine technology advanced further, earning an important place in a modern power system. Compared to other fuel-based technologies used in large-scale commercial power industry, gas engines are fast to start, cheap to build and extremely flexible. This led to widespread use in the role of intermediate-load or peaking power stations all over the world, not only in distributed generation, but in fairly large power plants as well: the largest engine power plant to date has an installed capacity of around 600 MW.
Yet until quite recently gas engines had a major flaw compared to diesels: the starting time. While ten minutes – the state of the art just a few years ago – is very impressive in the world of commercial power generation and faster than any other technology except diesel or hydro, for an emergency power generation system this would be way too slow. In fact, this is more than ten times slower than any decent emergency diesel generator.
However, during recent years, huge progress has been made in this area. In general, increasingly volatile electricity markets have forced equipment vendors to improve the flexibility of all power generation technologies, but in the case of gas engines the progress has perhaps been the most impressive. Over just a few years, standard series-built medium-speed gas engines had their start-up times reduced from ten minutes to just two. This was still longer than diesel, but the difference was no longer an order of magnitude. And even this has since been reduced further. Recent development and testing by Wärtsilä has conclusively demonstrated that state-of-the-art gas engines can be started and brought to full power within considerably less than one minute of the starting order, which brings them into the world of emergency power supply.
Therefore, now and finally, diesel engines have an alternative as a source of backup power. However, adopting gas goes far beyond simply providing a different but equivalent solution. Natural gas is the cleanest of all fossil fuels. First of all, using gas means less CO2. This is an inherent feature of natural gas as a fuel: the higher hydrogen-to-carbon ratio in its constituent compounds means that the exhaust gas contains less carbon dioxide and more climate-neutral water vapor. Combined with the high efficiency of modern industrial gas engines (which is among the highest of all power generation technologies and higher than that of the diesel engines currently used for emergency power generation), this means that electricity generated by gas engines has a considerably lower carbon footprint than diesel-generated power. In fact, the carbon footprint is much lower than that of grid electricity in most countries, which means that operating the generating sets continuously instead of relying on the grid would have a positive effect on the carbon footprint of the data center.
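As a rough indication of the scale of that difference, using commonly cited default fuel emission factors (approximate values that vary by source and fuel quality, so they should be treated as indicative only):

\[
\frac{\approx 56\ \text{kg CO}_2/\text{GJ (natural gas)}}{\approx 74\ \text{kg CO}_2/\text{GJ (diesel)}} \approx 0.76,
\]

i.e. roughly a quarter less CO2 per unit of fuel energy, before any additional gain from the gas engine’s higher electrical efficiency.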
Of course, emissions are not only about CO2. Nitrogen oxide emissions from gas engines are also considerably lower than those from diesels, and there are practically no emissions of sulfur oxides (as natural gas contains no sulfur) or particulate matter (again, there is no source in the fuel). All this, combined with the fact that in the majority of markets the cost of natural gas allows its use for commercial power generation, means that gas engines installed at a data center would not be limited to emergency power generation and could actually start earning money.
Recent developments in gas engine technology, combined with the advent of liquefied natural gas solutions, make gas engines a viable option for the emergency power supply of data centers. At the same time, the robustness of the technology, its low emissions, high efficiency and affordable fuel prices make such a solution suitable for much more than simply providing backup for rare cases of grid failure. The Wärtsilä solution described in this article allows data center operators to turn the necessary cost of technical insurance against grid failures into a profitable component of their business. At the same time, it reduces the environmental footprint of their facility, at both the global (carbon emissions) and local (particulate matter, nitrogen oxides) levels.
Most commentators agree that we are in the midst of a digital revolution. Data will play an important role in the world of business, and over time, competitive benefits will accrue for businesses that can effectively use data to their advantage. Data volumes continue to grow exponentially, with 40 zettabytes expected to be created by 2020. Despite the wealth of opportunity that this situation presents, data security remains a critical issue for the industry. Although much of the focus is on cyber security, the importance of physical threats to information held in data centres must be remembered.
By Southco.
It’s expected that the global data centre infrastructure management (DCIM) market will grow at a compound annual growth rate (CAGR) of about 15% between now and 2020.[1] Data volumes are exploding, requiring safe and effective storage, processing and administration. Unsurprisingly, the number of data centres being built around the world has increased, and all of them need to protect data from a range of threats.
More and more businesses are now hosting applications and storing data in colocated data centres. Although these colocation facilities represent a favourable alternative for many businesses compared with hosting data in a dedicated data centre, offering lower cost, greater reliability and 24/7 local support, shared access to critical infrastructure carries its own set of challenges. As physical security becomes more important, companies must safeguard business-critical data against accidental breaches to prevent theft or damage to valuable equipment, as well as the possibility of sensitive information falling into the wrong hands.
Although extensive measures are in place to secure the perimeter of a colocated data centre, the biggest risk to security often comes from inside. While the threat of malicious unauthorized access is a real concern, it’s worth noting that a sizable portion of breaches are accidental.
Research has shown that accidental and malicious unauthorized access from within data centres accounts for between 9% and 18% of total data breaches, costing the global industry more than $400 billion annually.[2] Given the ever present risk of data breaches, the need for physical security at the rack level becomes critical. Not only must these security measures maximize cost efficiencies for data centre owners, while barring access to unwanted intruders, but they must also deliver a complete audit trail, providing a clear overview of access and highlighting anything irregular or suspect to those with the power to act.
Traditionally, access to individual racks has always been protected by key-based systems with manual access management. In some instances, data centre owners have turned to a more advanced coded key system, but even this approach provides little in the way of security—and no record of who has accessed the data centre, making the collation of accurate audit reports practically impossible.
To ameliorate the problem of unauthorized access and concerns surrounding data security, traditional security systems are quickly being replaced by sophisticated electronic-access-based solutions. Above all, these solutions provide a comprehensive locking facility while offering fully embedded monitoring and tracking capabilities. They form part of a fully integrated access-control system, bringing reliable access management to the individual rack. The system also enables the creation of individual access credentials for different parts of the rack, all while eliminating the need for cages and thereby saving cost.
What makes these electronic-access solutions so effective is that they can generate digital signatures to control and monitor access time and tracking for audit trails, helping to comply with growing data-security standards including PCI DSS, HIPAA, FISMA, European Data Protection and Sarbanes-Oxley. Furthermore, as a standalone system, they require no network and no software to operate, instead being complemented by a manager key that can be used to add and remove users in real time as well as execute a power override if necessary.
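Purely as an illustration of what such a digital audit record might capture, the sketch below uses hypothetical field names rather than any particular vendor’s schema.

# Hypothetical structure for a rack-access audit record; field names are
# illustrative assumptions, not a specific product's schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RackAccessEvent:
    rack_id: str        # which cabinet or handle was operated
    credential_id: str  # RFID card, PIN, biometric or Bluetooth identity
    method: str         # "rfid", "pin", "biometric", "override_key", ...
    granted: bool       # was the lock released?
    timestamp: str      # UTC time of the attempt

def log_event(rack_id: str, credential_id: str, method: str, granted: bool) -> dict:
    event = RackAccessEvent(
        rack_id, credential_id, method, granted,
        datetime.now(timezone.utc).isoformat(),
    )
    return asdict(event)  # e.g. forwarded to a DCIM or SIEM system

print(log_event("DC1-ROW3-RACK07", "card-4821", "rfid", True))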
Any physical-security upgrade in the data centre has its issues, of course. Uninstalling existing security measures in favour of new ones costs both time and money, which is why data centre owners are turning toward more-intelligent security systems such as electronic-locking swinghandles, which can be integrated into new and existing secure server-rack facilities. They employ existing lock panel cutouts, eliminating the need for drilling and cutting. This approach allows for lock standardisation in the data centre, saving considerable time (and therefore cost)—something that holds real value given the pressing demand for data centre services.
The device also helps to solve the issue of rack security by integrating traditional and contemporary access control, giving the racks maximum protection. Physical access can be obtained using an RFID card, pin code, Bluetooth, Near Field Communication or biometric identifications. Furthermore, the addition of a manual-override key lock allows emergency access to the server cabinet and surrounding area. Even in the event that security must be overridden, an electronic-access solution can still track the audit trail, monitoring time and rack activity. Such products have been designed to lead protection efforts against physical-security breaches in data centres all over the world.
As data centre threats continue to increase, technologies such as electronic-locking swinghandles will become even more in demand as companies seek greater protection for their business-critical data. Therefore, security systems must become more intelligent and integrated to afford business owners peace of mind. The industry has an increasing need for access solutions that combine protection and monitoring functions to combat the often overlooked threat of physical data breaches.
[2] http://blog.gemalto.com/security/2016/09/20/data-breach-statistics-2016-first-half-results/
Huge volumes of data generated by an increasing number of business applications has forced an unprecedented rate of transformation onto data centres. Organisations are increasingly relying on virtualisation to support their IT infrastructure as a result. At first glance, this makes a lot of sense: virtual environments run more services on less hardware and, more importantly, keep services running during disruptions in the power supply and so protect critical loads. However, any data centre embracing virtualisation risks under-performance or even outages if its power management strategy fails to keep up.
Here, Gary Bowdler, Integrated Solutions Specialist at Eaton EMEA, examines how, by following five logical steps, companies can ensure services keep running during disruptions in the power supply.
Virtualised IT architectures and the associated technology they require are putting increasing pressure on the need to manage power effectively in data centres. Virtual machines (VMs) run at 70-80% capacity, compared with 10-15% for non-virtualised machines; this translates into enclosures demanding up to 40 kW of power each. Additionally, virtualisation allows applications to switch rapidly and unpredictably from one server to another, which helps data centres balance critical power demands but also makes power flexibility far more important.
Knowing this and how to manage it properly is critical to the viability of enterprises as, according to a recent white paper by Eaton, outage costs can reach nearly £5,000 an hour for small businesses of up to 100 employees, rising to a staggering £780,000 an hour for large corporations with 1000+ employees.
Fortunately, effective solutions exist. Adopting a robust, intelligent power management strategy lets operators realise the full potential of their modern IT architecture while avoiding the reputational and financial costs of an outage. This can be crystallised into five logical steps: Protect, Distribute, Organise, Manage and Maintain.
Protection comprises Uninterruptible Power Supply (UPS) backup power sources to avert data loss and ensure business continuity in the event of extended power outages. UPSs protect sensitive IT equipment from power disturbances when the mains power is present in addition to providing backup during power outages.
Intelligent power distribution includes cables and intelligent rack power distribution units (PDUs) that deliver outlet and section current information, which allows users to quickly determine exactly where energy is being used, and identify rogue hardware that’s consuming excessive energy. Accurate metering also simplifies load balancing and reveals locations with spare power capacity.
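As a minimal sketch of how outlet-level readings might be used to flag rogue hardware (the data shape and threshold below are illustrative assumptions, not any particular PDU’s API):

# Illustrative only: flag outlets drawing well above the rack average.
from statistics import mean

def find_rogue_outlets(outlet_watts: dict[str, float], factor: float = 2.0) -> list[str]:
    """Return outlet IDs whose draw exceeds `factor` times the rack average."""
    avg = mean(outlet_watts.values())
    return [outlet for outlet, w in outlet_watts.items() if w > factor * avg]

readings = {"A1": 180.0, "A2": 195.0, "A3": 620.0, "A4": 170.0}
print(find_rogue_outlets(readings))  # ['A3']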
The information provided by power distribution devices is enhanced by environmental monitoring that measures a range of variables, triggering backup and failover policies to minimise data loss and optimise recovery.
The equipment itself should be organised into secure and reliable IT infrastructure housing, offering easy access for maintenance. Virtualisation condenses power and cooling loads into smaller footprints, so more efficient cooling strategies become essential. Containment of hot and cold airstreams is becoming increasingly popular; this depends on well-managed raised-floor data centres, where all potential air leakage gaps are sealed to maintain uniform sub-floor static pressure and airflow distribution. Racks that avoid leakage and internal hot/cold air crossover are critical to these strategies.
Visibility and control of the intelligent power management strategy is facilitated by suitable software integrated into the virtualisation platform. Status data for all UPS and PDU power devices in the virtual network can be viewed together with network, physical server and storage information, from a single pane of glass. This helps to ensure business continuity as managers can make decisions informed by both power and IT equipment status. Reaction can be faster and automated disaster recovery policies become more effective.
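The kind of graduated response such integration enables might look like the sketch below; the thresholds and actions are illustrative assumptions rather than any specific product’s policy engine.

# Hypothetical policy: react to UPS status with graduated actions.
# Thresholds and action descriptions are illustrative assumptions.

def power_policy(on_battery: bool, runtime_remaining_min: float) -> str:
    if not on_battery:
        return "normal operation"
    if runtime_remaining_min > 20:
        return "alert operators; migrate non-critical VMs to unaffected hosts"
    if runtime_remaining_min > 5:
        return "migrate critical VMs; begin graceful shutdown of remaining workloads"
    return "shut down hosts and storage in dependency order"

print(power_policy(on_battery=True, runtime_remaining_min=12))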
Preventive maintenance is also essential to any power management strategy. Modern UPSs are reliable, but they are still complex devices and subject to failure, while other equipment such as PDUs also require care. Risk can be avoided by implementing the right service approach for all the components in the system.
Both UPSs and PDUs can be maintained and their service life extended with regular servicing and monitoring. Cover should include on-site visits for both preventive maintenance and emergencies, UPS and PDU exchanges, spare electronic parts, batteries and battery trays, and a professional helpline.
Integrating these five steps into a cohesive and effective power management strategy calls for the right hardware, software and service tools – components that combine into a resilient, long term solution as well as performing excellently on their own. Eaton’s industry expertise, support and solution portfolio offer these tools, allowing the user to build a truly optimised power management strategy that lets them leverage the benefits of an IT architecture.