A well-known multinational pharmaceutical corporation is undergoing a modernisation programme. The company wants to move large amounts of data, about such things as drug trials, across the globe in the pursuit of a cure for cancer. The data includes images generated by research labs worldwide.
Saïd Business School now enjoys an ongoing close relationship with Centrality’s executive team and its round-the-clock support service. The school has stated that it now has total confidence in providing the up-to-date, supported, maintained, resilient and highly available ‘world class IT service’ that its students, staff and faculty members reasonably expect and fully appreciate.
JBi regularly need flexible, high performance hosting solutions as the foundation for their digital campaigns and services. Their clients require a combination of competitive pricing, performance, availability and security. To fill this need, JBi wanted a strategic hosting partner with whom they could build an effective, long-term partnership. This was vital for the development, implementation and live stages of client campaigns, as was the need to ensure support and service would be delivered to the highest possible standards.
Fruition Partners and ServiceNow have created an enterprise service management solution for the Travis Perkins Group, now supporting EaaS implementation for IT, HR and logistics and providing a self-service portal which offers 30,000 staff a wide range of products and support, across 20 businesses and 2,000 sites.
The above paragraphs are taken from four of the articles in this issue of DCSUK. I’m not going to tell you which ones – you’ll have to read the magazine to find them! However, I do think it’s important to remember that, for all the great content to be found in our digital magazine, much of it is often about the theory of data centres and the wider IT world. Nothing wrong with that. How else would everyone learn about new ideas and technologies if there weren’t such articles?!
However, often it’s just as valuable to read about how your peers are actually implementing new solutions. You might be sceptical as to the theory, but when you can read about it in practice, suddenly it does make sense.
So, I’m delighted that we have some really strong case studies in DCSUK, alongside the usual high quality technology and ideas content.
Data has become one of the most valuable assets for 21st-century businesses. Organizations are under constant pressure to manage the massive amounts of data in their care. As a result, managing the health of data centers is paramount to ensuring the flexibility, safety and efficiency of a data-driven organization.
A continually developing and changing entity, today's complex data center requires regular health checks, empowering data center managers to keep a finger on the pulse of their facilities and maintain business continuity. A preventative rather than reactive approach within the data center is paramount to avoiding outages and mitigating downtime. Data center managers can maintain the health of data center hardware by leveraging automated tools that conduct ongoing monitoring, analytics, diagnostics and remediation. With the average data center outage costing even the most sophisticated organizations upwards of three-quarters of a million dollars, implementing a data center health management strategy is mission critical in today's dynamic business environment.
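To put that headline figure on the per-minute scale the study uses below, a rough illustration (the 90-minute outage duration is an assumed figure, not taken from the report):

$$\frac{\$750{,}000}{90\ \text{minutes}} \approx \$8{,}300\ \text{per minute of downtime}$$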
A recent study carried out by Morar Consulting, on behalf of Intel and Siemens, amongst 200 data center managers in the UK and US reveals that nearly 1 in 10 businesses do not have a data center health management system in place, leaving them potentially exposed to outages that cost thousands of dollars per minute in downtime.
This report summarizes the findings from a study carried out in Spring 2017, highlighting today's approaches and attitudes towards the implementation of a data center health management strategy.
Other key findings: among businesses that do have health management systems in place, a third implemented them only once their backs were against the wall, forced into action by an outage of their own, an outage witnessed elsewhere or pressure from the C-suite.
In an era of automation, 1 in 5 data center managers are still relying on manual processes to perform jobs that could be minimized or automated through innovative software solutions.
Specialist data centre healthcare business, ABM Critical Solutions, has completed an emergency clean of a South London college’s facility within a matter of hours of being called, thus preventing any damaging outages or potential loss of data.
The team was called after a gas suppression system was activated in the server room and unloaded gas and debris onto the IT equipment; a portable fire extinguisher was also accidentally deployed adding further contaminants on to the live equipment.
After receiving the initial emergency call at midday, a senior operations manager was onsite within two hours. The manager completed a detailed survey of the room and equipment and put a full emergency delivery plan together with the client. A team of highly-trained specialist technicians were deployed and onsite within five hours of the original call to begin the clean-up.
IT racks were micro-vacuumed and anti-statically wiped internally and externally; the internals of the IT equipment were meticulously micro-vacuumed and cleaned; the room interior was cleaned to ISO 14644-1 Class 8 standards (a cleanliness class commonly specified for server rooms); and an indoor air quality test was completed.
Technicians worked throughout the night to restore the room to its original state as the college couldn’t afford any additional downtime with staff and students dependent on the systems to complete work the following day.
Mike Meyer, Sales Director at ABM Critical Solutions says that thanks to the quick response of his team there was a very low mean time to recovery (MTTR): “None of the IT equipment experienced any long-term damage thanks to our quick response and thorough clean to remove all the abrasive fire suppression materials,” he explains.
“This situation underlines the critical importance of understanding the proper fire safety equipment that should be specified within any data facility,” he adds. “Should an emergency situation such as this occur, calling in an expert like ABM as quickly as possible can prevent an accident becoming a much more serious incident.”
VIRTUS Data Centres continues its rapid expansion, announcing plans for two new adjacent facilities on a single campus near Stockley Park, West London. The new site will be amongst the most advanced in the UK and will create London’s largest data centre campus.
Establishing this new mega campus further strengthens VIRTUS’ position as the largest hybrid colocation provider in the London metro area.
The two buildings, on the secure eight-acre campus, total 34,475m². Known as VIRTUS LONDON5 and LONDON6, they are designed to deliver 40MW of IT load and have secured power capacity that can increase to 110MVA of incoming power from diverse grid connection points, future-proofing expansion for customers.
The campus is ideally situated 16 miles from central London on the main fibre routes from London to Slough, and 7 miles from Slough itself, providing hyper-efficient, metro-fibre-connected, flexible and massively scalable data centre space within the M25.
Work has started to fit out space in LONDON5 for customers who have already committed, and general availability is expected in early 2018. These two new data centres will provide an additional 17,000 NTM (net technical metres) of IT space and will increase VIRTUS’ portfolio in London to approximately 100MW across its six facilities in Slough, Hayes and Enfield, with the power to expand to circa 150MW across the various campuses.
Together with the recently announced LONDON3 in Slough, the new facilities keep VIRTUS at the forefront of next generation data centres. Their size and expandability will significantly increase the capacity of highly reliable, efficient, secure, scalable and interconnected data centre space available to VIRTUS customers in London.
Neil Cresswell, CEO of VIRTUS Data Centres, said, "With the hunger for connectivity and data growing exponentially, our data centres continue to play a vital role in enabling the UK and Europe’s digital economy. We work with clients across all industries, all with unique audiences and IT landscapes, but with the common need to deliver the highest levels of availability, performance and security of digital experiences. As we move with our customers into an increasingly digital future, we help them deliver high performing applications and content. We provide fast, seamless connectivity to networks and public clouds, along with the capacity for vast data storage and compute processing power - all for lower costs. This investment in LONDON5 and LONDON6 means we can grow with our customers and help them achieve their ambitions.”
Data centre and cloud services specialist Proact is set to future-proof the IT environment at the University of Gloucestershire by supporting a multi-stage infrastructure project to overhaul the entire infrastructure estate, while also providing round-the-clock Service Management to relieve the University’s IT resource of maintenance tasks. Specifically, Proact will assist with IT consolidation and transformation, and will provide an improved DR and backup approach in addition to streamlining operations.
The University was facing a number of challenges due to its ageing infrastructure and the need to refocus the team on adding more business value. In particular, the University required assistance to overcome the issues caused by an antiquated backup solution and an out-of-date disaster recovery environment, and a need to focus scarce resources on delivering student-facing services. To tackle these problems, the University of Gloucestershire called in Proact, which will deploy a brand new, leading-technology solution, complemented by a full Service Management wrap, to transform the organisation’s IT infrastructure.
Proact demonstrated its expertise early in the ITT process before being selected, thanks to the firm’s advanced data centre and cloud capabilities, as the partner best suited to work with the University’s IT team. Proact will act as an extension of the University’s IT department and will not only design and implement infrastructure solutions as part of this phased project, but will also play a key advisory role as the organisation looks towards a cloud transformation in the future.
The chosen solution will enable the University of Gloucestershire to become more effective in working towards its key goal which is to integrate support, learning and teaching by transforming IT operations. In particular, Proact’s Service Management means that the University can take advantage of fast and effective IT monitoring, support and incident resolution, provided by Proact’s experts. This will enable the existing IT team at the University to focus on delivering business value to their internal customers without having to concern themselves with the day-to-day operations of their estate.
“It is great to have been selected to work with the University of Gloucestershire, using our advanced skills and experience to completely rejuvenate the University’s IT set-up. With our bespoke infrastructure solutions in place in addition to our 24x7 monitoring service, we look forward to driving innovation at this dynamic organisation,” says Jason Clark, Managing Director at Proact UK and CEO of Proact IT Group.
Doctor Nick Moore, Director of the Library, Technology and Information Service at the University of Gloucestershire, says: “This was a significant decision for the University, not just in terms of making sure we worked through the technical solution, but that we chose a partner organisation that we could trust to deliver the service we needed.
“The transition to their solution went incredibly smoothly and our expectations of the managed service support from Proact have been significantly exceeded. We have found the service desk members we speak to incredibly keen to support us. It feels like a weight has been lifted from my technical team who have been unanimous in their positive comments of the relationship and support from Proact.”
City of Wolverhampton Council now boasts one of the public sector’s most advanced cloud-based IT systems, after appointing automation specialist PowerON to maximise its use of Microsoft System Center and launch an innovative Microsoft Azure Hybrid Cloud model.
The council was keen to automate more IT tools, enable the pro-active monitoring of hardware, and streamline software distribution. With a Microsoft-centric system already in place, the council invested in Microsoft System Center, but required help to get the most out of the suite of systems management products, so enlisted the help of PowerON.
The company specialises in providing powerful, high-quality IT management and cloud automation solutions to organisations of all sizes, and immediately decided to create a bespoke Cloud Management Appliance (CMA) with extensive management capabilities for the council. Following its success, PowerON is now replicating the solution blueprint at a number of other local authorities in areas such as Suffolk, Hounslow, Barnsley and Wirral.
Steve Beaumont, Product Development Director at PowerON and a Microsoft Most Valuable Professional, explains: “We have a solid reputation for the quality and simplicity of our service, as well as offering an assured outcome to our customers, which really resonated with City of Wolverhampton Council. Like many public sector organisations, the council is under pressure to make its budgets go as far as possible and was facing challenges with legacy systems.
“Using Microsoft Hyper-V provided flexibility between on-premises and cloud computing resources and when this was combined with the reliability and security of Azure, as well as our unique CMA, it created an unrivalled solution. This is complemented by a hybrid cloud system with the flexing of infrastructure services across on-premises and Azure for key scenarios. We then also incorporated Operations Management Suite for cloud backup, disaster recovery and log analytics, as well as Enterprise Mobility Suite for hybrid management of mobile devices.
“The council had to quickly ‘stand up’ servers and we delivered a fully functioning version of System Center, via our CMA, in just over two weeks which was much less than the 90 days the council had initially estimated it would take.
“We were then able to roll out Windows 10 faster than any other desktop refresh in the council’s history. The whole system, which runs on Windows Server 2012 R2, is now completely scalable through Azure and can support up to 20,000 managed devices per appliance. Our success at City of Wolverhampton Council is now a blueprint for local authorities and we’re already implementing similar systems at other councils throughout the UK.”
Paul Dunlavey, Enterprise Manager at City of Wolverhampton Council, says: “PowerON thrives on being agile and flexible, which made them the ideal partner for this large-scale project. They made the adoption of the cloud an easy process, and the speed at which the infrastructure was built, which had been a barrier to adopting these technologies in the past, resulted in major cost savings against our initial forecast of how long it would take to do in-house. Ultimately it has transformed the way the whole council works.
“Another benefit is the size reduction of both our primary and secondary datacentres which has generated both office space and further savings. Importantly, it has also paved the way for the council to migrate larger systems to the cloud and within two years the aim is to have all primary business applications running from the cloud.”
Following the completion of the project, City of Wolverhampton Council was recently named Local Authority of the Year at the prestigious Municipal Journal (MJ) Awards 2017, which described the organisation as an "outstanding example of modern local government where the resident is at the heart of sound commercial decision-making."
PowerON has also recently completed major projects for a wide range of high-profile organisations including Sandwell and West Birmingham Hospital NHS Trust, Stena Group, Drax Group, Tesco and food giant Princes, as well as winning contracts with British Land, GHD and Clifford Chance.
The Bolton Mountain Rescue Team (Bolton MRT), a voluntary rescue service, has chosen Navisite and SRD Technology UK to help manage, migrate and transform its IT infrastructure to the cloud. Migrating to Navisite’s cloud environment will enable better communication between Bolton MRT and emergency services from remote locations, helping to improve the management and response times for rescues.
Bolton MRT also envisages this connected solution will be a direct aid in helping to locate missing people and casualties. The new Navisite cloud-based system will allow multiple emergency response agencies to work together and share information in real-time, in a single environment. More importantly, it lets Bolton MRT instantly share location information with emergency services so they are able to respond with the most appropriate resources, reducing rescue response time and potentially saving lives.
“Often in rescues, the biggest challenge is precisely locating the lost or injured person,” said Martin Banks, Operational Member and Treasurer at Bolton Mountain Rescue. “We’re now able to start the process of identifying the person’s location using an application which can be run on our system from any location.”
The police can often triangulate a person’s mobile phone location but this data can be inexact. The SARLOC positioning service, developed specifically for mountain rescue teams and now used across the country, is more accurate. Bolton Mountain Rescue team use Navisite’s cloud to access SARLOC data from any location. The positioning service requires the consent of the lost person, and a smartphone with GPS and mobile data service.
Sumeet Sabharwal, General Manager, Navisite said: “We’re proud to be collaborating with SRD Technology UK to support such an important, life-saving organisation like Bolton MRT and to be a part of its cloud transformation. When a cloud service like Navisite’s Desktop as a Service (DaaS) is being used in such a critical and time sensitive context, it’s vital that the service has a reliable and robust infrastructure with availability from any location. We’re pleased to have our technology and managed services used in such a beneficial manner.”
The new system works by sending a text message with a link to the mobile phone of the missing person. By clicking the link, any lost or injured person will be able to open a website that will calculate their location and notify Bolton MRT wherever they are through the mapping application running on Navisite’s DaaS. As a result, teams are able to locate lost or injured people much faster and with greater accuracy, no matter where the rescue team is located.
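As a rough illustration of the browser side of such a workflow, the sketch below shows how a consent-based locator page can capture a GPS fix and post it back to a rescue team’s service. It is a minimal sketch only: the endpoint, team identifier and field names are hypothetical, and SARLOC’s actual implementation is not documented here.

```typescript
// Minimal sketch of a SARLOC-style locator page (hypothetical endpoint and fields).
// When the lost person opens the link from the SMS, the browser asks for GPS
// consent and posts the resulting fix back to the rescue team's service.
async function reportLocation(teamId: string): Promise<void> {
  const fix = await new Promise<GeolocationPosition>((resolve, reject) =>
    navigator.geolocation.getCurrentPosition(resolve, reject, {
      enableHighAccuracy: true, // prefer GPS over coarse cell/Wi-Fi location
      timeout: 30_000,          // give the phone time to acquire satellites
    }),
  );
  await fetch(`https://example.org/api/fixes/${teamId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      lat: fix.coords.latitude,
      lon: fix.coords.longitude,
      accuracyMetres: fix.coords.accuracy, // lets the team judge fix quality
      timestamp: fix.timestamp,
    }),
  });
}
```

The key design point is consent: nothing is sent until the lost person opens the link and accepts the browser’s geolocation prompt, which matches the consent requirement described above.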
“Bolton MRT was working off IT equipment that had started to tire and limited their ability to carry out important work from the remote locations in which search and rescue incidents often take place,” said Simon Darch, Technical Director, SRD Technology UK. “With a dedicated focus on supporting cloud migrations for charities and not for profit organisations across the UK, the collaboration was critical for us to be able to provide not only the technology, but the managed services around it. Navisite’s Desktop-as-a-Service and Managed Office 365 solution enables Bolton MRT to work remotely, with full access to an integrated and standardised suite of applications to help their rescue operations.”
Bolton MRT is a registered search and rescue charity, with trained volunteers covering an area of over 800 square kilometres. The team is a vital resource for local emergency services, locating and rescuing missing or injured hillwalkers, attending climbing and biking accidents, and responding to any other incidents involving people in wild and remote places.
The volunteer organisation previously ran its operations from shared desktops located at the organisation's base and had no back-up procedure in place. This system restricted the ability of the team to access important data and systems remotely, as well as limiting information sharing with other emergency agencies. SRD Technology UK implemented Navisite’s DaaS and Managed Microsoft Office 365 solution, which allows standardised remote working, makes it easy to deploy and manage desktop environments, and gives the team access to a full suite of applications.
SRD Technology UK and Navisite were selected ahead of other managed service providers for the flexibility and reliability of their solutions, as well as their experience and skills in managing these cloud applications and environments.
Cloud specialists sign up online booking giant for five-year deal.
Leading British hosting firm UKFast today announces a five-year partnership with LateRooms.com to support the development of the popular online booking site’s technology infrastructure.
The deal sees the online accommodation site migrate its web hosting from a colocation solution with ATOS to a bespoke, cloud-based solution developed in partnership with UKFast.
The recently launched cloud solution now regularly processes 6.4 million events an hour. LateRooms.com currently serves 100 million page requests a month.
Head of IO at LateRooms.com, Stuart Taylor, said: “The new platform gives us improved stability, higher availability and the ability to scale up while remaining cost effective. In short we can do a lot more, with less.
“Throughout the migration process there have been learnings on both sides. UKFast has been thoughtful and responsive to the challenges we faced during a migration that was constrained within tight timescales. Together we have created an amazing platform that is a true business enabler.
“UKFast demonstrated a drive and commitment that is familiar to us in our own internal culture. That coupled with a passion for technology and a solid technical solution gave us the confidence that UKFast would become a partner and not just a supplier.”
A four-month migration process saw project teams from both sides working to move LateRooms.com across to its new solution with zero downtime.
UKFast CEO Lawrence Jones said: “LateRooms.com is a brilliant online brand that we're incredibly proud to host. We won the deal up against some very good rivals but they chose us for a number of reasons.
“Growing a business is about the challenges you can overcome and the kind of partnerships you build along the way. They trust our brand and they know we care 100% about every element of the service and the experience we give them.
“They could have used a public cloud provider to deliver their whole solution, but recognised it would be harder to keep control of costs as they scaled up, and they wouldn’t receive the same level of support.”
“They are particularly happy with our approach to collaborative working. For example, we offered our developers to solve a few issues they were experiencing. It is clearly a breath of fresh air for them and it built up a trust immediately.”
The new cloud infrastructure includes an overhaul of LateRooms.com’s dev environment, with custom-built APIs and the ability to spin up new VMs at the click of a button. The solution delivers an extremely flexible and easy-to-use system, responding to the booking site’s need for continuous development and ongoing deployment and integration of new features.
PagerDuty has released the findings of its State of Digital Operations Report: United Kingdom, which revealed the need for a shift in workforce expectations and the way teams across an organisation collaborate to resolve consumer-facing incidents.
While the majority of IT practitioners believe their organisation is equipped to support digital services, over half also say they face consumer-impacting incidents at least once a week, sometimes costing their organisations millions in lost revenue for every hour that an application is down. The report also highlighted that an organisation's failure to deliver on consumer expectations for a seamless digital experience can greatly affect a company's brand reputation and bottom line.
The report findings are based on a two-part survey of over 300 IT practitioners and over 300 UK consumers on the impact of digital services. The survey specifically examined what UK consumers expect from digital experiences, how organisations are investing in supporting digital services and what tools IT teams are using to keep these services up and running.
The State of Digital Operations report found that nearly all (90.6 percent) of UK consumers surveyed use a digital application or service at least once a week to complete tasks such as banking, making dinner reservations, finding transportation, grocery shopping and booking airline tickets. This finding is indicative of the larger UK digital services landscape -- IDC predicts that half of the Global 2000 enterprises will see the majority of their business depend on their ability to create and maintain digital services, products and applications by 2020. In addition, IDC says 89 percent of European organisations view digital transformation as central to their corporate strategy.
"With the rise in digital services in the UK, European businesses need to be ready to accelerate their digital transformation journey and adapt to consumer demands," said Jennifer Tejada, CEO at PagerDuty. "Disrupting brand and engagement experience means lost revenue and organisations need to be proactive versus reactive -- a reactive or automated approach to resolving consumer-facing incidents is not table stakes. Organisations can arm their IT teams by taking a holistic approach to incident response. Solutions that embrace capabilities such as machine learning and advanced response automation can help organisations easily deploy an expedited response to consumer-facing incidents."
The State of Digital Operations report revealed that along with the heavy reliance on digital services, UK consumers expect a seamless user experience, and IT organisations are struggling to meet these expectations.
When a digital app or service is unresponsive or slow, many consumers indicated that they are quick to stop using it. IT teams are now front and center in providing customers with a satisfying user experience. The State of Digital Operations report revealed that organisations are making significant upfront investments in tools and technologies that support the delivery of digital services in order to avoid costly performance issues later on. Nearly half of respondents (49.9 percent) reported that their organisations budget £500,000 or more for DevOps and ITOps tools and services to support and manage digital service offerings, a critical investment as downtime or service degradation can significantly impact an organisation's financial success.
Majority of IT leaders are in the dark about cloud services and spending.
An overwhelming majority of UK CIOs (76%) don’t know how much their organisation is spending on cloud services, according to a new research report released today by Trustmarque, part of Capita plc. This is due to the rise in employee-driven ‘cloud sprawl’ and ‘Shadow IT’, which pose a significant challenge to businesses’ cloud adoption and overall data security.
54% of IT leaders admitted they don’t know how many cloud-based services their organisation has, blaming the ease with which employees can sign up to these services, which makes it difficult to know exactly how many subscriptions and services the company ‘owns’.
58% went on to say they were worried that costs could spiral out of control as a result of cloud sprawl. 86% said cloud sprawl and Shadow IT makes the ongoing management of cloud services a challenge, while almost half of CIOs (45%) argued that providers could do more to warn users about costs they’re incurring when using cloud services.
While 91% of IT decision makers are looking to migrate on-premise apps to the cloud in the next 3-5 years, 59% fear these ambitions for cloud adoption will be slowed by a lack of control over how cloud services are deployed and managed.
This lack of control also means UK companies are exposing themselves to possible data breaches and to non-compliance with legal, regulatory and contractual obligations. With the impending EU General Data Protection Regulation (GDPR), this could lead to a significant financial impact: failure to comply carries penalties of up to €20m or 4% of global annual turnover, whichever is greater.
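Because the higher of the two caps applies, the percentage dominates for larger firms. For an illustrative company with €1bn of global annual turnover (an assumed figure):

$$\max(\text{€}20\text{m},\ 4\% \times \text{€}1\text{bn}) = \max(\text{€}20\text{m},\ \text{€}40\text{m}) = \text{€}40\text{m}$$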
James Butler, CTO at Trustmarque, said: “Cloud adoption is an unstoppable force, but as this research demonstrates there are still a number of challenges facing organisations. Forward planning is everything in IT and without suitable clarity into who is using what in the organisation, there could be a nasty surprise for IT bosses down the line. That’s not to mention the high potential costs associated with any data breach resulting from such unsanctioned use, as well as the impact of extra network congestion, and even excess mobile data charges.
“The self-service, user-friendly nature of the cloud has made it easy for employees to open cloud services and this is happening on a large scale. The first step towards best practice security is knowing where your data is at all times, and how it is being used. If it is residing in cloud repositories you don’t know about, this may be breaking internal policies and could land you in regulatory hot water – especially if it’s customer data.”
Phil McCoubrey, Head of Security Architecture at Capita, said: “These findings underline the extent to which British organisations must quickly appreciate the magnitude of the potential impact of GDPR. While the regulation clearly sets out that personal responsibility, and therefore accountability, lies with those managing data control, often a job for IT leaders, there is a worrying lack of action being taken by CIOs. GDPR compliance may be difficult for companies to achieve if IT leaders don’t know exactly where employees are storing and sending business data. GDPR is an opportunity to strengthen data security processes and improve resilience at a time when it is needed more than ever, but for those who haven’t adopted the basic principles of the Data Protection Act, there is a lot of work to do.”
The next Data Centre Transformation event, organised by Angel Business Communications in association with DataCentre Solutions, the Datacentre Alliance, the University of Leeds and RISE SICS North, takes place on 3rd July 2018 at the University of Manchester. For the 2018 event, we’re taking our title literally, so the focus is on DATA, CENTRE and TRANSFORMATION.
There are plenty of opportunities to be involved in DCT 2018. Right now, we’re looking for people who would like to help shape the event by offering to chair one of the workshop sessions. So, if you are passionate about data centres, recognise and understand the massive changes taking place in the industry at the present time and want to help data centre owners, operators and users understand what these changes mean for the future, please get in contact with us, indicating which area you are most interested in.
DATA
Digital Business
Digital Skills
Security
CENTRE
Energy
Connectivity
Hybrid DC
TRANSFORMATION
Open Compute
Automation
Cloud and the Connected World
This expanded and innovative conference programme recognises that data centres do not exist in splendid isolation, but are the foundation of today’s dynamic, digital world. Agility, mobility, scalability, reliability and accessibility are the key drivers for the enterprise as it seeks to ensure the ultimate customer experience. Data centres have a vital role to play in ensuring that applications and supporting services connect organisations to their customers seamlessly, wherever and whenever they are being accessed.
And that’s why our 2018 DCT Conference will focus on the constantly changing demands being made on the data centre in this new, digital age, concentrating on how the data centre is evolving to meet these challenges.
Please email Debbie Higham: debbie.higham@angelbc.com to express your interest and tell us about yourself and which subject you would like to chair.
At opposite ends of the healthcare ecosystem, data is being harnessed to drive a revolution.
By Junaid Hussain, Product Marketing, UKCloud.
1) Data standards and interoperability are enabling the patient to become the customer
There are currently a number of technological imbalances in the sector that are being corrected as the industry is turned on its head. Fundamental to all of this are data standards and interoperability, enabling a host of new devices and apps to work together to generate a wealth of new and enriched data. This rich data then enables and inspires a further wave of specialist solutions that can deliver new insights, reduce costs and improve outcomes.
2) Powerful secure platforms for pooling valuable datasets are providing clinicians, researchers and specialist solution providers with unprecedented capabilities
At the same time, limiting factors that might once have restricted what was possible with data in healthcare are being overcome.
While there are a host of technologies in play here, cloud is the central enabler for them all. The IoT (Internet of Things) devices that are gathering data like never before, are feeding it all into the cloud. It is in the cloud that the data is then securely stored and processed in order to mine it for insights and turn it into intelligence. It is in the cloud that collaboration between a vibrant ecosystem of specialist solution providers can then amplify and enrich this intelligence. And it is from the cloud that this intelligence is then accessed remotely by mobile devices, empowering clinicians, researchers and patients.
So, what does all of this mean for the datacentre industry?
It is evident that few NHS organisations, if any, will be building new datacentres of their own. Indeed, many will be closing such facilities as they look to move to the cloud. This provides an opportunity for colocation providers to host legacy workloads that cannot be moved to the cloud and for cloud service providers not only to host workloads for these organisations, but also for the ecosystem of health tech firms that will be providing IoT and cloud based services to them.
For organisations across health and social care, research and life sciences, and pharmaceuticals, one key requirement is that patient-identifiable data remains secure and, where possible, never leaves the UK. Cloud service providers that can guarantee that data will remain in a UK-sovereign data centre will have an advantage here. Such a guarantee, coupled with secure connectivity to HSCN, gives their customers secure access to their data, safe in the knowledge that it will always remain in the UK.
UKCloud provides a secure, UK-sovereign cloud platform that is connected to all the public sector networks (from PSN to HSCN) and works closely with an ecosystem of partners in health and public sector technology in the UK. If you want to become part of this ecosystem, get in touch.
Research reveals the reality of hybrid computing.
By Tony Lock, Director of Engagement and Distinguished Analyst, Freeform Dynamics Ltd.
At the beginning of the ‘Cloud’ movement, vendors, evangelists, visionaries and forecasters were often heard proclaiming that eventually all IT services would end up running in the public cloud, not in the data centres owned and operated by enterprises themselves. Our research at the time, along with that of several others, showed that the reality was somewhat different: the majority of organisations said they expected to continue operating IT services from their own data centres and from those of dedicated partners and hosters, even as they put certain workloads into the public cloud.
More recent research by Freeform Dynamics (link: http://www.freeformdynamics.com/fullarticle.asp?aid=1964) illustrates that this expectation – running IT services both from in-house operated data centres, and from public cloud sites – is now very much an accepted mode of operation. Indeed, it is what we conveniently term “hybrid cloud” (Figure 1).
The chart illustrates very clearly that over the course of the last five years almost three-quarters of organisations have already deployed, at least to some degree, internal systems that operate with characteristics similar to those found in public cloud services, i.e. they have deployed private clouds. Over the same period, just under two-thirds of those taking part in the survey stated that they already use public cloud systems. It is interesting to note that both private and public cloud usage has grown steadily rather than explosively, but this is not surprising given the pressures under which IT works and the time it takes to adopt any “new” offering, especially if the systems are expected to support business applications rather than those requiring lower levels of quality or resilience (Figure 2).
The second chart shows that for a majority of organisations, private cloud is already in use or will be supporting production business workloads in the near future. The adoption of public cloud to run such workloads clearly lags behind, but its eventual usage is only out of the question for around a quarter of respondents. When combined with the results for test/dev and the production hosting of applications and services developed specifically for the web, the picture of a hybrid cloud future for IT is unmistakable.
But if ‘hybrid IT’ is to become more than just a case of independently operating some services on internally owned and operated data centre equipment and others on public cloud infrastructure, the survey points out some key characteristics that must form part of the management picture (Figure 3).
The results in this figure highlight several key requirements that must be met around the movement of workloads if ‘hybrid cloud’ is to become more than a marketing buzzword. Given that private clouds are today used more extensively to support business applications than public clouds, there should be little surprise that smoothing the movement of workloads between different private clouds is ranked as important, or at least useful, by around four out of five respondents.
But the chart also indicates a recognition of the need to move workloads smoothly between private clouds running in the organisation’s own data centres and those of public cloud providers. And almost as many answered similarly about the need to be able to migrate workloads between different public clouds. The importance of these integration and interoperation capabilities is easy to understand: they are essential if we want to achieve the promise of cloud, in particular the ability to rapidly and easily provision and deprovision services, and the ability to dynamically support changing workloads coupled with hyper scalability to ease peak resource challenges and enhance service quality.
How quickly such capabilities can be delivered depends on a number of factors (Figure 4).
The need for the industry to adopt common standards is clear and, to its credit, things are beginning to move in this direction although there is still much work to be done. The same can be said for integrating cloud services with the existing management tools with which organisations keep things running, although, once again, things do need to improve especially in terms of visibility and monitoring.
The days of vendors building gated citadels to keep out the competition and keep hold of customers should be coming to an end, as many – though alas not all – are under pressure to supply better interoperability. In truth, while interoperability does make it easier for organisations to move away, such capabilities are also attractive and can act as an incentive to use a service.
After all, no one likes the idea of vendor lock-in, and anything that removes or at least minimises such fear can help smooth the entire sales cycle. In addition, if a supplier makes interoperability simple via adopting standards, being open and making workload migration straightforward, they then have an excellent incentive to keep service quality up and prices competitive.
The SVC Awards celebrate achievements in Storage, Cloud and Digitalisation, rewarding the products, projects and services as well as honouring companies and teams. The SVC Awards recognise the achievements of end-users, channel partners and vendors alike and in the case of the end-user category there will also be an award made to the supplier who nominated the winning organisation.
Voting is free of charge and must be made online at www.svcawards.com
Voting remains open until 3 November, so there is still just time to cast your vote and express your opinion on the companies that you believe deserve recognition in the SVC arena.
The winners will be announced at a gala ceremony on 23 November at the Hilton London Paddington Hotel. Contact the team and join the Storage, Cloud and Digitalisation community as it celebrates the best in the business.
All voting takes place online and voting rules apply. Make sure you place your votes by 3 November, when voting closes. Visit: www.svcawards.com
Below is the full shortlist for the 2017 SVC Awards:
Storage Project of the Year
Cohesity supporting Colliers International
DataCore Software supporting Grundon Waste Management
Mavin Global supporting The Weetabix Food Company
Cloud / Infrastructure Project of the Year
Axess Systems supporting Nottingham Community Housing Association
Correlata Solutions supporting insurance company client
Navisite supporting Safeline
Hyper-convergence Project of the Year
HyperGrid supporting Tearfund
Pivot3 supporting Bone Consult
UK Managed Services Provider of the Year
EACS
EBC Group
Mirus IT Solutions
netConsult
Six Degrees Group
Storm Internet
Vendor Channel Program of the Year
NetApp
Pivot3
Veeam Software
International Managed Services Provider of the Year
Alert Logic
Claranet
Datapipe
Backup and Recovery / Archive Product of the Year
Acronis – Backup 12.5
Altaro Software – VM Backup
Arcserve - UDP
Databarracks – DraaS, BaaS, BCaaS solutions
Drobo – 5N2
NetApp – BaaS solution
Quest – Rapid Recovery
StorageCraft – Disaster Recovery Solution
Tarmin – GridBank
Cloud-specific Backup and Recovery / Archive Product of the Year
Acronis – Backup 12.5
CloudRanger – SaaS platform
Datto – Total Data Protection platform
StorageCraft – Cloud Services
Veeam Software - Backup & Replication v9.5
Storage Management Product of the Year
Open-E – JovianDSS
SUSE – Enterprise Storage 4
Tarmin – GridBank Data Management platform
Virtual Instruments – VirtualWisdom
Software Defined / Object Storage Product of the Year
Cloudian – HyperStore
DDN Storage – Web Object Scaler (WOS)
SUSE – Enterprise Storage 4
Software Defined Infrastructure Product of the Year
Anuta Networks – NCX 6.0
Cohesive Networks – VNS3
Runecast Solutions – Analyzer
Silver Peak – Unity EdgeConnect
SUSE – OpenStack Cloud 7
Hyper-convergence Solution of the Year
Pivot3 - Acuity Hyperconverged Software Platform
Scale Computing - HC3
Syneto - HYPERSeries 3000
Hyper-converged Backup and Recovery Product of the Year
Cohesity – DataProtect
ExaGrid - HCSS for Backup
Syneto - HYPERSeries 3000
PaaS Solution of the Year
CAST Highlight - CloudReady Index
Navicat – Premium
SnapLogic - Enterprise Integration Cloud
SaaS Solution of the Year
Adaptive Insights – Adaptive Suite
Impartner – PRM
IPC Systems - Unigy 360
Ixia - CloudLens Public
SaltDNA - Secure Enterprise Communications
x.news information technology gmbh – x.news
IT Security as a Service Solution of the Year
Alert Logic – Cloud Defender
Barracuda Networks - Essentials for Office 365
SaltDNA - Secure Enterprise Communications
Votiro - Content Disarm and Reconstruction technology
Cloud Management Product of the Year
CenturyLink - Cloud Application Manager
Geminaire - Resiliency Management Platform
Highlight - See Clearly - Business Performance Acceleration
HyperGrid – HyperCloud
Rubrik – CDM platform
SUSE - OpenStack Cloud 7
Zerto - Virtual Replication
Storage Company of the Year
Acronis
Altaro Software
DDN Storage
NetApp
Virtual Instruments
Cloud Company of the Year
Databarracks
Navisite
Six Degrees Group
Storm Internet
Hyper-convergence Company of the Year
Cohesity
Pivot3
Syneto
Storage Innovation of the Year
Acronis - Backup 12.5
Altaro Software - VM Backup for MSPs
DDN Storage - Infinite Memory Engine
Excelero – NVMesh
Nexsan – Unity
Cloud Innovation of the Year
CloudRanger – Server Management platform
IPC Systems - Unigy 360
SaltDNA - Secure Enterprise Communications
StaffConnect - Mobile App Platform
Zerto - ZVR 5.5
Hyper-convergence Innovation of the Year
Pivot3 - Acuity HCI Platform
Schneider Electric - Micro Data Centre Solutions
Syneto - HYPERSeries 3000
Digitalisation Innovation of the Year
Asperitas – Immersed Computing
IGEL - UD Pocket
Loom Systems - AI-powered log analysis platform
MapR – XD
For more information and to vote visit: www.svcawards.com
How well does your company communicate internally? Specifically, how well do your IT departments communicate with each other? Enterprises typically contain four or more IT sub departments (Security, Network Operations, Virtual DC, Capacity Planning, Service Desk, Compliance, etc.) and it’s quite common for them to be at odds with each other, even in good times. For instance, there’s often contention over capital budgets, sharing resources, and headcount.
By Keith Bromley, Senior Solutions Marketing Manager, Ixia.
But let’s be generous. Let’s say that in normal operations things are usually good between departments. What happens if there’s a breach though, even a minor one? Then things can change quickly. Finger pointing can quickly result, especially if there are problems with acquiring accurate monitoring data for security and troubleshooting areas.
So, what can you do? The answer is to create complete network visibility (at a moment’s notice) for network security and network monitoring/troubleshooting activities. Here are five practical steps that address the most common sources of issues for IT organizations (a conceptual sketch of the role-based access point follows the list):
Add taps to replace SPAN ports. Taps are set-and-forget technology, which means you only need Change Board approval once, to insert the tap, and you are done.
Add a network packet broker (NPB) to eliminate most of the other Change Board approvals and eliminate crash carts. The NPB sits after the tap, so you can perform data filtering and distribution whenever you want. By implementing a tap-and-NPB approach, you may be able to reduce your MTTR by up to 80 percent.
Add an NPB to perform data filtering, sending the right data to the right tool whenever you need it. This improves data integrity to the tools and improves time to data acquisition.
Add an NPB to create role-based access to filters. This eliminates the “who changed my settings” issue and allows multiple departments to share the same NPB (see the sketch below).
Add virtual taps to get access to the often hidden East-West data in a virtual data center or cloud network.
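The sketch below models the role-based filter ownership idea in TypeScript. It is a conceptual illustration only: the types and method names are invented for the example and do not correspond to any real NPB’s API.

```typescript
// Conceptual model of role-based filter ownership on a shared packet broker.
// All names are illustrative, not a real NPB API.
type Department = "Security" | "NetOps" | "ServiceDesk";

interface FilterRule {
  id: string;
  owner: Department; // only the owning team may edit the rule
  match: string;     // e.g. "vlan 120 && tcp port 443"
  forwardTo: string; // the tool port receiving the filtered traffic
}

class FilterStore {
  private rules = new Map<string, FilterRule>();

  add(rule: FilterRule): void {
    this.rules.set(rule.id, rule);
  }

  // Each department can only modify its own rules, so shared use of the
  // broker no longer produces "who changed my settings" incidents.
  update(actor: Department, id: string, match: string): void {
    const rule = this.rules.get(id);
    if (!rule) throw new Error(`no such rule: ${id}`);
    if (rule.owner !== actor) {
      throw new Error(`${actor} may not edit a rule owned by ${rule.owner}`);
    }
    rule.match = match;
  }
}

// Usage: Security and NetOps share one broker without stepping on each other.
const store = new FilterStore();
store.add({ id: "ids-feed", owner: "Security", match: "tcp port 443", forwardTo: "tool-1" });
store.update("Security", "ids-feed", "tcp port 443 || tcp port 8443"); // allowed
// store.update("NetOps", "ids-feed", "any"); // throws: not the owner
```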
No one wins at the blame game; it’s a zero-sum game. Even if one department appears to win, the whole group typically loses. One of the best things an IT department can do is increase network visibility, because it gets at the core of the issue instead of treating symptoms. This is what will help reduce incidents, reduce long-term costs, reduce troubleshooting times, and increase staff happiness.
By Steve Hone, CEO, The DCA
On a regular basis we invite DCA members to submit case studies for the DCA Journal. These articles vary in subject matter, but many highlight the challenges that data centre operators are regularly presented with. We have found that such articles raise awareness of solutions and provide the detail that helps the sector overcome similar challenges.
In this month’s DCA Journal we have a case study from Schneider Electric, detailing an upgrade project at Sheffield Hallam University. What’s interesting is the approach taken by those involved: the parties viewed the project as a partnership and wanted to ensure that, going forward, they could all remain available and flexible in their response for the life of the facility.
Also included is a feature from Blygold, who apply high-performance coatings to external condensers. The case study relates to a project undertaken for Carillion Energy which dramatically increased the life and efficiency of the entire system, delivering an ROI inside six months: a remarkable success story for a simple solution that really does do exactly what it says on the tin!
We read with great interest an article from ebm-papst. This case study focuses on the energy and cost savings achieved through the introduction of EC fans for UBS. Three DCA members, ebm-papst, Vertiv and CBRE, collaborated on this successful project; this is something we are thrilled about, and we look to promote more collaboration between members going forward.
The DCA exists to help its members and those with an interest in the sector. Case studies give readers trusted insight into data centre projects, how they are implemented and the experience gained. Submissions are not limited to this month’s DCA Journal theme: we are always interested in receiving case studies from our members for circulation, and these are added to the central library for continual reference.
The DCA is working on plans for next year and has been asked to support and endorse an even larger number of data centre events in the year ahead. Our own event, Data Centre Transformation 2018, is now scheduled for Tuesday 3rd July 2018 at the University of Manchester.
The structure will be three tracks focusing on:
Data - Digital Business, Digital Skill and Security
Centre - Energy, Connectivity, Hybrid DC
Transformation - Open Compute, Automation, Cloud and the Connected World
So, hold the date in your diary, and plan to come along to take advantage of a wide range of educational workshops and to network with other DCA members and end users.
The remainder of the 2017 conference season is yet again jam-packed with events.
As we come to the end of another busy year, the DCA Journal for 2017 will finish with a ‘Review of 2017’. This will be published in the Winter edition of DCS Magazine, which covers December and January.
If you would like to submit an article, please contact Amanda McFarlane: amandam@datacentrealliance.org
The first port of call was to carry out a sample survey of the data centres in the Carillion estate, to establish whether a deep clean and Blygold coating could improve the performance of the external air source condensers. In most cases the units were found to be regularly maintained, with the coils being washed down twice a year by outside contractors.
Despite this structured maintenance approach, the coils were still being compromised by a steady build-up of dirt and calcium deposits, resulting in restricted air throughput.
It was estimated that following a Blygold treatment the life of the plant would be significantly increased, with energy savings in the region of 10% and an ROI of less than six months.
Based on these initial findings the client was keen to progress to a trial, and a site was selected that was felt to be indicative of the plant within the estate. On this occasion a York chiller was selected at the Hayes data centre facility.
A week prior to the Blygold treatment taking place, the chiller was deep cleaned and underwent an oil/refrigerant change, and a Climacheck data logger was used to monitor performance both before and after treatment.
After Treatment
The initial results following Blygold treatment looked very promising, and the system continued to be monitored remotely, as the full results would only be seen over time under full operating conditions. It did not take long!
Just two days later we received a call from the Engineering Department, who were concerned that we had broken two of their condensers as they were no longer working.
After investigation, it was found that they were still in fully operational condition, just no longer needed. Prior to treatment, three big Denco condensers had been working 24x7x365 to maintain cooling; after the Blygold treatment, two of the three had reverted to standby mode, as only one was now needed.
Talking with the client, it soon became clear that the Engineering Department had never seen the units in standby mode before, which had led to the understandable confusion.
‘As a result of the increased efficiency the engineers also had to visit each server room to INCREASE the temperatures by 5°, as the server rooms were now becoming too cold!’ Bob Molinew, VM
The net effect of the Blygold coating on the York YCAJ76 chiller was compiled in a report by an independent consultant, Dave Wright of MacWhirter Ltd, which highlighted a number of key points.
‘The units now run considerably better having had the Blygold treatment I am just surprised that the units are not Blygold treated from new!’ Greg Markham, Carillion
Based on these positive results the client contracted Blygold to carry out the same process across all nineteen of its other client sites. As a result the client has tripled the lifetime of the coils, reduced its energy bills by 15% and reduced wear and tear on the rest of the system, resulting in lower maintenance costs, increased uptime and fewer call-outs.
About Blygold UK Ltd
Blygold UK Ltd apply anti-corrosion coatings to air source heat exchangers such as chillers, AHUs and air-conditioning units. Blygold coatings can more than triple the life of coil blocks and reduce energy consumption by as much as 25%, particularly on units in corrosive environments such as airports, ports, industrial areas, coastal areas and city centres. The coatings can be applied to new units prior to installation or to existing units already installed on site.
The simplest way to reduce the energy consumption in buildings is to ensure that all Heating, Ventilation & Air Conditioning (HVAC) equipment is fitted with the highest-efficiency EC fans. Those involved in the data centre industry are quickly realising the energy-reduction potential of their buildings through upgrading HVAC equipment to innovative Electronically Commutated (EC) fans. The motor and control technology in GreenTech EC fans from ebm-papst has enabled UBS to benefit from proven, efficient upgrades to its data centre cooling systems.
ebm-papst undertook an initial site survey to review the types of units in use and the potential solutions needed, along with an estimate of the payback period for any new kit. The units in place before the project were chilled-water units using AC fan technology, with an optional switch to lower performance. In order to improve efficiency, ebm-papst recommended upgrading the equipment with EC fan technology. Based on the survey results, a trial was agreed on a single 10UC and 14UC CRAC unit to establish actual performance and energy savings. Data was logged before the upgrade and again once the trial units had been converted from AC to EC.
Post-upgrade trial data revealed that ebm-papst’s EC fan motors absorbed less power than their AC predecessors. Based on this information, UBS decided to proceed with the conversion of all units, installing 191 fans within 76 CRAC units across three different models: 39 x 14UC units, 21 x 10UC units and 16 x CCD900CW units.
Vertiv™ then worked with CBRE (who project-managed the upgrade) to UBS’s satisfaction and without causing disruption to the live data environment. The main element of the project was the replacement of all fans with ebm-papst’s EC-technology direct-drive centrifugal fans, including the installation of EC fans within a floor void that required modification.
Since completion of the EC Technology upgrade project, the following savings have been made:
On average, UBS has seen a 48% energy saving across all units and a payback period of under two years. Other project paybacks include a CO₂ reduction of 5,229 tonnes. In addition to these savings, new control strategy software was put in place, which controls the EC fans on supply air temperature; this saw a further 14% reduction in energy usage. UBS’s data centres are now also benefitting from reduced noise levels, increased cooling capacity and extended fan and unit life.
Project Challenges
UBS operates a 130,000 sq ft data centre in west London, which is fundamental to the operation of the firm’s global banking systems. Within this site there were a number of Down Flow Units (DFUs) operating around the clock, making them crucial to sustaining the required operating conditions for the computer equipment in the data centre. The challenge was to improve the energy efficiency of the data centre, freeing up additional electrical capacity to use on IT resource. In addition, the task was to improve the airflow and the controllability of the cooling units in the data hall.
Project restrictions were extensive given the live data environment, and the upgrade teams were only allowed access to three halls, with only one unit switched off at any one time. However, the upgrade was delivered on time and to budget, without disruption. Work took place while the data centres were live; the project managers had to factor in working space and access around constraints from existing equipment and infrastructure.
ebm-papst replaced the existing fans in the data centre’s DFUs with high-efficiency direct-drive EC fans in the Computer Room Air Conditioning (CRAC) units. UBS’s objective for the project was to reduce drawn power by up to 30%, a 180kW reduction in load that could be reallocated to IT equipment.
The solution resulted in a load reduction of 250kW and an annual power saving of 48%, which allowed UBS to increase IT power consumption while reducing CO2 emissions and energy costs. Nearly five years since the project took place, UBS has continued to see the benefits.
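As a rough illustration of what those reported figures imply, the sketch below works through the arithmetic. The 250kW load reduction comes from the case study; the electricity tariff and the implied project cost are assumptions for the sake of the example, not project figures.

```python
# Illustrative arithmetic only: the 250kW load reduction is reported above;
# the electricity tariff is an assumed value, not a figure from the project.
LOAD_REDUCTION_KW = 250        # measured load reduction after the EC upgrade
HOURS_PER_YEAR = 24 * 365      # CRAC fans in a live data centre run continuously
TARIFF_GBP_PER_KWH = 0.10      # assumed commercial electricity price

annual_kwh_saved = LOAD_REDUCTION_KW * HOURS_PER_YEAR        # 2,190,000 kWh
annual_saving_gbp = annual_kwh_saved * TARIFF_GBP_PER_KWH    # ~GBP 219,000

# A payback period of under two years, as reported, would then imply a
# total project cost below roughly twice the annual saving.
implied_max_project_cost = 2 * annual_saving_gbp             # ~GBP 438,000
print(annual_kwh_saved, annual_saving_gbp, implied_max_project_cost)
```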
The energy savings from the EC fan replacement project were exactly as predicted, and the dramatically lower monthly energy reports meant there was no need for any additional analysis. The EC fans have continued to deliver energy savings and, through increased reliability, a reduced maintenance burden for CBRE and UBS.
Heating, Ventilation and Air Conditioning (HVAC) systems can be responsible for over half the energy consumed by data centres. In cases where energy is limited, improving the energy efficiency of HVAC equipment will result in an improved allocation of energy resource to IT equipment. Whilst many new data centre facilities built in the UK already incorporate EC fans in their HVAC systems, most older buildings continue to use inefficient equipment. Rather than spending capital on buying brand new equipment, often the more cost-effective option is to upgrade the fans in existing equipment to new, high efficiency EC fans.
With ebm-papst GreenTech EC fans, the impeller, motor and electronics form a compact unit that is far superior to conventional AC solutions. The UBS project is an excellent example of how upgrading from AC to EC technology can deliver both energy savings and CO2 reductions.
Working with Advanced Power Technology (APT), an Elite Partner to Schneider Electric and specialist in data centre design, build and maintenance, Sheffield Hallam University has undertaken work to deploy a state-of-the art highly virtualised data centre as part of a £30m building development at Charles Street in central Sheffield.
APT’s installation is based on Schneider Electric InfraStruxure integrated data centre physical infrastructure solution for power, cooling and racking. The new facility is managed using StruxureWare for Data Centers™ DCIM (Data Centre Infrastructure Management) software to maximise the efficiency of data centre operations.
With a pedigree dating back to the early 19th Century, Sheffield Hallam is now the sixth largest university in the United Kingdom with more than 31,000 students, around 20% of whom are postgraduates, and over 4,500 staff. One of the UK's largest providers of tuition for health and social care career paths, and teacher training, it offers around 700 courses across a wide range of disciplines including Business and administrative studies, Biological sciences, and Engineering & Technology.
The university has a range of research centres and institutes as well as specialised research groups. Research grants and contracts provide an important source of income to support work at Sheffield Hallam; in May 2013 the university was awarded £6.9m from the HEFCE Catalyst Fund to create the National Centre of Excellence for Food Engineering, to be fully operational by 2017.
Sheffield Hallam University is situated on two campuses comprising 12 major buildings in the centre of the city of Sheffield. Its IT department operates two data centres, running as an active-active pair in which each location provides primary IT services as well as offering failover support to the other.
“Services provided by the IT department are typical of those required by any university,” says Robin Jeeps, Project Manager for Sheffield Hallam. “We host the website, the intranets and common applications such as Exchange, Outlook and Office, in addition to the student management systems, virtual learning environments, library systems and CRM (customer relationship management) systems.”
In terms of hardware, the university has adopted a virtualisation policy, running between 800 and 900 Virtual Machines on about 70 blade servers distributed across both data centres. It also has a small high-performance Beowulf compute cluster to support research projects but for the most part the main concerns for the IT department are high availability, reliability and cost.
As one of the existing data centres was located in a building whose lease was due to expire, the IT department took the opportunity presented to move the IT facility into the Charles Street development and upgrade its capabilities to improve efficiency and availability.
Following a contract tender, APT was selected to provide and install the cooling and power infrastructure equipment and the DCIM software necessary to manage it efficiently. Thanks to virtualisation, the number of physical servers the university needed to maintain services had dropped from 60 devices in the older data centre to 15 in the new Charles Street facility.
“We can now run on one chassis what we would have run in three racks before,” says Robin Jeeps. “That makes a big difference.”
Located at the new Charles Street data centre, the IT equipment racks are installed within two APC by Schneider Electric InfraStruxure systems with Hot Aisle Containment (HACS) to ensure an efficient and effective cooling supply. Two 300kW free-cooling units supply chilled water to the HACS and, within the equipment racks, APC InRow cooling units maintain optimum operating temperatures.
The HACS segregates the cool air supply from the hot exhaust air, preventing both streams from mixing and enabling more precise control of the cooling according to the IT load’s requirement. At the same time, locating the InRow cooling units next to the servers and storage equipment also reduces the cooling energy requirement by eliminating the need to move large volumes of air in a suspended floor space.
Crucial to maintaining efficient operation is the adoption of Schneider Electric’s StruxureWare software. This marks the first time that Sheffield Hallam has had an integrated management system for monitoring all aspects of its data centres’ infrastructure, according to Robin Jeeps.
“We had a variety of software packages in place before,” he says. “But StruxureWare for Data Centers provides us with a much more integrated solution. As long as something has an IP address, we can see it in StruxureWare and monitor how it is working. Previously we had to go through physical switches and hard-wired cables to monitor a particular piece of equipment.”
Jeeps says that the homogeneous integrated management environment proposed by APT was crucial to its winning the contract to supply the data centre infrastructure. “We kept the IT side of the contract separate from the overall development of the building,” he says. “When we studied APT’s tender we liked the clear design they presented and the consistent management of our infrastructure that it made possible.”
The new management capabilities presented by StruxureWare will give Sheffield Hallam the flexibility to monitor its infrastructure for maximum efficiency and to manage how it makes its services available to students and researchers. Jeeps says that this will allow the university to tender for research contracts that it had hitherto been unable to pursue.
“We don’t currently provide cost charging or resource charging of IT services to our departments and I doubt that we ever will,” he says. “I don’t think that’s the best way for a university to operate. But if we were undertaking a research project, for example, which works on fixed funding and had to itemise how much the computing support would cost, we have the tools to do that now. We never had anything like that before.”
Another potential benefit offered by StruxureWare is the benchmarking of the overall system efficiency, especially with regard to how well the cooling infrastructure operates as a percentage of the overall power budget of the data centre. PUE (Power Usage Effectiveness) ratings are increasingly being used to compare one data centre’s efficiency with its peers.
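For readers unfamiliar with the metric, the sketch below shows how PUE is derived; the readings are hypothetical, not Sheffield Hallam’s actual figures.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A PUE of 1.0 would mean every watt reaches the IT equipment; anything
    above 1.0 is overhead consumed by cooling, power distribution and so on.
    """
    return total_facility_kw / it_load_kw

# Hypothetical readings for illustration only:
print(round(pue(total_facility_kw=500.0, it_load_kw=350.0), 2))  # -> 1.43
```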
“It’s a bit of a ‘chicken and egg’ situation,” says Jeeps. “Until we saw the capabilities of the software we didn’t know what monitoring, reporting and capacity planning was now possible. Previously, we could only have done some rough calculations using a tool like Excel but the capabilities we have now will spur us on further to think about all sorts of things we can do.”
“Working with Advanced Power Technology and Schneider Electric has been an efficient and productive partnership from start to finish,” said Robin Jeeps. “The services they provided have been professional, thorough and at times very patient in terms of solving some of the challenges we’ve had to correct throughout the deployment stages. They remained focused on delivering an intricate solution that would meet our expectations and point of view as a customer, at all times.”
John Thompson from APT explains; “when we build a data centre for one of our clients we look on the relationship as a partnership. It is very important for us to understand the long-term requirements so that we can design for future possibilities in order to remain available and flexible in our response, throughout the life of the facility. This is one of the reasons we chose to deploy a complete Schneider Electric ‘engineered as a system’ data centre solution for the Charles Street room. To begin we built a virtual data centre within the StruxureWare for Data Centers™ software suite, so that the stakeholders could have a ‘3D walk round’ and provide feedback on the solution they were getting prior to delivery. Whilst this resulted in quite a few design revisions it helped to ensure that APT delivered exactly what was expected.”
What do you know about where your telecommunications and mobile provider stores, manages and secures your personal data? asks Alastair Hartrup, Global CEO of Network Critical.
Having revealed that the UK generated £6.8 billion worth of investment in digital tech last year – 50 per cent more than any other European country – Tech City UK’s Tech Nation 2017 Report highlighted the crucial role the sector plays in economic and business growth.
By Leo Craig, general manager, Riello UPS.
But with this growth comes added risk. The increased pressure on the UK’s power supply has the potential to lead to a number of issues, including possible power fluctuations and disturbances, blackouts and voltage spikes, all of which can have a major impact on business productivity.
To minimise these risks, data centres require reliable and stable power that is protected by an uninterruptible power supply (UPS). The UPS acts as the first line of defence in this environment, but as with any electronic device, it’s likely to need repair at some point during its product lifecycle. It’s vital, therefore, that businesses have a robust maintenance regime in place to prevent downtime and ensure efficiency remains intact.
This article explains the most common errors that can be overcome with maintenance checks and gives some top tips for putting a plan in place to keep your data centre running seamlessly.
Bespoke data centre protection
Regardless of the sector in which they operate, data centres should be resilient by nature. This is fundamental – not only to minimise risk, but also to ensure operations remain fail-safe and efficient. As UPS systems are the backbone of minimising this risk – providing a lifeline when the input power source or mains supply fails – it’s crucial to keep up a regular and robust maintenance regime.
It’s all well and good to know that you should be carrying out regular maintenance, but what’s important is to put specific checks and balances in place to suit your way of working.
Human error
Since the summer of 2017, British Airways has become an example of how human error can bring catastrophic cost to a company, not to mention tarnish its reputation with customers and partners for years to come.
Whether it be engineers throwing the wrong switch, or carrying out a procedure in the wrong order, human error is the main cause of problems that occur during UPS maintenance procedures. In such instances, it may seem easy to place the blame solely on the engineer when in fact errors of this kind are often a result of poor operational procedures, poor labelling or poor training. By tackling these issues from the outset and throughout the UPS installation, risks can be avoided.
For example, if the system being installed is a critical system comprising large UPSs in parallel and a complex switchgear panel, castell interlocks should be incorporated into the design. Castell interlocks force the user to switch in a controlled and safe fashion, but are often left out of the design to save costs at the start of the project.
Simple things can make a difference. By ensuring that basic labelling and switching schematics are up-to-date, disaster can be averted. Having clearly documented switching procedures available is recommended. If the site is extremely critical, a Pilot/Co-Pilot procedure (two engineers both check the procedure before carrying out each action) will prevent most human errors.
Use the latest technology
UPS maintenance is intrusive by nature, so reducing downtime is only a good thing. Common problems, including failing electrical components, are preceded by an increase in heat. If a connection point isn’t tightened properly, for example, it will start to heat up and eventually fail in some way.
It’s not always (if ever) possible to check every connection manually, which is where thermal imaging technology comes in handy. This technology can identify potential issues that wouldn’t necessarily be picked up using conventional methods, without the need for physical intervention.
Select the right provider
When it comes to selecting a supplier, it must be one you feel comfortable with. Do your research and find a supplier that offers a bespoke solution for your requirements with a robust provision for spares and guaranteed response time.
24-hour equipment monitoring significantly strengthens protection against power failure and should therefore be part of any data centre’s maintenance package. What’s more, rigorous training is key to ensuring that field service engineers are able to carry out their work in a timely and efficient fashion. You should also be clear on exactly what the ‘response’ constitutes - will it just be a phone call or will it be someone coming to site, and, if so, will that someone be a competent engineer?
Unlike other manufacturers, Riello UPS stocks all spare parts and components in strategically placed warehouses across the UK, combined with a multimillion-pound stock holding at its Wrexham headquarters where UPSs up to 500kVA are ready for immediate dispatch.
Finally, you should never be afraid to ask questions of your maintenance provider. Remember, it’s your responsibility to request proof of competency levels – both of the company itself and of the engineers it uses.
Undertaking a review of your current UPS maintenance procedure will help to identify and reduce risks to critical operations that you may not have previously anticipated. By applying an extra level of due diligence today, you can help to avert disaster tomorrow.
Riello’s Multi Power Combo was recently awarded Data Centre Power Product of the Year for its outstanding capabilities combining high power in a compact space. Multi Power Combo is part of the modular range developed by Riello UPS and features both UPS modules and battery units in one.
With the cost associated with downtime on the rise – not only in terms of revenue, but also company reputation – businesses need to be more aware of the importance of power protection and the benefits of a reliable, well-maintained UPS.
Complex industrial environments, such as data centres, will always require exceptional levels of resilience and reliability under all operating conditions. Having the right UPS and support in place will give peace of mind that even when the worst happens, the impact on the business can be managed.
Fruition Partners and ServiceNow have created an enterprise service management solution for the Travis Perkins Group, now supporting EaaS implementation for IT, HR and logistics and providing a self-service portal which offers 30,000 staff a wide range of products and support, across 20 businesses and 2000 sites.
Travis Perkins plc is one of the UK's leading distributors of materials to the building and construction and home improvement markets.
The Group operates more than 20 businesses from approximately 2,000 sites and employs over 30,000 people across the UK. With a proud heritage that can be traced back over 200 years, the Group’s employees are continuing that tradition by working with their customers to build better together.
As a FTSE 100 company boasting revenue of £5 billion, it has helped in the delivery of major building and infrastructure projects including Heathrow Terminal 5 and the Shard. In addition to Travis Perkins itself, its 20+ industry-leading brands include Wickes, Tool Station and Tile Giant, making it the largest supplier of building products to consumers and the trade in the UK.
The company is part-way through the delivery of an ambitious five-year growth strategy. This includes investing in IT to create greater integration and efficiency across the business through innovation, multi-channel transactional support, plus re-engineering and upgrading legacy systems to provide enhanced infrastructure, and a wider and deeper capability based on open source and cloud architecture. Improved service and support plays a key part of that, based on the ServiceNow platform.
In March 2013 the company put together a five-year roadmap for IT development that would include investments that would more than double the IT budget. Technology is seen to be a key enabler of strategic change at Travis Perkins rather than a support function. CIO Neil Pearce was recently quoted as saying that by making these investments: “We’ll have a much more efficient business – one that’s able to attract and retain people because we’ve got a better set of systems and processes. We’ll be at a point where we’re making use of our digital capabilities to create much better services for our customers.”
At an early stage of the five-year plan, Tech Director Matt Greaves established the Service Development & Change team alongside the Service Delivery team to deliver changes without dropping the ball. With multi-channel sales platforms, a commitment to cloud-based computing and support for trends such as BYOD (bring your own device), IT service management (ITSM) was going to be required to play a significantly enhanced role in future.
The Group’s legacy systems were unsuited to the task, based as they were on a ‘break/fix’ approach, and offered no proactive review of trends and issues from a top-down perspective, which meant the team was constantly ‘fire-fighting’.
The initial approach was driven by the principles at work across the IT department: to salvage what was possible from legacy systems; to adopt an Agile collaborative approach to new development; and to focus on cloud-based technology. Wendy Collison, Project Manager - Service Development, says, “We needed transformational change to support a demanding agenda. With the introduction of a five-year commitment we invested in a new ITSM tool - ServiceNow. With the Group strategy all about cloud and collaborative technology, ServiceNow was the perfect solution.”
From the start of the ServiceNow implementation in 2014, Travis Perkins turned to Fruition Partners UK for support in configuring and implementing ServiceNow. With only a small in-house team at Travis Perkins IT, Fruition Partners were able to recommend the best practice approach in implementing ServiceNow and to fill the resource gaps where needed in the early implementation.
Over time, Travis Perkins IT has built up its ServiceNow capabilities and its in-house team is led by Wendy Collison who says that she still relies on Fruition Partners for “the more complex integration and scripting work, and for their experience of what is possible with ServiceNow, as it’s such a powerful tool and the capabilities are expanding all the time.”
Wendy is also full of praise for Fruition Partners’ teamwork: “They have really invested in getting to know us over the years, and now they are completely part of the team. Whatever we achieve, we get there together!”
In July 2014, the Service Delivery team launched the first iteration of a self-service website, known as SolveIT. The initial portal was focused entirely on IT support, giving users the ability both to log IT incidents and track progress, and to look up information to help fix issues for themselves.
The response to the portal has been, from the outset, extremely positive. As Wendy Collison says, “People love the flexibility and transparency: they know when something will be fixed and it’s ended the need for numerous repeat calls asking for updates. This means that the service centre can concentrate on better service, and anticipating problems before they happen, not just keying in what our colleagues ask for and reading the current status from the screen.”
SolveIT is now fully accessible to all 30,000 colleagues across the Group, enabling them to make requests for hardware and software, log incidents and request support. Currently 10 percent of incidents and all service requests go through the portal which Wendy Collison says is a good achievement: “Many of our colleagues in the stores or warehouses don’t necessarily have access to IT so they don’t tend to use the portal, but the take-up and usage rates among colleagues who do have that access has grown substantially and continues to do so.”
Over the last year or so, the IT Services team have gone on to implement a second phase designed to stabilise and optimise the services offered to the Travis Perkins business.
This stabilisation phase has focused on moving from a ‘fix fast’ approach to a ‘fail less’ approach. Historically, the teams maintained a healthy average speed of answer (ASA) on the desk and kept outstanding incidents to a minimum, but this did not improve the overall service because the number of incidents wasn’t dropping; teams were merely fixing them faster.
The capabilities offered by ServiceNow enabled the teams to conduct effective problem management trend analysis on repeat incidents. This resulted in an over 20% reduction in incidents as well as improved availability and customer satisfaction. ServiceNow also allowed the team to link incidents to changes, risks and problems, enabling much faster root cause analysis, with this indicator improving impressively from 40% to 80%.
The current focus of the optimisation is to move fully to a services-based organisation based around a service catalogue, detailing all the business services the organisation provides along with associated costs and commitments; something Wendy Collison describes as “a bold but achievable plan”.
She goes on: “The first step was to create a successful product catalogue enabling colleagues to purchase devices from our self-service portal, such as phones, laptops, printers etc. Moving forward, we aim to create a full service catalogue where businesses will be able to purchase readily available solutions such as websites, business analytics and customer relationship solutions.”
One of the other developments that is underway for the portal is the development of an online branded clothing store for Travis Perkins staff who will be able to select uniform, schedule delivery and organise returns if necessary via the automated system.
Also in the pipeline is a plan to work with Fruition Partners to use the ITSM and ITOM toolsets provided by ServiceNow to map all of the Group’s services, linking them to fully-discoverable configuration items which will be proactively monitored by the various monitoring toolsets and integrated into a single ‘pane of glass’ portal, all facilitated by ServiceNow.
Wendy Collison continues: “Using the existing incident, problem and change capabilities, our colleagues will be able to view the services Group IT offers, as well as the specific portfolio of services that they have signed up for. They’ll also be able to examine cost of services consumed, as well as their performance against our commitments, all via a cloud-hosted and mobility-centric solution.”
In addition to providing IT service management, Travis Perkins has extended its use of ServiceNow to encompass a range of other support functions. HR has been enthusiastic in the development of its own self-service portal, which supports its case management and employee relations work, as well as providing knowledge-sharing facilities.
Other applications include support for logistics functions: for example, ServiceNow has been deployed in the Group’s Range Centres which act as warehouses and distribution hubs allowing all branches to offer next day delivery on any heavy-side product. The Range Centre staff can use the portal to log issues with equipment and request support.
Similarly, issues with the Group’s websites are logged and managed using ServiceNow. This is in fact done by an external team who access the ServiceNow platform – just one example of how ServiceNow is heavily integrated with third-party applications run by a variety of external suppliers who provide IT services to Travis Perkins. Wendy Collison’s team can then, in turn, use it to monitor delivery against SLAs and ensure that integration is working as it should. The team is also currently looking at using the ServiceNow orchestration module to help them further in integrating third-party applications.
Over time, the Travis Perkins Group’s use of ServiceNow and its work with Fruition Partners has evolved considerably, as it has graduated from ITSM to ESM (Enterprise Services Management). In Wendy Collison’s view, “We’ve continued to invest in our use of ServiceNow as part of the Group’s commitment to IT as a driver of change, and we can now see that paying off in a significant way”.
Events over the last 12 months have seen cyber security firmly cemented as a major concern for businesses across the globe. High-profile attacks, such as the recent WannaCry ransomware attack that hit the NHS particularly badly, and the Mirai botnet, which took down websites with thousands of users including Netflix and Twitter, have forced company directors to urgently evaluate their cyber security provision.
By James Plouffe, Lead Solutions Architect, MobileIron.
In the same way it is being taken seriously by company directors, cyber security is now a topic of major importance in the corridors of government. The newly created National Cyber Security Centre – a part of GCHQ – opened earlier this year in London, and has already warned company directors not to bury their heads in the sand when it comes to looking at cyber security threats. London is also preparing to cement its place as a world-leader in the fight against cyber attacks and threats with an investment of some £14.5 million in a new innovation centre to develop the next generation of cyber security technology.
However, despite these powerful resources being deployed in the battle against cyber crime, company directors need to ensure they remain aware of the latest threats their businesses might face. Knowledge such as this is vital to ensuring an effective anti-hacking strategy is in place – and is important as a starting-point for taking heed of the NCSC’s advice not to bury their heads in the cyber-security sand.
If your firm is likely to engage in high-profile activity in the coming months, such as a merger or acquisition, or is getting ready to float on a stock exchange, directors need to be ultra-aware of potential cyber attacks that might result from an increased corporate profile.
So-called “hostile actors”, such as criminals, terrorists and even foreign states, may latch on to companies that are planning to restructure or change the way they are managed. News such as a merger or acquisition, or joining together of previously separate departments, or even news of major new investors planning to stamp their own authority over a business may well send the message that a period of change – and resulting insecure systems – could be imminent. They will then strike at the best opportunity to steal large amounts of sensitive data, which could include intellectual property, strategic data or research and development plans.
Companies are increasingly employing a mobile workforce. The advent of homeworking and hot-desk working practices, coupled with an increased amount of people travelling overseas to take advantage of international business expansion opportunities, has seen reliance on shared devices become more important than ever.
Likewise, employee expectations are changing rapidly and they are increasingly demanding uninterrupted access to data and the freedom to work anywhere, anytime, on their device of choice. Today’s business leaders must figure out how to create an engaged, mobile, and global workforce, otherwise they will fall behind their competitors.
However, the tools they use to ensure they can achieve this have to be watertight and stand up to any form of cyber attack. Although there is some disagreement on the exact figure, the consensus among researchers is that some 50 billion devices will be connected by 2020. While this presents exciting opportunities, it also provides increased opportunities for hackers and cyber-criminals to exploit any weak infrastructure links.
It is therefore essential that company bosses have a solid infrastructure in place that only allows access to authorised devices. Without this, we will see further examples of “weaponised” IoT devices targeting organisations, such as last year’s DDoS attacks on Dyn. To prevent this, companies need to fortify their network perimeters and do all they can to ensure all employees’ devices are secure.
Companies are watching their bottom line more than ever as they operate in an increasingly competitive environment. As a result, there is a danger of deploying an “if it’s not broke, don’t fix it” mentality when it comes to investing in new equipment.
However, this could be an incredibly damaging false economy. While enterprise computing is currently undergoing an evolution that will make OS upgrades and updates much simpler, companies will need to overhaul any outdated hardware and systems to take full advantage of this. Although such processes can, of course, be complicated, time consuming and expensive, it is those firms which are capitalising on innovations and keeping technology up to date that will fare the best against the crippling attacks that have taken down some of their competitors.
It is likely that traditional cloud security solutions are nowhere near adequate to protect data from falling into the wrong hands through unsecured mobile apps and devices.
Company directors responsible for technology provision need to opt for a cloud security system that embraces the unique proficiencies offered by Enterprise Mobile Management (EMM). By embracing EMM, IT admins will be able to outline granular cloud access control policies at the level of application, IP address, user and device identity. This will allow IT to bridge the gap between mobile and cloud security, allowing for a more detailed understanding of how users are accessing enterprise cloud services and therefore arming you with the tools to better protect your cloud data.
All company directors need to ensure they have a working knowledge of the latest regulatory developments – and make sure their systems don’t let them down when it comes to meeting these standards. For example, the issue of “jailbroken” or “rooted” devices, which can access company data, is an urgent problem that must be addressed. This becomes even more pressing in the face of the General Data Protection Regulation (GDPR) that aims to strengthen and unify data protection for all individuals within the EU.
As with many things, when it comes to beating cyber attacks, knowledge is power. If company directors maintain a solid working awareness of the issues that might affect them, it will create a powerful partnership with organisations such as the NCSC – one that must be exploited as heavily as possible in the on-going fight against cyber crime.
New to 4D Data Centres this year, The Network Factory discusses the benefits of having a new data centre partner conveniently located just 30 minutes away at Gatwick, offering all the support and knowledge needed by a fast-growing business as it focuses on meeting the high demands of its customer base.
The Network Factory prides itself on supporting the IT and platform requirements for renowned businesses such as ITV and the Halifax and hi-tech companies including Matchbox Mobile. To meet the varied requirements of its different customers The Network Factory works with a number of critical partners, one being 4D Data Centres, to underpin cloud migrations while reducing risk and costs through the delivery of better software and infrastructure.
Finding the right colocation partner in this day and age, when security is of paramount importance, is no easy feat. “Until recently we used a data centre in Slough, but being based in East Sussex meant that we were spending a lot of unproductive time on commuting alone, not to mention the challenge of meeting the requirements of on-site SLAs,” commented Richard Stubbs, Director, The Network Factory. “When we heard that 4D Data Centres had opened a new facility at Gatwick we thought it worth exploring a new partnership and we haven’t looked back since we migrated our infrastructure to them in early March this year.”
With a limited internal resource Richard and his team found that having a data centre within easy reach was highly convenient, and the support he got from the 4D Data Centre team was also outstanding.
“Not only was the migration over to the new data centre almost seamless, with minimal disruption to ours and our clients’ businesses, but also the wealth of knowledge amongst the support team at 4D was, and continues to be, superb. They made something that could have been extremely disruptive so easy for us.”
While it’s too early in the relationship to talk about results, The Network Factory team have already saved hours in commuting to their data centre as well as time spent supporting their infrastructure. “We already trust the 4D Data Centres team greatly. The locality, the excellent connectivity, the 24/7 support and easy access make them the ideal partner as we look to expand our services and support for our customer base.”
Centrality supports Saïd Business School, University of Oxford, transforming the IT infrastructure.
It is a world-leading, innovative centre of learning for postgraduate students in business, management and finance. Rooted in the 800-year-old University of Oxford, the School is known for world-leading education and research. Its mission is to be a world-class business school community that tackles world-scale issues. Having selected Centrality following a rigorous tendering process, the school has stated that it can have total confidence in delivering ‘world class IT services for a world class organisation’ that its students, staff and faculty members recognise, appreciate and can be proud of. Both Centrality and Saïd Business School have been awarded ‘Cloud Project of the Year 2017’ at the Real IT Awards 2017 for their work on this implementation.
The school’s new CIO, Mark Bramwell, had inherited an ageing, obsolete, non-resilient infrastructure that was out of support and maintenance. There were several single points of failure and no disaster recovery plan or failover in place. Support and maintenance was stretched across a small internal team (1.6 FTE), underpinned by expensive day-rate contractors and consultants, with support provided Monday to Friday 08:00-18:00. The infrastructure was not fit for purpose for a world-leading business school; it was prone to failure and loss of service, which was undermining the standing and credibility of, and confidence in, IT at the school.
A forward-looking and robust IT strategy was essential. Every day, week and year the school serves thousands of students – all of whom invest greatly to learn and study at the School – as well as staff and faculty. They all understandably and reasonably expect the school’s IT systems and service offer to match its reputation for excellence. An up to date IT service offer would therefore be needed to support all these VIP customers and regain a level of trust and confidence in IT.
“The quality, passion and customer service focus of the Centrality team stood out from the competition.”
The solution would be multi-faceted:
A complete IT service partner was needed to serve the needs of the entire business school and all of its students, staff and faculty. The partner would work alongside the school to assist in streamlining the organisation’s technology and network related processes; mitigating issues and resolving challenges, as well as providing hands-on management during the transition to the new service.
A managed private wide area network (WAN) formed part of the Centrality proposal, as did a round-the-clock, 365-day a year hosted data centre with full resilience, failover and disaster recovery for absolute peace of mind, together with an incentivised future migration plan to Office 365.
The proposal went to a panel of six members which included representatives from the central university infrastructure and procurement teams, and CIO Mark Bramwell. From an initial response of 19 companies, a shortlist of four was selected with Centrality being the unanimous choice of the panel.
The University of Oxford had never previously looked towards external hosting or provided an external partner with full control over a managed service. Culture, risk and the ‘unknown’ were therefore factors, so it was crucial that the correct steps were followed in the procurement process. With the central university also providing its own hosted data centre services, a degree of collaboration, transparency, political sensitivity, tact and diplomacy was essential.
After contracting, the project commenced in March 2016 and full integration of the new system began in August the same year, all running to schedule and budget. As with all projects undertaken by Centrality, members of the senior executive and management team were involved throughout on a hands-on basis.
Saïd Business School now enjoys an ongoing close relationship with Centrality’s executive team and its round-the-clock support service. The school has stated that it now has total confidence in providing an up to date, supported, maintained, resilient and highly available ‘world class IT service’ that its students, staff and faculty members reasonably expect and fully appreciate.
“As a result of the school’s partnership with Centrality, its infrastructure services have been transformed … I can now sleep again at night!” comments Mark Bramwell, CIO, Saïd Business School.
Solid State Drives (SSDs) are a popular option for IT professionals looking to boost performance in their data center, but the question of how best to implement SSD storage often arises. Automated caching and tiering can provide the solution. But while both caching and tiering provide a layer of application acceleration that results in a cost-effective way to improve performance and get more out of your applications, they differ in a number of ways. They are not interchangeable, and it is essential for IT administrators to understand the purpose of and difference between each.
By Cameron Brett, STA Board Member, Toshiba America Electronic Components, Inc.
PCIe/NVMe is a high-performance option, but when provided in an add-in card form factor the drive is not very serviceable. 2.5”/U.2 and M.2 form factors are growing in availability, but PCIe drive slots will remain rare until the ecosystem becomes more established. And while vendors are creating controllers that take advantage of the PCIe bus's extra performance and reduced latency, the lack of a mature process for managing storage over PCIe will likely limit adoption in the near term.
Both caching and tiering provide a tier-0 layer of high-performance storage, making it easy to improve performance by 2x to 10x or more. But there are differences. Caching provides a seamless layer of flash that the application does not need to be aware of; everything just goes faster in most applications. This makes it the most common method of using SAS SSDs for performance improvement. Since data changes regularly on a front-end cache, a drive with high endurance would be the best fit, such as a SAS SSD rated at 10 drive writes per day (DWPD), also referred to as “write intensive.”
In some cases, tiering also provides a seamless layer, but the additional storage in the tiered layer is part of primary data storage. It acts as a super-fast layer whose added capacity is part of the data storage pool. Since a cache is not counted towards data storage, you get more capacity with tiered storage than you do with caching. A mixed-use (3 DWPD) or read-intensive (1 DWPD) SSD would typically be a good fit for a data drive.
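To see what those DWPD ratings mean in practice, the sketch below converts a rating into total lifetime writes over a warranty period; the 1.6TB capacity and five-year warranty are assumptions for illustration, not figures from the article.

```python
def lifetime_writes_tb(dwpd: float, capacity_tb: float, warranty_years: int = 5) -> float:
    """Total terabytes written over the warranty period implied by a DWPD rating."""
    return dwpd * capacity_tb * warranty_years * 365

# Assumed 1.6TB drives; the DWPD ratings are those described above.
print(lifetime_writes_tb(10, 1.6))  # write-intensive cache drive: 29,200 TB
print(lifetime_writes_tb(3, 1.6))   # mixed-use tier drive: 8,760 TB
print(lifetime_writes_tb(1, 1.6))   # read-intensive tier drive: 2,920 TB
```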
When storing important data, you should have redundancy in the tiered layer. This is most commonly done with RAID 1 (mirroring), or with multiple copies in scale-out and cloud environments. A cache, since it is not part of primary storage and does not hold the only copy of data, doesn’t have to be redundant or backed up (assuming “writes” are confirmed to permanent storage).
SSDs excel when the system requirements are more skewed toward performance, reliability and low power. The list of applications which could benefit from faster storage is vast and adoption of SAS SSDs is growing rapidly, and will continue as SSD prices fall and increased densities make it more cost-effective.
Transactional applications require the speed of the storage system and I/O performance (IOPS) to be as high as possible. 12Gb/s SAS SSDs provide the enterprise proven reliability and performance that is needed. Virtualized environments also do well with SAS SSDs due to their small block sizes and highly randomized workloads. Media streaming takes advantage of higher throughput rates that SAS SSDs provide over SATA SSDs and HDDs.
Applications such as online analytical processing (OLAP), which enables a user to easily and selectively extract and view data from different points of view, and virtual desktop infrastructure (VDI), where a desktop operating system is hosted within a virtual machine running on a centralized server, also benefit from the higher enterprise-class system performance, connectivity and scalability of 12Gb/s SAS storage interfaces.
The cost analysis, or metrics, of caching vs. tiering varies depending on how you measure your datacenter. These could include performance, dollars, power, application transactions or datacenter real estate.
Common metrics compare one solution to another, depending on your application – for example, cost per IOPS, cost per gigabyte, or application transactions per watt.
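As a toy illustration of how such a comparison works, the sketch below computes cost per IOPS and cost per gigabyte for two hypothetical configurations; none of the prices or performance numbers come from the article, and only the method is the point.

```python
# Hypothetical figures for comparing two storage approaches.
solutions = {
    "HDD array":    {"cost_usd": 4000, "iops": 2000,   "capacity_gb": 24000},
    "SAS SSD tier": {"cost_usd": 6000, "iops": 400000, "capacity_gb": 7680},
}

for name, s in solutions.items():
    print(f"{name}: "
          f"${s['cost_usd'] / s['iops']:.4f}/IOPS, "
          f"${s['cost_usd'] / s['capacity_gb']:.2f}/GB")
```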
SAS SSDs are available through well-established suppliers such as Toshiba, Western Digital/SanDisk and Samsung, and a range of software caching and tiering solutions was also available as of March 2017.
Tiering and caching are both used to accelerate datacenter and enterprise applications, but take different approaches. Caching is temporary in nature and typically can better utilize a minimal amount of SSDs. Tiering is more permanent, but requires a higher capacity investment in flash to be effectively utilized. Both offer a cost-effective way to improve performance and get more out of your application. For data center applications that require higher IOPs and faster throughput than hard disk drives, 12Gb/s SAS SSDs are especially well-suited for caching or tiering configurations.
Subscription-based payment models, including Software-as-a-Service (SaaS) solutions offered by vendors such as Adobe and Salesforce.com, are increasingly favoured by businesses because they enable cost-efficient and flexible access to critical assets.
By Jean-Michel Boyer, CEO, BNP Paribas Leasing Solutions UK.
Leasing offers customers a similar experience, enabling them to scale their operations without compromising budgets. That’s why IT leasing is a growing market in the UK. According to the Finance & Leasing Association (FLA), in June 2017, new business for IT equipment finance increased 17% compared to the previous year. Businesses are realising that leasing hardware and software, rather than buying it outright, offers them affordable and sustainable benefits.
These benefits extend not only to business end-users, but also IT suppliers and resellers, the environment, and the wider economy.
As businesses expand, they typically outgrow their existing technologies and therefore need to reconsider their IT infrastructures. However, many growing businesses are challenged by limited access to disposable funds.
Flexible procurement solutions, like leasing, can give them the freedom to access the technology they need without breaking the bank – while also preventing them from being stuck with obsolete or inadequate equipment. Instead, vital cash reserves can be put towards strategic business investment. Within a leasing agreement, regular, fixed payments over a defined contract period also protect these businesses from inflation and other unexpected costs.
Many IT leasing solutions also have the added benefit of factoring in soft costs such as implementation, training and ongoing maintenance for further convenience and budget certainty.
Not all resellers are able to offer their customers a subscription-style contract, given the obvious need to optimise cashflow. By partnering with a finance provider, resellers can bridge this gap: the provider enables them to lease solutions to their customers while still receiving full invoice payment upfront. This means the reseller can recognise revenue immediately whilst the customer benefits from a subscription-style agreement.
IT leasing helps resellers win and retain more customers. Flexible payment terms are in demand: the FLA figures show that more and more businesses within the UK are using finance to procure their IT solutions.
To meet this demand and maintain their competitive position, resellers should integrate leasing solutions into their sales process and encourage their customers to review all their buying options. A leasing contract also encourages the reseller to forge a closer relationship with the customer over time by providing not only physical assets, but also value-add services such as set-up and implementation, consultancy, maintenance and training for staff. This, in turn, creates a more likely environment for repeat business and helps resellers stand out from their competitors.
In addition to supporting the economy and business growth, leasing also has a long-term, positive impact on the environment. Since, in an IT leasing arrangement, the finance provider retains ownership of the assets, it has final control over environmentally responsible disposal of the equipment.
This not only removes the burden of ownership and disposal from the business end-user, it also encourages faster adoption of the ‘circular economy’, an alternative economic model which is guided by three actions: repair, reuse and remanufacture. Simply put, in a world that is vulnerable to growing resource scarcity, the ‘circular economy’ aims to eliminate waste.
Ultimately, with the right finance provider on board, IT leasing represents a win-win solution for everyone.
The volume of data we create is growing at a dramatic rate, and it’s not slowing down anytime soon; by 2025, IDC predicts that the “global datasphere” will be 10x the size it is today, rising to 163 zettabytes.
By Paul Mills, Converged & Partner Solutions MD at Six Degrees Group.
The Internet of Things (IoT) and Big Data are key drivers behind this explosive growth and healthcare, energy and manufacturing businesses – to name but a few – are utilising this huge increase in consumer and industry intelligence to drive revenue and enrich customer experience.
To support this acceleration in data creation and to maximise the opportunities it generates, businesses of all sizes need to plan ahead and build flexibility and security into their IT estate. For most this means implementing a cloud solution, be it public, private or hybrid, to “host” virtual infrastructure, or migrating on-premise equipment to a third party. Ultimately, however, these cloud and “off-premise” solutions have to live somewhere: in data centres.
The growth of data and cloud across all industries – Gartner predicts that the public cloud market alone will grow by 18% in 2017 – combined with issues of data sovereignty and the impending GDPR legislation, means that it’s crucial for the ‘work horses’ – the data centre facilities – to be secure, resilient, scalable and agile. So what does this actually mean?
Cloud technology puts data centre technology on the front line and, despite several recent high profile security breaches, it seems an alarming number of companies only take the security of data seriously once it has been compromised. The Government recently announced that it will be investing almost £2bn into cyber security over the next five years and businesses of all sizes should follow its lead. With data breaches and hacking scandals on the rise – not to mention the occasional incident of human error – it is vital to take security seriously.
Data centre providers are responsible for all physical onsite security to keep customer infrastructure safe. This includes all personnel access to the facility, surveillance, fire and flood protection and 24x7x365 monitoring of all cooling, power and business critical systems. The physical security of a facility plays a huge part in ensuring data is kept safe but it is also crucial that businesses and service providers have robust processes in place to mitigate the threat of any cyber-attack.
Providing cyber security can often be the more complex piece of the security puzzle, so companies should be proactive about it. “Tiering” the data and applications that need to be protected, from the most business-critical applications to the least, establishing a perimeter fence to keep opportunists out and implementing a disaster recovery plan should the fence be broken, are just a few things that should be considered when protecting data. A good data centre provider will advise and empower tenants to ensure they have the right security measures in place.
Effective monitoring is essential to avoid disruption in a data centre (and avoiding disruption is key to keeping data secure!)
The biggest cause of downtime is power supply failure, whilst the highest-profile is cyber-attack. But not all causes are so dramatic: overheating equipment, extreme weather and random events such as animals chewing through cables can also be an issue. With the right monitoring set-up, however, all of these hazards can be avoided. If there is an in-depth understanding of the data centre environment and a suitable level of real-time operational intelligence, as well as backup and recovery systems, downtime is much less likely.
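A minimal sketch of the kind of threshold check such monitoring performs is shown below; the sensor names and limits are hypothetical, and production facilities would rely on dedicated DCIM/BMS platforms rather than hand-rolled scripts.

```python
# Hypothetical thresholds for a handful of environmental readings.
THRESHOLDS = {"supply_air_temp_c": 27.0, "ups_load_pct": 80.0, "humidity_pct": 60.0}

def check_readings(readings):
    """Return an alert string for every reading that breaches its threshold."""
    return [
        f"ALERT: {name}={value} exceeds limit {THRESHOLDS[name]}"
        for name, value in readings.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

# Example: the supply air temperature has drifted above its limit.
print(check_readings({"supply_air_temp_c": 29.5, "ups_load_pct": 72.0}))
```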
When considering cloud adoption, thanks to some clever advertising campaigns from the likes of Apple, end-users (non-technical ones at least!) tend to forget that the cloud-based services and resources reside in a physical location. As the cloud marketplace and data creation continue to grow, data centre facilities will continue to be in demand. They should therefore always be built with scalability in mind so they can cope with customers’ increasing data demands.
With scalability comes agility. Data centres need the best network capabilities to ensure they can deal with bursts in traffic to the servers in the facility. Cost-effective, fast and low-latency connectivity to other data centres and to AWS, Azure and Google ensures that businesses can change and adapt to any situation.
With all this in mind, it is becoming more and more apparent that there is a lack of industry-wide standards available to help guide businesses to the right facility that will meet their requirements.
This confusion should put pressure on data centre providers to ensure expectations and reality are aligned. The EN 50600 series of standards will go some way towards creating clarity so that businesses are better informed and expectations can be managed. As the data centre industry continues to grow, new providers will frequently enter the marketplace, so due diligence and extended vetting should always be a top priority when investing in a third-party data centre.
With a new tech buzz word seemingly being created every other day, as an IT professional it can seem daunting to keep up and intimidating to be at the forefront of a new technology trend. However, if you start with a solid data centre infrastructure there is a lot less to worry about.
With global enterprises’ network managers already working in a forest of cloud and hybrid models, multiple WANs and fast-changing branch IT needs, are the latest iterations of SD-WAN really going to give them a way through the trees to the big prize: agile networks that ensure greater responsiveness to customers along with lower running costs?
By Marc Sollars, CTO, Teneo.
We’ve all seen the predictions: IDC foresees a $6bn global SD-WAN market by 2020, and a raft of global carriers have added SD-WAN to their portfolios already this year, so it’s clear that the market is very confident of this technology’s potential.
The future looks rosy, where enterprises are set to become far more versatile by configuring network performance as never before. But despite all the hype, CIOs and network managers need to take a long hard look at their network performance expectations and vendor capabilities, before committing to an SD-WAN strategy. They need to risk-manage their path through the forest to achieve network agility while avoiding potential cost issues or operational constraints in the future.
The rise of cloud has seen enterprises increasingly use hybrids of on-premise and cloud environments to boost their responsiveness. And to further empower local branches and boost DevOps’ innovations, networks have also gone hybrid: many CIOs use a blend of MPLS, broadband, 4G and public Wi-Fi connectivity. As a result, however, CIOs often struggle to maintain networks’ performance, both world-wide and locally, as well as control mushrooming costs effectively. Trouble has long been brewing in the forest.
Multiple WANs mean more responsiveness, but they also lead to continual upkeep challenges including poor branch application performance, connectivity issues and rising network maintenance costs. This applies even where network managers have implemented WAN optimisation in recent years, because it too demands regular upgrades and in-house IT resources assigned to maintaining it. Behind the scenes, there’s some serious untangling to be done even before you can begin to define your new SD-WAN architecture.
Analysts have warned that despite all the network investments in recent years, global businesses still have infrastructures with sub-par connectivity and core applications that cannot respond adequately to volatile trading conditions, new branch office growth and increasingly-mobilised workforces.
Fortunately, the rise of SD-WAN promises CIOs better control of these scattered networks, sites and applications. Layered over companies’ existing connectivity solutions, SD-WAN tools involve applications and data being abstracted from the underlying infrastructures. As a result, global network performance can be controlled, fine-tuned and automated from a central point. This avoids network engineers needing to be on-site at branch offices to carry out manual network configurations.
With SD-WAN, networking teams can use cheaper links, improve application availability and business units’ productivity, accelerate the set-up of new locations and reduce the ongoing need for in-situ maintenance.
Centralised control promises transformation of local branch capabilities. In particular, SD-WAN helps network managers to make smarter decisions about the routes data will take over the network, depending on business priorities for different applications. Enterprises can build in greater bandwidth for local offices or set up failover rules so traffic automatically switches to the next best available route and avoids downtime.
Sensitive application traffic can be sent down high-grade routes such as MPLS with less sensitive material sent down cheaper routes if deemed less important. Before rushing into an SD-WAN strategy, CIOs would therefore be wise to seize the opportunity to assess other connectivity costs, such as enterprise-grade Internet, at the time of MPLS contract refreshes.
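To make the idea concrete, here is a minimal sketch of the kind of application-aware path selection an SD-WAN controller applies centrally. All names, figures and the policy format are illustrative assumptions, not any particular vendor’s product.

    # A minimal sketch of centralised, application-aware path selection.
    # All names and figures are illustrative, not a vendor's API.
    PATHS = {
        "mpls":      {"cost_per_gb": 0.90, "grade": "high"},
        "broadband": {"cost_per_gb": 0.10, "grade": "standard"},
        "4g":        {"cost_per_gb": 0.40, "grade": "standard"},
    }

    # Business priority per application class: sensitive traffic demands
    # a high-grade route; bulk traffic takes the cheapest available link.
    POLICY = {
        "voip":   "high",
        "erp":    "high",
        "backup": "standard",
    }

    def select_path(app, available):
        """Cheapest available path that satisfies the app's required grade."""
        required = POLICY.get(app, "standard")
        candidates = [p for p in available
                      if required != "high" or PATHS[p]["grade"] == "high"]
        if not candidates:
            candidates = list(available)   # failover: take the next best route
        return min(candidates, key=lambda p: PATHS[p]["cost_per_gb"])

    print(select_path("voip",   ["mpls", "broadband"]))  # -> mpls
    print(select_path("backup", ["mpls", "broadband"]))  # -> broadband
    print(select_path("voip",   ["broadband", "4g"]))    # -> broadband (failover)

In a real deployment the controller would also weigh live path measurements such as latency, jitter and packet loss, but the principle is the same: business priorities per application class, evaluated centrally, with automatic failover.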
While MPLS generally requires the customer to operate expensive edge hardware, usually on a carrier’s term contract, SD-WAN flips the cost model to suit the service user, offering a low-cost commodity item with the intelligence or orchestration capabilities provided at the overlay level. SD-WAN is turning such hybrid network cost equations on their head: a US-based analyst recently estimated that a traditional 250-branch WAN’s three-year running costs of $1,285,000 could be reduced to $452,500 through an SD-WAN deployment.¹
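Taken at face value, and assuming those costs spread evenly across branches and months (our assumption, not the analyst’s), the figures work out roughly as follows:

    # Back-of-envelope check on the cited figures (even-spread assumption:
    # 250 branches, 36 months).
    branches, months = 250, 36
    traditional, sd_wan = 1_285_000, 452_500

    def per_branch_month(total):
        return total / branches / months

    print(f"Traditional: ${per_branch_month(traditional):,.0f} per branch per month")  # ~$143
    print(f"SD-WAN:      ${per_branch_month(sd_wan):,.0f} per branch per month")       # ~$50
    print(f"Saving:      {1 - sd_wan / traditional:.0%}")                              # ~65%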
Carefully implemented, even in complex ‘brownfield’ IT landscapes, the latest SD-WAN implementations are starting to give CIOs new levels of network control and agility, as well as boosting the bottom line and bringing order to network maintenance, support and travel budgets. SD-WAN tools provide the opportunity, as Gartner puts it, for IT teams to ‘align’ IT infrastructures with the businesses they serve; but planning for them also demands that commercial and network priorities are considered together, not in isolation.
To ensure effective SD-WAN planning and guard against network agility strategies placing possible constraints on the business in the future, we recommend that enterprises take time to understand their innovation, future business model, infrastructure and in-house resourcing needs. The need for caution naturally extends to selecting the right SD-WAN provider.
When pursuing an enterprise innovation strategy, the CIO needs to assess how delivering new apps for end customers or new DevOps platforms will affect their bandwidth needs or alter network traffic priorities. For example, if the board is rethinking the business model, how many global locations will be supported and what level of access will be given to new classes of remote workers, partners and contractors?
Different global companies have vastly different network agility goals too. An enterprise might be looking for an SD-WAN vendor that provides them with network optimisation and improved efficiency of circuits ‘out of the box’. Another customer might be focused solely on better voice and video solutions for teams working collaboratively.
To achieve true agility, SD-WAN also forces companies to rethink their security postures, particularly where complicated configurations are required. Security should, of course, be the number one priority for any new network architecture, particularly as more applications move to the cloud. SD-WAN provides the opportunity for a more secure architecture than a traditional WAN, but to achieve this, the security team and its policies must be integrated from the start. Choosing an SD-WAN provider that also understands the security world will be of benefit here.
Despite the market excitement and so many global players adding SD-WAN to their service portfolios, there are also WAN-focused and specialist SD-WAN vendors that deliver more customised capabilities.
Many carriers are signed on with a single SD-WAN vendor, which means their capabilities are defined by what the individual vendor’s product is designed to do and might not suit their customers’ agility needs. IT teams might be better advised to seek a specialist SD-WAN provider that can map out flexible options, without the need for a long-term deal or a large-scale investment.
Each vendor approaches architectural needs or delivers its commercial offering in very different ways. A global carrier offering connectivity-with-SD-WAN services may be an attractive option to a fast-growing company, especially if it provides coverage in territories targeted by the business. Such ‘one size fits all’ deals are simple and save time in the short-term but the customer CIO needs to be aware that the carrier agreement might limit the type of circuits used, and lock the customer into costly contractual commitments over a longer timeframe.
And amid the rush to provide SD-WAN offerings, there’s the old question of vendor responsiveness and reporting. CIOs in fast-growing or hard-pressed enterprises need to satisfy themselves that the new vendor and carrier SD-WAN services established in the last 12 months are truly flexible enough to ensure manageable costs, responsive support teams and compliance with security regulations. To perform such due diligence takes time, which must be carefully balanced with speed to market to ensure the right results.
As we’ve seen, finding the right way through the trees also demands detailed discussions with SD-WAN integrators and vendors that will find the most appropriate options for a global enterprise with unique network agility demands.
SD-WAN tools will undoubtedly give IT teams agile network options to better support business units world-wide. But our view is that IT organisations in a fast-growing company need to carefully assess their connectivity and performance needs and ask how their SD-WAN vendor will bring clarity to implementing and managing these tools’ costs. If not, the CIO will still struggle to see the management wood for the networking trees.
¹ SD-WAN: What is it and why you’ll use it one day, Network World, June 12, 2017.
JBi is a leading digital agency focusing on web design, development and digital marketing. As an integrated team, they have worked with a wide range of domestic and international brands, from ITV and Channel 4 to William Hill and Rolls Royce. As well as design and development, JBi offer continuous support and management for their clients to deliver maximum service performance and availability.
JBi regularly need flexible, high performance hosting solutions as the foundation for their digital campaigns and services. Their clients require a combination of competitive pricing, performance, availability and security.
To fill this need, JBi wanted a strategic hosting partner with whom they could build an effective, long-term partnership. This was vital for the development, implementation and live stages of client campaigns, as was the need to ensure support and service would be delivered to the highest possible standards.
“Our clients often require us to develop high performance digital services, assets and campaigns where hosting is a vital piece of the jigsaw,” explained Raj Bawa, Operations Director at JBi. “To achieve that we need a really strong partner, and while we understand hosting, to do it to the highest standards needs very specific experience and expertise, so it’s not a capability we can develop in-house.”
“Our clients also need us to be flexible. Some have a very clear focus on project budget, while for others the absolute top priority is security. It is essential that we can balance these needs, and our hosting partner needs to mirror our approach and level of flexibility. It’s not easy to find a specialist hosting business with all these capabilities.”
In 2016, having considered a range of options and providers, JBi turned to Hyve as their hosting partner. JBi work with Hyve to identify the right hosting solution for each of their clients – there is no ‘one size fits all’, and every hosting package is designed as a bespoke solution.
“Hyve are experts in their field, a successful growing business with an approach which places service at the heart of their capabilities,” said Bawa. “This gives us tremendous confidence that we can exceed the needs of our clients and deliver digital projects on time, on budget and with the performance they require.”
Hyve has since worked together with JBi to provide hosting solutions across a range of projects and industries, from pharmaceutical and automotive to finance and media.
Hyve specialise in fully managed business hosting services. A team of systems architects and highly trained engineers work with clients to tailor the best possible solution for their needs. Hyve’s methodology is to build a relationship with each client over time, becoming an extension of its business.
JBi and Hyve’s approach to hosting is a continuous process: consult, design, deploy and maintain. The consult phase seeks to understand client needs in order to architect the ideal cost-effective solution, within budget and to a timeline that fits client project planning precisely.
Project deployment focuses on key technical milestones ranging from server build, migration and content delivery, to platform configuration, fine-tuning and launch. Hyve maintains and delivers 24/7 monitoring and support, backed up by on-going performance tuning, giving every client the ability to scale hosting services according to their requirements.
Hyve and JBi work together to deliver digital innovation for JBi’s clients, and by working with Hyve, hosting has now become one of the cornerstones of JBi’s services. “For many of the other digital agencies we see or compete with, hosting is a challenge, a technical overhead and sometimes a point of failure,” said Bawa. “Our partnership with Hyve takes away the hosting headache, which means we can do the same for our clients.”
JBi also values the levels of rapid response support delivered by Hyve’s dedicated account managers. “We have specific people on the Hyve team who we know well, and who we can call if we need advice or help,” explained Bawa. “That’s a very different landscape from many other providers who base their entire support structure behind a ticketing system where you might never speak to the same person twice. It’s really important to us.”
Many people have been lost to cancer, and anyone can potentially develop one form of the disease or another. The race is therefore on to find a cure. Increasingly, pharmaceutical companies and organisations conducting research to push cancer into history realise that the answer may lie in big data, and in sharing it. To achieve this they need reliable network infrastructure that isn’t slowed down by data volumes and network latency. Even with the ever-growing volumes of big data, it’s important that data can flow fast enough to permit accurate analysis.
By David Trossell, CEO and CTO of Bridgeworks.
On 27th April 2017, The Telegraph wrote: ‘How data is helping to select the best new cancer treatments for patients’. The article, which is sponsored by Cancer Research UK, reveals: “Gifts left in wills help Dr Bissan Al-Lazikani and her team to look at vast amounts of data, which can aid them in identifying the best new cancer treatments for patients.”
Talking about her role, she says: “In the past 10 years, we have made huge strides in discovering therapies as a result of understanding cancer at a molecular level. The challenge is to decide which cancer drug targets show the most promise and which should be prioritised for development. We use artificial intelligence to select the best drug targets by combining DNA data from tens of thousands of patients with billions of results from lab experiments.”
Realising that she can’t work as a sole beacon in the search for a cure for cancer, Al-Lazikani has established the world’s largest cancer knowledge database, CanSAR, with Cancer Research UK’s investment. Cancer researchers can access the database free of charge; the aim is to democratise information globally and so increase the odds of beating the disease. It is currently used by 170,000 researchers worldwide, who benefit from the fact that CanSAR offers big-picture data to enable drug discovery and to get new cancer treatments to patients more quickly than before.
In S.A. Mathieson’s ComputerWeekly article, ‘Genomics England exploits big data analytics to personalise cancer treatment’, Anthea Martin, science communications manager at Cancer Research UK, explains that standard cancer treatments don’t work for everyone, because every individual and every cancer is different. She nevertheless argues that IT is central to testing and research, particularly as the high volumes of data present their own problems.
This has led some of the researchers to send hard drives to their colleagues by post. That’s neither the fastest nor the safest way to share data, compared with sending encrypted data over a wide-area network (WAN) with the support, for example, of a data acceleration solution. This mitigates the negative effects of data and network latency, allows even encrypted data to be sent at speed over a WAN, and removes the danger of losing sensitive data held on hard drives or tape.
Writing for Wired magazine, Lola Dupre agrees. The headline of her October 2016 article says, ‘The Cure for Cancer is Data – Mountains of Data’. She writes: “With enough data, the theory goes, there is not a disease that isn’t druggable.” However, plunging into the depths of an individual’s DNA is not enough. She says this is because a cure for cancer requires exabytes of data – a complete universe of it.
Without it, she explains, the ability to detect patterns in a population, to apply machine learning and to find the network mutations responsible for the disease is diminished. Large data sets are therefore crucial; they improve the accuracy and power of the big data analysis, strengthening the predictors.
Yet there is one crucial hurdle that the researchers need to overcome: the data is not readily available, so people need to be encouraged to share their biological data. Even in this field data privacy is important, and people will want to know that their data is being used for a good purpose.
Medical centres and genetic companies that collect this data must then be convinced to offer open access; hoarding it with their own profitability in mind won’t help anyone to find a cure for cancer. Transparency is crucial, and by sharing data on an open-access basis, economies of scale can be attained and the data sets will number in their millions. Unfortunately, Dupre points out that this “volume of information is simply not available, but companies ranging from tech behemoths to biomedical start-ups are racing to solve these issues of scale.”
With the right digital infrastructure and informed data-sharing consent in place, anything is possible. Not everyone will, but many more patients may become happier in the future to share everything from genome data to blood pressure readings. With increasingly patient-friendly tests it will become possible to check each individual for signs of trouble and to act quickly. However, with exabytes of big data to examine, investing in data acceleration solutions will be a must. WAN optimisation solutions and the man in the van just won’t do.
A well-known multinational pharmaceutical corporation is undergoing a modernisation programme. The company wants to move large amounts of data about such things as drugs trials, and other matters, across the globe in the pursuit of a cure for cancer. The type of data includes images that emanate from research labs across the globe.
At present the company runs an average IT infrastructure, but the business wants to move into new and exciting areas. This is leading to a business and IT transformation exercise, and with it the company is adopting a new approach. Traditionally, IT is often said to dictate to the business what it needs, but in this case the business is leading the change programme by informing IT what it wants and needs in order to move the data and analyse it.
The firm was one of the first to move into WAN optimisation because it puts data at the heart of everything it does, and the business is as involved with the project as IT is. It now sees data acceleration solutions as the way forward, moving data faster to speed up its research in the hope that a cure for cancer can be found more quickly. Historically it has been said that moving large volumes of its data can’t be done, but it has now been proven that it can. Although the business requires its IT infrastructure to be changed, much can also be done with its existing infrastructure with PORTrockIT.
Data acceleration solutions such as this use machine learning to speed up data flows. While WAN optimisation is often touted by vendors as the solution, it only pays lip service to increasing data speed, and as a technology it often can’t deal with encrypted data. IT vendors also often tout the need to replace an organisation’s existing storage and network infrastructure, but this frequently isn’t necessary.
For example, larger bandwidth isn’t necessarily going to mitigate latency and solve the challenges that even cancer research organisations and pharmaceutical companies face. This challenge can, however, be overcome with a new approach that uses artificial intelligence and machine learning to support big data analysis. With more accurate and faster analysis, it’s hoped that cancer’s ability to adapt and evolve to resist treatment will be eliminated.
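The reasoning behind that claim is the classic bandwidth-versus-latency trade-off: a single TCP stream can never move data faster than its window size divided by the round-trip time, however fat the pipe. A rough sketch, with illustrative figures rather than vendor measurements:

    # Throughput ceiling of a single TCP stream: window / round-trip time.
    # RTT values are illustrative, not measurements.
    def max_throughput_mbps(window_bytes, rtt_seconds):
        return window_bytes * 8 / rtt_seconds / 1e6

    window = 65_535                    # default TCP window, no window scaling
    for rtt_ms in (10, 80, 250):       # metro, transatlantic, satellite-class RTTs
        mbps = max_throughput_mbps(window, rtt_ms / 1000)
        print(f"RTT {rtt_ms:>3} ms -> at most {mbps:5.1f} Mbit/s per stream")
    # A 10 Gbit/s pipe changes none of these numbers; only tackling the
    # effect of latency (or parallelising transfers) does.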
Yet IT alone won’t cure cancer. It requires a multi-disciplinary approach and a roadmap. The Association of the British Pharmaceutical Industry (ABPI) in association with management consultancy KPMG has therefore published a ‘Big data roadmap’. The document examines the big data challenges with a call to action, a look at how the UK competes, and an analysis of the future big data opportunities for the pharmaceutical industry.
It concludes by saying: “We need to continue to scan the big data horizon, identifying new opportunities where innovative technologies may play an emerging role. This will ensure focused investment and contribute to a sustainable cycle – identifying and prioritising next generation, high value opportunities.” To create value from big data there needs to be open dialogue, and the report emphasises that collaboration is key between all of the stakeholders – say, for example, in cancer research and the use of big data to find a cancer cure. But this will amount to nothing unless the right technology is put in place to ensure that data can flow freely and fast across a WAN.
The value of labels and signs in keeping a safe, tidy and efficient workplace.
By Dymo.
Order, or the lack of it, can make or break effective data centre management. With miles of wires and cabling systems to house, a facility without proper organisation faces real risks to efficiency and safety. While much of the cable management falls to the IT manager, facilities managers also have a role to play in keeping a safe, tidy and efficient centre. Just as with cable management, facilities management can really benefit from an effective labelling system.
Facilities managers will often have responsibility for planned and reactive maintenance. By working with IT colleagues, they can understand the workings of a data centre, including the varying demands on capacity, and plan maintenance around this. Reactive maintenance can be more difficult and this is where an effective labelling system that both the IT and FM departments can understand pays dividends.
Unexpected downtime can cause major disruption, resulting in lost revenue and reduced productivity. Google estimated that a five-minute outage in 2013 cost $545,000; clear labelling that both IT and FM engineers could follow was vital in ensuring remedial action was taken swiftly and that the outage lasted only five minutes. A well-documented, clearly labelled system is easier for engineers to navigate, update and repair, which results in lower maintenance costs.
Organisation also helps to avoid unplanned downtime. Human and mechanical error is responsible for 88 per cent of power outages in businesses, according to research from Uptime Institute, a US-based advisory organisation focused on improving business-critical infrastructure. Organised cabling minimises potential damage to wiring and machines, allows easy access for maintenance, and ensures vital air flow to components, keeping temperatures down and preserving functionality.
Without organised cabling, the chances of complications occurring increase and recovery becomes more difficult, which means time and cost inefficiencies. A simple solution such as ready-made label templates can maintain organisation in a simple yet effective manner, transforming a complex system into an easy-to-navigate arrangement.
In a data centre with many miles of cabling, it’s important that the process for labelling is not just easy to follow, but quick to implement. Using the latest technology, such as the handheld DYMO XTL, can be a real time saver. This device increases efficiency by allowing you to upload data from an Excel spreadsheet and print a batch of labels in one go, without having to type each one individually – a feature certainly appreciated by anyone managing a rack of patch panels in a data centre. Additionally, hundreds of pre-loaded label templates are available to further simplify and accelerate the task, saving precious hours and minimising mistakes.
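The batch logic behind that workflow is easy to picture. A hypothetical sketch – generating one label per patch-panel port from spreadsheet-style data rather than typing each one – might look like this (the rack names, columns and numbering scheme are our assumptions, not DYMO’s import format):

    # Rows as they might arrive from a spreadsheet export: rack, panel, port.
    rows = [
        ("A1", "PP01", 1),
        ("A1", "PP01", 2),
        ("A1", "PP02", 1),
    ]

    for rack, panel, port in rows:
        label = f"{rack}-{panel}-P{port:02d}"   # e.g. A1-PP01-P01
        print(label)                            # one label per port, no retyping

The point is simply that, at rack scale, batch generation from existing data beats typing labels by hand, both for speed and for consistency.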
It is essential that personnel, as well as machinery, are protected from harm, and this is the facilities manager’s realm of responsibility. Often a small team has a duty of care to hundreds, sometimes thousands, of individuals, which can make ensuring their safety a real challenge. One of the best ways to address this difficulty is to embed health and safety into the core culture of the workforce. Generating a shared health and safety culture in an organisation can be challenging, but there are ways to achieve it, including using signs and labels.
The humble safety sign can be used to enhance awareness of potential hazards: if there are high-voltage power sources, for example, these can be highlighted with appropriate signage. Increasing the visibility of labels in the workplace is a good way of raising their profile, and colourful, bright signs will always stand out and be noticed. Signage can also help workers to identify any action that should be taken when working around equipment, or warn employees of restricted areas of the site. A high-tech environment needs clear signage to alert people to the hazards that surround them during their daily routine; a simple sign could help prevent someone from being seriously hurt.
Beyond highlighting the immediate hazard, labels can subconsciously help to embed the importance of health and safety amongst employees. Signs can be used to promote best practice when it comes to safety, such as reminding workers not to overload sockets with extensions; this is also important for protecting workers against faulty equipment. Where there is an awareness of safe working, it helps everyone to be alert to dangers around the workplace and act in an appropriate manner.
It is difficult to ensure regulatory compliance with a disorganised system. The consequences of failure to comply with statutory regulations can range from a simple penalty fine to a serious injury or even death.
Regulations extend beyond occupational health and safety, and for facilities managers there are rules around what must be labelled and how to label it. For example, Section 514, ‘Identification and notices’, of BS 7671 17th Edition, 3rd Amendment, outlines the minimum text size required for some notice labels. Believe it or not, this regulation may not be met by some current labelling solutions. This can easily be corrected by using an effective cable labelling product that guarantees compliance with the requirements.
While a labelling tool won’t help in every situation a data centre FM might find themselves in, technology can make a difference on many of those time-consuming daily tasks.
When choosing your next labelling tool, here are five key questions to consider:
1. Does it provide template labels that meet industry regulations and standards?
2. Can it be updated to continue meeting those standards?
3. Does it print a variety of label sizes depending on the requirement?
4. Is it rechargeable or will there be expenditure on batteries?
5. Is it a trusted brand?
The most important part of data centre management is ensuring uninterrupted, efficient operation, and organisation is the key to this. A shared and intuitive labelling system allows for effective co-operation between the FM and IT departments, which means quick troubleshooting and recovery during unexpected outages. Importantly, it also means a safe, tidy and efficient operation.