The Storage Networking Industry Association (SNIA) and JEDEC Solid State Technology Association were honoured recently at the Flash Memory Summit conference with a Best of Show Award in the Most Innovative Flash Memory Technology category.
The award recognized the DDR4 NVDIMM-N Design Standard. It was presented to Jeff Chang of AgigA Tech and Arthur Sainio of SMART Modular Technologies, Co-Chairs of the SNIA NVDIMM Special Interest Group, and to Jonathan Hinkle of Lenovo, Chair of the JEDEC Hybrid DIMM Task Group and a member of the JEDEC Board of Directors, at a reception and awards ceremony at the conference, which is held annually in Santa Clara, CA.
The JESD248 DDR4 NVDIMM-N Design Standard now enables datacenter system designers to build with standardized, compatible persistent memory. NVDIMM-Ns are dramatically boosting performance for next generation storage platforms by providing high-speed access to persistent memory. This has been a game-changer, allowing the ecosystem to broaden and providing a solid platform for firmware, operating system, and application developers to start reaping the orders-of-magnitude performance gains available. Published by JEDEC, the JESD248 DDR4 NVDIMM-N Design Standard is available for free download from the JEDEC website.
JEDEC’s JC-45 Committee for DRAM Modules and the NVDIMM Special Interest Group (SIG), a subcommittee of the Solid State Storage Initiative within SNIA, collaborated on the development of the NVDIMM taxonomy, of which NVDIMM-N is one version. Non-Volatile DIMMs are hybrid DDR4 memory modules that plug into standard dual in-line memory module (DIMM) sockets and appear like a DDR4 SDRAM to the system controller, yet contain non-volatile (NV) memories such as NAND Flash on the module. The NVDIMM-N version combines DRAM and NAND Flash where the Flash memory provides backup and restore of all DRAM for reliable data persistence through power failure.
“JEDEC is honored to receive this prestigious award in partnership with SNIA,” said Mian Quddus, Chairman of JEDEC’s JC-45 Committee for Dynamic Random Access Memory (DRAM) Modules. He added, “This recognition illustrates how effective collaboration between standards organizations provides innovative solutions to the industry and helps accelerate market adoption of new technologies.”
“SNIA’s commitment to the advancement of persistent memory technologies is enhanced by our JEDEC partnership,” said Jim Pappas, SNIA Vice Chairman and Co-Chair of the SNIA Solid State Storage Initiative. “We are grateful for industry recognition of how joint technical activities can advance the delivery of persistent memory solutions to today’s customers.”
“New hybrid storage solutions are emerging with innovative next generation types of storage class memory that can accelerate performance,” said Jay Kramer, Chairman of the Awards Program and President of Network Storage Advisors Inc. “We are proud to select SNIA and JEDEC for the Best of Show Innovation Award for their innovation in creating the DDR4 NVDIMM-N Design Standard as an industry standard that enables NAND flash and NVDIMM technologies to be combined in a way that enables compatibility among vendors and seamless implementations of the latest memory and storage technology.”
Total EMEA external storage systems revenue fell by 3.9% in the first quarter of 2017, according to the International Data Corporation (IDC) EMEA Quarterly Disk Storage Systems Tracker 1Q17.
The all-flash market outpaced expectations and grew 100.7%, with accelerated growth in Western Europe and in the Middle East and Africa. In contrast, the traditional hard disk drive (HDD) array segment in EMEA fell 34.5% in 1Q17.
"Brexit uncertainty, unfavorable exchange rates, major vendors' internal reorganizations, and increased component costs for SSD have weighed down on EMEA performance once again, making 1Q17 the ninth quarter of uninterrupted decline for the region," said Silvia Cosso, research manager, European Storage and Datacenter Research, IDC. "However, as enterprises progress in their digital transformation paths, sales of all-flash array systems, standalone or converged, see no crisis in sight, doubling their sales compared to the same period a year ago and reaching a quarter of total sales."
"CEMA external storage market development remains difficult to predict due to the differences in the subregions' maturity and the complex macroeconomic and political situation," said Marina Kostova, research manager, Storage Systems, IDC CEMA. "While we still expect storage spending in the region to grow in the next few years, growth will be more subdued and there will be quarterly fluctuations due to the combined effect of technological disruption and volatility of the markets."
Looking at the subregional level, Central and Eastern Europe (CEE) underperformed the Middle East and Africa (MEA). Storage investments in most CEE EU member states intensified, boosted by the launch of new flash-optimized midrange solutions and the realization of high-end storage projects in the government, finance, and manufacturing sectors. However, the Russian storage market dragged the entire region down due to the postponement of already approved projects to the next quarter by public and large corporate clients.

Separately, 87% of European IT professionals whose organisations are currently using or considering NVMe technology (60% of all respondents) expect Non-Volatile Memory Express (NVMe) to eventually replace flash storage.
Pure Storage has published the results of Evolution, a groundbreaking independent global research survey that explores the ways businesses are balancing infrastructure and applications today and beyond.
Emerging technologies such as Artificial Intelligence (AI), Internet of Things (IoT) and machine learning have changed the way businesses operate and what it takes to thrive in a digital economy. An independent survey of IT leaders in more than 9,000 businesses spanning twenty-four countries across the US, EMEA and Asia Pacific found that digital solutions drive around half of revenue (47% on average) for organizations, whether through customer facing applications or more back-office functionality. It’s clear that digital transformation is no longer just a buzzword; it’s actually happening.
But despite this growth, technical complexity and strategic uncertainty from an infrastructure standpoint have prevented businesses from truly becoming digital. Public, private and hybrid cloud, SaaS and traditional on-premises all have momentum, but businesses still lack confidence in where to place specific workloads:
On average, businesses are running 41% of applications with traditional on-premises IT – higher than both public cloud (26%) and private cloud (24%).
Public cloud is poised to grow in the next 18-24 months (61% say their use will increase). Alongside this, a combined 87% of respondents see their use of either private cloud (52%) or traditional on-premises (35%) accelerating.
Despite strong indications of public cloud growth, a significant number of companies that ran workloads in public cloud environments have actually moved some or all of those workloads back on-premises (43% of businesses in North America have done so). In EMEA, 65% say they have reduced use of public cloud in the last 12 months because of security concerns.
Businesses run approximately one in five applications via SaaS currently (22%), and more than half (51%) see their use of SaaS increasing over the next 18-24 months.
“Emerging technologies have started to drive true digital transformation, but businesses remain in a cycle of lure and regret when it comes to public cloud,” said Scott Dietzen, CEO of Pure Storage. “Rather than being viewed as competing options, companies should embrace cloud and on-premises storage as complementary offerings. By doing so, storage infrastructure becomes agile and future-proof, which drives the data advantage that enterprises seek.”

“Digital Data Storage Outlook 2017” explores enterprise storage today and trends shaping the future.
Spectra Logic has published “Digital Data Storage Outlook 2017,” Spectra’s annual review of the data storage industry. The report provides insight for storage and media manufacturers, application developers and enterprise storage customers into the trends, pricing and technology shaping the data storage industry, and projects future storage needs and the technologies with the best chance of satisfying them.

Storage industry experts have predicted that the digital universe could grow to more than 40 zettabytes (ZB) of data in 2020. Spectra’s “Digital Data Storage Outlook 2017” projects that much of this data will never be stored or will be retained for only a brief time, bringing the total amount of data stored closer to 20ZB in 2026. Backed by Spectra’s nearly 40 years in data storage, the report explores how enterprise flash will displace two-and-a-half-inch 15,000 RPM magnetic disk, and why existing technologies like tape will have the most significant impact on storage of the digital universe through 2026, among other conclusions.
“The world’s repository of data is growing more rapidly than we could have imagined. At the same time, organisations are more reliant than ever on digital assets, and must protect and preserve access to this data forever,” said Spectra Logic CEO Nathan Thompson. “This is a unique time for the data storage industry, and we are proud to take a leadership role in helping the industry anticipate the trends, tiers and technologies of the future.”
The “Digital Data Storage Outlook 2017” is a comprehensive look at the ever-changing landscape of data storage, covering all aspects of the industry including:
Solid State Flash Storage - NAND flash is the fastest-growing technology in the storage market. Demand for this technology will increase year over year through 2020, due to increased investment by all major flash vendors and technical advancements allowing for more capacity at lower cost per unit.
Disk Storage – Flash storage poses numerous threats to the magnetic disk drive industry that will affect disk’s market share. These threats include the displacement of disk drives by flash drives in laptops and desktops, and the removal of disk drives from home gaming devices and digital video recorders. Spectra Logic estimates that by 2020, the disk industry will serve a focused market, comprised of large IT shops and cloud providers.
Tape Storage – The backbone of the industry for more than 40 years, tape drive technology continues to consolidate. With the greatest potential for capacity improvements, tape fills a market need as an inexpensive storage medium at around $0.01 per gigabyte, and the Spectra Logic report points out that a long-term scenario for tape is to coexist with flash technology. The report also details tape’s connection to the cloud, projecting that cloud providers will mostly adopt LTO (Linear Tape-Open), the most common tape technology. A new tape head technology called TMR (tunneling magnetoresistance) will significantly boost tape capacities and speeds for years to come. Moreover, these tape technologies will integrate well with cloud strategies in disaster recovery plans.
Optical Disc Storage - The optical disc storage market will see a downturn in 2017, yet may be an option for customers that have definitive long-term archival requirements. The report attributes the downward trend to optical’s high cost, this option being about 10 times more expensive than tape. The whitepaper reviews ways to archive at a more competitive price, but all these factors make optical an uphill battle.
Global cloud and network company supports data resiliency and media archive services with multi-petabyte object storage deployment.
Interoute, the global cloud and network company, has rolled out a cloud-based storage service based on Cloudian’s HyperStore object storage technology. Offered as part of the Interoute Virtual Data Centre Cloud Platform, it provides customers with fast, reliable and highly durable cloud-based storage for unstructured data, backups and archives at very low cost.

Having evaluated a range of object storage technology vendors, Interoute selected Cloudian’s HyperStore for its scale-out capabilities, industry-leading S3 API compatibility, multi-tenancy features and ease of integration. Interoute customers have adopted the Cloudian-based service for use cases such as data resiliency, static content hosting and media archiving.
Interoute is offering multiple petabytes of capacity, with further growth planned in accordance with customer demand. The Cloudian deployment is available across the entire Interoute platform with 17 Virtual Data Centre zones globally. Customers have the option to use resilient in-country deployments in Switzerland and Germany.
The geo-location flexibility offered by the Cloudian solution, in combination with the Interoute Enterprise Digital Platform, gives Interoute customers control over data locality and assured performance, enabling them to build regulatory-compliant storage solutions in different territories.
“With GDPR looming large in 2018, as well as the rapid adoption of VDC and SaaS platforms, our customers are revisiting the legacy world of physical backup and archiving and demanding a simple, controlled, auditable cloud service,” explained Mark Lewis, EVP Products and Development at Interoute. “So, we’ve created an easily accessed and integrated, cost-effective object storage service to support their digital transformation.”

DataCore has published details of long-term usage of the DataCore SANsymphony™ software-defined storage solution at ExCeL (Exhibition Centre London), one of the world’s most eminent exhibition and international convention centres.
Over three million people a year visit ExCeL. It relies on a secure, always-on IT infrastructure to power critical business apps for the smooth running and processing of large events - from visitor registrations to systems that manage the entire venue. ExCeL does this from two physically separated data centres for business continuity and disaster recovery. Paul Tuckey, IT Manager, ExCeL London, notes: “The ExCeL hosts events continually throughout the year, so downtime has never been an option. For the past ten years we have successfully relied on DataCore’s SANsymphony solution running in a replicated dual node configuration to present all data to our VMware enterprise cluster. Protecting our apps remains the most important function for IT.”
Assisting ExCeL’s IT needs is support partner Virtual IT. Simon Hartog, Head of Technology at Virtual IT, recalls the journey that led to the installation of DataCore’s SANsymphony, which started back in 2007 with a full consolidation and virtualisation programme across the entire estate using VMware’s vSphere, with DataCore as the back-end storage platform. As the underlying highly available shared storage infrastructure, DataCore would, for over a decade, reliably supply the venue’s needs when it came to workload migration, load balancing, fail-over and, in such a prominent public venue, disaster recovery.
Today the mirrored active-active architecture remains identical across the two data centres in a dual node configuration with synchronous replication. Running on HPE hardware, the SANsymphony software platform presents Nexsan storage arrays as ‘virtual disks’, akin to virtual machines (VMs), speeding up I/O response and throughput with its built-in caching and empowering the VMware infrastructure. As a long-term customer under support, ExCeL has benefitted from ongoing, no-cost upgrades with each generation of the software-defined storage platform, including performance improvements offered through Parallel I/O processing technology. Simon Hartog of Virtual IT summarises the ten-year install:
“DataCore has addressed our primary requirement of uninterrupted high availability and business continuity for apps for over a decade and we have tested it on occasion throughout that period. This includes a few unplanned power outages as a result of nearby building works. In the outages, DataCore has never let us down – when one half of the mirror was powered down, the other half simply took over. DataCore seamlessly kicks in without manual intervention and rebuilds without affecting applications. The mirror automatically re-synchronises and paths are restored in the background.”
In ExCeL’s dynamically growing environment, the ability to both seamlessly scale and provision as and when required has been important. Before virtualising the storage layer, ExCeL was limited by ageing hardware that could scale no further and was restricted from adding applications and growing data. Using DataCore’s software-defined storage, the centre simply scales on demand. Provisioning new VMs to accommodate data growth is a straightforward process with logical, wizard-driven steps. With that scalability in place back in 2012, ExCeL comfortably coped with an additional 1.2 million visitors across a six-week period when it became one of the London venues for the Olympics.
Simon concludes: “Our use case is pretty simple and remains so – ongoing stable usage of DataCore that has allowed us to cost effectively develop VM enterprise clusters using a shared backend with the surety that if a hazard hits, ExCeL can weather any storm.”
Moonpig.com is the world’s largest online personalised greeting card retailer. Founded in 1999 it now has almost four million active customers and ships over 14 million cards per year. Moonpig is a subsidiary of Photobox Group.
Moonpig needed to relieve technical debt built up in a traditional data centre facility (Telecity) and to provide additional failure and backup options, which were non-existent at project inception. They engaged with Cloudreach to develop a Business Continuity and Disaster Recovery (BCDR) environment in Microsoft Azure.
Moonpig required Cloudreach’s assistance in developing a solution that would not only deliver the BCDR requirements, but also allow for scalable burst workload to the cloud at peak periods.
HyperGrid, the enterprise cloud-as-a-service leader, has successfully upgraded the legacy IT infrastructure of global relief agency Tearfund, which will help the agency save over £950,000 in associated costs over the next five years. By assisting the charity in migrating its outdated IT infrastructure, HyperGrid has ensured that Tearfund has experienced significant improvements in overall application and service performance, as well as drastically reducing its footprint and overhead costs.
Before partnering with HyperGrid, Tearfund employed a traditional three-tier IT infrastructure consisting of three high-end C7000 HP Bladecentre chassis, each with eight blades, and close to 70TB of Lefthand SAN (SAS) storage with associated networking technologies for switching, provided by Cisco and Brocade. Despite being spread over seven 42U racks across two data centres, the environment suffered frequent performance bottlenecks, due largely to a particular decision to support OLAP cubes.
Stuart Hall, Infrastructure Lead at Tearfund, said: “We faced a lot of performance issues owing to the physical separation of logical storage from the memory and compute, which led to up to 12 hours of processing time (two hours on a good day). As such we decided to carry out proofs of concept on HyperGrid and its leading competitor, both of whom could support our preferred hypervisor – Hyper-V.” As part of the PoC, Tearfund challenged both vendors to reduce the processing time to below one hour; HyperGrid succeeded in reducing it to just 20 minutes.
“Due to HyperGrid’s impressive proof of concept, we have since purchased two three-node all-SSD HyperCloud platforms, occupying 6U of rack space including the associated switching. This has seen us jump from IOPS capacity measured in the thousands to a system where we measure the capacity in millions. One chassis would have been enough, but we purchased a second to give us full failover capacity for BCP at our remote data centre.”
Since the implementation, Tearfund has seen significant improvements in performance, as well as impressive cost savings. These include a reduction in racks from seven to just one, and a reduction in power draw from 22kW to 3kW for all back-office IT equipment, which in real terms will save Tearfund over £200,500 over five years. The ease of use for installation and management has also helped save Tearfund £150,000 a year in engineer costs. The reliability and simplicity of HyperGrid’s solution means the team does not have to learn new tools and management interfaces, and there is no need to change its day-to-day DevOps management processes.
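To put the power figures in context, the short sketch below checks the quoted saving. The 22kW and 3kW draws come from the case study; the electricity tariff is an assumption chosen purely for illustration, to show that the £200,500 figure is plausible over five years of continuous operation.

```python
# Back-of-the-envelope check of the quoted electricity saving.
# The before/after power draw (22 kW -> 3 kW) and the five-year saving
# (~GBP 200,500) come from the case study; the tariff is an assumption.

HOURS_PER_YEAR = 24 * 365
before_kw, after_kw = 22.0, 3.0
years = 5
assumed_tariff_gbp_per_kwh = 0.24  # hypothetical commercial rate

saved_kwh = (before_kw - after_kw) * HOURS_PER_YEAR * years
saved_gbp = saved_kwh * assumed_tariff_gbp_per_kwh

print(f"Energy saved: {saved_kwh:,.0f} kWh over {years} years")
print(f"Estimated saving at {assumed_tariff_gbp_per_kwh} GBP/kWh: GBP {saved_gbp:,.0f}")
# ~832,200 kWh and roughly GBP 200,000 -- in line with the figure quoted above.
```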
Doug Rich, Vice President of EMEA at HyperGrid, said: “We are really pleased to have worked with Tearfund to boost its processing power while helping it make substantial savings. As a global charity having to meet a diverse range of challenges, it was impossible for Tearfund to work effectively with its outdated IT infrastructure in place, and a rapid change was a necessity if the bottlenecks were to be eliminated. Our HyperCloud solution not only meets Tearfund’s immediate requirements, but we have future proofed their IT estate for the foreseeable future enabling them to focus on what matters most: relieving those in need.”
Hall concluded: “The agility that HyperGrid’s solution has delivered has been key in making a difference in the areas where we support humanitarian aid, and we are now able to do so better than we ever could, with a reduced impact on the environment and our overheads. We are delighted that HyperGrid has been part of this journey, and are thrilled with the outcome.”
HC3 Platform reduces complexity, saves cost and guarantees disaster recovery.
UK-based Liverpool School of Tropical Medicine has selected Scale Computing's HC3 system to dramatically simplify its storage infrastructure and guarantee disaster recovery capabilities.

Founded in 1898, the Liverpool School of Tropical Medicine (LSTM) was the first institution in the world dedicated to research and teaching in the field of tropical medicine. With a worldwide reputation for delivering state of the art research combating tropical diseases, LSTM attracts over 600 students each year from 68 countries.
LSTM initially built its IT environment using a Server BladeCenter chassis and SAN implementation utilising Microsoft’s Hyper-V failover cluster technology. The infrastructure was ageing and becoming increasingly difficult to manage. The institution evaluated a number of products in the market before settling on a shortlist of Nutanix and Scale Computing.
Following a successful proof of concept, LSTM opted for Scale Computing’s HC3 cluster and a disaster recovery cluster for high availability, cloning, replication and snapshots, providing complete business continuity. The HC3 cluster has reduced management time by over 50 per cent and its simplicity has allowed the school to focus on other areas of business, saving time and money. The HC3 platform has also delivered the added benefit of replication. Through regular snapshots, back-up testing has become simplified and regular, allowing the institution to guarantee business continuity.
“The Scale solution was perfect for our environment,” commented Matthew Underhill, IT Team Leader at Liverpool School of Tropical Medicine. “With Scale, we don’t have to worry about the underlying operating system, this is all taken care of. Previously, if there was an issue with any of our servers, we had to spend hours troubleshooting, but Scale has completely streamlined this process and we don’t have to worry about the everyday management of our systems. We also have the added ability to test our backups more regularly and recover VMs within five minutes, ensuring we are back up and running again in the event of a disaster.”
The Scale solution has also saved LSTM over £80,000 at face value. Underhill concluded: “This is an amazing cost reduction and we can put this money to use elsewhere. In addition to cost savings, we no longer need the knowledge or expertise to deal with growing complex systems, which was becoming a challenge.”
Johan Pellicaan, VP and MD EMEA at Scale Computing, commented: “The HC3 platform is designed to offer scalability and simplified management, enabling organisations to reduce their operational costs. The Liverpool School of Tropical Medicine is a leading charity institution and we are pleased to have been able to work with them to offer a cost effective and reliable solution that will provide the school with business continuity.”
Watch manufacturer Peers Hardy Group turns to Buffalo Technology’s TeraStation WSH5610 Windows Storage Server hardware RAID platform to centralise and simplify local and cloud storage backups with lightning fast performance. In the process it cuts costs and reduces both energy consumption and maintenance requirements.
Peers Hardy Group is a major supplier of own label watches for many leading high street fashion chains. Founded in 1971, the company has grown from a small customer base to design and distribute watches for retailers such as Next, Argos and Amazon as well as fashion brands such as Radley, Orla Kiely and Ice Watch. It also produces children’s watches for brands such as Tikkers, Disney and Peppa Pig.
Headquartered in the West Midlands, UK, the 170-strong company recently expanded its own range of watches by introducing the Henry London brand which rapidly became an internationally recognised name. To support this expansion, the company invested heavily within China and Hong Kong with offices and manufacturing facilities.
Peers Hardy’s expansion into the Far East resulted in a rethink about what shape its IT infrastructure should take. The company was familiar with the benefits of cloud-based computing and in conjunction with the implementation of Microsoft Azure Backup Server wanted a storage technology that would dovetail with this platform.
Mark Griffiths, IT Manager, Peers Hardy Group, explains: “Clearly we had a number of options from the storage perspective but ultimately we wanted to centralise storage for the UK and the Far East into a single device. Specifically we wanted a main interface between local and cloud backups. This would give us the opportunity to downsize our existing Windows server environment at some locations to reduce costs, maintenance requirements and energy consumption.
“Some might say that this introduces a single point of failure, but I was confident that if we found the right storage platform this wouldn’t be a problem. It would also reduce cost and complexity which are important considerations too.”
In short, the company wanted centralised backup operations to make management and monitoring easier. Importantly it also wanted a platform to run a Windows operating system given that Windows Storage Server is its preferred ‘go-to’ storage management option for its multiple physical and virtual servers.
Peers Hardy Group had been using Buffalo Technology’s storage for a number of years and had also recently implemented several Buffalo TeraStation WS5600DR2 Windows Storage Server 12TB units and was using them chiefly as file servers across its branch offices.
As it was exploring its storage options it tested Buffalo’s TeraStation WSH5610 Windows Storage Server 24TB, a powerful yet affordable network attached storage device specifically designed for small and medium-sized businesses.
The TeraStation WSH5610 immediately caught the attention of Mark Griffiths. “We do multiple backups through the day to our cloud-based disaster recovery platform running in Azure. The TeraStation has monster throughput capacity which is precisely what we needed,” he says.
The TeraStation WSH5610 is configured by default with hardware RAID 6, though other RAID options are available, and supports 3.5” SATA drives. The drives are hot-swappable with automatic RAID rebuild, and optional AES 128-bit encryption is also supported.
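As a rough illustration of what the default RAID 6 configuration means for usable space, the sketch below works through the arithmetic. The six-drive, 4TB-per-drive layout is an assumption used only to show the overhead; RAID 6 always reserves the equivalent of two drives for parity, whatever the actual drive count.

```python
# Illustrative RAID 6 capacity arithmetic. The drive count and drive size
# below are assumptions, not the unit's actual configuration.

drives = 6          # hypothetical drive count
drive_tb = 4        # hypothetical drive size, TB

raw_tb = drives * drive_tb
usable_tb = (drives - 2) * drive_tb   # RAID 6 reserves two drives' worth of parity

print(f"Raw capacity:    {raw_tb} TB")    # 24 TB raw
print(f"Usable (RAID 6): {usable_tb} TB") # 16 TB usable
# Any two drives can fail without data loss; a hot-swap replacement
# triggers an automatic rebuild, as described above.
```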
The management interface is simple and intuitive and designed for good integration with business networks such as Windows server domains. The software also gives administrators extensive control over users' permissions so they can easily manage various storage functions such as file shares and data caching.
Underpinning the business
The Buffalo TeraStation WSH5610 storage system now underpins the Peers Hardy Group head and branch office operations. It acts as a consolidation point, to collect all local backups and move them to its Azure Cloud Backup platform. Because it utilises the Windows Storage Server operating system, the TeraStation integrates seamlessly with Active Directory and the company’s network infrastructure. Mark Griffiths explains: “This allows our users quick and easy access to essential documents on both PC and Mac. And where we were once struggling for space, the Buffalo TeraStation is a cost-effective storage platform that has alleviated this concern.”
Performance gains
The use of hardware RAID provides important performance gains for the Peers Hardy Group, increasing backup speeds by a factor of three. Mark Griffiths says: “Speed is important when backing up multiple servers several times a day. In our case, I don’t believe that the software equivalent would have been capable of providing the performance we need to perform simultaneous on-site backup and off-site long term retention operations. We now access file areas about three times quicker than with software equivalent devices.”
Cost savings
The move to Buffalo TeraStation WSH5610 has helped the company consolidate its storage while also saving costs on management, maintenance and energy. “We’ve now consolidated and centralised our storage for both the UK and Far East. Hardware RAID also gives us backup options and, in conjunction with Windows Storage Server, communicates effortlessly with the connected uninterruptible power supply, ensuring we have reliable, cost-effective disaster recovery.”
Centralised and simple management
Buffalo TeraStation WSH5610 dovetails effortlessly with Peers Hardy Group’s recent move to Microsoft Azure Backup Server. The TeraStation acts as the onsite backup location while installing Microsoft Azure Backup Server onto the device has centralised all the backup operations making management and monitoring easier. The hardware RAID functionality also provides resilience.
Fast and efficient
Mark Griffiths points out that TeraStation WSH5610 is a remarkably quick platform and one that will provide storage capability for the coming years: “From my perspective it was shockingly quick. I was surprised, I just pressed the button and it was working. I can back up as many times as I need to throughout the day and overnight, and performance is faultless. It also provides us with at least two to three years storage, most probably five years.”
Building on Buffalo
By using Buffalo TeraStation WSH5610 Peers Hardy Group is building on its use of Buffalo WS5600DR2, a software RAID platform which is used for storing and using large design files for programs such as Adobe Creative Suite and as a file server in its branch offices.
Taken together, both storage platforms utilise the same systems as the company’s other Windows based servers. This means no specialised knowledge, software or training is required for either the core IT team or the third party companies that may provide additional support.
“The added benefit of RAID support, with redundancy, means that if a fault does ever occur, we have the ability to continue normal operations. It integrates with our current server systems, works with both Macs and PCs and provides quick and easy management and monitoring options for the UK and Far East,” adds Mark Griffiths.
Management Science Associates, Inc. (MSA) has selected Kaminario K2 all-flash storage array to power its internal marketing analytics applications and support its hosted infrastructure services.
A leader in analytics, information technology infrastructure and information management solutions, MSA provides analytical-based solutions across a spectrum of industries -- from consumer packaged goods, media and IT to medical, life sciences, pharmaceutical, and the arts.
Kaminario K2 all-flash storage array provides MSA with unmatched database performance, helping reduce the run time of their most critical analytic workloads by 15 to 40 percent. Extending the benefits beyond just application performance improvements, the new infrastructure also enables MSA to optimise its power and data center space utilisation, reducing operational spend by nearly 70 percent.
“With our focus on performance, security and continuity of our data centers, the Kaminario K2 is a perfect match,” said Mario Cafaro, vice president, MSA. “This partnership gives us the flexibility to provide our customers tremendous performance, security and recoverability of their data.”

In addition, the MSA Information Technology Systems and Services (ITSS) division, which provides customers with a secure hosting environment and a broad range of information technology services, is partnering with Kaminario to provide the underlying storage infrastructure for enterprises to leverage the performance and flexibility of K2 in a managed services infrastructure. The K2 all-flash array will ensure optimal performance and scalability for all of MSA’s state-of-the-art data centers.
“Kaminario – a recognised leader in cloud-ready flash storage for SaaS companies – together with MSA will deliver a strong platform for modern, database-driven applications to our customers,” said Itay Shoshani, chief revenue officer, Kaminario. “In addition, MSA’s customers can now leverage Kaminario K2’s scalability and cost efficiencies, which will allow them to further expand and support their IaaS and on-demand applications, fast.”

Commvault will be providing full cloud backup and recovery solutions for Randstad across their global infrastructure. Randstad is using AWS cloud infrastructure as the basis for its new central Shared Service Centre (SSC) to deliver global IT services, and will be utilising Commvault to maximise this offering.
The Randstad Group is a global leader in the HR services industry and specialises in solutions in the field of flexible work and human resources services. Their IT infrastructure is currently being consolidated and centralised across 30 IT departments providing services to 40 Operating Companies in Europe, North and South America, the Middle East and Asia-Pacific. This transformation will increase efficiency, reduce risk and provide better service from an IT infrastructure perspective in alignment with Randstad’s global digital vision.
“Given the scale and business criticality of this AWS deployment, we needed a trusted provider to support this transformation with a robust and compatible solution,” said Bernardo Payet, General Manager for Randstad Global IT Solutions. “Commvault was chosen based on their ability to deliver the scalability and flexibility of service necessary to complete the task in exact accordance with our requirements.”
In addition, the Commvault platform delivered deduplication and compression functionality, which will reduce storage consumption and save significant costs. Another critical reason for the selection was the proven integration between Randstad’s AWS based public cloud strategy, managed by Tata Consultancy Services (TCS) and Commvault’s solutions.
TCS, a global leader in IT services, digital and business solutions, is working with Randstad on its migration to AWS. Due to the global nature and scale of the deployment, TCS recommended Commvault based on its global capabilities and market leading technology.
“Commvault software will enable high performance and simplified data management as Randstad migrates to a global AWS based cloud approach,” said Rob Van Lubek, Area Vice President EMEA Northwest, Commvault. “Randstad needed to ensure its data would be protected, compliant and accessible while taking advantage of cloud storage benefits, and Commvault is pleased to be enabling this journey.”

Tegile hybrid array leads cloud service provider to select Tegile as its default storage vendor.
Dublin-based cloud service provider Savenet Solutions has selected a Tegile hybrid array to reduce its costs and storage capacity footprint, and to offer its customers higher performance. Thanks to the combination of Tegile's compression and deduplication technology, the data footprint achieves a 3-to-1 reduction, and as a result enables Savenet’s customers to reduce costs by as much as 50%. Savenet’s disaster recovery (DR) service has benefited the most: the provider has cut the DR recovery time in its service level agreements (SLAs) from 2-4 hours to 1-2 hours. Following the successful deployment of the Tegile array, Savenet will be replacing its existing storage systems with Tegile arrays as they come to the end of their lives.

Founded in 2005, Savenet provides a range of data protection and cloud computing services to a variety of businesses ranging from SMEs to larger enterprises. Already reliant on eight SANs across multiple locations and managing over 400 TB of data, Savenet was expanding and in need of additional storage capacity. The team started looking for a new solution, one that would also allow it to implement disaster recovery technology from its partner, Zerto. In addition, Savenet’s legacy SANs did not support disk-level encryption, compression or deduplication, so the new system would need to be able to encrypt data in transit and at rest, and be capable of handling up to 20,000 IOPS in order to support the needs of Savenet’s customers.
CTO and co-founder of Savenet Lorcan Cunningham spent two years investigating storage solutions by the new generation of hybrid SAN vendors. He spoke to Tegile and some of its competitors as well as Tegile end users who had also gone through the proof of concept stage with other vendors. The overwhelming consensus was that Tegile outperformed alternative technologies and that its cost per GB was the most competitive.
Tegile’s impressive 3-to-1 deduplication and compression rates were particularly appealing to Savenet. Starting out with 60 TB of data, Tegile’s deduplication and compression technology reduced this by nearly two thirds to 23 TB. The significant reduction allowed Savenet to store a larger volume of data in a smaller footprint without impacting performance, while also avoiding the need to add another rack to its datacentre and therefore creating considerable savings in cooling and power costs. These, as well as the attractive price per gigabyte and Tegile’s comprehensive customer care programme IntelliCare, were important considerations in the selection process. All these benefits have given the cloud provider a more competitive edge.
Cunningham explains: “We are able to run multiple customers at the same time on our storage. Our customers are happy. The performance is excellent and we’re able to offer lower disaster recovery SLAs at no extra cost. These are all major factors that show us we made the right decision.”
“We like Tegile’s solution so much that we’ve effectively become the main reseller for Tegile in Ireland. We are trusted advisors to our customers. They know we’re independent and that we investigate vendors very carefully. We always highly recommend Tegile and quite a few of our customers have bought or are in the process of buying a Tegile solution.”
Angel Business Communications is pleased to announce the categories for the SVC Awards 2017 - celebrating excellence in Storage, Cloud and Digitalisation. The 30 categories offer a wide range of options for organisations involved in the IT industry to participate. Nomination is free of charge and must be made online at www.svcawards.com.
Storage Project of the Year - Open to any specific data storage related project implemented in any organisation of any size in EMEA.
Digitalisation Project of the Year - Open to any ICT project that has incorporated/implemented one or more digital technologies to transform a business model and provide new revenue and value-producing opportunities.
Cloud Project of the Year - Open to any implementation of a cloud-based project (public cloud, private cloud or hybrid cloud) in any organisation of any size in EMEA.
Hyper-convergence Project of the Year - Open to any implementation of a project (public cloud, private cloud or hybrid cloud) based on a set of hyper-converged products/solutions in any organisation of any size in EMEA.
Managed Services Provider of the Year - Open to any Managed Services Provider operating in the storage, virtualisation and/or cloud technologies market in the EMEA region.
Vendor Channel Program of the Year - Open to any IT vendor’s channel program in the storage, virtualisation and/or cloud technologies market introduced in the EMEA market during 2017 that has made a significant difference to the vendor’s and their channel partners’ business in terms of storage revenue increases, improved customer satisfaction or market awareness.
Channel Initiative of the Year - Open to any IT system reseller, distributor, MSP or systems integrator who has introduced a distinctive or innovative vendor specific or vendor independent program of their own design/specification to boost sales and/or improve customer service in EMEA.
Excellence in Service Award - Open to any IT vendor or reseller/business partner delivering end-user customer service in the UK or EMEA markets. Entries must be accompanied by a minimum of THREE customer testimonials (in English) attesting to the high level of customer service delivered that sets the entrant apart from their competition.
Channel Individual of the Year - Open to any senior individual working within any organisation manufacturing, selling and/or supporting the storage, digitalisation and/or cloud sectors in the EMEA market who has made a significant contribution to his or her employer’s business or that of their partners or customers.
Infrastructure
Backup and Recovery/Archive Product of the Year - Open to any solution whose primary design is to enable data backup and restore or long-term archiving.
Cloud-specific Backup and Recovery/Archive Product of the Year - Open to any solution designed to enable cloud-based data backup and restore or long-term archiving via a service offering.
SSD/Flash Storage Product of the Year - Open to any storage solution that can be classified as using SSD/Flash or NVMe to store and protect IT data and information.
Storage Management Product of the Year - Open to any system management solution that delivers effective and comprehensive storage resource management in either single or multi-vendor environments.
Software Defined/Object Storage Product of the Year – Open to any product that delivers a ‘software-defined storage’ model or any ‘object storage’ product or solution.
Software Defined Infrastructure Product of the Year - Open to any solution that enables a ‘software defined’ set of solutions normally associated with traditional non-physical infrastructure deployments such as networks, application servers etc. (excludes storage).
Hyper-convergence Solution of the Year – Open to any single or multi-vendor solution that delivers on the promise of an architecture that tightly integrates compute, storage, networking and virtualisation resources for the client.
Cloud
IaaS Solution of the Year - Open to any product/solution that delivers or contributes to effective Infrastructure-as-a-Service implementations for users of private and/or public cloud environments.
PaaS Solution of the Year - Open to any product/solution that delivers or contributes to effective Platform-as-a-Service implementations for users of private and/or public cloud environments.
SaaS Solution of the Year - Open to any product/solution that delivers or contributes to effective Software-as-a-Service implementations for users of private and/or public cloud environments.
IT Security as a Service Solution of the Year - Open to any product/solution that is specifically designed to enable the securing of data within private and/or public cloud environments.
Cloud Management Product of the Year – Open to any product/solution that delivers or contributes to effective Cloud management or orchestration for users and/or providers of private and/or public cloud environments.
Co-location / Hosting Provider of the Year – Open to any company/organisation offering hosting and/or co-location services to end-users and/or service providers in the EMEA market.
Companies of the Year
Storage Company of the Year - Open to any company supplying a broad range of storage products and services in the EMEA Market.
Cloud Company of the Year - Open to any company supplying a wide range of cloud services or products in the EMEA Market.
Hyper-convergence Company of the Year - Open to any company supplying a clearly defined Hyper-converged product set in the EMEA Market.
Digitalisation Company of the Year - Open to any company supplying a clearly defined set of digitalisation services and or products in the EMEA Market.
Innovations of the Year (Introduced after 1st June 2016)
Storage Innovation of the Year – open to any company that has introduced an innovative and/or unique storage service, product, technology, sales program, or project since 1st June 2016 in the EMEA market.
Cloud Innovation of the Year – open to any company that has introduced an innovative and/or unique public, private or hybrid cloud service, product, technology, sales program, or project since 1st June 2016 in the EMEA market.
Hyper-convergence Innovation of the Year – open to any company that has introduced an innovative and/or unique hyper-convergence service, product, technology, sales program, or project since 1st June 2016 in the EMEA market.
Digitalisation Innovation of the Year – open to any company that has introduced an innovative and/or unique digitalisation service, product, technology, sales program, or project since 1st June 2016 in the EMEA market.
Just about every new piece of technology is considered disruptive to the extent that it is expected to replace older technologies. Sometimes, as with the cloud, old technology is simply re-branded to make it more appealing to customers and thereby to create the illusion of a new market. Let’s remember that Cloud Computing had previously existed in one shape or another. At one stage it was called On Demand Computing, and then it became Application Service Provision.
By David Trossell, CEO and CTO of Bridgeworks.
Now there is Edge Computing, which some people are also calling Fog Computing and which some industry commentators feel is going to replace the Cloud as an entity. Yet the question has to be: Will it really? The same viewpoint was given when television was invented. Its invention was meant to be the death of radio. Yet people still tune into radio stations by their thousands each and every day of every year. Of course, there are some technologies that are really disruptive in that they change people’s habits and their way of thinking. Once people enjoyed listening to Sony Walkmans, but today most folk listen to their favourite tunes using smartphones – thanks to iPods and the launch of the first iPhone by Steve Jobs in 2007, which put the internet in our pockets and more besides.
So why do people think Edge Computing will blow away the cloud? This claim is made in many online articles. Clint Boulton, for example, writes about it in his Asia Cloud Forum article, ‘Edge Computing Will Blow Away The Cloud’, of 6th March 2017. He cites venture capitalist Andrew Levine, a general partner at Andreessen Horowitz, who believes that more computational and data processing resources will move towards “Edge devices” – such as driverless cars and drones – which make up at least part of the Internet of Things. Levine prophesises that this will mean the end of the cloud, as data processing moves back towards the edge of the network.
In other words, the trend up to now has been to centralise computing within the datacentre, whereas in the past it was often decentralised or localised nearer to the point of use. Levine sees driverless cars as being datacentres in their own right; they have more than 200 CPUs working to enable them to operate without going off the road and causing an accident. The nature of autonomous vehicles means that their computing capabilities must be self-contained, and to ensure safety they minimise any reliance they might otherwise have on the cloud. Yet they don’t dispense with it.
So the two approaches may in fact end up complementing each other. Part of the argument for bringing data computation back to the Edge comes down to increasing data volumes, which lead to ever more frustratingly slow networks. Latency is the culprit. Data is becoming ever larger, so there is going to be more data per transaction, more video and sensor data. Virtual and augmented reality are going to play an increasing part in this growth too. With this growth, latency will become more challenging than it was previously. Furthermore, while it might make sense to put data close to a device such as an autonomous vehicle to eliminate latency, a remote way of storing data via the cloud remains critical.
The cloud can still be used to deliver certain services too, such as media and entertainment. It can also be used to back up data and to share data emanating from a vehicle for analysis by a number of disparate stakeholders. From a datacentre perspective, and moving beyond autonomous vehicles to a general operational business scenario, creating several smaller datacentres or disaster recovery sites may reduce economies of scale and make operations less efficient rather than more so.
Yes, latency might be mitigated, but the data may also be held within the same circles of disruption, with disastrous consequences when disaster strikes. So for the sake of business continuity some data may still have to be stored or processed elsewhere, away from the edge of a network. In the case of autonomous vehicles, and because they must operate whether a network connection exists or not, it makes sense for certain types of computation and analysis to be completed by the vehicle itself. However, much of this data is still backed up via a cloud connection whenever one is available. So Edge and Cloud Computing are likely to follow more of a hybrid approach than a standalone one.
Saju Skaria, Senior Director at consulting firm TCS, offers several examples of where Edge Computing could prove advantageous in his Linkedin Pulse article, ‘Edge Computing Vs. Cloud Computing: Where Does the Future Lie?’. He certainly doesn’t think that the cloud is going to blow away:
“Edge Computing does not replace cloud computing…in reality, an analytical model or rules might be created in a cloud then pushed out to Edge devices…and some [of these] are capable of doing analysis.” He then goes on to talk about Fog Computing, which involves data processing from the Edge to a cloud. He is suggesting that people shouldn’t forget data warehousing too, because it is used for “the massive storage of data and slow analytical queries.”
In spite of this argument, Gartner’s Thomas Bittman takes the opposite view, arguing that the ‘Edge Will Eat The Cloud’: “Today, cloud computing is eating enterprise datacentres, as more and more workloads are born in the cloud, and some are transforming and moving to the cloud….But there’s another trend that will shift workloads, data, processing and business value significantly away from the cloud. The Edge will eat the cloud…and this is perhaps as important as the cloud computing trend ever was.”
Later on in his blog, Bittman says: “The agility of cloud computing is great – but it simply isn’t enough. Massive centralisation, economies of scale, self-service and full automation get us most of the way there – but it doesn’t overcome physics – the weight of data, the speed of light. As people need to interact with their digitally-assisted realities in real-time, waiting on a datacentre miles (or many miles) away isn’t going to work. Latency matters. I’m here right now and I’m gone in seconds. Put up the right advertising before I look away, point out the store that I’ve been looking for as I drive, let me know that a colleague is heading my way, help my self-driving car to avoid other cars through a busy intersection. And do it now.”
He makes some valid points, but he falls into the argument that has often been used about latency and datacentres: that they have to be close together. The truth, however, is that Wide Area Networks will always be the foundation stone of both Edge and Cloud Computing. Secondly, Bittman clearly hasn’t come across data acceleration tools such as PORTrockIT and WANrockIT. While physics is certainly a limiting and challenging factor that will always be at play in networks of all kinds – including WANs – it is possible today to place your datacentres at a distance from each other without suffering an increase in data and network latency. Latency can be mitigated, and its impact can be significantly reduced no matter where the data processing occurs, and no matter where the data resides.
So let’s not see Edge Computing as a new solution. It is but one solution, and so is the cloud. Together the two technologies can support each other. One commentator says in response to a Quora question about the difference between Edge Computing and Cloud Computing that: “Edge Computing is a method of accelerating and improving the performance of cloud computing for mobile users.” So the argument that Edge will replace Cloud Computing is a very foggy one. Cloud Computing may at one stage be re-named for marketing reasons, but it’s still here to stay.
The biggest part of an iceberg, just like the biggest part of your data storage, sits out of sight, out of mind. This helps illustrate how most organisations approach Tier 1 data storage. Research has shown that over 60% of primary capacity is either dormant or rarely used, adding a huge, hidden weight to storage TCO. Fortunately, solutions are emerging to help relieve this stress.
By Jon Toor, CMO, Cloudian.
Dormant and rarely used data drive substantial storage cost and complexity, compounding the already monumental challenge of capacity management. Data volumes are growing enormously and organisations are continually adding to their Tier 1 capacity to meet everyday needs. IDC estimates that a mind-boggling 44 zettabytes of digital data will exist by 2020 (to give it some context, a single zettabyte is 2 to the 70th power bytes, or around a billion terabytes).
The primary driver for this growth is unstructured data from a wide variety of applications including data protection, media archives, big data, web content and financial/consumer data. Adding to the enterprise storage burden are longer data retention policies – even when almost all of this data ends up inactive or rarely used.
The problem for many organisations is that regardless of what this data is, and how often (or in many cases, rarely) it is required, they’re putting much of it in the same place: tier 1, primary storage.
Look more closely at backup and replication, for example. Keeping this data on NAS is a storage-hungry and expensive process. Almost as soon as an initial file is created, data is added to the snapshot schedule. Active files have a daily change rate of about 20 percent, so a 1MB file becomes 2MB of snapshots within a week. Backup retention adds to the burden. Daily and weekly backups are retained for months, while monthly backups are retained for years. The end result is that over the life of a file, enterprises use many times more storage than the original file size.
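To make that arithmetic concrete, here is a minimal sketch assuming the 20 percent daily change rate quoted above. The seven-day window and the simplification that each snapshot stores only the changed data are illustrative assumptions, not a description of any particular snapshot implementation.

```python
# Illustrative snapshot-growth arithmetic for a single active file.
# The 20% daily change rate comes from the article; the 7-day window and the
# assumption that each snapshot holds only changed data are simplifications.

file_mb = 1.0
daily_change_rate = 0.20
days = 7

snapshot_mb = file_mb * daily_change_rate * days
total_mb = file_mb + snapshot_mb

print(f"Snapshot data after {days} days: {snapshot_mb:.1f} MB")
print(f"Primary copy plus snapshots:    {total_mb:.1f} MB")
# Roughly 1.4 MB of snapshot data on top of the 1 MB original -- broadly in
# line with the "doubling within a week" figure quoted above, before any
# daily, weekly or monthly backup retention is added.
```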
And the problem feeds itself – as we create and retain more data, the backup and replication burden increases. This adds to an issue every storage manager will be familiar with — traditional NAS and backup systems have to be provisioned well ahead of demand, so new storage often sits idle for two or three years.
So, what’s the answer? Today, storage tiering is emerging as a solution. The tiering concept – saving cost by moving data to less expensive storage – is not new. What is new are two technologies that now make tiering a very attractive option: object storage and next-gen data management software. Together, these solutions deliver a 50% cost savings. Best of all, there is now zero impact to the user experience. And rather than add complexity, these solutions now simplify by consolidating more information to a single storage pool.
Highly cost-effective enterprise object storage is the first part of the solution. Object storage technology has long been a key part of cloud platforms and web-scale services like Amazon S3, Google Cloud Platform, Facebook, and Netflix. New solutions bring that technology into the data center to create storage systems that are highly cost-effective and limitlessly scalable – factors that make them a perfect Tier 2 repository for a wide range of applications.
For enterprises, object storage offers a low-cost entry point that can start small and then scale in much smaller increments than traditional storage. Enterprise object storage options range from ½ cent to 1 cent per GB, per month – about one-third the cost of traditional SAN or NAS systems.
Object storage is also flexible in that it can be configured for the level of data durability, availability, performance, and accessibility an organization needs. Data durability of 14 nines is achievable, with built-in redundancy and cross-region replication.
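A hedged, worked example of the pricing and durability figures quoted above; the 100TB pool size and the three-to-one NAS comparison are assumptions used purely for illustration:

```python
# Illustrative arithmetic for the pricing and durability figures quoted above.
# The 100TB pool size is an assumption chosen purely for the example.
capacity_gb = 100_000                      # a 100TB Tier 2 pool

object_low  = capacity_gb * 0.005          # $0.005 per GB per month
object_high = capacity_gb * 0.01           # $0.01 per GB per month
nas_approx  = object_high * 3              # "about one-third the cost" => NAS roughly 3x

print(f"Object storage: ${object_low:,.0f} - ${object_high:,.0f} per month")
print(f"Traditional NAS/SAN (approx.): ${nas_approx:,.0f} per month")

# 14 nines of durability means roughly a 1-in-10**14 chance of losing a given object in a year.
loss_probability = 1e-14
objects = 10_000_000
print(f"Expected objects lost per year from 10 million stored: {objects * loss_probability:.0e}")
```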
The second part of the solution is next-gen file management software that automates the tiering process to object storage. These solutions identify Tier 1 data based on user-defined attributes such as file age, frequency of access, owner, and file type.
The selected data is then migrated to object storage. If the user requests the data, it is transparently retrieved from object storage. If the user changes the file, it is re-hydrated to the original filer. The entire process is invisible to the user – there is no change in data access.
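The following is a minimal, hypothetical Python sketch of that policy-driven workflow; the thresholds and the object_store put/get interface are illustrative assumptions, not any vendor's actual product or API:

```python
import os
import time

# Hypothetical sketch of policy-based tiering: the thresholds and the
# object_store.put()/get() interface are illustrative assumptions only.

MAX_IDLE_DAYS   = 180                          # untouched this long => Tier 2 candidate
COLD_EXTENSIONS = {".bak", ".iso", ".mp4"}     # example "file type" policy

def is_candidate(path: str) -> bool:
    """Apply user-defined attributes: time since last access and file type."""
    idle_days = (time.time() - os.path.getatime(path)) / 86400
    ext = os.path.splitext(path)[1].lower()
    return idle_days > MAX_IDLE_DAYS or ext in COLD_EXTENSIONS

def tier_out(path: str, object_store) -> None:
    """Migrate a cold file to object storage, leaving a zero-byte stub on the filer."""
    with open(path, "rb") as f:
        object_store.put(key=path, data=f.read())   # assumed put() call
    os.truncate(path, 0)                            # stand-in for a transparent stub/link

def recall(path: str, object_store) -> None:
    """Re-hydrate a file to the original filer when a user needs to change it."""
    with open(path, "wb") as f:
        f.write(object_store.get(key=path))         # assumed get() call
```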
Given the pressure on budgets, storage capacity and performance, enterprises need to adopt a smarter approach to their Tier 1 storage strategies. Data tiering offers a solution for the cold storage iceberg. Best of all, it offers a limitlessly scalable path that can finally and permanently address the problem of Tier 1 storage proliferation. Learn more and you will see why all major cloud providers standardized on object storage for their data.
As part of the complete history of storage, NAS or Network Attached Storage could be considered as a relatively new option for some companies, though it has actually been around since the ‘90s, and was popularised in the early-mid 2000s.
By Steve Broadhead, Broadband-Testing.
But speed of change is a fundamental issue for IT; NAS solutions designed 15 years ago were designed to solve 15 year-old problems, not current demands. Since then, IT has moved on everywhere, but especially in the realms of data, storage and their related applications. Data usage has both increased massively and changed in terms of its form, function and access requirements. Part of this is due to the dramatic reduction in the cost of storage per megabyte (or should we say petabyte) of data and in the sheer volume of data we can now store in a tiny form factor, hence reducing power requirements, as well as lowering CapEx costs. But the applications themselves have also changed, notably in their requirement to access data as quickly, efficiently and reliably as possible.
As NAS gained in popularity, it was all about simplifying file access, migration and backup – in other words, creating a virtual single file system. At the time this was a significant breakthrough, as with it came improved speed of deployment and a reduction in the amount of day-to-day human management required. While those elements are still relevant, in today’s world of scale-out NAS deployments, with the sheer amount of data traffic coming online continuously, instant scalability is now a fundamental requirement. Long gone are the days when a company could spend months capacity planning for the next 5-10 years. Now it’s more like 5-10 minutes. Consequently, first and even second generation NAS architectures are out of kilter: islands of storage developed as a direct consequence of scaling traditional NAS technology, and this approach simply doesn’t work any longer. The “scale-out” buzz phrase that every NAS provider on the planet is using may well be overused, but no vendor can afford to admit it cannot cope with the pace of storage expansion many companies now require.
However, when it comes to certain verticals such as manufacturing, that scale-out requirement and its related benefits of performance and ease and speed of deployment become hyper-critical. After all, a 24x7 manufacturing plant can’t readily shut down “over the weekend” for planned upgrades. What is required in the world of manufacturing, as well as several other verticals such as scientific research, energy and media and entertainment, are very specific capabilities that allow the storage solution to keep pace with the company and its business. This does not mean adapting 15+ year old technology. One vendor whose focus is on contemporary scale-out NAS, Panasas, follows that mantra and has created what it believes to be a solution capable of supporting the needs of modern manufacturing, with a parallel, scalable approach to file access.
Taking a quick look at legacy storage architecture, the limitations are glaringly obvious from a performance perspective. Using a file server in the data path means it is responsible for managing all requests. This creates a classic bottleneck and the inevitable congestion that ensues, limiting scalability. Hence, we have the “islands of data” scenario, itself a potential management nightmare, not least from a capacity planning perspective. The Panasas answer to this conundrum is to use parallel access to data, with clients accessing storage directly, thereby removing that bottleneck issue. Moreover, accessing a single pool of storage hugely simplifies the management issue. Let us now look in more detail as to how this is achieved, notably with respect to manufacturing.
Modern manufacturing brings with it very specific requirements of a NAS system, not least in the area of Computer Aided Engineering (CAE) and its related disciplines. A key area of modern CAE is Computational Fluid Dynamics (CFD). In simple terms, this is the use of applied mathematics, physics and computational software to visualise how a gas or liquid flows, as well as how the gas or liquid impacts on objects as it flows past. It is based on the Navier-Stokes equations, which describe how the velocity, pressure, temperature, and density of a moving fluid are related. A popular application for CFD is analysing air flow around vehicles and forms of transport, notably cars and aeroplanes, hence the key manufacturing association. Even within the IT world itself, CFD has its uses, such as within the data centre (DC) for analysing thermal properties and modelling air flow in the form of 3D mathematical models.
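For reference, the incompressible form of those equations – conservation of momentum and of mass – can be written as follows, where ρ is density, u is velocity, p is pressure, μ is viscosity and f represents body forces (the full compressible form that CFD codes solve adds energy and state equations to bring in temperature and variable density):

\[
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0
\]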
The fundamental issue here for Legacy NAS is its inability to keep up with the input/output (I/O) requirements of the highly data-intensive CFD calculations required. But modern manufacturing requirements go way beyond simply performance issues; management, scaling and capacity planning are all critical factors that can delay or kill a project stone dead.
And there are more challenges awaiting modern NAS deployments, such as MDX or Multi-Disciplinary Design Exploration, which requires serious computational capabilities, especially when moving beyond single point simulation into fully-automated deterministic optimisation. Technical barriers have been prevalent and have kept multi-disciplinary simulation-based design from becoming mainstream in the product development cycle. Design exploration requires many simulations to be run at once on a large number of multi-core processors. In other words, this is a perfect application for parallel processing and data access. In the US, CD-adapco is pioneering MDX, using engineering data from simulation results to improve a product through multiple design iterations, using it to maximise the real-life performance of products in the aerospace, automotive, energy, life sciences, and oil and gas industries. Legacy NAS was unable to keep up with I/O requirements of the highly data-intensive CFD calculations, so CD-adapco needed a storage solution that could improve system performance, minimise administration and maintenance time, and provide high reliability.
“When we turned on the Panasas system, the bottlenecks disappeared and we were able to run a complex MDX simulation without impacting other system users,” said Steven Feldman, senior vice president of information technology at CD-adapco. “We deployed Panasas on all mission-critical systems and find that storage and data loss is now something we just don’t worry about. In addition, administration for Panasas storage is almost non-existent.” According to Feldman, key to the success of the deployment was how the Panasas technology supports parallel data paths, meaning CD-adapco was able to substantially increase data throughput to meet the high-performance requirements of MDX simulations, while maintaining high reliability.
Next Generation Sequencing (NGS) is another major challenge for storage. DNA sequencing efficiency has increased by approximately 100,000-fold in the decade since sequencing of the human genome was completed. NGS machines can now sequence the entire human genome in a few days, and this capability has inspired a flood of new projects that are aimed at sequencing the genomes of thousands of individual humans and a broad swath of animal and plant species. Some of the biggest technical challenges that are associated with these new methods are caused by repetitive DNA: that is, sequences that are similar or identical to sequences elsewhere in the genome. From a computational perspective, repeats create ambiguities in alignment and in genome assembly, which, in turn, can produce errors when interpreting results. Sequencing performance and accuracy is therefore critical. One Panasas customer completed an NGS storage project at a global genomic research institute and achieved some amazing results, increasing their sequencing capacity by 50 times.
And the challenges continue… EDA or Electronic Design Automation - a set of software tools for designing systems such as ICs (Integrated Circuits) and PCBs (Printed Circuit Boards) – has been with us for some time now, but the complexity of semiconductor technology has been scaling at a rate that can best be described as “off the planet”! EDA simulation for electronics has rapidly increased in importance with the continuous scaling of semiconductor technology. Since a modern semiconductor chip can have billions of components, EDA tools are essential for their design. Storage therefore needs to be capable of scaling at a rate to support these increases in demand.
So, in terms of the challenge modern manufacturing poses to NAS vendors, it is not one challenge but a whole series of bottleneck-avoidance issues covering performance, scaling and management.
We are no longer in a world where manufacturing relies on costly, lengthy but restricted physical testing for product design.
For example, in automotive design, gone are the days when physical crash test dummies were used to test automotive safety. Aerospace companies used real wind tunnels to test thermals across aircraft wings, but no longer. Instead they now use High Performance Computing (HPC) with Linux clusters and parallel computation, performing complex simulations of products at levels of detail far beyond what was achievable in the past. This creates massive amounts of new data, requiring the building, maintaining and rapid scaling of IT environments to support that level of analysis. The necessary sharing and preservation of that data creates further storage challenges and leads to several potential pain points:
· Storage performance limiting productivity.
· Capacity scaling limits delaying projects.
· Cost and complexity of building and maintaining the storage.
· Predicting the cost and resource allocation.
· Lifecycle management, notably around data retention.
Unlike many other HPC markets, manufacturing workflows tend to involve many different simultaneous projects and potentially hundreds of applications. High performance, mixed workload access at an aggregate level is key for most environments. So, while these are still largely scientific workflows, they frequently need a larger number of management and support features than other HPC sectors – features that are just as important as pure performance.
Currently, around 23 billion ‘things’ are connected to the world’s numerous communication networks and more are joining at a rapid speed. For consumers, these include everything from connected fridges to watches. The general hunger for both novelty and innovation is swiftly increasing as industry goliaths continue to release new products into the market.
By Nassar Hussain, managing director of SOTI Europe.
Machine-type communications are set to usher in the fourth industrial revolution, and it is predicted that at least 65 per cent of businesses will adopt a mass of connected devices by 2020 - more than twice the current rate. Manufacturers, logistics firms and retailers will be the first movers in this ‘internet of things’ (IoT) revolution, as they look to connect and automate process-driven functions.
The outcome of adopting connected devices is beneficial for every business, as they see investment in prominent technologies such as mobile devices key to ensuring they can serve their staff and customers, and bring them a greater understanding of consumers.
However great the perceived benefits are, connected devices bring new business challenges around scale, interoperability, security and the management of devices and endpoints. Starting at the coal-face, for those of us who rely on mobile devices for work purposes, the emotional fallout can be hard – recent research revealed that 59 per cent of those surveyed admit to being stressed by technology and 29 per cent even voiced fears of losing their jobs due to technological failures. The stress of technical failures is 13 per cent higher for business owners, with 72 per cent concerned over the potential cost of data loss.
To ride the tech wave, enterprises must have a clear strategy for mobility management. It is essential to cover traditional devices and non-traditional ‘things’, such as connected cars; taking in technical issues like interoperability, security and more straightforward ones like filtering vast new oceans of data and what to save in the catch. Without a strategy in place, companies will find themselves throwing infinite resources into connecting everything to the internet, rather than just those that are crucial. So, what do businesses need to streamline mobile and IoT device management and harness the potential opportunities?
STEP 1: INTEGRATION
It seems device management is the most challenging task facing the market, as 45 per cent of companies are failing to enforce restrictions such as blocking apps. At a basic level, connected devices must be properly coordinated if businesses are to easily access and manipulate the data available to them, regardless of its origin. Using an integrated suite of mobility solutions offers a clever, quick, and reliable way for businesses to build their apps faster and manage their mobile devices and IoT endpoints.
Additionally, a closely integrated device and IoT management system can bring added benefits to companies seeking to bring order to the rising confusion of IoT connectivity. Businesses must recognise what can be achieved through IoT, not just by creating “smart” devices, but by providing business intelligence and improving productivity, cutting costs and improving the customer experience. Refined mobility management solutions give real-time insights into remote device performance, which can be tapped into by help-desk teams to run device diagnostics, solve technical issues and maintain staff productivity.
Likewise, the most cutting edge device and IoT management solutions cover rapid cross-platform app development, so businesses can deploy enterprise applications for their own specific devices in a fraction of the time. Ultimately, if network inter-play must be solved by the technology industry at large, the working integration of connected devices is the responsibility of leadership teams and IT departments within enterprises themselves.
STEP 2: SECURITY
The recent WannaCry ransomware attack, which impacted 200,000 computers globally, makes it all too clear that this ever-growing web of connectivity also makes us vulnerable. By 2020 it is estimated that the number of connected devices will reach 30 billion, but with each new device comes a new way for criminals to access the system.
Undoubtedly, mobile IoT devices must be secured and maintained properly, but while governments and industry bodies work out the detail to increase minimum security-levels, it is essential that enterprises consider their own network, device and data security. New devices should have the right security certifications but much more can be done to support devices and data.
Companies should expect device management solutions to enforce authentication, including biometric and two-factor authentication, in order to stop unauthorised access to valuable company data and documents. They should also expect full device storage encryption to ensure sensitive company information present on mobile devices in the field is as secure as data on an office-based workstation.
Should they be lost or stolen, IoT devices should also be trackable and remotely wipeable, while wireless access and network connections must remain private and secure at all times.
STEP 3: SIMPLICITY
Approximately 90 per cent of all data has been created in the past two years. The sheer volume of data available to us is overwhelming, and intellectually crippling if it is not understood and processed swiftly.
Likewise, companies must efficiently filter and understand the data they capture. Businesses should take deliberate stock with specialist data analysts and mobility management providers, and evaluate the types of data they have – looking at the insights they can gain, and how these will distinguish them.
It also requires experimenting; the process to insight and differentiation is iterative. It is foolish to jump into this sea of data and try to swim; it is far better to build a vessel on dry land, test it in the shallows, and then to guide it towards new horizons. Once the boat has been constructed and set afloat, the main navigation can be automated with periodic check-ups to master the course.
Human input is crucial from the beginning and throughout, but the most recent data analytics and machine learning engines can lighten the load – especially as the sea widens with the flood of new data from new ‘things’.
For businesses entering uncharted waters, it is vital to not only ‘think big’ but also to retain extremely close attention to detail. Their approaches need to be right for their strategy and market. Trying to achieve too much at once can end up being counterproductive; the real value from IoT lies in doing the smaller things well and building on that. Companies which refuse to take these precautionary measures will find themselves drowning in data. By focusing on integration, device management and interpreting data, businesses can avoid falling adrift and ride the wave of success.
Data is the new battleground. For companies, the situation is clear – their future depends on how quickly and efficiently they can turn data into accurate insights. This challenge has put immense pressure on CIOs to not only manage ever-growing data volumes, sources, and types, but to also support more and more data users as well as new and increasingly complex use cases.
By Mike Tuchen, CEO, Talend.

Fortunately, CIOs can look for support in their plight from unprecedented levels of technological innovation. New cloud platforms, new data platforms like Apache Hadoop, and real-time data processing are just some of the modern data capabilities at their disposal. However, innovation is occurring so quickly and changes are so profound that it is impossible for most companies to keep pace, let alone leverage those factors for a competitive advantage.
It’s clear that data infrastructures today can’t be static if they are to keep pace with the data requirements of the business. Today’s competitive environment requires adaptive and scalable infrastructures able to solve today’s challenges and address tomorrow’s needs. After all, the speed with which you process and analyse data may be the difference between winning and losing the next customer. This is significantly more important today than 10 or 15 years ago, since companies used to make a strategic database choice once and keep running it for a decade or two. Now we see companies updating their data platform choices far more frequently to keep up.
If companies are to thrive in a data-driven economy, they can’t afford to be handcuffed to ‘old’ technologies; they need the flexibility and agility to move at a moment’s notice to the latest market innovations. However, it’s not enough for companies to simply be technology agnostic; they also need to be in a position to re-use data projects, transformations, and routines as they move between platforms and technologies.
How can your company meet the agility imperative? To start, let’s consider the cloud question.
Many Clouds and Constituencies
In a data-driven enterprise, the needs of everyone – from developers and data analysts to non-technical business users – must be considered when selecting IaaS solutions. For example, application developers who use tools such as Microsoft Visual Studio and .NET will likely have a preference for the integration efficiencies of Microsoft Azure.
Data scientists may want to leverage the Google Cloud Platform for the advanced machine learning capability it supports, while other team members may have a preference for the breadth of the AWS offering. In a decentralised world where it’s easy to spin up solutions in the cloud, different groups will often make independent decisions that make sense for them. The IT team is then saddled with the task of managing problems in the multi-cloud world they inherited – problems that often grow larger than the initial teams expected.
One way to meet a variety of stakeholders’ needs and embrace the latest technology is to plan a multi-cloud environment by design, creating a modern data architecture that is capable of serving the broadest possible range of users. This approach can safeguard you from vendor lock-in, and far more importantly, ensure you won’t get locked out of leveraging the unique strengths and future innovations of each cloud provider as they continue to evolve at a breakneck pace in the years to come.
Integration Approaches for Data Agility
Once perhaps considered a tactical tool, today the right integration solution is an essential and strategic component of a modern data architecture, helping to streamline and maximise data use throughout the business. Your data integration software choice should not only support data processing “anywhere” (on multi-cloud, on-premise, and hybrid deployments) but also enable you to embrace the latest technology innovations, and the growing range of data use cases and users you need to serve.
Hand Coding
I said “data integration software” as I simply don’t believe that a modern data architecture can be supported by hand-coded integration alone. While custom code may make sense for targeted, simple projects that don’t require a lot of maintenance, it’s not sustainable for an entire modern data architecture strategy.
Hand coding is simply too time-consuming and expensive, requiring high-paid specialists and high ongoing maintenance costs. Moreover, hand-coded projects are tied to the specific platform they were coded to, and often even a particular version of that platform, which then locks the solution to that vendor and technology snapshot. In a continually accelerating technology environment, that’s a disastrous strategic choice. Also, hand coding requires developers to make every change, which limits the organisation’s ability to solve the varied and evolving needs of a widely-distributed group of data consumers. And finally, it can’t leverage metadata to address security, compliance, and re-use.
Traditional ETL Tools
Traditional ETL tools are an improvement over hand coding, giving you the ability to be platform agnostic, use lower-skilled resources and reduce maintenance costs. However, the major drawback with traditional ETL tools is that they require proprietary runtime engines that limit users to the performance, scale, and feature set the engines were initially designed to address.
Almost invariably, they can’t process real-time streaming data, and they can’t leverage the full native processing power and scale of next-generation data platforms, which have enormous amounts of industry-wide investment continually improving their capabilities. After all, it’s not simply about having the flexibility to connect to a range of platforms and technologies – the key is to leverage the best each has to offer. Moreover, proprietary run-time technologies typically require software to be deployed on every node, which dramatically increases deployment and ongoing management complexity.
Importantly, this proprietary software requirement also makes it impossible to take advantage of the spin up and spin down abilities of the cloud, which is critical to realising the cloud’s potential elasticity, agility and cost savings benefits. Traditional ETL tools simply can’t keep up with the pace of business or market innovation and therefore prevent, rather than enable digital business success.
Agile Data Fabric
What’s required for the digital era is scalable integration software built for modern data environments, users, styles, and workflow – from batch and bulk to IoT data streams and real-time capabilities – in other words, an agile Data Fabric.
The software should be able to integrate data from the cloud and execute both in the cloud and on-premises. To serve the increasing business need for greater data agility and adaptability, integration software should be optimised to work natively on all platforms and offer a unified and cohesive set of integration capabilities (i.e. data and application integration, metadata management, governance and data quality). This will allow organisations to remain platform agnostic, yet be in a position to take full advantage of each platform’s native capabilities (cloud or otherwise) and data technology. All the work executed for one technology should be easily transferable to the next, providing the organisation with economies of skills and scale.
The other critical capability you should look for in an Agile Data Fabric is self-service data management. Moving from a top-down, centrally controlled data management model to one that is fully distributed is the only way to accelerate and scale organisation-wide trustworthy insight. If data is to inform decisions for your entire organisation, then IT, data analysts and line of business users all have to be active, tightly coordinated participants in data integration, preparation, analytics, and stewardship. Of course, the move to self-service can result in chaos if not accompanied by appropriate controls, so these capabilities need to be tightly coupled with data governance functions that provide controls for empowering decision makers without putting data at risk and undermining compliance.
The challenge CIOs face today is acute – with rapidly advancing platforms and technology, and more sources to connect and users to support than ever before. Meeting these new and ever-evolving data demands requires that companies create a data infrastructure that is agile enough to keep pace with the market and the needs of the organisation.
In a converted 19th-century church on the outskirts of Barcelona sits a computer so overwhelmingly powerful, it could someday save us all.
Save us from what? We’re not sure yet. But one day soon a scientific or medical research breakthrough will happen and its origins will be traced back to a glass-encased room inside the Torre Girona Chapel. Sitting within is a hulking mass of supercomputing power: a whopping 3,400 servers connected by 48 kilometers of cable and wire.
Torre Girona, nestled inside the Barcelona Supercomputing Center on the campus of the Polytechnic University of Catalonia, was used as a Catholic Church until 1960. The church was deconsecrated in the 1970s, but the longer you spend here seeing how supercomputing speed can enable lightning-fast insight, the more you start to sense the presence of a higher power.
This is technology at its inquisitive best. And it all starts with the specs of the monster they call MareNostrum.
THE SPECS. To consider the sheer power and scale of MareNostrum’s High Performance Computing (HPC) capabilities is to test your own knowledge of large-scale counting units. You see, for supercomputing nerds it’s all about FLOPs, or Floating Point Operations/Second. The original MareNostrum 1, installed in 2004, had a calculation capacity of 42.35 teraflops – that is, 42.35 trillion operations per second. Not bad, I guess, until you consider that the 2017 version (MareNostrum 4) blows that out of the water--it possesses 322 times the speed of the original.
“The new supercomputer has a performance capacity of 13.7 petaflops/second and will be able to carry out 13,677 trillion operations per second,” says Lenovo VP Wilfredo Sotolongo as we gaze upwards inside the chapel. Sotolongo not only works closely with the BSC, he actually lives near Torre Girona in Barcelona.
As I try to get my head around all these unfamiliar units of measure, Sotolongo lays it out for me: “In computing, FLOPs are a measure of computer performance. Node performance…” My mind wanders a bit before I tune back in. “A petaflop is a measure of a computer's processing speed and can be expressed as a quadrillion, or thousand trillion, floating point operations per second. A thousand teraflops. 10 to the 15th power FLOPs.” Etc etc.
He sees my head spinning so, mercifully, he simplifies it. “Basically, MareNostrum 4 is 10 times more powerful than MareNostrum 3.” OK, I can relate to that but I one-up him anyway: “How many times more powerful is it than my 2016 ThinkPad X1 Carbon laptop?” He laughs. “About 11,000 times.” Gulp.
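For those who like to check the arithmetic, here is a quick sanity check on those ratios; the laptop figure is simply back-calculated from the “about 11,000 times” remark, so treat it as a rough estimate.

```python
# Sanity-checking the performance ratios quoted above; the laptop figure is
# back-calculated from the "about 11,000 times" remark, so it is a rough estimate.
marenostrum1 = 42.35e12     # 42.35 teraflops
marenostrum4 = 13.7e15      # 13.7 petaflops

print(f"MareNostrum 4 vs MareNostrum 1: ~{marenostrum4 / marenostrum1:.0f}x")   # roughly the '322 times' quoted above
print(f"Implied ThinkPad performance:   ~{marenostrum4 / 11_000 / 1e12:.1f} teraflops")
```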
THE WORK. What kinds of workloads require the type of computing power found in the MareNostrum cluster? There are a lot, it turns out. Because HPC systems deliver results in a fraction of the time of a single desktop or workstation, they are of increasingly vital interest to researchers in science, engineering and business. They are all drawn by the possibility of solving sprawling, complex problems in their respective fields.
Over the years, MareNostrum has been called on to serve more than 3,000 such projects. On any given day, as the Catalonian sun streams through the stained-glass windows of Torre Girona, MareNostrum manages mountains of data and spits out valuable nuggets of insight to a staff of more than 500 – insight that could someday help solve some of humanity’s greatest challenges.
For example, teams in the BSC are developing HPC simulations of complex challenges in the energy and renewables sector. They are handling data for various sectors from pharma and medical to engineering, smart cities, nautical, automotive and aviation. In Earth Sciences, they are seeking solutions for problems with air quality, agriculture, weather and climate.
On the day we visited, we saw a simulation of the human heart so realistic it literally drew a gasp from our tour group. It was hard not to get fired up when learning how doctors could use this vivid way of seeing inside us to better treat those with heart ailments.
THE GUTS. Under MareNostrum’s hood, so to speak, is a complex web of technologies sourced from three different companies. The BSC purchased a full solution from IBM that includes one major general purpose system (based on Lenovo technology) and three smaller experimental systems: one each from Lenovo, IBM and Fujitsu.
The Lenovo-provided general purpose element of MareNostrum 4 has:
MareNostrum 4--the new baby--has been delivered and deployed as of this writing. It faces a few tests as it is brought fully online but the BSC team say it will be completely operational before long. The technology in this room--right now untouchable and over the heads of most of us--will be put through its paces, a formidable brain in search of big problems to solve.
Gavin O'Hara is Lenovo's Global Social Media Publisher.
These days content is often seen as a company’s biggest asset. It helps business leaders make better informed decisions, is the key driver of customer interactions and it sits at the heart of core business processes. Yet despite the benefits content can bring, for many organizations it is becoming an increasing headache. Document volumes are growing exponentially; regulations are becoming more complex and yet users are demanding simplicity and ease of use. These, in a nutshell, are the key content management issues facing enterprises today.
By Brendan English, VP, Line of Business, Content Solutions, ASG Technologies.
Unfortunately, these problems are typically compounded by unwieldy legacy solutions used by the business to manage content. Traditional enterprise content management (ECM) no longer works effectively in practice. The reality is many enterprises are using multiple systems to store all of their content. This article considers the shortcomings of this increasingly outdated approach and goes on to look at how modern content services platforms can provide a solution.
Three Key Areas Where Traditional ECM is Failing
A recent ASG-commissioned technology adoption profile study, “Today’s Enterprise Content Demands a Modern Approach”, conducted by Forrester Consulting, polled 220 IT managers, enterprise architects and operations decision makers involved with content management at their organisations. It found 95% were using more than one system to manage enterprise content, including 31% that were using five or more systems. These duplicated systems lead to disjointed information that is hard to retrieve and use. As a result, in traditional legacy environments it is often a given that the whole content management process will be cumbersome and unwieldy. Lack of flexibility is therefore one clear shortcoming of existing approaches to ECM. Organisations want to invest in systems and technology that allow them to grow and adapt to a changing market, but traditional ECM often hinders rather than helps them to do so.
Further, the amount of data these organisations are tasked with storing has increased significantly over the past two years, with 82% of respondents reporting an increase in unstructured data in the form of business content, like office documents, presentations, spreadsheets, and rich media.
They are also having to manage a great deal of transactional content originating from outside the organisation, like applications, claims, contracts, and forms. Not surprisingly, storage figures reflect the increased growth with the majority of organisations (60%) storing 100 terabytes (TBs) or more of unstructured data. Traditional ECM systems typically struggle to cope with this level of growth due to another key shortcoming – their inability to scale. Typically, these solutions are either hampered by being locked into existing architectures, or they are limited and therefore unable to accommodate large storage volumes.
Most traditional ECM solutions also struggle to manage growing regulatory and security requirements. As the Forrester Consulting study highlights: “Sharing content with external parties is becoming the norm. But with that comes expanding regulatory and compliance demands and an increased urgency to protect both customer and enterprise data.” In line with this, the top two content management challenges for organisations identified in the study are ‘providing secure content access to our extended enterprise,’ and ‘meeting expanding regulatory and compliance demands.’
What’s Needed from a Solution
So how can organisations effectively address the multiple challenges outlined above?
Now, enterprises can leverage content services to manage assets across multiple content repositories, whether in the cloud or on-premise, and keep that information in its native form while still making it easily accessible. By providing controlled access and integrating content from any device, anywhere, these solutions can effectively scale to accommodate growing data volumes, while at the same time breaking down the repository walls created by proprietary systems.
However, they need to be aware that the regulatory environment is becoming ever more complex, and its dynamic nature means processes must be put in place to effectively ensure compliance and business success.
Practical Answers
At ASG Technologies, we recommend a four-pronged approach to addressing today’s content challenges:
1. Recognise that technologies alone do not solve the problem of getting content into the right hands when organisations are making business decisions. Today’s content solutions must connect people with the business and content they need to make decisions, disseminate knowledge and collaborate with customers and colleagues. Without the right information at the right time, they will be at a business disadvantage;
2. Look for purpose-built, decoupled content services architectures, such as ASG’s own Mobius solution, to manage content. Build your content services infrastructure for a mobile-first workforce and look for platforms that expose specific ECM capabilities as services rather than fully-formed features. This way a solution will be scalable and allow the business to grow;
3. Seek vendors that deliver transparent, contextual access to the content, eliminating the need for the user to know where the asset is stored. Content should be delivered to the users’ workflow through an intuitive process offering the user options through a policy controlled “learning” process. This provides businesses with flexibility;
4. Understand and utilise granular policy management for content assets. When reviewing content services architectures, look for those that have granular policy management services to provide content with contextual meaning as well as how it should be governed. This typically will include details of when the content is supposed to be deleted, when it is meant to be archived, and other information that is often critical to businesses needing to effectively govern content. This kind of rules-based policy foundation approach to content management is becoming more powerful in a world where regulatory pressures are constant and the need for rigorous compliance and governance is omnipresent.
Positive Prospects
As we look to the future, we are increasingly seeing the cumbersome ECM suites of the past give way to flexible content services platforms. This new services approach enables users to access content across on-premise, cloud-based and hybrid environments at any time and from anywhere, and to gain enhanced visibility across their disparate systems. In the modern business world, where content is such a valuable asset, it will be those businesses that take the plunge and adopt the latest content management approaches that derive the most value from their content, using it to achieve better decision-making, enhanced customer relationships and a sharper competitive edge.
NAKIVO, Inc. is a US corporation, which was founded in 2012. Its co-founder, Bruce Talley, had a long track record of general management and market development, and his vision was to provide businesses worldwide with a solution which would help them protect their virtualized environments and secure themselves against the loss of valuable data.
Since then, the company has evolved into a fast-growing business with nearly 100% YoY revenue and customer growth. NAKIVO is highly ranked by the global virtualization community: SpiceWorks rated it 4.9/5, Software Informer named NAKIVO an Editor’s Pick, and TrustRadius scored the company 9.1/10.
NAKIVO’s product – NAKIVO Backup & Replication – is a fast, reliable, and affordable data protection solution for VMware, Hyper-V, and AWS environments. Its development started in Q4 2012, and the first release included an essential set of data protection features. Since then, new releases were launched each quarter, and the product gradually acquired more and more useful capabilities. Currently NAKIVO Backup & Replication is a mature product featuring the flexibility, speed, and ease of integration into an environment no other industry product can provide.
The product is praised by customers for its simplicity and usability, and its easy-to-use, responsive Web interface does not require reading manuals.
NAKIVO Backup & Replication offers quite a variety of deployment options. Customers can install it on both Windows and Linux, create a reliable and high-performance VM backup appliance by installing NAKIVO Backup & Replication on a NAS (now it supports QNAP, Synology, ASUSTOR, and Western Digital NAS), deploy the product as a pre-configured VMware Virtual Appliance, or launch it as a pre-configured Amazon Machine Image. The product installation is incredibly easy and quick, as it requires only one click and takes less than a minute to deploy NAKIVO Backup & Replication with all of its components, and the virtual infrastructure is discovered and added to the product inventory within seconds.
In terms of functionality, NAKIVO Backup & Replication offers an extensive feature set that enables customers to increase data protection performance, improve reliability, speed up recovery, and help save time and money. All features are auto-configured and work out of the box. These are some of them:
When you consider a backup solution, you surely need this product to: be fast to deploy and easy to use and manage; protect your data from loss; provide features that streamline and automate the process of creating backups; allow you to copy backups offsite and to the cloud; ensure instant, guaranteed, and easy recovery of your VMs, files, and application objects in case of any failure or disaster; guarantee the shortest RPO and RTO possible; and save your time, resources, and money. NAKIVO Backup & Replication meets all these requirements and can do even more, thus ensuring the unprecedented protection of your data.
As a result, over 10,000 customers worldwide are using NAKIVO Backup & Replication in their VMware, Hyper-V, and AWS environments, with the largest customers using the software to protect 3,000+ VMs spanning 200+ locations. Over 150 hosting, managed, and cloud services providers are using NAKIVO Backup & Replication to deliver VM BaaS and DRaaS to their customers.
You can also download a full-featured Free Trial of NAKIVO Backup & Replication and see its advantages.
These are some examples of NAKIVO customers’ success stories:
NAKIVO aims to be 100% channel-based. As of September 2017, NAKIVO has over 2,000 channel partners and a large number of distributors worldwide, and these numbers are growing rapidly. All partners get large discounts, sales training, deal registration, and regular promotions to drive sales.
NAKIVO has wide geographical coverage and is currently protecting businesses in 124 countries worldwide. These businesses represent a variety of industries ranging from manufacturing and education to airlines. As NAKIVO’s geographical and market presence grows, the company does not intend to settle, and plans to do better and go further.
Competing SMB data protection products do not provide sufficient data protection and recovery capabilities, and thus either fail to get the job done or waste the customer’s time, while competing enterprise products are overly complex and expensive, wasting the customer’s time and money. NAKIVO Backup & Replication fills the market gap by providing a feature-rich and easy-to-use solution at an affordable price.
NAKIVO is one of the fastest-growing companies in the industry. The company’s plans for the next couple of years are to expand its market presence and focus on large enterprises. To do this, NAKIVO is going to gradually add new highly demanded features to its product and further improve its UX.
Tearfund is a Christian charity that works to alleviate poverty across the world. Operating in Asia, Africa and South and Central America, Tearfund has reached over 29 million people through community development projects and a network of over 100,000 churches.
Challenge
To manage the charity’s applications and services, Tearfund utilised a traditional three-tier IT infrastructure composed of three high-end C7000 HP Bladecentre chassis, each with eight blades and close to 70TB of Lefthand SAN (SAS) storage with associated networking technologies for switching, provided by Cisco and Brocade. This infrastructure was spread over seven 42U racks between two data centres, which caused significant performance bottlenecks and processing issues. Tearfund often has to respond rapidly to disasters in the developing world, so speed of operation can be critical to the organisation.
The IT infrastructure was no longer fit for purpose. Stuart Hall, Infrastructure Lead at Tearfund, explains: “It is essential for non-profits and charities to operate as efficiently as possible to maximise their financial resources, and it was clear that our outdated IT infrastructure had to be upgraded to reduce costs and free up more money for the charity’s projects. We faced several challenges with our existing system, including slow processing speed, high electricity consumption and pressure on IT staff resources, which we needed to resolve to minimise the impact of IT services on the charity’s bottom line.”
“We faced a lot of performance issues owing to the physical separation of logical storage from the memory and compute, which could often lead to 12 hours of processing time, two hours on a good day. We had additional concerns around performance due to our support of OLAP cubes. These issues were a real hindrance to Tearfund carrying out critical activities that were essential to the success of the charity’s projects. We needed a solution that would cut processing time to under an hour so that our staff could accomplish the work they needed to do in as short a time as possible.”
Another challenge facing the IT team at Tearfund was the high levels of energy consumption on their existing infrastructure. As a charity that relies heavily on donations, it is essential that all services run in as cost-effective a manner as possible. Operating this outdated IT infrastructure resulted in very high electricity consumption. Before the project began, the Tearfund IT systems were consuming 22KW/h, and it was crucial that this level of consumption was reduced so that the charity could devote more finances to its projects across the developing world.
Hall identifies another challenge faced by the IT team at Tearfund: “Our existing infrastructure was consuming too much of our staff’s time in maintenance. We would often have to spend half a day configuring a VM which prevented us from taking a more strategic approach to IT, enabling IT to be a driver of the charity’s activities, rather than an impediment. The IT labour market is very competitive, and it can be difficult to retain top talent when they spend so much time on repetitive maintenance tasks rather than working on more interesting projects.”
Solution
Tearfund began surveying the IT market with a view to upgrading its existing infrastructure to an on-premises private cloud solution. It carried out a proof of concept with HyperGrid and a competing vendor, both of whom could support the company’s preferred hypervisor – Hyper-V. As part of the proof of concept, Tearfund challenged both companies to deliver significant improvements in performance and reduce the processing time to below one hour.
Stuart Hall says: “HyperGrid was able to reduce processing time to just 20 minutes, a phenomenal achievement and well below our target. We have since purchased two three-node all-SSD HyperCloud platforms, occupying 6U of rack space including the associated switching. This has seen us jump from IOP capacity measured in the thousands to a system where we measure the capacity in millions. One chassis would be enough, but we purchased a second to give us full failover capacity for BCP at our remote data centre.”
HyperGrid deployed its cutting-edge HyperCloud platform-as-a-service, enabling Tearfund to consolidate its IT infrastructure onto a single private cloud platform under a common management framework. The intuitive nature of the HyperCloud platform allows Tearfund to refocus on efficient application and service delivery to the organisation. Tearfund is currently using just 40% of the available data storage on the HyperGrid solution, giving the organisation room to grow and innovate over the lifetime of the product. With HyperGrid’s solution, Tearfund can continue to use its existing management toolsets and frameworks, with no need to change its day-to-day DevOps management processes.
Results
The reliability and simplicity of HyperGrid’s solution ensures that the team does not have to learn how to use new tools and management interfaces, enabling the IT staff to work on more interesting and engaging IT projects, rather than spending so much of their time on traditional and repetitive IT tasks. HyperGrid’s on-premise solution has significantly improved processing speed, outperforming Tearfund’s specified requirements and ensuring that key activities that are crucial to the success of the charity’s projects are not delayed by IT processes.
“As a charity we do not often get the opportunity to adopt truly world-leading technology, but HyperGrid has given us that opportunity. I have worked in the IT sector for 25 years, and this has been, by some distance, the easiest, smoothest project that I have ever managed. We have had fantastic support from HyperGrid throughout the process, from the reassurance and rigorous testing that we completed in the pre-sales period, to first-class support throughout the project. HyperGrid has met absolutely every requirement we set, and we couldn’t be happier with the solution.”
Over the last few weeks we’ve seen more and more companies talking about – and planning for – GDPR. Tech heavyweights like Google, Microsoft and McAfee have all publicly taken their first, early steps towards compliance to ensure that they achieve a seamless transition when the new rules come into play. In this piece, we’re looking at what GDPR is and what companies need to do over the next year to become compliant.
By Amy Johnson, VP of marketing for Vertiv.
What is GDPR?
For those unaware, the General Data Protection Regulation (GDPR) has now become law. It comes into force next year and will affect many businesses – so it’s important to use this year to prepare.
Put simply, this regulation has been created to protect and standardise the use of personal data within the European Union (EU). Superseding the old Data Protection Directive, this new set of laws helps streamline the use and flow of personal data across and out of the EU member states. The GDPR will be effective from 25th May 2018 and will cover important aspects of individual rights, as well as data security.
Although some may presume that this is just another regulation to fall out of Brussels, these laws aim to protect every European citizen’s rights in this digital age. For many companies both inside and outside the EU, there’s still much work to be done to become compliant.
Why is it important?
Whether we realise it or not, we’re constantly handing over personal data to third parties through a series of daily transactions – booking hotels, using social media, shopping online or signing up to newsletters. These actions always transmit some kind of personal data – to put this in perspective, in just one minute of the internet, 640 trillion bytes of data is transferred globally.
For many, this transfer of data is a carefree act, something that has become the norm in our everyday lives, and we pay little attention to where this data is going, how it’s being stored and protected, and who has access to it.
The purpose of GDPR is to ensure that those who are managing personal data are gathering, storing and transferring it in a responsible and accountable manner. It aims to reduce endless layers of red tape, while putting individual rights and data security at the forefront.
What you need to know
Naturally, businesses have plenty of questions about what the changes are and how they will affect organisations. We’ve broken them down into five categories.
1. The right to be forgotten
At the top of many people’s agenda is the right to be forgotten, which was recognised by the European Court of Justice in 2014 in a case against Google Spain. Data which is considered inaccurate, inadequate, irrelevant or excessive now comes with the right to have it erased upon request. While this right needs to be balanced against other fundamental ones such as freedom of expression and of the media, it does put a lot of pressure on search engines and other online repositories, which are receiving hundreds of requests from European citizens to delete their personal information from search results.
2. Mandatory notification on data breaches
When GDPR officially kicks in, it will become mandatory for businesses to notify customers of any breach in data security that may put the rights and freedoms of individuals at risk. The notification will have to be made within 72 hours of the breach first becoming known, and sanctions for non-compliance can be very high. According to a government survey, two thirds of large UK businesses were hit by a cyber breach or attack in the previous twelve months, and 25% experienced these at least once a month.
3. Visibility on data use
The GDPR rules have been expanded so that customers can now get visibility from the organisations hosting their data: which of their data is being processed, where it’s being processed and for what reason. This openness from businesses is a huge shift in transparency and will need to be offered free of charge.
4. Employing Data Protection Officers (DPO)
Although there are many caveats to this rule, several organisations – especially those with over 250 employees – may need to employ a DPO. These companies will have to hire someone with expert knowledge of data protection law and practices, who can offer consultancy to ensure the business is compliant with GDPR.
5. Penalties for failure
Failing to meet GDPR comes with severe consequences: fines of up to 4% of annual global turnover or €20 million, whichever is higher. Fines will be imposed on a tiered basis and can be applied to all parties involved with the data, whether they host or process it. This point alone will elevate data protection discussions to the boardroom, as it is no longer an IT issue, but a matter of legal and financial compliance that could come at a hefty expense if incorrectly managed.
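As a simple, illustrative calculation of how that ceiling works (the turnover figures below are assumptions picked purely for the example):

```python
# Illustrative only: the GDPR ceiling is the higher of 4% of annual global
# turnover or EUR 20 million. The turnover figures below are assumptions.
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

print(f"EUR {max_gdpr_fine(300_000_000):,.0f}")    # EUR 20,000,000 (4% = 12m, so the floor applies)
print(f"EUR {max_gdpr_fine(2_000_000_000):,.0f}")  # EUR 80,000,000 (4% exceeds the floor)
```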
Will this make a hard Brexit, harder?
Brexit will not change the UK’s need to comply with GDPR, at least for now.
In recent months, Article 50, the formal procedure for the UK leaving the EU, was triggered by the UK’s Prime Minister, Theresa May. The process is expected to take as much as two years to officially complete. Within this timeframe, the GDPR laws will be enforced and the UK will therefore need to be compliant, at least until it is officially out of the EU. But even afterwards, UK companies will want to align with EU regulations to ensure smooth operations with their European customers. This will also be the case for any non-EU organisation doing business in the EU, so GDPR offers a unified, blanket solution that companies around the world can use for interactions within the EU.
Getting ready
For many businesses, the prospect of becoming compliant with the new GDPR rules by May 25th 2018 is a daunting task. The deadline is still about a year away, which gives organisations enough time to get their act together. The key to success is planning and budgeting correctly with the support of a legal consultant, and sticking to a strict timeline.
· Next 3 months: Organisations with more than 250 employees will need to employ a DPO. Recruitment can be an arduous task, so finding someone who can join now and be part of the process towards compliance will be a benefit in the long term.
· Next 6 months: Accurate planning is vital in the success of compliance. Businesses should thoroughly assess their current processes to define where they need new procedures and what steps are required to put these in place.
· Next 9 months: Understanding partners’ and suppliers’ processes will make the difference between winning and losing with GDPR compliance, especially in the event of a data breach. Take time to assess different scenarios and decide how to handle them, so that you’re ready when the time arrives.
The newborn GDPR is earning a reputation as a difficult set of rules to master and a laborious task to undertake. However, these rules will set the bar for personal data security, so compliance simply isn’t a matter of choice. Starting as early as possible and tackling it head-on is the best way to protect your business and to ensure that privacy and sensitive data are protected with the accountability we have all come to expect.
StorMagic’s virtual SAN (SvSAN) is being used in Cisco’s Secure Ops solution, the company’s cybersecurity offering for OT (Operational Technology) networks, particularly those in industrial automation environments. This includes businesses in the oil & gas sector and any brownfield environment where a mix of old and new IT infrastructure components is in place and where network monitoring, consistent data flow and high security are imperative. Companies currently using the solution include Diamond Offshore, a leader in offshore drilling and a provider of contract drilling services to the energy industry around the globe. It is also used by a major oil and gas company to protect its ICS network.
Secure Ops’ original architecture ran in Cisco’s data centre and depended on non-HA local storage to provide the required performance and uptime. However, the company recently revised the architecture to include a client-side compute infrastructure, allowing Secure Ops to provide security protection paired with a data centre component that collects and analyses critical information. From the start, the development team knew that the product’s success would hinge on building a cost-effective solution that could deliver high performance and availability in a small rack space, while also ensuring client data was secure and protected.
This necessitated an entirely new approach to servers and storage, which was met in part by StorMagic’s SvSAN. The updated design uses SvSAN to build the required clusters of highly dense rack servers for high availability at a competitive price point, while still meeting customers’ expectations of uptime and performance. Looking deeper into the architecture, each client site has a cluster of two Cisco UCS C220 M4 rack servers that communicate back to another cluster of two Cisco UCS C240 M4 rack servers in a Secure Ops data centre. The servers are virtualised using VMware and the storage is virtualised by StorMagic using internal server disks, so there is no need for external storage arrays at any of the sites. Additionally, SvSAN mirrors data between the servers, providing the required high availability at each location.
Before selecting StorMagic SvSAN to deliver the virtual storage, Secure Ops also evaluated other virtual storage software solutions, an integrated hyperconverged appliance, a traditional external storage array and an in-house development approach using open source software. The Cisco/StorMagic solution was selected because it met all of the design requirements: low cost, performance, high availability and density. The combined solution is:
· Robust: protects mission-critical data and operations with a proven, highly available and automated solution for any environment and location.
· Cost effective: eliminates physical SANs by converging compute and storage onto a lightweight commodity server footprint, dramatically lowering costs.
· Flexible: delivers on today’s performance needs, leveraging any CPU and storage type whilst avoiding over-provisioning.
StorMagic’s SvSAN is one of several tightly integrated products and services that have been brought together into a single piece of kit offering a robust, flexible and secure way to enable network monitoring and data flow. The result is a single global information repository with situational awareness dashboards that display system health.
“StorMagic’s virtual SAN helped us lower our infrastructure costs by completely eliminating external storage and delivering more than 99% uptime for our IT security offering. This is a huge time saving for everyone involved,” said Nicky Haan, Business Architect, Cisco Secure Ops.
The Cisco Advanced Services team that developed the Secure Ops solution is very happy with its choice of Cisco servers and StorMagic’s SvSAN for virtual storage and high availability. The team now has an extremely rack-dense solution at each site, without the cost, complexity or space required by traditional external storage arrays. Many of its end-user clients are in locations with poor network connectivity that depend on satellite links (256Kb bandwidth and up to 3 seconds of latency), yet SvSAN performs flawlessly in these harsh environments. Deployment and management have been a breeze, as everything can be handled remotely. Secure Ops has lowered its capital expenditure by more than 40% and uptime has risen from 96% to over 99%, eliminating a lot of headaches for the support team.