iland Announces Major Upgrade to Cutting-Edge Secure Cloud Console – PR Web

“veeam” – Google News

“Veeam and iland combine the best of enterprise-class software and services, advanced data protection solutions, and the agility, availability and business acceleration of a global cloud platform designed for immediate business recovery.”

iland, an industry-leading provider of secure cloud services and Veeam Impact Partner of the Year, today announced a significant upgrade to iland Secure Cloud Console. This new release includes integration with the full suite of Veeam data protection solutions. iland is the only cloud service provider to offer this ground-breaking level of Veeam data protection integration in a single management console.
The release of Veeam Availability Suite 9.5 Update 4 means iland now provides a single interface for customers to manage, monitor, and report on their disaster recovery services, cloud-based backups, and long-term archive strategies.
iland’s update also includes improvements to historical usage, billing, and performance visibility. This is in addition to full self-service management capabilities that enable customers to request additional resources, configure their disaster recovery automation, test their DR strategy, failover, and more. This delivers a level of transparency and flexibility that fully supports enterprise ambitions for control and agility. The update enables iland customers and partners to operate advanced strategies for business continuity and resilience while reducing the management burden on pressured IT departments.
As a result of these updates, users can now leverage a single, cost-effective solution to meet all their cloud data protection management needs while significantly improving operational efficiency, productivity, and resource forecasting. iland’s Secure Cloud Console, offering a single interface for customers who use the entire Veeam protection framework, will be an attractive proposition.
“Business leaders are pursuing digital transformation and modernization strategies to improve business agility, reduce costs, and spend less time maintaining infrastructure and more time focused on innovation and growing market share,” said Danny Allan, Vice President of Product Strategy at Veeam. “By creating a comprehensive data availability integration platform that combines backup, disaster recovery, and archive into a single solution, we can deliver on these critical customer needs. Veeam and iland combine the best of enterprise-class software and services, advanced data protection solutions, and the agility, availability and business acceleration of a global cloud platform designed for immediate business recovery.”
iland has been recognized as Veeam Impact Cloud & Service Provider Partner of the Year for North America in 2015 and 2017 and named a Veeam Innovation Award winner in 2018 and 2019. The partnership not only further validates iland’s industry-leading, cloud-based backup and disaster recovery services but also strengthens the joint offering that empowers business leaders to pursue digital transformation.
Dante Orsini, Senior Vice President, Business Development, added: “Maintaining availability of applications is becoming increasingly difficult given the rapid growth of data and the security risks that today’s modern organizations face. This latest collaboration with Veeam provides a hyper-available platform, improving how our customers and partners access, manage and act on their data wherever it lives—on-premises or in the cloud. As a global leader in disaster recovery, iland now enables our customers, resellers and managed service providers to lower storage costs through support for archiving, and further simplifies disaster recovery failover through direct integration with our award-winning Secure Cloud Platform.”
Availability
iland’s comprehensive Disaster Recovery, Backup and Archive integration of Veeam Availability Suite 9.5 Update 4 is available today.
About iland
iland is a global cloud service provider of secure and compliant hosting for infrastructure (IaaS), disaster recovery (DRaaS), and backup as a service (BaaS). It is recognized by industry analysts as a leader in disaster recovery. The award-winning iland Secure Cloud Console natively combines deep layered security, predictive analytics, and compliance to deliver unmatched visibility and ease of management for all of iland’s cloud services. Headquartered in Houston, Texas, with offices in London, UK, and Sydney, Australia, iland delivers cloud services throughout the Americas, Europe, Australia and Asia. Learn more at http://www.iland.com.

Original Article: https://www.prweb.com/releases/iland_announces_major_upgrade_to_cutting_edge_secure_cloud_console/prweb15891789.htm

DR

Cloud Data Management Just Got a WHOLE Lot Easier!

Veeam Executive Blog – The Availability Lounge  /  Danny Allan

In every customer meeting I attend, “cloud” is mentioned within a matter of minutes. It’s the technology virtually every company is looking to embrace in some fashion, especially as businesses look to increase agility. However, there is always a doubt in customers’ minds over the management and protection of their data within these environments, and I spend a lot of time helping the organizations I meet alleviate their worries.
 
According to recent research[i], IT leaders are particularly concerned about the portability, management and Availability of cloud workloads across their multi-cloud environments. What is the point in having data saved if you can’t access or move it when you want and need it? In fact, the same research found:

  • 47% are concerned that cloud workloads aren’t as portable as intended
  • 58% indicate that migration of data and workloads to the cloud is challenging
  • 61% are concerned about cloud workload backup and recovery
  • 82% are concerned about application uptime

These concerns are valid because, quite frankly, traditional data management is not fit for purpose.
 
Today, Veeam addresses these issues by announcing a major step forward to meet the demands of businesses and further extend its leadership in cloud data management.
 
As a company, Veeam was born on virtualization and has rapidly adapted to customer needs to become the leader in Intelligent Data Management for hybrid cloud environments, thanks to the Veeam Availability Platform. In keeping with Veeam’s speed and momentum in the market, Veeam today announced new capabilities to expand its leadership in cloud data management with robust cloud-native data protection, easy cloud data portability, increased security and data governance, and solutions to make it easier than ever for service providers to deliver Veeam-powered services to market. This gives businesses the ability to get industry-leading cloud data management delivered as a managed service too.
 
Specifically, today we have unveiled enhanced cloud capabilities as part of Veeam Availability Suite 9.5 Update 4, and the upcoming Veeam Availability for AWS and Veeam Availability Console v3. You don’t just have to read about these new announcements; why not watch me talk about them during the session I delivered at our Sales Kick-Off event today and learn more about why we are so excited about these new offerings.
 
With these latest announcements, Veeam bolsters its position as the clear market leader in cloud data management with strong partnerships with AWS, Microsoft Cloud, IBM Cloud and over 21,000 service providers. This is great news for customers, as they can now easily address many of the challenges with the migration, management and Availability of cloud workloads across their multi-cloud environments, through the easy-to-use Veeam platform they know and love.
 
Once again, Veeam has delivered a raft of innovations that just work: they answer specific customer needs and mitigate the top concerns of IT teams and business leaders. All of this and we are not even one month into the year. 2019 is clearly going to be a great year for Veeam customers!
 
[i]  Multi-cloud Complexity Calls for a Simple Cross-cloud Data Protection Solution, Frost & Sullivan

Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/tRmnCf2qEdM/enhanced-cloud-capabilities-unveiled.html

DR

Veeam cloud backup gains tiering, mobility, AWS enhancements

SearchDataBackup

The latest version of Veeam Software’s flagship Availability Suite focuses on facilitating tiering and migrating data into public clouds.
Veeam launched what it calls Update 4 of Availability Suite 9.5 today, less than a week after picking up $500 million in funding that can help the vendor expand its technology through acquisitions, as well as organically.
Availability Suite includes Backup & Replication and the Veeam ONE monitoring and reporting tool. Update 4 of Veeam Availability Suite 9.5 had been available to partners for a short time and is now generally available to end-user customers.
The new Veeam Availability Suite update includes Cloud Tier and Cloud Mobility features, and extends its Direct Restore capability to AWS.
With its cloud focus, Veeam is making a push in a trending area in the market, according to Christophe Bertrand, senior analyst at Enterprise Strategy Group.
“Our research clearly indicates that backup data and processes are shifting to the cloud,” Bertrand wrote in an email. “Cloud-related features are going to be critical in the next few years as end users continue to ‘hybridize’ their environments, and there is still a lot to do to harmonize backup/recovery service levels and instrumentation across on-premises and cloud infrastructures.”

Cloud Tier, Cloud Mobility, N2WS integration launched

Veeam started out as backup for virtual machines but now also protects data on physical servers and cloud platforms.
Availability Suite’s Cloud Tier offers built-in automatic tiering of data in a new Scale-out Backup Repository to object storage. The Veeam cloud backup feature provides long-term data retention by using object storage integration with Amazon Simple Storage Service (S3), Azure Blob Storage, IBM Cloud Object Storage, S3-compatible service providers and on-premises storage products.

Cloud Tier helps to increase storage capacity and lower cost, as object storage is cheaper than block storage, according to Danny Allan, Veeam vice president of product strategy.
Cloud Mobility provides migration and recovery of on-premises or cloud-based workloads to AWS, Azure and Azure Stack.
Allan said Veeam cloud backup users simply need to answer two questions for business continuity: Which region and what size instance?
Veeam also extended its Direct Restore feature to AWS. It previously offered Direct Restore for Azure. Direct Restore allows organizations to restore data directly to the public cloud.
Veeam also added a new Staged Restore feature to its DataLabs copy management tool. Staged Restore reduces the time it takes to review sensitive data and remove personal information, streamlining compliance with regulations such as the GDPR. The new Secure Restore scans backups with an antivirus software interface to stop malware such as ransomware from entering production during recovery.
The Veeam cloud backup updates also include integration with N2WS, an AWS data protection company that Veeam acquired a year ago. The new Veeam Availability for AWS combines Veeam N2WS cloud-native backup and recovery of AWS workloads with consolidation of backup data in a central Veeam repository. End users can move data to multi-cloud environments and manage it.
“This is designed for the hybrid customer, which is primarily what we see now,” Allan said.
For cloud-only customers, N2WS Backup & Recovery is still available from the AWS Marketplace.
Bertrand said he thinks the N2WS integration is a “starting point.”
“The key here is to offer the capability of managing all the backups in one place,” Bertrand wrote. “It gives end users better visibility over their Veeam backups whether in AWS or on premises.”
Veeam also updated its licensing. Users can move licenses automatically when workloads move between platforms, such as multiple clouds.

How Veeam products stack up in crowded market

Private equity firm Insight Venture Partners, which acquired a minority share in Veeam in 2013, invested an additional $500 million in the software vendor last Wednesday.
Veeam has about 3,500 employees and plans to add almost 1,000 more over the coming year. In addition, last November, Veeam said it is investing $150 million to expand its main research and development center in Prague. Veeam claims to have about 330,000 customers.
Veeam founder Ratmir Timashev said the vendor is looking to grow further by using its latest funding haul to acquire companies and technologies.
Veeam isn’t the only well-funded startup in the data protection and data management market. Last week, Rubrik also completed a large funding round. The $261 million Series E funding will help Rubrik build out its Cloud Data Management platform. Cohesity closed a $250 million funding round and Actifio pulled in $100 million in 2018. The data protection software market also includes established vendors such as Veritas, Dell EMC, IBM and Commvault.
While many of its competitors now offer integrated appliances, Veeam is sticking with a software-only model. Instead of selling its own hardware, Veeam partners with such vendors as Cisco, Hewlett Packard Enterprise, NetApp and Lenovo.
“Unlike many others, we have a completely software-defined platform,” Allan said, “giving our customers choices for on-premises hardware vendors or cloud infrastructure.”

Original Article: https://searchdatabackup.techtarget.com/news/252456264/Veeam-cloud-backup-gains-tiering-mobility-AWS-enhancements

DR

Healthcare IT: Solution coexistence creates value

Veeam Executive Blog – The Availability Lounge  /  Jonathan Butz

We are in the midst of a period of transformation in Healthcare: the transition from fee for service to fee for outcome. The net effect of this is cost pressure across healthcare systems, spawning programs to optimize the delivery of care, reduce cost, and find efficiencies across the board, including IT.
 
“Do more with less” has been the mantra of Healthcare IT leaders for the better part of a decade. The healthcare systems and hospitals that achieve the greatest savings avail themselves of choice in the marketplace, almost always having more than one solution in a given category.
 
Pick any data center function, be it networking, security, compute, storage, database, monitoring, management, or data protection, and you will be hard-pressed to find a single solution that meets every need. There is a sound argument to be made, both economically and intuitively, that there are benefits to having at least two solutions in each category:

  • No two are truly identical
  • One will deliver better outcomes for certain use cases or possess specific features that deliver greater value
  • Choice allows the most efficient investment in each and the ability to respond to changing needs over time

Healthcare systems, hospitals, clinics, and practices make significant investments in Veeam and leverage a second solution to protect data that Veeam cannot. Let’s take a look at our largest customers: of Veeam’s top thirty North American healthcare customers, 64% run Epic. This is despite the fact that Veeam does not currently protect all of Epic (notably Web BLOB at scale) due to our present lack of NAS support. We also have more than 120 MEDITECH customer wins, with new ones every quarter, yet we cannot presently deliver the specialized requirements to safely protect MEDITECH’s data volumes.
 
Why would hospitals protect thousands of systems with Veeam and use a second solution for the remainder?
 
They did the math.
 
Veeam is a case study in the value of coexistence in the data center. Veeam originally offered a solution to protect only virtual machines. When virtualization was new, it was a small portion of the data center. By adopting Veeam, customers added a data protection solution to their portfolio, and that enabled them to capitalize on the value of choice by expanding the solution that offered the greater value. Now, with virtualized x86 representing the vast majority of the healthcare data center, Veeam customers realize substantial value — even though they continue to have needs that Veeam cannot yet meet.
 
Veeam customers conclude that using Veeam for the 80% of data center systems and applications for which Veeam is optimally suited results in a 30-40% reduction in the total cost of data protection for those systems. That is the value of coexistence. That is the value of choice. And as our portfolio continues to expand and offer solutions to address more specific healthcare application requirements, they’ll do the math again, and that will be the primary input to their next decision.
 
We are committed to supporting doctors, nurses, and patients by assuring the Availability of healthcare applications. We are committed to expanding the applications we protect so our customers can realize further savings, whether virtualized, cloud, physical, Office 365, or even AIX.
 
The value of coexistence is quite clear: it allows the realization of substantial savings and the agile adoption of the best solution. Explore your options and see why so many hospitals continue to add Veeam to their portfolio of data center solutions.

Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/64F9qkd4l4c/healthcare-it-solution-coexistence.html

DR

Veeam Gets Massive $500 Million Funding, Sets Sights On R&D, Acquisitions For Future Growth

CRN  /  Joseph F. Kovar

Cloud-focused data protection software developer Veeam Wednesday unveiled a new $500 million round of funding the company said will ensure its expansion into new areas around cloud data management and give it the leverage needed for acquisitions.
The new round means that Veeam has no need, and no plans, to move toward an IPO, said Ratmir Timashev, co-founder and executive vice president of sales and marketing for Baar, Switzerland-based Veeam.
The large funding round tops the $261 million round unveiled Tuesday by Rubrik, a Palo Alto, Calif.-based developer of data protection and cloud data management hardware and software.
[Related: The 10 Hottest Storage Startup Companies Of 2018]
Veeam’s new round is significant on a number of levels, Timashev told CRN. Not only is it Veeam’s largest round of funding, it may be the storage industry’s largest funding round, Timashev said.
Prior to this round, total funding in Veeam amounted to about $32 million, including funding from the company founders and an early funding round from Insight Venture Partners of $15 million, he said.
Insight Venture Partners is the lead investor in this new round, with additional funding coming from the Canada Pension Plan Investment Board.
Timashev said Veeam does not need the new round to grow. “Veeam is a profitable company and generates a lot of cash,” he said. “Before this investment, we had $800 million in cash.”
Instead, Timashev said, the new $500 million round provides the freedom to scale the company and make acquisitions.
Veeam is already the fourth-largest provider of data protection software worldwide, and has the largest market share of any vendor in Europe, he said, citing IDC’s Software Tracker for Data Replication and Protection for the first half of 2018.
The company generated close to $1 billion in bookings and about $250 million in cash last year, and is taking 3 percent to 4 percent market share from competitors every year. “And we’ve been profitable for the last 10 years,” he said.
Veeam has a small acquisition record so far. The company in 2008 used its original funding to acquire a small company called nworks, and still sells the nworks Management Pack, although it is not a strategic focus.
The company in late 2017 acquired technology for data protection on Unix environments, particularly in IBM AIX and Oracle Solaris environments.
Veeam in early 2018 also acquired N2WS, which Timashev said develops a leading technology for data protection in Amazon Web Services.
The new funding allows Veeam to follow through on some aggressive R&D plans with a combination of in-house R&D and acquisitions that accelerate the process, Timashev said.
“The next 10 years will see a focus on multi-cloud environments,” he said. “Whoever has data management technology for multi-cloud environments will be the winner. We know that customers will be looking to deploy cloud data management, archival, compliance, security, and container technology.”
Timashev acknowledged that the new investment from Insight Venture Partners in a company like Veeam, which is already profitable, does act to dilute current investors’ shares in the company. However, he said, the funding does more than give Veeam scale and flexibility.
While Insight Venture Partners remains a minority shareholder, it brings a wealth of knowledge and experience that Veeam can use as it grows, he said.
“Insight talks to 50,000 software companies globally each year,” he said. “It knows what customers are doing. It knows customers’ strategies.”
Unlike many of its competitors in the data protection market, Veeam has no hardware business, and can focus its development on its software business and work with other vendors as partners, Timashev said. These include Hewlett Packard Enterprise, Cisco Systems, NetApp and Lenovo, all of which resell the Veeam offering, and Nutanix and Pure Storage, which partner with Veeam without reselling it, he said.
“Our business model is never hardware,” he said. “We would rather partner with system vendors.”

Original Article: https://www.crn.com/news/storage/veeam-gets-massive-500-million-funding-sets-sights-on-r-d-acquisitions-for-future-growth

DR

Win a trip to Miami for VeeamON 2019

Veeam Software Official Blog  /  Eli Afanasyev

With the holiday season fast approaching, we’re proud to look back on another successful year, and forward to 2019, where we will have even more exciting news for you, Veeam folks! Thanks to the continuing support of our customers and partners, we’ve reached a customer satisfaction level 3.5 times higher than the industry average. So, when nine out of ten of our customers say they would recommend us, we couldn’t be happier.
Veeam was also listed on the Forbes Cloud 100 list for the third year in a row, which means that independent analysts and media publications see Veeam as being a step above other industry-leading solutions.
But even with more than 300,000 businesses relying on Veeam, we aren’t resting on our laurels.
Get ready for May — VeeamON is coming to Miami! Back for a fifth installment, the premier conference for Intelligent Data Management will bring together IT professionals from all around the globe to discover the latest news in Availability through inspiring breakout sessions, presentations and networking.

Join us in Miami

To celebrate the holidays, we’re offering FREE, full VeeamON 2019 passes to three lucky winners. Oh yeah, with round-trip flights to Miami and accommodation at a top hotel for the whole duration of the conference.
Enter the competition now for a chance to visit the biggest event in the industry and, of course, enjoy world-famous Miami Beach with its sunsets, seaside, Latin-infused cuisine and vibrant nightlife.
Wishing you all happy holidays and good luck!
The post Win a trip to Miami for VeeamON 2019 appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/EOgFvQRhU4o/free-veeamon-around-trip.html

DR

AWS vs Azure vs Google Cloud: Storage and Compute Comparison

N2WS  /  Cristian Puricica

Choosing a public cloud service provider (CSP) has become a complex decision. Today, it’s no longer a question of which option you should work with, but rather, how to achieve optimal performance and distribute risk across multiple vendors—while containing cloud compute and storage costs at the same time.
In a recent Virtustream/Forrester survey of more than 700 cloud decision makers, 86% of respondents said that their enterprises are deploying workloads across more than one CSP. We learn from the same survey that the prime motivation for adopting a multi-cloud strategy is to improve performance, followed by cost savings and faster delivery times. Today, the three leading CSPs are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), with respective market shares of 62%, 20%, and 12%.
In this post, the first in a two-part series, we will compare and contrast what AWS, Azure, and GCP offer in terms of storage, compute, and management tools. In the following post, we will discuss big data and analytics, serverless, machine learning, and more. Armed with this information, it should be easier for you to map out your multi-cloud strategy.

Service-to-Service Comparison

Enterprises typically look to CSPs for three levels of service: Infrastructure as a Service (IaaS, i.e., outsourcing of self-service compute-storage capacity); Platform as a Service (PaaS, i.e., complete environments for developing, deploying, and managing web apps); and secure, performant hosting of Software as a Service (SaaS) apps.
Keeping these levels in mind, we have chosen to compare:

  1. Storage (IaaS)
  2. Compute (IaaS)
  3. Management Tools (IaaS, PaaS, SaaS)

Note: We won’t be comparing pricing, since it is quite difficult to achieve an apples-to-apples comparison without a very detailed use case. Once you have determined your organization’s CSP requirements, you can use the CSP price calculators to check if there are significant cost differences: AWS, Azure, GCP.

Storage

The CSPs offer a wide range of object, block, and file storage services for both primary and secondary storage use cases. You will find that object storage is well suited to handling massive quantities of unstructured data (images, videos, and so on), while block storage provides better performance for structured transactional data. Storage tiers offer varying levels of accessibility and latency to cost-effectively meet the needs of both active (hot) and inactive (cold) data.
In terms of differentiators, Azure takes the lead in managed DR and backup services. When it comes to managing hybrid architectures, AWS and Azure have built-in services, while GCP relies on partners.

Compute

The CSPs offer a range of predefined instance types that define, for each virtual server launched, the type of CPU (or GPU) processor, the number of vCPU or vGPU cores, RAM, and local temporary storage. The instance type determines compute and I/O speeds and other performance parameters, allowing you to optimize price/performance according to different workload requirements. It should be noted that GCP, in addition to its predefined VM types, also offers Custom Machine Types.
The CSPs offer pay-as-you-go PaaS options that automatically handle the deployment, scaling, and balancing of web applications and services developed in leading frameworks such as Java, Node.js, PHP, Python, Ruby, and more.
AWS offers auto scaling at no additional charge, based on scaling plans that you define for all the relevant resources used by the application. Azure offers auto scaling per app, or as part of platforms that manage groups of apps or groups of virtual machines. GCP offers auto scaling only within the context of its Managed Instance Groups platform.
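To make the AWS option concrete, here is a minimal boto3 sketch that attaches a target-tracking scaling policy to an existing Auto Scaling group; the group name and the 50% CPU target are hypothetical placeholders, not values drawn from the comparison above.

```python
# Minimal sketch: AWS target-tracking auto scaling via boto3.
# "web-asg" and the 50% CPU target are hypothetical placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Scale the group in and out to hold average CPU near 50%.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```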
Both AWS and Azure offer services that let you create a virtual private server in a few clicks, but GCP does not yet offer this capability.

Management Tools

As you may have already experienced, managing and orchestrating cloud resources across multiple business units and complex infrastructures can be a daunting challenge. All three CSPs offer platforms and services to streamline and provide visibility into the organization, configuration, provisioning, deployment, and monitoring of cloud resources. These offerings range from predefined deployment templates and catalogs of approved services to centralized access control. However, AWS and Azure seem to have invested more heavily in this area than GCP, and AWS even offers outsourced managed services (AWS Managed Services).

And the Winner Is…

In today’s multi-cloud world, you shouldn’t be seeking to identify a single “winner,” but rather how to optimally distribute workloads across multiple CSPs. As you map out your multi-cloud strategy, bear in mind that in the key categories of storage, compute, and management tools, AWS and Azure offer a more complete and mature stack than GCP. In general, AWS’ services and products are the most comprehensive, but they can also be challenging to navigate and manage. Also consider that if your company is already using Microsoft’s development tools, Windows servers, and Office productivity applications, you will find it very easy to integrate with Azure.
In the second part of this blog series, we will compare how the CSPs support next-generation technologies such as containers, serverless, analytics, and machine learning. We will also look at higher level issues, such as user friendliness, security, and partnership ecosystems, and provide some final thoughts on how to choose the right CSP(s) for your organization’s needs.
Looking for an AWS Data Protection solution? Try N2WS Backup & Recovery (CPM) for FREE!

Original Article: https://n2ws.com/blog/aws-vs-azure-vs-google-cloud

DR

Meeting the intelligent data management needs of 2019 – Intelligent CIO Africa

“veeam” – Google News

Meeting the intelligent data management needs of 2019

Dave Russell, Vice President for Product Strategy at Veeam, outlines five intelligent data management needs CIOs need to know about in 2019.
The world of today has changed drastically due to data. Every process, whether an external client interaction or internal employee task, leaves a trail of data. Human and machine generated data is growing 10 times faster than traditional business data, and machine data is growing at 50 times that of traditional business data.
With the way we consume and interact with data changing daily, the number of innovations to enhance business agility and operational efficiency are also plentiful. In this environment, it is vital for enterprises to understand the demand for Intelligent Data Management in order to stay one step ahead and deliver enhanced services to their customers.
I’ve highlighted five hot trends for 2019 that decision-makers need to know about. Keeping the Europe, Middle East and Africa (EMEA) market in mind, here are my views:

  1. Multi-Cloud usage and exploitation will rise

With companies operating across borders and the reliance on technology growing more prominent than ever, an expansion in multi-cloud usage is almost inevitable. IDC estimates that customers will spend US$554 billion on cloud computing and related services in 2021, more than double the level of 2016.
On-premises data and applications will not become obsolete, but the deployment models for your data will expand, with an increasing mix of on-premises, SaaS, IaaS, managed clouds and private clouds.
Over time, we expect more of the workload to shift off-premises, but this transition will take place over years, and we believe that it is important to be ready to meet this new reality today.

  2. Flash memory supply shortages, and prices, will improve in 2019

According to a report by Gartner in October this year, flash memory supply is expected to revert to a modest shortage in mid-2019, with prices expected to stabilise largely due to the ramping of Chinese memory production.
Greater supply and improved pricing will result in greater use of flash deployment in the operational recovery tier, which typically hosts the most recent 14 days of backup and replica data. We see this greater flash capacity leading to broader usage of instant mounting of backed up machine images (or copy data management).
Systems that offer copy data management capability will be able to deliver value beyond availability, along with better business outcomes. Example use cases for leveraging backup and replica data include DevOps, DevSecOps and DevTest, patch testing, analytics and reporting.

  3. Predictive analytics will become mainstream and ubiquitous

The predictive analytics market is forecast to reach $12.41 billion by 2022, marking a 272% increase from 2017, at a CAGR of 22.1%.
Predictive analytics based on telemetry data, essentially Machine Learning (ML)-driven guidance and recommendations, is one of the categories most likely to become mainstream and ubiquitous.
Machine Learning predictions are not new, but we will begin to see them utilising signatures and fingerprints, containing best-practice configurations and policies, to allow businesses to get more value out of the infrastructure they have deployed and are responsible for.
Predictive analytics, or diagnostics, will assist us in ensuring continuous operations, while reducing the administrative burden of keeping systems optimised. This capability becomes vitally important as IT organisations are required to manage an increasingly diverse environment, with more data, and with more stringent service level objectives.
As predictive analytics becomes more mainstream, SLAs and SLOs are rising, and businesses’ Service Level Expectations (SLEs) are even higher. This means that we need more assistance, more intelligence, in order to deliver on what the business expects from us.

  4. The ‘versatalist’ (or generalist) role will increasingly become the new operating model for the majority of IT organisations

While the first two trends were technology-focused, the future of digital is still analogue: it’s people. Talent shortages, combined with shrinking on-premises infrastructure and the growth of public cloud and SaaS, are producing broader technicians with backgrounds in a wide variety of disciplines, and increasingly a greater business awareness as well.
Standardisation, orchestration and automation are contributing factors that will accelerate this, as more capable systems allow for administrators to take a more horizontal view rather than a deep specialisation.
Specialisation will of course remain important but as IT becomes more and more fundamental to business outcomes, it stands to reason that IT talent will likewise need to understand the wider business and add value across many IT domains.
Yet, while we see these trends challenging the status quo next year, some things will not change. There are always constants in the world, and we see two major factors that will remain top-of-mind for companies everywhere:

  • Frustration with legacy backup approaches and solutions

The top three vendors in the market continue to lose market share in 2019. In fact, the largest provider in the market has been losing share for 10 years. Companies are moving away from legacy providers and embracing more agile, dynamic, disruptive vendors, such as Veeam, to offer the capabilities that are needed to thrive in the data-driven age.

  • The pain points of the Three Cs: Cost, complexity and capability

These Three Cs continue to be why people in data centres are unhappy with solutions from other vendors. Broadly speaking, these are excessive costs, unnecessary complexity and a lack of capability, which manifests in the speed of backup, the speed of restoration, or the instant mounting of a virtual machine image. These three criteria will continue to dominate the reasons why organisations augment or fully replace their backup solution.

  5. The arrival of the first 5G networks will create new opportunities for resellers and CSPs to help collect, manage, store and process the higher volumes of data

In early 2019 we will witness the first 5G-enabled handsets hitting the market at CES in the US and MWC in Barcelona. I believe 5G will likely be most quickly adopted by businesses for Machine-to-Machine communication and Internet of Things (IoT) technology. Consumer mobile network speeds have reached a point where they are probably as fast as most of us need with 4G.
2019 will be more about the technology becoming fully standardised and tested, and future-proofing devices to ensure they can work with the technology when it becomes more widely available, and EMEA becomes a truly Gigabit Society.
For resellers and cloud service providers, excitement will centre on the arrival of new revenue opportunities leveraging 5G or infrastructure to support it. Processing these higher volumes of data in real-time, at a faster speed, new hardware and device requirements, and new applications for managing data will all present opportunities and will help facilitate conversations around edge computing.

Original Article: http://www.intelligentcio.com/africa/2018/12/06/meeting-the-intelligent-data-management-needs-of-2019/

DR

Veeam updates Availability Suite 9.5, N2WS platform – Storage Soup – TechTarget

“veeam” – Google News


Veeam updates to its flagship product and AWS data protection include the ability to migrate old data to cheaper cloud or object storage.
Update 4 of Veeam Availability Suite 9.5 is scheduled to be released this month to partners. General availability will follow at a date to be determined. Veeam’s flagship Availability Suite consists of Veeam Backup & Replication and the Veeam ONE monitoring and reporting tool.
The new Cloud Tier feature of the Scale-out Backup Repository within Backup & Replication facilitates moving older backup files to cheaper storage, such as cloud or on-premises object storage, according to a presentation at the VeeamON Virtual conference this week. Targets include Amazon Simple Storage Service (S3), Microsoft Azure Blob Storage and S3-compatible object storage. Files remain on premises, but only as a shell.
The Veeam updates also feature Direct Restore to AWS. The process, which takes backup files and restores them into the public cloud, works in the same way as Veeam’s Direct Restore to Azure, said Michael Cade, a technologist for product strategy at Veeam.
A Staged Restore in the Veeam DataLabs copy data management tool helps with General Data Protection Regulation compliance, specifically the user’s “right to be forgotten,” Cade said. DataLabs can run a script removing personal data from a virtual machine backup, then migrate the VM to a production environment.
The Veeam updates also include ransomware protection. Also within Veeam DataLabs, the Secure Restore feature enables an optional antivirus scan to ensure the given restore point is uninfected before restoring. Veeam is not going to prevent attacks, but it can help with remediations, Cade said.
In addition, intelligent diagnostics in Veeam ONE analyze Backup & Replication debug logs, looking for known problems, and proactively report or remediate common configuration issues before they impact operations.

N2WS data protection tiers up

The Veeam updates include N2WS, a company it acquired at the end of last year. N2WS, which provides data protection for AWS workloads, released Backup & Recovery 2.4 last week, enabling users to choose from different storage tiers and reduce the cost of data requiring long-term retention.
The vendor launched Amazon Elastic Block Store snapshot decoupling and the N2WS-enabled Amazon S3 repository. Customers can move snapshots to the repository.
That repository saves up to 40% on costs, said Ezra Charm, vice president of marketing at N2WS.
“Data storage in AWS can get expensive,” especially if an organization is looking at long-term retention, for example at least two years, Charm said during the virtual conference.
Possible uses include archiving data for compliance in S3, a cheaper storage tier. Managed service providers can also use it to lower storage costs for clients.
In addition, the N2WS update features VPC Capture and Clone. That capability captures VPC settings and clones them to other regions, which eliminates the need for manual configuration during disaster recovery, according to N2WS. An enhanced RESTful API automates backup and recovery for business-critical data.
“Any of the data you store [in AWS] is clearly your responsibility,” Charm said.
AWS data protection is an emerging market. In June 2018, Druva acquired CloudRanger, another company that provides backup and recovery of AWS data.
While human error is the most likely reason AWS data protection is needed, Charm said, there are many other possible issues.
“Ransomware in AWS has been documented,” he said.
N2WS Backup & Recovery 2.4 is available now in the AWS Marketplace.

Original Article: https://searchstorage.techtarget.com/blog/Storage-Soup/Veeam-updates-Availability-Suite-95-N2WS-platform

DR

Hospitality group turns to Veeam for round-the-clock availability – Intelligent CIO Africa

“veeam” – Google News

Hospitality group turns to Veeam for round-the-clock availability

Peermont Hotels, Casinos and Resorts is an award-winning hospitality and entertainment company that operates 12 properties located across South Africa, Botswana, and Malawi. Renowned for its excellence in the design, development, management, ownership, and operation of multifaceted hospitality and gaming facilities, the group offers guests fine dining, relaxing hotel stays, exciting casino action, live entertainment, soothing spa treatments, and efficient conferencing and sporting activities, all in unique, safe and secure themed settings.
Hyper-availability of data and systems is fundamental to the success of the business, which led Peermont to turn to Veeam and implement its Veeam Availability Suite, Veeam ONE and Veeam Cloud Connect solutions.
“For us, customer service is fundamental and there are no allowances for not delivering a round-the-clock value proposition,” said Ernst Karner, Group IT Manager and CIO of Peermont Group.
“Being a hospitality and entertainment company, we do not have the luxury of a downtime window or shutting operations over the weekend for system maintenance; we are always on the go with our business.”
Even though many other organisations have only recently started embracing the need to have hyper-available data, Peermont has been operating in this always-on environment since it opened its doors more than two decades ago.
“Historically, backing up and restoring data was not an easy proposition. We had multiple vendors claiming various features that sounded nice on paper, but the reality worked quite differently. Unfortunately, no organisation can claim to not have any downtime or availability issues. The same can be said for us, where drive failures were quite common in the past.”
Karner says restoring data is always used as a last resort because of the impact it could have on live systems.
“Before Veeam, we had a few incidents where recovery from our backups was not possible. We did not have the level of comfort that all our backups (stretching to more than 500 virtual machines, 275 terabytes of data, and 13 data centres) were done reliably. At the time, it was impossible to guarantee the entire company being recoverable in the event of a disaster.”
Karner says he understands customer expectations and feels that it is unforgivable in this day and age for systems to go down, irrespective of the industry a business operates in. For visitors, things like internet access (which is provided free to all customers at all Peermont properties) have become a commodity. If Peermont is unable to deliver this, people start becoming very unforgiving.
“Customers expect to have access to services at any time of day or night. As such, we had to review our capacity across all spheres of the business (network, disk storage, and compute availability). For us, there is no such thing as a Plan B. Uptime is critical and we identified Veeam as our partner of choice to address our hyper-availability requirements.”
However, unlike other businesses, Peermont requires its disaster recovery facilities to remain onsite due to its unique availability requirements: if systems go down, there is an immediate financial impact on the business. It does, however, supplement its on-premises business continuity strategy with a hosted component as an additional fail-safe. For example, if central reservations are offline, no bookings can be made through any means, and guests will simply choose another hotel to stay at.
Similarly, if slot machines go down there is a real-time impact on revenues. Peermont can potentially lose considerable gross operating profit if downtime occurs. Not being available for two to three hours could therefore be disastrous to the business bottomline. Beyond the financial, the reputational damage could also be significant.
“Customers might forgive you once if systems are not available. But if this starts becoming an ongoing concern, they will migrate away from you to a competitor who can deliver on their expectations.”
Veeam therefore had to provide a strategic solution that could allay these concerns and provide Peermont with the peace of mind needed to deliver continuous services to customers.
The Veeam solution
As a precursor to implementing Veeam Availability Suite, Peermont embarked on a virtualisation journey in 2005. Gradually, it expanded to 70 hosts across the group, moving away from physical business applications.
By improving efficiencies, Peermont can also keep costs down and run analysis on data to better understand customer needs.
“We have a centralised data warehouse that collects information from all our casinos,” added Karner.
“Everything from supplier data and procurement systems to the requests we receive from tour operators requires several Extract, Transform, and Load (ETL) processes to manage effectively.”
From an intelligent data management perspective, Peermont is focused on delivering a completely integrated ecosystem designed to give customers the best value possible. It is about using the technology, data, and intelligence from the point of arrival to the point of departure.
“We identify guests from the moment they arrive at a property and tailor their experience according to the historical data and insights we have built on them,” said Karner.
“It is about using their preferences when it comes to where they dine, what they enjoy eating, what are their favourite tables to play at, and even what shows they prefer to watch, and creating a unique environment tailored to their needs.”
Customer satisfaction is therefore an essential deliverable, but this needs to be done without adding to the cost or making it a more complex environment. Beyond this balance, Peermont also needs to meet the regulatory requirements of the South African National and Provincial Gambling Boards.
Karner says this requires Peermont to deliver quarterly reports that show how data is backed up at casinos and illustrate its recoverability. This sees a Peermont team requesting a tape from its storage facility, restoring it to a regulated part of the server, booting it up, and verifying with the Gambling Board that the recovery was successful.
As part of these compliance fundamentals, Peermont must also make daily offsite backups of the regulated part of the server. Previously, this required tape cartridges to be delivered by courier to an external facility. And when the time came to restore the data, it was a time-consuming process of recalling the tapes. With Veeam Cloud Connect, this has changed as Peermont now has an agreement in place with an external cloud service for the secure storage of its regulated data. Veeam Cloud Connect completely automates the recovery and offsite storage of this data.
“Thanks to Veeam, we have been able to take a process that would take a day at each property, requiring numerous employees, and fully automate it to take 15 minutes,” said Karner.
“I sleep a lot better at night knowing I have a system that is reliable and can recover within minutes.”
The Results

  • Automation of backup environment results in time and money savings
    Using Veeam, backups that previously took Peermont a day to complete at each of its properties can now be done in under 15 minutes. And with the configuration and maintenance of its hyper-available environment centralised at head office using Veeam, Peermont can rest assured that the lack of data access that could cause significant loss of revenue is now a thing of the past.
  • Reliability and recoverability of backups
    Using Veeam, Peermont ensures the integrity of its backups and guarantees being able to restore critical data in the event of a disaster. With Veeam, its backups work without fail, giving the team confidence that risk management is taken care of.
  • Scalability to cater for individual property requirements
    Veeam scales according to the unique needs of each property in the Peermont Group. Whether it is a smaller casino with a limited number of virtual machines or head offices with continuously expanding data that runs into the terabytes, Veeam delivers on both ends of the spectrum.

Kate Mollett, Regional Manager for Africa South at Veeam, said the company provided a strategic solution that could address all customer requirements and provide Peermont with the peace of mind needed to deliver continuous services to customers.
“They might operate casinos and hotels, but they needed to bet on a sure-thing, rather than take a roll of the dice,” she said.
“As an organisation, Peermont embraced hyper-availability from the day it opened its doors and required a partner to deliver on its own exacting expectations, to help it meet those of its high-class clientele. Veeam has dealt Peermont the strongest hand possible to realise its data management goals and ensure the integrity of its backups.
“Veeam provides Peermont with the ability to restore critical data in the event of just about any disaster, rapidly. When it comes to data management, you don’t want to gamble with your reputation or your service delivery.”

Original Article: http://www.intelligentcio.com/africa/2018/12/12/hospitality-group-turns-to-veeam-for-round-the-clock-availability/

DR

Why it’s so easy to get started with Veeam

Veeam Software Official Blog  /  Melissa Palmer


When I first started working with Veeam Software, before becoming a Veeam Vanguard and joining the Veeam Product Strategy Team, one of the things I found most appealing was how easy it was to get started with and use.
Before I knew it, I was backing up and restoring virtual machines with ease. Now, I want to take a closer look at what makes Veeam products so easy to use and get started with.

Veeam just works

Like many people who love technology, I tend to leap before I look. Many times, beyond installation requirements, I will not actually read any how-to-get-started guides if I am working in a lab. Has this come back and bitten me at times? Sure it has, but with Veeam, it has not.
This goes beyond Veeam’s flagship product, Veeam Backup & Replication, and extends to Veeam ONE, Veeam Availability Orchestrator, and advanced features like storage integration. I haven’t met a Veeam product I found difficult to install.

Top-notch product documentation

After installing Veeam Backup & Replication and backing up and restoring a couple of virtual machines, I wondered what else I was missing. As it turns out, I was missing quite a lot! The good news is that Veeam product documentation is top notch.
When I was getting started with Veeam, I primarily worked with Veeam Backup & Replication for VMware vSphere. There were two documents I read cover to cover:

  • The Veeam Backup & Replication Evaluator’s Guide for VMware vSphere
  • The Veeam Backup & Replication User Guide

Here is why these two documents are so useful.
The Veeam Backup & Replication Evaluator’s Guide for VMware vSphere (there is also one for Hyper-V if you prefer that hypervisor) covers key getting-started information like system requirements, among other things. It also walks you through the complete setup of Veeam Backup & Replication, and more importantly, it walks you through the process of performing your first backups and restores.
Beyond simply restoring a full VM, which as we know is not always required, the Evaluator’s Guide shows you how to restore:

  • Microsoft SQL databases
  • Guest OS files
  • VM virtual disks
  • VM files (VMDKs, VMXs, etc.)

Once you are comfortable with the basics of Veeam Backup & Replication, the Veeam Backup & Replication User Guide has the full details of all the great things Veeam can do.
It also covers some of my favorite capabilities beyond the basics of Veeam Backup & Replication.

The Veeam community

One of the greatest parts of Veeam products is the community surrounding them. The Veeam community is huge, and there are a couple of ways to start interacting. First is the Veeam Community Forums, your one-stop shop on the web for interacting with other Veeam users, as well as some great technology experts who work at Veeam. You can sign up for free for the Veeam Community Forums right here.
Next, we have the Veeam Vanguards. They are a great group of Veeam enthusiasts and IT pros who share their knowledge not only on the Veeam Community Forums, but on Twitter and on their blogs. Simply search for the #VeeamVanguard hashtag on Twitter to get started interacting with these great folks.
It could not be easier to get started using Veeam products. Veeam offers a free 30-day trial of many of their products, including the flagship Veeam Availability Suite, which includes Veeam Backup & Replication and Veeam ONE. You can download the free trial here.
If you are an IT pro, you can also get a completely free NFR license for Veeam Availability Suite. Be sure to fill out the request form here, and get started with Veeam in your lab.
The post Why it’s so easy to get started with Veeam appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/zRVJupF3tog/why-its-so-easy-to-get-started-with-veeam.html

DR

10 Tips for a Solid AWS Disaster Recovery Plan

N2WS  /  Cristian Puricica

AWS is a scalable and high-performance computing infrastructure used by many organizations to modernize their IT. However, no system is invulnerable, and if you want to ensure business continuity, you need to have some kind of insurance in place. Disaster events do occur, whether they are a malicious hack, natural disaster, or even an outage due to a hardware failure or human error. And while AWS is designed in such a way to greatly offset some of these events, you still need to have a proper disaster recovery plan in place. Here are 10 tips you should consider when building a DR plan for your AWS environment.

1. Ship Your EBS Volumes to Another AZ/Region

By default, EBS volumes are automatically replicated within the Availability Zone (AZ) where they were created, in order to increase durability and offer high availability. And while this is a great way to avoid relying on a lone copy of your data, you are still tied to a single point of failure, since your data is located in only one AZ. In order to properly secure your data, you can either replicate your EBS volumes to another AZ or, even better, to another region.
To copy an EBS volume to another AZ, you simply create a snapshot of it, and then recreate a volume from that snapshot in the desired AZ. And if you want to move a copy of your data to another region, take a snapshot of your EBS volume, and then utilize the “copy” option and pick the region where your data will be replicated.
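If you script this, note that the cross-region copy is initiated from the destination region. Here is a minimal sketch using Python and boto3; the volume ID and region names are placeholders, not values from the article:

import boto3

# Source and destination regions (placeholders)
src = boto3.client('ec2', region_name='us-east-1')
dst = boto3.client('ec2', region_name='us-west-2')

# Snapshot the volume in its home region and wait for it to complete
snap = src.create_snapshot(VolumeId='vol-0123456789abcdef0',
                           Description='DR snapshot')
src.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])

# copy_snapshot is called against the destination region and pulls from the source
copy = dst.copy_snapshot(SourceRegion='us-east-1',
                         SourceSnapshotId=snap['SnapshotId'],
                         Description='Cross-region DR copy')
print('DR snapshot:', copy['SnapshotId'])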

2. Utilize Multi-AZ for EC2 and RDS

Just like your EBS volumes, your other AWS resources are susceptible to local failures. Making sure you are not relying on a single AZ is probably the first step you can take when setting up your infrastructure. For your database needs covered by RDS, there is a Multi-AZ option you can enable to create a standby RDS instance; if the primary fails, AWS fails over by switching the CNAME DNS record of your primary RDS instance to the standby. Keep in mind that this will generate additional costs, as AWS charges roughly double for a Multi-AZ RDS setup compared to a single RDS instance.
Your EC2 instances should also be spread across more than one AZ, especially the ones running your production workloads, to make sure you are not seriously affected if a disaster happens. Another reason to utilize multiple AZs with your EC2 instances is the potential lack of available resources in a given AZ, which can occur sometimes. To properly spread your instances, make sure AutoScaling Groups (ASG) are used, along with an Elastic Load Balancer (ELB) in front of them. ASG will allow you to choose multiple AZs in which your instances will be deployed, and ELB will distribute the traffic between them in order to properly balance the workload. If there is a failure in one of the AZs, ELB will forward the traffic to others, therefore preventing any disruptions.
With EC2 instances, you can even go across regions, in which case you would have to utilize Route53 (a highly available and scalable cloud DNS service) to route the traffic, as well as do the load balancing between regions.
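For illustration, enabling Multi-AZ on an existing RDS instance is a single API call. A minimal boto3 sketch with a placeholder instance identifier; ApplyImmediately applies the change right away rather than waiting for the next maintenance window:

import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Convert an existing single-AZ instance to Multi-AZ
# ('prod-db' is a placeholder identifier)
rds.modify_db_instance(DBInstanceIdentifier='prod-db',
                       MultiAZ=True,
                       ApplyImmediately=True)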

3. Sync Your S3 Data to Another Region

When we consider storing data on AWS, S3 is probably the most commonly used service. That is the reason why, by default, S3 duplicates your data behind the scenes to multiple locations within a region. This creates high durability, but data is still vulnerable if your region is affected by a disaster event. For example, there was a full regional S3 outage back in 2017 (which actually hit a couple of other services as well), which left many companies unable to access their data for almost 13 hours. This is a great (and painful) example of why you need a disaster recovery plan in place.
In order to protect your data, or just provide even higher durability and availability, you can use the cross-region replication option which allows you to have your data copied to a designated bucket in another region automatically. To get started, go to your S3 console and enable cross-region replication (versioning must be enabled for this to work). You will be able to pick the source bucket and prefix but will also have to create an IAM role so that your S3 can get objects from the source bucket and initiate transfer. You can even set up replication between different AWS accounts if necessary.
Do note though that the cross-region sync starts from the moment you enable it, so any data that already exists in the bucket prior to this will have to be synced by hand.
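For scripted setups, the same replication configuration can be applied through the API. A minimal boto3 sketch, assuming versioning is already enabled on both buckets; the bucket names and IAM role ARN are placeholders:

import boto3

s3 = boto3.client('s3')

# Replicate everything in my-source-bucket to my-dr-bucket
s3.put_bucket_replication(
    Bucket='my-source-bucket',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/s3-crr-role',
        'Rules': [{
            'ID': 'dr-rule',
            'Prefix': '',            # empty prefix = replicate all objects
            'Status': 'Enabled',
            'Destination': {'Bucket': 'arn:aws:s3:::my-dr-bucket'}
        }]
    })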

4. Use Cross-Region Replication for Your DynamoDB Data

Just like your data residing in S3, DynamoDB only replicates data within a region. For those who want to have a copy of their data in another region, or even support for multi-master writes, DynamoDB global tables should be used. These provide a managed solution that deploys a multi-region multi-master database and propagates changes to various tables for you. Global tables are not only great for disaster recovery scenarios but are also very useful for delivering data to your customers worldwide.
Another option would be to use scheduled (or one-time) jobs which rely on EMR to back up your DynamoDB tables to S3, which can later be used to restore them not only to another region, but also to another account if needed. You can find out more about it here.
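To sketch the global tables option in code: assuming a table with the same name already exists in each region, with DynamoDB Streams (new and old images) enabled on both, a single boto3 call joins them into a global table. Table and region names here are placeholders:

import boto3

ddb = boto3.client('dynamodb', region_name='us-east-1')

# Join identically named regional tables into one global table
ddb.create_global_table(
    GlobalTableName='customers',
    ReplicationGroup=[{'RegionName': 'us-east-1'},
                      {'RegionName': 'eu-west-1'}])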

5. Safely Store Away Your AWS Root Credentials

It is extremely important to understand the basics of security on AWS, especially if you are the owner of the account or the company. AWS root credentials should ONLY be used to create the initial users with admin privileges, who take over from there. The root password should be stored away safely, and the root user’s programmatic keys (Access Key ID and Secret Access Key) should be disabled if already created.
Somebody getting access to your admin keys would be very bad, especially if they have malicious intentions (disgruntled employee, rival company, etc.), but getting your root credentials would be even worse. If a hack like this happens, your root user is the one you would use to recover, whether to disable all other affected users, or contact AWS for help. So, one of the things you should definitely consider is protecting your account with multi-factor authentication (MFA), preferably a hardware version.
The advice to protect your credentials sometimes sounds like a broken record, but many don’t understand the actual severity of this, and companies have gone out of business because of this oversight.
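As a quick sanity check, the IAM account summary reports whether the root user has an MFA device attached. A minimal boto3 sketch, assuming credentials allowed to call iam:GetAccountSummary:

import boto3

iam = boto3.client('iam')

# AccountMFAEnabled is 1 when the root user has an MFA device configured
summary = iam.get_account_summary()['SummaryMap']
if not summary.get('AccountMFAEnabled'):
    print('WARNING: root account has no MFA device configured')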

6. Define your RTO and RPO

Recovery Time Objective (RTO) is the maximum time allowed to restore a process back to service after a disaster event occurs. If you guarantee an RTO of 30 minutes to your clients, it means that if your service goes down at 5 p.m., your recovery process should have everything up and running again within half an hour. RTO is important in determining the disaster recovery strategy. If your RTO is 15 minutes or less, you potentially don’t have time to reprovision your entire infrastructure from scratch. Instead, you would keep some instances up and running in another region, ready to take over.
When looking at recovering data from backups, RTO defines which AWS services can be used as part of disaster recovery. For example, if your RTO is 8 hours, you can use Glacier as backup storage, knowing that you can retrieve the data within 3–5 hours using standard retrieval. If your RTO is 1 hour, you can still opt for Glacier, but expedited retrieval costs more, so you might choose to keep your backups in S3 standard storage instead.
Recovery Point Objective (RPO) defines the acceptable amount of data loss measured in time, prior to a disaster event happening. If your RPO is 2 hours, and your system has gone down at 3 p.m., you must be able to recover all the data up until 1 p.m. The loss of data from 1 p.m. to 3 p.m. is acceptable in this case. RPO determines how often you have to take backups, and in some cases continuous replication of data might be necessary.
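To make the Glacier trade-off concrete, restoring an archived object is an explicit request whose retrieval tier determines both speed and cost. A hedged boto3 sketch with placeholder bucket and key names:

import boto3

s3 = boto3.client('s3')

# 'Standard' retrieval takes roughly 3-5 hours;
# 'Expedited' is much faster but costs more per request
s3.restore_object(
    Bucket='my-backup-bucket',
    Key='backups/db-dump-2018-12-01.gz',
    RestoreRequest={'Days': 1,
                    'GlacierJobParameters': {'Tier': 'Standard'}})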

7. Pick the Correct DR Scenario for Your Use Case

The AWS Disaster Recovery white paper goes to great lengths to describe various aspects of DR on AWS, and does a good job of covering four basic scenarios (Backup and Restore, Pilot Light, Warm Standby and Multi Site) in detail. When creating a plan for DR, it is important to understand your requirements, but also what each scenario can provide for you. Your needs are also closely related to your RTO and RPO, as those determine which options are viable for your use case.
These DR plans can be very cheap (if you rely on simple backups only for example), or very costly (multi-site effectively doubles your cost), so make sure you have considered everything before making the choice.

8. Identify Mission Critical Apps and Data and Design Your DR Strategy Around Them

While all your applications and data might be important to you or your company, not all of them are critical for running a business. In most cases not all apps and data are treated equally, due to the additional cost it would create. Some things have to take priority, both when making a DR plan, and when restoring your environments after a disaster event. An improper prioritization will either cost you money, or simply risk your business continuity.

9. Test your Disaster Recovery

Disaster recovery is more than just a plan to follow in case something goes wrong. It is a solution that has to be reliable, so make sure it is up to the task. Test your entire DR process thoroughly and regularly. If there are any issues, or room for improvement, give it the highest possible priority. Also, don’t forget to focus on your technical people as well: they too need to be up to the task. Have procedures in place to familiarize them with every piece of the DR process.

10. Consider Utilizing 3rd-party DR Tools

AWS provides a lot of services, and while many companies won’t ever use the majority of them, for most use cases you are being provided with options. But having options doesn’t mean that you have to solely rely on AWS. Instead, you can consider using some 3rd-party tools available in AWS Marketplace, whether for disaster recovery or something else entirely. N2WS Backup & Recovery is the top-rated backup and DR solution for AWS that creates efficient backups and meets aggressive recovery point and recovery time objectives with lower TCO. Starting with the latest version, N2WS Backup & Recovery offers the ability to move snapshots to S3 and store them in a Veeam repository format. This new feature enables organizations to achieve significant cost-savings and a more flexible approach toward data storage and retention. Learn more about this here.

Summary

Disaster recovery planning should be taken very seriously; nonetheless, many companies don’t invest enough time and effort to properly protect themselves, leaving their data vulnerable. And while people will often learn from their mistakes, it is much better not to make them in the first place. Make disaster recovery planning a priority and consider the tips we have covered here, but also do further research.
Try N2WS Backup & Recovery 2.4 for FREE!

Read Also

Original Article: https://n2ws.com/blog/aws-cloud/how-to-create-a-disaster-recovery-plan-for-aws

DR

Chaining plans in Veeam Availability Orchestrator

Chaining plans in Veeam Availability Orchestrator

Notes from MWhite  /  Michael White


So you have installed VAO, tested your plans, and you are ready for a crisis. But you want to know how you can trigger one plan and have them all go, right? I can help with that.
The overview: you create plans, test them, and when you are happy you chain them. But you need to make sure the order of recovery is what it needs to be. For example, databases should be recovered before the applications that need them. In a crisis it is best to have email working first so management can reassure the market, customers and employees.
In another article I am working on, I will show you a method of recovering multi-tier applications fast, rather than the slow method – which I suspect will be the default for many as it is obvious, and yet there is a much better way to do it.

Starting Point

In my screenshot below you can see my plans.  I want to recover EMail Protection first, followed by SQL, and finally View.  View requires SQL, and I need email recovered first so I can tell my customers, and employees that we are OK.

And yes, my EMail Protection plan is not verified; I am having trouble with it, so it needs more time. But we can still chain plans. When you do this yourself, make sure all your plans are verified.
The next step is to right-click on the EMail Protection plan and select Schedule. You can also do that under the Launch menu.

Yes, we are not actually going to schedule a failover, but this is the path we must take to chain these plans together, so that we trigger the failover on the first plan and they all go one after another.
We need to authenticate – we don’t want just anyone doing this, after all. Next we see a schedule screen, so you need to schedule a failover – in the future. Not to worry, we will not actually run it.

You choose Next, and Finish.
We now right click on the next plan and select Schedule.

This time, we enable the checkbox, but rather than select a date we will choose a plan to fail over after. So select the Choose a Plan button. In our case, we are going to have this second plan fire after the first one.

Now we click Next, and Finish.
Now we right-click on the last plan and select Schedule. Then we enable the checkbox and select the previous plan.


Then you do the same old Next, and Finish. Now when we look at our plans, we see a little of what we have done – but we are not complete yet.

You can see in the Failover Scheduled column how the email plan – first to be recovered – occurs on a particular date, but then we have View execute after SQL, and SQL recover after Email.
Now, we must right-click on EMail and select Schedule, and disable the scheduled failover.

As you see above, remove the checkbox, then do the typical Next and Finish.

Now we see no scheduled failover for the EMail protection plan, but we still see the other two plans with Run after so if I execute a failover for EMail the other two will execute in the order we set.
So things are good. Now one plan, when triggered, will cause the next to execute. If you trigger EMail, it will trigger SQL, which will then trigger View.

Things to be aware of

  • The plans need to be enabled for this to work.
  • Test failovers are not impacted by this order of execution.
  • When you do a failover with any of these plans, you will be prompted to execute the other plans in the appropriate order. This will be enabled by default but you can disable it.

Once you have your plans chained, you should arrange a day and time to do a real failover to your DR site. It is the best way to confirm things are working. BTW, all of my VAO technical articles can be found with this link.
Michael
=== END ===

Original Article: https://notesfrommwhite.net/2018/12/02/chaining-plans-in-veeam-availability-orchestrator/

DR

Top Veeam Forum Posts and Content

TOP CONTENT
HELP!.. Error connecting veeam 9.5 with esxi 5.5   [VIEWS: 266 • REPLIES: 16]
Hi, I’m new with Veeam Backup and I have the following problem: when I try to connect Veeam Backup 9.5 with the vCenter ESXi 5.5 server, I get a proxy error (407). Error on the proxy server (407): proxy authentication is required. How can I solve it to connect with my ESXi? more

confused about which ReFS version to use   [VIEWS: 153 • REPLIES: 3]
There are 2 Veeam Knowledge Base articles about ReFS that state which versions are good to use with ReFS. This one is the latest, from 2018-12-26: https://www.veeam.com/kb2854
This says that the good version is like this: more

V2V support   [VIEWS: 140 • REPLIES: 6]
Hi. Does Veeam v9.5 U4 support V2V? Hello all, I would like to edit a backup copy job’s schedule via PowerShell. I made a script as follows with reference to other forums. more

Backup to a Mapped Drive?   [VIEWS: 94 • REPLIES: 3]
I have a NAS set up as a mapped drive on Windows 10. I cannot correctly configure my backup to a shared folder on my NAS. What could I be doing wrong? Veeam continues to tell me it Failed to get disk free space. more

Veeam Availability Console 2.0 Update 1 – New patch release

Veeam Availability Console 2.0 Update 1 – New patch release
Recently a new patch was released for Veeam Availability Console 2.0 Update 1. This new patch resolves several issues surrounding core VAC server functionality, discovery rules, monitoring, alarms and more. Read this blog post from Anthony Spiteri, global technologist of product strategy at Veeam, to learn about these issues and how they will be resolved.

Recorded Webinar NEW Veeam Availability Suite 9.5 Update 4

Recorded Webinar: NEW Veeam Availability Suite 9.5 Update 4
Get an exclusive look at the NEW Veeam® Availability Suite™ 9.5 Update 4 and learn how it opens the opportunity for you to close more deals with Veeam!
What’s new in Veeam Availability Suite 9.5 Update 4:

  • Veeam Cloud Tier – delivers up to 10 times the savings on long-term data retention costs using native object storage integration
  • NEW Veeam Cloud Mobility – provides easy portability and recovery of any on‑premises or cloud‑based workloads to AWS, Azure and Azure Stack
  • Veeam DataLabs™ – increases security and data governance, including GDPR compliance and malware prevention
  • And more!

Veeam Co-Founder Ratmir Timashev On Strategic Growth, And Co-CEO Peter McKay’s Departure – CRN

Veeam Co-Founder Ratmir Timashev On Strategic Growth, And Co-CEO Peter McKay’s Departure – CRN

veeam – Google News

Changing Of The Guard
Veeam co-founder Ratmir Timashev (pictured) expects the data availability software company to outpace the market for years to come. A 100 percent channel sales strategy, several powerful alliances with top hardware vendors including Hewlett Packard Enterprise, Cisco Systems and NetApp, and a market hungry for cloud solutions are all working in the company’s favor, he said.
The Baar, Switzerland, company’s co-CEO, Peter McKay, said Tuesday that he was leaving the company, kicking off a broader executive restructuring. Timashev said McKay made the decision to leave the company after accelerating its enterprise channel sales strategy, as well as its strategic alliance strategy.
In the wake of McKay’s departure, Veeam made Timashev, who has in the past served as CEO, executive vice president of worldwide sales and marketing. Co-founder Andrei Baronov, formerly co-CEO, is now the company’s sole chief executive.
Timashev said Veeam has grown its data protection and replication business in the mid-20 percent range in recent years, and he expects to maintain that pace over the next couple of years. Cloud service providers are the company’s fastest-growing partner segment, he said, and an increasing number of traditional resellers lured by customer demand and huge margins are getting in on that action.
What follows is an edited excerpt of CRN’s conversation with Timashev.

Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNG2qpEm5zNhzrInBIBbbd–JUT9Xw&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=hh_bW7gXgfeoAs65pKAG&url=https://www.crn.com/slide-shows/data-center/veeam-co-founder-ratmir-timashev-on-strategic-growth-and-co-ceo-peter-mckay-s-departure

DR

Prepare your vLab for your VAO DR planning

Prepare your vLab for your VAO DR planning

CloudOasis  /  HalYaman


A good disaster recovery plan needs careful preparation. But after your careful planning, how do you validate your DR strategy without disrupting your daily business operations? Veeam Availability Orchestrator just might be your solution.
This blog post focuses on one aspect of the VBR and VAO configuration; to learn more about Veeam Availability Orchestrator, Veeam Replication, and Veeam Virtual Labs, continue reading.
Veeam Availability Orchestrator is a very powerful tool to help you implement and document your DR strategy. However, this product relies on Veeam Backup & Replication to:

  • perform replication, and
  • provide the Virtual Lab.

Therefore, to successfully configure the Veeam Availability Orchestrator, you must master Replication and the Virtual Lab. Don’t worry, I will take you through the important steps to successfully configure your Veeam Availability Orchestrator, and to implement your DR Plan.
The best way to get started is to share with you a real-life DR scenario I experienced last week during my VAO POC.

Scenario

A customer wanted to implement Veeam Availability Orchestrator with the following objectives:

  • Replication between the production and DR datacentres across the country,
  • Re-Mapping the Network attached to each VM at the DR site,
  • Re-IP the VM IP address of each VM at the DR site,
  • Scheduling the DR testing to perform every Saturday morning,
  • Document the DR Plan.

As you might already be aware, all those objectives can be achieved using Veeam VBR & VAO.
So let’s get started.

The Environment and the Design

To understand the design and the configuration, let’s first introduce the customer network subnets at the PRODUCTION and DR sites.
PRODUCTION Site

  • At the PRODUCTION site, the customer used the 192.168.33.x/24 subnet,
  • Virtual Distribution Switch and Group: SaSYD-Prod.

DR Site

  • Production network at the DR site uses the 192.168.48.x/24 subnet
  • Prod vDS: SP-DSWProd
  • DR Re-IP subnet & Network name: vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch

To accomplish the requirements at those sites, the configuration must include the following steps:

  • Replication between the PRODUCTION and the DR Sites
  • Re-Mapping the VM Network from ProdNet to vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch
  • vLab with that configuration listed above.

The following diagram shows the design we are going to discuss in this blog:

Replication Job Configuration

To prepare for a disaster and to implement failover, we must create a replication job that will replicate the intended VMs from the PRODUCTION site to the DR site. In this scenario, to achieve the requirements above, we must use Veeam replication with Network Mapping and Re-IP options when configuring the Replication Job. To do this, we have to tick the checkbox for the Separate virtual networks (enable network mapping), and Different IP addressing scheme (enable re-IP):

At the Network stage, we will specify the source and destination networks:

Note: to follow the diagram, the source network must be: Prod_Net and the Target network will be the DR_ReIP network.
On the Re-IP, enter the original IP address and the new Re-IP address to be used at the DR site:
Next, continue with the replication job configuration as normal.

Virtual Lab Configuration

To use Veeam Availability Orchestrator to check our DR planning, and to make sure our VMs will start on the DR site as expected, we must create a Veeam Virtual Lab to test our configuration. First, let’s create a Veeam vLab, starting with the name of the vLab and the ESXi host at the DR site that will host the Veeam proxy appliance. In the following screenshot, the hostname is spesxidr.veeamsalab.org
Choose a datastore where you will keep the redo log files. After you have selected the datastore, press Next. You must then configure the proxy appliance for the network it will be attached to. In our example, the network is the PRODUCTION network at the DR site, named SP-DSWProd, and it has a static DR site IP address. See below.
Next, we must configure the Networking as Advanced multi-host (manual configuration), and then select the appropriate Distributed virtual switch; in our case, SP-ProdSwitch.

This leads us to the next configuration stage, Isolated Network. At this stage, we must assign the DR network to which each replicated VM will be connected.
Note: This network must be the same as the re-mapped network you selected as a destination network during the replication job configuration. The isolated network name is any name you assign to the temporary network used during the DR plan check.

Next, we must configure the temporary DR network. As shown in the following screenshot, I chose the Omnitra_VAO_vLab network I named in the previous step (the isolated network). The IP address is the same as the IP address of the DR PRODUCTION gateway. Also in the following screenshot, you can see the masquerade network address I can use to get access to each of the VMs from the DR PRODUCTION network:


Finally, let’s create a static Mapping to access the VMs during the DR testing. We will use the Masquerade IPs as shown in the following screenshot.

Conclusion

Veeam Availability Orchestrator is a very powerful tool to help companies streamline and simplify their DR planning. The initial configuration of the VAO DR planning is not complex; but it is a little involved. You must navigate between the two products, Veeam Backup and Replication, and the Veeam Availability Orchestrator.
After you have grasped the DR concept, your next steps to DR planning will be smooth and simple. Also, you may have noted that to configure your DR planning using Veeam Availability Orchestrator, you must be familiar with Veeam vLabs and networking in general. I highly recommend that you read more on Veeam Virtual Labs before starting your DR Plan configuration.
I hope this blog post helps you to get started with vLab and VAO configuration. Stay tuned for my next blog or recording about the complete configuration of the DR Plan. Until then, I hope you find this post informative; please post your comments below if you have questions or suggestions.
The post Prepare your vLab for your VAO DR planning appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2018/11/20/prepare-you-vlab-for-your-vao-dr-planning/

DR

Migration is never fun – Backups are no exception

Migration is never fun – Backups are no exception

Veeam Software Official Blog  /  Rick Vanover


One of the interesting things I’ve seen over the years is people switching backup products. Additionally, it is reasonable to say that the average organization has more than one backup product. At Veeam, we’ve seen this over time as organizations started with our solutions. This was especially the case before Veeam had any solutions for the non-virtualized (physical server and workstation device) space. Especially in the early days of Veeam, effectively 100% of business was displacing other products — or sitting next to them for workloads where Veeam would suit the client’s needs better.
The question of migration is something that should be discussed, as it is not necessarily easy. It reminds me of personal collections of media such as music or movies. For movies, I have VHS tapes, DVDs and DVR recordings, and use them each differently. For music, I have CDs, MP3s and streaming services — used differently again. Backup data is, in a way, similar. This means that the work to change has to be worth the benefit.
There are many reasons people migrate to a new backup product. This can be due to a product being too complicated or error-prone, too costly, or a product discontinued (current example is VMware vSphere Data Protection). Even at Veeam we’ve deprecated products over the years. In my time here at Veeam, I’ve observed that backup products in the industry come, change and go. Further, almost all of Veeam’s most strategic partners have at least one backup product — yet we forge a path built on joint value, strong capabilities and broad platform support.
Migration-VDP
When the migration topic comes up, it is very important to have a clear understanding about what happens if a solution no longer fits the needs of the organization. As stated above, this can be because a product exits the market, drops support for a key platform or simply isn’t meeting expectations. How can the backup data over time be trusted to still meet any requirements that may arise? This is an important forethought that should be raised in any migration scenario. This means that the time to think about what migration from a product would look like should actually come before that solution is ever deployed.
Veeam takes this topic seriously, and the ability to handle this is built into the backup data. My colleagues and I on the Veeam Product Strategy Team have casually referred to Veeam backups being “self-describing data.” This means that you open it up (which can be done easily) and you can clearly see what it is. One way to realize this is the fact that Veeam backup products have an extract utility available. The extract utility is very helpful to recover data from the command line, which is a good use case if an organization is no longer a Veeam client (but we all know that won’t be the case!). Here is a blog by Vanguard Andreas Lesslhumer on this little-known tool.
Why do I bring up the extract utility when it comes to switching backup products? Because it hits on something that I have taken very seriously of late. I call it Absolute Portability. This is a very significant topic in a world where organizations passionately want to avoid lock-in. Take the example I mentioned before of VMware vSphere Data Protection going end-of-life: Veeam Vanguard Andrea Mauro highlights how users can migrate to a new solution, but chances are that will be a different experience. Lock-in can occur in many ways, and organizations want to avoid it. This can be a cloud lock-in, a storage device lock-in, or a services lock-in. Veeam is completely against lock-ins, and arguably so agnostic that it makes it hard to make a specific recommendation sometimes!
I want to underscore the ability to move data — in, out and around — as organizations see fit. For organizations who choose Veeam, there are great capabilities to keep data available.
So, why move? Because expanded capabilities will give organizations what they need.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/EjQm2rSAvsU/backup-product-migration-lock-in.html

DR

The closing speed of transformation

The closing speed of transformation

Veeam Executive Blog – The Availability Lounge  /  Dave Russell

We all know that software and infrastructure don’t typically go away in the data center. You very rarely decommission something and bring in all new gear or stop the application to completely transition over to something else. Everyone’s in some sort of hybrid state. Some think that they’re transitioning, and many hopefully even have a plan for this.
 
Some of these changes are bigger, take longer, and you almost need to try them and experience them to have success in order to proceed. I’ve heard people say, “Well, we’re going to get around to doing X, Y, and Z.” But they are often lacking a sense of urgency to experiment early.
 
A major contributor to this type of procrastination is that changes that appear to be far off, arrive suddenly. The closing speed of transformation is the issue. Seldom do you have the luxury of time; but early on you are seldom compelled to decide. You don’t sense the urgency.
 
It is not in your best interest to say, “We’re just going to try to manage what we’ve got, because we’re really busy. We’ll get to that when we can.” Because then, boom, you’re unprepared for when you really need to actually get going on something.
 
A perfect example is the impact of the cloud on every business and every IT department. The big challenge is that organizations know they should be doing something today but are still struggling with exactly what to do. In terms of a full cloud strategy, it’s oftentimes very disaggregated. And while we’re going on roughly a decade in the cloud-era, as an industry overall, we’re still really in the infancy of having a complete cloud strategy.
 
In December of last year, when I asked people, “What’s your cloud strategy? Do you have one? Are you formulating one?” The answer was unfortunately the same response that they’ve been giving for the last eight years… “Next year.” The problem is that this is next year, and they are still in the same state.
 
When it comes to identifying a cloud strategy, a big challenge for IT departments and CIOs is that it used to be easy to peer into the future because the past was fairly predictable — whether it was the technology in data centers, the transformation that was happening, the upgrade cycle, or the movement from one platform to another. The way you had fundamentally been doing something in the past wasn’t changing with the future. Nor did it require radically different thinking. And it likely did not require a series of conversations across all of IT, and the business as well.
 
But when it comes to the cloud, we’re dealing with a fundamental transformation in the ways that businesses operate when it comes to IT: compute, storage, backup, all of these are impacted.
 
Which means organizations working on a cloud strategy have little, to no historical game plan to refer to. Nothing they can fall back on for a reference. The approach of, “Well this is what we did in the past, so let’s apply that to the future,” no longer applies. Knowing what you have, knowing what your resources are, and knowing what to expect are not typically well understood with regards to a complete cloud transformation. In part, this is because the business is often transforming, or seeking to become more digital, at the same time.
 
With the cloud, you often have limited, or isolated experiences. You have people who are trying to make business decisions that have never been faced with these issues, in this way, before.
 
Moving to absolute virtualization and a hybrid, multi-cloud deployment means that when it comes time to set a strategy you have a series of questions that need to be answered:
 

  • Do you understand what infrastructure resources are going to be required? No.
  • Do you understand what skills are going to be needed from your people? No.
  • Do you know how much budget to allocate with clarity, today, and over time? No.
  • Do you know what technologies are going to impact your business immediately, and in the near-term future? No.

 
Okay, go ahead and make a strategy now based on the information you just gave me, four No answers in a row. That’s pretty scary.
 
On top of this, data center people tend to be very risk averse by design. There’s a paralysis that creeps in. “Well, we’re not sure how we should get started.” And people just stay in pause mode. That’s part of why we see Shadow IT or Rogue IT. Someone says, “Well, I’m going to go out and I’m just going to get my own SaaS-based license for some capability that I’m looking for, because the IT department says they’re investigating it.”
 
Typically, what happens is the IT department is trying to figure that out, trying to get a strategy, investigate the request. But in the meantime, they say, “No.” Now the IT becomes the department of “no” and is not being perceived as being helpful.
 
To address this issue head on, you need to apply an engineering mindset. Meaning, that you learn more about a problem by trying to solve it. In absence of having a great reference base, with something that can easily be compared to, we should at least get going on what is visible to us, and that looks to be achievable in the short term.
 
An excellent example in the software as a service (SaaS) world is Microsoft Office 365. Getting the on-premises IT people to participate in this can still be a challenge. As SaaS solutions become more and more widely implemented, they’re sometimes happening outside of the purview of what goes on in the data center. This can lead to security, pricing, performance and Availability issues.
 
Percolate that up, what’s the effect of that? What does that actually mean? It means that the worst-case scenario is an outcome where the infrastructure operations people are increasingly viewed as less and less strategic going forward, because if you take this out to the extreme, you’ll end up being custodians of legacy implementations and older solutions. All while numerous other projects are being championed, piloted, put into production and ultimately owned by somebody else; perhaps a line of business that is outside of IT.
 
That’s where you see CIOs self-report that they think more than 29% of their IT spending is happening outside of their purview. If you think about that, that’s concerning. You’re the Chief Information Officer (CIO). You should know pretty close to 100% of what’s going on as it relates to IT. If your belief is that approaching a third of IT spending happens elsewhere, outside of your control, and that this outside spending is not really an issue, then what are you centralizing? What are you the Chief of, if this trend continues?
 
The previous way of developing a strategic IT plan worked well in the past when you had an abundance of time. But that is no longer the case. Transformation is happening all around us and inside of each organization. You can’t continue to defer decisions. IT is now a critical part of the business; every organization has become digital and the cloud is touching everything. It is time to step up, work with vendors you trust, and move boldly to the cloud.
 
Please join me on December 5th for VeeamON Virtual, where I discuss these challenges as well as 2019 predictions, Digital Transformation, Hyper-Availability and much more. Go ahead and register for VeeamON Virtual.
 


Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/IG0mZ-tB1tA/speed-of-cloud-transformation.html

DR

QNAP NAS Verified Veeam Ready for Efficient Disaster Recovery – HEXUS

QNAP NAS Verified Veeam Ready for Efficient Disaster Recovery – HEXUS

veeam – Google News

PRESS RELEASE

Taipei, Taiwan, November 19, 2018 – QNAP® Systems, Inc. today announced that multiple Enterprise-class QNAP NAS systems, including the ES1640dc v2 and TES-1885U, have been verified as Veeam® Ready. Veeam Software, the leader in Intelligent Data Management for the Hyper-Available Enterprise™, has granted QNAP NAS systems with the Veeam Ready Repository distinction, verifying these systems achieve the performance levels for efficient backup and recovery with Veeam® Backup & Replication™ for virtual environments built on VMware® vSphere™ and Microsoft® Hyper-V® hypervisors.
“Veeam provides industry-leading Availability solutions for virtual, physical and cloud-based workloads, and verifying performance ensures that organizations can leverage Veeam advanced capabilities with QNAP NAS systems to improve recovery time and point objectives and keep their businesses up and running,” said Jack Yang, Associate Vice President of Enterprise Storage Business Division of QNAP.
Veeam Backup & Replication helps achieve Availability for ALL virtual, physical and cloud-based workloads and provides fast, flexible and reliable backup, recovery and replication of all applications and data. Organizations can now choose among several QNAP systems verified with Veeam for backup and recovery including:
For more information, please visit https://www.veeam.com/ready.html.
About QNAP Systems, Inc.
QNAP Systems, Inc., headquartered in Taipei, Taiwan, provides a comprehensive range of cutting-edge Network-attached Storage (NAS) and video surveillance solutions based on the principles of usability, high security, and flexible scalability. QNAP offers quality NAS products for home and business users, providing solutions for storage, backup/snapshot, virtualization, teamwork, multimedia, and more. QNAP envisions NAS as being more than “simple storage”, and has created many NAS-based innovations to encourage users to host and develop Internet of Things, artificial intelligence, and machine learning solutions on their QNAP NAS.

Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNGr1B0Zhq5QDEVsTvQ-qQhfGevzSA&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=4xIAXKjpHpXnhAGrrL2oBg&url=https://hexus.net/tech/items/network/124439-qnap-nas-verified-veeam-ready-efficient-disaster-recovery/

DR

On-Premises Object Storage for Testing

On-Premises Object Storage for Testing

CloudOasis  /  HalYaman


Many customers and partners want to deploy on-premises object storage for testing and learning purposes. How can you deploy a totally free object storage instance, with no time limit and unrestricted capacity, in your own lab?


In this blog post, I will take you through the deployment and configuration of an on-premises object storage instance for test purposes, so that you might learn more about the new feature.
For this test, I will use a product called Minio. You can download it from this link for free, and it is available for Windows, Linux and macOS.
To get started, download Minio and run the installation.
In my CloudOasis lab, I decided to use a Windows Core server to act as the test platform for my object storage. Read on through the steps below to see the configuration I used:

Deployment

By default, Minio Server installs as an unsecured service. The downside of this is that many applications need a secure HTTPS URL to interact with object storage. To fix this, we download GnuTLS to generate a private and public key pair and secure our object storage connection.

Preparing Certificate

After GnuTLS has been downloaded and extracted to a folder on your drive, for example C:\GnuTLS, you must add that path to Windows with the following command:
setx path "%path%;C:\gnutls\bin"

Private Key

The next step is to generate the private.key:
certtool.exe --generate-privkey --outfile private.key
After the private.key has been generated, create a new file called cert.cnf and paste the following script into it:
# X.509 Certificate options
#
# DN options

# The organization of the subject.
organization = "CloudOasis"

# The organizational unit of the subject.
#unit = "CloudOasis Lab"

# The state of the certificate owner.
state = "NSW"

# The country of the subject. Two letter code.
country = "AU"

# The common name of the certificate owner.
cn = "Hal Yaman"

# In how many days, counting from today, this certificate will expire.
expiration_days = 365

# X.509 v3 extensions

# DNS name(s) of the server
dns_name = "miniosrv.cloudoasis.com.au"

# (Optional) Server IP address
ip_address = "127.0.0.1"

# Whether this certificate will be used for a TLS server
tls_www_server

# Whether this certificate will be used to encrypt data (needed
# in TLS RSA cipher suites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key

Public Key

Now we are ready to generate the public certificate using the following command:
certtool.exe --generate-self-signed --load-privkey private.key --template cert.cnf --outfile public.crt
After you have completed those steps, you must copy the private and the public keys to the following path:
C:\Users\administrator.VEEAMSALAB\.minio\certs\
Note: If you already have your own certificates, all you have to do is copy them to the certs folder shown above.

Run Minio Secure Service

Now we are ready to run a secured Minio Server using the following command. Here, we are assuming your minio.exe has been installed on your C:\ drive:
C:\minio.exe server S:\bucket
Note: S:\bucket is a second volume I created and configured in Minio Server to store the saved objects.
After the minio.exe has run, you will get the following screen:
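To verify the secured service, you can point any S3-compatible client at the new endpoint. Here is a minimal sketch with Python and boto3, assuming Minio’s default port 9000 and the access and secret keys Minio printed at startup (placeholders below); verify=False skips validation of the self-signed certificate:

import boto3

# Point an S3 client at the local Minio endpoint
s3 = boto3.client('s3',
                  endpoint_url='https://miniosrv.cloudoasis.com.au:9000',
                  aws_access_key_id='MINIO_ACCESS_KEY',      # placeholder
                  aws_secret_access_key='MINIO_SECRET_KEY',  # placeholder
                  verify=False)  # self-signed certificate
print(s3.list_buckets()['Buckets'])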

Conclusion

This blog was prepared following several requests from customers and partners who wanted to familiarise themselves with object storage integration during their private beta experience.
As you saw, the steps described here will help you deploy on-premises object storage for your own testing, without cost, time limits, or storage size limits.
The steps to deploy the object storage are very simple, but the outcome is significant, and very helpful in your learning journey.

The post On-Premises Object Storage for Testing appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2018/11/09/on-premises-s3/

DR

Veeam honored with 2018 CRN Tech Innovator Award

Veeam honored with 2018 CRN Tech Innovator Award
CRN, a brand of The Channel Company, recognized Veeam® with a 2018 CRN Tech Innovator Award. Veeam Availability Suite™ 9.5 took top honors in the Data Management category!
These annual awards honor standout hardware, software and services that are moving the IT industry forward. In compiling the 2018 Tech Innovator Award list, CRN editors evaluated 300 products across 34 technology categories using several criteria, including technological advancements, uniqueness of features, and potential to help solution providers solve end users’ IT challenges.

UPDATED: Cisco + Veeam Digital Hub
Cisco and Veeam offer unmatched price and performance along with hyperconverged Availability. Have you seen the latest updates we’ve made to the Cisco + Veeam Digital Hub? Subscribe now to get easy and unlimited access to Cisco + Veeam reports, best practices, webinars and much more.

VeeamON Virtual 2018 – December 4

VeeamON Virtual 2018
04 December, Tuesday
VeeamON Virtual is a unique online conference designed to deliver the latest insights on Intelligent Data Management — all from the comfort of your own office.
Every year, VeeamON Virtual brings together more than 2,500 industry experts to showcase the latest technology solutions providing the Hyper-Availability of data.
Join us on our virtual journey to explore:
  • The challenges of data growth and inevitable data sprawl
  • Threats to data Availability and protection
  • The steps you can take to achieve Intelligent Data Management
  • And more!
Enter for a chance to win a Virtual Reality Kit and gain access to the Veeam® Availability Library!

On-prem or in-cloud? This platform provides holistic security in-cloud, on-prem and everywhere in between – SiliconANGLE News

On-prem or in-cloud? This platform provides holistic security in-cloud, on-prem and everywhere in between – SiliconANGLE News

veeam – Google News

Three years ago the big question was: “On-prem or in-cloud?” Today’s answer is both, as the majority of companies adopt hybrid solutions. However, the question turns to the complexity of securing data that is dispersed over multiple platforms. Taking steps to solve this issue is N2W Software Inc.’s Backup & Recovery version 2.4, announced during AWS re:Invent 2018 this week in Las Vegas, Nevada.
“You have a single platform that can manage data protection both on- and off-premises, so that you can leverage the best place for each workload and protect it no matter where it chooses to live,” said Danny Allan (pictured, left), vice president of product strategy at Veeam Software Inc., which acquired N2W Software this past January.
Allan and Andy Langsam (pictured, right), chief operating officer at N2W Software, spoke with John Walls (@JohnWalls21), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, and guest host Justin Warren (@jpwarren), chief analyst at PivotNine Pty Ltd, during AWS re:Invent in Las Vegas. They discussed the release of N2WS version 2.4 and the changing face of the back-up industry as it expands from security into predictive analysis. (* Disclosure below.)

Slashing storage costs

N2WS version 2.4 introduces snapshot decoupling into the AWS S3 repository, allowing customers to move backup EC2 snapshots into the much cheaper S3 storage, according to Langsam. Real customer savings are estimated to be 40 to 50 percent a month, he added.
“Anytime you can talk about cost reduction from five cents a gig on EC2 storage to two cents on S3, it’s a tremendous savings for our customer base,” Langsam said.
A recent survey conducted by N2WS showed that more than half of the company’s customers spend $10,000 a month or more on AWS storage costs. “If they can save 40 percent on that, that’s real, real savings,” Langsam stated. “[It’s] more than the cost of the software alone.”
N2WS has reported 189-percent growth in revenue since its January 2018 acquisition by Veeam, according to Allan. “We’ve got customers recently like Notre Dame and Cardinal Health, and then we have people getting into the cloud for the very first time,” he said.
Allan attributes this growth to the financial stability provided by having a parent company. “Being acquired has allowed us to focus on the customer and innovation versus going out and raising money from investors,” he stated.
Security concerns have catapulted back-up services into the spotlight, but their capabilities are starting to expand outside of just protection and recovery. “We’re moving away from just being reactive to business need to being proactive in driving the business forward,” Allan said.
Applying new technologies available through advances in machine learning and artificial intelligence allows data protection companies to offer suggestions to their clients that will increase productivity or reduce costs. “We [can] leverage a lot of the algorithms that are existing in clouds like AWS to help analyze the data and make decisions that [our clients] don’t even know that they need to make,” Allan said. “That decision could be, you need to run this analysis at 2:00 a.m. in the morning because the instances are cheaper.”
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of AWS re:Invent. (* Disclosure: Veeam Software Inc. sponsored this segment of theCUBE. Neither Veeam nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)



Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNHfBiD2bON-7L0l4XY3tdnBfY63Kg&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=1UgAXJCDGYnihAGZ5qHwDA&url=https://siliconangle.com/2018/11/28/prem-cloud-platform-provides-holistic-security-cloud-prem-everywhere-reinvent/

DR

Veeam and N2WS: Accelerating Growth and Innovation

Veeam and N2WS: Accelerating Growth and Innovation

Veeam Executive Blog – The Availability Lounge  /  Yesica Schaaf

Who can forget that wonderful line from Forrest Gump in which Tom Hanks’ character talks about the beginning of his friendship with Jenny and how, since that day, they were like “Peas and Carrots”? That’s how we like to think of Veeam and N2WS.
 
Veeam, a company born on virtualization, is the clear leader in Intelligent Data Management with significant investments and growth in the public cloud space. As part of our strategy to deliver the most robust data management and protection across any environment, any cloud, we’re always looking at ways to innovate and expand our offerings.  And last year, we acquired N2WS, a leading provider of cloud-native backup and disaster recovery for Amazon Web Services (AWS); since the acquisition, Veeam and N2WS have accelerated N2WS’ revenue growth by 186% year over year, including 179% growth of installed trials of N2WS’ product, N2WS Backup & Recovery.
 
Today, N2WS announces version 2.4 of N2WS Backup & Recovery, which enables businesses to reduce the overall cost of protecting cloud data by extending Amazon EC2 snapshots to Amazon S3 for long-term data retention — reducing backup storage costs by up to 40%.
 
This latest release builds on the previous announcement where N2WS and Veeam launched new innovations to help businesses automate backup and recovery operations for Amazon DynamoDB, a cloud database service used by more than 100,000 AWS customers for mobile, web, gaming, ad tech, IoT, and many other applications that need low-latency data access. By extending N2WS Backup & Recovery to Amazon DynamoDB, businesses are now able to ensure continuous Availability and protect against accidental or malicious deletion of critical data in Amazon DynamoDB.
 
While the cloud delivers significant business benefits, based on the AWS shared responsibility model, businesses must still take direct action to guard data and enable business continuity in the event of an outage or disaster. We are now seeing unprecedented numbers of new customers turn to N2WS and Veeam to ensure their AWS cloud data is secure, including the University of Notre Dame, Cardinal Health and more.
 
As Veeam and N2WS continue to invest in the public cloud space, new innovations specifically designed for AWS environments will be available in early 2019, innovations that will deliver major advancements to customers across the globe. I’d love to share this with you now, but you’ll have to wait for Q1 2019 for the details.
 
But if you are at AWS re:Invent this week, please join our session “A Deeper Dive on How Veeam is Evolving Availability on AWS” on Wednesday, November 28 or visit us at booth #1011 throughout the week to hear more about our innovative approach to the public cloud, including where we are headed with archiving, mobility and more.
 
With Veeam’s momentum and growth across AWS data protection solutions, the company is well positioned as the market leader in Intelligent Data Management with an innovative focus on the public cloud. Ultimately, from being the peas and carrots and better together, we want to be the meat and potatoes for all our new joint customers where we are an integral part of their business and IT operations.


Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/Fiwmc_ygDWA/new-version-n2ws-backup-recovery.html

DR

Stateful containers in production, is this a thing?

Stateful containers in production, is this a thing?

Veeam Software Official Blog  /  David Hill


As the new-world debate of containers vs. virtual machines continues, there is also a debate raging about stateful vs. stateless containers. Is this really a thing? Is this really happening in production environments? Do we really need to back up containers, or can we just back up the data sets they access? Containers are not meant to be stateful, are they? This debate rages daily on Twitter, Reddit and in pretty much every conversation I have with customers.
Now the debate typically starts with the question: why run a stateful container? To answer that question, we first need to understand the difference between a stateful and a stateless container, and the purpose behind each.

What is a container?

“Containers enable abstraction of resources at the operating system level, enabling multiple applications to share binaries while remaining isolated from each other” *Quote from Actual Tech Media
A container is an application and dependencies bundled together that can be deployed as an image on a container host. This allows the deployment of the application to be quick and easy, without the need to worry about the underlying operating system. The diagram below helps explain this:
stateful containers
When you look at the diagram above, you can see that each application is deployed with its own libraries.

What about the application state?

When we think about any application in general, they all have persistent data and they all have application state data. It doesn’t matter what the application is, it has to store data somewhere, otherwise what would be the point of the application? Take a CRM application: all that customer data needs to be kept somewhere. Traditionally these applications use database servers to store all the information. Nothing has changed in that regard. But when we think about the application state, this is where the discussion about stateful containers comes in. Typically, an application has five state types:

  1. Connection
  2. Session
  3. Configuration
  4. Cluster
  5. Persistent

For the purposes of this blog, we won’t go into depth on each of these states, but for applications being written today, native to containers, these states are all offloaded to a database somewhere. The challenge comes when existing applications have been containerized. This is the process of taking a traditional application that is installed on top of an OS and turning it into a containerized application so that it can be deployed in the model shown earlier. These applications save their state locally somewhere; where exactly depends on the application and the developer. Also, a more common approach is running databases as containers, and as a consequence these exhibit many of the state types listed above.

Stateful containers

A stateful container’s data is typically either written to persistent storage or kept in memory, and this is where the challenges come in. Being able to recover applications in the event of an infrastructure failure is important. If all of the state lives in databases then, as mentioned earlier, recovery is straightforward, but if it doesn’t, how do you orchestrate the recovery of these applications without interrupting users? If you are running load-balanced applications and have to restore one of them, and it doesn’t know the connection or session state, the end user is going to face issues.

If we look at the diagram, we can see that “App 1” has been deployed twice across different hosts, with multiple users accessing these applications through a load balancer. If the “App 1” instance on the right crashes and is then restarted without any application state awareness, User 2 will not simply reconnect to that application: it won’t recognize the connection and will more than likely ask the user to re-authenticate. That is really frustrating for the user, and terrible for the company providing the service. Of course this can be mitigated with different types of load balancers and other software (a simple session-affinity sketch follows below), but the challenge is real. This is the challenge for stateful containers: it’s not just about backing up data in the event of data corruption, it’s how to recover and operate a continuous service.
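For illustration, the crudest of those load-balancer mitigations is session affinity: always send the same client to the same replica. A toy Python sketch, with made-up backend names and the client IP assumed to be a usable affinity key:

    import hashlib

    BACKENDS = ["app1-host-a:8080", "app1-host-b:8080"]  # two copies of "App 1"

    def pick_backend(client_ip):
        """Route the same client to the same replica every time, so any
        in-memory connection/session state is found again on the next request."""
        digest = hashlib.md5(client_ip.encode()).hexdigest()
        return BACKENDS[int(digest, 16) % len(BACKENDS)]

    print(pick_backend("203.0.113.7"))    # always the same backend for this IP
    print(pick_backend("198.51.100.42"))

Affinity only papers over the problem, though: if the chosen replica crashes, its in-memory state is gone, which is precisely the recovery gap described above.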

Stateless containers

Now with stateless containers it’s extremely easy. Taking the diagram above, the session data would be stored in a database somewhere. In the event of a failure, the application is simply redeployed and picks up where it left off. That is exactly how containers were designed to work.

So, are stateful containers really happening?

When we think of containerized applications, we typically picture the new-age, cloud-native, born-in-the-cloud, serverless [insert latest buzzword here] application. But when we dive deeper and look at the simplicity containers bring, we can understand why businesses leverage them to reduce the complex infrastructure required to run their applications. It makes sense, then, that plenty of existing applications that require consistent state data are appearing in production everywhere.
Understanding how to orchestrate the recovery of stateful containers is what needs the focus, not the question of whether they are happening.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/n1e1aGaEEoU/stateful-containers-in-production-is-this-a-thing.html

DR

Veeam / Cisco Events for December

12/2/18 Partner Exchange — Richmond, VA
12/5/18 Cisco Connect — Denver, CO
12/12/18 Cisco Connect — Virginia Beach, VA
12/12/18 Cisco Connect — Los Angeles, CA
12/13/18 Cisco Connect — Columbus, OH
12/13/18 Partner Exchange — Pittsburgh, PA

Regional Health Centre gets healthy with Veeam and Cisco

Regional Health Centre gets healthy with Veeam and Cisco

Canadian-based Peterborough Regional Health Centre’s relatively small IT team was not happy with the performance, recoverability and manageability of their Veritas solution. Veeam and Cisco were able to add data protection onto a VDI project, which greatly simplified and improved their data Availability. The ability to manage and protect Microsoft Office 365 also added incremental value to the deal.

VeeamON ’18 Virtual Conference

VeeamON Virtual Conference

VeeamON Virtual is a unique online conference designed to deliver the latest insights on Intelligent Data Management… to the comfort of your own office. Annually, VeeamON Virtual brings together more than 2,500 industry experts to showcase the latest technology solutions providing the Hyper-Availability of data. Join us on our virtual journey to explore the challenges of data growth and inevitable data sprawl, and the threat they pose to data Availability and protection.

REGISTER NOW

Enhanced self-service restore in Veeam Backup for Microsoft Office 365 v2

Enhanced self-service restore in Veeam Backup for Microsoft Office 365 v2

In Veeam Backup for Microsoft Office 365 1.5, you could only restore the most recently backed up recovery point, which limited the feature’s usefulness for most administrators. That has changed in Veeam Backup for Microsoft Office 365 v2, which lets you choose a point in time from the Veeam Explorers™. Read this blog from Anthony Spiteri, Global Technologist, Product Strategy, to learn more.

LEARN MORE

More tips and tricks for a smooth Veeam Availability Orchestrator deployment

More tips and tricks for a smooth Veeam Availability Orchestrator deployment

Veeam Software Official Blog  /  Melissa Palmer

Welcome to even more tips and tricks for a smooth Veeam Availability Orchestrator deployment. In the first part of our series, we covered the following topics:

  • Plan first, install next
  • Pick the right application to protect to get a feel for the product
  • Decide on your categorization strategy, such as using VMware vSphere Tags, and implement it
  • Start with a fresh virtual machine

Configure the DR site first

After you have installed Veeam Availability Orchestrator, the first site you configure will be your DR site. If you are also deploying production sites, it is important to note that you cannot change a site’s personality after the initial configuration. This is why it is so important to plan before you install, as we discussed in the first article in this series.

As you are configuring your Veeam Availability Orchestrator site, you will see an option for installing the Veeam Availability Orchestrator Agent on a Veeam Backup & Replication server. Remember, you have two options here:

  1. Use the embedded Veeam Backup & Replication server that is installed with Veeam Availability Orchestrator
  2. Push the Veeam Availability Orchestrator Agent to existing Veeam Backup & Replication servers

If you change your mind and do in fact want to use an existing Veeam Backup & Replication server, it is very easy to install the agent after initial configuration. In the Veeam Availability Orchestrator configuration screen, simply click VAO Agents, then Install. You will just need to know the name of the Veeam Backup & Replication server you would like to add and have the proper credentials.

Ensure replication jobs are configured

No matter which Veeam Backup & Replication server you choose to use for Veeam Availability Orchestrator, it is important to ensure your replication jobs are configured in Veeam Backup & Replication before you get too far in configuring your Veeam Availability Orchestrator environment. After all, Veeam Availability Orchestrator cannot fail replicas over if they are not there!

If for some reason you forget this step, do not worry. Veeam Availability Orchestrator will let you know when a Readiness Check is run on a Failover Plan. As the last step in creating a Failover Plan, Veeam Availability Orchestrator will run a Readiness Check unless you specifically un-check this option.

If you did forget to set up your replication jobs, Veeam Availability Orchestrator will let you know: the Readiness Check will fail, and you will not see green checkmarks in the VM section of the Readiness Check Report.

For a much more in-depth overview of the relationship between Veeam Backup & Replication and Veeam Availability Orchestrator, be sure to read the white paper Technical Overview of Veeam Availability Orchestrator Integration with Veeam Backup & Replication.

Do not forget to configure Veeam DataLabs

Before you can run a Virtual Lab Test on your new Failover Plan (you can find a step-by-step guide to configuring your first Failover Plan here), you must first configure your Veeam DataLab in Veeam Backup & Replication. If you have not worked with Veeam DataLabs before (previously known as Veeam Virtual Labs), be sure to read the white paper I mentioned above, as configuration of your first Veeam DataLab is also covered there.

After you have configured your Veeam DataLab in Veeam Backup & Replication, you will be able to run Virtual Lab Tests on your Failover Plan, as well as schedule Veeam DataLabs to run whenever you would like. Scheduled Veeam DataLabs are ideal for providing an isolated copy of the production environment for application testing, and can help you make better use of otherwise idle DR resources.

Veeam DataLabs can be run on demand or scheduled from the Virtual Labs screen. When running or scheduling a lab, you can also select the duration of time you would like the lab to run for, which can be handy when scheduling Veeam DataLab resources for use by multiple teams.

There you have it, even more tips and tricks to help you get Veeam Availability Orchestrator up and running quickly and easily. Remember, a free 30-day trial of Veeam Availability Orchestrator is available, so be sure to download it today!

The post More tips and tricks for a smooth Veeam Availability Orchestrator deployment appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/l-_Z3dXBpcE/availability-orchestrator-deployment-tips-tricks.html

DR

Veeam and NetApp double-team hybrid data run wild – SiliconANGLE

Veeam and NetApp double-team hybrid data run wild – SiliconANGLE

veeam – Google News

Mission-critical apps on premises, machine-learning apps on Google Cloud Platform, serverless apps on Amazon Web Services cloud — phew, it’s getting hectic in modern enterprise information technology. The number of clouds isn’t going to shrink anytime soon, so it would help if things like data backup and data management fused to keep the number of additional things to mess with manageable.
NetApp Inc. and Veeam Software Inc. have partnered to integrate their technologies so fans of both will have fewer things to fiddle with. NetApp for storage — and more recently, data management — and Veeam for backup go together like chocolate and mint creme. They cover a number of crucial steps along the path of that most valuable asset in digital business, data.
“But what makes it simple is when it is comprehensive and integrated,” said Bharat Badrinath (pictured, right), vice president of products and solutions marketing at NetApp. “When the two companies’ engineering teams work together to drive that integration, that results in simplicity.”
Badrinath and Ken Ringdahl (pictured, left), vice president of global alliance architecture at Veeam, spoke with Lisa Martin (@LuccaZara) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during this week’s NetApp Insight event in Las Vegas. They discussed how the companies’ partnership is reshaping their sales strategies. (* Disclosure below.)

In the back door and through the C-suite

Veeam and NetApp have integrated deeply so users can have the two technologies jointly follow their data around wherever it goes on the long, strange trip through hybrid cloud. Veeam has a reputation as a younger man’s technology that comes in the door through advocates in the IT department. It’s now making a push into larger enterprises, where NetApp has a large, deeply ingrained footprint.
“That’s a big impetus for the partnership, because NetApp has a lot of strength, especially with the ONTAP system in enterprise,” Ringdahl said.
The companies complement each other and fill in each other’s blank spaces. “Veeam is bringing NetApp into more of our commercial deals; NetApp is bringing us into more enterprise deals,” Ringdahl explained. “We can come bottom-up; NetApp can come top-down.”
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the NetApp Insight event. (* Disclosure: TheCUBE is a paid media partner for NetApp Insight. Neither NetApp Inc., the event sponsor, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)


Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNGscbl2YkiS6c9aP4_RprNlFy-siw&clid=c3a7d30bb8a4878e06b80cf16b898331&cid=52780078044958&ei=rT7XW7jDG6ORhgHZkpPwDQ&url=https://siliconangle.com/2018/10/25/veeam-and-netapp-double-team-hybrid-data-run-wild-netappinsight/

DR

How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs

How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs

Veeam Software Official Blog  /  Melissa Palmer


Unfortunately, bad patches are something everyone has experienced at one point or another. Just take the most recent example, the Microsoft Windows October 2018 Update, which impacted both desktop and server versions of Windows. The update resulted in missing files on impacted systems, and it has temporarily been paused while Microsoft investigates.
Because of incidents like this, organizations are often hesitant to adopt patches quickly. This is one of the reasons the WannaCry ransomware was so impactful. Unpatched systems introduce risk into environments, as new exploits for old problems are on the rise. Before patching a system, organizations must first do two things: back up the systems to be patched, and perform patch testing.

A recent, verified Veeam Backup

Before we patch a system, we always want to make sure we have a backup that meets our organization’s Recovery Point Objective (RPO), and that the backup was successful. Luckily, Veeam Backup & Replication makes this easy to schedule, or even run on demand as needed (a rough scripted example follows below).
Beyond the backup itself succeeding, we also want to verify that the backup works correctly. Veeam’s SureBackup technology allows for this by booting the VM in an isolated environment and then testing the VM to make sure it is functioning properly. Veeam SureBackup gives organizations additional peace of mind that their backups have not only succeeded, but will be usable.
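As a rough sketch of the on-demand option mentioned above, the following Python snippet drives the Veeam Backup Enterprise Manager REST API with the requests library. The host, credentials and job ID are placeholders, and endpoint details can differ between versions, so treat this as an assumption-laden illustration rather than a recipe:

    # pip install requests
    import requests

    EM = "https://em.example.local:9399"  # placeholder Enterprise Manager host

    # Log in; Enterprise Manager returns a session ID in a response header.
    login = requests.post(
        f"{EM}/api/sessionMngr/?v=latest",
        auth=("DOMAIN\\backupadmin", "password"),  # placeholder credentials
        verify=False,  # lab convenience only; use real certificates in production
    )
    session = {"X-RestSvcSessionId": login.headers["X-RestSvcSessionId"]}

    # Start the backup job for the systems about to be patched.
    job_id = "00000000-0000-0000-0000-000000000000"  # placeholder job GUID
    resp = requests.post(f"{EM}/api/jobs/{job_id}?action=start", headers=session)
    print(resp.status_code)  # 202 means the start request was accepted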

Rapid patch testing with Veeam DataLabs

Veeam DataLabs enable us to test patches rapidly, without impacting production. In fact, we can use that most recent backup we just took of our environment to perform the patch testing. Remember the isolated environment we just talked about with Veeam SureBackup technology? You guessed it, it is powered by Veeam DataLabs.
Veeam DataLabs allows us to spin up complete applications in an isolated environment. This means that we can test patches across a variety of servers with different functions, all without even touching our production environment. Perfect for patch testing, right?
Now, let’s take a look at how the Veeam DataLab technology works.
Veeam DataLabs are configured in Veeam Backup & Replication. Once they are configured, a virtual appliance is created in VMware vSphere to house the virtual machines to be tested. Beyond the virtual machines you plan on testing, you can also include key infrastructure services such as Active Directory, or anything else those virtual machines require to work correctly. This group of supporting VMs is called an Application Group.
[Diagram: components of a Veeam DataLab environment]
In the above diagram, you can see the components that support a Veeam DataLab environment.
Remember, these are just copies from the latest backup; they do not impact the production virtual machines at all. To learn more about Veeam DataLabs, be sure to take a look at this great overview hosted on the Veeam.com blog.
So what happens if we apply a bad patch to a Veeam DataLab environment? Absolutely nothing. At the end of the DataLab session, the VMs are powered off, and the changes made during the session are thrown away. There is no impact to the production virtual machines or the backups leveraged inside the Veeam DataLab. With Veeam DataLabs, patch testing is no longer a big deal, and organizations can proceed with their patching activities with confidence.
This DataLab can then be leveraged for testing, or for running Veeam SureBackup jobs. SureBackup jobs also provide reports upon completion. To learn more about SureBackup jobs, and see how easy they are to configure, be sure to check out the SureBackup information in the Veeam Help Center.

Patch testing to improve confidence

Organizations’ hesitance to apply patches is understandable; however, there can be significant risk if patches are not applied in a timely manner. By leveraging Veeam backups along with Veeam DataLabs, organizations can quickly test as many servers and environments as they would like before installing patches on production systems. The ability to rapidly test patches ensures any potential issue is discovered long before any data loss or negative impact to production occurs.

No VMs? No problem!

What about the other assets in your environment that can be impacted by a bad patch, such as physical servers, desktops, laptops, and full Windows tablets? You can still protect these assets by backing them up with Veeam Agent for Microsoft Windows. These agents can be deployed automatically to your assets from Veeam Backup & Replication. To learn more about Veeam Agents, take a look at the Veeam Agent Getting Started Guide.
To see the power of Veeam Backup & Replication, Veeam DataLabs, and Veeam Agent for Microsoft Windows for yourself, be sure to download the 30-day free trial of Veeam Backup & Replication here.
The post How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/dCjtFr9x-1M/rapid-patch-testing-veeam-backups-veeam-datalabs.html

DR

Native snapshot integration for NetApp HCI and SolidFire

Native snapshot integration for NetApp HCI and SolidFire

Veeam Software Official Blog  /  Adam Bergh


Four years ago, Veeam delivered to the market ground-breaking native snapshot integration with NetApp’s flagship ONTAP storage operating system. In addition to operational simplicity, improved efficiency, reduced risk and increased ROI, the Veeam Hyper-Availability Platform and ONTAP continue to help customers of all sizes accelerate their Digital Transformation initiatives and compete more effectively in the digital economy.
Today I’m pleased to announce that a native storage integration with Element Software, the storage operating system that powers NetApp HCI and SolidFire, is coming to Veeam Backup & Replication 9.5 in the upcoming Update 4.

Key milestones in the Veeam + NetApp Alliance

Veeam continues to deliver deeper integration across the NetApp Data Fabric portfolio to provide our joint customers with the ability to attain the highest levels of application performance, efficiency, agility and Hyper-Availability across hybrid cloud environments. Together with NetApp, we enable organizations to attain the best RPOs and RTOs for all applications and data through native snapshot-based integrations.

How Veeam integration takes NetApp HCI to Hyper-Availability

With Veeam Availability Suite 9.5 Update 3, we released a brand-new framework called the Universal Storage API. This set of APIs allows Veeam to accelerate the adoption of storage-based integrations that decrease impact on the production environment, significantly improve RPOs and deliver operational benefits that would not be attainable without Veeam.
Let’s talk about how the new Veeam integration with NetApp HCI and SolidFire delivers these benefits.

Backup from Element Storage Snapshots

Veeam’s Backup from Storage Snapshots technology is designed to dramatically reduce the performance impact that traditional, API-driven VMware backups place on primary hypervisor infrastructure. Because backup data is read from the Storage Snapshot rather than from the running VM, backup performance improves while the load on production VMware infrastructure drops.

Granular application item recovery from Element Storage Snapshots

If you’re a veteran of enterprise storage systems and VMware, you undoubtedly know the pain of trying to recover individual Windows or Linux files, or application items, from a Storage Snapshot. The good news is that Veeam makes this process fast, easy and painless. With our new integration with Element snapshots, you can quickly recover application items directly from the Storage Snapshot, including:

  • Individual Windows or Linux guest files
  • Exchange items
  • MS SQL databases
  • Oracle databases
  • Microsoft Active Directory items
  • Microsoft SharePoint items

What’s great about this functionality is that it works with a Storage Snapshot created by Veeam and NetApp, and the only requirement is that VMs need to be in the VMDK format.

Hyper-Available VMs with Instant VM Recovery from Element Snapshots

Everyone knows that time is money, and that every second a critical workload is offline your business is losing money, prestige and possibly even customers. What if I told you that you could recover an entire virtual machine, no matter its size, in a very short timeframe? Sound far-fetched? Veeam’s Instant VM Recovery technology, which leverages Element Snapshots for NetApp HCI and SolidFire, makes this a reality.
Not only is this process extremely fast, there is also no performance loss once the VM is recovered, because it is running from your primary production storage system!

[Diagram: Veeam Instant VM Recovery on NetApp HCI]

Element Snapshot orchestration for better RPO

It’s common to see a nightly or twice-daily backup schedule in most organizations. The problem with this strategy is that it leaves your organization with a potential data loss window of 12-24 hours. We call the amount of acceptable data loss your RPO, or recovery point objective. Getting your RPO as low as possible just makes good business sense. With Veeam and Element Snapshot management, we can supplement the off-array backup schedule with more frequent storage array-based snapshots. One common example is taking hourly storage-based snapshots in between nightly off-array Veeam backups. When a restore event happens, you then have hourly snapshots, as well as a Veeam backup, to choose from when executing the recovery operation.
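To put rough numbers on that, here is a small illustrative Python sketch. The schedule (a 01:00 nightly backup plus hourly on-array snapshots) and the failure time are assumptions for the example; it simply measures the gap back to the most recent restore point:

    from datetime import datetime, timedelta

    def latest_restore_point(failure, restore_points):
        """Return the most recent restore point taken at or before the failure."""
        return max(t for t in restore_points if t <= failure)

    day = datetime(2018, 11, 1)
    nightly_backup = [day.replace(hour=1)]                        # 01:00 off-array backup
    hourly_snaps = [day + timedelta(hours=h) for h in range(24)]  # on-array snapshots

    failure = day.replace(hour=14, minute=37)

    # Backup-only schedule: worst-case loss is most of a day.
    print(failure - latest_restore_point(failure, nightly_backup))                 # 13:37:00

    # Backups supplemented with hourly snapshots: loss shrinks to minutes.
    print(failure - latest_restore_point(failure, nightly_backup + hourly_snaps))  # 0:37:00

Adding the hourly snapshots shrinks the worst-case data loss from half a day to under an hour, which is the whole point of supplementing off-array backups with snapshot orchestration.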

Put your Storage Snapshots to work with Veeam DataLabs

Wouldn’t it be great if there were more ways to leverage your investments in Storage Snapshots for additional business value? Enter Veeam DataLabs — the easy way to create copies of your production VMs in a virtual lab protected from the production network by a Veeam network proxy.
The big idea behind this technology is to provide your business with near real-time copies of your production VMs for operations like dev/test, data analytics, proactive DR testing for compliance, troubleshooting, sandbox testing, employee training, penetration testing and much more! Veeam makes the process of test lab rollouts and refreshes easy and automated.

NetApp + Veeam = Better Together

NetApp Storage Technology and Veeam Availability Suite are perfectly matched to create a Hyper-Available data center. Element storage integrations provide fast, efficient backup capabilities, while significantly lowering RPOs and RTOs for your organization.
Find out more on how you can simplify IT, reduce risk, enhance operational efficiencies and increase ROI through NetApp HCI and Veeam.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/0jXxQ6YvSno/native-snapshot-integration-element.html

DR

Veeam Announces Integration with NetApp HCI – StorageReview.com

Veeam Announces Integration with NetApp HCI – StorageReview.com

veeam – Google News

by Michael Rink

Veeam announced that Update 4 of Veeam Backup & Replication, expected later this year, will add native storage integration with Element Software, the storage operating system that powers NetApp HCI and SolidFire. Veeam first joined NetApp’s Alliance program four years ago, in 2014. Since then, the two have been steadily increasing support and integration. Earlier this month, the two companies worked together to allow NetApp to resell Veeam Availability Solutions.

Veeam’s integration with Element snapshots, once complete, will allow IT engineers to quickly recover application items directly from the Storage Snapshot at a very granular level: not just entire databases, but also individual Windows or Linux guest files and Exchange items (emails). Veeam even expects to be able to recover Microsoft SharePoint items and Microsoft Active Directory items directly from the Storage Snapshot, which will be a pretty neat trick. The only requirement is that VMs need to be in the VMDK format.
With the next update, Veeam expects to be able to recover an entire virtual machine, no matter its size, in a very short timeframe by leveraging Element Snapshots for NetApp HCI and SolidFire. Once recovered, the VM will run from the primary production storage system. Additionally, Veeam is offering support for companies that want to leverage their backup snapshots for testing and development. By quickly cloning the most recent backup of your company’s production environment, Veeam lets you reduce the risk of unexpected integration problems. It’s also useful for troubleshooting (especially those production-only problems), sandbox testing, employee training, and penetration testing.

Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNEfUh2GTOKZ0RgNomsXSMt6ayyeKQ&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=Pb7YW7jeEqORhgHZkpPwDQ&url=https://www.storagereview.com/veeam_announces_integration_with_netapp_hci

DR

Considerations in a multi-cloud world

Considerations in a multi-cloud world

Veeam Software Official Blog  /  David Hill


With the infrastructure world in constant flux, more and more businesses are adopting a multi-cloud deployment model. The challenges that follow are becoming more complex and, in some cases, cumbersome. Consider the impact on the data alone. Ten years ago, all anyone worried about was whether the SAN would stay up and, if it didn’t, whether their data would be protected. Fast forward to today, and even a small business can have data scattered across the globe. Maybe they have a few vSphere hosts in an HQ, with branch offices using workloads running in the cloud or Software-as-a-Service-based applications. Maybe backups are stored in an object storage repository (somewhere — but only one guy knows where). This is happening in the smallest of businesses, so as a business grows and scales, the challenges become even more complex.

Potential pitfalls

Now, this blog is not about how Veeam manages data in a multi-cloud world; it’s more about understanding the challenges and the potential pitfalls. Take a look at the diagram below:

Veeam supports a number of public clouds and different platforms. This is a typical scenario in a modern business. Picture the scene: workloads are running on top of a hypervisor like VMware vSphere or Nutanix, with some services running in AWS. The company is leveraging Microsoft Office 365 for its email services (people rarely build Exchange environments anymore) with Active Directory extended into Azure. Throw in some SAP or Oracle workloads, and your data management solution has just gone from “I back up my SAN every night to tape” to “where is my data now, and how do I restore it in the event of a failure?” If worrying about business continuity didn’t keep you awake 10 years ago, it surely does now. This is the impact of modern life. The more agility we provide on the front end for an IT consumer, the more complexity there has to be on the back end.
With the ever-growing complexity, global reach and scale of public clouds, as well as a more hands-off approach from IT admins, this is a real challenge to protect a business, not only from an outage, but from a full-scale business failure.

Managing a multi-cloud environment

When looking to manage a multi-cloud environment, it is important to understand these complexities and how to avoid costly mistakes. The sensible approach to any environment, whether it is running on premises or in the cloud, is to consider all the options. That sounds obvious, but it has not always been the case. Where or how you deploy a workload is becoming irrelevant, but how you protect that workload is as relevant as ever. Think about the public cloud: if you deploy a virtual machine and set the firewall ports to any:any (that would never happen, would it?), you can be pretty sure someone will gain access to that virtual machine at some point. Making sure that workload is protected and recoverable is critical in this instance. The same considerations and requirements apply whether you are running on premises or off premises: how do you protect the data, and how do you recover the data in the event of a failure or security breach?
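As a concrete illustration of catching that kind of any:any exposure, here is a minimal sketch using AWS’s boto3 SDK for Python to flag security group rules that are open to the entire internet. The region is an assumption, and the check covers inbound IPv4 rules only:

    # pip install boto3
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Walk every security group and flag inbound rules open to 0.0.0.0/0.
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    print(
                        f"{group['GroupId']} ({group['GroupName']}) allows "
                        f"protocol {rule.get('IpProtocol')} from anywhere"
                    )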

Why use a cloud platform?

This is something often overlooked, but it has become clear in recent years that organizations do not choose a cloud platform for a single, specific reason like cost savings, higher performance or quicker service times, but rather because the cloud is the right platform for a specific application. Sure, individual benefits may come into play, but you should always question the “why” behind any platform selection.
When you’re looking at data management platforms, consider not only what your environment looks like today, but also what it will look like tomorrow. Does the platform you’re purchasing today have a roadmap for the future? If you can see that the company has a clear vision and understanding of what is happening in the industry, then you can feel safe trusting that platform to manage your data anywhere in the world, on any platform. If a roadmap is not forthcoming, or the vendor just doesn’t get the vision you are sharing about your own environment, perhaps it’s time to look at other vendors. It’s definitely something to think about next time you’re choosing a data management solution or platform.
The post Considerations in a multi-cloud world appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/oACkbQrZbW8/multi-cloud-considerations.html

DR

Veeam Hyper-Availability as a Service for Nutanix AHV – Virtualization Review

Veeam Hyper-Availability as a Service for Nutanix AHV – Virtualization Review

veeam – Google News

Veeam Hyper-Availability as a Service for Nutanix AHV

Date: November 15, 2018 @ 11:00 AM PST / 2:00 PM EST
Speaker: Steve Walker, Director of Sales & Marketing at TBConsulting
Veeam Availability for Nutanix AHV as a Service is purpose-built with the Nutanix user in mind! Join this engaging discussion around the beneficial uses of Veeam in your virtual environment. The powerful web-based UI was specifically designed to look and feel like Prism — Nutanix’s management solution for the Acropolis infrastructure stack — while ensuring a streamlined and familiar user experience, brought to life by TBConsulting.
Minimize data loss with frequent, fast backups across the entire infrastructure:  Veeam Availability for Nutanix AHV as a Service.
Register today!

Duration: 1 Hour


Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNFb2bc6qPpIAnELR0lBIM9AlQuNcg&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=48LZW9itA4TaqgK-soi4DQ&url=https://virtualizationreview.com/webcasts/2018/10/tbconsultnov15-veeam-hyper-availablity-as-a-service.aspx?tc=page0

DR