VeeamON ’18 Virtual Conference

VeeamON Virtual is a unique online conference designed to deliver the latest insights on Intelligent Data Management to the comfort of your own office. Annually, VeeamON Virtual brings together more than 2,500 industry experts to showcase the latest technology solutions that provide Hyper-Availability of data. Join us on our virtual journey to explore the challenges of data growth and inevitable data sprawl, and the threat they pose to data Availability and protection.


Enhanced self-service restore in Veeam Backup for Microsoft Office 365 v2

In Veeam Backup for Microsoft Office 365 1.5, you could only restore the most recently backed up recovery point, which limited the feature's usefulness for many administrators. That’s changed in Veeam Backup for Microsoft Office 365 v2, which lets you choose a point in time directly from the Veeam Explorers™. Read this blog from Anthony Spiteri, Global Technologist, Product Strategy, to learn more.


More tips and tricks for a smooth Veeam Availability Orchestrator deployment

Veeam Software Official Blog  /  Melissa Palmer

Welcome to even more tips and tricks for a smooth Veeam Availability Orchestrator deployment. In the first part of our series, we covered the following topics:

  • Plan first, install next
  • Pick the right application to protect to get a feel for the product
  • Decide on your categorization strategy, such as using VMware vSphere Tags, and implement it
  • Start with a fresh virtual machine

Configure the DR site first

After you have installed Veeam Availability Orchestrator, the first site you configure will be your DR site. If you are also deploying production sites, it is important to note that you cannot change a site’s personality after the initial configuration. This is why it is so important to plan before you install, as we discussed in the first article in this series.

As you are configuring your Veeam Availability Orchestrator site, you will see an option for installing the Veeam Availability Orchestrator Agent on a Veeam Backup & Replication server. Remember, you have two options here:

  1. Use the embedded Veeam Backup & Replication server that is installed with Veeam Availability Orchestrator
  2. Push the Veeam Availability Orchestrator Agent to existing Veeam Backup & Replication servers

If you change your mind and do in fact want to use an existing Veeam Backup & Replication server, it is very easy to install the agent after initial configuration. In the Veeam Availability Orchestrator configuration screen, simply click VAO Agents, then Install. You will just need to know the name of the Veeam Backup & Replication server you would like to add and have the proper credentials.

Ensure replication jobs are configured

No matter which Veeam Backup & Replication server you choose to use for Veeam Availability Orchestrator, it is important to ensure your replication jobs are configured in Veeam Backup & Replication before you get too far in configuring your Veeam Availability Orchestrator environment. After all, Veeam Availability Orchestrator cannot fail replicas over if they are not there!

If for some reason you forget this step, do not worry. Veeam Availability Orchestrator will let you know when a Readiness Check is run on a Failover Plan. As the last step in creating a Failover Plan, Veeam Availability Orchestrator will run a Readiness Check unless you specifically uncheck this option.

If you did forget to set up your replication jobs, Veeam Availability Orchestrator will let you know: your Readiness Check will fail, and you will not see green checkmarks in the VM section of the Readiness Check Report.
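The readiness logic described above boils down to a set-membership check: every VM in the plan must have a replica waiting on the DR side. Here is a minimal sketch of that idea in Python; the function and data structures are illustrative, not the Veeam Availability Orchestrator API:

```python
# Illustrative sketch of what a Readiness Check verifies: every VM in a
# Failover Plan needs an existing replica on the DR site. Names and data
# structures here are hypothetical, not Veeam Availability Orchestrator's API.

def readiness_check(plan_vms, replicas):
    """Return (ready, missing): ready is True only when every VM has a replica."""
    missing = [vm for vm in plan_vms if vm not in replicas]
    return (len(missing) == 0, missing)

plan = ["web-01", "app-01", "db-01"]
replicas = {"web-01", "app-01"}  # db-01 has no replication job yet

ready, missing = readiness_check(plan, replicas)
print(ready, missing)  # False ['db-01']
```

The real product does far more (it also checks credentials, scripts and recovery steps), but a missing replica is the failure mode described above.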

For a much more in-depth overview of the relationship between Veeam Backup & Replication and Veeam Availability Orchestrator, be sure to read the white paper Technical Overview of Veeam Availability Orchestrator Integration with Veeam Backup & Replication.

Do not forget to configure Veeam DataLabs

Before you can run a Virtual Lab Test on your new Failover Plan (you can find a step-by-step guide to configuring your first Failover Plan here), you must first configure your Veeam DataLab in Veeam Backup & Replication. If you have not worked with Veeam DataLabs before (previously known as Veeam Virtual Labs), be sure to read the white paper I mentioned above, as configuration of your first Veeam DataLab is also covered there.

After you have configured your Veeam DataLab in Veeam Backup & Replication, you will then be able to run Virtual Lab Tests on your Failover Plan, as well as schedule Veeam DataLabs to run whenever you would like. Scheduling Veeam DataLabs is ideal for providing an isolated copy of production for application testing, and it can help you make better use of those idle DR resources.

Veeam DataLabs can be run on demand or scheduled from the Virtual Labs screen. When running or scheduling a lab, you can also select the duration of time you would like the lab to run for, which can be handy when scheduling Veeam DataLab resources for use by multiple teams.

There you have it, even more tips and tricks to help you get Veeam Availability Orchestrator up and running quickly and easily. Remember, a free 30-day trial of Veeam Availability Orchestrator is available, so be sure to download it today!



Veeam and NetApp double-team hybrid data run wild – SiliconANGLE

veeam – Google News

Mission-critical apps on-premises, machine-learning apps on Google Cloud Platform, serverless apps on Amazon Web Services cloud — phew, it’s getting hectic in modern enterprise information technology. The number of clouds isn’t going to shrink anytime soon, so it would help if things like data backup and data management would fuse to keep the number of additional things to mess with manageable.
NetApp Inc. and Veeam Software Inc. have partnered to integrate their technologies so fans of both will have fewer things to fiddle with. NetApp for storage — and more recently, data management — and Veeam for backup go together like chocolate and mint creme. They cover a number of crucial steps along the path of that most valuable asset in digital business, data.
“But what makes it simple is when it is comprehensive and integrated,” said Bharat Badrinath (pictured, right), vice president of products and solutions marketing at NetApp. “When the two companies’ engineering teams work together to drive that integration, that results in simplicity.”
Badrinath and Ken Ringdahl (pictured, left), vice president of global alliance architecture at Veeam, spoke with Lisa Martin (@LuccaZara) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during this week’s NetApp Insight event in Las Vegas. They discussed how the companies’ partnership is reshaping their sales strategies. (* Disclosure below.)

In the back door and through the C-suite

Veeam and NetApp have integrated deeply so users can have the two technologies jointly follow their data around wherever it goes on the long, strange trip through hybrid cloud. Veeam has a reputation as a younger man’s technology that comes in the door through advocates in the IT department. It’s now making a push into larger enterprises, where NetApp has a large, deeply ingrained footprint.
“That’s a big impetus for the partnership, because NetApp has a lot of strength, especially with the ONTAP system in enterprise,” Ringdahl said.
The companies complement each other and fill in their blank spaces. “Veeam is bringing NetApp into more of our commercial deals; NetApp is bringing us into more enterprise deals,” Ringdahl explained. “We can come bottom-up; NetApp can come top-down.”
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the NetApp Insight event. (* Disclosure: TheCUBE is a paid media partner for NetApp Insight. Neither NetApp Inc., the event sponsor, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)




How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs

Veeam Software Official Blog  /  Melissa Palmer

Unfortunately, bad patches are something everyone has experienced at one point or another. Just take the most recent example: the Microsoft Windows October 2018 Update, which impacted both desktop and server versions of Windows. This update resulted in missing files on affected systems, and it has temporarily been paused while Microsoft investigates.
Because of incidents like this, organizations are often hesitant to quickly adopt patches. This is one of the reasons the WannaCry ransomware was so impactful. Unpatched systems introduce risk into environments, as new exploits for old problems are on the rise. Before patching a system, organizations must first do two things: back up the systems to be patched, and perform patch testing.

A recent, verified Veeam Backup

Before we patch a system, we always want to make sure we have a backup that matches our organization’s Recovery Point Objective (RPO), and that the backup was successful. Luckily, Veeam Backup & Replication makes this easy to schedule, or even run on demand as needed.
Beyond the backup itself succeeding, we also want to verify the backup works correctly. Veeam’s SureBackup technology allows for this by booting the VM in an isolated environment, then testing the VM to make sure it is functioning properly. Veeam SureBackup gives organizations additional peace of mind that their backups have not only succeeded, but will be usable.
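The verification idea is simple to sketch: boot the restored VM in isolation, run a series of health probes, and declare the backup verified only if every probe passes. Below is a hedged Python sketch; the probe names are hypothetical stand-ins for the kinds of tests (heartbeat, ping, application checks) SureBackup performs:

```python
# Conceptual sketch of recovery verification: a backup counts as verified
# only if every health probe against the isolated VM passes. The probe set
# is hypothetical; it mirrors the kinds of tests SureBackup runs.

def verify_backup(probes):
    """Run each named probe; return (verified, per-probe results)."""
    results = {name: probe() for name, probe in probes.items()}
    return all(results.values()), results

probes = {
    "heartbeat": lambda: True,   # guest tools responding
    "ping": lambda: True,        # network stack is up
    "app_port": lambda: False,   # application port not listening
}

verified, results = verify_backup(probes)
print(verified)  # False: one probe failed, so the backup is not verified
```

A backup that boots but fails an application probe is exactly the case a success-only job status would miss, which is why the verification step matters.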

Rapid patch testing with Veeam DataLabs

Veeam DataLabs enable us to test patches rapidly, without impacting production. In fact, we can use that most recent backup we just took of our environment to perform the patch testing. Remember the isolated environment we just talked about with Veeam SureBackup technology? You guessed it, it is powered by Veeam DataLabs.
Veeam DataLabs allows us to spin up complete applications in an isolated environment. This means that we can test patches across a variety of servers with different functions, all without even touching our production environment. Perfect for patch testing, right?
Now, let’s take a look at how the Veeam DataLab technology works.
Veeam DataLabs are configured in Veeam Backup & Replication. Once they are configured, a virtual appliance is created in VMware vSphere to house the virtual machines to be tested. Beyond those virtual machines, you can also include key infrastructure services, such as Active Directory, or anything else the machines under test require to work correctly. This group of supporting VMs is called an Application Group.
Remember, these are just copies from the latest backup; they do not impact the production virtual machines at all. To learn more about Veeam DataLabs, be sure to take a look at this great overview hosted here on the blog.
So what happens if we apply a bad patch to a Veeam DataLab environment? Absolutely nothing. At the end of the DataLab session, the VMs are powered off, and the changes made during the session are thrown away. There is no impact to the production virtual machines or the backups leveraged inside the Veeam DataLab. With Veeam DataLabs, patch testing is no longer a big deal, and organizations can proceed with their patching activities with confidence.
This DataLab can then be leveraged for testing, or for running Veeam SureBackup jobs. SureBackup jobs also provide reports upon completion. To learn more about SureBackup jobs, and see how easy they are to configure, be sure to check out the SureBackup information in the Veeam Help Center.

Patch testing to improve confidence

Organizations’ hesitance to apply patches is understandable; however, there can be significant risk if patches are not applied in a timely manner. By leveraging Veeam backups along with Veeam DataLabs, organizations can quickly test as many servers and environments as they would like before installing patches on production systems. The ability to rapidly test patches ensures any potential issue is discovered long before any data loss or negative impact to production occurs.

No VMs? No problem!

What about the other assets in your environment that can be impacted by a bad patch, such as physical servers, desktops, laptops, and full Windows tablets? You can still protect these assets by backing them up using Veeam Agent for Microsoft Windows. These agents can be automatically deployed to your assets from Veeam Backup & Replication. To learn more about Veeam Agents, take a look at the Veeam Agent Getting Started Guide.
To see the power of Veeam Backup & Replication, Veeam DataLabs, and Veeam Agent for Microsoft Windows for yourself, be sure to download the 30-day free trial of Veeam Backup & Replication here.


Native snapshot integration for NetApp HCI and SolidFire

Veeam Software Official Blog  /  Adam Bergh

Four years ago, Veeam delivered to the market ground-breaking native snapshot integration with NetApp’s flagship ONTAP storage operating system. In addition to operational simplicity, improved efficiencies, reduced risk and increased ROI, the Veeam Hyper-Availability Platform and ONTAP continue to help customers of all sizes accelerate their Digital Transformation initiatives and compete more effectively in the digital economy.
Today I’m pleased to announce that native storage integration with Element Software, the storage operating system that powers NetApp HCI and SolidFire, is coming to Veeam Backup & Replication 9.5 in the upcoming Update 4.

Key milestones in the Veeam + NetApp Alliance
Veeam continues to deliver deeper integration across the NetApp Data Fabric portfolio to provide our joint customers with the ability to attain the highest levels of application performance, efficiency, agility and Hyper-Availability across hybrid cloud environments. Together with NetApp, we enable organizations to attain the best RPOs and RTOs for all applications and data through native snapshot-based integrations.

How Veeam integration takes NetApp HCI to Hyper-Available

With Veeam Availability Suite 9.5 Update 3, we released a brand-new framework called the “Universal Storage API.” This set of APIs allows Veeam to accelerate the adoption of storage-based integrations to help decrease impact on the production environment, significantly improve RPOs and deliver operational benefits that would not be attainable without Veeam.
Let’s talk about how the new Veeam integration with NetApp HCI and SolidFire delivers these benefits.

Backup from Element Storage Snapshots

Veeam’s Backup from Storage Snapshots technology is designed to dramatically reduce the performance impact typically associated with traditional API-driven VMware backup on primary hypervisor infrastructure. Backup data is read from a storage-side snapshot rather than from the running VM, which improves backup performance while reducing load on production VMware infrastructure.

Granular application item recovery from Element Storage Snapshots

If you’re a veteran of enterprise storage systems and VMware, you undoubtedly know the pain of trying to recover individual Windows or Linux files, or application items, from a Storage Snapshot. The good news is that Veeam makes this process fast, easy and painless. With our new integration with Element snapshots, you can quickly recover application items directly from the Storage Snapshot, including:

  • Individual Windows or Linux guest files
  • Exchange items
  • MS SQL databases
  • Oracle databases
  • Microsoft Active Directory items
  • Microsoft SharePoint items

What’s great about this functionality is that it works with a Storage Snapshot created by Veeam and NetApp, and the only requirement is that VMs need to be in the VMDK format.

Hyper-Available VMs with Instant VM Recovery from Element Snapshots

Everyone knows that time is money, and that every second a critical workload is offline your business is losing money, prestige and possibly even customers. What if I told you that you could recover an entire virtual machine, no matter the size, in a very short timeframe? Sound farfetched? Instant VM Recovery technology from Veeam, which leverages Element Snapshots for NetApp HCI and SolidFire, makes this a reality.
Not only is this process extremely fast, there is no performance loss afterward, because once recovered, the VM is running from your primary production storage system!

Veeam Instant VM Recovery on NetApp HCI

Element Snapshot orchestration for better RPO

It’s common to see a nightly or twice daily backup schedule in most organizations. The problem with this strategy is that it leaves your organization with a large data loss potential of 12-24 hours. We call the amount of acceptable data loss your “RPO” or recovery point objective. Getting your RPO as low as possible just makes good business sense. With Veeam and Element Snapshot management, we can supplement the off-array backup schedule with more frequent storage array-based snapshots. One common example would be taking hourly storage-based snapshots in between nightly off-array Veeam backups. When a restore event happens, you now have hourly snapshots, or a Veeam backup to choose from when executing the recovery operation.
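The RPO arithmetic above is easy to make concrete: worst-case data loss equals the largest gap between consecutive restore points, so interleaving hourly snapshots with a nightly backup drops it from 24 hours to 1. A quick sketch:

```python
# Worst-case RPO is the largest gap between consecutive restore points.
# Supplementing a nightly (24h) backup with hourly array snapshots shrinks
# that gap to the snapshot interval.

def worst_case_rpo_hours(backup_interval_h, snapshot_interval_h=None):
    """Smallest restore-point spacing wins; snapshots may be absent."""
    if snapshot_interval_h is None:
        return backup_interval_h
    return min(backup_interval_h, snapshot_interval_h)

print(worst_case_rpo_hours(24))     # 24 -- nightly backups only
print(worst_case_rpo_hours(24, 1))  # 1  -- nightly backups plus hourly snapshots
```

The snapshots are not a substitute for the off-array backup; they simply add more restore points between them.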

Put your Storage Snapshots to work with Veeam DataLabs

Wouldn’t it be great if there were more ways to leverage your investments in Storage Snapshots for additional business value? Enter Veeam DataLabs — the easy way to create copies of your production VMs in a virtual lab protected from the production network by a Veeam network proxy.
The big idea behind this technology is to provide your business with near real-time copies of your production VMs for operations like dev/test, data analytics, proactive DR testing for compliance, troubleshooting, sandbox testing, employee training, penetration testing and much more! Veeam makes the process of test lab rollouts and refreshes easy and automated.

NetApp + Veeam = Better Together

NetApp Storage Technology and Veeam Availability Suite are perfectly matched to create a Hyper-Available data center. Element storage integrations provide fast, efficient backup capabilities, while significantly lowering RPOs and RTOs for your organization.
Find out more on how you can simplify IT, reduce risk, enhance operational efficiencies and increase ROI through NetApp HCI and Veeam.


Veeam Announces Integration with NetApp HCI –

veeam – Google News

by Michael Rink

Veeam announced that Update 4 of Veeam Backup & Replication, expected later this year, will add native storage integration with Element Software, the storage operating system that powers NetApp HCI and SolidFire. Veeam first joined NetApp’s Alliance program four years ago, in 2014. Since then, the two companies have been steadily increasing support and integration. Earlier this month, they worked together to allow NetApp to resell Veeam Availability Solutions.

Veeam’s integration with Element snapshots, once complete, will allow IT engineers to quickly recover application items directly from the Storage Snapshot at a very granular level: not just entire databases, but also individual Windows or Linux guest files and Exchange items (emails). Veeam even expects to be able to recover Microsoft SharePoint items and Microsoft Active Directory items directly from the Storage Snapshot, which will be a pretty neat trick. The only requirement is that VMs need to be in the VMDK format.
With its next update, Veeam expects to be able to recover an entire virtual machine, no matter the size, in a very short timeframe by leveraging Element Snapshots for NetApp HCI and SolidFire. Once recovered, the VM will be running from the primary production storage system. Additionally, Veeam is offering support for companies that want to leverage their backup snapshots for testing and development. By quickly cloning the most recent backup of your company’s production environment, Veeam lets you reduce the risk of unexpected integration problems. It’s also useful for troubleshooting (especially those production-only problems), sandbox testing, employee training, and penetration testing.


Considerations in a multi-cloud world

Veeam Software Official Blog  /  David Hill

With the infrastructure world in constant flux, more and more businesses are adopting a multi-cloud deployment model. The challenges that come from this are becoming more complex and, in some cases, cumbersome. Consider the impact on the data alone. Ten years ago, all anyone worried about was whether the SAN would stay up and, if it didn’t, whether their data would be protected. Fast forward to today: even a small business can have data scattered across the globe. Maybe it has a few vSphere hosts at an HQ, with branch offices running workloads in the cloud or using Software-as-a-Service-based applications. Maybe backups are stored in an object storage repository (somewhere, but only one person knows where). This is happening in the smallest of businesses, so as a business grows and scales, the challenges become even more complex.

Potential pitfalls

Now, this blog is not about how Veeam manages data in a multi-cloud world; it’s more about understanding the challenges and the potential pitfalls.

Veeam supports a number of public clouds and different platforms. This is a typical scenario in a modern business. Picture the scene: workloads are running on top of a hypervisor like VMware vSphere or Nutanix, with some services running in AWS. The company is leveraging Microsoft Office 365 for its email services (people rarely build Exchange environments anymore) with Active Directory extended into Azure. Throw in some SAP or Oracle workloads, and your data management solution has just gone from “I back up my SAN every night to tape” to “where is my data now, and how do I restore it in the event of a failure?” If worrying about business continuity didn’t keep you awake 10 years ago, it surely does now. This is the impact of modern life. The more agility we provide on the front end for an IT consumer, the more complexity there has to be on the back end.
With the ever-growing complexity, global reach and scale of public clouds, as well as a more hands-off approach from IT admins, this is a real challenge to protect a business, not only from an outage, but from a full-scale business failure.

Managing a multi-cloud environment

When looking to manage a multi-cloud environment, it is important to understand these complexities and how to avoid costly mistakes. The simplest approach to any environment, whether it is running on premises or in the cloud, is to consider all the options. That sounds obvious, but it has not always been the case. Where or how you deploy a workload is becoming irrelevant, but how you protect that workload still matters. Think about the public cloud: if you deploy a virtual machine and set the firewall ports to any:any (that would never happen, would it?), you can be pretty sure someone will gain access to that virtual machine at some point. Making sure that workload is protected and recoverable is critical in this instance. The same considerations and requirements always apply, whether running on premises or off premises: How do you protect the data, and how do you recover the data in the event of a failure or security breach?

Why use a cloud platform?

This is something often overlooked, but it has become clear in recent years that organizations do not choose a cloud platform for single, specific reasons like cost savings, higher performance or quicker service times, but rather because the cloud is the right platform for a specific application. Sure, individual benefits may come into play, but you should always question the “why” behind any platform selection.
When you’re looking at data management platforms, consider not only what your environment looks like today, but also what will it look like tomorrow. Does the platform you’re purchasing today have a roadmap for the future? If you can see that the company has a clear vision and understanding of what is happening in the industry, then you can feel safe trusting that platform to manage your data anywhere in the world, on any platform. If a roadmap is not forthcoming, or they just don’t get the vision you are sharing about your own environment, perhaps it’s time to look at other vendors. It’s definitely something to think about next time you’re choosing a data management solution or platform.


Veeam Hyper-Availability as a Service for Nutanix AHV – Virtualization Review

veeam – Google News

Veeam Hyper-Availability as a Service for Nutanix AHV

Date: November 15, 2018 @ 11:00 AM PST / 2:00 PM EST
Speaker: Steve Walker, Director of Sales & Marketing at TBConsulting
Veeam Availability for Nutanix AHV as a Service is purpose-built with the Nutanix user in mind! Join this engaging discussion around the beneficial uses of Veeam in your virtual environment. The powerful web-based UI was specifically designed to look and feel like Prism — Nutanix’s management solution for the Acropolis infrastructure stack — while ensuring a streamlined and familiar user experience brought to life by TB Consulting.
Minimize data loss with frequent, fast backups across the entire infrastructure:  Veeam Availability for Nutanix AHV as a Service.
Register today!

Duration: 1 Hour



Office 365 Backup now available in the Azure Marketplace!

Veeam Software Official Blog  /  Niels Engelen

When we released Veeam Backup for Microsoft Office 365 in July, we saw a huge adoption rate and many inquiries about running the solution within Azure. It is with great pleasure that we can announce Veeam Backup for Microsoft Office 365 is now available in the Azure Marketplace!

A simple deployment model

Veeam Backup for Microsoft Office 365 within Azure falls under the BYOL (Bring Your Own License) model, which means you only have to buy the number of licenses needed, in addition to the Azure infrastructure costs.
The deployment is easy. Just define your project and instance details, combined with an administrator login, and you’re good to go. You will notice a default size is selected; however, this can always be changed. Keep in mind it is advised to follow the minimum system requirements, which can be found in the User Guide.

Once you’ve added your disks and configured the networking, you’re all set, and the Azure portal will even show you details on the Azure infrastructure costs for your chosen VM size, such as a Standard A4 v2.

If you are wondering how to calculate the amount of storage needed for Exchange, SharePoint and OneDrive data, Microsoft provides great reports for this within the Microsoft 365 admin center, under the Reports option.
Once the VM has been deployed, you can connect over RDP and are good to go with a pre-installed Veeam Backup for Microsoft Office 365 installation. Keep in mind that, by default, the standard retention on the repository is set to 3 years, so you may need to modify this to match your organization’s needs.
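As a rough illustration of the sizing exercise above, a back-of-the-envelope estimate can be sketched as follows. The per-user figures are placeholders; take the real numbers from the Microsoft 365 admin center reports:

```python
# Back-of-the-envelope repository sizing. All inputs are placeholders to be
# replaced with real usage figures from the Microsoft 365 admin center
# reports; the growth factor pads for retention and future growth.

def estimate_repository_gb(users, mailbox_gb_per_user, onedrive_gb_per_user,
                           sharepoint_gb_total, growth_factor=1.2):
    """Sum current usage across services, then pad by a growth factor."""
    current = users * (mailbox_gb_per_user + onedrive_gb_per_user) + sharepoint_gb_total
    return current * growth_factor

# Example: 100 users, 5 GB mail and 10 GB OneDrive each, 500 GB of SharePoint
print(estimate_repository_gb(100, 5, 10, 500))  # roughly 2,400 GB
```

Padding with a growth factor matters here because the default 3-year retention means the repository holds far more than a single point-in-time copy.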

Two ways to get started!

You can provision Veeam Backup for Microsoft Office 365 in Azure and bring a 30-day trial key with you to begin testing.
You can also deploy the solution within Azure and back up all your Office 365 data free forever – limited to a maximum of 10 users and 1TB of SharePoint data within your organization.
Ready to get started? Try it out today and head to the Azure Marketplace right now!



NetApp ONTAP with Veeam – simpler access to Hyper-Availability

Veeam Executive Blog – The Availability Lounge  /  Carey Stanton

What a great year it has been for our partnership with NetApp. Our two companies have worked diligently to realize the vision of a single point of sale for Veeam Hyper-Availability with NetApp ONTAP. Now, I’m pleased to share that the strength of the NetApp and Veeam partnership has made this a reality. NetApp customers can purchase joint NetApp ONTAP and Veeam Hyper-Availability solutions from our joint partners around the globe.
The NetApp and Veeam partnership began in 2015, with each successive year bringing the two companies closer together in our common cause to deliver the best experience to our customers. Last year, when we announced the intent to offer Veeam Hyper-Availability solutions on the NetApp Global price list, we knew great things were coming to build success together. And this year has delivered.
Veeam is a premier sponsor at NetApp Insight, delivering the Veeam Hyper-Availability Platform across the entire NetApp Data Fabric portfolio. Whether you use ONTAP, NetApp HCI, E-Series or StorageGRID, Veeam is right there as the Availability solution of choice.
If you’re a NetApp customer or partner, you can be confident knowing two of the hottest tech companies are in sync to transform and protect your data.
NetApp ONTAP is the foundation for many organizations’ data management strategy. Veeam’s integration with ONTAP, and the simplicity, agility and efficiency the joint solution delivers, is a key reason why NetApp customers choose Veeam over competitive data protection solutions.
The NetApp resale of Veeam Hyper-Availability solutions provides our joint customers with the confidence to invest in pre-validated NetApp and Veeam solutions. This is particularly important for organizations transforming their business workloads across multi-cloud environments.
By simplifying application performance and Hyper-Availability for multi-cloud workloads, NetApp and Veeam provide the following key benefits:

  • Simplified IT. Less time spent on IT operations and more time spent helping the business transform and innovate.
  • Lower costs. Improved data center efficiencies combined with the flexibility to leverage the economies of scale of public cloud infrastructure without compromising performance or Availability.
  • Increased ROI. Accelerate business innovation through Intelligent Data Management by leveraging Veeam DataLabs with secondary NetApp storage to facilitate application development, data analytics, DR compliance testing, end-user training and more.

Don’t just take our word for it. According to recent IDC research, customers using Veeam and NetApp get nearly a 300% ROI over five years with the following benefits:

  • Nine-month payback on investment
  • 58% lower cost of operations
  • 36% lower cost of hardware
  • 89% less unplanned downtime

We here at Veeam believe partnerships are critical to the continued success of our customers. Working in unison with others in the industry is our philosophy, on both a personal and corporate level. We are proud to be a NetApp partner and proud of the trust between our two organizations. I look forward to seeing you at NetApp Insight this year and the continued success of our two companies in the year to come.

Makes disaster recovery, compliance, and continuity automatic

NEW Veeam Availability Orchestrator helps you reduce the time, cost and effort of planning for and recovering from a disaster by automatically creating plans that meet compliance.


Veeam Availability Orchestrator



Industry demand for Intelligent Data Management continues at pace

Industry demand for Intelligent Data Management continues at pace

Veeam Executive Blog – The Availability Lounge  /  Peter McKay

As Co-CEO and President, I am always looking ahead. How can we improve? Are we delivering the best solutions to meet customer demands? What will be the next transformational opportunity that we can grasp? But I also like to reflect on the past and having just closed our fiscal Q3’18 I am delighted with the state of the business.
We have just closed another record period, our 41st consecutive quarter of double-digit growth. Bookings grew by more than 20% year-over-year (YoY) in what is traditionally a slower quarter across the software business due to summertime. Our traction in the Enterprise customer segment continues to thrive, increasing by almost 25% compared to Q3’17, and we now count more than 80% of the Fortune 500 as Veeam customers.
While I am very pleased with these results, what excites me most is the momentum we are making in the Cloud, with our emerging products, and with our ecosystem partners. At VeeamON 2018, we unveiled our vision and strategy to be the most trusted provider of Intelligent Data Management solutions in today’s multi-cloud world. We have worked tirelessly to deliver on this vision and we are seeing the rewards; in Q3’18 our cloud business grew by 26% YoY.
A great example of customers embracing Veeam’s approach to data management across multi-cloud environments is Veeam Backup for Microsoft Office 365® Version 2, which we launched in July. We enabled customers and Veeam Cloud and Service Providers (VCSPs) to seamlessly protect their entire Microsoft Office 365 environments and ensure data availability, and we have seen this business soar – bookings are up more than 700% YoY! In addition, our AWS cloud backup product, N2WS continues to be #1 in AWS backup and recovery with growth over 186% in Q3 YoY.
But that’s not all. Perhaps our biggest achievement over the past quarter has been the momentum we’ve built with our ecosystem partners. As you know, Veeam has always been – and always will be – a company that is focused on partnerships. Our 59,000 ProPartners know this.
Over the past 18 months we have strengthened our focus on Alliance partnerships, with Q3’18 growth accelerating exponentially. During the quarter, we announced Veeam Availability for Nutanix AHV. We also expanded collaboration with Cisco to deliver Veeam High Availability on Cisco HyperFlex™ and added our fourth resale partner, Lenovo, to the reseller relationships we have with HPE, Cisco and NetApp.
The result? We grew our Alliances business by 128% YoY. This is a tremendous achievement and I want to personally thank each of our partners for their continued support and engagement. Partners are flocking to Veeam as their preferred data management partner as they know that we can deliver a proposition that meets customers’ demands today and into the future.
It’s been another strong quarter for Veeam and its ecosystem. I want to congratulate all involved. However, as much as I applaud the past, I am always looking forward. Everyone at Veeam, its partners and the 320,000 customers that rely on us to protect their environments is looking to close the year strong. The opportunity in front of us is huge, and I know that 2018 will go on record as Veeam’s best year ever.
To all, thanks for your continued support.


Availability for your Nutanix AHV with Veeam

Availability for your Nutanix AHV with Veeam

vZilla  /  michaelcade

This series highlights the steps to deploy, install and configure Veeam Availability for Nutanix AHV, how to start protecting workloads, and the recovery options we have available.
Everything You Need to Know for Veeam Availability for Nutanix AHV

Now that we have our Veeam Proxy Appliance deployed, installed and configured, the next step is to start protecting some of the workloads we have sitting in our Nutanix AHV Cluster.
Navigate to the backup jobs tab on the top ribbon.

Here we can add a new backup job, a simple wizard driven approach to start protecting those workloads.

Next, we need to add in our virtual machines,

In my scenario I have simple virtual machines. If you are leveraging Nutanix Protection Domains, you can also use that grouping here to select your virtual machines. We can also leverage dynamic mode, which allows new workloads to be added to and removed from the job as they appear under that protection domain.

Add the virtual machine or machines that you wish to protect.

The next option is selecting the destination of the backup job. To be able to see the backup repository, the correct access permissions need to be granted on the VBR server. This is done from the Veeam Backup & Replication console.

There are some advanced settings that can also be set to remove deleted VMs from the backup that are no longer included in the backup job.

The next step is to configure the schedule. This allows you to choose the interval between backups and how many restore points you want to retain.

The final screen is the summary of the backup job you are about to complete.

You will also notice the option to run the backup job when Finish is selected. This will start the backup job process, triggering a full backup of the virtual machines you have selected.

Over in VBR you can see the job also running. In a very similar fashion to what we saw with the original Veeam Endpoint Backup, we can see that something is happening, but nothing can be configured for this job from within Veeam Backup & Replication.

Back in the Veeam Availability for Nutanix AHV we now have a completed backup job.

Veeam Backup & Replication also shows the completed job and the steps that have occurred during the job.

We will also now see the specific job in our Veeam Backup & Replication console under the backups giving us the ability to perform certain recovery tasks against those backup files.

And we also see the completed job now under the backup jobs in the proxy appliance interface. Here we can run an ad hoc Active Full, as well as start, stop and edit the job.

Over on the Protected VMs tab you will also notice that we now have visibility into the virtual machines that are protected, along with how many snapshots and backups are present.

To finish, if you head back to the dashboard you will now see the job status showing that we have one created backup job and it is currently idle.

That’s all for the availability section of this series. This really gives us the ability to create backup jobs for the virtual machines that sit within the Nutanix AHV cluster. This is an agentless approach, so for any application consistency you will require the Nutanix Guest Tools.
One thing to note: if you have a transactional workload, we would recommend using the Veeam Agent to provide not only application consistency but also log truncation within the application. This is not required if the application can manage that truncation task itself.
Next up we will look at the recovery steps and options we have.
The post Availability for your Nutanix AHV with Veeam appeared first on vZilla.



Why our software-driven, hardware agnostic approach makes sense for backups

Why our software-driven, hardware agnostic approach makes sense for backups

Veeam Software Official Blog  /  Anthony Spiteri

Having been hands-on in service provider land for the entirety of my career prior to joining Veeam, I understand the pain points that come with offering backup and recovery services. I’ve spent countless hours working on getting the best combination of hardware and software for those services. I also know firsthand the challenges that storage platforms pose for architecture, engineering and operations teams who design, implement and manage these platforms.

Storage scalability

An immutable truth that exists in our world is that backup and storage go hand in hand and you can’t have one without the other. In recent times, there has been an extreme growth in the amount of data being backed up and the sprawl of that data has also become increasingly challenging to manage. While data is growing quicker than it ever has, in relative terms the issues created by that haven’t changed in the last ten or so years — though they have been magnified.
Focusing on storage, those that have deployed any storage platform understand that there will come a point where hardware and software constraints start to come into play. I’ve not yet experienced or heard of a storage system that doesn’t apply some limitation on scale or performance at some point. Whether you are constrained by physical disk or controller based limits or software overheads, the reality is no system is infinitely scalable and free of challenge.
The immediate solution to resolve these challenges in my experience (and anecdotally) has always been to throw more hardware at the platforms by purchasing more. Whether it be performance or disk constraints, the end result is always to expand capacity or upgrade the core hardware components to get the system back to a point where it’s performing as expected.
That said, there are a number of systems that do work well, and if architected and managed in the correct way will offer longer-term service sustainability. When it comes to designing storage for backup data, the principles used to design for other workloads, such as virtual machines, cannot be applied. Backup data is a long game, and portability of that data should be paramount when choosing what storage to use.

How Veeam helps

Veeam offers tight integration with a number of top storage vendors via our storage integrations. Not only do these integrations offer flexibility to our customers and partners, but they also offer absolute choice and mobility when it comes to the short- and long-term retention of backup data.
Extending that portability message — the way in which backup data is stored should mean that when storage systems reach the end of their lifetime, data isn’t held prisoner by the hardware. Another inevitability of storage is that there will come a time when it needs replacing. This is where Veeam’s hardware agnostic, software-defined approach to backup comes into play.
Recently, there have been a number of products that have come into the market that offer an all-in-one solution for data protection in the form of software tied to hardware appliances. The premise of these offerings is ease of use and single platform to manage. While it’s true that all-in-one solutions are attractive, there is a sting in the tail of any platform that offers software that is tied to hardware.


Fundamentally, the issues that apply to storage platforms apply to these all-in-one appliances. They will reach a point where performance starts to struggle, upgrades are required and, ultimately, systems need to be replaced. This is where the ability to have freedom of choice and a decoupled approach to software and hardware ultimately results in total control of where your backup data is stored, how it performs and when that data is required to be moved or migrated.
You only achieve this through backup software that’s separated from the hardware. While it might seem like a panacea to have an all-in-one solution, there needs to be consideration as to what this means three, five or ten years into the future. Again, portability and choice is king when it comes to choosing a backup vendor. Lock in should be avoided at all costs.



Veeam showcases NEW cloud data management at Ignite 2018

Veeam showcases NEW cloud data management at Ignite 2018

Veeam, a company born on virtualization, has rapidly adapted to customer needs to become the leader in Intelligent Data Management. In keeping with its speed and momentum to market, Veeam’s highly anticipated NEW Veeam Availability Suite™ Update 4 will soon be released. This promises to redefine cloud data management with cloud mobility for Microsoft Azure Stack, as well as native Azure Blob storage for Veeam-powered archives.

Veeam welcomes Lenovo to our growing reseller family

Veeam welcomes Lenovo to our growing reseller family
We are excited to announce that Lenovo is now an official Veeam® Global Reseller Alliance Partner! Through Veeam and Lenovo, organizations can now combine the Veeam Hyper‑Availability Platform™ with Lenovo storage area network (SAN) and software-defined infrastructure (SDI) solutions.

The ultimate AWS re:Invent experience

The ultimate AWS re:Invent experience
Veeam and N2WS are giving away the ultimate AWS re:Invent experience to THREE lucky winners! All you have to do is register now and enter our raffle to win a FULL CONFERENCE

Creating policy with exclusion of folders

Creating policy with exclusion of folders

Notes from MWhite  /  Michael White

I need to add Bitdefender protection to my Veeam Backup & Replication server so this article is going to help with that. But it is mostly going to talk about adjusting the policy that is applied to the Veeam server so that it doesn’t impact the backups.


We need to have access to the info about what to exclude on the Veeam server. That is found in this article. We also need to have access to our GravityZone UI so we can create a policy, add exclusions to it, and then apply it to the VM.

Let’s get started.

Creating Policy

We need to start off in the GravityZone UI and change to the Policies area.

We do not want to edit the default policy as it is applied to everyone, plus I believe it cannot be deleted. So we are going to select it and clone it, giving us a new copy that we can tweak and attach to our Veeam server. Once it is cloned, it opens up as seen below.

We need to name it something appropriate – I will call it VBR Exclusions. I like the default policy and think it is pretty good, so I am going to leave this clone as it was and only add the Veeam exclusions. So change to the Antimalware area and select Settings.

You can see it below, where I have already entered the Veeam server exclusions.

You only need to enable Custom Exclusions with the checkbox, then add in what you see above. Once you have finished, use the Save button to save this new policy. It is the same as the default – which I said I like – except it has the additional exclusions.

I do not know how to attach a policy to a package so that when it is installed it gets the policy.  So we are going to install with the default and change it afterwards.

Install the Bitdefender client

Likely you know how to do that – download a package and execute it. Once done make sure you see it in the GravityZone UI.

Now we need to assign the proper policy to our Veeam server.

Policy Assignment

We need to be in the Policies \ Assignment Rules area.

We add a location rule by using Add \ Location. Once we do that we see the following screen.

We add a name, and description, plus select the policy we just created the exclusions in and add an IP address for our Veeam server.

Now we change to the Policies view; it may take a minute or two before you see something different.

We see that one machine has the policy, which makes sense, but 4 show as applied, which is confusing. However, running a Policy Compliance report, which shows who has what policy, confirms that VBR01 – my Veeam server – is the only one that has the policy.

So things look good now. We have created a special policy for our Veeam server, applied it, and confirmed it was applied.

Any questions or comments let me know.


=== END ===



Veeam showcases NEW cloud data management at Ignite 2018

Veeam showcases NEW cloud data management at Ignite 2018

Veeam Executive Blog – The Availability Lounge  /  Carey Stanton

Veeam, a company born on virtualization, has rapidly adapted to customer needs to become the leader in Intelligent Data Management. In keeping with its speed and momentum to market, Veeam’s highly anticipated NEW Veeam Availability Suite Update 4 will be released in Q4 2018. This promises to redefine cloud data management with cloud mobility for Microsoft Azure Stack, as well as native Azure Blob storage for Veeam-powered archives.

Take a deeper look at the current and new innovations Veeam will showcase at Microsoft Ignite 2018:

  • Enabling cloud backup to Azure — automatically sending backups to the cloud (for both service providers and enterprise customers)
  • Protecting Office 365 email data in Azure — automatically externalizing backup of Office 365 data, including Exchange Online, SharePoint Online and OneDrive for Business to Azure
  • Protecting Microsoft Azure VMs — ensuring Availability of cloud-based workloads
  • NEW – 10x the savings on archiving in Azure Blob with Veeam Cloud Archive
  • NEW – 2 steps to restore ANY backup to Azure and Azure Stack, including both physical and virtual machine backups.

2 steps to restore ANY backup to Azure and Azure Stack

As you probably know, Microsoft Azure Stack is an exciting innovation that extends the power of the Azure public cloud to on-premises. Building on Veeam’s announcement at Microsoft Ignite 2017, Veeam is publicly presenting Veeam Direct Restore to Microsoft Azure Stack. With similar functionality to existing Veeam Direct Restore to Microsoft Azure, this new mobility feature for Azure Stack allows you to restore and migrate workloads from on-premises to Azure Stack, or even from Microsoft Azure to Azure Stack, all with a 2-step process!

This means that in the event of a failure, or if you want to migrate a current VM to Azure Stack, you can quickly achieve this through the built-in Veeam Direct Restore to Microsoft Azure Stack functionality that will be available in the upcoming update.

10x the savings with native cloud archive capabilities for Azure Blob

Another big highlight at Ignite is Veeam’s highly anticipated, native Azure Blob support known as Veeam Cloud Archive. This will give customers an automated data-management solution designed to simplify data transfer to Azure Blob storage, with up to 10x the savings on long-term archives.

With this upcoming feature, it has never been easier to free up primary storage space and save on costs by archiving your Veeam backups offsite with infinite scalability into Azure Blob. By leveraging Veeam’s proven Scale-Out Backup Repository (SOBR) technologies, customers can access the cloud directly for storage of long-term data archives. Also, unlike other solutions, Veeam does not charge storage tax fees for storing data in the cloud — making this one of the most cost-effective cloud archive solutions in the industry!


There’s a lot of buzz at Microsoft Ignite around Veeam’s new cloud management capabilities for Microsoft Azure and Azure Stack coming in NEW Veeam Availability Suite 9.5 Update 4. As the Leader in Intelligent Data Management for the Hyper-Available Enterprise, Veeam continues to deliver innovative solutions — with cloud mobility for Microsoft Azure Stack and native cloud archives for Azure Blob.

You can explore all Veeam’s Microsoft integrations by visiting Veeam for the Microsoft Cloud.



How to bring balance into your infrastructure

How to bring balance into your infrastructure

Veeam Software Official Blog  /  Evgenii Ivanov

Veeam Backup & Replication is known for its ease of installation and moderate learning curve. We take that as a great achievement, but as we see in our support practice, it can sometimes lead to a “deploy and forget” approach, without fine-tuning the software or learning the nuances of how it works. In our previous blog posts, we examined tape configuration considerations and some common misconfigurations. This time, the blog post aims to give the reader some insight into a Veeam Backup & Replication infrastructure, how data flows between the components and, most importantly, how to properly load-balance backup components so that the system can work stably and efficiently.

Overview of a Veeam Backup & Replication infrastructure

Veeam Backup & Replication is a modular system. This means that Veeam as a backup solution consists of a number of components, each with a specific function. Examples of such components are the Veeam server itself (as the management component), proxy, repository, WAN accelerator and others. Of course, several components can be installed on a single server (provided that it has sufficient resources) and many customers opt for all-in-one installations. However, distributing components can give several benefits:

  • For customers with branch offices, it is possible to localize the majority of backup traffic by deploying components locally.
  • It allows you to scale out easily. If your backup window increases, you can deploy an additional proxy. If you need to expand your backup repository, you can switch to a scale-out backup repository and add new extents as needed.
  • You can achieve High Availability for some of the components. For example, if you have multiple proxies and one goes offline, the backups will still be created.

Such a system can only work efficiently if everything is balanced. An unbalanced backup infrastructure can slow down due to unexpected bottlenecks or even cause backup failures because of overloaded components.

Let’s review how data flows in a Veeam infrastructure during a backup (we’re using a vSphere environment in this example):

All data in Veeam Backup & Replication flows between source and target transport agents. Let’s take a backup job as an example: a source agent runs on a backup proxy and its job is to read the data from a datastore, apply compression and source-side deduplication and send it over to a target agent. The target agent runs directly on a Windows/Linux repository, or on a gateway if a CIFS share is used. Its job is to apply target-side deduplication and save the data in a backup file (.VBK, .VIB, etc.).

That means there are always two components involved, even if they are essentially on the same server, and both must be taken into account when planning resources.
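As a toy sketch of this division of labor (this is not Veeam's actual agent protocol or backup file format — the hashing, compression and dict-based "backup file" are illustrative assumptions), the two agents look roughly like this:

```python
import hashlib
import zlib

def source_agent(blocks, sent_hashes):
    """Source side: deduplicate against already-sent blocks, then compress."""
    payload = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest in sent_hashes:          # source-side dedup: skip known blocks
            continue
        sent_hashes.add(digest)
        payload.append((digest, zlib.compress(block)))
    return payload

def target_agent(payload, backup_file):
    """Target side: store each unique compressed block in the 'backup file'."""
    for digest, compressed in payload:
        backup_file.setdefault(digest, compressed)   # target-side dedup
    return backup_file

# Two identical 4 KB blocks and one unique block -> only two blocks stored.
sent, backup = set(), {}
blocks = [b"A" * 4096, b"A" * 4096, b"B" * 4096]
target_agent(source_agent(blocks, sent), backup)
print(len(backup))  # 2
```

The point of the sketch is simply that work happens on both ends of the transfer, which is why both sides count when sizing resources.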

Tasks balancing between proxy and repository

To start, we must examine the notion of a “task.” In Veeam Backup & Replication, a task is equal to a VM disk transfer. So, if you have a job with 5 VMs and each has 2 virtual disks, there is a total of 10 tasks to process. Veeam Backup & Replication is able to process multiple tasks in parallel, but the number is still limited.

If you go to the proxy properties, on the first step you can configure the maximum concurrent tasks this proxy can process in parallel:


On the repository side, you can find a very similar setting:

For normal backup operations, a task on the repository side also means one virtual disk transfer.

This brings us to our first important point: it is crucial to keep the resources and number of tasks in balance between proxy and repository.  Suppose you have 3 proxies set to 4 tasks each (that means that on the source side, 12 virtual disks can be processed in parallel), but the repository is set to 4 tasks only (that is the default setting). That means that only 4 tasks will be processed, leaving idle resources.
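The arithmetic behind this example is just a minimum over the two sides; a small sketch (the function name and values are illustrative, not a Veeam API):

```python
def effective_parallelism(proxy_slots, repository_slots=None):
    """Parallel disk transfers are capped by the scarcer side.
    repository_slots=None models the 'unlimited' repository setting."""
    total_proxy_slots = sum(proxy_slots)
    if repository_slots is None:
        return total_proxy_slots
    return min(total_proxy_slots, repository_slots)

# The example above: 3 proxies at 4 tasks each, repository at the default 4.
print(effective_parallelism([4, 4, 4], 4))   # 4  -> 8 source-side slots sit idle
print(effective_parallelism([4, 4, 4], 12))  # 12 -> proxies and repository in balance
```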

The meaning of a task on a repository is different when it comes to synthetic operations (like creating a synthetic full). Recall that synthetic operations do not use proxies and happen locally on a Windows/Linux repository or between a gateway and a CIFS share. In this case, for normal backup chains, a task is a backup job (so 4 tasks mean that 4 jobs will be able to generate a synthetic full in parallel), while for per-VM backup chains, a task is still a VM (so 4 tasks mean that the repository can generate 4 separate VBKs for 4 VMs in parallel). Depending on the setup, the same number of tasks can create a very different load on a repository! Be sure to analyze your setup (the backup job mode, the job scheduling, the per-VM option) and plan resources accordingly.

Note that, unlike for a proxy, you can disable the limit on the number of parallel tasks for a repository. In this case, the repository will accept all incoming data flows from proxies. This might seem convenient at first, but we highly discourage disabling this limitation, as it may lead to overload and even job failures. Consider this scenario: a job has many VMs with a total of 100 virtual disks to process and the repository uses the per-VM option. The proxies can process 10 disks in parallel and the repository is set to an unlimited number of tasks. During an incremental backup, the load on the repository will be naturally limited by the proxies, so the system will be in balance. However, then a synthetic full starts. Synthetic full does not use proxies and all operations happen solely on the repository. Since the number of tasks is not limited, the repository will try to process all 100 tasks in parallel! This will require immense resources from the repository hardware and will likely cause an overload.
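The overload scenario above can be modeled in a few lines (a toy model with made-up names, not a Veeam API): during an incremental run the proxies throttle the inflow, while a synthetic full bypasses them entirely.

```python
def parallel_repo_tasks(operation, proxy_slots, repo_limit, disks):
    """How many tasks actually hit the repository at once (toy model)."""
    if operation == "incremental":
        inflow = min(proxy_slots, disks)   # proxies throttle the inflow
    else:                                  # synthetic full bypasses proxies
        inflow = disks
    return inflow if repo_limit is None else min(inflow, repo_limit)

# 100 disks, proxies process 10 in parallel, repository task limit disabled:
print(parallel_repo_tasks("incremental", 10, None, 100))  # 10  -> naturally balanced
print(parallel_repo_tasks("synthetic", 10, None, 100))    # 100 -> likely overload
```

Setting a sane repository task limit is what caps the second case.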

Considerations when using CIFS share

If you are using a Windows or Linux repository, the target agent will start directly on the server.  When using a CIFS share as a repository, the target agent starts on a special component called a “gateway,” that will receive the incoming traffic from the source agent and send the data blocks to the CIFS share. The gateway must be placed as close to the system sharing the folder over SMB as possible, especially in scenarios with a WAN connection. You should not create topologies with a proxy/gateway on one site and CIFS share on another site “in the cloud” — you will likely encounter periodic network failures.

The same load balancing considerations described previously apply to gateways as well. However, the gateway setup requires additional attention because there are 2 options available — set the gateway explicitly or use an automatic selection mechanism:

Any Windows “managed server” can become a gateway for a CIFS share. Depending on the situation, both options can come in handy. Let’s review them.

You can set the gateway explicitly. This option can simplify resource management — there can be no surprises as to where the target agent will start. It is recommended to use this option if access to the share is restricted to specific servers or in the case of distributed environments — you don’t want your target agent to start far away from the server hosting the share!

Things become more interesting if you choose Automatic selection. If you are using several proxies, automatic selection gives you the ability to use more than one gateway and distribute the load. Automatic does not mean random, though; there are indeed strict rules involved.

The target agent starts on the proxy that is doing the backup. In the case of normal backup chains, if there are several jobs running in parallel and each is processed by its own proxy, then multiple target agents can start as well. However, within a single job, even if the VMs in the job are processed by several proxies, the target agent will start on only one proxy, the first to start processing. For per-VM backup chains, a separate target agent starts for each VM, so you can get load distribution even within a single job.

Synthetic operations do not use proxies, so the selection mechanism is different: the target agent starts on the mount server associated with the repository (with the ability to fail over to the Veeam server if the mount server is unavailable). This means that the load of synthetic operations will not be distributed across multiple servers. As mentioned above, we discourage setting the number of tasks to unlimited — that can cause a huge load spike on the mount/Veeam server during synthetic operations.
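Putting these selection rules together in a simplified sketch (the real selection logic has more cases, and the host names here are placeholders):

```python
def target_agent_hosts(operation, chain_mode, job_proxies, mount_server):
    """Where target agents start under Automatic gateway selection (simplified)."""
    if operation == "synthetic":
        return {mount_server}          # synthetic ops skip proxies entirely
    if chain_mode == "per-vm":
        return set(job_proxies)        # one target agent per VM spreads the load
    return {job_proxies[0]}            # normal chain: first proxy to start processing

print(target_agent_hosts("backup", "normal", ["proxy1", "proxy2"], "mnt01"))
print(target_agent_hosts("backup", "per-vm", ["proxy1", "proxy2"], "mnt01"))
print(target_agent_hosts("synthetic", "per-vm", ["proxy1", "proxy2"], "mnt01"))
```

The sketch makes the key asymmetry visible: per-VM chains can spread backup traffic across proxies, but synthetic work always lands on the one mount server.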

Additional notes

Scale-out backup repository. SOBR is essentially a collection of standard repositories (called extents). You cannot point a backup job to a specific extent, only to the SOBR; however, extents retain some of the settings, including load control. So what was discussed about standalone repositories pertains to SOBR extents as well. A SOBR with the per-VM option (enabled by default), the “Performance” placement policy and backup chains spread out across extents will be able to optimize resource usage.

Backup copy. Instead of a proxy, source agents will start on the source repository. All considerations described above apply to source repositories as well (although in the case of a backup copy job, synthetic operations on a source repository are logically not possible). Note that if the source repository is a CIFS share, the source agents will start on the mount server (with a failover to the Veeam server).

Deduplication appliances. For DataDomain, StoreOnce (and possibly other appliances in the future) with Veeam integration enabled, the same considerations apply as for CIFS share repositories. For a StoreOnce repository with source-side deduplication (Low Bandwidth mode), the requirement to place the gateway as close to the repository as possible does not apply — for example, a gateway on one site can be configured to send data to a StoreOnce appliance on another site over WAN.

Proxy affinity. A feature added in 9.5, proxy affinity creates a “priority list” of proxies that should be preferred when a certain repository is used.

If no proxy from the list is available, the job will use any other available proxy. However, if an affinity proxy is available but has no free task slots, the job will pause and wait for slots to free up. Even though proxy affinity is a very useful feature for distributed environments, it should be used with care, especially because it is easy to set this option and then forget about it. Veeam Support has encountered cases of “hanging” jobs that came down to an affinity setting that had been enabled and forgotten. More details on proxy affinity.
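
The “hanging job” scenario is easier to see in a small sketch of this selection logic. This is purely illustrative — the function and proxy names are hypothetical, not Veeam code:

```python
# Illustrative sketch of proxy-affinity selection as described above.
def pick_proxy(affinity, all_proxies, free_slots):
    """affinity: proxies preferred for this repository;
    free_slots: dict mapping proxy name -> free task slots."""
    available = [p for p in affinity if p in all_proxies]
    if available:
        # An affinity proxy exists: the job restricts itself to those,
        # waiting (possibly indefinitely) if none has a free slot.
        ready = [p for p in available if free_slots.get(p, 0) > 0]
        return ready[0] if ready else "WAIT"
    # No affinity proxy is deployed at all: fall back to any proxy.
    ready = [p for p in all_proxies if free_slots.get(p, 0) > 0]
    return ready[0] if ready else "WAIT"

# The affinity proxy is online but fully loaded -> the job appears to "hang",
# even though proxy-B has free slots.
print(pick_proxy(["proxy-A"], ["proxy-A", "proxy-B"], {"proxy-A": 0, "proxy-B": 4}))
# WAIT
```

The key point the sketch makes: once an affinity list matches an existing proxy, other proxies are no longer candidates, so a fully loaded affinity proxy stalls the job rather than spilling work elsewhere.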


Whether you are setting up your backup infrastructure from scratch or have been using Veeam Backup & Replication for a long time, we encourage you to review your setup with the information from this blog post in mind. You might be able to optimize the use of resources or mitigate some pending risks!

The post How to bring balance into your infrastructure appeared first on Veeam Software Official Blog.

Original Article:


Intelligent data management: Interview with Rick Vanover of Veeam – TechGenix (blog)

I recently had a chance to talk with Rick Vanover of Veeam Software about what businesses need to do these days to ensure their availability strategy fully addresses their needs. Rick is the director of product strategy at Veeam, where he leads a team of technologists and analysts that brings Veeam solutions to market and works with customers, partners, and R&D teams around the world. You can follow Rick on Twitter @RickVanover.

MITCH: Rick, thanks very much for agreeing to let me interview you on the topic of how data protection and availability are changing in our cloud-based era and what organizations are doing right and wrong these days when it comes to preparing for disaster recovery.


Rick Vanover (Credit: Veeam Software)

RICK: My pleasure. This is an area that I’ve built my career around and organizations today need to ensure that their availability strategy meets the expectations of the business.

MITCH: The last few years have seen a lot of changes in how organizations of all sizes implement and manage their IT infrastructures. Cloud computing models like software as a service (SaaS) now enable users to connect to and use cloud-based apps directly over the Internet, with Microsoft’s Office 365 being one popular solution of this type. Then there’s infrastructure as a service (IaaS), which lets organizations build agile computing infrastructures that can scale up and down on demand. What does and doesn’t change with regard to ensuring data protection and availability for your business under these new models?

RICK: This is a great question, Mitch, and I’m glad you asked it. The one important thing I’ve learned over the years is that while the platform may change, the rules on the data and the availability expectations do not. Microsoft Office 365 is a good example, as you have illustrated. The promise of this software as a service (SaaS) solution is great: real relief for on-premises tier 1 storage, the opportunity to reduce the need for mailbox quotas and, with OneDrive for Business, a way to combat “shadow IT” file sharing outside of corporate mechanisms. These are real business problems solved by Microsoft Office 365, and these changes are welcome to users and IT administrators alike.

But what doesn’t change when the application does? The responsibility for the data. Organizations need to realize that it is still their data, which is why Veeam has invested in a new product, Veeam Backup for Microsoft Office 365.

MITCH: What sort of changes do organizations need to make in their supporting processes to ensure data protection/availability in the event of a disaster when they’ve embraced the cloud wholeheartedly or at least adopted some sort of hybrid IT model?

RICK: As the mix of platforms changes for organizations, the disaster recovery aspect absolutely needs to be reassessed. This is a very difficult task and, honestly, the old way of doing it isn’t acceptable anymore. I know plenty of IT administrators who addressed disaster recovery as a once-a-year test where there was free pizza over the weekend, things were tested, about half of it failed, and the goal was to do better next year. Today’s IT services and expectations can’t tolerate that.

This is one reason Veeam developed a new product that became available earlier this year: Veeam Availability Orchestrator. This product brings a very critical capability for disaster recovery in the era of hybrid IT. Veeam Availability Orchestrator supports orchestrating disaster recovery for on-premises workloads, but it also supports orchestrated disaster recovery to VMware Cloud on AWS, a new cloud offering in Amazon for VMware workloads. This is an example of how an organization can have its on-premises resources benefit from DR in the cloud — literally!

MITCH: I’ve heard it said by some who provide IT support for businesses that rely mostly on SaaS applications that “backup” is basically a bad word now, that performing daily backups is a dead practice because there are now more sophisticated ways to ensure availability for your business data. But this sounds a lot like an oversimplification to me. Does the availability burden of performing regular backups really go away with cloud computing?

RICK: Backup is the first stage, and Veeam takes a critical view of this step. In fact, backup is the most important stage. We see the market as a five-stage journey to intelligent data management:

Five stages of intelligent data management

Backup and recovery, as well as replication and failover, are the important critical functions there. This effectively is a gateway to more advanced capabilities.

The next step is an aggregation of those critical data sources, whether they are in the cloud, on-premises, or in the SaaS space. Having the data flow for all critical data is an important milestone, and each platform has its own characteristics that may change what the capabilities are for backup and recovery.

With the aggregation of this data, visibility becomes important. Key questions like what data is where, who is accessing what, and whether the environment will run out of storage are very critical today.

Advanced capabilities become the next opportunity, and orchestration is a capability Veeam brings to the market today that can respond to changes very easily. For example, orchestrated disaster recovery to another site can be performed if there is a concern that weather is going to take out a data center, so organizations can proactively fail over with confidence.

With all of these capabilities, then automation becomes the goal. Automatic resolution of issues and policy violations, for example, will be a capability from Veeam later this year. This can be very important when it comes to ensuring that critical data is protected to the level the business demands today.

This is a quick overview of our vision, but it is important to reinforce that it all starts with a solid backup and recovery as well as replication and failover capability.

MITCH: Given all these changes that are happening, what are most organizations doing right and wrong these days with regard to disaster recovery?

RICK: My observation today is that many organizations are simply not providing the availability experience their business demands. The best example is a high-speed recovery technique. Ask this question: If someone accidentally deletes a virtual machine, how soon can it come back? If the answer is more than a few minutes, there is a gap between the capabilities of what is in place and the expectations of users. That’s one example of an area where Veeam has pioneered and led the market for over eight years, and there are more. The same goes for an AWS EC2 instance in the cloud: If someone terminates and deletes it, how soon can it come back? If that too is more than minutes, there is a gap.

Organizations are doing things right when it comes to leveraging new platforms. This includes leveraging SaaS applications where it makes sense, leveraging the cloud and leveraging service providers for the right services as well.

The key advice I have to offer is to ensure that availability is thought of every step of the way.

MITCH: How do new platforms like hyper-converged infrastructure (HCI), advanced storage systems, and robust networking change the game in regards to availability?

RICK: They provide new “plumbing” to work with. New APIs, new networking techniques, and better snapshot capabilities. These are all important for efficient data flow for backup and recovery as well as replication and failover.

Veeam has always invested in APIs and platform capabilities to address data movement at scale. The new technology platforms in the mix are built in the same mindset as well.

MITCH: Looking ahead then, what do you see in the future for business continuity and disaster recovery?

RICK: I see a continued push for completeness. Organizations will make changes to applications, data, and other critical systems to make them more “DR friendly.” A good example is a legacy application that sits on an obsolete physical server running an operating system that has been out of support for three years. Can that application really have good DR? No.

Organizations are indeed seeing the value of proper DR, and if an application needs to be modernized, changed, or sunset out of production use, that’s what it takes.

Proper DR comes with modern platforms and data; obsolete components can’t be made awesome!

MITCH: What practical advice would you give to an admin for implementing disaster recovery solutions in a hybrid cloud environment? Any tips or recommendations to help them get it right going forward?

RICK: If there is a gap in the availability strategy, my advice is to start small and get it right. Specifically, that means taking one small application and getting the basics of backup and recovery right. Then set up replication and DR capabilities for that small application. Once you have that working, the exercise educates the organization about what solid backup looks like and how these types of tools work.

Then move to the next application that is a bit more complex. And succeed. Get applications to the right model one at a time. Don’t start with the biggest, most critical application in the mix at first.

Once the small successes are proven, go back to the business (the operational people in the organization) and show that proper DR and better backup and restore times can be achieved if you virtualize this application or invest in a storage snapshot engine, for example.

If the business is drawn to the benefits, the effort to change the platform may come much easier.

MITCH: Rick, thanks very much for giving us some of your valuable time!

RICK: Cheers, Mitch, my pleasure.



New process: Even easier to sign Veeam’s Data Processor Addendum

The General Data Protection Regulation (GDPR) took effect on May 25. Veeam is held to GDPR compliance standards like other companies all over the world. Every Veeam ProPartner who has a Deal Registration or submits a purchase order, or who would like to receive marketing leads or marketing development funds for potential customers from Veeam, needs to sign a Data Processor Addendum (DPA) — and that process just got easier! Log in to the ProPartner Portal to review and sign the DPA.

VeeamON Virtual Conference – December 5 – Online

VeeamON Virtual is a unique online conference designed to deliver the latest insights on Intelligent Data Management — in the comfort of your own office. Each year, VeeamON Virtual brings together more than 2,500 industry experts to showcase the latest technology solutions providing the Hyper‑Availability of data. Join us on our virtual journey to explore the challenges of data growth and inevitable data sprawl, and the threat they pose to data Availability and protection.

Veeam Cloud Service Provider Enablement Series – October 11 – Online

New to the VCSP program? Want to become familiar with our updated program and products? This VCSP partner enablement webinar includes educational and informative content that will help you:
  •  Understand the requirements of our VCSP program
  •  Learn about our steps to enablement
  •  See all the sales and marketing resources available to you