NEW Veeam Powered Network v2

NEW Veeam PN (Powered Network) v2 delivers 5x to 20x faster VPN
throughput! This latest version of Veeam PN,
a FREE tool that simplifies the creation and management of site-to-site and
point-to-site VPN networks, improves performance and
security through the adoption of WireGuard. Read this blog
to learn more about the version 2 enhancements, and visit the help center
for technical details.

NEW Veeam Availability Orchestrator v2 is now available!

NEW Veeam® Availability Orchestrator v2, an
industry first, delivers a comprehensive solution that provides DR
orchestration from replicas and backups. DR is now democratized: attainable
and affordable for all organizations, applications and data, not just mission-critical
workloads. Check out this blog to
discover what else is new in version 2, or start building your first DR
plan today with a FREE 30-day download.

Top takeaways from VeeamON 2019

VeeamON 2019, our annual conference, turned Miami Beach
green last month! 2,000 attendees learned how Veeam® is committed
to supplying backup and recovery solutions that consistently deliver Cloud
Data Management™ for users across the globe. Couldn’t make it to the event?
Check out the keynote and breakout sessions, daily recaps,
theCUBE interviews, blog posts
and more.

Veeam Availability Orchestrator v2.0 is now GA


Notes from MWhite  /  Michael White

Hello all, I am very happy to say that what I have been working on for a long time, along with many other people, has now reached General Availability. It is a big release, and we are going to look at the highlights in this article.
Release Notes –
Bits –
It is important to note there is no upgrade path to 2.0. You will need to install it separately and recreate your plans; a KB article will help with that. Most important: remember to uninstall your VAO agents from your VBR servers so that the new agents can be installed. The plan definition reports will provide the info you need to recreate each plan.
We two product managers did not make the no-upgrade decision lightly. It was important, though, as it allows architecture decisions that will be good for customers in the future. One such decision is removing the need for production VAO servers; they are no longer required or useful.
What are some of the new features – the key ones?
Scopes let you keep resources, reporting and people separate and unseen by other users. It is somewhat like multi-tenancy, but aimed more at allowing offices or groups to have separate plans.

Above you can see both a default and HR scope.  Here you can say who belongs to what.

Above you can see the reporting options that can be defined by each scope.

Above are VM groups; you can use the indicated button to assign them to different scopes. The same is true for other resources, such as DataLabs.
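To make the idea concrete, here is a hypothetical Python sketch of scope-based visibility. The names and data model are purely illustrative, not VAO's actual implementation.

```python
# Hypothetical scope model: each scope has its own members and plans,
# and a user only sees plans in the scopes they belong to.
scopes = {
    "Default": {"users": {"alice"}, "plans": {"Exchange-DR"}},
    "HR": {"users": {"bob"}, "plans": {"HR-Payroll-DR"}},
}

def visible_plans(user):
    # Collect plans from every scope the user is a member of.
    return {plan
            for scope in scopes.values() if user in scope["users"]
            for plan in scope["plans"]}
```

With this model, a member of the HR scope never sees (or even knows about) the Default scope's plans, which is the "separate and unseen" behavior described above.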
In v1 you could use replicas to create your plans, but in v2 you can use replicas OR backups to create your plans.  It is almost identical no matter which you choose but there are some small differences.

In our orchestration plans, they will look identical except for what you can see above: Failover for replicas and Restore for backups.

Likewise, when you look at the individual steps, you will notice one that is Restore VM rather than Process replica, as you can see above.
BTW, recoveries using backups can go to the original or new locations. Re-IP is possible as well.
If you design your VBR servers well, the recovery of a backup is surprisingly fast!
VM Console
This feature will allow you, from within VAO, to access a VM’s console. It uses the VMware VMRC HTML5 functionality to do it.
The plan needs to stay running for some time; if it does, you will see an option like the one below.

You highlight a VM, select that button, and you end up in the VMware HTML5 VMRC, where you will need to log in. Very handy for things like checking whether a complex app has started and is usable.
There are a lot of little touches of polish in the UI. Most are subtle, but nice. One that is not so subtle is that we have added RTO and RPO to the plans and reports. This means a readiness check will warn you if the defined RTO / RPO is not achievable.
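Conceptually, that readiness warning is a comparison of targets against measured results. This Python sketch is purely illustrative of the idea, not VAO's actual logic; the field names are assumptions.

```python
def readiness_warnings(plan):
    # Illustrative readiness check: compare defined RTO/RPO targets
    # against the values measured during the last test of the plan.
    warnings = []
    if plan["measured_rto_min"] > plan["target_rto_min"]:
        warnings.append("RTO target not achievable")
    if plan["measured_rpo_min"] > plan["target_rpo_min"]:
        warnings.append("RPO target not achievable")
    return warnings
```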

Another thing is that production VAO servers are no longer required or useful. There are only DR VAO servers now.
A big release, with an interesting range of new features. You can watch my blog using this tag for new articles.



Why we chose WireGuard for Veeam PN v2


Veeam Software Official Blog  /  Anthony Spiteri

The challenge with OpenVPN

In 2017, Veeam PN was released as part of Veeam Recovery to Microsoft Azure. The Veeam PN use case went beyond extending Azure networks for workload recoverability, and it was quickly adopted by IT enthusiasts for remote connectivity to home labs and for connecting remote networks spread across cloud and on-premises platforms.
Our development roadmap for Veeam PN had been locked in; however, we found that customers wanted more. They wanted to use it for data protection, with Veeam Backup & Replication, to move data between sites. When moving backup data, utilizing the underlying physical connections to their maximum is critical. With OpenVPN, our R&D found that it couldn’t scale and perform to expectation no matter what combination of CPU and other resources we threw at it.

Veeam Powered Network v2 featuring WireGuard

We strongly believe that WireGuard is the future of VPNs, with significant advantages over more established protocols like OpenVPN and IPsec. WireGuard is more scalable and has proven to outperform OpenVPN in terms of throughput. This is why we made the tough call to rip out OpenVPN and replace it with WireGuard for site-to-site VPNs. For Veeam PN developers, this meant ripping out and replacing existing source code, and it meant that existing users of Veeam PN would not be able to perform an in-place upgrade.

Beyond our own belief in WireGuard, we also looked at it as the protocol of choice due to its rise in the open-source world as a new standard in VPN technologies. WireGuard offers a higher degree of security through enhanced cryptography that operates more efficiently, leading to increased performance and security. It achieves this by working in the kernel and by using fewer lines of code (4,000 compared to 600,000 in OpenVPN), and it offers greater reliability when connecting hundreds of sites, again thinking about performance and scalability for more specific backup and replication use cases.
Support for WireGuard becoming a de facto standard in VPNs was echoed by Linus Torvalds:

“Can I just once again state my love for [WireGuard] and hope it gets merged soon? Maybe the code isn’t perfect, but I’ve skimmed it, and compared to the horrors that are OpenVPN and IPSec, it’s a work of art.”
Linus Torvalds, on the Linux Kernel Mailing List

Increased security and performance

WireGuard’s security was also a factor in moving on from OpenVPN. Security is always a concern with any VPN, and WireGuard takes a simpler approach by relying on crypto versioning to deal with cryptographic attacks: in a nutshell, it is easier to move through versions of primitives to authenticate than to negotiate cipher type and key lengths between client and server.
Because of this streamlined approach to encryption, in addition to the efficiency of the code, WireGuard can outperform OpenVPN. This means Veeam PN can sustain significantly higher throughput (testing has shown performance increases of 5x to 20x depending on CPU configuration), which opens up the use cases beyond basic remote office or home lab use. Veeam PN can now be considered as a way to connect multiple sites together and sustain hundreds of Mb/s, which is perfect for data protection and disaster recovery scenarios.

Solving the UDP problem, easy configuration and point-to-site connectivity

One of the perceived limitations of WireGuard is that it does all of its work over UDP, which can cause challenges when deploying into locked-down networks that by default trust TCP connections more than UDP. To remove this potential roadblock to adoption, our developers worked out a way to encapsulate (with minimal overhead) the WireGuard UDP traffic over TCP, giving customers a choice depending on their network security setup.
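Veeam has not published the details of its encapsulation, but a common way to carry UDP datagrams over a TCP stream is length-prefix framing, since TCP has no message boundaries of its own. This Python sketch illustrates the general technique only, not Veeam PN's actual wire format.

```python
import struct

def frame(datagram: bytes) -> bytes:
    # Prefix each UDP datagram with a 2-byte big-endian length so the
    # receiver can recover datagram boundaries from the TCP byte stream.
    return struct.pack("!H", len(datagram)) + datagram

def deframe(stream: bytes) -> list:
    # Split a received TCP byte stream back into the original datagrams.
    out, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from("!H", stream, i)
        out.append(stream[i + 2 : i + 2 + n])
        i += 2 + n
    return out
```

The two-byte prefix is the "minimal overhead" idea: each tunneled datagram costs only a couple of extra bytes while letting the tunnel traverse networks that only permit TCP.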
By incorporating WireGuard into an all-in-one appliance (or installable via a simple script on an existing Ubuntu Server), we have made the installation and configuration of complex VPNs simple and reliable. We have kept OpenVPN as the protocol for point-to-site connections for the moment, due to the wider distribution of OpenVPN clients across platforms. Client access to the Veeam PN Hub is done via OpenVPN, with site-to-site access handled by WireGuard.

Other enhancements

At its core, what we wanted to achieve with Veeam PN is simplified complexity, and we wanted to ensure this remained in place regardless of what was doing the heavy lifting underneath. The addition of WireGuard is easily the biggest enhancement since Veeam PN v1; however, there are several other enhancements listed below:

  • DNS forwarding and configuring to resolve FQDNs in connected sites.
  • New deployment process report.
  • Microsoft Azure integration enhancements.
  • Easy manual product deployment.


Once again, the premise of Veeam PN is to offer Veeam customers a free tool that simplifies the traditionally complex process around the configuration, creation and management of site-to-site and point-to-site VPN networks. The addition of WireGuard as the site-to-site VPN platform will allow Veeam PN to go beyond the initial basic use cases and become an option for more business-critical applications due to the enhancements that WireGuard offers. Once again, we chose WireGuard because we believe it is the future of VPN protocols and we are excited to bring it to our customers with Veeam PN v2.
Download it now!

Helpful resources:

The post Why we chose WireGuard for Veeam PN v2 appeared first on Veeam Software Official Blog.



Veeam Availability for Nutanix AHV – Automated Deployment with Terraform


vZilla  /  michaelcade

Based on some work Anthony and I have been doing over the last 12-18 months around Terraform, Chef and the automation of Veeam components, I wanted to extend this further within the Veeam Availability Platform and see what could be done with Veeam Availability for Nutanix AHV.
I have covered lots of deep dives into the capabilities that the product brings, even in its v1 state, but I wanted to highlight the ability to automate the deployment of the Veeam Availability for Nutanix AHV proxy appliance. The idea was also validated when I spoke to a large customer that had invested heavily in Nutanix AHV across 200 of their sites in the US.
As an existing Veeam customer, they were very interested in exploring the new product and how its capabilities matched up with their requirements. But they then had the challenge of deploying the proxy appliance to all 200 clusters they have across the US. There were some angles around PowerShell, and on the configuration side that may be added later to what I have done here. I took the deployment process and created a Terraform script that deploys the image disk and a new virtual machine for the proxy with the correct system requirements.
This was the first challenge that validated what we needed, and why: a better option for automated deployment. The second was that the v1 deployment process is, I think it is fair to say, a little off-putting, so being able to automate it would really help that initial experience.
Here is the Terraform script. If you are not familiar with Terraform, I suggest you look it up; its possibilities really change the way you think about infrastructure and software deployment.

One caveat to consider: as part of this deployment, I placed the image file I downloaded on my web server. This provides a central location, and part of the script deploys the image from this shared published resource. Again, the thinking here was that 200-site customer: providing a central location to pull the image disk from, rather than downloading it 200 times in each location.
Once the script has run you will see the created machine in the Nutanix AHV management console.

There is a requirement for DHCP in the environment so that the appliance gets an IP and you can connect to it.

Now you have your Veeam Availability for Nutanix AHV proxy appliance deployed and you can run through the configuration steps. Next up I will be looking at how we can also automate that configuration step. However for now the configuration steps can be found here.
Thanks to Jon Kohler, the main contributor to the Nutanix Terraform provider, who also has some great demo YouTube videos worth watching. You will see that the baseline for the script I created is based on examples Jon uses in his demonstrations.



Veeam Direct Restore from AWS EC2 Backup & Replication Server and Repository


vZilla  /  michaelcade

A couple of weeks back at Cloud Field Day, one of the questions asked by the delegates was whether the conversion process for Direct Restore to AWS would be faster if we stored the data within AWS as part of the backup process. For example, we could keep our on-premises operational restore window local to the production data, but have a backup copy job sending a retention period to a completely separate Veeam Backup & Replication server and repository located within AWS.

I wanted to check and compare the speed and performance of this, but also, if we did see a dramatic increase in speed, at what cost it would come.
As you can see from the diagram above, we have our production environment on the left, and this is the control layer for the operational backups of our production workloads. For the purposes of the test, we are then going to backup copy those backups to a secondary backup repository on a machine running in AWS. We then have a completely stand-alone system running within AWS that has Veeam Community Edition installed.
This Veeam Backup & Replication server within AWS is a Windows 2016 machine with the following specifications.

Availability Zone = us-east-2
8GB Memory
2 CPUs

On this machine we then installed Veeam Backup & Replication – Community Edition; the Cloud Mobility functionality is completely available within the free Community Edition.

The installation doesn’t take too long.

As you can see in the image below, we have a full backup file; it is located within the repository in AWS. Next up, we need to import it to our AWS Veeam Backup & Replication server.

It’s really simple to import any Veeam backup. First, open the Veeam Backup & Replication console and connect to the Veeam Backup & Replication server; if using the console on the server itself, you can choose “localhost”.

On the top ribbon, select import backup.

Depending on where the backup file is located, you will need to select it from the drop-down. If it is not on an already-managed Veeam server (i.e., separate from the Veeam Backup & Replication server), you will need to add that server.

We can choose between different Veeam file types; I changed the file type to VBK so I could see the full backup file in the directory.

If the backup file also contains guest file system index, then you can check the box to import this also.

Next you will see the progress, this should not take long.

We are now in a position where we can see the contents of the backup, and from here we can begin our tests.

For us to test the direct restore to AWS functionality then we first need to add our AWS Cloud Credentials.

We can choose to add our Amazon AWS Access Key from the selection.

From your Amazon account you will need to provide your Access Key and Secret Key. It is super important not to share these externally; they need to be kept secure.

Notice how I am not going to show you my secret key.

You will then see your newly added cloud credentials.

Ok so at this stage we can right click on the machine we wish to directly restore to AWS.

The approach we take here is a very easy-to-use wizard: we choose our AWS cloud credentials from the drop-down, then choose the AWS Region and the data center location you wish to restore the machine into.

Next up we need to choose the name we wish to use for the restored machine, but also we can add specific AWS tags.

I am going to rename mine to differentiate later on.

We then need to define the AWS Instance type we wish to use, you will need to make sure that the system you are sending there is supported by AWS for the OS.

Next, we choose the Amazon VPC and Security Group we wish to use.

We can choose if we would like to use a Helper Proxy Appliance.

Finally, we are able to give a reason for the restore.

Summary of the process and outcome.

Those are the simple, easy-to-use steps you would take for any direct restore to AWS.
Let’s get back to the test now. We want to know whether having this machine’s backup located within AWS makes the conversion faster than having it on-premises. This was the question asked during Cloud Field Day.
First of all, let’s take a look at the time it takes for the same machine from on-premises. The test is from our Columbus DC, which also happens to be an AWS DC; I will take this into account later on.
This is the screen grab for the process that took place from on-premises to AWS us-east-2. The upload of data looks to have taken the exact same amount of time, so the proof will be in the importing of the VM and whether that is faster.
The on-premises system took 14:29 to import and 15:34 for the whole process to complete.

The connectivity for my “on-premises” location is as follows.

Now we want to run the same test using the imported backup file that we showed previously. As you can see, the combined total took 12:39 and the conversion took 11:33.
The machine we are restoring is 1.1GB. You can see the upload speed was the same regardless of the location of the backup; it’s the conversion that is slightly improved.
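To put the two import times side by side, here is a quick Python check of the arithmetic using the figures measured above:

```python
def mmss_to_seconds(t):
    # Convert a "mm:ss" time string to total seconds.
    m, s = t.split(":")
    return int(m) * 60 + int(s)

onprem_import = mmss_to_seconds("14:29")  # import from the on-premises backup
aws_import = mmss_to_seconds("11:33")     # import from the backup stored in AWS
seconds_saved = onprem_import - aws_import  # just under three minutes
```

So for this 1.1GB workload, keeping the backup in AWS shaved just under three minutes off the conversion, roughly a 1.25x speedup.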

We also ran a test from our lab to us-east-1 (North Virginia), and as you can see, this was different again in terms of the import process.

I then wanted to test outside of our lab and very helpfully Jorge was able to run a test from his home lab, as you can expect his connectivity is not the same as an enterprise lab or data centre.
Jorge took the same machine and imported it to his Veeam Backup & Replication server within his home lab, with a much slower upload speed than we used previously. Here we are importing to London, and the import process shows it to be faster still.

I will continue gathering data points around this, but the one clear finding at the moment is that for a small workload it really doesn’t make much difference to have the backup already stored within AWS. With a larger dataset, the time saved in the import process could be considerable, but you have to weigh the cost of storing those backups in AWS, and whether it is worth it.



VeeamON 2019: Day one recap


Veeam Software Official Blog  /  Rick Vanover

VeeamON Miami is under way! On Monday, we hosted the VeeamON 2019 Welcome Reception, which was a great start to the VeeamON 2019 event, where we are welcoming a full house of customers and partners.
The first full day was also packed with key announcements, new Veeam technologies and an awesome agenda of breakouts for attendees. Here is a rundown of Tuesday’s news:

The general session also featured key perspectives from Veeam co-founder Ratmir Timashev on Veeam’s momentum, customer testimonials and some key focus on Microsoft. Veeam Vice President, Global Business & Corporate Development Carey Stanton welcomed Tad Brockway, corporate vice president for Azure Storage, Media, and Edge platform team at Microsoft.

Image via @anbakal on Twitter

We also had a very special general session focused on technology, both already existing and coming soon. In this session, Veeam Product Strategy and R&D gave a number of key overviews of the new Veeam Availability Orchestrator v2 general availability announcement, Veeam Availability Console and Veeam Backup for Microsoft Office 365. New capabilities were shown for Veeam Availability Suite, as well as new technologies for Microsoft Azure.

Image via @anbakal on Twitter

However, the key part of the event for attendees is the breakouts! This year, technical topics make up 80% of the breakouts delivered by Veeam. Everything from best practices, worst practices, how-to tips and more has been covered. Today had presentations from Platinum Sponsors Cisco, NetApp, Microsoft Azure, ExaGrid and IBM Cloud. Here are two slides from Veeam presentations that I found compelling:

 “From the Architect’s Desk: Sizing of Veeam Backup & Replication, Proxies and Repositories”

This session was presented by Tim Smith, a solutions architect based in the US (Tim also runs the Tim’s Tech Thoughts blog and is on Twitter at: Tsmith_co). Here is one slide where Tim outlines the sizing of the Veeam backup server for 2,000 VMs with eight jobs (just as an example). This is important as sizing goes all the way through the environment: backup server, proxies, repositories, etc.
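Since the slide itself is not reproduced here, the idea can be illustrated with a small parameterized sizing calculator. The per-10-jobs ratios below are placeholder assumptions for illustration only, not Tim's figures or official Veeam guidance; substitute the values from the session or the best-practice documentation.

```python
import math

def backup_server_sizing(concurrent_jobs,
                         cores_per_10_jobs=1,   # assumed ratio, illustrative only
                         ram_gb_per_10_jobs=4): # assumed ratio, illustrative only
    # Scale resources with the number of concurrently running jobs,
    # never dropping below a minimal baseline for the backup server.
    units = math.ceil(concurrent_jobs / 10)
    return {
        "cores": max(2, units * cores_per_10_jobs),
        "ram_gb": max(8, units * ram_gb_per_10_jobs),
    }
```

The point of the session stands either way: sizing must be worked through for every component in the chain (backup server, proxies, repositories), not just one.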

 “Let’s Manage Agents”

This session was presented by Dmitry Popov, senior analyst, product management, in charge of products including Veeam Agent for Microsoft Windows. Here is one slide where Dmitry shows a cool tip: unmanaged agents (agents running without a license in free mode) can be put into a protection group to get centralized management and a schedule:

For attendees of the event, you will be able to access the recording of these and all other sessions. More information will be sent as a follow-up email for the replay information.
Check out this recap video by our senior global technologists Anthony Spiteri and Michael Cade:

We will have more content tomorrow as well! I’ll be posting another blog with a recap from today’s event. For those of you at the event, be sure to use the hashtag #VeeamON to share your experiences!
The post VeeamON 2019: Day one recap appeared first on Veeam Software Official Blog.



Fujitsu ETERNUS storage plug-in now available


Veeam Software Official Blog  /  Rick Vanover

One of the things I love is how fast we can add storage integrations with the Veeam Universal Storage API. This framework was introduced with Veeam Backup & Replication 9.5 Update 3 and has allowed Veeam and our alliance partners to rapidly add new supported storage systems. Fujitsu Storage ETERNUS DX and AF are the newest systems to be supported with a Veeam plug-in.
This plug-in actually exposes many outstanding capabilities, much more than just the Backup from Storage Snapshots that seems to get all of the attention. This particular plug-in offers the following capabilities:

Let’s review these benefits for this storage plug-in.

Veeam Explorer for Storage Snapshots

This is my favorite capability, and it has a powerful set of options in our free Community Edition – and even more in the Enterprise and higher editions. It allows existing storage snapshots to be used for Veeam restores, such as a file-level recovery, a whole-VM restore to a new location, application object exports (Enterprise and higher editions can restore applications and databases to the original location) or even an individual application item. Veeam reads the storage snapshots on the Fujitsu ETERNUS and presents them for launching restores, all seamlessly and without impacting the operation of primary storage resources. The diagram below shows how you can launch the 7 restore scenarios from a snapshot on the Fujitsu ETERNUS array:

Veeam Backup from Storage Snapshots

This capability that comes from the plug-in will really unleash the power of the storage integration when it comes to performing the backup. The backup from storage snapshots engine will allow the data mover (a Veeam VMware Backup Proxy) to do the heavy-lifting of a backup job from the storage snapshot versus from a conventional VMware vSphere snapshot. I like to say that this integration will allow organizations to take backups or replicas at any time of the day. All of the goodness you would expect is maintained, such as VMware Changed Block Tracking (CBT) and application consistency. This is all transparent with this integration. The diagram below shows the time sequence in relation to the I/O associated with the backup job:

On-Demand Sandbox for Storage Snapshots

Veeam has pioneered the “DataLabs” use case, and when using an integrated storage system like Fujitsu ETERNUS, you are truly unlocking endless opportunities. The On-Demand Sandbox for Storage Snapshots will take a storage snapshot and expose the VMs on it to an isolated virtual lab. This can be used for testing changes to critical applications, testing scripts, testing upgrades, verification of a result from an application and more.
One practical way the On-Demand Sandbox for Storage Snapshots can help organizations is by avoiding changes that don’t go as expected, for any of many reasons. Firing up an application in an on-demand sandbox, powered by the Fujitsu ETERNUS DX/AF array, will give you a good sense of the changes. You can answer questions like “How long will it take?” and “What will happen?” and, most importantly, “Will it work?” This can save time by avoiding cancelled change requests due to not knowing exactly what to expect from some changes.

Primary Storage Snapshot Orchestration

The plug-in for Fujitsu ETERNUS DX/AF storage will also allow you to create Veeam jobs that only create snapshots. These jobs will not produce a backup, but they will produce a storage snapshot that can be used with Veeam Explorer for Storage Snapshots.

The Power of the Storage Snapshot! Harness it!

The Fujitsu ETERNUS DX/AF storage systems are the latest arrays integrated with Veeam. You can download the plug-in here.

The post Fujitsu ETERNUS storage plug-in now available appeared first on Veeam Software Official Blog.



What’s new in v3 of Veeam’s Office 365 backup


Veeam Software Official Blog  /  Niels Engelen

It is no secret anymore: you need a backup for Microsoft Office 365! While Microsoft is responsible for the infrastructure and its availability, you are responsible for your data. And to fully protect it, you need a backup. It is each company’s responsibility to be in control of its data and meet compliance and legal requirements. In addition to having an extra copy of your data in case of accidental deletion, here are five more reasons WHY you need a backup.

With that quick overview out of the way, let’s dive straight into the new features.

Increased backup speeds from minutes to seconds

With the release of Veeam Backup for Microsoft Office 365 v2, Veeam added support for protecting SharePoint and OneDrive for Business data. Now with v3, we are improving the backup speed of SharePoint Online and OneDrive for Business incremental backups by integrating with the native Change API for Microsoft Office 365. This speeds up backup times by up to 30x, which is a huge game changer! The feedback we have seen so far is amazing, and we are convinced you will see the difference as well.

Improved security with multi-factor authentication support

Multi-factor authentication is an extra layer of security with multiple verification methods for an Office 365 user account. As multi-factor authentication is the baseline security policy for Azure Active Directory and Office 365, Veeam Backup for Microsoft Office 365 v3 adds support for it. This allows Veeam Backup for Microsoft Office 365 v3 to connect to Office 365 securely by leveraging a custom application in Azure Active Directory, along with an MFA-enabled service account and its app password, to create secure backups.

From a restore point of view, this will also allow you to perform secure restores to Office 365.

Veeam Backup for Microsoft Office 365 v3 will still support basic authentication, however, using multi-factor authentication is advised.

Enhanced visibility

By adding Office 365 data protection reports, Veeam Backup for Microsoft Office 365 will allow you to identify unprotected Office 365 user mailboxes as well as manage license and storage usage. Three reports are available via the GUI (as well as PowerShell and RESTful API).
License Overview report gives insight into your license usage. It shows detailed information on licenses used for each protected user within the organization. As a Service Provider, you will be able to identify the top five tenants by license usage and bring license consumption under control.
Storage Consumption report shows how much storage is consumed by the repositories of the selected organization. It gives insight into the top-consuming repositories and assists you with the daily change rate and growth of your Office 365 backup data per repository.

Mailbox Protection report shows information on all protected and unprotected mailboxes helping you maintain visibility of all your business-critical Office 365 mailboxes. As a Service Provider, you will especially benefit from the flexibility of generating this report either for all tenant organizations in the scope or a selected tenant organization only.

Simplified management for larger environments

Microsoft’s Extensible Storage Engine has a file size limit of 64 TB per database. The workaround for this, for larger environments, was to create multiple repositories. Starting with v3, this limitation and the manual workaround are eliminated! Veeam’s storage repositories are intelligent enough to know when you are about to hit the file size limit, and they automatically scale out the repository, eliminating the issue. The extra databases are easy to identify by their numerical order, should you need them:
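Conceptually, the scale-out decision looks something like this hypothetical Python sketch. It is an illustration of the idea only, not the product's actual code, and the 64 TB constant reflects the ESE limit discussed above.

```python
ESE_LIMIT_TB = 64  # Extensible Storage Engine file size limit

def target_database(db_sizes_tb, incoming_tb):
    # Write to the newest database unless the addition would push it
    # past the file size limit; in that case, start a new numbered
    # database (the "scale out" step).
    if db_sizes_tb and db_sizes_tb[-1] + incoming_tb <= ESE_LIMIT_TB:
        return len(db_sizes_tb) - 1  # index of the current database
    return len(db_sizes_tb)          # index of a new database
```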

Flexible retention options

Before v3, the only available retention policy was based on items age, meaning Veeam Backup for Microsoft Office 365 backed up and stored the Office 365 data (Exchange, OneDrive and SharePoint items) which was created or modified within the defined retention period.
Item-level retention works similarly to a classic document archive:

  • First run: We collect ALL items that are younger (attribute used is the change date) than the chosen retention (importantly, this could mean that not ALL items are taken).
  • Following runs: We collect ALL items that have been created or modified (again, attribute used is the change date) since the previous run.
  • Retention processing: Happens at the chosen time interval and removes all items where the change date became older than the chosen retention.

This retention type is particularly useful when you want to make sure you don’t store content for longer than the required retention time, which can be important for legal reasons.
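The item-level rules above can be sketched as follows. This is illustrative Python, not the product's code; items are represented simply as names mapped to change dates.

```python
from datetime import datetime, timedelta

def items_to_back_up(items, retention_days, last_run, now):
    # First run (last_run is None): take items whose change date falls
    # within the retention window. Following runs: take items changed
    # since the previous run. `items` maps item name -> change date.
    cutoff = last_run if last_run else now - timedelta(days=retention_days)
    return {name for name, changed in items.items() if changed >= cutoff}
```

Note how, on the first run, an item whose change date is already older than the retention window is never collected at all, which is exactly the "not ALL items are taken" caveat above.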
Starting with Veeam Backup for Microsoft Office 365 v3, you can also leverage a “snapshot-based” retention type option. Within the repository settings, v3 offers two options to choose from: Item-level retention (existing retention approach) and Snapshot-based retention (new).
Snapshot-based retention works much like the image-level backups that many Veeam customers are used to:

  • First run: We collect ALL items, regardless of change date. Thus, the first backup is an exact copy (snapshot) of an Exchange mailbox / OneDrive account / SharePoint site as it looks at that point in time.
  • Following runs: We collect ALL items that have been created or modified since the previous run (the attribute used here is the change date), so the backup again represents an exact copy (snapshot) of the mailbox/site/folder as it looks at that point in time.
  • Retention processing: During clean-up, we remove all items belonging to mailbox/site/folder snapshots that are older than the retention period.

Retention is a global setting per repository. Also note that once you set your retention option, you will not be able to change it.
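To make the contrast concrete, here is a minimal sketch (illustrative logic only, not Veeam's code) of the two cleanup rules: item-level retention drops individual items whose change date has aged out, while snapshot-based retention drops whole point-in-time copies older than the retention period:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)
NOW = datetime(2019, 6, 1)

def item_level_cleanup(items):
    # Keep only items whose change date is still within the retention window.
    return [i for i in items if NOW - i["changed"] <= RETENTION]

def snapshot_cleanup(snapshots):
    # Keep only whole snapshots (point-in-time copies) taken within the window;
    # every item belonging to an older snapshot is removed along with it.
    return [s for s in snapshots if NOW - s["taken"] <= RETENTION]

items = [
    {"id": "old-mail", "changed": datetime(2017, 1, 1)},  # aged out: removed
    {"id": "new-mail", "changed": datetime(2019, 5, 1)},  # kept
]
snapshots = [
    {"taken": datetime(2018, 1, 1), "items": ["old-mail", "new-mail"]},  # removed whole
    {"taken": datetime(2019, 5, 1), "items": ["old-mail", "new-mail"]},  # kept whole
]
print([i["id"] for i in item_level_cleanup(items)])  # ['new-mail']
print(len(snapshot_cleanup(snapshots)))              # 1
```

Note how, under snapshot-based retention, even an old item survives as long as the snapshot it belongs to is still within retention, which is exactly why this mode suits those used to image-level backups.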

Other enhancements

As Microsoft has released new major versions of both Exchange and SharePoint, we have added support for Exchange 2019 and SharePoint 2019. We have also made a change to the interface and now support internet proxies. This was already possible in previous versions by editing the XML configuration; starting with Veeam Backup for Microsoft Office 365 v3, it is an option within the GUI. As an extra, you can even configure an internet proxy for each of your Veeam Backup for Microsoft Office 365 remote proxies. All of these new options are also available via PowerShell and the RESTful API for all the automation lovers out there.

On the point of license capabilities, we have added two new options as well:

  • Revoking an unneeded license is now available via PowerShell
  • Service Providers can gather license and repository information per tenant via PowerShell and the RESTful API and create custom reports

To keep the Veeam Backup for Microsoft Office 365 console view clean, Service Providers can now give organizations a custom name.

Based upon feature requests, starting with Veeam Backup for Microsoft Office 365 v3, it is possible to exclude or include specific OneDrive for Business folders per job. This feature is available via PowerShell or RESTful API. Go to the What’s New page for a full list of all the new capabilities in Veeam Backup for Microsoft Office 365.

Time to start testing?

There’s no better time than the present to get hands-on with Office 365 backup. Download Veeam Backup for Microsoft Office 365 v3, or try Community Edition, FREE forever for up to 10 users and 1 TB of SharePoint data.



How to enable MFA for Office 365


Veeam Software Official Blog  /  Polina Vasileva

Starting with the recently released version 3, Veeam Backup for Microsoft Office 365 allows you to retrieve your cloud data in a more secure way by leveraging modern authentication. For backups and restores, you can now use service accounts enabled for multi-factor authentication (MFA). In this article, you will learn how it works and how to set things up quickly.

How does it work?

For modern authentication in Office 365, Veeam Backup for Microsoft Office 365 leverages two different accounts: an Azure Active Directory custom application and a service account enabled for MFA. The application, which you must register in your Azure Active Directory portal in advance, allows Veeam Backup for Microsoft Office 365 to access the Microsoft Graph API and retrieve your Microsoft Office 365 organizations’ data. The service account is used to connect to EWS and PowerShell services.
Correspondingly, when adding an organization to the Veeam Backup for Microsoft Office 365 scope, you will need to provide two sets of credentials: your Azure Active Directory application ID with either an application secret or an application certificate, and your service account name with its app password:
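Behind the scenes, an Azure Active Directory application obtains a Microsoft Graph token via the standard OAuth 2.0 client credentials flow. As a sketch only (the tenant, application ID and secret below are placeholders, and Veeam handles this internally for you), the token request looks roughly like this:

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your own tenant and app registration details.
TENANT = "contoso.onmicrosoft.com"
APP_ID = "00000000-0000-0000-0000-000000000000"
APP_SECRET = "<application-secret>"

# Standard Azure AD v2.0 client credentials token request for Microsoft Graph.
token_url = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": APP_ID,
    "client_secret": APP_SECRET,
    "scope": "https://graph.microsoft.com/.default",
})
# POSTing `body` to `token_url` returns JSON containing an `access_token`,
# which is then presented as a Bearer token on Graph API calls.
print(token_url)
```

This is why the application needs Graph permissions granted with admin consent (covered below), while the MFA-enabled service account with its app password covers the EWS and PowerShell side.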

Can I disable all basic authentication protocols in my Office 365 organization?

While Veeam Backup for Microsoft Office 365 v3 fully supports modern authentication, it has to fill in the existing gaps in Office 365 API support by utilizing a few basic authentication protocols.
First, for Exchange Online PowerShell, the AllowBasicAuthPowershell protocol must be enabled for your Veeam service account in order to get the correct information on licensed users, users’ mailboxes, and so on. Note that it can be applied on a per-user basis: you don’t need to enable it for your entire organization, only for the Veeam accounts, thus minimizing the attack surface of a possible security breach.
Another Exchange Online PowerShell authentication protocol you need to pay attention to is AllowBasicAuthWebServices. You can disable it within your Office 365 organization for all users, as Veeam Backup for Microsoft Office 365 can make do without it. Note, though, that in this case you will need to use an application certificate instead of an application secret when adding your organization to Veeam Backup for Microsoft Office 365.
And last but not least, to be able to protect text, images, files, video, dynamic content and more added to your SharePoint Online modern site pages, Veeam Backup for Microsoft Office 365 requires LegacyAuthProtocolsEnabled to be set to $True. This basic authentication protocol takes effect for your entire SharePoint Online organization, but it is required to work with certain specific services, such as ASMX.

How can I get my application ID, application secret and application certificate?

Application credentials, such as an application ID, application secret and application certificate, become available on the Office 365 Azure Active Directory portal upon registering a new application in the Azure Active Directory.
To register a new application, sign in to the Microsoft 365 Admin Center with your Global Administrator, Application Administrator or Cloud Application Administrator account and go to the Azure Active Directory admin center. Select New application registration under the App registrations section:

Add the app name, select Web app/API application type, add a sign-on URL (this can be any custom URL) and click Create:

Your application ID is now available in the app settings, but there are a few more steps to complete your app configuration. Next, you need to grant your new application the required permissions. Select Settings on the application’s main registration page, go to Required permissions and click Add:

In the Select an API section, select Microsoft Graph:

Then click Select permissions and select Read all groups and Read directory data:

Note that if you want to use an application certificate instead of an application secret, you must additionally select the following APIs and corresponding permissions when registering the new application:

  • Microsoft Exchange Online API access with Use Exchange Web Services with full access to all mailboxes permissions
  • Microsoft SharePoint Online API access with Have full control of all site collections permissions

To complete granting permissions, you need to grant administrator consent. Select your new app from the list in the App registrations (Preview) section, go to API Permissions and click Grant admin consent for <tenant name>. Click Yes to confirm granting permissions:

Now your app is all set and you can generate an application secret and/or application certificate. Both are managed on the same page. Select your app from the list in the App registrations (Preview) section, click Certificates & secrets and select New client secret to create a new application secret or select Upload certificate to add a new application certificate:

For an application secret, you will need to add a secret description and an expiration period. Once it’s created, copy its value, for example, to Notepad, as it won’t be displayed again:

How can I get my app password?

If you already have a user account that is enabled for MFA for Office 365 and granted all the roles and permissions required by Veeam Backup for Microsoft Office 365, you can create a new app password the following way:

  • Sign in to Office 365 with this account and pass the additional security verification. Go to the user’s settings and click Your app settings:
  • You will be redirected to a page where you need to navigate to Security & privacy and select Create and manage app passwords:
  • Create a new app password and copy it, for example, to Notepad. Note that the same app password can be used for multiple apps, or a new unique app password can be created for each app.

What’s next?

Now you have all the credentials to start protecting your Office 365 data. When adding an Office 365 organization to the Veeam Backup for Microsoft Office 365 scope, make sure you select the correct deployment type (which is ‘Microsoft Office 365’) and the correct authentication method (which in our case is Modern authentication). Keep in mind that with v3, you can choose to use the same or different credentials for Exchange Online and SharePoint Online (together with OneDrive for Business). If you want to use separate custom applications for Exchange Online and SharePoint Online, don’t forget to register both in advance in a similar way as described in this article.



How to limit egress costs within AWS and Azure


Veeam Software Official Blog  /  Nicholas Serrecchia

With Update 4’s exciting new cloud features, there are settings within AWS and Azure that you should familiarize yourself with to help reduce egress traffic costs, as well as to help with security.
Right now, let’s talk about the scenarios where:

  • You are backing up Azure/AWS instances with Veeam Backup & Replication and a Veeam Agent, while utilizing Capacity Tier, all inside of AWS/Azure
  • You have a SOBR instance in AWS/Azure and utilize Capacity Tier
  • N2WS Backup & Recovery / Veeam Availability for AWS performs a copy to Amazon S3
  • Veeam is deployed within AWS/Azure and you perform a DR2EC2 without a proxy, or a DR2MA

In AWS, by default, all traffic written into S3 from a resource within a VPC, like an EC2 instance, faces egress costs in all of the scenarios listed above. When we archive data into S3, or perform a disaster recovery to EC2 in which Veeam uploads the virtual disk into S3 so AWS can convert it to Elastic Block Store (EBS) volumes (AWS VM Import), we face an egress charge per GB. There is the option to utilize a NAT gateway/instance, but there is a price associated with that as well.
Thankfully, there is an option you can enable, which is basically the “don’t charge me egress!” button. That feature is called VPC Endpoints in AWS and VNet Service Endpoints in Azure.

Limit AWS egress costs

As stated by AWS:
“A VPC Endpoint enables you to privately connect your VPC to supported AWS services and VPC Endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network”.
Simply enable a VPC Endpoint for the S3 service within that VPC and you will no longer face egress costs when an EC2 instance transfers data into S3. This is because the EC2 instance doesn’t need a public IP, internet gateway or NAT device to send data to S3.

Now that you have enabled the VPC Endpoint, I highly recommend that you create a bucket policy to specify which VPCs or external IP addresses can access the S3 bucket.
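For example, a minimal policy of this kind (the bucket name and endpoint ID below are placeholders) denies all S3 access unless the request arrives through the named VPC endpoint, using the `aws:SourceVpce` condition key. Here is a small sketch that builds such a policy document:

```python
import json

def vpce_only_policy(bucket, vpce_id):
    """Build an S3 bucket policy that denies access from anywhere
    except the given VPC endpoint (aws:SourceVpce condition key)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "Access-via-specific-VPC-endpoint-only",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
        }],
    }

# Placeholder bucket and endpoint ID -- substitute your own.
print(json.dumps(vpce_only_policy("veeam-capacity-tier", "vpce-1a2b3c4d"), indent=2))
```

Be careful with a blanket Deny like this: it also blocks requests from the AWS console and from outside the VPC, so make sure you have a management path before applying it.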

Limit Azure egress costs

Azure handles egress costs from its instances into Blob storage in the same manner AWS does, but the nomenclature is different: Azure uses VNets instead of VPCs, and it too has a feature that can be enabled at the VNet level: VNet Service Endpoints.
As stated by Microsoft Azure:
“Virtual Network (VNet) service endpoints extend your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network.”

With Azure, you can then set up a firewall within the storage account to limit internet access to that resource.
Again, this applies to instances hosted within a VNet or VPC talking to their respective object storage within the same region, not to on-premises systems talking to an S3 bucket or Azure storage account.


The post How to limit egress costs within AWS and Azure appeared first on Veeam Software Official Blog.



Veeam & NetApp – Backup for the NetApp Data Fabric


vZilla  /  michaelcade

ONTAP 9.5 went GA a few weeks back, and I woke up this morning to see the weekly digest from Anton Gostev mentioning support for ONTAP 9.5 in Veeam Backup & Replication 9.5 Update 4a. Note that this does not support synchronous SnapMirror, the new feature released with ONTAP 9.5.
I have been preaching the powers of NetApp & Veeam for a few years now, but if you have not yet seen the power of the integration between the two vendors, I am going to summarise everything in this post here today.


Let’s start with the free stuff you can do with Veeam in a NetApp ONTAP environment. Veeam Community Edition (the free version of Veeam) has the ability to add an ONTAP controller to your Veeam environment, and regardless of whether Veeam created those native ONTAP snapshots or not, Veeam can go inside those snapshots and perform recoveries of any VMware virtual machine within the snapshot. This could be full virtual machine recovery or Instant VM Recovery (see here). It could also be file-level recovery, or, if you want to get down to application items, maybe a mail item or a SharePoint object, then you can do this from the FREE version too!


Natively you can create a NetApp snapshot pretty stress free; however, this is going to be crash consistent, and while some application servers will withstand that crash consistency on recovery, there are some, most likely business-critical, applications that require application consistency. Why not use Veeam to orchestrate application-aware, application-consistent snapshots? This provides a much tighter Recovery Point Objective. It also makes leveraging ONTAP snapshots for these reasons more appealing given that ONTAP 9.4 increased the limit of snapshots per volume from 255 to 1023. And with it being a snapshot, your Recovery Time Objective is going to be seconds!
Veeam can also orchestrate the SnapMirror or SnapVault snapshot data transfer.


Ok, sounds good so far. Now the icing on the cake, or maybe the buttercream in the middle of the cake. Snapshots are great, and I am not getting into the snapshot vs. backup debate; it’s 2019, and if you don’t get it or agree then you probably voted BREXIT and I am fine with that, but this is my time to shine, not yours.
The integration with Veeam and NetApp gives us the ability to leverage those snapshots and gain all the benefits mentioned in the above section but also the ability to get those blocks of data into another media type.


We have you covered for granular recovery, fast RTO, tighter RPO and a copy of your data on a separate media type that could protect you from internal malicious threats or corruption. Now the cherry on top.
We can take those ONTAP Snapshots and those backups and we can spin them up in an isolated environment. Veeam DataLabs gives us the ability to leverage the FlexClone technology on the ONTAP system and automate the creation, provisioning and removal of small isolated sandbox environments. This YouTube video clip of a session I did is a little old now, but the capability is a huge feature within Veeam Backup & Replication.

Oh, I should also mention that we have the same integration with Element Software and NetApp HCI; I wrote about that here.
I think that’s all for now in the world of NetApp & Veeam. If you have any questions, then you can find me on the twitters @MichaelCade1



Tenant Backup to Tape


CloudOasis  /  HalYaman

As a Service Provider, what ways are available to you to protect your tenants’ data? What ways are there to offload aged data to keep storage costs under control? When looking at data archiving, is cloud storage the only way to archive your tenants’ backup data?

I chose this topic because I feel that the Backup to Tape as a Service feature is not getting the attention it deserves.
The Tape as a Service feature was included in Veeam Update 4 to help Service Providers free up space in their storage repositories and to help customers safely archive their backup data for longer retention times.
In the same Update 4, Veeam also released the Archive Tier, which allows Service Providers and customers to tier their backups to the cloud for archiving. But somehow the Archive Tier feature grabbed the market’s attention while Tape as a Service was neglected.
As someone who is still a big fan of tapes, and who believes that tape is not going to disappear in the near future, I think this blog post is a great read for Service Providers wanting to learn more about the Tape as a Service feature included with Veeam Update 4.
So let’s learn how it works.


The tenant Backup to Tape architecture is an add-on to the Veeam Cloud Connect architecture, with the addition of the tape infrastructure. Service Providers who want to offer this feature must add the tape infrastructure to Veeam Cloud Connect. A tape backup job must then be configured to offload the tenant backups from the cloud storage/repository to tape using the Veeam Backup to Tape job.
The diagram below illustrates how the Veeam Tenant Backup to Tape feature integrates with Veeam Cloud Connect.
After the tape infrastructure has been added to Veeam Cloud Connect, the Service Provider can create a GFS (Grandfather-Father-Son backup rotation) backup job to back up the tenant data to tape for both of these reasons:

  • Archival for long retention, and
  • Offline Archive.
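For readers less familiar with the GFS rotation mentioned above, it keeps daily (“son”), weekly (“father”) and monthly (“grandfather”) restore points on separate media. A toy classification follows (the actual rotation is fully configurable in Veeam; the rules below are illustrative only):

```python
from datetime import date

def gfs_tier(d: date) -> str:
    """Toy GFS classification: monthly ('grandfather') on the 1st,
    weekly ('father') on Sundays, daily ('son') otherwise.
    Real GFS schemes are configurable per media pool."""
    if d.day == 1:
        return "grandfather (monthly)"
    if d.weekday() == 6:  # Sunday
        return "father (weekly)"
    return "son (daily)"

print(gfs_tier(date(2019, 6, 1)))   # 1st of the month -> grandfather
print(gfs_tier(date(2019, 6, 9)))   # Sunday -> father
print(gfs_tier(date(2019, 6, 11)))  # weekday -> son
```

The value of the scheme is that long-retention (grandfather) media can be taken offline and vaulted, which is exactly what the offline archive use case above relies on.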

Job Configuration

The configuration starts with creating a GFS media pool. The Service Provider can create a pool per tenant, per retention, or per any other business offering. In the examples we are going to use here, to achieve full data segregation, I will create a separate GFS media pool for each tenant. In the screenshot below, I created a media pool with five tapes to be used to back up Tenant01 data:

The next step is illustrated below, where I created the Tape Backup Job and then selected Tenants as the data source:

Next, I selected the appropriate tenant repository; in the example below, it is Tenant01_Repo:

At the Media Pool menu item, select the specific tenant media pool. In this example, it is Tenant01 (HP MSL G3 Series 9.50).
The last step is to schedule the job to run the backup:

Recover Tenant Data from Tape

It is important to mention that Veeam Tenant Backup to Tape as a Service is managed and operated solely by the Service Provider to help protect and archive the tenant data on behalf of the tenant. This means the tenant will not be aware of the job or the restore process. To benefit from this service, the tenant and the Service Provider must agree on the policy to be implemented.
In the case of data loss, the Tenant can ask the service provider to restore the data using one of the following options:

  • Restore to the original location,
  • Restore to a new location, or
  • Export backup files to disk.

The screenshot below illustrates the options:

With the options shown above, Service Providers can restore the tenant data to the original location, to a different location, or export the data to removable storage. Original location restores the data to the same location it was backed up from, i.e. the tenant cloud repository. But sometimes the tenant wants to compare the data on tape with the data in the cloud repository; this is where the restore to a new location option becomes very useful. Service Providers can create a temporary tenant and give the tenant access to check the data before committing the restore to the original location or sending the data on removable storage. The screenshot below illustrates the Service Provider restoring the tenant data to a new location using the temporary tenant name Tenant01_Repo.
After the restoration is complete, the tenant can connect to the Service Provider using the temporary tenant to access the restored data:


As I mentioned at the start, I’m a big fan of tape backups for several reasons, including secure recovery in the event of a ransomware attack, low cost, and more. I am also aware of the limitations and challenges that can come with tape backup; maybe that can be the topic of another blog post. The Backup to Tape feature described here is a great feature, yet several Service Providers I spoke to had somehow missed it. They were very happy to finally find a cheap and effective way to use their tape infrastructure; some of them have started using this feature as ransomware protection.
I hope this blog post provides a clear understanding of how you as a Service Provider can benefit from this Tenant Tape Backup feature and maximize your infrastructure return on investment.
The post Tenant Backup to Tape appeared first on CloudOasis.



Disaster recovery plan documentation with Veeam Availability Orchestrator


Veeam Software Official Blog  /  Sam Nicholls

Without a doubt, the automated reporting engine in Veeam Availability Orchestrator and the disaster recovery plan documentation it produces are among its most powerful capabilities. We’ve had a lot of overwhelmingly positive feedback from customers that benefit from them, and I feel that sharing some more insight into what these documents are capable of will help you understand how you can benefit from them, too.
Imagine coming in to work on a Monday morning to an email containing an attachment that tells you that your entire disaster recovery plan was tested over the weekend without you so much as lifting a finger. Not only does that attachment confirm that your disaster recovery plan has been tested, but it tells you what was tested, how it was tested, how long the test took, and what the outcome of the test was. If it was a success, great! You’re ready to take on disaster if it decides to strike today. If it failed, you’ll know what failed, why it failed, and where to start fixing things. The document that details this for you is what we call a “test execution report,” but that is just one of four fully-automated documentation types that Veeam Availability Orchestrator can put in your possession.

Definition report

As soon as your first failover plan is created within Veeam Availability Orchestrator, you’ll be able to produce the plan definition report. This report provides an in-depth view into your entire disaster recovery plan’s configuration, as well as its components. This includes the groups of VMs included in that plan, the steps that will be taken to recover those VMs, and the applications they support in the event of a disaster, as well as any other necessary parameters. This information makes this report great for auditors and management, and can be used to obtain sign-off from application owners who need to verify the plan’s configuration.

Readiness check report

Veeam Availability Orchestrator contains many testing options, one of which we call a readiness check: a great sanity check that is so lightweight it can be performed at any time. This test completes incredibly quickly and has zero impact on your environment’s performance, either in production or at the disaster recovery site. The resulting report documents the outcome of each of the test’s steps, including whether the replica VMs are detected and prepared for failover, whether the desired RPO is currently met, whether the VMware vCenter server and Veeam Backup & Replication server are online and available, whether the required credentials have been provided, and whether the required failover plan steps and parameters have been configured.

Test execution report

Test execution reports are generated upon the completion of a test of the disaster recovery plan, powered by enhanced Veeam DataLabs that have been orchestrated and automated by Veeam Availability Orchestrator. This testing runs through every step identified in the plan as if it were a real-world scenario and documents in detail everything you could possibly want to know. This makes it ideal for evaluating the disaster recovery plan, proactively troubleshooting errors, and identifying areas that can be improved upon.

Execution report

This report is exactly the same as the test execution report but is only produced after the execution of a real-world failover.
Now that we understand the different types of reports and documentation available in Veeam Availability Orchestrator, I wanted to highlight some of the key features for you that will make them such an invaluable tool for your disaster recovery strategy.


All four reports are automatically created, updated and published based on your preferences and needs. They can be scheduled to run at any frequency you see fit (daily, weekly, monthly, etc.), but are also available on demand with a single click. This means that if management or an auditor ever wants the latest version, you can hand them real-time, up-to-date documentation without laborious, time-consuming and error-prone manual edits. You can even automate this step by subscribing specific stakeholders or mailboxes to the reports relevant to them.


All four reports available with Veeam Availability Orchestrator ship in a default template format. This template may be used as-is; however, it is recommended to clone it (as the default template is not editable) and customize it to your organization’s specific needs. Customization is key, as no two organizations are alike, and neither are their disaster recovery plans. You can include anything you like in your documentation: logos, application owners, disaster recovery stakeholders and their contact information, even all the 24-hour food delivery services in the area for when things go wrong and the team needs to get through the night. You name it, you can customize and include it.

Built-in change tracking

One of the most difficult things to stay on top of in disaster recovery planning is how quickly and dramatically environments can change. In fact, uncaptured changes are one of the most common causes of disaster recovery failure. Plan definition reports conveniently contain a section titled “plan change log” that details any edits to the plan’s configuration, whether automated or manual. This lets you track who changed plan settings, when they were changed, and what was changed, so that you can understand whether a change was made correctly or in error, and account for it before a disaster happens.

Proactive error detection

The actionable information available in both readiness check and test execution reports enables you to eradicate risks to your disaster recovery plan’s viability and reliability. By knowing what will and will not work ahead of time (e.g. a recovery that takes too long, or a VM replica that has not been powered down post-test), you are able to identify and proactively remediate any plan errors before disaster strikes. This in turn delivers confidence to you and your organization that you will be successful in a real-world event. Luckily, in the screenshot below, everything succeeded in my test.

Assuring compliance

Understanding the compliance requirements laid out by your organization or an external regulatory body is one thing. Proving that those requirements have been met, today and in the past, when undergoing a disaster recovery audit is another, and failure to do so can have costly repercussions. Veeam Availability Orchestrator’s reports enable you to prove that your plan can meet measures like maximum acceptable outage (MAO) or recovery point objectives (RPO), whether they’re defined by regulations like SOX or HIPAA, bodies like the SEC, or an internal SLA.
If you’d like to learn more about how Veeam Availability Orchestrator can help you meet your disaster recovery documentation needs and more, schedule a demo with your Veeam representative, or download the 30-day FREE trial today. It contains everything you need to get started, even if you’re not currently a Veeam Backup & Replication user.



NEW Veeam Backup for Microsoft Office 365 v3

Veeam Backup for Microsoft Office 365 is Veeam’s fastest-growing product of all time, and with v3, it’s easier than ever for your customers to efficiently back up and reliably restore their Office 365 Exchange, SharePoint and OneDrive data. Watch this short and educational video on the ProPartner portal to learn more about the new features and functionality in v3 and how you can earn more with Veeam Backup for Microsoft Office 365.

Helping customers migrate from vSphere 5.5

Helping customers migrate from vSphere 5.5
What happens when customers are still running vSphere 5.5 in
production? Matt Lloyd and Shawn Lieu sat down with VMware solution architect
Chris Morrow to discuss upgrading to vSphere 6.5 or 6.7, management changes
in the newer releases, leveraging Veeam DataLabs™ for validation and more.

Enhanced agent monitoring and reporting in Veeam ONE

Enhanced agent monitoring and reporting in Veeam ONE

Veeam Software Official Blog  /  Kirsten Stoner

Veeam ONE has many capabilities that can benefit your business, one of which is the ability to run reports that compile information about your entire IT environment. Veeam ONE Reporter is a tool you can use to document and report on your environment to support analysis, decision making, optimization and resource utilization. For those who have used Veeam ONE in the past, it’s the go-to tool for reporting on your Veeam Backup & Replication infrastructure, VMware vSphere and Microsoft Hyper-V environments. But one thing was missing from Veeam ONE: depth of visibility into the Veeam Agents for Microsoft Windows and Linux.
With the latest update, Veeam ONE Reporter gains three new reports that assess your Veeam Agent backup jobs. In addition to these predefined reports, the update brings information about computers to the “Backup Infrastructure Custom Data” report. It also provides agent monitoring by letting you categorize any machine protected by a Veeam Agent within Veeam ONE Business View, allowing you to monitor activity from a business perspective. Ultimately, this update equips Veeam ONE to provide substantial reporting and monitoring for any physical machines protected by a Veeam Agent.

Which reports are new?

Veeam ONE Update 4 adds more predefined reports to analyze Veeam Agent backup job activity. These include Computers with no Archive Copy, Computer Backup Status, and Agent Backup Job and Policy History.

  • The “Computers with no Archive Copy” report highlights all the computers that do not have an archive copy.
  • The “Computer Backup Status” report provides the daily backup status information for all protected agents.
  • The “Agent Backup Job and Policy History” report provides historical information for all Veeam Agent policies and jobs.

All these reports provide visibility that helps you evaluate your data-protection strategy for any workload running a Veeam agent in your environment. In this post, I want to highlight the “Agent Backup Job and Policy History” report, as well as discuss the addition to the “Backup Infrastructure Custom Data” report, because both provide a great amount of information on agent backup jobs.

Figure 1: Veeam Backup Agent reports

Agent Backup Job and Policy History report

For Veeam Agents, this report provides great information on the status of the Backup Job, along with data on the job run itself. To access the report:

  1. Open Veeam ONE Reporter, switch to the Workspace view and find the Veeam Backup Agent Reports folder. This is where you will see all the pre-built reports available for the Veeam backup agents.
  2. Select the report, choose the scope and select the time interval you want the report to cover. (You can make the report more specific by selecting an individual backup server or several jobs/policies to report on, or by drilling down to an exact agent backup job.)
  3. Once you have made your selections, click Preview Report and the report will be created.

Figure 2: Report Scope options

The report contains historical information for your Veeam Agent Backup Jobs. How you defined the report in the first step will determine how specific or general the data will be. Either way, the report provides great depth of data.

Figure 3: Agent Backup Job and Policy History Report

The first page shows an overview of the agent backup jobs and policies, the dates the jobs ran, and whether each run falls into the Success, Failed or Warning category. On the second page of the report, you can see a few more details, such as the total backup size and the number of restore points created.

Figure 4: Backup Job and Policy History details

If you select a specific date when the job ran, you get even deeper analysis of that specific run of the backup job.

Figure 5: Detailed Description of Agent Backup Job results

This tells you when the job started, how long it took and the backup size. You can even tell whether the run was full or incremental. The report provides visibility into your IT environment by gathering real-time data on your protected agent backups.

Backup infrastructure custom data report

This report allows you to combine data-protection elements that are not covered together in the predefined reports included in Veeam ONE. The report can define and display data points about Veeam Backup & Replication objects, including backup servers, backup jobs, agent jobs and VMs. This is useful because it lets you build a report that displays the aspects of the backup infrastructure of your choosing for easy analysis and visibility. Running this report works much as described earlier in this post: locate the custom report pack in the Workspace view, then choose the objects you want to show and the aspects of the backup infrastructure you want the report to analyze.

Figure 6: Custom reporting

When creating the report, you choose which aspects of the backup infrastructure you want shown. In addition, you can apply custom filters to the selected objects so the report displays only the data you want. Here is an example of a report that was run using the custom report pack.

Figure 7: Backup Infrastructure Custom Data Report

The ability to create custom reports allows you to define your own configuration parameters, performance metrics and filters when utmost flexibility is required. How cool is that?

Agent monitoring in Business View

To assist with monitoring Veeam Agent backup activity, you can use Veeam ONE Business View, which has added the ability to categorize agents in business terms. If your backup server(s) are connected to Veeam ONE, you can start categorizing any machine that is protected by the Veeam Agent.

Figure 8: Agent Monitoring in Business View

Veeam ONE Business View allows you to group any computers running the Veeam Backup Agent managed by your backup server. This gives you another layer of monitoring for the Veeam Agents that was not available in previous versions of Veeam ONE.

Bringing visibility to the Veeam Agents

Veeam ONE gives you the tools needed to accurately monitor and report on your entire IT environment. Actively monitoring and reporting on your IT environment allows you to be proactive when addressing issues, helps you plan for future business IT operation needs, and builds understanding of how your data center works. By adding Veeam Agent reporting for your physical environment, you can gather data points on the Veeam Agent backup jobs and document the results. The latest update brings many enhancements and much new functionality to Veeam ONE, making it well worth deploying in your data center today.
The post Enhanced agent monitoring and reporting in Veeam ONE appeared first on Veeam Software Official Blog.



Application-level monitoring for your workloads

Application-level monitoring for your workloads

Veeam Software Official Blog  /  Rick Vanover

If you haven’t noticed, Veeam ONE has really taken on an incredible amount of capabilities with the 9.5 Update 4 release.
One capability that can be a difference-maker is application-level monitoring. This is a big deal for keeping applications available and is part of a bigger Availability story. Putting this together with incredible backup capabilities from Veeam Backup & Replication, application-level monitoring can extend your Availability to the applications on the workloads where you need the most Availability. What’s more, you can combine this with actions in Veeam ONE Monitor to put in the handling you want when applications don’t behave as expected.
Let’s take a look at application-level monitoring in Veeam ONE. This capability is inside of Veeam ONE Monitor, which is my personal favorite “part” of Veeam ONE. I’ve always said with Veeam ONE, “I guarantee that Veeam ONE will tell you something about your environment that you didn’t know, but need to fix.” And with application-level monitoring, the story is stronger than ever. Let’s start with both the processes and services inside of a running virtual machine in Veeam ONE Monitor:

I’ve selected the SQL Server service which, for any system running it, is likely important. Veeam ONE Monitor offers a number of handling options for this service. The first are simple start, stop and restart options that can be passed to the service control manager. But we can also set up alarms based on the service:

The alarm capability for monitored services allows very explicit handling, and you can make it match the SLA or expectations of your stakeholders. Take how this alarm is configured: if the service is not running for 5 minutes, the alarm is triggered as an error. I’ll get to what happens next in a moment, but this 5-minute window (which is configurable) can be set to a reasonable amount of time for the service to go through most routine maintenance. If the downtime exceeds 5 minutes, something may not be operating as expected, and chances are the service should be restarted. This is especially true if you have a fiddlesome application that constantly, or even occasionally, requires manual intervention. This 5-minute threshold may even be quick enough to avoid being paged in the middle of the night! The alarm rules are shown below:
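As a side note, the logic of such a time-windowed alarm is easy to picture in code. The sketch below is purely illustrative (the function and severity names are hypothetical, not a Veeam ONE API); it only mirrors the 5-minute window from the example above:

```python
from datetime import datetime, timedelta

ERROR_WINDOW = timedelta(minutes=5)  # the configurable window from the alarm above

def alarm_state(stopped_since, now):
    """Classify a monitored service.

    stopped_since is None while the service is running; otherwise it is
    the moment the service was last seen entering a stopped state."""
    if stopped_since is None:
        return "ok"                    # service is running
    if now - stopped_since >= ERROR_WINDOW:
        return "error"                 # downtime exceeded the window
    return "pending"                   # still inside the grace period
```

The point of the window is visible here: a routine restart that finishes inside the 5 minutes never raises the error; only sustained downtime does.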

The alarm by itself is good, but we need more sometimes. That’s where a different Veeam ONE capability can help out with remediation actions. I frequently equate, and it’s natural to do so, the remediation actions with the base capability. So, the base capability is the application-level monitoring, but the means to the end of how to fully leverage this capability comes from the remediation actions.
With the remediation actions, the proper handling can be applied for this application. In the screenshot below, I’ve put in a specific PowerShell script that can be run automatically when the alarm is triggered. Let your ideas go crazy here: it can be as simple as restarting the service, but you may also want to notify application owners that the application was remediated if they are not using Veeam ONE. This alone may be the motivation needed to set up read-only access for the application team to their applications. The configuration to run the script to automatically resolve that alarm is shown below:

Here is another piece of intelligence regarding services: application-level monitoring in Veeam ONE also lets you set an alarm based on the number of services changing. For example, if one or more services are added, an alarm is triggered. This can be an indicator of an unauthorized software install or possibly a ransomware service.
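Conceptually, that detection amounts to comparing two snapshots of the installed service list. A minimal sketch of the idea, with hypothetical function names (this is not how Veeam ONE is implemented, just the set logic involved):

```python
def service_changes(baseline, current):
    """Return (added, removed) service names between two snapshots."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    return added, removed

def services_changed(baseline, current):
    # Any non-empty difference is what would trigger the alarm.
    added, removed = service_changes(baseline, current)
    return bool(added or removed)
```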
Don’t let your creativity stop at service state; that’s one example, but application-level monitoring can be used for many other use cases. Processes, for example, can have alarms built on many criteria (including resource utilization) as shown below:

If we look closer at the process CPU, we can see that alarms can be triggered when a process’s CPU usage (as well as other metrics) goes beyond specified thresholds. As in the previous example, we can also add handling with remediation actions to sort out the situation based on pre-defined conditions. These warning and error thresholds are shown below:
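The warning/error pair behaves like a simple two-level classifier. A sketch with made-up threshold values (in Veeam ONE you configure your own):

```python
def classify_cpu(usage_pct, warning=70.0, error=90.0):
    """Map a process CPU reading (percent) to an alarm level.

    The 70/90 defaults are illustrative only; checking the error
    threshold first makes the two levels mutually exclusive."""
    if usage_pct >= error:
        return "error"
    if usage_pct >= warning:
        return "warning"
    return "ok"
```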

As you can see, application-level monitoring used in conjunction with other new Veeam ONE capabilities can really set the bar high for MORE Availability. The backup, the application and more can be looked after with the exact amount of care you want to provide. Have you seen this new capability in Veeam ONE? If you haven’t, check it out!
You can find more information on Veeam Availability Suite 9.5 Update 4 here.

More on new Veeam ONE capabilities:

The post Application-level monitoring for your workloads appeared first on Veeam Software Official Blog.



Backup Azure SQL Databases

Backup Azure SQL Databases

CloudOasis  /  HalYaman

You have a good reason to use Microsoft Azure SQL Database; but you are wondering how you can back up the database locally. Can you include Azure database protection in your company’s backup strategy? What does it take to back up the Azure databases?

In this blog post, I am going to share with you a solution I used for one of our Azure database customers who wanted to back up their Azure SQL Database locally. The solution I came up with consists of the following:

  • Azure Databases – SQL Database
  • VM/Physical Server with local SQL server installed
  • An Empty SQL Database
  • Configure Azure: Sync to other Databases
  • Veeam Agent & Veeam Backup & Replication (depends on the deployment)

The following diagram illustrates the solution I am describing in this blog post:

Solution Overview

There are several ways to back up Azure SQL databases. One way is to use the Veeam Agent, in the event you choose to deploy your SQL Server on a physical server or on a virtual machine inside the Azure cloud. The other way is to deploy the Veeam backup solution on premises, on a VM inside your hypervisor infrastructure; in that case, you use Veeam Backup & Replication. You can also use a combination of both.
After the SQL Server is deployed, the solution requires the creation of an empty SQL database, and then the synchronisation between the two databases must be configured. No worries, I will take you through the steps.

Preparing your Azure SQL Databases for Sync

In this first step, I will discuss preparing the existing Azure SQL database to be synchronised. Follow these steps:

1. Create a Sync Group

From Azure Portal, select SQL Databases. Click on the Database you are going to work with. In the following example, I created a temporary SQL Database and then filled it with temporary data for testing.
After you have moved to the database properties screen, select the Sync to other databases option:
Then select New Sync Group. Complete the following entries:
a. Fill in the name of the group. In this example, it is GlobalSync.
b. Select the database you want to sync. We have used AzureSqlDb on the server depsqlsrv.
c. Turn Automatic Sync on or off as desired. It is off in this example.
d. Select the conflict resolution option. Here, the Hub wins.
In the next step, you are going to configure the On-Premises Database Sync members. That is, the Azure Database and the Sync Agent, as shown below.
Note: You must copy the key for later use.
Next, download the Client Sync Agent from the provided link and install it on the SQL Server. Press OK. The next step is to Select the On-Premises Database.

Prepare the SQL Server

After completing the steps above, log in to your SQL Server and create an empty SQL database.
Next, install the Client Sync Agent and run it once the installation completes. When the Client Sync Agent is running, press Submit Agent Key, then provide the key you copied in the previous step along with your Azure SQL Database username and password:
After pressing the OK button, press Register, then provide your local SQL server name and the name of the temporary database you just created.
The steps above should run without connection issues. If you encounter problems, go back over the steps and settings and correct any errors.
Your SQL Server preparation is now complete, and the server is waiting for the synchronised data to be received. Before you get excited, we must switch back to the Azure Portal to continue with the final steps before we test the solution.

Back To Azure Portal

Continuing from the previous steps with the selected database configuration: the Client Sync Agent has communicated with the Azure Portal and registered the local SQL Server database, so we can now complete the configuration below.

Press OK three times to return to the last step on the Sync Group configuration process. Here we must select the Hub Database tables to sync with the local database.

Press Save to finish.

Testing the Solution

To test our solution, let’s first browse to the SQL server and check to see if we can find any tables inside the database we just created.
We should not find any at this stage:

Now let’s initiate a manual sync. Remember, we configured the solution to run manually. From the Sync Group we just created, press the Sync button at the top of the GlobalSync screen to sync the database with the local SQL Server. See below.

The Sync should complete successfully. If it doesn’t, check your settings and try again. Now it is time to check the local SQL. See the screenshot below.


With the procedure we have just demonstrated, I was able to sync the Azure SQL Database to a local SQL database, which I back up frequently using Veeam Backup & Replication. This way, I achieved my customer’s objective of saving the Azure SQL Database locally. If necessary, I can use Veeam Explorer for Microsoft SQL Server to recover the database and tables to the local server, and from there sync it back to the Azure SQL Database.
This time, I demonstrated a manual sync process, but you can automate the sync to run at intervals of seconds, minutes or hours. I hope you found this blog post useful.

The post Backup Azure SQL Databases appeared first on CloudOasis.



Backup infrastructure at your fingertips with Heatmaps

Backup infrastructure at your fingertips with Heatmaps

Veeam Software Official Blog  /  Rick Vanover

One of the best things an organization can do is have a well-performing backup infrastructure. This is usually done by fine-tuning backup proxies, sizing repositories, having specific conversations with business stakeholders about backup windows and more. Getting that set up and running is a great milestone, but there is a problem. Things change. Workloads grow, new workloads are introduced, storage consumption increases and more challenges come into the mix every day.
Veeam Availability Suite 9.5 Update 4 introduced a new capability that can help organizations adjust to the changes:


Heatmaps are part of Veeam ONE Reporter and do an outstanding job of giving an at-a-glance view of the backup infrastructure that can help you quickly view if the environment is performing both as expected AND as you designed it. Let’s dig into the new heatmaps.

The heatmaps are available in Veeam ONE Reporter in the web user interface and are very easy to get started with. To show the heatmaps, I’m going to use two different environments: one that I’ve intentionally set up to perform in a non-optimized fashion, and one that is in good shape and balanced, so the visual element of the heatmap is easy to see.
Let’s first look at the heatmap of the environment that is well balanced:

Here you can see a number of things: the repositories are getting a bit low on free space, including one that is rather small, while the proxies carry a nice green color scheme and do not show too much variation in their work during the backup windows. Conversely, if a backup proxy is dark green, that indicates it is not in use, which is not a good thing.
We can click on the backup proxies to get a much more detailed view. You can see that this proxy has a small amount of work during the backup window, which in this environment falls in the mid-day timeframe, and it carries a 50% busy load:

When we look at the environment that is not so balanced, the proxies tell a different story:

First of all, you can see that there are three proxies, but one of them seems to be doing much more work than the rest, judging by the color changes. This clearly tells me the proxies are not balanced: the selected proxy is doing a lot more work than the others during the overnight backup window, which stretches out the backup window.

One of the coolest parts of the heatmap capability is that we can drill into a timeframe in the grid (the timeline can have a set observation window) to see which backup jobs are causing the proxies to be so busy during that time, shown below:

In the details of the proxy usage, you can see the specific jobs that are consuming the CPU cycles.

How can this help me tune my environment?

This is very useful, as it may indicate a number of things, such as backup jobs configured not to use the correct proxies, or proxies lacking the connectivity they need to perform the correct type of backup. An example would be one or more proxies configured for Hot-Add mode only while being physical machines, which makes that mode impossible. Such a proxy would never be selected for a job, and the remaining proxies would take over the backup work. The backup jobs would still complete successfully, but this situation would extend the backup window, and it is all visible in the heatmap. How cool is that?
Beyond proxy usage, repositories are also very well reported with the heatmaps. This includes the Scale-Out Backup Repositories as well. This will allow you to view the underlying storage free space. The following animation will show this in action:

Show me the heatmaps!

As you can see, the heatmaps add an incredible visibility element to your backup infrastructure. You can see how it is performing, including if things are successful yet not as expected. You can find more information on Veeam Availability Suite 9.5 Update 4 here.

Helpful resources:

The post Backup infrastructure at your fingertips with Heatmaps appeared first on Veeam Software Official Blog.



New era of Intelligent Diagnostics

New era of Intelligent Diagnostics

Veeam Software Official Blog  /  Melissa Palmer

Veeam Availability Suite 9.5 U4 is full of fantastic new features to enable Veeam customers to intelligently manage their data like never before. A big component of Veeam Availability Suite 9.5 U4 is of course, Veeam ONE.
Veeam ONE is all about providing unprecedented visibility into customers’ IT environments, and this update is full of fantastic enhancements, so stay tuned for more blogs about new features like Veeam Agent monitoring and reporting enhancements, Heatmaps and more!
One of the most interesting new features of Veeam ONE 9.5 U4 is Veeam Intelligent Diagnostics. Think of Veeam Intelligent Diagnostics as the entity watching over your Veeam Backup & Replication environment so you don’t have to. Wouldn’t it be great for potential issues to fix themselves before they cause problems? Veeam Intelligent Diagnostics enables just that.

How it works

Veeam Intelligent Diagnostics works by comparing the logs from your Veeam Backup & Replication environment to a known list of issue signatures. These signatures are downloaded from Veeam directly, so think of it as a reverse call home. All log parsing and analysis is done by Veeam ONE with the help of a Veeam Intelligent Diagnostics agent which is installed on every Veeam Backup & Replication server you want to keep an eye on.
These signatures can easily be updated by Veeam Support as needed. This allows Veeam Support to be proactive if they are noticing a number of cases with the same characteristics or cause, and fix issues before customers even encounter them. Think of things like common misconfigurations that Veeam customers spend time troubleshooting. These items can easily be avoided by Veeam customers leveraging Veeam Intelligent Diagnostics.

When an issue is detected, Veeam Intelligent Diagnostics can fix things in one of two ways, automatically or semi-automatically. The semi-automatic method requires manual approval to remediate the issue. Veeam calls these fixes Remediation Actions. In either case, Veeam ONE alarms will be triggered when an issue is detected. Veeam Intelligent Diagnostics will also include handy Veeam Knowledge Base articles in the alarms to allow customers to understand the issues they are avoiding.

Management made easier

One of the things that has always attracted me to Veeam products is how easy they are to use and get started with. Veeam Intelligent Diagnostics takes things a step further by eliminating potential issues before they even cause problems, making Veeam Backup & Replication easier to use and manage. Reducing operational complexity is a win for any organization, and Veeam Intelligent Diagnostics helps do this.
The Veeam Availability Suite, which is made up of Veeam Backup & Replication and Veeam ONE, is available for a completely free 30-day trial. Be sure to try it out to take advantage of Veeam Intelligent Diagnostics and the rest of the powerful features released in Veeam Availability Suite 9.5 U4.
The post New era of Intelligent Diagnostics appeared first on Veeam Software Official Blog.



Backup your Azure Storage Accounts

Backup your Azure Storage Accounts

CloudOasis  /  HalYaman

How do you back up your Microsoft Azure storage accounts? Is there a way to export the unstructured data from your blob storage accounts and back it up to a different location?

Last week I had a discussion with a customer who wanted to back up his Microsoft Azure storage accounts (blobs, files, tables and more) and asked whether the Veeam backup agent could somehow help achieve this.
Straight out of the box, it is challenging to back up the data resident in Azure storage accounts; but with a few simple steps, I helped the customer back up the Microsoft Azure unstructured blob files to a Veeam repository using the Veeam backup agent for Windows.
In this blog post, I will take you through the steps I took to help the customer back up their Microsoft Azure storage accounts. The solution uses Microsoft AzCopy to download and sync the required blob container files to a local drive on the VM where the Veeam backup agent is installed, and the agent is configured to back up the virtual machine or the targeted folder.

Solution Pre-requisites

As I mentioned before, the steps to achieve the requirements are straightforward and require the following items to be prepared and configured:

  • Veeam Backup Server.
  • Veeam Backup Agent installed on a Microsoft Azure VM.
  • Disk space attached to the VM to temporarily store the blob files.
  • Microsoft AzCopy tool.

AzCopy Tool

To copy data from a Microsoft Azure storage account, you must download and install the Microsoft AzCopy tool. You can download it from this link. Install it on the VM where the Veeam backup agent is installed.
After it has downloaded, run the setup, and then launch AzCopy from the Start menu:

By default, the program files are stored in:
C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\
After the process has started, you must configure access to your Microsoft Azure subscription using this command:

AzCopy login

Follow the instructions in the output, where you browse to
Enter the generated access code to authenticate. In this example, it is HP5DFJBZ9; you will get a different code.

With those steps complete, you are finished with the Microsoft AzCopy installation and configuration.


To begin copying the blob data to the local drive, we must perform the following steps:

  • Create a directory on the local drive where you will store the downloaded blob files.
  • Obtain your Storage Account key.
    • Note: In this screenshot (zcopytesting – Access keys), you are provided with two keys so that you can maintain connections with one key while the other is regenerated. You must also update any other Microsoft Azure resources and applications that access this storage account to use the new keys. Your disk access is not interrupted.
  • Obtain the Blob Container URL:

The next step is to create a .cmd script to apply all those steps above. The script looks like this:

“C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy” /Source:<Blob Container URL> /Dest:<Local Folder> /SourceKey:<Access Key> /S /Y /XO

This .cmd file can be scheduled to run before your Veeam Agent backup job, or via the Windows Task Scheduler.
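If you prefer to drive the same command from a script, the call can be assembled programmatically. This is only a sketch using the same AzCopy v8 flags as the .cmd above; the paths, URLs and function names are placeholders:

```python
import subprocess

# Same install path as used in the .cmd script above.
AZCOPY = r"C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy"

def build_azcopy_cmd(source_url, dest_dir, access_key):
    """Assemble the AzCopy v8 command line from the .cmd script.

    /S  recurses into subfolders of the container,
    /Y  suppresses confirmation prompts,
    /XO copies only files newer than the local copy (incremental)."""
    return [AZCOPY,
            f"/Source:{source_url}",
            f"/Dest:{dest_dir}",
            f"/SourceKey:{access_key}",
            "/S", "/Y", "/XO"]

def run_sync(source_url, dest_dir, access_key):
    # Run this before the Veeam Agent job so the local copy is current.
    subprocess.run(build_azcopy_cmd(source_url, dest_dir, access_key), check=True)
```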

Veeam Backup Demonstration

To validate the solution described above, I tested it by uploading several files to the blob storage account:


I then started a backup job. After the backup job completed, I ran a restore to validate the backup, using the guest files restore option:


Restoring the console.txt file and viewing it before any modification:


I then modified the console.txt file and ran an incremental backup. The following screenshot shows the restore from the incremental backup (note the time of the backup):
The edited console.txt file (with “Hal – Test For Incremental x 2” added):



With these simple and straightforward steps, I was able to back up my Microsoft Azure storage account to my Veeam backup repository. The good news is that the .cmd script I provided in this blog post copies only new or updated/modified files (/XO) from the blob storage. This saves time and space, both in your Veeam repository and on the local disk. In this blog post, I have shown basic, manual steps to demonstrate the solution and its capabilities. As previously mentioned, it is ideal to run the Microsoft AzCopy command before starting the backup job, either as a pre-job script or via the Windows scheduler.
I hope this blog post will help you; please don’t hesitate to share your feedback.
The post Backup your Azure Storage Accounts appeared first on CloudOasis.



Tips on managing, testing and recovering your backups and replicas

Tips on managing, testing and recovering your backups and replicas

Veeam Software Official Blog  /  Evgenii Ivanov

Several months ago, I wrote a blog post about some common and easily avoidable misconfigurations that we, support engineers, keep seeing in customers’ infrastructures. It was met with much attention and hopefully helped administrators improve their Veeam Backup & Replication setups. In this blog post, I would like to share with you several other important topics. I invite you to reflect on them to make your experience with Veeam smooth and reliable.

Backups are only half of the deal — think about restores!

Every now and then we get calls from customers who have found themselves in a very bad situation. They needed a restore, but at a certain point hit an obstacle they could not circumvent. And I’m not talking about lost backups, CryptoLocker or something! It’s just that their focus was on creating a backup or replica. They never considered that data recovery is a whole different process that must be examined and tested separately. Here are several examples to give you a taste of it:

  1. The customer had a critical 20 TB VM that failed. Nobody wants downtime, so they started the VM with Instant Recovery and had it working in five minutes. However, Instant Recovery is a temporary state and must be finalized by migrating the VM to the production datastore. As it turned out, the infrastructure could not copy 20 TB of data in any reasonable time. And since Instant Recovery was started with the option to write changes to the C: drive of the Veeam Backup & Replication server (as opposed to using a vSphere snapshot), that drive was quickly filling up without any possibility of sufficient extension. As some time passed before the customer approached support, the VM had already accumulated changes that could not be discarded. With critical data at risk, there was no way to finalize Instant Recovery in a sufficiently short time, and failure was imminent. Quite a pickle, huh?
  2. The customer had a single domain controller in the infrastructure, and everything was added to Veeam Backup & Replication by DNS name. I know, I know. It could have gone wrong in a hundred ways, but here is what happened: the customer planned some maintenance and decided to fail over to the replica of that DC. They used planned failover, which is ideal for such situations. The first phase went fine; however, during the second phase the original VM was turned off to transfer the last bits of data. Of course, at that moment the job failed, because DNS went down. Luckily, here we could simply turn on the replica VM manually from vSphere (not something we recommend; see the next piece of advice). However, it disrupted and delayed the maintenance process. Plus, we had to manually add host names to the C:\Windows\System32\drivers\etc\hosts file on the Veeam Backup & Replication server to allow a proper failback.
  3. The customer based their backup infrastructure around tapes and maintained only a very short backup chain on disk. When they had to restore some guest files from a large file server, it turned out there was simply not enough space on any machine to act as a staging repository.

I think in all these situations the clients fell into the same trap: they simply assumed that if a backup is successful, then the restore will be as well! Learn about restores, just as you learn about backups. A good way to start is our user guide. This section contains information on all the major types of restores. In the “Before you begin” section of each restore option, you can find initial considerations and prerequisites. Information on other types of restores, such as restore from tapes or from storage snapshots, can be found in their respective sections. Apart from the main user guide, be sure to check out the Veeam Explorers guide too. Each Veeam Explorer has a “Planning and preparation” section; this will help you prepare your system for restores beforehand.
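The third scenario, for instance, comes down to a check you can run long before disaster strikes: does any machine have enough free space to stage the restore? A trivial sketch (the required size is whatever your largest staging restore would need; the function name is mine, not a Veeam tool):

```python
import shutil

def has_staging_space(path, required_bytes):
    """True if the volume holding `path` has at least `required_bytes`
    free, i.e. could act as a staging repository for a restore."""
    return shutil.disk_usage(path).free >= required_bytes
```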

Do not manage replicas from vSphere console

Veeam replicas are essentially normal virtual machines. As such, they can be managed using the usual vSphere management tools, primarily the vSphere client. They can be, but they should not be. Replica failover in Veeam Backup & Replication is a sophisticated process that allows you to carefully go one step at a time (with the possibility to roll back if something goes wrong) and finalize the failover properly. Take a look at the scheme below:

If you simply start a replica in the vSphere client instead of using the Veeam Backup & Replication console, or if you start a failover from Veeam Backup & Replication but later switch to managing the VM from the vSphere client, you face a number of serious consequences:

  1. The failover mechanism in Veeam Backup & Replication will no longer be usable for this VM, as all that flexibility described above will no longer be available.
  2. You will have data in the Veeam Backup & Replication database that does not represent the actual state of the VM. In the worst cases, fixing it requires database edits.
  3. You can lose data. Consider this example: A customer started a replica manually in the vSphere client and decided to simply stick with it. Some time passed, and they noticed that the replica was still present in the Veeam Backup & Replication console. The customer decided to clean it up a little, right-clicked on the replica and chose “Delete from disk.” Veeam Backup & Replication did exactly what it was told: it deleted the replica, which, unbeknownst to the software, had become a production VM with live data.

There are situations when starting the replicas from the vSphere client is necessary (mainly, if the Veeam Backup & Replication server is down as well and replicas must be started without delay). However, if the Veeam Backup & Replication server is operational, it should be the management point from start to finish.
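As a hedged illustration of driving the process from Veeam end to end, the sketch below shows a failover started from PowerShell rather than the vSphere client. The cmdlet names come from the Veeam PowerShell reference; the VM name and the commented-out follow-ups are placeholders, so check the cmdlet help for the exact parameters in your version.

```powershell
# Load the 9.5-era Veeam snap-in on the backup server
Add-PSSnapin VeeamPSSnapin

# Find the latest restore point of the replicated VM
$restorePoint = Get-VBRRestorePoint -Name "DC01" |
    Sort-Object CreationTime | Select-Object -Last 1

# Fail over through Veeam, not the vSphere client,
# so the replica state stays tracked in the console
Start-VBRViReplicaFailover -RestorePoint $restorePoint -Reason "Planned maintenance"

# Later, either roll back or finalize properly, again through Veeam:
# Undo-VBRViReplicaFailover ...   (roll back to the original VM)
# Start-VBRViReplicaFailback ...  (fail back to production)
```

Either way, the whole lifecycle stays inside Veeam Backup & Replication, which is the point of the advice above.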
It is also not recommended to delete replica VMs from the vSphere client. Veeam Backup & Replication will not be aware of such changes, which can lead to failures and stale data in the console. If you no longer need a replica, delete it from the Veeam console, not as a VM from the vSphere client. That way your list of replicas will contain only current data.

Careful with updates!

I’m speaking about updates for hypervisors and various applications backed up by Veeam. From a Veeam Backup & Replication perspective, such updates can be roughly divided into two categories — major updates that bring a lot of changes and minor updates.
Let’s speak about major updates first. The most important ones are hypervisor updates. Before installing them, it is necessary to confirm that Veeam Backup & Replication supports them. These updates bring a lot of changes to the libraries and APIs that Veeam Backup & Replication uses, so updating the Veeam Backup & Replication code and rigorous QA testing are necessary before a new version is officially supported. Unfortunately, as of now VMware does not provide vendors any preliminary access to new vSphere versions. Veeam’s R&D gets access together with the rest of the world, which means there is always a lag between a new version’s release and official support. The magnitude of the changes also does not allow R&D to fit everything into a hotfix, so official support is typically added in new Veeam Backup & Replication versions. This puts support and our customers in a tricky situation. Usually after a new vSphere release, the number of cases increases, because administrators start installing updates only to find their backups failing with weird issues. This forces us in support to ask customers to perform a rollback (if possible) or to propose workarounds that we cannot officially support due to lack of testing. So please check version compatibility before updating!
The same applies to the applications being backed up. Veeam Explorers also have a list of supported versions, and new versions are added to this list with Veeam Backup & Replication updates. So once again, be sure to check the Veeam Explorers user guide before moving to a new version.
In the minor updates category, I put things like cumulative updates for Exchange, new VMware Tools versions, security updates for vSphere, etc. Typically they do not contain major changes, and in most situations Veeam Backup & Replication does not experience any issues. That’s why QA does not release official statements as it does for major updates. However, in our experience there have been situations where minor updates changed workflows enough to cause issues with Veeam Backup & Replication. In these cases, once the presence of an issue is confirmed, R&D develops a hotfix as soon as possible.
How should you stay up to date on recent developments? My advice is to register on the Veeam forums: you will be subscribed to the weekly “Word from Gostev” newsletter from our Senior Vice President Anton Gostev. It contains information on discovered issues (not limited to Veeam products), release plans and interesting IT news. If you do not find what you are looking for in the newsletter, I recommend searching the forum. Due to the sheer number of Veeam clients, if any update breaks something, a related thread appears soon after.
Now, backups are not the only thing that patches and updates can break. In reality, they can break a lot of stuff, the application itself included. And here Veeam has something to offer: Veeam DataLabs. Maybe you have heard about SureBackup, our ultimate tool for verifying the consistency of backups. SureBackup is based on DataLabs, which allows you to create an isolated environment where you can test updates before bringing them to production. If you want to save yourself some gray hair, be sure to check it out. I recommend starting with this post.

Advice to those planning to buy Veeam Backup & Replication or switching from another solution

Sometimes in technical support we get cases that go like this: “We have designed our backup strategy like this, we acquired Veeam Backup & Replication, however we can’t seem to find a way to do X. Can you help with it?” (Most commonly such requests are about unusual retention policies or tape management). We are happy to help, but at times we have to explain that Veeam Backup & Replication works differently and they will need to change their design. Sure enough, customers are not happy to hear that. However, I believe they are following an incorrect approach.
Veeam Backup & Replication is very robust and flexible, and in its current form it can satisfy the absolute majority of companies. But it is important to understand that it was designed with certain ideas in mind, and to make the product really shine, it is necessary to follow those ideas. Unfortunately, sometimes the reality is quite different. Here is what I imagine happens with some customers: they decide that they need a backup solution, so they sit down in a room and meticulously design each element of their strategy. Once done, they move to choosing a backup solution, where Veeam Backup & Replication seems an obvious choice. In another scenario, the customer already has a backup solution and a developed backup strategy. However, for some reason their solution does not meet their expectations, so they decide to switch to Veeam and wish to carry their backup strategy into Veeam Backup & Replication unchanged. My firm belief is that this process should go the other way around.
These days Veeam Backup & Replication has become a de facto standard among backup solutions, so probably any administrator would like to have a glance at it. However, if you are serious about implementation, Veeam Backup & Replication needs to be studied and tested. Once you know its capabilities and know this is what you are looking for, build your backup strategy specifically for Veeam Backup & Replication. You will be able to use the functionality to the maximum and reduce risks, and support will have an easier time understanding your setup.
And that’s what I have for today’s episode. I hope that gave you something to consider.

The post Tips on managing, testing and recovering your backups and replicas appeared first on Veeam Software Official Blog.



Using Veeam PowerShell with Veeam DataLabs Staged Restore and Secure Restore

Using Veeam PowerShell with Veeam DataLabs Staged Restore and Secure Restore

vZilla  /  michaelcade

Before we get started, I wanted to share something I found while testing the beta releases and RC of Veeam Backup & Replication 9.5 Update 4. While running my live Veeam DataLabs: Secure Restore demo at Velocity, our sales and partner event that was also part of the 9.5 Update 4 launch, I mentioned that there was a huge influx of new things you could do with Veeam Backup & Replication and PowerShell cmdlets.
If you want to see for yourself, run the following command; it outputs all of the cmdlets available in Veeam Backup & Replication 9.5 Update 4. Note that if you want a comparison, you need to run it prior to the update as well.
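The command itself did not survive into this copy of the post; a minimal equivalent (a sketch, assuming the 9.5-era VeeamPSSnapin snap-in is registered on the backup server) is:

```powershell
# Load the Veeam snap-in (9.5-era installs register a snap-in, not a module)
Add-PSSnapin VeeamPSSnapin

# List every available Veeam cmdlet and count them
$cmdlets = Get-Command -Module VeeamPSSnapin
$cmdlets | Sort-Object Name
"Total cmdlets: $($cmdlets.Count)"
```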

VBR 9.5 Update 3a PowerShell cmdlets: 564 -> VBR 9.5 Update 4 cmdlets: 734

New PowerShell Cmdlets (170 NEW)

A snapshot of the new PowerShell cmdlets is shown in the table below. The PowerShell Reference guide is a great place to start if you want to understand what they can do for you in your environment.
If you compare the Update 3a release to the new Update 4 release, you will see that the cmdlet count has grown from 564 to 734. 170 new! That shows the depth of this release.
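One way to produce that before/after comparison yourself is to export the cmdlet list before upgrading and diff it afterwards. This is just a sketch; the file path is a placeholder:

```powershell
# Before the upgrade: save the current cmdlet names to a file
Get-Command -Module VeeamPSSnapin |
    Select-Object -ExpandProperty Name |
    Set-Content "C:\temp\vbr-u3a-cmdlets.txt"

# After the upgrade: show only the cmdlets that are new
Compare-Object -ReferenceObject (Get-Content "C:\temp\vbr-u3a-cmdlets.txt") `
               -DifferenceObject (Get-Command -Module VeeamPSSnapin).Name |
    Where-Object SideIndicator -eq '=>' |
    Select-Object -ExpandProperty InputObject
```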

My focus for the Update 4 release has been around Veeam DataLabs Secure and Staged Restore features. This next section will go deeper into the options and scenarios of how to run Secure and Staged Restore via PowerShell during your recoveries.

Veeam DataLabs: Secure Restore

Veeam DataLabs Secure Restore gives us the ability to perform a third-party antivirus scan before continuing a recovery process. The command below highlights the parameters that can be used for Secure Restore while leveraging Instant VM Recovery®. You will find these same parameters under many other restore cmdlets. More information can be found here.

The script below is an example that I have used to kick off a secure restore within my own lab environment.
This is the script I used for the on-stage demo. In the demo I was using a tiny Windows virtual machine that I had infected with the EICAR test file; many antivirus vendors use this safe “virus” to test their systems. I wanted to demonstrate the full force of what Veeam DataLabs: Secure Restore can bring to the recovery process by showing an infected machine. There is no fun in showing you something that is clean; anyone can do that.
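The demo script itself did not make it into this copy of the post. The sketch below shows its general shape, with the Secure Restore switches taken from the Update 4 Start-VBRInstantRecovery reference; the job, VM and host names are lab placeholders:

```powershell
Add-PSSnapin VeeamPSSnapin

# Latest restore point of the (deliberately EICAR-infected) test VM
$restorePoint = Get-VBRBackup -Name "Lab Backup Job" |
    Get-VBRRestorePoint -Name "TinyWin" |
    Sort-Object CreationTime | Select-Object -Last 1

$targetHost = Get-VBRServer -Name "esxi01.lab.local"

# Instant VM Recovery with an antivirus scan before the VM is published;
# if malware is found, connect the VM with its NICs disabled
Start-VBRInstantRecovery -RestorePoint $restorePoint -Server $targetHost `
    -EnableAntivirusScan `
    -VirusDetectionAction DisableNetwork `
    -EnableEntireVolumeScan
```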

Veeam DataLabs: Staged Restore

Veeam DataLabs Staged Restore allows us to inject a script (think of it as an extra processing step) into an entire VM recovery. Many use cases will be shared elsewhere; this is not the place to go into that detail. The command below highlights the parameters available for the Staged Restore functionality during a full VM restore.
The one use case I will share, and one I have discussed several times since the release, is the ability to mask data from certain departments. One of our customers was taking their backup data and, pre-Update 4, restoring it to an isolated environment away from production for their developers to work on. The mention of Veeam DataLabs: Staged Restore made them realise that the databases they were exposing from those backup files to their development team also included sensitive data. With Staged Restore, they can inject a script into the process that excludes that sensitive data from the development team. More information can be found here.

Finally, the script below is another example, this time for Staged Restore. You will note that I have three options at the end that allow for different outcomes. The first option is a standard entire-VM recovery with no staged restore. The second option (the one that is not commented out) injects a script without requiring an application group. The third and final example leverages an application group, for cases where the VM being restored needs a dependency from within that group in order to complete the injection of the script.
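That script was also lost from this copy of the post. A sketch of the three options described above follows, with the staging parameters as listed in the Update 4 Start-VBRRestoreVM reference; the lab name, credentials, script path and application group are all placeholders:

```powershell
Add-PSSnapin VeeamPSSnapin

$restorePoint = Get-VBRBackup -Name "Lab Backup Job" |
    Get-VBRRestorePoint | Sort-Object CreationTime | Select-Object -Last 1
$lab   = Get-VBRVirtualLab -Name "DataLab01"
$creds = Get-VBRCredentials -Name "LAB\Administrator"

# Option 1: plain entire-VM restore, no staging
# Start-VBRRestoreVM -RestorePoint $restorePoint -Reason "Standard restore"

# Option 2 (active): staged restore injecting a data-masking script,
# no application group required
Start-VBRRestoreVM -RestorePoint $restorePoint -Reason "Masked restore" `
    -EnableStagedRestore -StagingVirtualLab $lab `
    -StagingScript "C:\Scripts\mask-sensitive-data.ps1" `
    -StagingCredentials $creds

# Option 3: the same, but with an application group for VMs that need
# a dependency (e.g. a domain controller) running while the script executes
# Start-VBRRestoreVM -RestorePoint $restorePoint -EnableStagedRestore `
#     -StagingVirtualLab $lab `
#     -StagingScript "C:\Scripts\mask-sensitive-data.ps1" `
#     -StagingCredentials $creds `
#     -StagingApplicationGroup (Get-VBRApplicationGroup -Name "AppGroup01")
```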

Hopefully that was useful. I am hoping to explore more PowerShell options, especially around Veeam DataLabs and the art of the possible when it comes to leveraging that data and providing additional value beyond the backup and recovery of your data.



N2WS Expands Cost Optimization for Amazon Web Services with Amazon EC2 Resource Scheduling

N2WS Expands Cost Optimization for Amazon Web Services with Amazon EC2 Resource Scheduling

Business Wire Technology News

WEST PALM BEACH, Fla. – N2WS, a Veeam® company and provider of cloud-native backup and recovery solutions for data and applications on Amazon Web Services (AWS), today announced the availability of N2WS Backup & Recovery 2.5 (v2.5), which introduces Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) resource management and control. Using the new N2WS Resource Control, organizations can lower costs by stopping or hibernating groups of Amazon EC2 and Amazon RDS resources on-demand or automatically, based on a pre-determined schedule.

“A good portion of our internal AWS bill is dedicated to Amazon EC2 compute resources, as much as 60 percent every month. A lot of these resources are not being used all the time,” explains Uri Wolloch, CTO at N2WS. “Being able to simply power on or off groups of Amazon EC2 and Amazon RDS resources with a simple mouse click is just like shutting the lights off when you leave the room or keeping your refrigerator door closed; it reduces waste and saves a huge amount of money if put into regular practice.”
N2WS Backup & Recovery v2.5 further optimizes the process for cycling Amazon Elastic Block Store (Amazon EBS) snapshots into the N2WS Amazon Simple Storage Service (Amazon S3) repository, extending cost savings to more than 60% for backups stored longer than two years. N2WS v2.5 also supports two new AWS Regions: the AWS Europe (Stockholm) Region and the new AWS GovCloud (US-East) Region.
For public sector organizations leveraging the AWS GovCloud (US-East or US-West) Regions, N2WS v2.5 offers automated cross-region disaster recovery between the two AWS Regions. This allows AWS GovCloud (US) users to orchestrate near-instant recovery and disaster recovery while keeping data and workloads in approved data centers – ensuring that workloads remain available in the event of a regional issue.
New updates and enhancements now available as part of N2WS Backup & Recovery v2.5 include:

  • Management of data storage and server instances from one simple console using N2WS Resource Control
  • Optimization enhancements to Amazon S3 integration, saving even more when cycling snapshots into an Amazon S3 repository
  • Added support for new AWS Regions – AWS Europe (Stockholm) and AWS GovCloud (US-East) Regions with cross-region DR between AWS GovCloud US-West and AWS GovCloud US-East now available
  • Expanded range of APIs allowing you to configure alerts and recovery targets
  • Full support for all available APIs via a new CLI tool

For organizations leveraging the N2WS Amazon S3 repository, further compression and deduplication enhancements have pushed cost savings above 60% for data stored over longer terms. In simulations, monthly backups of an 8TB volume stored in the N2WS Amazon S3 repository for two years cost 68% less than the same backups stored as native Amazon EBS snapshots. These enhancements help enterprises manage and significantly reduce cloud storage costs for backup data retained for compliance reasons.
N2WS Backup & Recovery is an enterprise-class backup, recovery and disaster recovery solution only available on AWS Marketplace. N2WS Backup & Recovery version 2.5 is available for immediate use by visiting AWS Marketplace at:
About N2WS
N2W Software (N2WS), a Veeam® company, is a leading provider of enterprise-class backup, recovery and disaster recovery solutions for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Services (Amazon RDS), and Amazon Redshift. Used by thousands of customers worldwide to backup hundreds of thousands of production servers, N2WS Backup & Recovery is a preferred backup solution for Fortune 500 companies, enterprise IT organizations and Managed Service Providers operating large-scale production environments on AWS. To learn more, visit



Veeam and Alibaba Cloud – OSS Integration

Veeam and Alibaba Cloud – OSS Integration

CloudOasis  /  HalYaman

What options does Veeam offer to protect Alibaba Cloud workloads? How do you back up Alibaba ECS (Elastic Compute Service) instances? And is it possible to use Alibaba OSS (Object Storage Service) to archive your Veeam backups?

Veeam Update 4, released on 22 January, not only introduced new product features, but also new cloud service integrations. In this blog, I will discuss the integration of Veeam Update 4 with Alibaba Cloud.
There are several levels of integration that Veeam offers to protect and/or utilize Alibaba Cloud resources. The following options are available when you use Veeam and Alibaba Cloud:

  • Deploy Veeam as an Elastic Compute (ECS) instance from the Alibaba MarketPlace,
  • Protect Alibaba ECS instances using a Veeam Agent,
  • Integrate Veeam with Alibaba Object Storage Service (OSS),
  • Integrate Veeam with Alibaba VTL.

In this blog post, I will focus on Veeam and Alibaba OSS integration to keep things short and focused. In future blog posts, I will discuss Veeam Agent protection and VTL.

Veeam and Alibaba Object Storage Service (OSS) Integration

Veeam Archive/Capacity Tier is a newcomer to the Veeam suite, included in Update 4. The good news for customers using Alibaba Cloud is that Alibaba OSS is supported out of the box. As an Alibaba customer, you can add your OSS bucket as part of a Veeam Scale-Out Backup Repository to archive your aging backups to OSS for longer retention.
To add your Alibaba OSS to Veeam, you must first complete these tasks:

  • Create the Alibaba OSS bucket,
  • Acquire the Bucket URL and the Access key,
  • Add Alibaba OSS as a repository.

Let’s demonstrate the process of adding OSS to the Veeam infrastructure. The process is very simple and starts by adding a new repository, as shown in the following screenshot.

After you choose Object storage, select the S3 Compatible option:
Next, name the new OSS repository, then press Next to enter the Service Point URL (the OSS URL) and the credentials in the form of an Access Key and Secret Key. See the screenshot below.

The final step in this process is to create a folder where you will save your Veeam backups. See the screenshot below.

Before you finalize the process, you can take the opportunity to limit OSS consumption. See the settings in the screenshot below.

Now press Finish to complete the process.
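The same workflow can also be scripted. The sketch below uses the object storage cmdlets added in Update 4, with names as listed in the PowerShell reference; the keys, endpoint, region, bucket, folder and size limit are all placeholders to adapt to your environment:

```powershell
Add-PSSnapin VeeamPSSnapin

# Register the Alibaba OSS access key pair
$account = Add-VBRAmazonAccount -AccessKey "<AccessKeyId>" `
    -SecretKey "<AccessKeySecret>" -Description "Alibaba OSS"

# Connect to OSS through the S3-compatible endpoint of your region
$conn = Connect-VBRAmazonS3CompatibleService -Account $account `
    -ServicePoint "https://oss-ap-southeast-2.aliyuncs.com" `
    -CustomRegionId "oss-ap-southeast-2"

# Pick the bucket and create the folder that will hold the Veeam backups
$bucket = Get-VBRAmazonS3Bucket -Connection $conn -Name "veeam-archive"
$folder = New-VBRAmazonS3Folder -Connection $conn -Bucket $bucket -Name "backups"

# Add the repository, with an optional consumption limit (in GB)
Add-VBRAmazonS3CompatibleRepository -Connection $conn -AmazonS3Folder $folder `
    -Name "Alibaba OSS Repository" -SizeLimitEnabled -SizeLimit 1024
```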


In this blog post, I discussed the integration of Veeam and Alibaba OSS, walking you through the steps to add Alibaba OSS to the Veeam infrastructure as a repository. To use OSS as an Archive/Capacity Tier target, you must configure a Veeam Scale-Out Backup Repository (SOBR), then include the OSS as part of the Capacity Tier configuration. See the screenshot below for the setup.

In future blog posts, I will keep the discussion around Veeam and Alibaba Cloud; the topics will be Veeam and Alibaba VTL, and Alibaba ECS agent protection. Until then, I hope this blog post helps you get started. Please don’t forget to share.

The post Veeam and Alibaba Cloud – OSS Integration appeared first on CloudOasis.



Veeam & NetApp HCI (Element OS) Integration #NetAppATeam

Veeam & NetApp HCI (Element OS) Integration

vZilla  /  michaelcade

With the recent GA release of Veeam Backup & Replication 9.5 Update 4, the next storage integration has arrived. Leveraging the Universal Storage API released in late 2017, the list continues to grow; I previously wrote an overview of the integration points.
Functionality is covered in the post linked above; here I want to highlight the simple process of adding your NetApp HCI or SolidFire storage to your Veeam Backup & Replication console.
Before you begin, make sure you have added your vSphere environment; this unlocks the Storage Infrastructure tab.
This week some of my posts focus on Veeam Community Edition, so I will run through the process of adding storage in this free edition. Even Community Edition gains visibility into, and some control over, historic snapshots that were not even created by Veeam, as well as the ability to recover items from those snapshots.

As you can see above, we have a clean Veeam Backup & Replication install with our VMware environment added, but no storage integrations yet in the console. To add one, select Add Storage.

Next up you will see the long list of available storage integrations, along with the link to download any of the additional Universal Storage API plug-ins. For us, select the NetApp option below.

On this next page you will see Data ONTAP, which we have covered a few times in previous posts, but the one we are interested in today is Element. Element is the operating system used on both SolidFire and NetApp HCI storage.

The Element option is a link to the Veeam website where all plug-ins can be downloaded. Authenticate with your account, or create one, to see the available downloads.

Scroll down until you find the following screen; it applies to all Veeam Universal Storage API integrations and contains the downloads for each of them.

Select and download the NetApp Element Plug-In.

Once downloaded, the plug-in comes as a compressed file; extract it to an appropriate location.

Once extracted, you will see it is an executable that can then be run; I always choose to run it as administrator.

If you have not closed all your Veeam Backup & Replication consoles, you will first get the following warning, which will not let you proceed until all consoles are closed.

Next is a pretty straightforward wizard; it’s Veeam, we wouldn’t make this complicated, would we? Next.

Terms of usage: let’s make sure we all take the time to read through this. It does, after all, state in block capitals that this is IMPORTANT – READ CAREFULLY.

When you are happy with the terms of usage, your tray table is stowed, your seat in an upright position and your seat belt fastened you can proceed with the installation.

A note to mention: the installation is going to stop some Veeam services, so as a best practice you should disable your jobs before running it. Don’t worry, it starts the services again once installed; just remember to re-enable the jobs afterwards. In my lab I have no jobs running currently, and it doesn’t take long at all.

OK, so that’s the plug-in installed on your Veeam Backup & Replication server. Pretty simple, straightforward stuff.

Now let’s head back into our Veeam Backup & Replication console and add our NetApp HCI / SolidFire system. Follow the first four steps above to get back to this stage; this time we have the software downloaded and installed. First, add the management IP of your system.

Provide the credentials to access the storage system.

The next options cover the available protocols, the volumes to be scanned, and which of our proxies should be used (those that have access to the storage array).
Volumes to scan: if you have volumes other than VMFS datastores for your VMware environment, you may wish to remove them from the scanning schedule here.
Backup proxies to use: this ensures that only proxies capable of accessing the storage system are used. By default, all proxies will be used or attempted.

That’s it. Confirm the summary of the system you are adding; once you hit Finish, Veeam will run a scan against the volumes for VMware VM files.

The next pop-up shows the storage system being discovered; Veeam scans for all VMFS volumes located on the storage array.

In my lab, another Veeam server has been orchestrating snapshots, but with this instance of Veeam Backup & Replication we can look inside those volumes and snapshots, see the VMs available for recovery, and perform a level of restore against them.

I will follow up in another post on what else we can do now that we have our NetApp HCI system visible to our Veeam Backup & Replication server. Yes, we can back up from it and restore, but there are also some more very interesting features we can take advantage of, including Veeam DataLabs.



Enhancing DRaaS with Veeam Cloud Connect and vCloud Director

Enhancing DRaaS with Veeam Cloud Connect and vCloud Director

Veeam Software Official Blog  /  Anthony Spiteri

The state of disaster recovery

While many organizations have understood the importance of the 3-2-1 rule of backup in getting at least one copy of their data offsite, they have traditionally struggled to understand the value of making their critical workloads available with replication technologies. Replication and Disaster Recovery as a Service (DRaaS) still predominantly focus on the availability of virtual machines and the services and applications that they run. The end goal is to have critical line-of-business applications identified, replicated and then made available in the case of a disaster.
The definition of a disaster varies depending on who you speak to, and the industry loves to use geo-scale impact events when talking about disasters, but the reality is that the failure of a single instance or application is much more likely than whole system failures. This is where Replication and Disaster Recovery as a Service becomes important, and organizations are starting to understand the critical benefits of combining offsite backup together with replication of their critical on-premises workloads.

Veeam Cloud Connect

While the cloud backup market has been flourishing, it’s true that most service providers who have been successful with Infrastructure as a Service (IaaS) have spent the last few years developing their Backup, Replication and Disaster Recovery as a Service offerings. With the release of Veeam Backup & Replication v8, Cloud Connect Backup was embraced by our cloud and service provider partners and became a critical part of their service offerings. With version 9, Cloud Connect Replication was added, and providers started offering Replication and Disaster Recovery as a Service.
Cloud Connect Replication was released with industry-leading network automation features, and the ability to abstract both Hyper-V and vSphere compute resources and have those resources made available for tenants to replicate workloads into service provider platforms and have them ready for full or partial disaster events. Networking is the hardest part to get right in a disaster recovery scenario and the Network Extension Appliance streamlined connectivity by simplifying networking requirements for tenants.
While Cloud Connect Replication as it stood pre-Update 4 was strong technology…it was missing one thing…

Introducing vCloud Director support for Veeam Cloud Connect Replication

VMware vCloud Director has become the de facto standard for service providers who offer Infrastructure as a Service. While always popular with top VMware Cloud Providers since its first release back in 2010, the recent enhancements with support for VMware NSX, a brand new HTML5-based user interface, together with increased interoperability, has resulted in huge growth in vCloud Director being deployed as the cloud management platform of choice.
Veeam has had a long history supporting vCloud Director, with the industry’s first support for vCloud Director-aware backups released in Veeam Backup & Replication v7. With the release of Update 4, we added support for Veeam Cloud Connect to replicate directly into vCloud Director virtual data centers, allowing both our cloud and service provider partners, and tenants alike, to take advantage of the enhancements VMware has built into the platform.

By extending Cloud Connect Replication to take advantage of vCloud Director as a way to allocate service provider cloud resources natively, we have given providers the ability to utilize the constructs of vCloud Director and have their tenants consume cloud resources easily and efficiently.

Benefits of vCloud Director with Cloud Connect Replication

By allowing tenants to consume vCloud Director resources, it allows them to take advantage of more powerful features when dealing with a full disaster, or the failure of individual workloads. Not only will full or partial failovers be more transparent with the use of the vCloud Director HTML5 Tenant UI, but networking functionality will also be enhanced by tapping into VMware’s industry-leading network virtualization technology, NSX.
With tenants able to view and access VM replicas via the vCloud Director HTML5 UI, they will have greater visibility and access before and after failover events. The vCloud Director HTML5 UI will also allow tenants to see what is happening to workloads as they boot and interact with the guest OS directly, if required. This dramatically reduces the reliance on the service provider helpdesk and ensures that tenants are in direct control of their replicas.

From a networking point of view, being able to access the NSX Edge Gateway for replicated workloads means that tenants can take advantage of the advanced networking features available on the NSX Edge Gateway. While the existing Network Extension Appliance did a great job in offering basic network functionality, the NSX Edge offers:

  • Advanced Firewalling and NAT
  • Advanced Dynamic Routing (BGP, OSPF and more)
  • Advanced Load Balancing
  • IPsec and L2VPN
  • SSL Certificate Services

Put all that together with the ability to manage and configure everything through the vCloud Director HTML5 UI and you start to get an understanding of how utilizing NSX via vCloud Director enhances Cloud Connect Replication for both service providers and tenants.
There are also a number of options that can be used to extend the tenant network to the service provider cloud network when actioning a partial failover. Tenants and service providers can configure custom IPsec VPNs or use the IPsec functionality of the NSX Edge Gateway to be in place prior to partial failover.
The Network Extension Appliance is still available for deployment in the same way as before Update 4 and can be used directly from within a vCloud Director virtual data center to automate the extension of a tenant network so that the failed over workload can be accessible from the tenant network, even though it resides in the service provider’s environment.


For Veeam Cloud & Service Providers (VCSP) that underpin their backup and replication service offerings with Veeam Cloud Connect, the addition of vCloud Director support means that there is an even stronger case to deliver replication and disaster recovery to customers. For end users, the added benefits of the vCloud Director HTML5 UI, and enhanced networking services backed by NSX, means that you are able to have more confidence in recovering from disasters, and in your ability to provide greater business continuity.


The post Enhancing DRaaS with Veeam Cloud Connect and vCloud Director appeared first on Veeam Software Official Blog.



Veeam brings back-up protection to HyperFlex’s SAP HANA party

Veeam can now protect SAP HANA deployments with native SAP database backups. With Veeam integrations for Cisco HyperFlex and Cisco UCS, we are excited about combining forces to offer SAP HANA users a modern solution spanning from production to protection.

SAP HANA integrated backup is here!


Veeam Software Official Blog  /  Rick Vanover

SAP HANA is one of the most critical enterprise applications out there, and if you have worked with it, you know it likely runs part, if not all, of a business. In Veeam Availability Suite 9.5 Update 4, we are pleased to now have native, certified SAP support for backups and recoveries directly via backint.

What problem does it solve?

SAP HANA’s in-memory database platform requires a backup solution that is integrated with, and aware of, the platform. This support gives you a certified solution for SAP HANA backups, reduces the impact of running backups, ensures operational consistency, and lets you leverage the additional capabilities of Veeam Availability Suite, including point-in-time restores, database integrity checks, and storage efficiencies such as compression and deduplication.
This milestone comes after years of organizations wanting Veeam backups with their SAP installations. We spent many years advocating backing up SAP with BRTOOLS and leveraging image-based backups to prepare for tests. Now, the story becomes even stronger with support for Veeam to drive backint backups from SAP and store them in a Veeam repository. Specifically, this means that a backint backup can happen for SAP HANA and Veeam can manage the storage of that backup. It is important to note that the Veeam SAP Plug-In, which makes this native support work, is also supported for use with SAP HANA on Microsoft Azure.

How does it work?

The Veeam Plug-In for SAP HANA becomes a target available for native backups of a few types: file-based backups, snapshots and backint backups. Backups are performed natively within the SAP HANA application using tools like SAP HANA Studio, SAP HANA Cockpit or SQL-based command-line entries, and a number of different types and targets can be selected, including file backups (plain copies of files) and complete data backups using backint. Backint is an API framework that allows third-party tools (such as Veeam) to connect the backup infrastructure directly to the SAP HANA database. The backint backup interval is configured in SAP HANA Studio and can be very small – such as 5 minutes. It is also recommended to enable log backups (again, configured in SAP HANA Studio) to allow more granular restores, which will be covered a bit later on.
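As a minimal sketch of what driving a backint backup looks like from the SQL side: the statement below follows SAP HANA’s `BACKUP DATA USING BACKINT` SQL syntax, while the backup prefix and the helper function itself are purely illustrative.

```python
# Hedged sketch: build the SQL statement an administrator (or a scheduled
# script) would issue against SAP HANA to trigger a backint data backup.
# The BACKUP DATA USING BACKINT syntax is from SAP HANA's SQL reference;
# the prefix value is an illustrative placeholder.

def backint_backup_sql(prefix):
    """Return the SQL statement that triggers a backint data backup."""
    return f"BACKUP DATA USING BACKINT ('{prefix}')"

print(backint_backup_sql("MONDAY_FULL"))
# BACKUP DATA USING BACKINT ('MONDAY_FULL')
```

In practice this statement would be executed via hdbsql or SAP HANA Studio; Veeam’s plug-in then receives the data through the backint API and manages its storage in a Veeam repository.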
SAP HANA can also take snapshots of its own application; while these do not include consistency or corruption checks, snapshots are a great addition to the overall backup strategy. By most common perspectives, backint is the best approach for backing up SAP HANA systems, but using snapshots can add more options for recovery. The plug-in data flow for a backint backup as implemented in Veeam Availability Suite 9.5 Update 4 is shown in the figure below:

One of the key benefits of doing a backint backup of SAP HANA is that you can do direct restores to a specific point in time – either from snapshots or from the backint backups with point-in-time recovery. This is very important when considering how critical SAP HANA is to many organizations. So, when it comes to how often a backup is done, select the interval that works for your organization’s requirements and make sure the option to enable automatic log backup is selected as well.

Bring on the Enterprise applications!

Application support is a recent trend here at Veeam, and I do not expect this to slow down any time soon! The SAP HANA Plug-In support, along with the Oracle RMAN plug-in, are two big steps in bringing application support to Veeam for critical, enterprise applications. You can find more information on Veeam Availability Suite 9.5 Update 4 here.
The post SAP HANA integrated backup is here! appeared first on Veeam Software Official Blog.



VBO Office 365 Analysis with MS Graph


CloudOasis  /  HalYaman

Are you thinking of backing up your Microsoft Office 365 Exchange Online and wishing to size your backup accurately? Is there a way to measure the total size of your consumed Exchange Online storage, your change rate, and more?

The answers to these questions are essential when it comes to sizing backup solutions. You must know the storage needed to hold the backed-up data and the backup solution components (server roles) required.
In the era of Software as a Service (SaaS), it is challenging to acquire this information and then to get access to it. This makes solution sizing a challenging exercise for any business.

The Scenario

Let us consider the following scenario to understand the challenge a business faces when it wants to size and back up its Exchange Online. To use a specific example, I will consider the Veeam Backup for Office 365 product to simplify the discussion.
To size a Veeam backup repository, you need several numbers to calculate the total size. They are:

  • The total size of the current mailboxes (Active Mailboxes); and
  • Change rate.

The good news is that Microsoft offers several ways to pull this information from Office 365: you can use either PowerShell or the MS Graph API.
In this example, I will use the MS Graph API to show you how to easily retrieve the mailbox size, change rate and more, to size the Veeam Backup Repository and the Veeam Backup infrastructure.

Veeam VBO Sizing

Veeam Software has released a great sizing guide showing how to size Veeam Backup for Office 365 Exchange Online. To access the guide, follow this link.
But how great would it be if we could automate the process! With just two Microsoft Graph API GET requests, we can gather all the information we need to size the Veeam VBO repository and infrastructure. Have a look at these two URLs:

From those two commands, we can extract all the information we need to size these Veeam items:

  • Backup Repository size;
  • Number of Servers needed;
  • Number of Proxies; and
  • Number of jobs.
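As a hedged sketch of how the report data feeds those sizing items: the field name `storageUsedInBytes` below is modeled on the Microsoft Graph `getMailboxUsageDetail` report, but treat the row shape as an assumption and verify it against your tenant’s actual response.

```python
# Sketch: summarize mailbox usage report rows (as returned by the Graph
# reports API) into the totals needed for repository sizing. Field names
# are assumptions modeled on getMailboxUsageDetail; verify before use.

def summarize_mailboxes(rows):
    """Return the mailbox count and total consumed storage in GB."""
    total_bytes = sum(r["storageUsedInBytes"] for r in rows)
    active = [r for r in rows if not r.get("isDeleted", False)]
    return {
        "mailbox_count": len(active),
        "total_gb": total_bytes / 1024 ** 3,
    }

# Two illustrative mailboxes of 5 GB and 3 GB:
rows = [
    {"userPrincipalName": "a@contoso.com", "storageUsedInBytes": 5 * 1024 ** 3},
    {"userPrincipalName": "b@contoso.com", "storageUsedInBytes": 3 * 1024 ** 3},
]
print(summarize_mailboxes(rows))  # {'mailbox_count': 2, 'total_gb': 8.0}
```

Running the same summary against yesterday’s report and today’s gives you the daily change rate as the difference between the two totals.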

VBO Sizing WebApp

Taking the information from the MS Graph API and the Veeam VBO sizing guide, I asked myself: “How can I build a WebApp, accessible from any web-enabled device, that connects to the tenant’s Office 365, draws down the necessary information on the tenant’s behalf, and then lets the tenant choose the retention period?”
All of that has already been assembled in a very small WebApp that you can access via this link. You need only your Microsoft account to access your tenant’s Office 365 subscription.

How the WebApp works

When you access the WebApp link, you will be greeted by a Connect button. This redirects you to the standard, secure Microsoft Azure login page to acquire the Graph token.
After the token has been retrieved, the WebApp sends several RESTful GET commands to acquire the following information:

With that information acquired, the WebApp helps you size the Veeam Backup Repository, and also returns the number of proxies, repositories, VBO servers and jobs you need to fully protect your Exchange Online.
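The sizing arithmetic itself can be sketched in a few lines. The 10% daily change rate and the one-proxy-per-4,000-mailboxes ratio below are placeholder assumptions chosen for illustration, not figures from the official Veeam sizing guide; plug in the ratios from the guide for real estimates.

```python
# Illustrative sketch of the sizing arithmetic behind the WebApp.
# The change-rate and proxy ratios are placeholder assumptions.
import math

def size_vbo(total_gb, mailbox_count, retention_days, daily_change=0.10):
    """Estimate repository size and proxy count from mailbox totals."""
    # Repository must hold the current data plus retained daily changes.
    repo_gb = total_gb + total_gb * daily_change * retention_days
    # Assumed ratio: one backup proxy per 4,000 mailboxes.
    proxies = math.ceil(mailbox_count / 4000)
    return {"repository_gb": round(repo_gb, 1), "proxies": proxies}

print(size_vbo(total_gb=500, mailbox_count=6000, retention_days=30))
# {'repository_gb': 2000.0, 'proxies': 2}
```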


The WebApp is a tool I built to give me an initial understanding of the size of the Exchange Online deployments of my customers and service providers. With it, gathering the data and sizing the Veeam VBO infrastructure have become simple and fast, helping customers size their storage and calculate their budget before the sales cycle starts.
Over the following days I will be working on version two of the WebApp, adding more functionality, such as estimating the size of OneDrive and SharePoint, and other exciting features. If you have any feedback or suggestions, they will be most welcome.
The post VBO Office 365 Analysis with MS Graph appeared first on CloudOasis.



Want a Visio of your lab? Veeam ONE can do it for you!


Notes from MWhite  /  Michael White

Hi all,
While talking with a customer, I mentioned that I had to update the Visio diagram of my home lab with some recent changes. There were groans, and when I asked why, the answer was the amount of work it takes to update Visio diagrams. I said it was very little work for me, because I have a tool that does it: Veeam ONE can produce a Visio diagram of your VMware infrastructure. They laughed. So here we go.
You will need a Windows machine with Visio installed.

  • First, access Veeam ONE Reporter – normally at https://FQDN:1239. After authenticating, you will see a collection of reports.

  • The report whose output we can use with Visio is in the Offline Reports folder – near the end of the list.
  • When you look in the Offline Reports folder, you see what we are after!

  • So click on the Infrastructure Overview (Visio) report name.

  • We need to confirm the scope of operations, so click on the blue Virtual Infrastructure link.

  • I can see my infrastructure is all selected, so that is good. Now click OK to return.
  • Now, if you want VMs included in your diagram, and I do, select the VM checkbox.

  • Once Include VMs is checked (or not), hit the Preview button.
  • Once you hit that button, you will see a brief collecting-data dialog; if you have a very big lab, it will take a little longer. Once it is done, a file will be downloaded.

  • No matter how hard you try to load this file into Visio directly, it will not work. You must download the Veeam Report Viewer, which, despite the name, is not a viewer but a translation tool. Install it on the machine where Visio is, and move the infrastructure.vmr file there as well.

  • The viewer installs very fast and leaves an icon on your desktop.

  • Start up the tool, and you should see something like below.

  • You can see I used it once already successfully. Let’s use the File \ Open option to load the infrastructure.vmr file.
  • We are waiting for it to say Done; depending on the size and complexity of your environment, this can take a few minutes.

  • Once it is done it will pop up Visio.  But put it away for now.

  • We want to check the viewer to make sure it was a success. After all, this is not about one diagram but multiple diagrams!

  • Now change to the following folder on the machine where the Viewer and Visio are. Each day you run the Viewer, it will put another folder into the My Veeam Reports folder. As you can see below, we are in the Valentine’s Day folder.

  • The Index is an overview, and the config one shows what is powered up or not, but the others are what you would expect. And they print nicely.

So now you have Visios of your lab or infrastructure. It may have taken you a few minutes to get it all done the first time, but the second or third time is much faster.
BTW, I did this with Update 4 of Veeam ONE and vSphere 6.7 U1.
=== END ===



How to improve security with Veeam DataLabs Secure Restore


Veeam Software Official Blog  /  Michael Cade

Today, ransomware and malware attacks are top of mind for every business. In fact, no business, large or small, is immune. What’s even more concerning is that ransomware attacks are increasing worldwide at an alarming rate, and because of this, many of you have expressed concern. In a recent study administered by ESG, 70% of Veeam customers indicated malicious malware and virus contamination are major concerns for their businesses (source: ESG Data Protection Landscape Survey).
There are obviously multiple ways your environment can be infected by malware; however, do you currently have an easy way to scan backups for threats before introducing them to production? If not, Veeam DataLabs Secure Restore is the perfect solution for secure data recovery!
The premise behind Veeam DataLabs Secure Restore is to provide users an optional, fully integrated anti-virus scan step as part of any chosen recovery process. This feature, included in the latest Veeam Backup & Replication Update 4, addresses the problems of managing malicious malware by letting you ensure that any copy data you want or need to recover into production is in a good state and malware-free. To be clear, this is NOT prevention of an attack; instead, it is a new, patent-pending way of remediating an attack arising from malware hidden in your backup data, and of giving you additional confidence that a threat has been properly neutralized and no longer exists in your environment.
Sounds valuable? If so, keep reading.

Recovery mode options

Veeam offers a number of unique recovery processes for different scenarios and Veeam DataLabs Secure Restore is simply an optional enhancement included in many of these recovery processes to make for a truly secure data recovery. It’s important to note though that Secure Restore is not a required, added step as part of a restore. Instead, it’s an optional anti-virus scan that is available to put into action quickly if and when a user suspects a specific backup is infected by malware, or wants to proceed with caution to ensure their production environment remains virus-free following a restore.


The workflow for Secure Restore is the same regardless of the specific recovery scenario used.

  1. Select the restore mode
  2. Choose the workload you need to recover
  3. Specify the desired restore point
  4. Enable Secure Restore within the wizard

Once Secure Restore is enabled, you are presented with a few options on how to proceed when an infection is detected. For example, with an Entire VM recovery, you can choose to continue the recovery but disable the network adapters on the virtual machine, or to abort the VM recovery altogether. You also have a third option: instruct the third-party antivirus to continue scanning the whole file system, giving you visibility into any other threats residing in your backups.
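The decision flow those options describe can be sketched as a small policy function. This is a hedged illustration of the logic only; the option and action names below are invented for the sketch and are not Veeam’s actual API.

```python
# Hedged sketch of the Secure Restore decision flow described above.
# Policy and action names are illustrative, not Veeam's implementation.

def secure_restore_action(infected, policy):
    """policy: 'abort', 'disable_network', or 'scan_all'."""
    if not infected:
        return "restore"                         # clean scan: restore normally
    if policy == "abort":
        return "abort_recovery"                  # stop the restore entirely
    if policy == "scan_all":
        return "restore_disconnected+full_scan"  # keep scanning for more threats
    return "restore_disconnected"                # restore with NICs disabled

print(secure_restore_action(True, "disable_network"))  # restore_disconnected
print(secure_restore_action(True, "abort"))            # abort_recovery
```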

As you work through the wizard and the recovery starts, the first step is to select the backup file and mount its disks to the mount server, which contains the antivirus software and the latest virus definitions (not owned by Veeam). Veeam then triggers an antivirus scan against the restored disks. For those of you familiar with Veeam, this is the same process leveraged by Veeam file-level recovery. Currently, Veeam DataLabs Secure Restore has built-in, direct integrations with Microsoft Windows Defender, ESET NOD32 Smart Security and Symantec Protection Engine to provide virus scanning; however, any antivirus software with CMD support can also interface with Secure Restore.

The first part of the Secure Restore process is the virus scan walking the mounted volumes to check for infections. If an infection is found, Secure Restore defaults to the choice you selected in the recovery wizard and either aborts the recovery or continues it with the network interfaces on the machine disabled. In addition, you have access to a portion of the antivirus scan log from the recovery session, giving you a clear understanding of what infection was found and where to locate it on the machine’s file system.

This particular walkthrough highlights the virtual machine recovery aspect. Next, by logging into vCenter, you can navigate to the machine and see that the network interfaces have been disconnected, letting you log in through the console and troubleshoot the files as necessary.

To quickly summarise the steps that we have walked through for the use case mentioned at the beginning, here they are in a diagram:


Probably my favourite part of this new feature is how Secure Restore fits within SureBackup, yet another powerful feature of Veeam DataLabs. For those of you unfamiliar with SureBackup, you can check out what you can achieve with this feature here.
SureBackup is a Veeam technology that allows you to automatically test VM backups and validate recoverability. This task automatically boots VMs in an isolated Virtual Lab environment, executes health checks for the VM backups and provides a status report to your mailbox. With the addition of Secure Restore as an option, we can now offer an automated and scheduled approach to scan your backups for infections with the antivirus software of your choice to ensure the most secure data recovery process.


Finally, it’s important to note that the options for Veeam DataLabs Secure Restore are also fully configurable through PowerShell, which means that if you automate recovery processes via a third-party integration or portal, then you are also able to take advantage of this.
Veeam DataLabs – VeeamHUB – PowerShell Scripts
The post How to improve security with Veeam DataLabs Secure Restore appeared first on Veeam Software Official Blog.



Missed the Update 4 Announcement? Check out What’s New in the release

NEW Veeam Availability Suite™ 9.5 Update 4, part of the Veeam Hyper‑Availability Platform™, is available for download, expanding Veeam’s leadership in cloud data management by delivering new capabilities for AWS, Microsoft Azure and Azure Stack, IBM Cloud and thousands of service providers. These powerful new capabilities include native object storage integration and new cloud mobility options.

Veeam Availability Orchestrator v1.0 Installation Overview


Notes from MWhite  /  Michael White

It is important to understand the VAO installation overview to help you be successful with VAO more quickly. More info on VAO can be found in the release notes. The docs are here – sorry I cannot make a link right to the VAO docs. You can find the supported OS and database versions in them, as well as get a good idea of the resource requirements (such as memory) of the VAO servers.

Things to have ready

  • A VM ready for VAO to be installed within.
  • Information on the SQL Server to be used by VAO, and credentials in the form of a service account.
  • Any existing VBR servers, and credentials for them.
  • A Domain Controller to be replicated to the DR side for inclusion in test failovers; it should have a known vSphere tag.
  • Applications to be protected should have a vSphere tag applied to all their VMs. For example, if you are going to protect Exchange and it has 24 VMs, each should have a tag like DR-Exchange. I suggest initially protecting only one application and learning from how that goes, so you can do the rest of your applications more easily, and more successfully too.
  • You should have a VMware Distributed Switch at the DR site with a private, non-routing VLAN or network, and this switch should be mounted on each host in the DR site. This vDS is recommended but not required.


  1. Install VAO
    1. Confirm connected to vCenter (in DR)
    2. If appropriate, connect to vCenter in production too.
    3. Confirm connected to external VBR (in production too possibly but in DR for certain)
    4. Configure SMTP
    5. Configure email destinations
  2. Use the DR site VBR to:
    1. Replicate the Domain Controller to the DR site.
    2. Replicate the first application you wish to protect.
    3. Create virtual labs in DR.
  3. Log into VAO
  4. Confirm you see the Virtual Labs in Components – there can be up to 1.5 hours of delay between assigning a tag and seeing it in VAO – after the first config.
  5. Confirm you see the tagged VMs in Components.
  6. Customize the Plan Steps as necessary, such as Add scripts, or tweak config such as number of retries.
  7. Add a Lab Group for any necessary support services – like AD to your Virtual Lab.
  8. Test the Virtual Lab, make sure both the appliance and any lab services start successfully.
  9. Create failover plan
  10. Do a readiness check.
  11. Do a test failover. Make sure it works. A key part of a successful test failover is having all of the necessary VMs inside the test. For example, Exchange will not start if a domain controller is not present.
  12. Do a real failover.

You now have a successful VAO install and have had some success working with VAO. Hopefully you have learned enough that you can now protect more applications successfully. It is important to understand that you may need to tune your plan to make sure it is as fast as it can be. The outage windows are sometimes small!
Let me know if you have questions, or comments, or you would like to see VAO articles on specific things (my VAO related articles can all be found using this tag – VAO_Tech).
=== END ===



How to move your business to the cloud in 2 steps!


Veeam Software Official Blog  /  Edward Watson

Recently Veeam expanded on its existing cloud capabilities with the greatly anticipated release of Veeam Availability Suite 9.5 Update 4. This release has produced an overarching portability solution, Veeam Cloud Mobility, which contains support for restoring to Microsoft Azure, Azure Stack as well as AWS. In this blog, we’ll cover the business challenge these restore capabilities solve, how businesses can use them, and the advantage of using Veeam’s features over comparable features in the market.

Workload portability and the cloud is still a challenge

As more and more businesses look to move their data and workloads to the cloud, they can encounter some challenges and constraints with their current backup solutions. Many backup solutions today require complex processes, expensive add-ons, or require that you perform manual conversions to convert workloads for cloud compatibility.
In fact, 47% of IT leaders are concerned that their cloud workloads are not as portable as intended [1]. The truth is, portability is absolutely key to unlocking all the benefits the cloud has to offer, and in terms of hybrid-cloud capabilities, portability is critical. There has to be an easier way…

Workload mobility made easy with Veeam

With the latest release of Veeam Availability Suite 9.5 Update 4, Veeam has delivered a more comprehensive set of restore options with the NEW Veeam Cloud Mobility. Veeam Cloud Mobility builds on the existing Direct Restore to Microsoft Azure with new restore capabilities to Microsoft Azure Stack as well as AWS. Veeam Cloud Mobility allows users to take any on-premises, physical and cloud-based workloads and move them directly to AWS, Microsoft Azure and Microsoft Azure Stack. When I say “any workload,” I mean it! Whether a virtual machine is running on VMware vSphere, Microsoft Hyper-V or Nutanix AHV, a physical server or an endpoint — anything that can be put into a Veeam repository can then be restored directly to the cloud. And the good news is these restore features are included in any edition of Veeam Backup & Replication 9.5 Update 4.
Now let’s go over some of the benefits:

Easy portability across the hybrid-cloud

As mentioned earlier, it is critical to have workload portability to get all the benefits that cloud has to offer no matter what stage of your cloud journey. Veeam Cloud Mobility overcomes many of the complexity and process challenges with the ease of use Veeam is known for. For customers looking to move workloads to the cloud, this is only a simple 2-step process:

  1. Register your cloud subscription with Veeam
  2. Restore ANY backup directly to the cloud

In addition to the simplicity of a 2-step process, Veeam’s Direct Restore to AWS includes a built-in UEFI-to-BIOS conversion, which means no manual conversions are needed to restore modern Windows workstations to the cloud. And with the new Direct Restore to Microsoft Azure Stack feature, customers get the added benefit of being able to move workloads from Microsoft Azure to Azure Stack, unlocking the scalability and cost-efficiency of a true hybrid cloud model.

Enabling business continuity

Another primary benefit is that Veeam Cloud Mobility can be used for data recovery in emergency situations. For instance, if a site failure occurs, you can easily restore your backed-up workloads, spinning them up as virtual machines in AWS, Azure or Azure Stack. Then, as the smoke clears, you can gain and provide access to the restored VMs. Unlike our Veeam Cloud Connect offering delivered through Veeam Cloud and Service Provider (VCSP) partners, Veeam Cloud Mobility is not a disaster recovery solution designed to meet strict RTOs or RPOs. Rather, it is an efficient restore feature that gives you a simple, affordable game plan for regaining access to workloads should a disruption ever happen to your primary site.

Unlocking test and development in the cloud

Whether businesses need to test patches or applications, there is a big need for test and development. But many businesses find that conducting such tests locally puts a strain on their storage, compute and network. The cloud, on the other hand, is a ready-made testing environment, where resources can be scaled up and down as needed on a dime. Veeam Cloud Mobility gives customers the ability to unlock the cloud for test and development, so they can test as frequently, quickly and affordably as needed without the constraints of a traditional infrastructure.

Take away

Data portability is vital as part of an overall cloud strategy. With the NEW Veeam Cloud Mobility you can move ANY workload directly to AWS, Azure and Azure Stack to maintain the portability you need to leverage the power of the cloud. Whether for data portability, recovery needs or test and dev — Veeam Cloud Mobility helps make workload portability easy.
If you or your customer have not yet taken a free trial of Veeam, now is the perfect time to try Veeam Availability Suite for your cloud strategy!

[1] 2017 Cloud User Survey, Frost & Sullivan

Helpful resources:

The post How to move your business to the cloud in 2 steps! appeared first on Veeam Software Official Blog.



SAP HANA Certification is now posted live on SAP’s web site

Great News! SAP HANA Certification is now posted live on SAP’s web site
While we have been certified since GA of VAS 9.5 Update 4, it’s now live on the SAP page if you have discussions with customers needing proof. Blogs and PR activities will be following in the next week or so. Note that the Veeam Plug-in for SAP HANA is available to customers of Veeam Backup & Replication Enterprise Plus edition, most likely requiring the Veeam Instance License for deployments on physical servers/clusters.

Harness the power of cloud storage for long-term retention with Veeam Cloud Tier


Veeam Software Official Blog  /  Anthony Spiteri

The cost and efficiency of data

All organizations are experiencing explosive data growth. That growth continues to accelerate at an almost exponential pace, and with it come the pain points of trying to manage it. More data means more robust applications to handle larger data sets, which in turn means more infrastructure to handle the applications and the data itself. While the cost and management of on-premises storage has come down as hardware and disk technologies improve, organizations still face significant overhead when maintaining their own hardware infrastructure. Taking that a step further as it relates to backups, when you combine the growth of data with stricter regulations around data retention, the challenges of managing storage platforms for production and backup workloads become even more complex. The reality persists that organizations still struggle to achieve the economies of scale, both operational and financial, that make storing data long term viable.

The rise of Object Storage

Object Storage has fundamentally shifted the storage landscape, mainly due to its popularity in the public cloud space, but also because it offers advantages over traditional block- and file-based storage systems. Object Storage overcomes many of the limitations of file and block storage thanks to its design and its fundamental ability to scale out infinitely. Because a large percentage of backup data is kept for long-term retention, Object Storage seems a perfect fit. Though the likes of Amazon, Azure and IBM Cloud offer Object Storage, the number of organizations that have deployed Object Storage on premises remains relatively low. The popular trend is to consume cloud-based Object Storage platforms to take advantage of the hyperscalers’ own economies of scale, which can’t be matched. With the cost of storage at fractions of a cent per GB, organizations’ desire to consume cloud-based Object Storage has increased, and many have become aware of its benefits.

Introducing Veeam Cloud Tier

With the launch of Update 4 for Veeam Backup & Replication 9.5, we have added Veeam Cloud Tier as an innovative new way to extend backup repositories to the cloud, effectively delivering an infinitely scalable Scale-out Backup Repository. By using the new Object Storage Repository as a Capacity Tier extent of the Scale-out Backup Repository, we have fundamentally changed the way organizations and our Veeam Cloud & Service Provider (VCSP) partners will think about designing and architecting backup repositories.
By extending the Scale-out Backup Repository to take advantage of Object Storage, whether that be Amazon S3, Azure Blob, IBM Cloud Object Storage or any S3-compatible platform (hosted or internal), this feature can tier data blocks and offload them from the local Performance Tier extents to Capacity Tier extents, which can be configured to consume storage services as shown below.

How is Veeam Cloud Tier different?

The innovative technology built into this feature allows data to be stripped out of Veeam backup files (which are part of a sealed chain) and offloaded as blocks of data to Object Storage, leaving a dehydrated Veeam backup file on the local extents with just the metadata remaining in place. Offloading is driven by a policy set on the Scale-out Backup Repository that dictates the operational restore window during which local storage is used as the primary landing zone for backup data; a tiering job then processes eligible data every four hours. The result is a space-saving, smaller footprint on local storage without sacrificing any of Veeam’s industry-leading recovery operations. This is what truly sets the feature apart and means that even with data residing in the Capacity Tier, you can still perform:

  • Instant VM Recoveries
  • Entire computer and disk-level restores
  • File-level and item-level restores
  • Direct Restore to Amazon EC2, Azure and Azure Stack

Step back and think about what that means: with Veeam Cloud Tier you can now recover or restore directly from object storage without any additional, potentially expensive components. With that, you can start to appreciate just how innovative Veeam Cloud Tier is. We have also built in further space savings in the form of effective source-side dedupe, whereby identical blocks of data are not offloaded to object storage again, reducing both the amount of consumed storage and transfer times up to the Capacity Tier. Finally, Intelligent Block Recovery sources data blocks from the local backup files instead of from what is tiered to object storage whenever possible, resulting not only in faster recovery times but, more importantly, in cost savings when pulling data back from object storage services that charge for egress.
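Conceptually, Intelligent Block Recovery boils down to a local-first block lookup. Here is a minimal sketch of that idea; the block IDs, the two stores and the egress price are invented for illustration and do not reflect Veeam's actual implementation:

```python
# Hypothetical sketch of a local-first block fetch, in the spirit of
# Intelligent Block Recovery: prefer blocks still present in the local
# dehydrated backup file, fall back to object storage only for blocks
# that were offloaded, and tally the egress cost that was avoided.

BLOCK_SIZE = 1024 * 1024  # assume 1 MiB blocks for the example

def restore_blocks(needed, local_store, object_store, egress_per_gb=0.09):
    """Return restored blocks plus the egress cost avoided by local reads."""
    restored, egress_blocks = {}, 0
    for block_id in needed:
        if block_id in local_store:            # cheap: local read
            restored[block_id] = local_store[block_id]
        else:                                  # expensive: cloud read
            restored[block_id] = object_store[block_id]
            egress_blocks += 1
    avoided = (len(needed) - egress_blocks) * BLOCK_SIZE / 1024**3 * egress_per_gb
    return restored, avoided

local = {"a": b"...", "b": b"..."}            # blocks kept in the local shell
cloud = {"c": b"...", "d": b"..."}            # blocks offloaded to object storage
blocks, saved = restore_blocks(["a", "b", "c"], local, cloud)
```

Two of the three blocks are served locally, so only one block incurs egress charges.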


For all Veeam customers and partners, both end users and VCSP partners alike, Veeam Cloud Tier represents an important inflection point in the way in which backup repositories are designed and built. No longer are there limitations on how big backup repositories can grow before complications arise from the accelerated growth of data. We have leveraged the power of the cloud with the efficiencies and cost savings of Object Storage platforms to deliver a feature that is unique in the market and we have been able to deliver this in such a way that no industry leading Veeam functionality has been lost. Update 4 is now Generally Available and can be downloaded here.

Original Article:


How to restore and convert your workloads directly to the cloud


Veeam Software Official Blog  /  David Hill

Veeam has been expanding the capabilities of the Veeam Availability Suite product set. Recently, Veeam announced Veeam Cloud Mobility, providing workload portability across clouds and different platforms.
In this blog, we will focus on some of the technical aspects of Veeam Cloud Mobility. With the latest release of Veeam Availability Suite 9.5 Update 4, Veeam provides an extensive set of restore options. Building on the existing Direct Restore to Microsoft Azure functionality are new capabilities including Direct Restore to Microsoft Azure Stack and Amazon AWS EC2. With Direct Restore, a user can take any backup from any on-premises platform and move it directly to the cloud. This is what we will look at here.

Direct Restore to Amazon AWS EC2

First, let’s take a look at how easily we can restore workloads to Amazon AWS EC2. We begin by selecting the backup of the workload we want to run in the cloud, then simply right-click and select Restore to Amazon EC2.

We select the AWS account we want to use, the AWS region, and then the data center region to which we want to restore the workload.

We can now change the original VM name to an EC2-specific instance name, with the option to add a prefix or suffix.

Select the EC2 instance type and which kind of license to use: bring your own (BYOL) or lease a license from Amazon.

Pick your Amazon VPC, subnet and security group.

Pick the proxy options (the proxy performs the actual restore into Amazon AWS EC2).

Then the restore will begin and the status will update during the process.

That is how simple it is to move a workload from an on-premises backup to Amazon AWS EC2. The great aspect of this capability is its simplicity: seven or eight steps and you have migrated a workload to the public cloud.
The steps are the same for Microsoft Azure and Microsoft Azure Stack as well.
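The wizard steps above amount to collecting a handful of restore parameters. As a purely illustrative sketch (the record type and field names here are invented for the example and do not mirror Veeam's APIs), those choices could be captured like this:

```python
# Hypothetical record of the choices the Restore to Amazon EC2 wizard
# walks through: account, region, naming, instance sizing, licensing,
# networking and the proxy that performs the restore.
from dataclasses import dataclass

@dataclass
class EC2RestorePlan:
    aws_account: str
    region: str
    instance_name: str          # EC2-specific name, optionally prefixed/suffixed
    instance_type: str
    license_model: str          # "BYOL" or license leased from Amazon
    vpc_id: str
    subnet_id: str
    security_group: str
    proxy: str                  # appliance used to do the actual restore

    def named(self, prefix="", suffix=""):
        """Apply the optional prefix/suffix from the naming step."""
        return prefix + self.instance_name + suffix

plan = EC2RestorePlan("prod-account", "eu-west-1", "web01", "t3.large",
                      "BYOL", "vpc-123", "subnet-456", "sg-789", "restore-proxy")
print(plan.named(prefix="restored-"))  # restored-web01
```

For Azure and Azure Stack the same shape applies, with the networking fields swapped for their Azure equivalents.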

Virtual machine conversion

With any migration, it is important to understand that a conversion has to take place. Consider moving a VMware vSphere VM to Amazon AWS EC2. As one would expect, Amazon does not run VMware vSphere to provide the IaaS capabilities in EC2; it uses a different hypervisor. To restore a VMware vSphere VM to Amazon AWS EC2, a conversion therefore has to take place, changing the VMware vSphere virtual machine files (.vmdk, .vmx) into their Amazon equivalents. In addition, Amazon AWS EC2 does not directly support GPT boot volumes, so those must be converted as well. The diagram below shows how this works.


As companies invest further in multi-cloud capabilities and strategy, their environments become more complex to operate. With Veeam Cloud Mobility, having the ability to move workloads to and from any platform in any location is a major step forward in simplifying the complexities that a multi-cloud strategy brings.
If you or your customer have not yet taken a free trial of Veeam, now is the perfect time to try Veeam Availability Suite for your cloud strategy!




Veeam innovation continues


Veeam Executive Blog – The Availability Lounge  /  Jason Buffington

At our Velocity 2019 event last week, we celebrated four Veeam partners who developed some really great solutions that are powered by Veeam technologies. Our most heartfelt congratulations and thanks to each of these Veeam Innovation Award (VIA) recipients. If you haven’t seen them yet, please check out the short videos of each VIA partner and their solutions:
In the weeks leading up to Velocity, the general public had the opportunity to vote on a “Best of Show” among the four VIA awardees. The winner of Best of Show for Velocity 2019 is OffsiteDataSync! I had the chance to visit with Matt Chesterton, CEO of OffsiteDataSync, after the award was presented:

But wait, there’s more!
At Velocity, we celebrated Veeam partners who built great solutions to offer to customers. At VeeamON, we will be celebrating Veeam customers who are achieving great outcomes through the use of Veeam technologies (including partner services). Here are the details:

  • Customer solutions must include some level of significant deployment (which can be on-premises or cloud-based), but the nomination may come from the customer themselves or their service provider, Veeam partner, or Veeam point-of-contact (anyone directly involved in the deployment and ongoing operation).
  • Customer nominations will be accepted from February 1 through March 8. All complete submissions received by this date are eligible for review and could be selected as a VIA awardee.
  • Upon closure of the submission window, Veeam will leverage a diverse board of roughly 30 judges to determine four VIA winners, who will be notified approximately two weeks after submission closure. After notification, we’ll be sending a Veeam video team to interview the four winning organizations (and their partners where applicable).
  • And in May, we’ll open voting for the general public to vote among the four VIAs for a Best of Show at VeeamON 2019 in Miami, Florida on May 22.

Nominations for VIAs for customers are now OPEN. Nominate yourself or your customer for a Veeam Innovation Award. Good luck!



Watch | Axiz, Cisco, Veeam partner to secure hyper-availability – TechCentral


“veeam” – Google News

At Axiz, we know better than most that today’s businesses understand that hyper-availability is the new norm.
This means that data and applications have become critical, and customers, users and vendors want them to be available at all times, and on all devices.
The fact is, in this age of hyper-availability, any disruption or downtime is unacceptable. Legacy systems simply can’t keep up with today’s modern, highly virtualised environments, and IT departments end up spending too much time managing the disparate parts of their data storage environments just to keep them up, available and secure.

Veeam and Cisco have addressed this by uniting the power of hyper-converged infrastructure with data protection.
Axiz is now able to offer its customers a new, highly resilient data-storage management platform that delivers seamless scalability, ease of management and support for multi-cloud environments.
The new solution combines the Veeam Hyper-Availability Platform with Cisco’s innovative Hyper-Converged Infrastructure (HCI) solution, HyperFlex, to meet the needs of organisations.
By implementing the new solution, Axiz customers need no longer battle with outdated, unreliable legacy tools that cannot scale, or even hope to provide hyper availability that has become crucial for modern organisations.

  • This promoted content may have been paid for by the party concerned



Veeam’s NEW native cloud object storage delivers infinite capacity


Veeam Software Official Blog  /  Edward Watson

Veeam has always been a company passionate about addressing its customers’ needs. In the latest release of NEW Veeam Availability Suite 9.5 Update 4, we continued this trend by meeting an important need for many customers with our NEW native object storage integration: Veeam Cloud Tier. In this blog, we’ll cover the challenges we’re addressing through this new feature — what it is, how it works and the benefits you can take advantage of. You’ll also learn a few points about the uniqueness and innovation built into Veeam Cloud Tier compared to other backup providers. Veeam is changing the data protection game again!

The challenge of data growth and compliance

One thing is certain about data today — it’s growing exponentially. There are many sources on data growth prediction but there seems to be a consensus that the size of the digital universe will double every two years at least.
With this rapid increase of data created by organizations, it’s no surprise that challenges are starting to take hold. Hardware storage costs are skyrocketing, storage capacity planning and management is becoming more difficult, and all the while this resource juggling takes away precious time from core business priorities. Sound familiar? Now, as data begins to reach new highs we’re seeing organizations make a deliberate attempt to manage costs — with many organizations now having a mandate to reduce hardware costs listed as a priority. In addition to the growth, many organizations are having to meet compliance requirements that require they hold onto data for longer and longer periods of time.

What can IT do?

It’s clear there are two major challenges: data growth and longer-term data retention needs. IT departments need to carefully weigh a few factors when trying to solve these issues, but they should start with two main considerations:

  • Investigate affordable storage: The most obvious answer to reducing hardware costs is to adopt more affordable storage. For organizations looking for long-term data retention with cost-efficiency and scalability, there is no better place to look than cloud-based or on-premises object storage targets. And this is precisely why Veeam customers have requested native object storage integration. When accessibility and quick recovery are still an integral part of a long-term retention strategy, object storage makes the most sense to meet this need.
  • Take a tiered approach: Choosing a more affordable storage option is half the battle — now you must make sure you are freeing up your local storage in the best, most intelligent way possible. Taking a tiered approach to data management allows you to effectively free up only the storage you want freed up, and tier off only the data you need for longer-term retention. This tiered approach allows an overall better capacity management process. This should ideally be a “set it and forget it” solution that automatically tiers off files to the storage according to your exact retention needs.
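The "set it and forget it" tiering described above reduces to one decision: which sealed backup files have aged out of the retention window you want kept locally? A minimal sketch (file names, ages and the window length are invented for illustration):

```python
# Hypothetical sketch of window-based tiering: sealed backup files whose
# age exceeds the operational restore window become candidates for
# offload to the capacity (object storage) tier; everything newer, and
# any file in a still-open chain, stays on local storage.
from datetime import datetime, timedelta

def files_to_offload(backup_files, window_days, now=None):
    """backup_files: list of (name, created, sealed) tuples."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    return [name for name, created, sealed in backup_files
            if sealed and created < cutoff]

now = datetime(2019, 3, 1)
files = [
    ("full_jan.vbk", datetime(2019, 1, 5), True),    # old and sealed: offload
    ("inc_feb.vib",  datetime(2019, 2, 25), True),   # inside the window: keep
    ("full_open.vbk", datetime(2019, 1, 1), False),  # chain not sealed: keep
]
print(files_to_offload(files, window_days=14, now=now))  # ['full_jan.vbk']
```

Run on a schedule, a rule like this frees local capacity automatically while the operational restore window stays fully local.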

Veeam Cloud Tier to the rescue!

Ok, so we’ve covered the challenges and what to look for in a solution; now let’s look at Veeam’s new feature that meets all these requirements. Veeam Cloud Tier was designed to provide infinite capacity through native integrations with Amazon S3, Azure Blob Storage, IBM Cloud Object Storage, S3-compatible service providers or on-premises storage offerings. This feature delivers on the promise of instant and automatic tiering directly to object storage to free up local storage. It is also built with innovative attributes that reduce overall storage consumption and lower ingress and egress costs. Let’s check out the benefits.

Infinite Capacity

First, it’s important to note that Veeam Cloud Tier is the built-in automatic tiering capability of Veeam’s Scale-out Backup Repository (SoBR). A SoBR groups multiple on-premises repositories under one abstracted layer, delivering the storage efficiencies that enable the tiered approach to data management mentioned earlier. What makes Veeam Cloud Tier special is that, backed by object storage, the SoBR becomes effectively unlimited in capacity and scale: Cloud Tier acts as a capacity tier that automatically offloads backups to object storage.

No double charging

One of the most important aspects of Veeam Cloud Tier is its cost advantage over competitors. Believe it or not, many backup providers will actually “double charge” for the data you store in the cloud: not only do you pay the cloud provider for the storage itself, you also incur an additional charge simply for using object storage with their solution. Veeam Cloud Tier is built into the existing Veeam Availability Suite Enterprise and Enterprise Plus editions, and you will never see a “cloud tax” fee from Veeam on your bill!

No vendor lock-in

Veeam has always been known as a storage-agnostic vendor. We want our customers to know that if they work with Veeam, they can trust us to deliver protection for any application and any data, in any cloud. Veeam Cloud Tier is no different: you will not be locked into a particular storage vendor. In fact, we are proud to offer the variety of storage options mentioned above, including both hyper-scale clouds and S3-compatible on-premises and cloud-based service provider options. The feature also lets service providers scale out their IaaS customers to hyper-scale clouds in addition to leveraging their own clouds.

Storage reduction magic

The built-in storage reduction technique is truly what sets Veeam apart from other solutions. We know that customers don’t want to have to send their entire backup to the cloud, or bring it back down for that matter… So, we developed a technique by which the on-premises files remain on-premises as a shell with metadata while the bulk is offloaded to object storage. This drastically reduces the size of the local storage without sacrificing any recoverability of the data. For instance, a 1TB VBK file on local storage could be reduced to a 20MB file.
With this lightweight file format customers incur only minimal cloud-based ingest fees for sending data to the cloud. They can also just as easily retrieve data from the cloud without having to recover the entire backup repository through Veeam’s file-based granular recovery options — minimizing egress charges and increasing productivity.
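The dehydration described above can be pictured as splitting a backup file into a tiny metadata shell plus content blocks destined for object storage. This toy sketch (block size and data structures are invented for readability, not Veeam's on-disk format) also shows why identical blocks are stored only once:

```python
# Toy sketch of "dehydrating" a backup file: the bulk of the data moves
# to object storage keyed by block hash (which also dedupes identical
# blocks), while only a small metadata shell listing the ordered block
# hashes stays on local storage.
import hashlib

BLOCK = 4  # tiny block size so the example stays readable

def dehydrate(data, object_store):
    shell = []                                # local metadata: ordered hashes
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        key = hashlib.sha256(block).hexdigest()
        object_store.setdefault(key, block)   # identical blocks stored once
        shell.append(key)
    return shell

def rehydrate(shell, object_store):
    """Rebuild the original file from the shell's ordered block hashes."""
    return b"".join(object_store[key] for key in shell)

store = {}
shell = dehydrate(b"AAAABBBBAAAA", store)     # the repeated "AAAA" dedupes
print(len(shell), len(store))                 # 3 blocks, only 2 stored
```

Three logical blocks collapse to two stored objects, and the local shell holds only the hashes needed to rehydrate the file on demand.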
Here is a view of the Veeam Cloud Tier (Capacity Tier) within the Scale-out Backup Repository:

Take away

It’s clear that massive data growth and compliance requirements continue to drive up not only the cost of local storage but also the length of time organizations are required to hold onto data. Object storage is becoming increasingly appealing to many organizations for long-term data retention needs, but leveraging the right solution is imperative to enable a tiered approach to Intelligent Data Management. Veeam Cloud Tier provides a simplified approach, enabling customers to natively tier backup files to a variety of cloud and on-premises object storage targets including Amazon S3, Azure Blob Storage, IBM Cloud Object Storage and S3-compatible service providers. In addition, Veeam Cloud Tier has many benefits over other solutions, including no “cloud tax” and no vendor lock-in.
If you or your customers are not yet leveraging Veeam Availability Suite, now is the absolute perfect time to execute on this tiering strategy to take advantage of the infinite capacity Veeam delivers on object storage.





HyperFlex 4.0 and Veeam Availability Suite 9.5 Update 4 deliver big!


Veeam Software Official Blog  /  Andrew Lickly

With the announcement today of Cisco HyperFlex 4.0 and the Veeam announcement last week of Veeam Availability Suite 9.5 update 4, there is a lot to digest on all the benefits these provide to our joint customers from the core to the edge to the cloud and back again.

There’s magic in the air

It seems it wasn’t that long ago that HyperFlex version 1.0 was released. In just 2.5 years, HyperFlex has gone from inception to a leader in the Gartner Magic Quadrant for Hyperconverged Infrastructure (HCI). This is a testament to Cisco’s innovation: HyperFlex is not just another hyperconverged infrastructure solution but a true next-generation one, with the features and benefits customers require in today’s frenetic IT world. Veeam is also a leader in the Gartner Magic Quadrant for backup solutions, which gives customers added confidence when pairing Cisco and Veeam solutions.

HyperFlex anywhere

HyperFlex’s core values have always been simplicity, agility and multi-cloud services. HyperFlex 4.0 builds on this with the recognition that customers live in a distributed, multi-site world, adding several key features to match. HyperFlex can now use Cisco Intersight, Cisco’s cloud-based management platform, to streamline, orchestrate and automate the management and deployment of HyperFlex HCI. Together, HyperFlex and Intersight extend the simplicity and efficiency of HCI from core data centers to the edge, with consistent policy enforcement and cloud-powered systems management.

Veeam has worked with HyperFlex since its initial release, integrating with HyperFlex native snapshots to back up and replicate data on HyperFlex systems. Veeam can replicate from one HyperFlex cluster to another, back up to a main data center and move backup copy jobs to other sites or the cloud, delivering the application and data Availability customers require. Building on that automation and orchestration, Veeam Availability Suite 9.5 Update 4 adds intelligent automation and orchestration for virtual environments: proactive monitoring and alerting, backed by intelligent automation, to flag and resolve potential problems before they have operational impact. Together, Veeam and Cisco HyperFlex have greatly simplified the deployment, management and Availability of multi-site distributed environments.


HyperFlex 4.0 also delivers many new performance and security enhancements specifically for mission-critical business applications like SAP. SAP is critical to customers’ business, and it requires an infrastructure that is fast, offers consistent performance and is highly available and protected. Cisco HyperFlex is SAP HANA certified and provides the performance needed for SAP application workloads and databases with simple management, reduced TCO and cloud-like flexibility. With the latest release, Veeam is now SAP HANA certified as well. This is big news for our joint SAP HANA customers: they can use Veeam to protect their SAP HANA deployments with native SAP database backup methods, something all SAP administrators prefer, and they get enhanced backup capabilities like Instant VM Recovery and optimized backups through Veeam’s native HyperFlex snapshot integration. SAP users will also be able to leverage Veeam DataLabs to easily deploy SAP test environments and clone SAP databases from Veeam repositories.
Putting it all together, customers can run SAP HANA databases and SAP applications on Cisco HyperFlex, modernizing their production environments with SAP HANA and HyperFlex and protecting it using Veeam’s native HyperFlex snapshots and an SAP-certified plug-in for database backup and restores. Veeam and Cisco have worked together for years with Veeam running on Cisco UCS storage servers, and, at the end of last year, Cisco and Veeam introduced a new joint solution, Veeam Availability on Cisco HyperFlex, an enterprise-grade data protection solution running Veeam services and repository on HyperFlex. Together, we deliver an end-to-end solution; customers can now run their production SAP environments on HyperFlex while keeping it highly available with a modern data protection solution, Veeam running on another Cisco HyperFlex System.

From the edge to the center to the cloud

What Veeam and Cisco are really delivering is simplicity, flexibility and lower costs. HyperFlex 4.0 is introducing smaller 2-node edge clusters for ROBO environments, combined with cloud-based management, making it easier to deploy and manage these systems. Veeam Availability Suite 9.5 Update 4 adds Cloud Data Management, Veeam Cloud Mobility and Veeam Cloud Tier, all designed to make it easier for customers to move workloads on premises, to the cloud and back again.
The result is that customers and IT departments today, whether they are large or small, are being challenged to do more with less, while innovating the business and driving measurable impact. Cisco and Veeam continue to work together to deliver solutions that offer simplicity, flexibility and lower TCO that provide the business outcomes our customers require.
All from two industry-leading vendors! Don’t take our word for it, go ask Gartner!
For more information on joint Cisco and Veeam solutions and to receive exclusive announcements, subscribe on our Digital Hub.



Veeam Availability Suite 9.5 Update 4: New Stuff!


vZilla  /  michaelcade

After a super busy week last week, with little chance to share some of the top features Veeam has shipped in this monster release, I am now back home and can get some content out the door.
I want this post to touch on some of the top features that are worth a deeper look.

External Repository

Lots of people jumping into the new Veeam Backup & Replication console will see several new “repositories” that can be added to their Veeam backup infrastructure. One of them is the new External Repository.
The External Repository is the first point of integration since the acquisition of N2W Software in early 2018. Development of Veeam N2WS Backup & Recovery (formerly known as Cloud Protection Manager) is still rather separate from core Veeam development; however, the latest release adds the capability to tier EBS snapshots of your EC2 instances into an S3 repository to reduce cloud storage costs.
For a summary on the broader capabilities of Veeam N2WS Backup & Recovery you can find that here.
In a nutshell, this gives us the ability to protect our EC2 instances within AWS and store the backups in an S3 bucket that can then be accessed by Veeam Backup & Replication; the VBR server can really be located anywhere with access to the bucket. This opens the door to various granular recovery techniques, including file-level and application-item-level recovery. It also enables sending the data to a secondary location to adhere to the 3-2-1 backup methodology, meaning we can use Veeam backup copy jobs, tape, or even send these backup files to one of the many Veeam Cloud Connect Backup as a Service providers.
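The 3-2-1 methodology mentioned above (three copies of the data, on two different media, with one copy off site) can be checked mechanically. A hypothetical sketch, with the copy records invented for illustration:

```python
# Hypothetical check of the 3-2-1 rule for a set of backup copies:
# at least 3 copies, on at least 2 distinct media types, with at least
# 1 copy off site.
def meets_3_2_1(copies):
    """copies: list of (media_type, offsite) tuples."""
    return (len(copies) >= 3
            and len({media for media, _ in copies}) >= 2
            and any(offsite for _, offsite in copies))

copies = [
    ("disk", False),         # primary backup on the local repository
    ("s3",   True),          # external repository in Amazon S3
    ("tape", False),         # backup copy job to tape
]
print(meets_3_2_1(copies))   # True
```

Adding the S3 external repository is what satisfies both the second medium and the off-site copy in this example.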
The smart thing here is the portable data format: the .VBK file found in the External Repository is the same as, or at least very similar to, the one we get from any Veeam on-premises backup (apart from VBO365).

Cloud Mobility: Direct Restore to AWS

“Provides easy portability and recovery of ANY on-premises or cloud-based workloads to AWS, Azure and AzureStack.”
With Update 4, Veeam extended the ability to restore backup files directly into Amazon EC2 and Microsoft Azure Stack; prior to this, Veeam could already send backup files to Azure and convert them into Azure virtual machines. The new feature lets customers restore VMs that have been backed up by Veeam Backup & Replication directly into Amazon EC2 instances.
Here is that portable data format again: VBK files are created when a backup is taken of anything shown on the left below, meaning we can convert them into any of the targets above.

This feature can be used for VM migrations into the public cloud, and also for provisioning workloads from backup sets into the public cloud for test and development scenarios.

Veeam looks after the conversion by leveraging the AWS VM conversion process, which adds the EC2 configuration, drivers and modules the machine needs for its storage and network devices, and injects the utilities and services necessary to interact with AWS.
In addition to the AWS conversion engine, the restore mechanism can also convert EFI boot to BIOS boot for Windows machines.

Cloud Tier

In my opinion, this is the most innovative feature release from Veeam in a while. Leveraging Object Storage is not new, but the way in which this feature uses Object Storage is the differentiator.
I am conscious that we have some more detailed posts coming out and I don’t want to steal their thunder so I will briefly touch on what this feature gives you within your Veeam environment.
Firstly, I hope you have heard of our Scale Out Backup Repository; the feature arrived back in 2016 with v9, and here is a great FAQ for a quick 101.
The Cloud Tier is an extension of the Scale Out Backup Repository, meaning you can leverage object storage to tier older backup files out to cheaper storage.
To really see the power of this, check out the live launch event linked below where you can see Anthony demo this live and talk through the concept and capabilities.

Veeam DataLabs: Secure Restore

I must mention this, as it was my demo on the main stage. Veeam DataLabs is a group of functions available within Veeam Availability Suite and Veeam Availability Orchestrator; the Secure Restore feature, announced and released with Update 4, gives us the ability to check for malware during the recovery process. I will put some more detail on this soon. Lots of content is coming around Veeam DataLabs, covering both the new features and capabilities that have been in the product for years!

Veeam ONE

I also must mention Veeam ONE, as it got a huge release here, with some seriously smart features that remove manual intervention points from the day-to-day running of your environment. As with Secure Restore, I am going to put some effort into more detailed posts on this one. Two things to look at are “Intelligent Diagnostics” and “Remediation Actions”; both deserve their own posts.

Other useful references:

Update 4 Launch Event Video and Recap Links – Anthony Spiteri
Veeam Executive Blog – Danny Allan
My Top Three Favourite Veeam 9.5 Update 4 Features – Melissa Palmer
Storage Review



Veeam cloud backup gains tiering, mobility, AWS enhancements



The latest version of Veeam Software’s flagship Availability Suite focuses on facilitating tiering and migrating data into public clouds.
Veeam launched what it calls Update 4 of Availability Suite 9.5 today, less than a week after picking up $500 million in funding that can help the vendor expand its technology through acquisitions, as well as organically.
Availability Suite includes Backup & Replication and the Veeam One monitoring and reporting tool. Update 4 of Veeam Availability Suite 9.5 had been available to partners for a short time and is now generally available to end-user customers.
The new Veeam Availability Suite update includes Cloud Tier and Cloud Mobility features, and extends its Direct Restore capability to AWS.
With its cloud focus, Veeam is making a push in a trending area in the market, according to Christophe Bertrand, senior analyst at Enterprise Strategy Group.
“Our research clearly indicates that backup data and processes are shifting to the cloud,” Bertrand wrote in an email. “Cloud-related features are going to be critical in the next few years as end users continue to ‘hybridize’ their environments, and there is still a lot to do to harmonize backup/recovery service levels and instrumentation across on-premises and cloud infrastructures.”

Cloud Tier, Cloud Mobility, N2WS integration launched

Veeam started out as backup for virtual machines but now also protects data on physical servers and cloud platforms.
Availability Suite’s Cloud Tier offers built-in automatic tiering of data in a new Scale-out Backup Repository to object storage. The Veeam cloud backup feature provides long-term data retention by using object storage integration with Amazon Simple Storage Service (S3), Azure Blob Storage, IBM Cloud Object Storage, S3-compatible service providers and on-premises storage products.


Cloud Tier helps to increase storage capacity and lower cost, as object storage is cheaper than block storage, according to Danny Allan, Veeam vice president of product strategy.
Cloud Mobility provides migration and recovery of on-premises or cloud-based workloads to AWS, Azure and Azure Stack.
Allan said Veeam cloud backup users simply need to answer two questions for business continuity: Which region and what size instance?
Veeam also extended its Direct Restore feature to AWS. It previously offered Direct Restore for Azure. Direct Restore allows organizations to restore data directly to the public cloud.
Veeam also added a new Staged Restore feature to its DataLabs copy management tool. Staged Restore reduces the time it takes to review sensitive data and remove personal information, streamlining compliance with regulations such as the GDPR. The new Secure Restore scans backups with an antivirus software interface to stop malware such as ransomware from entering production during recovery.
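The Secure Restore flow described above can be pictured as a gate in the recovery pipeline: scan the backup's contents before publishing anything to production. This is a hypothetical sketch; the scanner callback and signature stand in for a real antivirus interface and are not Veeam's actual API:

```python
# Hypothetical sketch of a Secure Restore-style gate: every file in the
# mounted backup is scanned before recovery proceeds; if anything is
# flagged, the restore is blocked so malware such as ransomware cannot
# re-enter production during recovery.
def secure_restore(files, scan):
    """files: {path: content}; scan: callback returning True if infected."""
    infected = [path for path, content in files.items() if scan(content)]
    if infected:
        return {"restored": False, "infected": infected}
    return {"restored": True, "infected": []}

SIGNATURE = b"malware-signature"              # invented signature for the demo
scanner = lambda content: SIGNATURE in content

clean = {"app.exe": b"ok", "data.db": b"rows"}
dirty = {"app.exe": b"ok", "evil.dll": b"xx" + SIGNATURE}
print(secure_restore(clean, scanner)["restored"])   # True
print(secure_restore(dirty, scanner)["infected"])   # ['evil.dll']
```

A real implementation could instead proceed with networking disabled when an infection is found, rather than aborting outright.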
The Veeam cloud backup updates also include integration with N2WS, an AWS data protection company that Veeam acquired a year ago. The new Veeam Availability for AWS combines Veeam N2WS cloud-native backup and recovery of AWS workloads with consolidation of backup data in a central Veeam repository. End users can move data to multi-cloud environments and manage it.
“This is designed for the hybrid customer, which is primarily what we see now,” Allan said.
For cloud-only customers, N2WS Backup & Recovery is still available from the AWS Marketplace.
Bertrand said he thinks the N2WS integration is a “starting point.”
“The key here is to offer the capability of managing all the backups in one place,” Bertrand wrote. “It gives end users better visibility over their Veeam backups whether in AWS or on premises.”
Veeam also updated its licensing. Users can move licenses automatically when workloads move between platforms, such as multiple clouds.

How Veeam products stack up in crowded market

Private equity firm Insight Venture Partners, which acquired a minority share in Veeam in 2013, invested an additional $500 million in the software vendor last Wednesday.
Veeam has about 3,500 employees and plans to add almost 1,000 more over the coming year. In addition, last November, Veeam said it is investing $150 million to expand its main research and development center in Prague. Veeam claims to have about 330,000 customers.
Veeam founder Ratmir Timashev said the vendor is looking to grow further by using its latest funding haul to acquire companies and technologies.
Veeam isn’t the only well-funded startup in the data protection and data management market. Last week, Rubrik also completed a large funding round. The $261 million Series E funding will help Rubrik build out its Cloud Data Management platform. Cohesity closed a $250 million funding round and Actifio pulled in $100 million in 2018. The data protection software market also includes established vendors such as Veritas, Dell EMC, IBM and Commvault.
While many of its competitors now offer integrated appliances, Veeam is sticking with a software-only model. Instead of selling its own hardware, Veeam partners with such vendors as Cisco, Hewlett Packard Enterprise, NetApp and Lenovo.
“Unlike many others, we have a completely software-defined platform,” Allan said, “giving our customers choices for on-premises hardware vendors or cloud infrastructure.”

Original Article:


Chaining plans in Veeam Availability Orchestrator

Chaining plans in Veeam Availability Orchestrator

Notes from MWhite  /  Michael White

So you have installed VAO, tested your plans, and you are ready for a crisis.  But you want to know how you can trigger one plan and have them all go, right?  I can help with that.
The overview: you build plans, test them, and when you are happy you chain them.  But you need to make sure the order of recovery is what it needs to be.  For example, databases should be recovered before the applications that need them. In a crisis it is best to have email working so management can reassure the market, customers and employees.
In another article I am working on, I will show you a method of recovering multi-tier applications fast, rather than the slow method, which I suspect will be the default for many as it is obvious, and yet there is a much better way to do it.

Starting Point

In my screenshot below you can see my plans.  I want to recover EMail Protection first, followed by SQL, and finally View.  View requires SQL, and I need email recovered first so I can tell my customers and employees that we are OK.

And yes, my EMail Protection plan is not verified; I am having trouble with it, so it needs more time. But we can still chain plans.  When you do this yourself, make sure all your plans are verified.
The next step is to right-click on the EMail Protection plan and select Schedule.  You can also do that under the Launch menu.

Yes, I know we are not really going to schedule a failover, but this is the path we must take to chain these plans together, so that we can fail over the first one and have them all go one after another.
We need to authenticate first; we don’t want just anyone doing this, after all. Next we see a schedule screen, and you need to schedule a failover in the future.  Not to worry, we will not actually run it.

You choose Next, and Finish.
We now right click on the next plan and select Schedule.

This time, we enable the checkbox, but rather than selecting a date we choose a plan to fail over after.  So select the Choose a Plan button. In our case we are going to have this second plan fire after the first one.

Now we click Next, and Finish.
Now we right-click on the last plan and select Schedule. Then we enable the checkbox and select the previous plan.

Then you do the same old Next and Finish. Now when we look at our plans we see a little of what we have done, but we are not complete yet.

You can see in the Failover Scheduled column how the email plan, the first to be recovered, occurs on a particular date, but then we have View execute after SQL, and SQL after EMail.
Now, we must right-click on EMail and select Schedule, and disable the scheduled failover.

As you see above, remove the checkbox, then do the typical Next and Finish.

Now we see no scheduled failover for the EMail Protection plan, but we still see the other two plans with Run After, so if I execute a failover for EMail the other two will execute in the order we set.
So things are good.  Now one plan, once triggered, will cause the next to execute.  If you fail over EMail, it will trigger SQL, which will then trigger View.

Things to be aware of

  • The plans need to be enabled for this to work.
  • Test failovers are not impacted by this order of execution.
  • When you do a failover with any of these plans, you will be prompted to execute the other plans in the appropriate order. This will be enabled by default but you can disable it.

Once you have your plans chained, you should arrange a day and time to do a real failover to your DR site.  It is the best way to confirm things are working. BTW, all of my VAO technical articles can be found with this link.
=== END ===

Original Article:


10 Tips for a Solid AWS Disaster Recovery Plan

10 Tips for a Solid AWS Disaster Recovery Plan

N2WS  /  Cristian Puricica

AWS is a scalable and high-performance computing infrastructure used by many organizations to modernize their IT. However, no system is invulnerable, and if you want to ensure business continuity, you need to have some kind of insurance in place. Disaster events do occur, whether they are a malicious hack, natural disaster, or even an outage due to a hardware failure or human error. And while AWS is designed in such a way to greatly offset some of these events, you still need to have a proper disaster recovery plan in place. Here are 10 tips you should consider when building a DR plan for your AWS environment.

1. Ship Your EBS Volumes to Another AZ/Region

By default, EBS volumes are automatically replicated within the Availability Zone (AZ) where they were created, in order to increase durability and offer high availability. And while this protects you from relying on a lone copy, you are still tied to a single point of failure, since your data is located in only one AZ. In order to properly secure your data, you can either replicate your EBS volumes to another AZ or, even better, to another region.
To copy an EBS volume to another AZ, you simply create a snapshot of it and then recreate a volume in the desired AZ from that snapshot. And if you want to move a copy of your data to another region, take a snapshot of your EBS volume, then utilize the “copy” option and pick a region where your data will be replicated.
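The snapshot-then-copy flow just described can be sketched with boto3. This is a sketch, not a definitive implementation: the volume ID and regions are placeholders, and note that the copy request must be issued from a client in the destination region. The small helper just builds the CopySnapshot arguments, which keeps it easy to test.

```python
def snapshot_copy_params(snapshot_id, source_region, description="DR copy"):
    """Build the arguments for EC2 CopySnapshot (pure, easy to test)."""
    return {
        "SourceRegion": source_region,
        "SourceSnapshotId": snapshot_id,
        "Description": description,
    }


def copy_volume_to_region(volume_id, source_region, dest_region):
    """Snapshot an EBS volume, wait for it, then copy it to another region."""
    import boto3  # imported here so the sketch can be read without boto3 installed

    src = boto3.client("ec2", region_name=source_region)
    dst = boto3.client("ec2", region_name=dest_region)

    # Step 1: snapshot the volume in its home region and wait for completion.
    snap = src.create_snapshot(VolumeId=volume_id, Description="DR copy")
    src.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # Step 2: copy the completed snapshot from the destination-region client.
    copy = dst.copy_snapshot(**snapshot_copy_params(snap["SnapshotId"], source_region))
    return copy["SnapshotId"]
```

From the new snapshot you can then create a volume (or an AMI) in any AZ of the destination region.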

2. Utilize Multi-AZ for EC2 and RDS

Just like your EBS volumes, your other AWS resources are susceptible to local failures. Making sure you are not relying on a single AZ is probably the first step you can take when setting up your infrastructure. For your database needs covered by RDS, there is a Multi-AZ option you can enable in order to create a backup RDS instance, which will be used in case the primary one fails by switching the CNAME DNS record of your primary RDS instance. Keep in mind that this will generate additional costs, as AWS charges you double if you want a multi-AZ RDS setup compared to having a single RDS instance.
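Turning on Multi-AZ for an existing RDS instance is a single ModifyDBInstance call. A minimal boto3 sketch, with the instance identifier as a placeholder and the argument builder kept pure:

```python
def multi_az_modify_params(db_instance_id, apply_immediately=False):
    """Build the arguments for RDS ModifyDBInstance to turn on Multi-AZ."""
    return {
        "DBInstanceIdentifier": db_instance_id,
        "MultiAZ": True,
        # False defers the change to the next maintenance window,
        # avoiding an unplanned interruption in production.
        "ApplyImmediately": apply_immediately,
    }


def enable_multi_az(db_instance_id):
    """Ask RDS to create a synchronous standby in another AZ."""
    import boto3  # lazy import: the sketch is readable without boto3 installed

    rds = boto3.client("rds")
    return rds.modify_db_instance(**multi_az_modify_params(db_instance_id))
```

After the standby is provisioned, RDS handles the CNAME switch to it automatically on failover.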
Your EC2 instances should also be spread across more than one AZ, especially the ones running your production workloads, to make sure you are not seriously affected if a disaster happens. Another reason to utilize multiple AZs with your EC2 instances is the potential lack of available resources in a given AZ, which can occur sometimes. To properly spread your instances, make sure AutoScaling Groups (ASG) are used, along with an Elastic Load Balancer (ELB) in front of them. ASG will allow you to choose multiple AZs in which your instances will be deployed, and ELB will distribute the traffic between them in order to properly balance the workload. If there is a failure in one of the AZs, ELB will forward the traffic to others, therefore preventing any disruptions.
With EC2 instances, you can even go across regions, in which case you would have to utilize Route53 (a highly available and scalable cloud DNS service) to route the traffic, as well as do the load balancing between regions.
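The ASG-plus-ELB setup above can be sketched as an argument builder for CreateAutoScalingGroup. All IDs here are placeholders (launch template, subnets, target group ARN); the key idea is that listing one subnet per AZ in VPCZoneIdentifier is what spreads the instances across zones.

```python
def asg_params(name, launch_template_id, subnet_ids, target_group_arns,
               min_size=2, max_size=4):
    """Build CreateAutoScalingGroup arguments that span several AZs."""
    return {
        "AutoScalingGroupName": name,
        "LaunchTemplate": {
            "LaunchTemplateId": launch_template_id,
            "Version": "$Latest",
        },
        "MinSize": min_size,
        "MaxSize": max_size,
        # One subnet per AZ; the comma-separated list spreads instances
        # across all the listed Availability Zones.
        "VPCZoneIdentifier": ",".join(subnet_ids),
        # Registering the load balancer's target group lets the ELB/ALB
        # distribute traffic across whichever AZs still have instances.
        "TargetGroupARNs": target_group_arns,
    }


def create_multi_az_asg(**kwargs):
    import boto3  # lazy import so the pure builder can be tested offline

    return boto3.client("autoscaling").create_auto_scaling_group(**asg_params(**kwargs))
```

If one AZ fails, the ASG replaces capacity in the remaining subnets and the load balancer routes around the loss.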

3. Sync Your S3 Data to Another Region

When we consider storing data on AWS, S3 is probably the most commonly used service. That is the reason why, by default, S3 duplicates your data behind the scene to multiple locations within a region. This creates high durability, but data is still vulnerable if your region is affected by a disaster event. For example, there was a full regional S3 outage back in 2017 (which actually hit a couple other services as well), which led to many companies being unable to access their data for almost 13 hours. This is a great (and painful) example of why you need a disaster recovery plan in place.
In order to protect your data, or just provide even higher durability and availability, you can use the cross-region replication option which allows you to have your data copied to a designated bucket in another region automatically. To get started, go to your S3 console and enable cross-region replication (versioning must be enabled for this to work). You will be able to pick the source bucket and prefix but will also have to create an IAM role so that your S3 can get objects from the source bucket and initiate transfer. You can even set up replication between different AWS accounts if necessary.
Do note though that the cross-region sync starts from the moment you enable it, so any data that already exists in the bucket prior to this will have to be synced by hand.
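Enabling cross-region replication from code looks roughly like the sketch below. Bucket names and the IAM role ARN are placeholders; versioning is switched on for the source first, as required, and the destination bucket must already exist in the other region with versioning enabled as well.

```python
def replication_config(role_arn, dest_bucket):
    """Build a ReplicationConfiguration for S3 PutBucketReplication."""
    return {
        "Role": role_arn,  # IAM role allowing S3 to read source objects
        "Rules": [{
            "ID": "dr-replication",
            "Status": "Enabled",
            "Prefix": "",  # empty prefix = replicate the whole bucket
            "Destination": {"Bucket": f"arn:aws:s3:::{dest_bucket}"},
        }],
    }


def enable_cross_region_replication(src_bucket, dest_bucket, role_arn):
    import boto3  # lazy import keeps the config builder testable offline

    s3 = boto3.client("s3")
    # Versioning must be enabled on BOTH buckets before S3 accepts the rule;
    # this only handles the source, the destination is assumed versioned.
    s3.put_bucket_versioning(
        Bucket=src_bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )
    s3.put_bucket_replication(
        Bucket=src_bucket,
        ReplicationConfiguration=replication_config(role_arn, dest_bucket),
    )
```

Remember the caveat above: objects uploaded before the rule was created are not replicated and need a one-time manual copy (for example with `aws s3 sync`).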

4. Use Cross-Region Replication for Your DynamoDB Data

Just like your data residing in S3, DynamoDB only replicates data within a region. For those who want to have a copy of their data in another region, or even support for multi-master writes, DynamoDB global tables should be used. These provide a managed solution that deploys a multi-region multi-master database and propagates changes to various tables for you. Global tables are not only great for disaster recovery scenarios but are also very useful for delivering data to your customers worldwide.
Another option would be to use scheduled (or one-time) jobs which rely on EMR to back up your DynamoDB tables to S3, which can be later used to restore them to, not only another region, but also another account if needed. You can find out more about it here.
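Creating a global table via boto3 uses the CreateGlobalTable call. This sketch assumes identically named tables with identical key schemas and DynamoDB Streams already enabled in every listed region; the table name and regions are placeholders.

```python
def global_table_params(table_name, regions):
    """Build the arguments for DynamoDB CreateGlobalTable."""
    return {
        "GlobalTableName": table_name,
        "ReplicationGroup": [{"RegionName": r} for r in regions],
    }


def create_global_table(table_name, regions):
    """Join per-region replica tables into one multi-master global table."""
    import boto3  # lazy import so the builder above can be tested offline

    # Prerequisite: a table with this name, the same key schema, and
    # Streams (NEW_AND_OLD_IMAGES) must already exist in every region.
    dynamodb = boto3.client("dynamodb", region_name=regions[0])
    return dynamodb.create_global_table(**global_table_params(table_name, regions))
```

Once created, writes in any listed region propagate to the others, which covers both the DR and the worldwide-delivery use cases mentioned above.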

5. Safely Store Away Your AWS Root Credentials

It is extremely important to understand the basics of security on AWS, especially if you are the owner of the account or the company. AWS root credentials should ONLY be used to create the initial users with admin privileges, who take over from there. The root password should be stored away safely, and programmatic keys (Access Key ID and Secret Access Key) should be disabled if already created.
Somebody getting access to your admin keys would be very bad, especially if they have malicious intentions (disgruntled employee, rival company, etc.), but getting your root credentials would be even worse. If a hack like this happens, your root user is the one you would use to recover, whether to disable all other affected users, or contact AWS for help. So, one of the things you should definitely consider is protecting your account with multi-factor authentication (MFA), preferably a hardware version.
The advice to protect your credentials sometimes sounds like a broken record, but many don’t understand the actual severity of this, and companies have gone out of business because of this oversight.
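The two root-account risks called out above (lingering root access keys, no MFA) can be checked quickly against the IAM account summary. A sketch, with the decision logic kept as a pure function so it is easy to test:

```python
def root_account_risks(summary_map):
    """Flag risky root-account settings from an IAM account summary (pure)."""
    risks = []
    if summary_map.get("AccountAccessKeysPresent", 0):
        risks.append("root access keys exist - delete them")
    if not summary_map.get("AccountMFAEnabled", 0):
        risks.append("no MFA on the root account - enable it")
    return risks


def audit_root_account():
    """Fetch the live account summary and report root-account risks."""
    import boto3  # lazy import keeps root_account_risks testable offline

    iam = boto3.client("iam")
    return iam and root_account_risks(iam.get_account_summary()["SummaryMap"])
```

An empty list from `audit_root_account()` means both boxes are ticked: no root keys, MFA enabled.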

6. Define your RTO and RPO

Recovery Time Objective (RTO) represents the allowed time it would take to restore a process back to service, after a disaster event occurs. If you guarantee an RTO of 30 minutes to your clients, it means that if your service goes down at 5 p.m., your recovery process should have everything up and running again within half an hour. RTO is important to help determine the disaster recovery strategy. If your RTO is 15 minutes or less, it means that you potentially don’t have time to reprovision your entire infrastructure from scratch. Instead, you would have some instances up and running in another region, ready to take over.
When looking at recovering data from backups, RTO defines which AWS services can be used as part of disaster recovery. For example, if your RTO is 8 hours, you will be able to utilize Glacier as backup storage, knowing that you can retrieve the data within 3–5 hours using standard retrieval. If your RTO is 1 hour, you can still opt for Glacier, but expedited retrieval costs more, so you might choose to keep your backups in S3 standard storage instead.
Recovery Point Objective (RPO) defines the acceptable amount of data loss measured in time, prior to a disaster event happening. If your RPO is 2 hours, and your system has gone down at 3 p.m., you must be able to recover all the data up until 1 p.m. The loss of data from 1 p.m. to 3 p.m. is acceptable in this case. RPO determines how often you have to take backups, and in some cases continuous replication of data might be necessary.
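The "how often to back up" consequence of an RPO can be captured in a tiny helper. The safety factor of 2 here is an assumption for illustration, not an AWS rule: backing up twice per RPO window leaves headroom for a job that runs long or fails once.

```python
def max_backup_interval_minutes(rpo_minutes, safety_factor=2):
    """Longest safe gap between backups for a given RPO, in minutes.

    safety_factor is a hypothetical margin: 2 means "back up twice per
    RPO window" so a single missed or slow backup does not break the RPO.
    """
    if rpo_minutes <= 0:
        raise ValueError("RPO must be positive")
    return rpo_minutes / safety_factor
```

For the 2-hour RPO in the example above, this suggests backing up at least every 60 minutes.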

7. Pick the Correct DR Scenario for Your Use Case

The AWS Disaster Recovery white paper goes to great lengths to describe various aspects of DR on AWS, and does a good job of covering four basic scenarios (Backup and Restore, Pilot Light, Warm Standby and Multi Site) in detail. When creating a plan for DR, it is important to understand your requirements, but also what each scenario can provide for you. Your needs are also closely related to your RTO and RPO, as those determine which options are viable for your use case.
These DR plans can be very cheap (if you rely on simple backups only for example), or very costly (multi-site effectively doubles your cost), so make sure you have considered everything before making the choice.

8. Identify Mission Critical Apps and Data and Design Your DR Strategy Around Them

While all your applications and data might be important to you or your company, not all of them are critical for running a business. In most cases not all apps and data are treated equally, due to the additional cost it would create. Some things have to take priority, both when making a DR plan, and when restoring your environments after a disaster event. An improper prioritization will either cost you money, or simply risk your business continuity.

9. Test your Disaster Recovery

Disaster Recovery is more than just a plan to follow in case something goes wrong. It is a solution that has to be reliable, so make sure it is up to the task. Test your entire DR process thoroughly and regularly. If there are any issues, or room for improvement, give them the highest possible priority. Also, don’t forget about your technical people: they too need to be up to the task. Have procedures in place to familiarize them with every piece of the DR process.

10. Consider Utilizing 3rd-party DR Tools

AWS provides a lot of services, and while many companies won’t ever use the majority of them, for most use cases you are being provided with options. But having options doesn’t mean that you have to solely rely on AWS. Instead, you can consider using some 3rd-party tools available in AWS Marketplace, whether for disaster recovery or something else entirely. N2WS Backup & Recovery is the top-rated backup and DR solution for AWS that creates efficient backups and meets aggressive recovery point and recovery time objectives with lower TCO. Starting with the latest version, N2WS Backup & Recovery offers the ability to move snapshots to S3 and store them in a Veeam repository format. This new feature enables organizations to achieve significant cost-savings and a more flexible approach toward data storage and retention. Learn more about this here.


Disaster recovery planning should be taken very seriously; nonetheless, many companies don’t invest enough time and effort to properly protect themselves, leaving their data vulnerable. And while people will often learn from their mistakes, it is much better not to make them in the first place. Make disaster recovery planning a priority and consider the tips we have covered here, but also do further research.
Try N2WS Backup & Recovery 2.4 for FREE!

Read Also

Original Article: