SAP HANA integrated backup is here!


Veeam Software Official Blog  /  Rick Vanover

SAP HANA is one of the most critical enterprise applications out there, and if you have worked with it, you know it likely runs part, if not all, of a business. In Veeam Availability Suite 9.5 Update 4, we are pleased to now have native, certified SAP support for backups and recoveries directly via backint.

What problem does it solve?

SAP HANA’s in-memory database platform requires a backup solution that is integrated with, and aware of, the platform. This SAP HANA support gives you a certified solution for SAP HANA backups, reduces the impact of running backups, ensures operational consistency, and lets you leverage the additional capabilities that Veeam Availability Suite has to offer, including point-in-time restores, database integrity checks, and storage efficiencies such as compression and deduplication.
This milestone comes after years of organizations wanting Veeam backups with their SAP installations. We spent many years advocating backing up SAP with BRTOOLS and leveraging image-based backups to prepare for tests. Now the story becomes even stronger, with support for Veeam to drive backint backups from SAP and store them in a Veeam repository. Specifically, this means that a backint backup can happen for SAP HANA and Veeam can manage the storage of that backup. It is important to note that the Veeam SAP Plug-In, which makes this native support work, is also supported for use with SAP HANA on Microsoft Azure.

How does it work?

The Veeam Plug-In for SAP HANA becomes a target available for native SAP HANA backups. When backups are performed, a number of backup types and targets can be selected, all natively within the SAP HANA application and SAP HANA tools like SAP HANA Studio, SAP HANA Cockpit or SQL-based command line entries. These types include file backups (plain copies of files), snapshots and complete data backups using backint. Backint is an API framework that allows third-party tools (such as Veeam) to directly connect the backup infrastructure to the SAP HANA database. The backint backup interval is configured in SAP HANA Studio, and that interval can be very small, such as 5 minutes. It is also recommended to enable log backups alongside the data backup (again, configured in SAP HANA Studio) to allow more granular restores, which will be covered a bit later on.
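For admins who prefer the SQL command line over SAP HANA Studio, the same operations can be expressed as SQL statements. The sketch below only builds the statement strings; the helper names and the backup prefix are illustrative assumptions, not part of the Veeam plug-in, while the SQL itself follows SAP HANA's documented syntax.

```python
# Illustrative sketch: the SQL statements behind a backint data backup
# and automatic log backup. Helper names and prefix value are
# assumptions; the SQL follows SAP HANA's documented syntax.
def backint_backup_sql(prefix):
    """SQL for a complete data backup routed through backint."""
    return "BACKUP DATA USING BACKINT ('{}')".format(prefix)

def enable_auto_log_backup_sql():
    """SQL to switch on automatic log backups (also settable in Studio)."""
    return ("ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') "
            "SET ('persistence', 'enable_auto_log_backup') = 'yes' "
            "WITH RECONFIGURE")

print(backint_backup_sql("FULL_2019_02_18"))
# BACKUP DATA USING BACKINT ('FULL_2019_02_18')
```

Executed through any SQL console connected to the database, the first statement drives a complete backint backup that Veeam then stores in its repository.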
SAP HANA can also take snapshots of its own application; while these lack consistency or corruption checks, snapshots are a great addition to the overall backup strategy. By most common perspectives, backint is the best approach for backing up SAP HANA systems, but using snapshots can add more options for recovery. The plug-in data flow for a backint backup as implemented in Veeam Availability Suite 9.5 Update 4 is shown in the figure below:

One of the key benefits of a backint backup of SAP HANA is that you can restore directly to a specific point in time, either from snapshots or from backint backups with point-in-time recovery. This is very important given how critical SAP HANA is to many organizations. So, when it comes to how often a backup is done, select the interval that meets your organization’s requirements, and make sure the option to enable automatic log backup is selected as well.

Bring on the Enterprise applications!

Application support is a recent trend here at Veeam, and I do not expect it to slow down any time soon! The SAP HANA and Oracle RMAN plug-ins are two big steps in bringing support for critical enterprise applications to Veeam. You can find more information on Veeam Availability Suite 9.5 Update 4 here.
The post SAP HANA integrated backup is here! appeared first on Veeam Software Official Blog.



VBO Office 365 Analysis with MS Graph


CloudOasis  /  HalYaman

Are you thinking of backing up your Microsoft Office 365 Exchange Online and wishing to size your backup accurately? Is there a way to measure the total size of your consumed Exchange Online storage, change rate and more?

The answers to these questions are essential when sizing backup solutions: you must know how much storage is needed for the backed-up data, and which backup solution components (server roles) you will need.
In the era of Software as a Service (SaaS), it is challenging to acquire this information, and then to get access to it. This makes solution sizing a challenging exercise for any business.

The Scenario

Let us consider the following scenario to understand the challenge a business faces when it wants to size and back up its Exchange Online. To use a specific example, I will use the Veeam Backup for Office 365 product to simplify the discussion.
To size a Veeam backup repository, you need several numbers to calculate the total size. They are:

  • The total size of the current mailboxes (Active Mailboxes); and
  • Change rate.

The good news is that Microsoft offers several ways to pull this info from Office 365; you can use either PowerShell or the MS Graph API.
In this example, I will discuss the MS Graph API to show you how to easily retrieve the mailbox size and change rate, and more, to size the Veeam Backup Repository and the Veeam Backup infrastructure.

Veeam VBO Sizing

Veeam Software has released a great sizing guide showing how to size Veeam Backup for Office 365 for Exchange Online. To access the guide, follow this link.
But how great would it be if we could automate the process! Using the Microsoft Graph API, with just two GET requests we can gather all the information we need to size the Veeam VBO repository and infrastructure. Have a look at these two URLs:
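The two URLs appeared as screenshots in the original post, so the exact calls below are my assumption; Microsoft Graph's reports endpoints such as getMailboxUsageDetail are the likely candidates. A sketch of the request URLs and of summing the documented "Storage Used (Byte)" column from the CSV the report returns:

```python
# Hedged sketch: candidate Graph reports URLs (the originals were shown
# as screenshots) and parsing of the CSV that getMailboxUsageDetail
# returns. No network call is made here; `sample` stands in for the
# report body an authenticated GET would retrieve.
import csv
import io

GRAPH = "https://graph.microsoft.com/v1.0"
URLS = [
    GRAPH + "/reports/getMailboxUsageDetail(period='D30')",
    GRAPH + "/reports/getMailboxUsageStorage(period='D30')",
]

def total_mailbox_bytes(report_csv):
    """Sum the 'Storage Used (Byte)' column of a mailbox usage report."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return sum(int(row["Storage Used (Byte)"]) for row in reader)

sample = ("Display Name,Storage Used (Byte)\n"
          "Alice,1073741824\n"
          "Bob,536870912\n")
print(total_mailbox_bytes(sample))  # 1610612736
```

In a real call, the same parsing is applied to the CSV body returned by an authenticated GET against either URL.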

From those two commands, we can extract all the information we need to size these Veeam items:

  • Backup Repository size;
  • Number of Servers needed;
  • Number of Proxies; and
  • Number of jobs.
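To make the list above concrete, here is a deliberately simplified sketch of the kind of calculation involved; the formula and the sample change rate are my own illustrative assumptions, not the official Veeam sizing guide's method.

```python
# Toy sizing estimate (an assumption for illustration, not Veeam's
# official formula): repository size = initial full copy + daily
# changes accumulated over the retention period.
def estimate_repo_gb(mailbox_gb, daily_change_rate, retention_days):
    """Estimate backup repository size in GB."""
    incrementals = mailbox_gb * daily_change_rate * retention_days
    return round(mailbox_gb + incrementals, 1)

# 1 TB of mailboxes, 2% daily change, 30 days of retention
print(estimate_repo_gb(1000, 0.02, 30))  # 1600.0
```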

VBO Sizing WebApp

Taking the information from the MS Graph API and the Veeam VBO sizing guide, I asked myself: “How can I build a WebApp that can be accessed from any web-enabled device, connect to the tenant’s Office 365, draw down the necessary information on behalf of the tenant, and then allow the tenant to choose the retention period?”
All of that has already been assembled in a very small WebApp that you can access via this link. You need only your Microsoft account to access your tenant’s Office 365 subscription.

How the WebApp works

When accessing the WebApp link, you will be greeted by a Connect button. This redirects you to the standard, secure Microsoft Azure login page to acquire the Graph token.
After the token has been retrieved, the WebApp sends several GET RESTful API commands to acquire the following information:

With that information acquired, the WebApp will help with sizing the Veeam Backup Repository, and also return the number of proxies, repositories, VBO servers and jobs you need to fully protect your Exchange Online.


The WebApp is a tool I wanted to implement to get an initial understanding of the size of the Exchange Online deployments of my customers and service providers. With it, the data gathering and the sizing of the Veeam VBO infrastructure have become simple and fast, helping customers size their storage and calculate their budget before the sales cycle starts.
Over the following days I will be working on version two of the WebApp, adding more functionality, like estimating the size of the OneDrive and SharePoint drives, and other exciting features. So, if you have any feedback or suggestions, they will be most welcome.



Want a Visio of your lab? Veeam ONE can do it for you!


Notes from MWhite  /  Michael White

Hi all,
I mentioned, while talking with a customer, that I had to update the Visio of my home lab with my recent changes. There were groans, and I asked why: the amount of work it takes to update Visio diagrams. I said there was very little work involved, as I had a tool that did it for me, and that Veeam ONE can produce a Visio of your VMware infrastructure. They laughed. So here we go.
You will need a Windows machine with Visio installed.

  • First, access Veeam ONE Reporter, which is normally at https://FQDN:1239. After authentication you will see a collection of reports.

  • The report that has an output that we can use with Visio is in the Offline Reports folder – seen near the end of the list.
  • When you look into the Offline Reports folder you see what we are after!

  • So click on the Infrastructure Overview (Visio) report name.

  • We need to confirm the scope of operations.  So click on the blue Virtual Infrastructure link.

  • I can see my infrastructure is all selected so that is good.  Now we can use OK to return.
  • Now if you want VMs included in your diagram, and I do, you can select the VM checkbox.

  • Once Include VMs is checked, or not checked, you can hit the Preview button.
  • Once you hit that button, you will see a brief collecting-data dialog; if you have a very big lab it will take a little longer. Once it is done, a file will be downloaded.

  • No matter how hard you try, this file will not load into Visio. You must download the Veeam Report Viewer, which, BTW, is not a viewer; it is a translation tool. Download and install it on the machine where Visio is, and move the infrastructure.vmr file to that same machine.

  • The viewer installs very fast and leaves an icon on your desktop.

  • Start up the tool, and you should see something like below.

  • You can see I have already used it once successfully. Let’s use the File \ Open option to load the infrastructure.vmr file.
  • Now we wait for it to say Done; depending on the size and complexity of your environment, it could take a few minutes.

  • Once it is done it will pop up Visio.  But put it away for now.

  • We want to see the viewer to make sure it was a success.  After all, this is not about one diagram but a multiple of them!

  • Now change to the following folder on the machine where the Viewer and Visio are. Each day you run the Viewer, it will put another folder into the My Veeam Reports folder. As you can see below, we are in the Valentine’s Day folder.

  • The Index is an overview, and the config one shows what is powered up or not, but the others are what you would expect. And they print nicely.

So now you have Visios of your lab or infrastructure. It may have taken you a few minutes to get it all done the first time, but the second or third time is much faster.
BTW, I did this with Update 4 of Veeam ONE, and 6.7 U1 of vSphere.
=== END ===



How to improve security with Veeam DataLabs Secure Restore


Veeam Software Official Blog  /  Michael Cade

Today, ransomware and malware attacks are top of mind for every business. In fact, no business, large or small is immune. What’s even more concerning is that ransomware attacks are increasing worldwide at an alarming rate, and because of this, many of you have expressed concern. In a recent study administered by ESG, 70% of Veeam customers indicated malicious malware and virus contamination are major concerns for their businesses (source: ESG Data Protection Landscape Survey).
There are obviously multiple ways your environment can be infected by malware; however, do you currently have an easy way to scan backups for threats before introducing them to production? If not, Veeam DataLabs Secure Restore is the perfect solution for secure data recovery!
The premise behind Veeam DataLabs Secure Restore is to provide users an optional, fully integrated antivirus scan step as part of any chosen recovery process. This feature, included in the latest Veeam Backup & Replication Update 4, addresses the problems associated with malware by letting you ensure that any copy data you want or need to recover into production is in a good state and malware-free. To be clear, this is NOT prevention of an attack; instead, it is a new, patent-pending way of remediating an attack arising from malware hidden in your backup data, and of gaining additional confidence that a threat has been properly neutralized and no longer exists within your environment.
Sound valuable? If so, keep reading.

Recovery mode options

Veeam offers a number of unique recovery processes for different scenarios and Veeam DataLabs Secure Restore is simply an optional enhancement included in many of these recovery processes to make for a truly secure data recovery. It’s important to note though that Secure Restore is not a required, added step as part of a restore. Instead, it’s an optional anti-virus scan that is available to put into action quickly if and when a user suspects a specific backup is infected by malware, or wants to proceed with caution to ensure their production environment remains virus-free following a restore.


The workflow for Secure Restore is the same regardless of the specific recovery scenario used.

  1. Select the restore mode
  2. Choose the workload you need to recover
  3. Specify the desired restore point
  4. Enable Secure Restore within the wizard

Once Secure Restore is enabled, you are presented with a few options on how to proceed when an infection is detected. For example, with an Entire VM recovery, you can choose to continue the recovery process but disable the network adapters on the virtual machine, or choose to abort the VM recovery process. In the event an infection is identified, you also have a third option: notify the third-party antivirus to continue scanning the whole file system, giving you visibility into any other threats residing in your backups.
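The branching just described can be summarised in a few lines; the enum and function names below are my own illustrative sketch, not Veeam's actual API.

```python
# Sketch of the Secure Restore decision flow (names are illustrative
# assumptions): on infection, either abort, continue with NICs
# disabled, or keep scanning the whole file system for visibility
# into other threats.
from enum import Enum

class OnInfection(Enum):
    ABORT = "abort recovery"
    DISABLE_NICS = "continue recovery, network adapters disabled"
    SCAN_ALL = "continue scanning entire file system"

def secure_restore(infected, action):
    """Return the outcome of a restore given the scan result."""
    if not infected:
        return "restore completed, no threats found"
    return "infection found: " + action.value

print(secure_restore(True, OnInfection.DISABLE_NICS))
# infection found: continue recovery, network adapters disabled
```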

As you work through the wizard and the recovery process starts, the first step is to select the backup file and mount its disks to the mount server, which contains the antivirus software and the latest virus definitions (not owned by Veeam). Veeam then triggers an antivirus scan against the restored disks. For those familiar with Veeam, this is the same process leveraged by Veeam file-level recovery. Currently, Veeam DataLabs Secure Restore has built-in, direct integrations with Microsoft Windows Defender, ESET NOD32 Smart Security and Symantec Protection Engine to provide virus scanning; however, any antivirus software with CMD support can also interface with Secure Restore.

The first part of the Secure Restore process is the virus scan walking the mounted volumes to check for infections. If an infection is found, Secure Restore defaults to the choice you selected in the recovery wizard and either aborts the recovery or continues it with the network interfaces on the machine disabled. In addition, you will have access to a portion of the antivirus scan log from the recovery session, giving a clear understanding of what infection was found and where it is located on the machine’s file system.

This particular walkthrough highlights the virtual machine recovery aspect. Next, by logging into the Virtual Center, you can navigate to the machine and see that the network interfaces have been disconnected, allowing you to log in through the console and troubleshoot the files as necessary.

To quickly summarise the steps that we have walked through for the use case mentioned at the beginning, here they are in a diagram:


Probably my favourite part of this new feature is how Secure Restore fits within SureBackup, yet another powerful feature of Veeam DataLabs. For those of you unfamiliar with SureBackup, you can check out what you can achieve with this feature here.
SureBackup is a Veeam technology that allows you to automatically test VM backups and validate recoverability. This task automatically boots VMs in an isolated Virtual Lab environment, executes health checks for the VM backups and provides a status report to your mailbox. With the addition of Secure Restore as an option, we can now offer an automated and scheduled approach to scan your backups for infections with the antivirus software of your choice to ensure the most secure data recovery process.


Finally, it’s important to note that the options for Veeam DataLabs Secure Restore are also fully configurable through PowerShell, which means that if you automate recovery processes via a third-party integration or portal, you can also take advantage of this.
Veeam DataLabs – VeeamHUB – PowerShell Scripts



Missed the Update 4 Announcement? Check out What’s New in the release

NEW Veeam Availability Suite™ 9.5 Update 4, part of the Veeam Hyper‑Availability Platform™, is available for download, expanding Veeam’s leadership in cloud data management by delivering new capabilities for AWS, Microsoft Azure and Azure Stack, IBM Cloud and thousands of service providers. These powerful new capabilities include native object storage integration and new cloud mobility options.

Veeam Availability Orchestrator v1.0 Installation Overview


Notes from MWhite  /  Michael White

It is important to understand the VAO installation overview to help you be successful with VAO more quickly. More info on VAO can be found in the release notes. The docs are here; sorry, I cannot link directly to the VAO docs. In them you can find the supported OS and database versions, as well as a good idea of the resource requirements (such as RAM) of the VAO servers.

Things to have ready

  • A VM ready for VAO to be installed within.
  • Information on SQL server to be used by VAO, and credentials in the form of service account.
  • Any existing VBR servers, and credentials for them.
  • A Domain Controller to be replicated to the DR side for inclusion in test failovers, and it should have a vSphere tag that is known.
  • Applications to be protected should have a vSphere tag applied to all of their VMs. For example, if you are going to protect Exchange and it has 24 VMs, each should have a tag that says something like DR-Exchange. I suggest initially protecting only one application and learning from how that goes, so you can do the rest of your applications more easily, and more successfully too.
  • You should have a VMware Distributed Switch at the DR site with a private, non-routable VLAN or network, and this switch should be mounted on each host in the DR site. This vDS is a recommendation but is not required.


  1. Install VAO
    1. Confirm connected to vCenter (in DR)
    2. If appropriate, connect to vCenter in production too.
    3. Confirm connected to the external VBR (in DR for certain, and possibly in production too)
    4. Configure SMTP
    5. Configure email destinations
  2. Use the DR site VBR to:
    1. Replicate the Domain Controller to the DR site.
    2. Replicate the first application you wish to protect.
    3. Create virtual labs in DR.
  3. Log into VAO
  4. Confirm you see the Virtual Labs in Components. After the first configuration, there can be up to 1.5 hours of delay between assigning a tag and seeing it in VAO.
  5. Confirm you see the tagged VMs in Components.
  6. Customize the Plan Steps as necessary, such as adding scripts or tweaking configuration like the number of retries.
  7. Add a Lab Group for any necessary support services – like AD to your Virtual Lab.
  8. Test the Virtual Lab; make sure both the appliance and any lab services start successfully.
  9. Create failover plan
  10. Do a readiness check.
  11. Do a test failover and make sure it works. A key part of a successful test failover is having all of the necessary VMs inside the test; for example, Exchange will not start if a domain controller is not available.
  12. Do a real failover.

You now have a successful VAO install and have had some success working with VAO. Hopefully you have learned enough that you can now protect more applications successfully. It is important to understand that you may need to tune your plan to make sure it is as fast as it can be. The outage windows are sometimes small!
Let me know if you have questions, or comments, or you would like to see VAO articles on specific things (my VAO related articles can all be found using this tag – VAO_Tech).
=== END ===



How to move your business to the cloud in 2 steps!


Veeam Software Official Blog  /  Edward Watson

Recently Veeam expanded on its existing cloud capabilities with the greatly anticipated release of Veeam Availability Suite 9.5 Update 4. This release has produced an overarching portability solution, Veeam Cloud Mobility, which contains support for restoring to Microsoft Azure, Azure Stack as well as AWS. In this blog, we’ll cover the business challenge these restore capabilities solve, how businesses can use them, and the advantage of using Veeam’s features over comparable features in the market.

Workload portability and the cloud is still a challenge

As more and more businesses look to move their data and workloads to the cloud, they can encounter challenges and constraints with their current backup solutions. Many backup solutions today require complex processes, expensive add-ons, or manual conversions to make workloads cloud-compatible.
In fact, 47% of IT leaders are concerned that their cloud workloads are not as portable as intended¹. The truth is, portability is absolutely key to unlocking all the benefits the cloud has to offer, and in terms of hybrid-cloud capabilities, portability is critical. There has to be an easier way…

Workload mobility made easy with Veeam

With the latest release of Veeam Availability Suite 9.5 Update 4, Veeam has delivered a more comprehensive set of restore options with the NEW Veeam Cloud Mobility. Veeam Cloud Mobility builds on the existing Direct Restore to Microsoft Azure with new restore capabilities to Microsoft Azure Stack as well as AWS. Veeam Cloud Mobility allows users to take any on-premises, physical and cloud-based workloads and move them directly to AWS, Microsoft Azure and Microsoft Azure Stack. When I say “any workload,” I mean it! Whether a virtual machine is running on VMware vSphere, Microsoft Hyper-V or Nutanix AHV, a physical server or an endpoint — anything that can be put into a Veeam repository can then be restored directly to the cloud. And the good news is these restore features are included in any edition of Veeam Backup & Replication 9.5 Update 4.
Now let’s go over some of the benefits:

Easy portability across the hybrid-cloud

As mentioned earlier, it is critical to have workload portability to get all the benefits that cloud has to offer no matter what stage of your cloud journey. Veeam Cloud Mobility overcomes many of the complexity and process challenges with the ease of use Veeam is known for. For customers looking to move workloads to the cloud, this is only a simple 2-step process:

  1. Register your cloud subscription with Veeam
  2. Restore ANY backup directly to the cloud

In addition to the simplicity of a 2-step process, Veeam’s Direct Restore to AWS includes a built-in UEFI-to-BIOS conversion, which means no manual conversions are needed to restore modern Windows workstations to the cloud. And with the new Direct Restore to Microsoft Azure Stack feature, customers can also move workloads from Microsoft Azure to Azure Stack, unlocking the unlimited scalability and cost-efficiency of a true hybrid cloud model.

Enabling business continuity

Another primary benefit for businesses is that Veeam Cloud Mobility can be used for data recovery in emergency situations. For instance, if a site failure occurs, you will be able to easily restore your backed-up workloads, spinning them up as virtual machines in AWS, Azure and Azure Stack. Then, as the smoke clears, you can gain and provide access to the restored VMs. Unlike our Veeam Cloud Connect feature offered through our Veeam Cloud and Service Provider (VCSP) partners, Veeam Cloud Mobility is not a disaster recovery solution designed to meet strict RTOs or RPOs. Rather, it is an efficient restore feature that gives you a simple, affordable game plan to regain access to workloads should a disruption ever happen to your primary site.

Unlocking test and development in the cloud

Whether businesses need to test patches or applications, there is a big need for test and development. But many businesses find that conducting such tests locally puts a strain on their storage, compute and network. The cloud on the other hand is a ready-made testing paradise, where resources can be scaled up and down as needed on a dime. Veeam Cloud Mobility gives customers the ability to unlock the cloud for their test and development purposes, so they can conduct testing as frequently, quickly and affordably as needed without the constraints of a traditional infrastructure. 

Take away

Data portability is vital as part of an overall cloud strategy. With the NEW Veeam Cloud Mobility you can move ANY workload directly to AWS, Azure and Azure Stack to maintain the portability you need to leverage the power of the cloud. Whether for data portability, recovery needs or test and dev — Veeam Cloud Mobility helps make workload portability easy.
If you or your customer have not yet taken a free trial of Veeam, now is the perfect time to try Veeam Availability Suite for your cloud strategy!

¹ 2017 Cloud User Survey, Frost & Sullivan

Helpful resources:




SAP HANA Certification is now posted live on SAP’s web site

Great News! While we have been certified since GA of VAS 9.5 Update 4, the certification is now live on the SAP page if you have discussions with customers needing proof. Blogs and PR activities will be following in the next week or so.
Note that the Veeam Plug-in for SAP HANA is available to customers of Veeam Backup & Replication Enterprise Plus edition, most likely requiring the Veeam Instance License for deployments on physical servers/clusters.

Harness the power of cloud storage for long-term retention with Veeam Cloud Tier


Veeam Software Official Blog  /  Anthony Spiteri

The cost and efficiency of data

All organizations are experiencing explosive data growth. Data growth continues to accelerate at almost exponential speed, and with it come the pain points of trying to manage that growth. More data means more robust applications to handle larger data sets, which in turn means more infrastructure to handle the applications and the data itself. While the cost and management burden of on-premises storage has come down as hardware and disk technologies improve, organizations still face significant overhead in maintaining their own hardware infrastructure. Taking that a step further as it relates to backups, when you combine the growth of data with stricter regulations around data retention, the challenges of managing storage platforms for production and backup workloads become even more complex. The reality persists that organizations still struggle to achieve the economies of scale, both operationally and in cost, that make storing data long term viable.

The rise of Object Storage

Object Storage has fundamentally shifted the storage landscape, mainly due to its popularity in the public cloud space, but also because it offers advantages over traditional block and file-based storage systems. Object Storage overcomes many of the limitations of file and block storage thanks to its design and its fundamental ability to scale out infinitely. Because a large percentage of backup data is kept for long-term retention, Object Storage seems a perfect fit. Though the likes of Amazon, Azure and IBM Cloud offer Object Storage, the number of organizations that have deployed Object Storage in their on-premises environments remains relatively low. The popular trend is to consume cloud-based Object Storage platforms to take advantage of the hyper-scalers’ own economies of scale, which can’t be matched. With the cost of storage at fractions of a cent per GB, organizations’ desire to consume cloud-based Object Storage has increased, and many have become aware of its benefits.

Introducing Veeam Cloud Tier

With the launch of Update 4 for Veeam Backup & Replication 9.5, we have added Veeam Cloud Tier as an innovative new way to extend backup repositories to the cloud, effectively delivering an infinitely scalable Scale-out Backup Repository. By using the new Object Storage Repository as a Capacity Tier extent of the Scale-out Backup Repository, we have fundamentally changed how organizations and our Veeam Cloud & Service Provider (VCSP) partners will think about designing and architecting backup repositories.
By extending the Scale-out Backup Repository to take advantage of Object Storage, whether Amazon S3, Azure Blob, IBM Cloud Object Storage or any S3-compatible platform (hosted or internal), this feature tiers data blocks and offloads them from the local Performance Tier extents to Capacity Tier extents, which can be configured to consume the storage services shown below.

How is Veeam Cloud Tier different?

The innovative technology built into this feature allows data to be stripped out of Veeam backup files (which are part of a sealed chain) and offloaded as blocks to Object Storage, leaving a dehydrated Veeam backup file on the local extents with just the metadata remaining in place. This is driven by a policy set against the Scale-out Backup Repository that dictates the operational restore window during which local storage is used as the primary landing zone for backup data; a tiering job then processes the data every four hours. The result is a space-saving, smaller footprint on the local storage without sacrificing any of Veeam’s industry-leading recovery operations. This is what truly sets the feature apart: even with data residing in the Capacity Tier, you can still perform:

  • Instant VM Recoveries
  • Entire computer and disk-level restores
  • File-level and item-level restores
  • Direct Restore to Amazon EC2, Azure and Azure Stack

Just step back and think about what that means: with Veeam Cloud Tier you can now recover or restore directly from Object Storage without the need for any additional, potentially expensive components. With that, you can start to understand just how innovative a feature Veeam Cloud Tier is! In addition, we have built in further space-saving efficiencies in the form of effective source-side dedupe, whereby the same blocks of data are not offloaded to Object Storage twice, reducing the amount of consumed storage and the data transfer times up to the Capacity Tier. We have also added Intelligent Block Recovery, which sources data blocks from the local backup files instead of what is tiered to Object Storage, resulting not only in faster recovery times but, more importantly, in cost savings when pulling data back from Object Storage services that charge for egress.
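The offload policy described in this section can be sketched in a few lines of logic; the field names and helper below are my own illustration of the behaviour, not Veeam internals.

```python
# Illustrative sketch (field and helper names are assumptions) of the
# Cloud Tier policy: only backup files in a sealed chain AND older than
# the operational restore window are offloaded, and blocks already in
# the Capacity Tier are not sent again (source-side dedupe).
from datetime import datetime, timedelta

def blocks_to_offload(backup_files, offloaded_hashes, window_days, now=None):
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    to_send = []
    for f in backup_files:
        if not f["sealed"] or f["created"] > cutoff:
            continue  # still inside the operational restore window
        for h in f["block_hashes"]:
            if h not in offloaded_hashes:  # dedupe: skip known blocks
                to_send.append(h)
                offloaded_hashes.add(h)
    return to_send

now = datetime(2019, 3, 1)
files = [
    {"sealed": True,  "created": datetime(2019, 1, 1), "block_hashes": ["a", "b"]},
    {"sealed": True,  "created": datetime(2019, 1, 5), "block_hashes": ["b", "c"]},
    {"sealed": False, "created": datetime(2019, 1, 1), "block_hashes": ["d"]},
]
print(blocks_to_offload(files, set(), 14, now))  # ['a', 'b', 'c']
```

Note how the unsealed file and the duplicate block "b" are never sent, mirroring the sealed-chain rule and the dedupe behaviour described above.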


For all Veeam customers and partners, both end users and VCSP partners alike, Veeam Cloud Tier represents an important inflection point in the way in which backup repositories are designed and built. No longer are there limitations on how big backup repositories can grow before complications arise from the accelerated growth of data. We have leveraged the power of the cloud with the efficiencies and cost savings of Object Storage platforms to deliver a feature that is unique in the market and we have been able to deliver this in such a way that no industry leading Veeam functionality has been lost. Update 4 is now Generally Available and can be downloaded here.



How to restore and convert your workloads directly to the cloud


Veeam Software Official Blog  /  David Hill

Veeam has been expanding the capabilities of the Veeam Availability Suite product set. Recently, Veeam announced Veeam Cloud Mobility, providing workload portability across clouds and different platforms.
In this blog, we will focus on some of the technical aspects of the Veeam Cloud Mobility capability. With the latest release of Veeam Availability Suite 9.5 Update 4, Veeam has provided an extensive set of restore options. Building on the existing Direct Restore to Microsoft Azure functionality are additional restore capabilities, including Direct Restore to Microsoft Azure Stack and Amazon AWS EC2. With the Direct Restore capabilities, a user can restore any backup from any on-premises platform and move it directly to the cloud. This is what we will look at here.

Direct Restore to Amazon AWS EC2

First let’s take a look at how easily we can restore workloads to Amazon AWS EC2. We begin by selecting the backup of the workload we want to run in the cloud. We simply right-click and select Restore to Amazon EC2.

We select the AWS account we want to use, the AWS region, and then the data center region that we want to restore the workload to.

We can now change the original VM name to an EC2-specific instance name, with additional capabilities to add a prefix and suffix.

Select the EC2 instance type and what kind of license you want to use, whether to bring your own (BYOL) or lease a license from Amazon.

Pick your Amazon VPC, subnet and security group.

Pick the proxy options (this will be used to do the actual restore into Amazon AWS EC2).

Then the restore will begin and the status will update during the process.

That is how simple it is to move a workload to Amazon AWS EC2 from an on-premises backup. The great aspect of this capability is its simplicity. Seven to eight steps and you have migrated a workload to the public cloud.
The steps are the same for Microsoft Azure and Microsoft Azure Stack as well.

Virtual machine conversion

Now, with any migration it is important to understand that a conversion has to take place. Let’s take the example of moving a VMware vSphere VM to Amazon AWS EC2. As one would expect, Amazon does not run VMware vSphere to provide the IaaS capabilities in EC2; it uses a different hypervisor. In order to restore a VMware vSphere VM to Amazon AWS EC2, a conversion has to take place, changing the VMware vSphere virtual machine files (.vmdk, .vmx) to their Amazon equivalents. Also, Amazon AWS EC2 does not directly support GPT boot volumes, so a conversion is necessary there as well. The diagram below shows how this works.
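The GPT point is easy to verify for yourself: a GPT disk carries the "EFI PART" signature at the start of its header in the second sector (LBA 1), while a classic BIOS/MBR disk ends its first sector with the 0x55AA boot signature. A small sketch of that check (illustrative; a real tool would read from the block device, and GPT disks also keep a protective MBR, so GPT must be tested first):

```python
SECTOR = 512

def partition_scheme(image: bytes) -> str:
    """Classify a raw disk image as GPT, MBR, or unknown."""
    # GPT header lives in the second sector (LBA 1) and starts with "EFI PART".
    if image[SECTOR:SECTOR + 8] == b"EFI PART":
        return "GPT"
    # A classic MBR ends sector 0 with the 0x55AA boot signature.
    if image[510:512] == b"\x55\xaa":
        return "MBR"
    return "unknown"

# Two tiny fake images to demonstrate the check.
gpt = bytearray(SECTOR * 2)
gpt[510:512] = b"\x55\xaa"             # GPT disks keep a protective MBR too
gpt[SECTOR:SECTOR + 8] = b"EFI PART"
mbr = bytearray(SECTOR * 2)
mbr[510:512] = b"\x55\xaa"
```

A conversion engine would use a test like this to decide whether a boot-volume rewrite is needed before the instance can boot in EC2.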


As companies invest further in multi-cloud capabilities and strategy, the environment becomes ever more complex to operate in. The new Veeam Cloud Mobility, with its ability to move workloads to and from any platform in any location, is a major step forward in simplifying the complexities that a multi-cloud strategy brings.
If you or your customer have not yet taken a free trial of Veeam, now is the perfect time to try Veeam Availability Suite for your cloud strategy!

The post How to restore and convert your workloads directly to the cloud appeared first on Veeam Software Official Blog.



Veeam innovation continues


Veeam Executive Blog – The Availability Lounge  /  Jason Buffington

At our Velocity 2019 event last week, we celebrated four Veeam partners who developed some really great solutions that are powered by Veeam technologies. Our most heartfelt congratulations and thanks to each of these Veeam Innovation Award (VIA) recipients. If you haven’t seen them yet, please check out the short videos of each VIA partner and their solutions:
In the weeks leading up to Velocity, the general public had the opportunity to vote on a “Best of Show” among the four VIA awardees. The winner of Best of Show for Velocity 2019 is OffsiteDataSync! I had the chance to visit with Matt Chesterton, CEO of OffsiteDataSync, after the award was presented:

But wait, there’s more!
At Velocity, we celebrated Veeam partners who built great solutions to offer to customers. At VeeamON, we will be celebrating Veeam customers who are achieving great outcomes through the use of Veeam technologies (including partner services). Here are the details:

  • Customer solutions must include some level of significant deployment (which can be on-premises or cloud-based), but the nomination may come from the customer themselves or their service provider, Veeam partner, or Veeam point-of-contact (anyone directly involved in the deployment and ongoing operation).
  • Customer nominations will be accepted from February 1 through March 8. All complete submissions received by this date are eligible for review and could be selected as a VIA awardee.
  • Upon closure of the submission window, Veeam will leverage a diverse board of roughly 30 judges to determine four VIA winners, who will be notified approximately two weeks after submission closure. After notification, we’ll be sending a Veeam video team to interview the four winning organizations (and their partners where applicable).
  • And in May, we’ll open voting for the general public to vote among the four VIAs for a Best of Show at VeeamON 2019 in Miami, Florida on May 22.

Nominations for VIAs for customers are now OPEN. Nominate yourself or your customer for a Veeam Innovation Award. Good luck!




Watch | Axiz, Cisco, Veeam partner to secure hyper-availability – TechCentral


“veeam” – Google News

At Axiz, we know better than most that today’s businesses understand that hyper-availability is the new norm.
This means that data and applications have become critical, and customers, users and vendors want them to be available at all times, and on all devices.
The fact is, in this age of hyper-availability any disruption or downtime is unacceptable. Let’s face it: legacy systems simply can’t keep up with today’s modern, highly virtualised environments, and the IT department ends up spending too much time managing the disparate parts of its data storage environment just to keep it up, available and secure.

Veeam and Cisco have addressed this, and are uniting the power of hyper-converged infrastructure with data protection.
Axiz is now able to offer its customers a new, highly resilient data-storage management platform that offers seamless scalability, ease of management and support for multi-cloud environments.
The new solution combines the Veeam Hyper-Availability Platform with Cisco’s innovative Hyper-Converged Infrastructure (HCI) solution, HyperFlex, to meet the needs of organisations.
By implementing the new solution, Axiz customers need no longer battle with outdated, unreliable legacy tools that cannot scale, or even hope to provide hyper availability that has become crucial for modern organisations.

  • This promoted content may have been paid for by the party concerned



Veeam’s NEW native cloud object storage delivers infinite capacity


Veeam Software Official Blog  /  Edward Watson

Veeam has always been a company passionate about addressing its customers’ needs. In the latest release of NEW Veeam Availability Suite 9.5 Update 4, we continued this trend by meeting an important need for many customers with our NEW native object storage integration: Veeam Cloud Tier. In this blog, we’ll cover the challenges we’re addressing through this new feature — what it is, how it works and the benefits you can take advantage of. You’ll also learn a few points about the uniqueness and innovation built into Veeam Cloud Tier compared to other backup providers. Veeam is changing the data protection game again!

The challenge of data growth and compliance

One thing is certain about data today — it’s growing exponentially. There are many sources of data growth predictions, but there seems to be a consensus that the size of the digital universe will at least double every two years.
With this rapid increase of data created by organizations, it’s no surprise that challenges are starting to take hold. Hardware storage costs are skyrocketing, storage capacity planning and management is becoming more difficult, and all the while this resource juggling takes away precious time from core business priorities. Sound familiar? Now, as data begins to reach new highs we’re seeing organizations make a deliberate attempt to manage costs — with many organizations now having a mandate to reduce hardware costs listed as a priority. In addition to the growth, many organizations are having to meet compliance requirements that require they hold onto data for longer and longer periods of time.

What can IT do?

It’s clear there are two major challenges: data growth and longer-term data retention needs. IT departments need to carefully weigh in a few factors when trying to solve these issues, but they should start with two main considerations:

  • Investigate affordable storage: The most obvious answer to reducing hardware costs is to adopt more affordable storage. For organizations looking for long-term data retention with cost-efficiency and scalability, there is no better place to look than cloud or on-premises object storage targets. This is precisely why Veeam customers have requested native object storage integration. When accessibility and quick recovery are still an integral part of a long-term retention strategy, object storage makes the most sense to meet this need.
  • Take a tiered approach: Choosing a more affordable storage option is half the battle; now you must make sure you are freeing up your local storage in the most intelligent way possible. Taking a tiered approach to data management allows you to free up only the storage you want freed up, and tier off only the data you need for longer-term retention, making for a better overall capacity management process. This should ideally be a “set it and forget it” solution that automatically tiers off files to the storage according to your exact retention needs.

Veeam Cloud Tier to the rescue!

Ok, so we’ve covered the challenges and what to look for in a solution to solve them; now let’s look at Veeam’s new feature that meets all these requirements. Veeam Cloud Tier was designed to provide infinite capacity through native integrations with Amazon S3, Azure Blob Storage, IBM Cloud Object Storage, S3-compatible service providers or on-premises storage offerings. This feature delivers on the promise of instant and automatic tiering directly to object storage to free up local storage. It is also built with innovative attributes that reduce overall storage consumption and lower ingress and egress costs. Let’s check out the benefits.

Infinite Capacity

First, it’s important to note that Veeam Cloud Tier is the built-in automatic tiering feature of Veeam’s Scale-out Backup Repository (SoBR). SoBR consists of multiple repositories under one abstracted on-premises layer that delivers storage efficiencies, enabling the tiered approach to data management I mentioned earlier. What makes Veeam Cloud Tier special is that, now backed by object storage, SoBR becomes effectively unlimited in capacity and scale. Veeam Cloud Tier acts as a capacity tier to automatically offload backups to object storage.

No double charging

One of the most important aspects of Veeam Cloud Tier is its advantage over competitors from a cost perspective. Believe it or not, many backup providers will actually “double charge” for the data you store in the cloud. This means you are not only paying for the cost of the cloud provider’s storage but also incurring an additional charge simply for leveraging object storage with their solution. Veeam Cloud Tier is built into our existing Veeam Availability Suite Enterprise and Enterprise Plus editions, and you will never see a “cloud tax” fee from Veeam on your bill!

No vendor lock-in

Veeam has always been known as a storage-agnostic vendor. We want our customers to know that if they work with Veeam, they can trust that we deliver protection for any application and any data, in any cloud. Veeam Cloud Tier is no different: you will not be locked into a particular storage vendor. In fact, we are proud to offer the variety of storage options mentioned above, including both hyper-scale and S3-compatible on-premises and cloud-based service provider options. This feature also opens up the option for service providers to scale out their IaaS customers to hyper-scale clouds in addition to leveraging their own clouds.

Storage reduction magic

The built-in storage reduction technique is truly what sets Veeam apart from other solutions. We know that customers don’t want to have to send their entire backup to the cloud, or bring it all back down for that matter. So, we developed a technique by which the on-premises files remain on-premises as a shell with metadata while the bulk is offloaded to object storage. This drastically reduces the footprint on local storage without sacrificing any recoverability of the data. For instance, a 1TB VBK file on local storage could be reduced to a 20MB file.
With this lightweight file format customers incur only minimal cloud-based ingest fees for sending data to the cloud. They can also just as easily retrieve data from the cloud without having to recover the entire backup repository through Veeam’s file-based granular recovery options — minimizing egress charges and increasing productivity.
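The shell-with-metadata idea can be illustrated in a few lines. This is a hypothetical structure for explanation only; real VBK internals differ:

```python
def dehydrate(backup):
    """Replace data blocks with object-storage references, keeping metadata local."""
    offloaded = {f"obj-{i}": block for i, block in enumerate(backup["blocks"])}
    shell = {
        "metadata": backup["metadata"],   # stays on local storage for fast lookup
        "block_refs": list(offloaded),    # lightweight pointers into the cloud
    }
    return shell, offloaded

backup = {"metadata": {"vm": "sql01", "points": 30},
          "blocks": [b"x" * 1024 for _ in range(4)]}
shell, cloud = dehydrate(backup)
local_bytes = sum(len(ref) for ref in shell["block_refs"])   # tiny
cloud_bytes = sum(len(block) for block in cloud.values())    # the bulk
```

Because the shell keeps the full metadata, granular file-level recovery can locate exactly which blocks to fetch, which is what keeps egress charges minimal.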
Here is a view of the Veeam Cloud Tier (Capacity Tier) within the Scale-out Backup Repository:

Take away

It’s clear that massive data growth and compliance requirements continue to take a toll, driving up both the cost of local storage and the length of time organizations are required to hold onto data. Object storage is becoming increasingly appealing to many organizations for long-term data retention needs, but leveraging the right solution is imperative to enable a tiered approach to Intelligent Data Management. Veeam Cloud Tier provides a simplified approach, enabling customers to natively tier backup files to a variety of cloud and on-premises object storage targets including Amazon S3, Azure Blob Storage, IBM Cloud Object Storage and S3-compatible service providers. In addition, Veeam Cloud Tier has many benefits over other solutions, including no “cloud tax” and no vendor lock-in.
If you or your customers are not yet leveraging Veeam Availability Suite, now is the absolute perfect time to execute on this tiering strategy to take advantage of the infinite capacity Veeam delivers on object storage.





HyperFlex 4.0 and Veeam Availability Suite 9.5 Update 4 deliver big!


Veeam Software Official Blog  /  Andrew Lickly

With the announcement today of Cisco HyperFlex 4.0 and the Veeam announcement last week of Veeam Availability Suite 9.5 update 4, there is a lot to digest on all the benefits these provide to our joint customers from the core to the edge to the cloud and back again.

There’s magic in the air

It seems it wasn’t that long ago when HyperFlex version 1.0 was released. In just 2.5 years, HyperFlex has gone from inception to a leader in the Gartner Magic Quadrant for Hyperconverged Infrastructure (HCI). This is a testament to Cisco’s innovation: HyperFlex is not just another hyperconverged infrastructure solution but a true next-generation platform with the features and benefits that customers require in today’s frenetic IT world. Veeam is also a leader in the Gartner Magic Quadrant for backup solutions, which gives customers added confidence when pairing Cisco and Veeam solutions.

HyperFlex anywhere

HyperFlex’s core values have always been around simplicity, agility and multi-cloud services. HyperFlex 4.0 builds on this with the recognition that customers live in a distributed, multi-site world. To meet customer requirements in this environment, HyperFlex added several key features. HyperFlex can now utilize Cisco Intersight, Cisco’s cloud-based management platform, to streamline, orchestrate and automate the management and deployment of HyperFlex HCI. HyperFlex and Intersight extend the simplicity and efficiency of HCI from core data centers to the edge with consistent policy enforcement and cloud-powered systems management. Veeam has worked with HyperFlex since its initial release, integrating with HyperFlex native snapshots to back up and replicate data on HyperFlex systems. Veeam can replicate from one HyperFlex cluster to another, back up to a main data center and move backup copy jobs to other sites or the cloud, delivering the application and data Availability customers require. Building on the automation and orchestration, Veeam has added orchestration for virtual environments with the addition of intelligent automation in Veeam Availability Suite 9.5 Update 4. Veeam delivers proactive monitoring and alerting backed by intelligent automation to alert customers and resolve potential problems before they have operational impact. Together, Veeam and Cisco HyperFlex have greatly simplified the deployment, management and Availability of multi-site distributed environments.


HyperFlex 4.0 also delivers many new performance and security enhancements specifically for mission-critical business applications like SAP. SAP is critical to customers’ business and it requires an infrastructure that is fast, offers consistent performance and is highly available and protected. Cisco HyperFlex is SAP HANA certified and provides the performance needed for SAP application workloads and databases with simple management, reduced TCO, and cloud-like flexibility. With the latest Veeam release, Veeam is now SAP HANA certified. This is big news for our joint SAP HANA customers.  They can utilize Veeam to protect their SAP HANA deployments using native SAP database backup methods, something all SAP administrators prefer, and they will get enhanced backup capabilities like Instant VM Recovery and optimized backups with Veeam’s native HyperFlex snapshot integration. SAP users will be able to leverage Veeam DataLabs to easily deploy SAP test environments and clone SAP databases from Veeam repositories.
Putting it all together, customers can run SAP HANA databases and SAP applications on Cisco HyperFlex, modernizing their production environments with SAP HANA and HyperFlex and protecting it using Veeam’s native HyperFlex snapshots and an SAP-certified plug-in for database backup and restores. Veeam and Cisco have worked together for years with Veeam running on Cisco UCS storage servers, and, at the end of last year, Cisco and Veeam introduced a new joint solution, Veeam Availability on Cisco HyperFlex, an enterprise-grade data protection solution running Veeam services and repository on HyperFlex. Together, we deliver an end-to-end solution; customers can now run their production SAP environments on HyperFlex while keeping it highly available with a modern data protection solution, Veeam running on another Cisco HyperFlex System.

From the edge to the center to the cloud

What Veeam and Cisco are really delivering is simplicity, flexibility and lower costs. HyperFlex 4.0 is introducing smaller 2-node edge clusters for ROBO environments, combined with cloud-based management, making it easier to deploy and manage these systems. Veeam Availability Suite 9.5 Update 4 adds Cloud Data Management, Veeam Cloud Mobility and Veeam Cloud Tier, all designed to make it easier for customers to move workloads on premises, to the cloud and back again.
The result is that customers and IT departments today, whether they are large or small, are being challenged to do more with less, while innovating the business and driving measurable impact. Cisco and Veeam continue to work together to deliver solutions that offer simplicity, flexibility and lower TCO that provide the business outcomes our customers require.
All from two industry-leading vendors! Don’t take our word for it, go ask Gartner!
For more information on joint Cisco and Veeam solutions and to receive exclusive announcements, subscribe on our Digital Hub.



Veeam Availability Suite 9.5 Update 4: New Stuff!


vZilla  /  michaelcade

After a super busy week last week with little chance to share the top features Veeam has released in this monster update, I am now back home and can get some content out the door.
I wanted this post to touch on some of those top features that are worth a deeper look.

External Repository

Lots of people are going to be jumping into the new Veeam Backup & Replication console, and they are going to see several new “repositories” that can be added to their Veeam backup infrastructure. One of them is the new External Repository.
The external repository is the first point of integration since the acquisition of N2W Software in early 2018. Development of Veeam N2WS Backup & Recovery (formerly known as Cloud Protection Manager) is still rather separate from core Veeam development; however, the latest release adds the capability to tier EBS snapshots of your EC2 instances into an S3 repository to reduce your cloud storage costs.
For a summary on the broader capabilities of Veeam N2WS Backup & Recovery you can find that here.
In a nutshell, this gives us the ability to protect our EC2 instances within AWS and store the backups in an S3 bucket that can then be accessed by Veeam Backup & Replication; the VBR server can be hosted anywhere with access to that bucket. This opens the door to various granular recovery techniques, including file-level and application item-level recovery. It also enables sending the data to a secondary location to adhere to the 3-2-1 backup methodology, meaning we can use Veeam backup copy jobs, tape, or even send these backup files to one of the many Veeam Cloud Connect Backup as a Service providers.
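The 3-2-1 methodology (at least three copies of the data, on two different media types, with one copy off site) can be expressed as a quick check. This is an illustrative sketch, not part of any Veeam tooling:

```python
def satisfies_321(copies):
    """3-2-1 rule: >= 3 copies, on >= 2 media types, with >= 1 copy off site."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

# Hypothetical copy inventory matching the scenario above.
copies = [
    {"media": "object", "offsite": False},  # S3 external repository
    {"media": "disk", "offsite": False},    # backup copy job to a local repo
    {"media": "tape", "offsite": True},     # tape shipped off site
]
```

Dropping any leg of the rule (for instance, keeping only the first two copies) fails the check, which is exactly the gap the backup copy and tape options close.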
The smart thing here is the portable data format: the .VBK file we get from any Veeam on-premises backup (apart from VBO365) is the same as, or at least very similar to, the file format we find in the External Repository.

Cloud Mobility: Direct Restore to AWS

“Provides easy portability and recovery of ANY on-premises or cloud-based workloads to AWS, Azure and AzureStack.”
With Update 4, Veeam extended the ability to restore backup files directly into Amazon EC2 and Microsoft Azure Stack; prior to this, Veeam could already send backup files to Azure and convert them to Azure virtual machines. The new feature lets customers restore VMs that have been backed up by Veeam Backup & Replication directly into Amazon EC2 instances.
Here is that portable data format again: those VBK files are created when anything shown on the left below is backed up, meaning we have the ability to convert them into any of the targets above.

This feature can be used for VM migrations into the public cloud, as well as for provisioning workloads from backup sets into the public cloud for test and development scenarios.

Veeam looks after the conversion by leveraging the AWS VM conversion process, which adds the EC2 configuration, drivers and modules the machine needs for its storage and network devices, injecting the necessary utilities and services to interact with AWS.
In addition to the AWS conversion engine, the restore mechanism can also perform an additional conversion from EFI to BIOS boot for Windows machines.

Cloud Tier

In my opinion, this is the most innovative feature released by Veeam in a while. Leveraging object storage is not new, but the way in which this feature uses object storage is the differentiator.
I am conscious that we have some more detailed posts coming out and I don’t want to steal their thunder so I will briefly touch on what this feature gives you within your Veeam environment.
Firstly, I hope you have heard of our Scale-out Backup Repository; this feature arrived back in 2016 with v9. Here is a great FAQ for a quick 101.
The Cloud Tier is an extension of the Scale-out Backup Repository, meaning that you can leverage object storage to tier older backup files out to cheaper storage.
To really see the power of this, check out the live launch event linked below where you can see Anthony demo this live and talk through the concept and capabilities.

Veeam DataLabs: Secure Restore

I must mention this, as it was my demo on the main stage. Veeam DataLabs is a group of functions available within Veeam Availability Suite and Veeam Availability Orchestrator; the Secure Restore feature, announced and released with Update 4, gives us the ability to check for malware during the recovery process. I will put more detail on this soon. Lots of content is coming around Veeam DataLabs, covering the new features but also capabilities that have been in the product for years!

Veeam ONE

I also must mention Veeam ONE, as it got a huge release here, with some seriously smart features that remove manual intervention points from the day-to-day running of your environment. As with Secure Restore, I am going to put some effort into more detailed posts on this one. Two things to look at, though, are “Intelligent Diagnostics” and “Remediation Actions”; both deserve their own posts.

Other useful references:

Update 4 Launch Event Video and Recap Links – Anthony Spiteri
Veeam Executive Blog – Danny Allan
My Top Three Favourite Veeam 9.5 Update 4 Features – Melissa Palmer
Storage Review



Veeam cloud backup gains tiering, mobility, AWS enhancements



The latest version of Veeam Software’s flagship Availability Suite focuses on facilitating tiering and migrating data into public clouds.
Veeam launched what it calls Update 4 of Availability Suite 9.5 today, less than a week after picking up $500 million in funding that can help the vendor expand its technology through acquisitions, as well as organically.
Availability Suite includes Backup & Replication and the Veeam One monitoring and reporting tool. Update 4 of Veeam Availability Suite 9.5 had been available to partners for a short time and is now generally available to end-user customers.
The new Veeam Availability Suite update includes Cloud Tier and Cloud Mobility features, and extends its Direct Restore capability to AWS.
With its cloud focus, Veeam is making a push in a trending area in the market, according to Christophe Bertrand, senior analyst at Enterprise Strategy Group.
“Our research clearly indicates that backup data and processes are shifting to the cloud,” Bertrand wrote in an email. “Cloud-related features are going to be critical in the next few years as end users continue to ‘hybridize’ their environments, and there is still a lot to do to harmonize backup/recovery service levels and instrumentation across on-premises and cloud infrastructures.”

Cloud Tier, Cloud Mobility, N2WS integration launched

Veeam started out as backup for virtual machines but now also protects data on physical servers and cloud platforms.
Availability Suite’s Cloud Tier offers built-in automatic tiering of data in a new Scale-out Backup Repository to object storage. The Veeam cloud backup feature provides long-term data retention by using object storage integration with Amazon Simple Storage Service (S3), Azure Blob Storage, IBM Cloud Object Storage, S3-compatible service providers and on-premises storage products.


Cloud Tier helps to increase storage capacity and lower cost, as object storage is cheaper than block storage, according to Danny Allan, Veeam vice president of product strategy.
Cloud Mobility provides migration and recovery of on-premises or cloud-based workloads to AWS, Azure and Azure Stack.
Allan said Veeam cloud backup users simply need to answer two questions for business continuity: Which region and what size instance?
Veeam also extended its Direct Restore feature to AWS. It previously offered Direct Restore for Azure. Direct Restore allows organizations to restore data directly to the public cloud.
Veeam also added a new Staged Restore feature to its DataLabs copy management tool. Staged Restore reduces the time it takes to review sensitive data and remove personal information, streamlining compliance with regulations such as the GDPR. The new Secure Restore scans backups with an antivirus software interface to stop malware such as ransomware from entering production during recovery.
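Conceptually, Secure Restore gates the recovery on a scan of the mounted backup. The sketch below is a simplified model of that flow (the real feature drives an installed antivirus engine through Veeam's scanning interface; the names here are illustrative):

```python
def secure_restore(files, is_infected, restore):
    """Scan a mounted backup and only promote it to production if it is clean."""
    infected = [name for name in files if is_infected(name)]
    if infected:
        return {"status": "aborted", "infected": infected}
    restore(files)
    return {"status": "restored", "infected": []}

# Toy scanner flagging a known-bad file name; a real engine inspects content.
is_infected = lambda name: "ransom" in name
production = []
ok = secure_restore(["boot.ini", "app.exe"], is_infected, production.extend)
blocked = secure_restore(["ransom.exe"], is_infected, production.extend)
```

The important property is the ordering: nothing touches production until the scan passes, which is how the feature keeps ransomware inside the backup from re-entering the environment during recovery.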
The Veeam cloud backup updates also include integration with N2WS, an AWS data protection company that Veeam acquired a year ago. The new Veeam Availability for AWS combines Veeam N2WS cloud-native backup and recovery of AWS workloads with consolidation of backup data in a central Veeam repository. End users can move data to multi-cloud environments and manage it.
“This is designed for the hybrid customer, which is primarily what we see now,” Allan said.
For cloud-only customers, N2WS Backup & Recovery is still available from the AWS Marketplace.
Bertrand said he thinks the N2WS integration is a “starting point.”
“The key here is to offer the capability of managing all the backups in one place,” Bertrand wrote. “It gives end users better visibility over their Veeam backups whether in AWS or on premises.”
Veeam also updated its licensing. Users can move licenses automatically when workloads move between platforms, such as multiple clouds.

How Veeam products stack up in crowded market

Private equity firm Insight Venture Partners, which acquired a minority share in Veeam in 2013, invested an additional $500 million in the software vendor last Wednesday.
Veeam has about 3,500 employees and plans to add almost 1,000 more over the coming year. In addition, last November, Veeam said it is investing $150 million to expand its main research and development center in Prague. Veeam claims to have about 330,000 customers.
Veeam founder Ratmir Timashev said the vendor is looking to grow further by using its latest funding haul to acquire companies and technologies.
Veeam isn’t the only well-funded startup in the data protection and data management market. Last week, Rubrik also completed a large funding round. The $261 million Series E funding will help Rubrik build out its Cloud Data Management platform. Cohesity closed a $250 million funding round and Actifio pulled in $100 million in 2018. The data protection software market also includes established vendors such as Veritas, Dell EMC, IBM and Commvault.
While many of its competitors now offer integrated appliances, Veeam is sticking with a software-only model. Instead of selling its own hardware, Veeam partners with such vendors as Cisco, Hewlett Packard Enterprise, NetApp and Lenovo.
“Unlike many others, we have a completely software-defined platform,” Allan said, “giving our customers choices for on-premises hardware vendors or cloud infrastructure.”

Original Article:


Chaining plans in Veeam Availability Orchestrator

Chaining plans in Veeam Availability Orchestrator

Notes from MWhite  /  Michael White

So you have installed VAO, tested your plans, and you are ready for a crisis. But you want to know how you can trigger one plan and have them all go, right? I can help with that.
The overview is: you build plans, test them, and when happy you chain them. But you need to make sure the order of recovery is what it needs to be. For example, databases should be recovered before the applications that need them. In a crisis it is best to have email working so management can reassure the market, customers and employees.
In another article I am working on, I will show you a method of recovering multi-tier applications fast, rather than the slow method, which I suspect will be the default for many as it is obvious; and yet, there is a much better way to do it.

Starting Point

In my screenshot below you can see my plans. I want to recover EMail Protection first, followed by SQL, and finally View. View requires SQL, and I need email recovered first so I can tell my customers and employees that we are OK.

And yes, my EMail Protection plan is not verified, but I am having trouble with it so it needs more time. We can still chain plans regardless; when you do this yourself, make sure all your plans are verified.
The next step is to right-click on the Email Protection Plan and select Schedule. You can also do that under the Launch menu.

No, we are not actually going to schedule a failover, but this is the path we must take to chain these plans together, so that we trigger the failover on the first plan and the rest go one after another.
We need to authenticate – we don't want just anyone doing this, after all. Next we see a schedule screen, so you need to schedule a failover in the future. Not to worry, we will remove it later.

You choose Next, and Finish.
We now right click on the next plan and select Schedule.

This time, we enable the checkbox, but rather than selecting a date we will choose a plan to failover after. So select the Choose a Plan button. In our case we are going to have this second plan fire after the first one.

Now we click Next, and Finish.
Now we right-click on the last plan and select Schedule. Then we enable the checkbox and select the previous plan.

Then you do the same old Next, and Finish. Now when we look at our plans we see a little of what we have done, but we are not yet complete.

You can see in the Failover Scheduled column how the email plan (first to be recovered) occurs on a particular date, but then we have View execute after SQL, and SQL recover after Email.
Now, we must right click on EMail and select Schedule.  We must disable the scheduled failover.

As you see above, remove the checkbox, and the typical Next and Finish.

Now we see no scheduled failover for the EMail Protection plan, but the other two plans still show Run after, so if I execute a failover for EMail the other two will execute in the order we set.
So things are good. Now one plan triggered will cause the next to execute. If you select EMail, it will trigger SQL, which will then trigger View.

Things to be aware of

  • The plans need to be enabled for this to work.
  • Test failovers are not impacted by this order of execution.
  • When you do a failover with any of these plans, you will be prompted to execute the other plans in the appropriate order. This will be enabled by default but you can disable it.

Once you have your plans chained, you should arrange a day and time to do a real failover to your DR site. It is the best way to confirm things are working. BTW, all of my VAO technical articles can be found with this link.

Original Article:


10 Tips for a Solid AWS Disaster Recovery Plan

10 Tips for a Solid AWS Disaster Recovery Plan

N2WS  /  Cristian Puricica

AWS is a scalable and high-performance computing infrastructure used by many organizations to modernize their IT. However, no system is invulnerable, and if you want to ensure business continuity, you need to have some kind of insurance in place. Disaster events do occur, whether they are a malicious hack, natural disaster, or even an outage due to a hardware failure or human error. And while AWS is designed in such a way to greatly offset some of these events, you still need to have a proper disaster recovery plan in place. Here are 10 tips you should consider when building a DR plan for your AWS environment.

1. Ship Your EBS Volumes to Another AZ/Region

By default, EBS volumes are automatically replicated within the Availability Zone (AZ) where they were created, in order to increase durability and offer high availability. And while this is a great way to avoid relying on a lone copy of your data, you are still tied to a single point of failure, since your data is located in only one AZ. In order to properly secure your data, you can either replicate your EBS volumes to another AZ or, even better, to another region.
To copy an EBS volume to another AZ, you simply create a snapshot of it, and then recreate a volume in the desired AZ from that snapshot. And if you want to move a copy of your data to another region, take a snapshot of your EBS volume, then use the “copy” option and pick the region where your data will be replicated.
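As a minimal sketch, the two-step snapshot-then-copy flow above looks like this with boto3 (the AWS SDK for Python). The volume ID, region names, and helper function names here are illustrative, not from the article, and running the AWS calls requires valid credentials:

```python
def copy_snapshot_args(snapshot_id, source_region, description):
    """Build the arguments for the EC2 CopySnapshot call (pure helper)."""
    return {
        "SourceRegion": source_region,
        "SourceSnapshotId": snapshot_id,
        "Description": description,
    }

def replicate_ebs_to_region(volume_id, source_region, dest_region):
    """Snapshot a volume, then copy the snapshot to a DR region."""
    import boto3  # third-party AWS SDK; needs credentials to actually run
    src = boto3.client("ec2", region_name=source_region)
    snap = src.create_snapshot(VolumeId=volume_id,
                               Description="DR snapshot")
    # Wait until the snapshot is complete before copying it.
    src.get_waiter("snapshot_completed").wait(
        SnapshotIds=[snap["SnapshotId"]])
    # The copy is issued from a client in the *destination* region.
    dst = boto3.client("ec2", region_name=dest_region)
    return dst.copy_snapshot(**copy_snapshot_args(
        snap["SnapshotId"], source_region, "DR copy of " + volume_id))
```

Restoring in the target AZ or region is then just a matter of creating a new volume from the copied snapshot.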

2. Utilize Multi-AZ for EC2 and RDS

Just like your EBS volumes, your other AWS resources are susceptible to local failures. Making sure you are not relying on a single AZ is probably the first step you can take when setting up your infrastructure. For your database needs covered by RDS, there is a Multi-AZ option you can enable in order to create a standby RDS instance; if the primary fails, AWS fails over by switching the CNAME DNS record of your primary RDS instance to the standby. Keep in mind that this will generate additional costs, as AWS charges you double for a Multi-AZ RDS setup compared to a single RDS instance.
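Turning Multi-AZ on for an existing instance is a single ModifyDBInstance call. A hedged sketch with boto3; the instance identifier is a placeholder:

```python
def multi_az_args(db_instance_id, apply_immediately=False):
    """Arguments for RDS ModifyDBInstance to enable Multi-AZ."""
    return {
        "DBInstanceIdentifier": db_instance_id,
        "MultiAZ": True,
        # Without ApplyImmediately, the conversion waits for the
        # next maintenance window.
        "ApplyImmediately": apply_immediately,
    }

def enable_multi_az(db_instance_id, region):
    import boto3  # third-party AWS SDK; needs credentials to actually run
    rds = boto3.client("rds", region_name=region)
    return rds.modify_db_instance(**multi_az_args(db_instance_id))
```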
Your EC2 instances should also be spread across more than one AZ, especially the ones running your production workloads, to make sure you are not seriously affected if a disaster happens. Another reason to utilize multiple AZs with your EC2 instances is the potential lack of available resources in a given AZ, which can occur sometimes. To properly spread your instances, make sure AutoScaling Groups (ASG) are used, along with an Elastic Load Balancer (ELB) in front of them. ASG will allow you to choose multiple AZs in which your instances will be deployed, and ELB will distribute the traffic between them in order to properly balance the workload. If there is a failure in one of the AZs, ELB will forward the traffic to others, therefore preventing any disruptions.
With EC2 instances, you can even go across regions, in which case you would have to utilize Route53 (a highly available and scalable cloud DNS service) to route the traffic, as well as do the load balancing between regions.
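The ASG-plus-ELB spread described above can be sketched as follows; group name, launch configuration, load balancer name, and AZs are all placeholders, and this assumes a classic ELB attached by name:

```python
def asg_args(name, launch_config, azs, elb_names,
             min_size=2, max_size=4):
    """Arguments for CreateAutoScalingGroup spread over several AZs."""
    return {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": launch_config,
        "MinSize": min_size,
        "MaxSize": max_size,
        # Instances are distributed across every AZ listed here.
        "AvailabilityZones": azs,
        # The ELB balances traffic across those AZs.
        "LoadBalancerNames": elb_names,
    }

def create_spread_asg(name, launch_config, azs, elb_names, region):
    import boto3  # third-party AWS SDK; needs credentials to actually run
    asg = boto3.client("autoscaling", region_name=region)
    return asg.create_auto_scaling_group(
        **asg_args(name, launch_config, azs, elb_names))
```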

3. Sync Your S3 Data to Another Region

When we consider storing data on AWS, S3 is probably the most commonly used service. That is the reason why, by default, S3 duplicates your data behind the scenes to multiple locations within a region. This creates high durability, but data is still vulnerable if your region is affected by a disaster event. For example, there was a full regional S3 outage back in 2017 (which actually hit a couple of other services as well), which left many companies unable to access their data for almost 13 hours. This is a great (and painful) example of why you need a disaster recovery plan in place.
In order to protect your data, or just provide even higher durability and availability, you can use the cross-region replication option which allows you to have your data copied to a designated bucket in another region automatically. To get started, go to your S3 console and enable cross-region replication (versioning must be enabled for this to work). You will be able to pick the source bucket and prefix but will also have to create an IAM role so that your S3 can get objects from the source bucket and initiate transfer. You can even set up replication between different AWS accounts if necessary.
Do note, though, that cross-region replication only applies from the moment you enable it, so any data that already exists in the bucket will have to be synced by hand.
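The console steps above map to two API calls. A hedged boto3 sketch, where the bucket name, role ARN, and destination ARN are placeholders (versioning must also be enabled on the destination bucket, and the IAM role must allow S3 to read the source and write the destination):

```python
def replication_config(role_arn, dest_bucket_arn):
    """ReplicationConfiguration body for S3 PutBucketReplication."""
    return {
        "Role": role_arn,
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",  # empty prefix: replicate every new object
            "Destination": {"Bucket": dest_bucket_arn},
        }],
    }

def enable_cross_region_replication(src_bucket, role_arn, dest_bucket_arn):
    import boto3  # third-party AWS SDK; needs credentials to actually run
    s3 = boto3.client("s3")
    # Versioning is a prerequisite for cross-region replication.
    s3.put_bucket_versioning(
        Bucket=src_bucket,
        VersioningConfiguration={"Status": "Enabled"})
    s3.put_bucket_replication(
        Bucket=src_bucket,
        ReplicationConfiguration=replication_config(role_arn,
                                                    dest_bucket_arn))
```

Objects already in the bucket can be brought over once with a one-off copy, for example `aws s3 sync s3://source-bucket s3://dest-bucket --source-region us-east-1 --region eu-west-1`.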

4. Use Cross-Region Replication for Your DynamoDB Data

Just like your data residing in S3, DynamoDB only replicates data within a region. For those who want to have a copy of their data in another region, or even support for multi-master writes, DynamoDB global tables should be used. These provide a managed solution that deploys a multi-region multi-master database and propagates changes to various tables for you. Global tables are not only great for disaster recovery scenarios but are also very useful for delivering data to your customers worldwide.
Another option would be to use scheduled (or one-time) jobs which rely on EMR to back up your DynamoDB tables to S3, which can be later used to restore them to, not only another region, but also another account if needed. You can find out more about it here.
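A hedged sketch of creating a global table with boto3; the table name and regions are placeholders, and an identical table (same key schema, with streams enabled) must already exist in every listed region:

```python
def global_table_args(table_name, regions):
    """Arguments for the DynamoDB CreateGlobalTable call."""
    return {
        "GlobalTableName": table_name,
        "ReplicationGroup": [{"RegionName": r} for r in regions],
    }

def create_global_table(table_name, regions):
    import boto3  # third-party AWS SDK; needs credentials to actually run
    ddb = boto3.client("dynamodb", region_name=regions[0])
    return ddb.create_global_table(**global_table_args(table_name, regions))
```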

5. Safely Store Away Your AWS Root Credentials

It is extremely important to understand the basics around security on AWS, especially if you are the owner of the account or the company. AWS root credentials should ONLY be used to create the initial users with admin privileges, which take over from there. The root password should be stored away safely, and programmatic keys (Access Key ID and Secret Access Key) should be disabled if already created.
Somebody getting access to your admin keys would be very bad, especially if they have malicious intentions (disgruntled employee, rival company, etc.), but getting your root credentials would be even worse. If a hack like this happens, your root user is the one you would use to recover, whether to disable all other affected users, or contact AWS for help. So, one of the things you should definitely consider is protecting your account with multi-factor authentication (MFA), preferably a hardware version.
The advice to protect your credentials sometimes sounds like a broken record, but many don’t understand the actual severity of this, and companies have gone out of business because of this oversight.

6. Define your RTO and RPO

Recovery Time Objective (RTO) represents the time allowed to restore a process back to service after a disaster event occurs. If you guarantee an RTO of 30 minutes to your clients, it means that if your service goes down at 5 p.m., your recovery process should have everything up and running again within half an hour. RTO is important to help determine the disaster recovery strategy. If your RTO is 15 minutes or less, it means that you potentially don’t have time to reprovision your entire infrastructure from scratch. Instead, you would have some instances up and running in another region, ready to take over.
When looking at recovering data from backups, RTO defines what AWS services can be used as a part of disaster recovery. For example, if your RTO is 8 hours, you will be able to utilize Glacier as a backup storage, knowing that you can retrieve the data within 3–5 hours using standard retrieval. If your RTO is 1 hour, you can still opt for Glacier, but expedited retrieval costs more, so you might choose to keep your backups in S3 standard storage instead.
Recovery Point Objective (RPO) defines the acceptable amount of data loss measured in time, prior to a disaster event happening. If your RPO is 2 hours, and your system has gone down at 3 p.m., you must be able to recover all the data up until 1 p.m. The loss of data from 1 p.m. to 3 p.m. is acceptable in this case. RPO determines how often you have to take backups, and in some cases continuous replication of data might be necessary.
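The RPO arithmetic in the example above can be made concrete. A small sketch using the article's numbers (system down at 3 p.m., 2-hour RPO); the dates are arbitrary:

```python
from datetime import datetime, timedelta

def oldest_acceptable_restore_point(failure_time, rpo):
    """The newest backup must be no older than failure_time - RPO."""
    return failure_time - rpo

# Article's example: failure at 3 p.m. with a 2-hour RPO means
# everything up to 1 p.m. must be recoverable.
failure = datetime(2019, 1, 1, 15, 0)
rpo = timedelta(hours=2)
restore_point = oldest_acceptable_restore_point(failure, rpo)
# restore_point is 1 p.m.; data written after it may be lost.
# The RPO also bounds the backup interval: backups must run at
# least every `rpo` (here, every 2 hours) to honour it.
```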

7. Pick the Correct DR Scenario for Your Use Case

The AWS Disaster Recovery white paper goes to great lengths to describe various aspects of DR on AWS, and does a good job of covering four basic scenarios (Backup and Restore, Pilot Light, Warm Standby and Multi Site) in detail. When creating a plan for DR, it is important to understand your requirements, but also what each scenario can provide for you. Your needs are also closely related to your RTO and RPO, as those determine which options are viable for your use case.
These DR plans can be very cheap (if you rely on simple backups only for example), or very costly (multi-site effectively doubles your cost), so make sure you have considered everything before making the choice.

8. Identify Mission Critical Apps and Data and Design Your DR Strategy Around Them

While all your applications and data might be important to you or your company, not all of them are critical for running a business. In most cases not all apps and data are treated equally, due to the additional cost it would create. Some things have to take priority, both when making a DR plan, and when restoring your environments after a disaster event. An improper prioritization will either cost you money, or simply risk your business continuity.

9. Test your Disaster Recovery

Disaster Recovery is more than just a plan to follow in case something goes wrong. It is a solution that has to be reliable, so make sure it is up to the task. Test your entire DR process thoroughly and regularly. If there are any issues, or room for improvement, give it the highest possible priority. Also, don’t forget to focus on your technical people as well; they too need to be up to the task. Have procedures in place to familiarize them with every piece of the DR process.

10. Consider Utilizing 3rd-party DR Tools

AWS provides a lot of services, and while many companies won’t ever use the majority of them, for most use cases you have options. But having options doesn’t mean you have to rely solely on AWS. Instead, you can consider using some 3rd-party tools available in AWS Marketplace, whether for disaster recovery or something else entirely. N2WS Backup & Recovery is the top-rated backup and DR solution for AWS that creates efficient backups and meets aggressive recovery point and recovery time objectives with lower TCO. Starting with the latest version, N2WS Backup & Recovery offers the ability to move snapshots to S3 and store them in a Veeam repository format. This new feature enables organizations to achieve significant cost savings and a more flexible approach toward data storage and retention. Learn more about this here.


Disaster recovery planning should be taken very seriously; nonetheless, many companies don’t invest enough time and effort to properly protect themselves, leaving their data vulnerable. And while people often learn from their mistakes, it is much better not to make them in the first place. Make disaster recovery planning a priority and consider the tips we have covered here, but also do further research.
Try N2WS Backup & Recovery 2.4 for FREE!

Read Also

Original Article:


Top Veeam Forum Posts and Content

HELP!.. Error connecting veeam 9.5 with esxi 5.5   [VIEWS: 266 • REPLIES: 16]
Hi, I’m new with Veeam backup and I have the following problem: when I try to connect Veeam Backup 9.5 with the vCenter/ESXi 5.5 server, I get an error on the proxy server (407). Proxy authentication is required. How can I solve it to connect with my ESXi? more
confused about which ReFS version to use   [VIEWS: 153 • REPLIES: 3]
There are 2 Veeam Knowledge Base articles about ReFS that state which version is good to use with ReFS.
This one is the latest, from 2018-12-26:

This says that the good version is like this: more
V2V support   [VIEWS: 140 • REPLIES: 6]
V2V support in Veeam v9.5 U4?
Hello all,
I would like to edit a backup copy job’s schedule via PowerShell. I made a script as follows with reference to other forums. more
Backup to a Mapped Drive?   [VIEWS: 94 • REPLIES: 3]
I have a NAS set up as a mapped drive on Windows 10. I cannot correctly configure my backup to a shared folder on my NAS. What could I be doing wrong? Veeam continues to tell me it Failed to get disk free space.

Veeam Availability Console 2.0 Update 1 – New patch release

Veeam Availability Console 2.0 Update 1 – New patch release

Recently a new patch was released for Veeam Availability Console 2.0 Update 1. This new patch resolves several issues surrounding core VAC server functionality, discovery rules, monitoring, alarms and more. Read this blog post from Anthony Spiteri, global technologist of product strategy at Veeam, to learn about these issues and how they will be resolved.

Prepare your vLab for your VAO DR planning

Prepare your vLab for your VAO DR planning

CloudOasis  /  HalYaman

A good disaster recovery plan needs careful preparation. But after your careful planning, how do you validate your DR strategy without disrupting your daily business operations? Veeam Availability Orchestrator just might be your solution.
This blog post focuses on one aspect of the VBR and VAO configuration; to learn more about Veeam Availability Orchestrator, Veeam Replication and Veeam Virtual Labs, continue reading.
Veeam Availability Orchestrator is a very powerful tool to help you implement and document your DR strategy. However, this product relies on Veeam Backup & Replication to:

  • perform replication, and
  • provide the Virtual Lab.

Therefore, to successfully configure the Veeam Availability Orchestrator, you must master Replication and the Virtual Lab. Don’t worry, I will take you through the important steps to successfully configure your Veeam Availability Orchestrator, and to implement your DR Plan.
The best way to get started is to share with you a real-life DR scenario I experienced last week during my VAO POC.


A customer wanted to implement Veeam Availability Orchestrator with the following objectives:

  • Replication between the production and DR datacentres across the country,
  • Re-mapping the network attached to each VM at the DR site,
  • Re-IP of the IP address of each VM at the DR site,
  • Scheduling DR testing to run every Saturday morning,
  • Documenting the DR plan.

As you might already be aware, all those objectives can be achieved using Veeam VBR & VAO.
So let’s get started.

The Environment and the Design

To understand the design and the configuration, let’s first introduce the customer network subnets at the PRODUCTION and DR sites.

  • At the PRODUCTION site, the customer used the 192.168.33.x/24 subnet,
  • Virtual Distribution Switch and Group: SaSYD-Prod.

DR Site

  • Production network at the DR site uses the 192.168.48.x/24 subnet
  • Prod vDS: SP-DSWProd
  • DR Re-IP subnet & Network name: vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch

To accomplish the configuration and requirements at those sites, the configuration must consider the following steps:

  • Replication between the PRODUCTION and the DR Sites
  • Re-Mapping the VM Network from ProdNet to vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch
  • vLab with that configuration listed above.

The following diagram and design are what we are going to discuss in this blog post:

Replication Job Configuration

To prepare for a disaster and to implement failover, we must create a replication job that will replicate the intended VMs from the PRODUCTION site to the DR site. In this scenario, to achieve the requirements above, we must use Veeam replication with the Network Mapping and Re-IP options when configuring the replication job. To do this, we have to tick the checkboxes for Separate virtual networks (enable network mapping) and Different IP addressing scheme (enable re-IP):

At the Network stage, we will specify the source and destination networks:

Note: to follow the diagram, the source network must be: Prod_Net and the Target network will be the DR_ReIP network.
On the Re-IP, enter the original IP address and the new Re-IP address to be used at the DR site:
Next, continue with the replication job configuration as normal.

Virtual Lab Configuration

To use Veeam Availability Orchestrator to check our DR planning, and to make sure our VMs will start on the DR site as expected, we must create a Veeam Virtual Lab to test our configuration. First, let’s create a Veeam vLab, starting with the name of the vLab and the ESXi host at the DR site which will host the Veeam proxy appliance. In the following screenshot, the hostname is
Choose a datastore where you will keep the redo log files. After you have selected the datastore, press Next. You must configure the proxy appliance specifically for the network to be attached. In our example, the network is the PRODUCTION network at the DR site, named SP-DSWProd, and it has a static DR site IP address. See below.
Next, we must configure the Networking as Advanced multi-host (manual configuration), and then select the appropriate Distributed virtual switch; in our case, SP-ProdSwitch.

This leads us to the next configuration stage, Isolated Network. At this stage, we must assign the DR network that each replicated VM will be connected to.
Note: This network must be the same as the Re-Mapped network you selected as a destination network during the replication job configuration. The Isolation network is any name you assign to the temporary network used during the DR plan check.

Next, we must configure the temporary DR network. As shown on the following screenshot, I chose the Omnitra_VAO_vLab network I named on the previous step (Isolated network). The IP address is the same IP address of the DR PRODUCTION gateway. Also on the following screenshot, you can see the Masquerade network address I can use to get access to each of the VMs from the DR PRODUCTION network:

Finally, let’s create a static Mapping to access the VMs during the DR testing. We will use the Masquerade IPs as shown in the following screenshot.


Veeam Availability Orchestrator is a very powerful tool to help companies streamline and simplify their DR planning. The initial configuration of the VAO DR planning is not complex; but it is a little involved. You must navigate between the two products, Veeam Backup and Replication, and the Veeam Availability Orchestrator.
After you have grasped the DR concept, your next steps to DR planning will be smooth and simple. Also, you may have noted that to configure your DR planning using Veeam Availability Orchestrator, you must be familiar with Veeam vLabs and networking in general. I highly recommend that you read more on Veeam Virtual Labs before starting your DR Plan configuration.
I hope this blog post helps you to get started with vLab and VAO configuration. Stay tuned for my next blog post or recording about the complete configuration of the DR Plan. Until then, I hope you find this blog post informative; please post your comments in the chatter below if you have questions or suggestions.
The post Prepare your vLab for your VAO DR planning appeared first on CloudOasis.

Original Article:


Migration is never fun – Backups are no exception

Migration is never fun – Backups are no exception

Veeam Software Official Blog  /  Rick Vanover

One of the interesting things I’ve seen over the years is people switching backup products. Additionally, it is reasonable to say that the average organization has more than one backup product. At Veeam, we’ve seen this over time as organizations started with our solutions. This was especially the case before Veeam had any solutions for the non-virtualized (physical server and workstation device) space. Especially in the early days of Veeam, effectively 100% of business was displacing other products — or sitting next to them for workloads where Veeam would suit the client’s needs better.
The question of migration is something that should be discussed, as it is not necessarily easy. It reminds me of personal collections of media such as music or movies. For movies, I have VHS tapes, DVDs and DVR recordings, and use them each differently. For music, I have CDs, MP3s and streaming services — used differently again. Backup data is, in a way, similar. This means that the work to change has to be worth the benefit.
There are many reasons people migrate to a new backup product. This can be due to a product being too complicated or error-prone, too costly, or a product discontinued (current example is VMware vSphere Data Protection). Even at Veeam we’ve deprecated products over the years. In my time here at Veeam, I’ve observed that backup products in the industry come, change and go. Further, almost all of Veeam’s most strategic partners have at least one backup product — yet we forge a path built on joint value, strong capabilities and broad platform support.
When the migration topic comes up, it is very important to have a clear understanding about what happens if a solution no longer fits the needs of the organization. As stated above, this can be because a product exits the market, drops support for a key platform or simply isn’t meeting expectations. How can the backup data over time be trusted to still meet any requirements that may arise? This is an important forethought that should be raised in any migration scenario. This means that the time to think about what migration from a product would look like, actually should occur before that solution is ever deployed.
Veeam takes this topic seriously, and the ability to handle this is built into the backup data. My colleagues and I on the Veeam Product Strategy Team have casually referred to Veeam backups being “self-describing data.” This means that you open it up (which can be done easily) and you can clearly see what it is. One way to realize this is the fact that Veeam backup products have an extract utility available. The extract utility is very helpful to recover data from the command line, which is a good use case if an organization is no longer a Veeam client (but we all know that won’t be the case!). Here is a blog by Vanguard Andreas Lesslhumer on this little-known tool.
Why do I bring up the extract utility when it comes to switching backup products? Because it hits on something that I have taken very seriously of late. I call it Absolute Portability. This is a very significant topic in a world where organizations passionately want to avoid lock-in. Take the example I mentioned before of VMware vSphere Data Protection going end-of-life: Veeam Vanguard Andrea Mauro highlights how users can migrate to a new solution, but chances are that will be a different experience. Lock-in can occur in many ways, and organizations want to avoid it, whether it is cloud lock-in, storage device lock-in, or services lock-in. Veeam is completely against lock-in, and arguably so agnostic that it is sometimes hard for us to make a specific recommendation!
I want to underscore the ability to move data — in, out and around — as organizations see fit. For organizations who choose Veeam, there are great capabilities to keep data available.
So, why move? Because expanded capabilities will give organizations what they need.

Original Article:


On-Premises Object Storage for Testing

On-Premises Object Storage for Testing

CloudOasis  /  HalYaman

Many customers and partners want to deploy on-premises object storage for testing and learning purposes. How can you deploy a totally free object storage instance, with no time limit and unrestricted capacity, in your own lab?

In this blog post, I will take you through the deployment and configuration of an on-premises object storage instance for test purposes, so that you might learn more about the new feature.
For this test, I will use a product called Minio. You can download it from this link for free, and it is available for Windows, Linux and macOS.
To get started, download Minio and run the installation.
In my CloudOasis lab, I decided to use a Windows Core server to act as a test platform for my object storage. Follow the steps below to see the configuration I used:


By default, Minio Server installs as an unsecured service. The downside of this is that many applications need a secure HTTPS URL to interact with the object storage. To fix this, we download GnuTLS to generate a private and public key and secure our object storage connection.

Preparing Certificate

After GnuTLS has been downloaded and extracted to a folder on your drive, for example C:\GnuTLS, you must add that path to Windows with the following command:
setx path "%path%;C:\gnutls\bin"

Private Key

The next step is to generate the private.key:
certtool.exe --generate-privkey --outfile private.key
After the private.key has been generated, create a new file called cert.cnf and paste the following script into it:
# X.509 Certificate options
#
# DN options

# The organization of the subject.
organization = "CloudOasis"

# The organizational unit of the subject.
#unit = "CloudOasis Lab"

# The state of the certificate owner.
state = "NSW"

# The country of the subject. Two letter code.
country = "AU"

# The common name of the certificate owner.
cn = "Hal Yaman"

# In how many days, counting from today, this certificate will expire.
expiration_days = 365

# X.509 v3 extensions

# DNS name(s) of the server
dns_name = ""

# (Optional) Server IP address
ip_address = ""

# Whether this certificate will be used for a TLS server
tls_www_server

# Whether this certificate will be used to encrypt data (needed
# in TLS RSA cipher suites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key

Public Key

Now we are ready to generate the public certificate using the following command:
certtool.exe --generate-self-signed --load-privkey private.key --template cert.cnf --outfile public.crt
After you have completed those steps, you must copy the private and the public keys to the following path:
Note: If you already had your own certificates, all you have to do is to copy them to the .\.\certs folder.

Run Minio Secure Service

Now we are ready to run a secured Minio Server using the following command. Here, we assume your minio.exe is installed on your C:\ drive:
C:\minio.exe server S:\bucket
Note: S:\bucket is a second volume I created and configured in Minio Server to receive the saved objects.
After minio.exe has run, you will see the following screen:


This blog was prepared following several requests from customers and partners who wanted to familiarise themselves with object storage integration during their private beta experience.
As you saw, the steps we have described here will help you deploy on-premises object storage for your own testing, without the cost, time limits, or storage size limits.
The steps to deploy the object storage are very simple, but the outcome is significant and very helpful on your learning journey.

The post On-Premises Object Storage for Testing appeared first on CloudOasis.

Original Article:


QNAP NAS Verified Veeam Ready for Efficient Disaster Recovery – HEXUS


veeam – Google News


Taipei, Taiwan, November 19, 2018 – QNAP® Systems, Inc. today announced that multiple Enterprise-class QNAP NAS systems, including the ES1640dc v2 and TES-1885U, have been verified as Veeam® Ready. Veeam Software, the leader in Intelligent Data Management for the Hyper-Available Enterprise™, has granted QNAP NAS systems the Veeam Ready Repository distinction, verifying that these systems achieve the performance levels required for efficient backup and recovery with Veeam® Backup & Replication™ for virtual environments built on VMware® vSphere™ and Microsoft® Hyper-V® hypervisors.
“Veeam provides industry-leading Availability solutions for virtual, physical and cloud-based workloads, and verifying performance ensures that organizations can leverage Veeam advanced capabilities with QNAP NAS systems to improve recovery time and point objectives and keep their businesses up and running,” said Jack Yang, Associate Vice President of Enterprise Storage Business Division of QNAP.
Veeam Backup & Replication helps achieve Availability for ALL virtual, physical and cloud-based workloads and provides fast, flexible and reliable backup, recovery and replication of all applications and data. Organizations can now choose among several QNAP systems verified with Veeam for backup and recovery including:
For more information, please visit
About QNAP Systems, Inc.
QNAP Systems, Inc., headquartered in Taipei, Taiwan, provides a comprehensive range of cutting-edge Network-attached Storage (NAS) and video surveillance solutions based on the principles of usability, high security, and flexible scalability. QNAP offers quality NAS products for home and business users, providing solutions for storage, backup/snapshot, virtualization, teamwork, multimedia, and more. QNAP envisions NAS as being more than “simple storage”, and has created many NAS-based innovations to encourage users to host and develop Internet of Things, artificial intelligence, and machine learning solutions on their QNAP NAS.



Stateful containers in production, is this a thing?


Veeam Software Official Blog  /  David Hill

As the new world debate of containers vs virtual machines continues, there is also a debate raging about stateful vs stateless containers. Is this really a thing? Is this really happening in a production environment? Do we really need to back up containers, or can we just back up the data sets they access? Containers are not meant to be stateful, are they? This debate rages daily on Twitter, reddit and pretty much in every conversation I have with customers.
Now the debate typically starts with the question: why run a stateful container? For us to understand that question, we first need to understand the difference between a stateful and a stateless container and the purpose behind each.

What is a container?

“Containers enable abstraction of resources at the operating system level, enabling multiple applications to share binaries while remaining isolated from each other” *Quote from Actual Tech Media
A container is an application and dependencies bundled together that can be deployed as an image on a container host. This allows the deployment of the application to be quick and easy, without the need to worry about the underlying operating system. The diagram below helps explain this:
stateful containers
When you look at the diagram above, you can see that each application is deployed with its own libraries.

What about the application state?

When we think about any application in general, they all have persistent data and they all have application state data. It doesn't matter what the application is; it has to store data somewhere, otherwise what would be the point of the application? Take a CRM application: all that customer data needs to be kept somewhere. Traditionally these applications use database servers to store all the information. Nothing has changed in that regard. But when we think about the application state, this is where the discussion about stateful containers comes in. Typically, an application has five state types:

  1. Connection
  2. Session
  3. Configuration
  4. Cluster
  5. Persistent

For the purposes of this blog, we won't go into depth on each of these states, but for applications that are being written today, native to containers, these states are all offloaded to a database somewhere. The challenge comes when existing applications have been containerized. This is the process of taking a traditional application that is installed on top of an OS and turning it into a containerized application so that it can be deployed in the model shown earlier. These applications save these states locally somewhere, and where exactly depends on the application and the developer. A more common approach now is also running databases as containers, and as a consequence, these carry many of the state types listed above.
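To make the distinction concrete, here is a minimal sketch using the Docker CLI (the images, container names and volume are illustrative, not from the article): the database container keeps its state on a named volume that outlives the container, while the web tier holds no local state and can simply be redeployed.

```shell
# Stateful: the database writes its data directory to a named
# volume, so the state survives the container being removed
docker volume create crm-data
docker run -d --name crm-db \
  -v crm-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=example \
  postgres:11

# Stateless: the web tier stores nothing locally; if the
# container is lost, recovery is just redeploying the image
docker run -d --name crm-web -p 8080:80 nginx:stable
```

If crm-db is removed and recreated with the same -v mapping, it comes back with its data intact; crm-web can be destroyed and redeployed at will, which is exactly the behaviour described for stateless containers.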

Stateful containers

A container's stateful data is typically either written to persistent storage or kept in memory, and this is where the challenges come in. Being able to recover the applications in the event of an infrastructure failure is important. If all of that state lives in databases, then as mentioned earlier, that is an easy solution; but if it's not, how do you orchestrate the recovery of these applications without interruption to users? If you have load-balanced applications running and you have to restore one of them, but it doesn't know the connection or session state, the end user is going to face issues.

If we look at the diagram, we can see that “App 1” has been deployed twice across different hosts. We have multiple users accessing these applications through a load balancer. If “App 1” on the right crashes and is then restarted without any application state awareness, User 2 will not simply reconnect to that application. That application won’t understand the connection and will more than likely ask the user to re-authenticate. Really frustrating for the user, and terrible for the company providing that service to the user. Now of course this can be mitigated with different types of load balancers and other software, but the challenge is real. This is the challenge for stateful containers. It’s not just about backing up data in the event of data corruption, it’s how to recover and operate a continuous service.

Stateless containers

Now with stateless containers it's extremely easy. Taking the diagram above, the session data would be stored in a database somewhere. In the event of a failure, the application is simply redeployed and picks up where it left off. Exactly how containers were designed to work.

So, are stateful containers really happening?

When we think of containerized applications, we typically think about the new age, cloud native, born in the cloud, serverless [insert latest buzz word here] application, but when we dive deeper and look at the simplistic approach containers bring, we can understand how businesses are leveraging containers to reduce the complex infrastructure required to run these applications. It makes sense, then, that lots of existing applications that require consistent state data are appearing in production everywhere.
Understanding how to orchestrate the recovery of stateful containers is what needs to be focused on, not whether they are happening or not.



Enhanced self-service restore in Veeam Backup for Microsoft Office 365 v2


In Veeam Backup for Microsoft Office 365 1.5, you could only restore the most recently backed up recovery point, which limited its usefulness for most administrators looking to take advantage of the feature. That’s changed in Veeam Backup for Microsoft Office 365 v2 with the ability to now choose a point in time from the Veeam Explorers™. Read this blog from Anthony Spiteri, Global Technologist, Product Strategy to learn more.


More tips and tricks for a smooth Veeam Availability Orchestrator deployment


Veeam Software Official Blog  /  Melissa Palmer

Welcome to even more tips and tricks for a smooth Veeam Availability Orchestrator deployment. In the first part of our series, we covered the following topics:

  • Plan first, install next
  • Pick the right application to protect to get a feel for the product
  • Decide on your categorization strategy, such as using VMware vSphere Tags, and implement it
  • Start with a fresh virtual machine

Configure the DR site first

After you have installed Veeam Availability Orchestrator, the first site you configure will be your DR site. If you are also deploying production sites, it is important to note, you cannot change your site’s personality after the initial configuration. This is why it is so important to plan before you install, as we discussed in the first article in this series.

As you are configuring your Veeam Availability Orchestrator site, you will see an option for installing the Veeam Availability Orchestrator Agent on a Veeam Backup & Replication server. Remember, you have two options here:

  1. Use the embedded Veeam Backup & Replication server that is installed with Veeam Availability Orchestrator
  2. Push the Veeam Availability Orchestrator Agent to existing Veeam Backup & Replication servers

If you change your mind and do in fact want to use an existing Veeam Backup & Replication server, it is very easy to install the agent after initial configuration. In the Veeam Availability Orchestrator configuration screen, simply click VAO Agents, then Install. You will just need to know the name of the Veeam Backup & Replication server you would like to add and have the proper credentials.

Ensure replication jobs are configured

No matter which Veeam Backup & Replication server you choose to use for Veeam Availability Orchestrator, it is important to ensure your replication jobs are configured in Veeam Backup & Replication before you get too far in configuring your Veeam Availability Orchestrator environment. After all, Veeam Availability Orchestrator cannot fail replicas over if they are not there!

If for some reason you forget this step, do not worry. Veeam Availability Orchestrator will let you know when a Readiness Check is run on a Failover Plan. As the last step in creating a Failover Plan, Veeam Availability Orchestrator will run a Readiness Check unless you specifically un-check this option.

If you did forget to set up your replication jobs, Veeam Availability Orchestrator will let you know, because your Readiness Check will fail, and you will not see green checkmarks like this in the VM section of the Readiness Check Report.

For a much more in-depth overview of the relationship between Veeam Backup & Replication and Veeam Availability Orchestrator, be sure to read the white paper Technical Overview of Veeam Availability Orchestrator Integration with Veeam Backup & Replication.

Do not forget to configure Veeam DataLabs

Before you can run a Virtual Lab Test on your new Failover Plan (you can find a step-by-step guide to configuring your first Failover Plan here), you must first configure your Veeam DataLab in Veeam Backup & Replication. If you have not worked with Veeam DataLabs before (previously known as Veeam Virtual Labs), be sure to read the white paper I mentioned above, as configuration of your first Veeam DataLab is also covered there.

After you have configured your Veeam DataLab in Veeam Backup & Replication, you will then be able to run Virtual Lab Tests on your Failover Plan, as well as schedule Veeam DataLabs to run whenever you would like. Scheduling Veeam DataLabs is ideal to provide an isolated production environment for application testing, and can help you make better use of those idle DR resources.

Veeam DataLabs can be run on demand or scheduled from the Virtual Labs screen. When running or scheduling a lab, you can also select the duration of time you would like the lab to run for, which can be handy when scheduling Veeam DataLab resources for use by multiple teams.

There you have it, even more tips and tricks to help you get Veeam Availability Orchestrator up and running quickly and easily. Remember, a free 30-day trial of Veeam Availability Orchestrator is available, so be sure to download it today!

The post More tips and tricks for a smooth Veeam Availability Orchestrator deployment appeared first on Veeam Software Official Blog.



How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs


Veeam Software Official Blog  /  Melissa Palmer

Unfortunately, bad patches are something everyone has experienced at one point or another. Just take the most recent example of the Microsoft Windows October 2018 Update, which impacted both desktop and server versions of Windows. This update resulted in missing files on impacted systems and has temporarily been paused while Microsoft investigates.
Because of incidents like this, organizations are often hesitant to adopt patches quickly. This is one of the reasons the WannaCry ransomware was so impactful. Unpatched systems introduce risk into environments, as new exploits for old problems are on the rise. Before patching a system, organizations must first do two things: back up the systems to be patched, and perform patch testing.

A recent, verified Veeam Backup

Before we patch a system, we always want to make sure we have a backup that matches our organization’s Recovery Point Objective (RPO), and that the backup was successful. Luckily, Veeam Backup & Replication makes this easy to schedule, or even run on demand as needed.
Beyond the backup itself succeeding, we also want to verify the backup works correctly. Veeam's SureBackup technology allows for this by booting the VM in an isolated environment, then testing the VM to make sure it is functioning properly. Veeam SureBackup gives organizations additional peace of mind that their backups have not only succeeded, but will be usable.

Rapid patch testing with Veeam DataLabs

Veeam DataLabs enable us to test patches rapidly, without impacting production. In fact, we can use that most recent backup we just took of our environment to perform the patch testing. Remember the isolated environment we just talked about with Veeam SureBackup technology? You guessed it, it is powered by Veeam DataLabs.
Veeam DataLabs allows us to spin up complete applications in an isolated environment. This means that we can test patches across a variety of servers with different functions, all without even touching our production environment. Perfect for patch testing, right?
Now, let’s take a look at how the Veeam DataLab technology works.
Veeam DataLabs are configured in Veeam Backup & Replication. Once they are configured, a virtual appliance is created in VMware vSphere to house the virtual machines to be tested. Beyond the virtual machines you plan on testing, you can also include key infrastructure services such as Active Directory, or anything else the virtual machines you plan on testing require to work correctly. This group of supporting VMs is called an Application Group.
patch testing veeam backup datalabs
In the above diagram, you can see the components that support a Veeam DataLab environment.
Remember, these are just copies from the latest backup, they do not impact the production virtual machines at all. To learn more about Veeam DataLabs, be sure to take a look at this great overview hosted here on the blog.
So what happens if we apply a bad patch to a Veeam DataLab environment? Absolutely nothing. At the end of the DataLab session, the VMs are powered off, and the changes made during the session are thrown away. There is no impact to the production virtual machines or the backups leveraged inside the Veeam DataLab. With Veeam DataLabs, patch testing is no longer a big deal, and organizations can proceed with their patching activities with confidence.
This DataLab can then be leveraged for testing, or for running Veeam SureBackup jobs. SureBackup jobs also provide reports upon completion. To learn more about SureBackup jobs, and see how easy they are to configure, be sure to check out the SureBackup information in the Veeam Help Center.

Patch testing to improve confidence

The hesitance to apply patches is understandable in organizations; however, there can be significant risk if patches are not applied in a timely manner. By leveraging Veeam backups along with Veeam DataLabs, organizations can quickly test as many servers and environments as they would like before installing patches on production systems. The ability to rapidly test patches ensures any potential issue is discovered long before any data loss or negative impact to production occurs.

No VMs? No problem!

What about the other assets in your environment that can be impacted by a bad patch, such as physical servers, desktops, laptops, and full Windows tablets? You can still protect these assets by backing them up using Veeam Agent for Microsoft Windows. These agents can be automatically deployed to your assets from Veeam Backup & Replication. To learn more about Veeam Agents, take a look at the Veeam Agent Getting Started Guide.
To see the power of Veeam Backup & Replication, Veeam DataLabs, and Veeam Agent for Microsoft Windows for yourself, be sure to download the 30-day free trial of Veeam Backup & Replication here.
The post How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs appeared first on Veeam Software Official Blog.



Office 365 Backup now available in the Azure Marketplace!


Veeam Software Official Blog  /  Niels Engelen

When we released Veeam Backup for Microsoft Office 365 in July, we saw a huge adoption rate and many inquiries about running the solution within Azure. It is with great pleasure that we can announce that Veeam Backup for Microsoft Office 365 is now available in the Azure Marketplace!

A simple deployment model

Veeam Backup for Microsoft Office 365 within Azure falls under the BYOL (Bring Your Own License) model, which means you only have to buy the licenses you need, in addition to the Azure infrastructure costs.
The deployment is easy. Just define your project and instance details combined with an administrator login and you're good to go. You will notice a default size will be selected; however, this can always be changed. Keep in mind it is advised to follow the minimum system requirements, which can be found in the User Guide.

VBO365 Azure deployment

Once you've added your disks and configured the networking, you're good to go, and the Azure portal will even share details of the Azure infrastructure costs with you, such as the example below for a Standard A4 v2 VM.

VBO365 Azure pricing

If you are wondering how to calculate the amount of storage needed for Exchange, SharePoint and OneDrive data, Microsoft provides great reports for this within the Microsoft 365 admin center under the reports option.
Once the VM has been deployed, you can connect over RDP and are good to go with a pre-installed Veeam Backup for Microsoft Office 365 installation. Keep in mind that, by default, the retention on the repository is set to 3 years, so you may need to modify this to suit your organization's needs.

Two ways to get started!

You can provision Veeam Backup for Microsoft Office 365 in Azure and bring a 30-day trial key with you to begin testing.
You can also deploy the solution within Azure and back up all your Office 365 data free forever – limited to a maximum of 10 users and 1TB of SharePoint data within your organization.
Ready to get started? Try it out today and head to the Azure Marketplace right now!

See Also:

The post Office 365 Backup now available in the Azure Marketplace! appeared first on Veeam Software Official Blog.



Availability for your Nutanix AHV with Veeam


vZilla  /  michaelcade

This series highlights the steps to deploy, install and configure Veeam Availability for Nutanix AHV, how to start protecting workloads, and the recovery options we have available.
Everything You Need to Know for Veeam Availability for Nutanix AHV

Now that we have our Veeam Proxy Appliance deployed, installed and configured, the next step is to start protecting some of the workloads we have sitting in our Nutanix AHV Cluster.
Navigate to the backup jobs tab on the top ribbon.

Here we can add a new backup job, a simple wizard driven approach to start protecting those workloads.

Next, we need to add our virtual machines.

In my scenario I have simple virtual machines. If you are leveraging Nutanix Protection Domains, you can also use that grouping here to select your virtual machines. We can also leverage dynamic mode, which allows new workloads to be added and removed under that protection domain automatically.

Add the virtual machine or machines that you wish to protect.

The next option is selecting the destination of the backup job. To be able to see the backup repository, the account used on the VBR server needs to have the correct permissions. This is configured from the Veeam Backup & Replication console.

There are some advanced settings that can also be set to remove deleted VMs from the backup that are no longer included in the backup job.

The final step is to configure the schedule. This allows you to choose the interval between backups and how many restore points you must retain.

The final screen is the summary of the backup job you are about to complete.

You will also notice the option to run the backup job when Finish is selected; this starts the backup job process and triggers a full backup of the virtual machines you have selected.

Over in VBR you can see the job also running. In a very similar fashion to what we saw with the original Veeam Endpoint Backup, we see enough to know that something is happening, but nothing can be configured for this job from within Veeam Backup & Replication.

Back in the Veeam Availability for Nutanix AHV we now have a completed backup job.

Veeam Backup & Replication also shows the completed job and the steps that have occurred during the job.

We will also now see the specific job in our Veeam Backup & Replication console under the backups giving us the ability to perform certain recovery tasks against those backup files.

And we also see the completed job now under the backup jobs in the proxy appliance interface. Here we can perform an ad hoc Active Full, and we can also start, stop and edit the job.

Over on the Protected VMs tab you will also notice that we now have visibility into the virtual machines that are protected with how many snapshots and backups are present.

To finish, if you head back to the dashboard you will now see the job status showing that we have one created backup job and it is currently idle.

That's all for the availability section of this series. It gives us the ability to create backup jobs for the virtual machines that sit within the Nutanix AHV cluster. This is an agentless approach; for application consistency you will require the Nutanix Guest Tools.
One thing to note: if you have a transactional workload, we would recommend using the Veeam Agent to provide not only application consistency but also log truncation within the application. This is not required if the application can manage that truncation task itself.
Next up we will look at the recovery steps and options we have.
The post Availability for your Nutanix AHV with Veeam appeared first on vZilla.



NetApp ONTAP with Veeam – simpler access to Hyper-Availability


Veeam Executive Blog – The Availability Lounge  /  Carey Stanton

What a great year it has been for our partnership with NetApp. Our two companies have worked diligently to realize the vision of a single point of sale for Veeam Hyper-Availability with NetApp ONTAP. Now, I’m pleased to share that the strength of the NetApp and Veeam partnership has made this a reality. NetApp customers can purchase joint NetApp ONTAP and Veeam Hyper-Availability solutions from our joint partners around the globe.
The NetApp and Veeam partnership began in 2015, with each successive year bringing the two companies closer together in our common cause to deliver the best experience to our customers. Last year, when we announced the intent to offer Veeam Hyper-Availability solutions on the NetApp Global price list, we knew great things were coming to build success together. And this year has delivered.
Veeam is a premier sponsor at NetApp Insight, delivering the Veeam Hyper-Availability Platform across the entire NetApp Data Fabric portfolio. Whether you use ONTAP, NetApp HCI, E-Series or StorageGRID, Veeam is right there as the Availability solution of choice.
If you’re a NetApp customer or partner, then you have the ability and confidence of knowing two of the hottest tech companies are in sync to transform and protect your data.
NetApp ONTAP is the foundation for many organizations’ data management strategy. Veeam’s integration with ONTAP, and the simplicity, agility and efficiency the joint solution delivers, is a key reason why NetApp customers choose Veeam over competitive data protection solutions.
The NetApp resale of Veeam Hyper-Availability solutions provides our joint customers with the confidence to invest in pre-validated NetApp and Veeam solutions. This is particularly important for organizations transforming their business workloads across multi-cloud environments.
By simplifying application performance and Hyper-Availability for multi-cloud workloads, NetApp and Veeam provide the following key benefits:

  • Simplified IT. Less time spent on IT operations and more time spent helping the business transform and innovate.
  • Lower costs. Improved data center efficiencies combined with the flexibility to leverage the economies of scale of public cloud infrastructure without compromising performance or Availability.
  • Increased ROI. Accelerate business innovation through Intelligent Data Management by leveraging Veeam DataLabs with secondary NetApp storage to facilitate application development, data analytics, DR compliance testing, end-user training and more.

Don’t just take our word for it. According to recent IDC research, customers using Veeam and NetApp get nearly a 300% ROI over five years with the following benefits:

  • Nine-month payback on investment
  • 58% lower cost of operations
  • 36% lower cost of hardware
  • 89% less unplanned downtime

We here at Veeam believe partnerships are critical to the continued success of our customers. Working in unison with others in the industry is our philosophy, on both a personal and corporate level. We are proud to be a NetApp partner and proud of the trust between our two organizations. I look forward to seeing you at NetApp Insight this year and the continued success of our two companies in the year to come.



Creating policy with exclusion of folders


Notes from MWhite  /  Michael White

I need to add Bitdefender protection to my Veeam Backup & Replication server so this article is going to help with that. But it is mostly going to talk about adjusting the policy that is applied to the Veeam server so that it doesn’t impact the backups.


We need to have access to the information about what to exclude on the Veeam server; that is found in this article. We also need access to our GravityZone UI so we can create a policy, add exclusions to it, and then apply it to the VM.

Let's get started.

Creating Policy

We need to start off in the GravityZone UI and change to the Policies area.

We do not want to edit the default policy, as it is applied to everyone; plus, I believe we cannot delete it. So we are going to select it and clone it, giving us a new copy that we can tweak and attach to our Veeam server. Once it is cloned, it opens up and appears as below.

We need to name it something appropriate – I will call it VBR Exclusions. I like the default policy and think it is pretty good, so I am going to leave this clone as it was and only add the Veeam exclusions to it. Then change to the Antimalware area and select Settings.

You can see it below, where I have already entered the Veeam server exclusions.

You only need to enable Custom Exclusions with the checkmark, then add in what you see above. Once you have finished, use the Save button to save this new policy. It is the same as the default – which, as I said, I like – except that it has the additional exclusions.

I do not know how to attach a policy to a package so that when it is installed it gets the policy.  So we are going to install with the default and change it afterwards.

Install the Bitdefender client

Likely you know how to do that – download a package and execute it. Once done make sure you see it in the GravityZone UI.

Now we need to assign the proper policy to our Veeam server.

Policy Assignment

We need to be in the Policies \ Assignment Rules area.

We add a location rule by using Add \ Location. Once we do that we see the following screen.

We add a name and description, select the policy we just created the exclusions in, and add the IP address of our Veeam server.

Now we change to the Policies view; it may take a minute or two, and then you will see something different.

We see that one endpoint has the policy, which makes sense, but it shows 4 applied, which is confusing. However, running a Policy Compliance report, which shows which machine has which policy, confirms that VBR01 – my Veeam server – is the only one that has the policy.

So things look good now. We have created a special policy for our Veeam server, applied it, and confirmed it was applied.

Any questions or comments let me know.


=== END ===



How to bring balance into your infrastructure


Veeam Software Official Blog  /  Evgenii Ivanov

Veeam Backup & Replication is known for ease of installation and a moderate learning curve. It is something that we take as a great achievement, but as we see in our support practice, it can sometimes lead to a "deploy and forget" approach, without fine-tuning the software or learning the nuances of how it works. In our previous blog posts, we examined tape configuration considerations and some common misconfigurations. This time, the blog post is aimed at giving the reader some insight into a Veeam Backup & Replication infrastructure, how data flows between the components, and most importantly, how to properly load-balance backup components so that the system can work stably and efficiently.

Overview of a Veeam Backup & Replication infrastructure

Veeam Backup & Replication is a modular system. This means that Veeam as a backup solution consists of a number of components, each with a specific function. Examples of such components are the Veeam server itself (as the management component), proxy, repository, WAN accelerator and others. Of course, several components can be installed on a single server (provided that it has sufficient resources) and many customers opt for all-in-one installations. However, distributing components can give several benefits:

  • For customers with branch offices, it is possible to localize the majority of backup traffic by deploying components locally.
  • It allows you to scale out easily. If your backup window increases, you can deploy an additional proxy. If you need to expand your backup repository, you can switch to a scale-out backup repository and add new extents as needed.
  • You can achieve high availability for some of the components. For example, if you have multiple proxies and one goes offline, backups will still be created.

Such a system can only work efficiently if everything is balanced. An unbalanced backup infrastructure can slow down due to unexpected bottlenecks or even cause backup failures because of overloaded components.

Let’s review how data flows in a Veeam infrastructure during a backup (we’re using a vSphere environment in this example):

All data in Veeam Backup & Replication flows between source and target transport agents. Let’s take a backup job as an example: a source agent runs on a backup proxy, and its job is to read the data from a datastore, apply compression and source-side deduplication, and send the data over to a target agent. The target agent runs directly on a Windows/Linux repository, or on a gateway if a CIFS share is used. Its job is to apply target-side deduplication and save the data in a backup file (.VBK, .VIB, etc.).

That means there are always two components involved, even if they are essentially on the same server and both must be taken into account when planning the resources.

Tasks balancing between proxy and repository

To start, we must examine the notion of a “task.” In Veeam Backup & Replication, a task is equal to a VM disk transfer. So, if you have a job with 5 VMs and each has 2 virtual disks, there is a total of 10 tasks to process. Veeam Backup & Replication is able to process multiple tasks in parallel, but the number is still limited.

If you go to the proxy properties, on the first step you can configure the maximum concurrent tasks this proxy can process in parallel:


On the repository side, you can find a very similar setting:

For normal backup operations, a task on the repository side also means one virtual disk transfer.

This brings us to our first important point: it is crucial to keep the resources and number of tasks in balance between proxy and repository. Suppose you have 3 proxies set to 4 tasks each (meaning that on the source side, 12 virtual disks can be processed in parallel), but the repository is set to only 4 tasks (the default setting). In that case, only 4 tasks will be processed at a time, leaving the other proxy slots idle.
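
To make the arithmetic concrete, here is a minimal sketch (in Python; the function name is purely illustrative) of how the effective parallelism is capped by the slower side of the data flow:

```python
# Effective parallelism is the smaller of the two sides of the data flow:
# total task slots across all proxies vs. the repository's task limit.
def effective_parallel_tasks(proxy_slots, repo_slots):
    """proxy_slots: list of per-proxy concurrent task limits;
    repo_slots: the repository's concurrent task limit."""
    return min(sum(proxy_slots), repo_slots)

# Three proxies with 4 task slots each, repository at the default of 4:
print(effective_parallel_tasks([4, 4, 4], 4))    # 4 -> 8 proxy slots sit idle
# Raising the repository limit to 12 removes the bottleneck:
print(effective_parallel_tasks([4, 4, 4], 12))   # 12
```

This only models task scheduling, of course; actual throughput also depends on the I/O and CPU capacity behind each task slot.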

The meaning of a task on a repository is different when it comes to synthetic operations (like creating a synthetic full). Recall that synthetic operations do not use proxies and happen locally on a Windows/Linux repository, or between a gateway and a CIFS share. In this case, for normal backup chains a task is a backup job (so 4 tasks mean that 4 jobs can generate their synthetic fulls in parallel), while for per-VM backup chains a task is still a VM (so 4 tasks mean that the repository can generate 4 separate VBKs for 4 VMs in parallel). Depending on the setup, the same number of tasks can create a very different load on a repository! Be sure to analyze your setup (the backup job mode, the job scheduling, the per-VM option) and plan resources accordingly.

Note that, unlike for a proxy, you can disable the limit on the number of parallel tasks for a repository. In this case, the repository will accept all incoming data flows from proxies. This might seem convenient at first, but we highly discourage disabling this limitation, as it may lead to overload and even job failures. Consider this scenario: a job has many VMs with a total of 100 virtual disks to process, and the repository uses the per-VM option. The proxies can process 10 disks in parallel, and the repository is set to an unlimited number of tasks. During an incremental backup, the load on the repository is naturally limited by the proxies, so the system stays in balance. But then a synthetic full starts. Synthetic fulls do not use proxies, and all operations happen solely on the repository. Since the number of tasks is not limited, the repository will try to process all 100 tasks in parallel! This requires immense resources from the repository hardware and will likely cause an overload.
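
The same back-of-the-envelope check works for synthetic operations. A sketch under the assumptions described above (one task per backup chain: per job for normal chains, per VM for per-VM chains; the function name is illustrative):

```python
# During a synthetic full, proxies are out of the picture, so the repository's
# own task limit is the only brake. "chains" is the number of backup chains
# to rebuild; repo_limit=None models the "unlimited tasks" setting.
def synthetic_parallel_tasks(chains, repo_limit=None):
    return chains if repo_limit is None else min(chains, repo_limit)

# Per-VM chains, 100 chains, unlimited repository tasks: 100 rebuilds at once.
print(synthetic_parallel_tasks(100))                 # 100 -> likely overload
# Capping the repository at 10 tasks keeps the load predictable:
print(synthetic_parallel_tasks(100, repo_limit=10))  # 10
```
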

Considerations when using CIFS share

If you are using a Windows or Linux repository, the target agent starts directly on that server. When using a CIFS share as a repository, the target agent starts on a special component called a “gateway,” which receives the incoming traffic from the source agent and sends the data blocks to the CIFS share. The gateway must be placed as close as possible to the system sharing the folder over SMB, especially in scenarios with a WAN connection. You should not create topologies with a proxy/gateway on one site and the CIFS share on another site “in the cloud” — you will likely encounter periodic network failures.

The same load-balancing considerations described previously apply to gateways as well. However, the gateway setup requires additional attention because there are two options available: set the gateway explicitly or use the automatic selection mechanism:

Any Windows “managed server” can become a gateway for a CIFS share. Depending on the situation, both options can come in handy. Let’s review them.

You can set the gateway explicitly. This option can simplify resource management — there can be no surprises as to where the target agent will start. It is recommended to use this option if access to the share is restricted to specific servers, or in distributed environments — you don’t want your target agent to start far away from the server hosting the share!

Things become more interesting if you choose Automatic selection. If you are using several proxies, automatic selection gives you the ability to use more than one gateway and distribute the load. Automatic does not mean random, though; there are strict rules involved.

The target agent starts on the proxy that is doing the backup. For normal backup chains, if several jobs run in parallel and each is processed by its own proxy, multiple target agents can start as well. However, within a single job, even if the VMs in the job are processed by several proxies, the target agent starts on only one proxy: the first one to start processing. For per-VM backup chains, a separate target agent starts for each VM, so you get load distribution even within a single job.

Synthetic operations do not use proxies, so the selection mechanism is different: the target agent starts on the mount server associated with the repository (with the ability to fail over to the Veeam server if the mount server is unavailable). This means that the load of synthetic operations will not be distributed across multiple servers. As mentioned above, we discourage setting the number of tasks to unlimited — that can cause a huge load spike on the mount/Veeam server during synthetic operations.

Additional notes

Scale-out backup repository. SOBR is essentially a collection of usual repositories (called extents). You cannot point a backup job to a specific extent, only to the SOBR; however, extents retain some of their settings, including load control. So what was discussed about standalone repositories pertains to SOBR extents as well. A SOBR with the per-VM option (enabled by default), the “Performance” placement policy, and backup chains spread out across extents will be able to optimize resource usage.

Backup copy. Instead of a proxy, source agents start on the source repository. All the considerations described above apply to source repositories as well (although in the case of a backup copy job, synthetic operations on a source repository are logically not possible). Note that if the source repository is a CIFS share, the source agents start on the mount server (with a failover to the Veeam server).

Deduplication appliances. For Data Domain and StoreOnce (and possibly other appliances in the future) with Veeam integration enabled, the same considerations apply as for CIFS share repositories. For a StoreOnce repository with source-side deduplication (Low Bandwidth mode), the requirement to place the gateway as close to the repository as possible does not apply — for example, a gateway on one site can be configured to send data to a StoreOnce appliance on another site over the WAN.

Proxy affinity. A feature added in 9.5, proxy affinity creates a “priority list” of proxies that should be preferred when a certain repository is used.

If a proxy from the list is not available, a job will use any other available proxy. However, if a proxy from the list is available but does not have free task slots, the job will pause, waiting for free slots. Even though proxy affinity is a very useful feature for distributed environments, it should be used with care, especially because it is very easy to set this option and forget about it. Veeam Support has encountered cases of “hanging” jobs that came down to an affinity setting that was enabled and forgotten about. More details on proxy affinity.


Whether you are setting up your backup infrastructure from scratch or have been using Veeam Backup & Replication for a long time, we encourage you to review your setup with the information from this blog post in mind. You might be able to optimize the use of resources or mitigate some pending risks!

The post How to bring balance into your infrastructure appeared first on Veeam Software Official Blog.



Veeam ONE Report – Custom – host firmware info


Notes from MWhite  /  Michael White

I recently showed a customer how to build a Veeam ONE report that shows the firmware of the VMware hosts, along with the version and build of the VMware software. It was an interesting use of the custom reporting in Veeam ONE; I have since shown it to other people, and since they liked it too, I thought I would write about it.
We start by logging into the Veeam ONE Reporter UI. It is the one on port 1239, so https://fqdn_ONE_server:1239. Next, change to the Workspace tab and select the Custom Reports folder. You can see it below.

Once we click on Custom Reports we will see a selection of them.

The one that we are going to start with, sort of as a framework for what we want, is the one called Custom Infrastructure. It is seen above with the red arrow pointing at it.
Let's click on Custom Infrastructure so that we can work with it.

Depending on what you have done with this report type before, some of the fields may look different. Object type above says Host System, but that is because I have used it before. It may say Click to choose for you; if it does, select it and choose Host System as you see above.
Next we select the Click to choose for Columns. What we are going to do here is lay out our report. I would like to have the following columns: Name, Virtualization Info, Version, Build, Host BIOS firmware, and Manufacturer. So let's pick those. As you select these values, notice all the other things you can pick! Once selected, it should look like below.

Once you have selected what you need, use the OK button to return to the main screen. I like to use the Preview button now to confirm what the report looks like.

And it is what I want, so I change tabs to get back to where we hit the Preview button.
Now we use the Save as button to save our report.

I like to save my reports to the My Reports folder. Even though we made this report ourselves, it is now treated like the others, so it can be produced automatically on a schedule, for example. Or you can click on the report in your My Reports folder and do things like edit it, view it, or delete it.
When you have previewed the report, or when you are looking at it, you can always export it.  At the top of the screen you will see an Export button.

When you use that Export button you can export to Word, Excel or PDF.
Hope that this helps, but you can always comment or ask questions. Veeam ONE is a pretty powerful tool, but people don’t always seem to see that, so I am going to do some articles to help with that.
BTW, any technical ONE articles can be found using this tag.
=== END ===



Resume Veeam Failed Backup Jobs


CloudOasis  /  HalYaman

What are your options if you wish to automate the resumption of backup jobs on your Veeam Backup server after a failover? Is there a way to automatically resume your backup jobs after switching over to the Veeam Backup server?
As you may be aware, Veeam does not offer an “out-of-the-box” high availability function for its own backup server. The reason is probably that the simplicity of deploying and recovering the backup server makes such an option seem redundant. You can read more about the steps at this link. However, you might still like to have an HA function for its convenience.
One of the ways to protect the Veeam Backup server, and to offer high availability for it, is to replicate the backup server using the Veeam Backup & Replication product itself, or you could use the VMware vSphere Replication tool.
To accomplish a complete High Availability solution for the Veeam Backup server, it is important that after the failover has happened, the Replica Veeam backup server automatically resumes the backup jobs.
To accomplish the automatic resumption after a failover, you can use Veeam PowerShell to initiate the resumption of failed backup jobs. It requires a few simple steps, and can be automated by adding the following PowerShell commands to the post-failover script:


The following commands check the status of each backup and replication job and resume, or retry, each failed one (the third command combines the first two):

Get-VBRJob | where {$_.JobType -eq "Backup" -and $_.GetLastResult() -eq "Failed"} | Start-VBRJob -RetryBackup
Get-VBRJob | where {$_.JobType -eq "Replica" -and $_.GetLastResult() -eq "Failed"} | Start-VBRJob -RetryBackup
Get-VBRJob | where {($_.JobType -eq "Backup" -or $_.JobType -eq "Replica") -and $_.GetLastResult() -eq "Failed"} | Start-VBRJob -RetryBackup


These steps, together with the steps in the previous blog post where I told you about protecting the Veeam Backup server, allowed the Service Provider mentioned at the top of the page to implement an easy and straightforward HA solution for his Veeam Backup server.
Testing was carried out, adjustments were made, and the finished solution was implemented in the PRODUCTION environment. These easy steps have become the cornerstone of his Veeam High Availability solution. “Easy, and very effective.” … is how he described it.

The post Resume Veeam Failed Backup Jobs appeared first on CloudOasis.



The Office 365 Shared Responsibility Model


Veeam Software Official Blog  /  Russ Kerscher

The No. 1 question we get all the time: “Why do I need to back up my Office 365 Exchange Online, SharePoint Online and OneDrive for Business data?”
And it’s normally instantly followed up with a statement similar to this: “Microsoft takes care of it.”
Do they? Are you sure?
To add some clarity to this discussion, we’ve created an Office 365 Shared Responsibility Model. It’s designed to help you — and anyone close to this technology — understand exactly what Microsoft is responsible for and what responsibility falls on the business itself. After all — it is YOUR data!
Over the course of this post, we’re going to populate this Shared Responsibility Model. On the top half of the model, you will see Microsoft’s responsibility. This was compiled from information in the Microsoft Office 365 Trust Center, in case you would like to look for yourself.
On the bottom half, we will fill in the responsibility that falls on the business, or more specifically, the IT organization.

Now, let’s kick this off by talking specifically about each group’s primary responsibility. Microsoft’s primary responsibility is focused on THEIR global infrastructure and their commitment to millions of customers to keep this infrastructure up and running, consistently delivering uptime reliability of their cloud service and enabling the productivity of users across the globe.
An IT organization’s responsibility is to have complete access and control of their data — regardless of where it resides. This responsibility doesn’t magically disappear simply because the organization made a business decision to utilize a SaaS application.

Here you can see the supporting technology designed to help each group meet that primary responsibility. Office 365 includes built-in data replication, which provides data center to data center georedundancy. This functionality is a necessity. If something goes wrong at one of Microsoft’s global data centers, they can failover to their replication target, and, in most cases, the users are completely oblivious to any change.
But replication isn’t a backup. And furthermore, this replica isn’t even YOUR replica; it’s Microsoft’s. To further explain this point, take a minute and think about this hypothetical question:

What has you fully protected, a backup or a replica?

Some of you might be thinking a replica — because data that is continuously or near-continuously replicated to a second site can eliminate application downtime. But some of you also know there are issues with a replication-only data protection strategy. For example, deleted data or corrupt data is also replicated along with good data, which means your replicated data is now also deleted or corrupt.
To be fully protected, you need both a backup and a replica! This fundamental principle has been the bedrock of Veeam’s data protection strategy for over 10 years. Look no further than our flagship product, aptly named Veeam Backup & Replication.

Some of you are probably already thinking: “But what about the Office 365 recycle bin?” Yes, Microsoft has a few different recycle bin options, and they can help you with limited, short-term data loss recovery. But if you are truly in complete control of your data, then “limited” can’t check the box. To truly have complete access to and control of your business-critical data, you need full data retention: short-term retention, long-term retention, and the ability to fill any and all retention policy gaps. In addition, you need granular recovery, bulk restore, and point-in-time recovery options at your fingertips.

The next part of the Office 365 Shared Responsibility Model is security. You’ll see that this is strategically designed as a blended box, not separate boxes — because both Microsoft AND the IT organization are each responsible for security.
Microsoft protects Office 365 at the infrastructure level. This includes the physical security of their data centers and the authentication and identification within their cloud services, as well as the user and admin controls built into the Office 365 UI.
The IT organization is responsible for security at the data level. There’s a long list of internal and external data security risks, including accidental deletion, rogue admins abusing access, and ransomware, to name a few. Watch this five-minute video on how ransomware can take over Office 365. This alone will give you nightmares.

The final components are legal and compliance requirements. Microsoft makes it very clear in the Office 365 Trust Center that their role is of the data processor. This drives their focus on data privacy, and you can see on their site that they have a great list of industry certifications. Even though your data resides within Office 365, an IT organization’s role is still that of the data owner. And this responsibility comes with all types of external pressures from your industry, as well as compliance demands from your legal, compliance or HR peers.

In summary, now you should have a better understanding of exactly what Microsoft protects within Office 365 and WHY they protect what they do. Without a backup of Office 365, you have limited access and control of your own data. You can fall victim to retention policy gaps and data loss dangers. You also open yourself up to some serious internal and external security risks, as well as regulatory exposure.
All of this can be easily solved with a backup of your own data, stored in a place of your choosing, so that you can easily access and recover exactly what you want, when you want.

Looking to find a simple, easy-to-use Office 365 backup solution?
Look no further than Veeam Backup for Microsoft Office 365. This solution has already been downloaded by over 35,000 organizations worldwide, representing 4.1 million Office 365 users across the globe. Veeam was also named to Forbes World’s Best 100 Cloud Companies and is a Gold Microsoft Partner. Give Veeam a try and see for yourself.

Additional resources:

The post The Office 365 Shared Responsibility Model appeared first on Veeam Software Official Blog.



Intelligent Data Management for a Hybrid World @ VMworld

Intelligent Data Management for a Hybrid World

vZilla  /  michaelcade

Our session this year is focusing on the automation and orchestration around Veeam and VMware. But what does that mean? The point of our session is to highlight the flexibility of the Veeam Hyper-Availability Platform. Some people just want the simple, easy-to-use, wizard-driven approach to installing their Veeam components within their environments, but some will want a little bit more, and this is where APIs come in and allow us to drive a more streamlined and automated approach to delivering Veeam components.
We also highlighted this by running through everything live, I will get to the nuts and bolts of that shortly.

With there being a strong focus at this year’s event, we wanted to highlight the capabilities by using VMware on AWS. Veeam was one of the first vendors highlighted as a supported data protection platform that could protect workloads within VMware on AWS (that was one year ago), and we wanted to highlight those features and capabilities within Veeam.

Veeam Availability Orchestrator – “Replication on Steroids”

The first thing we will touch on is Veeam Availability Orchestrator, released this year. It provides a “replication on steroids” option for your vSphere environment, which can be on premises or any other vSphere environment, say VMware on AWS, where you might still like to keep your DR location on premises and send those replicas down in case of any service disruption within the AWS cloud. The replication concentrates on the application rather than just sending a VM from site to site, and it also enables you to run automated testing against these replicas to simulate disaster recovery scenarios. Oh, and the other large part of this is the automated documentation. Ever had to create your own DR run book? I have; this does the majority of it for you while staying dynamic to any changes in configuration.

Veeam DataLabs

Then we wanted to highlight some more automation goodness around Veeam DataLabs. Alongside the backup and replication capability, this gives you an automated way of testing that your backups, replicas or storage snapshots are in a good, recoverable state. It also provides the ability to get more leverage from those sources and provide isolated environments for other ways of gaining insight or driving better business outcomes.
I plan to follow up on this, as it is one of my passions within our technology stack; the ability to leverage Veeam DataLabs from many of the products in the platform to drive different outcomes is really where I see us differentiating in the market.

The Bulk of the session

As you can see, we are already cramming quite a bit into the session. But this is the main focus point of the day for us: delivering a fully configured Veeam environment from the ground up, all live while we are on stage. Oh, and because we can, we are doing this on VMware on AWS.
The driving use case for this was the Veeam proof-of-concept process. I mean, it was already fast: deploy one Windows server, and 7 clicks later you have Veeam configured. Perfect. But the issue wasn’t the Veeam installation. What if we could take an automated approach, spend the meeting understanding the customer’s pain points and needs, and in the background automate the process of building the Veeam components and automatically start protecting a subset of data, all in the first hour of that meeting?
The beauty of this is that you do not need to be a DevOps engineer skilled in configuration management or a developer in Ruby. The hard work has been done already and is available for free on GitHub and in the Chef Supermarket.
I have listed the tools below that we used to get things up and running and to be honest if you were to pull this down when we make this available you will only really need PowerShell, PowerCLI and Terraform installed on your workstation.

The steps we went through live were deploying the Veeam Backup & Replication server along with multiple proxy servers to deal with the load appropriately. Because of the location of the Veeam components and our production environment, we chose to also leverage the native AWS services and deployed an EC2 instance for our backup repository, but this could be any storage in any location as per our repository best practices. We also added a Veeam Cloud Connect service provider to show a different media type and location for your backup or replication requirements. Finally, we automated the provisioning of vSphere tags and then created backup jobs based on those.

By the end of the session we had the following built out, over on the right you can see we have a Veeam Backup & Replication server and some additional proxies. On the left at the top we have our Veeam cloud connect backup as a service offering and at the bottom left we also have our on-premises vSphere environment where we could send further backup files or even as a target for those replication jobs. Underneath the VMware on AWS you can see the Amazon EC2 instance where we will store our initial backup files for fast recovery.

As I know some of you will be catching this whilst at the show I want to give a shameless plug out for the next session which goes into more detail around the Chef element of this dynamic deployment so you can find those details below.

I also want to give a huge shoutout to @VirtPirate, aka Jeremy Goodrum of Exosphere, who helped make the Terraform and Chef piece happen. He also has an article over here diving into the latest version of the cookbook and some other related code releases he has made.
Veeam Cookbook 2.1.1 released and Sample vSphere Terraform Templates
Expect to see much more content about this in the form of a whitepaper and more blogs to consume.
The post Intelligent Data Management for a Hybrid World appeared first on vZilla.



White paper: Six reasons to backup Office 365 – ITWeb


veeam – Google News

Six reasons to backup Office 365. Do you have control of your Office 365 data? Do you have access to all the items you need? The knee-jerk reaction is typically: “Of course I do,” or “Microsoft takes care of it all.” But if you really think about it, are you sure? This report explores the hazards of not having an Office 365 backup in your arsenal, and why backup solutions for Microsoft Office 365 fill the gap of long-term retention and data protection.



#VMworld 2018 – Day 2 – #Veeam in The #AWS Marketplace


vZilla  /  michaelcade

That’s a wrap for day 2 of VMworld. There were two big things worth mentioning from a Veeam perspective; the first is the VMware on AWS marketplace and the addition of Veeam Backup & Replication as an option for automated deployment.

This screen has been taken from beta testing.

To summarise what this means: for the Veeam customers that have taken the steps to leverage VMware on AWS, it gives them the same simple, easy-to-use way to get things protected from a backup and replication point of view, using the same toolset that we know from our on-premises vSphere environment.

Yesterday I spoke about automation, and we also shared in our session that this feature is coming available. Both this and the Chef and Terraform approach we spoke about yesterday are types of automation, but for possibly different end users and prospects.

The beauty of the Chef deployment we discussed is that it allows us to be more distributed and dynamic; this offering is about getting things up and running as fast as possible and starting to protect workloads. There will also be some customers that don’t want to explore the open source community of our Chef cookbook.

It’s as simple as deploying to your SDDC and then choosing the CloudFormation template. The template will use the VPC that is linked to your VMware on AWS instance. The CloudFormation template will be executed on the VMware on AWS instance.

This template from CloudFormation will then run through the creation of the stack required.

When that CloudFormation template has been run, it’s then time to start the environment configuration: resource pools, network configuration, etc.

Then is the summary screen to show you all the configuration you are about to commit.

This will then continue the deployment with your configuration and you will be able to see the Veeam server deployed within your SDDC.

When you first log in to the newly created Veeam server, you will see that the repository server has been added per the stack configuration of the CloudFormation template. It has also added your vSphere vCenter server. You can now see the VMs within your SDDC and can begin protecting those instances.

The final thing I wanted to share was the capabilities, it’s not just backup, you also have the ability to replicate these virtual machines from this vSphere environment to any other environment including an on-premises environment, a Cloud Connect Service Provider offering replication as a service or to another vSphere environment anywhere.

The post #VMworld 2018 – Day 2 – #Veeam in The #AWS Marketplace appeared first on vZilla.



Instant visibility & restore for Microsoft SharePoint


Veeam Software Official Blog  /  Kirsten Stoner

Microsoft SharePoint is an invaluable tool used by organizations worldwide for data sharing and collaboration among teams. SharePoint provides businesses with a way to increase teamwork and productivity, streamline their processes, and improve their business outcomes. There are several deployment options available for SharePoint: on-premises, online through Office 365, or a hybrid deployment. Each option has its own benefits; however, this blog post focuses on Veeam Explorer for SharePoint used in an on-premises deployment of Microsoft SharePoint. If you’re using SharePoint Online, Veeam Explorer is available to you as well through Veeam Backup for Microsoft Office 365.

An on-premises SharePoint farm can consist of multiple servers, each of which needs to remain operational to meet your end users' expectations. To meet those demands, it's important to have an Availability strategy in place. Veeam meets these expectations by giving you the technology to browse the database, restore individual items, and gain instant visibility, all while remaining easy to use.

Veeam Explorer for Microsoft SharePoint

Veeam has developed many powerful built-in Explorers in its software, and Veeam Explorer for Microsoft SharePoint is no different. From the Veeam backup of your SharePoint server, you gain the ability to browse the content database, recover necessary items without having to fully restore, and start the virtual machine hosting the content database. Like the other Veeam Explorers, this tool is available with all editions of Veeam Backup & Replication, even the Free Edition!

When you perform a backup of your SharePoint server, remember to enable Application-Aware Image Processing. This technology creates a transactionally consistent backup to guarantee the proper recovery of the applications running on your VMs. Once you have successfully created the backup or replica of your SharePoint server, you can start using the Explorer. A couple of options are available to you: browsing the SharePoint database, restoring individual SharePoint items and permissions, exporting items (sending them as an email attachment or saving them to another location), and restoring SharePoint sites.

Instant Visibility

Once you’re ready to perform a recovery, the application item restore wizard will auto-discover the SharePoint farms that were backed up and initiate the mount operation. During this operation, Veeam Backup & Replication retrieves information about SharePoint sites, the corresponding database server VMs, and restore points.

Figure 1: Veeam SharePoint Item restore wizard

When first initiating the restore, the wizard shows you the list of available sites included in the backup, allowing you to choose which site you want to explore to find the items you need. The Application Aware Image Processing technology is how Veeam Backup & Replication auto-discovers the information about your SharePoint Servers. It is important to remember to select this option when first performing the backup.

Figure 2: Veeam Explorer for Microsoft SharePoint

Within the Explorer itself is where you can view the content databases, sites, subsites, libraries and lists. Depending on what you select, you can browse and view its contents to find what needs to be restored. If you're restoring a document, you can even open and preview it to ensure it's the correct item to recover. Available in all editions of Veeam Backup & Replication, Veeam Explorer for Microsoft SharePoint delivers granular browsing and search capabilities to find any item or multiple items stored in any number of Microsoft SharePoint databases. To support this capability, the guest file system of the VM is mounted directly from the backup to a staging Microsoft SQL Server. By default, Veeam uses the SQL Server Express instance that was installed when you deployed Veeam Backup & Replication. One thing to note: the staging system must be the same version as, or compatible with, the Microsoft SQL Server that hosts the Microsoft SharePoint content databases. If it is not, you will need to identify a compatible staging SQL Server in order to use the Explorer. This setting is available within the Veeam Explorer options, under the SQL Server Settings tab. For detailed instructions on this functionality, please refer to the user guide.

With the amount of visibility Veeam Explorer for Microsoft SharePoint provides, you may want to keep track of who is accessing the Explorer, what they are looking at, and why they are performing restores. For this, Veeam offers another layer of visibility, especially when it comes to restore operations. It comes in the form of Veeam ONE, specifically the Restore Operators Report, which lets you safeguard your data by seeing who is accessing it, where it is being restored to, and what items are being restored.

Veeam ONE Restore Operators Report

Veeam offers powerful, useful tools to ensure Availability for your business. Sometimes, we need to take an extra step to ensure we are also meeting the security requirements of the business. Veeam ONE's Restore Operators Report gives you a detailed description of who is accessing your backup data and what restores they are, or are not, performing. This adds an extra layer of visibility by letting you view all types of restore actions performed across your Veeam backup servers.

Figure 3. Restore Operators Activity Report

The above report shows who is accessing the backup data and what restores they are performing. This is an easy way to ensure that only the people with permission to access certain data are accessing it, and only when and how they're supposed to. The image also shows the different users performing restores and the type of each restore, whether it's an application, full VM, or file restore, or even a restore from tape.

Figure 4. Restore Operators Report Continued

Going deeper into the report, you can see which VMs the users are accessing and what restores they are performing, or whether they are performing a restore at all. This report is very useful for double-checking that your users are only accessing what they should be accessing.


Microsoft SharePoint is a valuable tool used in organizations today to increase collaboration among teams, improve organizational knowledge, and enable better decisions. Veeam Explorer for Microsoft SharePoint helps you keep your business' most important applications available to meet your end users' demands. An added benefit: Veeam Explorer for Microsoft SharePoint is even included in Veeam Backup Free Edition, allowing you to start using this powerful technology today!




Microsoft LAPS deployment and configuration guide


Veeam Software Official Blog  /  Gary Williams

If you haven’t come across the term “LAPS” before, you might wonder what it is. The acronym stands for the “Local Administrator Password Solution.” The idea behind LAPS is that it allows for a piece of software to generate a password for the local administrator and then store that password in plain text in an Active Directory (AD) attribute.

Storing passwords in plain text may sound counter to all good security practices, but because LAPS uses Active Directory permissions, those passwords can only be seen by users or groups that have been granted the rights to see them.

The main use case: you can freely give out the local admin password to someone who is travelling and might have problems logging in using cached account credentials. You can then have LAPS set a new password the next time the machine talks to an on-site AD over a VPN.

The tool is also useful for applications that have an auto login capability. The recently released Windows Admin Center is a great example of this:

To set up LAPS, there are a few things you will need to do to get it working properly.

  1. Download the LAPS MSI file
  2. Schema change
  3. Install the LAPS Group Policy files
  4. Assign permissions to groups
  5. Install the LAPS DLL

Download LAPS

LAPS comes as an MSI file, which you'll need to download and install onto a client machine. You can download it from Microsoft.

Schema change

LAPS needs to add two attributes to Active Directory: the administrator password and the password expiration time. Changing the schema requires the LAPS PowerShell component to be installed. When done, launch PowerShell and run the following commands:

Import-Module AdmPwd.PS

Update-AdmPwdADSchema

You need to run these commands while logged in to the network as a schema admin.

Install the LAPS group policy files

The group policy needs to be installed onto your AD servers. The *.admx file goes into the "\Windows\PolicyDefinitions" folder and the *.adml file goes into "\Windows\PolicyDefinitions\[language]"

Once installed, you should see a LAPS section in GPMC under Computer configuration -> Policies -> Administrative Templates -> LAPS

The four options are as follows:

Password settings — This lets you set the complexity of the password and how often it is required to be changed.

Name of administrator account to manage — This is only required if you rename the administrator to something else. If you do not rename the local administrator, then leave it as “not configured.”

Do not allow password expiration time longer than required by policy — On some occasions (e.g. if the machine is remote), the device may not be on the network when the password expiration time is up. In those cases, LAPS will wait to change the password. If you set this to FALSE, then the password will be changed regardless of whether it can talk to AD or not.

Enable local admin password management — Turns on the group policy (GPO) and allows the computer to push the password into Active Directory.

The only option that needs to be altered from “not configured” is the “Enable local admin password management,” which enables the LAPS policy. Without this setting, you can deploy a LAPS GPO to a client machine and it will not work.
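As a quick way to confirm on a client that the LAPS GPO actually applied, the settings above land under a well-known registry key. A minimal sketch in PowerShell (value names per the LAPS documentation; your policy may not set all of them):

```powershell
# On a client machine, after a gpupdate, inspect the applied LAPS policy values.
# AdmPwdEnabled = 1 means "Enable local admin password management" is on.
Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft Services\AdmPwd" |
    Select-Object AdmPwdEnabled, PasswordComplexity, PasswordLength, PasswordAgeDays
```

If the key is missing entirely, the GPO has not reached the machine at all, which narrows down where to look next.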

Assign permissions to groups

Now that the schema has been extended, the LAPS group policy needs to be configured and permissions need to be allocated. The way I do this is to set up an organizational unit (OU), where computers will get the LAPS policy, plus a read-only group and a read/write group.

Because LAPS is a push process (i.e., the LAPS client on the computer is the one that sets the password and pushes it to AD), the computer's SELF object in AD needs to have permission to write to AD.

The PowerShell command to allow this to happen is:

Set-AdmPwdComputerSelfPermission -OrgUnit <name of the OU to delegate permissions>

To allow helpdesk admins to read LAPS-set passwords, we need to give a group that permission. I always set up a "LAPS Password Readers" group in AD, as it makes future administration easier. I do that with this line of PowerShell:

Set-AdmPwdReadPasswordPermission -OrgUnit <name of the OU to delegate permissions> -AllowedPrincipals <users or groups>

The last group I set up is a “LAPS Admins” group. This group can tell LAPS to reset a password the next time that computer connects to AD. This is also set by PowerShell and the command to set it is:

Set-AdmPwdResetPasswordPermission -OrgUnit <name of the OU to delegate permissions> -AllowedPrincipals <users or groups>
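Putting the three delegation commands together, a minimal end-to-end sketch might look like the following. The domain, OU path, and group names here are assumptions for illustration only; adjust them to your own AD structure:

```powershell
Import-Module AdmPwd.PS

# Hypothetical OU holding the LAPS-managed computers
$ou = "OU=LAPS Computers,DC=contoso,DC=com"

# Let computers in the OU write their own password attribute to AD
Set-AdmPwdComputerSelfPermission -OrgUnit $ou

# Allow the helpdesk group to read stored passwords
Set-AdmPwdReadPasswordPermission -OrgUnit $ou -AllowedPrincipals "LAPS Password Readers"

# Allow the admins group to force a password reset on next policy refresh
Set-AdmPwdResetPasswordPermission -OrgUnit $ou -AllowedPrincipals "LAPS Admins"
```

Run this as a user with rights to delegate permissions on the OU.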

Once the necessary permissions have been set up, you can move computers into the LAPS enabled OU and install the LAPS DLL onto those machines.


Install the LAPS DLL

Now that the OU and permissions have been set up, the admpwd.dll file needs to be installed onto all the machines in the OU that have the LAPS GPO assigned to it. There are two ways of doing this. First, you can simply select the AdmPwd DLL extension component when installing from the LAPS MSI file.

Or, you can copy the DLL (AdmPwd.dll) to a location on the path, such as "%windir%\system32", and then issue a regsvr32.exe AdmPwd.dll command. This process can also be included in a GPO start-up script or a golden image for future deployments.

Now that the DLL has been installed on the client, a gpupdate /force should allow the locally installed DLL to do its job and push the password into AD for future retrieval.

Retrieving passwords is straightforward. If the user in question has at least the LAPS read permission, they can use the LAPS GUI to retrieve the password.
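Besides the GUI, the AdmPwd.PS module also exposes a cmdlet for retrieval, which is handy for scripting. A small sketch; the computer name is a placeholder:

```powershell
Import-Module AdmPwd.PS

# Retrieve the current local admin password and its expiry for a machine.
# "PC-042" is a hypothetical computer name; requires the LAPS read permission.
Get-AdmPwdPassword -ComputerName "PC-042" |
    Format-List ComputerName, Password, ExpirationTimestamp
```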

The LAPS GUI can be installed by running the setup process and ensuring that “Fat Client UI” is selected. Once installed, it can be run just by launching the “LAPS UI.” Once launched, just enter the name of the computer you want the local admin password for and, if the permissions are set up correctly, you will see the password displayed.

If you do not, check that the GPO is being applied and that the permissions are set for the OU where the user account is configured.


Like anything, LAPS can cause a few quirks. The two most common I see are staff with permissions being unable to view passwords, and client machines not updating the password as required.

The first thing to check is that the admpwd.dll file is installed and registered. Then, check that the GPO is applying to the server that you’re trying to change the local admin password on with the command gpresult /r. I always like to give applications like LAPS their own GPO to make this sort of troubleshooting much easier.

Next, check that the GPO is actually turned on. One of the oddities of LAPS is that it is perfectly possible to set everything in the GPO and assign the GPO to an OU, but it will not do anything unless the "Enable local admin password management" option is enabled.

If there are still problems, double-check the permissions that have been assigned. LAPS won't error out; the LAPS GUI will just show a blank for the password, which could mean either that the password has not been set or that the permissions have not been set correctly.

You can double-check permissions using the extended attributes section of Windows permissions. You can access this by launching Active Directory Users and Computers -> browse to the computer object -> Properties -> Security -> Advanced

Double click on the security principal:

Scroll down and check that both Read ms-Mcs-AdmPwd and Write ms-Mcs-AdmPwd are ticked.
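The AdmPwd.PS module can also report which principals hold these extended rights on an OU, which is quicker than clicking through each object. A hedged sketch; the OU path is a placeholder:

```powershell
Import-Module AdmPwd.PS

# List who can read the ms-Mcs-AdmPwd attribute for computers in the OU.
Find-AdmPwdExtendedRights -Identity "OU=LAPS Computers,DC=contoso,DC=com" |
    Format-List ObjectDN, ExtendedRightHolders
```

Any unexpected holder in the output is worth investigating, since it can read every managed local admin password in that OU.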

In summary, LAPS works very well, and it is a great tool for deployment to servers and end-user machines, especially laptops and the like. It can be a little tricky to get working, but it is certainly worth the time investment.


The post Microsoft LAPS deployment and configuration guide appeared first on Veeam Software Official Blog.



How to Build a Failover Plan in Veeam Availability Orchestrator


Veeam Software Official Blog  /  Melissa Palmer

One of the most important components of Veeam Availability Orchestrator is the Failover Plan. The Failover Plan is an essential part of an organization’s disaster recovery plan. It contains the virtual machines to be protected, what steps to take during recovery, and other important information.

Now, we are going to take a look at the step-by-step process of creating your disaster recovery plan with Veeam Availability Orchestrator.

When you start the New Failover Plan Wizard, you will first be prompted to select a site. If you have multiple sites in your VAO environment, you would pick the production site of the application you are protecting.

Next, we want to give our Failover Plan a name. I like to use something that is clear and concise, such as the application name. You can also enter a description of your Failover Plan, as well as the contact information for the application you are protecting.

Next, we select the VM Group (or multiple VM Groups) containing the virtual machines of our application. As mentioned in a previous post, VM Groups can be powered by VMware vSphere Tags. In this list, you can see the VMware vSphere Tags I have set up in my environment. In this case, I am going to select the applications with the HeliumRUN Windows Tag, since it has the virtual machines I am protecting with this Failover Plan.
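Since VM Groups can be driven by vSphere Tags, the tag itself can be created and assigned with VMware PowerCLI rather than in the vSphere Client. A minimal sketch, assuming PowerCLI is installed and you are connected via Connect-VIServer; the category name and VM names are hypothetical:

```powershell
# Create a tag category and a tag for VAO protection (names are examples).
New-TagCategory -Name "VAO-Protection" -Cardinality Single -EntityType VirtualMachine
New-Tag -Name "HeliumRUN Windows" -Category "VAO-Protection"

# Assign the tag to the VMs that make up the application.
Get-VM -Name "app-web-01", "app-sql-01" |
    ForEach-Object { New-TagAssignment -Tag "HeliumRUN Windows" -Entity $_ }
```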

Next are our VM Recovery Options. On this screen, we can decide how to handle a VM recovery failure in the unlikely event it happens. We can use VAO to run scheduled recovery tests on a regular basis, so this sort of failure should be a rare occurrence. We can also specify whether to recover our VMs in a particular order or all at the same time, and how many VMs to recover simultaneously.

In the next screen, we select the steps to take for each VM during recovery. After we finish creating the Failover Plan, we will be able to add additional steps for individual VMs, including custom steps we upload to VAO. This is useful when we want to configure particular steps to verify the operation of an application such as Exchange, SharePoint, IIS, or SQL. For a complete list of Failover Plan steps included with VAO, be sure to take a look at the official Veeam Availability Orchestrator user guide. Some steps, such as Verify SQL Database, require credentials. If you select a step that requires credentials, you will be prompted to enter them.

One of the most important things to remember is that after we execute a disaster recovery plan, our disaster recovery site is now our production site. Because of this, it is very important that our applications receive the same level of protection they would on any other day. Luckily, Veeam Availability Orchestrator makes this easy by leveraging a pre-configured template job in Veeam Backup & Replication. At this screen, you can simply select the backup job you wish to use to protect your data at the disaster recovery site.

After ensuring your data is protected after your disaster recovery plan has executed, the next step is to configure Veeam Availability Orchestrator’s reporting capabilities. VAO has a completely customizable report template. These disaster recovery plan templates allow for the inclusion of all information needed during a disaster recovery plan execution, and can be scheduled to be sent to key stakeholders on a regular basis to ensure the environment is always ready for failover. For more about the reports included in VAO, be sure to check out this guide to VAO terminology.

By default, the Plan Definition Report and Readiness Check are scheduled to run daily, which is a great way to check the health of our disaster recovery plan. The Plan Definition Report includes all the information about the Failover Plan we just created, as well as a log of changes that have been made. The Readiness Check is a light-weight test that checks to ensure we are ready for a failover at a moment’s notice. If for some reason our Readiness Check has an error, we can then act to remediate it before a disaster strikes.

Finally, we are presented with a summary screen that shows us how our Failover Plan has been configured.  Once we click Finish, we have completed setting up our Failover Plan.

If we want to make any changes to our Failover Plan, it's as simple as right-clicking on our Failover Plan and selecting "Edit," or highlighting our Failover Plan and clicking "Manage" and then "Edit" on the navigation bar. The edit state is where we can add specific steps to each virtual machine, or to the failover plan in general. For example, I have uploaded a script to be run in the event of a disaster to make some DNS changes for my environment.

This screen can be used to add either pre- or post-failover steps, or steps to each VM individually. The steps can also be put into a particular order if desired. The best part of this functionality is the ability to create a custom flow of steps as needed for each VM. For example, I may want to use the included steps of Verify Web Server Port and Verify Web Site (IIS) for a web server in the Failover Plan, and different steps on the SQL server. All of these steps will then be captured in a Plan Definition Report the next time it is run.

Congratulations, you are now protecting your application with Veeam Availability Orchestrator! If you want to take a look at creating your own Failover Plan, you can download a 30-day FREE trial of Veeam Availability Orchestrator.

The post How to Build a Failover Plan in Veeam Availability Orchestrator appeared first on Veeam Software Official Blog.



Virtual Lab “xray” not found on server “”


Notes from MWhite  /  Michael White

This has been bothering me for a bit.  When I do a test failover I see the error above.  Here is a screenshot:

This is an example out of my lab.  I could start and stop the virtual lab in the VAO UI with no issues.  I removed / uninstalled everything and tried again.  Still had the same issue.
The solution was a major surprise to me. Virtual labs must be created where the replication they consume is configured and started.
In the example of the screenshot I started the replication to my DR site on  But yet, I created the virtual lab in the DR site.
The answer to all this is that you need to start your replication in the DR site, and of course that is where you should create the virtual labs that you use in VAO.
Hope that this is clear, but if not, Google will hopefully deliver you to this article, and you can use the comments to learn more.
=== END ===



Veeam Intelligent Data Management for Huawei OceanStor


Veeam Software Official Blog  /  Stefan Renner

Veeam and Huawei recently released new, integrated storage snapshot and orchestration capabilities for customers using Veeam and Huawei OceanStor storage. This new Veeam Plug-in is based on the Veeam Universal Storage API and allows Veeam solutions to deliver higher levels of backup and recovery efficiency when paired with the Huawei OceanStor storage infrastructure.
The constant flow and management of data is taxing today’s organizations to their limit. Data has become hyper-critical to business, but IT organizations struggle to cope with their data’s hyper-growth and hyper-sprawl while protecting against data loss threats, ransomware, service outages and human error — all of which result in loss of business, productivity and reputation.
To address these new business and technical requirements, Veeam partnered with Huawei and other leading storage providers to deliver integrated data protection and storage solutions. OceanStor customers can now leverage Veeam storage integration for VMware environments, bringing new levels of Intelligent Data Management to their data center for better RTPO (recovery time and point objectives).

Faster, efficient backup for Huawei OceanStor

Backup operations strain production storage environments, resulting in lower performance. The new Veeam-Huawei integration brings agentless Veeam Backup from Storage Snapshots capabilities to OceanStor storage, increasing Veeam backup speed by 2x and making it up to 10x faster compared to competing backup solutions.
Veeam's use of VMware Changed Block Tracking while reading data from storage snapshots minimizes the performance impact on production VMs during backup. As a result, the VMware snapshot lifetime is lowered to minutes instead of hours, which is often the case when using VMware-based backups without storage snapshots.

During standard Veeam backup procedures, Veeam uses parallel disk backup to reduce the backup window, but the VMware VM snapshot may remain open for some time, as would be expected. This leads to higher I/O while the VMware VM snapshot is open, and could impact the performance of the VM.
The new integration allows Veeam to create Huawei snapshots in the background directly after the VM snapshot creation. The result is nearly instantaneous. In the example below, the VMware VM snapshot is open only until the storage snapshot has been created, lowering the time the VMware snapshot needs to remain open.

While certain variables can come into play and affect performance, this level of performance will not be uncommon for Huawei OceanStor customers using Veeam Availability storage snapshot integration.

Orchestrate your Huawei OceanStor storage snapshots

Veeam integration with OceanStor also includes orchestration capabilities that reduce backup management complexity and increase efficiency. The new Veeam Plug-in can orchestrate application-consistent storage snapshots without the need for any agent installed within the VMs.

In most IT organizations, backups are ideally scheduled during off hours, when they will not affect performance. However, the realities of today’s business demands on IT infrastructures make this approach obsolete. In today’s world, “off hours” don’t exist anymore. Constant use of available compute and infrastructure resources is key to better return on investment, so low usage windows are harder to find. More importantly, backups need to be taken more often to protect against data loss. Where one backup a night was acceptable in the past, the continuous creation of recovery points to meet higher recovery point objectives is now the common service level demand.
Veeam snapshot orchestration helps you address this need with a mix of frequent crash-consistent and application-consistent snapshots.

Unlimited recovery options from storage snapshot

With Veeam Explorer for Storage Snapshots, you can use either Veeam orchestrated, or any other existing Huawei storage snapshot, to recover full VMs or single items from a snapshot, in many ways depending on what will be most efficient for the recovery situation.

Veeam integration brings new levels of flexibility for recovery with Huawei OceanStor storage. Full VM recovery is simple and quick, but more often than not, simple item recovery within a VM is the recovery use case.
Veeam Explorer for Storage Snapshots gives IT teams recovery of individual items without requiring the normal time and resource consuming process of re-provisioning a VM. This item-level recovery is supported for Microsoft Exchange, SQL, Active Directory and SharePoint objects through a simple Windows Explorer-like interface. Veeam allows data, files, emails and more to be pulled from backups and into the production environment with a few clicks. Recovering Oracle databases out of a storage snapshot is also supported.

Automated DR/recovery verification and DataLabs

The new Veeam Plug-in for Huawei also brings automation to one of the biggest pain points most organizations struggle to address in their backup and recovery operations: the fact that most IT backup administrators can never really be sure they can recover from their restore points in a disaster or when data is lost.

With Veeam and Huawei integration, Veeam On-Demand Sandbox for Storage Snapshots automates the process of creating a completely isolated copy of your production environment, verifies recoverability from the snapshot, reports the status, and then deprovisions the environment. Veeam builds the DataLab from recent storage snapshots created by Huawei or any other third-party software and runs through a complete routine to verify VM boot, network connections, and application function. When finished, Veeam then reports the test results via email or through the enhanced reporting found in Veeam Availability Suite.
Creating what Veeam calls a DataLab addresses two core needs in modern IT infrastructures. First, it addresses the need for verified recoverability to meet regulatory and operational requirements, not to mention peace of mind for the IT team, knowing they are prepared for disaster recovery when called upon.
Second, DataLabs are extremely valuable to address the needs of your development teams, or any others that constantly require a dedicated lab environment with real-world data for the purpose of new solution development, upgrade and deployment testing, as well as risk assessment and mitigation planning.

Veeam and Huawei OceanStor are better together

Huawei is the latest storage provider to partner with Veeam for more efficient data management, more efficient backup, and faster recovery. Regardless of whether you want to speed up your Veeam backups or if you want to use storage snapshots next to real backups to lower your RTPO, with the newly released Veeam Storage Plug-in for Huawei OceanStor, you are on the right track and ready for the future.

More resources

The post Veeam Intelligent Data Management for Huawei OceanStor appeared first on Veeam Software Official Blog.



VBO365 Backup: The Quick “How To” Guide


CloudOasis  /  HalYaman

Veeam Backup for Office 365 version 2 was released last week. In this blog post, following the Sydney VeeamON event where these videos were presented, I'm pleased to share the demonstrations with you all to help you quickly get started with a few basic topics.

Adding an Organization and Creating a Backup Job:

Restoring a Mailbox:

Restoring a OneDrive Item:

Restoring Microsoft SharePoint:

Migrating an MS365 Mailbox to On-Premises Exchange:




3 steps to extend your archival options to Microsoft Azure Blob storage


Veeam Software Official Blog  /  Andrew Zhelezko

Long-term archival policies remain a consistent part of many enterprise infrastructures today. The need to maintain the 3-2-1 rule, meet corporate standards, or provide regulatory compliance keeps archival options firmly on the agenda. The convenience of virtual tape libraries (VTLs) enables enterprise businesses to extend their tape-based backups to virtual disks, or simply switch to newer operational methods without lengthy preparation, as staff are already familiar with the terms. While Veeam provides native tape support for backing up to virtual tape libraries, it now extends this with an option to leverage StarWind VTL for Microsoft Azure Blob storage, enabling all users looking for cheap and reliable cloud storage to easily and securely store backups and files there.
With this integration, Veeam and StarWind customers can tier their backup data on site, maintaining one to two weeks of data on-premises, while moving longer-term archives directly to a more cost-effective Microsoft Azure Blob Storage. In this blog, I’ll be covering how to tackle the latter. For a more detailed look, you can view the webinar.
Before we rush into details, I'd like to mention that it will take very little effort for existing Veeam customers to accomplish the process. However, due to the many available deployment scenarios (starting with Veeam Backup & Replication (VBR) and StarWind software installed on the same server on-premises, and ending with VBR and StarWind independently deployed on Azure VMs), please take a moment to think about traffic flow and the best configuration for your system before you proceed.
As for the configuration itself, I'd split it into three logical steps:

1. Azure preparation

Go to the Azure portal, find storage accounts, and add another one (or repurpose an existing blob storage account). Make sure to provide all the required information and select blob storage as the account kind. Then proceed to the newly created storage account, copy the storage account name and a key (Settings –> Access keys), and create a container that will be used for storing the data (Blob service –> Containers –> New). You'll need this data later, when configuring cloud replication in StarWind VTL.

Figure 1. Azure blob storage details
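The same preparation can be scripted with the Az PowerShell module. A minimal sketch, assuming the module is installed and you have logged in with Connect-AzAccount; the resource group, account, and container names are placeholders:

```powershell
# Create a resource group and a cool-tier blob storage account (names are examples).
New-AzResourceGroup -Name "vtl-rg" -Location "westeurope"
New-AzStorageAccount -ResourceGroupName "vtl-rg" -Name "vtlarchive01" `
    -Location "westeurope" -SkuName Standard_LRS -Kind BlobStorage -AccessTier Cool

# Grab an access key (this is what StarWind's cloud replication wizard asks for).
$key = (Get-AzStorageAccountKey -ResourceGroupName "vtl-rg" -Name "vtlarchive01")[0].Value

# Create the container that will hold the uploaded tapes.
$ctx = New-AzStorageContext -StorageAccountName "vtlarchive01" -StorageAccountKey $key
New-AzStorageContainer -Name "coldcontainer" -Context $ctx
```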

2. Configuring StarWind VTL

The purpose of this step is to emulate a tape library on a desired server, so Veeam can send data to that library, where it will be processed and made ready for cloud archival. A classic disk-to-disk-to-cloud (D2D2C) scheme in action.
Get the latest StarWind VTL package (version or newer) and install it on any appropriate physical, virtual or cloud server, or even on the Veeam server itself. During installation, make sure to select the "VTL and Cloud Replication" option so StarWind automatically deploys the corresponding components. Specify a convenient path for the storage pool, or leave it at the default on disk C. Then, operating from the StarWind management console, connect to the desired server (use localhost or when setting up an all-in-one scenario) and add a virtual tape device (drive) with as many virtual tapes as you'd like. StarWind VTL emulates an actual HPE MSL8096 tape library, so all the principles of working with such a library apply here. Note: you might need to install the latest driver pack so that the server recognizes the tape library properly. Once that is done, this server can be pointed to the VTL using the standard Windows iSCSI tools (Control Panel –> Administrative Tools –> iSCSI Initiator). Go to Discovery –> Discover Portal to initialize the VTL, and then connect to it from the Targets tab.

Figure 2. Connecting to discovered VTL target
Now enable cloud archival via the cloud replication functionality: select Microsoft Azure Cloud Storage in the first step, specify the Azure details from step #1, and finish by providing the desired retention settings.

Figure 3. StarWind. Setting up the retention settings

3. Veeam Backup & Replication

From a Veeam Backup & Replication perspective, you'll need to add the server above as a tape server in the VBR console. For that, an IP/DNS address and appropriate credentials will be required. During the procedure, Veeam will install Transport and Tape Proxy services on the server and perform a tape library inventory if that option is specified. Once the tapes are detected and placed into the Free Media Pool, it's a good idea to create a dedicated Media Pool with some tapes, which will be used in a later step.
Now Veeam is connected to the VTL and can push data there. Create a Backup to Tape or File to Tape job, specify the backup scope (you'll need some pre-created backups for the first option), and point the job to the previously created Media Pool. Depending on the backup/file size, the data will shortly be delivered to the VTL server.

Figure 4. Backup to Tape Job in action

Figure 5. StarWind management console. Tape in Slot 1 has received a backup.
Now you can switch to the VTL server and eject the tape from the slot if it wasn't automatically exported upon completion of the Veeam job. Since in my case cloud replication was scheduled to start immediately, I can already see the upload in progress.

Figure 6. Uploading to Azure Blob Storage in action
After a successful upload, the Cloud tab gets a blue check, and I can verify the result by navigating to my Azure Blob storage account and seeing the actual files uploaded to the container.

Figure 7. “ColdContainer” with uploaded data
I can go ahead and manually change the access tier for any of those files right from the Azure portal.

Figure 8. Azure Access Tier change
As an alternative, I could tweak the StarWind settings or even use PowerShell to manipulate the access tier in an automated way.
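The StarWind or PowerShell automation itself is not shown here, but the decision such automation applies is straightforward: pick a tier based on blob age. A sketch of that policy logic, using the real Azure tier names (Hot, Cool, Archive) but hypothetical age thresholds:

```python
from datetime import datetime, timedelta

# Hypothetical age thresholds; tune these to your own retention policy
COOL_AFTER = timedelta(days=30)
ARCHIVE_AFTER = timedelta(days=90)

def pick_access_tier(last_modified: datetime, now: datetime) -> str:
    """Choose an Azure Blob access tier (Hot, Cool or Archive) by blob age."""
    age = now - last_modified
    if age >= ARCHIVE_AFTER:
        return "Archive"
    if age >= COOL_AFTER:
        return "Cool"
    return "Hot"

# Example: a backup uploaded 120 days ago qualifies for Archive
print(pick_access_tier(datetime(2019, 1, 1), datetime(2019, 5, 1)))
```

The actual tier change would then be issued per blob through the Azure portal, CLI or SDK; remember that blobs moved to Archive must be rehydrated before they can be read back.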

Restore VMs from VTLs in Azure

On the restore side, StarWind customers can initiate restores from Azure through their Veeam Backup & Replication console to recover the necessary files or VMs. Better yet, why not recover in Azure? Available in the Azure Marketplace, the virtual StarWind appliance, as well as Veeam Backup & Replication, can be installed in an Azure instance, and recoveries can be done from the archive storage directly into a new Azure virtual machine, accelerating your restore times and providing application portability across your backup infrastructure.
In addition, you can provide access to these newly restored VMs in Azure with Veeam PN (Powered Network), establishing a secure connection back to your HQ data center or wherever you need to provide access to these workloads.


Organizations are still using tape for a variety of reasons, but many want to take advantage of the cloud for their backups in order to maintain business continuity and unlock Availability. With Veeam Backup & Replication, customers can leverage a seamless integration with StarWind to get their backups off site and into Microsoft Azure Blob Storage. For a full demo of the solution, watch this webinar.



Tips to backup & restore your SQL Server


Veeam Software Official Blog  /  Kirsten Stoner

Microsoft SQL Server is often one of the most critical applications in an organization, with too many uses to count. Due to its criticality, your SQL Server and its data should be thoroughly protected. Business operations rely on a core component like Microsoft SQL Server to manage databases and data, so the importance of backing up this server and having a recovery plan in place is clear. People want consistent Availability of data. Any loss of critical application Availability can result in decreased productivity, lost sales, lost customer confidence and potentially loss of customers. Does your company have a recovery plan in place to protect its Microsoft SQL Server application Availability? Has this plan been thoroughly tested?
Microsoft SQL Server works on the backend of your critical applications, making it imperative to have a strategy set in place in case something happens to your server. Veeam specifically has tools to back up your SQL Server and restore it when needed. Veeam’s intuitive tool, Veeam Explorer for Microsoft SQL Server, is easy to use and doesn’t require you to be a database expert to quickly restore the database. This blog post aims to discuss using these tools and what Veeam can offer to help ensure your SQL Server databases are well protected and always available to your business.

The Basics

There are some things you should take note of when using Veeam to back up your Microsoft SQL Server. An important and easy way to ensure your backup is consistent is to check that application-aware processing is enabled for the backup job. Application-aware processing is Veeam's proprietary technology based on the Microsoft Volume Shadow Copy Service. It quiesces the applications running on the virtual machine to create a consistent view of the data, so that there are no unfinished database transactions when a backup is performed. The result is a transactionally consistent backup of a running VM, minimizing the potential for data loss.
Enabling application-aware processing is just the first step; you must also consider how you want to handle the transaction logs. Veeam offers several options for processing transaction logs: truncate logs, do not truncate logs, or back up logs periodically.

Figure 1: SQL Server Transaction logs Options

Figure 1 shows the Backup logs periodically option selected in this scenario. This option supports any database restore operation offered by Veeam Backup & Replication. Here, Veeam will periodically transfer transaction logs to the backup repository and store them with the SQL Server VM backup, truncating the logs on the original VM. Make sure the recovery model for the required SQL Server database is set to full or bulk-logged.
If you decide not to truncate logs, Veeam will preserve them. This option puts control into the database administrator's hands, allowing them to take care of the database logs. The other alternative is to truncate logs; this selection allows Veeam to perform a database restore to the state of the latest restore point. To read more about backing up transaction logs, check out this blog post.
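The recovery model mentioned above is set on the SQL Server side, not in Veeam. A small helper that emits the corresponding T-SQL statement (a sketch; the database name is a placeholder, and you would run the generated statement with your own tooling):

```python
# Recovery models SQL Server accepts; log backups (and therefore
# point-in-time restores) require FULL or BULK_LOGGED
VALID_MODELS = {"FULL", "BULK_LOGGED", "SIMPLE"}

def set_recovery_model_sql(database: str, model: str) -> str:
    """Return the T-SQL statement that sets a database's recovery model."""
    model = model.upper()
    if model not in VALID_MODELS:
        raise ValueError(f"unknown recovery model: {model}")
    return f"ALTER DATABASE [{database}] SET RECOVERY {model};"

# Example with a hypothetical database name
print(set_recovery_model_sql("Sales", "full"))
```

With the database in SIMPLE recovery, the log chain is not kept, so the "Backup logs periodically" option cannot provide point-in-time restores.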

Data recovery

Veeam Explorer for Microsoft SQL Server delivers consistent application Availability through the different restore options it offers to you. These include the ability to restore a database to a specific point in time, restore a database to the same or different server, restore it back to its original location or export to a specified location. Other options include performing restores of multiple databases at once, the ability to perform a table-level recovery or running transaction log replay to perform quick point-in-time restores.

Figure 2: Veeam Explorer for Microsoft SQL Server

Recovery is the most important aspect of data Availability. SQL transaction log backup allows you to back up your transaction logs on a regular basis, meeting your recovery point objectives (RPOs). This provides not only database recovery options, but also point-in-time database recovery. Transaction-level recovery saves you from a bad transaction such as a table drop or a mass delete of records: it allows you to restore to a point in time right before the bad transaction occurred, for minimal data loss.
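Conceptually, a point-in-time restore picks the latest restore point (or log position) strictly before the bad transaction. Veeam Explorer does this selection for you; purely as an illustration, the logic over a sorted list of restore-point timestamps looks like this:

```python
import bisect
from datetime import datetime

def restore_point_before(points: list[datetime], bad_tx: datetime) -> datetime:
    """Return the latest restore point strictly before the bad transaction.

    `points` must be sorted ascending (oldest first)."""
    i = bisect.bisect_left(points, bad_tx)  # first point >= bad_tx
    if i == 0:
        raise ValueError("no restore point precedes the bad transaction")
    return points[i - 1]
```

The more frequent the log backups, the closer this point lands to the bad transaction, which is exactly why short log backup intervals tighten your RPO.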

And it is available for FREE!

Veeam offers a variety of free products, and Veeam Explorer for Microsoft SQL Server is one of them. If you are using Veeam Backup Free Edition already, you have this Explorer available to you. The free version allows you to view database information, export a database, and export a database schema or data. If you're interested in learning more about what you get with Veeam Backup Free Edition, be sure to download this HitchHikers Guide.

Read more

The post Tips to backup & restore your SQL Server appeared first on Veeam Software Official Blog.



Understanding Veeam Availability Orchestrator terminology


Veeam Software Official Blog  /  Melissa Palmer

When it comes to creating disaster recovery (DR) plans, Veeam Availability Orchestrator makes it easy to ensure your data is available when disaster strikes. Beyond creating what we call a Failover Plan in Veeam Availability Orchestrator, we also ensure that our DR plans are tested successfully on a regular basis, with the documentation to prove it. This documentation can also be used for compliance, auditing, and ensuring members of an organization know the state of the DR plan at all times.
You may be asking yourself, “What is a Failover Plan?” after reading the first paragraph of this post. Don’t worry, we are about to explore what they are, as well as other terms we commonly use when talking about Veeam Availability Orchestrator.
A Failover Plan is what is created in Veeam Availability Orchestrator to protect applications. The Failover Plan is central to an organization’s DR plan. The goal of the Failover Plan is to make failovers (and fail backs) as simple as possible. Within a Failover Plan, there are a number of Plan Components, which are added to the Failover Plan to meet the business’ requirements.

VM Groups contain the virtual machines (VMs) we are ensuring Hyper-Availability for in the event of a disaster. VM Groups are powered by VMware vSphere Tags: VMs are simply tagged in vCenter, and the vSphere Tag name appears in Veeam Availability Orchestrator as shown in the screenshot above. Plan Steps are the actions taken on the VMs during a failover. A number of Application Verification steps are available out of the box, including verification of applications such as Exchange, SQL, IIS, domain controllers, and DNS. Credentials for verifying the applications are also one of the Plan Components. In addition to these built-in application verification tests, Custom Steps can be added to the Plan Components, allowing organizations to leverage already existing DR scripts.
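The tag-to-group mapping works conceptually like a simple inversion of each VM's tag list into tag-keyed groups. A sketch with made-up VM names and tag values (illustrative only; VAO reads the real tags from vCenter):

```python
from collections import defaultdict

def group_vms_by_tag(vm_tags: dict[str, list[str]]) -> dict[str, list[str]]:
    """Invert a VM -> tags mapping into tag -> VMs: one VM Group per tag."""
    groups: dict[str, list[str]] = defaultdict(list)
    for vm, tags in vm_tags.items():
        for tag in tags:
            groups[tag].append(vm)
    return dict(groups)

# Hypothetical inventory: tagging a VM in vCenter adds it to the group
inventory = {"sql01": ["Tier-1-SQL"], "sql02": ["Tier-1-SQL"], "web01": ["Tier-2-Web"]}
print(group_vms_by_tag(inventory))
```

The practical upshot is that membership is managed entirely in vCenter: tag a new VM and it joins the matching VM Group, and thus the Failover Plan, automatically.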
Template Jobs ensure data is backed up and kept available during a failover scenario. They are created in Veeam Backup & Replication and then added to a Failover Plan during creation. Another big component of a Failover Plan is a Virtual Lab, which we refer to as a Veeam Data Lab. Veeam Data Labs allow for an isolated copy of a production application to be created and tested against. When we are finished using this copy of the data, we simply delete it without ever having impacted or changed our actual production data. This allows for Virtual Lab Tests to be performed to prove recoverability, and the corresponding Test Execution Report to be generated.
We all know how difficult testing DR plans used to be. We would spend a few days locked in the data center, often without even getting the applications running correctly. We would say it would get fixed "next time," but the truth is that these broken DR plans were often never fixed. Veeam Availability Orchestrator removes this overhead and allows for quick and easy testing without impacting production. In the event a test fails, the Test Execution Report shows exactly what went wrong so we can fix it.
Before we run a Virtual Lab Test, we first run a Readiness Check on our Failover Plan, and yes, this also comes with a Readiness Check Report so we can easily see the state of our DR plan. This is a lightweight test that is performed to ensure we are ready to failover at a moment’s notice. Best of all, this check can be scheduled to run daily, along with a Plan Definition Report. The Plan Definition Report shows us exactly what is in a Failover Plan, including the VMs in a VM Group and all of our Plan Steps. This report also shows any changes so we have a full audit trail of our DR plan.
As you can tell from this image, our Failover Plans are ready in the event of a failover. They are listed in a "verified" state, which means we have successfully run a Virtual Lab Test and a Readiness Check, both of which can be scheduled to run as often as we would like. We can also ensure reports are sent to key stakeholders when the checks are run.

In the event of a failover, which we can trigger on demand or schedule, an Execution Report will be generated. This will detail the steps taken as part of the Failover Plan on the VMs in a VM Group, and show that the application has been successfully verified and is running in the DR site. We know the execution of a Failover Plan will be successful since we have already tested it successfully.
Now that you are ready to start speaking Availability, you can download a 30-day FREE trial of Veeam Availability Orchestrator and try it out. Make sure to check these tips and tricks to ensure a smooth first deployment.
The post Understanding Veeam Availability Orchestrator terminology appeared first on Veeam Software Official Blog.



Veeam Availability for Nutanix Enterprise Cloud now GA


vZilla  /  michaelcade

Veeam Availability for Nutanix Enterprise Cloud

This post will walk through the areas and demo of the newly released Veeam Availability for Nutanix AHV.

It is important to note that the screenshots used in this post are based on the beta version; there should be very little difference between the beta and GA editions.

Basic Overview

The Veeam Backup Proxy Appliance for AHV is used to authenticate with a Veeam Backup & Replication server to gain access to Veeam repositories. The appliance also provides a web interface where full VM restores and disk restores can take place.

The Veeam Backup & Replication server is a required component, used in conjunction with the Veeam Backup Proxy Appliance for AHV. Its primary use case is to provide access to Veeam backup repositories, but it also provides granular-level recovery for both files and application items.

The Veeam repository (not including HPE StoreOnce or Dell EMC Data Domain in version 1) gives us the ability to store the Veeam backups from AHV in Veeam's proprietary forward-incremental VBK format.

In my lab, I have the above configured as follows:

Nutanix AHV Console

I will first show below that we have our Nutanix Community Edition running in our lab based on the networking configuration shared above.

The opening screen will look like the one below; it's an overview screen for the whole AHV environment (this is a nested Nutanix CE edition).

Select the drop-down near Home, choose VMs, and then Table. All machines are then listed; most of the backups you will see throughout this demo are based on the Windows 2016 VM shown below.

Veeam backup proxy appliance

The opening web interface for the Veeam backup proxy appliance looks as shown below.

The opening screen is in the form of a dashboard interface giving you an overview of the AHV and Veeam environment.

The top navigation bar makes it simple to find what you are looking for: this is where you can return to the dashboard shown above, create and view backup jobs, and open Protected VMs, where the recovery options can be performed.

Protected VMs tab: this is where the full VM and disk restore operations can be seen.

Event Log tab: all event logs are shown here. This is an extended view of the one seen on the opening dashboard.

Configuration settings: simply the ability to add your Veeam backup servers and Nutanix clusters. There is also the appliance settings tab, where you can configure how the configuration backup will take place; this is also where licensing is added.

Veeam Backup & Replication 9.5 update 3

The Veeam Backup & Replication console is required to perform granular and application-item-level recovery. It also enables additional availability options.

Connect to and launch the VBR console. Under Backups, you will see the "Nutanix Policy."

Backup copy options: you can also send backups via a backup copy job to a Cloud Connect provider and to tape.

Restore Functionality

As mentioned, there are many restore scenarios available from the Veeam Backup & Replication console.

Restore guest files demo: you will have many restore points based on the job configuration that has been set, and as you click through the wizard, each will contain restore points.

Application Explorer demo: this is achieved from within the FLR explorer. Select the explorer you wish to demo; for demo purposes, you can see I have AD, Exchange, SQL and file data in the root, making it easy to show mounting those database files.

I am excited to see where this grows next; as a v1 product, it will remove many pain points for the Nutanix AHV administrators I have spoken to. I have been lucky enough to see this from the very early days, the feedback has been great, and we already have a great list of feature requests for v2.

The post Veeam Availability for Nutanix Enterprise Cloud now GA appeared first on vZilla.
