Prepare your vLab for your VAO DR planning


CloudOasis  /  HalYaman

A good disaster recovery plan needs careful preparation. But after all that careful planning, how do you validate your DR strategy without disrupting your daily business operations? Veeam Availability Orchestrator just might be your solution.
This blog post focuses on one aspect of the VBR and VAO configuration. To learn more about Veeam Availability Orchestrator, Veeam Replication, and Veeam Virtual Labs, continue reading.
Veeam Availability Orchestrator is a very powerful tool to help you implement and document your DR strategy. However, this product relies on Veeam Backup & Replication to:

  • perform replication, and
  • provide the Virtual Lab.

Therefore, to successfully configure the Veeam Availability Orchestrator, you must master Replication and the Virtual Lab. Don’t worry, I will take you through the important steps to successfully configure your Veeam Availability Orchestrator, and to implement your DR Plan.
The best way to get started is to share with you a real-life DR scenario I experienced last week during my VAO POC.


A customer wanted to implement Veeam Availability Orchestrator with the following objectives:

  • Replication between the production and DR datacentre across the country,
  • Re-Mapping the Network attached to each VM at the DR site,
  • Re-IP the VM IP address of each VM at the DR site,
  • Scheduling the DR testing to perform every Saturday morning,
  • Document the DR Plan.

As you might already be aware, all those objectives can be achieved using Veeam VBR & VAO.
So let’s get started.

The Environment and the Design

To understand the design and the configuration, let's first introduce the customer network subnets at the PRODUCTION and DR sites.

  • At the PRODUCTION site, the customer used the 192.168.33.x/24 subnet,
  • Virtual Distribution Switch and Group: SaSYD-Prod.

DR Site

  • Production network at the DR site uses the 192.168.48.x/24 subnet
  • Prod vDS: SP-DSWProd
  • DR Re-IP subnet & Network name: vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch

To meet the requirements at those sites, the configuration must include the following steps:

  • Replication between the PRODUCTION and the DR Sites
  • Re-Mapping the VM Network from ProdNet to vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch
  • vLab with that configuration listed above.

The following diagram shows the design we are going to discuss in this blog post:

Replication Job Configuration

To prepare for a disaster and to implement failover, we must create a replication job that will replicate the intended VMs from the PRODUCTION site to the DR site. In this scenario, to achieve the requirements above, we must use Veeam replication with the Network Mapping and Re-IP options when configuring the replication job. To do this, we have to tick the checkboxes Separate virtual networks (enable network mapping) and Different IP addressing scheme (enable re-IP):

At the Network stage, we will specify the source and destination networks:

Note: to follow the diagram, the source network must be: Prod_Net and the Target network will be the DR_ReIP network.
On the Re-IP, enter the original IP address and the new Re-IP address to be used at the DR site:
Next, continue with the replication job configuration as normal.
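The Re-IP rule above is essentially a pattern substitution on the address octets. Here is a minimal, purely illustrative Python sketch (not Veeam code) of how a wildcard rule such as 192.168.33.* to 192.168.48.* maps a replica's address, using the subnets described earlier:

```python
def apply_reip_rule(ip, source_pattern, target_pattern):
    """Apply a wildcard re-IP rule, e.g. 192.168.33.* -> 192.168.48.*.

    Octets marked '*' in the target pattern are carried over from the
    original address; all other octets are replaced by the target's.
    """
    ip_octets = ip.split(".")
    src = source_pattern.split(".")
    dst = target_pattern.split(".")
    if len(ip_octets) != 4 or len(src) != 4 or len(dst) != 4:
        raise ValueError("expected dotted-quad IPv4 addresses/patterns")
    # The address must match the rule's source pattern first.
    for octet, pattern in zip(ip_octets, src):
        if pattern != "*" and octet != pattern:
            raise ValueError(f"{ip} does not match pattern {source_pattern}")
    # Build the DR-site address: keep wildcard octets, substitute the rest.
    return ".".join(o if d == "*" else d for o, d in zip(ip_octets, dst))

# A VM with 192.168.33.25 at PRODUCTION gets 192.168.48.25 at the DR site.
print(apply_reip_rule("192.168.33.25", "192.168.33.*", "192.168.48.*"))
```

The host part of the address (the wildcard octet) is preserved, which is why each VM keeps a predictable, recognisable address at the DR site.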

Virtual Lab Configuration

To use Veeam Availability Orchestrator to check our DR planning, and to make sure our VMs will start at the DR site as expected, we must create a Veeam Virtual Lab to test our configuration. First, let's create a Veeam vLab, starting with the name of the vLab and the ESXi host at the DR site that will host the Veeam proxy appliance. The hostname is shown in the following screenshot.
Choose a datastore where you will keep the redo log files. After you have selected the datastore, click Next. You must configure the proxy appliance specifically for the network to be attached. In our example, the network is the PRODUCTION network at the DR site, named SP-DSWProd, and it has a static DR site IP address. See below.
Next, we must configure the Networking as Advanced multi-host (manual configuration), and then select the appropriate Distributed virtual switch; in our case, SP-ProdSwitch.

This leads us to the next configuration stage, Isolated Network. At this stage, we must assign the DR network that each replicated VM will be connected to.
Note: This network must be the same as the Re-Mapped network you selected as a destination network during the replication job configuration. The Isolation network is any name you assign to the temporary network used during the DR plan check.

Next, we must configure the temporary DR network. As shown in the following screenshot, I chose the Omnitra_VAO_vLab network I named in the previous step (Isolated network). The IP address is the same as that of the DR PRODUCTION gateway. Also in the screenshot, you can see the masquerade network address I can use to access each of the VMs from the DR PRODUCTION network:

Finally, let’s create a static Mapping to access the VMs during the DR testing. We will use the Masquerade IPs as shown in the following screenshot.


Veeam Availability Orchestrator is a very powerful tool to help companies streamline and simplify their DR planning. The initial configuration of the VAO DR planning is not complex, but it is a little involved: you must navigate between two products, Veeam Backup & Replication and Veeam Availability Orchestrator.
After you have grasped the DR concept, your next steps to DR planning will be smooth and simple. Also, you may have noted that to configure your DR planning using Veeam Availability Orchestrator, you must be familiar with Veeam vLabs and networking in general. I highly recommend that you read more on Veeam Virtual Labs before starting your DR Plan configuration.
I hope this blog post helps you to get started with vLab and VAO configuration. Stay tuned for my next blog or recording about the complete configuration of the DR Plan. Until then, I hope you find this blog post informative; please post your comments below if you have questions or suggestions.



Migration is never fun – Backups are no exception


Veeam Software Official Blog  /  Rick Vanover

One of the interesting things I’ve seen over the years is people switching backup products. Additionally, it is reasonable to say that the average organization has more than one backup product. At Veeam, we’ve seen this over time as organizations started with our solutions. This was especially the case before Veeam had any solutions for the non-virtualized (physical server and workstation device) space. Especially in the early days of Veeam, effectively 100% of business was displacing other products — or sitting next to them for workloads where Veeam would suit the client’s needs better.
The question of migration is something that should be discussed, as it is not necessarily easy. It reminds me of personal collections of media such as music or movies. For movies, I have VHS tapes, DVDs and DVR recordings, and use them each differently. For music, I have CDs, MP3s and streaming services — used differently again. Backup data is, in a way, similar. This means that the work to change has to be worth the benefit.
There are many reasons people migrate to a new backup product. This can be due to a product being too complicated or error-prone, too costly, or a product discontinued (current example is VMware vSphere Data Protection). Even at Veeam we’ve deprecated products over the years. In my time here at Veeam, I’ve observed that backup products in the industry come, change and go. Further, almost all of Veeam’s most strategic partners have at least one backup product — yet we forge a path built on joint value, strong capabilities and broad platform support.
When the migration topic comes up, it is very important to have a clear understanding about what happens if a solution no longer fits the needs of the organization. As stated above, this can be because a product exits the market, drops support for a key platform or simply isn’t meeting expectations. How can the backup data over time be trusted to still meet any requirements that may arise? This is an important forethought that should be raised in any migration scenario. This means that the time to think about what migration from a product would look like, actually should occur before that solution is ever deployed.
Veeam takes this topic seriously, and the ability to handle this is built into the backup data. My colleagues and I on the Veeam Product Strategy Team have casually referred to Veeam backups as "self-describing data." This means that you can open one up (which can be done easily) and clearly see what it is. One way to realize this is the fact that Veeam backup products have an extract utility available. The extract utility is very helpful to recover data from the command line, which is a good use case if an organization is no longer a Veeam client (but we all know that won't be the case!). Here is a blog by Vanguard Andreas Lesslhumer on this little-known tool.
Why do I bring up the extract utility when it comes to switching backup products? Because it hits on something that I have taken very seriously of late. I call it Absolute Portability. This is a very significant topic in a world where organizations passionately want to avoid lock-in. Take the example I mentioned before of VMware vSphere Data Protection going end-of-life, Veeam Vanguard Andrea Mauro highlights how they can migrate to a new solution; but chances are that will be a different experience. Lock-in can occur in many ways, and organizations want to avoid lock-in. This can be a cloud lock-in, a storage device lock-in, or a services lock-in. Veeam is completely against lock-ins, and arguably so agnostic that it makes it hard to make a specific recommendation sometimes!
I want to underscore the ability to move data — in, out and around — as organizations see fit. For organizations who choose Veeam, there are great capabilities to keep data available.
So, why move? Because expanded capabilities will give organizations what they need.



On-Premises Object Storage for Testing


CloudOasis  /  HalYaman

Many customers and partners want to deploy on-premises object storage for testing and learning purposes. How can you deploy an object storage instance that is totally free of charge, with no time limit and no capacity restriction, in your own lab?

In this blog post, I will take you through the deployment and configuration of an on-premises object storage instance for test purposes, so that you might learn more about the new feature.
For this test, I will use a product called Minio. You can download it from this link for free; it is available for Windows, Linux and macOS.
To get started, download Minio and run the installation.
In my CloudOasis lab, I decided to use a Windows Server Core machine to act as a test platform for my object storage. Follow the steps below to see the configuration I used:


By default, the Minio server installs as an unsecured (HTTP) service. The downside of this is that many applications need a secure HTTPS URL to interact with object storage. To fix this, we will download GnuTLS and use it to generate a private key and a self-signed certificate to secure our object storage connection.

Preparing Certificate

After GnuTLS has been downloaded and extracted to a folder on your drive, for example C:\GnuTLS, you must add that path to Windows with the following command:
setx path "%path%;C:\gnutls\bin"

Private Key

The next step is to generate the private.key:
certtool.exe --generate-privkey --outfile private.key
After the private.key has been generated, create a new file called cert.cnf and paste the following content into it:
# X.509 Certificate options
#
# DN options

# The organization of the subject.
organization = "CloudOasis"

# The organizational unit of the subject.
#unit = "CloudOasis Lab"

# The state of the certificate owner.
state = "NSW"

# The country of the subject. Two letter code.
country = "AU"

# The common name of the certificate owner.
cn = "Hal Yaman"

# In how many days, counting from today, this certificate will expire.
expiration_days = 365

# X.509 v3 extensions

# DNS name(s) of the server
dns_name = ""

# (Optional) Server IP address
ip_address = ""

# Whether this certificate will be used for a TLS server
tls_www_server

# Whether this certificate will be used to encrypt data (needed
# in TLS RSA cipher suites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key

Public Key

Now we are ready to generate the public certificate using the following command:
certtool.exe --generate-self-signed --load-privkey private.key --template cert.cnf --outfile public.crt
After you have completed those steps, you must copy the private and the public keys to the following path:
Note: If you already have your own certificates, all you have to do is copy them into that certs folder.

Run Minio Secure Service

Now we are ready to run a secured Minio Server using the following command. Here, we are assuming your minio.exe has been installed on your C:\ drive:
C:\minio.exe server S:\bucket
Note: S:\bucket is a second volume I created and configured in Minio Server to store the saved objects.
After minio.exe has run, you will see the following screen:


This blog was prepared following several requests from customers and partners who wanted to familiarise themselves with object storage integration during their private beta experience.
As you saw, the steps we have described here will help you to deploy an on-premises object storage instance for your own testing, without the cost, time limit, or storage size limit.
The steps to deploy the object storage are very simple, but the outcome is huge, and very helpful to you in your learning journey.




QNAP NAS Verified Veeam Ready for Efficient Disaster Recovery – HEXUS


veeam – Google News


Taipei, Taiwan, November 19, 2018 – QNAP® Systems, Inc. today announced that multiple Enterprise-class QNAP NAS systems, including the ES1640dc v2 and TES-1885U, have been verified as Veeam® Ready. Veeam Software, the leader in Intelligent Data Management for the Hyper-Available Enterprise™, has granted QNAP NAS systems with the Veeam Ready Repository distinction, verifying these systems achieve the performance levels for efficient backup and recovery with Veeam® Backup & Replication™ for virtual environments built on VMware® vSphere™ and Microsoft® Hyper-V® hypervisors.
“Veeam provides industry-leading Availability solutions for virtual, physical and cloud-based workloads, and verifying performance ensures that organizations can leverage Veeam advanced capabilities with QNAP NAS systems to improve recovery time and point objectives and keep their businesses up and running,” said Jack Yang, Associate Vice President of Enterprise Storage Business Division of QNAP.
Veeam Backup & Replication helps achieve Availability for ALL virtual, physical and cloud-based workloads and provides fast, flexible and reliable backup, recovery and replication of all applications and data. Organizations can now choose among several QNAP systems verified with Veeam for backup and recovery including:
For more information, please visit
About QNAP Systems, Inc.
QNAP Systems, Inc., headquartered in Taipei, Taiwan, provides a comprehensive range of cutting-edge Network-attached Storage (NAS) and video surveillance solutions based on the principles of usability, high security, and flexible scalability. QNAP offers quality NAS products for home and business users, providing solutions for storage, backup/snapshot, virtualization, teamwork, multimedia, and more. QNAP envisions NAS as being more than “simple storage”, and has created many NAS-based innovations to encourage users to host and develop Internet of Things, artificial intelligence, and machine learning solutions on their QNAP NAS.



Stateful containers in production, is this a thing?


Veeam Software Official Blog  /  David Hill

As the new-world debate of containers vs. virtual machines continues, there is also a debate raging about stateful vs. stateless containers. Is this really a thing? Is it really happening in production environments? Do we really need to back up containers, or can we just back up the data sets they access? Containers are not meant to be stateful, are they? This debate rages daily on Twitter, Reddit, and in pretty much every conversation I have with customers.
Now, the debate typically starts with the question: why run a stateful container? To answer that, we first need to understand the difference between a stateful and a stateless container, and the purpose behind each.

What is a container?

“Containers enable abstraction of resources at the operating system level, enabling multiple applications to share binaries while remaining isolated from each other” *Quote from Actual Tech Media
A container is an application and dependencies bundled together that can be deployed as an image on a container host. This allows the deployment of the application to be quick and easy, without the need to worry about the underlying operating system. The diagram below helps explain this:
When you look at the diagram above, you can see that each application is deployed with its own libraries.

What about the application state?

When we think about applications in general, they all have persistent data and application state data. It doesn't matter what the application is; it has to store data somewhere, otherwise what would be the point of it? Take a CRM application: all that customer data needs to be kept somewhere. Traditionally, these applications use database servers to store the information. Nothing has changed in that regard. But when we think about application state, this is where the discussion about stateful containers comes in. Typically, an application has five state types:

  1. Connection
  2. Session
  3. Configuration
  4. Cluster
  5. Persistent

For the purposes of this blog, we won't go into depth on each of these states; but for applications being written today, native to containers, these states are all offloaded to a database somewhere. The challenge comes when existing applications have been containerized. This is the process of taking a traditional application that is installed on top of an OS and turning it into a containerized application, so that it can be deployed in the model shown earlier. These applications save state locally somewhere; exactly where depends on the application and the developer. A more common approach now is also running databases as containers, and as a consequence, these meet a lot of the state types listed above.

Stateful containers

A container's stateful data is typically either written to persistent storage or kept in memory, and this is where the challenges come in. Being able to recover the applications in the event of an infrastructure failure is important. If everything that needs backing up is running in databases, then, as mentioned earlier, that is an easy solution; but if it's not, how do you orchestrate the recovery of these applications without interruption to users? If you have load-balanced applications running, and you have to restore an application that doesn't know the connection or session state, the end user is going to face issues.

If we look at the diagram, we can see that “App 1” has been deployed twice across different hosts. We have multiple users accessing these applications through a load balancer. If “App 1” on the right crashes and is then restarted without any application state awareness, User 2 will not simply reconnect to that application. That application won’t understand the connection and will more than likely ask the user to re-authenticate. Really frustrating for the user, and terrible for the company providing that service to the user. Now of course this can be mitigated with different types of load balancers and other software, but the challenge is real. This is the challenge for stateful containers. It’s not just about backing up data in the event of data corruption, it’s how to recover and operate a continuous service.
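To make the connection/session problem concrete, here is a minimal Python sketch (purely illustrative, not tied to any container runtime): one app instance holds its sessions in local memory, while the other offloads them to a shared external store. Only the latter survives a restart without forcing the user to re-authenticate:

```python
class InMemorySessionApp:
    """A containerized app holding session state locally (stateful)."""
    def __init__(self):
        self.sessions = {}          # lives and dies with the container

    def login(self, user):
        self.sessions[user] = {"authenticated": True}

    def is_logged_in(self, user):
        return self.sessions.get(user, {}).get("authenticated", False)

    def restart(self):
        self.sessions = {}          # a crash or redeploy wipes local state


class ExternalSessionApp(InMemorySessionApp):
    """The same app, but sessions live in a shared external store."""
    def __init__(self, store):
        self.sessions = store       # e.g. a database or cache that outlives the container

    def restart(self):
        pass                        # nothing local to lose on redeploy


shared_store = {}                   # stands in for a database or cache service

stateful = InMemorySessionApp()
stateful.login("user2")
stateful.restart()
print(stateful.is_logged_in("user2"))   # False: user2 must re-authenticate

stateless = ExternalSessionApp(shared_store)
stateless.login("user2")
stateless.restart()
print(stateless.is_logged_in("user2"))  # True: the session survived the restart
```

The class names and the dict standing in for a session store are invented for the example; the point is only that recovering a stateful container means recovering state the orchestrator cannot see.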

Stateless containers

Now, with stateless containers it's extremely easy. Taking the diagram above, the session data would be stored in a database somewhere. In the event of a failure, the application is simply redeployed and picks up where it left off. Exactly how containers were designed to work.

So, are stateful containers really happening?

When we think of containerized applications, we typically think about the new age, cloud native, born in the cloud, serverless [insert latest buzz word here] application, but when we dive deeper and look at the simplistic approach containers bring, we can understand what businesses are doing to leverage containers to reduce the complex infrastructure required to run these applications. It makes sense that lots of existing applications that require consistent state data are appearing everywhere in production.
Understanding how to orchestrate the recovery of stateful containers is what needs to be focused on, not whether they are happening or not.



Enhanced self-service restore in Veeam Backup for Microsoft Office 365 v2


In Veeam Backup for Microsoft Office 365 1.5, you could only restore the most recently backed up recovery point, which limited its usefulness for most administrators looking to take advantage of the feature. That’s changed in Veeam Backup for Microsoft Office 365 v2 with the ability to now choose a point in time from the Veeam Explorers™. Read this blog from Anthony Spiteri, Global Technologist, Product Strategy to learn more.


More tips and tricks for a smooth Veeam Availability Orchestrator deployment


Veeam Software Official Blog  /  Melissa Palmer

Welcome to even more tips and tricks for a smooth Veeam Availability Orchestrator deployment. In the first part of our series, we covered the following topics:

  • Plan first, install next
  • Pick the right application to protect to get a feel for the product
  • Decide on your categorization strategy, such as using VMware vSphere Tags, and implement it
  • Start with a fresh virtual machine

Configure the DR site first

After you have installed Veeam Availability Orchestrator, the first site you configure will be your DR site. If you are also deploying production sites, it is important to note, you cannot change your site’s personality after the initial configuration. This is why it is so important to plan before you install, as we discussed in the first article in this series.

As you are configuring your Veeam Availability Orchestrator site, you will see an option for installing the Veeam Availability Orchestrator Agent on a Veeam Backup & Replication server. Remember, you have two options here:

  1. Use the embedded Veeam Backup & Replication server that is installed with Veeam Availability Orchestrator
  2. Push the Veeam Availability Orchestrator Agent to existing Veeam Backup & Replication servers

If you change your mind and do in fact want to use an existing Veeam Backup & Replication server, it is very easy to install the agent after initial configuration. In the Veeam Availability Orchestrator configuration screen, simply click VAO Agents, then Install. You will just need to know the name of the Veeam Backup & Replication server you would like to add and have the proper credentials.

Ensure replication jobs are configured

No matter which Veeam Backup & Replication server you choose to use for Veeam Availability Orchestrator, it is important to ensure your replication jobs are configured in Veeam Backup & Replication before you get too far in configuring your Veeam Availability Orchestrator environment. After all, Veeam Availability Orchestrator cannot fail replicas over if they are not there!

If for some reason you forget this step, do not worry. Veeam Availability Orchestrator will let you know when a Readiness Check is run on a Failover Plan. As the last step in creating a Failover Plan, Veeam Availability Orchestrator will run a Readiness Check unless you specifically un-check this option.

If you did forget to set up your replication jobs, Veeam Availability Orchestrator will let you know, because your Readiness Check will fail, and you will not see green checkmarks like this in the VM section of the Readiness Check Report.

For a much more in-depth overview of the relationship between Veeam Backup & Replication and Veeam Availability Orchestrator, be sure to read the white paper Technical Overview of Veeam Availability Orchestrator Integration with Veeam Backup & Replication.

Do not forget to configure Veeam DataLabs

Before you can run a Virtual Lab Test on your new Failover Plan (you can find a step-by-step guide to configuring your first Failover Plan here), you must first configure your Veeam DataLab in Veeam Backup & Replication. If you have not worked with Veeam DataLabs before (previously known as Veeam Virtual Labs), be sure to read the white paper I mentioned above, as configuration of your first Veeam DataLab is also covered there.

After you have configured your Veeam DataLab in Veeam Backup & Replication, you will then be able to run Virtual Lab Tests on your Failover Plan, as well as schedule Veeam DataLabs to run whenever you would like. Scheduling Veeam DataLabs is ideal to provide an isolated production environment for application testing, and can help you make better use of those idle DR resources.

Veeam DataLabs can be run on demand or scheduled from the Virtual Labs screen. When running or scheduling a lab, you can also select the duration of time you would like the lab to run for, which can be handy when scheduling Veeam DataLab resources for use by multiple teams.

There you have it, even more tips and tricks to help you get Veeam Availability Orchestrator up and running quickly and easily. Remember, a free 30-day trial of Veeam Availability Orchestrator is available, so be sure to download it today!




How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs


Veeam Software Official Blog  /  Melissa Palmer

Unfortunately, bad patches are something everyone has experienced at one point or another. Just take the most recent example of the Microsoft Windows October 2018 Update that impacted both desktop and server versions of Windows. Unfortunately, this update resulted in missing files for impacted systems, and has temporarily been paused as Microsoft investigates.
Because of incidents like this, organizations are often hesitant to adopt patches quickly. This is one of the reasons the WannaCry ransomware was so impactful: unpatched systems introduce risk into environments, as new exploits for old problems are on the rise. Before patching a system, organizations must first do two things: back up the systems to be patched, and perform patch testing.

A recent, verified Veeam Backup

Before we patch a system, we always want to make sure we have a backup that matches our organization’s Recovery Point Objective (RPO), and that the backup was successful. Luckily, Veeam Backup & Replication makes this easy to schedule, or even run on demand as needed.
Beyond the backup itself succeeding, we also want to verify the backup works correctly. Veeam's SureBackup technology allows for this by booting the VM in an isolated environment, then testing the VM to make sure it is functioning properly. Veeam SureBackup gives organizations additional peace of mind that their backups have not only succeeded, but will be usable.

Rapid patch testing with Veeam DataLabs

Veeam DataLabs enable us to test patches rapidly, without impacting production. In fact, we can use that most recent backup we just took of our environment to perform the patch testing. Remember the isolated environment we just talked about with Veeam SureBackup technology? You guessed it, it is powered by Veeam DataLabs.
Veeam DataLabs allows us to spin up complete applications in an isolated environment. This means that we can test patches across a variety of servers with different functions, all without even touching our production environment. Perfect for patch testing, right?
Now, let’s take a look at how the Veeam DataLab technology works.
Veeam DataLabs are configured in Veeam Backup & Replication. Once they are configured, a virtual appliance is created in VMware vSphere to house the virtual machines to be tested. Beyond the virtual machines you plan on testing, you can also include key infrastructure services such as Active Directory, or anything else the virtual machines you plan on testing require to work correctly. This group of supporting VMs is called an Application Group.
In the above diagram, you can see the components that support a Veeam DataLab environment.
Remember, these are just copies from the latest backup, they do not impact the production virtual machines at all. To learn more about Veeam DataLabs, be sure to take a look at this great overview hosted here on the blog.
So what happens if we apply a bad patch to a Veeam DataLab environment? Absolutely nothing. At the end of the DataLab session, the VMs are powered off, and the changes made during the session are thrown away. There is no impact to the production virtual machines or the backups leveraged inside the Veeam DataLab. With Veeam DataLabs, patch testing is no longer a big deal, and organizations can proceed with their patching activities with confidence.
This DataLab can then be leveraged for testing, or for running Veeam SureBackup jobs. SureBackup jobs also provide reports upon completion. To learn more about SureBackup jobs, and see how easy they are to configure, be sure to check out the SureBackup information in the Veeam Help Center.

Patch testing to improve confidence

The hesitance to apply patches is understandable; however, there can be significant risk if patches are not applied in a timely manner. By leveraging Veeam backups along with Veeam DataLabs, organizations can quickly test as many servers and environments as they would like before installing patches on production systems. The ability to rapidly test patches ensures any potential issue is discovered long before any data loss or negative impact to production occurs.

No VMs? No problem!

What about the other assets in your environment that can be impacted by a bad patch, such as physical servers, desktops, laptops, and full Windows tablets? You can still protect these assets by backing them up using Veeam Agent for Microsoft Windows. These agents can be automatically deployed to your assets from Veeam Backup & Replication. To learn more about Veeam Agents, take a look at the Veeam Agent Getting Started Guide.
To see the power of Veeam Backup & Replication, Veeam DataLabs, and Veeam Agent for Microsoft Windows for yourself, be sure to download the 30-day free trial of Veeam Backup & Replication here.
The post How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs appeared first on Veeam Software Official Blog.

Office 365 Backup now available in the Azure Marketplace!

Veeam Software Official Blog  /  Niels Engelen

When we released Veeam Backup for Microsoft Office 365 in July, we saw a huge adoption rate and many inquiries about running the solution within Azure. It is with great pleasure that we can announce Veeam Backup for Microsoft Office 365 is now available in the Azure Marketplace!

A simple deployment model

Veeam Backup for Microsoft Office 365 within Azure falls under the BYOL (Bring Your Own License) model, which means you only buy the licenses you need, on top of the Azure infrastructure costs.
The deployment is easy. Just define your project and instance details combined with an administrator login and you’re good to go. You will notice a default size is pre-selected; however, this can always be changed. Keep in mind it is advised to follow the minimum system requirements, which can be found in the User Guide.

VBO365 Azure deployment

Once you’ve added your disks and configured the networking, you’re good to go, and the Azure portal will even share details with you on the Azure infrastructure costs, such as the example below for a Standard A4 v2 VM.

VBO365 Azure pricing

If you are wondering how to calculate the amount of storage needed for Exchange, SharePoint and OneDrive data, Microsoft provides great reports for this within the Microsoft 365 admin center under the reports option.
Once the VM has been deployed, you can connect over RDP and are good to go with a pre-installed VBO installation. Keep in mind that by default, the standard retention on the repository is set to 3 years, so you may need to modify this to suit your organization’s needs.

Two ways to get started!

You can provision Veeam Backup for Microsoft Office 365 in Azure and bring a 30-day trial key with you to begin testing.
You can also deploy the solution within Azure and back up all your Office 365 data free forever – limited to a maximum of 10 users and 1TB of SharePoint data within your organization.
Ready to get started? Try it out today and head to the Azure Marketplace right now!

The post Office 365 Backup now available in the Azure Marketplace! appeared first on Veeam Software Official Blog.

Availability for your Nutanix AHV with Veeam

vZilla  /  michaelcade

This series highlights the steps to deploy, install, and configure Veeam Availability for Nutanix AHV, how to start protecting workloads, and the recovery options that we have.
Everything You Need for Veeam Availability for Nutanix AHV

Now that we have our Veeam Proxy Appliance deployed, installed and configured, the next step is to start protecting some of the workloads we have sitting in our Nutanix AHV Cluster.
Navigate to the backup jobs tab on the top ribbon.

Here we can add a new backup job, a simple wizard driven approach to start protecting those workloads.

Next, we need to add in our virtual machines.

In my scenario I have simple virtual machines. If you are leveraging Nutanix Protection Domains, you can also use that grouping here to select your virtual machines. We can also leverage dynamic mode, which allows new workloads under that protection domain to be added and removed automatically.

Add the virtual machine or machines that you wish to protect.

The next option is selecting the destination of the backup job. To be able to see the backup repository, the proxy appliance must have the correct access permissions on the VBR server. This is configured from the Veeam Backup & Replication console.

There are also some advanced settings that can be set to remove VMs from the backup once they are no longer included in the backup job.

The final step is to configure the schedule. This allows you to choose the interval between backups and how many restore points you must retain.

The final screen is the summary of the backup job you are about to complete.

You will also notice the option to run the backup job when Finish is selected; this will then start the backup job process, triggering a full backup of the virtual machines you have selected.

Over in VBR you can see the job also running. In a very similar fashion to what we saw with the original Veeam Endpoint backup, we see enough that something is happening, but nothing can be configured from this job within Veeam Backup & Replication.

Back in the Veeam Availability for Nutanix AHV we now have a completed backup job.

Veeam Backup & Replication also shows the completed job and the steps that have occurred during the job.

We will also now see the specific job in our Veeam Backup & Replication console under the backups giving us the ability to perform certain recovery tasks against those backup files.

And we also see the completed job now under the backup jobs in the proxy appliance interface. Here we can run an ad hoc Active Full, and we can also start, stop, and edit the job.

Over on the Protected VMs tab you will also notice that we now have visibility into the virtual machines that are protected with how many snapshots and backups are present.

To finish, if you head back to the dashboard you will now see the job status showing that we have one created backup job and it is currently idle.

That’s all for the availability section of this series. It gives us the ability to create backup jobs for the virtual machines that sit within the Nutanix AHV cluster. This is an agentless approach; for any application consistency you will require the Nutanix Guest Tools.
One thing to note: if you have a transactional workload, we would recommend using the Veeam Agent to provide not only application consistency but also log truncation within the application. This is not required if the application can manage that truncation task itself.
Next up we will look at the recovery steps and options we have.
The post Availability for your Nutanix AHV with Veeam appeared first on vZilla.

NetApp ONTAP with Veeam – simpler access to Hyper-Availability

Veeam Executive Blog – The Availability Lounge  /  Carey Stanton

What a great year it has been for our partnership with NetApp. Our two companies have worked diligently to realize the vision of a single point of sale for Veeam Hyper-Availability with NetApp ONTAP. Now, I’m pleased to share that the strength of the NetApp and Veeam partnership has made this a reality. NetApp customers can purchase joint NetApp ONTAP and Veeam Hyper-Availability solutions from our joint partners around the globe.
The NetApp and Veeam partnership began in 2015, with each successive year bringing the two companies closer together in our common cause to deliver the best experience to our customers. Last year, when we announced the intent to offer Veeam Hyper-Availability solutions on the NetApp Global price list, we knew great things were coming. And this year has delivered.
Veeam is a premier sponsor at NetApp Insight, delivering the Veeam Hyper-Availability Platform across the entire NetApp Data Fabric portfolio. Whether you use ONTAP, NetApp HCI, E-Series or StorageGRID, Veeam is right there as the Availability solution of choice.
If you’re a NetApp customer or partner, then you have the ability and confidence of knowing two of the hottest tech companies are in sync to transform and protect your data.
NetApp ONTAP is the foundation for many organizations’ data management strategy. Veeam’s integration with ONTAP, and the simplicity, agility and efficiency the joint solution delivers, is a key reason why NetApp customers choose Veeam over competitive data protection solutions.
The NetApp resale of Veeam Hyper-Availability solutions provides our joint customers with the confidence to invest in pre-validated NetApp and Veeam solutions. This is particularly important for organizations transforming their business workloads across multi-cloud environments.
By simplifying application performance and Hyper-Availability for multi-cloud workloads, NetApp and Veeam provide the following key benefits:

  • Simplified IT. Less time spent on IT operations and more time spent helping the business transform and innovate.
  • Lower costs. Improved data center efficiencies combined with the flexibility to leverage the economies of scale of public cloud infrastructure without compromising performance or Availability.
  • Increased ROI. Accelerate business innovation through Intelligent Data Management by leveraging Veeam DataLabs with secondary NetApp storage to facilitate application development, data analytics, DR compliance testing, end-user training and more.

Don’t just take our word for it. According to recent IDC research, customers using Veeam and NetApp get nearly a 300% ROI over five years with the following benefits:

  • Nine-month payback on investment
  • 58% lower cost of operations
  • 36% lower cost of hardware
  • 89% less unplanned downtime

We here at Veeam believe partnerships are critical to the continued success of our customers. Working in unison with others in the industry is our philosophy, on both a personal and corporate level. We are proud to be a NetApp partner and proud of the trust between our two organizations. I look forward to seeing you at NetApp Insight this year and the continued success of our two companies in the year to come.

Creating policy with exclusion of folders

Notes from MWhite  /  Michael White

I need to add Bitdefender protection to my Veeam Backup & Replication server so this article is going to help with that. But it is mostly going to talk about adjusting the policy that is applied to the Veeam server so that it doesn’t impact the backups.


We need to have access to the info about what to exclude on the Veeam server. That is found in this article. We also need access to our GravityZone UI so we can create a policy, add exclusions to it, and then apply it to the VM.

Let’s get started.

Creating Policy

We need to start off in the GravityZone UI and change to the Policies area.

We do not want to edit the default policy, as it is applied to everyone, plus I think we cannot delete it. So we are going to select it and clone it, giving us a new copy that we can tweak and attach to our Veeam server. Once it is cloned, it opens up and is seen like below.

We need to name it something appropriate – I will call it VBR Exclusions. I like the default policy and think it is pretty good. So I am going to leave this clone as it was in the default policy and only add the Veeam exclusions to it. So change to the Antimalware area and select Settings.

You can see it below, where I have already entered the Veeam server exclusions.

You only need to enable the Custom Exclusions checkbox. Then add in what you see above. Once you have finished, use the Save button to save this new policy. It is the same as the default – which I said I like – except it has additional exclusions.

I do not know how to attach a policy to a package so that when it is installed it gets the policy.  So we are going to install with the default and change it afterwards.

Install the Bitdefender client

Likely you know how to do that – download a package and execute it. Once done make sure you see it in the GravityZone UI.

Now we need to assign the proper policy to our Veeam server.

Policy Assignment

We need to be in the Policies \ Assignment Rules area.

We add a location rule by using Add \ Location. Once we do that we see the following screen.

We add a name, and description, plus select the policy we just created the exclusions in and add an IP address for our Veeam server.

Now we change to the Policies view; it may take a minute or two, and then you will see something different.

We see that one has the policy, which makes sense, but it shows 4 applied, which is confusing. However, I run a Policy Compliance report, which shows me who has what policy, and I see that VBR01 – my Veeam server – is the only one that has the policy.

So things look good now. We have created a special policy for our Veeam server, applied it, and confirmed it was applied.

Any questions or comments let me know.


=== END ===

How to bring balance into your infrastructure

Veeam Software Official Blog  /  Evgenii Ivanov

Veeam Backup & Replication is known for ease of installation and a moderate learning curve. It is something that we take as a great achievement, but as we see in our support practice, it can sometimes lead to a “deploy and forget” approach, without fine-tuning the software or learning the nuances of its work. In our previous blog posts, we examined tape configuration considerations and some common misconfigurations. This time, the blog post is aimed at giving the reader some insight on a Veeam Backup & Replication infrastructure, how data flows between the components, and most importantly, how to properly load-balance backup components so that the system can work stably and efficiently.

Overview of a Veeam Backup & Replication infrastructure

Veeam Backup & Replication is a modular system. This means that Veeam as a backup solution consists of a number of components, each with a specific function. Examples of such components are the Veeam server itself (as the management component), proxy, repository, WAN accelerator and others. Of course, several components can be installed on a single server (provided that it has sufficient resources) and many customers opt for all-in-one installations. However, distributing components can give several benefits:

  • For customers with branch offices, it is possible to localize the majority of backup traffic by deploying components locally.
  • It allows you to scale out easily. If your backup window increases, you can deploy an additional proxy. If you need to expand your backup repository, you can switch to a scale-out backup repository and add new extents as needed.
  • You can achieve High Availability for some of the components. For example, if you have multiple proxies and one goes offline, the backups will still be created.

Such a system can only work efficiently if everything is balanced. An unbalanced backup infrastructure can slow down due to unexpected bottlenecks or even cause backup failures because of overloaded components.

Let’s review how data flows in a Veeam infrastructure during a backup (we’re using a vSphere environment in this example):

All data in Veeam Backup & Replication flows between source and target transport agents. Let’s take a backup job as an example: a source agent runs on a backup proxy, and its job is to read the data from a datastore, apply compression and source-side deduplication, and send it over to a target agent. The target agent runs directly on a Windows/Linux repository or on a gateway if a CIFS share is used. Its job is to apply target-side deduplication and save the data in a backup file (.VBK, .VIB etc.).

That means there are always two components involved, even if they are essentially on the same server and both must be taken into account when planning the resources.

Tasks balancing between proxy and repository

To start, we must examine the notion of a “task.” In Veeam Backup & Replication, a task is equal to a VM disk transfer. So, if you have a job with 5 VMs and each has 2 virtual disks, there is a total of 10 tasks to process. Veeam Backup & Replication is able to process multiple tasks in parallel, but the number is still limited.

If you go to the proxy properties, on the first step you can configure the maximum concurrent tasks this proxy can process in parallel:


On the repository side, you can find a very similar setting:

For normal backup operations, a task on the repository side also means one virtual disk transfer.

This brings us to our first important point: it is crucial to keep the resources and number of tasks in balance between proxy and repository.  Suppose you have 3 proxies set to 4 tasks each (that means that on the source side, 12 virtual disks can be processed in parallel), but the repository is set to 4 tasks only (that is the default setting). That means that only 4 tasks will be processed, leaving idle resources.
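The task arithmetic above can be made concrete with a tiny illustrative model (plain Python, not Veeam code; the function names are invented for this sketch):

```python
def total_tasks(num_vms, disks_per_vm):
    """A task is one virtual disk transfer, so 5 VMs x 2 disks = 10 tasks."""
    return num_vms * disks_per_vm

def effective_parallelism(proxy_task_limits, repo_task_limit):
    """Parallel disk transfers are capped by the smaller side:
    the combined proxy slots vs. the repository's concurrent-task limit."""
    return min(sum(proxy_task_limits), repo_task_limit)

# 3 proxies x 4 tasks = 12 source slots, but a repo limit of 4 leaves 8 slots idle
print(effective_parallelism([4, 4, 4], 4))   # 4
print(effective_parallelism([4, 4, 4], 12))  # 12
```

Raising the repository limit (or lowering the proxy limits) until the two sides match is what keeps resources from sitting idle.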

The meaning of a task on a repository is different when it comes to synthetic operations (like creating a synthetic full). Recall that synthetic operations do not use proxies; they happen locally on a Windows/Linux repository or between a gateway and a CIFS share. In this case, for normal backup chains a task is a backup job (so 4 tasks mean that 4 jobs can generate a synthetic full in parallel), while for per-VM backup chains a task is still a VM (so 4 tasks mean that the repo can generate 4 separate VBKs for 4 VMs in parallel). Depending on the setup, the same number of tasks can create a very different load on a repository! Be sure to analyze your setup (the backup job mode, the job scheduling, the per-VM option) and plan resources accordingly.

Note that, unlike for a proxy, you can disable the limit on the number of parallel tasks for a repository. In this case, the repository will accept all incoming data flows from proxies. This might seem convenient at first, but we highly discourage disabling this limitation, as it may lead to overload and even job failures. Consider this scenario: a job has many VMs with a total of 100 virtual disks to process, and the repository uses the per-VM option. The proxies can process 10 disks in parallel and the repository is set to an unlimited number of tasks. During an incremental backup, the load on the repository is naturally limited by the proxies, so the system stays in balance. However, then a synthetic full starts. Synthetic fulls do not use proxies; all operations happen solely on the repository. Since the number of tasks is not limited, the repository will try to process all 100 tasks in parallel! This will require immense resources from the repository hardware and will likely cause an overload.
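The overload scenario can be expressed as a back-of-the-envelope sketch (illustrative Python only; the names are made up and real repository behavior is more nuanced):

```python
def synthetic_full_tasks(num_jobs, total_vms, per_vm, repo_task_limit=None):
    """Rough count of tasks a repository processes at once during synthetic fulls.

    Per-VM chains: one task per VM; normal chains: one task per job.
    repo_task_limit=None models the 'unlimited' setting.
    """
    tasks = total_vms if per_vm else num_jobs
    return tasks if repo_task_limit is None else min(tasks, repo_task_limit)

# One job, 100 per-VM chains, unlimited repo tasks: everything at once
print(synthetic_full_tasks(1, 100, True, None))  # 100
# The same job with a sane repo limit stays throttled
print(synthetic_full_tasks(1, 100, True, 4))     # 4
```

The point of the model: the proxy-side throttle that protects you during incrementals simply does not apply here, so the repository limit is the only safety valve.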

Considerations when using CIFS share

If you are using a Windows or Linux repository, the target agent will start directly on the server.  When using a CIFS share as a repository, the target agent starts on a special component called a “gateway,” that will receive the incoming traffic from the source agent and send the data blocks to the CIFS share. The gateway must be placed as close to the system sharing the folder over SMB as possible, especially in scenarios with a WAN connection. You should not create topologies with a proxy/gateway on one site and CIFS share on another site “in the cloud” — you will likely encounter periodic network failures.

The same load balancing considerations described previously apply to gateways as well. However, the gateway setup requires additional attention because there are 2 options available — set the gateway explicitly or use the automatic selection mechanism:

Any Windows “managed server” can become a gateway for a CIFS share. Depending on the situation, both options can come in handy. Let’s review them.

You can set the gateway explicitly. This option can simplify resource management — there can be no surprises as to where the target agent will start. It is recommended to use this option if access to the share is restricted to specific servers or in the case of distributed environments — you don’t want your target agent to start far away from the server hosting the share!

Things become more interesting if you choose Automatic selection. If you are using several proxies, automatic selection gives you the ability to use more than one gateway and distribute the load. Automatic does not mean random, though; there are indeed strict rules involved.

The target agent starts on the proxy that is doing the backup. In the case of normal backup chains, if several jobs run in parallel and each is processed by its own proxy, then multiple target agents can start as well. However, within a single job, even if the VMs in the job are processed by several proxies, the target agent will start only on one proxy, the first to start processing. For per-VM backup chains, a separate target agent starts for each VM, so you get load distribution even within a single job.
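These selection rules can be summarized in a short sketch (again a simplified, hypothetical Python model, not actual Veeam logic, and ignoring proxy slot limits):

```python
def concurrent_target_agents(running_jobs, vms_per_job, per_vm_chains):
    """How many target agents can run at once under automatic gateway selection.

    Normal chains: one target agent per running job (even if several proxies
    process that job's VMs). Per-VM chains: one target agent per VM.
    """
    if per_vm_chains:
        return running_jobs * vms_per_job
    return running_jobs

print(concurrent_target_agents(3, 5, False))  # 3
print(concurrent_target_agents(3, 5, True))   # 15
```

This is why per-VM chains spread gateway load much further than normal chains when several proxies are available.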

Synthetic operations do not use proxies, so the selection mechanism is different: the target agent starts on the mount server associated with the repository (with the ability to fail over to the Veeam server if the mount server is unavailable). This means that the load of synthetic operations will not be distributed across multiple servers. As mentioned above, we discourage setting the number of tasks to unlimited — that can cause a huge load spike on the mount/Veeam server during synthetic operations.

Additional notes

Scale-out backup repository. SOBR is essentially a collection of usual repositories (called extents). You cannot point a backup job to a specific extent, only to the SOBR; however, extents retain some of their settings, including load control. So what was discussed about standalone repositories pertains to SOBR extents as well. A SOBR with the per-VM option (enabled by default), the “Performance” placement policy and backup chains spread out across extents will be able to optimize the resource usage.

Backup copy. Instead of a proxy, source agents will start on the source repository. All considerations described above apply to source repositories as well (although in the case of a Backup Copy Job, synthetic operations on a source repository are logically not possible). Note that if the source repository is a CIFS share, the source agents will start on the mount server (with a failover to the Veeam server).

Deduplication appliances. For DataDomain, StoreOnce (and possibly other appliances in the future) with Veeam integration enabled, the same considerations apply as for CIFS share repositories. For a StoreOnce repository with source-side deduplication (Low Bandwidth mode) the requirement to place gateway as close to the repository as possible does not apply — for example, a gateway on one site can be configured to send data to a StoreOnce appliance on another site over WAN.

Proxy affinity. A feature added in 9.5, proxy affinity creates a “priority list” of proxies that should be preferred when a certain repository is used.

If a proxy from the list is not available, a job will use any other available proxy. However, if the proxy is available but does not have free task slots, the job will pause waiting for free slots. Even though proxy affinity is a very useful feature for distributed environments, it should be used with care, especially because it is very easy to set and then forget about this option. Veeam Support has encountered cases of “hanging” jobs that came down to an affinity setting that was enabled and forgotten about. More details on proxy affinity.


Whether you are setting up your backup infrastructure from scratch or have been using Veeam Backup & Replication for a long time, we encourage you to review your setup with the information from this blog post in mind. You might be able to optimize the use of resources or mitigate some pending risks!

The post How to bring balance into your infrastructure appeared first on Veeam Software Official Blog.

Veeam ONE Report – Custom – host firmware info

Notes from MWhite  /  Michael White

I recently showed a customer how to build a Veeam ONE report that shows the firmware of the VMware hosts and the version/build of the VMware software. It was an interesting use of the custom reporting in ONE, and since the other people I have shown it to liked it too, I thought I would write about it.
We start by logging into the Veeam ONE Reporter UI. It is the one on port 1239, so https://fqdn_ONE_server:1239. Next, change to the Workspace tab, then select the Custom Reports folder. You can see it below.

Once we click on Custom Reports we will see a selection of them.

The one that we are going to start with, sort of as framework for what we want is the one called Custom Infrastructure. It is seen above with the red arrow pointing at it.
Let’s click on Custom Infrastructure so that we can work with it.

Depending on what you have done with this report type before, some of the fields may look different. Object type above says Host System, but that is because I have used it before. It may say Click to choose for you; if it does, select it and choose Host System as you see above.
Next we select the Click to choose for Columns. What we are going to do here is lay out our report. I would like the following columns – Name, Virtualization Info, Version, Build, Host BIOS firmware, Manufacturer. So let’s pick those. As you select these values, notice all the other things you can pick! Once selected, it should look like below.

Once you have selected what you need, you can use the OK button to return to the main screen.  I like to use the Preview Button now to confirm what it looks like.

And it is what I want, so I change tabs to get back to where we hit the Preview button.
Now we use the Save as button to save our report.

I like to save my reports to the My Reports folder. And even though we made this report ourselves, it is now treated like the others – produced automatically on a schedule, for example. Or you can click on the report in your My Reports folder and do things like edit it, view it, or delete it.
When you have previewed the report, or when you are looking at it, you can always export it.  At the top of the screen you will see an Export button.

When you use that Export button you can export to Word, Excel or PDF.
Hope that this helps, but you can always comment or ask questions. ONE is a pretty powerful tool, but people don’t always seem to see that, so I am going to do some articles to help with that.
BTW, any technical ONE articles can be found using this tag.
===END ===

Resume Veeam Failed Backup Jobs

CloudOasis  /  HalYaman

What are your options if you wish to automate the resumption of backup jobs on your Veeam Backup server after a failover? Is there a way to automatically resume your backup jobs after switching over to the Veeam Backup server?
As you may be aware, Veeam does not offer an “out-of-the-box” High Availability function for its own backup server. The reason is probably that the simplicity of deploying and recovering the backup server makes such an option redundant. You can read more about the steps at this link. However, you might still like to have an HA function for its convenience.
One way to protect the Veeam Backup server, and to offer it High Availability, is to replicate the backup server using the Veeam Backup & Replication product. Or you could use the VMware vSphere Replication tool.
To accomplish a complete High Availability solution for the Veeam Backup server, it is important that after the failover has happened, the Replica Veeam backup server automatically resumes the backup jobs.
To accomplish the automatic resumption after a failover, you can use Veeam PowerShell to initiate the resumption of failed backup jobs. It requires a few simple steps, and can be automated by adding the following PowerShell commands to the post-failover script:


The following commands check the status of each Backup and Replication Job. They will resume, or retry, each failed Backup job and Replication Job:

# Retry failed backup jobs
Get-VBRJob | where {$_.JobType -eq "Backup" -and $_.GetLastResult() -eq "Failed"} | Start-VBRJob -RetryBackup
# Retry failed replication jobs
Get-VBRJob | where {$_.JobType -eq "Replica" -and $_.GetLastResult() -eq "Failed"} | Start-VBRJob -RetryBackup
# Or do both at once (the parentheses matter: -and binds tighter than -or)
Get-VBRJob | where {($_.JobType -eq "Backup" -or $_.JobType -eq "Replica") -and $_.GetLastResult() -eq "Failed"} | Start-VBRJob -RetryBackup


These steps, together with the steps in my previous blog post about protecting the Veeam Backup server, allowed the Service Provider I was working with to implement an easy and straightforward HA solution for their Veeam Backup server.
Testing was carried out, adjustments were made, and the finished solution was implemented in the PRODUCTION environment. These easy steps have become the cornerstone of their Veeam High Availability solution. “Easy, and very effective” is how they described it.

The post Resume Veeam Failed Backup Jobs appeared first on CloudOasis.

The Office 365 Shared Responsibility Model

Veeam Software Official Blog  /  Russ Kerscher

The No. 1 question we get all the time: “Why do I need to back up my Office 365 Exchange Online, SharePoint Online and OneDrive for Business data?”
And it’s normally instantaneously followed up with a statement similar to this: “Microsoft takes care of it.”
Do they? Are you sure?
To add some clarity to this discussion, we’ve created an Office 365 Shared Responsibility Model. It’s designed to help you — and anyone close to this technology — understand exactly what Microsoft is responsible for and what responsibility falls on the business itself. After all — it is YOUR data!
Over the course of this post, you’ll see we’re going to populate this Shared Responsibility Model. On the top half of the model, you will see Microsoft’s responsibility. This information was compiled from the Microsoft Office 365 Trust Center, in case you would like to look for yourself.
On the bottom half, we will populate the responsibility that falls on the business, or more specifically, the IT organization.

Now, let’s kick this off by talking specifically about each group’s primary responsibility. Microsoft’s primary responsibility is focused on THEIR global infrastructure and their commitment to millions of customers to keep this infrastructure up and running, consistently delivering uptime reliability of their cloud service and enabling the productivity of users across the globe.
An IT organization’s responsibility is to have complete access and control of their data — regardless of where it resides. This responsibility doesn’t magically disappear simply because the organization made a business decision to utilize a SaaS application.

Here you can see the supporting technology designed to help each group meet that primary responsibility. Office 365 includes built-in data replication, which provides data center to data center georedundancy. This functionality is a necessity. If something goes wrong at one of Microsoft’s global data centers, they can failover to their replication target, and, in most cases, the users are completely oblivious to any change.
But replication isn’t a backup. And furthermore, this replica isn’t even YOUR replica; it’s Microsoft’s. To further explain this point, take a minute and think about this hypothetical question:

Which has you fully protected: a backup or a replica?

Some of you might be thinking a replica — because data that is continuously or near-continuously replicated to a second site can eliminate application downtime. But some of you also know there are issues with a replication-only data protection strategy. For example, deleted data or corrupt data is also replicated along with good data, which means your replicated data is now also deleted or corrupt.
To be fully protected, you need both a backup and a replica! This fundamental principle has been the bedrock of Veeam’s data protection strategy for over 10 years. Look no further than our flagship product, aptly named Veeam Backup & Replication.

Some of you are probably already thinking: “But what about the Office 365 recycle bin?” Yes, Microsoft has a few different recycle bin options, and they can help you with limited, short-term data loss recovery. But if you are truly in complete control of your data, then “limited” can’t check the box. To truly have complete access and control of your business-critical data, you need full data retention: short-term retention, long-term retention and the ability to fill any and all retention policy gaps. In addition, you need granular recovery, bulk restore and point-in-time recovery options at your fingertips.

The next part of the Office 365 Shared Responsibility Model is security. You’ll see that this is strategically designed as a blended box, not separate boxes — because both Microsoft AND the IT organization are each responsible for security.
Microsoft protects Office 365 at the infrastructure level. This includes the physical security of their data centers and the authentication and identification within their cloud services, as well as the user and admin controls built into the Office 365 UI.
The IT organization is responsible for security at the data level. There’s a long list of internal and external data security risks, including accidental deletion, rogue admins abusing access and ransomware, to name a few. Watch this five-minute video on how ransomware can take over Office 365. This alone will give you nightmares.

The final components are legal and compliance requirements. Microsoft makes it very clear in the Office 365 Trust Center that their role is of the data processor. This drives their focus on data privacy, and you can see on their site that they have a great list of industry certifications. Even though your data resides within Office 365, an IT organization’s role is still that of the data owner. And this responsibility comes with all types of external pressures from your industry, as well as compliance demands from your legal, compliance or HR peers.

In summary, now you should have a better understanding of exactly what Microsoft protects within Office 365 and WHY they protect what they do. Without a backup of Office 365, you have limited access and control of your own data. You can fall victim to retention policy gaps and data loss dangers. You also open yourself up to some serious internal and external security risks, as well as regulatory exposure.
All of this can be easily solved with a backup of your own data, stored in a place of your choosing, so that you can easily access and recover exactly what you want, when you want.

Looking to find a simple, easy-to-use Office 365 backup solution?
Look no further than Veeam Backup for Microsoft Office 365. This solution has already been downloaded by over 35,000 organizations worldwide, representing 4.1 million Office 365 users across the globe. Veeam was also named to Forbes World’s Best 100 Cloud Companies and is a Gold Microsoft Partner. Give Veeam a try and see for yourself.

Additional resources:

The post The Office 365 Shared Responsibility Model appeared first on Veeam Software Official Blog.

Original Article:


Intelligent Data Management for a Hybrid World @ VMworld

Intelligent Data Management for a Hybrid World

vZilla  /  michaelcade

Our session this year focuses on automation and orchestration around Veeam and VMware. But what does that mean? The point of our session is to highlight the flexibility of the Veeam Hyper-Availability Platform. Some people just want the simple, easy-to-use, wizard-driven approach to installing their Veeam components within their environments, but some want a little bit more, and this is where APIs come in, allowing us to drive a more streamlined and automated approach to delivering Veeam components.
We also highlighted this by running through everything live, I will get to the nuts and bolts of that shortly.

With there being a strong focus at this year’s event, we wanted to highlight the capabilities by using VMware on AWS. Veeam was one of the first vendors highlighted as a supported data protection platform that could protect workloads within VMware on AWS, a year ago now, and we wanted to showcase those features and capabilities within Veeam.

Veeam Availability Orchestrator – “Replication on Steroids”

The first thing we will touch on is Veeam Availability Orchestrator, released this year. It provides a “Replication on Steroids” option for your vSphere environment. That environment can be on premises or any other vSphere environment, such as VMware on AWS, where you may still want to keep your DR location on premises and send those replicas down in case of any service disruption within the AWS cloud. The replication concentrates on the application rather than just sending a VM from site to site, and it also enables automated testing against these replicas to simulate disaster recovery scenarios. The other large part of this is the automated documentation. Ever had to create your own DR run book? I have; this does the majority for you while remaining dynamic to any changes in configuration.

Veeam DataLabs

Then we wanted to highlight some more automation goodness around Veeam DataLabs. Alongside that backup and replication capability, it gives you an automated way of testing that your backups, replicas or storage snapshots are in a good, recoverable state. It also provides the ability to get more leverage from those sources, offering isolated environments for gaining insight or driving better business outcomes.
I plan to follow up on this, as it is one of my passions within our technology stack. The ability to leverage Veeam DataLabs from many of the products in the platform to drive different outcomes is really where I see us differentiating in the market.

The Bulk of the session

As you can see, we are already cramming quite a bit into the session. But this is the main focus point of the day for us: delivering a fully configured Veeam environment from the ground up, all live while we are on stage. And because we can, we are doing this on VMware on AWS.
The driving use case for this was the Veeam proof of concept process. It was already fast: deploy one Windows server and seven clicks later you have Veeam configured. Perfect. But the issue wasn’t the Veeam installation. What if we could take an automated approach, understand the customer’s pain points and needs, and then in the background automate the process of building the Veeam components and automatically start protecting a subset of data, all in the first hour of that meeting?
The beauty of this is that you do not need to be a DevOps engineer skilled in configuration management or a developer in Ruby. The hard work has been done already and is available for free on GitHub and in the Chef Supermarket.
I have listed the tools below that we used to get things up and running. To be honest, when we make this available, you will only really need PowerShell, PowerCLI and Terraform installed on your workstation.

The steps we went through live were deploying the Veeam Backup & Replication server along with multiple proxy servers to handle the load appropriately. Because of the location of the Veeam components and our production environment, we also chose to leverage the native AWS services and deployed an EC2 instance for our backup repository, though this could be any storage in any location as per our repository best practices. We also added a Veeam Cloud Connect service provider to show a different media type and location for your backup or replication requirements. Finally, we automated the provisioning of vSphere tags and then created backup jobs based on those.
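As a rough illustration of that last step, the tag provisioning can be scripted with PowerCLI along these lines. This is a minimal sketch, not the exact code from the session; the vCenter address, category, tag and VM name pattern are hypothetical examples:

```powershell
# Connect to the vCenter behind the SDDC (hypothetical address)
Connect-VIServer -Server "vcenter.sddc.example.com" -Credential (Get-Credential)

# Create a tag category and a tag used to mark VMs for protection
New-TagCategory -Name "VeeamPolicy" -Cardinality Single -EntityType VirtualMachine
New-Tag -Name "Tier1-Backup" -Category "VeeamPolicy"

# Assign the tag to every VM whose name matches a pattern
$tag = Get-Tag -Name "Tier1-Backup"
Get-VM -Name "app-*" | ForEach-Object {
    New-TagAssignment -Tag $tag -Entity $_
}
```

A Veeam backup job scoped to that tag will then automatically pick up any VM tagged this way, which is what makes the tag-driven job creation dynamic.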

By the end of the session we had the following built out, over on the right you can see we have a Veeam Backup & Replication server and some additional proxies. On the left at the top we have our Veeam cloud connect backup as a service offering and at the bottom left we also have our on-premises vSphere environment where we could send further backup files or even as a target for those replication jobs. Underneath the VMware on AWS you can see the Amazon EC2 instance where we will store our initial backup files for fast recovery.

As I know some of you will be catching this whilst at the show I want to give a shameless plug out for the next session which goes into more detail around the Chef element of this dynamic deployment so you can find those details below.

I also want to give a huge shoutout to @VirtPirate, aka Jeremy Goodrum of Exosphere, who helped make the Terraform and Chef piece happen. He also has an article over here diving into the latest version of the cookbook and some other related code releases he has made.
Veeam Cookbook 2.1.1 released and Sample vSphere Terraform Templates
Expect to see much more content about this in the form of a whitepaper and more blogs to consume.
The post Intelligent Data Management for a Hybrid World appeared first on vZilla.

Original Article:


White paper: Six reasons to backup Office 365 – ITWeb

White paper: Six reasons to backup Office 365 – ITWeb

veeam – Google News

Six reasons to backup Office 365. Do you have control of your Office 365 data? Do you have access to all the items you need? The knee-jerk reaction is typically: “Of course I do,” or “Microsoft takes care of it all.” But if you really think about it, are you sure? This report explores the hazards of not having an Office 365 backup in your arsenal, and why backup solutions for Microsoft Office 365 fill the gap of long-term retention and data protection.

Original Article:


#VMworld 2018 – Day 2 – #Veeam in The #AWS Marketplace

#VMworld 2018 – Day 2 – #Veeam in The #AWS Marketplace

vZilla  /  michaelcade

That’s a wrap for day 2 of VMworld, there were two big bits worth mentioning from a Veeam perspective, firstly it is the VMware on AWS marketplace and the addition of Veeam Backup & Replication as an option for automated deployment.

This screen has been taken from beta testing.

To summarise what this means: for the Veeam customers that have taken the steps to leverage VMware on AWS, it gives them the same simple, easy-to-use and seamless way to get things protected from a backup and replication point of view, using the same toolset we know from our on-premises vSphere environments.

Yesterday I spoke about automation, and we also shared in our session that this feature is coming available. Both this and the Chef and Terraform approach we spoke about yesterday are types of automation, but for possibly different end users and prospects.

The beauty of the Chef deployment we discussed is that it allows us to be more distributed and dynamic, while this offering is going to get things up and running as fast as possible and start protecting workloads. There will also be some customers that don’t want to explore the open source community of our Chef cookbook.

It’s as simple as deploying to your SDDC and then choosing the CloudFormation template. The template will use the VPC that is linked to your VMware on AWS instance. The CloudFormation template will be executed on the VMware on AWS instance.

This template from CloudFormation will then run through the creation of the stack required.

When that CloudFormation template has been run, it’s then time to start the environment configuration: resource pools, network configuration and so on.

Then comes the summary screen, showing all the configuration you are about to commit.

This will then continue the deployment with your configuration and you will be able to see the Veeam server deployed within your SDDC.

When you first log in to the newly created Veeam server, you will see that the repository server has been added per the stack configuration of the CloudFormation template. It has also added your vSphere vCenter server. You can now see the VMs within your SDDC and can begin protecting those instances.

The final thing I wanted to share was the capabilities, it’s not just backup, you also have the ability to replicate these virtual machines from this vSphere environment to any other environment including an on-premises environment, a Cloud Connect Service Provider offering replication as a service or to another vSphere environment anywhere.

The post #VMworld 2018 – Day 2 – #Veeam in The #AWS Marketplace appeared first on vZilla.

Original Article:


Instant visibility & restore for Microsoft SharePoint

Instant visibility & restore for Microsoft SharePoint

Veeam Software Official Blog  /  Kirsten Stoner

Microsoft SharePoint is an invaluable tool used by organizations worldwide for data sharing and collaboration among teams. SharePoint provides businesses with a way to increase teamwork and productivity to streamline their processes and improve their business outcomes. There are several deployment options available for SharePoint, such as on-premises, online through Office 365 or a hybrid deployment. Each option has its own benefits, however, this blog post focuses on Veeam Explorer for SharePoint being used in an on premises deployment model of Microsoft SharePoint. If you’re using SharePoint online, Veeam Explorer is available to you as well through Veeam Backup for Microsoft Office 365.

An on-premises SharePoint farm could consist of multiple servers, with each server needing to remain operational to meet your end users’ expectations. To meet those demands, it’s important to have an Availability strategy in place. Veeam meets these expectations by giving you the technology to browse the database, restore individual items and gain instant visibility, while still being easy to use.

Veeam Explorer for Microsoft SharePoint

Veeam has developed many powerful built-in Explorers in its software, and Veeam Explorer for Microsoft SharePoint is no different. From the Veeam backup of your SharePoint server, you gain the ability to browse the content database, recover necessary items without having to fully restore, and start the virtual machine hosting the content database. Like the other Veeam Explorers, this tool is available with all editions of Veeam Backup & Replication, even the Free Edition!

When you perform a backup of your SharePoint Server, remember to enable Application Aware Image Processing. This technology creates a transactional-consistent backup to guarantee the proper recovery of your applications running on VMs. Once you successfully created the backup or replica of your SharePoint Server, you can start using the Explorer. There are a couple options available to you when using the Explorer, these include: browsing the SharePoint database, restoring individual SharePoint items and permissions, exporting items (sending as an email attachment or saving them to another location), and the ability to restore SharePoint sites.

Instant Visibility

Once you’re ready to perform a recovery, the application item restore wizard will auto-discover the SharePoint farms that were backed up and initiate the mount operation. During this operation, Veeam Backup & Replication retrieves information about SharePoint sites, the corresponding database server VMs, and restore points.

Figure 1: Veeam SharePoint Item restore wizard

When first initiating the restore, the wizard shows you the list of available sites included in the backup, allowing you to choose which site you want to explore to find the items you need. The Application Aware Image Processing technology is how Veeam Backup & Replication auto-discovers the information about your SharePoint Servers. It is important to remember to select this option when first performing the backup.

Figure 2: Veeam Explorer for Microsoft SharePoint

Within the Explorer itself you can view the content databases, sites, subsites, libraries and lists. Depending on what you select, you can browse and view its contents to find what needs to be restored. If you’re restoring a document, you can even open and preview it to ensure it’s the correct item to recover. Available in all editions of Veeam Backup & Replication, Veeam Explorer for Microsoft SharePoint delivers granular browsing and search capabilities to find any item or multiple items stored in any number of Microsoft SharePoint databases. To support this capability, the guest file system of the VM is mounted directly from the backup to a staging Microsoft SQL Server. By default, Veeam will use the SQL Server Express instance that was installed when you deployed Veeam Backup & Replication. One thing to note: the staging system must be the same version as, or compatible with, the Microsoft SQL Server that hosts the Microsoft SharePoint content databases. If it is not, you will need to identify a compatible staging SQL Server to be able to use the Explorer. This setting is available within the Veeam Explorer options, under the SQL Server Settings tab. For detailed instructions on this functionality, please refer to the user guide.

With the amount of visibility Veeam Explorer for Microsoft SharePoint provides, you may want to be able to keep track of who is accessing the Explorer, what they are looking at, and why they are performing restores. For this, Veeam offers another layer of visibility, especially when it comes to restore operations. This visibility comes in the form of Veeam ONE, specifically the Restore Operators Report allowing you to safeguard your data with the ability to see who is accessing your data, where it is being restored to, and what items are being restored.

Veeam ONE Restore Operators Report

Veeam offers powerful, useful tools to ensure Availability for your business. Sometimes we need to take an extra step to ensure we are also meeting the security requirements of the business. Veeam ONE’s Restore Operators Report gives you a detailed description of who is accessing your backup data and what restores they are or are not performing. This adds an extra layer of visibility by letting you view all types of restore actions performed across the Veeam backup servers.

Figure 3. Restore Operators Activity Report

The above report shows who is accessing the backup data and what restores they are performing. This is an easy way to ensure that the people who have permission to access certain data are only accessing that data when and how they’re supposed to. The image also shows the different users performing restores and what type of restore each is: application, full VM, files, or even a restore from tape.

Figure 4. Restore Operators Report Continued

Going deeper into the report, you can see which VMs the users are accessing and what restores they are performing, or if they’re even performing a restore. This report is very useful to double check to ensure your users are only accessing what they should be accessing.


Microsoft SharePoint is a valuable tool used in organizations today to increase collaboration among teams, improving teamwork and organizational knowledge so better decisions can be made. Veeam Explorer for Microsoft SharePoint allows you to keep your business’ most important applications available to meet your end users’ demands. An added benefit: Veeam Explorer for Microsoft SharePoint is even included in Veeam Backup Free Edition, allowing you to start using this powerful technology today!


Original Article:


How to Build a Failover Plan in Veeam Availability Orchestrator

How to Build a Failover Plan in Veeam Availability Orchestrator

Veeam Software Official Blog  /  Melissa Palmer

One of the most important components of Veeam Availability Orchestrator is the Failover Plan. The Failover Plan is an essential part of an organization’s disaster recovery plan. It contains the virtual machines to be protected, what steps to take during recovery, and other important information.

Now, we are going to take a look at the step-by-step process of creating your disaster recovery plan with Veeam Availability Orchestrator.

When you start the New Failover Plan Wizard, you will first be prompted to select a site. If you have multiple sites in your VAO environment, you would pick the production site of the application you are protecting.

Next, we want to give our Failover Plan a name. I like to use something that is clear and concise, such as the application name. You can also enter a description of your Failover Plan, as well as the contact information for the application you are protecting.

Next, we select the VM Group (or multiple VM Groups) containing the virtual machines of our application. As we mentioned in a previous post, VM Groups can be powered by VMware vSphere Tags. In this list, you can see the VMware vSphere Tags I have set up in my environment. In this case, I am going to select the applications with the HeliumRUN Windows Tag, since it has the virtual machines I am protecting with this Failover Plan.

Next are our VM Recovery Options. On this screen, we can decide how to handle a VM recovery failure in the unlikely event it happens. We can use VAO to run scheduled recovery tests on a regular basis, so this sort of failure should be a rare occurrence. We can also specify whether to recover our VMs in a particular order or all at the same time, and how many VMs to recover simultaneously.

In the next screen, we select the steps to take for each VM during recovery. After we finish creating the Failover Plan, we will be able to add additional steps for individual VMs, including custom steps we upload to VAO. This is useful when we want to configure particular steps to verify the operation of an application such as Exchange, SharePoint, IIS, or SQL. For a complete list of Failover Plan steps included with VAO, be sure to take a look at the Veeam Availability Orchestrator official user guide here. Some steps, such as Verify SQL Database, require credentials. If you select a step that requires credentials, you will be prompted to enter them.

One of the most important things to remember is that after we execute a disaster recovery plan, our disaster recovery site is now our production site. Because of this, it is very important that our applications receive the same level of protection they would on any other day. Luckily, Veeam Availability Orchestrator makes this easy by leveraging a pre-configured template job in Veeam Backup & Replication. At this screen, you can simply select the backup job you wish to use to protect your data at the disaster recovery site.

After ensuring your data is protected after your disaster recovery plan has executed, the next step is to configure Veeam Availability Orchestrator’s reporting capabilities. VAO has a completely customizable report template. These disaster recovery plan templates allow for the inclusion of all information needed during a disaster recovery plan execution, and can be scheduled to be sent to key stakeholders on a regular basis to ensure the environment is always ready for failover. For more about the reports included in VAO, be sure to check out this guide to VAO terminology.

By default, the Plan Definition Report and Readiness Check are scheduled to run daily, which is a great way to check the health of our disaster recovery plan. The Plan Definition Report includes all the information about the Failover Plan we just created, as well as a log of changes that have been made. The Readiness Check is a light-weight test that checks to ensure we are ready for a failover at a moment’s notice. If for some reason our Readiness Check has an error, we can then act to remediate it before a disaster strikes.

Finally, we are presented with a summary screen that shows us how our Failover Plan has been configured.  Once we click Finish, we have completed setting up our Failover Plan.

If we want to make any changes to our Failover Plan, it’s as simple as right-clicking on our Failover Plan and selecting “Edit,” or highlighting our Failover Plan and clicking “Manage” and then “Edit” on the navigation bar. The edit state is where we can add specific steps to each virtual machine, or to the failover plan in general. For example, I have uploaded a script to be run in the event of a disaster to make some DNS changes for my environment.

This screen can be used to add either pre- or post-failover steps, or steps to each VM individually. The steps can also be put into a particular order if desired. The best part of this functionality is the ability to create a custom flow of steps as needed for each VM. For example, I may want to use the included steps of Verify Web Server Port and Verify Web Site (IIS) for a web server in the Failover Plan, and different steps on the SQL server. All of these steps will then be captured in a Plan Definition Report the next time it is run.

Congratulations, you are now protecting your application with Veeam Availability Orchestrator! If you want to take a look at creating your own Failover Plan, you can download a 30-day FREE trial of Veeam Availability Orchestrator.

The post How to Build a Failover Plan in Veeam Availability Orchestrator appeared first on Veeam Software Official Blog.

Original Article:


Microsoft LAPS deployment and configuration guide

Microsoft LAPS deployment and configuration guide

Veeam Software Official Blog  /  Gary Williams

If you haven’t come across the term “LAPS” before, you might wonder what it is. The acronym stands for the “Local Administrator Password Solution.” The idea behind LAPS is that it allows for a piece of software to generate a password for the local administrator and then store that password in plain text in an Active Directory (AD) attribute.

Storing passwords in plain text may sound counter to all good security practices, but because LAPS uses Active Directory permissions, those passwords can only be seen by users, or members of groups, that have been given the rights to see them.

The main use case is that you can freely give out the local admin password to someone who is travelling and might have problems logging in using cached account credentials. You can then have LAPS set a new password the next time the machine talks to an on-site AD over a VPN.

The tool is also useful for applications that have an auto login capability. The recently released Windows Admin Center is a great example of this:

To set up LAPS, there are a few things you will need to do to get it working properly.

  1. Download the LAPS MSI file
  2. Schema change
  3. Install the LAPS Group Policy files
  4. Assign permissions to groups
  5. Install the LAPS DLL

Download LAPS

LAPS comes as an MSI file, which you’ll need to download and install onto a client machine. You can download it from Microsoft.

Schema change

LAPS needs to add two attributes to Active Directory: the administrator password and the password expiration time. Changing the schema requires the LAPS PowerShell component to be installed. When done, launch PowerShell and run the commands:

Import-module AdmPwd.PS
Update-AdmPwdADSchema

You need to run these commands while logged in to the network as a schema admin.

Install the LAPS group policy files

The group policy templates need to be installed onto your AD servers. The *.admx file goes into the “\Windows\PolicyDefinitions” folder and the *.adml file goes into “\Windows\PolicyDefinitions\[language]”.

Once installed, you should see a LAPS section in GPMC under Computer configuration -> Policies -> Administrative Templates -> LAPS

The four options are as follows:

Password settings — This lets you set the complexity of the password and how often it is required to be changed.

Name of administrator account to manage — This is only required if you rename the administrator to something else. If you do not rename the local administrator, then leave it as “not configured.”

Do not allow password expiration time longer than required by policy — On some occasions (e.g. if the machine is remote), the device may not be on the network when the password expiration time is up. In those cases, LAPS will wait to change the password. If you set this to FALSE, then the password will be changed regardless of whether the machine can talk to AD or not.

Enable local admin password management — Turns on the group policy (GPO) and allows the computer to push the password into Active Directory.

The only option that needs to be altered from “not configured” is the “Enable local admin password management,” which enables the LAPS policy. Without this setting, you can deploy a LAPS GPO to a client machine and it will not work.

Assign permissions to groups

Now that the schema has been extended, the LAPS group policy needs to be configured and permissions need to be allocated. The way I do this is to set up an organizational unit (OU) where computers will get the LAPS policy, plus a read-only group and a read/write group.

Because LAPS is a push process, (i.e. because the LAPS client on the computer is the one to set the password and push it to AD) the computer’s SELF object in AD needs to have permission to write to AD.

The PowerShell command to allow this to happen is:

Set-AdmPwdComputerSelfPermission -OrgUnit <name of the OU to delegate permissions>

To allow helpdesk admins to read LAPS set passwords, we need to allow a group to have that permission. I always setup a “LAPS Password Readers” group in AD, as it makes future administration easier. I do that with this line of PowerShell:

Set-AdmPwdReadPasswordPermission -OrgUnit <name of the OU to delegate permissions> -AllowedPrincipals <users or groups>

The last group I set up is a “LAPS Admins” group. This group can tell LAPS to reset a password the next time that computer connects to AD. This is also set by PowerShell and the command to set it is:

Set-AdmPwdResetPasswordPermission -OrgUnit <name of the OU to delegate permissions> -AllowedPrincipals <users or groups>
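Putting the three delegation steps together, a minimal end-to-end sketch might look like this. The OU path and group names here are hypothetical examples, not values from this article; substitute your own:

```powershell
Import-Module AdmPwd.PS

# Let computers in the LAPS OU write their own password attribute to AD
Set-AdmPwdComputerSelfPermission -OrgUnit "OU=LAPS-Computers,DC=corp,DC=example"

# Helpdesk staff may read the stored passwords
Set-AdmPwdReadPasswordPermission -OrgUnit "OU=LAPS-Computers,DC=corp,DC=example" `
    -AllowedPrincipals "CORP\LAPS Password Readers"

# Admins may force a password reset on the next policy refresh
Set-AdmPwdResetPasswordPermission -OrgUnit "OU=LAPS-Computers,DC=corp,DC=example" `
    -AllowedPrincipals "CORP\LAPS Admins"
```

Keeping the delegation on a dedicated OU and dedicated groups, as described above, makes future administration and troubleshooting much easier.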

Once the necessary permissions have been set up, you can move computers into the LAPS enabled OU and install the LAPS DLL onto those machines.


Install the LAPS DLL

Now that the OU and permissions have been set up, the AdmPwd.dll file needs to be installed onto all the machines in the OU that have the LAPS GPO assigned. There are two ways of doing this. First, you can simply select the AdmPwd DLL extension when running the LAPS MSI file.

Or, you can copy the DLL (AdmPwd.dll) to a location on the path, such as “%windir%\System32”, and then issue a regsvr32.exe AdmPwd.dll command. This process can also be included in a GPO start-up script or a golden image for future deployments.
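If you go the start-up script route, a short sketch of such a script might look like this. The network share path is a hypothetical example; point it at wherever you stage the DLL:

```powershell
# Hypothetical GPO start-up script: register the LAPS client DLL if missing
$dll = Join-Path $env:windir "System32\AdmPwd.dll"
if (-not (Test-Path $dll)) {
    Copy-Item "\\corp.example\netlogon\LAPS\AdmPwd.dll" -Destination $dll
}
# /s registers silently, suitable for unattended start-up
regsvr32.exe /s $dll
```

The Test-Path guard keeps the script idempotent, so it is safe to run at every boot.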

Now that the DLL has been installed on the client, a gpupdate /force should allow the locally installed DLL to do its job and push the password into AD for future retrieval.

Retrieving passwords is straight forward. If the user in question has at least the LAPS read permission, they can use the LAPS GUI to retrieve the password.

The LAPS GUI can be installed by running the setup process and ensuring that “Fat Client UI” is selected. Once installed, it can be run just by launching the “LAPS UI.” Once launched, just enter the name of the computer you want the local admin password for and, if the permissions are set up correctly, you will see the password displayed.
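If you prefer PowerShell to the fat client, the same AdmPwd.PS module exposes a retrieval cmdlet. A quick sketch, where the computer name is a hypothetical example:

```powershell
Import-Module AdmPwd.PS

# Read the stored local admin password and its expiry for one machine
Get-AdmPwdPassword -ComputerName "LAPTOP-042" |
    Select-Object ComputerName, Password, ExpirationTimestamp
```

This requires the same LAPS read permission as the GUI; without it, the Password field comes back blank rather than throwing an error.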

If you do not, check that the GPO is being applied and that the permissions are set for the OU where the user account is configured.


Like any tool, LAPS has a few quirks. The two most common issues I see are staff with permissions being unable to view passwords, and client machines not updating the password as required.

The first thing to check is that the AdmPwd.dll file is installed and registered. Then, check that the GPO is being applied to the server whose local admin password you’re trying to change, using the command gpresult /r. I always like to give applications like LAPS their own GPO to make this sort of troubleshooting much easier.

Next, check that the GPO is actually turned on. One of the oddities of LAPS is that it is perfectly possible to set everything in the GPO and assign the GPO to an OU, but it will not do anything unless the “Enable local admin password management” option is enabled.

If there are still problems, double-check the permissions that have been assigned. LAPS won’t error out; the LAPS GUI will just show a blank for the password, which could mean either that the password has not been set or that the permissions have not been set correctly.

You can double-check permissions using the extended attributes section of Windows permissions. You can access this by launching Active Directory Users and Computers -> browse to the computer object -> Properties -> Security -> Advanced

Double click on the security principal:

Scroll down and check that both Read ms-Mcs-AdmPwd and Write ms-Mcs-AdmPwd are ticked.

In summary, LAPS works very well and is a great tool to deploy to servers and workstations, especially laptops and the like. It can be a little tricky to get working, but it is certainly worth the time investment.


The post Microsoft LAPS deployment and configuration guide appeared first on Veeam Software Official Blog.

Original Article:


Virtual Lab “xray” not found on server “”


Notes from MWhite  /  Michael White

This has been bothering me for a bit.  When I do a test failover I see the error above.  Here is a screenshot:

This is an example out of my lab.  I could start and stop the virtual lab in the VAO UI with no issues.  I removed / uninstalled everything and tried again.  Still had the same issue.
The solution was a major surprise to me: Virtual Labs must be created on the same Veeam Backup & Replication server where the replication consumed by the virtual lab is configured and started.
In the example in the screenshot, I started the replication to my DR site from my production server, and yet I created the virtual lab at the DR site.
The answer to this all is that you need to start your replication in the DR site and of course that is where you should create your virtual labs that you use in VAO.
I hope this is clear, but if not, Google will hopefully deliver you to this article, and you can use the comments to learn more.
=== END ===



Veeam Intelligent Data Management for Huawei OceanStor


Veeam Software Official Blog  /  Stefan Renner

Veeam and Huawei recently released new, integrated storage snapshot and orchestration capabilities for customers using Veeam and Huawei OceanStor storage. This new Veeam Plug-in is based on the Veeam Universal Storage API and allows Veeam solutions to deliver higher levels of backup and recovery efficiency when paired with the Huawei OceanStor storage infrastructure.
The constant flow and management of data is taxing today’s organizations to their limit. Data has become hyper-critical to business, but IT organizations struggle to cope with their data’s hyper-growth and hyper-sprawl while protecting against data loss threats, ransomware, service outages and human error — all of which result in loss of business, productivity and reputation.
To address these new business and technical requirements, Veeam partnered with Huawei and other leading storage providers to deliver integrated data protection and storage solutions. OceanStor customers can now leverage Veeam storage integration for VMware environments, bringing new levels of Intelligent Data Management to their data center for better RTPO (recovery time and point objectives).

Faster, efficient backup for Huawei OceanStor

Backup operations strain production storage environments, resulting in lower performance. The new Veeam-Huawei integration brings agentless Veeam Backup from Storage Snapshots capabilities to OceanStor storage, increasing Veeam backup speed by 2x, and making it up to 10x faster than competing backup solutions.
Veeam’s usage of VMware Change Block Tracking while reading data from storage snapshots minimizes the performance impact on production VMs during backup. As a result, the VMware snapshot lifetime is lowered to minutes instead of hours — which is often the case when using VMware-based backups without storage snapshots.

During standard Veeam backup procedures, Veeam uses parallel disk backup to reduce the backup window, but the VMware VM snapshot may stay open for some time, as would be expected. This leads to higher I/O while the VMware VM snapshot is open, and could cause a performance impact on the VM.
The new integration allows Veeam to create Huawei snapshots in the background directly after the VM snapshot creation. The result is nearly instantaneous. In the example below, the VMware VM snapshot is open only until the storage snapshot has been created, reducing the time the VMware snapshot needs to stay open.

While certain variables can come into play and affect performance, this level of performance will not be uncommon for Huawei OceanStor customers using Veeam Availability storage snapshot integration.

Orchestrate your Huawei OceanStor storage snapshots

Veeam integration with OceanStor also includes orchestration capabilities that reduce backup management complexity and increase efficiency. The new Veeam Plug-in can orchestrate application-consistent storage snapshots without the need of any agent installed within the VMs.

In most IT organizations, backups are ideally scheduled during off hours, when they will not affect performance. However, the realities of today’s business demands on IT infrastructures make this approach obsolete. In today’s world, “off hours” don’t exist anymore. Constant use of available compute and infrastructure resources is key to better return on investment, so low usage windows are harder to find. More importantly, backups need to be taken more often to protect against data loss. Where one backup a night was acceptable in the past, the continuous creation of recovery points to meet higher recovery point objectives is now the common service level demand.
Veeam snapshot orchestration helps you address this need with a mix of frequent crash-consistent and application-consistent snapshots.

Unlimited recovery options from storage snapshot

With Veeam Explorer for Storage Snapshots, you can use either a Veeam-orchestrated snapshot or any other existing Huawei storage snapshot to recover full VMs or single items, in whichever way is most efficient for the recovery situation.

Veeam integration brings new levels of flexibility for recovery with Huawei OceanStor storage. Full VM recovery is simple and quick, but more often than not, simple item recovery within a VM is the recovery use case.
Veeam Explorer for Storage Snapshots gives IT teams recovery of individual items without requiring the normal time and resource consuming process of re-provisioning a VM. This item-level recovery is supported for Microsoft Exchange, SQL, Active Directory and SharePoint objects through a simple Windows Explorer-like interface. Veeam allows data, files, emails and more to be pulled from backups and into the production environment with a few clicks. Recovering Oracle databases out of a storage snapshot is also supported.

Automated DR/recovery verification and DataLabs

The new Veeam Plug-in for Huawei also brings automation to one of the biggest pain points and inconsistencies most organizations struggle to address in their backup and recovery operations. That is the fact that most IT backup administrators can never really be sure they can recover from the restore points in a disaster or when data is lost.

With Veeam and Huawei integration, Veeam On-Demand Sandbox for Storage Snapshots automates the process of creating a completely isolated copy of your production environment, verifies the viability from the snapshot, reports that status, and then deprovisions the environment. Veeam builds the DataLab from recent storage snapshots created by Huawei or any other third-party software and runs through a complete routine to verify VM boot, network connections, and application function. When finished, Veeam then reports the test results via email or through enhanced reporting found in Veeam Availability Suite.
Creating what Veeam calls a DataLab addresses two core needs in modern IT infrastructures. First, it addresses the need for verified recoverability to meet regulatory and operational requirements, not to mention peace of mind for the IT team, knowing they are prepared for disaster recovery when called upon.
Second, DataLabs are extremely valuable to address the needs of your development teams, or any others that constantly require a dedicated lab environment with real-world data for the purpose of new solution development, upgrade and deployment testing, as well as risk assessment and mitigation planning.

Veeam and Huawei OceanStor are better together

Huawei is the latest storage provider to partner with Veeam for more efficient data management, more efficient backup, and faster recovery. Regardless of whether you want to speed up your Veeam backups or if you want to use storage snapshots next to real backups to lower your RTPO, with the newly released Veeam Storage Plug-in for Huawei OceanStor, you are on the right track and ready for the future.


The post Veeam Intelligent Data Management for Huawei OceanStor appeared first on Veeam Software Official Blog.



VBO365 Backup: The Quick “How To” Guide


CloudOasis  /  HalYaman

Veeam Backup for Office 365 version 2 was released last week. In this blog post, following the Sydney VeeamON event where these videos were presented, I’m pleased to share the demonstrations with you to help you quickly get started with a few basic topics.

Add Organization and Creating a backup Job:

Restoring a Mailbox:

Restore a OneDrive Item:

Restore Microsoft SharePoint:

Migrate an MS365 Mailbox to On-Premises Exchange:




3 steps to extend your archival options to Microsoft Azure Blob storage


Veeam Software Official Blog  /  Andrew Zhelezko

Long-term archival policies remain a consistent part of many enterprise infrastructures today. The need to maintain the 3-2-1 rule, meet corporate standards, and provide regulatory compliance keeps archival options firmly on the agenda. The convenience of virtual tape libraries (VTLs) lets enterprises extend their tape-based backups to virtual disks, or simply switch to newer operational methods without lengthy preparation, as staff are already familiar with the terminology. While Veeam provides native tape support for backing up to virtual tape libraries, it is now extending it with an option to leverage StarWind VTL for Microsoft Azure Blob Storage, enabling all users looking for cheap and reliable cloud storage to easily and securely store backups and files there.
With this integration, Veeam and StarWind customers can tier their backup data on site, maintaining one to two weeks of data on-premises, while moving longer-term archives directly to a more cost-effective Microsoft Azure Blob Storage. In this blog, I’ll be covering how to tackle the latter. For a more detailed look, you can view the webinar.
Before we rush into details, I’d like to mention that it will take very little effort from existing Veeam customers to accomplish the process. However, due to many available deployment scenarios (starting from Veeam Backup & Replication (VBR) and StarWind software installed on the same server on-premise and ending with VBR and StarWind being independently deployed on Azure VMs), please take a moment to think about traffic flow and the best configuration for your system before you proceed.
As for the configuration itself, I’d split it into three logical steps:

1. Azure preparation

Go to the Azure portal, find storage accounts, and add a new one (or repurpose an existing blob storage account). Make sure to provide all the required information and select blob storage as the account kind. Then, proceed to the newly created storage account and copy the storage account name and a key (settings –> access keys), and create a container that will be used for storing the data (blob service –> containers –> new). You’ll need this data later, when configuring cloud replication in StarWind VTL.
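The same preparation can be scripted with the Az PowerShell module; the resource group, account, region, and container names below are all placeholders:

```powershell
# Create a blob storage account, grab an access key, and create a container.
New-AzStorageAccount -ResourceGroupName "rg-vtl" -Name "vtlarchive01" `
    -Location "australiaeast" -SkuName Standard_LRS -Kind BlobStorage -AccessTier Cool

$key = (Get-AzStorageAccountKey -ResourceGroupName "rg-vtl" -Name "vtlarchive01")[0].Value
$ctx = New-AzStorageContext -StorageAccountName "vtlarchive01" -StorageAccountKey $key
New-AzStorageContainer -Name "coldcontainer" -Context $ctx
```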

Figure 1. Azure blob storage details

2. Configuring StarWind VTL

The purpose of such an action is to emulate a tape library setup on a desired server, so Veeam is going to send data to that library where it will be processed and ready for a cloud archival. A classic Disk to Disk to Cloud (D2D2C) scheme in action.
Get the latest StarWind VTL package and install it on any appropriate physical, virtual or cloud server, or even the Veeam server itself. During installation, make sure to select the “VTL and Cloud Replication” option so StarWind automatically deploys the corresponding components. Specify a convenient path for the storage pool, or leave the default on disk C. Then, from the StarWind management console, connect to the desired server (use localhost when setting up an all-in-one scenario) and add a virtual tape device (drive) with as many virtual tapes as you’d like. StarWind VTL emulates an actual HPE MSL8096 Tape Library, so all the principles of working with such a library apply here. Note: You might need to install the latest driver pack so that the server recognizes the tape library properly. Once that is done, the server can be pointed to the VTL using the standard Windows iSCSI tools (control panel –> administrative tools –> iSCSI initiator). Go to Discovery –> Discovery Portal to initialize the VTL, and then connect to it from the Targets tab.
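The Control Panel steps can also be done with the built-in Windows iSCSI cmdlets; the portal address below is a hypothetical VTL server IP:

```powershell
# Register the VTL server as a target portal, then connect to any discovered
# targets that are not yet connected.
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"
Get-IscsiTarget | Where-Object { -not $_.IsConnected } | Connect-IscsiTarget
```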

Figure 2. Connecting to discovered VTL target
Now enable cloud archival via the cloud replication functionality: simply select Microsoft Azure Cloud Storage in the first step, then specify the required Azure details from step #1, and finish the process by providing the desired retention settings.

Figure 3. StarWind. Setting up the retention settings

3. Veeam Backup & Replication

From a Veeam Backup & Replication perspective, you’ll need to add the server above as a tape server in the VBR console. For that, an IP/DNS address and appropriate credentials will be required. During the procedure, Veeam will install Transport and Tape Proxy services on the server and perform a tape library inventory if the option is specified. Once the tapes are detected and put into the Free Media Pool, it’s a good idea to create a dedicated Media Pool with some tapes, which will be used in the next step.
Now, Veeam is connected to VTL and can push the data there. Create a Backup to Tape or File to Tape Job, specify the backup scope (you’ll need some pre-created backups for the first option), and point the Job to a previously created Media Pool. Depending on the backup/file size, you’ll get the data effectively delivered to the VTL server.

Figure 4. Backup to Tape Job in action

Figure 5. StarWind management console. The tape in Slot 1 has received a backup.
Now you can switch to the VTL server and remove the tape from the slot if it wasn’t automatically exported upon the Veeam Job completion. Since in my case the cloud replication was scheduled to start immediately, I can already see the motion in progress.

Figure 6. Uploading to Azure Blob Storage in action
After a successful upload, the Cloud tab will get a blue check and I should be able to verify that by navigating to my Azure Blob Storage and seeing the actual files uploaded to this container.

Figure 7. “ColdContainer” with uploaded data
I can go ahead and manually change the access tier for any of those files right from the Azure portal.

Figure 8. Azure Access Tier change
As an alternative, I could tweak the StarWind settings or even use PowerShell to manipulate the access tier in an automated way.
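As a sketch of that automated route, using the Az PowerShell module (account, key, container, and blob names are all placeholders):

```powershell
# Move one uploaded file to the Archive access tier.
$ctx  = New-AzStorageContext -StorageAccountName "vtlarchive01" `
    -StorageAccountKey "<storage-account-key>"
$blob = Get-AzStorageBlob -Container "coldcontainer" -Blob "tape0001.vtl" -Context $ctx
$blob.ICloudBlob.SetStandardBlobTier("Archive")
```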

Restore VMs from VTLs in Azure

On the restore side, StarWind customers can initiate restores from Azure through their Veeam Backup & Replication console to recover the necessary files or VMs. Better yet, why not recover in Azure? Available in the Azure Marketplace, the virtual StarWind appliance, as well as Veeam Backup & Replication, can be installed in an Azure instance, and recoveries can be done from the archive storage directly into a new Azure virtual machine, accelerating your restore times and providing application portability across your backup infrastructure.
In addition, you can provide access to these newly restored VMs in Azure with Veeam PN (Powered Network), establishing a secure connection back to your HQ data center or wherever you need to provide access to these workloads.


Organizations are still using tape for a variety of reasons, but many want to take advantage of the cloud for their backups in order to maintain business continuity and unlock Availability. With Veeam Backup & Replication customers can leverage a seamless integration with StarWind to get their backups off site and into Microsoft Azure Blob Storage. To watch a full demo on the solution, you can watch this webinar.



Tips to backup & restore your SQL Server


Veeam Software Official Blog  /  Kirsten Stoner

Microsoft SQL Server is often one of the most critical applications in an organization, with too many uses to count. Due to its criticality, your SQL Server and its data should be thoroughly protected. Business operations rely on a core component like Microsoft SQL Server to manage databases and data. The importance of backing up this server and ensuring you have a recovery plan in place is tangible. People want consistent Availability of data. Any loss of critical application Availability can result in decreased productivity, lost sales, lost customer confidence and potentially loss of customers. Does your company have a recovery plan in place to protect its Microsoft SQL Server application Availability? Has this plan been thoroughly tested?
Microsoft SQL Server works on the backend of your critical applications, making it imperative to have a strategy set in place in case something happens to your server. Veeam specifically has tools to back up your SQL Server and restore it when needed. Veeam’s intuitive tool, Veeam Explorer for Microsoft SQL Server, is easy to use and doesn’t require you to be a database expert to quickly restore the database. This blog post aims to discuss using these tools and what Veeam can offer to help ensure your SQL Server databases are well protected and always available to your business.

The Basics

There are some things you should take note of when using Veeam to back up your Microsoft SQL Server. An important and easy way to ensure your backup is consistent is to check that application-aware processing is enabled for the backup job. Application-aware processing is Veeam’s proprietary technology based on the Microsoft Volume Shadow Copy Service. It quiesces the applications running on the virtual machine to create a consistent view of the data, so there are no unfinished database transactions when a backup is performed. The result is a transactionally consistent backup of a running VM, minimizing the potential for data loss.
Enabling application-aware processing is just the first step; you must also consider how you want to handle the transaction logs. Veeam offers different options for processing transaction logs: truncate logs, do not truncate logs, or back up logs periodically.

Figure 1: SQL Server Transaction logs Options

Figure 1 shows the “Backup logs periodically” option selected in this scenario. This option supports any database restore operation offered through Veeam Backup & Replication. In this case, Veeam will periodically transfer transaction logs to the backup repository and store them with the SQL Server VM backup, truncating logs on the original VM. Make sure you have set the recovery model for the required SQL Server database to full or bulk-logged.
If you decide not to truncate logs, Veeam will preserve them. This option puts control into the database administrator’s hands, allowing them to take care of the database logs. The other alternative is to truncate logs; this selection allows Veeam to perform a database restore to the state of the latest restore point. To read more about backing up transaction logs, check out this blog post.
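Checking and setting the recovery model can be done with a quick T-SQL statement through PowerShell; the instance and database names below are hypothetical:

```powershell
# Requires the SqlServer module. Set the recovery model to FULL so
# transaction-log backups (and truncation) are possible.
Invoke-Sqlcmd -ServerInstance "SQL01" `
    -Query "ALTER DATABASE [ProductionDB] SET RECOVERY FULL;"
```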

Data recovery

Veeam Explorer for Microsoft SQL Server delivers consistent application Availability through the different restore options it offers to you. These include the ability to restore a database to a specific point in time, restore a database to the same or different server, restore it back to its original location or export to a specified location. Other options include performing restores of multiple databases at once, the ability to perform a table-level recovery or running transaction log replay to perform quick point-in-time restores.

Figure 2: Veeam Explorer for Microsoft SQL Server

Recovery is the most important aspect of data Availability. SQL transaction log backup allows you to back up your transaction logs on a regular basis, meeting recovery point objectives (RPOs). This provides not only database recovery options, but also point-in-time database recovery. Transaction-level recovery saves you from a bad transaction such as a table drop or a mass delete of records. This functionality allows you to restore to a point in time right before the bad transaction occurred, for minimal data loss.

And it is available for FREE!

Veeam offers a variety of free products and Veeam Explorer for Microsoft SQL Server is one that is included in that bunch. If you are using Veeam Backup Free Edition already, you currently have this Explorer available to you. The free version allows you to view database information, export a database and export a database schema or data. If you’re interested in learning more about what you get with Veeam Backup Free Edition, be sure to download this HitchHikers Guide.


The post Tips to backup & restore your SQL Server appeared first on Veeam Software Official Blog.



Understanding Veeam Availability Orchestrator terminology


Veeam Software Official Blog  /  Melissa Palmer

When it comes to creating disaster recovery (DR) plans, Veeam Availability Orchestrator makes it easy to ensure your data is available when disaster strikes. Beyond creating what we call a Failover Plan in Veeam Availability Orchestrator, we also ensure that our DR plans are tested successfully on a regular basis, with the documentation to prove it. This documentation can also be used for compliance, auditing, and ensuring members of an organization know the state of the DR plan at all times.
You may be asking yourself, “What is a Failover Plan?” after reading the first paragraph of this post. Don’t worry, we are about to explore what they are, as well as other terms we commonly use when talking about Veeam Availability Orchestrator.
A Failover Plan is what is created in Veeam Availability Orchestrator to protect applications. The Failover Plan is central to an organization’s DR plan. The goal of the Failover Plan is to make failovers (and fail backs) as simple as possible. Within a Failover Plan, there are a number of Plan Components, which are added to the Failover Plan to meet the business’ requirements.

VM Groups contain the virtual machines (VMs) we are ensuring Hyper-Availability for in the event of a disaster. VM Groups are powered by VMware vSphere Tags: VMs are simply tagged in vCenter, and the vSphere Tag name appears in Veeam Availability Orchestrator as shown in the screenshot above. Plan Steps are the steps taken on the VMs during a failover. This includes a number of application verification steps available out of the box, covering applications such as Exchange, SQL, IIS, Domain Controllers, and DNS. Credentials for verifying the applications are also one of the plan components. In addition to these built-in application verification tests, Custom Steps can be added to the Plan Components, allowing organizations to leverage existing DR scripts.
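Tagging the VMs for a VM Group can be sketched with PowerCLI; the vCenter address, tag category, tag, and VM names below are hypothetical:

```powershell
# Create a tag category and tag, then assign the tag to a VM so it
# appears as a VM Group in Veeam Availability Orchestrator.
Connect-VIServer -Server "vcenter.corp.example.com"
New-TagCategory -Name "VAO-FailoverPlans" -Cardinality Single -EntityType VirtualMachine
New-Tag -Name "Exchange-Failover" -Category "VAO-FailoverPlans"
Get-VM -Name "EXCH01" | New-TagAssignment -Tag "Exchange-Failover"
```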
Template Jobs ensure data is backed up and kept available during a failover scenario. They are created in Veeam Backup & Replication and then added to a Failover Plan during creation. Another big component of a Failover Plan is a Virtual Lab, which we refer to as a Veeam Data Lab. Veeam Data Labs allow for an isolated copy of a production application to be created and tested against. When we are finished using this copy of the data, we simply delete it without ever having impacted or changed our actual production data. This allows for Virtual Lab Tests to be performed to prove recoverability, and the corresponding Test Execution Report to be generated.
We all know how difficult testing DR plans used to be. We would spend a few days locked in the data center, without even getting the applications running correctly. We would say it would get fixed “next time,” but we all know the truth was often that these broken DR plans were never fixed. Veeam Availability Orchestrator removes this overhead, and allows for quick and easy testing without impacting production. In the event a test fails, the Test Execution Report shows us exactly what went wrong so we can fix it.
Before we run a Virtual Lab Test, we first run a Readiness Check on our Failover Plan, and yes, this also comes with a Readiness Check Report so we can easily see the state of our DR plan. This is a lightweight test that is performed to ensure we are ready to failover at a moment’s notice. Best of all, this check can be scheduled to run daily, along with a Plan Definition Report. The Plan Definition Report shows us exactly what is in a Failover Plan, including the VMs in a VM Group and all of our Plan Steps. This report also shows any changes so we have a full audit trail of our DR plan.
As you can tell by this image, our Failover Plans are ready in the event of a failover. They are listed as a “verified” state which means we have successfully run a Virtual Lab Test and a Readiness Check, both of which can be scheduled to run as often as we would like. We can also ensure reports are sent to key stakeholders when the checks are run.

In the event of a failover, which we can trigger on demand or schedule, an Execution Report will be generated. This will detail the steps taken as part of the Failover Plan on the VMs in a VM Group, and show that the application has been successfully verified and is running in the DR site. We know the execution of a Failover Plan will be successful since we have already tested it successfully.
Now that you are ready to start speaking Availability, you can download a 30-day FREE trial of Veeam Availability Orchestrator and try it out. Make sure to check these tips and tricks to ensure a smooth first deployment.
The post Understanding Veeam Availability Orchestrator terminology appeared first on Veeam Software Official Blog.



Veeam Availability for Nutanix Enterprise Cloud now GA


vZilla  /  michaelcade

Veeam Availability for Nutanix Enterprise Cloud

This post will walk through the areas and demo of the newly released Veeam Availability for Nutanix AHV.

It is important to note that the screenshots used in this post are based on the beta version; there should be very little difference between the GA and beta editions.

Basic Overview

The Veeam Backup Proxy Appliance for AHV is used to authenticate with a Veeam Backup & Replication server to gain access to Veeam repositories. The appliance also provides a web interface where full VM restores and disk restores can take place.

The Veeam Backup & Replication server is a required component used in conjunction with the Veeam Backup Proxy Appliance for AHV. Its primary use case is to allow access to Veeam backup repositories, but it also provides granular-level recovery for both files and application items.

The Veeam repository (not including HPE StoreOnce or Dell EMC Data Domain in version 1) provides the ability to store Veeam backups from AHV in Veeam’s proprietary forward-incremental VBK format.

In my lab, I have the above configured as follows:

Nutanix AHV Console

I will first show below that we have our Nutanix Community Edition running in our lab based on the networking configuration shared above.

The opening screen will look like the one below; it’s an overview screen for the whole AHV environment. This is a nested Nutanix CE edition.

Select the drop-down near Home, select VMs, and then Table. All machines are then listed; most of the backups you will see throughout this demo are based on the Windows 2016 VM shown below.

Veeam backup proxy appliance

The opening web interface for the Veeam backup proxy appliance looks as shown below.

The opening screen is in the form of a dashboard interface giving you an overview of the AHV and Veeam environment.

The top navigation bar makes it very simple to find what you are looking for. This is where you can go back to the dashboard shown above, create and view backup jobs, and reach Protected VMs, where the recovery options can be performed.

Protected VMs tab: this is where the full VM and disk restore operations can be seen.

Event Log tab: all event logs are shown here. This is an extended view of the one seen on the opening dashboard.

Configuration settings: simply the ability to add your Veeam backup servers and Nutanix clusters. There is also the appliance settings tab, where you can configure how the configuration backup will take place. This is also where licensing is added.

Veeam Backup & Replication 9.5 update 3

The Veeam Backup & Replication console is required to perform granular and application-item-level recovery. It also enables additional availability options.

Connect and launch the VBR console. Under Backups you will see the “Nutanix Policy.”

Backup copy options: you can also send backups via a backup copy job to a Cloud Connect provider and to tape.

Restore Functionality

As mentioned, there are lots of restore scenarios available from the Veeam Backup & Replication console.

Restore guest files demo: you will have many restore points, based on the job configuration that has been set; as you click through the wizard, each step will list the available restore points.

Application Explorer demo: this is achieved from within the FLR explorer. Select the explorer you wish to demo; for demo purposes, the root contains AD, Exchange, SQL and file data, making it easy to show mounting those database files.

I am excited to see where this grows next. As a v1 product, it will remove many pain points for the Nutanix AHV administrators I have spoken to. I have been lucky enough to see this from the very early days, the feedback has been great, and we already have a great list of feature requests for v2.

The post Veeam Availability for Nutanix Enterprise Cloud now GA appeared first on vZilla.



Veeam Backup for Microsoft Office 365 v2: SharePoint and OneDrive support is here!

Veeam Software Official Blog  /  Niels Engelen

Microsoft Office 365 adoption is bigger than ever. When Veeam introduced Veeam Backup for Microsoft Office 365 in November 2016, it became an immense success, and Veeam has continued building on top of that. When we released version 1.5 in 2017, we added automation and scalability improvements, which proved a tremendous success for service providers and larger deployments. Today, Veeam is announcing v2, which takes our solution to a completely new level by adding support for Microsoft SharePoint and Microsoft OneDrive for Business. Download it right now!

Data protection for SharePoint

By adding support for SharePoint, Veeam extends its granular restore capabilities known from the Veeam Explorer for Microsoft SharePoint into Office 365. This allows you to restore individual items – documents, calendars, libraries and lists – as well as a complete SharePoint site when needed. With the new release, Veeam can also help you back up your data if you are still in the migration process and are still using Microsoft SharePoint on premises or running in a hybrid scenario.

Data protection for OneDrive for Business

The most requested feature was support for OneDrive for Business, as more and more companies are using it to share files, folders and OneNote books internally. With Veeam Explorer for Microsoft OneDrive for Business, you can granularly restore any item available in your OneDrive folder (including Microsoft OneNote notebooks). You have the option to perform an in-place restore, restore to another OneDrive user or another folder in OneDrive, or export files as an original or zip file; and if you get hit by a ransomware attack and your complete OneDrive folder gets encrypted, Veeam can perform a full restore as well.


Besides the introduction of new platform support, there are also several enhancements added.

Major ease-of-use and backup flexibility improvements come with a newly redesigned job wizard for easier, more flexible selection of Exchange Online, OneDrive for Business and SharePoint Online objects, making it easier than ever to set up, search and maintain visibility into your Office 365 data. You can granularly search, scale and manage backup jobs for tens of thousands of Office 365 users!

Restore data located in Microsoft Teams! You can protect Microsoft Teams when the underlying storage of the Teams data is within SharePoint Online, Exchange Online or OneDrive for Business. While data can be protected and restored, the Teams tabs and channels cannot. After restoring the item, it can however be reattached manually.

Compare items with Veeam Explorer for Microsoft Exchange. It is now possible to compare backed-up items with your production mailbox to see which properties are missing, and to restore only those properties without restoring the full item.

As with the 1.5 release, everything is also available for automation by leveraging either PowerShell or the RESTful API, which now fully supports OneDrive for Business and SharePoint.
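As a rough illustration of driving a backup job over the RESTful API, here is a minimal Python sketch. The host name, port and job ID are placeholders, and the exact endpoint paths and payloads are assumptions based on the documented token/jobs pattern; check the API reference for your deployment before relying on them.

```python
# Hypothetical sketch: building REST requests to authenticate against a
# Veeam Backup for Microsoft Office 365 server and start a backup job.
# BASE, the job ID and the endpoint shapes are assumptions, not verified
# against a live deployment.
import json
import urllib.parse
import urllib.request

BASE = "https://vbo-server.example.local:4443/v2"  # placeholder host/port

def token_request(username: str, password: str) -> urllib.request.Request:
    """Build the POST that exchanges credentials for a bearer token."""
    body = urllib.parse.urlencode({
        "grant_type": "password",
        "username": username,
        "password": password,
    }).encode()
    return urllib.request.Request(f"{BASE}/token", data=body, method="POST")

def start_job_request(job_id: str, token: str) -> urllib.request.Request:
    """Build the POST that asks the server to start a backup job by ID."""
    body = json.dumps({"start": None}).encode()
    req = urllib.request.Request(f"{BASE}/Jobs/{job_id}/Action",
                                 data=body, method="POST")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Content-Type", "application/json")
    return req
```

In practice you would send the first request with `urllib.request.urlopen`, read the access token from the JSON response, and pass it to the second call; the same pattern extends to the new OneDrive for Business and SharePoint job objects.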

Another enhancement is the possibility to change the GUI color as you like. This option made its way into Veeam Backup for Microsoft Office 365 after being introduced in Veeam Backup & Replication.

Starting with version 2, Veeam Backup for Microsoft Office 365 is now able to automatically check for updates, so you can rest assured you are always up to date.

And finally, the log collection wizard has been updated as it now allows you to collect logs for support in case you run into an issue, as well as configure extended logging for all components.

Community Edition introduced

Version 2 marks the release of Veeam Backup for Microsoft Office 365 Community Edition! This FREE product's functionality is identical to the paid version, but with the following limitations:

  • Maximum number of Exchange Online users: 10
  • Maximum number of OneDrive for Business users: 10 users associated with the same 10 Exchange Online users
  • Maximum amount of SharePoint data protected: 1 TB
  • Best effort support

See more

The post Veeam Backup for Microsoft Office 365 v2: SharePoint and OneDrive support is here! appeared first on Veeam Software Official Blog.
