First Look: On‑Demand Recovery with Cloud Tier and VMware Cloud on AWS

Since Veeam® Cloud Tier was released as part of Veeam Backup & Replication™ 9.5 Update 4, Anthony Spiteri, Senior Technologist, Product Strategy, has written a lot about how it works and what it offers in terms of offloading data from more expensive local storage to fundamentally cheaper remote Object Storage. As with most innovative technologies, dig a little deeper and new, sometimes unintended, use cases find their way to the surface. Read this blog from Anthony Spiteri to learn more.

Helping customers migrate from vSphere 5.5

What happens when customers are still running vSphere 5.5 in production? Matt Lloyd and Shawn Lieu sat down with VMware solution architect Chris Morrow to discuss upgrading to vSphere 6.5 or 6.7, management changes in the newer releases, leveraging Veeam DataLabs™ for validation and more.

5 pro tips for recovering applications quickly and reliably


Veeam Software Official Blog  /  Sam Nicholls

Today’s apps consist of multiple component tiers with dependencies both inside the app and outside (on other applications). While separating these apps into logical operational layers yields many benefits, it also introduces a challenge: complexity.
Complexity is one of many enemies when it comes to recovery, especially as service level objectives (SLOs) and tolerances for downtime become increasingly aggressive. Traditional approaches to disaster recovery (DR), built on manual processes, just cannot cater to these ever-increasing expectations, ultimately exposing the business to greater risk and longer periods of costly downtime, not to mention potential infringement of compliance requirements.
Orchestration and automation are proven solutions for recovering entire apps more efficiently and quickly, mitigating the impact of an outage which, as Agent Smith puts it, is inevitable.

I wanted to share with you some tips, tricks, best practices and some of the tools that will help you be more successful and efficient when building and executing a DR plan that will recover entire apps quickly and reliably.

Discover and understand applications

After the intro to this blog, it almost needn't be said here, but it's critical to take an application-centric mindset to DR and plan based on business logic, not just VMs. Involve application owners and management to understand the app and build your plans accordingly. Which VMs make it work? Are they tiered? Do those VMs need to be powered on in a specific sequence, or can they be powered on simultaneously? How can I confirm that the plan works and the app is running? These are just a few questions you need to ask to get started. Understanding the answers, documenting them and building your strategy based on business logic is essential to successful DR planning.

Utilize vSphere tags

Once you understand those apps, utilize VMware vSphere tags to categorize them for a more policy-driven DR practice. This is critical, as changes to apps that have not been identified and implemented into a plan are among the more common causes of DR plan failure. For example, when using tags and Veeam Availability Orchestrator, as an app changes (such as a VM being added), those VMs are automatically imported into orchestration plans and dynamically grouped based on the metadata in the assigned tag. The recoverability of this new app configuration will then be determined when the scheduled automated test executes, with the outcome subsequently documented. If it succeeds, you can be confident that the app is still recoverable despite the changes. If it fails, you have the actionable insights you need to proactively remediate the failure before a real-world event.
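If you are assigning tags by hand today, a minimal PowerCLI sketch like the following shows the general idea; the category, tag and VM names are hypothetical and would need to match your own naming scheme.

# Create a tag category and a tag for the application (run once)
New-TagCategory -Name "DR-Application" -Cardinality Single -EntityType VirtualMachine
New-Tag -Name "App-CRM" -Category "DR-Application"

# Assign the tag to every VM that belongs to the application
Get-VM -Name "crm-db-01", "crm-app-01", "crm-web-01" |
    ForEach-Object { New-TagAssignment -Tag "App-CRM" -Entity $_ }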

Optimize recovery sequences

Recovering the multiple VMs that make up an app is traditionally a manual, one-by-one process, and that's a problem. Manual processes are inefficient, lengthy, error-prone and do not scale, especially for large apps or those that are tiered with dependencies where the VMs need to be powered on in a precise sequence. Utilizing orchestration eliminates the challenges of manual processes while optimizing recovery, even when a single plan contains multiple VMs that require differing recovery logic.
For example, consider a traditional three-tiered app where, in a recovery scenario, we must power on our database VMs, then the application VMs, and finally the web farm VMs. Some of these VMs can be recovered simultaneously, while others must be recovered sequentially due to the aforementioned dependencies, not to mention differing recovery steps for different types of VMs. Through orchestration, we can drastically streamline the recovery process while also greatly improving the time it takes to recover the application. Within the orchestration plan, we can ensure that each VM has the relevant plan steps applied (e.g. scripting), that certain groups of VMs are recovered simultaneously where applicable, and that there is next to no latency between plan steps, such as the sequential recovery of VMs. This gives the plan speed, and that's exactly what we want in a disaster.
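To make the sequencing concrete, here is a minimal PowerCLI sketch of what that three-tier logic looks like when scripted by hand; the vCenter address and VM names are hypothetical, and a purpose-built orchestration engine adds verification, error handling and reporting on top of this basic flow.

Connect-VIServer -Server "dr-vcenter.local"                       # assumption: a vCenter at the DR site

# Tier 1: database VMs, strictly one after the other
foreach ($vm in "crm-db-01", "crm-db-02") {
    Start-VM -VM $vm -Confirm:$false | Wait-Tools                 # wait for VMware Tools before continuing
}

# Tier 2: application VMs in parallel, then wait for their heartbeat
Start-VM -VM "crm-app-01", "crm-app-02" -Confirm:$false | Wait-Tools

# Tier 3: web farm VMs can all start at once
Start-VM -VM "crm-web-01", "crm-web-02", "crm-web-03" -RunAsync -Confirm:$false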

To account for disasters that are more widespread, we could build multiple applications into a single orchestration plan, although this isn't necessarily advisable: in the event of an outage confined to a single app, we only want to fail over that app, not all of them. Instead, build out orchestration plans on a per-app basis, and then execute multiple orchestration plans simultaneously in the event of a more widespread outage. If the infrastructure at the DR site is sized and architected appropriately, you can test and optimize the recovery process for multiple apps and gauge how many orchestration plans your environment is capable of executing at once.

Leverage scripting

Building on the above, the ability to orchestrate and automate scripts will also help further reduce manual processes. Instead of manually accessing the VM console to check whether the VM or application has been successfully recovered, orchestration tools can execute that check for you, e.g. pinging the VM's network interface to check for a response, or checking VMware Tools for a heartbeat. While this is useful from a VM perspective, it doesn't confirm that the application is working. A tool like Veeam Availability Orchestrator features expansive scripting capabilities that go beyond VM-based checks. It not only ships with a number of out-of-the-box scripts for verifying apps like Exchange, SharePoint, SQL and IIS, but also allows you to import your own custom PowerShell scripts for ultimate flexibility when verifying other apps in your environment. No two applications are the same, and the more flexibility you have in scripting, the more precise and optimal you can be in recovering your apps.
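As a rough illustration of what such a custom verification script could look like, here is a minimal PowerShell sketch that checks an application health endpoint after failover. The URL is hypothetical, and the assumption that a non-zero exit code signals failure to the orchestrator should be validated against your tool's documentation.

param(
    [string]$Url = "http://crm-web-01.dr.local/health"   # hypothetical health endpoint exposed by the app
)
try {
    # Ask the recovered application tier for a response, not just the VM
    $response = Invoke-WebRequest -Uri $Url -UseBasicParsing -TimeoutSec 30
    if ($response.StatusCode -eq 200) {
        exit 0                                            # application answered: report success
    }
    exit 1
}
catch {
    exit 1                                                # no answer or an error: report failure
}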

Test often

Frequent and thorough testing is essential to success when facing a real-world DR event for multiple reasons:

  • When tests are successful, they deliver assurance and confidence that the business is resilient enough to withstand a DR event, verifying the recoverability and reliability of plans.
  • When tests fail, it enables the organization to proactively identify and remediate errors, and subsequently retest. It’s better to fail when things are OK than when they’re not.
  • Practice! Nobody is ever a pro the first time they do something. Simulating an outage and practicing recovery often develops skills and muscle memory to better overcome a real-world disaster.
  • Testing environments can also be utilized for other scenarios outside of DR, like testing patches and upgrades, DevOps, troubleshooting, analytics and more.

Guidelines on DR testing frequency vary from once a year to whenever there is an application change, but you can never test too often. Veeam Availability Orchestrator enables you to test whenever you like: tests are fully automated, available on a scheduled basis or on-demand, and have no impact on production systems or data.

Veeam can help

I’ve mentioned Veeam Availability Orchestrator a few times throughout this blog. It’s an extensible, enterprise-grade orchestration and automation tool that’s purpose-built to help you plan, prepare, test and execute your DR strategy quickly and reliably.
What’s more, with v2 we delivered full support for orchestrated restores from Veeam backups, an industry first. True DR is no longer just for the largest of organizations or the most mission-critical of apps. Veeam has democratized DR, making it available for all organizations, apps and data.
Whether you’re already a Veeam customer or not, I strongly recommend downloading the FREE 30-day trial. There are no limitations, it’s fully-featured and contains everything you need to get started. Have a play around, build your first plan for an application and test it. I almost guarantee you’ll learn something about your environment or plan that you didn’t know before.
The post 5 pro tips for recovering applications quickly and reliably appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/7lMWUop3i7w/application-disaster-recovery-tips-tricks.html

DR

vCD Tenant Backup Portal


CloudOasis  /  HalYaman

Veeam Enterprise Manager can be used by your VMware vCloud Director tenants to perform their own workload backup and restore. They can do this using the Veeam Self-Service Portal, and in this blog, I will show you how to give your tenants access to the Self-Service Portal without leaving the vCD portal.

Many Service Providers are impressed with the services offered by the Veeam Enterprise Manager Self-Service Portal, but many of them also ask if there is a way to easily give tenants access to the Self-Service Backup Portal without leaving vCD.
Out of the box, Veeam doesn’t provide such a capability, but with several hours of coding you can make this option possible. Don’t worry, I am not going to ask you to do any coding here, as I have already done that for you.
So let’s have a look at what was done to get this solution to work.
A month or so ago, I received a call from one of our Australian Vanguard members (http://stevenonofaro.com/) who wanted to discuss this requirement and collaborate on building such a vCD plug-in. Working with his local development team, he was able to demonstrate a workable solution based on the code at this link: GitHub sample code.
The solution he demonstrated was limited to a single Web application portal. We agreed to release a version of the code that can be used by any Service Provider and that can point to any Web application. I will share here what I came up with after three weeks of research, coding, errors, and more.

Solution

To provide a community release version, it was necessary to rebuild the code from scratch. This was done in two stages: first, a simple application asks the Service Provider for the Web Application URL (Veeam Enterprise Manager in this case) and generates the plugin as a .zip file; second, the Service Provider uploads that file to the vCD Provider portal.
Let’s take a look at the steps:

  • Download the Plugin Generator tool:
To get started, you must download this file, extract the .zip, and then change the extension of the extracted file to .exe so you can run it.
  • After renaming the file to .exe and running it, you are greeted with the tool shown below, where you must provide the Web Application URL.
Screen Shot 2019-06-28 at 8.41.37 pm
  • After entering the URL and pressing the Generate vCD Plugin Package button, a .zip file is generated. You upload this file to the vCD Provider portal.

vCD Provider Portal

To upload the vCD plugin, browse to your vCD admin portal using a URL like the following: https://vCDURL.domain/provider and then:

  • Browse to the Customize Portal option:
Screen Shot 2019-06-28 at 9.10.47 pm
  • Choose Upload, then select the .zip file. By default, the zip file is generated with Cloudoasis.zip as its file name.

Tenant Side

The tenant will access the link by selecting Backup Portal from the tenant menu:

Screen Shot 2019-06-28 at 9.16.25 pm

When selecting Backup Portal, the tenant is directed to the Web Application URL specified by the vCD plugin generator. In this example, the portal points to the Veeam Enterprise Manager Self-Service Backup Portal:

Screen Shot 2019-06-28 at 9.19.21 pm

Veeam Enterprise Manager Preparation

You must complete the following steps to allow the Veeam Enterprise Manager Self-Service Portal to integrate into the vCD:
On the Enterprise Manager server, open IIS Manager and select the VeeamBackup site. Then click on the HTTP Response Headers icon.

Screen Shot 2019-06-28 at 9.41.46 pm.png

At this point, you must update the X-Frame-Options header and enter your Enterprise Manager URL. It will look like this:
“allow-from https://<Enterprisemanagerurl>:9443/”
Screen Shot 2019-06-28 at 9.42.58 pm.png
Finally, restart IIS.
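For those who prefer to script this, here is a minimal PowerShell sketch of the same IIS change, run on the Enterprise Manager server; the site name and header value simply mirror the example above, so adjust them to your environment.

Import-Module WebAdministration

# Add the X-Frame-Options response header to the VeeamBackup site
Add-WebConfigurationProperty -PSPath 'IIS:\Sites\VeeamBackup' `
    -Filter 'system.webServer/httpProtocol/customHeaders' -Name '.' `
    -Value @{ name = 'X-Frame-Options'; value = 'allow-from https://<Enterprisemanagerurl>:9443/' }

# Restart IIS so the new header takes effect
iisreset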

Summary

With this small community tool, I hope to help the Service Provider community make it easy for their tenants to open the Veeam Enterprise Manager Self-Service Portal from within the vCD portal, without the need to open another browser or browser tab just to conduct backup and restore operations. Also, as you may have noticed, the plugin generator allows you to point to any Web application, not just Veeam Enterprise Manager, so long as you provide the correct URL and port.

The post vCD Tenant Backup Portal appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/06/30/self-service-portal-without-leaving-the-vcd-portal/

DR

Western Digital IntelliFlash Storage Plug-In Now Available


Veeam Software Official Blog  /  Rick Vanover

I am often asked to explain why we here at Veeam have implemented the storage plug-ins, and the answer is simple: to deliver solutions to customers faster! By decoupling storage integrations from product release cycles, our partners can help innovate and customers can benefit from a better backup experience and amazing recovery. The latest storage plug-in is now available for the Western Digital IntelliFlash array.
Frequently, people look at a Veeam storage plug-in for faster backups with the Backup from Storage Snapshots capability. But the plug-in brings much more.

These capabilities collectively make a very powerful option, not just to take backups more effectively but also to introduce some new recovery options. Let’s tour them here:

Veeam Explorer for Storage Snapshots

Veeam Explorers are usually associated with application-level restores, but Veeam Explorer for Storage Snapshots can perform those application restores and do more, all from an IntelliFlash storage snapshot! My favorite part of our storage integration is that it allows an existing storage snapshot to be used for a number of recovery scenarios. These include file-level recovery, a whole VM restore to a new location, Application Object Exports (Enterprise and higher editions can restore applications and databases to the original location) or even an individual application item.
The Veeam Explorer for Storage Snapshots will read the storage snapshot, then expose the snapshot’s contents to Veeam allowing the seven restore scenarios shown below:

Veeam Backup from Storage Snapshots

Leveraging the Western Digital snapshot integration brings real power to a backup infrastructure and makes the biggest difference to backup windows. I’ve always said that this integration allows organizations to take backups at any time. The data mover (a Veeam VMware backup proxy) does all of the work with the associated data from the storage snapshot rather than from a vSphere hypervisor snapshot. One good thing that comes along with the Veeam approach is that VMware Changed Block Tracking (CBT) is retained and application consistency is achieved. The figure below visualizes this compared to a non-integrated backup:

On-Demand Sandbox for Storage Snapshots

One of the best things that comes with the Western Digital IntelliFlash plug-in is the On-Demand Sandbox, and I encourage everyone to learn about this technology. It can be used to avoid production problems by easily producing a sandbox to test changes, updates, security configurations, upgrades and more. You can answer questions like “How long will it take?” and “What will happen?” and, most importantly, “Will it work?” This can save time by avoiding cancelled change requests caused by not knowing exactly what to expect from a change.

Primary Storage Snapshot Orchestration

This is a capability that many don’t know about with the Veeam storage plug-in. This capability allows users to create a Veeam job to generate a storage snapshot on the Western Digital IntelliFlash array. This should be in addition to a backup job that goes to a backup repository. This plug-in sets Western Digital IntelliFlash customers up to use Veeam Explorers for Storage Snapshots for high-speed recoveries.

Download the Western Digital IntelliFlash plug-in Now

The Western Digital IntelliFlash plug-in can now be downloaded and installed into Veeam Backup & Replication. Check it out now!
The post Western Digital IntelliFlash Storage Plug-In Now Available appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/_vYh6rrx96s/western-digital-intelliflash-storage-plug-in-now-available.html

Production Data Export


CloudOasis  /  HalYaman

Consider a situation where you want to extract a “point in time” data set from your production data for data analysis, migration or auditing. How can you easily accomplish this task? What options do you have to meet this requirement?

Sometimes your business requires a particular data set to be shared with others; perhaps you have to share it for development, data analysis, data sharing with remote offices, compliance, auditing and more.

To carry out your data extraction task, you must carefully plan your steps before exporting the data set. This caution is to prevent accidentally exposing your entire production data set to unauthorized personnel. Accidental data exposure is a familiar problem that we all want to avoid.

Now, what if I told you that you can safely extract your “point in time” data set in a few easy steps with Veeam Backup & Replication, using a new feature introduced in Update 4 called “Export Backup”? This feature allows you to export a selected restore point of a particular object in a backup job as a standalone full backup file (VBK), which can be kept under its own retention or kept indefinitely. To read more about this feature, follow this link: https://www.veeam.com/veeam_backup_9_5_whats_new_wn.pdf (page 8).

Scenario

Let’s consider the following scenario: as a backup admin, your manager has asked you to extract all the data from a restore point created three weeks ago, including all the incremental data up to the most recent point, and then share it with a data mining company that will use it to produce a report about the business performance.

Solution

As a backup admin who is using Veeam Backup & Replication to back up the production infrastructure, you can easily accomplish this request by following these steps:

From the Veeam Backup and Replication Console, select Home -> Backup, then select Export Backup.

Screen Shot 2019-07-26 at 11.22.07 am.png

At the Export Backup interface, choose the desired backup point by pressing on the Add icon, then select the Restore Point, and then select Add.

Screen Shot 2019-07-26 at 1.46.01 pm.png

By pressing the Point option, you can check the restore points together with all the restore point dependencies:

Screen Shot 2019-07-26 at 1.47.56 pm.png

In the same wizard, you can also specify the retention time of the export file before it is deleted. You can choose from one, two or three months, or from one, two, three or five years.

Screen Shot 2019-07-26 at 1.49.26 pm.png

When you have made your selections to set up your data point export, press Finish. A full backup file will be created and made available in the same repository as the source files.

Screen Shot 2019-07-26 at 1.53.22 pm.png

Note: This newly created export file contains the data from the full backup and the two incremental backups.

File Location and Copy

After the export has been set up and runs successfully, the .VBK file will be available in the same repository as the source files. It can be located from the Veeam console, where the file appears under Backups -> Disk (imported).

To copy the file from the repository, you can copy it manually or use a Veeam copy job (File option):

Screen Shot 2019-07-26 at 2.10.06 pm.png

Conclusion

In this blog post, I have shown how you can benefit from the new Export Backup function. As you have seen, with these easy steps you can encapsulate restore points into a single file. That single file can then be easily shared or shipped to other systems or operators for many business reasons. It can also be used as an offsite archive file, or for system migration if that is a requirement.

I hope this blog post helps you use this new feature and find other beneficial use cases for your business. Thank you, and please don’t hesitate to share this blog and your feedback with other readers.

The post Production Data Export appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/07/28/production-data-export/

DR

How to recover deleted emails in Office 365


Veeam Software Official Blog  /  Niels Engelen

Even though collaboration tools like Microsoft Teams and Slack are changing the workplace, email is still going strong and used for most of the communication within companies. Whether you are using Exchange on-premises, online or in a hybrid setup, it’s important to protect the data so you can recover in case of data loss. At Veeam, we have been talking about six reasons why you need Office 365 backups for quite some time, but the same applies if you are still using Exchange on-premises or are still in the migration process.

How to recover deleted emails from accidental deletion

The most common reason for data loss is accidentally deleted emails. Within Office 365 email there are two different types of deletion: soft delete and hard delete. If you perform a soft delete, you just hit the delete button on an email and it is moved to the Deleted Items folder (also called the recycle bin). In most cases, you can just go in there and recover deleted emails with a few clicks. However, if you perform a hard delete (Shift + Delete), the email is deleted directly and can’t be recovered from the Deleted Items folder. While the email becomes hidden, you can still recover it within the Deleted Item Retention threshold. This applies to both Exchange on-premises and Exchange Online. By default, this is set to 14 days, but it can be modified. But what if you need to retrieve deleted emails beyond these 14 days?
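As a hedged example of checking and extending that window, the following can be run from the Exchange Management Shell or an Exchange Online PowerShell session; the mailbox address is hypothetical, and 30 days is the maximum Exchange Online allows.

# Show the current deleted item retention window for a mailbox
Get-Mailbox -Identity "jsmith@contoso.com" | Select-Object RetainDeletedItemsFor

# Extend it from the default 14 days to 30 days
Set-Mailbox -Identity "jsmith@contoso.com" -RetainDeletedItemsFor 30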

Recover deleted emails using the Veeam Explorer

Veeam Explorer for Microsoft Exchange was introduced back in 2013 and is one of the most popular recovery features across the Veeam platform. While the first release only supported Exchange 2010, it now supports Exchange 2010 up to 2019, as well as hybrid deployments and Exchange Online. In addition to retrieving deleted emails, you can also recover contacts, litigation hold items, calendar items, notes, tasks, etc. You can even compare an entire production mailbox or folder with its associated backup mailbox or folder and only recover what is missing or deleted!

If you still have Exchange on-premises, you can use Veeam Backup & Replication or the Veeam Agent for Microsoft Windows to protect your server, allowing you to recover items (like deleted emails, meetings…) directly. Make sure you enable application-aware processing within the backup job so that Veeam Backup & Replication will detect the Exchange service. From the Backups node view, right-click on the corresponding Exchange server (or search for it at the top), go to Restore application items and select Microsoft Exchange mailbox items.

 

Are you already using Office 365 (or in hybrid mode during the migration process)? Just leverage Veeam Backup for Microsoft Office 365 to recover deleted emails or any other Exchange, SharePoint or OneDrive for Business item you want.

 

Once the Veeam Explorer has started, it will load the corresponding database and allow you to perform a restore.

 

If you know which item(s) you have lost, you can just navigate to the corresponding mailbox and folder to restore the deleted email. If you don’t know where it was at the point of deletion, you can leverage multiple eDiscovery options.

 

If we search, for example, for all emails from a specific person, it would look like this.

 

Once you’ve found the deleted email you need to restore, you can continue with the restore process. Veeam Explorer for Microsoft Exchange gives you multiple options to perform the restore.

  • Restore to the original location
  • Restore to another location
  • Export as MSG/PST file
  • Send as an attachment

In the example below we will restore the deleted email back to the original location.

 

After filling in the account credentials, the item will be restored and available in the mailbox.

Still in need of a backup?

Are you still in need of a backup of Office 365 Exchange Online? Start today and download Veeam Backup for Microsoft Office 365 v3, or try the Community Edition for up to 10 users and 1 TB of SharePoint data.

If you need a backup for Exchange on-premises, download Veeam Backup & Replication today. Veeam Explorer for Microsoft Exchange is even available in the Community Edition!

Still not convinced? Read an unbiased review of Veeam Backup for Microsoft Office 365 from TechGenix.

 

The post How to recover deleted emails in Office 365 appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/Xt4rmhnVX2I/recover-deleted-emails-guide-office-365.html

DR

Pro tips on getting the most from Veeam ONE’s monitoring and analytics


Veeam Software Official Blog  /  Vic Cleary

Few realize that Veeam ONE is one of our oldest products. Like every product at Veeam, it continues to evolve based on the needs of our customers and today serves as an important component of Veeam Availability Suite, delivering Monitoring & Analytics for Veeam Backup & Replication, as well as for virtual and physical environments.

Since 2008, Veeam ONE has amassed an impressive arsenal of capabilities through dozens of updates. This has allowed users to leverage the latest monitoring and reporting functionality for any new Veeam Backup & Replication feature and supported infrastructure, ensuring ALL eyes and ears remain fixed on the health state of critical infrastructures, datasets, backup activities, and now, even applications. The list of unique alarms and reports continues to grow with each new release. In fact, did you know that Veeam ONE includes 340 individual alarms and over 145 pre-built reports? That alone is 485 reasons why you should leverage the power of Veeam Availability Suite by tapping into the most helpful Veeam ONE alarms and reports for your business. Below, we’ll discuss a number of ways you can do that, and we’ll outline them in more detail during an informative Veeam ONE webinar on July 25th!

Our customers continue to pave the way to innovation

Part of our ongoing effort to enhance existing products is to collect user experience feedback, whether that data comes from customer surveys or from comments logged during technical support calls. Regardless of the source, we take customer needs seriously.

Recently, we asked a number of our customers what they liked most about Veeam ONE and they told us they relied heavily on alarms they receive on the state of their virtual and backup infrastructures. The alarms run in parallel with a handful of pre-built reports that provide periodic data to ensure backups and infrastructure are operating at peak performance. Due to the high quantity of unique alarms and reports that exist, it’s nearly impossible to be familiar with them all. However, we found that a handful of these tools are consistently used and relied upon by a majority of our users on a daily basis.

TOP Veeam ONE Alarms every business should be using

Veeam ONE alarms are essential in notifying you when part of your virtual, physical or backup environment is not working as designed or as it should be. Pre-built alarms can monitor everything from physical machines running Veeam Agents to VMware, Hyper-V, vCloud Director, Veeam Cloud Connect and other environments. They are designed to keep IT admins up to date on the operations and issues occurring in their environment in real time. They also help users quickly identify, troubleshoot, react to and even fix issues that may impact critical applications and business operations.

The truth is, every Veeam Availability Suite user should be leveraging Veeam ONE alarms to optimize their data center. As mentioned earlier, there are 340 unique alarms, many of which are disabled by default to avoid spamming users; however, they can be turned on individually with ease. And while we realize it’s impossible to be an expert on all 340, the following alarms are, at a minimum, a great place to start.

Guest Disk Space Alarm

The Guest Disk Space Alarm identifies when a guest machine is running out of disk space, enabling users to add resources to the machine before performance is negatively impacted. This is important because if guest disk space runs out, the machine may become inactive and inaccessible, which will undoubtedly increase the possibility of downtime. This alarm alerts you before the issue occurs, allowing you to proactively manage your environment.

Backup Job State Alarm

Another critical alarm is the Backup Job State alarm. This alarm is beneficial because it will notify you if one or more VMs have failed to back up successfully, allowing you to take immediate action to ensure all your data is secure.

Internal Veeam ONE alarms

In addition to alarms designed to alert you to critical issues with your environment, there are also internal Veeam ONE alarms that help ensure data is being collected properly. This specific “pack” of alarms is important to pay attention to in case Veeam ONE has failed to collect data properly. If this has happened, then there is a chance you are basing decisions on unreliable metrics. To protect against inaccuracies, monitoring internal alarms regularly can ensure Veeam ONE is performing as it should.

These are just two critical alarms and an alarm pack that every Veeam Availability Suite user should understand. During our webinar, we will discuss these and other important alarms, so be sure to register and tune in July 25th.

Reports to start using today

So, what about Veeam ONE’s 145 Pre-built Reports? Where do you start? Veeam ONE offers detailed reports including pre-built and customizable reports that are ready to be used for several critical processes. To start, Veeam ONE includes a number of backup-related reports that our power users mentioned as “must-haves” and “must-check regularly.”

Backup Infrastructure Assessment report

The Backup Infrastructure Assessment report evaluates how optimally your backup infrastructure is configured and offers a set of actions to improve efficiency. This is done by examining a set of recommended baseline settings and implementations, and then comparing these recommendations to your environment. By analyzing factors ranging from VM configuration and application-aware image processing to job performance and backup configuration, the report can identify problem areas to help you more effectively mitigate issues.

Job History report

The Job History report has also proven very useful because it provides all job-related data, statistics and performance data in a single package. As you can see from the screenshot below, the Job History report is a great resource for the details you need regarding your data protection jobs.

This report is useful because it provides information on all jobs that are running in your environment. It allows you to easily identify jobs that are taking the longest amount of time as well as the jobs transferring the most data. By providing you with all job-related data, it helps identify performance bottlenecks allowing you to reconfigure jobs as needed.

Not only do Veeam ONE reports analyze backup data, they can also show you how your entire virtual environment is running. A number of reports can additionally assist with capacity planning, managing VM growth in your environment as well as identifying over or undersized VMs. Another report worth highlighting that assists with capacity planning is the Host-failure Modeling report. This report provides resource recommendations to prevent future CPU and memory shortages. And since the Capacity Planning Report is one of our most popular reports, let’s take a closer look.

Capacity Planning report

Capacity Planning reports are available for VMware and Hyper-V, and they also assist with capacity planning for backup repositories. Specific reports that have proven especially beneficial for our customers are the Capacity Planning reports for VMware vSphere and Microsoft Hyper-V. In the screenshot below, you can see how this report identifies the number of days remaining before the level of resource utilization reaches a specified threshold. The report analyzes historical performance, builds trends and predicts when a defined threshold will be violated. This is very useful as it allows you to be proactive and avoid resource shortages.

Billing and Chargeback Reports

Veeam ONE also includes excellent reports for managing billing and chargeback. These are beneficial in understanding the costs of a virtual infrastructure by analyzing resource usage for CPU, memory, and storage. We’ll discuss this report and others in more detail during the webinar.

Building on Veeam ONE’s Legacy – Our latest capabilities

The second half of our webinar will call out a number of the new capabilities introduced with NEW Veeam Availability Suite 9.5 Update 4. We’ll highlight five new Veeam ONE features that introduce new technologies designed to simplify monitoring and reporting processes and deliver on a number of customer requests. You’ll learn more about our new era of Veeam Intelligent Automation, including Veeam Intelligent Diagnostics, to help quickly address known infrastructure-related issues, as well as new automated Remediation Actions that can be leveraged to automate specific activities. We’ll also discuss our new Heatmaps feature, new Business View groupings, and introduce our highly-anticipated application-level monitoring capabilities, plus more. We promise you won’t leave empty handed.

Register now to join us July 25th

We look forward to sharing these details with you July 25th. Register today!


Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/MkARcV17iJ8/monitoring-analytics-pro-tips.html

DR

How to create a Failover Cluster in Windows Server 2019


Veeam Software Official Blog  /  Hannes Kasparick


This article gives a short overview of how to create a Microsoft Windows Failover Cluster (WFC) with Windows Server 2019 or 2016. The result will be a two-node cluster with one shared disk and a cluster compute resource (computer object in Active Directory).

Preparation

It does not matter whether you use physical or virtual machines; just make sure your technology is suitable for Windows clusters. Before you start, make sure you meet the following prerequisites:
Two Windows Server 2019 machines with the latest updates installed. The machines need at least two network interfaces: one for production traffic and one for cluster traffic. In my example, there are three network interfaces (one additional for iSCSI traffic). I prefer static IP addresses, but you can also use DHCP.

Join both servers to your Microsoft Active Directory domain and make sure that both servers see the shared storage device available in disk management. Don’t bring the disk online yet.
The next step before we can really start is to add the “Failover clustering” feature (Server Manager > add roles and features).

Reboot your server if required. As an alternative, you can also use the following PowerShell command:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

After a successful installation, the Failover Cluster Manager appears in the start menu in the Windows Administrative Tools.
After you have installed the Failover Clustering feature, you can bring the shared disk online and format it on one of the servers. Don’t change anything on the second server; there, the disk stays offline.
After a refresh of the disk management, you can see something similar to this:
Server 1 Disk Management (disk status online)

Server 2 Disk Management (disk status offline)
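If you prefer PowerShell over the Disk Management console, a minimal sketch of the same preparation on the first node could look like this; the disk number is an assumption, so identify the shared LUN with Get-Disk first.

Get-Disk | Format-Table Number, FriendlyName, OperationalStatus, Size

Set-Disk -Number 1 -IsOffline $false            # bring the shared disk online on node 1 only
Initialize-Disk -Number 1 -PartitionStyle GPT   # skip if the disk is already initialized
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterData" -Confirm:$false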

Failover Cluster readiness check

Before we create the cluster, we need to make sure that everything is set up properly. Start the Failover Cluster Manager from the start menu, scroll down to the Management section and click Validate Configuration.

Select the two servers for validation.

Run all tests. There is also a description of which solutions Microsoft supports.

After you have made sure that every applicable test passed with the status “successful,” you can create the cluster by using the Create the cluster now using the validated nodes checkbox, or you can do that later. If you have errors or warnings, you can view the detailed report by clicking on View Report.
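As an alternative, the same readiness check can be started from PowerShell (the node names follow the example used later in this article):

Test-Cluster -Node SRV2019-WFC1, SRV2019-WFC2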

Create the cluster

If you choose to create the cluster by clicking on Create Cluster in the Failover Cluster Manager, you will be prompted again to select the cluster nodes. If you use the Create the cluster now using the validated nodes checkbox from the cluster validation wizard, then you will skip that step. The next relevant step is to create the Access Point for Administering the Cluster. This will be the virtual object that clients will communicate with later. It is a computer object in Active Directory.
The wizard asks for the Cluster Name and IP address configuration.

As a last step, confirm everything and wait for the cluster to be created.

The wizard adds the shared disk to the cluster automatically by default. If you have not configured it yet, it is also possible to do so afterwards.
As a result, you can see a new Active Directory computer object named WFC2019.

You can ping the new computer to check whether it is online (if you allow ping on the Windows firewall).

As an alternative, you can create the cluster also with PowerShell. The following command will also add all eligible storage automatically:

New-Cluster -Name WFC2019 -Node SRV2019-WFC1, SRV2019-WFC2 -StaticAddress 172.21.237.32

You can see the result in the Failover Cluster Manager in the Nodes and Storage > Disks sections.
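You can also verify the nodes and the clustered disk from PowerShell:

Get-ClusterNode -Cluster WFC2019
Get-ClusterResource -Cluster WFC2019 | Format-Table Name, ResourceType, OwnerGroup, State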

The picture shows that the disk is currently used as the quorum witness. As we want to use that disk for data, we need to configure the quorum manually. From the cluster context menu, choose More Actions > Configure Cluster Quorum Settings.

Here, we want to select the quorum witness manually.

Currently, the cluster is using the disk configured earlier as a disk witness. Alternative options are a file share witness or an Azure storage account as a cloud witness. We will use the file share witness in this example. There is a step-by-step how-to on the Microsoft website for the cloud witness. I always recommend configuring a quorum witness for proper operations, so the option of not configuring a witness at all is not really an option for production.

Just point to the path and finish the wizard.
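The file share witness can also be configured from PowerShell; the share path below is an assumption, and the share should live outside the cluster itself:

Set-ClusterQuorum -Cluster WFC2019 -NodeAndFileShareMajority "\\fileserver\WFC2019-witness"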

After that, the shared disk is available for use for data.

Congratulations, you have set up a Microsoft failover cluster with one shared disk.

Next steps and backup

One of the next steps would be to add a role to the cluster, which is out of the scope of this article. As soon as the cluster contains data, it is also time to think about backing up the cluster. Veeam Agent for Microsoft Windows can back up Windows failover clusters with shared disks. We also recommend doing “entire system” backups of the cluster, which also back up the operating systems of the cluster members. This helps to speed up the restore of a failed cluster node, as you don’t need to search for drivers, etc. in case of a restore.

The post How to create a Failover Cluster in Windows Server 2019 appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/RAHMs-UOU7I/windows-server-2019-failover-cluster.html

DR

3 steps to protect your SAP HANA database


Veeam Software Official Blog  /  Clemens Zerbe


Since the Veeam Plug-in for SAP HANA was introduced, I’m often asked about the installation and operation processes for it. Unfortunately, this cannot be covered in one blog, but I’ve often found the best place to start is the beginning. For this first blog, I will cover how to install and configure the plug-in, mainly as a step-by-step guide for Veeam administrators with little or no Linux and SAP HANA knowledge.
If you are an experienced SAP Basis administrator, you may still find information of value as you scan through. I have planned a series of blogs dedicated to SAP HANA and I encourage you to read each of them, as I will cover day-to-day operations and advanced topics such as system copies or recoveries without an SAP HANA catalog in the future.
Before I discuss the details of deploying and configuring the SAP-certified Veeam Plug-in for SAP HANA, I want to share with you pertinent details that may be helpful. This plug-in depends on SAP Backint for SAP HANA, an API that enables Veeam to connect the Veeam agent directly to the SAP HANA database. I feel compelled to point out that HANA handles its own backup catalog with its own retention and scheduling. Therefore, Veeam Backup & Replication simply takes the data (technically from data pipes) and stores it into Veeam repositories. During restore operations, SAP HANA tells Veeam Backup & Replication what data needs to be restored and Veeam delivers the data as appropriate. This approach is contrary to the typical Veeam agentless approach, and it is very important to understand the difference. While this may not be news to an experienced SAP Basis administrator, it’s worth sharing because for some of you this may be new and therefore helpful information.

In addition to the Backint API, the next important matter to discuss is that SAP HANA Backint only takes care of the database data, including full, differential, incremental and log backups and restores. I should point out that it is also important to take care of the underlying operating system (Red Hat or SUSE) and the SAP HANA installation and configuration files. More on this later.

The installation process

Though the installation process is one of the easiest things to do in my opinion, I urge you to keep in mind the prerequisites, including:

  • Veeam Backup & Replication 9.5 Update 4 (or 4a) already installed and a non-encrypted repository available with the appropriate access rights
  • DNS (forward & reverse) entries on your HANA system and your Veeam Backup & Replication repository system
  • HANA 2.0 SPS02 or newer installed on a x86 system
  • For the Veeam admin -> have your HANA buddy beside you
  • For the SAP Basis admin -> have your Veeam buddy beside you

The installation files can be found on the Veeam Backup & Replication iso.

Copy this RPM file to your SAP HANA system. You can perform this task with SCP, SFTP, or use any tool that you prefer. If you prefer a graphical interface, I recommend Filezilla or WinSCP.
Next, use a command line tool on the SAP HANA system. PuTTY or any other SSH client may be used for connecting and logging onto the system. For the installation process, you need to have sudo rights. Go to the folder with the RPM and run the installation.

sles15hana04:~ # cd /mnt/install/VBR2753/
sles15hana04:/mnt/install/VBR2753 # ll
total 18648
-rw-r--r-- 1 root root 19093281 Apr  9 13:26 VeeamPluginforSAPHANA-9.5.4.2753-1.x86_64.rpm
sles15hana04:/mnt/install/VBR2753 # rpm -ivh VeeamPluginforSAPHANA-9.5.4.2753-1.x86_64.rpm
Preparing...                         ################################# [100%]
Updating / installing...
   1:VeeamPluginforSAPHANA-9.5.4.2753-################################# [100%]
Run "SapBackintConfigTool --wizard" to configure the Veeam Plug-in for SAP HANA
sles15hana04:/mnt/install/VBR2753 #

Not tough at all, right? Veeam is easy! Keep in mind that for Veeam Backup & Replication 9.5 Update 4a, there is a performance patch for HANA available: https://www.veeam.com/kb2948

The configuration process

The installation process guides you to the next step. Run "SapBackintConfigTool --wizard" as the root user. Ready to do it? Remember to have your HANA and/or Veeam buddy at your side as you will need to work together for the next few steps.

sles15hana04:/mnt/install/VBR2753 # SapBackintConfigTool --wizard
Enter backup server name or IP address: w2k16vbr
Enter backup server port number 10006:
Enter username: Administrator
Enter password for Administrator:
Available backup repositories:
1. Default Backup Repository
2. SOBR1
3. w2k19repo_ext1
4. sles15repo_ext1
Enter repository number: 2
Configuration result:
SID DEV has been configured
sles15hana04:/mnt/install/VBR2753 #

The first question, about your Veeam Backup & Replication server, is obvious. The port number defaults to 10006 if you have not changed it in your environment. The username and password for the Veeam Backup & Replication server, along with the repository rights, need to be provided by the Veeam administrator. If you have already set the proper rights on your repositories, you should now see a list of repositories. Choose one and you are ready to move forward.
The final part, enabling SAP Backint via a soft link, is performed by the wizard. If you have already configured SAP Backint with other software, the wizard will tell you what to delete; remove it and rerun the wizard.
Congratulations! You’re done with the installation and configuration. Now it’s time for our first backup and some initial SAP HANA configurations.

The first backup

The easiest way to create your first backup is via SAP HANA Studio, as this is the most commonly used tool. But you can also use SAP HANA Cockpit, the DBA planner, or any external scheduler.
Start SAP HANA Studio and connect to your recently configured SAP HANA instance in SYSTEM DB mode. Remember, this is a good time to be close to your SAP Basis administrator.

Provide the SAP HANA credentials (you don’t need to be the SYSTEM user). You can create your own backup and restore user with backup and catalog rights; see the HANA admin guide for details.

If everything has been configured properly, you should see something similar to the screenshot below:

Double-click on SYSTEMDB@DEV (SYSTEM) and it will open an overview window. Keep this window in mind, as we will come back to it later for additional configuration details.

Open Backup Console via right-click.

Go directly to configuration and click on the blue arrow to expand the Backint settings. If you see Backint Agent pointing to /opt/Veeam/VeeamPluginforSAPHANA/acking -> you are only seconds away from your first backup.

It is important to emphasize that:

  • Veeam does not use any Backint Parameter files. Leave these fields empty, or if you were running something else before, delete the entries.
  • Log Backup Settings: This allows you to keep the logs on your filesystem or to use Backint to forward all new logs directly to Veeam Backup & Replication. We recommend backing them up via Backint, but please discuss this with your SAP Basis buddy. If you change this, do not forget to save your settings by clicking on the disk symbol.

Now the big moment…start your first backup!
Right-click on SYSTEMDB@SID and choose Back Up System Database (and afterwards, Tenant Database).

Make sure you choose Backint as the target. Click Next to see the summary, then click Finish.

You should see a screen like the one below.

Check the log file, then go back to your Backup System DB window and open the Backup Catalog. There should be an entry like this one for your first backup.

Now do the same for the tenant. Run the backup, check the logs and review the catalog.

In parallel, have you checked what happened in Veeam Backup & Replication? Under Jobs you will find a new one with the format “hostname SAP backint backup (repository name)“:

Under History, there is a new Plug-ins folder for all plug-ins (SAP HANA & Oracle RMAN). So you see, the plug-in is very database-centric. We will cover some optimizations for this later in this blog series.

Yeah! But you are right, it’s all about the restore. Let’s quickly test this as well.

Recover database

Now is a great time to be best friends with your SAP Basis buddy.

WARNING:
Do not do this on your own and always test your first restore on a test environment. Do not use any production database for the following steps!

We only want to recover the tenant database for now. SYSTEM databases only need to be recovered if there are errors, and I would recommend this only if advised to do so by SAP support.
Again, right-click on your SYSTEMDB@SID entry, Backup and Recovery -> Recover Tenant Database.

Choose the tenant to recover, click Next and choose Recover the database to its most recent state. I will cover the other options in a later blog of this series.

If you chose to back up your logs to Backint as well, change this entry to search in Backint.

Attention: the tenant will shut down now!

Select your backup and check Availability if you like; it should show green after the check.

Click Next after Availability is green. Click Next on Locate Log Backups.

Click Next and don’t forget to include logs on Backint. It’s important to reiterate that these options are database-centric and should be discussed with your SAP Basis administrator if you ever need to change something here. Clicking Next brings you to a summary, and Finish will start the restore process.

Finally, the restore will start and, when completed, you should see a result similar to this.

Congratulations! You have configured, backed up, and restored your first SAP HANA database with the SAP certified Veeam Plug-in for SAP HANA.
In the next blog, we will discuss additional optimization options in HANA, how to back up your file system with the HANA ini files, and how to protect your operating system for DR preparation. Additionally, we want to dive deeper into scheduling options, retention of backups, etc. In the meantime, if your business runs SAP HANA, I encourage you to check out this exciting feature from Veeam using this blog as a guide.
The post 3 steps to protect your SAP HANA database appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/d-aoo4GjGUU/sap-hana-plugin-installation-guide.html

DR

Windows Server 2019 support, expect no less from us!


Veeam Software Official Blog  /  Rick Vanover


Veeam has always taken platform support seriously, whether it’s the latest hypervisor, storage system or Windows operating system. Both Windows Server and Windows desktop operating systems have very broad support with Veeam. I recently did some work around Windows Server 2019, and I am happy to say that Veeam fully supports the newest Microsoft data center operating system.
One of the first things that we did here at Veeam was a webinar covering both what is new with Windows Server 2019 and how Veeam supports it. I was lucky to be joined by Nicolas Bonnet, a Veeam Vanguard and author of many books on Windows Server.
If any IT pro is getting ready to support a new platform, such as Windows Server 2019, the first steps are critical to introduce the new technology without surprises. The good news is that Veeam can help, and that was the spirit of the webinar and overall the way Veeam addresses new platforms.
One of the first places to start is Microsoft’s awesome 45-page “What’s new” document for Windows Server 2019. This is without a doubt the place to get the latest capabilities on what the new platform brings to your data center and workloads in the cloud running Windows Server.
There are scores of new capabilities in Windows Server 2019, so it’s hard to pick one or even a few favorites; but here are some that may help modernize your Windows server administration:

Storage Migration Service

I love hearing Ned Pyle from Microsoft talk about the storage technologies in Windows Server, and Storage Migration Service is one that can get older file server data to newer file server technologies (including in Azure).

Storage Spaces Direct

It’s incredible what has been going on with Windows Storage over the years! One of the latest capabilities is that ReFS volumes can now have deduplication and compression. There are increased scalability limits and additional management capabilities as well (Windows Admin Center in particular), along with other improvements across the operating system.
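As a small, hedged example of trying the new ReFS data deduplication, the following assumes a volume letter of D: and that the Data Deduplication feature is not yet installed:

Install-WindowsFeature -Name FS-Data-Deduplication

Enable-DedupVolume -Volume "D:" -UsageType Default   # ReFS volumes are supported starting with Windows Server 2019
Get-DedupStatus -Volume "D:"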
And finally, the smallest but rather awesome feature:

Windows Time Service

In Windows Server 2019, the Windows Time Service now offers true UTC-compliant leap second support; so it’s ready for any time service role in your organization.
Again, it’s hard to pick one or even a few features, but these are some that you may or may not have known about. The natural question that comes into play is: “How do I migrate to Windows Server 2019?” This is where Veeam can help you, and you may not have even known it.

Veeam DataLabs

The Veeam DataLab is a way you can test upgrades to Windows Server 2019 from previous versions of Windows Server. In fact, I wrote a whitepaper on this very topic last year.
Do you think the idea that your backup application can help you in your upgrade to Windows Server 2019 sounds crazy? It’s not; let me explain. The Veeam DataLab will allow you to take what you have backed up and provide an isolated environment to test in. The test can be many things: ensuring that backups are recoverable (what the technology was made for), testing Windows Updates, testing security configurations, testing scripted or automated configurations, testing application upgrades and more. One additional use case is testing an upgrade to Windows Server 2019.
In the Veeam DataLab, you can simulate the upgrade to the latest operating system based on what Veeam has backed up. The best part is that the changes will not interfere with the production system. This way you will have a complete understanding of what it will take to upgrade to Windows Server 2019 from an older Windows Server operating system: how long it will take, what changes need to happen, and so on. Further, if you need to put other components into a DataLab to simulate a multi-tiered application or communication with a domain controller, you can!
Here’s an example of a DataLab:

Conclusion

Windows Server 2019 and platform support in general are key priorities at Veeam. Our next update, Veeam Backup & Replication 9.5 Update 4b, will include support for Windows Server version 1903. This means you can run 1903 and have it backed up with Veeam or you can run Veeam Backup & Replication and its components on this version of Windows. Veeam is a safe bet for platform support as well as the ability to activate the data in your backups to do important migrations, such as to the latest version of Windows Server.
Have you used a DataLab to do an upgrade? If so, share your comment below. If not, give it a try!

The post Windows Server 2019 support, expect no less from us! appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/TSMFGvyI3I4/windows-server-2019-support.html

DR

Looking back on VeeamON 2019, and forward to what’s next with Rickatron

Looking back on VeeamON 2019, and forward to what’s next with Rickatron

Veeam Software Official Blog  /  Rick Vanover

This year’s VeeamON was epic, if I do say so myself. Our fifth event was in my opinion our best yet. In this post, I’ll recap a few key takeaways of the event and set the stage for what’s next for Veeam regarding events and product updates.

I wrote two specific recaps of the full days of the event, Day 1 and Day 2, which have good summaries of event specifics. Additionally, all of the general sessions and breakouts for those who attended are available for replay on the VeeamON recap site. The press releases are listed below:

We also made a short video that summarizes the event as well:

 

But what is there to focus on next? Short answer: A LOT!

I’ll start with the products. I’ve already been updated on the next version of some products we showcased at VeeamON, so we’ll be doing promotion for that at regional VeeamON Forum and Tour events as well as key industry events like VMworld and Microsoft Inspire. Veeam Availability Suite v10, which was previewed in the technology general session (you can replay it here, in the technology keynote), is going to be an EPIC release. What we previewed at VeeamON is just a teaser; trust me, there is a lot more goodness to come in this release. I keep saying with each release recently that it’s the biggest release ever, and I’m pretty sure I’ll do the same with this one.

Next, I have been working with global teams to prepare the regional VeeamON Forum and VeeamON Tour events. These are underway, starting in June and going into the fall worldwide. Local markets will promote them specifically, but pages are already up for Asia, India, France, Germany, Turkey, many EMEA countries for Tours, and more to come. I just completed a VeeamON Tour in Sweden and will do subsequent ones in China, Brazil and Argentina later this year, and it feels great to bring the Veeam update to more markets around the world. Additionally, the rest of the Veeam Product Strategy team will be doing many events around the world as well.

I am fortunate to have a role on the VeeamON team as the content manager for the breakouts. This is in addition to the 57 other things I do at the event, including the technology general session presentation, presenting breakouts, meeting with customers and partners as well as meeting with press and analysts. I took a much more data-driven approach to the breakout content this year: everything from working with the event management team to get the right-sized rooms, to the balance of content (much more technical and product-focused this year), speaker selection, partner interests and more. The end result is content that I feel embodies where Veeam is going and that, based on survey data, met and exceeded attendee expectations. Feel free to send me a note via email or Twitter for ideas on how to make VeeamON better and the best investment for your time to attend the event.

VeeamON 2020 will take the event back to Las Vegas, and I for one am happy with that. I just think Vegas works! But the bar is set rather high. The Miami event was epic, and for my role on the VeeamON team, I’m going to expand the team I have and double the effort to raise the bar. See you there!

The post Looking back on VeeamON 2019, and forward to what’s next with Rickatron appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/BGR8x1LGGGQ/veeamon-2019-recap-whats-next-with-rickatron.html

DR

What else can you do with Veeam Availability Orchestrator?

What else can you do with Veeam Availability Orchestrator?

Notes from MWhite  /  Michael White

Hello all,

I mention this stuff a lot and people are always surprised, so I thought I would share it here so that more of you can see it.

Veeam Availability Orchestrator (VAO) v2 is a DR orchestration tool that can use replicas or backups as the source for failovers in a crisis. As a professional DR tool, it allows you to test your failovers to make sure your applications recover as you would hope. It also has great documentation, mostly generated automatically, which is very helpful. But there are other things you can do with this very useful tool that are not necessarily obvious.

  • You can use a test failover to test out a few things, not just whether the app works, but also:
    • Application patching: if it works here, you know it will work in production, because you are working in a copy of production, not just a lab-built copy.
    • OS patching or updates: same here, you will know whether it works in production or not.
    • Security vulnerability scanning: if you break something, only the test copy is affected, and you can prepare for production by tweaking the scanning tool.
  • Plan, and test, a migration to a new data center. It is very good indeed to test your migration before it becomes permanent.
  • You can pull data into a test failover from physical machines if necessary, for example an AIX server running Oracle (a rough script sketch follows this list). Here is how it might look:
    • During a test failover, a PowerShell script executes and talks to AIX.
    • A new LPAR is deployed.
    • A virtual NIC connected to the private, non-routing test network is attached to the LPAR.
    • Another script runs that copies a subset of the Oracle database to the new LPAR.
    • Now the app in the test failover can find Oracle, which is important to support the test.
    • At the end of the test, another PowerShell script runs, talks to AIX and deletes the LPAR that holds the subset of the Oracle database. This is important to make sure no test data gets into the production Oracle.
    • I have seen this done with AIX, and I have had customers tell me they have done this with HP-UX.
  • You can protect physical machines.
    • Use a tool like VMware Converter to do lunchtime and nighttime conversions. Then, on a different schedule, the converted VM is replicated and protected. This means that if the physical machine is lost, a virtual version will be available, which is quite usable even if the app owner did not want to virtualize, since the alternative is total loss.
    • I am working on a blog about doing this as part of your backups. When I have it figured out and documented, I will share it.
  • One use I heard from a customer is doing organized and auditable restores for people. Since it is a DR activity there is a nice audit trail, and you can test it too.
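For the AIX and Oracle example above, the pre- and post-failover steps could be wired together roughly like the sketch below. Everything in it is an assumption for illustration only: the Posh-SSH module, the HMC host, the LPAR name and the copy script on the AIX side are placeholders, and the mksyscfg call is heavily simplified (a real one needs full LPAR attributes).

# Pre-test step (attached to the orchestration plan): build a scratch LPAR on the
# isolated test network and seed it with a subset of the Oracle database
Import-Module Posh-SSH
$hmc = New-SSHSession -ComputerName "hmc01.lab.local" -Credential (Get-Credential)
Invoke-SSHCommand -SessionId $hmc.SessionId -Command "mksyscfg -r lpar -m Power01 -i 'name=oracle-datalab,profile_name=default'"
Invoke-SSHCommand -SessionId $hmc.SessionId -Command "/scripts/clone_oracle_subset.sh oracle-datalab"

# Post-test step: remove the scratch LPAR so no test data can reach production Oracle
Invoke-SSHCommand -SessionId $hmc.SessionId -Command "rmsyscfg -r lpar -m Power01 -n oracle-datalab"
Remove-SSHSession -SessionId $hmc.SessionId | Out-Null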

So I hope that this all helps you understand that there is a lot that a DR orchestration tool like VAO can do over and above the typical DR stuff.

Let me know if you have questions or comments,

Michael

=== END ===

Original Article: https://notesfrommwhite.net/2019/06/19/what-else-can-you-do-with-veeam-availability-orchestrator/

DR

v10 Sneak peek: Cloud Tier Copy Mode

v10 Sneak peek: Cloud Tier Copy Mode

Veeam Software Official Blog  /  Anthony Spiteri

At VeeamON 2019, I joined members of the Product Strategy Team to do a number of live demos showing current and future products and features. While the first half focused on what had been released in 2019 so far, the second half focused on what is coming. One of those was the highly anticipated Copy Mode feature being added to the Cloud Tier in v10 of Veeam Backup & Replication.

As with the existing 9.5 Update 4 functionality, all restore operations are still possible for backup data that has been copied to the Capacity Tier. During the Technical Main Stage keynote, I demoed an Instant VM Recovery of a machine from a SOBR for which we had simulated the loss of all Performance Tier extents due to a disaster. As you can see in the demo below, the recovery was streamed directly off the Capacity Tier, which in this case was backed by Amazon S3.

Copy Mode feature

Copy Mode is an additional policy you can set against the Capacity Tier extent of the Scale-Out Backup Repository (SOBR), which is backed by an Object Storage repository. Once selected, any backup files that are created as part of any standard backup or backup copy job will be copied to the Capacity Tier. This fully satisfies the 3-2-1 rule of backup, which calls for three copies of your data, on two different media, with one copy off site.

This feature adds to the existing move functionality released as part of Veeam Backup & Replication 9.5 Update 4 earlier this year and allows the almost immediate copy of backup data up to an Object Storage Repository. Here’s a deep dive into the technology (and best practices!) that we presented on the main stage:

 

And a full demo only session to show off some of the new Copy Mode functionality coming in v10:

 

NOTE: Demos and screenshots are taken from a Tech Preview build and are subject to change.

For a recap of what is currently possible in Update 4, head to this link to check out all the Cloud Tier content I’ve created since launch.

Conclusion

Copy Mode is going to be a huge enhancement to the Cloud Tier in v10. I can see lots of great use cases for on-premises Veeam customers and for our Veeam Cloud & Service Provider partners, who should look to leverage this feature to offer new services for IaaS and Cloud Connect Backup data resiliency.

Stay tuned for more information around Cloud Tier Copy Mode as we get closer to the release of v10!

The post v10 Sneak peek: Cloud Tier Copy Mode appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/qIh8Lpj98ho/v10-sneak-peek-cloud-tier-copy-mode.html

DR

Improving failover speed in VAO

Improving failover speed in VAO

Notes from MWhite  /  Michael White

Hello there,

If you are using Veeam Availability Orchestrator (VAO), you may have noticed that we limit a VM group to starting 10 VMs simultaneously. If your plan has two VM groups, that means the first group starts 10 VMs at a time until there are no more to start, then the next VM group starts recovering 10 VMs at a time.

What happens if you want to recover faster? First, is it possible? If you have all-flash or otherwise very fast storage, and your vCenter has spare RAM and CPU, then it is.

Here is the process.

  • Do a test failover of the plan you wish to have proceed more quickly. If the DataLab is already running, the test failover time is fairly close to the actual failover time. Record the time.
  • Confirm your vCenter is healthy.  This is a good place to start.

  • Next you should increase the number of simultaneous starts.  It can be found when you edit your VM group.

  • The number you choose depends on your storage. If you have all-flash, for example, I would suggest trying 20 instead of 10.
  • Now with the lab group already running, do your test failover again.  Record the time.
  • Starting more VMs at once might impact vCenter, so make sure it is still healthy. Use the Past Day setting and see if there is a big spike where you did the test.
  • If vCenter is still healthy, determine how much time you saved.
  • If the time is not good enough, and you think you still have resources in vCenter, you can try increasing the number again.
  • Remember this impacts your storage and your vCenter strongly, so at some point you may need to back the number down. I think 10 is very safe for everyone, but if you have a good vCenter and fast storage you should be able to get it quite a bit higher. Maybe 30?

Another way to decrease failover time is how you design your plans. Multi-tier apps can make plans very slow if you use sequencing, but they can be much faster with a different design. I talk about that design in this article.

I hope that this article helps, but I am very open to questions and comments. And don’t forget all my VAO technical articles can be found via this tag.

BTW, I don’t have a decent work lab, only a small lab at home, so I could not show better screenshots. Sorry about that.

Michael

=== END ===

Original Article: https://notesfrommwhite.net/2019/06/26/improving-failover-speed-in-vao/

DR

NEW Veeam Powered Network v2

NEW Veeam Powered Network v2
NEW Veeam PN (Powered Network) v2 delivers 5x to 20x faster VPN throughput! In this latest version of Veeam PN, a FREE tool that simplifies creation and management of site-to-site and point-to-site VPN networks, improvements have been made to performance and security with the adoption of WireGuard. Read this blog to learn more about version two enhancements and visit the help center for technical details.

NEW Veeam Availability Orchestrator v2 is now available!

NEW Veeam Availability Orchestrator v2 is now available!
NEW Veeam® Availability Orchestrator v2 – an industry-first – delivers a comprehensive solution that provides DR orchestration from replicas and backups. DR is now democratized; attainable and affordable for all organizations, applications and data, not just the mission-critical workloads. Check out this blog to discover what else is new in version two, or start building your first DR plan today with a FREE 30-day download.

Top takeaways from VeeamON 2019

Top takeaways from VeeamON 2019
VeeamON 2019, our annual conference, turned Miami Beach green last month! 2,000 attendees learned how Veeam® is committed to supplying backup and recovery solutions that consistently deliver Cloud Data Management™ for users across the globe. Couldn’t make it to the event? Check out the keynote and breakout sessions, daily recap videos, the CUBE interviews, blog posts and latest announcements.

Veeam Availability Orchestrator v2.0 is now GA

Veeam Availability Orchestrator v2.0 is now GA

Notes from MWhite  /  Michael White


Hello all, I am very happy to say that what I have been working on for a long time, with many other people, has now reached general availability. It is a big release, and we are going to look at what is in it in this article.
Release Notes – https://www.veeam.com/veeam_orchestrator_2_0_release_notes_en_rn.pdf
Bits – https://www.veeam.com/availability-orchestrator-download.html
It is important to note there is no upgrade to 2.0. You will need to install it separately and recreate your plans; there will be a KB article to help. Most importantly, remember to uninstall your VAO agents from your VBR servers so that new agents can be installed. The plan definition reports will provide the info for each plan you need to recreate.
We two product managers did not make the no-upgrade decision lightly. It was important, though, as it allows some architecture decisions that will be good for customers in the future. One such decision is to remove the need for production VAO servers; they are not required or useful any longer.
What are some of the new features – the key ones?
Scopes
Scopes let you have resources, reporting and people that are separate and unseen by other users. It is a bit like multi-tenancy, but more about allowing offices or groups to have separate plans.

Above you can see both a default and HR scope.  Here you can say who belongs to what.

Above you can see the reporting options that can be defined by each scope.

Above are VM groups; you can use the indicated button to assign them to different scopes. The same is true for other resources, such as DataLabs.
Backups
In v1 you could use replicas to create your plans, but in v2 you can use replicas OR backups to create your plans.  It is almost identical no matter which you choose but there are some small differences.

In our orchestration plans they look identical except for what we can see above: Failover for replicas and Restore for backups.

As well, when you look at the individual steps you will notice one that is Restore VM rather than Process Replica, as you can see above.
BTW, recoveries using backups can go to the original or new locations. Re-IP is possible as well.
If you design your VBR servers well, the recovery of a backup is surprisingly fast!
VM Console
This feature will allow you, from within VAO, to access a VM’s console. It uses the VMware VMRC HTML5 functionality to do it.
The plan needs to be set to stay running for some time; if it is, you will see an option like the one below.

You highlight a VM, select that button and you end up in the VMware HTML5 VMRC, where you will need to log in. Very handy for things like checking whether a complex app has started and is usable.
General
There are a lot of little things, like polish in the UI. Mostly subtle, but nice. One that is not so subtle is that we have added RTO and RPO to the plans and reports. This means a readiness check will show a warning if the defined RTO/RPO is not achievable.

Another thing is that production VAO servers are no longer required or useful. There are only DR VAO servers.
Summary
A big release, and with an interesting range of new stuff.  You can watch my blog using this tag – https://notesfrommwhite.net/tag/vao_tech for new articles.
Michael
=== END ===

Original Article: https://notesfrommwhite.net/2019/05/21/veeam-availability-orchestrator-v2-0-is-now-ga/

DR

Why we chose WireGuard for Veeam PN v2

Why we chose WireGuard for Veeam PN v2

Veeam Software Official Blog  /  Anthony Spiteri

The challenge with OpenVPN

In 2017, Veeam PN was released as part of Veeam Recovery to Microsoft Azure. Its use case went beyond extending Azure networks for workload recoverability: it was quickly adopted by IT enthusiasts for remote connectivity to home labs and for connecting remote networks spread across cloud and on-premises platforms.
Our development roadmap for Veeam PN had been locked in; however, we found that customers wanted more. They wanted to use it for data protection, with Veeam Backup & Replication, to move data between sites. When moving backup data, utilizing the underlying physical connections to their maximum is critical. With OpenVPN, our R&D found that it couldn’t scale and perform to expectation no matter what combination of CPU and other resources we threw at it.

Veeam Powered Network v2 featuring WireGuard

We strongly believe that WireGuard is the future of VPNs, with significant advantages over more established protocols like OpenVPN and IPsec. WireGuard is more scalable and has proven to outperform OpenVPN in terms of throughput. This is why we made the tough call to rip out OpenVPN and replace it with WireGuard for site-to-site VPNs. For the Veeam PN developers, this meant replacing existing source code, and it meant that existing users of Veeam PN would not be able to perform an in-place upgrade.

Beyond our own belief in WireGuard, we also looked at it as the protocol of choice due to its rise in the open source world as a new standard in VPN technologies. WireGuard offers a higher degree of security through enhanced cryptography that operates more efficiently, leading to increased performance and security. It achieves this by working in the kernel and by using far fewer lines of code (roughly 4,000 compared to 600,000 in OpenVPN), and it offers greater reliability when connecting hundreds of sites, again thinking about performance and scalability for backup and replication use cases.
Support for WireGuard becoming a de facto standard in VPNs was voiced by Linus Torvalds:

“Can I just once again state my love for [WireGuard] and hope it gets merged soon? Maybe the code isn’t perfect, but I’ve skimmed it, and compared to the horrors that are OpenVPN and IPSec, it’s a work of art.”
Linus Torvalds, on the Linux Kernel Mailing List

Increased security and performance

WireGuard’s security was also a factor in moving on from OpenVPN. Security is always a concern with any VPN, and WireGuard takes a simpler approach by relying on crypto versioning to deal with cryptographic attacks. In a nutshell, it is easier to move through versions of the cryptographic primitives than to negotiate cipher types and key lengths between client and server.
Because of this streamlined approach to encryption, in addition to the efficiency of the code, WireGuard can outperform OpenVPN. This means Veeam PN can sustain significantly higher throughput (testing has shown performance increases of 5x to 20x depending on CPU configuration), which opens the use cases up to more than just basic remote office or home lab use. Veeam PN can now be considered as a way to connect multiple sites together and sustain transfers of hundreds of Mb/s, which is perfect for data protection and disaster recovery scenarios.

Solving the UDP problem, easy configuration and point-to-site connectivity

One of the perceived limitations of WireGuard is the fact that it does all of its work over UDP, which can cause challenges when deploying into locked-down networks that by default trust TCP connections more than UDP. To remove this potential roadblock to adoption, our developers worked out a way to encapsulate the WireGuard UDP traffic over TCP (with minimal overhead) to give customers choice depending on their network security setup.
By incorporating WireGuard into an all-in-one appliance (or installable via a simple script on an existing Ubuntu Server), we have made the installation and configuration of complex VPNs simple and reliable. We have kept OpenVPN as the protocol for point-to-site connections for the moment due to the wider distribution of OpenVPN clients across platforms. Client access to the Veeam PN hub is done via OpenVPN, with site-to-site access handled by WireGuard.

Other enhancements

Core to what we wanted to achieve with Veeam PN is simplifying complexity, and we wanted to ensure this remained the case regardless of what was doing the heavy lifting underneath. The addition of WireGuard is easily the biggest enhancement since Veeam PN v1; however, there are several other enhancements, listed below:

  • DNS forwarding and configuring to resolve FQDNs in connected sites.
  • New deployment process report.
  • Microsoft Azure integration enhancements.
  • Easy manual product deployment.

Conclusion

Once again, the premise of Veeam PN is to offer Veeam customers a free tool that simplifies the traditionally complex process around the configuration, creation and management of site-to-site and point-to-site VPN networks. The addition of WireGuard as the site-to-site VPN platform will allow Veeam PN to go beyond the initial basic use cases and become an option for more business-critical applications due to the enhancements that WireGuard offers. Ultimately, we chose WireGuard because we believe it is the future of VPN protocols, and we are excited to bring it to our customers with Veeam PN v2.
Download it now!

Helpful resources:

The post Why we chose WireGuard for Veeam PN v2 appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/AMxHPVxAv3E/veeam-pn-v2-wireguard.html

DR

Veeam Availability for Nutanix AHV – Automated Deployment with Terraform

Veeam Availability for Nutanix AHV – Automated Deployment with Terraform

vZilla  /  michaelcade

Based on some work Anthony and I have been doing over the last 12-18 months around Terraform, Chef and automation of Veeam components, I wanted to extend this further within the Veeam Availability Platform and see what could be done with Veeam Availability for Nutanix AHV.
I have covered the capabilities the product brings, even in its v1 state, in several deep dives, but here I wanted to highlight the ability to automate the deployment of the Veeam Availability for Nutanix AHV proxy appliance. The idea was also validated when I spoke to a large customer that had invested heavily in Nutanix AHV across 200 of their sites in the US.
Having been a Veeam customer, they were very interested in exploring the new product, and its capabilities matched their requirements. But they then had the challenge of deploying the proxy appliance to all 200 clusters they have across the US. There were some angles around PowerShell for the configuration side, and that may be added later to what I have done here. I took the deployment process and created a Terraform script that deploys the image disk and a new virtual machine for the proxy with the correct system requirements.
This was the first challenge that validated why we needed a better option for automated deployment. The second was that the deployment process in v1 is, I think it is fair to say, a little off-putting, so being able to automate it really helps that initial deployment experience.
Here is the Terraform script. If you are not used to or aware of Terraform, I suggest you look it up; the possibilities with Terraform really change the way you think about infrastructure and software deployment.

One caveat to consider: as part of this deployment I placed the image file that I downloaded from veeam.com on my own web server. This provides a central location, and part of the script deploys the image from this shared, published resource. Again, the thought process here was that 200-site customer: providing a central location to pull the image disk from, without downloading it 200 times, once in each location.
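The Terraform configuration itself is not reproduced here, but for the 200-cluster scenario a small wrapper could drive it once per site. The sketch below is an assumption, not part of the actual script: it presumes a clusters.csv with Name and Endpoint columns and Terraform variables named prism_endpoint and image_url defined in the configuration.

# Hypothetical wrapper: apply the same Terraform configuration once per cluster,
# always pulling the proxy image from the central web server mentioned above
$imageUrl = "https://images.example.com/veeam-ahv-proxy.qcow2"
$clusters = Import-Csv ".\clusters.csv"   # assumed columns: Name, Endpoint

terraform init
foreach ($cluster in $clusters) {
    terraform workspace new $cluster.Name 2>$null   # ignore the error if it already exists
    terraform workspace select $cluster.Name
    terraform apply -auto-approve -var "prism_endpoint=$($cluster.Endpoint)" -var "image_url=$imageUrl"
}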
Once the script has run you will see the created machine in the Nutanix AHV management console.

There is a requirement for DHCP in the environment so that the appliance gets an IP and you can connect to it.

Now you have your Veeam Availability for Nutanix AHV proxy appliance deployed and you can run through the configuration steps. Next up I will be looking at how we can also automate that configuration step. However for now the configuration steps can be found here.
Thanks to Jon Kohler, the main contributor to the Nutanix Terraform provider, who also has some great demo YouTube videos worth watching. You will see that the baseline for the script I created is based on examples Jon uses in his demonstrations.

Original Article: https://vzilla.co.uk/vzilla-blog/veeam-availability-for-nutanix-ahv-automated-deployment-with-terraform

DR

Veeam Direct Restore from AWS EC2 Backup & Replication Server and Repository

Veeam Direct Restore from AWS EC2 Backup & Replication Server and Repository

vZilla  /  michaelcade


A couple of weeks back at Cloud Field Day, one of the questions asked by the delegates was whether the conversion process for Direct Restore to AWS would be faster if we stored the data within AWS as part of the backup process. For example, we could keep our on-premises operational restore window local to the production data, but have a backup copy job send a retention period to a completely separate Veeam Backup & Replication server and repository located within AWS.

I wanted to check and compare the speed and performance of this, but also, if we did see a dramatic increase in speed, at what cost that would come.
As you can see from the diagram above, we have our production environment on the left, and this is the control layer for the operational backups of our production workloads. For the purposes of the test, we are going to backup copy those backups to a secondary backup repository on a machine running in AWS. We then have a completely stand-alone system running within AWS that has Veeam Community Edition installed.
This Veeam Backup & Replication server within AWS is a Windows 2016 machine with the following specifications.

Region = us-east-2
T2.Large
8GB Memory
2 CPUs

On this machine we then installed Veeam Backup & Replication Community Edition; the Cloud Mobility functionality is fully available within the free Community Edition.

The installation doesn’t take too long.

As you can see in the below image, we have a full backup file. This is located within the repository in AWS. Next up, we need to import it into our AWS Veeam Backup & Replication server.

It’s really simple to import any Veeam backup. First, open the Veeam Backup & Replication console and connect to the Veeam Backup & Replication server; if you are using the console on the server itself, you can choose “localhost”.

On the top ribbon, select import backup.

Depending on where that backup file is located, you will need to select it from the drop-down. If it is not on an already managed Veeam server (i.e. separate from the Veeam Backup & Replication server), you will need to add it.


We can choose different Veeam file types; I changed the file type to VBK so I could see the full backup file in the directory.

If the backup file also contains a guest file system index, you can check the box to import this as well.

Next you will see the progress; this should not take long.

We are now in a position where we can see the contents of the backup, and from here we can begin our tests.

For us to test the direct restore to AWS functionality then we first need to add our AWS Cloud Credentials.

We can choose to add our Amazon AWS Access Key from the selection.


From your Amazon account you will need to provide your Access Key and Secret Key. It is extremely important not to share these externally; they need to be kept secure.

Notice how I am not going to show you my secret key.

You will then see your newly added cloud credentials.

Ok so at this stage we can right click on the machine we wish to directly restore to AWS.


The approach we take here is a very easy-to-use wizard: we choose our AWS cloud credentials from the drop-down, then choose the region and data center you wish to restore the machine into.

Next up we need to choose the name we wish to use for the restored machine, but also we can add specific AWS tags.


I am going to rename mine to differentiate later on.


We then need to define the AWS instance type we wish to use; you will need to make sure that the OS of the system you are sending is supported by AWS.

Next, we choose the Amazon VPC and Security Group we wish to use.

We can choose if we would like to use a Helper Proxy Appliance.

Finally, we are able to give a reason for the restore.

Summary of the process and outcome.

This is the simple and easy to use steps you would take to get any direct restore to AWS.
Let’s get back to the test now. We want to know whether having this machine’s backup located within AWS makes the conversion faster than having it on premises. This was the question asked during Cloud Field Day.
First of all, let’s take a look at the time it takes for the same machine from on-premises. The test is from our Columbus DC, which also happens to be an AWS DC; I will take this into account later on.
This is the screen grab for the process that took place from on premises to AWS us-east-2. The upload of data looks to have taken the exact same amount of time, so the proof will be in the importing of the VM and whether that is faster.
The on-premises system took 14:29 to import and 15:34 for the whole process to complete.

The connectivity for my “on-premises” location is as follows.


Now we want to run the same test using the imported backup file that we showed previously. As you can see, the combined total took 12:39 and the conversion took 11:33.
The machine we are restoring is 1.1 GB. You can see the upload speed was the same regardless of the location of the backup; it’s the conversion that is slightly improved.


We also ran a test from our lab to us-east-1 (Northern Virginia), and as you can see this was different again in terms of the import process.

I then wanted to test outside of our lab, and very helpfully Jorge was able to run a test from his home lab; as you can expect, his connectivity is not the same as an enterprise lab or data centre.
Jorge took the same machine and imported it to the Veeam Backup & Replication server in his home lab, with a much slower upload speed than we used previously. Here we were restoring to London, and the import process was faster still.


I will continue gathering data points around this, but the one clear finding at the moment is that for a small workload it really doesn’t make much difference to have the backup already stored within AWS. With a larger dataset, that import-process time saving could be considerable, but you have to weigh it against the cost of storing those backups in AWS and decide whether it is worth it.

Original Article: https://vzilla.co.uk/vzilla-blog/veeam-direct-restore-from-aws-ec2-backup-replication-server-and-repository

DR

VeeamON 2019: Day one recap

VeeamON 2019: Day one recap

Veeam Software Official Blog  /  Rick Vanover


VeeamON Miami is under way! On Monday, we hosted the VeeamON 2019 Welcome Reception, which was a great start to the VeeamON 2019 event, where we are welcoming a full house of customers and partners.
The first full day was also packed with key announcements, new Veeam technologies and an awesome agenda of breakouts for attendees. Here is a rundown of Tuesday’s news:

The general session also featured key perspectives from Veeam co-founder Ratmir Timashev on Veeam’s momentum, customer testimonials and some key focus on Microsoft. Veeam Vice President, Global Business & Corporate Development Carey Stanton welcomed Tad Brockway, corporate vice president for Azure Storage, Media, and Edge platform team at Microsoft.


Image via @anbakal on Twitter

We also had a very special general session focused on technology, both already existing and coming soon. In this session, Veeam Product Strategy and R&D gave a number of key overviews of the new Veeam Availability Orchestrator v2 general availability announcement, Veeam Availability Console and Veeam Backup for Microsoft Office 365. New capabilities were shown for Veeam Availability Suite, as well as new technologies for Microsoft Azure.


Image via @anbakal on Twitter

However, the key part of the event for attendees is the breakouts! This year, the breakout menu features technical topics making up 80% of the breakouts delivered by Veeam. Everything from best practices, worst practices, how-to tips and more has been covered. Today had presentations from Platinum Sponsors Cisco, NetApp, Microsoft Azure, ExaGrid and IBM Cloud. Here are two slides from Veeam presentations that I found compelling:

 “From the Architect’s Desk: Sizing of Veeam Backup & Replication, Proxies and Repositories”

This session was presented by Tim Smith, a solutions architect based in the US (Tim also runs the Tim’s Tech Thoughts blog and is on Twitter at: Tsmith_co). Here is one slide where Tim outlines the sizing of the Veeam backup server for 2,000 VMs with eight jobs (just as an example). This is important as sizing goes all the way through the environment: backup server, proxies, repositories, etc.

 “Let’s Manage Agents”

This session was presented by Dmitry Popov, senior analyst, product management, in charge of products including Veeam Agent for Microsoft Windows. Here is one slide where Dmitry shows a cool tip: unmanaged agents (agents running without a license in free mode) can be put into a protection group to get centralized management and a schedule:

For attendees of the event, you will be able to access the recording of these and all other sessions. More information will be sent as a follow-up email for the replay information.
Check out this recap video by our senior global technologists Anthony Spiteri and Michael Cade:

We will have more content tomorrow as well! I’ll be posting another blog with a recap from today’s event. For those of you at the event, be sure to use the hashtag #VeeamON to share your experiences!
The post VeeamON 2019: Day one recap appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/ehzxIPBeeTw/veeamon-2019-day-1-recap.html

DR

Fujitsu ETERNUS storage plug-in now available

Fujitsu ETERNUS storage plug-in now available

Veeam Software Official Blog  /  Rick Vanover


One of the things I love is how fast we can add storage integrations with the Veeam Universal Storage API. This framework was introduced with Veeam Backup & Replication 9.5 Update 3 and has allowed Veeam and our alliance partners to rapidly add new supported storage systems. Fujitsu Storage ETERNUS DX and AF are the newest systems to be supported with a Veeam plug-in.
This plug-in actually exposes many outstanding capabilities, much more than just the Backup from Storage Snapshots that seems to get all of the attention. This particular plug-in offers the following capabilities:

Let’s review these benefits for this storage plug-in.

Veeam Explorer for Storage Snapshots

This is my favorite capability, and it has a powerful set of options in our free Community Edition, with even more in the Enterprise and higher editions. It allows existing storage snapshots to be used for Veeam restores, such as a file-level recovery, a whole-VM restore to a new location, application object exports (Enterprise and higher editions can restore applications and databases to the original location) or even an individual application item. Veeam reads the storage snapshots on the Fujitsu ETERNUS and presents them for launching the restores, all seamlessly and without impacting the operation of primary storage resources. The diagram below shows how you can launch the 7 restore scenarios from a snapshot on the Fujitsu ETERNUS array:

Veeam Backup from Storage Snapshots

This capability that comes from the plug-in will really unleash the power of the storage integration when it comes to performing the backup. The backup from storage snapshots engine will allow the data mover (a Veeam VMware Backup Proxy) to do the heavy-lifting of a backup job from the storage snapshot versus from a conventional VMware vSphere snapshot. I like to say that this integration will allow organizations to take backups or replicas at any time of the day. All of the goodness you would expect is maintained, such as VMware Changed Block Tracking (CBT) and application consistency. This is all transparent with this integration. The diagram below shows the time sequence in relation to the I/O associated with the backup job:

On-Demand Sandbox for Storage Snapshots

Veeam has pioneered the “DataLabs” use case, and when using an integrated storage system like Fujitsu ETERNUS, you are truly unlocking endless opportunities. The On-Demand Sandbox for Storage Snapshots will take a storage snapshot and expose the VMs on it to an isolated virtual lab. This can be used for testing changes to critical applications, testing scripts, testing upgrades, verification of a result from an application and more.
One practical way the On-Demand Sandbox for Storage Snapshots can help organizations is by avoiding changes that don’t go as expected. Firing up an application in an on-demand sandbox, powered by the Fujitsu ETERNUS DX/AF array, will give you a good sense of the changes. You can answer questions like “How long will it take?” and “What will happen?” but most importantly “Will it work?” This can save time by avoiding cancelled change requests caused by not knowing exactly what to expect from a change.

Primary Storage Snapshot Orchestration

The plug-in for Fujitsu ETERNUS DX/AF storage will also allow you to make Veeam jobs that just create snapshots. These jobs will not produce a backup, but will produce a storage snapshot that can be used with Veeam Explorer for Storage Snapshots.

The Power of the Storage Snapshot! Harness it!

The Fujitsu ETERNUS DX/AF storage systems are the latest integrated array with Veeam. You can download the plug-in here.

The post Fujitsu ETERNUS storage plug-in now available appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/f2fb71I_tOs/fujitsu-eternus-storage-snapshot-integration.html

DR

What’s new in v3 of Veeam’s Office 365 backup

What’s new in v3 of Veeam’s Office 365 backup

Veeam Software Official Blog  /  Niels Engelen


It is no secret anymore: you need a backup for Microsoft Office 365! While Microsoft is responsible for the infrastructure and its availability, you are responsible for the data, as it is your data. And to fully protect it, you need a backup. It is the individual company’s responsibility to be in control of their data and meet the needs of compliance and legal requirements. In addition to having an extra copy of your data in case of accidental deletion, here are five more reasons WHY you need a backup.

With that quick overview out of the way, let’s dive straight into the new features.

Increased backup speeds from minutes to seconds

With the release of Veeam Backup for Microsoft Office 365 v2, Veeam added support for protecting SharePoint and OneDrive for Business data. Now with v3, we are improving the speed of SharePoint Online and OneDrive for Business incremental backups by integrating with the native Change API for Microsoft Office 365. This speeds up backup times by up to 30 times, which is a huge game changer! The feedback we have seen so far is amazing, and we are convinced you will see the difference as well.

Improved security with multi-factor authentication support

Multi-factor authentication is an extra layer of security with multiple verification methods for an Office 365 user account. As multi-factor authentication is the baseline security policy for Azure Active Directory and Office 365, Veeam Backup for Microsoft Office 365 v3 adds support for it. This allows Veeam Backup for Microsoft Office 365 v3 to connect to Office 365 securely by leveraging a custom application in Azure Active Directory, along with an MFA-enabled service account and its app password, to create secure backups.

From a restore point of view, this will also allow you to perform secure restores to Office 365.

Veeam Backup for Microsoft Office 365 v3 will still support basic authentication, however, using multi-factor authentication is advised.

Enhanced visibility

By adding Office 365 data protection reports, Veeam Backup for Microsoft Office 365 will allow you to identify unprotected Office 365 user mailboxes as well as manage license and storage usage. Three reports are available via the GUI (as well as PowerShell and RESTful API).
License Overview report gives insight into your license usage. It shows detailed information on licenses used for each protected user within the organization. As a Service Provider, you will be able to identify the top five tenants by license usage and bring license consumption under control.
Storage Consumption report shows how much storage is consumed by the repositories of the selected organization. It gives insight into the top-consuming repositories and assists you with the daily change rate and growth of your Office 365 backup data per repository.

Mailbox Protection report shows information on all protected and unprotected mailboxes helping you maintain visibility of all your business-critical Office 365 mailboxes. As a Service Provider, you will especially benefit from the flexibility of generating this report either for all tenant organizations in the scope or a selected tenant organization only.

Simplified management for larger environments

Microsoft’s Extensible Storage Engine has a file size limit of 64 TB per database. The workaround for this, for larger environments, was to create multiple repositories. Starting with v3, this limitation and the manual workaround are eliminated! Veeam’s storage repositories are intelligent enough to know when you are about to hit the file size limit and automatically scale out the repository, eliminating this issue. The extra databases are easy to identify by their numerical order, should you need them:

Flexible retention options

Before v3, the only available retention policy was based on item age, meaning Veeam Backup for Microsoft Office 365 backed up and stored the Office 365 data (Exchange, OneDrive and SharePoint items) that was created or modified within the defined retention period.
Item-level retention works similarly to a classic document archive:

  • First run: We collect ALL items that are younger (attribute used is the change date) than the chosen retention (importantly, this could mean that not ALL items are taken).
  • Following runs: We collect ALL items that have been created or modified (again, attribute used is the change date) since the previous run.
  • Retention processing: Happens at the chosen time interval and removes all items where the change date became older than the chosen retention.

This retention type is particularly useful when you want to make sure you don’t store content for longer than the required retention time, which can be important for legal reasons.
Starting with Veeam Backup for Microsoft Office 365 v3, you can also leverage a “snapshot-based” retention type option. Within the repository settings, v3 offers two options to choose from: Item-level retention (the existing retention approach) and Snapshot-based retention (new).
Snapshot-based retention works similarly to the image-level backups that many Veeam customers are used to:

  • First run: We collect ALL items no matter what the change date is. Thus, the first backup is an exact copy (snapshot) of an Exchange mailbox / OneDrive account / SharePoint site state as it looks at that point in time.
  • Following runs: We collect ALL new items that have been created or modified (attribute used here is the change date) since the previous run. Which means that the backup represents again an exact copy (snapshot) of the mailbox/site/folder state as it looks at that point in time.
  • Retention processing: During clean-up, we will remove all items belonging to snapshots of mailbox/site/folder that are older than the retention period.

Retention is a global setting per repository. Also note that once you set your retention option, you will not be able to change it.
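To make the difference concrete, here is a small illustration (not Veeam code, just made-up PowerShell objects) of how the two retention types decide what to keep:

$retention = (Get-Date).AddYears(-3)

# Item-level retention: each item is kept or removed based on its own change date
$allItems = @(
    [pscustomobject]@{ Name = "Old mail";    ChangeDate = (Get-Date).AddYears(-5) },
    [pscustomobject]@{ Name = "Recent mail"; ChangeDate = (Get-Date).AddMonths(-2) }
)
$allItems | Where-Object { $_.ChangeDate -ge $retention }     # only "Recent mail" survives

# Snapshot-based retention: whole restore points are kept or removed based on when
# the point was created; every item inside a kept point stays, regardless of its age
$allRestorePoints = @(
    [pscustomobject]@{ CreationTime = (Get-Date).AddYears(-4); Items = "Old mail", "Older mail" },
    [pscustomobject]@{ CreationTime = (Get-Date).AddDays(-7);  Items = "Old mail", "Recent mail" }
)
$allRestorePoints | Where-Object { $_.CreationTime -ge $retention } | ForEach-Object { $_.Items }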

Other enhancements

As Microsoft released new major versions for both Exchange and SharePoint, we have added support for Exchange and SharePoint 2019. We have made a change to the interface and now support internet proxies. This was already possible in previous versions by leveraging a change to the XML configuration, however, starting from Veeam Backup for Microsoft Office 365 v3, it is now an option within the GUI. As an extra, you can even configure an internet proxy per any of your Veeam Backup for Microsoft Office 365 remote proxies.  All of these new options are also available via PowerShell and the RESTful API for all the automation lovers out there.

On the point of license capabilities, we have added two new options as well:

  • Revoking an unneeded license is now available via PowerShell
  • Service Providers can gather license and repository information per tenant via PowerShell and the RESTful API and create custom reports

To keep a clean view on the Veeam Backup for Microsoft Office 365 console, Service Providers can now give organizations a custom name.

Based upon feature requests, starting with Veeam Backup for Microsoft Office 365 v3, it is possible to exclude or include specific OneDrive for Business folders per job. This feature is available via PowerShell or RESTful API. Go to the What’s New page for a full list of all the new capabilities in Veeam Backup for Microsoft Office 365.

Time to start testing?

There’s no better time than the present to get your hands on Office 365 backup. Download Veeam Backup for Microsoft Office 365 v3, or try Community Edition FREE forever for up to 10 users and 1 TB of SharePoint data.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/_liOANg0v5g/new-features-office-365-backup-v3.html

DR

How to enable MFA for Office 365

How to enable MFA for Office 365

Veeam Software Official Blog  /  Polina Vasileva


Starting from the recently released version 3, Veeam Backup for Microsoft Office 365 allows for retrieving your cloud data in a more secure way by leveraging modern authentication. For backup and restores, you can now use service accounts enabled for multi-factor authentication (MFA). In this article, you will learn how it works and how to set up things quickly.

How does it work?

For modern authentication in Office 365, Veeam Backup for Microsoft Office 365 leverages two different accounts: an Azure Active Directory custom application and a service account enabled for MFA. An application, which you must register in your Azure Active Directory portal in advance, will allow Veeam Backup for Microsoft Office 365 to access Microsoft Graph API and retrieve your Microsoft Office 365 organizations’ data. A service account will be used to connect to EWS and PowerShell services.
Correspondingly, when adding an organization to the Veeam Backup for Microsoft Office 365 scope, you will need to provide two sets of credentials: your Azure Active Directory application ID with either an application secret or an application certificate, and your service account name with its app password:

Can I disable all basic authentication protocols in my Office 365 organization?

While Veeam Backup for Microsoft Office 365 v3 fully supports modern authentication, it has to fill in the existing gaps in Office 365 API support by utilizing a few basic authentication protocols.
First, for Exchange Online PowerShell, the AllowBasicAuthPowershell protocol must be enabled for your Veeam service account in order to get the correct information on licensed users, users’ mailboxes, and so on. Note that it can be applied on a per-user basis and you don’t need to enable it for your entire organization but for Veeam accounts only, thus minimizing the footprint for a possible security breach.
Another Exchange Online PowerShell authentication protocol you need to pay attention to is the AllowBasicAuthWebServices. You can disable it within your Office 365 organization for all users — Veeam Backup for Microsoft Office 365 can make do without it. Note though, that in this case, you will need to use application certificate instead of application secret when adding your organization to Veeam Backup for Microsoft Office 365.
And last but not least, to be able to protect text, images, files, video, dynamic content and more added to your SharePoint Online modern site pages, Veeam Backup for Microsoft Office 365 requires LegacyAuthProtocolsEnabled to be set to $True. This basic authentication protocol takes effect for your entire SharePoint Online organization, but it is required to work with certain specific services, such as ASMX.
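For reference, these settings could be applied roughly as in the sketch below. It assumes an existing Exchange Online PowerShell session and the SharePoint Online management module; the policy name, service account and tenant URL are placeholders.

# Allow basic auth PowerShell for the Veeam service account only,
# by assigning it a dedicated authentication policy (run in Exchange Online PowerShell)
New-AuthenticationPolicy -Name "AllowBasicAuthPS-Veeam" -AllowBasicAuthPowershell
Set-User -Identity "veeam-svc@contoso.onmicrosoft.com" -AuthenticationPolicy "AllowBasicAuthPS-Veeam"

# SharePoint Online: protecting modern site pages requires legacy auth protocols,
# which is a tenant-wide setting
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"
Set-SPOTenant -LegacyAuthProtocolsEnabled $true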

How can I get my application ID, application secret and application certificate?

Application credentials, such as an application ID, application secret and application certificate, become available on the Office 365 Azure Active Directory portal upon registering a new application in the Azure Active Directory.
To register a new application, sign into the Microsoft 365 Admin Center with your Global Administrator, Application Administrator or Cloud Application Administrator account and go to the Azure Active Directory admin center. Select New application registration under the App registrations section:

Add the app name, select Web app/API application type, add a sign-on URL (this can be any custom URL) and click Create:

Your application ID is now available in the app settings, but there are a few more steps to take to complete your app configuration. Next, you need to grant your new application the required permissions. Select Settings on the application’s main registration page, go to Required permissions and click Add:

In the Select an API section, select Microsoft Graph:

Then click Select permissions and select Read all groups and Read directory data:

Note that if you want to use application certificate instead of application secret, you must additionally select the following API and corresponding permissions when registering a new application:

  • Microsoft Exchange Online API access with Use Exchange Web Services with full access to all mailboxes‘ permissions
  • Microsoft SharePoint Online API access with Have full control of all site collections permissions

To complete granting permissions, you need to grant administrator consent. Select your new app from the list in the App registrations (Preview) section, go to API Permissions and click Grant admin consent for <tenant name>. Click Yes to confirm granting permissions:

Now your app is all set and you can generate an application secret and/or application certificate. Both are managed on the same page. Select your app from the list in the App registrations (Preview) section, click Certificates & secrets and select New client secret to create a new application secret or select Upload certificate to add a new application certificate:

For the application secret, you will need to add a secret description and its expiration period. Once it’s created, copy its value, for example, to Notepad, as it won’t be displayed again:
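If you prefer scripting the registration, the same application and secret can also be created with the AzureAD PowerShell module, as in the hedged sketch below. The display name, identifier URI and secret lifetime are examples, and the Graph permissions plus admin consent still need to be granted as described above.

Import-Module AzureAD
Connect-AzureAD

# Register the application that Veeam Backup for Microsoft Office 365 will use
$app = New-AzureADApplication -DisplayName "VBO365-Backup" -Homepage "https://localhost" -IdentifierUris "https://vbo365-backup.example.com"

# Create the application secret; copy the returned value right away, it is not shown again
$secret = New-AzureADApplicationPasswordCredential -ObjectId $app.ObjectId -CustomKeyIdentifier "VBO365" -EndDate (Get-Date).AddYears(1)

"Application ID:     $($app.AppId)"
"Application secret: $($secret.Value)"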

How can I get my app password?

If you already have a user account enabled for MFA for Office 365 and granted with all the roles and permissions required by Veeam Backup for Microsoft Office 365, you can create a new app password the following way:

  • Sign into the Office 365 with this account and pass additional security verification. Go to user’s settings and click Your app settings:
  • You will be redirected to https://portal.office.com/account, where you need to navigate to Security & privacy and select Create and manage app passwords:
  • Create a new app password and copy it, for example, to Notepad. Note that the same app password can be used for multiple apps or a new unique app password can be created for each app.

What’s next?

Now you have all the credentials to start protecting your Office 365 data. When adding an Office 365 organization to the Veeam Backup for Microsoft Office 365 scope, make sure you select the correct deployment type (which is ‘Microsoft Office 365’) and the correct authentication method (which in our case is Modern authentication). Keep in mind that with v3, you can choose to use the same or different credentials for Exchange Online and SharePoint Online (together with OneDrive for Business). If you want to use separate custom applications for Exchange Online and SharePoint Online, don’t forget to register both in advance in a similar way as described in this article.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/qtrIGmLFGT8/setup-multi-factor-authentication-office-365.html

DR

How to limit egress costs within AWS and Azure

How to limit egress costs within AWS and Azure

Veeam Software Official Blog  /  Nicholas Serrecchia


With Update 4’s exciting new cloud features, there are settings within AWS and Azure that you should familiarize yourself with to help negate some of the egress traffic costs, as well as help with security.
Right now, let’s talk about the scenarios where:

  • You are backing up Azure/AWS instances, utilizing Veeam Backup & Replication with a Veeam Agent, while utilizing Capacity Tier all inside of AWS/Azure
  • You have a SOBR instance in AWS/Azure and utilize Capacity Tier
  • When N2WS backup and recovery/Veeam Availability for AWS performs a copy to Amazon S3
  • If Veeam is deployed within AWS/Azure and you perform a DR2EC2 without a proxy or DR2MA

In AWS, by default, all traffic written into S3 from a resource within a VPC, like an EC2 instance, faces egress costs in all the scenarios listed above. By default, when we archive data into S3, or do a disaster recovery to EC2 where Veeam uploads the virtual disk into S3 so AWS can convert it to Elastic Block Store (EBS) volumes (AWS VM Import), we face an egress charge per GB. There is the option to utilize a NAT gateway/instance, but again there is a price associated with that as well.
Thankfully, there is an option that you could enable, which is basically the “don’t charge me egress!” button. That feature is called VPC Endpoints for AWS and VNet Service Endpoints for Azure.

Limit AWS egress costs

As stated by AWS:
“A VPC Endpoint enables you to privately connect your VPC to supported AWS services and VPC Endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network”.
You simply enable a VPC Endpoint for the S3 service within that VPC, and you will no longer face egress costs when an EC2 instance transfers data into S3. This is because the EC2 instance no longer needs a public IP, internet gateway or NAT device to send data to S3.

Now that you have enabled the VPC Endpoint, I highly recommend that you create a bucket policy to specify which VPCs or external IP addresses can access the S3 bucket; a sketch of both steps follows.
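As an illustration only, here is a minimal sketch using the AWS Tools for PowerShell. The VPC, route table, endpoint and bucket identifiers are hypothetical placeholders, so adapt them to your environment:

# Sketch (assumptions: AWS.Tools.EC2 and AWS.Tools.S3 modules installed, credentials configured,
# and vpc-0abc1234, rtb-0abc1234, vpce-0abc1234 and veeam-capacity-tier are placeholder names)

# 1. Create a gateway VPC endpoint for S3 in the VPC that hosts the Veeam components
New-EC2VpcEndpoint -VpcId "vpc-0abc1234" `
                   -ServiceName "com.amazonaws.us-east-1.s3" `
                   -RouteTableId "rtb-0abc1234"

# 2. Deny access to the bucket unless the request arrives through that endpoint
$policy = @"
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyAccessOutsideVpcEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::veeam-capacity-tier", "arn:aws:s3:::veeam-capacity-tier/*"],
    "Condition": { "StringNotEquals": { "aws:sourceVpce": "vpce-0abc1234" } }
  }]
}
"@
Write-S3BucketPolicy -BucketName "veeam-capacity-tier" -Policy $policy

Be careful with a Deny-style policy like this one: make sure whatever you use to administer the bucket is still allowed, or you can lock yourself out.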

Limit Azure egress costs

Azure handles egress costs from its instances into Blob storage in the same manner AWS does, but the nomenclature is different: Azure uses VNets instead of VPCs, and it too has a feature that can be enabled at the VNet level: VNet Service Endpoints.
As stated by Microsoft Azure:
“Virtual Network (VNet) service endpoints extend your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network.”

With Azure, you can then set up a firewall within the storage account to limit internet access to that resource.
Again, this applies to instances hosted within a VNet or VPC talking to their respective object storage within the same region, not to on-premises traffic to an S3 bucket or Azure storage account. A sketch of the Azure configuration follows.
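For illustration, here is a minimal sketch using the Az PowerShell module; the resource group, VNet, subnet and storage account names are hypothetical placeholders:

# Sketch (assumptions: Az.Network and Az.Storage modules installed, Connect-AzAccount already run,
# and rg-backup / vnet-backup / subnet-repo / veeamcapacitytier are placeholder names)

# 1. Enable the Microsoft.Storage service endpoint on the subnet hosting the Veeam components
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "rg-backup" -Name "vnet-backup"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet-repo"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet-repo" `
    -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork

# 2. Restrict the storage account firewall to that subnet and deny everything else
Add-AzStorageAccountNetworkRule -ResourceGroupName "rg-backup" -Name "veeamcapacitytier" `
    -VirtualNetworkResourceId $subnet.Id
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "rg-backup" -Name "veeamcapacitytier" `
    -DefaultAction Deny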


The post How to limit egress costs within AWS and Azure appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/QmIGpnTArlU/limit-aws-azure-data-transfer-costs.html

DR

Veeam & NetApp – Backup for the NetApp Data Fabric

Veeam & NetApp – Backup for the NetApp Data Fabric

vZilla  /  michaelcade

ONTAP 9.5 went GA a few weeks back, and waking up this morning I saw the weekly digest from Anton Gostev mentioning support for ONTAP 9.5 with Veeam Backup & Replication 9.5 Update 4a. Note that this does not include support for synchronous SnapMirror, the new feature released with ONTAP 9.5.
I have been preaching the powers of NetApp & Veeam for a few years now, but if you have not seen the power of the integration between the two vendors then I am going to summarise everything in this post here today.

Explorer

Let’s start with the free stuff you can do with Veeam in a NetApp ONTAP environment. Veeam Community Edition (the free version of Veeam) can add an ONTAP controller to your Veeam environment and, regardless of whether Veeam created those native ONTAP snapshots or not, it can look inside them and perform recoveries for any VMware virtual machine contained in a snapshot. This could be a full virtual machine recovery or an Instant VM Recovery (see here). It could also be file-level recovery, or, if you want to get down to the application items, maybe a mail item or a SharePoint object, you can do that from the FREE version too!

Orchestrate

Natively you can create a NetApp Snapshot pretty much stress free; however, it is going to be crash consistent, and while some application servers will withstand that crash consistency on recovery, there are business-critical applications that require application consistency. Why not use Veeam to orchestrate your application-aware, application-consistent snapshots? This provides a much tighter Recovery Point Objective. It also makes leveraging ONTAP snapshots for these reasons more appealing, given that ONTAP 9.4 increased the limit of snapshots per volume from 255 to 1023. And with it being a snapshot, your Recovery Time Objective is going to be seconds!
Veeam can also orchestrate the SnapMirror or SnapVault snapshot data transfer.

Backup

Ok, sounds good so far. Now the icing on the cake, or maybe the buttercream in the middle of the cake. Snapshots are great, and I am not getting into the snapshots vs. backups debate; it’s 2019, and if you don’t get it or agree then you probably voted BREXIT and I am fine with that, but this is my time to shine, not yours.
The integration between Veeam and NetApp gives us the ability to leverage those snapshots and gain all the benefits mentioned in the section above, but also the ability to get those blocks of data onto another media type.

OnDemand

We have you covered for granular recovery, fast RTO, tighter RPO and a copy of your data on a separate media type that could protect you from internal malicious threats or corruption. Now the cherry on top.
We can take those ONTAP Snapshots and those backups and we can spin them up in an isolated environment. Veeam DataLabs gives us the ability to leverage the FlexClone technology on the ONTAP system and automate the creation, provisioning and removal of small isolated sandbox environments. This YouTube video clip of a session I did is a little old now, but the capability is a huge feature within Veeam Backup & Replication.

Oh, I should also mention that we have the same integration with Element Software and NetApp HCI; I wrote about that here.
I think that’s all for now in the world of NetApp & Veeam. If you have any questions, you can find me on the twitters @MichaelCade1.

Original Article: https://vzilla.co.uk/vzilla-blog/veeam-netapp-backup-for-the-netapp-data-fabric

DR

Tenant Backup to Tape

Tenant Backup to Tape

CloudOasis  /  HalYaman


As a Service Provider, what ways are available to you to protect your tenants’ data? What ways are there to offload aged data to keep storage costs under control? When looking at data archiving, is cloud storage the only way to archive your tenants’ backup data?

I chose this topic because I feel that the Backup to Tape as a Service feature is not getting the attention it deserves.
The Tape as a Service feature was included in Veeam Update 4 to help Service Providers free up space in their storage repositories and to help customers safely archive their backup data for longer retention periods.
In the same Update 4, Veeam also released the Cloud Tier, which allows Service Providers and customers to tier their backups to the cloud for archiving. Somehow that feature grabbed the market’s attention, while Tape as a Service was neglected.
As someone who is still a big fan of tape, and who believes that tape is not going to disappear in the near future, I think this blog post is a great read for Service Providers who want to learn more about the Tape as a Service feature included with Veeam Update 4.
So let’s learn how it works.

Architecture

The tenant Backup to Tape architecture is an add-on to the Veeam Cloud Connect architecture, with the addition of the tape infrastructure. Service Providers who want to offer this feature must add the tape infrastructure to the Veeam Cloud Connect server, and then configure a Backup to Tape job to offload the tenant backups from the cloud repository to tape.
The diagram below illustrates how the Veeam Tenant Backup to Tape feature integrates with Veeam Cloud Connect.
After the tape infrastructure has been added to Veeam Cloud Connect, the Service Provider can create a GFS (Grandfather-Father-Son) backup job to back up the tenant data to tape for both of these reasons:

  • Archival for long retention, and
  • Offline Archive.

Job Configuration

The configuration starts with creating a GFS media pool. The Service Provider can create a pool per tenant, per retention policy, or per any other business offering. In the examples used here, to achieve full data segregation, I will create a separate GFS media pool for each tenant. In the screenshot below, I created a media pool with five tapes to be used when backing up Tenant01 data:

The next step is illustrated below, where I created the Tape Backup Job and then selected Tenants as the data source:

Next, I selected the appropriate tenant repository; in the example below, it is Tenant01_Repo:

At the Media Pool menu item, select the specific tenant media pool. In this example, it is Tenant01 (HP MSL G3 Series 9.50).
The last step is to schedule the job to run the backup:

Recover Tenant Data from Tape

It is important to mention that Veeam Tenant Backup to Tape as a Service is managed and operated solely by the Service Provider to protect and archive the tenant data on the tenant’s behalf. This means the tenant will not be aware of the job or of the restore process. To benefit from this service, the tenant and the Service Provider must agree on the policy to be implemented.
In the case of data loss, the Tenant can ask the service provider to restore the data using one of the following options:

  • Restore to the original location,
  • Restore to a new location, or
  • Export backup files to disk.

The screenshot below illustrates the options:

With the options shown above, Service Providers can restore the tenant data to the original location, restore it to a different location, or export the data to removable storage. Original location restores the data to the same location it was backed up from, i.e. the tenant cloud repository. But sometimes the tenant wants to compare the data on tape with the data in the cloud repository; this is where the restore to a new location option becomes very useful. Service Providers can create a temporary tenant and give the tenant access to check the data before committing the restore to the original location or sending the data on removable storage. The screenshot below illustrates the Service Provider restoring the tenant data to a new location using the temporary tenant name Tenant01_Repo.
After the restore completes, the tenant can connect to the Service Provider using the temporary tenant to access the restored data:

Summary

As I mentioned at the start, I’m a big fan of tape backups for several reasons: they offer secure recovery in the event of a ransomware attack, they don’t cost very much, and more. I am also aware of the limitations and challenges that can come with tape backup, and maybe that can be the topic of another blog post. The Backup to Tape feature described here is a great feature, yet several Service Providers I spoke to had somehow missed it. They were very happy to finally find a cheap and effective way to use their tape infrastructure; some of them have started using this feature as ransomware protection.
I hope this blog post provides a clear understanding of how you, as a Service Provider, can benefit from this tenant Backup to Tape feature and maximize your infrastructure return on investment.
The post Tenant Backup to Tape appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/04/10/tenantbackuptotape/

DR

Disaster recovery plan documentation with Veeam Availability Orchestrator

Disaster recovery plan documentation with Veeam Availability Orchestrator

Veeam Software Official Blog  /  Sam Nicholls


Without a doubt, the automated reporting engine in Veeam Availability Orchestrator and the disaster recovery plan documentation it produces are among its most powerful capabilities. We’ve had overwhelmingly positive feedback about them from customers who benefit from them, and I feel that sharing some more insight into what these documents are capable of will help you understand how you can benefit from them, too.
Imagine coming in to work on a Monday morning to an email containing an attachment that tells you that your entire disaster recovery plan was tested over the weekend without you so much as lifting a finger. Not only does that attachment confirm that your disaster recovery plan has been tested, but it tells you what was tested, how it was tested, how long the test took, and what the outcome of the test was. If it was a success, great! You’re ready to take on disaster if it decides to strike today. If it failed, you’ll know what failed, why it failed, and where to start fixing things. The document that details this for you is what we call a “test execution report,” but that is just one of four fully-automated documentation types that Veeam Availability Orchestrator can put in your possession.

Definition report

As soon as your first failover plan is created within Veeam Availability Orchestrator, you’ll be able to produce the plan definition report. This report provides an in-depth view into your entire disaster recovery plan’s configuration, as well as its components. This includes the groups of VMs included in that plan, the steps that will be taken to recover those VMs, and the applications they support in the event of a disaster, as well as any other necessary parameters. This information makes this report great for auditors and management, and can be used to obtain sign-off from application owners who need to verify the plan’s configuration.

Readiness check report

Veeam Availability Orchestrator contains many testing options, one of which we call a readiness check: a great sanity check that is so lightweight it can be performed at any time. This test completes incredibly quickly and has zero impact on your environment’s performance, either in production or at the disaster recovery site. The resulting report documents the outcome of each of the test’s steps, including whether the replica VMs are detected and prepared for failover, whether the desired RPO is currently met, whether the VMware vCenter server and Veeam Backup & Replication server are online and available, whether the required credentials are provided, and whether the required failover plan steps and parameters have been configured.

Test execution report

Test execution reports are generated upon the completion of a test of the disaster recovery plan, powered by enhanced Veeam DataLabs that have been orchestrated and automated by Veeam Availability Orchestrator. This testing runs through every step identified in the plan as if it were a real-world scenario and documents in detail everything you could possibly want to know. This makes it ideal for evaluating the disaster recovery plan, proactively troubleshooting errors, and identifying areas that can be improved upon.

Execution report

This report is exactly the same as the test execution report but is only produced after the execution of a real-world failover.
Now that we understand the different types of reports and documentation available in Veeam Availability Orchestrator, I wanted to highlight some of the key features for you that will make them such an invaluable tool for your disaster recovery strategy.

Automation

All four reports are automatically created, updated and published based on your preferences and needs. They can be scheduled to complete at any frequency you see fit (daily, weekly, monthly, etc.), but are also available on demand with a single click. This means that if management or an auditor ever wants the latest version, you can hand them real-time, up-to-date documentation without laborious, time-consuming and error-prone manual edits. You can even automate this step by subscribing specific stakeholders or mailboxes to the reports relevant to them.

Customization

All four reports available with Veeam Availability Orchestrator ship as a default template. The template may be used as-is; however, it is recommended to clone it (the default template is not editable) and customize it to your organization’s specific needs. Customization is key, as no two organizations are alike, and neither are their disaster recovery plans. You can include anything you like in your documentation: logos, application owners, disaster recovery stakeholders and their contact information, even all the 24-hour food delivery services in the area for when things go wrong and the team needs to get through the night. You name it, you can customize and include it.

Built-in change tracking

One of the most difficult things to stay on top of with disaster recovery planning is how quickly and dramatically environments can change. In fact, uncaptured changes are among the most common causes of disaster recovery failure. Plan definition reports conveniently contain a section titled “plan change log” that details any edits to the plan’s configuration, whether made by automation or manually. This lets you track who changed plan settings, when they were changed, and what was changed, so you can understand whether a change was made correctly or in error and account for it before a disaster happens.

Proactive error detection

The actionable information available in both readiness check and test execution reports enables you to eradicate risk to your disaster recovery plan’s viability and reliability. By knowing what will and will not work ahead of time (e.g. a recovery that takes too long, or a VM replica that has not been powered down post-test), you can identify and proactively remediate any plan errors before disaster strikes. This in turn delivers confidence to you and your organization that you will be successful in a real-world event. Luckily, in the screenshot below, everything succeeded in my test.

Assuring compliance

Understanding the compliance requirements laid out by your organization or an external regulatory body is one thing. Proving that those requirements have been met, both today and in the past, when undergoing a disaster recovery audit is another, and failing to do so can have costly repercussions. Veeam Availability Orchestrator’s reports enable you to prove that your plan can meet measures like maximum acceptable outage (MAO) or recovery point objectives (RPO), whether they are defined by external regulations and bodies such as SOX, HIPAA and the SEC, or by an internal SLA.
If you’d like to learn more about how Veeam Availability Orchestrator can help you meet your disaster recovery documentation needs and more, schedule a demo with your Veeam representative, or download the 30-day FREE trial today. It contains everything you need to get started, even if you’re not currently a Veeam Backup & Replication user.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/S7QtakHRATg/disaster-recovery-planning-documentation.html

DR

NEW Veeam Backup for Microsoft Office 365 v3

NEW Veeam Backup for Microsoft Office 365 v3
Veeam Backup for Microsoft Office 365 is Veeam’s fastest growing product of all time, and with v3, it’s easier than ever for your customers to efficiently back up and reliably restore their Office 365 Exchange, SharePoint and OneDrive data. Watch this short and educational video on the ProPartner portal to learn more about the new features and functionalities in v3 and how you can earn more with Veeam Backup for Microsoft Office 365.

Enhanced agent monitoring and reporting in Veeam ONE

Enhanced agent monitoring and reporting in Veeam ONE

Veeam Software Official Blog  /  Kirsten Stoner


Veeam ONE has many capabilities that can benefit your business, one of which is the ability to run reports that compile information about your entire IT environment. Veeam ONE Reporter is a tool you can use to document and report on your environment to support analysis, decision making, optimization and resource utilization. For those who have used Veeam ONE in the past, it’s the go-to tool for reporting on your Veeam Backup & Replication infrastructure and your VMware vSphere and Microsoft Hyper-V environments. But one thing was missing from Veeam ONE, and that was the depth of visibility it provided for the Veeam Agents for Microsoft Windows and Linux. With the latest update, Veeam ONE Reporter gains three new reports that assess your Veeam Agent backup jobs. In addition to these predefined reports, the update adds information about protected computers to the “Backup Infrastructure Custom Data” report, and it enables agent monitoring by letting you categorize any machine protected by the Veeam Agent within Veeam ONE Business View, so you can monitor activity from a business perspective. Ultimately, this update equips Veeam ONE to provide substantial reporting and monitoring for any physical machines protected by the Veeam Agent.

Which reports are new?

Veeam ONE Update 4 adds more predefined reports to analyze Veeam Agent backup job activity. These include Computers with no Archive Copy, Computer Backup Status, and Agent Backup Job and Policy History.

  • The “Computers with no Archive Copy” report highlights all the computers that do not have an archive copy.
  • The “Computer Backup Status” report provides daily backup status information for all protected agents.
  • The “Agent Backup Job and Policy History” report provides historical information for all Veeam Agent policies and jobs.

All these reports provide visibility for evaluating your data protection strategy for any workload running a Veeam Agent in your environment. In this post, I want to highlight the Agent Backup Job and Policy History report, and also discuss the addition to the “Backup Infrastructure Custom Data” report, because both provide a great amount of information on agent backup jobs.


Figure 1: Veeam Backup Agent reports

Agent Backup Job and Policy History report

For Veeam Agents, this report provides great information on the status of the Backup Job, along with data on the job run itself. To access the report:

  1. Open Veeam ONE Reporter, switch to the Workspace view, and find the Veeam Backup Agent Reports folder; this is where you will see all the pre-built reports available for the Veeam Agents.
  2. Select the report, choose the Scope and select the time interval you want the report to cover. (You can make the report more specific by selecting an individual backup server, several jobs/policies to be reported on, or drill down further to the exact Agent Backup Job.)
  3. Once you have made your selection, click Preview Report and the report will be created.

Figure 2: Report Scope options

The report contains historical information for your Veeam Agent Backup Jobs. How you defined the report in the first step will determine how specific or general the data will be. Either way, the report provides great depth of data.


Figure 3: Agent Backup Job and Policy History Report

The first page shows an overview of the Agent Backup Jobs and policies, the dates the jobs ran, and whether each run falls into the Success, Failed or Warning category. On the second page of the report, you can see a bit more detail, like the total backup size and the number of restore points created.


Figure 4: Backup Job and Policy History details

If you select a specific date when the job ran, you get an even more detailed analysis of that specific run of the backup job.


Figure 5: Detailed Description of Agent Backup Job results

This tells you when the job started, how long it took and the backup size. You can even tell whether the run was full or incremental. The report provides visibility into your IT environment by gathering real-time data on your protected agent backups.

Backup infrastructure custom data report

This report allows you to customize and add data protection elements that are not covered together in the predefined reports included in Veeam ONE. The report can define and display data points about Veeam Backup & Replication objects, including the backup server, backup jobs, agent jobs and VMs. This is useful because it allows you to build a report that includes the aspects of the backup infrastructure of your choosing, displayed for easy analysis and visibility. Running this report works much the same way as described earlier in this post: locate the custom report pack in the Workspace view; once found, you can choose between the different objects you want to show and the aspects of the backup infrastructure you want the report to analyze.


Figure 6: Custom reporting

When creating the report, you can choose between the different aspects of the backup infrastructure you want shown. In addition, you can apply custom filters so the report displays only the data you want for the selected objects. Here is an example of a report that was run using the custom report pack.


Figure 7: Backup Infrastructure Custom Data Report

The ability to create custom reports allows you to define your own configuration parameters, performance metrics and filters when utmost flexibility is required. How cool is that?

Agent monitoring in Business View

To assist with monitoring Veeam Agent backup activity, you can utilize Veeam ONE Business View, which has added the ability to categorize agents in business terms. If you have your backup server(s) connected to Veeam ONE, you can start categorizing any machine that is being protected by the Veeam Agent.


Figure 8: Agent Monitoring in Business View

Veeam ONE Business View allows you to group any computers running the Veeam Backup Agent managed by your backup server. This gives you another layer of monitoring for the Veeam Agents that was not available in previous versions of Veeam ONE.

Bringing visibility to the Veeam Agents

Veeam ONE gives you the tools needed to accurately monitor and report on your entire IT environment. Actively monitoring and reporting on your IT environment allows you to be proactive when addressing issues, helps you plan for future business IT operation needs, and provides an understanding of how your data center works. By adding Veeam Agent reporting for your physical environment, you can gather data points on the Veeam Agent backup jobs and document the results. The latest update brings many enhancements and new functionality to Veeam ONE, making it well worth putting to work in your data center today.
The post Enhanced agent monitoring and reporting in Veeam ONE appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/JuL1W3jIYes/enhanced-agent-monitoring-reporting.html

DR

Application-level monitoring for your workloads

Application-level monitoring for your workloads

Veeam Software Official Blog  /  Rick Vanover


If you haven’t noticed, Veeam ONE has really taken on an incredible amount of capabilities with the 9.5 Update 4 release.
One capability that can be a difference-maker is application-level monitoring. This is a big deal for keeping applications available and is part of a bigger Availability story. Putting this together with incredible backup capabilities from Veeam Backup & Replication, application-level monitoring can extend your Availability to the applications on the workloads where you need the most Availability. What’s more, you can combine this with actions in Veeam ONE Monitor to put in the handling you want when applications don’t behave as expected.
Let’s take a look at application-level monitoring in Veeam ONE. This capability is inside of Veeam ONE Monitor, which is my personal favorite “part” of Veeam ONE. I’ve always said with Veeam ONE, “I guarantee that Veeam ONE will tell you something about your environment that you didn’t know, but need to fix.” And with application-level monitoring, the story is stronger than ever. Let’s start with both the processes and services inside of a running virtual machine in Veeam ONE Monitor:

I’ve selected the SQL Server service, which for any system with this service, is likely important. Veeam ONE Monitor can use a number of handling options for this service. The first are simple start, stop and restart service options that can be passed to the service control manager. But we also can set up some alarms based on the service:

The alarm capability for the services being monitored allows very explicit handling, and you can make it match the SLA or expectations your stakeholders have. Take how this alarm is configured: if the service is not running for 5 minutes, the alarm is triggered as an error. I’ll get to what happens next in a moment, but this 5-minute window (which is configurable) can be what you consider a reasonable amount of time for most routine maintenance to complete. If this time exceeds 5 minutes, something may not be operating as expected, and chances are the service should be restarted. This is especially true if you have a fiddlesome application that constantly, or even occasionally, requires manual intervention. This 5-minute threshold may even be quick enough to avoid being paged in the middle of the night! The alarm rules are shown below:

The alarm by itself is good, but we need more sometimes. That’s where a different Veeam ONE capability can help out with remediation actions. I frequently equate, and it’s natural to do so, the remediation actions with the base capability. So, the base capability is the application-level monitoring, but the means to the end of how to fully leverage this capability comes from the remediation actions.
With the remediation actions, the proper handling can be applied for this application. In the screenshot below, I’ve put in a specific PowerShell script that is run automatically when the alarm is triggered. Let your ideas go crazy here: it can be as simple as restarting the service, but you may also want to notify application owners that the application was remediated if they are not using Veeam ONE. That alone may be the motivation needed to set up read-only access to the application team for their applications. The configuration to run the script that automatically resolves the alarm is shown below, and a sketch of what such a script might look like follows:
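To make that concrete, here is a minimal remediation script sketch, for illustration only: the service name, SMTP server and mail addresses are hypothetical placeholders, and your own remediation action can of course do much more.

# Sketch of a remediation action (assumptions: runs on the monitored Windows machine,
# "MSSQLSERVER" is the watched service, and smtp.example.com plus the addresses are placeholders)
$serviceName = "MSSQLSERVER"

# Try to bring the service back; -ErrorAction Stop sends any failure to the catch block
try {
    Restart-Service -Name $serviceName -ErrorAction Stop
    $result = "Service '$serviceName' was restarted automatically."
}
catch {
    $result = "Automatic restart of '$serviceName' failed: $($_.Exception.Message)"
}

# Let the application owner know what happened
Send-MailMessage -SmtpServer "smtp.example.com" `
                 -From "veeam-one@example.com" -To "app-owner@example.com" `
                 -Subject "Remediation action on $env:COMPUTERNAME" -Body $result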

Another piece of intelligence regarding services: application-level monitoring in Veeam ONE also lets you set an alarm based on the number of services changing. For example, if one or more services are added, an alarm is triggered. This can be an indicator of an unauthorized software install or possibly a ransomware service.
Don’t let your creativity stop at service state; that is just one example, and application-level monitoring can be used for many other use cases. Processes, for example, can have alarms built on many criteria (including resource utilization), as shown below:

If we look closer at the process CPU, we can see that alarms can be built on whether a process’s CPU usage (as well as other metrics) goes beyond specified thresholds. As in the previous example, we can also add handling with remediation actions to sort out the situation based on pre-defined conditions. These warning and error thresholds are shown below:

As you can see, application-level monitoring used in conjunction with other new Veeam ONE capabilities can really set the bar high for MORE Availability. The backup, the application and more can be looked after with the exact amount of care you want to provide. Have you seen this new capability in Veeam ONE? If you haven’t, check it out!
You can find more information on Veeam Availability Suite 9.5 Update 4 here.

More on new Veeam ONE capabilities:

The post Application-level monitoring for your workloads appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/o53BIn44Mzk/application-level-monitoring.html

DR

Backup Azure SQL Databases

Backup Azure SQL Databases

CloudOasis  /  HalYaman


You have a good reason to use Microsoft Azure SQL Database; but you are wondering how you can back up the database locally. Can you include Azure SQL Database protection in your company’s backup strategy? What does it take to back up these databases?

In this blog post, I am going to share with you a solution I used for one of our Azure Database customers who wanted to back up their Azure SQL Database locally. The solution I came up with consists of the following:

  • Azure Databases – SQL Database
  • VM/Physical Server with local SQL server installed
  • An Empty SQL Database
  • Configure Azure: Sync to other Databases
  • Veeam Agent & Veeam Backup & Replication (depends on the deployment)

The following diagram illustrates the solution described in this blog post:

Solution Overview

There are two ways to back up the local copy of the data. One is to use the Veeam Agent, in the event you choose to deploy your SQL Server on a physical server or on a virtual machine inside the Azure cloud. The other is to deploy the SQL Server on premises on a VM inside your hypervisor infrastructure and protect it with Veeam Backup & Replication. You can also use a combination of both.
After the SQL Server is deployed, the solution requires the creation of an empty SQL database, and then the synchronization between the two databases must be configured. No worries, I will take you through the steps.

Preparing your Azure SQL Databases for Sync

In this first step, I will discuss the preparation of the existing Azure SQL database for synchronization. Follow these steps:

1. Create a Sync Group

From the Azure Portal, select SQL Databases and click the database you are going to work with. In the following example, I created a temporary SQL Database and filled it with temporary data for testing.
After you have moved to the database properties screen, select the Sync to other databases option:
Then select New Sync Group. Complete the following entries:
a. Fill in the name of the group. In this example, it is GlobalSync.
b. Select the database you want to sync. We have used AzureSqlDb on the server depsqlsrv.
c. Turn Automatic Sync on or off as desired. It is off in this example.
d. Select the conflict resolution option. Here, the Hub wins.
In the next step, you are going to configure the On-Premises Database Sync members. That is, the Azure Database and the Sync Agent, as shown below.
Note: You must copy the key for later use.
Next, download the Client Sync Agent from the provided link and install it on the SQL Server. Press OK. The next step is to select the On-Premises Database.

Prepare the SQL Server

After completing the steps above, switch to your SQL server, log in, and create an empty SQL database.
Next, start the installation of the Client Sync Agent and run it once the installation is finished. When the Client Sync Agent is running, press Submit Agent Key, then provide the key you copied in the previous step along with your Azure SQL Database username and password:
After pressing the OK button, press Register, then provide your local SQL server name and the name of the empty database you just created.
The steps above should run without connection issues. If you encounter problems, go back over the steps and settings and correct any errors.
You are now set up with your SQL Server preparation and your server is waiting for the synchronised data to be received. Before you get excited, we must switch back to the Azure Portal to continue with the final steps before we test the solution.

Back To Azure Portal

Continuing from the previous steps, we return to the selected database configuration. The Client Sync Agent communicates with the Azure Portal and registers the local SQL Server database, so we can now complete the configuration below.

Press OK three times to return to the last step of the Sync Group configuration process. Here we must select the Hub Database tables to sync with the local database.

Press Save to finish.

Testing the Solution

To test our solution, let’s first browse to the SQL server and check whether we can find any tables inside the database we just created.
We should not find any at this stage:
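If you prefer checking from the command line instead of SQL Server Management Studio, here is a minimal sketch using the SqlServer PowerShell module; the instance and database names are hypothetical placeholders:

# Sketch (assumptions: SqlServer module installed, a local default instance,
# and "AzureSqlDbLocal" is the placeholder name of the empty database created earlier)
Invoke-Sqlcmd -ServerInstance "localhost" -Database "AzureSqlDbLocal" `
              -Query "SELECT name FROM sys.tables ORDER BY name"
# Before the first sync this returns no rows; after a successful sync the synced tables appear.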

Now let’s initiate a manual sync. Remember, we configured the sync to run manually, so we trigger it from the Sync Group we just created. To sync the database with the local SQL server, press the Sync button at the top of the GlobalSync screen. See below.

The sync should complete successfully. If it doesn’t, check your settings and try again. Now it is time to check the local SQL server. See the screenshot below.

Conclusion

With the procedure we have just demonstrated, I have been able to sync the Azure SQL Database to a local SQL database, where I run a frequent Veeam backup using Veeam Backup & Replication. This way, I achieved my customer’s objective of backing up the Azure SQL Database locally. If necessary, I can use Veeam Explorer for Microsoft SQL Server to recover the database and tables to the local server and, from there, sync the data back to the Azure SQL Database.
This time, I demonstrated a manual sync process, but you can automate the sync to run at intervals of seconds, minutes, or hours. I hope you found this blog post useful.

The post Backup Azure SQL Databases appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/03/18/backup-azure-sql-databases/

DR

Backup infrastructure at your fingertips with Heatmaps

Backup infrastructure at your fingertips with Heatmaps

Veeam Software Official Blog  /  Rick Vanover


One of the best things an organization can do is have a well-performing backup infrastructure. This is usually done by fine-tuning backup proxies, sizing repositories, having specific conversations with business stakeholders about backup windows and more. Getting that set up and running is a great milestone, but there is a problem. Things change. Workloads grow, new workloads are introduced, storage consumption increases and more challenges come into the mix every day.
Veeam Availability Suite 9.5 Update 4 introduced a new capability that can help organizations adjust to the changes:

Heatmaps!

Heatmaps are part of Veeam ONE Reporter and do an outstanding job of giving an at-a-glance view of the backup infrastructure that helps you quickly see whether the environment is performing both as expected AND as you designed it. Let’s dig into the new heatmaps.

The heatmaps are available in Veeam ONE Reporter in the web user interface and are very easy to get started with. In the course of showing heatmaps, I’m going to show you two different environments: one that I’ve intentionally set up to perform in a non-optimized fashion, and one that is in good shape and balanced, so that the visual element of the heatmap can be seen easily.
Let’s first look at the heatmap of the environment that is well balanced:

Here you can see a number of things: the repositories are getting a bit low on free space, including one that is rather small, while the proxies carry a nice green color scheme and do not show too much variation in their work during their backup windows. Conversely, if we see a backup proxy that is dark green, that indicates it is not in use, which is not a good thing.
We can click on the backup proxies to get a much more detailed view, and you can see that the proxy has a small amount of work during the backup window in this environment in the mid-day timeframe and carries a 50% busy load:

When we look at the environment that is not so balanced, the proxies tell a different story:

You can see that, first of all, there are three proxies, but one of them seems to be doing much more work than the rest, judging by the color changes. This clearly tells me the proxies are not balanced, and the selected proxy is doing a lot more work than the others during the overnight backup window, which stretches out the backup window.

One of the coolest parts of the heatmap capability is that we can drill into a timeframe in the grid (this timeline can have a set observation window) that will tell us which backup jobs are causing the proxies to be so busy during this time, shown below:

In the details of the proxy usage, you can see the specific jobs that are taking the CPU cycles.

How can this help me tune my environment?

This is very useful, as it may indicate a number of things, such as backup jobs being configured not to use the correct proxies, or proxies not having the connectivity they need to perform the correct type of backup job. An example would be one or more proxies configured for Hot-Add mode only while they are physical machines, which makes that transport mode impossible. Such a proxy would never be selected for a job, and the remaining proxies would be left doing all the backup work. This is all visible in the heatmap; the backup jobs would still complete successfully, but this type of situation would extend the backup window. How cool is that?
Beyond proxy usage, repositories are also very well covered by the heatmaps, including Scale-Out Backup Repositories, which lets you view the underlying storage free space. The following animation shows this in action:

Show me the heatmaps!

As you can see, the heatmaps add an incredible visibility element to your backup infrastructure. You can see how it is performing, including if things are successful yet not as expected. You can find more information on Veeam Availability Suite 9.5 Update 4 here.

Helpful resources:

The post Backup infrastructure at your fingertips with Heatmaps appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/nHjHt9YMUwE/one-reporter-heatmaps-monitoring.html

DR

New era of Intelligent Diagnostics

New era of Intelligent Diagnostics

Veeam Software Official Blog  /  Melissa Palmer


Veeam Availability Suite 9.5 U4 is full of fantastic new features to enable Veeam customers to intelligently manage their data like never before. A big component of Veeam Availability Suite 9.5 U4 is of course, Veeam ONE.
Veeam ONE is all about providing unprecedented visibility into customers’ IT environments, and this update is full of fantastic enhancements, so stay tuned for more blogs all about new features like Veeam Agent monitoring and reporting enhancements, Heatmaps, and more!
One of the most interesting new features of Veeam ONE 9.5 U4 is Veeam Intelligent Diagnostics. Think of Veeam Intelligent Diagnostics as the entity watching over your Veeam Backup & Replication environment so you don’t have to. Wouldn’t it be great for potential issues to fix themselves before they cause problems? Veeam Intelligent Diagnostics enables just that.

How it works

Veeam Intelligent Diagnostics works by comparing the logs from your Veeam Backup & Replication environment to a known list of issue signatures. These signatures are downloaded from Veeam directly, so think of it as a reverse call home. All log parsing and analysis is done by Veeam ONE with the help of a Veeam Intelligent Diagnostics agent which is installed on every Veeam Backup & Replication server you want to keep an eye on.
These signatures can easily be updated by Veeam Support as needed. This allows Veeam Support to be proactive if they are noticing a number of cases with the same characteristics or cause, and fix issues before customers even encounter them. Think of things like common misconfigurations that Veeam customers spend time troubleshooting. These items can easily be avoided by Veeam customers leveraging Veeam Intelligent Diagnostics.

When an issue is detected, Veeam Intelligent Diagnostics can fix things in one of two ways, automatically or semi-automatically. The semi-automatic method requires manual approval to remediate the issue. Veeam calls these fixes Remediation Actions. In either case, Veeam ONE alarms will be triggered when an issue is detected. Veeam Intelligent Diagnostics will also include handy Veeam Knowledge Base articles in the alarms to allow customers to understand the issues they are avoiding.

Management made easier

One of the things that has always attracted me to Veeam products is how easy they are to use and get started with. Veeam Intelligent Diagnostics takes things a step further by eliminating potential issues before they even cause problems, making Veeam Backup & Replication easier to use and manage. Reducing operational complexity is a win for any organization, and Veeam Intelligent Diagnostics helps do this.
The Veeam Availability Suite, which is made up of Veeam Backup & Replication and Veeam ONE, is available for a completely free 30-day trial. Be sure to try it out to take advantage of Veeam Intelligent Diagnostics and the rest of the powerful features released in Veeam Availability Suite 9.5 U4.
The post New era of Intelligent Diagnostics appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/vIWWpwJY2UI/intelligent-diagnostics-log-analysis.html

DR

Backup your Azure Storage Accounts

Backup your Azure Storage Accounts

CloudOasis  /  HalYaman


How do you back up your Microsoft Azure Storage Accounts? Is there a way to export the unstructured data from your blob storage accounts and back it up to a different location?


Last week I had a discussion with a customer who wanted to back up his Microsoft Azure Storage Accounts (blobs, files, tables, and more) and was asking whether the Veeam Agent could somehow help achieve this.
Straight out of the box, it is challenging to back up the data resident in Azure Storage Accounts; but with a few simple steps, I helped the customer back up the Microsoft Azure unstructured blob files to a Veeam repository using Veeam Agent for Microsoft Windows.
In this blog post, I will take you through the steps I took to help the customer back up their Microsoft Azure Storage Account. The solution uses Microsoft AzCopy to download and sync the required blob container files to a local drive on the VM where the Veeam Agent is installed, and the agent is then configured to back up the virtual machine or the targeted folder.

Solution Pre-requisites

As I mentioned before, the steps to achieve the requirements are straightforward and require the following items to be prepared and configured:

  • Veeam Backup Server.
  • Veeam Backup Agent installed on a Microsoft Azure VM.
  • Disk space attached to the VM to temporarily store the blob files.
  • Microsoft AzCopy tool.

AzCopy Tool

To copy data from a Microsoft Azure Storage Account, you must download and install the Microsoft AzCopy tool. You can download it from this link and install it on the VM where the Veeam Agent is installed.
After it has downloaded, run the setup, and then launch AzCopy from the Start menu:

By default, the program files are stored in:
C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\
After the process has started, you must configure the access to your Microsoft Azure subscription using this command:

AzCopy login

Follow the output instructions: browse to https://microsoft.com/devicelogin and enter the generated random access code to authenticate. In this example, the code is HP5DFJBZ9; you will get a different one.

After following those steps, you have finished the Microsoft AzCopy installation and configuration.

Solution

To begin copying the blob data to the local drive, we must perform the following steps:

  • Create a directory on the local drive where you will store the downloaded blob files.
  • Obtain your Storage Account key.
    • Note: In this screenshot at zcopytesting – Access keys, you are provided with two keys so that you can maintain connections using one key while the other regenerates. You must also update your other Microsoft Azure resources and applications accessing this storage account to use the new keys. Your disk access is not interrupted.
  • Obtain the Blob Container URL:

The next step is to create a .cmd script that brings all those pieces together. The script looks like this:

“C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy” /Source:<Blob Container URL> /Dest:<Local Folder> /SourceKey:<Access Key> /S /Y /XO

This .cmd file can be scheduled to run before your Veeam Agent backup job, or via the Windows Task Scheduler; a scheduling sketch follows.
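For illustration, here is a minimal sketch that registers the script as a daily scheduled task with the built-in ScheduledTasks PowerShell module; the script path, task name and start time are hypothetical placeholders:

# Sketch (assumptions: run elevated on the Windows VM, and C:\Scripts\blob-sync.cmd
# is the placeholder path of the .cmd file described above)
$action  = New-ScheduledTaskAction -Execute "C:\Scripts\blob-sync.cmd"
$trigger = New-ScheduledTaskTrigger -Daily -At 1:00am

# Run under SYSTEM so no interactive logon is required; schedule it ahead of the Veeam Agent job
Register-ScheduledTask -TaskName "AzCopy blob sync" -Action $action -Trigger $trigger `
                       -User "SYSTEM" -RunLevel Highest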

Veeam Backup Demonstration

To validate the solution developed above, I tested it by uploading several files to the blob storage account:


I then started a backup job. After the backup job completed, I ran the restore process to validate the backup using the guest files restore option:


The screenshot below shows the restored console.txt file, opened before the file was modified:


I then modified the console.txt file and ran an incremental backup. The following screenshot shows the restore from that incremental backup (note the time of the backup):
Edit the console.txt file (added Hal – Test For Incremental x 2):


Conclusion

With these simple and straightforward steps, I have been able to back up my Microsoft Azure storage account to my Veeam backup repository. The good news is that the .cmd script provided in this blog post copies only the new and updated/modified files (/XO) from the blob storage, which saves time and space both on your Veeam repository and on the local disk. In this blog post, I have shown basic, manual steps to demonstrate the solution and its capabilities. As previously mentioned, it is ideal to run the Microsoft AzCopy command before starting the backup job, either as a pre-job script or as a Windows scheduled task.
I hope this blog post helps you; please don’t hesitate to share your feedback.
The post Backup your Azure Storage Accounts appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/03/10/backup-your-azure-storage-accounts/

DR

Tips on managing, testing and recovering your backups and replicas

Tips on managing, testing and recovering your backups and replicas

Veeam Software Official Blog  /  Evgenii Ivanov


Several months ago, I wrote a blog post about some common and easily avoidable misconfigurations that we, support engineers, keep seeing in customers’ infrastructures. It was met with much attention and hopefully helped administrators improve their Veeam Backup & Replication setups. In this blog post, I would like to share with you several other important topics. I invite you to reflect on them to make your experience with Veeam smooth and reliable.

Backups are only half of the deal — think about restores!

Every now and then we get calls from customers who find themselves in a very bad situation. They needed a restore, but at a certain point hit an obstacle they could not circumvent. And I’m not talking about lost backups, CryptoLocker or anything like that! It’s just that their focus was on creating a backup or replica. They never considered that data recovery is a whole different process that must be examined and tested separately. Here are several examples to give you a taste of it:

  1. The customer had a critical 20-terabyte VM that failed. Nobody wants downtime, so they started the VM with Instant Recovery and had it working in five minutes. However, Instant Recovery is a temporary state and must be finalized by migrating the VM to the production datastore. As it turned out, the infrastructure did not allow copying 20 TB of data in any reasonable time. And since Instant Recovery was started with the option to write changes to the C drive of the Veeam Backup & Replication server (as opposed to using a vSphere snapshot), that drive was quickly filling up without any possibility of sufficient extension. As some time passed before the customer approached support, the VM had already accumulated changes that could not be discarded. With critical data at risk, no way to finalize the Instant Recovery in a sufficiently short time, and imminent failure approaching, it was quite a pickle, huh?
  2. The customer had a single domain controller in the infrastructure, and everything was added in Veeam Backup & Replication using DNS names. I know, I know. It could have gone wrong in a hundred ways, but here is what happened: the customer planned some maintenance and decided to fail over to the replica of that DC. They used planned failover, which is ideal for such situations. The first phase went fine; however, during the second phase the original VM was turned off to transfer the last bits of data. Of course, at that moment the job failed because DNS went down. Luckily, here we could simply turn on the replica VM manually from vSphere (this is not something we recommend, see the next piece of advice). However, it disrupted and delayed the maintenance process. Plus, we had to manually add host names to the C:\Windows\System32\drivers\etc\hosts file on the Veeam Backup & Replication server to allow a proper failback.
  3. The customer based their backup infrastructure around tapes and maintained only a very short backup chain on disks. When they had to restore some guest files from a large file server, it turned out there was simply not enough space to be found on any machine to act as a staging repository (a quick free-space check like the sketch below would have flagged this long before the restore was needed).
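As an aside, here is a minimal sketch of that kind of check, assuming a Windows machine and the C: volume as the candidate staging location; adjust the drive letter as needed.

# Sketch (assumption: run on the machine you intend to use as a staging location
# for a large file-level restore; C: is a placeholder for the staging volume)
Get-Volume -DriveLetter C | Select-Object DriveLetter,
    @{Name = "SizeGB"; Expression = { [math]::Round($_.Size / 1GB, 1) } },
    @{Name = "FreeGB"; Expression = { [math]::Round($_.SizeRemaining / 1GB, 1) } }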

I think in all these situations the clients fell into the same trap: they simply assumed that if a backup is successful, then a restore will be as well! Learn about restores, just as you learn about backups. A good way to start is our user guide. This section contains information on all the major types of restores. In the “Before you begin” section of each restore option, you can find initial considerations and prerequisites. Information on other types of restores, such as restore from tapes or from storage snapshots, can be found in their respective sections. Apart from the main user guide, be sure to check out the Veeam Explorers guide too. Each Veeam Explorer has a “Planning and preparation” section; this will help you prepare your system for restore beforehand.

Do not manage replicas from vSphere console

Veeam replicas are essentially normal virtual machines. As such, they can be managed using the usual vSphere management tools, mainly the vSphere client. They can be, but they should not be. Replica failover in Veeam Backup & Replication is a sophisticated process that allows you to carefully go one step at a time (with the possibility to roll back if something goes wrong) and finalize the failover in a proper way. Take a look at the scheme below:

If, instead of using the Veeam Backup & Replication console, you simply start a replica in the vSphere client, or if you start a failover from Veeam Backup & Replication but then switch to managing it from the vSphere client, you face a number of serious consequences:

  1. The failover mechanism in Veeam Backup & Replication will no longer be usable for this VM, as all that flexibility described above will no longer be available.
  2. You will have data in the Veeam Backup & Replication database that does not represent the actual state of the VM. In worst cases, fixing it requires database edits.
  3. You can lose data. Consider this example: a customer started a replica manually in the vSphere client and decided to simply stick with it. Some time passed, and they noticed that the replica was still present in the Veeam Backup & Replication console. The customer decided to clean it up a little, right-clicked the replica and chose “Delete from disk.” Veeam Backup & Replication did exactly what it was told: it deleted the replica, which, unbeknownst to the software, had become a production VM with data.

There are situations when starting the replicas from the vSphere client is necessary (mainly, if the Veeam Backup & Replication server is down as well and replicas must be started without delay). However, if the Veeam Backup & Replication server is operational, it should be the management point from start to finish.
It is also not recommended to delete replica VMs from the vSphere client. Veeam Backup & Replication will not be aware of such changes, which can lead to failures and stale data in the console. If you no longer need a replica, delete it from the Veeam Backup & Replication console rather than from the vSphere client as a VM. That way your list of replicas will contain only actual data.

Careful with updates!

I’m speaking about updates for hypervisors and various applications backed up by Veeam. From a Veeam Backup & Replication perspective, such updates can be roughly divided into two categories — major updates that bring a lot of changes and minor updates.
Let’s speak about major updates first. The most important ones are hypervisor updates. Before installing them, it is necessary to confirm that Veeam Backup & Replication supports them. These updates bring a lot of changes to the libraries and APIs that Veeam Backup & Replication uses, so updating the Veeam Backup & Replication code and rigorous testing by QA are necessary before a new version is officially supported. Unfortunately, as of now VMware does not provide vendors any preliminary access to new vSphere versions, so Veeam’s R&D gets access together with the rest of the world, which means there is always a lag between a new version release and official support. The magnitude of changes also does not allow R&D to fit everything into a hotfix, so official support is typically added with new Veeam Backup & Replication versions. This puts support and our customers in a tricky situation. Usually after a new vSphere release, the number of cases increases because administrators start installing updates, only to find out that their backups are failing with weird issues. This forces us, support, to ask the customers to perform a rollback (if possible) or to propose workarounds that we cannot officially support due to lack of testing. So please check version compatibility before updating!
The same applies to the applications being backed up. Each Veeam Explorer also has a list of supported versions, and new versions are added to this list with Veeam Backup & Replication updates. So once again, be sure to check the Veeam Explorers user guide before moving to a new version.
In the minor updates category, I put things like cumulative updates for Exchange, new VMware Tools versions, security updates for vSphere, etc. Typically, they do not contain major changes, and in most situations Veeam Backup & Replication does not experience any issues. That’s why QA does not release official statements as it does for major updates. However, in our experience there have been situations where a minor update changed the workflow enough to cause issues with Veeam Backup & Replication. In these cases, once the presence of an issue is confirmed, R&D develops a hotfix as soon as possible.
How should you stay up to date on recent developments? My advice is to register on https://forums.veeam.com/. You will be subscribed to the weekly “Word from Gostev” newsletter from our Senior Vice President Anton Gostev. It contains information on discovered issues (not limited to Veeam products), release plans and interesting IT news. If you do not find what you are looking for in the newsletter, I recommend checking the forum. Due to the sheer number of Veeam clients, if an update breaks something, a related thread appears soon after.
Now, backups are not the only thing that patches and updates can break. In reality, they can break a lot of stuff, the application itself included. And here Veeam has something to offer: Veeam DataLabs. Maybe you have heard about SureBackup, our ultimate tool for verifying the consistency of backups. SureBackup is based on DataLabs, which allows you to create an isolated environment where you can test updates before bringing them to production. If you want to save yourself some gray hair, be sure to check it out. I recommend starting with this post.
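If you already have a SureBackup job wired to a DataLabs virtual lab, you can also trigger it on demand from PowerShell, for instance right before rolling a patch into production. A small hedged sketch follows; the snap-in and cmdlet names reflect the 9.5 PowerShell reference as I recall it, and the job name is a made-up example.

# Hedged sketch: start an existing SureBackup job on demand to verify backups in an
# isolated DataLabs environment before applying an update to production.
Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue

$job = Get-VSBJob | Where-Object { $_.Name -eq "Patch Validation Lab" }   # example name
Start-VSBJob -Job $job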

Advice to those planning to buy Veeam Backup & Replication or switching from another solution

Sometimes in technical support we get cases that go like this: “We have designed our backup strategy like this, we acquired Veeam Backup & Replication, however we can’t seem to find a way to do X. Can you help with it?” (Most commonly such requests are about unusual retention policies or tape management). We are happy to help, but at times we have to explain that Veeam Backup & Replication works differently and they will need to change their design. Sure enough, customers are not happy to hear that. However, I believe they are following an incorrect approach.
Veeam Backup & Replication is very robust and flexible, and in its current form it can satisfy the absolute majority of companies. But it is important to understand that it was designed with certain ideas in mind, and to make the product really shine, it is necessary to follow those ideas. Unfortunately, sometimes the reality is quite different. Here is what I imagine happens with some customers: they decide that they need a backup solution, so they sit down in a room and meticulously design each element of their strategy. Once done, they move on to choosing a backup solution, where Veeam Backup & Replication seems to be an obvious choice. In another scenario, the customer already has a backup solution and a developed backup strategy. However, for some reason their solution does not meet their expectations, so they decide to switch to Veeam and wish to carry their backup strategy into Veeam Backup & Replication unchanged. My firm belief is that this process should go the other way around.
These days Veeam Backup & Replication has become a de facto standard among backup solutions, so almost any administrator will want to take a look at it. However, if you are serious about implementation, Veeam Backup & Replication needs to be studied and tested. Once you know its capabilities and know this is what you are looking for, build your backup strategy specifically for Veeam Backup & Replication. You will be able to use the functionality to the maximum, reduce risks, and support will have an easier time understanding your setup.
And that’s what I have for today’s episode. I hope that gave you something to consider.

The post Tips on managing, testing and recovering your backups and replicas appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/srEg219FagQ/manage-test-recover-backup-replica.html

DR

Using Veeam PowerShell with Veeam DataLabs Staged Restore and Secure Restore

Using Veeam PowerShell with Veeam DataLabs Staged Restore and Secure Restore

vZilla  /  michaelcade

Before we get started, I wanted to share something that I found while testing the beta and RC builds of Veeam Backup & Replication 9.5 Update 4. While running my live Veeam DataLabs: Secure Restore demo at our Velocity sales and partner event, which was also part of our 9.5 Update 4 launch, I mentioned that there is a huge influx of new things you can do with Veeam Backup & Replication through PowerShell cmdlets.
If you want to see for yourself, run the following command; it outputs all of the cmdlets available in Veeam Backup & Replication 9.5 Update 4. Note that if you want to compare, you need to run it prior to the update as well.
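The command itself was shown as a screenshot in the original post; a minimal equivalent, assuming the Veeam PowerShell snap-in installed with the console, would look something like this:

# Load the Veeam snap-in (installed with the Veeam Backup & Replication console)
Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue

# List every Veeam cmdlet, then count them so the total can be compared before/after update
Get-Command -Module VeeamPSSnapin | Sort-Object Name
(Get-Command -Module VeeamPSSnapin | Measure-Object).Count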

VBR 9.5 Update 3a: 564 PowerShell cmdlets -> VBR 9.5 Update 4: 734 cmdlets

New PowerShell Cmdlets (170 NEW)

A snapshot of the new PowerShell cmdlets is shown below in this table. The PowerShell Reference guide is a great place to start if you want to understand what these can do for you in your environment.
Comparing the Update 3a release to the new Update 4 release, you will see that there were 564 cmdlets before and there are now 734. That's 170 new! It shows the depth of this release.

My focus for the Update 4 release has been around Veeam DataLabs Secure and Staged Restore features. This next section will go deeper into the options and scenarios of how to run Secure and Staged Restore via PowerShell during your recoveries.

Veeam DataLabs: Secure Restore

Veeam DataLabs Secure Restore gives us the ability to perform a third-party antivirus scan before continuing a recovery process. The command below highlights the parameters that can be used for Secure Restore while leveraging Instant VM Recovery®. You will find these same parameters under many other restore cmdlets. More information can be found here.
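The original command is shown as an image in the post; as a rough sketch of what those parameters look like in practice, the example below starts an Instant VM Recovery with the Secure Restore switches. The switch names follow my reading of the Update 4 PowerShell Reference and should be verified there; the job, VM and host names are placeholders.

Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue

# Pick the most recent restore point of the VM we want to bring back
$backup       = Get-VBRBackup -Name "Backup Job WEB"
$restorePoint = Get-VBRRestorePoint -Backup $backup -Name "WEB01" |
                Sort-Object CreationTime -Descending | Select-Object -First 1
$targetHost   = Get-VBRServer -Name "esxi01.lab.local"

# Instant VM Recovery with Secure Restore:
#   -EnableAntivirusScan    scan the disks on the mount server before the VM is published
#   -EnableEntireVolumeScan keep scanning after the first hit to report every threat found
#   -VirusDetectionAction   DisableNetwork or AbortRecovery when an infection is detected
Start-VBRInstantRecovery -RestorePoint $restorePoint -Server $targetHost `
    -EnableAntivirusScan -EnableEntireVolumeScan -VirusDetectionAction DisableNetwork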


The script below is an example that I have used to kick off a Secure Restore within my own lab environment.
It is the script I used for the on-stage demo. In the demo I used a tiny Windows virtual machine that I had infected with the EICAR test file, a harmless sample that many antivirus vendors use to test their systems. I wanted to demonstrate the full force of what Veeam DataLabs: Secure Restore can bring to the recovery process by showing an infected machine. There is no fun in showing you something that is clean; anyone can do that.

Veeam DataLabs: Staged Restore

Veeam DataLabs Staged Restore allows us to inject a script, or more broadly an extra process, into an entire VM recovery. Many use cases could be shared, but this is not the place to go into that detail. The command below highlights the parameters available for the Staged Restore functionality during an entire VM restore.
The one use case I will share, and have had discussions about since the release, is the ability to mask data from certain departments. The challenge one of our customers was facing: before Update 4, they were restoring their backup data to an isolated environment, away from production, for their developers to work on. When Veeam DataLabs: Staged Restore was mentioned, they recognised that the databases they were exposing from those backup files to their development team also included sensitive data. With Staged Restore they can inject a script into the restore process that excludes that sensitive data from the development team. More information can be found here.


Finally, the script below is another example, this time for Staged Restore. You will note that there are three options at the end that allow for different outcomes. The first option is a standard entire VM recovery with no staged restore. The second option (the one that is not commented out) injects a script without requiring an application group. The third and final option leverages an Application Group, for cases where the VM being restored needs a dependency from the Application Group in order to complete the injected script.
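Since the original script is shown as an image, here is a hedged sketch of those three variants. The staging parameter names follow my reading of the Update 4 PowerShell Reference and should be checked there; the backup, VM, virtual lab, credentials and script path are all made-up examples.

Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue

$backup       = Get-VBRBackup -Name "Backup Job SQL"
$restorePoint = Get-VBRRestorePoint -Backup $backup -Name "SQL01" |
                Sort-Object CreationTime -Descending | Select-Object -First 1
$lab          = Get-VBRVirtualLab | Where-Object { $_.Name -eq "DataLab01" }
$creds        = Get-VBRCredentials -Name "LAB\Administrator"

# Option 1: standard entire VM restore, no staged restore
# Start-VBRRestoreVM -RestorePoint $restorePoint -Reason "Plain entire VM restore"

# Option 2 (active): staged restore that runs a data-masking script inside the DataLab,
# with no application group required
Start-VBRRestoreVM -RestorePoint $restorePoint -Reason "Staged restore with masking" `
    -EnableStagedRestore -StagingVirtualLab $lab -StagingCredentials $creds `
    -StagingScript "C:\Scripts\mask-sensitive-data.ps1"

# Option 3: as option 2, but boot an application group first because the VM depends on it
# ($appGroup would be retrieved with the corresponding application group cmdlet)
# Start-VBRRestoreVM -RestorePoint $restorePoint -EnableStagedRestore -StagingVirtualLab $lab `
#     -StagingApplicationGroup $appGroup -StagingCredentials $creds `
#     -StagingScript "C:\Scripts\mask-sensitive-data.ps1"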

Hopefully that was useful. I am hoping to explore more PowerShell options, especially around Veeam DataLabs and the art of the possible when it comes to leveraging that data, and to provide even more value on top of the backup and recovery of your data.

Original Article: https://vzilla.co.uk/vzilla-blog/using-veeam-powershell-with-veeam-datalabs-staged-restore-and-secure-restore

DR

N2WS Expands Cost Optimization for Amazon Web Services with Amazon EC2 Resource Scheduling

N2WS Expands Cost Optimization for Amazon Web Services with Amazon EC2 Resource Scheduling

Business Wire Technology News

WEST PALM BEACH, Fla. – N2WS, a Veeam® company and provider of cloud-native backup and recovery solutions for data and applications on Amazon Web Services (AWS), today announced the availability of N2WS Backup & Recovery 2.5 (v2.5), which introduces Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) resource management and control. Using the new N2WS Resource Control, organizations can lower costs by stopping or hibernating groups of Amazon EC2 and Amazon RDS resources on demand or automatically, based on a pre-determined schedule.

“A good portion of our internal AWS bill is dedicated to Amazon EC2 compute resources, as much as 60 percent every month. A lot of these resources are not being used all the time,” explains Uri Wolloch, CTO at N2WS. “Being able to simply power on or off groups of Amazon EC2 and Amazon RDS resources with a simple mouse click is just like shutting the lights off when you leave the room or keeping your refrigerator door closed; it reduces waste and saves a huge amount of money if put into regular practice.”
N2WS Backup & Recovery v2.5 further optimizes the process for cycling Amazon Elastic Block Store (Amazon EBS) snapshots into the N2WS Amazon Simple Storage Service (Amazon S3) repository, extending cost savings to more than 60% for backups stored for longer than 2 years. N2WS v2.5 also supports two new AWS Regions: the AWS Europe (Stockholm) Region and the new AWS GovCloud (US-East) Region.
For public sector organizations leveraging the AWS GovCloud (US-East or US-West) Regions, N2WS v2.5 offers automated cross-region disaster recovery between the two AWS Regions. This allows AWS GovCloud (US) users to orchestrate near-instant recovery and disaster recovery while keeping data and workloads in approved data centers, ensuring that workloads remain available in the event of a regional issue.
New updates and enhancements now available as part of N2WS Backup & Recovery v2.5 include:

  • Management of data storage and server instances from one simple console using N2WS Resource Control
  • Optimization enhancements to Amazon S3 integration, saving even more when cycling snapshots into an Amazon S3 repository
  • Added support for new AWS Regions – AWS Europe (Stockholm) and AWS GovCloud (US-East) Regions with cross-region DR between AWS GovCloud US-West and AWS GovCloud US-East now available
  • Expanded range of APIs allowing you to configure alerts and recovery targets
  • Full support for all available APIs via a new CLI tool

For organizations leveraging the N2WS Amazon S3 repository, further compression and deduplication enhancements have pushed cost savings above 60% for data stored over longer terms. In simulations, monthly backups of an 8TB volume stored in the N2WS Amazon S3 repository for 2 years cost 68% less than the same backups stored as native Amazon EBS snapshots. These enhancements help enterprises manage and significantly reduce cloud storage costs for backup data being retained for compliance reasons.
N2WS Backup & Recovery is an enterprise-class backup, recovery and disaster recovery solution only available on AWS Marketplace. N2WS Backup & Recovery version 2.5 is available for immediate use by visiting AWS Marketplace at: https://aws.amazon.com/marketplace/pp/B00UIO8514.
About N2WS
N2W Software (N2WS), a Veeam® company, is a leading provider of enterprise-class backup, recovery and disaster recovery solutions for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Services (Amazon RDS), and Amazon Redshift. Used by thousands of customers worldwide to backup hundreds of thousands of production servers, N2WS Backup & Recovery is a preferred backup solution for Fortune 500 companies, enterprise IT organizations and Managed Service Providers operating large-scale production environments on AWS. To learn more, visit www.n2ws.com.

Original Article: http://www.businesswire.com/news/home/20190305005142/en/N2WS-Expands-Cost-Optimization-Amazon-Web-Services/?feedref=JjAwJuNHiystnCoBq_hl-Q-tiwWZwkcswR1UZtV7eGe24xL9TZOyQUMS3J72mJlQ7fxFuNFTHSunhvli30RlBNXya2izy9YOgHlBiZQk2LOzmn6JePCpHPCiYGaEx4DL1Rq8pNwkf3AarimpDzQGuQ==

DR

Veeam and Alibaba Cloud – OSS Integration

Veeam and Alibaba Cloud – OSS Integration

CloudOasis  /  HalYaman


What options does Veeam offer to protect Alibaba Cloud workloads? How do you back up Alibaba ECS (Elastic Compute Service) instances? And is it possible to use Alibaba OSS (Object Storage Service) to archive your Veeam backups?

Veeam Update 4, released on 22 January, introduced not only new product features but also new cloud service integrations. In this blog, I will discuss the integration of Veeam Update 4 with Alibaba Cloud.
There are several levels of integration that Veeam offers to protect and/or utilize Alibaba Cloud resources. The following options are available to you when you use Veeam with Alibaba Cloud:

  • Deploy Veeam as an Elastic Compute (ECS) instance from the Alibaba MarketPlace,
  • Protect Alibaba ECS instances using a Veeam Agent,
  • Integrate Veeam with Alibaba Object Storage Service (OSS),
  • Integrate Veeam with Alibaba VTL.

In this blog post, I will focus on the Veeam and Alibaba OSS integration to keep the blog short and focused. In future blog posts, I will discuss Veeam Agent protection and VTL.

Veeam and Alibaba Object Storage Service (OSS) Integration

Veeam Archive/Capacity Tier is a newcomer to the Veeam suite and is included in Update 4. The good news for customers using Alibaba Cloud is that Alibaba OSS is supported out of the box. As an Alibaba customer, you can add your OSS bucket to a Veeam Scale-out Backup Repository to archive your aging backups to OSS for longer retention.
To add your Alibaba OSS to Veeam, you must first complete these tasks:

  • Create the Alibaba OSS bucket,
  • Acquire the Bucket URL and the Access key,
  • Add Alibaba OSS as a repository.

Let’s walk through adding OSS to the Veeam infrastructure:
The process is very simple. It starts with adding a new repository, as shown in the following screenshot.

After you choose the Object storage, you must select the S3 Compatible option:
Next, name the new OSS repository, then press Next to enter the Service Point URL (OSS URL) and the credentials in the form of an Access Key and Secret Key. See the screenshot below.

The final step in this process is to create a folder where you will save your Veeam backups. See the screenshot below.

Before you finalize the process, you can take the opportunity to limit OSS consumption. See the settings in the screenshot below.

Now press Finish to complete the process.

Summary

In this blog post, I discussed the integration of Veeam and Alibaba OSS, walking you through the steps to add Alibaba OSS to the Veeam infrastructure as a repository. To use OSS as an Archive/Capacity Tier target, you must configure a Veeam Scale-out Backup Repository (SOBR) and then include the OSS bucket in the Capacity Tier configuration. See the screenshot below for the setup.

In a future blog post, I will continue the discussion of Veeam and Alibaba Cloud, covering Veeam with Alibaba VTL and Alibaba ECS agent protection. Until then, I hope this blog post helps you get started. Please don’t forget to share.

The post Veeam and Alibaba Cloud – OSS Integration appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/03/04/veeam-abibaba-cloud-oss-integration/

DR

Veeam & NetApp HCI (Element OS) Integration #NetAppATeam

Veeam & NetApp HCI (Element OS) Integration

vZilla  /  michaelcade


With the recent GA release of Veeam Backup & Replication 9.5 Update 4, the next storage integration has also arrived. Leveraging the Universal Storage API released in late 2017, the list continues to grow; here is an overview of the integration points I previously wrote about.
Functionality is covered in the post linked above; here I want to highlight the simple process of getting your NetApp HCI or SolidFire storage added to your Veeam Backup & Replication console.
Before you begin, make sure you have added your vSphere environment; this unlocks the Storage Infrastructure tab.
Some of my posts this week are focusing on Veeam Community Edition, so I will run through the process of adding storage within this free edition. Even the Community Edition gains visibility into, and some control over, historic snapshots that were not created by Veeam, as well as the ability to recover items from those snapshots.

As you can see above, we have a clean Veeam Backup & Replication install with our VMware environment added, but no storage integrations have been added to the console yet. To add one, select Add Storage.

Next you will see the long list of available storage integrations, along with a link to download any of the additional Universal Storage API plug-ins. For us, we can select the NetApp option below.

On this next page you will see Data ONTAP, which we have covered a few times in previous posts, but the one we are interested in today is Element. Element is the operating system used on both SolidFire and NetApp HCI storage.

The Element option links to the Veeam website where all plug-ins can be downloaded. Authenticate with your account, or create an account, to see the available downloads.

Scroll down until you find the following screen; this applies to all Veeam Universal Storage API integrations, and here you will find the downloads for all of them.

Select and download the NetApp Element Plug-In.

Once downloaded, the plug-in will be in the form of a compressed file; extract it into an appropriate location.

You will see, once extracted, that it is an application file that can then be run. I always choose to run it as administrator.

If you have not closed all your Veeam Backup & Replication consoles, you will first get the following warning, which will not let you proceed until all consoles are closed.

Next is a pretty straightforward wizard; it's Veeam, we wouldn't make this complicated, would we? Next.


Terms of usage: let’s make sure we all take the time to read through this. It does, after all, state in block capitals that this is IMPORTANT – READ CAREFULLY.

When you are happy with the terms of usage, your tray table is stowed, your seat is in an upright position and your seat belt is fastened, you can proceed with the installation.

Note that the installation is going to stop some Veeam services, so as a best practice you should disable your jobs before running it. Don't worry, it starts the services again once installed; just remember to re-enable the jobs afterwards. In my lab I have no jobs running currently, and it doesn't take long at all.

OK, so that's the plug-in installed on your Veeam Backup & Replication server. Pretty simple, straightforward stuff.

Now let's head back into our Veeam Backup & Replication console and add our NetApp HCI / SolidFire system. Follow the first four steps of the post above to get back to this stage; this time we have the software downloaded and installed. First, add the management IP of your system.

Provide the credentials to access the storage system.

The next options cover the available protocols, the volumes to be scanned, and which of our proxies to use, i.e. those that have access to the storage array.
Volumes to scan: if you have volumes other than VMFS datastores for your VMware environment, you may wish to remove them from the scanning schedule here.
Backup proxies to use: this ensures that only proxies capable of accessing the storage system are used. By default, all proxies will be used or attempted.

That's it. Confirm the summary of the system you are adding; once you hit Finish, Veeam is going to run a scan of the volumes for VMware VM files.

The next screen/pop-up shows the storage system being discovered. Veeam scans for all VMFS volumes located on the storage array.

Another Veeam server has been orchestrating snapshots, but with this instance of Veeam Backup & Replication we now have the ability to look inside those volumes and snapshots and see the VMs available for recovery. We can also perform a level of restore against them.

I will follow up in another post on what else we can do now that we have our NetApp HCI system visible to our Veeam Backup & Replication server. Yes, we can back up from it and restore, but there are also some more very interesting features we can take advantage of, including Veeam DataLabs.

Original Article: https://vzilla.co.uk/vzilla-blog/veeam-netapp-hci-element-os-integration-netappateam

DR

Enhancing DRaaS with Veeam Cloud Connect and vCloud Director

Enhancing DRaaS with Veeam Cloud Connect and vCloud Director

Veeam Software Official Blog  /  Anthony Spiteri

The state of disaster recovery

While many organizations have understood the importance of the 3-2-1 rule of backup in getting at least one copy of their data offsite, they have traditionally struggled to understand the value of making their critical workloads available with replication technologies. Replication and Disaster Recovery as a Service (DRaaS) still predominantly focus on the Availability of Virtual machines and the services and applications that they run. The end goal is to have critical line of business applications identified, replicated and then made available in the case of a disaster.
The definition of a disaster varies depending on who you speak to, and the industry loves to use geo-scale impact events when talking about disasters, but the reality is that the failure of a single instance or application is much more likely than whole system failures. This is where Replication and Disaster Recovery as a Service becomes important, and organizations are starting to understand the critical benefits of combining offsite backup together with replication of their critical on-premises workloads.

Veeam Cloud Connect

While the cloud backup market has been flourishing, it’s true that most service providers who have been successful with Infrastructure as a Service (IaaS) have spent the last few years developing their Backup, Replication and Disaster Recovery as a Service offerings. With the release of Veeam Backup & Replication v8, Cloud Connect Backup was embraced by our cloud and service provider partners and became a critical part of their service offerings. With version 9, Cloud Connect Replication was added, and providers started offering Replication and Disaster Recovery as a Service.
Cloud Connect Replication was released with industry-leading network automation features, and the ability to abstract both Hyper-V and vSphere compute resources and have those resources made available for tenants to replicate workloads into service provider platforms and have them ready for full or partial disaster events. Networking is the hardest part to get right in a disaster recovery scenario and the Network Extension Appliance streamlined connectivity by simplifying networking requirements for tenants.
While Cloud Connect Replication as it stood pre-Update 4 was strong technology…it was missing one thing…

Introducing vCloud Director support for Veeam Cloud Connect Replication

VMware vCloud Director has become the de facto standard for service providers who offer Infrastructure as a Service. While always popular with top VMware Cloud Providers since its first release back in 2010, the recent enhancements with support for VMware NSX, a brand new HTML5-based user interface, together with increased interoperability, has resulted in huge growth in vCloud Director being deployed as the cloud management platform of choice.
Veeam has had a long history supporting vCloud Director, with the industry’s first support for vCloud Director-aware backups released in Veeam Backup & Replication v7. With the release of Update 4, we added support for Veeam Cloud Connect to replicate directly into vCloud Director virtual data centers, allowing both our cloud and service provider partners, and tenants alike, to take advantage of the enhancements VMware has built into the platform.

By extending Cloud Connect Replication to take advantage of vCloud Director as a way to allocate service provider cloud resources natively, we have given providers the ability to utilize the constructs of vCloud Director and have their tenants consume cloud resources easily and efficiently.

Benefits of vCloud Director with Cloud Connect Replication

By allowing tenants to consume vCloud Director resources, we let them take advantage of more powerful features when dealing with a full disaster or the failure of individual workloads. Not only will full or partial failovers be more transparent with the use of the vCloud Director HTML5 Tenant UI, but networking functionality will also be enhanced by tapping into VMware's industry-leading network virtualization technology, NSX.
With tenants able to view and access VM replicas via the vCloud Director HTML5 UI, they will have greater visibility and access before and after failover events. The vCloud Director HTML5 UI will also allow tenants to see what is happening to workloads as they boot and interact with the guest OS directly, if required. This dramatically reduces the reliance on the service provider helpdesk and ensures that tenants are in direct control of their replicas.

From a networking point of view, being able to access the NSX Edge Gateway for replicated workloads means that tenants can take advantage of the advanced networking features available on the NSX Edge Gateway. While the existing Network Extension Appliance did a great job in offering basic network functionality, the NSX Edge offers:

  • Advanced Firewalling and NAT
  • Advanced Dynamic Routing (BGP, OSPF and more)
  • Advanced Load Balancing
  • IPsec and L2VPN
  • SSL VPN
  • SSL Certificate Services

Put all that together with the ability to manage and configure everything through the vCloud Director HTML5 UI and you start to get an understanding of how utilizing NSX via vCloud Director enhances Cloud Connect Replication for both service providers and tenants.
There are also a number of options that can be used to extend the tenant network to the service provider cloud network when actioning a partial failover. Tenants and service providers can configure custom IPsec VPNs or use the IPsec functionality of the NSX Edge Gateway to be in place prior to partial failover.
The Network Extension Appliance is still available for deployment in the same way as before Update 4 and can be used directly from within a vCloud Director virtual data center to automate the extension of a tenant network so that the failed over workload can be accessible from the tenant network, even though it resides in the service provider’s environment.

Conclusion

For Veeam Cloud & Service Providers (VCSP) that underpin their backup and replication service offerings with Veeam Cloud Connect, the addition of vCloud Director support means that there is an even stronger case to deliver replication and disaster recovery to customers. For end users, the added benefits of the vCloud Director HTML5 UI, and enhanced networking services backed by NSX, means that you are able to have more confidence in recovering from disasters, and in your ability to provide greater business continuity.


The post Enhancing DRaaS with Veeam Cloud Connect and vCloud Director appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/-nGSBauTYDk/cloud-connect-replication-to-vcloud-director.html

DR

Veeam brings back-up protection to HyperFlex’s SAP HANA party

Veeam brings back-up protection to HyperFlex’s SAP HANA party

Veeam can now protect SAP HANA deployments with native SAP database backups. With Veeam integrations for Cisco HyperFlex and Cisco UCS, we are excited about combining forces to offer SAP HANA users a modern solution spanning from production to protection.

SAP HANA integrated backup is here!

SAP HANA integrated backup is here!

Veeam Software Official Blog  /  Rick Vanover


SAP HANA is one of the most critical enterprise applications out there, and if you have worked with it, you know it likely runs part, if not all, of a business. In Veeam Availability Suite 9.5 Update 4, we are pleased to now have native, certified SAP support for backups and recoveries directly via backint.

What problem does it solve?

SAP HANA’s in-memory database platform requires a backup solution that is integrated with and aware of the platform. This SAP HANA support gives you a certified solution for SAP HANA backups, reduces the impact of running backups, ensures operational consistency, and lets you leverage all of the additional capabilities that Veeam Availability Suite has to offer. This also includes point-in-time restores, database integrity checks, and storage efficiencies such as compression and deduplication.
This milestone comes after years of organizations wanting Veeam backups for their SAP installations. We spent many years advocating backing up SAP with BRTOOLS and leveraging image-based backups to prepare for tests. Now the story becomes even stronger, with support for Veeam to drive backint backups from SAP and store them in a Veeam repository. Specifically, this means that a backint backup can happen for SAP HANA and Veeam can manage the storage of that backup. It is important to note that the Veeam SAP Plug-In, which makes this native support work, is also supported for use with SAP HANA on Microsoft Azure.

How does it work?

The Veeam Plug-In for SAP HANA becomes a target available for native backups from SAP HANA Studio for a few backup types: file-based backups, snapshots and backint backups. When backups are performed in SAP HANA Studio, a number of different types and targets can be selected. This is all native within the SAP HANA application and SAP HANA tools like SAP HANA Studio, SAP HANA Cockpit or SQL-based command line entries. These include file backups (plain copies of files) and complete data backups using backint. Backint is an API framework that allows third-party tools (such as Veeam) to connect the backup infrastructure directly to the SAP HANA database. The backint backup interval is set in SAP HANA Studio, and that interval can be very small, such as 5 minutes. It is also recommended to pair the backup with log backups (again, configured in SAP HANA Studio) to enable more granular restores, which will be covered a bit later on.
SAP HANA can also trigger snapshots of its own database; while these do not include consistency or corruption checks, snapshots are a great addition to the overall backup strategy. By most common measures, backint is the best approach for backing up SAP HANA systems, but using snapshots can add more options for recovery. The plug-in data flow for a backint backup as implemented in Veeam Availability Suite 9.5 Update 4 is shown in the figure below:

One of the key benefits of doing a backint backup of SAP HANA is that you can do direct restores to a specific point in time – either from snapshots or from the backint backups with point-in-time recovery. This is very important when considering how critical SAP HANA is to many organizations. So, when it comes to how often a backup is done, select the interval that works for your organization’s requirements and make sure the option to enable automatic log backup is selected as well.

Bring on the Enterprise applications!

Application support is a recent trend here at Veeam, and I do not expect this to slow down any time soon! The SAP HANA and Oracle RMAN plug-ins are two big steps in bringing support for critical enterprise applications to Veeam. You can find more information on Veeam Availability Suite 9.5 Update 4 here.
The post SAP HANA integrated backup is here! appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/4U0sO6275AA/sap-hana-integrated-backup.html

DR

VBO Office 365 Analysis with MS Graph

VBO Office 365 Analysis with MS Graph

CloudOasis  /  HalYaman


Are you thinking of backing up your Microsoft Office 365 Exchange Online and wanting to size your backup accurately? Is there a way to measure the total size of your consumed Exchange Online storage, its change rate and more?

The answers to these questions are essential when it comes to sizing backup solutions. You must know the storage needed for the backed-up data and how to size the backup solution components (server roles).
In the era of Software as a Service (SaaS), it’s challenging to acquire this information, and then to get access to it. This makes solution sizing a challenging exercise for any business.

The Scenario

Let us consider the following scenario to understand the challenge that a business faces when they want to undertake sizing and backup of their Exchange Online. To use a specific example, I will consider the Veeam Backup for Office 365 product to simplify the discussion.
To size a Veeam backup repository, you need several numbers to calculate the total size. They are:

  • The total size of the current mailboxes (Active Mailboxes); and
  • Change rate.

The good news is that Microsoft offers several ways to pull this information from Office 365; you can use either PowerShell or the MS Graph API.
In this example, I will discuss the MS Graph API to show you how to easily retrieve the mailbox size and change rate, and more, to size the Veeam Backup Repository and the Veeam Backup infrastructure.

Veeam VBO Sizing

Veeam Software has released a great sizing guide that shows you how to size Veeam Backup for Office 365 for Exchange Online. To access the guide, follow this link.
But how great would it be if we could somehow automate the process! Using the Microsoft Graph API, with just two GET requests we can gather all the information we need to size the Veeam VBO repository and infrastructure. Have a look at these two URLs:
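The two URLs themselves appear as a screenshot in the original post, so I will not reproduce them here. As an illustration of the kind of Graph report calls involved, the sketch below pulls per-mailbox sizes and the daily storage trend; it assumes an OAuth access token in $token with Reports.Read.All permission, and the endpoints shown are the standard Microsoft Graph usage reports, not necessarily the exact ones the author used.

$headers = @{ Authorization = "Bearer $token" }

# Per-mailbox detail: storage used, item counts, deleted item size (returned as CSV)
$mailboxDetail = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://graph.microsoft.com/v1.0/reports/getMailboxUsageDetail(period='D30')"

# Daily total mailbox storage for the tenant; comparing days gives a rough change rate
$storageTrend = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://graph.microsoft.com/v1.0/reports/getMailboxUsageStorage(period='D30')"

# The reports come back as CSV text; convert and sum the mailbox storage in GB
$rows = $mailboxDetail | ConvertFrom-Csv
($rows | ForEach-Object { [double]$_.'Storage Used (Byte)' } | Measure-Object -Sum).Sum / 1GB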

From those two commands, we can extract all the information we need to size these Veeam items:

  • Backup Repository size;
  • Number of Servers needed;
  • Number of Proxies; and
  • Number of jobs.

VBO Sizing WebApp

Taking the information from the MS Graph API and the Veeam VBO sizing guide, I asked myself: “How can I build a WebApp that can be accessed from any web-enabled device, connect to the tenant's Office 365, pull down the necessary information on behalf of the tenant, and then allow the tenant to choose the retention period?”
All of that has already been assembled in a very small WebApp, which you can access via this link. You need only your Microsoft account to access your tenant's Office 365 subscription.

How the WebApp works

When you access the WebApp link, you will be greeted by a Connect button. This redirects you to the standard, secure Microsoft Azure login page to acquire the Graph token.
After the token has been retrieved, the WebApp sends several RESTful GET API calls to acquire the following information:

With that information acquired, the WebApp helps with sizing the Veeam Backup Repository, and also returns the number of proxies, repositories, VBO servers and jobs you need to fully protect your Exchange Online.

Conclusion

The WebApp is a tool I wanted to build to get an initial understanding of the size of the Exchange Online deployments of my customers and service providers. With this WebApp, the data gathering and the sizing of the Veeam VBO infrastructure have become simple and fast. This helps customers size their storage and calculate their budget before the sales cycle starts.
Over the following days I will be working on version two of the WebApp. I will be adding more functionality, like estimating the size of the OneDrive and SharePoint drives, and other exciting features. So, if you have any feedback or suggestions, they will be most welcome.
The post VBO Office 365 Analysis with MS Graph appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/02/18/office-365-analysis-with-ms-graph/

DR

Want a Visio of your lab? Veeam ONE can do it for you!

Want a Visio of your lab? Veeam ONE can do it for you!

Notes from MWhite  /  Michael White


Hi all,
I mentioned, when talking with a customer, that I had to update the Visio diagram of my home lab to reflect some recent changes. There were groans, and I asked why: the amount of work it takes to update Visio diagrams. I said there was very little work, as I have a tool that does it for me. I told them Veeam ONE can produce a Visio of your VMware infrastructure, and they laughed. So here we go.
You will need a Windows machine with Visio installed.

  • First you should access Veeam ONE Reporter, which is normally at https://FQDN:1239. After authentication you will see a collection of reports.

  • The report that has an output that we can use with Visio is in the Offline Reports folder – seen near the end of the list.
  • When you look into the Offline Reports folder you see what we are after!

  • So click on the Infrastructure Overview (Visio) report name.

  • We need to confirm the scope of operations.  So click on the blue Virtual Infrastructure link.

  • I can see my infrastructure is all selected so that is good.  Now we can use OK to return.
  • Now if you want VMs included in your diagram, and I do, you can select the VM checkbox.

  • Once your Include VMs is checked, or not checked, you can hit the Preview button.
  • Once you hit that button, you will see a brief “collecting data” dialog; if you have a very big lab it will take a little longer. Once it is done you will have a file downloaded.

  • No matter how hard you try to load this file into Visio, it will not work. You must download the Veeam Report Viewer. Which, by the way, is not really a viewer; it is a translation tool. You need to download it and install it where Visio is, and also move the infrastructure.vmr file to where the Viewer and Visio are.

  • The viewer installs very fast and leaves an icon on your desktop.

  • Start up the tool, and you should see something like below.

  • You can see I used it once already successfully. Let's use the File \ Open option to load the infrastructure.vmr file.
  • We are waiting for it to say Done; depending on the size and complexity of your environment, it could take a few minutes.

  • Once it is done it will pop up Visio.  But put it away for now.

  • We want to see the viewer to make sure it was a success. After all, this is not about one diagram but multiple diagrams!

  • Now change to the following folder on the machine where the Viewer and Visio are. Each day you run the Viewer, it puts another folder into the My Veeam Reports folder. As you can see below, it is the Valentine's Day folder we are in.

  • The Index diagram is an overview, and the config one shows what is powered up or not, but the others are what you would expect. And they print nicely.

So now you have Visios of your lab or infrastructure. It may have taken you a few minutes to get it all done the first time, but the second or third time is much faster.
BTW, I did this with Update 4 of Veeam ONE, and 6.7 U1 of vSphere.
Michael
=== END ===

Original Article: https://notesfrommwhite.net/2019/02/14/want-a-visio-of-your-lab-veeam-one-can-do-it-for-you/

DR

How to improve security with Veeam DataLabs Secure Restore

How to improve security with Veeam DataLabs Secure Restore

Veeam Software Official Blog  /  Michael Cade


Today, ransomware and malware attacks are top of mind for every business. In fact, no business, large or small, is immune. What's even more concerning is that ransomware attacks are increasing worldwide at an alarming rate, and because of this, many of you have expressed concern. In a recent study administered by ESG, 70% of Veeam customers indicated malicious malware and virus contamination are major concerns for their businesses (source: ESG Data Protection Landscape Survey).
There are obviously multiple ways your environment can be infected by malware; however, do you currently have an easy way to scan backups for threats before introducing them to production? If not, Veeam DataLabs Secure Restore is the perfect solution for secure data recovery!
The premise behind Veeam DataLabs Secure Restore is to provide users an optional, fully integrated antivirus scan step as part of any chosen recovery process. This feature, included in the latest Veeam Backup & Replication Update 4, addresses the problems associated with malicious malware by giving you the ability to ensure that any copy data you want or need to recover into production is in a good state and malware-free. To be clear, this is NOT prevention of an attack; instead, it is a new, patent-pending way of remediating an attack arising from malware hidden in your backup data, and of giving you additional confidence that a threat has been properly neutralized and no longer exists within your environment.
Sounds valuable? If so, keep reading.

Recovery mode options

Veeam offers a number of unique recovery processes for different scenarios and Veeam DataLabs Secure Restore is simply an optional enhancement included in many of these recovery processes to make for a truly secure data recovery. It’s important to note though that Secure Restore is not a required, added step as part of a restore. Instead, it’s an optional anti-virus scan that is available to put into action quickly if and when a user suspects a specific backup is infected by malware, or wants to proceed with caution to ensure their production environment remains virus-free following a restore.

Workflow

The workflow for Secure Restore is the same regardless of the specific recovery scenario used.

  1. Select the restore mode
  2. Choose the workload you need to recover
  3. Specify the desired restore point
  4. Enable Secure Restore within the wizard

Once Secure Restore is enabled, you are presented with a few options on how to proceed when an infection is detected. For example, with an Entire VM recovery, you can choose to continue the recovery process but disable the network adapters on the virtual machine, or choose to abort the VM recovery process. In the event an actual infection is identified, you also have a third option: tell the third-party antivirus to continue scanning the whole file system, giving you visibility into any other threats residing in your backups.

As you work through the wizard and the recovery process starts, the first step is to select the backup file and mount its disks to the mount server, which contains the antivirus software and the latest virus definitions (not provided by Veeam). Veeam then triggers an antivirus scan against the mounted disks. For those of you familiar with Veeam, this is the same process leveraged by Veeam file-level recovery. Currently, Veeam DataLabs Secure Restore has built-in, direct integrations with Microsoft Windows Defender, ESET NOD32 Smart Security and Symantec Protection Engine for virus scanning; however, any antivirus software with CMD support can also interface with Secure Restore.

The virus scan then walks the mounted volumes to check for infections. If an infection is found, Secure Restore defaults to the choice you selected in the recovery wizard and either aborts the recovery or continues with it but disables the network interfaces on the machine. In addition, you have access to a portion of the antivirus scan log from the recovery session, giving you a clear understanding of what infection was found and where to locate it on the machine's file system.

This particular walkthrough highlights the virtual machine recovery aspect. Next, by logging into vCenter, you can navigate to the machine and see that the network interfaces have been disconnected, giving you the ability to log in through the console and troubleshoot the files as necessary.

To quickly summarise the steps that we have walked through for the use case mentioned at the beginning, here they are in a diagram:

SureBackup

Probably my favourite part of this new feature is how Secure Restore fits within SureBackup, yet another powerful feature of Veeam DataLabs. For those of you unfamiliar with SureBackup, you can check out what you can achieve with this feature here.
SureBackup is a Veeam technology that allows you to automatically test VM backups and validate recoverability. This task automatically boots VMs in an isolated Virtual Lab environment, executes health checks for the VM backups and provides a status report to your mailbox. With the addition of Secure Restore as an option, we can now offer an automated and scheduled approach to scan your backups for infections with the antivirus software of your choice to ensure the most secure data recovery process.

PowerShell

Finally, it’s important to note that the options for Veeam DataLabs Secure Restore are also fully configurable through PowerShell, which means that if you automate recovery processes via a third-party integration or portal, then you are also able to take advantage of this.
Veeam DataLabs – VeeamHUB – PowerShell Scripts
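As a short hedged sketch of what such automation might look like (cmdlet and switch names per my reading of the Update 4 PowerShell Reference, with placeholder names), an entire VM restore with the antivirus scan enabled could be scripted like this:

Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue

# Most recent restore point of the VM to recover (VM name is a placeholder)
$restorePoint = Get-VBRRestorePoint -Name "FS01" |
                Sort-Object CreationTime -Descending | Select-Object -First 1

# Entire VM restore with Secure Restore: scan first, and if malware is found,
# keep the restored VM but bring it up with its network adapters disabled
Start-VBRRestoreVM -RestorePoint $restorePoint -Reason "Automated clean restore" `
    -EnableAntivirusScan -VirusDetectionAction DisableNetwork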
The post How to improve security with Veeam DataLabs Secure Restore appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/7HPJ5-I2neA/datalabs-secure-restore-overview.html

DR