VMware NSX-T Simplified UI vs Advanced UI explained

Veeam Software Official Blog  /  Jeffrey Kusters

Please welcome our Write for Veeam contributor Jeffrey Kusters with his debut on our platform!
If you’d also like to share your expertise and knowledge and bring something new to the table on the Veeam blog (and earn money by doing so), then learn more and drop us a line at the program page. Let’s get techy!

It’s hard to believe it has been seven years since VMware acquired Nicira and six years since VMware NSX was launched. Having evolved significantly since those days, features such as micro-segmentation, automation and Virtual Cloud Networking have impacted the data center as well as disaster recovery.
With the release of NSX-T 2.4 in June 2019, VMware significantly modified the user experience of NSX Manager. A new simplified UI was introduced, and the previous UI from version 2.3 has become the advanced UI. During conversations with customers, this topic has surfaced frequently, and I felt that explaining the key differences between the respective UIs and offering guidance on when to use each would be helpful to Veeam customers.

Advantages of the simplified UI

The UI is basically a representation of the underlying API, and that’s where the actual change was introduced. In NSX-T 2.4, VMware introduced a second API called the policy API (available at https://<nsxmanager>/policy/api), which is “declarative,” not to be confused with the management API (available at https://<nsxmanager>/api), which is “imperative.” With an imperative API, every action or change must be specified explicitly and executed in exactly the right order to reach the desired state. With a declarative API, users simply define the desired state in a single API call, and the system determines which actions need to be taken to reach that state. This is a much simpler way of consuming a system because, as an administrator or engineer, you don’t need intricate knowledge of the current state or the specific order in which actions must be executed:

Single API Interface (Imperative):

  • POST/GET Logical Switch
  • POST/GET NSGroups
  • POST/GET EDGE Firewall
  • POST/GET Tier-1 Router
  • POST/GET DFW-Section
  • POST/GET LB Config

Single API Command (Declarative):

  • PATCH https://<nsxmanager>/policy/api/v1/infra {desired outcome as human-readable JSON}
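To make the declarative model concrete, here is a minimal sketch of such a call for a single segment (the manager address, credentials, segment name and transport zone path are placeholders; the full body schema, including the single PATCH of an entire desired-state tree to /policy/api/v1/infra shown above, is in the NSX-T policy API reference):

  # Declare the desired state of one segment; NSX-T works out the steps itself
  curl -k -u admin:'<password>' -X PATCH \
    -H 'Content-Type: application/json' \
    "https://<nsxmanager>/policy/api/v1/infra/segments/Test-Segment" \
    -d '{
          "display_name": "Test-Segment",
          "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tz-id>"
        }'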

The simplified UI aligns with the declarative policy API and the advanced UI aligns with the imperative management API:

In NSX-T Manager, the top menu represents the simplified UI:

The advanced UI can be found by selecting the advanced networking & security option in the top menu:

Object name changes in simplified UI

The new declarative policy API and simplified UI also introduced name changes for certain objects in NSX-T, including:

  • Logical switch → segment
  • T0 logical router → tier-0 gateway
  • T1 logical router → tier-1 gateway
  • NSGroup → group
  • Firewall section → security-policy
  • Edge firewall → gateway firewall

The following screenshot shows how a new segment, or logical switch as it is called in the advanced UI, is created:

The new segment is shown in the vSphere client and can be used to connect virtual machines:

How do the simplified UI and advanced UI relate to each other?

If you declare a desired state in the simplified UI, NSX-T will store the desired configuration data in the declarative interface and push it down to the control and data plane. It will also replicate all required objects as read-only objects to the imperative interface. All configurations created in the simplified UI are visible in the advanced UI. The Test-Segment referenced above is also visible as a read-only logical switch in the advanced UI:

(The red highlighted icon means the object is protected/read-only. Since all the logical switches starting with pks- are created and managed by VMware PKS which is running in this specific lab environment, these are also protected from manual changes).
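If you want to see this replication at the API level, you can query both interfaces for the same object (a sketch; the manager address and credentials are placeholders, and both endpoint paths are taken from the NSX-T 2.4 API references):

  # The segment as declared through the policy API
  curl -k -u admin:'<password>' "https://<nsxmanager>/policy/api/v1/infra/segments/Test-Segment"
  # The same object realized as a read-only logical switch in the management API
  curl -k -u admin:'<password>' "https://<nsxmanager>/api/v1/logical-switches"
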
This does not work in reverse. Objects created in the advanced UI are not replicated to or visible in the simplified UI. After upgrading to NSX-T 2.4, all existing objects will be unavailable in the simplified UI (as you can see by looking at the first simplified UI screenshot that shows zero objects available while there are numerous objects available in the advanced UI screenshot). Currently, there is no in-place migration capability available as part of the upgrade process. We can either leave our objects in the advanced UI or migrate them manually (or programmatically) to the simplified UI.

Which version of the UI should you use?

Moving forward, VMware appears to favor the simplified UI and the declarative policy API, so it is recommended you become familiar with the simplified UI. It is expected that new features will emerge in the simplified UI, and the advanced UI may even be deprecated at some point, although no timeline has been communicated. There are still tasks that can only be performed in the advanced UI, since not all functions and features are supported in the simplified UI yet, so familiarity with both is encouraged. Unfortunately, you are faced with a dilemma: keep your existing NSX-T 2.3 topology in the advanced UI, or migrate everything to the simplified UI (which can be a disruptive change). There are also integrations with other products that consume NSX-T, such as VMware PKS or third-party container platforms. These integrations will need to be updated, so for the time being they will continue to leverage the imperative API and advanced UI as well.

Conclusion

The introduction of the simplified UI and declarative policy API is a welcome addition in a world that is moving swiftly towards automation and desired state configuration. In the end, this new consumption model allows administrators and engineers to consume a software-defined network in a much easier, faster and more predictable way. In the short term, the move from the advanced UI to the simplified UI may cause some challenges for existing NSX-T customers: there is currently no non-disruptive in-place migration path, and not all functions and features are fully available in the simplified UI. Though VMware appears to favor the simplified UI moving forward, both interfaces will still be required for your virtual networking needs.

How we improved tape support since Update 4

Veeam Software Official Blog  /  Evgenii Ivanov


Veeam Backup & Replication 9.5 Update 4 was a major release that brought many improvements and interesting features. With tape not only far from dead but rising in popularity, it’s no surprise that tape support was a major focus item in the new version. Over the last few months, Veeam technical support has gathered enough cases to identify the main issues customers encounter, and with the recent release of Update 4b, it’s time to go over the new tape functionality and show our readers the practical side of incorporating tape into their backup strategies.

Library parallel processing

Several versions ago, Veeam Backup & Replication’s media pool became “global.” That means a single media pool can contain tapes that are associated with multiple tape devices. This already gave users several advantages. First, it considerably simplified tape management — tapes could now “travel” seamlessly between the libraries. All users had to do was rescan the library the tapes were moved to or, if a barcode reader was not present (for example, in the case of a standalone drive), run an inventory (forcing Veeam Backup & Replication to read the tape header). Second, global media pools introduced tape “High Availability.” Users could choose among three events (“Library is offline,” “No media available” and “All tape drives are busy”) that make a tape job fail over to the next tape device in the list.
By the way, this is also where many users get stuck when trying to delete an old device: they get an error stating that the tape device is still in use by media pool X. If you don’t see any tapes associated with the tape device on the Tapes step, be sure to click the Manage button (see the screenshot below); most likely, that’s where the old tape device is hiding.
Update 4 went one step further and introduced true parallel processing between tape libraries. Now, if you go to media pool settings – Tapes – Manage, you will be able to select the mode in which your tape devices should operate:

To enable parallel processing, set the role to “Active”; giving devices a “Passive” role enables a failover mechanism instead. Keep in mind that roles cannot be mixed arbitrarily: either one device is active with all other devices passive, or all devices are active. The general recommendation is to use active mode if you have several similar tape devices. If you have a modern and an old library, you can give the old library a passive role so that your jobs won’t stop even if there are issues with the main (active) library.
As you can see, the setup is straightforward. However, I would like to take a moment to review how tape parallel processing works in Veeam Backup & Replication.

  1. To enable parallel processing within a single backup to tape job, this job must use either several backup jobs as sources, or the source backup job must be per-VM. If the source is a single job with multiple VMs that is creating normal backup files, the load cannot be split between the drives. Make sure your sources are set up correctly!
  2. Tape parallel processing is enabled at the media pool level. In media pool settings on the Options step, you can specify how many drives the job uses. If you use several tape devices in active mode, this number is the total number across all tape devices connected to this media pool. If parallel processing within a single job is allowed, see the previous point for requirements.
  3. Wondering how media sets are created with parallel processing? The rule is actually very simple: each drive keeps its own media set. However, this may create challenges in some setups. Consider this example: a customer opened a support ticket because Veeam Backup & Replication was using an excessive amount of tapes, yet the customer could see that many of the tapes were almost empty. After investigation, it was found that the customer had a media pool set to create a separate media set for every session. Initially that worked well because the amount of data for each session fit a set number of tapes almost perfectly. Then another drive was added, and the customer started using parallel processing. Now the data was spread across two sets of tapes. Since the media set was finalized after every session, Veeam Backup & Replication could not continue writing to half-empty tapes on the next session, so the customer saw an unacceptable increase in tape usage. The customer had to choose between 1) managing source data so that it fits better on two sets of tapes (a rather unreliable solution), 2) setting the media set to be continuous, or 3) disabling parallel processing.

Two or more media sets created in parallel can cause some confusing naming. Customers sometimes complain when they see that several tapes have an identical sequence number and were written at the same time. This is actually normal and does not cause any issues. To differentiate the tapes better, we recommend adding the %ID% variable to the media set name.

Tape GFS improvements

Before diving into tape GFS improvements, I would like to take a moment to review the default tape GFS workflow. On the day a GFS run is scheduled, the job starts at 00:00 and begins waiting for a new point from the source job. If this new point is incremental, tape GFS will generate a virtual synthetic full on tape. If the point is a full backup, it will be copied as is. The tape GFS job will keep waiting for up to 24 hours, and if no new point has arrived by then, tape GFS will fail over to the previous point available. This creates a situation with backup copy jobs: because of a BCJ’s continuous nature, the latest point remains locked for the duration of the copy interval. Depending on the BCJ’s settings, tape GFS may have to wait a full day before failing over to the previous point. If the GFS job has to create points for several intervals on the same day (for example, weekly and monthly), tape GFS will not copy two separate points; instead, a single point will be used as both weekly and monthly. Finally, each interval keeps its own media set. For example, if a tape was used for a weekly point, it will not be used to store a monthly point unless erased or marked as free. This baffles a lot of users — they see an expired tape in the GFS media pool, yet the GFS tape job seems to ignore it, asking for a valid tape. This whole default workflow can be tweaked using registry values, which support has used to great effect.
So, what improvements does Update 4 have to offer? First, two registry value tweaks have now been added to the GUI:

  1. You can now select the exact time when the tape GFS job should start (overriding the default 00:00 time) on the Schedule step.
  2. Now you can tell the tape GFS job to fail over immediately to the previous point (instead of waiting for up to 24 hours), if today’s point is not available. The option is found under Options – Advanced – Advanced tab:
  3. A new registry value has also been added, which allows the tape GFS job to use expired tapes from any media set, and not only the one that belongs to a particular interval. Please contact support if you think you need this.

GFS parallel processing

If you look at the previous version’s user guide page for the GFS media pool, you will be greeted by a box saying that parallel processing is not supported for tape GFS. This is no longer true — Update 4 added parallel processing for tape GFS jobs as well. The workflow is the same as for normal media pools, described in the first section. Be wary of possibly elevated tape usage when combining GFS with parallel processing, as such a setup requires many media sets to be created. As a rule of thumb, you can estimate the minimum number of tapes with this formula:
A (media sets configured) x B (drives) = C (tapes used)
For example, a setup with all 5 media sets enabled and 2 drives will require a minimum of 10 tapes.

Daily media set

A major addition to tape GFS functionality is the “daily media set.” It allows users to combine GFS and normal tape jobs into a single entity. If you’re using GFS media pools for long-term archiving, but also use normal media pools to keep a short incremental chain on tape, consider simplifying management by using the daily media set instead of a separate job. You can enable this in the GFS media pool settings:

The daily media set requires the weekly media set to be enabled as well. With a daily media set, the GFS job will check every day whether the source job created any full or incremental points and will put them on tape. If the source job creates multiple points, they will all be copied to tape. This fills the gap between weekly GFS points and allows restores from an incremental point. During a restore, Veeam Backup & Replication will ask for the tapes containing the closest full backup (most likely a weekly GFS point) and the linked increments. Mind that, just as with other backup to tape jobs, the daily media set does not copy rollbacks from reverse incremental jobs. Only the full backup, which is always the most recent point in a reverse incremental job, will be copied.
To give you a practical example, imagine the following scenario:
You have a forever forward incremental job that runs every day at 5:00 AM and is set to keep 14 restore points.
For tape, you can configure a GFS job with the daily media set keeping 14 days, the weekly media set keeping 4 weeks, and the desired number of months, quarters and years. The daily media set can be configured to append data and not to export the tape. Weekly and older GFS intervals do not append data and export the tapes after each session. That way, tapes containing incremental points are kept in the library and rotated according to the retention policy, while tapes containing full points are brought offline and can be stowed in a safe vault.

Should you need to restore, Veeam Backup & Replication will issue a request to bring online a tape containing the weekly GFS point, and will use the tape from the daily media set to restore the linked increments.

WORM tapes support

WORM stands for Write Once Read Many. Once written, this kind of tape cannot be overwritten or erased, making it invaluable in certain scenarios. Due to technical difficulties, previous versions of Veeam Backup & Replication did not have support for WORM tapes.
Working with WORM tapes is not that different from working with normal tapes. The first step is to create a special WORM media pool. As usual, there are two types: normal and GFS. There is only one special thing about a WORM media pool — the retention policy is grayed out. Next, you need to add some tapes. This is probably where the majority of issues arise: WORM tapes get recognized as normal tapes, and vice versa. Remember that everything WORM-related has a blue-colored icon. Veeam Backup & Replication identifies a WORM tape by its barcode or during inventory. Be sure to consult the documentation for your tape device and use correct barcodes! For example, IBM libraries consider the letters V through Y in the barcode an indication of a WORM tape. Using them incorrectly will create confusion in Veeam Backup & Replication.

NDMP backup

In Update 4, it is possible to back up and restore volumes from NAS devices that support the NDMP protocol. The first step (which sometimes gets overlooked) is adding an NDMP server. This is done from the Inventory view; our user guide provides the details.
NOTE: Mind the requirements and limitations, as the current version has several important constraints. For example, NetApp Cluster Aware Backup (CAB) extensions are not supported for NDMP backup in Update 4b. The workaround is to configure node-scoped NDMP.
After the NDMP server is added to the infrastructure, it can be set as the source of a normal file to tape job. The restore, as with other files on tape, is initiated from the Files view. Keep in mind that for NDMP, Veeam Backup & Replication currently does not allow restores of individual files from volumes. Only the whole volume can be restored, to the original or to a new location.

Tape selection mechanism

Update 4 added a new metric for used tapes: the number of read/write sessions. This metric is shown in tape properties, under the Wear field:

Once a tape has been reused the maximum number of times (according to the tape specs), it will be moved to the Retired media pool. However, this is not new — previous versions of Veeam Backup & Replication also tracked warnings from tape devices indicating that a tape should no longer be used. What is new is that Veeam Backup & Replication Update 4 tries to balance usage and, among other things, will select the tape that has been used the fewest times. Here is a neat little scheme to help you remember the process:

File from tape restore improvements

The logic for restoring folders from tape saw some improvements in Update 4. Now, when a user selects a specific backup set to restore from, only the files that were present in the folder during that backup session are restored. The menu itself did not change, only the logic behind it. As before, in Update 4 you start the restore wizard (from the Files view, as always) and click the Backup Set button to choose the state to restore to.

Tenant to tape

This feature allows service providers to offer a new type of service — writing clients’ backup data to tape. All configuration is done on the provider side: if Veeam Backup & Replication has a cloud provider license installed, the new job wizard will allow the addition of a tenant as a source for the job. Such jobs can only work with GFS media pools. Restore is also done on the provider side — the options are to 1) restore files to the original cloud repository (existing files will be overwritten, and the job will be mapped to the restored backup set), 2) restore to a new cloud repository, or 3) restore to a local disk. Finally, if the customer has their own tape infrastructure, it’s also possible to simply mail the tapes and avoid any network traffic.

Other improvements

Tape operator role. A new user role is now available. Users with this role can perform any service actions with tapes, but not initiate a restore.
Source processing order. You can now change the ordering of how sources should be processed, just like VMs for backup jobs.
Include/exclude masks for file to tape jobs. In previous versions, file to tape jobs could only use inclusion masks. Update 4 adds exclusion masks as well. This works only for files and folders; NDMP backups are still done at the volume level.
Auto eject for standalone drives when the tape is full. This is a small tweak in the tape logic when working with a standalone tape drive. If a tape gets filled up during a backup session, it will be ejected from the drive so that a person near the drive can insert another tape. This also helps protect against ransomware, as the ejected tape is offline.

Conclusion

The number of new tape features this update brought clearly shows that tape remains a focus of product management. I encourage you to explore these new capabilities and to consider how they can make your backup practice even better.


How to backup Kubernetes Master Node configuration

Veeam Software Official Blog  /  David Hill


As the use of Kubernetes grows significantly in production environments, backing up the configuration of these clusters is becoming ever more critical. In previous blogs, we have talked about the difference between stateful and stateless workloads, but this blog focuses on backing up the cluster config, not the workloads.
When running a Kubernetes cluster in an environment, the Kubernetes Master Node contains all the configuration data including items like worker nodes, application configurations, network settings and more. This data is critical to restore in the event of a master node failure.
For us to understand what we need to back up, first we need to understand what components Kubernetes needs to operate.
Kubernetes etcd
In Kubernetes, etcd is one of the key components. It serves as Kubernetes’ backing store: all cluster data is stored here. etcd is an open-source key-value store used for persistent storage of all Kubernetes objects, like deployment and pod information. etcd can only run on a master node. This component is critical for backing up Kubernetes configurations.
Another key component is the certificates. By backing up the certificates, we can easily restore a master node. Without the certificates, we would need to recreate the cluster from scratch.
We can back up all the key Kubernetes Master Node components using a simple script. I got the basics of the script from this site.

  #K8S backup script
  # David Hill 2019

  # Backup certificates
  sudo cp -r /etc/kubernetes/pki backup/

  # Make etcd snapshot
  sudo docker run --rm -v $(pwd)/backup:/backup \
    --network host \
    -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd \
    --env ETCDCTL_API=3 \
    k8s.gcr.io/etcd-amd64:3.2.18 \
    etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
    snapshot save /backup/etcd-snapshot-latest.db

The script above does two things:

  1. It copies all the certificates
  2. It creates a snapshot of the etcd keystore.

These are all saved in a directory called backup.
After running the script, we have several files in the backup directory. These include certificates, snapshots and keys required for Kubernetes to run.
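Before trusting that snapshot, it’s worth a quick integrity check. A minimal sketch, reusing the same etcd image as the script above:

  # Optional sanity check: print the snapshot's hash, revision, key count and size
  sudo docker run --rm -v $(pwd)/backup:/backup \
    --env ETCDCTL_API=3 \
    k8s.gcr.io/etcd-amd64:3.2.18 \
    etcdctl snapshot status /backup/etcd-snapshot-latest.db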
Kubernetes Master Node Configuration
We now have a backup of the critical Kubernetes Master Node configuration. The issue is that this data is all stored locally. What happens if we lose the node completely? This is where Veeam comes in. By using Veeam Agent for Linux, we can easily back up this directory and store it in a different location, protecting this critical data and managing it across multiple locations, like a scale-out backup repository or the cloud tier leveraging object storage.

Veeam Agent for Linux

When configuring a backup job, we only want to back up the directory where the Kubernetes configuration data is stored. Running the script above stores all that data in /root/backup, which is the directory we are going to back up in this example.
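Since the agent can only back up what the script has already written, it makes sense to schedule the script ahead of the backup window. A hypothetical cron entry (the script path, log path and times are examples):

  # Refresh the Kubernetes config backup nightly at 01:00, before the agent job runs
  0 1 * * * /root/k8s-backup.sh >> /var/log/k8s-backup.log 2>&1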
Walking through the backup job for our master node, the two options we must select are File Level Backup and the directory to back up:
Edit Agent Backup Job K8S Infrastructure Backup
Once the job has run, we can open the backup and check to be sure all the files we requested to be backed up are included.
K8S-Master as of less than a day ago
We now have a successful backup of our Kubernetes Master Node configuration. We can offload this backup to an object storage repository for off-site backup storage.
In the next blog, the topic will be restoring this configuration data in the event of a failure or the loss of a master node.

8 lessons from an Office 365 backup customer

Veeam Software Official Blog  /  Russ Kerscher


A few months ago, one of our customers presented at VeeamON and gave fellow IT Pros an unedited, unfiltered take on using Office 365 backup.
This customer migrated to Office 365 almost two years ago, added Veeam Backup for Microsoft Office 365 to protect their data, went through a major company acquisition where they inherited heavy SharePoint Online and OneDrive for Business usage, and had to manage an exponential increase in Office 365 data. They had been through quite a bit and had a lot of lessons to share.
One slide was so good, we’ve provided a screenshot and a summary below. The hope is you can pick up one or two other things to help with managing or optimizing your current or future Office 365 backup infrastructure.

A simple slide but packed with over 18 months of experience. Here’s a summary of what they had to say.

#1 When your environment is changing, change with it!

When they originally migrated to Office 365, they had about 2,200 users and 40TB of data. Their retention policies were set to 15 months for all of those users, and their organization only utilized Exchange and Skype. About six months later, they had to massively scale their environment due to a company acquisition that added 25% more users and, more importantly, many more applications from the Microsoft stack (Teams, OneDrive, SharePoint, Dynamics, Power BI), along with users that had unlimited retention settings. They were now looking at an extended use case involving tens of TBs of data (some structured, some not) that they had to size appropriately in order to successfully back it all up.

#2 Watch proxy CPU/memory utilization. Don’t skimp on resources.

Prior to the acquisition, they started with a simple backup configuration: one VBO backup job, one proxy server and a simple SMB share repository on a storage server. While this was a good way to start, it wasn’t going to cut it long term, especially with the acquisition on the horizon. Soon they scaled to five proxies, strategically spread across different portions of the Office 365 environment. Breaking this out ensured that SharePoint wasn’t blocked by Exchange, for example, and they could apply more resources to Exchange, since it held the bulk of the data, to ensure it completed fast. This was a big lesson learned, and after this optimization the backups as a whole started performing much better.

#3 Proxy threads are important!

How much work the proxy server does at a given time is an important configuration, since increasing or decreasing the number of threads can significantly change performance. They strongly encouraged other organizations to fine-tune proxy threads, as it ensures you’ll see better performance. If the proxy is doing the work seamlessly, it should move more data, faster.

#4 Protect the JET database used as repository

The JET Blue database isn’t just a simple backup item that sits in a company’s repository; it’s a full archive database storing the Exchange Online, SharePoint Online, OneDrive for Business and Microsoft Teams data retrieved from a cloud-based instance of Office 365. It still needs to be protected from corruption on disk, as well as from ransomware events and other security threats. If you lose the JET database, you lose your backups. So, what they did was protect the JET database itself with Veeam Backup & Replication. This can even be done with the free version, Veeam Backup & Replication Community Edition.

#5 Veeam support is an invaluable resource

The value of Veeam support cannot be overstated. When they needed advice on planning and scaling due to data growth and the company acquisition, they reached out to the Veeam support team. Not only did the team help here, but they also provided some tips and tricks for improving general, day-to-day maintenance.

#6 Watch for bottlenecks in your infrastructure, Veeam will expose them!

If your storage is slow, Veeam will make sure to call it out! Veeam can tell you if the storage repository is underperforming, or if the source infrastructure is the bottleneck (which shows up as latency in responses from the Office 365 infrastructure). Most of the time, you are basically waiting on the cloud itself to process the data. It’s not just about adding a backup; it’s again about evolving and improving your strategy as your environment changes.

#7 Office 365 resources are not unlimited

In Office 365, it’s important to remember that resources are not unlimited. With on-prem Exchange, it’s easy to move data quickly. But in Office 365, data can only be moved as fast as Microsoft will let you, and they can even throttle you down. Another tip the customer learned while doing the tenant-to-tenant migration during the acquisition: if you ask Microsoft via a case in Office 365’s support portal, they can temporarily remove some of the throttling constraints on your tenant so you can move data faster.

#8 Test your backups!

This one is simple, but often overlooked. It’s critically important to test your backups on a regular basis. An untested backup is as good as no backup at all! This is talked about quite a bit with Veeam Backup & Replication but applies to Office 365 backup as well.

There you have it – 8 great lessons learned from an Office 365 backup customer. If you have comments or questions, please add them below. If you’re new to Office 365 backup, you can download a FREE 30-day trial and see for yourself, read the 6 Reasons for Office 365 Backup or study the Office 365 Shared Responsibility Model.

SAP HANA tenant database system copy and recovery

Veeam Software Official Blog  /  Clemens Zerbe

Navigation:

Part 1 — 3 steps to protect your SAP HANA database
Part 2 — How to optimize and backup your SAP HANA environment
Part 3 — SAP HANA tenant database system copy and recovery

 

Throughout this blog series dedicated to the Veeam Plug-in for SAP HANA, I have highlighted the challenges SAP administrators face in maintaining reliability and minimizing downtime. What we have not yet had the opportunity to discuss is how the demand for data preservation has increased precipitously in recent years.

This could be due to regulations or the need to preserve critical records. Now businesses are grappling with maintaining data to leverage it for long-term strategic business decisions, testing, or even monetizing it. For the final blog in this series, I will provide you with answers to typical customer questions I receive before or when enabling the Veeam Plug-in for SAP HANA.

How to plan for long-term retention requirements and restore scenarios

The biggest challenge regarding this topic is how to restore data without knowing much about the backup. You might sit in front of an empty SAP HANA database without any catalog entries, on a different host, or even on a different version of SAP HANA.

When performing a restore, the same version of SAP HANA or later is required; hence, I highly recommend documenting the HANA version and consulting SAP’s HANA Update table. Please also consult any related SAP documentation before performing a restore to a different version of SAP HANA to prevent unnecessary challenges.

In most cases you will have a retention policy of several days or even weeks, and customers typically delete older backups (you may recall we discussed proceeding carefully in the first blog of this series). You may therefore encounter a missing catalog entry for the restore point you need. Depending on your retention policy, you might even have already deleted all the <EBID>.vab files from your repository.

To make such a backup recoverable, you need to proceed cautiously, and creating a proper backup is a good start. In my example I only cover the tenant, but the way of creating and restoring the database is the same for the SYSTEM DB — and you should always have both backups available.

Create a backup of tenant database

Please refer to the first blog in this series to create a backup with SAP HANA Studio if needed and document it. This will be important during the restore process.

At Veeam, we often stress the importance of naming conventions for tags, backup jobs and other related information, so that it’s intuitive to others. After clicking Next and Next again, the backup will be created. A recommended practice: copy the backup log into your backup documentation for this long-term backup.
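If you prefer the SQL console over SAP HANA Studio, the equivalent could look like the following sketch (the instance number, user, tenant name and backup prefix are placeholders for your environment):

  # Hypothetical: create a named Backint backup of a tenant from the system database
  hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p '<password>' \
    "BACKUP DATA FOR MYTENANT USING BACKINT ('LONGTERM-2019')"

The prefix inside BACKINT ('…') becomes part of the backup name, which is exactly what you will need again at restore time.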

You will now find your recently created backup in the backup catalog and can document the EBID numbers for your backup services.

All EBIDs have corresponding <EBID>.vab files in our repositories. Go to your repository, find these files, and copy them into your long-term vault (e.g., file to tape or a long-term S3 bucket).

It is important to note that if these <EBID>.vab files are deleted from the repository through backint, a restore will no longer be possible. To avoid this, you MUST delete this specific data backup from the catalog before your retention policy gets to it. The next screenshot shows how to delete a specific data backup from the catalog: this removes only the catalog entry for this specific data backup, while the files remain in the repository.

The Catalog and Backup Location option, by contrast, would delete the <EBID>.vab files inside the repository — which we wish to avoid.

After refreshing the catalog, you’ll see it’s gone forever from the SAP HANA catalog. Check the file system for the repository and you’ll still find the files.

Now you can copy/move/archive these <EBID>.vab files with file to tape or other tools.
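For example, pushing a documented backup file into a long-term S3 bucket could look like this (the repository path, file name and bucket are placeholders; file to tape or any other archiving tool works just as well):

  # Hypothetical: archive one documented backup file to a long-term bucket
  cd /backups/veeam_repo            # your Veeam repository folder
  aws s3 cp "<EBID>.vab" s3://longterm-vault/hana/PRD/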

Recovery of tenant database

To recover it, please follow the steps below:

  1. Copy the <EBID>.vab files back to your original repository folder.
  2. Prepare to restore the data backup using the backup prefix you documented.
  3. Start a recovery of your tenant and select Recover the database to a specific data backup.

Click next and choose Recover without the backup catalog.

Now is the moment where we’ll be glad we documented the original backup. Have your Backup Prefix ready.

Click Next — you don’t have many other options to restore your database.

Make sure to check the summary — review the SQL statement if needed — and press Finish.

Tenant database System Copy with Veeam

The next question I often receive is how to create an SAP HANA System Copy with Veeam Backup & Replication. System copies are one of the often-used SAP Basis processes to create DB copies for your Quality Assurance, Development or Sandbox Systems. Keep in mind this is only the first step of this process — the database copy. Please consult SAP documentation on further steps like system renaming and cleanups.

Please follow these steps to make this happen:

Create a backup of the tenant database — see above how to create it with a meaningful name, and document your backup properly.

This is a backup of my S/4 production system.

Before I can restore this tenant on my S/4 QA system, I need to tell my secondary HANA system to search within a different job entry via the command-line interface (CLI). Keep in mind you need access rights to the relevant repositories to see the other SAP HANA system entries.

 

sles15hana04:/opt/veeam/VeeamPluginforSAPHANA # SapBackintConfigTool --help
  --help                Show help
  --show-config         Show configuration parameters
  --wizard              Start configuration wizard
  --set-credentials arg Set credentials
  --set-host arg        Set backup server
  --set-port arg        Set backup server port
  --set-repository      Set backup respository
  --set-restore-server  Set source restore server for system copy processing
  --map-backup          Map backup

sles15hana04:/opt/veeam/VeeamPluginforSAPHANA # SapBackintConfigTool --set-restore-server
Select source SAP HANA plug-in server to be used for system copy restore:
1. sles15hana04
2. sles15hana03
3. sles15hana02
4. sles15hana01
Enter server number: 2
Available backup repositories:
1. Default Backup Repository
2. SOBR1
3. w2k19repo_ext1
4. sles15repo_ext1
Enter repository number: 2
sles15hana04:/opt/veeam/VeeamPluginforSAPHANA #

 

This set of commands tells the Veeam Backup & Replication SAP Backint client to search inside the sles15hana03/SOBR1 backup job to find this specific backup. This has no impact on your running backups. If you are not able to modify the configuration, check your user permissions and the file permissions on /opt/veeam/VeeamPluginforSAPHANA/veeam_config.xml

The next step is to perform a restore on your copy system. While this seems easy, I recommend reviewing the comments below for important steps you may not be familiar with.

Select the tenant you want to restore to.

Recover the database to a specific data backup — the one we just created.

Select Recover without backup catalog and enable Backint System Copy. It is important to add the Source system with tenant@SID naming.

Now add your Backup Prefix:

No other options are possible so click Next.

Review the summary before initiating the restore.

Now SAP HANA Studio will shut down this specific tenant and start the recovery.

If everything is going well, it should look like this and begin the recovery process.

The tenant will restart.

And thankfully, the HANA system copy process is done. Just keep in mind this was only the database copy; additional SAP tasks, such as SAP system renaming, can now begin.

As a result, I created an SAP HANA DB system copy of my production S/4 database into a new HANA system with a new SID.

Closing thoughts

I hope you enjoyed this blog series and are now better prepared to leverage Veeam to more effectively back up, restore and protect your SAP HANA environment. As SAP S/4HANA is the market-leading intelligent ERP solution with over 11,000 customers, I’m thrilled Veeam Backup & Replication 9.5 Update 4 offers an SAP Backint certified solution to help customers minimize disruption and downtime for mission-critical applications like your SAP system. I highly encourage you to check out this capability, along with the Veeam Plug-in for Oracle RMAN, another great feature in Update 4.


How to copy AWS EMR cluster hive scripts

Veeam Software Official Blog  /  Rick Vanover

I recently had a situation where I was discussing a scenario with Veeam Backup & Replication deployed completely in the public cloud. In fact, many organizations have the Veeam Backup & Replication server in the cloud and are managing either AWS EC2 or Azure VM backups with Veeam Agent for Microsoft Windows and Veeam Agent for Linux.

In this situation, there was a discussion about some other services in the cloud — in particular, AWS EMR (Elastic MapReduce) and how to manage the hive scripts that are part of the EMR cluster. I offered a simple solution:

Veeam File Copy Job

Since the Veeam Backup & Replication server is in the public cloud, the EMR cluster can be inventoried. This means it is visible in the Veeam infrastructure. Generally speaking, the Linux requirements for a repository or other Linux functions are documented here, but mainly, Perl and SSH are what is needed to add the EMR cluster to Veeam Backup & Replication. This is shown in the figure below:

 

The second entry, ec2-54-196-190-21.compute-1.amazonaws.com, is the EMR cluster’s master public DNS entry. From there, it is visible for some basic interaction with Veeam. This reminds me of a challenge I had nearly 10 years ago, when I had a physical server with millions of files that I needed to move and the Veeam FastSCP engine saved the day.
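As a side note, you can quickly confirm those SSH and Perl prerequisites before adding the master node (a sketch; the key file is a placeholder, and hadoop is the default EMR login user):

  # Verify SSH access and a Perl interpreter on the EMR master node
  ssh -i <your-key>.pem hadoop@ec2-54-196-190-21.compute-1.amazonaws.com 'perl -v'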

Back to the EMR cluster: when you have hive scripts inventoried in EMR, Veeam can copy those off to another system so that you have a copy of them. It’s important to note that this is not a backup, but a file copy job. Nonetheless, it can be helpful if you need a copy of your EMR scripts on another system. In the screenshot above, I could copy them to the ec2-18-206-235-47.compute-1.amazonaws.com host, as that is a Linux EC2 instance and I have Veeam Agent for Linux on that system. So, I could have a “proper” backup as well.

One other nice feature here is that I can use the Veeam text editor (what used to be the FastSCP Editor). Simply right-click your hive script and select Edit; this may be easier than dusting off your vi skills:

 

Then from there, I can build the file copy job. The file copy job is a simple copy with an optional schedule: you select a source and a target, and it does just what it sounds like. It’s handy in that you can coordinate its schedule with other jobs you may already have in the public cloud. In fact, that’s exactly what I have done in the example below: the file copy job places the hive scripts and some other folders I care about from my EMR cluster onto a system I’m backing up with Veeam Agent for Linux, scheduled to run right after the Linux agent job. This is a view of the file copy job:

 

Have you ever used the file copy job? It may be handy for services such as EMR clusters that you need to copy files to or from. You can use the file copy job in the free Veeam Community Edition, download it now and try for yourself.


How policy based backups will benefit you

Veeam Software Official Blog  /  Michael Cade

With VMworld 2019 right around the corner, we wanted to recap some of the powerful things VMware has in its armoury and discuss how Veeam can leverage them to enhance your Availability.

This week VMware announced vSAN 6.7 Update 3. This release has a heavy focus on simplifying data center management while improving overall performance. A few things that stood out to me in this release:

  • Cleaner, simpler UI for capacity management: 6.7 Update 3 adds color-coding, a consumption breakdown and usable capacity analysis, allowing administrators to plan capacity and understand consumption more easily.
  • Storage Policy changes now occur in batches. This ensures that all policy changes complete successfully, and free capacity is not exhausted.
  • iSCSI LUNs presented from vSAN can now be resized without the need to take the volume offline, preventing application disruption.
  • SCSI-3 persistent reservations (SCSI-3 PR) allow for native support for Windows Server Failover Clusters (WSFC) requiring a shared disk.

Veeam is listed in the vSAN HCL for vSAN Partner Solutions and can protect and restore VMs. Certification for the new Update 3 release is also well on its way to completion.

Another interesting point to mention is Windows Server Failover Clusters (WSFC). While their shared disks are seen as VMDKs, they cannot be processed by the vSphere APIs used for data protection tasks. This is where the Veeam Agent for Microsoft Windows comes in, with the ability to protect those failover clusters in the best possible way.

What is SPBM?

Storage Policy Based Management (SPBM) is the vSphere administrator’s answer to controlling storage within their environment. This framework allows them to overcome upfront storage provisioning challenges, such as capacity planning and differentiated service levels, and to manage capacity resources in a much more efficient way. All of this is achieved by defining a set of policies within vSphere for the storage layer. These storage policies streamline VM provisioning by provisioning specific datastores at scale, which in turn removes the headaches between vSphere admins and storage admins.

However, this is not a closed group between the storage and virtualisation admins. It also allows Veeam to hook into certain areas to provide better Availability for your virtualised workloads.

SPBM spans all storage offerings from VMware, traditional VMFS/NFS datastore as well as vSAN and Virtual Volumes, allowing policies to overarch any type of environment leveraging whatever type of storage that is required or in place.

What can Veeam do?

Veeam can leverage these policies to better protect virtual workloads by utilising vSphere tags on existing and newly created virtual machines, with specific jobs set up in Veeam Backup & Replication using the schedules and settings required to meet the SLA of those workloads.

Veeam will back up any virtual machine that has an SPBM policy assigned to it, protecting the data as well as the policy itself, so if you have to restore the whole virtual machine, the policy will be available as part of the restore process.

Automate IT

Gone are the days of the backup admin adding and removing virtual machines from a backup job, so let’s spend time on the interesting and exciting things that provide much more benefit to your IT systems investment.

With vSphere tags, you can create logical groupings within your VMware environment based on any characteristic that is required. Once this is done, you can import those tags into Veeam Backup & Replication and create backup jobs based on them. You can also create your own set of vSphere tags to assign to your virtual machine workloads based on how often you need to back up or replicate your data, providing a granular approach to the Availability of your infrastructure.
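As a rough sketch of what such a grouping could look like from the command line (using the open source govc tool; the category and tag names and the VM path are examples to match against tag-based jobs in Veeam Backup & Replication):

  # Assumes GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD are already set.
  # Create a tag category and a tag, then attach the tag to a VM.
  govc tags.category.create BackupPolicy
  govc tags.create -c BackupPolicy Tier-1-Daily
  govc tags.attach Tier-1-Daily /Datacenter/vm/app01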

VMware Snapshots – The vSAN way

In vSAN 6.0, VMware introduced vSAN sparse snapshots. The snapshot implementation for vSAN provides significantly better I/O performance. The good news for Veeam customers is that whether you are using traditional VMFS snapshots or the newer vSAN sparse snapshots, the display and output are the same — a backup containing your data. The performance and methodology benefits of the sparse snapshot approach are considerable and can play a huge role in meeting your backup windows.

The difference between the “traditional” snapshot methodology and the one that both vSAN and Virtual Volumes leverage is that a traditional VMFS snapshot uses redo logs which, when working with high-I/O workloads, can cause performance hits when committing those changes back to the VM disk. The vSAN way is much more similar to a shared storage system and a copy-on-write snapshot. This means there is no commit after a backup job has released a snapshot, so I/O can continue to run as the business needs.

There are lots of other integrations between Veeam and VMware, but I feel this is still the number one touch point where vSphere and backup admins can really make their lives easier: policy-based backups using Veeam.


First Look: On‑Demand Recovery with Cloud Tier and VMware Cloud on AWS

Since Veeam® Cloud Tier was released as part of Veeam Backup & Replication™ 9.5 Update 4, Anthony Spiteri, Senior Technologist, Product Strategy, has written a lot about how it works and what it offers in terms of offloading data from more expensive local storage to what is fundamentally cheaper remote Object Storage. As with most innovative technologies, dig a little deeper and different use cases start to present themselves and unintended use cases find their way to the surface. Read this blog from Anthony Spiteri to learn more.

Helping customers migrate from vSphere 5.5

What happens when customers are still running vSphere 5.5 in production? Matt Lloyd and Shawn Lieu sat down with VMware solution architect Chris Morrow to discuss upgrading to vSphere 6.5 or 6.7, management changes in the newer releases, leveraging Veeam DataLabs™ for validation and more.

5 pro tips for recovering applications quickly and reliably

Veeam Software Official Blog  /  Sam Nicholls

Today’s apps consist of multiple component tiers with dependencies both inside the app and outside (on other applications). While separating these apps into logical operational layers yields many benefits, it also introduces a challenge: complexity.
Complexity is one of many enemies when it comes to recovery, especially as service level objectives (SLOs) and tolerances for downtime become more and more aggressive. Traditional approaches to disaster recovery (DR) — manual processes — just cannot keep up with these ever-increasing expectations, ultimately exposing the business to greater risk and longer periods of costly downtime, not to mention potential infringement of compliance requirements.
Orchestration and automation are proven solutions for recovering entire apps more efficiently and quickly, mitigating the impact of an outage which, as Agent Smith puts it, is inevitable.

I wanted to share with you some tips, tricks, best practices and some of the tools that will help you be more successful and efficient when building and executing a DR plan that will recover entire apps quickly and reliably.

Discover and understand applications

After the intro to this blog, it almost needn’t be said, but it’s critical to take an application-centric mindset to DR and plan based on business logic, not just VMs. Involve application owners and management to understand the app and build your plans accordingly. Which VMs make it work? Are they tiered? Do those VMs need to be powered on in a specific sequence, or can they be powered on simultaneously? How can I confirm that the plan works and the app is running? These are just a few questions you need to ask to get started. Understanding the answers, documenting them and building your strategy on that business logic is essential to successful DR planning.

Utilize vSphere tags

Once you understand those apps, utilize VMware vSphere tags to categorize them for a more policy-driven DR practice. This is critical, as changes to apps that have not been identified and implemented into a plan are among the most common causes of DR plan failure. For example, when using tags and Veeam Availability Orchestrator, as an app changes (such as a VM being added), those VMs are automatically imported into orchestration plans and dynamically grouped based on the metadata in the assigned tag. The recoverability of this new app configuration is then determined when the scheduled automated test executes, with the outcome subsequently documented. If it succeeds, you can be confident that the app is still recoverable despite the changes. If it fails, you have the actionable insights you need to proactively remediate the failure before a real-world event.

Optimize recovery sequences

Recovering the multiple VMs that make up an app is traditionally a manual, one-by-one process, and that’s a problem. Manual processes are inefficient, lengthy, error-prone and do not scale. Especially when we consider large apps, or those that are tiered with dependencies where the VMs need to be powered-on in perfect sequence. Utilizing orchestration eliminates the challenges of manual processes while optimizing the recovery process, even when a single plan contains multiple VMs that require differing recovery logic.
For example, the scenario below illustrates a traditional three-tiered app where, in a recovery, we must power on our database VMs, then the application VMs, and finally the web farm VMs. Some of these VMs can be recovered simultaneously, while others must be recovered sequentially due to the aforementioned dependencies, not to mention differing recovery steps for different types of VMs. Through orchestration, we can drastically optimize the recovery process and its efficiency while also greatly improving the time it takes to recover the application. Within the orchestration plan, we can ensure that each VM has the relevant plan steps applied (e.g., scripting), that certain groups of VMs are recovered simultaneously where applicable, and that there is next to no latency between plan steps, like the sequential recovery of VMs. This gives the plan speed, and that’s exactly what we want in a disaster.

To account for disasters that are more widespread, we could build multiple applications into a single orchestration plan, although this isn’t necessarily advisable: in the event of an outage confined to a single app, we only want to fail over that single app, not all of them. Instead, build orchestration plans on a per-app basis, and then execute multiple orchestration plans simultaneously in the event of a more widespread outage. If the infrastructure at the DR site is sized and architected appropriately, you can test and optimize the recovery process for multiple apps and gauge how many orchestration plans your environment is capable of executing at once.

Leverage scripting

Building on the above, the ability to orchestrate and automate scripts will further reduce manual processes. Instead of manually accessing the VM console to check whether the VM or application has been successfully recovered, orchestration tools can execute that check for you, e.g., ping the NIC to check for a response from a VM, or check VMware Tools for a heartbeat. While this is useful from a VM perspective, it doesn’t confirm that the application is working. A tool like Veeam Availability Orchestrator features expansive scripting capabilities that go beyond VM-based checks. It not only ships with a number of out-of-the-box scripts for verifying apps like Exchange, SharePoint, SQL and IIS, but also allows you to import your own custom PowerShell scripts for ultimate flexibility when verifying other apps in your environment. No two applications are the same, and the more flexibility you have in scripting, the more precise and optimal you can be in recovering your apps.
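To illustrate the idea of an application-level check (independent of any particular product; the endpoint is hypothetical), such a script might probe the service itself rather than the VM:

  #!/bin/bash
  # Probe the application endpoint, not just the VM: a heartbeat or ping can
  # succeed while the service itself is down. The URL is a placeholder.
  if curl -fsS --max-time 10 "http://app01.example.com/health" > /dev/null; then
    echo "application responding"
  else
    echo "application check failed" >&2
    exit 1
  fi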

Test often

Frequent and thorough testing is essential to success when facing a real-world DR event for multiple reasons:

  • When tests are successful, it’ll deliver surety and confidence that the business is resilient enough to withstand a DR event, verifying the recoverability and reliability of plans.
  • When tests fail, it enables the organization to proactively identify and remediate errors, and subsequently retest. It’s better to fail when things are OK than when they’re not.
  • Practice! Nobody is ever a pro the first time they do something. Simulating an outage and practicing recovery often develops skills and muscle memory to better overcome a real-world disaster.
  • Testing environments can also be utilized for other scenarios outside of DR, like testing patches and upgrades, DevOps, troubleshooting, analytics and more.

Guidelines on DR testing frequency can vary from once-a-year to whenever there is an application change, but you can never test too often. Veeam Availability Orchestrator enables you to test whenever you like as tests are fully-automated, available on a scheduled basis or on-demand, and have no impact to production systems or data.

Veeam can help

I’ve mentioned Veeam Availability Orchestrator a few times throughout this blog, delivering an extensible, enterprise-grade orchestration and automation tool that’s purpose-built to help you plan, prepare, test and execute your DR strategy quickly and reliably.
What’s more is that with v2, we delivered full support for orchestrated restores from Veeam backups — an industry-first. True DR is no longer just for the largest of organizations or the most mission-critical of apps. Veeam has democratized DR, making it available for all organizations, apps and data.
Whether you’re already a Veeam customer or not, I strongly recommend downloading the FREE 30-day trial. There are no limitations, it’s fully-featured and contains everything you need to get started. Have a play around, build your first plan for an application and test it. I almost guarantee you’ll learn something about your environment or plan that you didn’t know before.
The post 5 pro tips for recovering applications quickly and reliably appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/7lMWUop3i7w/application-disaster-recovery-tips-tricks.html

DR

vCD Tenant Backup Portal

vCD Tenant Backup Portal

CloudOasis  /  HalYaman

Veeam Enterprise Manager can be used by your VMware vCloud Director tenants to perform their own workload backup and restore. They can do this using the Veeam Self-Service Portal, and in this blog, I will show you how to give your tenants access to the Self-Service Portal without leaving the vCD portal?

Many Service Providers are impressed with the services offered by the Veeam Enterprise Manager Self-Service Portal; but also, many of them ask if there is a way to easily give the tenant access to the Enterprise Manager Self-Service Backup Portal without leaving the vCD.
Out of the box, Veeam doesn’t provide such a capability; but with several hours of coding, you can make this option possible. Don’t worry, I am not going to ask you to do any coding here, as I have already done that for you.
So let’s have a look at what was done to get this solution to work.
A month or so ago, I received a call from one of our Australian Vanguard members, (http://stevenonofaro.com/) wanting to discuss this requirement. He wanted to cooperate to build such a vCD plug-in. Working with his local development team, he was able to demonstrate a workable solution based on the code at this link: GitHub sample code.
The workable solution developed demonstrated was limited to one Web application portal. We agreed to release a version of the code that can be used by any Service Provider, and that can point to any Web Application. I will share here what I came out with after three weeks of research, coding, errors, and more.

Solution

To provide a community release version, it was necessary to rebuild the code from scratch. This was done in two stages; the first stage runs a simple application where the Service Provider needs to provide the Web Application URL; i.e., Veeam Enterprise Manager in this case. Then the plugin is generated as a .zip file, which the Service Provider uploads to the vCD Provider portal.
Let’s take a look at the steps:

  • Download the Plugin Generator tool:
To get started, you must download this file. The next step is to extract the .zip, and then change the extension to .exe to run it.
  • After renaming the file to .exe file and running it, you are greeted with the tool shown below, where you must provide the Web Application URL
Screen Shot 2019-06-28 at 8.41.37 pm
  • After entering the URL and pressing the Generate vCD Plugin Package button, a Zip file is generated. You upload this file to the vCD provider portal.

vCD Provider Portal

To upload the vCD plugin, you browse your vCD admin portal using as the following: https://vCDURL.domain/provider and then:

  • Browse to the customize portal:
Screen Shot 2019-06-28 at 9.10.47 pm
  • Chose to upload, then select the .zip file. By default, the zip file is generated with Cloudoasis.zip as a file name.

Tenant Side

The tenant will access the link by selecting Backup Portal from the tenant menu:

Screen Shot 2019-06-28 at 9.16.25 pm

When selecting Backup Portal, the tenant is directed to the Web Application URL specified by the vCD plugin generator. In this example, the portal points to the Veeam Enterprise Manager Self-Service Backup Portal:

Screen Shot 2019-06-28 at 9.19.21 pm

Veeam Enterprise Manager Preparation

You must complete the following steps to allow the Veeam Enterprise Manager Self-Service Portal to integrate into the vCD:
On the Enterprise Manager Server, open IIS. Then select the VeeamBackup site. The last step, click on the HTTP Response Headers icon.

Screen Shot 2019-06-28 at 9.41.46 pm.png

At this point, you must update the X-Frame-Options and enter your Enterprise Manager URL. It will look like this:
“allow-from https://<Enterprisemanagerurl>:9443/”
Screen Shot 2019-06-28 at 9.42.58 pm.png
Finally, restart IIS.

Summary

With this small community tool, I hope to help the Service Provider community make it easy for their tenants to open the Veeam Enterprise Manager Self-Service Portal into the vCD portal without the requirement to open another browser, or browser tab, just to conduct backup and restore operations. Also, as you may have noticed, the plugin generator allows you to point to any Web application, not just the Veeam Enterprise Manager, so long as you provide the correct URL and port.

The post vCD Tenant Backup Portal appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/06/30/self-service-portal-without-leaving-the-vcd-portal/

DR

Western Digital IntelliFlash Storage Plug-In Now Available

Western Digital IntelliFlash Storage Plug-In Now Available

Veeam Software Official Blog  /  Rick Vanover

I am often asked to explain why we here at Veeam have implemented the storage plug-ins and it is a simple answer: to deliver solutions faster to customers! By having storage integrations outside of product release requirements our partners can help innovate and customers can benefit with a better backup experience and amazing recovery. The latest storage plug-in is now available for the Western Digital IntelliFlash array.
Frequently, people look at a Veeam storage plug-in for faster backups with the Backup from Storage Snapshots capability. But the plug-in brings much more:

These collectively make a very powerful option to not just take backups more effectively, but introduce some new recovery options. Let’s tour them here:

Veeam Explorer for Storage Snapshots

Veeam Explorers are usually associated with application-level restores but the Veeam Explorer for Storage Snapshots can use both the application restores and do more — all from an IntelliFlash storage snapshot! My favorite part of our storage integration is it allows an existing storage snapshot to be used for some recovery scenarios. This includes file-level recovery, a whole VM restore to a new location Application Object Exports (Enterprise and higher editions can restore applications and databases to original location) or even an application item.
The Veeam Explorer for Storage Snapshots will read the storage snapshot, then expose the snapshot’s contents to Veeam allowing the seven restore scenarios shown below:

Veeam Backup from Storage Snapshots

Leveraging the Western Digital snapshot integration will bring the real power to a backup infrastructure and make the biggest difference to backup windows. I’ve always said that this integration will allow organizations to take backups at any time. This will allow the data mover (a Veeam VMware backup proxy) to do all of the work with associated data for a backup from the storage snapshot, compared to a vSphere hypervisor snapshot. One good thing that comes along with the Veeam approach is that VMware Changed Block Tracking (CBT) is retained and application consistency is achieved. The figure below visualizes this compared to a non-integrated backup:

On-Demand Sandbox for Storage Snapshots

One of the best things that comes with the Western Digital IntelliFlash plug-in is the On-Demand Sandbox, and I encourage everyone to learn about this technology. It can be used to avoid production problems by easily producing a sandbox to test changes, perform update testing, test security configurations, upgrades and more. You can answer questions like “How long will it take?” and “What will happen?” But most importantly “Will it work?” This can save time by avoiding cancelled change requests due to not knowing exactly what to expect with some changes.

Primary Storage Snapshot Orchestration

This is a capability that many don’t know about with the Veeam storage plug-in. This capability allows users to create a Veeam job to generate a storage snapshot on the Western Digital IntelliFlash array. This should be in addition to a backup job that goes to a backup repository. This plug-in sets Western Digital IntelliFlash customers up to use Veeam Explorers for Storage Snapshots for high-speed recoveries.

Download the Western Digital IntelliFlash plug-in Now

The Western Digital IntelliFlash plug-in can now be downloaded and installed into Veeam Backup & Replication. Check it out now!
The post Western Digital IntelliFlash Storage Plug-In Now Available appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/_vYh6rrx96s/western-digital-intelliflash-storage-plug-in-now-available.html

Production Data Export

Production Data Export

CloudOasis  /  HalYaman

data-exportConsider a situation where you want to extract a “point in time” data set from your production data for data analysis, migration or auditing? How can you easily accomplish this task? What options do you have to meet this requirement?

Sometimes your business requires a particular data set to be shared with others; perhaps you have to share for development, data analysis, data sharing with remote offices, compliance, auditing, and more.

To carry out your data extraction task, you must first carefully plan your steps before exporting the data set. This caution is to prevent accidentally exposing your entire production data to un-authorized personnel. Accidental data exposure is a familiar problem that we all want to avoid.

Now, what if I tell you that you can safely extract your “point in time” data set in a few easy steps with Veeam Backup and Replication using a new feature introduced in Update 4, called “Export Backup”. This new feature allows you to export the selected restore point of a particular object in the backup job. It can be as a standalone full backup file (VBK), and to keep it as separate retention, or can be kept indefinitely. To read more about this aspect of the features, follow this link https://www.veeam.com/veeam_backup_9_5_whats_new_wn.pdf – page 8).

Scenario

Let’s consider the following scenario; as a backup admin, your manager has asked you to extract all the data from a restore point beginning at three weeks ago. The data extraction he needs must include all the incremental data up to the most recent, and then you are to share it with a data mining company who will use it to make a report about the business performance.

Solution

As a backup admin who is using Veeam Backup and Replication software to backup the production infrastructure, you can easily accomplish this request by following these steps:

From the Veeam Backup and Replication Console, select Home -> Backup, then select Export Backup.

Screen Shot 2019-07-26 at 11.22.07 am.png

At the Export Backup interface, choose the desired backup point by pressing on the Add icon, then select the Restore Point, and then select Add.

Screen Shot 2019-07-26 at 1.46.01 pm.png

By pressing on Point option, you can check the Restore Points with all the Restore Point Dependencies:

Screen Shot 2019-07-26 at 1.47.56 pm.png

Also on the same wizard, you can specify the retention time of the export file before it is deleted. You can choose from one, two, or three months. There are also options for one, two, three, and five years.

Screen Shot 2019-07-26 at 1.49.26 pm.png

When you have made your selections to set up your data point export, press Finish. A Full backup file will be created and available in the same repository as the source files.

Screen Shot 2019-07-26 at 1.53.22 pm.png

Note: This newly created export file contains the full and the two incremental files.

File Location and Copy

After the export has been set up and runs successfully, the.VBK file will be available in the same repository as that of the source files. These can be located from the Veeam Console. The file will be attached under Backups -> Disk(imported) option.

To copy the file from the repository, you can copy manually, or use Veeam Copy Job -> File option:

Screen Shot 2019-07-26 at 2.10.06 pm.png

Conclusion

On this blog post, I have shown how you can benefit from the new Export Backup function. As you noted, with these easy steps, you can encapsulate restore points together under a single file. The single file can then be easily shared or shipped to other systems or operators for many business reasons. The single file can also be used as an offsite archive file, used in system migration if that is a requirement.

I hope this blog post helps you to use this new feature and find other beneficial business use cases for your business. Thank you, and please don’t hesitate to share this blog and share your feedback with the readers.

The post Production Data Export appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/07/28/production-data-export/

DR

How to recover deleted emails in Office 365

How to recover deleted emails in Office 365

Veeam Software Official Blog  /  Niels Engelen

Even though collaboration tools like Microsoft Teams and Slack are changing the workplace, email is still going strong and used for most of the communication within companies. Whether you are using Exchange on-premises, online or in a hybrid setup, it’s important to protect the data so you can recover in case of data loss. At Veeam, we have been talking about six reasons why you need Office 365 backups for quite some time, but the same applies if you are still using Exchange on-premises or are still in the migration process.

How to recover deleted emails from accidental deletion

The most common reason for data loss is accidentally deleted emails. However, within Office 365 email there are two different types of deletion, soft-delete or hard-delete. If you perform a soft-delete, you just hit the delete button on an email and it is moved to the Deleted Items folder (also called the recycle bin). In most cases, you can just go in there and recover deleted emails with a few clicks. However, if you perform a hard-delete, you leverage the shift button and delete the email. In this case, the email is deleted directly and can’t be recovered from the Deleted Items folder. While the email becomes hidden, you can still recover it based upon the Deleted Item Retention threshold. This applies to both Exchange on-premises and Exchange Online. By default, this is set to 14 days, but it can be modified. But what if you need to retrieve deleted emails beyond these 14 days?

Recover deleted emails using the Veeam Explorer

Veeam Explorer for Microsoft Exchange was introduced back in 2013 and is one of the most popular recovery features across the Veeam platform. While the first release only supported Exchange 2010, it now supports Exchange 2010 up to 2019, as well as hybrid deployments and Exchange Online. In addition to retrieving deleted emails, you can also recover contacts, litigation hold items, calendar items, notes, tasks, etc. You can even compare an entire production mailbox or folder with its associated backup mailbox or folder and only recover what is missing or deleted!

If you still have Exchange on-premises, you can use Veeam Backup & Replication or the Veeam Agent for Microsoft Windows to protect your server, allowing you to recover items (like deleted emails, meetings…) directly. Make sure you enable application-aware processing within the backup job and Veeam Backup & Replication will detect the Exchange service. From the Backups node view, right click on the corresponding Exchange server (or search for it at the top) and go to Restore application items and select Microsoft Exchange mailbox items.

 

Are you already using Office 365 (or in hybrid mode during the migration process)? Just leverage Veeam Backup for Microsoft Office 365 to recover deleted emails or any other any Exchange, SharePoint or OneDrive for Business item you want.

 

Once the Veeam Explorer has started, it will load the corresponding database and allow you to perform a restore.

 

If you know which item(s) you have lost, you can just navigate to the corresponding mailbox and folder to restore the deleted email. If you don’t know where it was at the point of deletion, you can leverage multiple eDiscovery options.

 

If we for example search for all emails from a specific person, it would look like this.

 

Once you’ve found the deleted email you need to restore, we can continue with the restore process. Veeam Explorer for Microsoft Exchange gives you multiple options to perform the restore.

  • Restore to the original location
  • Restore to another location
  • Export as MSG/PST file
  • Send as an attachment

In the example below we will restore the deleted email back to the original location.

 

After filling in the account credentials, the item will be restored and available in the mailbox.

Still in need of a backup?

Are you still in need of a backup of Office 365 Exchange Online? Start today and download Veeam Backup for Microsoft Office 365 v3, or try the Community Edition for up to 10 users and 1 TB of SharePoint data.

If you need a backup for Exchange on-premises, download Veeam Backup & Replication today. Veeam Explorer for Microsoft Exchange is even available in the Community Edition!

Still not convinced? Read an unbiased review of Veeam Backup for Microsoft Office 365 from TechGenix.

 

The post How to recover deleted emails in Office 365 appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/Xt4rmhnVX2I/recover-deleted-emails-guide-office-365.html

DR

Pro tips on getting the most from Veeam ONE’s monitoring and analytics

Pro tips on getting the most from Veeam ONE’s monitoring and analytics

Veeam Software Official Blog  /  Vic Cleary

Few realize that Veeam ONE is one of our oldest products. Like every product at Veeam, it continues to evolve based on the needs of our customers and today serves as an important component of Veeam Availability Suite, delivering Monitoring & Analytics for Veeam Backup & Replication, as well as for virtual and physical environments.

Since 2008, Veeam ONE has amassed an impressive arsenal of capabilities through dozens of updates. This has allowed users to leverage the latest monitoring and reporting functionality for any new Veeam Backup & Replication feature and supported infrastructure, ensuring ALL eyes and ears remain fixed on the health state of critical infrastructures, datasets, backup activities, and now, even applications. The list of unique alarms and reports continue to grow with each new release. In fact, did you know that Veeam ONE includes 340 individual Alarms and over 145 pre-built reports? This alone is 485 reasons why you should leverage the power of Veeam Availability Suite, by tapping into the most helpful Veeam ONE alarms and reports for your business. And below, we’ll discuss a number of ways how you can do that as well as outline them in more detail during an informative Veeam ONE webinar, July 25th!

Our customers continue to pave the way to innovation

Part of our on-going effort to enhance existing products is to collect user experience feedback— whether that data comes from customer surveys or from comments logged during technical support calls. Regardless of the source, we take customer needs seriously.

Recently, we asked a number of our customers what they liked most about Veeam ONE and they told us they relied heavily on alarms they receive on the state of their virtual and backup infrastructures. The alarms run in parallel with a handful of pre-built reports that provide periodic data to ensure backups and infrastructure are operating at peak performance. Due to the high quantity of unique alarms and reports that exist, it’s nearly impossible to be familiar with them all. However, we found that a handful of these tools are consistently used and relied upon by a majority of our users on a daily basis.

TOP Veeam ONE Alarms every business should be using

Veeam ONE Alarms are essential in notifying you when part of your virtual, physical or backup environment is not working as designed or as it should be. Pre-built alarms can monitor everything from physical machines running Veeam Agents to VMware, Hyper-V, vCloud Director, Veeam Cloud Connect and other environments. They are designed to keep IT admin up to date on the operations and issues occurring in their environment in real-time. They also help users quickly identify, troubleshoot react to and even fix issues that may impact critical applications and business operations.

The truth is, every Veeam Availability Suite user should be leveraging Veeam ONE alarms to optimize their data center. As mentioned earlier, there are 340 unique alarms, many of which have been disabled by default to avoid spamming users, however, they can also be turned on individually with ease. And while we realize it’s impossible to be an expert on all 340, at a minimum, the following alarms are a great place to start.

Guest Disk Space Alarm

The Guest Disk Space Alarm identifies when a Guest machine is running out of disk space, enabling users to add resources to the machine before performance is negatively impacted. This is important because if guest disk space runs out, the machine may become inactive and inaccessible, which will undoubtedly increase the possibility of downtime. This alarm alerts you before the issue occurs allowing you to proactively manage your environment.

Backup Job State Alarm

Another critical alarm is the Backup Job state alarm. This alarm is beneficial because it will notify you if one or more VMs have failed to backup successfully, allowing you to take immediate action to ensure all your data is secure.

Internal Veeam ONE alarms

In addition to alarms designed to alert you to critical issues with your environment, there are also internal Veeam ONE alarms that help ensure data is being collected properly. This specific “pack” of alarms is important to pay attention to in case Veeam ONE has failed to collect data properly. If this has happened, then there is a chance you are basing decisions on unreliable metrics. To protect against inaccuracies, monitoring internal alarms regularly can ensure Veeam ONE is performing as it should.

These are just two critical alarms and an alarm pack that every Veeam Availability Suite user should understand. During our webinar, we will discuss these and other important alarms, so be sure to register and tune in July 25th.

Reports to start using today

So, what about Veeam ONE’s 145 Pre-built Reports? Where do you start? Veeam ONE offers detailed reports including pre-built and customizable reports that are ready to be used for several critical processes. To start, Veeam ONE includes a number of backup-related reports that our power users mentioned as “must-haves” and “must-check regularly.”

Backup Infrastructure Assessment report

The Backup Infrastructure Assessment report evaluates how optimally your backup infrastructure is configured and offers a set of actions to improve efficiency. This is done by examining a set of recommended baseline settings and implementations, and then comparing these recommendations to your environment. By analyzing factors like VM configuration and application aware image processing to job performance and backup configuration, the report can verify problem areas to help you more effectively mitigate issues.

Job History report

The Job History report has also been proven very useful for users because it provides all job-related data, statistics and performance data in a single package. As you can see from the screenshot below, the Job History report is a great resource to provide you the details you need regarding your data protection jobs.

This report is useful because it provides information on all jobs that are running in your environment. It allows you to easily identify jobs that are taking the longest amount of time as well as the jobs transferring the most data. By providing you with all job-related data, it helps identify performance bottlenecks allowing you to reconfigure jobs as needed.

Not only do Veeam ONE reports analyze backup data, they can also show you how your entire virtual environment is running. A number of reports can additionally assist with capacity planning, managing VM growth in your environment as well as identifying over or undersized VMs. Another report worth highlighting that assists with capacity planning is the Host-failure Modeling report. This report provides resource recommendations to prevent future CPU and memory shortages. And since the Capacity Planning Report is one of our most popular reports, let’s take a closer look.

Capacity Planning report

Capacity Planning reports are available for VMware and Hyper-V, as well as to assist with capacity planning for backup repositories. Specific report that have proven to be especially beneficial for our customers are the Capacity Planning Reports for VMware vSphere and Microsoft Hyper-V. In the screenshot below, you can see how this report identifies the number of days remaining before the level of resource utilization reaches a specified threshold. The report analyzes historical performance, build trends, and predicts when a defined threshold will be violated. This is very useful as it allows you to be proactive and avoid resource shortages.

Billing and Chargeback Reports

Veeam ONE also includes excellent reports for managing billing and chargeback. These are beneficial in understanding the costs of a virtual infrastructure by analyzing resource usage for CPU, memory, and storage. We’ll discuss this report and others in more detail during the webinar.

Building on Veeam ONE’s Legacy – Our latest capabilities

The second half of our webinar will call out a number of the new capabilities introduced with NEW Veeam Availability Suite 9.5 Update 4. We’ll highlight five new Veeam ONE features that introduce new technologies designed to simplify monitoring and reporting processes and deliver on a number of customer requests. You’ll learn more about our new era of Veeam Intelligent Automation, including Veeam Intelligent Diagnostics, to help quickly address known infrastructure-related issues, as well as new automated Remediation Actions that can be leveraged to automate specific activities. We’ll also discuss our new Heatmaps feature, new Business View groupings, and introduce our highly-anticipated application-level monitoring capabilities, plus more. We promise you won’t leave empty handed.

Register now to join us July 25th

We look forward to sharing these details with you July 25th. Register today!

GD Star Rating
loading…

Pro tips on getting the most from Veeam ONE’s monitoring and analytics, 5.0 out of 5 based on 3 ratings

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/MkARcV17iJ8/monitoring-analytics-pro-tips.html

DR

How to create a Failover Cluster in Windows Server 2019

How to create a Failover Cluster in Windows Server 2019

Veeam Software Official Blog  /  Hannes Kasparick


This article gives a short overview of how to create a Microsoft Windows Failover Cluster (WFC) with Windows Server 2019 or 2016. The result will be a two-node cluster with one shared disk and a cluster compute resource (computer object in Active Directory).

Preparation

It does not matter whether you use physical or virtual machines, just make sure your technology is suitable for Windows clusters. Before you start, make sure you meet the following prerequisites:
Two Windows 2019 machines with the latest updates installed. The machines have at least two network interfaces: one for production traffic, one for cluster traffic. In my example, there are three network interfaces (one additional for iSCSI traffic). I prefer static IP addresses, but you can also use DHCP.

Join both servers to your Microsoft Active Directory domain and make sure that both servers see the shared storage device available in disk management. Don’t bring the disk online yet.
The next step before we can really start is to add the “Failover clustering” feature (Server Manager > add roles and features).

Reboot your server if required. As an alternative, you can also use the following PowerShell command:

Install-WindowsFeature -Name Failover-Clustering –IncludeManagementTools

After a successful installation, the Failover Cluster Manager appears in the start menu in the Windows Administrative Tools.
After you installed the Failover-Clustering feature, you can bring the shared disk online and format it on one of the servers. Don’t change anything on the second server. On the second server, the disk stays offline.
After a refresh of the disk management, you can see something similar to this:
Server 1 Disk Management (disk status online)

Server 2 Disk Management (disk status offline)

Failover Cluster readiness check

Before we create the cluster, we need to make sure that everything is set up properly. Start the Failover Cluster Manager from the start menu and scroll down to the management section and click Validate Configuration.

Select the two servers for validation.

Run all tests. There is also a description of which solutions Microsoft supports.

After you made sure that every applicable test passed with the status “successful,” you can create the cluster by using the checkbox Create the cluster now using the validated nodes, or you can do that later. If you have errors or warnings, you can use the detailed report by clicking on View Report.

Create the cluster

If you choose to create the cluster by clicking on Create Cluster in the Failover Cluster Manager, you will be prompted again to select the cluster nodes. If you use the Create the cluster now using the validated nodes checkbox from the cluster validation wizard, then you will skip that step. The next relevant step is to create the Access Point for Administering the Cluster. This will be the virtual object that clients will communicate with later. It is a computer object in Active Directory.
The wizard asks for the Cluster Name and IP address configuration.

As a last step, confirm everything and wait for the cluster to be created.

The wizard will add the shared disk automatically to the cluster per default. If you did not configure it yet, then it is also possible afterwards.
As a result, you can see a new Active Directory computer object named WFC2019.

You can ping the new computer to check whether it is online (if you allow ping on the Windows firewall).

As an alternative, you can create the cluster also with PowerShell. The following command will also add all eligible storage automatically:

New-Cluster -Name WFC2019 -Node SRV2019-WFC1, SRV2019-WFC2 -StaticAddress 172.21.237.32

You can see the result in the Failover Cluster Manager in the Nodes and Storage > Disks sections.

The picture shows that the disk is currently used as a quorum. As we want to use that disk for data, we need to configure the quorum manually. From the cluster context menu, choose More Actions > Configure Cluster Quorum Settings.

Here, we want to select the quorum witness manually.

Currently, the cluster is using the disk configured earlier as a disk witness. Alternative options are the file share witness or an Azure storage account as witness. We will use the file share witness in this example. There is a step-by-step how-to on the Microsoft website for the cloud witness. I always recommend configuring a quorum witness for proper operations. So, the last option is not really an option for production.

Just point to the path and finish the wizard.

After that, the shared disk is available for use for data.

Congratulations, you have set up a Microsoft failover cluster with one shared disk.

Next steps and backup

One of the next steps would be to add a role to the cluster, which is out of scope of this article. As soon as the cluster contains data, it is also time to think about backing up the cluster. Veeam Agent for Microsoft Windows can back up Windows failover clusters with shared disks. We also recommend doing backups of the “entire system” of the cluster. This also backs up the operating systems of the cluster members. This helps to speed up restore of a failed cluster node, as you don’t need to search for drivers, etc. in case of a restore.

The post How to create a Failover Cluster in Windows Server 2019 appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/RAHMs-UOU7I/windows-server-2019-failover-cluster.html

DR

3 steps to protect your SAP HANA database

3 steps to protect your SAP HANA database

Veeam Software Official Blog  /  Clemens Zerbe


Since the Veeam Plug-in for SAP HANA was introduced, I’m often asked about the installation and operation processes for it. Unfortunately, this cannot be covered in one blog, but I’ve often found the best place to start is the beginning. For this first blog I will cover how to install and configure the plug-in — mainly as a step-by-step guide for Veeam administrators with less or little Linux and SAP HANA knowledge.
If you are an experienced SAP Basis administrator, you may still find information of value as you scan through. I have planned a series of blogs dedicated to SAP HANA and I encourage you read each of them as I will cover day-to-day operations and advanced topics such as system copies or recoveries without having an SAP HANA catalog in the future.
Before I discuss the details of deploying and configuring the SAP certified Veeam Plug-in for SAP HANA, I want to share with you pertinent details that may be helpful. This plug-In depends on SAP Backint for SAP HANA, which is an API that enables Veeam to directly connect the Veeam agent to the SAP HANA database. I feel compelled to point out that HANA handles its own backup catalog with its own retention and scheduling. Therefore, Veeam Backup & Replication simply takes the data (technically from data pipes) and stores it into Veeam repositories. During restore operations, SAP HANA tells Veeam Backup & Replication what data needs to be restored and Veeam delivering the data as appropriate. This approach is contrary to the typical Veeam agentless approach, and it is very important to understand the difference. While this may not be news to an experienced SAP Basis administrator, it’s worth sharing this information as for some of you, this may be new information and therefore, helpful.

In addition to the Backint API, the next important matter to discuss is that SAP HANA Backint only takes care of the database data, including full, differential, incremental and log backups and restores. Though I should point out it is also important to take care of the underlying operating system (Red Hat or SUSE) and the SAP HANA installation and configuration files. More on this later.

The installation process

Though the installation process is one of the easiest things to do in my opinion, I urge you to keep in mind the prerequisites, including:

  • Veeam Backup & Replication 9.5 Update 4 (or 4a) already installed and a non-encrypted repository available with the appropriate access rights
  • DNS (forward & reverse) entries on your HANA system and your Veeam Backup & Replication repository system
  • HANA 2.0 SPS02 or newer installed on a x86 system
  • For the Veeam admin -> have your HANA buddy beside you
  • For the SAP Basis admin -> have your Veeam buddy beside you

The installation files can be found on the Veeam Backup & Replication iso.

Copy this RPM file to your SAP HANA system. You can perform this task with SCP, SFTP, or use any tool that you prefer. If you prefer a graphical interface, I recommend Filezilla or WinSCP.
Next, use a command line tool on the SAP HANA system. Putty or any other ssh client may be used for connecting and logging onto the system. For the installation process, you need to have sudo rights. Go to your folder with the RPM and run the installation.

sles15hana04:~ # cd /mnt/install/VBR2753/  sles15hana04:/mnt/install/VBR2753 # ll  total 18648  -rw-r–r– 1 root root 19093281 Apr  9 13:26 VeeamPluginforSAPHANA-9.5.4.2753-1.x86_64.rpm  sles15hana04:/mnt/install/VBR2753 # rpm -ivh VeeamPluginforSAPHANA-9.5.4.2753-1.x86_64.rpm  Preparing...                         ################################# [100%]  Updating / installing...    1:VeeamPluginforSAPHANA-9.5.4.2753-################################# [100%]  Run "SapBackintConfigTool –wizard" to configure the Veeam Plug-in for SAP HANA  sles15hana04:/mnt/install/VBR2753 #

Not tough at all right? Veeam is easy! Keep in mind for Veeam Backup & Replication 9.5 Update 4a, there is a performance patch for HANA available: https://www.veeam.com/kb2948

The configuration process

The installation process guides you to the next step. Run “SapBackintConfigTool –wizard” with the root user. Ready to do it? Remember to have your HANA and/or Veeam buddy at your side as you will need to work together for the next few steps.

sles15hana04:/mnt/install/VBR2753 # SapBackintConfigTool –wizard  Enter backup server name or IP address: w2k16vbr  Enter backup server port number 10006:  Enter username: Administrator  Enter password for Administrator:  Available backup repositories:  1. Default Backup Repository  2. SOBR1  3. w2k19repo_ext1  4. sles15repo_ext1  Enter repository number: 2  Configuration result:    SID DEV has been configured    sles15hana04:/mnt/install/VBR2753 #

The first question about your Veeam Backup & Replication server is obvious. Port number is defaulted to 10006 if you have not changed it in your environment. Username and Password for the Veeam Backup & Replication server and repository rights need to be provided by the Veeam administrator. If you already set the proper rights on your repositories, you should now see a list of repositories. Choose one and you are ready to move forward.
The final part is performed by the wizard — enabling SAP Backint via a soft link. If you already configured SAP Backint with other software, Veeam’s wizard will tell you what to delete and rerun the wizard.
Congratulations! You’re done with the installation and configuration. Now it’s time for our first backup and some initial SAP HANA configurations.

The first backup

The easiest way to create your first backup is via SAP HANA Studio as this is the most commonly used tool. But you can also use SAP HANA Cockpit, DBA planer, or any external scheduler.
Start SAP HANA Studio and connect to your recently configured SAP HANA instance in SYSTEM DB mode. Remember, this is a good time to be close to your SAP Basis administrator.

Provide the SAP HANA credentials (you don’t need to be the system user). You can create your backup and restore user with backup and catalog rights. See the HANA admin guide for details.

If everything has been configured properly, you should see something similar to the screenshot below:

Double-click on the SYSTEMDB@DEV (SYSTEM) and it will open an overview window. Keep this in mind later when additional configuration details are possible.

Open Backup Console via right-click.

Go directly to configuration and click on the blue arrow to expand the Backint settings. If you see Backint Agent pointing to /opt/Veeam/VeeamPluginforSAPHANA/acking -> you are only seconds away from your first backup.

It is important to emphasize that:

  • Veeam does not use any Backint Parameter files. Leave these fields empty, or if you were running something else before, delete the entries.
  • Log Backup Settings: This allows you to have the logs either on your filesystem or also use Backint to forward all new logs directly to Veeam Backup & Replication. We recommend to back them up via Backint, but please discuss this with your SAP Basis buddy. If you change this, do not forget to save your settings by clicking on the disk symbol.

Now the big moment…start your first backup!
Right-click on SYSTEMDB@SID and choose Back Up System Database (and afterwards, Tenant Database).

Ensure to choose Backint as target. Click Next – see the summary and click Finish.

You should see a screenshot like this one.

Check the Log File and go back to your Backup System DB window and go to the Backup Catalog. There should be an entry like this one with your first backup.

Now do the same for the tenant. Run backup, check the logs and review the catalog

In parallel, have you checked what happened in Veeam Backup & Replication? Under Jobs you will find a new one with the format “hostname SAP backint backup (repository name)“:

Under History, there is a new Plug-ins folder for all Plug-ins (Sap HANA & Oracle RMAN). So you see, the Plug-in is very database centric. We will cover some optimizations on this later in this blog series.

Yeah! But you are right. It´s all about the restore. Let’s also quickly test this.

Recover database

Now is a great time to be best friends with your SAP Basis buddy.

WARNING:
Do not do this on your own and always test your first restore on a test environment. Do not use any production database for the following steps!

We only want to recover the tenant database for now. SYSTEM databases only need to be recovered if there are errors, and I would recommend this only if advised to do so by SAP support.
Again, right-click on your SYSTEMDB@SID entry, Backup and Recovery -> Recover Tenant Database.

Choose your tenant to recover and click Next and choose Recover the database to its most recent state. I will cover the other options in a later blog of this series.

If you chose to backup also into Backint – change this entry to search in Backint.

Attention: the tenant will shut down now!

Select your backup – check Availability if you like. This should be green after checking.

Click Next after Availability is green. Click Next on Locate Log Backups.

Click Next and don’t forget to include logs on Backint. It’s important to reiterate that these options are Database centric and should be discussed with your SAP Basis administrator if you ever need to change something here. Clicking Next brings you a summary and Finish will start the restore process.

Finally, the restore will start and, when completed, you should see a result similar to this.

Congratulations! You have configured, backed up, and restored your first SAP HANA database with the SAP certified Veeam Plug-in for SAP HANA.
In the next blog, we will discuss additional optimization options in HANA and discuss how to back up your file system with HANA ini files and protect your operation system for DR preparation. Additionally, we also want to dive deeper into scheduling options, retention of backups, etc. In the meantime, if your business runs SAP HANA, I encourage you to check out this exciting feature from Veeam using this blog as a guide.
The post 3 steps to protect your SAP HANA database appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/d-aoo4GjGUU/sap-hana-plugin-installation-guide.html

DR

Windows Server 2019 support, expect no less from us!

Windows Server 2019 support, expect no less from us!

Veeam Software Official Blog  /  Rick Vanover


Veeam has always taken platform support seriously. Whether it’s the latest hypervisor, storage system or a Windows operating system. Both Windows Server and Windows desktop operating systems have very broad support with Veeam, and I recently just did some work around Windows Server 2019 and I am happy to say that Veeam fully supports the newest Microsoft data center operating system.
One of the first things that we did here at Veeam was a webinar overviewing both what is new with Windows Server 2019 and how Veeam supports Windows Server 2019. I was lucky to be joined by Nicolas Bonnet, a Veeam Vanguard and author of many books on Windows Server.
If any IT pro is getting ready to support a new platform, such as Windows Server 2019, the first steps are critical to introduce the new technology without surprises. The good news is that Veeam can help, and that was the spirit of the webinar and overall the way Veeam addresses new platforms.
One of the first places to start is Microsoft’s awesome 45-page “What’s new” document for Windows Server 2019. This is without a doubt the place to get the latest capabilities on what the new platform brings to your data center and workloads in the cloud running Windows Server.
There are scores of new capabilities in Windows Server 2019, so it’s hard to pick one or even a few favorites; but here are some that may help modernize your Windows server administration:

Storage Migration Service

I love hearing Ned Pyle from Microsoft talk about the storage technologies in Windows Server, and Storage Migration Service is one that can get older file server data to newer file server technologies (including in Azure).

Storage Space Direct

It’s incredible what has been going on with Windows Storage over the years! One of the latest capabilities is now ReFS volumes can have deduplication and compression. There are additional scalability limits and management capabilities as well (Windows Admin Center in particular) as other improvements across the operating system.
And finally, smallest but rather awesome feature:

Windows Time Service

In Windows Server 2019, the Windows Time Service now offers true UTC-compliant leap second support; so it’s ready for any time service role in your organization.
Again, hard to pick one or even a few features, but here are some that you may know about or may not have known. But the natural question that comes into play is “How do I migrate to Windows Server 2019?” This is where Veeam may help you and you may not have even known it.

Veeam DataLabs

The Veeam DataLab is a way you can test upgrades to Windows Server 2019 from previous versions of Windows Server. In fact, I wrote a whitepaper on this very topic last year.
Do you think the idea that your backup application can help you in your upgrade to Windows Server 2019 sounds crazy? It’s not, let me explain. The Veeam DataLab will allow you to take what you have backed up and provide an isolated environment to test. The test can be many things, ensuring that backups are recoverable (what the technology was made for), testing Windows Updates, testing security configurations, testing scripted or automated configurations, testing application upgrades and more. One additional use case can be testing the upgrade of Windows Server 2019.
In the Veeam DataLab, you can simulate the upgrade to the latest operating system based on what Veeam has backed up. The best part is the changes are not going to interfere with the production system. This way you will have a complete understanding of what it will take to upgrade to Windows Server 2019 from an older Windows Server operating system. How long it will take, what changes need to happen, etc. Further, if you need to put other components into a DataLab to simulate a multi-tiered application or communication to a domain controller, you can!
Here’s an example of a DataLab:

Conclusion

Windows Server 2019 and platform support in general are key priorities at Veeam. Our next update, Veeam Backup & Replication 9.5 Update 4b, will include support for Windows Server version 1903. This means you can run 1903 and have it backed up with Veeam or you can run Veeam Backup & Replication and its components on this version of Windows. Veeam is a safe bet for platform support as well as the ability to activate the data in your backups to do important migrations, such as to the latest version of Windows Server.
Have you used a DataLab to do an upgrade? If so, share your comment below. If not, give it a try!

The post Windows Server 2019 support, expect no less from us! appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/TSMFGvyI3I4/windows-server-2019-support.html

DR

Looking back on VeeamON 2019, and forward to what’s next with Rickatron

Looking back on VeeamON 2019, and forward to what’s next with Rickatron

Veeam Software Official Blog  /  Rick Vanover

This year’s VeeamON was epic, if I do say so myself. Our fifth event was in my opinion our best yet. In this post, I’ll recap a few key takeaways of the event and set the stage for what’s next for Veeam regarding events and product updates.

I wrote two specific recaps of the full days of the event, Day 1 and Day 2, which have good summaries of event specifics. Additionally, all of the general sessions and breakouts for those who attended are available for replay on the VeeamON recap site. The press releases are listed below:

We also made a short video that summarizes the event as well:

 

But what is there to focus on next? Short answer: A LOT!

I’ll start with the products. I’ve already been updated on the next version of some products we showcased at VeeamON, so we’ll be doing promotion for that at regional VeeamON Forum and Tour events as well as key industry events like VMworld and Microsoft Inspire. Veeam Availability Suite v10, which was previewed in the technology general session (you can replay here, in the technology keynote), is going to be an EPIC release. What we previewed at VeeamON is just a teaser; trust me, there is a lot more goodness to come in this release. I keep saying with each release recently that it’s the biggest release ever; and I’m pretty sure I’ll do the same with this one.

Next, I have been working with global teams to prepare the regional VeeamON Forum and VeeamON Tour events. These are underway, starting in June and going into the fall worldwide. Local markets will promote them specifically, but pages are already up for Asia, India, France, Germany, Turkey, many EMEA countries for Tours, and more to come. I just completed a VeeamON Tour in Sweden and will do subsequent ones in China, Brazil and Argentina later this year, and it feels great to bring the Veeam update to more markets around the world. Additionally, the rest of the Veeam Product Strategy team will be doing many events around the world as well.

I am fortunate to have a role on the VeeamON team as the content manager for the breakouts. This is in addition to the 57 other things I do at the event, including the technology general session presentation, presenting breakouts, meeting with customers and partners as well as meeting with press and analysts. I took a much more data-driven view to the breakout content this year. Everything from working with the event management team to have the right sized rooms, the balance of content (much more technical and product-focused this year), speaker consideration, partner interests and more. The end result is a trend that I feel embodies where Veeam is going as well as meeting, and based on survey data, exceeding the expectations of attendees. Feel free to send me a note via email or Twitter for ideas on how to make VeeamON better and the best investment for your time to attend the event.

VeeamON 2020 will take the event back to Las Vegas, and I for one am happy with that. I just think Vegas works! But the bar is set rather high. The Miami event was epic, and for my role on the VeeamON team, I’m going to expand the team I have and double the effort to raise the bar. See you there!

The post Looking back on VeeamON 2019, and forward to what’s next with Rickatron appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/BGR8x1LGGGQ/veeamon-2019-recap-whats-next-with-rickatron.html

DR

What else can you do with Veeam Availability Orchestrator?

What else can you do with Veeam Availability Orchestrator?

Notes from MWhite  /  Michael White

Hello all,

I mention this stuff a lot and people are always surprised.  So I thought I would share this here so that more of you can see this.

Veeam Availability Orchestrator (VAO) v2 is a DR orchestration tool that can use replicas or backups as the source for failovers in a crisis. As a professional DR tool it allows you to test your failovers to make sure your applications recovery is as you might hope.  It also has great documentation – mostly automatic too that is very helpful.  But there is other things you can do with this very useful tool that are not necessarily obvious.

  • You can use a test failover to test out a few things – not just does the app work but also:
    • Application Patching – if it works you know it will work in production as you are working in a copy of production, not just a lab built copy.
    • OS patching or updates – same here, you will know if it works in production or not.
    • Security vulnerability scanning – if you break something in the test only you know and you can prepare for that in production by maybe tweaking the scanning tool.
  • Plan, and test, a migration to a new data center.  It is very good indeed to test your migration before it becomes permanent.
  • You can pull data into a test failover from physical machines if necessary.  So like an AIX server that is running Oracle.  How it would look is:
    • During a test failover a PowerShell script executes and talks to AIX
    • A new LPAR is deployed,
    • A virtual nic is attached to the LPAR that is connected to the private and non-routing test network.
    • Another script runs that copies a subset of the Oracle database to the new LPAR.
    • So now in the test failover an app can find Oracle. Which is important to support a test failover.
    • Now at the end of the test there is a PowerShell script that runs and notes it is the end of the test so it talks to AIX and deletes the LAPR that has the subset of Oracle database on it.  This is important to make sure no test data gets into the production Oracle.
    • I have seen this done with AIX, and I have had customers tell me they have done this with HPUX.
  • You can protect physical machines.
    • Use a tool like VMware Converter to do lunchtime and night time conversions.  Then on a different schedule it is replicated and protected.  This means if the physical machine is lost, a virtual version will be available, and that is quite usable even if the app owner didn’t want to virtualize for some reason as the alternative is total loss.
    • I am working on a blog to do this as part of your backups.  When I have that figured out and documented I will share it out.
  • One that I heard from a customer is to do organized and auditable restores for people.  Since it is a DR activity there is a nice audit trail, and you can test it too.

So I hope that this all helps you understand that there is a lot that a DR orchestration tool like VAO can do over and above the typical DR stuff.

Let me know if you have questions or comments,

Michael

=== END ===

Original Article: https://notesfrommwhite.net/2019/06/19/what-else-can-you-do-with-veeam-availability-orchestrator/

DR

v10 Sneak peek: Cloud Tier Copy Mode

v10 Sneak peek: Cloud Tier Copy Mode

Veeam Software Official Blog  /  Anthony Spiteri

At VeeamON 2019, I joined members of the Product Strategy Team to do a number of live demos showing current and future products and features. While the first half focused on what had been released in 2019 so far, the second half focused on what is coming. One of those was the highly anticipated Copy Mode feature being added to the Cloud Tier in v10 of Veeam Backup & Replication.

As with the existing 9.5 Update 4 functionality, all restore operations are still possible for backup data that has been copied to the Capacity Tier. During the Technical Main Stage keynote, I demoed an Instant VM Recovery of a machine from a SOBR for which we had simulated the loss of all Performance Tier extents in a disaster. As you can see in the demo below, the recovery was streamed directly off the Capacity Tier, which in this case was backed by Amazon S3.

Copy Mode feature

Copy Mode is an additional policy you can set against the Capacity Tier extent of the Scale Out Backup Repository (SOBR), which is backed by an Object Storage repository. Once selected, any backup files that are created as part of any standard backup or backup copy job will be copied to the Capacity Tier. This helps fully satisfy the 3-2-1 rule of backup: three copies of your data, on two different media types, with one full copy off site.

This feature adds to the existing move functionality released as part of Veeam Backup & Replication 9.5 Update 4 earlier this year and allows the almost immediate copy of backup data up to an Object Storage Repository. Here’s a deep dive into the technology (and best practices!) that we presented on the main stage:

 

And a full demo only session to show off some of the new Copy Mode functionality coming in v10:

 

NOTE: Demo and screenshots are taken from a Tech Preview build and are subject to change.

For a recap of what is currently possible in Update 4, head to this link to check out all the Cloud Tier content I’ve created since launch.

Conclusion

Copy Mode is going to be a huge enhancement to the Cloud Tier in v10. I can see lots of great use cases for on-premises Veeam customers and for our Veeam Cloud & Service Provider partners who should be looking to leverage this feature to offer new service offerings for IaaS and Cloud Connect Backup data resiliency.

Stay tuned for more information around Cloud Tier Copy Mode as we get closer to the release of v10!

The post v10 Sneak peek: Cloud Tier Copy Mode appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/qIh8Lpj98ho/v10-sneak-peek-cloud-tier-copy-mode.html

DR

Improving failover speed in VAO

Improving failover speed in VAO

Notes from MWhite  /  Michael White

Hello there,

If you are using Veeam Availability Orchestrator (VAO), you may have noticed that we limit a VM group to starting 10 VMs simultaneously. If your plan has two VM groups, the first group starts 10 VMs at a time until there are no more to start, then the next VM group starts recovering 10 VMs at a time.

What happens if you want to recover faster?  First, is it possible?  If you have all-flash storage, or otherwise very fast storage, and your vCenter has spare RAM and CPU, then it is possible.

Here is the process.

  • Do a test failover of the plan you wish to have proceed more quickly.  If the DataLab is already running, the test failover time is fairly close to the actual failover time. Remember or record the time.
  • Confirm your vCenter is healthy.  This is a good place to start.

  • Next, increase the number of simultaneous starts.  The setting can be found when you edit your VM group.

  • Depending on your storage, you will make the choice.  If you have all-flash, for example, then I would suggest trying 20 instead of 10.
  • Now, with the lab group already running, do your test failover again.  Record the time.
  • More VMs starting at once might impact vCenter, so make sure it is still healthy.  Use the Past Day setting and see if there is a big spike where you did the test (see the PowerCLI sketch after this list).
  • If vCenter is still good, determine how much time you saved.
  • If the time is not good enough, and you think you still have resources in vCenter, you can try increasing the number again.
  • Remember, this strongly impacts your storage and your vCenter, so at some point you may need to back the number down.  I think 10 is very safe for everyone, but if you have a healthy vCenter and good storage you should be able to get it quite a bit higher.  Maybe 30?
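For the vCenter health check, a quick PowerCLI sketch like the one below can pull CPU usage for the vCenter appliance VM over the past day, so you can spot a spike around the test window. The server and VM names are placeholders for your environment:

```powershell
# PowerCLI sketch: check whether the test failover spiked vCenter's CPU.
Connect-VIServer -Server 'vcenter.lab.local'

$vcsa  = Get-VM -Name 'vcsa01'                 # the vCenter appliance VM
$stats = Get-Stat -Entity $vcsa -Stat 'cpu.usage.average' `
                  -Start (Get-Date).AddDays(-1) -Finish (Get-Date)

# A maximum pinned near 100% during the test suggests vCenter, not storage,
# is becoming the bottleneck as you raise the simultaneous start count.
$stats | Measure-Object -Property Value -Average -Maximum |
    Select-Object Average, Maximum
```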

Another way to decrease failover time is in how you design your plans.  Multi-tier apps can make plans very slow if you use sequencing, but they can be much faster with a different design.  I talk about that design in this article.

I hope that this article helps, but I am very open to questions and comments. And don’t forget all my VAO technical articles can be found via this tag.

BTW, I don't have a decent work lab, and the small lab I use at home could not produce better screenshots.  Sorry about that.

Michael

=== END ===

Original Article: https://notesfrommwhite.net/2019/06/26/improving-failover-speed-in-vao/

DR

NEW Veeam Powered Network v2

NEW Veeam Powered Network v2
NEW Veeam PN (Powered Network) v2 delivers 5x to 20x faster VPN throughput! In this latest version of Veeam PN, a FREE tool that simplifies the creation and management of site-to-site and point-to-site VPN networks, improvements have been made to performance and security with the adoption of WireGuard. Read this blog to learn more about version two enhancements and visit the help center for technical details.

NEW Veeam Availability Orchestrator v2 is now available!

NEW Veeam Availability Orchestrator v2 is now available!
NEW Veeam® Availability Orchestrator v2 – an industry-first – delivers a comprehensive solution that provides DR orchestration from replicas and backups. DR is now democratized: attainable and affordable for all organizations, applications and data, not just mission-critical workloads. Check out this blog to discover what else is new in version two, or start building your first DR plan today with a FREE 30-day download.

Top takeaways from VeeamON 2019

Top takeaways from VeeamON 2019
VeeamON 2019, our annual conference, turned Miami Beach green last month! 2,000 attendees learned how Veeam® is committed to supplying backup and recovery solutions that consistently deliver Cloud Data Management™ for users across the globe. Couldn't make it to the event? Check out the keynote and breakout sessions, daily recap videos, the CUBE interviews, blog posts and latest announcements.

Veeam Availability Orchestrator v2.0 is now GA

Veeam Availability Orchestrator v2.0 is now GA

Notes from MWhite  /  Michael White


Hello all, I am very happy to say that what I have been working on for a long time – with many other people – has now reached General Availability. It is a big release, and we are going to look at it in this article.
Release Notes – https://www.veeam.com/veeam_orchestrator_2_0_release_notes_en_rn.pdf
Bits – https://www.veeam.com/availability-orchestrator-download.html
It is important to note there is no upgrade path to 2.0.  You will need to install it separately and recreate your plans; there will be a KB article to help.  Most importantly, remember to uninstall your VAO agents from your VBR servers so that new agents can be installed. The plan definition reports will provide the info for each plan you need to recreate.
We two product managers did not make the no-upgrade decision lightly.  It was important, though, as it allows architecture decisions that will be good for customers in the future.  One such decision is removing the need for production VAO servers; they are no longer required or useful.
What are some of the new features – the key ones?
Scopes
Scopes let you have resources, reporting and people that are separate from and unseen by other users.  It is a bit like multi-tenancy, but aimed more at letting offices or groups have separate plans.

Above you can see both a default and an HR scope.  Here you can say who belongs to what.

Above you can see the reporting options that can be defined by each scope.

Above are VM groups; you can use the indicated button to assign them to different scopes. The same is true for other resources, such as DataLabs.
Backups
In v1 you could use replicas to create your plans; in v2 you can use replicas OR backups.  The process is almost identical whichever you choose, but there are some small differences.

In our orchestration plans they look identical except for what we can see above: Failover for replicas and Restore for backups.

As well, when you look at the individual steps, you will notice one that is Restore VM rather than Process Replica, as you can see above.
BTW, recoveries using backups can go to the original or new locations. Re-IP is possible as well.
If you design your VBR servers well, the recovery of a backup is surprisingly fast!
VM Console
This feature allows you to access a VM's console from within VAO.  It uses the VMware VMRC HTML5 functionality to do so.
The plan needs to remain running for some time; if it does, you will see an option like the one below.

You highlight a VM, select that button, and you end up in the VMware HTML5 VMRC, where you will need to log in.  This is very handy for things like checking whether a complex app has started and is usable.
General
There are a lot of little touches of polish in the UI; subtle, but nice. One that is not so subtle is that we have added RTO and RPO to the plans and reports.  This means a readiness check will raise a warning if the defined RTO/RPO is not achievable.

Another change is that production VAO servers are no longer required or useful; there are only DR VAO servers.
Summary
A big release, with an interesting range of new features.  You can watch my blog using this tag – https://notesfrommwhite.net/tag/vao_tech – for new articles.
Michael
=== END ===

Original Article: https://notesfrommwhite.net/2019/05/21/veeam-availability-orchestrator-v2-0-is-now-ga/

DR

Why we chose WireGuard for Veeam PN v2

Why we chose WireGuard for Veeam PN v2

Veeam Software Official Blog  /  Anthony Spiteri

The challenge with OpenVPN

In 2017, Veeam PN was released as part of Veeam Recovery to Microsoft Azure. The Veeam PN use case went beyond the extending of Azure networks for workload recoverability and was quickly adopted by IT enthusiasts for use of remote connectivity to home labs and the connectivity of remote networks that could be spread out across cloud and on-premises platforms.
Our development roadmap for Veeam PN had been locked in; however, we found that customers wanted more. They wanted to use it for data protection, with Veeam Backup & Replication, to move data between sites. When moving backup data, utilizing the underlying physical connection to its maximum is critical. With OpenVPN, our R&D found that it couldn't scale and perform to expectation no matter what combination of CPU and other resources we threw at it.

Veeam Powered Network v2 featuring WireGuard

We strongly believe that WireGuard is the future of VPNs, with significant advantages over more established protocols like OpenVPN and IPsec. WireGuard is more scalable and has proven to outperform OpenVPN in terms of throughput. This is why we made the tough call to rip out OpenVPN and replace it with WireGuard for site-to-site VPNs. For Veeam PN's developers, this meant ripping out and replacing existing source code, and it means that existing users of Veeam PN will not be able to perform an in-place upgrade.

Beyond our own belief in WireGuard, we also looked at it as the protocol of choice due to its rise in the open source world as a new standard in VPN technologies. WireGuard offers a higher degree of security through enhanced cryptography that operates more efficiently, leading to increased performance and security. It achieves this by working in the kernel and by using fewer lines of code (4,000 compared to 600,000 in OpenVPN), and it offers greater reliability when connecting hundreds of sites, again with performance and scalability in mind for the more specific backup and replication use cases.
Recent backing for WireGuard becoming a de facto standard in VPNs came from Linus Torvalds himself:

“Can I just once again state my love for [WireGuard] and hope it gets merged soon? Maybe the code isn’t perfect, but I’ve skimmed it, and compared to the horrors that are OpenVPN and IPSec, it’s a work of art.”
Linus Torvalds, on the Linux Kernel Mailing List

Increased security and performance

WireGuard's security was also a factor in moving on from OpenVPN. Security is always a concern with any VPN, and WireGuard takes a simpler approach, relying on crypto versioning to deal with cryptographic attacks. In a nutshell, it is easier to move through versions of cryptographic primitives than to negotiate cipher types and key lengths between client and server.
Because of this streamlined approach to encryption, in addition to the efficiency of the code, WireGuard can outperform OpenVPN. This means Veeam PN can sustain significantly higher throughput (testing has shown performance increases of 5x to 20x, depending on CPU configuration), which opens up the use cases to more than just basic remote office or home lab connectivity. Veeam PN can now be considered as a way to connect multiple sites together and sustain transfers of hundreds of Mb/s, which is perfect for data protection and disaster recovery scenarios.

Solving the UDP problem, easy configuration and point-to-site connectivity

One of the perceived limitations of WireGuard is that it does all its work over UDP, which can cause challenges when deploying into locked-down networks that by default trust TCP connections more than UDP. To remove this potential roadblock to adoption, our developers worked out a way to encapsulate the WireGuard UDP traffic over TCP (with minimal overhead) to give customers choice depending on their network security setup.
By incorporating WireGuard into an all-in-one appliance (or installable via a simple script on an existing Ubuntu Server), we have made the installation and configuration of complex VPNs simple and reliable. We have kept OpenVPN as the protocol for point-to-site connections for the moment, due to the wider distribution of OpenVPN clients across platforms. Client access to the Veeam PN Hub is done via OpenVPN, with site-to-site access handled by WireGuard.

Other enhancements

At the core, what we wanted to achieve with Veeam PN is the simplification of complexity, and we wanted that to remain true regardless of what was doing the heavy lifting underneath. The addition of WireGuard is easily the biggest enhancement over Veeam PN v1; however, there are several other enhancements, listed below:

  • DNS forwarding and configuring to resolve FQDNs in connected sites.
  • New deployment process report.
  • Microsoft Azure integration enhancements.
  • Easy manual product deployment.

Conclusion

Once again, the premise of Veeam PN is to offer Veeam customers a free tool that simplifies the traditionally complex process of configuring, creating and managing site-to-site and point-to-site VPN networks. The addition of WireGuard as the site-to-site VPN platform will allow Veeam PN to go beyond the initial basic use cases and become an option for more business-critical applications, thanks to the enhancements WireGuard offers. Ultimately, we chose WireGuard because we believe it is the future of VPN protocols, and we are excited to bring it to our customers with Veeam PN v2.
Download it now!

Helpful resources:

The post Why we chose WireGuard for Veeam PN v2 appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/AMxHPVxAv3E/veeam-pn-v2-wireguard.html

DR

Veeam Availability for Nutanix AHV – Automated Deployment with Terraform

Veeam Availability for Nutanix AHV – Automated Deployment with Terraform

vZilla  /  michaelcade

Based on some work Anthony and I have been doing over the last 12-18 months around Terraform, Chef and automation of Veeam components, I wanted to extend this further within the Veeam Availability Platform and see what could be done with Veeam Availability for Nutanix AHV.
I have covered the capabilities the product brings, even in a v1 state, in several deep dives, but here I wanted to highlight the ability to automate the deployment of the Veeam Availability for Nutanix AHV proxy appliance. The idea was also validated when I spoke to a large customer that had invested heavily in Nutanix AHV across 200 of their sites in the US.
Being a Veeam customer, they were very interested in exploring the new product, whose capabilities matched up with their requirements. But they then had the challenge of deploying the proxy appliance to all 200 clusters they have across the US. There were some possibilities around PowerShell for the configuration side, which may be added later to what I have done here. I took the deployment process and created a Terraform script that deploys the image disk and a new virtual machine for the proxy with the correct system requirements.
This was the first challenge that validated what we needed and why: a better option for automated deployment. The second was that the v1 deployment process is a little off-putting, which I think is a fair thing to say, so being able to automate it really helps that initial deployment experience.
Here is the Terraform script. If you are not familiar with Terraform, I suggest you look it up; the possibilities really change the way you think about infrastructure and software deployment.

One caveat to consider: as part of this deployment, I placed the image file that I downloaded from veeam.com on my web server. This provides a central location, and part of the script deploys the image from this shared, published resource. Again, the thought process here was that 200-site customer: providing a central location to pull the image disk from, rather than downloading it 200 times in each location.
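Since the original script was built with that 200-site customer in mind, it is worth noting how it could be fanned out. Below is a hedged sketch, assuming one Terraform workspace and one .tfvars file per site; the folder layout and variable files are my own invention, not part of the original script:

```powershell
# Sketch: apply the same Nutanix AHV proxy config once per site, using one
# Terraform workspace and one variables file (cluster endpoint, credentials,
# image URL) per site. The .\sites folder layout is an assumption.
terraform init

Get-ChildItem -Path '.\sites' -Filter '*.tfvars' | ForEach-Object {
    $site = $_.BaseName
    Write-Host "Deploying proxy appliance for site $site..."

    # Select (or create) an isolated state workspace for this site
    terraform workspace select $site 2>$null
    if ($LASTEXITCODE -ne 0) { terraform workspace new $site }

    terraform apply -auto-approve -var-file $_.FullName
}
```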
Once the script has run you will see the created machine in the Nutanix AHV management console.

There is a requirement for DHCP in the environment so that the appliance gets an IP and you can connect to it.

Now you have your Veeam Availability for Nutanix AHV proxy appliance deployed, and you can run through the configuration steps. Next up, I will look at how we can also automate that configuration step. For now, the configuration steps can be found here.
Thanks to Jon Kohler, the main contributor to the Nutanix Terraform provider, who also has some great demo YouTube videos worth watching. You will see that the baseline for the script I created is based on examples Jon uses in his demonstrations.

Original Article: https://vzilla.co.uk/vzilla-blog/veeam-availability-for-nutanix-ahv-automated-deployment-with-terraform

DR

Veeam Direct Restore from AWS EC2 Backup & Replication Server and Repository

Veeam Direct Restore from AWS EC2 Backup & Replication Server and Repository

vZilla  /  michaelcade


A couple of weeks back at Cloud Field Day, one of the questions asked by the delegates was whether the conversion process for Direct Restore to AWS would be faster if we stored the data within AWS as part of the backup process. For example, we could keep our on-premises operational restore window local to the production data, but have a backup copy job sending a retention period to a completely separate Veeam Backup & Replication server and repository located within AWS.

I wanted to check and compare the speed and performance of this, and also, if we did see a dramatic increase in speed, at what cost it would come.
As you can see from the diagram above, we have our production environment on the left; this is the control layer for the operational backups of our production workloads. For the purposes of the test, we are going to backup copy those backups to a secondary backup repository on a machine running in AWS. We then have a completely stand-alone system running within AWS with Veeam Community Edition installed.
This Veeam Backup & Replication server within AWS is a Windows Server 2016 machine with the following specifications:

  • Availability Zone: us-east-2
  • Instance type: t2.large
  • Memory: 8 GB
  • vCPUs: 2

On this machine we then installed Veeam Backup & Replication Community Edition; the Cloud Mobility functionality is fully available within the free Community Edition.

The installation doesn’t take too long.

As you can see in the image below, we have a full backup file; it is located within the repository in AWS. Next up, we need to import it into our AWS Veeam Backup & Replication server.

It's really simple to import any Veeam backup. First, open the Veeam Backup & Replication console and connect to the Veeam Backup & Replication server; if using the console on the server itself, you can choose "localhost".

On the top ribbon, select import backup.

Depending on where the backup file is located, you will need to select the server from the drop-down. If the file is not on an already managed Veeam server (i.e., it is separate from the Veeam Backup & Replication server), you will need to add that server.


We can choose between different Veeam file types; I changed the file type to VBK so I could see the full backup file in the directory.

If the backup file also contains a guest file system index, you can check the box to import this as well.

Next you will see the progress; this should not take long.

We are now in a position where we can see the contents of the backup, and from here we can begin our tests.

To test the Direct Restore to AWS functionality, we first need to add our AWS cloud credentials.

We can choose to add our Amazon AWS Access Key from the selection.


From your Amazon account, you will need to provide your Access Key and Secret Key. These must never be shared externally and need to be kept secure.

Notice how I am not going to show you my secret key.

You will then see your newly added cloud credentials.

OK, so at this stage we can right-click on the machine we wish to restore directly to AWS.


The approach here is an easy-to-use wizard: we choose our AWS cloud credentials from the drop-down, then choose the region and data center we wish to restore the machine into.

Next up, we choose the name we wish to use for the restored machine; we can also add specific AWS tags.


I am going to rename mine to differentiate later on.


We then need to define the AWS instance type we wish to use; you will need to make sure the OS of the system you are sending is supported by AWS.

Next, we choose the Amazon VPC and Security Group we wish to use.

We can choose if we would like to use a Helper Proxy Appliance.

Finally, we are able to give a reason for the restore.

Summary of the process and outcome.

These are the simple, easy-to-follow steps you would take for any direct restore to AWS.
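For anyone who prefers scripting the flow above, here is a hedged sketch using the Veeam PowerShell snap-in that ships with 9.5 Update 4; the paths, keys and names are placeholders, and the restore cmdlet's full parameter set should be checked with Get-Help before use:

```powershell
# Sketch of the credential + import steps from the walkthrough above.
Add-PSSnapin VeeamPSSnapin

# 1. Register the AWS account (access key / secret key pair -- keep these safe)
Add-VBRAmazonAccount -AccessKey 'AKIA................' -SecretKey '<secret-key>'

# 2. Import the full backup file that already sits on the AWS-side repository
$server = Get-VBRServer -Name 'localhost'
Import-VBRBackup -Server $server -FileName 'E:\Backups\WebVM.vbk'

# 3. The imported backup is now visible and can be fed to the Direct Restore
#    wizard, or to Start-VBRVMRestoreToAmazon for a fully scripted restore
#    (see Get-Help Start-VBRVMRestoreToAmazon for the required parameters).
Get-VBRBackup -Name 'WebVM*'
```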
Let’s get back to the test now, we want to know if by having this machine backup located within AWS would this make the conversion faster than having this on premises. This was the question asked during Cloud Field Day.
First of all, let's take a look at the time it takes for the same machine from on-premises. The test is from our Columbus DC, which also happens to be an AWS DC; I will take this into account later on.
This is the screen grab for the process that took place from on-premises to AWS us-east-2. The upload of data looks to have taken the exact same amount of time, so the proof will be in the import of the VM and whether that is faster.
The on-premises system took 14:29 to import and 15:34 for the whole process to complete.

The connectivity for my “on-premises” location is as follows.


Now we want to run the same test using the imported backup file that we showed previously. As you can see, the combined total took 12:39 and the conversion took 11:33.
The machine we are restoring is 1.1 GB. You can see the upload speed was the same regardless of the location of the backup; it's the conversion that is slightly improved.


We also ran a test from our lab to us-east-1 (N. Virginia), and as you can see, the import process was different again.

I then wanted to test outside of our lab, and very helpfully, Jorge was able to run a test from his home lab; as you can expect, his connectivity is not the same as an enterprise lab or data centre.
Jorge took the same machine and imported it to the Veeam Backup & Replication server in his home lab, with a much slower upload speed than we used previously. Here we are importing to London, and the import process was quicker still.


I will continue gathering data points around this, but the one clear takeaway at the moment is that for a small workload it really doesn't make much difference to have the backup already stored within AWS. For a larger dataset, the time saved on the import process could be considerable, but you have to weigh that against the cost of storing those backups in AWS, and whether it is worth it.

Original Article: https://vzilla.co.uk/vzilla-blog/veeam-direct-restore-from-aws-ec2-backup-replication-server-and-repository

DR

VeeamON 2019: Day one recap

VeeamON 2019: Day one recap

Veeam Software Official Blog  /  Rick Vanover


VeeamON Miami is under way! On Monday, we hosted the Welcome Reception, which was a great start to VeeamON 2019, where we are welcoming a full house of customers and partners.
The first full day was also packed with key announcements, new Veeam technologies and an awesome agenda of breakouts for attendees. Here is a rundown of Tuesday’s news:

The general session also featured key perspectives from Veeam co-founder Ratmir Timashev on Veeam’s momentum, customer testimonials and some key focus on Microsoft. Veeam Vice President, Global Business & Corporate Development Carey Stanton welcomed Tad Brockway, corporate vice president for Azure Storage, Media, and Edge platform team at Microsoft.


Image via @anbakal on Twitter

We also had a very special general session focused on technology, both already existing and coming soon. In this session, Veeam Product Strategy and R&D gave a number of key overviews of the new Veeam Availability Orchestrator v2 general availability announcement, Veeam Availability Console and Veeam Backup for Microsoft Office 365. New capabilities were shown for Veeam Availability Suite, as well as new technologies for Microsoft Azure.


Image via @anbakal on Twitter

However, the key part of the event for attendees is the breakouts! This year, technical topics make up 80% of the breakouts delivered by Veeam, covering everything from best practices and worst practices to how-to tips and more. Today featured presentations from Platinum Sponsors Cisco, NetApp, Microsoft Azure, ExaGrid and IBM Cloud. Here are two slides from Veeam presentations that I found compelling:

 “From the Architect’s Desk: Sizing of Veeam Backup & Replication, Proxies and Repositories”

This session was presented by Tim Smith, a solutions architect based in the US (Tim also runs the Tim's Tech Thoughts blog and is on Twitter as @Tsmith_co). Here is one slide where Tim outlines the sizing of the Veeam backup server for 2,000 VMs with eight jobs (just as an example). This is important, as sizing flows all the way through the environment: backup server, proxies, repositories, etc.

 “Let’s Manage Agents”

This session was presented by Dmitry Popov, senior analyst, product management, in charge of products including Veeam Agent for Microsoft Windows. Here is one slide where Dmitry shows a cool tip: unmanaged agents (agents running without a license in free mode) can be put into a protection group for centralized management and scheduling:

Attendees of the event will be able to access the recordings of these and all other sessions. More information will be sent in a follow-up email with the replay details.
Check out this recap video by our senior global technologists Anthony Spiteri and Michael Cade:

We will have more content tomorrow as well! I’ll be posting another blog with a recap from today’s event. For those of you at the event, be sure to use the hashtag #VeeamON to share your experiences!
The post VeeamON 2019: Day one recap appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/ehzxIPBeeTw/veeamon-2019-day-1-recap.html

DR

Fujitsu ETERNUS storage plug-in now available

Fujitsu ETERNUS storage plug-in now available

Veeam Software Official Blog  /  Rick Vanover


One of the things I love is how fast we can add storage integrations with the Veeam Universal Storage API. This framework was introduced with Veeam Backup & Replication 9.5 Update 3 and has allowed Veeam and our alliance partners to rapidly add new supported storage systems. Fujitsu Storage ETERNUS DX and AF are the newest systems to be supported with a Veeam plug-in.
This plug-in actually exposes many outstanding capabilities, much more than just the Backup from Storage Snapshots that seems to get all of the attention. This particular plug-in offers the following capabilities:

Let’s review these benefits for this storage plug-in.

Veeam Explorer for Storage Snapshots

This is my favorite capability, and it has a powerful set of options in our free Community Edition, and even more in the Enterprise and higher editions. It allows existing storage snapshots to be used for Veeam restores, such as a file-level recovery, a whole VM restore to a new location, application object exports (Enterprise and higher editions can restore applications and databases to the original location) or even an individual application item. Veeam reads the storage snapshots on the Fujitsu ETERNUS and presents them for launching the restores, all seamlessly and without impacting the operation of primary storage resources. The diagram below shows how you can launch the seven restore scenarios from a snapshot on the Fujitsu ETERNUS array:

Veeam Backup from Storage Snapshots

This capability will really unleash the power of the storage integration when it comes to performing the backup. The Backup from Storage Snapshots engine allows the data mover (a Veeam VMware backup proxy) to do the heavy lifting of a backup job from the storage snapshot rather than from a conventional VMware vSphere snapshot. I like to say that this integration allows organizations to take backups or replicas at any time of the day. All of the goodness you would expect is maintained, such as VMware Changed Block Tracking (CBT) and application consistency. This is all transparent with this integration. The diagram below shows the time sequence in relation to the I/O associated with the backup job:

On-Demand Sandbox for Storage Snapshots

Veeam has pioneered the “DataLabs” use case, and when using an integrated storage system like Fujitsu ETERNUS, you are truly unlocking endless opportunities. The On-Demand Sandbox for Storage Snapshots will take a storage snapshot and expose the VMs on it to an isolated virtual lab. This can be used for testing changes to critical applications, testing scripts, testing upgrades, verification of a result from an application and more.
One practical way the On-Demand Sandbox for Storage Snapshots can help organizations is by avoiding changes that don't go as expected, for one of many reasons. Firing up an application in an on-demand sandbox, powered by the Fujitsu ETERNUS DX/AF array, will give you a good sense of the changes. You can answer questions like "How long will it take?" and "What will happen?" and, most importantly, "Will it work?" This can save time by avoiding cancelled change requests caused by not knowing exactly what to expect from a change.

Primary Storage Snapshot Orchestration

The plug-in for Fujitsu ETERNUS DX/AF storage also allows you to create Veeam jobs that just create snapshots. These jobs will not produce a backup file, but will produce a storage snapshot that can be used with Veeam Explorer for Storage Snapshots.

The Power of the Storage Snapshot! Harness it!

The Fujitsu ETERNUS DX/AF storage systems are the latest integrated array with Veeam. You can download the plug-in here.

The post Fujitsu ETERNUS storage plug-in now available appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/f2fb71I_tOs/fujitsu-eternus-storage-snapshot-integration.html

DR

What’s new in v3 of Veeam’s Office 365 backup

What’s new in v3 of Veeam’s Office 365 backup

Veeam Software Official Blog  /  Niels Engelen


It is no secret anymore, you need a backup for Microsoft Office 365! While Microsoft is responsible for the infrastructure and its availability, you are responsible for the data as it is your data. And to fully protect it, you need a backup. It is the individual company’s responsibility to be in control of their data and meet the needs of compliance and legal requirements. In addition to having an extra copy of your data in case of accidental deletion, here are five more reasons WHY you need a backup.

With that quick overview out of the way, let’s dive straight into the new features.

Increased backup speeds from minutes to seconds

With the release of Veeam Backup for Microsoft Office 365 v2, Veeam added support for protecting SharePoint and OneDrive for Business data. Now with v3, we are improving the speed of SharePoint Online and OneDrive for Business incremental backups by integrating with the native Change API for Microsoft Office 365. This speeds up incremental backup times by up to 30 times, which is a huge game changer! The feedback we have seen so far is amazing, and we are convinced you will see the difference as well.

Improved security with multi-factor authentication support

Multi-factor authentication is an extra layer of security with multiple verification methods for an Office 365 user account. As multi-factor authentication is the baseline security policy for Azure Active Directory and Office 365, Veeam Backup for Microsoft Office 365 v3 adds support for it. Veeam Backup for Microsoft Office 365 v3 connects to Office 365 securely by leveraging a custom application in Azure Active Directory along with an MFA-enabled service account and its app password to create secure backups.

From a restore point of view, this will also allow you to perform secure restores to Office 365.

Veeam Backup for Microsoft Office 365 v3 will still support basic authentication, however, using multi-factor authentication is advised.

Enhanced visibility

By adding Office 365 data protection reports, Veeam Backup for Microsoft Office 365 allows you to identify unprotected Office 365 user mailboxes as well as manage license and storage usage. Three reports are available via the GUI (as well as PowerShell and the RESTful API).
The License Overview report gives insight into your license usage. It shows detailed information on licenses used for each protected user within the organization. As a Service Provider, you will be able to identify the top five tenants by license usage and bring license consumption under control.
The Storage Consumption report shows how much storage is consumed by the repositories of the selected organization. It gives insight into the top-consuming repositories and assists you with the daily change rate and growth of your Office 365 backup data per repository.

The Mailbox Protection report shows information on all protected and unprotected mailboxes, helping you maintain visibility of all your business-critical Office 365 mailboxes. As a Service Provider, you will especially benefit from the flexibility of generating this report either for all tenant organizations in scope or for a selected tenant organization only.

Simplified management for larger environments

Microsoft's Extensible Storage Engine has a file size limit of 64 TB per database. The workaround for this, for larger environments, was to create multiple repositories. Starting with v3, this limitation and the manual workaround are eliminated! Veeam's storage repositories are intelligent enough to know when you are about to hit the file size limit and automatically scale out the repository, eliminating this issue. The extra databases are easy to identify by their numerical order, should you need them:

Flexible retention options

Before v3, the only available retention policy was based on item age, meaning Veeam Backup for Microsoft Office 365 backed up and stored the Office 365 data (Exchange, OneDrive and SharePoint items) that was created or modified within the defined retention period.
Item-level retention works similarly to a classic document archive:

  • First run: We collect ALL items that are younger (attribute used is the change date) than the chosen retention (importantly, this could mean that not ALL items are taken).
  • Following runs: We collect ALL items that have been created or modified (again, attribute used is the change date) since the previous run.
  • Retention processing: Happens at the chosen time interval and removes all items where the change date became older than the chosen retention.

This retention type is particularly useful when you want to make sure you don’t store content for longer than the required retention time, which can be important for legal reasons.
Starting with Veeam Backup for Microsoft Office 365 v3, you can also leverage a “snapshot-based” retention type option. Within the repository settings, v3 offers two options to choose from: Item-level retention (existing retention approach) and Snapshot-based retention (new).
Snapshot-based retention works similarly to the image-level backups that many Veeam customers are used to:

  • First run: We collect ALL items no matter what the change date is. Thus, the first backup is an exact copy (snapshot) of an Exchange mailbox / OneDrive account / SharePoint site state as it looks at that point in time.
  • Following runs: We collect ALL new items that have been created or modified (attribute used here is the change date) since the previous run. Which means that the backup represents again an exact copy (snapshot) of the mailbox/site/folder state as it looks at that point in time.
  • Retention processing: During clean-up, we will remove all items belonging to snapshots of mailbox/site/folder that are older than the retention period.

Retention is a global setting per repository. Also note that once you set your retention option, you will not be able to change it.

Other enhancements

As Microsoft released new major versions for both Exchange and SharePoint, we have added support for Exchange and SharePoint 2019. We have also made a change to the interface and now support internet proxies. This was already possible in previous versions by editing the XML configuration; starting from Veeam Backup for Microsoft Office 365 v3, it is an option within the GUI. As an extra, you can even configure an internet proxy for any of your Veeam Backup for Microsoft Office 365 remote proxies. All of these new options are also available via PowerShell and the RESTful API for all the automation lovers out there.

On the licensing side, we have added two new options as well:

  • Revoking an unneeded license is now available via PowerShell (a short sketch follows this list)
  • Service Providers can gather license and repository information per tenant via PowerShell and the RESTful API and create custom reports
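A hedged sketch of the license-revocation workflow via PowerShell is shown below; the cmdlet nouns match the v3 module as I recall them, but verify them with Get-Command *VBO* on your server. The organization and user names are placeholders:

```powershell
# Sketch: free up a license consumed by a user who no longer needs protection.
Connect-VBOServer -Server 'vbo365.lab.local'

$org  = Get-VBOOrganization -Name 'contoso.onmicrosoft.com'
$user = Get-VBOLicensedUser -Organization $org |
        Where-Object { $_.Name -like '*departed.user*' }

Remove-VBOLicensedUser -User $user
```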

To keep a clean view on the Veeam Backup for Microsoft Office 365 console, Service Providers can now give organizations a custom name.

Based upon feature requests, starting with Veeam Backup for Microsoft Office 365 v3, it is possible to exclude or include specific OneDrive for Business folders per job. This feature is available via PowerShell or RESTful API. Go to the What’s New page for a full list of all the new capabilities in Veeam Backup for Microsoft Office 365.

Time to start testing?

There's no better time than the present to get hands-on with Office 365 backup. Download Veeam Backup for Microsoft Office 365 v3, or try Community Edition, FREE forever, for up to 10 users and 1 TB of SharePoint data.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/_liOANg0v5g/new-features-office-365-backup-v3.html

DR

How to enable MFA for Office 365

How to enable MFA for Office 365

Veeam Software Official Blog  /  Polina Vasileva


Starting from the recently released version 3, Veeam Backup for Microsoft Office 365 can retrieve your cloud data in a more secure way by leveraging modern authentication. For backups and restores, you can now use service accounts enabled for multi-factor authentication (MFA). In this article, you will learn how it works and how to set things up quickly.

How does it work?

For modern authentication in Office 365, Veeam Backup for Microsoft Office 365 leverages two different accounts: an Azure Active Directory custom application and a service account enabled for MFA. The application, which you must register in your Azure Active Directory portal in advance, allows Veeam Backup for Microsoft Office 365 to access the Microsoft Graph API and retrieve your Microsoft Office 365 organizations' data. The service account is used to connect to EWS and PowerShell services.
Correspondingly, when adding an organization to the Veeam Backup for Microsoft Office 365 scope, you will need to provide two sets of credentials: your Azure Active Directory application ID with either an application secret or an application certificate, and your service account name with its app password:

Can I disable all basic authentication protocols in my Office 365 organization?

While Veeam Backup for Microsoft Office 365 v3 fully supports modern authentication, it has to fill in the existing gaps in Office 365 API support by utilizing a few basic authentication protocols.
First, for Exchange Online PowerShell, the AllowBasicAuthPowershell protocol must be enabled for your Veeam service account in order to get the correct information on licensed users, users' mailboxes, and so on. Note that it can be applied on a per-user basis; you don't need to enable it for your entire organization, only for the Veeam accounts, thus minimizing the footprint for a possible security breach.
Another Exchange Online authentication protocol to pay attention to is AllowBasicAuthWebServices. You can disable it within your Office 365 organization for all users; Veeam Backup for Microsoft Office 365 can make do without it. Note, though, that in this case you will need to use an application certificate instead of an application secret when adding your organization to Veeam Backup for Microsoft Office 365.
Last but not least, to be able to protect text, images, files, video, dynamic content and more added to your SharePoint Online modern site pages, Veeam Backup for Microsoft Office 365 requires LegacyAuthProtocolsEnabled to be set to $True. This basic authentication protocol takes effect for your entire SharePoint Online organization, but it is required to work with certain specific services, such as ASMX.
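To keep the footprint small, these settings can be scoped with standard Microsoft cmdlets. Here is a sketch, assuming the Exchange Online and SharePoint Online PowerShell modules are connected; the policy, account and tenant names are placeholders:

```powershell
# Allow basic-auth PowerShell for the Veeam service account only, via an
# authentication policy scoped to that one account.
New-AuthenticationPolicy -Name 'Veeam Service Account' -AllowBasicAuthPowershell
Set-User -Identity 'svc-veeam@contoso.onmicrosoft.com' `
         -AuthenticationPolicy 'Veeam Service Account'

# LegacyAuthProtocolsEnabled is tenant-wide for SharePoint Online, as noted above.
Connect-SPOService -Url 'https://contoso-admin.sharepoint.com'
Set-SPOTenant -LegacyAuthProtocolsEnabled $true
```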

How can I get my application ID, application secret and application certificate?

Application credentials, such as an application ID, application secret and application certificate, become available on the Office 365 Azure Active Directory portal upon registering a new application in the Azure Active Directory.
To register a new application, sign into the Microsoft 365 Admin Center with your Global Administrator, Application Administrator or Cloud Application Administrator account and go to the Azure Active Directory admin center. Select New application registration under the App registrations section:

Add the app name, select Web app/API application type, add a sign-on URL (this can be any custom URL) and click Create:

Your application ID is now available in the app settings, but there are a few more steps to complete the app configuration. Next, you need to grant your new application the required permissions. Select Settings on the application's main registration page, go to Required permissions and click Add:

In the Select an API section, select Microsoft Graph:

Then click Select permissions and select Read all groups and Read directory data:

Note that if you want to use an application certificate instead of an application secret, you must additionally select the following APIs and corresponding permissions when registering the new application:

  • Microsoft Exchange Online API access with the Use Exchange Web Services with full access to all mailboxes permission
  • Microsoft SharePoint Online API access with the Have full control of all site collections permission

To complete granting permissions, you need to grant administrator consent. Select your new app from the list in the App registrations (Preview) section, go to API Permissions and click Grant admin consent for <tenant name>. Click Yes to confirm granting permissions:

Now your app is all set and you can generate an application secret and/or application certificate. Both are managed on the same page. Select your app from the list in the App registrations (Preview) section, click Certificates & secrets and select New client secret to create a new application secret or select Upload certificate to add a new application certificate:

For the application secret, you will need to add a secret description and an expiration period. Once it's created, copy its value (for example, to Notepad), as it won't be displayed again:

How can I get my app password?

If you already have a user account enabled for MFA for Office 365 and granted with all the roles and permissions required by Veeam Backup for Microsoft Office 365, you can create a new app password the following way:

  • Sign into the Office 365 with this account and pass additional security verification. Go to user’s settings and click Your app settings:
  • You will be redirected to https://portal.office.com/account, where you need to navigate to Security & privacy and select Create and manage app passwords:
  • Create a new app password and copy it, for example, to Notepad. Note that the same app password can be used for multiple apps or a new unique app password can be created for each app.

What’s next?

Now you have all the credentials you need to start protecting your Office 365 data. When adding an Office 365 organization to the Veeam Backup for Microsoft Office 365 scope, make sure you select the correct deployment type (Microsoft Office 365) and the correct authentication method (in our case, Modern authentication). Keep in mind that with v3 you can choose to use the same or different credentials for Exchange Online and SharePoint Online (together with OneDrive for Business). If you want to use separate custom applications for Exchange Online and SharePoint Online, don't forget to register both in advance, in a similar way as described in this article.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/qtrIGmLFGT8/setup-multi-factor-authentication-office-365.html

DR

How to limit egress costs within AWS and Azure

How to limit egress costs within AWS and Azure

Veeam Software Official Blog  /  Nicholas Serrecchia


With Update 4’s exciting new cloud features, there are settings within AWS and Azure that you should familiarize yourself with to help negate some of the egress traffic costs, as well as help with security.
Right now, let’s talk about the scenarios where:

  • You are backing up Azure/AWS instances, utilizing Veeam Backup & Replication with a Veeam Agent, while utilizing Capacity Tier all inside of AWS/Azure
  • You have a SOBR instance in AWS/Azure and utilize Capacity Tier
  • When N2WS backup and recovery/Veeam Availability for AWS performs a copy to Amazon S3
  • If Veeam is deployed within AWS/Azure and you perform a DR2EC2 without a proxy or DR2MA

In AWS, by default, all traffic written into S3 from a resource within a VPC, like an EC2 instance, faces egress costs in all the scenarios listed above. When we archive data into S3, or do a disaster recovery to EC2 where Veeam uploads the virtual disk into S3 so AWS can convert it to Elastic Block Store (EBS) volumes (AWS VMImport), we face an egress charge per GB. There is the option to utilize a NAT gateway/instance, but again, there is a price associated with that as well.
Thankfully, there is an option you can enable, which is basically the "don't charge me egress!" button. That feature is called VPC Endpoints in AWS and VNet Service Endpoints in Azure.

Limit AWS egress costs

As stated by AWS:
“A VPC Endpoint enables you to privately connect your VPC to supported AWS services and VPC Endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network”.
Simply enable a VPC Endpoint for the S3 service within that VPC, and you will no longer face egress costs when an EC2 instance moves data into S3. This is because the EC2 instance no longer needs a public IP, internet gateway or NAT device to send data to S3.

Now that you have enabled the VPC Endpoint, I highly recommend creating a bucket policy to specify which VPCs or external IP addresses can access the S3 bucket.
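Using the AWS Tools for PowerShell, the endpoint plus a matching bucket policy might look like the sketch below; the VPC, route table, endpoint and bucket identifiers are placeholders, and a Deny policy like this will also block consoles and tools outside the VPC, so test it carefully:

```powershell
# Create a gateway VPC Endpoint for S3 in the region's VPC.
New-EC2VpcEndpoint -VpcId 'vpc-0123456789abcdef0' `
                   -ServiceName 'com.amazonaws.us-east-2.s3' `
                   -RouteTableId 'rtb-0123456789abcdef0'

# Deny any bucket access that does not arrive through that endpoint.
$policy = @'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyOutsideVpce",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::veeam-capacity-tier",
                 "arn:aws:s3:::veeam-capacity-tier/*"],
    "Condition": { "StringNotEquals": { "aws:sourceVpce": "vpce-0123456789abcdef0" } }
  }]
}
'@
Write-S3BucketPolicy -BucketName 'veeam-capacity-tier' -Policy $policy
```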

Limit Azure egress costs

Azure handles the egress costs from their instances into Blob storage in the same manner AWS does, but the nomenclature is different: Azure uses VNets instead of VPCs, and it too has a feature that can be enabled at the VNet level: VNet Service Endpoints.
As stated by Microsoft Azure:
“Virtual Network (VNet) service endpoints extend your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network.”

With Azure, you can then set up a firewall within the storage account to limit internet access to that resource.
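The equivalent with the Az PowerShell module could look like this sketch; the resource group, VNet, subnet and storage account names are placeholders:

```powershell
# Enable the Microsoft.Storage service endpoint on the backup subnet.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'veeam-rg' -Name 'veeam-vnet'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'backup-subnet'

Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'backup-subnet' `
    -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint 'Microsoft.Storage' |
    Set-AzVirtualNetwork

# Storage account firewall: deny by default, then allow only the backup subnet.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName 'veeam-rg' `
    -Name 'veeamblobstore' -DefaultAction Deny
Add-AzStorageAccountNetworkRule -ResourceGroupName 'veeam-rg' `
    -Name 'veeamblobstore' -VirtualNetworkResourceId $subnet.Id
```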
Again, this applies to instances hosted within a VNet or VPC talking to their respective object storage within the same region, not to on-premises systems talking to an S3/Azure storage account.

References:

The post How to limit egress costs within AWS and Azure appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/QmIGpnTArlU/limit-aws-azure-data-transfer-costs.html

DR

Veeam & NetApp – Backup for the NetApp Data Fabric

Veeam & NetApp – Backup for the NetApp Data Fabric

vZilla  /  michaelcade

ONTAP 9.5 went GA a few weeks back, and I woke up this morning to the weekly digest from Anton Gostev mentioning support for ONTAP 9.5 in Veeam Backup & Replication 9.5 Update 4a. Note that this does not support synchronous SnapMirror, the new feature released with ONTAP 9.5.
I have been preaching the powers of NetApp & Veeam for a few years now, but if you have not yet seen the power of the integration between the two vendors, I am going to summarise it all in this post today.

Explorer

Let's start with the free stuff you can do with Veeam in a NetApp ONTAP environment. Veeam Community Edition (the free version of Veeam) can add an ONTAP controller to your Veeam environment, and regardless of whether Veeam created the native ONTAP snapshots or not, Veeam can go inside those snapshots and perform recoveries of any VMware virtual machine within them. This could be full virtual machine recovery or Instant VM Recovery (see here). It could also be file-level recovery, or, if you want to get down to application items, perhaps a mail item or SharePoint object, you can do that from the FREE version too!

Orchestrate

Natively, you can create a NetApp snapshot pretty much stress free; however, this will be crash-consistent, and while some application servers can withstand crash consistency on recovery, your most business-critical applications will likely require application consistency. Why not use Veeam to orchestrate application-aware, application-consistent snapshots? This provides a much tighter Recovery Point Objective. Leveraging ONTAP snapshots this way is also more appealing given that ONTAP 9.4 increased the limit of snapshots per volume from 255 to 1023. And with it being a snapshot, your Recovery Time Objective is going to be seconds!
Veeam can also orchestrate the SnapMirror or SnapVault snapshot data transfer.

Backup

OK, sounds good so far. Now the icing on the cake, or maybe the buttercream in the middle of the cake. Snapshots are great, and I am not getting into the snapshot vs. backup debate; it's 2019, and if you don't get it or agree then you probably voted BREXIT and I am fine with that, but this is my time to shine, not yours.
The integration between Veeam and NetApp gives us the ability to leverage those snapshots and gain all the benefits mentioned in the section above, but also to get those blocks of data onto another media type.

OnDemand

We have you covered for granular recovery, a fast RTO, a tighter RPO and a copy of your data on a separate media type that can protect you from internal malicious threats or corruption. Now the cherry on top.
We can take those ONTAP snapshots and those backups and spin them up in an isolated environment. Veeam DataLabs gives us the ability to leverage the FlexClone technology on the ONTAP system and automate the creation, provisioning and removal of small, isolated sandbox environments. This YouTube clip of a session I did is a little old now, but the capability remains a huge feature within Veeam Backup & Replication.

Oh, I should also mention that we have the same integration with Element Software and NetApp HCI; I wrote about that here.
I think that’s all for now in the world of NetApp & Veeam. If you have any questions, then you can find me on the twitters @MichaelCade1

Original Article: https://vzilla.co.uk/vzilla-blog/veeam-netapp-backup-for-the-netapp-data-fabric

DR

Tenant Backup to Tape

Tenant Backup to Tape

CloudOasis  /  HalYaman


As a Service Provider, what ways are available to you to protect your tenants' data? What ways are there to offload aged data to keep storage costs under control? When looking at data archiving, is cloud storage the only way to archive your tenants' backup data?

I chose this topic because I feel that the Backup to Tape as a Service feature is not getting the attention it deserves.
The Tape as a Service feature was included in Veeam Update 4 to help Service Providers free up space in their storage repositories and to help customers safely archive their backup data for longer retention.
In the same Update 4, Veeam also released the Cloud Tier, which allows Service Providers and customers to tier their backups to the cloud for archiving. Somehow, that feature grabbed the market's attention while Tape as a Service was neglected.
As someone who is still a big fan of tape, and who believes tape is not going to disappear in the near future, I think this blog post is a great read for Service Providers who want to learn more about the Tape as a Service feature included with Veeam Update 4.
So let’s learn how it works.

Architecture

The tenant Backup to Tape architecture is an add-on to the Veeam Cloud Connect architecture, with the addition of the tape infrastructure. Service Providers who want to offer this feature must add the tape infrastructure to Veeam Cloud Connect. A tape backup job must then be configured to offload the tenant backups from the cloud storage/repository to tape using a Veeam backup to tape job.
The diagram below illustrates how the Veeam tenant Backup to Tape feature integrates with Veeam Cloud Connect.
After the tape infrastructure has been added to Veeam Cloud Connect, the Service Provider can create a GFS (Grandfather-Father-Son) backup job to back up the tenant data to tape for both of these reasons:

  • Archival for long retention, and
  • Offline Archive.

Job Configuration

The configuration starts with creating a GFS media pool. The Service Provider can create a pool per tenant, per retention scheme, or per any other business offering. In the examples we are going to use here, to achieve full data segregation, I will create a dedicated GFS media pool for each tenant. In the screenshot below, I created a media pool with five tapes to be used when I back up Tenant01 data:
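If you prefer to script this step, the media pool can also be created with PowerShell. This is a minimal sketch, assuming the VeeamPSSnapin and the Add-VBRTapeGFSMediaPool cmdlet as documented in the Veeam PowerShell reference; the library name mirrors this example, while the pool name and the choice of the first five tapes are illustrative, so verify the parameters against your version:

# Create a dedicated GFS media pool for Tenant01 out of five available tapes
Add-PSSnapin VeeamPSSnapin
$library = Get-VBRTapeLibrary -Name "HP MSL G3 Series 9.50"
$tapes = Get-VBRTapeMedium -Library $library | Select-Object -First 5
Add-VBRTapeGFSMediaPool -Name "Tenant01 GFS" -Library $library -Medium $tapes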

The next step is illustrated below, where I created the Tape Backup Job and then selected Tenants as the data source:

Next, I selected the appropriate tenant repository; in the example below, it is Tenant01_Repo:

At the Media Pool menu item, select the specific tenant media pool. In this example, it is Tenant01 (HP MSL G3 Series 9.50).
The last step is to schedule the job to run the backup:

Recover Tenant Data from Tape

It is important to mention that Veeam Tenant Backup to Tape as a Service is managed and operated solely by the Service Provider to help protect and archive the tenant data on the tenant’s behalf. This means the tenant will not be aware of the job or the restore process. To benefit from this service, the tenant and the Service Provider must agree on the policy to be implemented.
In the case of data loss, the Tenant can ask the service provider to restore the data using one of the following options:

  • Restore to the original location,
  • Restore to a new location, or
  • Export backup files to disk.

The screenshot below illustrates the options:

With the options shown above, Service Providers can restore the tenant data to the original location, restore it to a different location, or export the data to removable storage. Original location restores the data to the same location it was backed up from, i.e. the tenant cloud repository. But sometimes the tenant wants to compare the data on tape with the data in the cloud repository; this is where the restore to a new location option becomes very useful. Service Providers can create a temporary tenant, and the tenant is provided with access to check the data before committing the restore to the original location or sending the data on removable storage. The screenshot below illustrates the Service Provider restoring the tenant data to a new location using the temporary tenant name of Tenant01_Repo.
After the restoration is complete, the tenant can connect to the Service Provider using the temporary tenant to access the restored data:

Summary

As I mentioned at the start, I’m a big fan of tape backups for several reasons, including secure recovery in the event of a ransomware attack and low cost. I am also aware of the limitations and challenges that can come with tape backup; maybe that can be the topic for another blog post. The Backup to Tape feature described here is a great feature, yet several Service Providers I spoke to had somehow missed it. They were very happy to finally find a cheap and effective way to use their tape infrastructure, and some of them have started using this feature as ransomware protection.
I hope this blog post provides a clear understanding of how you as a Service Provider can benefit from this Tenant Tape Backup feature and maximize your infrastructure return on investment.
The post Tenant Backup to Tape appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/04/10/tenantbackuptotape/

DR

Disaster recovery plan documentation with Veeam Availability Orchestrator

Disaster recovery plan documentation with Veeam Availability Orchestrator

Veeam Software Official Blog  /  Sam Nicholls


Without a doubt, the automated reporting engine in Veeam Availability Orchestrator and the disaster recovery plan documentation it produces are among its most powerful capabilities. We’ve had overwhelmingly positive feedback from customers who benefit from them, and I feel that sharing some more insight into what these documents are capable of will help you understand how you can benefit from them, too.
Imagine coming in to work on a Monday morning to an email containing an attachment that tells you that your entire disaster recovery plan was tested over the weekend without you so much as lifting a finger. Not only does that attachment confirm that your disaster recovery plan has been tested, but it tells you what was tested, how it was tested, how long the test took, and what the outcome of the test was. If it was a success, great! You’re ready to take on disaster if it decides to strike today. If it failed, you’ll know what failed, why it failed, and where to start fixing things. The document that details this for you is what we call a “test execution report,” but that is just one of four fully-automated documentation types that Veeam Availability Orchestrator can put in your possession.

Definition report

As soon as your first failover plan is created within Veeam Availability Orchestrator, you’ll be able to produce the plan definition report. This report provides an in-depth view into your entire disaster recovery plan’s configuration, as well as its components. This includes the groups of VMs included in that plan, the steps that will be taken to recover those VMs, and the applications they support in the event of a disaster, as well as any other necessary parameters. This information makes this report great for auditors and management, and can be used to obtain sign-off from application owners who need to verify the plan’s configuration.

Readiness check report

Veeam Availability Orchestrator contains many testing options, one of which we call a readiness check: a great sanity check so lightweight that it can be performed at any time. This test completes incredibly quickly and has zero impact on your environment’s performance, either in production or at the disaster recovery site. The resulting report documents the outcome of each of the test’s steps, including whether the replica VMs are detected and prepared for failover, whether the desired RPO is currently met, whether the VMware vCenter Server and Veeam Backup & Replication server are online and available, whether the required credentials are provided, and whether the required failover plan steps and parameters have been configured.

Test execution report

Test execution reports are generated upon the completion of a test of the disaster recovery plan, powered by enhanced Veeam DataLabs that have been orchestrated and automated by Veeam Availability Orchestrator. This testing runs through every step identified in the plan as if it were a real-world scenario and documents in detail everything you could possibly want to know. This makes it ideal for evaluating the disaster recovery plan, proactively troubleshooting errors, and identifying areas that can be improved upon.

Execution report

This report is exactly the same as the test execution report but is only produced after the execution of a real-world failover.
Now that we understand the different types of reports and documentation available in Veeam Availability Orchestrator, I wanted to highlight some of the key features for you that will make them such an invaluable tool for your disaster recovery strategy.

Automation

All four reports are automatically created, updated and published based on your preferences and needs. They can be scheduled to run at any frequency you see fit (daily, weekly, monthly, etc.), but are also available on demand with a single click. This means that if management or an auditor ever wants the latest version, you can hand them real-time, up-to-date documentation without laborious, time-consuming and error-prone manual edits. You can even automate this step by subscribing specific stakeholders or mailboxes to the reports relevant to them.

Customization

All four reports available with Veeam Availability Orchestrator ship in a default template format. This template may be used as-is; however, it is recommended to clone it (as the default template is not editable) and customize it to your organization’s specific needs. Customization is key, as no two organizations are alike, and neither are their disaster recovery plans. You can include anything you like in your documentation: logos, application owners, disaster recovery stakeholders and their contact information, even all the 24-hour food delivery services in the area for when things go wrong and the team needs to get through the night. You name it, you can customize and include it.

Built-in change tracking

One of the most difficult things to stay on top of with disaster recovery planning is how quickly and dramatically environments can change. In fact, uncaptured changes are one of the most common causes of disaster recovery failure. Plan definition reports conveniently contain a section titled “plan change log” that details any edits to the plan’s configuration, whether made by automation or manually. This gives you the ability to track who changed plan settings, when they were changed, and what was changed, so that you can understand whether a change was made correctly or in error, and account for it before a disaster happens.

Proactive error detection

The actionable information available in both readiness check and test execution reports enables you to eradicate risks to your disaster recovery plan’s viability and reliability. By knowing what will and will not work ahead of time (e.g. a recovery that takes too long, or a VM replica that has not been powered down post-test), you can identify and proactively remediate any plan errors before disaster strikes. This in turn gives you and your organization confidence that you will be successful in a real-world event. Luckily, in the screenshot below, everything succeeded in my test.

Assuring compliance

Understanding the compliance requirements laid out by your organization or an external regulatory body is one thing. Proving that those requirements have been met, today and in the past, when undergoing a disaster recovery audit is another, and failure to do so can have costly repercussions. Veeam Availability Orchestrator’s reports enable you to prove that your plan can meet measures like maximum acceptable outage (MAO) or recovery point objectives (RPO), whether they’re defined by regulations such as SOX or HIPAA, bodies like the SEC, or an internal SLA.
If you’d like to learn more about how Veeam Availability Orchestrator can help you meet your disaster recovery documentation needs and more, schedule a demo with your Veeam representative, or download the 30-day FREE trial today. It contains everything you need to get started, even if you’re not currently a Veeam Backup & Replication user.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/S7QtakHRATg/disaster-recovery-planning-documentation.html

DR

NEW Veeam Backup for Microsoft Office 365 v3

NEW Veeam Backup for Microsoft Office 365 v3
Veeam Backup for Microsoft Office 365 is Veeam’s fastest growing product of all time, and with v3, it’s easier than ever for your customers to efficiently back up and reliably restore their Office 365 Exchange, SharePoint and OneDrive data. Watch this short and educational video on the ProPartner portal to learn more about the new features and functionality in v3 and how you can earn more with Veeam Backup for Microsoft Office 365.

Helping customers migrate from vSphere 5.5

Helping customers migrate from vSphere 5.5
What happens when customers are still running vSphere 5.5 in production? Matt Lloyd and Shawn Lieu sat down with VMware solution architect Chris Morrow to discuss upgrading to vSphere 6.5 or 6.7, management changes in the newer releases, leveraging Veeam DataLabs™ for validation and more.

Enhanced agent monitoring and reporting in Veeam ONE

Enhanced agent monitoring and reporting in Veeam ONE

Veeam Software Official Blog  /  Kirsten Stoner


Veeam ONE has many capabilities that can benefit your business, one of which is the ability to run reports that compile information about your entire IT environment. Veeam ONE Reporter is a tool you can use to document and report on your environment to support analysis, decision making, optimization and resource utilization. For those who have used Veeam ONE in the past, it’s the go-to tool for reporting on your Veeam Backup & Replication infrastructure, VMware vSphere and Microsoft Hyper-V environments. But one thing was missing from Veeam ONE: the depth of visibility it provided for the Veeam Agents for Microsoft Windows and Linux. With the latest update, Veeam ONE Reporter gains three new reports that assess your Veeam Agent backup jobs. In addition to these predefined reports, the update adds information about computers to the “Backup Infrastructure Custom Data” report, and it provides agent monitoring by letting you categorize any machine protected by the Veeam Agent within Veeam ONE Business View, allowing you to monitor activity from a business perspective. Ultimately, this update enables Veeam ONE to provide substantial reporting and monitoring for any physical machine protected by the Veeam Agent.

Which reports are new?

Veeam ONE Update 4 adds more predefined reports to analyze Veeam Agent backup job activity: Computers with no Archive Copy, Computer Backup Status, and Agent Backup Job and Policy History.

  • The “Computers with no Archive Copy” report highlights all the computers that do not have an archive copy.
  • The “Computer Backup Status” report provides daily backup status information for all protected agents.
  • The “Agent Backup Job and Policy History” report provides historical information for all Veeam Agent policies and jobs.

All these reports provide visibility to help you evaluate your data-protection strategy for any workload running a Veeam Agent in your environment. In this post, I want to highlight the Agent Backup Job and Policy History report as well as discuss the addition to the “Backup Infrastructure Custom Data” report, because both provide a great amount of information on agent backup jobs.


Figure 1: Veeam Backup Agent reports

Agent Backup Job and Policy History report

For Veeam Agents, this report provides great information on the status of the Backup Job, along with data on the job run itself. To access the report:

  1. Open Veeam ONE Reporter, switch to the Workspace view, and find the Veeam Backup Agent Reports folder; this is where you will see all the pre-built reports available for the Veeam backup agents.
  2. Select the report, choose Scope and select the time interval you want the report to contain. (You can build the report to be more specific by selecting an individual backup server, several job/policies to be reported on or drill down further to the exact Agent Backup Job.)
  3. Once you have made your decision, click Preview Report and the report will be created.

Figure 2: Report Scope options

The report contains historical information for your Veeam Agent backup jobs. How you defined the report scope will determine how specific or general the data will be. Either way, the report provides great depth of data.


Figure 3: Agent Backup Job and Policy History Report

The first page shows an overview of the agent backup jobs and policies, the dates the jobs were run, and whether each run falls into the Success, Failed or Warning category. On the second page of the report, you can see more detail, such as the total backup size and the number of restore points created.


Figure 4: Backup Job and Policy History details

If you select a specific date on which the job ran, the report gives even more analysis of that specific run of the backup job.


Figure 5: Detailed Description of Agent Backup Job results

This tells you when the job started, how long it took and the backup size. You can even tell whether the run was full or incremental. The report provides visibility into your IT environment by gathering real-time data on your protected agent backups.

Backup infrastructure custom data report

This report allows you to combine data-protection elements that are not covered together in the predefined reports included in Veeam ONE. The report can define and display data points about Veeam Backup & Replication objects, including backup servers, backup jobs, agent jobs and VMs. This is useful because it allows you to build a report that displays the aspects of the backup infrastructure of your choosing for easy analysis and visibility. Running this report works much the same as discussed previously in this blog post: locate the custom report pack in the Workspace view, then choose the objects you want to show and the aspects of the backup infrastructure you want the report to analyze.


Figure 6: Custom reporting

When creating the report, you can choose between the different aspects of the backup infrastructure you want shown. In addition, you can apply custom filters to the selected objects so that the report shows only the data you want. Here is an example of a report that was run using the custom report pack.


Figure 7: Backup Infrastructure Custom Data Report

The ability to create custom reports allows you to define your own configuration parameters, performance metrics, and filters when utmost flexibility is required. How cool is that?

Agent monitoring in Business View

To assist with monitoring Veeam Agent backup activity, you can utilize Veeam ONE Business View, which has added the ability to categorize agents in business terms. If you have your backup server(s) connected to Veeam ONE, you can start categorizing any machine that is being protected by the Veeam Agent.


Figure 8: Agent Monitoring in Business View

Veeam ONE Business View allows you to group any computers running the Veeam Backup Agent managed by your backup server. This gives you another layer of monitoring for the Veeam Agents that was not available in previous versions of Veeam ONE.

Bringing visibility to the Veeam Agents

Veeam ONE gives you the tools needed to accurately monitor and report on your entire IT environment. Actively monitoring and reporting on your IT environment allows you to be proactive when addressing issues, helps you plan for future business IT operation needs, and provides understanding of how your data center works. By adding Veeam Agent reporting for your physical environment, you can gather data points on the Veeam Agent backup jobs and document the results. The latest update brings many enhancements and new functionality to Veeam ONE, making it well worth putting to use in your data center today.
The post Enhanced agent monitoring and reporting in Veeam ONE appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/JuL1W3jIYes/enhanced-agent-monitoring-reporting.html

DR

Application-level monitoring for your workloads

Application-level monitoring for your workloads

Veeam Software Official Blog  /  Rick Vanover


If you haven’t noticed, Veeam ONE has gained an incredible number of capabilities with the 9.5 Update 4 release.
One capability that can be a difference-maker is application-level monitoring. This is a big deal for keeping applications available and is part of a bigger Availability story. Combined with the incredible backup capabilities of Veeam Backup & Replication, application-level monitoring extends Availability to the applications on the workloads that need it most. What’s more, you can combine this with actions in Veeam ONE Monitor to put in place the handling you want when applications don’t behave as expected.
Let’s take a look at application-level monitoring in Veeam ONE. This capability is inside of Veeam ONE Monitor, which is my personal favorite “part” of Veeam ONE. I’ve always said with Veeam ONE, “I guarantee that Veeam ONE will tell you something about your environment that you didn’t know, but need to fix.” And with application-level monitoring, the story is stronger than ever. Let’s start with both the processes and services inside of a running virtual machine in Veeam ONE Monitor:

I’ve selected the SQL Server service, which is likely important on any system where it is present. Veeam ONE Monitor offers a number of handling options for this service. The first are simple start, stop and restart options that can be passed to the service control manager. But we can also set up alarms based on the service:

The alarm capability for monitored services allows for very explicit handling, and you can make it match the SLA or expectations of your stakeholders. Take how this alarm is configured: if the service is not running for 5 minutes, the alarm is triggered as an error. I’ll get to what happens next in a moment, but this 5-minute window (which is configurable) can be what you consider a reasonable amount of time for routine maintenance to complete. If the downtime exceeds 5 minutes, something may not be operating as expected, and chances are the service should be restarted. This is especially true if you have a fiddlesome application that constantly, or even occasionally, requires manual intervention. This 5-minute threshold may even be quick enough to avoid being paged in the middle of the night! The alarm rules are shown below:

The alarm by itself is good, but sometimes we need more. That’s where a different Veeam ONE capability, remediation actions, can help. It’s natural to equate the remediation actions with the base capability: the base capability is the application-level monitoring, but the means of fully leveraging it comes from the remediation actions.
With the remediation actions, the proper handling can be applied for this application. In the screenshot below, I’ve put in a specific PowerShell script that is run automatically when the alarm is triggered. Let your ideas go crazy here; it can be as simple as restarting the service, but you may also want to notify application owners that the application was remediated if they are not using Veeam ONE. This alone may be the motivation needed to set up read-only access for the application team to their applications. The configuration to run the script and automatically resolve the alarm is shown below:
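As an illustration of the kind of script you could attach here, the sketch below restarts the service and notifies the owners. This is not the script from the screenshot; the service name, mail addresses and SMTP server are hypothetical:

# Remediation action: bring the service back and notify the application owner
param([string]$ServiceName = "MSSQLSERVER")

$svc = Get-Service -Name $ServiceName
if ($svc.Status -ne "Running") {
    # The alarm fired because the service is down, so start it
    Start-Service -Name $ServiceName
} else {
    # The service is up but misbehaving, so restart it
    Restart-Service -Name $ServiceName -Force
}

# Tell the application owners that automatic remediation ran
Send-MailMessage -From "veeamone@lab.local" -To "app-owners@lab.local" `
    -Subject "Veeam ONE restarted '$ServiceName' on $env:COMPUTERNAME" `
    -SmtpServer "smtp.lab.local"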

Another piece of intelligence regarding services: application-level monitoring in Veeam ONE will also allow you to set an alarm based on the number of services changing. For example, if one or more services are added, an alarm is triggered. This can be an indicator of an unauthorized software install, or possibly a ransomware service.
Don’t let your creativity stop at service state; that’s just one example, and application-level monitoring can be used for many other use cases. Processes, for example, can have alarms built on many criteria (including resource utilization), as shown below:

If we look closer at process CPU, we can see that alarms can be triggered when a process’s CPU usage (as well as other metrics) goes beyond specified thresholds. As in the previous example, we can also add remediation actions to sort out the situation based on pre-defined conditions. These warning and error thresholds are shown below:

As you can see, application-level monitoring used in conjunction with other new Veeam ONE capabilities can really set the bar high for MORE Availability. The backup, the application and more can be looked after with the exact amount of care you want to provide. Have you seen this new capability in Veeam ONE? If you haven’t, check it out!
You can find more information on Veeam Availability Suite 9.5 Update 4 here.

More on new Veeam ONE capabilities:

The post Application-level monitoring for your workloads appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/o53BIn44Mzk/application-level-monitoring.html

DR

Backup Azure SQL Databases

Backup Azure SQL Databases

CloudOasis  /  HalYaman


You have a good reason to use Microsoft Azure SQL Database, but you are wondering how you can back up the database locally. Can you include Azure database protection in your company’s backup strategy? What does it take to back up Azure databases?

In this blog post, I am going to share with you a solution I used for one of our Azure database customers who wanted to back up their Azure SQL Database locally. The solution I came up with consists of the following:

  • Azure Databases – SQL Database
  • VM/Physical Server with local SQL server installed
  • An Empty SQL Database
  • Configure Azure: Sync to other Databases
  • Veeam Agent & Veeam Backup & Replication (depends on the deployment)

The following diagram illustrates the solution I am describing on this blog post:

Solution Overview

There are several ways to back up Azure SQL databases; here are two. One is to use the Veeam Agent, in the event you choose to deploy your SQL Server on a physical server or on a virtual machine inside the Azure cloud. The other is to deploy the Veeam Backup & Replication solution on premises on a VM inside your hypervisor infrastructure. You can also use a combination of both.
After the SQL Server is deployed, the solution requires the creation of an empty SQL database, and then the synchronisation between the two databases must be configured. No worries, I will take you through the steps.

Preparing your Azure SQL Databases for Sync

In this first step, I will discuss preparing the existing Azure SQL database to be synchronised. Follow these steps:

1. Create a Sync Group

From Azure Portal, select SQL Databases. Click on the Database you are going to work with. In the following example, I created a temporary SQL Database and then filled it with temporary data for testing.
After you have moved to the database properties screen, select the Sync to other databases option:
Then select New Sync Group. Complete the following entries:
a. Fill in the name of the group. In this example, it is GlobalSync,
b. The database you want to sync. We have used the AzureSqlDb on Server: depsqlsrv.
c. Select Automatic Sync on or off as desired. Off in this example.
d. Select the conflict resolution option. Here the Hub is to win.
In the next step, you are going to configure the On-Premises Database Sync members. That is, the Azure Database and the Sync Agent, as shown below.
Note: You must copy the key for later use.
Next, download the Client Sync Agent from the provided link and install it on the SQL Server. Press OK. The next step is to Select the On-Premises Database.
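For those who prefer scripting, the same Sync Group can be created with the Az.Sql PowerShell module instead of the portal. This is a sketch under stated assumptions: you are logged in via Connect-AzAccount, the server and database names mirror this example, the resource group and sync metadata database names are hypothetical, and the exact parameter set should be verified with Get-Help New-AzSqlSyncGroup:

# Create the GlobalSync group with the hub winning on conflicts
New-AzSqlSyncGroup -ResourceGroupName "rg-databases" -ServerName "depsqlsrv" `
    -DatabaseName "AzureSqlDb" -Name "GlobalSync" `
    -SyncDatabaseResourceGroupName "rg-databases" -SyncDatabaseServerName "depsqlsrv" `
    -SyncDatabaseName "SyncMetadataDb" -ConflictResolutionPolicy "HubWin"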

Prepare the SQL Server

After completing the steps above, switch to your SQL server, log in, and create an empty SQL database.
Next, start the installation of the Client Sync Agent and run it after the installation. Once the Client Sync Agent is running, press Submit Agent Key. Provide the key you copied in the previous step and your Azure SQL Database username and password:
After pressing the OK button, press Register, then provide your local SQL server name and the name of the temporary database you just created.
The steps above should run without connection issues. If you encounter problems, go back over the steps and settings and correct the errors.
Your SQL Server preparation is now complete, and the server is waiting for the synchronised data to be received. Before you get too excited, we must switch back to the Azure Portal for the final steps before we test the solution.

Back To Azure Portal

Continuing from the previous steps, the Client Sync Agent communicates with the Azure Portal and updates it with the details of the local SQL Server database. Now we can complete the configuration of the selected database below.

Press OK three times to return to the last step on the Sync Group configuration process. Here we must select the Hub Database tables to sync with the local database.

Press Save to finish.

Testing the Solution

To test our solution, let’s first browse to the SQL server and check to see if we can find any tables inside the database we just created.
We should not find any at this stage:

Now let’s initiate a manual sync. Remember, we must configure the solution to run manually from the Sync Group we just created. To sync the database with the local SQL, you must press the Sync button at the top of the GlobalSync screen. See below.

The Sync should complete successfully. If it doesn’t, check your settings and try again. Now it is time to check the local SQL. See the screenshot below.
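If you prefer to check from the command line rather than from SQL Server Management Studio, a quick sanity check is possible with the SqlServer PowerShell module. A sketch; "AzureSqlDbCopy" stands in for the empty database created earlier:

# List the tables that arrived in the local copy after the sync
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "localhost" -Database "AzureSqlDbCopy" `
    -Query "SELECT name FROM sys.tables"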

Conclusion

With the procedure we have just demonstrated, I was able to sync the Azure SQL databases to the local SQL database, where I run a frequent Veeam backup using the Veeam Backup & Replication product. This way, I achieved my customer’s objective of saving the Azure SQL databases locally. If necessary, I can use Veeam Explorer for Microsoft SQL Server to recover the database and tables to the local server. From there, I can sync the data back to the Azure SQL databases.
This time, I demonstrated a manual sync process, but you can automate the sync to run at intervals of seconds, minutes, or hours. I hope you found this blog post useful.

The post Backup Azure SQL Databases appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/03/18/backup-azure-sql-databases/

DR

Backup infrastructure at your fingertips with Heatmaps

Backup infrastructure at your fingertips with Heatmaps

Veeam Software Official Blog  /  Rick Vanover


One of the best things an organization can do is have a well-performing backup infrastructure. This is usually done by fine-tuning backup proxies, sizing repositories, having specific conversations with business stakeholders about backup windows and more. Getting that set up and running is a great milestone, but there is a problem. Things change. Workloads grow, new workloads are introduced, storage consumption increases and more challenges come into the mix every day.
Veeam Availability Suite 9.5 Update 4 introduced a new capability that can help organizations adjust to the changes:

Heatmaps!

Heatmaps are part of Veeam ONE Reporter and do an outstanding job of giving an at-a-glance view of the backup infrastructure, helping you quickly see whether the environment is performing both as expected AND as you designed it. Let’s dig into the new heatmaps.

The heatmaps are available in Veeam ONE Reporter in the web user interface and are very easy to get started with. In the course of showing heatmaps, I’m going to show you two different environments: one that I’ve intentionally set up to perform in a non-optimized fashion, and one that is in good shape and balanced, so that the visual element of the heatmap can be seen easily.
Let’s first look at the heatmap of the environment that is well balanced:

Here you can see a number of things: the repositories are getting a bit low on free space, including one that is rather small. The proxies carry a nice green color scheme and do not show too much variation in their work during their backup windows. Conversely, if we see a backup proxy that is dark green, that indicates it is not in use, which is not a good thing.
We can click on the backup proxies to get a much more detailed view. You can see that the proxy has a small amount of work during the backup window, which in this environment falls in the mid-day timeframe, and carries a 50% busy load:

When we look at the environment that is not so balanced, the proxies tell a different story:

You can see, first of all, that there are three proxies, but one of them seems to be doing much more work than the rest, as the color changes show. This clearly tells me the proxies are not balanced; the selected proxy is doing a lot more work than the others during the overnight backup window, which stretches out the backup window.

One of the coolest parts of the heatmap capability is that we can drill into a timeframe in the grid (the timeline can have a set observation window) to see which backup jobs are causing the proxies to be so busy during this time, shown below:

In the details of the proxy usage, you can see the specific jobs that are consuming the CPU cycles.

How can this help me tune my environment?

This is very useful, as it may indicate a number of things, such as backup jobs being configured not to use the correct proxies, or proxies not having the connectivity they need to perform the configured type of backup job. An example would be one or more proxies configured for Hot-Add mode only while being physical machines, which makes that transport mode impossible. Such a proxy would never be selected for a job, and the remaining proxies would be in charge of the backup work. The backup jobs would still complete successfully, but this situation would extend the backup window, and it is all visible in the heatmap. How cool is that?
Beyond proxy usage, repositories are also very well covered by the heatmaps, including Scale-Out Backup Repositories. This allows you to view the underlying storage free space. The following animation shows this in action:

Show me the heatmaps!

As you can see, the heatmaps add an incredible visibility element to your backup infrastructure. You can see how it is performing, including if things are successful yet not as expected. You can find more information on Veeam Availability Suite 9.5 Update 4 here.

Helpful resources:

The post Backup infrastructure at your fingertips with Heatmaps appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/nHjHt9YMUwE/one-reporter-heatmaps-monitoring.html

DR

New era of Intelligent Diagnostics

New era of Intelligent Diagnostics

Veeam Software Official Blog  /  Melissa Palmer


Veeam Availability Suite 9.5 U4 is full of fantastic new features to enable Veeam customers to intelligently manage their data like never before. A big component of Veeam Availability Suite 9.5 U4 is of course, Veeam ONE.
Veeam ONE is all about providing unprecedented visibility into customers’ IT environments, and this update is full of fantastic enhancements, so stay tuned for more blogs about new features like Veeam Agent monitoring and reporting enhancements, Heatmaps, and more!
One of the most interesting new features of Veeam ONE 9.5 U4 is Veeam Intelligent Diagnostics. Think of Veeam Intelligent Diagnostics as the entity watching over your Veeam Backup & Replication environment so you don’t have to. Wouldn’t it be great for potential issues to fix themselves before they cause problems? Veeam Intelligent Diagnostics enables just that.

How it works

Veeam Intelligent Diagnostics works by comparing the logs from your Veeam Backup & Replication environment to a known list of issue signatures. These signatures are downloaded from Veeam directly, so think of it as a reverse call home. All log parsing and analysis is done by Veeam ONE with the help of a Veeam Intelligent Diagnostics agent which is installed on every Veeam Backup & Replication server you want to keep an eye on.
These signatures can easily be updated by Veeam Support as needed. This allows Veeam Support to be proactive if they are noticing a number of cases with the same characteristics or cause, and fix issues before customers even encounter them. Think of things like common misconfigurations that Veeam customers spend time troubleshooting. These items can easily be avoided by Veeam customers leveraging Veeam Intelligent Diagnostics.

When an issue is detected, Veeam Intelligent Diagnostics can fix things in one of two ways, automatically or semi-automatically. The semi-automatic method requires manual approval to remediate the issue. Veeam calls these fixes Remediation Actions. In either case, Veeam ONE alarms will be triggered when an issue is detected. Veeam Intelligent Diagnostics will also include handy Veeam Knowledge Base articles in the alarms to allow customers to understand the issues they are avoiding.

Management made easier

One of the things that has always attracted me to Veeam products is how easy they are to use and get started with. Veeam Intelligent Diagnostics takes things a step further by eliminating potential issues before they even cause problems, making Veeam Backup & Replication easier to use and manage. Reducing operational complexity is a win for any organization, and Veeam Intelligent Diagnostics helps do this.
The Veeam Availability Suite, which is made up of Veeam Backup & Replication and Veeam ONE, is available for a completely free 30-day trial. Be sure to try it out to take advantage of Veeam Intelligent Diagnostics and the rest of the powerful features released in Veeam Availability Suite 9.5 U4.
The post New era of Intelligent Diagnostics appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/vIWWpwJY2UI/intelligent-diagnostics-log-analysis.html

DR

Backup your Azure Storage Accounts

Backup your Azure Storage Accounts

CloudOasis  /  HalYaman


How do you backup your Microsoft Azure Storage Accounts? Is there a way to export the unstructured data from your blob storage accounts and then back it up to a different location?


Last week I had a discussion with a customer who wanted to back up his Microsoft Azure Storage Accounts (blob, files, tables, and more) and was asking if the Veeam Backup Agent could somehow help achieve this.
Straight out of the box, it is challenging to back up the data resident in Azure Storage Accounts; but with a few simple steps, I helped the customer back up the Microsoft Azure unstructured blob files to a Veeam repository using the Veeam Backup Agent for Windows.
In this blog post, I will take you through the steps I took to help the customer back up their Microsoft Azure Storage Accounts. The solution is to use Microsoft AzCopy to download and sync the required blob container files to a local drive on the machine where the Veeam Backup Agent is installed, with the agent configured to back up the virtual machine or the targeted folder.

Solution Pre-requisites

As I mentioned before, the steps to achieve the requirements are straightforward and require the following items to be prepared and configured:

  • Veeam Backup Server.
  • Veeam Backup Agent installed on a Microsoft Azure VM.
  • Disk space attached to the VM to temporarily store the blob files.
  • Microsoft AzCopy tool.

AzCopy Tool

To copy data from a Microsoft Azure Storage Account, you must download and install the Microsoft AzCopy tool. You can download it from this link. Install it on the VM where the Veeam Backup Agent is installed.
After it has downloaded, run the setup, and then launch AzCopy from the Start menu:

By default, the program files are stored in:
C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\
After the process has started, you must configure the access to your Microsoft Azure subscription using this command:

AzCopy login

Follow the instructions in the output: browse to
https://microsoft.com/devicelogin
Enter the generated random access code to authenticate. In this example, it is HP5DFJBZ9; you will get a different code.

After following those steps, you have finished the Microsoft AzCopy installation and configuration.

Solution

To begin copying the blob data to the local drive, we must perform the following steps:

  • Create a directory on the local drive where you will store the downloaded blob files.
  • Obtain your Storage Account key.
    • Note: On the Access keys screen for the zcopytesting account shown in this screenshot, you are provided with two keys so that you can maintain connections using one key while the other regenerates. You must also update any other Microsoft Azure resources and applications accessing this storage account to use the new keys. Your disk access is not interrupted. (A scripted way to retrieve a key is sketched after this list.)
  • Obtain the Blob Container URL:

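As referenced in the note above, the account key can also be retrieved with the Az PowerShell module instead of the portal. A sketch; the resource group name is hypothetical, while the account name mirrors this example:

# Fetch the primary access key for the storage account
Connect-AzAccount
$keys = Get-AzStorageAccountKey -ResourceGroupName "rg-storage" -Name "zcopytesting"
$keys[0].Value   # use this as the /SourceKey value in the script below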
The next step is to create a .cmd script to apply all those steps above. The script looks like this:

“C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy” /Source:<Blob Container URL> /Dest:<Local Folder> /SourceKey:<Access Key> /S /Y /XO

This .cmd file can be scheduled to run before your Veeam Backup Agent job, or via the Windows Task Scheduler.
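For example, scheduling the script with the built-in Task Scheduler cmdlets could look like this. A sketch; the script path and start time are examples, so align the time with your own backup window:

# Run the blob sync every night at 1 AM, before the Veeam Backup Agent job
$action = New-ScheduledTaskAction -Execute "C:\Scripts\SyncBlobContainer.cmd"
$trigger = New-ScheduledTaskTrigger -Daily -At 1am
Register-ScheduledTask -TaskName "AzCopy Blob Sync" -Action $action -Trigger $trigger `
    -Description "Sync blob container to local disk before the Veeam Backup Agent job"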

Veeam Backup Demonstration

To validate the solution developed above, I tested it by uploading several files to the blob storage account:


I then started a backup job. After the backup job completed, I ran the restore process to validate the backup, using the Restore Guest Files option:


Restoring and checking the console.txt file before modifying it:


For the second test, I modified the console.txt file and then started an incremental backup. The following screenshot shows the restore from the incremental backup (note the time of the backup):
Editing the console.txt file (I added “Hal – Test For Incremental x 2”):


Conclusion

With these simple and straightforward steps, I have been able to back up my Microsoft Azure storage account to my Veeam backup repository. The good news is that the .cmd script I provided in this blog post copies only new or updated/modified files from the blob storage (that is what the /XO switch does). This saves time and space, both on your Veeam repository and on the local disk. In this blog post, I have shown basic, manual steps to demonstrate the solution and its capabilities. As previously mentioned, it is ideal to run the Microsoft AzCopy command before the backup job as a pre-job script, or via the Windows Task Scheduler.
I hope this blog post helps you; please don’t hesitate to share your feedback.
The post Backup your Azure Storage Accounts appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2019/03/10/backup-your-azure-storage-accounts/

DR

Tips on managing, testing and recovering your backups and replicas

Tips on managing, testing and recovering your backups and replicas

Veeam Software Official Blog  /  Evgenii Ivanov


Several months ago, I wrote a blog post about some common and easily avoidable misconfigurations that we, support engineers, keep seeing in customers’ infrastructures. It was met with much attention and hopefully helped administrators improve their Veeam Backup & Replication setups. In this blog post, I would like to share with you several other important topics. I invite you to reflect on them to make your experience with Veeam smooth and reliable.

Backups are only half of the deal — think about restores!

Every now and then we get calls from customers who catch themselves in a very bad situation. They needed a restore, but at a certain point hit an obstacle they could not circumvent. And I’m not talking about lost backups, CryptoLocker or anything like that! It’s just that their focus was on creating a backup or replica; they never considered that data recovery is a whole different process that must be examined and tested separately. I’ll give you several examples to give you a taste of it:

  1. The customer had a critical 20-terabyte VM that failed. Nobody wants downtime, so they started the VM in instant recovery and had it working in five minutes. However, instant recovery is a temporary state and must be finalized by migration to the production datastore. As it turned out, the infrastructure could not copy 20 TB of data in any reasonable time. And since instant recovery was started with the option to write changes to the C: drive of the Veeam Backup & Replication server (as opposed to using a vSphere snapshot), that drive was quickly filling up without any possibility of sufficient extension. As some time had passed before the customer approached support, the VM had already accumulated changes that could not be discarded. With critical data at risk, no way to finalize instant recovery quickly enough, and failure imminent: quite a pickle, huh?
  2. The customer had a single domain controller in the infrastructure, and everything was added to Veeam Backup & Replication by DNS name. I know, I know. It could have gone wrong in a hundred ways, but here is what happened: the customer planned some maintenance and decided to fail over to the replica of that DC. They used planned failover, which is ideal for such situations. The first phase went fine; however, during the second phase the original VM was turned off to transfer the last bits of data. Of course, at that moment the job failed because DNS went down. Luckily, here we could simply turn on the replica VM manually from vSphere (not something we recommend; see the next advice). However, it disrupted and delayed the maintenance process. Plus, we had to manually add host names to the C:\Windows\System32\drivers\etc\hosts file on the Veeam Backup & Replication server to allow a proper failback (a sketch of that workaround follows this list).
  3. The customer based their backup infrastructure around tapes and maintained only a very short backup chain on disk. When they had to restore some guest files from a large file server, it turned out there was simply not enough space to be found on any machine to act as a staging repository.
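For the record, the hosts-file workaround from example 2 boils down to a one-liner of this kind (the IP address and host name are hypothetical; run it in an elevated session):

# Pin a host name locally while DNS is unavailable
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "192.168.1.50`tesxi01.lab.local"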

I think in all these situations the clients fell into the same trap: they simply assumed that if a backup is successful, then a restore will be as well! Learn about restores, just as you learn about backups. A good way to start is our user guide. This section contains information on all the major types of restores. In the “Before you begin” section of each restore option, you can find initial considerations and prerequisites. Information on other types of restores, such as restore from tapes or from storage snapshots, can be found in their respective sections. Apart from the main user guide, be sure to check out the Veeam Explorers guide too. Each Veeam Explorer has a “Planning and preparation” section; this will help you prepare your system for restores beforehand.

Do not manage replicas from vSphere console

Veeam replicas are essentially normal virtual machines. As such, they can be managed using the usual vSphere management tools, mainly the vSphere client. They can be, but they should not be. Replica failover in Veeam Backup & Replication is a sophisticated process that allows you to carefully go one step at a time (with the possibility to roll back if something goes wrong) and finalize the failover in a proper way. Take a look at the scheme below:

If, instead of using the Veeam Backup & Replication console, you simply start a replica in the vSphere client (or you start a failover from Veeam Backup & Replication but later switch to managing it from the vSphere client), you face a number of serious consequences:

  1. The failover mechanism in Veeam Backup & Replication will no longer be usable for this VM, as all that flexibility described above will no longer be available.
  2. You will have data in the Veeam Backup & Replication database that does not represent the actual state of the VM. In the worst cases, fixing it requires database edits.
  3. You can lose data. Consider this example: A customer started a replica manually in the vSphere client and decided to simply stick with it. Some time passed, and they noticed that the replica was still present in the Veeam Backup & Replication console. The customer decided to clean it up a little, right-clicked on the replica and chose “Delete from disk.” Veeam Backup & Replication did exactly what it was told and deleted the replica, which, unbeknownst to the software, had become a production VM with live data.

There are situations when starting the replicas from the vSphere client is necessary (mainly, if the Veeam Backup & Replication server is down as well and replicas must be started without delay). However, if the Veeam Backup & Replication server is operational, it should be the management point from start to finish.
It is also not recommended to delete replica VMs from the vSphere client. Veeam Backup & Replication will not be aware of such changes, which can lead to failures and stale data in the console. If you no longer need a replica, delete it from the console rather than deleting the VM from the vSphere client. That way, your list of replicas will contain only actual data.

Careful with updates!

I’m speaking about updates for hypervisors and various applications backed up by Veeam. From a Veeam Backup & Replication perspective, such updates can be roughly divided into two categories — major updates that bring a lot of changes and minor updates.
Let’s speak about major updates first. The most important ones are hypervisor updates. Before installing them, it is necessary to confirm that Veeam Backup & Replication supports them. These updates bring a lot of changes to the libraries and APIs that Veeam Backup & Replication uses, so updating the Veeam Backup & Replication code and rigorous testing from QA are necessary before a new version is officially supported. Unfortunately, as of now, VMware does not provide vendors any preliminary access to new vSphere versions. Veeam’s R&D gets access together with the rest of the world, which means there is always a lag between a new version’s release and official support. The magnitude of changes also does not allow R&D to fit everything into a hotfix, so official support is typically added with new Veeam Backup & Replication versions. This puts support and our customers in a tricky situation. Usually, after a new vSphere release, the number of cases increases because administrators install the updates, only to find out that their backups are failing with weird issues. This forces us in support to ask customers to perform a rollback (if possible) or to propose workarounds that we cannot officially support due to lack of testing. So please check version compatibility before updating!
The same applies to backed-up applications. Veeam Explorers also have a list of supported versions, and new versions are added to this list with Veeam Backup & Replication updates. So once again, be sure to check the Veeam Explorers user guide before moving to a new version.
In the minor updates category, I put things like cumulative updates for Exchange, new VMware Tools versions, security updates for vSphere, etc. Typically, they do not contain major changes, and in most situations Veeam Backup & Replication does not experience any issues. That’s why QA does not release official statements as it does for major updates. However, in our experience, there have been situations where minor updates changed a workflow enough to cause issues with Veeam Backup & Replication. In these cases, once the presence of an issue is confirmed, R&D develops a hotfix as soon as possible.
How should you stay up to date on recent developments? My advice is to register on https://forums.veeam.com/. You will be subscribed to a weekly “Word from Gostev” newsletter from our Senior Vice President Anton Gostev. It contains information on discovered issues (not limited to Veeam products), release plans and interesting IT news. If you do not find what you are looking for in the newsletter, I recommend checking the forum. Due to the sheer number of Veeam clients, if any update breaks something, a related thread appears soon after.
Now backups are not the only thing that patches and updates can break. In reality, they can break a lot of stuff, the application itself included. And here Veeam has something to offer — Veeam DataLabs. Maybe you heard about SureBackup — our ultimate tool for verifying the consistency of backups. SureBackup is based on DataLabs, which allows you to create an isolated environment where you can test updates, before bringing them to production. If you want to save yourself some gray hair, be sure to check it out. I recommend starting with this post.

Advice to those planning to buy Veeam Backup & Replication or switching from another solution

Sometimes in technical support we get cases that go like this: “We have designed our backup strategy like this, we acquired Veeam Backup & Replication, however we can’t seem to find a way to do X. Can you help with it?” (Most commonly such requests are about unusual retention policies or tape management). We are happy to help, but at times we have to explain that Veeam Backup & Replication works differently and they will need to change their design. Sure enough, customers are not happy to hear that. However, I believe they are following an incorrect approach.
Veeam Backup & Replication is very robust and flexible, and in its current form it can satisfy the absolute majority of companies. But it is important to understand that it was designed with certain ideas in mind, and to make the product really shine, it is necessary to follow these ideas. Unfortunately, sometimes the reality is quite different. Here is what I imagine happens with some of the customers: they decide that they need a backup solution, so they sit down in a room and meticulously design each element of their strategy. Once done, they move to choosing a backup solution, where Veeam Backup & Replication seems to be an obvious choice. In another scenario, the customer already has a backup solution and a developed backup strategy. However, for some reason their solution does not meet their expectations, so they decide to switch to Veeam and wish to carry their backup strategy over to Veeam Backup & Replication unchanged. My firm belief is that this process should go the other way around.
These days Veeam Backup & Replication has become a de facto standard backup solution, so probably any administrator would like to have a glance at it. However, if you are serious about implementation, Veeam Backup & Replication needs to be studied and tested. Once you know its capabilities and know this is what you are looking for, build your backup strategy specifically for Veeam Backup & Replication. You will be able to use the functionality to the maximum, reduce risks, and support will have an easier time understanding your setup.
And that’s what I have for today’s episode. I hope that gave you something to consider.

The post Tips on managing, testing and recovering your backups and replicas appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/srEg219FagQ/manage-test-recover-backup-replica.html

DR

Using Veeam PowerShell with Veeam DataLabs Staged Restore and Secure Restore

Using Veeam PowerShell with Veeam DataLabs Staged Restore and Secure Restore

vZilla  /  michaelcade

Before we get started, I wanted to share something that I found whilst testing the beta releases and RC of Veeam Backup & Replication 9.5 Update 4. Whilst running my live Veeam DataLabs: Secure Restore demo at our Velocity sales and partner event, which was also part of our 9.5 Update 4 launch, I mentioned that there was a huge influx of new things you can do with Veeam Backup & Replication PowerShell cmdlets.
If you want to see for yourself, run the following command; it will output all of the cmdlets available in Veeam Backup & Replication 9.5 Update 4. Note that if you want a comparison, you need to run it prior to the update as well.
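The command itself appears as a screenshot in the original post; a minimal equivalent, assuming the snap-in that the Veeam Backup & Replication console install registers, is:

# List and count every Veeam cmdlet -- run before and after upgrading to compare
Add-PSSnapin VeeamPSSnapin
$cmdlets = Get-Command -PSSnapin VeeamPSSnapin
$cmdlets | Sort-Object Name
"Total cmdlets: $($cmdlets.Count)"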

VBR 9.5 Update 3a: 564 PowerShell cmdlets -> VBR 9.5 Update 4: 734 cmdlets

New PowerShell Cmdlets (170 NEW)

A snapshot of the new PowerShell cmdlets is shown below in this table. The PowerShell Reference guide is a great place to start if you want to understand what these can do for you in your environment.
If you compare the Update 3a release to the new Update 4 release, you will see that there were 564 cmdlets before and 734 now: 170 new! That shows the depth of this release.

My focus for the Update 4 release has been the Veeam DataLabs Secure Restore and Staged Restore features. The next section goes deeper into the options and scenarios for running Secure and Staged Restore via PowerShell during your recoveries.

Veeam DataLabs: Secure Restore

Veeam DataLabs Secure Restore gives us the ability to perform a third-party antivirus scan before continuing a recovery process. The command below highlights the parameters that can be used for Secure Restore when leveraging Instant VM Recovery®. You will find the same parameters under many other restore cmdlets. More information can be found here.
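As a hedged illustration (the parameter names below are taken from my reading of the Update 4 PowerShell Reference, so verify them against your own install), the Secure Restore switches on Start-VBRInstantRecovery look like this:

  # $point is a restore point object obtained with Get-VBRRestorePoint
  # -EnableAntivirusScan     scan the restore point with the installed antivirus first
  # -EnableEntireVolumeScan  keep scanning after the first threat is found
  # -VirusDetectionAction    action on detection: DisableNetwork or AbortRecovery
  Start-VBRInstantRecovery -RestorePoint $point `
                           -EnableAntivirusScan `
                           -EnableEntireVolumeScan `
                           -VirusDetectionAction DisableNetwork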


The script below is an example I have used to kick off a Secure Restore within my own lab environment.
It is the script I used for the on-stage demo, in which I restored a tiny Windows virtual machine that I had deliberately infected with the EICAR test file, a harmless sample that many antivirus vendors use to test their systems. I wanted to demonstrate the full force of what Veeam DataLabs: Secure Restore can bring to the recovery process by showing an infected machine; there is no fun in showing you something that is clean, as anyone can do that.
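A minimal sketch of what that script can look like, with hypothetical job and object names standing in for my lab environment:

  Add-PSSnapin VeeamPSSnapin

  # Grab the most recent restore point of the infected demo VM
  $backup = Get-VBRBackup -Name "EicarDemo-Backup"
  $point  = Get-VBRRestorePoint -Backup $backup |
            Sort-Object -Property CreationTime -Descending |
            Select-Object -First 1

  # Publish the VM with Instant VM Recovery, scanning it first; if the EICAR
  # file is detected, the VM is still recovered but with its network disabled
  Start-VBRInstantRecovery -RestorePoint $point `
                           -EnableAntivirusScan `
                           -VirusDetectionAction DisableNetwork `
                           -Reason "Secure Restore demo"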

Veeam DataLabs: Staged Restore

Veeam DataLabs Staged Restore allows us to inject a script, and with it a whole staging process, into an entire VM recovery. Many use cases could be shared, but this is not the place to go into that detail. The command below highlights the parameters available for the Staged Restore functionality during an entire VM restore.
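As a sketch (again, parameter names as I read them in the PowerShell Reference; treat this as an illustration rather than a definitive syntax listing), the Staged Restore switches on Start-VBRRestoreVM look like this:

  # -EnableStagedRestore      run the VM through a DataLabs virtual lab before restoring it
  # -StagingVirtualLab        the virtual lab the VM is booted in
  # -StagingApplicationGroup  optional group of VMs the restored VM depends on
  # -StagingScript            path to the script executed inside the running VM
  # -StagingCredentials       guest OS account used to run that script
  Start-VBRRestoreVM -RestorePoint $point `
                     -EnableStagedRestore `
                     -StagingVirtualLab $lab `
                     -StagingScript "C:\Scripts\MaskData.ps1" `
                     -StagingCredentials $creds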
The one use case I will share, and one I have had discussions about since the release, is the ability to mask data from departments. The challenge one of our customers was facing: before Update 4, they were restoring their backup data to an isolated environment, away from production, for their developers to work on. The mention of Veeam DataLabs: Staged Restore made them realise that the databases they were exposing from these backup files to their development team also included sensitive data. With Staged Restore they can inject a script into the process that excludes that sensitive data from the development team. More information can be found here.


Finally, the script below is another example, this time for Staged Restore. You will note that I have three options at the end that allow for different outcomes. The first option is a standard entire VM recovery with no staged restore. The second option (the one that is not commented out) allows us to inject a script without requiring an application group. The third and final option allows us to leverage an application group, for cases where the VM we are restoring requires a dependency within the application group to complete the script injection.
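A sketch of that three-option layout, using hypothetical object names; uncomment whichever option fits your scenario:

  Add-PSSnapin VeeamPSSnapin

  # Hypothetical lab objects
  $backup = Get-VBRBackup -Name "SQL01-Backup"
  $point  = Get-VBRRestorePoint -Backup $backup |
            Sort-Object -Property CreationTime -Descending |
            Select-Object -First 1
  $lab    = Get-VBRVirtualLab -Name "DataLab01"
  $creds  = Get-VBRCredentials -Name "LAB\Administrator"
  $group  = Get-VBRApplicationGroup -Name "SQL-AppGroup"

  # Option 1: a standard entire VM restore, no staging
  # Start-VBRRestoreVM -RestorePoint $point -Reason "Standard restore"

  # Option 2 (active): inject the masking script without an application group
  Start-VBRRestoreVM -RestorePoint $point `
                     -EnableStagedRestore `
                     -StagingVirtualLab $lab `
                     -StagingScript "C:\Scripts\MaskData.ps1" `
                     -StagingCredentials $creds `
                     -Reason "Staged Restore - data masking"

  # Option 3: the restored VM needs a dependency from an application group
  # Start-VBRRestoreVM -RestorePoint $point -EnableStagedRestore -StagingVirtualLab $lab -StagingApplicationGroup $group -StagingScript "C:\Scripts\MaskData.ps1" -StagingCredentials $creds -Reason "Staged Restore with app group"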

Hopefully that was useful. I am hoping to explore more PowerShell options, especially around Veeam DataLabs and the art of the possible when it comes to leveraging that data, and to provide even more value on top of the backup and recovery of your data.

Original Article: https://vzilla.co.uk/vzilla-blog/using-veeam-powershell-with-veeam-datalabs-staged-restore-and-secure-restore

DR

N2WS Expands Cost Optimization for Amazon Web Services with Amazon EC2 Resource Scheduling

Business Wire Technology News

WEST PALM BEACH, Fla.--(BUSINESS WIRE)--N2WS, a Veeam® company and provider of cloud-native backup and recovery solutions for data and applications on Amazon Web Services (AWS), today announced the availability of N2WS Backup & Recovery 2.5 (v2.5), which introduces Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) resource management and control. Using the new N2WS Resource Control, organizations can lower costs by stopping or hibernating groups of Amazon EC2 and Amazon RDS resources on demand or automatically, based on a pre-determined schedule.

“A good portion of our internal AWS bill is dedicated to Amazon EC2 compute resources, as much as 60 percent every month. A lot of these resources are not being used all the time,” explains Uri Wolloch, CTO at N2WS. “Being able to simply power on or off groups of Amazon EC2 and Amazon RDS resources with a simple mouse click is just like shutting the lights off when you leave the room or keeping your refrigerator door closed; it reduces waste and saves a huge amount of money if put into regular practice.”
N2WS Backup & Recovery v2.5 further optimizes the process for cycling Amazon Elastic Block Store (Amazon EBS) snapshots into the N2WS Amazon Simple Storage Service (Amazon S3) repository, extending cost savings to more than 60% for backups stored longer than two years. N2WS v2.5 also supports two new AWS Regions: AWS Europe (Stockholm) and AWS GovCloud (US-East).
For public sector organizations leveraging the AWS GovCloud (US-East or US-West) Regions, N2WS v2.5 offers automated cross-region disaster recovery between the two AWS Regions. This allows AWS GovCloud (US) users to orchestrate near-instant recovery and disaster recovery while keeping data and workloads in approved data centers, ensuring that workloads remain available in the event of a regional issue.
New updates and enhancements now available as part of N2WS Backup & Recovery v2.5 include:

  • Management of data storage and server instances from one simple console using N2WS Resource Control
  • Optimization enhancements to Amazon S3 integration, saving even more when cycling snapshots into an Amazon S3 repository
  • Added support for new AWS Regions – AWS Europe (Stockholm) and AWS GovCloud (US-East) Regions with cross-region DR between AWS GovCloud US-West and AWS GovCloud US-East now available
  • Expanded range of APIs allowing you to configure alerts and recovery targets
  • Full support for all available APIs via a new CLI tool

For organizations leveraging the N2WS Amazon S3 repository, further compression and deduplication enhancements have pushed cost savings above 60% for data stored over longer terms. In simulations, monthly backups of an 8TB volume stored in the N2WS Amazon S3 repository for two years cost 68% less than the same backups stored as native Amazon EBS snapshots. These enhancements help enterprises manage and significantly reduce cloud storage costs for backup data retained for compliance reasons.
N2WS Backup & Recovery is an enterprise-class backup, recovery and disaster recovery solution only available on AWS Marketplace. N2WS Backup & Recovery version 2.5 is available for immediate use by visiting AWS Marketplace at: https://aws.amazon.com/marketplace/pp/B00UIO8514.
About N2WS
N2W Software (N2WS), a Veeam® company, is a leading provider of enterprise-class backup, recovery and disaster recovery solutions for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), and Amazon Redshift. Used by thousands of customers worldwide to back up hundreds of thousands of production servers, N2WS Backup & Recovery is a preferred backup solution for Fortune 500 companies, enterprise IT organizations and Managed Service Providers operating large-scale production environments on AWS. To learn more, visit www.n2ws.com.

Original Article: http://www.businesswire.com/news/home/20190305005142/en/N2WS-Expands-Cost-Optimization-Amazon-Web-Services/?feedref=JjAwJuNHiystnCoBq_hl-Q-tiwWZwkcswR1UZtV7eGe24xL9TZOyQUMS3J72mJlQ7fxFuNFTHSunhvli30RlBNXya2izy9YOgHlBiZQk2LOzmn6JePCpHPCiYGaEx4DL1Rq8pNwkf3AarimpDzQGuQ==

DR

Veeam and Alibaba Cloud – OSS Integration

CloudOasis  /  HalYaman


What options does Veeam offer to protect Alibaba Cloud workloads? How do you back up Alibaba ECS (Elastic Compute Service) instances? And is it possible to use Alibaba OSS (Object Storage Service) to archive your Veeam backups?

Veeam Backup & Replication 9.5 Update 4, released on 22 January, introduced not only new product features but also new cloud services integrations. In this blog, I will discuss the integration of Veeam Update 4 with Alibaba Cloud.
There are several levels of integration that Veeam offers to protect and/or utilize Alibaba Cloud resources. The following options are available when you use Veeam with Alibaba Cloud:

  • Deploy Veeam as an Elastic Compute (ECS) instance from the Alibaba MarketPlace,
  • Protect Alibaba ECS instances using a Veeam Agent,
  • Integrate Veeam with Alibaba Object Storage Service (OSS),
  • Integrate Veeam with Alibaba VTL.

To keep this post short and focused, I will concentrate on the Veeam and Alibaba OSS integration. In future blog posts, I will discuss Veeam Agent protection and VTL.

Veeam and Alibaba Object Storage Service (OSS) Integration

Veeam Archive/Capacity Tier is a newcomer to the Veeam suite, included in Update 4. The good news for customers using Alibaba Cloud is that Alibaba OSS is supported out of the box. As an Alibaba customer, you can add your OSS bucket as part of a Veeam scale-out backup repository to archive your aging backups to OSS for longer retention.
To add your Alibaba OSS to Veeam, you must first complete these tasks:

  • Create the Alibaba OSS bucket,
  • Acquire the Bucket URL and the Access key,
  • Add Alibaba OSS as a repository.

Let’s demonstrate the process of adding OSS to the Veeam infrastructure:
Adding Alibaba OSS to the Veeam infrastructure is very simple. The process starts with adding a new repository, as shown in the following screenshot.

After you choose Object storage, you must select the S3 Compatible option:
Next, name the new OSS repository, then press Next to enter the Service Point URL (the OSS URL) and the credentials in the form of an Access Key and a Secret Key. See the screenshot below.

The final step in this process is to create a folder where you will save your Veeam backups. See the screenshot below.

Before you finalize the process, you can take the opportunity to limit OSS consumption. See the settings in the screenshot below.

Now press Finish to complete the process.

Summary

In this blog post, I discussed the integration of Veeam and Alibaba OSS, walking you through the steps to add Alibaba OSS to the Veeam infrastructure as a repository. To use OSS as an Archive/Capacity Tier target, you must configure a Veeam scale-out backup repository (SOBR), then include the OSS bucket as part of the Capacity Tier configuration. See the screenshot below for the setup.

In a future blog post, I will continue the discussion of Veeam and Alibaba Cloud, covering Veeam with Alibaba VTL and Alibaba ECS agent protection. Until then, I hope this blog post helps you get started. Please don’t forget to share.


Original Article: https://cloudoasis.com.au/2019/03/04/veeam-abibaba-cloud-oss-integration/

DR