NEW Cisco + Veeam solution brief with use cases

By combining Cisco Data Center products and Veeam
solutions, customers eliminate data loss and slow data recovery, helping
to minimize risk, decrease downtime and easily adapt to business
changes to meet the most stringent recovery objectives. See all
the benefits and use cases in this NEW solution brief.

DOWNLOAD NOW 

Save 50% on your Enterprise Plus upgrade or receive FREE licenses when you purchase Veeam Availability Orchestrator

As a valued Veeam customer, when you buy 10 or more bundles of Veeam
Availability Orchestrator, you are eligible to save 50% on your
Enterprise Plus edition upgrade! If you already have Enterprise Plus edition,
we have an offer for you too: get 20% additional Veeam Availability
Orchestrator licenses at no cost when you buy 10 or more bundles of Veeam
Availability Orchestrator.

LEARN MORE 

VeeamON Virtual 2019

The premier virtual conference for Cloud Data Management,
bringing the latest and greatest in data protection,
security and management directly to you, is back in 2019.
Save your seat now to engage with Veeam’s leading experts,
partners and more than 5,000 attendees, and receive exclusive access
to sessions and content that will keep you on top
of the innovations, tools and techniques needed today.

VMware NSX-T Simplified UI vs Advanced UI explained


Veeam Software Official Blog  /  Jeffrey Kusters

Please welcome our Write for Veeam contributor Jeffrey Kusters with his debut on our platform!
If you’d also like to share your expertise and knowledge and bring something new to the table on the Veeam blog (and earn money by doing that), then learn more details and drop us a line at the program page. Let’s get techy!

It’s hard to believe it has been seven years since VMware acquired Nicira and six years since VMware NSX was launched. Having evolved significantly since those days, features such as micro-segmentation, automation and Virtual Cloud Networking have impacted the data center as well as disaster recovery.
With the release of NSX-T 2.4 in June 2019, VMware significantly modified the user experience of NSX manager. A new simplified UI was introduced, and the previous UI from version 2.3 has become the advanced UI. This topic has surfaced frequently in conversations with customers, and I felt that explaining the key differences between the respective UIs, along with guidance on when to use each, would be helpful to Veeam customers.

Advantages of the simplified UI

The UI is basically a representation of the underlying API, and that’s where the actual change was introduced. VMware introduced a second API in NSX-T 2.4 called the policy API (available at https://<nsxmanager>/policy/api), which is “declarative,” not to be confused with the management API (available at https://<nsxmanager>/api), which is “imperative.” With an imperative API, every action or change must be specified explicitly and executed in the exact order required to reach the desired state. With a declarative API, users simply define the desired state in one API call and the system determines which actions need to be taken to reach it. This is a much simpler way of consuming a system because, as an administrator or engineer, you don’t need intricate knowledge of the current state or the specific order in which actions must be executed:

Imperative (management API) — a sequence of ordered calls:
  POST/GET Logical Switch
  POST/GET NSGroups
  POST/GET EDGE Firewall
  POST/GET Tier-1 Router
  POST/GET DFW-Section
  POST/GET LB Config

Declarative (policy API) — a single command:
  PATCH https://<nsxmanager>/policy/api/v1/infra
  {desired outcome in human-readable JSON}
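To make the contrast concrete, here is a minimal sketch in Python of what the two consumption models look like. The segment and gateway names, and the exact shape of the desired-state JSON, are illustrative assumptions rather than a verbatim NSX-T payload — consult the NSX-T API guide for the real schema.

```python
import json

# Hypothetical desired-state body for a declarative call
# (PATCH https://<nsxmanager>/policy/api/v1/infra).
# Names ("Test-Segment", "t1-gw-01") are illustrative only.
desired_state = {
    "resource_type": "Infra",
    "children": [
        {
            "resource_type": "ChildTier1",
            "Tier1": {"resource_type": "Tier1", "id": "t1-gw-01"},
        },
        {
            "resource_type": "ChildSegment",
            "Segment": {
                "resource_type": "Segment",
                "id": "Test-Segment",
                "connectivity_path": "/infra/tier-1s/t1-gw-01",
            },
        },
    ],
}

# The imperative model requires a correctly ordered sequence of calls:
imperative_sequence = [
    ("POST", "/api/v1/logical-routers"),      # create the router first
    ("POST", "/api/v1/logical-switches"),     # then the switch
    ("POST", "/api/v1/ns-groups"),            # then grouping objects
    ("POST", "/api/v1/firewall/sections"),    # then firewall sections
]

# The declarative model replaces all of that with one call:
declarative_call = ("PATCH", "/policy/api/v1/infra", json.dumps(desired_state))

print(len(imperative_sequence), "imperative calls vs 1 declarative call")
```

The point is not the exact payload, but that with the declarative model the system, not the operator, works out the ordering.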

The simplified UI aligns with the declarative policy API and the advanced UI aligns with the imperative management API:

In NSX-T manager, the top menu represents the simplified UI:

The advanced UI can be found by selecting the advanced networking & security option in the top menu:

Object name changes in simplified UI

The new declarative policy API and simplified UI also introduced name changes for certain objects in NSX-T, including:

  • Logical switch → segment
  • T1 logical router → tier-1 gateway
  • T0 logical router → tier-0 gateway
  • NSGroup → group
  • Firewall section → security-policy
  • Edge firewall → gateway firewall

The following screenshot shows how a new segment, or logical switch as it is called in the advanced UI, is created:

The new segment is shown in the vSphere client and can be used to connect virtual machines:

How do the simplified UI and advanced UI relate to each other?

If you declare a desired state in the simplified UI, NSX-T will store the desired configuration data in the declarative interface and push it down to the control and data plane. It will also replicate all required objects as read-only objects to the imperative interface. All configurations created in the simplified UI are visible in the advanced UI. The Test-Segment referenced above is also visible as a read-only logical switch in the advanced UI:

(The red highlighted icon means the object is protected/read-only. Since all the logical switches starting with pks- are created and managed by VMware PKS, which is running in this specific lab environment, these are also protected from manual changes.)
This does not work in reverse. Objects created in the advanced UI are not replicated to or visible in the simplified UI. After upgrading to NSX-T 2.4, all existing objects will be unavailable in the simplified UI (as you can see by looking at the first simplified UI screenshot that shows zero objects available while there are numerous objects available in the advanced UI screenshot). Currently, there is no in-place migration capability available as part of the upgrade process. We can either leave our objects in the advanced UI or migrate them manually (or programmatically) to the simplified UI.

Which version of the UI should you use?

Moving forward, VMware appears to favor the simplified UI and the declarative policy API, so it is recommended that you become familiar with the simplified UI. New features are expected to arrive in the simplified UI first, and the advanced UI may even be deprecated at some point, although no timeline has been communicated. Some tasks can only be performed in the advanced UI, since not all functions and features are supported in the simplified UI yet, so familiarity with both is encouraged. Unfortunately, this leaves you with a dilemma: keep your existing NSX-T 2.3 topology in the advanced UI, or migrate everything to the simplified UI (which can be a disruptive change). There are also integrations with other products that consume NSX-T, such as VMware PKS or third-party container platforms. Until these integrations are updated, they will continue to leverage the imperative API and advanced UI as well.

Conclusion

The introduction of the simplified UI and declarative policy API is a welcome addition in a world that is moving swiftly towards automation and desired state configuration. In the end, this new consumption model allows administrators and engineers to utilize a software-defined network in a much easier, faster and more predictable way. In the short term, the move from the advanced UI to the simplified UI may cause some challenges for existing NSX-T customers: there is currently no non-disruptive in-place migration tool, and not all functions and features are fully available in the simplified UI. Though VMware appears to favor the simplified UI moving forward, both interfaces will still be required for your virtual networking needs.
The post VMware NSX-T Simplified UI vs Advanced UI explained appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/CDbCGvUW6cA/nsxt-simplified-vs-advanced-ui.html


How we improved tape support since Update 4


Veeam Software Official Blog  /  Evgenii Ivanov


Veeam Backup & Replication 9.5 Update 4 was a major release that brought many improvements and interesting features. With tape not only far from dead but rising in popularity, it’s no surprise that tape support was a major focus item in the new version. Over the last few months, Veeam technical support has gathered enough cases to see the main issues customers encounter, and with the recent release of Update 4b, it’s time to go over the new tape functionality and show our readers the practical side of incorporating tape into their backup strategies.

Library parallel processing

Several versions ago, Veeam Backup & Replication’s media pool became “global,” meaning a single media pool can contain tapes associated with multiple tape devices. That already gave users several advantages. First, it considerably simplified tape management — tapes could now “travel” seamlessly between libraries. All users had to do was rescan the library the tapes were moved to or, if a barcode reader is not present (for example, with a standalone drive), run an inventory (forcing Veeam Backup & Replication to read the tape header). Second, global media pools introduced tape “High Availability.” Users could specify among three events (“Library is offline,” “No media available,” and “All tape drives are busy”) when a tape job would fail over to the next tape device in the list.
By the way, this is also where many users get stuck when trying to delete an old device. They would get an error stating that the tape device is still in use by media pool X. If on the Tapes step you don’t see any tapes associated with the tape device, be sure to click on the Manage button (see the screenshot below); most likely that’s where the old tape device is hiding.
Update 4 went one step further and introduced true parallel processing between tape libraries. Now, if you go to media pool settings – Tapes – Manage you will be able to select a mode in which your tape devices should operate:

To enable parallel processing, set the role to “Active.” Giving devices a “Passive” role enables a failover mechanism instead. Keep in mind it’s not possible to set only some of the devices to active and others to passive — either one device is active with all the others passive, or all devices are active. The general recommendation is to use active mode if you have several similar tape devices. If you have a modern library and an old one, you can give the old library a passive role so that your jobs won’t stop even if there are issues with the main (active) library.
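The role constraint described above can be sketched in a few lines of Python. This is an illustrative model of the rule only, not anything Veeam exposes:

```python
def roles_are_valid(roles):
    """Check a media pool's device roles against the rule described above:
    either every device is "Active" (parallel mode), or exactly one device
    is "Active" and the rest are "Passive" (failover mode)."""
    active = sum(1 for r in roles if r == "Active")
    all_active = active == len(roles) and active > 0
    one_active_rest_passive = active == 1 and all(
        r in ("Active", "Passive") for r in roles
    )
    return all_active or one_active_rest_passive

print(roles_are_valid(["Active", "Active"]))              # parallel mode: True
print(roles_are_valid(["Active", "Passive", "Passive"]))  # failover mode: True
print(roles_are_valid(["Active", "Active", "Passive"]))   # mixed: False
```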
As you can see, the setup is straightforward. However, I would like to take a moment to review how tape parallel processing works in Veeam Backup & Replication.

  1. To enable parallel processing within a single backup to tape job, this job must use either several backup jobs as sources, or the source backup job must be per-VM. If the source is a single job with multiple VMs that is creating normal backup files, the load cannot be split between the drives. Make sure your sources are set up correctly!
  2. Tape parallel processing is enabled at the media pool level. In media pool settings on the Options step, you can specify how many drives the job uses. If you use several tape devices in active mode, this number is the total number across all tape devices connected to this media pool. If parallel processing within a single job is allowed, see the previous point for requirements.
  3. Wondering how media sets are created with parallel processing? The rule is actually very simple: each drive keeps its own media set. However, this may create challenges in some setups. Consider this example: a customer opened a support ticket because Veeam Backup & Replication was using an excessive amount of tapes, yet the customer could see that many of the tapes were almost empty. After investigation, it turned out the customer had a media pool set to create a separate media set for every session. Initially that worked well because the amount of data for each session fit a set number of tapes almost perfectly. Then another drive was added, and the customer started using parallel processing. Now the data was spread across two sets of tapes. Since the media set was finalized after every session, Veeam Backup & Replication could not continue writing to half-empty tapes on the next session, so the customer saw an unacceptable increase in tape usage. The customer had to choose between 1) managing source data so that it fits better on two sets of tapes (a rather unreliable solution), 2) setting the media set to be continuous, or 3) disabling parallel processing.

Two or more media sets created in parallel can cause some confusing naming. Customers sometimes complain when they see that several tapes have an identical sequence number and were written at the same time. This is actually normal and does not cause any issues. To differentiate the tapes better, we recommend adding the %ID% variable to the media set naming.

Tape GFS improvements

Before diving into tape GFS improvements, I would like to review the default tape GFS workflow. On the day a GFS run is scheduled, the job starts at 00:00 and waits for a new point from the source job. If this new point is incremental, tape GFS will generate a virtual synthetic full on tape. If the point is a full backup, it will be copied as is. The tape GFS job will keep waiting for up to 24 hours, and if no new point has arrived by then, tape GFS will fail over to the previous available point. This creates a complication with backup copy jobs: because of a BCJ’s continuous nature, the latest point remains locked for the duration of the copy interval. Depending on the BCJ’s settings, tape GFS may have to wait a full day before failing over to the previous point. If the GFS job has to create points for several intervals on the same day (for example, weekly and monthly), tape GFS will not copy two separate points; instead, a single point will be used as both weekly and monthly. Finally, each interval keeps its own media set. For example, if a tape was used for a weekly point, it will not be used to store a monthly point unless erased or marked as free. This baffles a lot of users — they see an expired tape in the GFS media pool, yet the GFS tape job seems to ignore it and asks for a valid tape. This whole default workflow can be tweaked using registry values, which support has used to great effect.
So, what improvements does Update 4 have to offer? First, two registry value tweaks have now been added to the GUI:

  1. You can now select the exact time when the tape GFS job should start (overriding the default 00:00 time) on the Schedule step.
  2. Now you can tell the tape GFS job to fail over immediately to the previous point (instead of waiting for up to 24 hours), if today’s point is not available. The option is found under Options – Advanced – Advanced tab:
  3. A new registry value has also been added, which allows the tape GFS job to use expired tapes from any media set, and not only the one that belongs to a particular interval. Please contact support if you think you need this.

GFS parallel processing

If you look at the previous version’s user guide page for GFS media pool, you will be greeted with a box saying that parallel processing is not supported for tape GFS. This is no longer true – Update 4 has added parallel processing for tape GFS jobs as well. The workflow is the same as for normal media pools, which was described in the first section. Be wary of possible elevated tape usage when using GFS and parallel processing as such a setup requires many media sets to be created. As a rule of thumb for the minimum amount of tapes, you can use this formula:
A (media sets configured) x B (drives) = C (tapes used)
For example, a setup with all 5 media sets enabled and 2 drives will require a minimum of 10 tapes.
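The rule of thumb above is simple enough to sanity-check in a few lines of Python (a sketch of the arithmetic only, not anything Veeam exposes):

```python
def min_tapes(media_sets: int, drives: int) -> int:
    """Minimum number of tapes when each active drive keeps its own
    media set: every configured media set is multiplied across drives."""
    return media_sets * drives

# The example from the text: all 5 GFS media sets enabled, 2 drives.
print(min_tapes(5, 2))  # 10
```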

Daily media set

A major addition to tape GFS functionality is the “daily media set.” This allows users to combine GFS and normal tape jobs into a single entity. If you’re using GFS media pools for long-term archiving but also use normal media pools to keep a short incremental chain on tapes, consider simplifying management by using a daily media set instead of a separate job. You can enable this in the GFS media pool settings:

The daily media set requires the weekly media set to be enabled as well. With a daily media set, the GFS job will check every day whether the source job created any full or incremental points and will put them on tape. If the source job creates multiple points, they will all be copied to tape. This helps fill the gap between weekly GFS points and allows restores from an incremental point. During a restore, Veeam Backup & Replication will ask for the tapes containing the closest full backup (most likely a weekly GFS point) and the linked increments. Note that, as with other backup to tape jobs, the daily media set does not copy rollbacks from reverse incremental jobs; only the full backup, which is always the most recent point in a reverse incremental job, will be copied.
To give you a practical example, imagine the following scenario:
You have a forever forward incremental job that runs every day at 5:00 AM and is set to keep 14 restore points.
For tape, you can configure a GFS job with the daily media set keeping 14 days, weeklies for 4 weeks, and the desired number of months, quarters and years. The daily media set can be configured to append data and not to export the tape. Weekly and older GFS intervals do not append data and export the tapes after each session. That way, tapes containing incremental points are kept in the library and rotated according to the retention policy, while tapes containing full points are taken offline and can be stowed in a safe vault.

Should you need to restore, Veeam Backup & Replication will issue a request to bring online a tape containing the weekly GFS point, and will use the tape from the daily media set to restore the linked increments.

WORM tapes support

WORM stands for Write Once Read Many. Once written, this kind of tape cannot be overwritten or erased, making it invaluable in certain scenarios. Due to technical difficulties, previous versions of Veeam Backup & Replication did not have support for WORM tapes.
Working with WORM tapes is not that different from working with normal tapes. The first step is to create a special WORM media pool. As usual, there are two types: normal and GFS. There is only one special thing about a WORM media pool — the retention policy is grayed out. Next, you need to add some tapes. This is probably where the majority of issues arise: WORM tapes get recognized as normal tapes, and vice versa. Remember that everything WORM-related has a blue-colored icon. Veeam Backup & Replication identifies a WORM tape by its barcode or during inventory. Be sure to consult the documentation for your tape device and use the correct barcodes! For example, IBM libraries consider the letters V through Y in the barcode an indication of a WORM tape. Using them incorrectly will create confusion in Veeam Backup & Replication.
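As an illustration of the barcode convention mentioned above, here is a hedged sketch in Python. It assumes the last two characters of the label encode the media type and that, per the IBM rule cited, a letter from V to Y there marks a WORM cartridge. Real libraries apply their own documented scheme, so treat this as a mnemonic, not a validator.

```python
def looks_like_worm(barcode: str) -> bool:
    """Illustrative check based on the IBM convention mentioned above:
    a letter from V to Y in the media-type suffix indicates WORM.
    Consult your tape device's documentation before relying on any
    such heuristic -- the label layout here is an assumption."""
    suffix = barcode[-2:]  # assumed media-type suffix, e.g. "LW"
    return any(c in "VWXY" for c in suffix)

print(looks_like_worm("ABC123LW"))  # True under this assumed layout
print(looks_like_worm("ABC123L7"))  # False under this assumed layout
```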

NDMP backup

In Update 4, it is possible to back up and restore volumes from NAS devices that support the NDMP protocol. The first step (which sometimes gets overlooked) is adding an NDMP server. This is done from the Inventory view; our user guide provides the details.
NOTE: Mind the requirements and limitations, as the current version has several important caveats. For example, NetApp Cluster Aware Backup (CAB) extensions are currently not supported for NDMP backup in Update 4b; the workaround is to configure node-scoped NDMP.
After the NDMP server is added to the infrastructure, it can be set as a source for a normal file to tape job. The restore, as with other files on tape, is initiated from the Files view. Keep in mind that for NDMP, Veeam Backup & Replication currently does not allow restores of individual files; only the whole volume can be restored, either to the original or to a new location.

Tape selection mechanism

Update 4 added a new metric for used tapes: the number of read/write sessions. This metric is shown in the tape properties, under the Wear field:

Once a tape has been reused the maximum number of times (according to the tape specs), it will be moved to the Retired media pool. However, this is not new: previous versions of Veeam Backup & Replication also tracked warnings from tape devices indicating that a tape should no longer be used. What is new is that Veeam Backup & Replication Update 4 tries to balance usage and, among other things, will select the tape that has been used the fewest times. Here is a neat little scheme to help you remember the process:
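The balancing behavior can be pictured with a tiny sketch — the field names below are illustrative, not Veeam's actual data model:

```python
def pick_least_worn(tapes):
    """Sketch of the balancing described above: among candidate tapes,
    prefer the one with the fewest read/write sessions ("wear")."""
    return min(tapes, key=lambda t: t["sessions"])

# Hypothetical candidates with their session counters:
tapes = [
    {"barcode": "TAPE01", "sessions": 12},
    {"barcode": "TAPE02", "sessions": 3},
    {"barcode": "TAPE03", "sessions": 7},
]
print(pick_least_worn(tapes)["barcode"])  # TAPE02
```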

File from tape restore improvements

The logic for restoring folders from tape saw some improvements in Update 4. Now, when a user selects a specific backup set to restore from, only the files that were present in the folder during that backup session are restored. The menu itself did not change, only the logic behind it. As before, in Update 4 you need to start the restore wizard (as always, from the Files view) and click the Backup Set button to choose the state to restore to.

Tenant to tape

This feature allows service providers to offer a new type of service — writing clients’ backup data to tape. All configuration is done on the provider side: if Veeam Backup & Replication has a cloud provider license installed, the new job wizard will allow adding a tenant as a source for the job. Such jobs can only work with GFS media pools. Restore is also done on the provider side — the options are to 1) restore files to the original cloud repository (existing files will be overwritten, and the job will be mapped to the restored backup set), 2) restore to a new cloud repository, or 3) restore to a local disk. Finally, if the customer has their own tape infrastructure, it’s also possible to simply mail the tapes and avoid any network traffic.

Other improvements

Tape operator role. A new user role is now available. Users with this role can perform any service actions with tapes, but not initiate a restore.
Source processing order. You can now change the ordering of how sources should be processed, just like VMs for backup jobs.
Include/exclude masks for file to tape jobs. In previous versions, file to tape jobs could only use inclusion masks. Update 4 adds exclusion masks as well. This works only for files and folders; NDMP backups are still done at the volume level.
Auto eject for standalone drives when the tape is full. This is a small tweak in the tape logic when working with a standalone tape drive. If a tape fills up during a backup session, it will be ejected from the drive so that a person near the drive can insert another tape. This is also useful for protecting against ransomware, as the tape is ejected and offline.

Conclusion

The number of new tape features this update brought us clearly shows that tape remains a focus of product management. I encourage you to explore these new capabilities and to consider how they can make your backup practice even better.

The post How we improved tape support since Update 4 appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/4R39VQB0FDU/tape-support-improvements-update-4.html


British Columbia Forest Practices Board Replaces Legacy Backup Solution and Optimizes Cloud Data Management with Veeam


Business Wire – News by Company: Veeam Software

BAAR, Switzerland – Veeam® Software, the leader in Backup solutions that deliver Cloud Data Management™, today announced that British Columbia’s Forest Practices Board, an independent public watchdog for sound forest and range practices, has chosen Veeam Availability Suite™ to replace its legacy backup system. The Forest Practices Board opted to upgrade its unreliable and outdated system and implement Veeam with NetApp ONTAP, along with tape-based backups in Amazon S3 and Amazon Glacier, to transform its business continuity and disaster recovery strategy.

Overseeing Canada’s western and most biologically diverse province where forests account for 24.3 million hectares, the Forest Practices Board was established by the government in 1995 to conduct periodic audits of forest practices, investigate complaints and report results to the public and government. It completes up to 10 audits per year, capturing data on tablets and sending it back via secure network connections to the headquarters in Victoria where it could be compared with legacy information. The organization experienced issues when data was lost during the transfer process, resulting in incomplete or inaccurate analyses.
“We can’t risk losing field data,” said Tim Slater, Manager of Corporate Services and Information Systems at Forest Practices Board. “Lost data means staff and contractors making a second trip to the field, requiring them to rent a helicopter again. The cost could double to tens of thousands of dollars. Our organization is government-funded, so we are very aware of every cost.”
In addition, the Forest Practices Board faced issues recovering legacy data because its system was unreliable, leaving only a 50% chance that data could be recovered — backups were often corrupted or couldn’t be restored in full. “We found that our legacy backup wasn’t designed for virtualization and didn’t integrate with NetApp ONTAP. Configuring the legacy backup solution with NetApp SnapVault and NetApp SnapMirror was cumbersome and the replication never worked. As a consequence, our business continuity and disaster recovery readiness were on the line,” said Slater.
The Forest Practices Board selected Veeam Availability Suite in part due to its direct integration with NetApp ONTAP, which eliminated many of the issues with the legacy system. Now it backs up 69 virtual machines with 9TB of data from NetApp FAS2650 Storage Snapshots every hour and replicates them to an off-premises DR site every four hours. Recovery time objectives decreased dramatically to mere minutes, and the group can finally set recovery point objectives.
Also, the Forest Practices Board archives secondary tape-based backups in Amazon S3 and Amazon Glacier with Veeam’s Virtual Tape Library (VTL) integration. This practice, along with the NetApp and Veeam integration, saves the Board 12 to 15 hours a week. “Veeam has been backing up our data to tape for years, so now Veeam is backing up our virtual tape library to Amazon S3 and Glacier for long-term storage,” Slater said. “It’s an infinitely scalable, cost-effective solution that will help us ensure our data is available forever.”
According to the 2019 Veeam Cloud Data Management Report, application downtime costs organizations a total of $20.1 million globally in lost revenue and productivity each year. “Data availability brings significant business benefits in terms of brand reputation and customer confidence,” said Paul Strelzick, Senior Vice President of Americas at Veeam. “With proper cloud data management, the Forest Practices Board is assured that its reports are fully available on its website to encourage education, communication and dialogue about forest practices and conservation. It’s our pleasure to assist the organization in its mission and eliminate data challenges that might impede its work.”
“The best way to conserve forests is to apply sustainable forest management practices,” Slater said. “The best way to apply sustainable forest management practices is to have access to current and historical data. Veeam, NetApp ONTAP, Amazon S3 and Glacier help us to optimize data management across our hybrid cloud environment.”
For more information and the full success story, visit https://www.veeam.com/success-stories/veeam-customer-story-bc-forest-practice-board.html.
About Veeam Software
Veeam is the leader in Backup solutions that deliver Cloud Data Management. Veeam Availability Platform™ is the most complete backup solution for helping customers on the journey to achieving success in the 5 Stages of Cloud Data Management. Veeam has 355,000+ customers worldwide, including 82% of the Fortune 500 and 67% of the Global 2000, with customer satisfaction scores at 3.5x the industry average, the highest in the industry. Veeam’s global ecosystem includes 66,000 channel partners; Cisco, HPE, NetApp and Lenovo as exclusive resellers; and 23,500+ cloud and service providers. Headquartered in Baar, Switzerland, Veeam has offices in more than 30 countries. To learn more, visit https://www.veeam.com or follow Veeam on Twitter @veeam.

Original Article: http://www.businesswire.com/news/home/20190905005863/en/British-Columbia-Forest-Practices-Board-Replaces-Legacy/?feedref=JjAwJuNHiystnCoBq_hl-c9EfAjTsk-flsGmdO9JqnwMCNrJ5zSzFYT7RwBKCzchnkvYMqDDYxFrLs-oQ2BHQypCsFFvugK3n-fFs_G7QqSAaM4HAi1dfjF46AFX3NGZlQo71FJ67f_ORYRya2TtNQ==


We are available via the VMware Cloud Marketplace!


Veeam Executive Blog – The Availability Lounge  /  Carey Stanton


I’m consistently amazed how Veeam’s intense focus on R&D drives technology innovation for our customers. Yet you cannot build solid solutions without understanding the goals of your customers. It was clear to me from customer conversations during VMworld 2019 US that the backup market is changing. Backup is still critical, but as customers build hybrid clouds, they need more than just backup. This becomes even more difficult with a multi-cloud strategy: challenges such as data criticality, growth and sprawl, coupled with the various tools and capabilities of disparate cloud providers, can quickly complicate Cloud Data Management and protection.
In 2016, VMware and AWS announced a strategic alliance to build VMware Cloud on AWS. When the service launched, I was excited when Veeam was invited to participate as a design partner for VMware Cloud on AWS.
Veeam’s partner-centric strategy and ability to adapt to the market have been the cornerstone of success for more than a decade and are the foundation for Act II. It was clear that, to succeed in this changing environment and take advantage of the new technologies offered by Veeam alliance partners like VMware, we had to adapt.
In January, Veeam launched Veeam Availability Suite 9.5 Update 4, delivering new major capabilities that provide easy cloud migration and cloud mobility, cloud-native backup, cost-effective data retention, and portable cloud-ready licensing, increased security and data governance, and solutions to make it easier than ever for service providers to deliver Veeam-powered services to market. Yet customers did not have a way to discover and deploy third-party solutions that had been validated for VMware-based SDDCs, such as VMware Cloud on AWS and VMware Cloud Provider Partners.
At VMworld US 2018, VMware announced a tech preview of the VMware Cloud Marketplace. At VMworld Europe 2018, VMware announced the beta version, and Veeam was eager to participate. During VMworld 2019, VMware announced the initial availability of the VMware Cloud Marketplace, where customers can search for, quickly find, and rapidly deploy Veeam.

 
VMware provides a good overview of the additional benefits achieved with this launch.

By creating an account and signing on to the VMware Cloud Marketplace, you can easily search for and locate Veeam.

From day one, we have focused on partnerships to deliver customer value, and our collaboration with VMware is no exception. We are delivering choice, flexibility and value to customers of all sizes. The collaboration on the VMware Cloud Marketplace epitomizes this approach and will help customers search for, quickly find, and rapidly deploy Veeam, a VMware Ready certified solution. I encourage you to visit the VMware Cloud Marketplace, download the .iso and deploy it. Though you may not have been able to celebrate this achievement during the Veeam party on Tuesday, August 27th, I assure you our journey for Act II is only beginning.

Makes disaster recovery, compliance, and continuity automatic

NEW Veeam Availability Orchestrator helps you reduce the time, cost and effort of planning for and recovering from a disaster by automatically creating plans that meet compliance.

DOWNLOAD NOW

New
Veeam Availability Orchestrator

Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/pISycpJ1WCA/vmware-cloud-marketplace-partnership-solution.html


Veeam Named to the 2019 Forbes Cloud 100 for Fourth Consecutive Year

Business Wire – News by Company: Veeam Software

SAN FRANCISCO – Veeam® Software, the leader in Backup solutions that deliver Cloud Data Management™, has, for the fourth consecutive year, been named to the Forbes 2019 Cloud 100, the definitive ranking of the top 100 private cloud companies in the world, published by Forbes in partnership with Bessemer Venture Partners and Salesforce Ventures.

“We are honored to be named to the Forbes Cloud 100 for the fourth consecutive year,” said Ratmir Timashev, Co-Founder and Executive Vice President (EVP) of Sales & Marketing. “This recognition further solidifies Veeam’s ongoing commitment to help enterprises transition to the cloud and embrace Digital Transformation. As enterprises look to embrace Cloud Data Management and move to hybrid cloud, they are looking for vendors who can deliver optimal solutions for these environments and Veeam, yet again, proves that it is leading this space.”
According to the 2019 Veeam Cloud Data Management Report, 72% of organizations are now looking to embrace Cloud Data Management to better meet protection needs and leverage the power of their data. Now a $1 billion private software company with more than 350,000 customers and 23,500 partners in the Veeam Cloud & Service Provider (VCSP) program, Veeam’s momentum in both company growth and cloud commitment continues to accelerate its market share growth.
As part of the rigorous selection process for the Forbes 2019 Cloud 100, Bessemer Venture Partners received submissions from hundreds of cloud startups. The Cloud 100 Judging Panel, made up of public cloud company CEOs, reviewed the data to select, score, and rank the top 100 private cloud companies from all over the world. The evaluation process involved ranking companies across four factors: market leadership (35%), estimated valuation (30%), operating metrics (20%), and people & culture (15%).
“For four years now, we have ranked the best and brightest emerging companies in the cloud sector,” said Alex Konrad, Forbes editor of The Cloud 100. “With so many businesses growing fast in the cloud, from cybersecurity and marketing to data analytics and storage, it’s harder than ever to make the Cloud 100 list – but with more elite company if you do. Congratulations to each of the 2019 Cloud 100 honorees and the 20 Rising Stars honorees poised to join their ranks!”
The Forbes 2019 Cloud 100 and 20 Rising Stars lists are published online at www.forbes.com/cloud100 and will appear in the September 2019 issue of Forbes magazine.
Each year the CEOs of The Cloud 100 and the 20 Rising Stars companies are honored at the exclusive Cloud 100 Celebration hosted by Bessemer Venture Partners, Salesforce Ventures, and Forbes. A special thank you to our sponsors Amazon Web Services (AWS), Bank of America Merrill Lynch, Cooley, FuelxMcKinsey, Goldman Sachs, J.P. Morgan, Nasdaq, Qatalyst Partners, and Silicon Valley Bank who make this event possible.
About Veeam Software
Veeam is the leader in Backup solutions that deliver Cloud Data Management. Veeam Availability Platform™ is the most complete backup solution for helping customers on the journey to achieving success in the 5 Stages of Cloud Data Management. Veeam has 350,000+ customers worldwide, including 82% of the Fortune 500 and 67% of the Global 2,000, with customer satisfaction scores at 3.5x the industry average, the highest in the industry. Veeam’s global ecosystem includes 66,000 channel partners; Cisco, HPE, NetApp and Lenovo as exclusive resellers; and 23,500+ cloud and service providers. Headquartered in Baar, Switzerland, Veeam has offices in more than 30 countries. To learn more, visit https://www.veeam.com or follow Veeam on Twitter @veeam.
About Bessemer Venture Partners
Bessemer Venture Partners is the world’s most experienced early-stage venture capital firm. With a portfolio of more than 200 companies, Bessemer helps visionary entrepreneurs lay strong foundations to create companies that matter, and supports them through every stage of their growth. The firm has backed more than 120 IPOs, including Pinterest, Shopify, Yelp, LinkedIn, Skype, LifeLock, Twilio, SendGrid, PagerDuty, DocuSign, Wix, and MindBody. Bessemer’s 15 partners operate from offices in Silicon Valley, San Francisco, New York City, Boston, Israel, and India. For more information, please visit www.bvp.com.
About Forbes
The defining voice of entrepreneurial capitalism, Forbes champions success by celebrating those who have made it, and those who aspire to make it. Forbes convenes and curates the most-influential leaders and entrepreneurs who are driving change, transforming business and making a significant impact on the world. The Forbes brand today reaches more than 120 million people worldwide through its trusted journalism, signature LIVE events, custom marketing programs and 40 licensed local editions in 70 countries. Forbes Media’s brand extensions include real estate, education and financial services license agreements. For more information, visit: https://www.forbes.com/forbes-media/.
About Salesforce Ventures
Salesforce is the fastest growing top five enterprise software company and the #1 CRM provider globally. Salesforce Ventures—the company’s corporate investment group—invests in the next generation of enterprise technology that extends the power of the Salesforce Customer Success Platform, helping companies connect with their customers in entirely new ways. Portfolio companies receive funding as well as access to the world’s largest cloud ecosystem and the guidance of Salesforce’s innovators and executives. With Salesforce Ventures, portfolio companies can also leverage Salesforce’s expertise in corporate philanthropy by joining Pledge 1% to make giving back part of their business model. Salesforce Ventures has invested in more than 300 enterprise cloud startups in 20 different countries since 2009. For more information, please visit www.salesforce.com/ventures.

Original Article: http://www.businesswire.com/news/home/20190911005647/en/Veeam-Named-2019-Forbes-Cloud-100-Fourth/?feedref=JjAwJuNHiystnCoBq_hl-c9EfAjTsk-flsGmdO9JqnwMCNrJ5zSzFYT7RwBKCzchnkvYMqDDYxFrLs-oQ2BHQypCsFFvugK3n-fFs_G7QqSAaM4HAi1dfjF46AFX3NGZlQo71FJ67f_ORYRya2TtNQ==


How to backup Kubernetes Master Node configuration

Veeam Software Official Blog  /  David Hill


As the use of Kubernetes grows significantly in production environments, backing up the configuration of these clusters is becoming ever more critical. In previous blogs, we have talked about the difference between stateful and stateless workloads, but this blog focuses on backing up the cluster config, not the workloads.
When running a Kubernetes cluster in an environment, the Kubernetes Master Node contains all the configuration data including items like worker nodes, application configurations, network settings and more. This data is critical to restore in the event of a master node failure.
For us to understand what we need to back up, first we need to understand what components Kubernetes needs to operate.
Kubernetes etcd
In Kubernetes, etcd is one of the key components. etcd serves as Kubernetes’ backing store, and all cluster data is stored there. It is an open-source key-value store used for persistent storage of all Kubernetes objects, such as deployment and pod information, and it can only run on a master node. This component is critical when backing up Kubernetes configurations.
Another key component is the certificates. By backing up the certificates, we can easily restore a master node. Without the certificates, we would need to recreate the cluster from scratch.
We can back up all the key Kubernetes Master Node components using a simple script. I got the basics of the script from this site.

  # K8S backup script
  # David Hill 2019

  # Backup certificates
  sudo cp -r /etc/kubernetes/pki backup/

  # Make etcd snapshot
  sudo docker run --rm -v $(pwd)/backup:/backup \
    --network host \
    -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd \
    --env ETCDCTL_API=3 \
    k8s.gcr.io/etcd-amd64:3.2.18 \
    etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
    snapshot save /backup/etcd-snapshot-latest.db

The script above does two things:

  1. It copies all the certificates.
  2. It creates a snapshot of the etcd keystore.

These are all saved in a directory called backup.
After running the script, we have several files in the backup directory. These include certificates, snapshots and keys required for Kubernetes to run.
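Before offloading that directory anywhere, it is worth a quick sanity check that both pieces are actually present. Here is a minimal sketch, assuming the layout produced by the script above (the function name is illustrative, not part of the original script):

```shell
# Hypothetical sanity check before offloading the backup directory.
# Assumes the layout produced by the script above: backup/pki/ for the
# certificate copies and backup/etcd-snapshot-latest.db for the snapshot.
check_k8s_backup() {
  local dir="${1:-backup}"
  [ -f "$dir/etcd-snapshot-latest.db" ] || { echo "missing etcd snapshot"; return 1; }
  [ -d "$dir/pki" ] || { echo "missing certificate copies"; return 1; }
  echo "backup contents look complete"
}
```

Running a check like this before the agent job fires is a cheap guard against silently offloading an incomplete restore point.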
Kubernetes Master Node Configuration
We now have a backup of the critical Kubernetes Master Node configuration. Now, the issue is that this data is all stored locally. What happens if we lose the node completely? This is where Veeam comes in. By using Veeam Agent for Linux, we can easily back up this directory and store it in a different location. We can also protect this critical data and manage and store that data in multiple locations, like a scale-out backup repository and the cloud tier leveraging object storage.

Veeam Agent for Linux

When configuring a backup job, we only want to back up the directory where the Kubernetes configuration data is stored. By running the script above, we store all that data in /root/backup. This is the directory we are going to back up in this example.
Walking through the backup job for our master node, two options we must select are File Level Backup and the directory to back up:
Edit Agent Backup Job K8S Infrastructure Backup
Once the job has run, we can open the backup and check to be sure all the files we requested to be backed up are included.
K8S-Master as of less than a day ago
We now have a successful backup of our Kubernetes Master Node configuration. We can offload this backup to an object storage repository for off-site backup storage.
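Outside of Veeam’s built-in offload, a manual equivalent is to bundle the directory into a single dated archive first, so each restore point lands in a bucket as one object. A minimal sketch, where the function name and naming convention are assumptions rather than anything Veeam requires:

```shell
# Illustrative packaging before a manual offload: one dated tarball
# per restore point, so each upload is a single self-contained object.
package_backup() {
  local src="${1:-backup}"
  local out="k8s-config-$(date +%Y%m%d).tar.gz"
  tar -czf "$out" "$src" && echo "$out"
}
```

The resulting archive can then be shipped by whatever copy mechanism you prefer; the tarball name is purely a convention.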
In the next blog, the topic will be restoring this configuration data in the event of a failure or the loss of a master node.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/Li3z-SYeBYM/backup-kubernetes-master-node.html


8 lessons from an Office 365 backup customer


Veeam Software Official Blog  /  Russ Kerscher


A few months ago, one of our customers presented at VeeamON and gave fellow IT Pros an unedited, unfiltered take on using Office 365 backup.
This customer migrated to Office 365 almost two years ago, added Veeam Backup for Microsoft Office 365 to protect their data, went through a major company acquisition where they inherited heavy SharePoint Online and OneDrive for Business usage and had to manage through an exponential increase in Office 365 data. They had been through quite a bit and had a lot of lessons to share.
One slide was so good, we’ve provided a screenshot and a summary below. The hope is you can pick up one or two other things to help with managing or optimizing your current or future Office 365 backup infrastructure.

A simple slide but packed with over 18 months of experience. Here’s a summary of what they had to say.

#1 When your environment is changing, change with it!

When they originally migrated to Office 365, they had about 2,200 users and 40TB of data. Their retention policies were set to 15 months for all those users, and their organization only utilized Exchange and Skype. About six months later, they had to massively scale their environment due to a company acquisition that added 25% more users and, more importantly, many more applications from the Microsoft stack (Teams, OneDrive, SharePoint, Dynamics, Power BI), plus users that had unlimited retention settings. They were now looking at an extended use case including tens of terabytes of email data (some structured, some not) that they had to size appropriately in order to successfully back it all up.

#2 Watch proxy CPU/memory utilization. Don’t skimp on resources.

Prior to the acquisition, they started with a simple backup configuration. One VBO backup, one proxy server, and a simple SMB share repository on a storage server. While this was a good way to start, it wasn’t going to cut it long term, especially with the acquisition on the horizon. Soon they scaled to five proxies, strategically spread across different portions of the Office 365 environment. Breaking this out ensured that SharePoint wasn’t blocked by Exchange, for example, and they could apply more resources to Exchange since it had the bulk of the data and ensure it completed fast. This was a big lesson learned, and after this optimization the backups started performing much better as a whole.

#3 Proxy threads are important!

How much work the proxy server does at a given time is an important configuration, since performance can improve or degrade as you increase or decrease threads. They strongly encouraged other organizations to fine-tune proxy threads, as doing so ensures you’ll see better performance. If the proxy is doing the work seamlessly, it should move more data, faster.

#4 Protect the JET database used as repository

The JET Blue database isn’t just a simple backup item that sits in a company’s repository; it’s a full archive database storing the data retrieved from Office 365 Exchange Online, SharePoint Online, OneDrive for Business and Microsoft Teams in a cloud-based instance of Office 365. It still needs to be protected from corruption on disk, as well as from ransomware events and other security threats. If you lose the JET database, you lose your backups. So, what they did was protect the JET database itself with Veeam Backup & Replication. This is even something that can be done with the free version, Veeam Backup & Replication Community Edition.

#5 Veeam support is an invaluable resource

The value of Veeam support cannot be overstated. When they needed advice on planning and scaling due to data growth and the company acquisition, they reached out to the Veeam support team. Not only did the team help there, but they also provided tips and tricks for improving general, day-to-day maintenance.

#6 Watch for bottlenecks in your infrastructure, Veeam will expose them!

If your storage is slow, Veeam will make sure to out it! Veeam can tell you if the storage repository is underperforming, or if the source infrastructure is the bottleneck (that is, latency in responses from the Office 365 infrastructure). Basically, most of the time is spent waiting on the cloud itself to process the data. It’s not just about adding a backup; it’s again about evolving and improving your strategy as your environment changes.

#7 Office 365 resources are not unlimited

In Office 365, it’s important to remember that resources are not unlimited. With on-prem Exchange, it’s easy to move data quickly. But in Office 365, data can only be moved as fast as Microsoft will let you, and they can even throttle you down. Another tip the customer learned while doing the tenant-to-tenant migration during the acquisition: if you ask Microsoft via a case in Office 365’s support portal, they can temporarily remove some of the throttling constraints on your tenant so you can move data faster.

#8 Test your backups!

This one is simple, but often overlooked. It’s critically important to test your backups on a regular basis. An untested backup is as good as no backup at all! This is talked about quite a bit with Veeam Backup & Replication but applies to Office 365 backup as well.

There you have it – 8 great lessons learned from an Office 365 backup customer. If you have comments or questions, please add them below. If you’re new to Office 365 backup, you can download a FREE 30-day trial and see for yourself, read the 6 Reasons for Office 365 Backup or study the Office 365 Shared Responsibility Model.
The post 8 lessons from an Office 365 backup customer appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/ldxIespVYqg/office-365-backup-customer-lessons.html


Veeam Cares: Spotlight on Scouting


Veeam Executive Blog – The Availability Lounge  /  Jason Buffington


Veeam Cares is the #Veeamazing corporate program that encourages Veeamers to volunteer in order to impact our world (in addition to fixing the world’s data protection and data management challenges). One of the perks of being a Veeamer is that Veeam gives our employees 24 hours (three business days) of volunteer time, above and beyond PTO and regional holidays. I’m a volunteer leader in Scouting (which is also a worldwide movement); so, this year, I spent my Veeam Cares days and some vacation time serving at the World Scout Jamboree, where 45,000 youth from 150 countries came together for 12 days.
Check out this video for more about Veeam Cares, Scouting, and the World Scout Jamboree.

It’s worth pointing out that the Scout Motto is “Be Prepared” — which probably resonates with a lot of the Backup Administrators and BC/DR Planners that utilize Veeam technologies, as well.
As always, thanks for watching.

Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/YoXLx4z9l1M/veeam-cares-spotlight-on-scouting.html


September 2019 – Editor Note

Can you believe summer is over and it’s back-to-school time?!

Thank you to all who stopped by the Veeam booth and attended the breakout sessions at VMworld. From what I heard, good times were had at the legendary Veeam party, too.

Veeam 101: Learn about the Support and Renewals services – Webinar – Sept 4

September 04 | Wednesday


Hosts: Tyler Payton, Jim Tedesco
Whether you are just starting to use Veeam® or are a current customer, there is a lot to learn! Join us on Sept. 4 at 1 p.m. ET for a live session, where Jim Tedesco (SVP, Worldwide Renewals and Customer Success) and Tyler Payton (Customer Care Manager) will host an interactive and educational webinar and dive deep into the support and renewals services.


You’ll get detailed information on:
  • Veeam’s Technical support offering
  • Key features you might have missed
  • Our incident resolution process
  • Live demos
  • And more!
September 04, Wednesday 12:00 PM (UTC -05:00) REGISTER

Healthcare backup vs record retention

Healthcare backup vs record retention

Veeam Software Official Blog  /  Jonathan Butz

Healthcare overspends on long-term backup retention

There is a dramatic range of perspective on how long hospitals should keep their backups: some keep theirs for 30 days while others keep their backups forever. Many assume the long retention is due to regulatory requirements, but that is not actually the case. Retention times longer than needed have significant cost implications and lead to capital spending 50-70% higher than necessary. At a time when hospitals are concerned with optimization and cost reduction across the board, this is a topic that merits further exploration and inspection.

What is the role of data protection?

The primary role of data protection is recovery in response to failure, malice, or accident. Wherever we have applications and data, we need assurance that those applications can be restored quickly (RTO) with tolerable near-term information loss (RPO). That is an IT deliverable common to all hospitals.

What are the relevant regulations?

HIPAA mandates that Covered Entities and Business Associates have backup and recovery procedures for Patient Health Information (PHI) to avoid loss of data. Nothing regarding duration is specified (CFR 164.306, CFR 164.308). State regulations govern how long PHI must be retained, usually ranging from six to 25 years, sometimes longer.

The retention regulations refer to the PHI records themselves, not the backups thereof. This is an important distinction and a source of confusion and debate. In the absence of deeper understanding, hospitals often opt for long term backup retention, which has significant cost implications without commensurate value.

How do we translate applicable regulations into policy?

There are actually two policies at play: PHI retention and Backup retention. PHI retention should be the responsibility of data governance and/or application data owners. Backup retention is IT policy that governs the recoverability of systems and data.

I have yet to encounter a hospital that actively purges PHI when permitted by regulations. There’s good reason not to: older records still have value as part of analytics datasets but only if they are present in live systems. If PHI is never purged, records in backups from one year ago will also be present in backups from last night. So, what value exists in the backups from one year ago, or even six months ago?

Keeping backups long term increases capital requirements and the complexity of data protection systems, and limits hospitals’ ability to transition to new data protection architectures that offer a lower TCO, all without mitigating additional risk or adding value.

What is the right backup retention period for hospital systems?

Most agree that the right answer is 60-90 days. Thirty days may expose some risk from undesirable system changes that require going further back at the system (if not the data) level; examples given include changes that later caused a boot error. Beyond 90 days, it’s very difficult to identify scenarios where the data or systems would be valuable.

What about legacy applications?

Most hospitals have a list of legacy applications that contain older PHI that was not imported into the current primary EMR system or other replacement application. The applications exist purely for reference purposes, and they often have other challenges such as legacy operating systems and lack of support, which increases risk.

For PHI that only exists in legacy systems, we have only two choices: keep those aging apps in service or migrate those records to a more modern platform that replicates the interfaces and data structures. Hospitals that have pursued this path have been very successful reducing risk by decommissioning legacy applications, using solutions from Harmony, Mediquant, CITI, and Legacy Data Access.

What about email?

Hospitals have a great deal of freedom to define their email policies. Most agree that PHI should not be in email and actively prevent it by policy and process. Without PHI in email, each hospital can define whatever email retention policy they wish.

Most hospitals do not restrict how long emails can be retained, though many do restrict the ultimate size of user mailboxes. There is a trend, however, often led by legal teams, to reduce email history. It is often phased in gradually: one year they will cut off the email history at ten years, then at eight or six, and so on.

It takes a great deal of collaboration and unity among senior leaders to effect such changes, but the objectives align the interests of legal, finance, and IT. Legal reduces discoverable information; finance reduces cost and risk; and IT reduces the complexity and weight of infrastructure.

The shortest email history I have encountered is two years, at a Detroit health system: once an item in a user mailbox reaches two years old, it is actively removed from the system by policy. They also keep their backups for only 30 days. Theirs is the leanest healthcare data protection architecture I have yet encountered.

Closing thoughts

It is fascinating that hospitals serving the same customer needs, bound by very similar regulatory requirements, come to such different conclusions about backup retention. That should be a signal that there is real optimization potential with both PHI and email:

  • There is no additional value in backups older than 90 days.
  • Significant savings can be achieved through reduced backup retention of 60-90 days.
  • Longer backup retention times increase capital costs unnecessarily, by as much as 70%, and hinder migration to more cost-effective architectures.
  • Email retention can be greatly shortened to reduce liability and cost through set policy.

The post Healthcare backup vs record retention appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/jwSN8LPyPaU/healthcare-data-protection-policy.html


How to copy AWS EMR cluster hive scripts


Veeam Software Official Blog  /  Rick Vanover

I recently discussed a scenario where Veeam Backup & Replication was deployed completely in the public cloud. In fact, many organizations run the Veeam Backup & Replication server in the cloud and manage either AWS EC2 or Azure VM backups with Veeam Agent for Microsoft Windows and Veeam Agent for Linux.

In this situation, the discussion turned to some other services in the cloud, in particular AWS EMR (Elastic MapReduce) and how to manage the hive scripts that are part of the EMR cluster. I offered a simple solution:

Veeam File Copy Job

Since the Veeam Backup & Replication server is in the public cloud, the EMR cluster can be inventoried. This means it is visible in the Veeam infrastructure. Generically speaking, the Linux requirements for a repository or other Linux functions are documented here. But mainly, Perl and SSH are what is needed to add the EMR cluster into Veeam Backup & Replication. This is shown in the figure below:

 

The second entry, ec2-54-196-190-21.compute-1.amazonaws.com, is the EMR cluster’s master public DNS name. From there, it is visible for some basic interaction with Veeam. This reminds me of a challenge I had nearly 10 years ago: I had a physical server with millions of files that I needed to move, and the Veeam FastSCP engine saved the day.

The EMR cluster brought that situation back to mind. When you have hive scripts inventoried in EMR, Veeam can copy them off to another system so you have a copy of them. It’s important to note that this is not a backup, but a file copy job. Nonetheless, it can be helpful if you need a copy of your EMR scripts on another system. In the screenshot above, I could copy them to the ec2-18-206-235-47.compute-1.amazonaws.com host, as that is a Linux EC2 instance and I have Veeam Agent for Linux on that system. So, I could have a “proper” backup as well.

One other nice feature here is that I can use the Veeam text editor (what used to be the FastSCP Editor). Simply right-click on your hive script and select Edit; this may be easier than dusting off your vi skills:

 

Then from there, I can build the File Copy Job. The file copy job is a simple copy with an optional schedule. You select a source and a target, and it will do just as it sounds. It’s handy in that you can coordinate its schedule with other jobs you may already have in the public cloud. In fact, that’s exactly what I have done in the example below. The file copy job is placing the hive scripts and some other folders I care about from my EMR cluster on a system I’m backing up with Veeam Agent for Linux. That is scheduled to run right after the Linux Agent job. This is a view of the file copy job:
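For readers who want to see the moving parts, the effect the file copy job achieves (a scheduled, one-way copy from the EMR master to another host) could be approximated with plain scp in a crontab. In this illustrative fragment, everything except the master’s DNS name from the screenshot above is a hypothetical path or key:

```
# Illustrative crontab entry: pull hive scripts off the EMR master nightly
# at 02:30, after the agent backup window. Paths and key are assumptions.
30 2 * * * scp -i ~/.ssh/emr.pem hadoop@ec2-54-196-190-21.compute-1.amazonaws.com:/home/hadoop/hive-scripts/* /opt/emr-script-copies/
```

The file copy job does the same thing with scheduling and job coordination built in, which is why it is the more convenient option here.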

 

Have you ever used the file copy job? It may be handy for services such as EMR clusters that you need to copy files to or from. You can use the file copy job in the free Veeam Community Edition; download it now and try it for yourself.

The post How to copy AWS EMR cluster hive scripts appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/k_JMjkHQSjM/aws-emr-cluster-hive-scripts-file-copy-job.html


SAP HANA tenant database system copy and recovery


Veeam Software Official Blog  /  Clemens Zerbe

Navigation:

Part 1 — 3 steps to protect your SAP HANA database
Part 2 — How to optimize and backup your SAP HANA environment
Part 3 — SAP HANA tenant database system copy and recovery

 

Throughout this blog series dedicated to the Veeam Plug-in for SAP HANA, I have highlighted the challenges SAP administrators face in maintaining reliability and minimizing downtime. What we have not yet had the opportunity to discuss is how the demand for data preservation has increased precipitously in recent years.

This could be due to regulations or the need to preserve critical records. Now businesses are grappling with maintaining data to leverage it for long-term strategic business decisions, testing, or even monetizing it. For the final blog in this series, I will provide you with answers to typical customer questions I receive before or when enabling the Veeam Plug-in for SAP HANA.

How to plan for long-term retention requirements and restore scenarios

The biggest challenge regarding this topic is how to restore data without knowing much about the backup. You might be sitting in front of an empty SAP HANA database without any catalog entries, on a different host, or even on a different version of SAP HANA.

When performing a restore, the same version of SAP HANA or later is required, so I highly recommend documenting the HANA version and consulting SAP’s HANA Update table. Please also consult any related SAP documentation before performing a restore to a different version of SAP HANA to prevent unnecessary challenges.

In most cases you will have a retention policy spanning several days or even weeks. Customers typically delete older backups (you may recall we discussed proceeding carefully in the first blog of this series). You may encounter a missing catalog entry for the restore point you need. Depending on your retention policy, you might even have already deleted all the <EBID>.vab files from your repository.

To make this backup recoverable, you need to proceed cautiously. Creating a proper backup is a good start. In my example I cover only the tenant database, but the way of creating and restoring the database is the same for the SYSTEM DB, and you should always have both backups available.

Create a backup of tenant database

Please refer to the first blog in this series to create a backup with SAP HANA Studio if needed and document it. This will be important during the restore process.

Often at Veeam, we stress the importance of naming conventions for tags, backup jobs, and other related information so it’s intuitive to others. After clicking next and next again, the backup will be created. A recommended practice I should share is to copy the backup log into your backup documentation for this long-term backup.

You will now find your recently created backup in the backup catalog and can document the EBID numbers for your backup services.

All EBIDs have corresponding <EBID>.vab files in our repositories. Go to your repository and find these files and copy them into your long-term vault (e.g. file to tape, long-term S3 bucket).
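The copy itself is plain file handling. Here is a self-contained sketch with illustrative paths; in a real setup REPO points at the Veeam repository folder and VAULT at your tape staging area or S3-sync directory.

```shell
# Illustrative paths only; point these at your real repository and vault.
REPO=/tmp/veeam_repo_out
VAULT=/tmp/hana_longterm_vault
mkdir -p "$REPO" "$VAULT"

# Simulate two backup pieces so the sketch is self-contained
touch "$REPO/1566216000041.vab" "$REPO/1566216000042.vab"

# Copy every piece of the documented backup into the long-term vault,
# preserving timestamps
find "$REPO" -maxdepth 1 -name '*.vab' -exec cp -p {} "$VAULT"/ \;

ls "$VAULT"
```

File to tape or an S3 lifecycle tool can then take over the vault directory.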

It is important to note that if these <EBID>.vab files are deleted from the repository through backint, a restore will no longer be possible. To avoid this, you MUST delete this specific data backup from the catalog before your retention policy removes it. The next screenshot shows how to delete a specific data backup from the catalog: you delete only the catalog entry for this specific data backup, while the files remain in the repository.

The Catalog and Backup Location option will delete the <EBID>.vab files inside the repository — which we wish to avoid.

After refreshing the catalog, you’ll see it’s gone forever from the SAP HANA catalog. Check the file system for the repository and you’ll still find the files.

Now you can copy/move/archive these <EBID>.vab files with file to tape or other tools.

Recovery of tenant database

To recover it, please follow the steps below:

  1. Copy the <EBID>.vab files back to your original repository folder
  2. Restore the data backup using the backup prefix you documented.
  3. Start a recovery of your tenant and select Recover the database to a specific data backup.
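Step 1 of the list above is again plain file handling. A sketch with illustrative paths follows; in a real recovery, VAULT is wherever you archived the pieces and REPO is the original repository folder recorded in your backup documentation.

```shell
# Illustrative paths only; use your archive location and the original
# repository folder from the backup documentation.
VAULT=/tmp/hana_vault_in
REPO=/tmp/veeam_repo_in
mkdir -p "$VAULT" "$REPO"

# Simulate a previously archived backup piece
touch "$VAULT/1566216000041.vab"

# Return the pieces to the repository so backint can find them again
cp -p "$VAULT"/*.vab "$REPO"/
ls "$REPO"
```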

Click next and choose Recover without the backup catalog.

Now is the moment where we’ll be glad we documented the original backup. Have your Backup Prefix ready.

Click Next — you don’t have many other options to restore your database.

Make sure to check the summary — review the SQL statement if needed — and press Finish.

Tenant database System Copy with Veeam

The next question I often receive is how to create an SAP HANA System Copy with Veeam Backup & Replication. System copies are one of the often-used SAP Basis processes to create DB copies for your Quality Assurance, Development or Sandbox Systems. Keep in mind this is only the first step of this process — the database copy. Please consult SAP documentation on further steps like system renaming and cleanups.

Please follow these steps to make this happen:

Create a backup of the tenant database: see above for how to create it with a meaningful name, and document your backup properly.

This is a backup of my S/4 production system.

Before I can restore this tenant on my S/4 QA system, I need to tell my secondary HANA System to search within a different job entry via command-line interface (CLI). Keep in mind you need access rights into the needed repositories to see the other SAP HANA system entries.

 

sles15hana04:/opt/veeam/VeeamPluginforSAPHANA # SapBackintConfigTool --help
  --help                Show help
  --show-config         Show configuration parameters
  --wizard              Start configuration wizard
  --set-credentials arg Set credentials
  --set-host arg        Set backup server
  --set-port arg        Set backup server port
  --set-repository      Set backup repository
  --set-restore-server  Set source restore server for system copy processing
  --map-backup          Map backup

sles15hana04:/opt/veeam/VeeamPluginforSAPHANA # SapBackintConfigTool --set-restore-server
Select source SAP HANA plug-in server to be used for system copy restore:
1. sles15hana04
2. sles15hana03
3. sles15hana02
4. sles15hana01
Enter server number: 2
Available backup repositories:
1. Default Backup Repository
2. SOBR1
3. w2k19repo_ext1
4. sles15repo_ext1
Enter repository number: 2
sles15hana04:/opt/veeam/VeeamPluginforSAPHANA #

 

This set of commands tells the Veeam Backup & Replication SAP backint client to search inside the sles15hana03/SOBR1 backup job to find this specific backup. This has no impact on your running backups. If you are not able to modify the configuration, check your user permissions and the file permissions on /opt/Veeam/VeeamHANAPlug/Veeam_config.xml

The next step is to perform a restore on your copy system. While this seems easy, I recommend reviewing the comments below for important steps you may not be familiar with.

Select the tenant you want to restore to.

Recover the database to a specific data backup — the one we just created.

Select Recover without backup catalog and enable Backint System Copy. It is important to add the Source system with tenant@SID naming.

Now add your Backup Prefix:

No other options are possible so click Next.

Review the summary before initiating the restore.

Now SAP HANA Studio will shut down this specific tenant and start the recovery.

If everything is going well, it should look like this and begin the recovery process.

The tenant will restart.

And thankfully, the HANA System copy process is done. Just keep in mind this was only the database copy. Additional SAP tasks such as SAP system renaming, etc. can begin.

As a result, I have created an SAP HANA DB system copy of my production S/4 database into a new HANA system with a new SID.

Closing thoughts

I hope you enjoyed this blog series and are now better prepared to leverage Veeam to back up, restore and protect your SAP HANA environment more effectively. As SAP S/4HANA is the market-leading intelligent ERP solution with over 11,000 customers, I'm thrilled that Veeam Backup & Replication 9.5 Update 4 offers an SAP Backint-certified solution to help customers minimize disruption and downtime for mission-critical applications like your SAP system. I highly encourage you to check out this capability, along with the Veeam Plug-in for Oracle RMAN, another great feature in Update 4.

The post SAP HANA tenant database system copy and recovery appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/c3Xir0LEKWM/sap-hana-tenant-database-system-copy-recovery.html


How policy based backups will benefit you


Veeam Software Official Blog  /  Michael Cade

With VMworld 2019 right around the corner, we wanted to share a recap on some of the powerful things that VMware has in their armoury and also discuss how Veeam can leverage this to enhance your Availability.

This week VMware announced vSAN 6.7 Update 3. This release seems to have a heavy focus on simplifying data center management while improving overall performance. A few things that stood out to me with this release included:

  • Cleaner, simpler UI for capacity management: 6.7 Update 3 adds color-coding, a consumption breakdown, and usable capacity analysis, allowing administrators to plan capacity and understand consumption more easily.
  • Storage Policy changes now occur in batches. This ensures that all policy changes complete successfully, and free capacity is not exhausted.
  • iSCSI LUNs presented from vSAN can now be resized without the need to take the volume offline, preventing application disruption.
  • SCSI-3 persistent reservations (SCSI-3 PR) allow for native support for Windows Server Failover Clusters (WSFC) requiring a shared disk.

Veeam is listed in the vSAN HCL for vSAN Partner Solutions and can protect and restore VMs. The certification for the new Update 3 release is also well on its way to being complete.

Another interesting point concerns Windows Server Failover Clusters (WSFC). While their shared disks appear as VMDKs, they cannot be processed through the vSphere data protection APIs used for backup tasks. This is where the Veeam Agent for Microsoft Windows comes in, with the ability to protect those failover clusters in the best possible way.

What is SPBM?

Storage Policy Based Management (SPBM) gives vSphere administrators control within their environments. The framework helps them overcome upfront storage provisioning challenges, such as capacity planning, differentiated service levels and managing capacity resources, in a more efficient way. All of this is achieved by defining a set of policies within vSphere for the storage layer. These storage policies streamline VM provisioning against specific datastores at scale, which in turn removes friction between vSphere admins and storage admins.

However, this is not a closed group between the storage and virtualisation admins. It also allows Veeam to hook into certain areas to provide better Availability for your virtualised workloads.

SPBM spans all storage offerings from VMware, traditional VMFS/NFS datastore as well as vSAN and Virtual Volumes, allowing policies to overarch any type of environment leveraging whatever type of storage that is required or in place.

What can Veeam do?

Veeam can leverage these policies to better protect virtual workloads by using vSphere tags on existing and newly created virtual machines, with dedicated jobs set up in Veeam Backup & Replication whose schedules and settings meet the SLA of those workloads.

Veeam will also back up any virtual machine that has an SPBM policy assigned to it, protecting not only the data but the policy itself, so if you restore the whole virtual machine, the policy is available as part of the restore process.

Automate IT

Gone are the days of the backup admin adding and removing virtual machines from a backup job, so let’s spend time on the interesting and exciting things that provide much more benefit to your IT systems investment.

With vSphere tags, you can create logical groupings within your VMware environment based on any characteristic that is required. Once this is done, you are able to migrate those tags into Veeam Backup & Replication and create backup jobs based on vSphere tags. You can also create your own set of vSphere tags to assign to your virtual machine workloads based on how often you need to back up or replicate your data, providing a granular approach to the Availability of your infrastructure.
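Conceptually, tag-based jobs are just a grouping of VMs by tag. The sketch below is not Veeam's or VMware's CLI, only an illustration of the membership logic with made-up VM names and tags:

```shell
# Made-up inventory: one "VM tag" pair per line
cat > /tmp/vm_tags_demo.txt <<'EOF'
sql01 Tier1-Daily
web01 Tier2-Weekly
sql02 Tier1-Daily
EOF

# Group VM names under each tag: every output line is, in effect,
# the membership of one tag-based backup job
awk '{ jobs[$2] = jobs[$2] " " $1 }
     END { for (t in jobs) print t ":" jobs[t] }' /tmp/vm_tags_demo.txt | sort
```

In Veeam Backup & Replication itself, the job simply points at the tag, so re-tagging a VM moves it between jobs with no job edits.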

VMware Snapshots – The vSAN way

In vSAN 6.0, VMware introduced vSAN sparse snapshots, a snapshot implementation for vSAN with significantly better I/O performance. The good news for Veeam customers is that whether you use traditional VMFS snapshots or the newer vSAN sparse snapshots, the result is the same: a backup containing your data. The performance benefits of the sparse approach are substantial and can play a huge role in achieving your backup windows.

The difference between the "traditional" methodology and the newer one that both vSAN and Virtual Volumes leverage is that a traditional VMFS snapshot uses redo logs, which can cause a performance hit on high-I/O workloads when the changes are committed back to the VM disk. The vSAN approach is much closer to a shared storage system's copy-on-write snapshot: there is no commit after a backup job releases a snapshot, so I/O can continue to run as the business needs.

There are lots of other integrations between Veeam and VMware but I feel that this is still the number one touch point where a vSphere and Backup Admin can really make their life easier by using policy-based backups using Veeam.

The post How policy based backups will benefit you appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/q0JAuQAOo-w/vmware-update-vsan-spbm.html


Enhanced Deal Registration coming soon!

Veeam is evolving its deal registration process to give you a better user experience and streamline the quoting process! Beginning September 2, when you go to register a deal, you'll notice the addition of a SKU configurator. This will automatically calculate MSRP values and generate SKUs, all with the click of a button. See how it works by watching this short demo.

Veeam channel talks episode 2: Why should customers backup Microsoft Office 365?

Even though Microsoft takes on the management of the Office 365 infrastructure, it is your customer's responsibility to control and protect their data. That's why they need a backup! Watch this short video to learn more about why your customers should back up their Office 365 data with Veeam.

Earn 5% cashback with Veeam GreenBucks

Now through Sept. 30, eligible Silver, Gold and Platinum-level resellers can earn 5% cashback for closing a deal of $25,000 or more through one of Veeam's alliance reseller partners: Cisco, Hewlett Packard Enterprise, NetApp or Lenovo.*

Recorded webinar: Healthcare data management with Veeam

More than 20,000 healthcare customers worldwide rely on Veeam to assure access to patient information by providing a data management solution that avoids downtime and data loss, improves application Availability across hybrid environments and eases regulatory compliance. Watch this webinar to learn how you can pursue healthcare data management opportunities with Veeam!

Veeam Backup for Microsoft Office 365 earns a gold star review rating from TechGenix

TechGenix recently conducted a product review of Veeam Backup for Microsoft Office 365. Their review covers the installation process, the backup and restore process and their overall positive verdict on the product. Read a review from an expert and see for yourself why Veeam Backup for Microsoft Office 365 earned a gold star rating.

Storage Ecosystem for Cloud Providers with expert Yogesh Ranjan [MTE6034U]

vCD integrated with EMC, Cohesity, Rubrik, Veeam, and Pure Storage extends the data protection capabilities for your cloud deployments.
SPEAKERS
Yogesh Ranjan, Global Product Management Lead, VMware
  • Tuesday, August 27, 04:15 PM – 05:00 PM | Moscone West, Level 2, Meet the Experts, Table 2
Session Type: Expert Roundtable
Level: Technical 200
Recently Added: Updated on week of August 5
Hybrid Cloud: Hybrid Cloud Infrastructure
Product: VMware Cloud Provider Program

Infrastructure as Code vs APIs – A Working Example with Terraform and vCloud Director [VMTN5058U]

You Don’t Need to Know APIs to Survive! That statement is fairly controversial… especially for those that have been preaching that IT professionals need to code in order to remain viable. A lot of that mindshare is centred around the API and the DevOps world…but not everyone needs to be a DevOp! IT is all about trying to solve problems and achieve outcomes… it doesn’t matter how you solve it. Leveraging IaC instead of trying to code against APIs directly can be more palatable for IT professionals as it acts as a middle man interpreter between yourself and the infrastructure endpoints. Come see a real world example of that in this lightning session!
SPEAKERS
Anthony Spiteri, Senior Global Technologist, Product Strategy, Veeam
  • Wednesday, August 28, 12:00 PM – 12:15 PM | Moscone South, Lower Level, The Square, Social Media & VMware Community
Session Type: VMTN TechTalk
Hybrid Cloud: Hybrid Cloud Infrastructure
Product: NSX Cloud, vCloud Suite

How to Excel As a Great Mentor and Mentee [VMTN5044U]

Do you know what is required to be a great mentor? Were you also aware that there is a way to be a better mentee? This session will help you to better define your goals, identify potential weaknesses, and determine the best areas in which a mentor can help you improve. You can get huge personal and professional gains if you leverage the help of any advisor who is willing to boost your skills. Building upon this, one of the best ways to better yourself can be to mentor someone. You don’t need to be an expert in any field, and it doesn’t have to be technical in nature, but you will need some skills to do this effectively. Key Takeaways: This session will discuss how you can get more benefit when you are being assisted by others, whether in a formal mentoring or coaching model, or by anyone playing an advisory role. Also, learn what it takes to be a good mentor and how to polish your own skills by helping others to progress in their journey.
SPEAKERS
Joe Houghes, Veeam
  • Wednesday, August 28, 04:00 PM – 04:15 PM | Moscone South, Lower Level, The Square, Social Media & VMware Community
Session Type: VMTN TechTalk

Grow into your IT role [VMTN5064U]

Too many times I hear that people don’t apply for jobs because they don’t feel qualified – maybe due to imposter syndrome or just not meeting 100% of the requirements. I’ll talk about why we should never be 100% qualified for our IT role, and should stay challenged.
SPEAKERS
Tim Smith, Solutions Architect, Veeam
  • Thursday, August 29, 11:45 AM – 12:00 PM | Moscone South, Lower Level, The Square, Social Media & VMware Community
Session Type: VMTN TechTalk
Level: Business 100

Enhancing Data Protection for vSphere with What’s Coming from Veeam [HBI3532BUS]

As the definition of disaster recovery continues to evolve, it is clear that organizations are looking at ways to enhance the protection of Tier-1 workloads by lowering RPOs and RTOs and unlocking the portability of their data. In this session we will give a first look at how Veeam is leveraging vSphere's APIs for I/O Filtering for Continuous Data Protection (CDP). We will deep dive into the CDP Backup & Replication componentry and architecture and demo the new functionality in an exclusive first look for VMworld attendees. We will also take a look at what we have in store for the next major release, which includes enhancements to our Scale Out Backup Repository leveraging Object Storage, enhanced recoverability options of workloads to vSphere from any platform, and other core enhancements to the Backup & Replication platform.
SPEAKERS
Anthony Spiteri, Senior Global Technologist, Product Strategy, Veeam
Danny Allan, Vice President, Product Strategy, Veeam
  • Monday, August 26, 02:30 PM – 03:30 PM | Moscone West, Level 2, Room 2005
Level: Technical 200
Hybrid Cloud: Hybrid Cloud Infrastructure
Product: vSphere
Session Type: Breakout Session

Backups are just the start! Enhanced Data Mobility with Veeam [HBI3535BUS]

Data is consuming our lives at an increasing pace & more than ever we need to not only protect that data, but also allow portability & mobility of data as it becomes more & more critical. This session will provide an overview & demo of key Veeam capabilities which allow ANY backup to be restored to vSphere; including Microsoft Hyper-V, Nutanix AHV as well as physical & cloud workloads. This session will highlight the portability of the Veeam backup file format & how, by taking any source workload, Veeam can be flexible in restoring/replicating data to a different target, be it vSphere, VMware Cloud on AWS or Service Provider & Public Cloud platforms. We will also cover recently released capabilities in Veeam Backup & Replication featuring the Veeam Cloud Tier & vCloud Director Replication, & look at how automation & orchestration play a key role in data mobility.
SPEAKERS
Michael Cade, Senior Technologist, Product Strategy, Veeam
David Hill, Chief Global Technologist, Product Strategy, Veeam Software
  • Wednesday, August 28, 08:30 AM – 09:30 AM | Moscone West, Level 3, Room 3022
Level: Technical 300
Hybrid Cloud: Hybrid Cloud Infrastructure
Product: Cloud Assembly, vSphere, vSphere Hypervisor, CloudHealth
Session Type: Breakout Session

To Veeam Community Edition and Beyond [VMTN5056U]

At Veeam, every day we wake up thinking about data… your data! We’re constantly thinking about the unique ways we can protect that data better; how we can do it more efficiently; how we can make restore times even faster, and even where tomorrow’s data will reside. We have the ability to look forward because of the way Veeam backup files are created and have been created from day one. Veeam will NOT lock your data in and keep you and your data hostage. After all, it’s your data, not Veeam’s. Nobody should ever deny you the flexibility to be able to have control of that data should you need to restore it, for whatever reason. We provide you this freedom at Veeam with the portability of the Veeam backup file format along with the ability to open that file format wherever and whenever you wish.
SPEAKERS
Michael Cade, Senior Technologist, Product Strategy, Veeam
  • Monday, August 26, 12:00 PM – 12:15 PM | Moscone South, Lower Level, The Square, Social Media & VMware Community
Session Type: VMTN TechTalk
Hybrid Cloud: Hybrid Cloud Infrastructure
Product: vCenter Converter, VMware Cloud on AWS, vSphere, vSphere Hypervisor

Welcome Reception [WELCOME]

Kick off your VMworld experience at the welcome reception, sponsored by Veeam, in the Solutions Exchange on Sunday, August 25, 5:00 – 7:30 PM. Get your first look at 200+ exhibitors on the show floor, network with fellow attendees in a fun atmosphere, pick up some new swag, and enjoy delicious appetizers and drinks.
  • Sunday, August 25, 05:00 PM – 07:30 PM | Moscone South, Lower Level, Solutions Exchange Theater
Session Type: General Session

A sneak peek of v10’s Cloud Tier Copy Mode

At VeeamON 2019, members of the Veeam Product Strategy Team demoed a preview of the new Copy Mode feature that will be part of the Cloud Tier in Veeam Backup & Replication™ v10. One of the team members, Anthony Spiteri, takes a further look at this new feature and how it will benefit Veeam Cloud & Service Provider partners in his latest blog post.

NEW Marketing Campaign for Office 365 Backup & Recovery Services

Within the Veeam Marketing Center, you have access to a variety of marketing tools designed to help you generate demand for your Veeam‑powered Office 365 backup and recovery services. Quickly access the latest marketing materials and brand them with your company logo or launch an email campaign and generate leads at the click of a button.

First Look: On‑Demand Recovery with Cloud Tier and VMware Cloud on AWS

Since Veeam® Cloud Tier was released as part of Veeam Backup & Replication™ 9.5 Update 4, Anthony Spiteri, Senior Technologist, Product Strategy, has written a lot about how it works and what it offers in terms of offloading data from more expensive local storage to what is fundamentally cheaper remote Object Storage. As with most innovative technologies, dig a little deeper and different use cases start to present themselves and unintended use cases find their way to the surface. Read this blog from Anthony Spiteri to learn more.

Veeam Ready – Object: Cloud Tier Compatible Solutions

Have you heard the latest news on Cloud Tier and what it means to you as a VCSP? Cloud Tier, introduced in Veeam Backup & Replication 9.5 Update 4, is the built-in, automatic tiering feature of Scale-out Backup Repository that offloads older backup files to lower-cost storage.
If you’re a VCSP looking for an Object Storage solution for your customers, consider Veeam Technology Alliance partner solutions who have the Veeam Ready – Object logo.
Veeam Ready – Object solutions have been tested and verified as S3‑compatible object storage that support the Veeam Cloud Tier capabilities for Veeam Backup & Replication functions.
So, when you see the Veeam Ready – Object logo, you have the confidence that the Partner solution has been qualified and tested to work with Veeam’s Cloud Tier capabilities.