To keep the momentum going, we will be increasing the payouts for a few of our selling-incentive activities effective Oct. 1, 2018:
For more information, visit the PartnerPerks page on the ProPartner Portal
Creating policy with exclusion of folders
Notes from MWhite / Michael White
I need to add Bitdefender protection to my Veeam Backup & Replication server, so this article is going to help with that. Mostly, though, it is about adjusting the policy that is applied to the Veeam server so that it doesn't impact the backups.
We need access to the information about what to exclude on the Veeam server; that is found in this article. We also need access to our GravityZone UI so we can create a policy, add exclusions to it, and then apply it to the VM.
Let's get started.
We need to start off in the GravityZone UI and change to the Policies area.
We do not want to edit the default policy, as it is applied to everyone (plus I believe it cannot be deleted). So we are going to select it and clone it, giving us a new copy that we can tweak and attach to our Veeam server. Once it is cloned, it opens up as seen below.
We need to name it something appropriate; I will call it VBR Exclusions. I like the default policy and think it is pretty good, so I am going to leave this clone as it was and only add the Veeam exclusions to it. Now change to the Antimalware area and select Settings.
You can see it below, where I have already entered the Veeam server exclusions.
You only need to enable Custom Exclusions with the checkbox, then add what you see above. Once you have finished, use the Save button to save this new policy. It is the same as the default, which I said I like, except that it has additional exclusions.
I do not know of a way to attach a policy to a package so that the policy is applied when the package is installed, so we are going to install with the default policy and change it afterwards.
Likely you know how to do that: download a package and execute it. Once done, make sure the machine shows up in the GravityZone UI.
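GravityZone evaluates these exclusions internally; the sketch below only illustrates the matching logic. The folder path is a placeholder, and the extensions are just the backup-file types mentioned later in this digest. Take the authoritative entries from Veeam's antivirus-exclusions KB article, not from here.

```python
import fnmatch

# Placeholder exclusion patterns; the real list comes from Veeam's KB article.
exclusions = [r"C:\Backups\*", "*.vbk", "*.vib"]

def is_excluded(path):
    """True if a scanner honoring these custom exclusions should skip the file."""
    return any(fnmatch.fnmatch(path, pattern) for pattern in exclusions)

assert is_excluded(r"C:\Backups\job1\vm1.vbk")      # under an excluded folder
assert not is_excluded(r"C:\Users\mw\notes.txt")    # still scanned
```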
Now we need to assign the proper policy to our Veeam server.
We need to be in the Policies \ Assignment Rules area.
We add a location rule by using Add \ Location. Once we do that we see the following screen.
We add a name, and description, plus select the policy we just created the exclusions in and add an IP address for our Veeam server.
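Conceptually, a location-based assignment rule is just an IP-to-policy match with a fall-through to the default. A hypothetical Python sketch (the policy names and addresses are invented for illustration):

```python
import ipaddress

# Hypothetical rule: the cloned policy applies only to the Veeam server's IP.
rules = [("VBR Exclusions", ipaddress.ip_network("192.168.10.5/32"))]

def policy_for(ip, default="Default Policy"):
    """Return the first rule whose network contains the machine's IP."""
    addr = ipaddress.ip_address(ip)
    for policy_name, network in rules:
        if addr in network:
            return policy_name
    return default

assert policy_for("192.168.10.5") == "VBR Exclusions"   # the Veeam server
assert policy_for("192.168.10.6") == "Default Policy"   # everyone else
```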
Now we change to the Policies view; it may take a minute or two, and then you will see something different.
We see that one machine has the policy, which makes sense, but 4 are shown as applied, which is confusing. However, a Policy Compliance report, which shows who has what policy, confirms that VBR01, my Veeam server, is the only one that has the policy.
So things look good now. We have created a special policy for our Veeam server, applied it, and confirmed it was applied.
Any questions or comments let me know.
=== END ===
Veeam showcases NEW cloud data management at Ignite 2018
Veeam Executive Blog – The Availability Lounge / Carey Stanton
Veeam, a company born on virtualization, has rapidly adapted to customer needs to become the leader in Intelligent Data Management. In keeping with its speed and momentum to market, Veeam’s highly anticipated NEW Veeam Availability Suite Update 4 will be released in Q4 2018. This promises to redefine cloud data management with cloud mobility for Microsoft Azure Stack, as well as native Azure Blob storage for Veeam-powered archives.
Take a deeper look at the current and new innovations Veeam will showcase at Microsoft Ignite 2018:
As you probably know, Microsoft Azure Stack is an exciting innovation that extends the power of the Azure public cloud to on-premises. Building on Veeam’s announcement at Microsoft Ignite 2017, Veeam is publicly presenting Veeam Direct Restore to Microsoft Azure Stack. With similar functionality to existing Veeam Direct Restore to Microsoft Azure, this new mobility feature for Azure Stack allows you to restore and migrate workloads from on-premises to Azure Stack, or even from Microsoft Azure to Azure Stack, all with a 2-step process!
This means that in the event of a failure, or if you want to migrate a current VM to Azure Stack, you can quickly achieve this through the built-in Veeam Direct Restore to Microsoft Azure Stack functionality that will be available in the upcoming update.
Another big highlight at Ignite is Veeam’s highly anticipated, native Azure Blob support known as Veeam Cloud Archive. This will give customers an automated data-management solution designed to simplify data transfer to Azure Blob storage, with up to 10x the savings on long-term archives.
With this upcoming feature, it has never been easier to free up primary storage space and save on costs by archiving your Veeam backups offsite with infinite scalability into Azure Blob. By leveraging Veeam’s proven Scale-Out Backup Repository (SOBR) technologies, customers can access the cloud directly for storage of long-term data archives. Also, unlike other solutions, Veeam does not charge storage tax fees for storing data in the cloud — making this one of the most cost-effective cloud archive solutions in the industry!
There’s a lot of buzz at Microsoft Ignite around Veeam’s new cloud management capabilities for Microsoft Azure and Azure Stack coming in NEW Veeam Availability Suite 9.5 Update 4. As the Leader in Intelligent Data Management for the Hyper-Available Enterprise, Veeam continues to deliver innovative solutions — with cloud mobility for Microsoft Azure Stack and native cloud archives for Azure Blob.
You can explore all Veeam’s Microsoft integrations by visiting Veeam for the Microsoft Cloud.
NEW Veeam Availability Orchestrator helps you reduce the time, cost and effort of planning for and recovering from a disaster by automatically creating plans that meet compliance requirements.
How to bring balance into your infrastructure
Veeam Software Official Blog / Evgenii Ivanov
Veeam Backup & Replication is known for ease of installation and a moderate learning curve. It is something that we take as a great achievement, but as we see in our support practice, it can sometimes lead to a “deploy and forget” approach, without fine-tuning the software or learning the nuances of its work. In our previous blog posts, we examined tape configuration considerations and some common misconfigurations. This time, the blog post is aimed at giving the reader some insight on a Veeam Backup & Replication infrastructure, how data flows between the components, and most importantly, how to properly load-balance backup components so that the system can work stably and efficiently.
Veeam Backup & Replication is a modular system. This means that Veeam as a backup solution consists of a number of components, each with a specific function. Examples of such components are the Veeam server itself (as the management component), proxy, repository, WAN accelerator and others. Of course, several components can be installed on a single server (provided that it has sufficient resources) and many customers opt for all-in-one installations. However, distributing components can give several benefits:
Such a system can only work efficiently if everything is balanced. An unbalanced backup infrastructure can slow down due to unexpected bottlenecks or even cause backup failures because of overloaded components.
Let’s review how data flows in a Veeam infrastructure during a backup (we’re using a vSphere environment in this example):
All data in Veeam Backup & Replication flows between source and target transport agents. Let's take a backup job as an example: a source agent runs on a backup proxy, and its job is to read the data from a datastore, apply compression and source-side deduplication, and send it over to a target agent. The target agent runs directly on a Windows/Linux repository, or on a gateway if a CIFS share is used. Its job is to apply target-side deduplication and save the data in a backup file (.VBK, .VIB, etc.).
That means there are always two components involved, even if they are essentially on the same server and both must be taken into account when planning the resources.
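The division of labor between the two agents can be sketched in Python. This is a toy model, not Veeam's implementation: it assumes simple content hashing stands in for deduplication and ignores Veeam's actual block and file formats.

```python
import zlib

def source_agent(blocks, sent_hashes):
    """Proxy side: compress each block, skipping blocks already sent
    (source-side deduplication)."""
    out = []
    for block in blocks:
        digest = hash(block)
        if digest in sent_hashes:      # unchanged block: do not resend
            continue
        sent_hashes.add(digest)
        out.append(zlib.compress(block))
    return out

def target_agent(compressed, stored_hashes, backup_file):
    """Repository/gateway side: apply target-side deduplication, then
    append the surviving blocks to the backup file."""
    for item in compressed:
        block = zlib.decompress(item)
        digest = hash(block)
        if digest not in stored_hashes:
            stored_hashes.add(digest)
            backup_file.append(block)
```

With blocks [A, B, A], the source agent sends only two compressed blocks and the target stores both, mirroring the two-stage deduplication described above.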
To start, we must examine the notion of a “task.” In Veeam Backup & Replication, a task is equal to a VM disk transfer. So, if you have a job with 5 VMs and each has 2 virtual disks, there is a total of 10 tasks to process. Veeam Backup & Replication is able to process multiple tasks in parallel, but the number is still limited.
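As a quick sanity check, the task count for a job is just the sum of its virtual disks. The VM names below are invented; the numbers match the 5-VM, 2-disks-each example:

```python
# Hypothetical 5-VM job: VM name -> number of virtual disks.
job = {"web01": 2, "db01": 2, "app01": 2, "file01": 2, "dc01": 2}

total_tasks = sum(job.values())   # one task per virtual disk
assert total_tasks == 10
```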
If you go to the proxy properties, on the first step you can configure the maximum concurrent tasks this proxy can process in parallel:
On the repository side, you can find a very similar setting:
For normal backup operations, a task on the repository side also means one virtual disk transfer.
This brings us to our first important point: it is crucial to keep the resources and number of tasks in balance between proxy and repository. Suppose you have 3 proxies set to 4 tasks each (meaning that on the source side, 12 virtual disks can be processed in parallel), but the repository is set to only 4 tasks (the default setting). That means only 4 tasks will be processed at a time, leaving proxy resources idle.
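The effective parallelism is the minimum of the two sides, which a one-line sketch makes concrete:

```python
def effective_parallelism(proxy_task_limits, repository_task_limit):
    """The pipeline runs at the smaller of total source slots and target slots."""
    return min(sum(proxy_task_limits), repository_task_limit)

# 3 proxies x 4 tasks against the default repository limit of 4:
assert effective_parallelism([4, 4, 4], 4) == 4     # 8 proxy slots sit idle
# Raising the repository limit to 12 restores the balance:
assert effective_parallelism([4, 4, 4], 12) == 12
```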
The meaning of a task on a repository is different when it comes to synthetic operations (like creating a synthetic full). Recall that synthetic operations do not use proxies and happen locally on a Windows/Linux repository, or between a gateway and a CIFS share. In this case, for normal backup chains, a task is a backup job (so 4 tasks mean that 4 jobs can generate a synthetic full in parallel), while for per-VM backup chains, a task is still a VM (so 4 tasks mean that the repository can generate 4 separate VBKs for 4 VMs in parallel). Depending on the setup, the same number of tasks can create a very different load on a repository! Be sure to analyze your setup (the backup job mode, the job scheduling, the per-VM option) and plan resources accordingly.
Note that, unlike for a proxy, you can disable the limit on the number of parallel tasks for a repository. In this case, the repository will accept all incoming data flows from proxies. This might seem convenient at first, but we highly discourage disabling this limitation, as it may lead to overload and even job failures. Consider this scenario: a job has many VMs with a total of 100 virtual disks to process, and the repository uses the per-VM option. The proxies can process 10 disks in parallel, and the repository is set to an unlimited number of tasks. During an incremental backup, the load on the repository is naturally limited by the proxies, so the system stays in balance. However, then a synthetic full starts. A synthetic full does not use proxies; all operations happen solely on the repository. Since the number of tasks is not limited, the repository will try to process all 100 tasks in parallel! This will demand immense resources from the repository hardware and will likely cause an overload.
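The danger of the unlimited setting is easiest to see side by side: during an incremental the proxies throttle the flow, but during a synthetic full nothing upstream does. A rough Python sketch of the scenario above (the numbers are from that example):

```python
def concurrent_repo_tasks(phase, proxy_task_limits, repo_task_limit, pending_tasks):
    """Tasks hitting the repository at once. repo_task_limit=None models the
    'unlimited' setting; synthetic fulls bypass the proxies entirely."""
    upstream = sum(proxy_task_limits) if phase == "incremental" else pending_tasks
    if repo_task_limit is None:
        return min(upstream, pending_tasks)
    return min(upstream, repo_task_limit)

proxies = [5, 5]                                                 # 10 disks in parallel
assert concurrent_repo_tasks("incremental", proxies, None, 100) == 10   # throttled by proxies
assert concurrent_repo_tasks("synthetic", proxies, None, 100) == 100    # overload spike
assert concurrent_repo_tasks("synthetic", proxies, 8, 100) == 8         # bounded load
```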
If you are using a Windows or Linux repository, the target agent will start directly on the server. When using a CIFS share as a repository, the target agent starts on a special component called a “gateway,” that will receive the incoming traffic from the source agent and send the data blocks to the CIFS share. The gateway must be placed as close to the system sharing the folder over SMB as possible, especially in scenarios with a WAN connection. You should not create topologies with a proxy/gateway on one site and CIFS share on another site “in the cloud” — you will likely encounter periodic network failures.
The same load-balancing considerations described previously apply to gateways as well. However, the gateway setup requires additional attention because there are 2 options available: set the gateway explicitly, or use the automatic selection mechanism.
Any Windows “managed server” can become a gateway for a CIFS share. Depending on the situation, both options can come in handy. Let's review them.
You can set the gateway explicitly. This option can simplify the resource management — there can be no surprises as to where the target agent will start. It is recommended to use this option if an access to the share is restricted to specific servers or in case of distributed environments — you don’t want your target agent to start far away from the server hosting the share!
Things become more interesting if you choose Automatic selection. If you are using several proxies, automatic selection gives you the ability to use more than one gateway and distribute the load. Automatic does not mean random, though; there are strict rules involved.
The target agent starts on the proxy that is doing the backup. In case of normal backup chains, if there are several jobs running in parallel and each is processed by its own proxy, then multiple target agents can start as well. However, within a single job, even if the VMs in the job are processed by several proxies, the target agent will start only on one proxy, the first to start processing. For per-VM backup chains, a separate target agent starts for each VM, so you can get the load distribution even within a single job.
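Those selection rules can be condensed into a small sketch. The proxy and VM names are invented; Veeam applies these rules internally, so this only models the behavior described above:

```python
def target_agent_hosts(per_vm_chains, vm_to_proxy):
    """Where target agents start under automatic gateway selection.
    vm_to_proxy maps each VM in the job to its proxy, in processing order."""
    if per_vm_chains:
        # Per-VM chains: one target agent per VM, on that VM's proxy.
        return list(vm_to_proxy.values())
    # Normal chains: a single target agent, on the first proxy to start processing.
    return [next(iter(vm_to_proxy.values()))]

job = {"vm1": "proxyA", "vm2": "proxyB", "vm3": "proxyA"}
assert target_agent_hosts(False, job) == ["proxyA"]                     # one agent
assert target_agent_hosts(True, job) == ["proxyA", "proxyB", "proxyA"]  # one per VM
```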
Synthetic operations do not use proxies, so the selection mechanism is different: the target agent starts on the mount server associated with the repository (with the ability to fail over to the Veeam server if the mount server is unavailable). This means that the load of synthetic operations will not be distributed across multiple servers. As mentioned above, we discourage setting the number of tasks to unlimited; that can cause a huge load spike on the mount/Veeam server during synthetic operations.
Scale-out backup repository. SOBR is essentially a collection of usual repositories (called extents). You cannot point a backup job to a specific extent, only to the SOBR; however, extents retain some of their settings, including load control. So everything discussed about standalone repositories pertains to SOBR extents as well. A SOBR with the per-VM option (enabled by default), the “Performance” placement policy, and backup chains spread out across extents will be able to optimize resource usage.
Backup copy. Instead of a proxy, source agents start on the source repository. All considerations described above apply to source repositories as well (although in the case of a Backup Copy Job, synthetic operations on a source repository are logically not possible). Note that if the source repository is a CIFS share, the source agents will start on the mount server (with a failover to the Veeam server).
Deduplication appliances. For DataDomain, StoreOnce (and possibly other appliances in the future) with Veeam integration enabled, the same considerations apply as for CIFS share repositories. For a StoreOnce repository with source-side deduplication (Low Bandwidth mode), the requirement to place the gateway as close to the repository as possible does not apply; for example, a gateway on one site can be configured to send data to a StoreOnce appliance on another site over the WAN.
Proxy affinity. A feature added in 9.5, proxy affinity creates a “priority list” of proxies that should be preferred when a certain repository is used.
If a proxy from the list is not available, a job will use any other available proxy. However, if the proxy is available but has no free task slots, the job will pause and wait for free slots. Even though proxy affinity is a very useful feature for distributed environments, it should be used with care, especially because it is very easy to set this option and forget about it. Veeam Support has encountered cases of “hanging” jobs that came down to an affinity setting that had been enabled and forgotten. More details on proxy affinity.
Whether you are setting up your backup infrastructure from scratch or have been using Veeam Backup & Replication for a long time, we encourage you to review your setup with the information from this blog post in mind. You might be able to optimize the use of resources or mitigate some pending risks!
Intelligent data management: Interview with Rick Vanover of Veeam – TechGenix (blog)
veeam – Google News
I recently had a chance to talk with Rick Vanover of Veeam Software about what businesses need to do these days to ensure their availability strategy fully addresses their needs. Rick is the director of product strategy at Veeam, where he leads a team of technologists and analysts that brings Veeam solutions to market and works with customers, partners, and R&D teams around the world. You can follow Rick on Twitter @RickVanover.
MITCH: Rick, thanks very much for agreeing to let me interview you on the topic of how data protection and availability are changing in our cloud-based area and what organizations are doing right and wrong these days when it comes to preparing for disaster recovery.
RICK: My pleasure. This is an area that I’ve built my career around and organizations today need to ensure that their availability strategy meets the expectations of the business.
MITCH: The last few years have seen a lot of changes in how organizations of all sizes implement and manage their IT infrastructures. Cloud computing models like software as a service (SaaS) now enable users to connect to and use cloud-based apps directly over the Internet, with Microsoft's Office 365 being one popular solution of this type. Then there's infrastructure as a service (IaaS), which lets organizations build agile computing infrastructures that can scale up and down on demand. What does and doesn't change in regards to ensuring data protection and availability for your business with these new models?
RICK: This is a great question, Mitch, and I’m glad you asked it. The one important thing I’ve learned over the years is that while the platform may change, the rules on the data and availability expectations do not change. Microsoft Office 365 is a good example that you have illustrated. The promise of this Software as a Service (SaaS) solution is great: a great relief of on-premises tier 1 storage, the opportunity to reduce the need for mailbox quotas and with OneDrive for Business a way to combat “shadow IT” file sharing outside of corporate mechanisms. These are real business problems solved by Microsoft Office 365 and these changes are welcome to both users and IT administrators alike.
But what doesn't change when the application does? The responsibility for the data. Organizations need to realize that this is still their data, and Veeam has invested in a new product, Veeam Backup for Microsoft Office 365.
MITCH: What sort of changes do organizations need to make in their supporting processes to ensure data protection/availability in the event of a disaster when they’ve embraced the cloud wholeheartedly or at least adopted some sort of hybrid IT model?
RICK: As the mix of platforms change for organizations, the disaster recovery aspect absolutely needs to be reassessed. This is a very difficult task and, honestly, the old way of doing this isn’t acceptable anymore today. I know plenty of IT administrators who addressed disaster recovery as a once-a-year test where there was free pizza over the weekend, things were tested, about half of it failed, and the goal was to do better next year. Today’s IT services and expectations can’t deal with that.
This is one reason Veeam has developed a new product that became generally available earlier this year, Veeam Availability Orchestrator. This product brings a very critical capability for disaster recovery in the era of hybrid IT. Veeam Availability Orchestrator supports orchestrating disaster recovery for on-premises workloads, but also supports orchestrated disaster recovery to VMware Cloud on AWS, a new cloud offering in Amazon for VMware workloads. This is an example where an organization can have its on-premises resources benefit from DR in the cloud, literally!
MITCH: I’ve heard it said by some who provide IT support for businesses that rely mostly on SaaS applications that “backup” is basically a bad word now, that performing daily backups is a dead practice because there are now more sophisticated ways to ensure availability for your business data. But this sounds a lot like an oversimplification to me. Does the availability burden of performing regular backups really go away with cloud computing?
RICK: Backup and recovery, as well as replication and failover, are the important critical functions there. This effectively is a gateway to more advanced capabilities.
The next step is an aggregation of those critical data sources, whether they are in the cloud, on-premises, or in the SaaS space. Having the data flow for all critical data is an important milestone, and each platform has its own characteristics that may change what the capabilities are for backup and recovery.
With the aggregation of this data, visibility becomes important. Answering key questions such as what data is where, who is accessing what, and whether the environment will run out of storage is very critical today.
Advanced capabilities become the next opportunity, and orchestration is a capability that Veeam brings to the market today that can respond to changes very easily. For example, orchestrated disaster recovery to another site can be done with confidence if there is a concern that weather is going to take out a data center, so organizations can proactively fail over confidently.
With all of these capabilities, then automation becomes the goal. Automatic resolution of issues and policy violations, for example, will be a capability from Veeam later this year. This can be very important when it comes to ensuring that critical data is protected to the level the business demands today.
This is a quick overview of our vision, but it is important to reinforce that it all starts with a solid backup and recovery as well as replication and failover capability.
MITCH: Given all these changes that are happening, what are most organizations doing right and wrong these days with regard to disaster recovery?
RICK: My observation today is that many organizations are simply not providing the availability experience their business demands. The best example is a high-speed recovery technique. Ask this question: if someone accidentally deletes a virtual machine, how soon can it come back? If the answer is more than a few minutes, there is a gap between the capabilities of what is in place and the expectations of users. That's one example where Veeam has pioneered and led the market for over eight years, and there are more. The same goes for an AWS EC2 instance in the cloud: if someone terminates and deletes it, how soon can it come back? If that too is more than minutes, there is a gap.
Organizations are doing things right when it comes to leveraging new platforms. This includes leveraging SaaS applications where it makes sense, leveraging the cloud and leveraging service providers for the right services as well.
The key advice I have to offer is to ensure that availability is thought of every step of the way.
MITCH: How do new platforms like hyper-converged infrastructure (HCI), advanced storage systems, and robust networking change the game in regards to availability?
RICK: They provide new “plumbing” to work with. New APIs, new networking techniques, and better snapshot capabilities. These are all important for efficient data flow for backup and recovery as well as replication and failover.
Veeam has always invested in APIs and platform capabilities to address data movement at scale. The new technology platforms in the mix are built in the same mindset as well.
MITCH: Looking ahead then, what do you see in the future for business continuity and disaster recovery?
RICK: I see a continued push for completeness. Organizations will make changes to applications, data, and other critical systems to make them more “DR friendly.” A good example is a legacy application sitting on an obsolete physical server, running an operating system that has been out of support for three years. Can that application really have good DR? No.
Organizations are indeed seeing the value of proper DR, and if the application needs to be modernized, changed, or sunset out of production use, that's what it takes.
Proper DR comes with modern platforms and data; obsolete components can’t be made awesome!
MITCH: What practical advice would you give to an admin for implementing disaster recovery solutions in a hybrid cloud environment? Any tips or recommendations to help them get it right going forward?
RICK: If there is a gap in the availability strategy, my advice is to start small and make it right. Specifically, that means taking one small application. Get the basics of backup and recovery right. Then set up replication and DR capabilities for that small application. Once you have that working right, the exercise teaches the organization what solid backup looks like, how those types of tools work, and so on.
Then move to the next application that is a bit more complex. And succeed. Get applications to the right model one at a time. Don’t start with the biggest, most critical application in the mix at first.
Once the small successes are proven, go back to the business (the operational people in the organization) and show that proper DR and better backup and restore times can be achieved if you virtualize the application or invest in a storage snapshot engine, for example.
If the business is drawn to the benefits, the effort to change the platform may come much easier.
MITCH: Rick, thanks very much for giving us some of your valuable time!
RICK: Cheers, Mitch, my pleasure.
Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNFBDdYkCYgjOgQrww_stnidIl_AWA&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=SiKtW-CSGdCxhAGT5ZH4Cw&url=http://techgenix.com/rick-vanover-veeam/
New process: Even easier to sign Veeam’s Data Processor Addendum
The General Data Protection Regulation (GDPR) took effect on May 25. Veeam is held to GDPR compliance standards like other companies all over the world. Every Veeam ProPartner who has a Deal Registration or submits a purchase order, or alternatively would like to receive marketing leads or marketing development funds for potential customers from Veeam, needs to sign a Data Processor Addendum (DPA) — and that process just got easier! Log in to the ProPartner Portal to review and sign the DPA.
Go beyond Backup as a Service and Disaster Recovery as a Service
Veeam is breaking new boundaries with technology to help you build lucrative Veeam-powered services. Access the VCSP Discount Calculator to see your potential savings.
The ultimate AWS re:Invent experience
Veeam and N2WS are giving away the ultimate AWS re:Invent experience to THREE lucky winners! All you have to do is register now and enter our raffle to win a FULL CONFERENCE TICKET and FIVE NIGHTS HOTEL ACCOMMODATIONS.
To meet the rising demands of IT infrastructures, businesses of all sizes, from SMB to the largest enterprises, are embracing the cloud. But a one-cloud-fits-all strategy is not the norm.
By evaluating workloads across an IT environment and assessing requirements like performance, infrastructure compatibility, security and compliance, and strategic fit, businesses are choosing a mix of cloud deployment models to maximize the benefits of cloud across private, public, and hybrid environments.
Many small to midsized businesses are turning to a cloud-first strategy, often abandoning the on-premises data center entirely. Enterprises are taking a multi-provider approach to the cloud to drive innovation. And businesses of all sizes are looking at a hybrid model with data residing in a hybrid environment. The end result is a multi-cloud strategy. In fact, based on recent studies, 81% of enterprises are embracing a multi-cloud strategy with a mix of solutions across private, public, and hybrid cloud, across multiple providers.
Now the challenge becomes: how to ensure data and apps are always available across this multi-cloud model so that you never skip a beat when it comes to innovating and providing services to your customers.
We know this can be a challenge as:
Veeam understands the opportunities and implications of a multi-cloud environment and has built an entire cloud strategy around protecting customers’ data, whether it is on premises, in a hybrid model, or residing in the cloud.
Our relationship with multiple cloud providers ensures our customers can choose a cloud vendor and be assured Veeam will be there to protect the data.
A perfect example of this relationship is the strength of the partnership between Microsoft Cloud and Veeam. When Mark Russinovich, CTO for Microsoft Azure, spoke onstage at VeeamON 2017, he clearly laid out that Veeam is a trusted partner for Microsoft.
Alasdair Thomson, IT Director at College Success Foundation, says:
“When you use Veeam, it’s less about where data sits and more about visibility, access and control. That’s why we love Veeam — we can ensure availability of data on-premises and in Microsoft’s cloud. Combining Microsoft and Veeam is a win-win all around.”
While the cloud has continued to expand into every business of every size, there are still challenges for IT departments.
To overcome these challenges, here are three key cloud Availability best practices to consider:
With these proven approaches to providing Availability across your multi-cloud environment, you can confidently accelerate innovation without worrying about disruption to your business. To get started with an Availability strategy for your multi-cloud environment, visit veeam.com to check out our solutions.
 ESG, 2017 Public Cloud Computing Trends, April 2017
 Veeam 2017 Availability Report
 Half of U.S. Businesses Report Being Hacked
 Comprehensive Guide to Email Retention
 Veeam on IBM Cloud: Bridging the Availability Gap
I recently showed a customer how to build a Veeam ONE report that shows the firmware of the VMware hosts and the version/build of the VMware software. It was an interesting use of the custom reporting in ONE; I have since shown it to other people, and since they liked it too, I thought I would write about it.
We start by logging into the Veeam ONE Reporter UI. It is the one on port 1239, so https://fqdn_ONE_server:1239. Next, change to the Workspace tab, then select the Custom Reports folder. You can see it below.
Once we click on Custom Reports we will see a selection of them.
The one we are going to start with, as a sort of framework for what we want, is called Custom Infrastructure. It is seen above with the red arrow pointing at it.
Let's click on Custom Infrastructure so that we can work with it.
Depending on what you have done with this report type before, some of the fields may look different. Object type above says Host System, but that is because I have used it before. It may say Click to choose for you; if it does, select it and choose Host System as you see above.
Next we select the Click to choose for Columns. This is where we lay out our report. I would like the following columns: Name, Virtualization Info, Version, Build, Host BIOS firmware, Manufacturer. So let's select those. As you pick these values, notice all the other things you can choose! Once selected, it should look like below.
Once you have selected what you need, you can use the OK button to return to the main screen. I like to use the Preview Button now to confirm what it looks like.
And it is what I want, so I change tabs so I am back where we hit the Preview button.
Now we use the Save as button to save our report.
I like to save my reports to the My Reports folder. And even though we made this report ourselves, it is now treated like the others, so it can be produced automatically on a schedule, for example. Or you can click on the report in your My Reports folder and do things like edit it, view it, or delete it.
When you have previewed the report, or when you are looking at it, you can always export it. At the top of the screen you will see an Export button.
When you use that Export button you can export to Word, Excel or PDF.
I hope this helps, but you can always comment or ask questions. ONE is a pretty powerful tool, but people don't always seem to see that. So I am going to do some articles to help with that.
BTW, any technical ONE articles can be found using this tag.
Cisco and Veeam – A Validated Solution
Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNEpdW9kDC68fx1rdLJMG9Z3HSlqSQ&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=l4SjW_j7HcXFhQHXmI_wDQ&url=https://www.cio.com/resources/178493/infrastructure/cisco-and-veeam—a-validated-solution
The recent introduction of the General Data Protection Regulation (GDPR) has done a lot to tackle issues surrounding businesses' exploitation of personal data and has led to calls by some tech leaders for a similar legislative approach in the U.S. at the federal level. Just last month, "The California Consumer Privacy Act of 2018" was created, promising the state's 40 million residents rights similar to those Europeans received with GDPR.
The hastily approved Act, which is due to come into effect on Jan. 1, 2020, affords citizens the right to see what information of theirs is being collected by businesses and to request that the data be deleted. They will also be able to find out whether their information is being sold to third parties, including advertisers, and to request that businesses stop doing so. It is by some distance the most comprehensive privacy law in the country, but it's not without fault.
California is known across the world for Silicon Valley and the endless amounts of world-changing technology businesses it has given birth to. The irony is the businesses that call the state home are precisely those causing the need for such regulatory overhaul by pushing the boundaries on technology, and as a result, privacy.
California has a long history of taking privacy seriously and has led the United States in terms of the creation of privacy laws. In 1972, Golden State voters amended the California Constitution to include the right of privacy among the “inalienable” rights of all people, and in doing so gave every Californian a legal and enforceable right of privacy. Since then, more laws have been passed to safeguard state citizens, including the Online Privacy Protection Act, the Privacy Rights for California Minors in the Digital World Act, and Shine the Light.
While GDPR was accused of being ambiguous for its lack of specificity, it looks comprehensive in comparison to the California Consumer Privacy Act. Its very creation was to curb the abusive practices of online businesses trading consumer data for financial income. Unfortunately, through some loose categorization of businesses, the Act has the potential to include websites that collect IP addresses of sites with over 137 unique visitors per day. That is just one example, but there are plenty more. And it matters.
In 2017, over 1.7 billion files were leaked through breaches. After the California Consumer Privacy Act comes into force, organizations mishandling data could be penalized up to $7,500 for each violation, which could add up significantly based on the 2017 data. If you look specifically at data breach penalties across the different states, they vary significantly; Texas imposes civil fines of up to $50,000 per violation while Georgia imposes no penalty at all. For me, this is where the problem lies.
If each state takes a local approach to data privacy, the United States will become a patchwork of regulation, and unless state laws can come to a common agreement, it might soon become a challenging and less friendly place to do business. That’s not a good thing for anyone.
A discussion draft of a new proposed federal law, “Data Acquisition and Technology Accountability and Security Act,” would pre-empt state breach notification laws, but has received widespread criticism. It isn’t perfect. It’s too focused on notification itself rather than providing consumers with the rights needed for modern, everyday lives. But if it could be adjusted and expanded, it would be a better way of handling state-wide data privacy concerns and data management practices.
What would be preferable is if the law could mirror the GDPR, a very thorough and active piece of regulation. The hard work for legislators is largely done, and it would reduce compliance costs for American businesses and encourage a fast start. Given we're now on the back foot and in desperate need of such a law, common sense says use something global businesses are already working with, rather than the laws 50 states independently create.
California has made the first move, but is it the right one? I’d be keen to hear your views on this.
Veeam is expanding its partnership with Lenovo to make its hyper-availability solutions available right from Lenovo and its resellers.
As a part of the partnership, Veeam intelligent data management solutions will be available with Lenovo Software Defined Infrastructure (SDI) and Storage Area Network (SAN) offerings.
Customers will be able to purchase the Veeam Hyper-Availability Platform directly from Lenovo and its resellers in a single transaction.
The aim of this partnership is to provide a seamless and efficient sales and deployment process to partners and customers. Integrating Lenovo SDI and SAN solutions with the Veeam Hyper-Availability Platform will help enterprises simplify IT, mitigate risks, and accelerate their business with intelligent data management solutions.
“Lenovo’s decision to resell Veeam Intelligent Data Management and Availability solutions with their offerings reflects Veeam’s market momentum and leadership,” said Peter McKay, President and Co-CEO of Veeam.
“It’s a prime example of two technology leaders collaborating to provide the most seamless and efficient sales and deployment process for its partners and customers. Lenovo’s global reach and extensive partner ecosystem, combined with the strong growth of its Data Center Infrastructure (DCI) and Software Defined Infrastructure (SDI) business units, provides Veeam customers with the best purchasing experience.”
The combined solution will reduce the costs and complexity of traditional infrastructure, virtualization, and data protection management. The companies said that they together aim to provide enterprises an increased ROI, accelerate application development and deployment, support data analytics, and simplify disaster recovery.
Veeam solutions come integrated with VMware vSphere, Microsoft Hyper-V and Nutanix Acropolis. This will help enterprises automate backup and recovery operations for high availability of data and applications.
“Veeam’s strong track record of innovation combined with their focus on creating a great user experience aligns with our customer-first approach. This alliance further enables us to provide an effortless customer experience and relevant solutions to our customers,” said John Majeski, who leads the Software and Solutions business at Lenovo Data Center Group.
“Businesses want solutions that provide IT simplicity while delivering Intelligent Transformation across environments. Veeam Availability Solutions deliver the IT simplicity and Intelligent Data Management needed for Lenovo Spectrum Virtualize SAN solutions, as well as for Lenovo’s SDI portfolio of ThinkAgile offerings with VMware, Microsoft and Nutanix to accelerate digital transformation while mitigating business risk.”
Additionally, organizations will be able to create Veeam DataLabs environments for accelerated app development and testing, efficient DR testing for corporate data governance and compliance, and data analytics for improved business intelligence.
“The multi-cloud capabilities of Veeam combined with Lenovo’s extensive portfolio of hybrid cloud solutions, provides organizations with the unprecedented choice, flexibility and agility needed to stay competitive in the digital economy,” said Carey Stanton, Vice President of Global Alliances at Veeam.
“This partnership is yet another example of how Veeam is helping customers ensure availability across all clouds without compromising availability, performance, efficiency or manageability.”
Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNGCykqVKB2PFbAwwbj4bFKBifQpLw&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=WuagW7CNII_SzQa3tJPYAg&url=https://www.dailyhostnews.com/veeam-hyper-availability-solutions-with-lenovo/
What are your options if you wish to automate the resumption of backup jobs on your Veeam Backup server after a failover? Is there a way to automatically resume your backup jobs after switching over to the Veeam Backup server?
As you may be aware, Veeam does not offer an "out-of-the-box" High Availability function for its own backup server. The reason is probably that the simplicity of deploying and recovering the backup server might make such an option redundant. You can read more about the steps at this link. However, you might still like to have an HA function for its convenience.
One way to protect the Veeam Backup server, and to offer it High Availability, is to replicate the backup server using the Veeam Backup & Replication product itself. Or you could use the VMware vSphere Replication tool.
To accomplish a complete High Availability solution for the Veeam Backup server, it is important that after the failover has happened, the Replica Veeam backup server automatically resumes the backup jobs.
To accomplish the automatic resumption after a failover, you can use Veeam PowerShell to initiate the resumption of failed backup jobs. It requires a few simple steps, and can be automated by adding the following PowerShell commands to the post-failover script:
The following commands check the status of each Backup and Replication Job. They will resume, or retry, each failed Backup job and Replication Job:
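A sketch of what such a post-failover script can look like is below; the cmdlet names follow the Veeam PowerShell snap-in of the 9.x era, so verify them (and the job-type values) against your own Veeam Backup & Replication version before using this in production:

```powershell
# Load the Veeam PowerShell snap-in (Veeam B&R 9.x; later versions ship a module instead)
Add-PSSnapin VeeamPSSnapin

# Collect every Backup and Replication job on this backup server
$jobs = Get-VBRJob | Where-Object { $_.JobType -eq "Backup" -or $_.JobType -eq "Replica" }

foreach ($job in $jobs) {
    # Check the result of the job's last run and retry the ones that failed
    if ($job.GetLastResult() -eq "Failed") {
        # -RetryBackup starts the job in retry mode so it resumes from the failed state
        Start-VBRJob -Job $job -RetryBackup
    }
}
```

Attached as a post-failover script to the replica, this gives the failed-over backup server a chance to pick up its jobs again without manual intervention.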
These steps, together with the steps in the previous blog post where I told you about protecting the Veeam Backup server, allowed the Service Provider mentioned at the top of the page to implement an easy and straightforward HA solution for his Veeam Backup server.
Testing was carried out, adjustments were made, and the finished solution was implemented in the PRODUCTION environment. These easy steps have become the cornerstone of his Veeam High Availability solution. “Easy, and very effective.” … is how he described it.
Original Article: https://cloudoasis.com.au/2018/09/15/resume-veeam-failed-backup-jobs/
I guess I should have called this "how to be successful with BCDR," but I really wanted to tie it to VAO. I have heard people say VAO just works. It doesn't quite, mind you. It is a BCDR tool, so it is only, at best, 15% of a BCDR project. The rest is the investigation and learning you need to do to make a BCDR project successful. So you investigate, search, discuss, talk, and decide, and then you have the info you need to start making VAO useful. We are going to look at what you need to think about and look into to be successful at that investigation and research.
The Business Impact Assessment (BIA) is the quickest route to making VAO work successfully because it contains so much useful information. Not everyone will have a BIA, or the time or money to do one. But if you have one, make sure it is current, and it will have all of the info to make VAO work really well.
If you don't, you do it manually, with what I call an application catalog.
You need to fill in the form below.
The short form includes: name of the app, business contact, technical contact, data, comments, components of the app, miscellaneous info, RTO, and importance.
For example: Exchange v2016 (www.microsoft.com); AppOwner – John Smith; Lead Support – Jane Doe; 24 VMs; requires AD / DNS / DHCP, one desktop, a minimum of 8 of the VMs, and a DC with the global catalog role; physical requirements – Barracuda Anti-Spam appliance; miscellaneous notes – spare Barracuda appliance at the DR site, and virtual is possible; RTO – 2 hours; importance – 1.
Sometimes you can work with the Help Desk, if you have one, as it often has a basic form of this kind of document that you can improve on. The components are the hard part: often it is Active Directory for credentials and DNS for name resolution, but it can be things like database servers or anti-spam appliances, for example.
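To keep the catalog entries consistent, it can help to record each one in a structured form. Here is a sketch in PowerShell of what one entry might look like; every name and value is illustrative, taken from the Exchange example above:

```powershell
# Hypothetical application catalog entry - all values here are illustrative
$exchangeEntry = [ordered]@{
    Name                 = "Exchange v2016"
    BusinessContact      = "John Smith (AppOwner)"
    TechnicalContact     = "Jane Doe (Lead Support)"
    VMCount              = 24
    MinimumVMs           = 8
    Components           = @("Active Directory (global catalog)", "DNS", "DHCP")
    PhysicalRequirements = @("Barracuda Anti-Spam appliance")
    Notes                = "Spare Barracuda appliance at the DR site; virtual is possible"
    RTOHours             = 2
    Importance           = 1
}
```

Whether you keep this in a script, a spreadsheet, or a Help Desk tool matters less than capturing the same fields for every app.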
Now that you have an application catalog, you need to talk to your management to understand the priority of each app. Hopefully you will select email to be the first recovered.
The important apps should be packaged on their own: Email, SharePoint, Accounts Payable, Accounts Receivable, and SQL, for example. The less important apps can be bundled and protected together. But often in DR the outage is only partial, so you may fail over only email, for example, which is why packaging with that granularity is important.
Now that you have an application catalog you can start using the info in it to do your VAO build-out of plans and start doing your test failover.
If you got all your info right in the catalog, you should have a successful test failover. If you don't, check out the info in the execution report; it will likely have what you need to modify before trying again.
VAO has functionality that other DR tools sometimes don't have: the ability to add scripts to do really good automated tests. That is worth doing, so for your own apps write some scripts and use the info in this article and this one to get them into VAO and working.
I like to document my recovery plan using info from the application catalog so I am ready to work in VAO, but have handy what is necessary. Here is an example.
I hope this information gives you some help and guidance so when you sit down with VAO you are going to be more quickly successful with it.
BTW, using VAO is not hard. But it is hard knowing all the info you need to know to be successful with it. So that is what I am trying to help with in this article.
Let me know if you have questions or comments.
Original Article: https://notesfrommwhite.net/2018/09/01/how-to-be-successful-with-vao/
The No. 1 question we get all the time: “Why do I need to back up my Office 365 Exchange Online, SharePoint Online and OneDrive for Business data?”
And it’s normally instantaneously followed up with a statement similar to this: “Microsoft takes care of it.”
Do they? Are you sure?
To add some clarity to this discussion, we’ve created an Office 365 Shared Responsibility Model. It’s designed to help you — and anyone close to this technology — understand exactly what Microsoft is responsible for and what responsibility falls on the business itself. After all — it is YOUR data!
Over the course of this post, we're going to fill out this Shared Responsibility Model. On the top half of the model, you will see Microsoft's responsibility. This was compiled based on information from the Microsoft Office 365 Trust Center, in case you would like to look for yourself.
On the bottom half, we will fill out the responsibility that falls on the business, or more specifically, the IT organization.
Now, let’s kick this off by talking specifically about each group’s primary responsibility. Microsoft’s primary responsibility is focused on THEIR global infrastructure and their commitment to millions of customers to keep this infrastructure up and running, consistently delivering uptime reliability of their cloud service and enabling the productivity of users across the globe.
An IT organization’s responsibility is to have complete access and control of their data — regardless of where it resides. This responsibility doesn’t magically disappear simply because the organization made a business decision to utilize a SaaS application.
Here you can see the supporting technology designed to help each group meet that primary responsibility. Office 365 includes built-in data replication, which provides data center to data center georedundancy. This functionality is a necessity. If something goes wrong at one of Microsoft’s global data centers, they can failover to their replication target, and, in most cases, the users are completely oblivious to any change.
But replication isn’t a backup. And furthermore, this replica isn’t even YOUR replica; it’s Microsoft’s. To further explain this point, take a minute and think about this hypothetical question:
Some of you might be thinking a replica — because data that is continuously or near-continuously replicated to a second site can eliminate application downtime. But some of you also know there are issues with a replication-only data protection strategy. For example, deleted data or corrupt data is also replicated along with good data, which means your replicated data is now also deleted or corrupt.
To be fully protected, you need both a backup and a replica! This fundamental principle has been the bedrock of Veeam’s data protection strategy for over 10 years. Look no further than our flagship product, aptly named Veeam Backup & Replication.
Some of you are probably already thinking: "But what about the Office 365 recycle bin?" Yes, Microsoft has a few different recycle bin options, and they can help you with limited, short-term data loss recovery. But if you are truly in complete control of your data, then "limited" can't check the box. To truly have complete access and control of your business-critical data, you need full data retention: short-term retention, long-term retention, and the ability to fill any and all retention policy gaps. In addition, you need granular recovery, bulk restore, and point-in-time recovery options at your fingertips.
The next part of the Office 365 Shared Responsibility Model is security. You’ll see that this is strategically designed as a blended box, not separate boxes — because both Microsoft AND the IT organization are each responsible for security.
Microsoft protects Office 365 at the infrastructure level. This includes the physical security of their data centers and the authentication and identification within their cloud services, as well as the user and admin controls built into the Office 365 UI.
The IT organization is responsible for security at the data level. There's a long list of internal and external data security risks, including accidental deletion, rogue admins abusing access, and ransomware, to name a few. Watch this five-minute video on how ransomware can take over Office 365. This alone will give you nightmares.
The final components are legal and compliance requirements. Microsoft makes it very clear in the Office 365 Trust Center that their role is of the data processor. This drives their focus on data privacy, and you can see on their site that they have a great list of industry certifications. Even though your data resides within Office 365, an IT organization’s role is still that of the data owner. And this responsibility comes with all types of external pressures from your industry, as well as compliance demands from your legal, compliance or HR peers.
In summary, now you should have a better understanding of exactly what Microsoft protects within Office 365 and WHY they protect what they do. Without a backup of Office 365, you have limited access and control of your own data. You can fall victim to retention policy gaps and data loss dangers. You also open yourself up to some serious internal and external security risks, as well as regulatory exposure.
All of this can be easily solved with a backup of your own data, stored in a place of your choosing, so that you can easily access and recover exactly what you want, when you want.
Looking to find a simple, easy-to-use Office 365 backup solution?
Look no further than Veeam Backup for Microsoft Office 365. This solution has already been downloaded by over 35,000 organizations worldwide, representing 4.1 million Office 365 users across the globe. Veeam was also named to Forbes World’s Best 100 Cloud Companies and is a Gold Microsoft Partner. Give Veeam a try and see for yourself.
Veeam pricing and licensing changes
Veeam will be implementing a series of pricing updates on Monday, Oct. 1, aimed at restoring balance between our existing perpetual and subscription pricing options that were launched earlier this year. The goal is to simplify our subscription-based pricing and ordering policies to better align with how customers purchase.
NEW Veeam Backup for Microsoft Office 365 v2 learning module now available!
In this course, you will find new information about the Microsoft Office 365 market, understand the Office 365 Shared Responsibility Model and how to discuss it with your customers, learn about new features of the product, explore frequently asked questions, and gain access to helpful resources that can be used to increase your sales.
Check out the link to discover more:
If your team would like to have a dedicated VMCE class in your office, let me know and Veeam will take care of everything.
This is at no cost to you or your partner organization. Normally, this class is $3500 per attendee. Exam vouchers for a Pearson Vue testing center can be obtained at no cost, as well.
Veeam recognizes that you're a valued partner, and we will continue to invest in you!
Shoot me a message to find out more or to line up an onsite class.