Veeam Co-Founder Ratmir Timashev On Strategic Growth, And Co-CEO Peter McKay’s Departure – CRN

veeam – Google News

Changing Of The Guard
Veeam co-founder Ratmir Timashev (pictured) expects the data availability software company to outpace the market for years to come. A 100 percent channel sales strategy, several powerful alliances with top hardware vendors including Hewlett Packard Enterprise, Cisco Systems and NetApp, and a market hungry for cloud solutions are all working in the company’s favor, he said.
The Baar, Switzerland, company’s co-CEO, Peter McKay, said Tuesday that he was leaving the company, kicking off a broader executive restructuring. Timashev said McKay made the decision to leave after accelerating the company’s enterprise channel sales strategy, as well as its strategic alliances.
In the wake of McKay’s departure, Veeam made Timashev, who has in the past served as CEO, executive vice president of worldwide sales and marketing. Co-founder Andrei Baronov, formerly co-CEO, is now the company’s sole chief executive.
Timashev said Veeam has grown its data protection and replication business in the mid-20 percent range in recent years, and he expects to maintain that pace over the next couple of years. Cloud service providers are the company’s fastest-growing partner segment, he said, and an increasing number of traditional resellers lured by customer demand and huge margins are getting in on that action.
What follows is an edited excerpt of CRN’s conversation with Timashev.

Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNG2qpEm5zNhzrInBIBbbd–JUT9Xw&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=hh_bW7gXgfeoAs65pKAG&url=https://www.crn.com/slide-shows/data-center/veeam-co-founder-ratmir-timashev-on-strategic-growth-and-co-ceo-peter-mckay-s-departure

DR

Prepare your vLab for your VAO DR planning

CloudOasis  /  HalYaman


A good disaster recovery plan needs careful preparation. But after your careful planning, how do you validate your DR strategy without disrupting daily business operations? Veeam Availability Orchestrator just might be your solution.
This blog post focuses on one aspect of the VBR and VAO configuration; to learn more about Veeam Availability Orchestrator, Veeam Replication and Veeam Virtual Labs, continue reading.
Veeam Availability Orchestrator is a very powerful tool to help you implement and document your DR strategy. However, this product relies on Veeam Backup & Replication to:

  • perform replication, and
  • provide the Virtual Lab.

Therefore, to successfully configure Veeam Availability Orchestrator, you must master Replication and the Virtual Lab. Don’t worry; I will take you through the important steps to configure your Veeam Availability Orchestrator and implement your DR plan.
The best way to get started is to share with you a real-life DR scenario I experienced last week during my VAO POC.

Scenario

A customer wanted to implement Veeam Availability Orchestrator with the following objectives:

  • Replication between the production and DR datacentres across the country,
  • Re-mapping the network attached to each VM at the DR site,
  • Re-IP’ing each VM at the DR site,
  • Scheduling DR testing to run every Saturday morning,
  • Documenting the DR plan.

As you might already be aware, all those objectives can be achieved using Veeam VBR & VAO.
So let’s get started.

The Environment and the Design

To understand the design and the configuration, let’s first introduce the customer network subnets at the PRODUCTION and DR sites.
PRODUCTION Site

  • At the PRODUCTION site, the customer used the 192.168.33.x/24 subnet,
  • Virtual Distributed Switch and port group: SaSYD-Prod.

DR Site

  • Production network at the DR site uses the 192.168.48.x/24 subnet
  • Prod vDS: SP-DSWProd
  • DR Re-IP subnet & Network name: vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch

To meet the requirements at those sites, the configuration must include the following steps:

  • Replication between the PRODUCTION and the DR Sites
  • Re-Mapping the VM Network from ProdNet to vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch
  • A vLab with the configuration listed above.

The following diagram shows the design we are going to discuss in this blog:

Replication Job Configuration

To prepare for a disaster and to implement failover, we must create a replication job that will replicate the intended VMs from the PRODUCTION site to the DR site. In this scenario, to achieve the requirements above, we must configure the replication job with Veeam’s Network Mapping and Re-IP options. To do this, tick the checkboxes for Separate virtual networks (enable network mapping) and Different IP addressing scheme (enable re-IP):

At the Network stage, we will specify the source and destination networks:

Note: to follow the diagram, the source network must be Prod_Net, and the target network will be the DR_ReIP network.
On the Re-IP step, enter the original IP address and the new IP address to be used at the DR site:
Next, continue with the replication job configuration as normal.
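To make the Re-IP rule concrete, here is a minimal sketch (Python, purely illustrative and not part of the product) of the address translation such a rule expresses, using the subnets from the design above:

import ipaddress

# Subnets from the design above; the rule swaps the network part and keeps the host part.
PROD_NET = ipaddress.ip_network("192.168.33.0/24")
DR_NET = ipaddress.ip_network("192.168.48.0/24")

def re_ip(source_ip: str) -> str:
    """Map a PRODUCTION address to its DR counterpart, preserving the host octet."""
    ip = ipaddress.ip_address(source_ip)
    if ip not in PROD_NET:
        raise ValueError(f"{source_ip} is not in {PROD_NET}")
    offset = int(ip) - int(PROD_NET.network_address)
    return str(DR_NET.network_address + offset)

print(re_ip("192.168.33.25"))  # -> 192.168.48.25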

Virtual Lab Configuration

To use Veeam Availability Orchestrator to check our DR planning, and to make sure our VMs will start on the DR site as expected, we must create a Veeam Virtual Lab to test our configuration. First, let’s create a Veeam vLab, starting with the name of the vLab and the ESXi host at the DR site that will host the Veeam proxy appliance. In the following screenshot, the hostname is spesxidr.veeamsalab.org.
Choose a datastore where you will keep the redo log files. After you have selected the datastore, click Next. You must then configure the proxy appliance for the network it will attach to. In our example, that is the PRODUCTION network at the DR site, named SP-DSWProd, and the appliance has a static DR site IP address. See below.
Next, we must configure the Networking as Advanced multi-host (manual configuration), and then select the appropriate Distributed virtual switch; in our case, SP-ProdSwitch.

This leads us to the next configuration stage, Isolated Network. At this stage, we must assign the DR network that each replicated VM will be connected to.
Note: This network must be the same as the re-mapped network you selected as the destination network during the replication job configuration. The isolated network name is any name you assign to the temporary network used during the DR plan check.

Next, we must configure the temporary DR network. As shown in the following screenshot, I chose the Omnitra_VAO_vLab network I named in the previous step (the isolated network). The IP address is the same as that of the DR PRODUCTION gateway. Also in the screenshot, you can see the masquerade network address I can use to access each of the VMs from the DR PRODUCTION network:


Finally, let’s create a static mapping to access the VMs during DR testing. We will use the masquerade IPs, as shown in the following screenshot.
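To make the masquerade addressing concrete, here is a small sketch (Python, illustrative only; the 172.17.0.0/16 masquerade network is an assumption for this example) of how a masquerade address is formed by swapping in the masquerade network bits while keeping the host bits of a replica’s IP:

import ipaddress

# Assumed masquerade network for illustration; yours is set in the vLab wizard.
MASQUERADE_NET = ipaddress.ip_network("172.17.0.0/16")

def masquerade(vm_ip: str) -> str:
    """Replace the VM address's network bits with the masquerade network bits."""
    host_bits = 32 - MASQUERADE_NET.prefixlen
    host_part = int(ipaddress.ip_address(vm_ip)) & ((1 << host_bits) - 1)
    return str(MASQUERADE_NET.network_address + host_part)

print(masquerade("192.168.48.15"))  # reachable as 172.17.48.15 during the test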

Conclusion

Veeam Availability Orchestrator is a very powerful tool to help companies streamline and simplify their DR planning. The initial configuration of VAO DR planning is not complex, but it is a little involved: you must navigate between two products, Veeam Backup & Replication and Veeam Availability Orchestrator.
After you have grasped the DR concept, your next steps to DR planning will be smooth and simple. Also, you may have noted that to configure your DR planning using Veeam Availability Orchestrator, you must be familiar with Veeam vLabs and networking in general. I highly recommend that you read more on Veeam Virtual Labs before starting your DR Plan configuration.
I hope this blog post helps you to get started with vLab and VAO configuration. Stay tuned for my next blog or recording about the complete configuration of the DR plan. Until then, I hope you find this post informative; please leave your comments below if you have questions or suggestions.
The post Prepare your vLab for your VAO DR planning appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2018/11/20/prepare-you-vlab-for-your-vao-dr-planning/

DR

Migration is never fun – Backups are no exception

Veeam Software Official Blog  /  Rick Vanover


One of the interesting things I’ve seen over the years is people switching backup products. Additionally, it is reasonable to say that the average organization has more than one backup product. At Veeam, we’ve seen this over time as organizations started with our solutions. This was especially the case before Veeam had any solutions for the non-virtualized (physical server and workstation device) space. Especially in the early days of Veeam, effectively 100% of business was displacing other products — or sitting next to them for workloads where Veeam would suit the client’s needs better.
The question of migration is something that should be discussed, as it is not necessarily easy. It reminds me of personal collections of media such as music or movies. For movies, I have VHS tapes, DVDs and DVR recordings, and use them each differently. For music, I have CDs, MP3s and streaming services — used differently again. Backup data is, in a way, similar. This means that the work to change has to be worth the benefit.
There are many reasons people migrate to a new backup product. This can be due to a product being too complicated or error-prone, too costly, or discontinued (a current example is VMware vSphere Data Protection). Even at Veeam we’ve deprecated products over the years. In my time here at Veeam, I’ve observed that backup products in the industry come, change and go. Further, almost all of Veeam’s most strategic partners have at least one backup product — yet we forge a path built on joint value, strong capabilities and broad platform support.
When the migration topic comes up, it is very important to have a clear understanding about what happens if a solution no longer fits the needs of the organization. As stated above, this can be because a product exits the market, drops support for a key platform or simply isn’t meeting expectations. How can the backup data over time be trusted to still meet any requirements that may arise? This is an important forethought that should be raised in any migration scenario. This means that the time to think about what migration from a product would look like, actually should occur before that solution is ever deployed.
Veeam takes this topic seriously, and the ability to handle it is built into the backup data. My colleagues and I on the Veeam Product Strategy Team have casually referred to Veeam backups as “self-describing data.” This means that you can open a backup up (which is easily done) and clearly see what it is. One way to realize this is the fact that Veeam backup products have an extract utility available. The extract utility is very helpful for recovering data from the command line, which is a good use case if an organization is no longer a Veeam client (but we all know that won’t be the case!). Here is a blog by Vanguard Andreas Lesslhumer on this little-known tool.
Why do I bring up the extract utility when it comes to switching backup products? Because it hits on something that I have taken very seriously of late: I call it Absolute Portability. This is a very significant topic in a world where organizations passionately want to avoid lock-in. Take the example I mentioned before of VMware vSphere Data Protection going end-of-life: Veeam Vanguard Andrea Mauro highlights how organizations can migrate to a new solution, but chances are that will be a different experience. Lock-in can occur in many ways: cloud lock-in, storage device lock-in, or services lock-in. Veeam is completely against lock-in, and arguably so agnostic that it is sometimes hard for us to make a specific recommendation!
I want to underscore the ability to move data — in, out and around — as organizations see fit. For organizations who choose Veeam, there are great capabilities to keep data available.
So, why move? Because expanded capabilities will give organizations what they need.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/EjQm2rSAvsU/backup-product-migration-lock-in.html

DR

The closing speed of transformation

Veeam Executive Blog – The Availability Lounge  /  Dave Russell

We all know that software and infrastructure don’t typically go away in the data center. You very rarely decommission something and bring in all new gear or stop the application to completely transition over to something else. Everyone’s in some sort of hybrid state. Some think that they’re transitioning, and many hopefully even have a plan for this.
 
Some of these changes are bigger, take longer, and you almost need to try them and experience them to have success in order to proceed. I’ve heard people say, “Well, we’re going to get around to doing X, Y, and Z.” But they are often lacking a sense of urgency to experiment early.
 
A major contributor to this type of procrastination is that changes that appear to be far off arrive suddenly. The closing speed of transformation is the issue. You seldom have the luxury of time, yet early on you are seldom compelled to decide. You don’t sense the urgency.
 
It is not in your best interest to say, “We’re just going to try to manage what we’ve got, because we’re really busy. We’ll get to that when we can.” Because then, boom, you’re unprepared for when you really need to actually get going on something.
 
A perfect example is the impact of the cloud on every business and every IT department. The big challenge is that organizations know they should be doing something today but are still struggling with exactly what to do. In terms of a full cloud strategy, it’s oftentimes very disaggregated. And while we’re going on roughly a decade in the cloud-era, as an industry overall, we’re still really in the infancy of having a complete cloud strategy.
 
In December of last year, I asked people, “What’s your cloud strategy? Do you have one? Are you formulating one?” The answer was unfortunately the same response they have been giving for the last eight years: “Next year.” The problem is that this is next year, and they are still in the same state.
 
When it comes to identifying a cloud strategy, a big challenge for IT departments and CIOs is that it used to be easy to peer into the future because the past was fairly predictable — whether it was the technology in data centers, the transformation that was happening, the upgrade cycle, or the movement from one platform to another. The way you had fundamentally been doing something in the past wasn’t changing with the future. Nor did it require radically different thinking. And it likely did not require a series of conversations across all of IT, and the business as well.
 
But when it comes to the cloud, we’re dealing with a fundamental transformation in the ways that businesses operate when it comes to IT: compute, storage, backup, all of these are impacted.
 
This means organizations working on a cloud strategy have little to no historical game plan to refer to, nothing they can fall back on for a reference. The approach of, “Well, this is what we did in the past, so let’s apply that to the future,” no longer applies. Knowing what you have, knowing what your resources are, and knowing what to expect are not typically well understood with regard to a complete cloud transformation. In part, this is because the business is often transforming, or seeking to become more digital, at the same time.
 
With the cloud, you often have limited, or isolated experiences. You have people who are trying to make business decisions that have never been faced with these issues, in this way, before.
 
Moving to absolute virtualization and a hybrid, multi-cloud deployment means that when it comes time to set a strategy you have a series of questions that need to be answered:
 

  • Do you understand what infrastructure resources are going to be required? No.
  • Do you understand what skills are going to be needed from your people? No.
  • Do you know how much budget to allocate with clarity, today, and over time? No.
  • Do you know what technologies are going to impact your business immediately, and in the near-term future? No.

 
Okay, go ahead and make a strategy now based on the information you just gave me, four No answers in a row. That’s pretty scary.
 
On top of this, data center people tend to be very risk averse by design. There’s a paralysis that creeps in. “Well, we’re not sure how we should get started.” And people just stay in pause mode. That’s part of why we see Shadow IT or Rogue IT. Someone says, “Well, I’m going to go out and I’m just going to get my own SaaS-based license for some capability that I’m looking for, because the IT department says they’re investigating it.”
 
Typically, what happens is the IT department is trying to figure that out: trying to form a strategy and investigate the request. But in the meantime, they say, “No.” IT becomes the department of “no” and is not perceived as being helpful.
 
To address this issue head on, you need to apply an engineering mindset, meaning that you learn more about a problem by trying to solve it. In the absence of a solid reference base to compare against, we should at least get going on what is visible to us and looks achievable in the short term.
 
An excellent example in the software as a service (SaaS) world is Microsoft Office 365. Getting the on-premises IT people participating in this can still be a challenge. As SaaS solutions become more and more widely implemented, they sometimes happen outside of the purview of what goes on in the data center. This can lead to security, pricing, performance and Availability issues.
 
Percolate that up: what is the effect of that? What does it actually mean? It means that in the worst-case scenario, infrastructure and operations people are increasingly viewed as less and less strategic, because if you take this to the extreme, they end up as custodians of legacy implementations and older solutions. All the while, numerous other projects are being championed, piloted, put into production and ultimately owned by somebody else, perhaps a line of business outside of IT.
 
That’s where you see CIOs self-report that they think more than 29% of their IT spending is happening outside of their purview. If you think about that, it is concerning. You are the Chief Information Officer (CIO); you should know pretty close to 100% of what is going on as it relates to IT. If your belief is that approaching a third of IT spending happens elsewhere, outside of your control, and that this outside spending is not really an issue, then what are you centralizing? What are you the Chief of, if this trend continues?
 
The previous way of developing a strategic IT plan worked well in the past when you had an abundance of time. But that is no longer the case. Transformation is happening all around us and inside of each organization. You can’t continue to defer decisions. IT is now a critical part of the business; every organization has become digital and the cloud is touching everything. It is time to step up, work with vendors you trust, and move boldly to the cloud.
 
Please join me on December 5th for VeeamON Virtual, where I discuss these challenges as well as 2019 predictions, Digital Transformation, Hyper-Availability and much more. Go ahead and register for VeeamON Virtual.
 


Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/IG0mZ-tB1tA/speed-of-cloud-transformation.html

DR

QNAP NAS Verified Veeam Ready for Efficient Disaster Recovery – HEXUS

veeam – Google News

PRESS RELEASE

Taipei, Taiwan, November 19, 2018 – QNAP® Systems, Inc. today announced that multiple Enterprise-class QNAP NAS systems, including the ES1640dc v2 and TES-1885U, have been verified as Veeam® Ready. Veeam Software, the leader in Intelligent Data Management for the Hyper-Available Enterprise™, has granted QNAP NAS systems the Veeam Ready Repository distinction, verifying that these systems achieve the performance levels required for efficient backup and recovery with Veeam® Backup & Replication™ in virtual environments built on VMware® vSphere™ and Microsoft® Hyper-V® hypervisors.
“Veeam provides industry-leading Availability solutions for virtual, physical and cloud-based workloads, and verifying performance ensures that organizations can leverage Veeam advanced capabilities with QNAP NAS systems to improve recovery time and point objectives and keep their businesses up and running,” said Jack Yang, Associate Vice President of Enterprise Storage Business Division of QNAP.
Veeam Backup & Replication helps achieve Availability for ALL virtual, physical and cloud-based workloads and provides fast, flexible and reliable backup, recovery and replication of all applications and data. Organizations can now choose among several QNAP systems verified with Veeam for backup and recovery, including the ES1640dc v2 and TES-1885U.
For more information, please visit https://www.veeam.com/ready.html.
About QNAP Systems, Inc.
QNAP Systems, Inc., headquartered in Taipei, Taiwan, provides a comprehensive range of cutting-edge Network-attached Storage (NAS) and video surveillance solutions based on the principles of usability, high security, and flexible scalability. QNAP offers quality NAS products for home and business users, providing solutions for storage, backup/snapshot, virtualization, teamwork, multimedia, and more. QNAP envisions NAS as being more than “simple storage”, and has created many NAS-based innovations to encourage users to host and develop Internet of Things, artificial intelligence, and machine learning solutions on their QNAP NAS.

Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNGr1B0Zhq5QDEVsTvQ-qQhfGevzSA&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=4xIAXKjpHpXnhAGrrL2oBg&url=https://hexus.net/tech/items/network/124439-qnap-nas-verified-veeam-ready-efficient-disaster-recovery/

DR

On-Premises Object Storage for Testing

CloudOasis  /  HalYaman


Many customers and partners want to deploy on-premises object storage for testing and learning purposes. How can you deploy a completely free object storage instance, with no time limit and unrestricted capacity, in your own lab?


In this blog post, I will take you through the deployment and configuration of an on-premises object storage instance for test purposes, so that you can learn more about the new feature.
For this test, I will use a product called Minio. You can download it from this link for free; it is available for Windows, Linux and macOS.
To get started, download Minio and run the installation.
In my CloudOasis lab, I decided to use a Windows Server Core machine to act as a test platform for my object storage. Read on for the configuration I used:

Deployment

By default, Minio Server installs as an unsecured service. The downside of this is that many applications need a secure HTTPS URL to interact with object storage. To fix this, we download GnuTLS to generate the private key and public certificate that will secure the object storage connection.

Preparing Certificate

After GnuTLS has been downloaded and extracted to a folder on your drive, for example C:\GnuTLS, add that path to the Windows PATH with the following command:
setx path "%path%;C:\gnutls\bin"

Private Key

The next step is to generate the private.key:
certtool.exe --generate-privkey --outfile private.key
After the private.key has been generated, create a new file called cert.cnf and paste the following template into it:
# X.509 Certificate options
#
# DN options

# The organization of the subject.
organization = "CloudOasis"

# The organizational unit of the subject.
#unit = "CloudOasis Lab"

# The state of the certificate owner.
state = "NSW"

# The country of the subject. Two letter code.
country = "AU"

# The common name of the certificate owner.
cn = "Hal Yaman"

# In how many days, counting from today, this certificate will expire.
expiration_days = 365

# X.509 v3 extensions

# DNS name(s) of the server
dns_name = "miniosrv.cloudoasis.com.au"

# (Optional) Server IP address
ip_address = "127.0.0.1"

# Whether this certificate will be used for a TLS server
tls_www_server

# Whether this certificate will be used to encrypt data (needed
# in TLS RSA cipher suites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key

Public Key

Now we are ready to generate the public certificate using the following command:
certtool.exe --generate-self-signed --load-privkey private.key --template cert.cnf --outfile public.crt
After you have completed those steps, you must copy the private and the public keys to the following path:
C:\Users\administrator.VEEAMSALAB\.minio\certs\
Note: If you already have your own certificates, simply copy them to the certs folder above.

Run Minio Secure Service

Now we are ready to run a secured Minio Server using the following command. Here, we are assuming your minio.exe has been installed on your C:\ drive:
C:\minio.exe server S:\bucket
Note: S:\bucket is a second volume I created and configured for Minio Server to store the saved objects.
After minio.exe starts, it prints the service endpoint along with the access and secret keys you will use to connect.
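To confirm the secured service is reachable, a quick client-side test helps. Below is a minimal sketch using Python and the boto3 S3 client (assumptions for illustration: the hostname miniosrv.cloudoasis.com.au from the certificate above, Minio’s default port 9000, and placeholder access/secret keys, which Minio prints at startup):

import boto3

# Endpoint and credentials are assumptions: use the hostname from your
# certificate and the access/secret keys printed by Minio at startup.
s3 = boto3.client(
    "s3",
    endpoint_url="https://miniosrv.cloudoasis.com.au:9000",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    verify=False,  # self-signed certificate; point this at public.crt in real use
)

s3.create_bucket(Bucket="veeam-test")  # create a test bucket
s3.put_object(Bucket="veeam-test", Key="hello.txt", Body=b"hello object storage")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])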

Conclusion

This blog was prepared following several requests from customers and partners who wanted to familiarise themselves with object storage integration during their private beta experience.
As you saw, the steps described here will help you deploy on-premises object storage for your own testing, without the cost, time limits, or storage size limits.
The steps to deploy the object storage are very simple, but the outcome is huge, and very helpful to you in your learning journey.

The post On-Premises Object Storage for Testing appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2018/11/09/on-premises-s3/

DR

Veeam honored with 2018 CRN Tech Innovator Award

CRN, a brand of The Channel Company, recognized Veeam® with a 2018 CRN Tech Innovator Award. Veeam Availability Suite™ 9.5 took top honors in the Data Management category! These annual awards honor standout hardware, software and services that are moving the IT industry forward. In compiling the 2018 Tech Innovator Award list, CRN editors evaluated 300 products across 34 technology categories using several criteria, including technological advancements, uniqueness of features, and potential to help solution providers solve end users’ IT challenges.

UPDATED: Cisco + Veeam Digital Hub
Cisco and Veeam offer unmatched price and performance along with hyperconverged Availability. Have you seen the latest updates we’ve made to the Cisco + Veeam Digital Hub? Subscribe now to get easy and unlimited access to Cisco + Veeam reports, best practices, webinars and much more.

VeeamON Virtual 2018 – December 4

VeeamON Virtual is a unique online conference designed to deliver the latest insights on Intelligent Data Management — all from the comfort of your own office.
Every year, VeeamON Virtual brings together more than 2,500 industry experts to showcase the latest technology solutions providing the Hyper-Availability of data.
Join us on our virtual journey to explore:
  • The challenges of data growth and inevitable data sprawl
  • Threats to data Availability and protection
  • The steps you can take to achieve Intelligent Data Management
  • And more!
Enter for a chance to win a Virtual Reality Kit and gain access to the Veeam® Availability Library!

On-prem or in-cloud? This platform provides holistic security in-cloud, on-prem and everywhere in between – SiliconANGLE News

veeam – Google News

Three years ago the big question was: “On-prem or in-cloud?” Today’s answer is both, as the majority of companies adopt hybrid solutions. However, the question turns to the complexity of securing data that is dispersed over multiple platforms. Taking steps to solve this issue is N2W Software Inc.’s Backup & Recovery version 2.4, announced during AWS re:Invent 2018 this week in Las Vegas, Nevada.
“You have a single platform that can manage data protection both on- and off-premises so that you can leverage where is the best place for this workload and protect it across no matter where it chooses to live,” said Danny Allan (pictured, left), vice president of product strategy at Veeam Software Inc., which acquired N2WS this past January.
Allan and Andy Langsam (pictured, right), chief operating officer at N2W Software, spoke with John Walls (@JohnWalls21), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, and guest host Justin Warren (@jpwarren), chief analyst at PivotNine Pty Ltd, during AWS re:Invent in Las Vegas. They discussed the release of N2WS version 2.4 and the changing face of the back-up industry as it expands from security into predictive analysis. (* Disclosure below.)

Slashing storage costs

N2WS version 2.4 introduces snapshot decoupling into the AWS S3 repository, allowing customers to move backed-up EC2 snapshots into the much cheaper S3 storage, according to Langsam. Real customer savings are estimated to be 40 to 50 percent a month, he added.
“Anytime you can talk about cost reduction from five cents a gig on EC2 storage to two cents on S3, it’s a tremendous savings for our customer base,” Langsam said.
A recent survey conducted by N2WS showed that more than half of the company’s customers spend $10,000 a month or more on AWS storage costs. “If they can save 40 percent on that, that’s real, real savings,” Langsam stated. “[It’s] more than the cost of the software alone.”
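As a back-of-the-envelope check on those figures (a sketch only: the per-gigabyte prices are the ones quoted above, while the 10 TB volume is an assumed example, not from the article):

# Rough monthly cost comparison (per-GB-month prices quoted above;
# the 10 TB data volume is an assumption for illustration).
gb = 10_000
ec2_cost = gb * 0.05  # EC2 (EBS) snapshot storage: $500/month
s3_cost = gb * 0.02   # S3 storage: $200/month
print(f"${ec2_cost:.0f} vs ${s3_cost:.0f}: {1 - s3_cost / ec2_cost:.0%} saved")
# 60% saved on whatever is moved; whole bills drop less, hence the 40-50% estimate.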
N2WS has reported 189-percent growth in revenue since its January 2018 acquisition by Veeam, according to Allan. “We’ve got customers recently like Notre Dame and Cardinal Health, and then we have people getting into the cloud for the very first time,” he said.
Allan attributes this growth to the financial stability provided by having a parent company. “Being acquired has allowed us to focus on the customer and innovation versus going out and raising money from investors,” he stated.
Security concerns have catapulted back-up services into the spotlight, but their capabilities are starting to expand outside of just protection and recovery. “We’re moving away from just being reactive to business need to being proactive in driving the business forward,” Allan said.
Applying new technologies available through advances in machine learning and artificial intelligence allows data protection companies to offer suggestions to their clients that will increase productivity or reduce costs. “We [can] leverage a lot of the algorithms that are existing in clouds like AWS to help analyze the data and make decisions that [our clients] don’t even know that they need to make,” Allan said. “That decision could be, you need to run this analysis at 2:00 a.m. in the morning because the instances are cheaper.”
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of AWS re:Invent. (* Disclosure: Veeam Software Inc. sponsored this segment of theCUBE. Neither Veeam nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)


Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNHfBiD2bON-7L0l4XY3tdnBfY63Kg&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=1UgAXJCDGYnihAGZ5qHwDA&url=https://siliconangle.com/2018/11/28/prem-cloud-platform-provides-holistic-security-cloud-prem-everywhere-reinvent/

DR

Veeam and N2WS: Accelerating Growth and Innovation

Veeam Executive Blog – The Availability Lounge  /  Yesica Schaaf

Who can forget that wonderful line from Forrest Gump in which Hanks talks about the beginning of his friendship with Jenny and how since that day, they were like “Peas and Carrots?” That’s how we like to think of Veeam and N2WS.
 
Veeam, a company born on virtualization, is the clear leader in Intelligent Data Management, with significant investments and growth in the public cloud space. As part of our strategy to deliver the most robust data management and protection across any environment and any cloud, we are always looking at ways to innovate and expand our offerings. Last year, we acquired N2WS, a leading provider of cloud-native backup and disaster recovery for Amazon Web Services (AWS); since the acquisition, Veeam and N2WS have accelerated N2WS’ revenue growth by 186% year over year, including 179% growth in installed trials of N2WS’ product, N2WS Backup & Recovery.
 
Today, N2WS announces version 2.4 of N2WS Backup & Recovery, which enables businesses to reduce the overall cost of protecting cloud data by extending Amazon EC2 snapshots to Amazon S3 for long-term data retention — reducing backup storage costs by up to 40%.
 
This latest release builds on the previous announcement where N2WS and Veeam launched new innovations to help businesses automate backup and recovery operations for Amazon DynamoDB, a cloud database service used by more than 100,000 AWS customers for mobile, web, gaming, ad tech, IoT, and many other applications that need low-latency data access. By extending N2WS Backup & Recovery to Amazon DynamoDB, businesses are now able to ensure continuous Availability and protect against accidental or malicious deletion of critical data in Amazon DynamoDB.
 
While the cloud delivers significant business benefits, based on the AWS shared responsibility model, businesses must still take direct action to guard data and enable business continuity in the event of an outage or disaster. We are now seeing unprecedented numbers of new customers turn to N2WS and Veeam to ensure their AWS cloud data is secure, including the University of Notre Dame, Cardinal Health and more.
 
As Veeam and N2WS continue to invest in the public cloud space, new innovations specifically designed for AWS environments will be available in early 2019, innovations that will deliver major advancements to customers across the globe. I’d love to share this with you now, but you’ll have to wait for Q1 2019 for the details.
 
But if you are at AWS re:Invent this week, please join our session “A Deeper Dive on How Veeam is Evolving Availability on AWS” on Wednesday, November 28 or visit us at booth #1011 throughout the week to hear more about our innovative approach to the public cloud, including where we are headed with archiving, mobility and more.
 
With Veeam’s momentum and growth across AWS data protection solutions, the company is well positioned as the market leader in Intelligent Data Management with an innovative focus on the public cloud. Ultimately, from being the peas and carrots and better together, we want to be the meat and potatoes for all our new joint customers where we are an integral part of their business and IT operations.


Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/Fiwmc_ygDWA/new-version-n2ws-backup-recovery.html

DR

Stateful containers in production, is this a thing?

Veeam Software Official Blog  /  David Hill


As the new-world debate of containers vs virtual machines continues, there is also a debate raging about stateful vs stateless containers. Is this really a thing? Is it really happening in production environments? Do we really need to back up containers, or can we just back up the data sets they access? Containers are not meant to be stateful, are they? This debate rages daily on Twitter, Reddit and in pretty much every conversation I have with customers.
The debate typically starts with the question: why run a stateful container? To understand that question, we first need to understand the difference between a stateful and a stateless container, and the purpose behind each.

What is a container?

“Containers enable abstraction of resources at the operating system level, enabling multiple applications to share binaries while remaining isolated from each other” *Quote from Actual Tech Media
A container is an application and dependencies bundled together that can be deployed as an image on a container host. This allows the deployment of the application to be quick and easy, without the need to worry about the underlying operating system. The diagram below helps explain this:
When you look at the diagram above, you can see that each application is deployed with its own libraries.

What about the application state?

When we think about applications in general, they all have persistent data and application state data. It doesn’t matter what the application is; it has to store data somewhere, otherwise what would be the point of the application? Take a CRM application: all that customer data needs to be kept somewhere. Traditionally, these applications use database servers to store all the information, and nothing has changed in that regard. But when we think about the application state, this is where the discussion about stateful containers comes in. Typically, an application has five state types:

  1. Connection
  2. Session
  3. Configuration
  4. Cluster
  5. Persistent

For the purposes of this blog, we won’t go into depth on each of these states; for applications being written today, native to containers, these states are all offloaded to a database somewhere. The challenge comes when existing applications have been containerized: the process of taking a traditional application that is installed on top of an OS and turning it into a containerized application so that it can be deployed in the model shown earlier. These applications save their state locally, and exactly where depends on the application and the developer. A more common approach now is also running databases as containers, which consequently exhibit many of the state types listed above.

Stateful containers

A stateful container’s data is typically either written to persistent storage or kept in memory, and this is where the challenges come in. Being able to recover the applications in the event of an infrastructure failure is important. If everything that is backed up is running in databases, then as mentioned earlier, that is an easy solution; but if it’s not, how do you orchestrate the recovery of these applications without interruption to users? If you have load-balanced applications running and you have to restore one of them, but it doesn’t know the connection or session state, the end user is going to face issues.

If we look at the diagram, we can see that “App 1” has been deployed twice across different hosts. We have multiple users accessing these applications through a load balancer. If “App 1” on the right crashes and is then restarted without any application state awareness, User 2 will not simply reconnect to that application. The application won’t understand the connection and will more than likely ask the user to re-authenticate. That is really frustrating for the user, and terrible for the company providing the service. Of course, this can be mitigated with different types of load balancers and other software, but the challenge is real. This is the challenge for stateful containers: it’s not just about backing up data in the event of data corruption, it’s how to recover and operate a continuous service.

Stateless containers

With stateless containers, it’s extremely easy. Taking the diagram above, the session data would be stored in a database somewhere. In the event of a failure, the application is simply redeployed and picks up where it left off. That is exactly how containers were designed to work.
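To make the contrast concrete, here is a minimal sketch using the Docker SDK for Python (docker-py). It assumes a local Docker daemon; the image names, container names and volume name are illustrative choices, not anything prescribed here:

import docker

client = docker.from_env()  # connects to the local Docker daemon

# Stateful: the database's state lives in a named volume, so the container
# itself can be destroyed and recreated without losing the data.
client.containers.run(
    "postgres:11",
    name="crm-db",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"crm-pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)

# Stateless: the web tier keeps no local state; kill it and redeploy freely,
# and it picks its session/connection state back up from the database.
client.containers.run("nginx:alpine", name="crm-web", detach=True, ports={"80/tcp": 8080})

The recovery problems discussed above live in the first container: its volume, or any in-memory state, is what a DR workflow has to capture and re-attach.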

So, are stateful containers really happening?

When we think of containerized applications, we typically think about the new-age, cloud-native, born-in-the-cloud, serverless [insert latest buzzword here] application. But when we dive deeper and look at the simplicity containers bring, we can understand how businesses are leveraging containers to reduce the complex infrastructure required to run these applications. It makes sense that many existing applications requiring consistent state data are appearing in production everywhere.
Understanding how to orchestrate the recovery of stateful containers is what needs to be focused on, not whether they are happening or not.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/n1e1aGaEEoU/stateful-containers-in-production-is-this-a-thing.html

DR

Veeam / Cisco Events for December

12/2/18 Partner Exchange — Richmond, VA
12/5/18 Cisco Connect — Denver, CO
12/12/18 Cisco Connect — Virginia Beach, VA
12/12/18 Cisco Connect — Los Angeles, CA
12/13/18 Cisco Connect — Columbus, OH
12/13/18 Partner Exchange — Pittsburgh, PA

Regional Health Centre gets healthy with Veeam and Cisco

Canadian-based Peterborough Regional Health Centre’s relatively small IT team was not happy with the performance, recoverability and manageability of their Veritas solution. Veeam and Cisco were able to add data protection onto a VDI project, which greatly simplified and improved their data Availability. The ability to manage and protect Microsoft Office 365 also added incremental value to the deal.

VeeamON ’18 Virtual Conference

VeeamON Virtual is a unique online conference designed to deliver the latest insights on Intelligent Data Management… to the comfort of your own office. Annually, VeeamON Virtual brings together more than 2,500 industry experts to showcase the latest technology solutions providing the Hyper-Availability of data. Join us on our virtual journey to explore the challenges of data growth and inevitable data sprawl, and the threat they pose to data Availability and protection.

REGISTER NOW

Enhanced self-service restore in Veeam Backup for Microsoft Office 365 v2

In Veeam Backup for Microsoft Office 365 1.5, you could only restore the most recently backed up recovery point, which limited its usefulness for most administrators looking to take advantage of the feature. That’s changed in Veeam Backup for Microsoft Office 365 v2 with the ability to now choose a point in time from the Veeam Explorers™. Read this blog from Anthony Spiteri, Global Technologist, Product Strategy to learn more.

LEARN MORE

More tips and tricks for a smooth Veeam Availability Orchestrator deployment

Veeam Software Official Blog  /  Melissa Palmer

Welcome to even more tips and tricks for a smooth Veeam Availability Orchestrator deployment. In the first part of our series, we covered the following topics:

  • Plan first, install next
  • Pick the right application to protect to get a feel for the product
  • Decide on your categorization strategy, such as using VMware vSphere Tags, and implement it
  • Start with a fresh virtual machine

Configure the DR site first

After you have installed Veeam Availability Orchestrator, the first site you configure will be your DR site. If you are also deploying production sites, it is important to note that you cannot change a site’s personality after the initial configuration. This is why it is so important to plan before you install, as we discussed in the first article in this series.

As you are configuring your Veeam Availability Orchestrator site, you will see an option for installing the Veeam Availability Orchestrator Agent on a Veeam Backup & Replication server. Remember, you have two options here:

  1. Use the embedded Veeam Backup & Replication server that is installed with Veeam Availability Orchestrator
  2. Push the Veeam Availability Orchestrator Agent to existing Veeam Backup & Replication servers

If you change your mind and do in fact want to use an existing Veeam Backup & Replication server, it is very easy to install the agent after initial configuration. In the Veeam Availability Orchestrator configuration screen, simply click VAO Agents, then Install. You will just need to know the name of the Veeam Backup & Replication server you would like to add and have the proper credentials.

Ensure replication jobs are configured

No matter which Veeam Backup & Replication server you choose to use for Veeam Availability Orchestrator, it is important to ensure your replication jobs are configured in Veeam Backup & Replication before you get too far in configuring your Veeam Availability Orchestrator environment. After all, Veeam Availability Orchestrator cannot fail replicas over if they are not there!

If for some reason you forget this step, do not worry. Veeam Availability Orchestrator will let you know when a Readiness Check is run on a Failover Plan. As the last step in creating a Failover Plan, Veeam Availability Orchestrator will run a Readiness Check unless you specifically un-check this option.

If you did forget to set up your replication jobs, Veeam Availability Orchestrator will let you know, because your Readiness Check will fail, and you will not see green checkmarks like this in the VM section of the Readiness Check Report.

For a much more in-depth overview of the relationship between Veeam Backup & Replication and Veeam Availability Orchestrator, be sure to read the white paper Technical Overview of Veeam Availability Orchestrator Integration with Veeam Backup & Replication.

Do not forget to configure Veeam DataLabs

Before you can run a Virtual Lab Test on your new Failover Plan (you can find a step-by-step guide to configuring your first Failover Plan here), you must first configure your Veeam DataLab in Veeam Backup & Replication. If you have not worked with Veeam DataLabs before (previously known as Veeam Virtual Labs), be sure to read the white paper I mentioned above, as configuration of your first Veeam DataLab is also covered there.

After you have configured your Veeam DataLab in Veeam Backup & Replication, you will be able to run Virtual Lab Tests on your Failover Plan, as well as schedule Veeam DataLabs to run whenever you would like. Scheduling Veeam DataLabs is ideal for providing an isolated copy of your production environment for application testing, and can help you make better use of those idle DR resources.

Veeam DataLabs can be run on demand or scheduled from the Virtual Labs screen. When running or scheduling a lab, you can also select the duration of time you would like the lab to run for, which can be handy when scheduling Veeam DataLab resources for use by multiple teams.

There you have it, even more tips and tricks to help you get Veeam Availability Orchestrator up and running quickly and easily. Remember, a free 30-day trial of Veeam Availability Orchestrator is available, so be sure to download it today!

The post More tips and tricks for a smooth Veeam Availability Orchestrator deployment appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/l-_Z3dXBpcE/availability-orchestrator-deployment-tips-tricks.html

DR

Veeam and NetApp double-team hybrid data run wild – SiliconANGLE

veeam – Google News

Mission-critical apps on-premises, machine-learning apps on Google Cloud Platform, serverless apps on Amazon Web Services cloud — phew, it’s getting hectic in modern enterprise information technology. The number of clouds isn’t going to shrink anytime soon, so it would help if things like data backup and data management would fuse to keep the number of additional things to mess with manageable.
NetApp Inc. and Veeam Software Inc. have partnered to integrate their technologies so fans of both will have fewer things to fiddle with. NetApp for storage — and more recently, data management — and Veeam for backup go together like chocolate and mint creme. They cover a number of crucial steps along the path of that most valuable asset in digital business, data.
“But what makes it simple is when it is comprehensive and integrated,” said Bharat Badrinath (pictured, right), vice president of products and solutions marketing at NetApp. “When the two companies’ engineering teams work together to drive that integration, that results in simplicity.”
Badrinath and Ken Ringdahl (pictured, left), vice president of global alliance architecture at Veeam, spoke with Lisa Martin (@LuccaZara) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during this week’s NetApp Insight event in Las Vegas. They discussed how the companies’ partnership is reshaping their sales strategies. (* Disclosure below.)

In the back door and through the C-suite

Veeam and NetApp have integrated deeply so users can have the two technologies jointly follow their data around wherever it goes on the long, strange trip through hybrid cloud. Veeam has a reputation as a younger man’s technology that comes in the door through advocates in the IT department. It’s now making a push into larger enterprises, where NetApp has a large, deeply ingrained footprint.
“That’s a big impetus for the partnership, because NetApp has a lot of strength, especially with the ONTAP system in enterprise,” Ringdahl said.
The companies complement each other and fill in each other’s blank spaces. “Veeam is bringing NetApp into more of our commercial deals; NetApp is bringing us into more enterprise deals,” Ringdahl explained. “We can come bottom-up; NetApp can come top-down.”
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the NetApp Insight event. (* Disclosure: TheCUBE is a paid media partner for NetApp Insight. Neither NetApp Inc., the event sponsor, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)


Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNGscbl2YkiS6c9aP4_RprNlFy-siw&clid=c3a7d30bb8a4878e06b80cf16b898331&cid=52780078044958&ei=rT7XW7jDG6ORhgHZkpPwDQ&url=https://siliconangle.com/2018/10/25/veeam-and-netapp-double-team-hybrid-data-run-wild-netappinsight/

DR

How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs

Veeam Software Official Blog  /  Melissa Palmer


Unfortunately, bad patches are something everyone has experienced at one point or another. Just take the most recent example: the Microsoft Windows October 2018 Update, which impacted both desktop and server versions of Windows. The update resulted in missing files on impacted systems and has temporarily been paused while Microsoft investigates.
Because of incidents like this, organizations are often slow to adopt patches. This is one of the reasons the WannaCry ransomware was so impactful: unpatched systems introduce risk into environments, and new exploits for old problems are on the rise. Before patching a system, organizations must first do two things: back up the systems to be patched, and perform patch testing.

A recent, verified Veeam Backup

Before we patch a system, we always want to make sure we have a backup that matches our organization’s Recovery Point Objective (RPO), and that the backup was successful. Luckily, Veeam Backup & Replication makes this easy to schedule, or even run on demand as needed.
Beyond the backup itself succeeding, we also want to verify the backup works correctly. Veeam’s SureBackup technology allows for this by booting the VM in an isolated environment, then testing the VM to make sure it is functioning properly. Veeam SureBackup gives organizations additional peace of mind that their backups have not only succeeded, but will be usable.

Rapid patch testing with Veeam DataLabs

Veeam DataLabs enable us to test patches rapidly, without impacting production. In fact, we can use that most recent backup we just took of our environment to perform the patch testing. Remember the isolated environment we just talked about with Veeam SureBackup technology? You guessed it, it is powered by Veeam DataLabs.
Veeam DataLabs allows us to spin up complete applications in an isolated environment. This means that we can test patches across a variety of servers with different functions, all without even touching our production environment. Perfect for patch testing, right?
Now, let’s take a look at how the Veeam DataLab technology works.
Veeam DataLabs are configured in Veeam Backup & Replication. Once they are configured, a virtual appliance is created in VMware vSphere to house the virtual machines to be tested. Beyond the virtual machines you plan on testing, you can also include key infrastructure services such as Active Directory, or anything else the virtual machines you plan on testing require to work correctly. This group of supporting VMs is called an Application Group.
[Diagram: the components of a Veeam DataLab environment]
In the above diagram, you can see the components that support a Veeam DataLab environment.
Remember, these are just copies from the latest backup; they do not impact the production virtual machines at all. To learn more about Veeam DataLabs, be sure to take a look at this great overview hosted here on the Veeam.com blog.
So what happens if we apply a bad patch to a Veeam DataLab environment? Absolutely nothing. At the end of the DataLab session, the VMs are powered off, and the changes made during the session are thrown away. There is no impact to the production virtual machines or the backups leveraged inside the Veeam DataLab. With Veeam DataLabs, patch testing is no longer a big deal, and organizations can proceed with their patching activities with confidence.
This DataLab can then be leveraged for testing, or for running Veeam SureBackup jobs. SureBackup jobs also provide reports upon completion. To learn more about SureBackup jobs, and see how easy they are to configure, be sure to check out the SureBackup information in the Veeam Help Center.

Patch testing to improve confidence

Organizations’ hesitance to apply patches is understandable; however, leaving patches unapplied in a timely manner carries significant risk of its own. By leveraging Veeam Backups along with Veeam DataLabs, organizations can quickly test as many servers and environments as they would like before installing patches on production systems. The ability to rapidly test patches ensures any potential issue is discovered long before any data loss or negative impact to production occurs.

No VMs? No problem!

What about the other assets in your environment that can be impacted by a bad patch, such as physical servers, desktops, laptops, and full Windows tablets? You can still protect these assets by backing them up using Veeam Agent for Microsoft Windows. These agents can be automatically deployed to your assets from Veeam Backup & Replication. To learn more about Veeam Agents, take a look at the Veeam Agent Getting Started Guide.
To see the power of Veeam Backup & Replication, Veeam DataLabs, and Veeam Agent for Microsoft Windows for yourself, be sure to download the 30-day free trial of Veeam Backup & Replication here.
The post How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/dCjtFr9x-1M/rapid-patch-testing-veeam-backups-veeam-datalabs.html

DR

Native snapshot integration for NetApp HCI and SolidFire

Native snapshot integration for NetApp HCI and SolidFire

Veeam Software Official Blog  /  Adam Bergh


Four years ago, Veeam delivered to the market ground-breaking native snapshot integration into NetApp’s flagship ONTAP storage operating system. In addition to operational simplicity, improved efficiencies, reduced risk and increased ROI, the Veeam Hyper-Availability Platform and ONTAP continue to help customers of all sizes accelerate their Digital Transformation initiatives and compete more effectively in the digital economy.
Today I’m pleased to announce that a native storage integration with Element Software, the storage operating system that powers NetApp HCI and SolidFire, is coming to Veeam Backup & Replication 9.5 in the upcoming Update 4.

Key milestones in the Veeam + NetApp Alliance
Veeam continues to deliver deeper integration across the NetApp Data Fabric portfolio to provide our joint customers with the ability to attain the highest levels of application performance, efficiency, agility and Hyper-Availability across hybrid cloud environments. Together with NetApp, we enable organizations to achieve the best RPOs and RTOs for all applications and data through native snapshot-based integrations.

How Veeam integration takes NetApp HCI to Hyper-Available

With Veeam Availability Suite 9.5 Update 3, we released a brand-new framework called the “Universal Storage API.” This set of APIs allows Veeam to accelerate the adoption of storage-based integrations that decrease impact on the production environment, significantly improve RPOs and deliver operational benefits that would not otherwise be attainable.
Let’s talk about how the new Veeam integration with NetApp HCI and SolidFire delivers these benefits.

Backup from Element Storage Snapshots

The Veeam Backup from Storage Snapshots technology is designed to dramatically reduce the performance impact typically associated with traditional API-driven VMware backup on primary hypervisor infrastructure. Because VM data is read from the array-based snapshot rather than from a long-lived VMware snapshot, backups complete faster while the load on production VMware infrastructure drops.

Granular application item recovery from Element Storage Snapshots

If you’re a veteran of enterprise storage systems and VMware, you undoubtedly know the pain of trying to recover individual Windows or Linux files, or application items, from a Storage Snapshot. The good news is that Veeam makes this process fast, easy and painless. With our new integration into Element snapshots, you can quickly recover application items directly from the Storage Snapshot, including:

  • Individual Windows or Linux guest files
  • Exchange items
  • MS SQL databases
  • Oracle databases
  • Microsoft Active Directory items
  • Microsoft SharePoint items

What’s great about this functionality is that it works with Storage Snapshots created by either Veeam or NetApp, and the only requirement is that the VMs be in VMDK format.

Hyper-Available VMs with Instant VM Recovery from Element Snapshots

Everyone knows that time is money, and that every second a critical workload is offline your business is losing money, prestige and possibly even customers. What if I told you that you could recover an entire virtual machine, no matter its size, in a very short timeframe? Sound farfetched? Veeam’s Instant VM Recovery technology, which leverages Element Snapshots for NetApp HCI and SolidFire, makes it a reality.
Not only is this process extremely fast, there is also no performance loss afterward, because once recovered, the VM runs from your primary production storage system!

Veeam Instant VM Recovery on NetApp HCI

Element Snapshot orchestration for better RPO

It’s common to see a nightly or twice daily backup schedule in most organizations. The problem with this strategy is that it leaves your organization with a large data loss potential of 12-24 hours. We call the amount of acceptable data loss your “RPO” or recovery point objective. Getting your RPO as low as possible just makes good business sense. With Veeam and Element Snapshot management, we can supplement the off-array backup schedule with more frequent storage array-based snapshots. One common example would be taking hourly storage-based snapshots in between nightly off-array Veeam backups. When a restore event happens, you now have hourly snapshots, or a Veeam backup to choose from when executing the recovery operation.
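The arithmetic behind that claim is simple enough to sanity-check. The minimal sketch below assumes evenly spaced restore points and purely illustrative intervals; it is not a sizing recommendation.

```python
# Back-of-the-envelope worst-case RPO: with evenly spaced restore points,
# a failure just before the next point loses one full interval.

def worst_case_loss_hours(interval_hours: float) -> float:
    """Worst-case data loss equals the gap between restore points."""
    return interval_hours

nightly_backup_only = worst_case_loss_hours(24.0)  # one off-array backup per night
with_hourly_snaps = worst_case_loss_hours(1.0)     # hourly Element snapshots in between

print(f"Nightly backups only:  up to {nightly_backup_only:g} hours of data loss")
print(f"Plus hourly snapshots: up to {with_hourly_snaps:g} hour of data loss")
```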

Put your Storage Snapshots to work with Veeam DataLabs

Wouldn’t it be great if there were more ways to leverage your investments in Storage Snapshots for additional business value? Enter Veeam DataLabs — the easy way to create copies of your production VMs in a virtual lab protected from the production network by a Veeam network proxy.
The big idea behind this technology is to provide your business with near real-time copies of your production VMs for operations like dev/test, data analytics, proactive DR testing for compliance, troubleshooting, sandbox testing, employee training, penetration testing and much more! Veeam makes the process of test lab rollouts and refreshes easy and automated.

NetApp + Veeam = Better Together

NetApp Storage Technology and Veeam Availability Suite are perfectly matched to create a Hyper-Available data center. Element storage integrations provide fast, efficient backup capabilities, while significantly lowering RPOs and RTOs for your organization.
Find out more on how you can simplify IT, reduce risk, enhance operational efficiencies and increase ROI through NetApp HCI and Veeam.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/0jXxQ6YvSno/native-snapshot-integration-element.html

DR

Veeam Announces Integration with NetApp HCI – StorageReview.com

Veeam Announces Integration with NetApp HCI – StorageReview.com

veeam – Google News

by Michael Rink

Veeam announced that Update 4 of Veeam Backup & Replication, expected later this year, will add native storage integration with Element Software, the storage operating system that powers NetApp HCI and SolidFire. Veeam first joined NetApp’s Alliance program four years ago, in 2014. Since then, the two companies have steadily increased support and integration. Earlier this month, they worked together to allow NetApp to resell Veeam Availability Solutions.

Veeam’s integration with Element snapshots, once complete, will allow IT engineers to quickly recover application items directly from the Storage Snapshot at a very granular level: not just entire databases, but also individual Windows or Linux guest files and Exchange items (emails). Veeam even expects to be able to recover Microsoft SharePoint items and Microsoft Active Directory items directly from the Storage Snapshot, which will be a pretty neat trick. The only requirement is that VMs be in the VMDK format.
With its next update, Veeam expects to be able to recover an entire virtual machine, no matter its size, in a very short timeframe by leveraging Element Snapshots for NetApp HCI and SolidFire. Once recovered, the VM will run from the primary production storage system. Additionally, Veeam is offering support for companies that want to leverage their backup snapshots for testing and development. By quickly cloning the most recent backup of your company’s production environment, Veeam lets you reduce the risk of unexpected integration problems. It’s also useful for troubleshooting (especially those production-only problems), sandbox testing, employee training, and penetration testing.

Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNEfUh2GTOKZ0RgNomsXSMt6ayyeKQ&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=Pb7YW7jeEqORhgHZkpPwDQ&url=https://www.storagereview.com/veeam_announces_integration_with_netapp_hci

DR

Considerations in a multi-cloud world

Considerations in a multi-cloud world

Veeam Software Official Blog  /  David Hill


With the infrastructure world in constant flux, more and more businesses are adopting a multi-cloud deployment model. The challenges that come with it are growing more complex and, in some cases, cumbersome. Consider the impact on the data alone: 10 years ago, all anyone worried about was whether the SAN would stay up and, if it didn’t, whether their data would be protected. Fast forward to today, and even a small business can have data scattered across the globe. Maybe it has a few vSphere hosts in an HQ, with branch offices using workloads running in the cloud or Software as a Service-based applications. Maybe backups are stored in an object storage repository (somewhere, but only one person knows where). This is happening in the smallest of businesses, so as a business grows and scales, the challenges become even more complex.

Potential pitfalls

Now this blog is not about how Veeam manages data in a multi-cloud world, it’s more about how to understand the challenges and the potential pitfalls. Take a look at the diagram below:

Veeam supports a number of public clouds and different platforms. This is a typical scenario in a modern business. Picture the scene: workloads are running on top of a hypervisor like VMware vSphere or Nutanix, with some services running in AWS. The company is leveraging Microsoft Office 365 for its email services (people rarely build Exchange environments anymore) with Active Directory extended into Azure. Throw in some SAP or Oracle workloads, and your data management solution has just gone from “I back up my SAN every night to tape” to “where is my data now, and how do I restore it in the event of a failure?” If worrying about business continuity didn’t keep you awake 10 years ago, it surely does now. This is the impact of modern life. The more agility we provide on the front end for an IT consumer, the more complexity there has to be on the back end.
With the ever-growing complexity, global reach and scale of public clouds, as well as a more hands-off approach from IT admins, protecting the business is a real challenge: not only against an outage, but against a full-scale business failure.

Managing a multi-cloud environment

When looking to manage a multi-cloud environment, it is important to understand these complexities and how to avoid costly mistakes. The simplest approach to any environment, whether it is running on premises or in the cloud, is to consider all the options. That sounds obvious, but it has not always been the case. Where or how you deploy a workload is becoming irrelevant, but how you protect that workload still matters. Think about the public cloud: if you deploy a virtual machine and set the firewall ports to any:any (that would never happen, would it?), you can be pretty sure someone will gain access to that virtual machine at some point. Making sure that workload is protected and recoverable is critical in this instance. The same considerations and requirements apply whether you are running on premises or off premises: how do you protect the data, and how do you recover it in the event of a failure or security breach?

Why use a cloud platform?

This is something often overlooked, but it has become clear in recent years that organizations do not choose a cloud platform for a single, specific reason like cost savings, higher performance or quicker service times, but rather because the cloud is the right platform for a specific application. Sure, individual benefits may come into play, but you should always question the “why” behind any platform selection.
When you’re looking at data management platforms, consider not only what your environment looks like today, but also what it will look like tomorrow. Does the platform you’re purchasing today have a roadmap for the future? If you can see that the company has a clear vision and understanding of what is happening in the industry, then you can feel safe trusting that platform to manage your data anywhere in the world, on any platform. If a roadmap is not forthcoming, or the vendor just doesn’t get the vision you are sharing about your own environment, perhaps it’s time to look at other vendors. It’s definitely something to think about next time you’re choosing a data management solution or platform.
The post Considerations in a multi-cloud world appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/oACkbQrZbW8/multi-cloud-considerations.html

DR

Veeam Hyper-Availability as a Service for Nutanix AHV – Virtualization Review

Veeam Hyper-Availability as a Service for Nutanix AHV – Virtualization Review

veeam – Google News

Veeam Hyper-Availability as a Service for Nutanix AHV

Date: November 15, 2018 @ 11:00 AM PST / 2:00 PM EST
Speaker: Steve Walker, Director of Sales & Marketing at TBConsulting
Veeam Availability for Nutanix AHV as a Service is purpose-built with the Nutanix user in mind! Join this engaging discussion around the beneficial uses of Veeam in your virtual environment. The powerful web-based UI was specifically designed to look and feel like Prism — Nutanix’s management solution for the Acropolis infrastructure stack — while ensuring a streamlined and familiar user experience brought to life by TB Consulting.
Minimize data loss with frequent, fast backups across the entire infrastructure: Veeam Availability for Nutanix AHV as a Service.
Register today!

Duration: 1 Hour


Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNFb2bc6qPpIAnELR0lBIM9AlQuNcg&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=48LZW9itA4TaqgK-soi4DQ&url=https://virtualizationreview.com/webcasts/2018/10/tbconsultnov15-veeam-hyper-availablity-as-a-service.aspx?tc=page0

DR