Win a trip to Miami for VeeamON 2019


Veeam Software Official Blog  /  Eli Afanasyev

With the holiday season fast approaching, we’re proud to look back on another successful year as well as forward to 2019, when we will have even more exciting news for you Veeam folks! Thanks to the continuing support of our customers and partners, we’ve reached a customer satisfaction level 3.5 times higher than the industry average. So, when nine out of ten of our customers say they would recommend us, we couldn’t be happier.
Veeam was also listed on the Forbes Cloud 100 list for the third year in a row, which means that independent analysts and media publications see Veeam as being a step above other industry-leading solutions.
But even with more than 300,000 businesses relying on Veeam, we aren’t resting on our laurels.
Get ready for May — VeeamON is coming to Miami! Back for a fifth installment, the premier conference for Intelligent Data Management will bring together IT professionals from all around the globe to discover the latest news in Availability through inspiring breakout sessions, presentations and networking.

Join us in Miami

To celebrate the holidays, we’re offering FREE, full VeeamON 2019 passes to three lucky winners. Oh yeah, with round-trip flights to Miami and accommodation at a top hotel for the whole duration of the conference.
Enter the competition now for a chance to visit the biggest event in the industry and, of course, enjoy world-famous Miami Beach with its sunsets, seaside, Latin-infused cuisine and vibrant nightlife.
Wishing you all happy holidays and good luck!
The post Win a trip to Miami for VeeamON 2019 appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/EOgFvQRhU4o/free-veeamon-around-trip.html

DR

AWS vs Azure vs Google Cloud: Storage and Compute Comparison


N2WS  /  Cristian Puricica

Choosing a public cloud service provider (CSP) has become a complex decision. Today, it’s no longer a question of which option you should work with, but rather, how to achieve optimal performance and distribute risk across multiple vendors—while containing cloud compute and storage costs at the same time.
In a recent Virtustream/Forrester survey of more than 700 cloud decision makers, 86% of respondents said that their enterprises are deploying workloads across more than one CSP. We learn from the same survey that the prime motivation for adopting a multi-cloud strategy is to improve performance, followed by cost savings and faster delivery times. Today, the three leading CSPs are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), with respective market shares of 62%, 20%, and 12%.
In this post, the first in a two-part series, we will compare and contrast what AWS, Azure, and GCP offer in terms of storage, compute, and management tools. In the following post, we will discuss big data and analytics, serverless, machine learning, and more. Armed with this information, it should be easier for you to map out your multi-cloud strategy.

Service-to-Service Comparison

Enterprises typically look to CSPs for three levels of service: Infrastructure as a Service (IaaS, i.e., outsourcing of self-service compute-storage capacity); Platform as a Service (PaaS, i.e., complete environments for developing, deploying, and managing web apps); and secure, performant hosting of Software as a Service (SaaS) apps.
Keeping these levels in mind, we have chosen to compare:

  1. Storage (IaaS)
  2. Compute (IaaS)
  3. Management Tools (IaaS, PaaS, SaaS)

Note:   We won’t be comparing pricing since it is quite difficult to achieve apples-to-apples comparisons without a very detailed use case. Once you have determined your organization’s CSP requirements, you can use the CSP price calculators to check if there are significant cost differences: AWS, Azure, GCP.

Storage

The CSPs offer a wide range of object, block, and file storage services for both primary and secondary storage use cases. You will find that object storage is well suited to handling massive quantities of unstructured data (images, videos, and so on), while block storage provides better performance for structured transactional data. Storage tiers offer varying levels of accessibility and latency to cost-effectively meet the needs of both active (hot) and inactive (cold) data.
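As a rough illustration of how hot-to-cold tiering is typically expressed on the AWS side, here is a minimal boto3 sketch that transitions aging objects to Glacier via a lifecycle rule. The bucket name, prefix and 30-day threshold are illustrative assumptions, not values from this article; Azure and GCP expose comparable lifecycle controls that are not shown here.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative only: transition objects under "backups/" to Glacier after 30 days,
# keeping recent (hot) data in the Standard tier. Bucket name and prefix are assumptions.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-cold-backups-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```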
In terms of differentiators, Azure takes the lead in managed DR and backup services. When it comes to managing hybrid architectures, AWS and Azure have built-in services, while GCP relies on partners.

Compute

The CSPs offer a range of predefined instance types that define, for each virtual server launched, the type of CPU (or GPU) processor, the number of vCPU or vGPU cores, RAM, and local temporary storage. The instance type determines compute and I/O speeds and other performance parameters, allowing you to optimize price/performance according to different workload requirements. It should be noted that GCP, in addition to its predefined VM types, also offers Custom Machine Types.
The CSPs offer pay-as-you-go PaaS options that automatically handle the deployment, scaling, and balancing of web applications and services developed in leading frameworks such as Java, Node.js, PHP, Python, Ruby, and more.
AWS offers auto scaling at no additional charge, based on scaling plans that you define for all the relevant resources used by the application. Azure offers auto scaling per app, or as part of platforms that manage groups of apps or groups of virtual machines. GCP offers auto scaling only within the context of its Managed Instance Groups platform.
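To make the AWS side of that comparison concrete, the hedged boto3 sketch below attaches a target-tracking scaling policy to an existing Auto Scaling group; the group name and the 50% CPU target are assumptions for illustration, not recommendations.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Illustrative only: a target-tracking policy that keeps average CPU at ~50%
# across the group. The group name "web-asg" is an assumption.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```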
Both AWS and Azure offer services that let you create a virtual private server in a few clicks, but GCP does not yet offer this capability.

Management Tools

As you may have already experienced, managing and orchestrating cloud resources across multiple business units and complex infrastructures can be a daunting challenge. All three CSPs offer platforms and services to streamline and provide visibility into the organization, configuration, provisioning, deployment, and monitoring of cloud resources. These offerings range from predefined deployment templates and catalogs of approved services to centralized access control. However, AWS and Azure seem to have invested more heavily in this area than GCP, and AWS even offers outsourced managed services (AWS Managed Services).

And the Winner Is…

In today’s multi-cloud world, you shouldn’t be seeking to identify a single “winner,” but rather how to optimally distribute workloads across multiple CSPs. As you map out your multi-cloud strategy, bear in mind that in the key categories of storage, compute, and management tools, AWS and Azure offer a more complete and mature stack than GCP. In general, AWS’ services and products are the most comprehensive, but they can also be challenging to navigate and manage. Also consider that if your company is already using Microsoft’s development tools, Windows servers, and Office productivity applications, you will find it very easy to integrate with Azure.
In the second part of this blog series, we will compare how the CSPs support next-generation technologies such as containers, serverless, analytics, and machine learning. We will also look at higher level issues, such as user friendliness, security, and partnership ecosystems, and provide some final thoughts on how to choose the right CSP(s) for your organization’s needs.
Looking for an AWS Data Protection solution? Try N2WS Backup & Recovery (CPM) for FREE!


Original Article: https://n2ws.com/blog/aws-vs-azure-vs-google-cloud

DR

Meeting the intelligent data management needs of 2019 – Intelligent CIO Africa


“veeam” – Google News

Meeting the intelligent data management needs of 2019

Dave Russell, Vice President for Product Strategy at Veeam

Dave Russell, Vice President for Product Strategy at Veeam, outlines five intelligent data management needs CIOs need to know about in 2019.
The world of today has changed drastically due to data. Every process, whether an external client interaction or internal employee task, leaves a trail of data. Human and machine generated data is growing 10 times faster than traditional business data, and machine data is growing at 50 times that of traditional business data.
With the way we consume and interact with data changing daily, the number of innovations to enhance business agility and operational efficiency are also plentiful. In this environment, it is vital for enterprises to understand the demand for Intelligent Data Management in order to stay one step ahead and deliver enhanced services to their customers.
I’ve highlighted five hot trends for 2019 that decision-makers need to know about. Keeping the Europe, Middle East and Africa (EMEA) market in mind, here are my views:

  1. Multi-Cloud usage and exploitation will rise

With companies operating across borders and the reliance on technology growing more prominent than ever, an expansion in multi-cloud usage is almost inevitable. IDC estimates that customers will spend US$554 billion on cloud computing and related services in 2021, more than double the level of 2016.
On-premises data and applications will not become obsolete, but the deployment models for your data will expand with an increasing mix of on-prem, SaaS, IaaS, managed clouds and private clouds.
Over time, we expect more of the workload to shift off-premises, but this transition will take place over years, and we believe that it is important to be ready to meet this new reality today.

  2. Flash memory supply shortages and prices will improve in 2019

According to a report by Gartner in October this year, flash memory supply is expected to revert to a modest shortage in mid-2019, with prices expected to stabilise largely due to the ramping of Chinese memory production.
Greater supply and improved pricing will result in greater use of flash deployment in the operational recovery tier, which typically hosts the most recent 14 days of backup and replica data. We see this greater flash capacity leading to broader usage of instant mounting of backed up machine images (or copy data management).
Systems that offer copy data management capability will be able to deliver value beyond availability, along with better business outcomes. Example use cases for leveraging backup and replica data include DevOps, DevSecOps and DevTest, patch testing, analytics and reporting.

  3. Predictive analytics will become mainstream and ubiquitous

The predictive analytics market is forecast to reach $12.41 billion by 2022, marking a 272% increase from 2017, at a CAGR of 22.1%.
Predictive analytics based on telemetry data, essentially Machine Learning (ML) driven guidance and recommendations, is one of the categories most likely to become mainstream and ubiquitous.
Machine Learning predictions are not new, but we will begin to see them utilising signatures and fingerprints, containing best practice configurations and policies, to allow the business to get more value out of the infrastructure it has deployed and is responsible for.
Predictive analytics, or diagnostics, will assist us in ensuring continuous operations, while reducing the administrative burden of keeping systems optimised. This capability becomes vitally important as IT organisations are required to manage an increasingly diverse environment, with more data, and with more stringent service level objectives.
As predictive analytics become more mainstream, SLAs and SLOs are rising and businesses’ SLEs, Service Level Expectations, are even higher. This means that we need more assistance, more intelligence in order to deliver on what the business expects from us.

  4. The ‘versatalist’ (or generalist) role will increasingly become the new operating model for the majority of IT organisations.

While the first two trends were technology-focused, the future of digital is still analogue: it’s people. Talent shortages, combined with the blending of on-premises infrastructure with public cloud and SaaS, are leading to broader technicians with backgrounds in a wide variety of disciplines, and increasingly a greater business awareness as well.
Standardisation, orchestration and automation are contributing factors that will accelerate this, as more capable systems allow for administrators to take a more horizontal view rather than a deep specialisation.
Specialisation will of course remain important but as IT becomes more and more fundamental to business outcomes, it stands to reason that IT talent will likewise need to understand the wider business and add value across many IT domains.
Yet, while we see these trends challenging the status quo next year, some things will not change. There are always constants in the world, and we see two major factors that will remain top-of-mind for companies everywhere.

  • Frustration with legacy backup approaches and solutions

The top three vendors in the market continue to lose market share in 2019. In fact, the largest provider in the market has been losing share for 10 years. Companies are moving away from legacy providers and embracing more agile, dynamic, disruptive vendors, such as Veeam, to offer the capabilities that are needed to thrive in the data-driven age.

  • The pain points of the Three Cs: Cost, complexity and capability

These Three Cs continue to be why people in data centres are unhappy with solutions from other vendors. Broadly speaking, these are excessive costs, unnecessary complexity and a lack of capability, which manifests as speed of backup, speed of restoration or instant mounting to a virtual machine image. These three major criteria will continue to dominate the reasons why organisations augment or fully replace their backup solution.

  5. The arrival of the first 5G networks will create new opportunities for resellers and CSPs to help collect, manage, store and process the higher volumes of data

In early 2019 we will witness the first 5G-enabled handsets hitting the market at CES in the US and MWC in Barcelona. I believe 5G will likely be most quickly adopted by businesses for Machine-to-Machine communication and Internet of Things (IoT) technology. Consumer mobile network speeds have reached a point where they are probably as fast as most of us need with 4G.
2019 will be more about the technology becoming fully standardised and tested, and future-proofing devices to ensure they can work with the technology when it becomes more widely available, and EMEA becomes a truly Gigabit Society.
For resellers and cloud service providers, excitement will centre on the arrival of new revenue opportunities leveraging 5G or infrastructure to support it. Processing these higher volumes of data in real-time, at a faster speed, new hardware and device requirements, and new applications for managing data will all present opportunities and will help facilitate conversations around edge computing.

Original Article: http://www.intelligentcio.com/africa/2018/12/06/meeting-the-intelligent-data-management-needs-of-2019/

DR

Veeam updates Availability Suite 9.5, N2WS platform – Storage Soup – TechTarget


“veeam” – Google News


Veeam updates to its flagship product and AWS data protection include the ability to migrate old data to cheaper cloud or object storage.
Update 4 of Veeam Availability Suite 9.5 is scheduled to be released this month to partners. General availability will follow at a date to be determined. Veeam’s flagship Availability Suite consists of Veeam Backup & Replication and the Veeam ONE monitoring and reporting tool.
The new Cloud Tier feature of the Scale-out Backup Repository within Backup & Replication facilitates moving older backup files to cheaper storage, such as cloud or on-premises object storage, according to a presentation at the VeeamON Virtual conference this week. Targets include Amazon Simple Storage Service (S3), Microsoft Azure Blob Storage and S3-compatible object storage. Files remain on premises, but only as a shell.
The Veeam updates also feature Direct Restore to AWS. The process, which takes backup files and restores them into the public cloud, works in the same way as Veeam’s Direct Restore to Azure, said Michael Cade, technologist of product strategy.
A Staged Restore in the Veeam DataLabs copy data management tool helps with General Data Protection Regulation compliance, specifically the user’s “right to be forgotten,” Cade said. The DataLab can run a script removing personal data from a virtual machine backup, then migrate the VM to a production environment.
The Veeam updates also include ransomware protection. Also within Veeam DataLabs, the Secure Restore feature enables an optional antivirus scan to ensure the given restore point is uninfected before restoring. Veeam is not going to prevent attacks, but it can help with remediations, Cade said.
In addition, intelligent diagnostics in Veeam ONE analyze Backup & Replication debug logs, looking for known problems, and proactively report or remediate common configuration issues before they impact operations.

N2WS data protection tiers up

The Veeam updates include N2WS, a company it acquired at the end of last year. N2WS, which provides data protection for AWS workloads, released Backup & Recovery 2.4 last week, enabling users to choose from different storage tiers and reduce the cost of data requiring long-term retention.
The vendor launched Amazon Elastic Block Store snapshot decoupling and the N2WS-enabled Amazon S3 repository. Customers can move snapshots to the repository.
That repository saves up to 40% on costs, said Ezra Charm, vice president of marketing at N2WS.
“Data storage in AWS can get expensive,” especially if an organization is looking at long-term retention, for example at least two years, Charm said during the virtual conference.
Possible uses include archiving data for compliance in S3, a cheaper storage tier. Managed service providers can also use it to lower storage costs for clients.
In addition, the N2WS update features VPC Capture and Clone. That capability captures VPC settings and clones them to other regions, which eliminates the need for manual configuration during disaster recovery, according to N2WS. An enhanced RESTful API automates backup and recovery for business-critical data.
“Any of the data you store [in AWS] is clearly your responsibility,” Charm said.
AWS data protection is an emerging market. In June 2018, Druva acquired CloudRanger, another company that provides backup and recovery of AWS data.
While human error is the most likely reason AWS data protection is needed, Charm said, there are many other possible issues.
“Ransomware in AWS has been documented,” he said.
N2WS Backup & Recovery 2.4 is available now in the AWS Marketplace.

Original Article: https://searchstorage.techtarget.com/blog/Storage-Soup/Veeam-updates-Availability-Suite-95-N2WS-platform

DR

Hospitality group turns to Veeam for round-the-clock availability – Intelligent CIO Africa


“veeam” – Google News

Hospitality group turns to Veeam for round-the-clock availability

Peermont Hotels, Casinos and Resorts operates 12 properties across South Africa, Botswana and Malawi

Peermont Hotels, Casinos and Resorts is an award-winning hospitality and entertainment company which operates 12 properties located across South Africa, Botswana and Malawi. Renowned for its excellence in the design, development, management, ownership and operation of multifaceted hospitality and gaming facilities, the group offers guests fine dining, relaxing hotel stays, exciting casino action, live entertainment, soothing spa treatments, and efficient conferencing and sporting activities, all in unique, safe and secure themed settings.
Hyper-availability of data and systems is fundamental to the success of the business, which led Peermont to turn to Veeam and implement its Veeam Availability Suite, Veeam ONE and Veeam Cloud Connect solutions.
“We do not have the luxury of a downtime window or shutting down over the weekend for system maintenance,” said Ernst Karner, Group IT Manager and CIO of Peermont Group.
“For us, customer service is fundamental and there are no allowances for not delivering a round-the-clock value proposition.
“Being a hospitality and entertainment company, we do not have the luxury of a downtime window or shutting operations over the weekend for system maintenance; we are always on the go with our business.”
Even though many other organisations have only recently started embracing the need to have hyper-available data, Peermont has been operating in this always-on environment since it opened its doors more than two decades ago.
“Historically, backing up and restoring data was not an easy proposition. We had multiple vendors claiming various features that sounded nice on paper, but the reality worked quite differently. Unfortunately, no organisation can claim to not have any downtime or availability issues. The same can be said for us, where drive failures were quite common in the past.”
Karner says restoring data is always used as a last resort because of the impact it could have on live systems.
“Before Veeam, we had a few incidents where recovery from our backups was not possible. We did not have the level of comfort that all our backups (stretching to more than 500 virtual machines, 275 terabytes of data, and 13 data centres) were done reliably. At the time, it was impossible to guarantee the entire company being recoverable in the event of a disaster.”
Karner says he understands customer expectations and feels that it is unforgivable in this day and age for systems to go down, irrespective of the industry a business operates in. For visitors, things like internet access (which is provided free to all customers at all Peermont properties) have become a commodity. If Peermont is unable to deliver this, people start becoming very unforgiving.
“Customers expect to have access to services at any time of day or night. As such, we had to review our capacity across all spheres of the business (network, disk storage, and compute availability). For us, there is no such thing as a Plan B. Uptime is critical and we identified Veeam as our partner of choice to address our hyper-availability requirements.”
However, unlike other businesses, Peermont requires its disaster recovery facilities to remain onsite due to its unique availability requirements. If systems go down, there is an immediate financial impact on the business. That said, it does supplement its on-premises business continuity strategy with a hosted component as an additional fail-safe. For example, if central reservations are offline, no bookings can be made through any means. Guests will simply choose another hotel to stay at.
Similarly, if slot machines go down there is a real-time impact on revenues. Peermont can potentially lose considerable gross operating profit if downtime occurs. Not being available for two to three hours could therefore be disastrous to the business’s bottom line. Beyond the financial impact, the reputational damage could also be significant.
“Customers might forgive you once if systems are not available. But if this starts becoming an ongoing concern, they will migrate away from you to a competitor who can deliver on their expectations.”
Veeam therefore had to provide a strategic solution that could allay these concerns and provide Peermont with the peace of mind needed to deliver continuous services to customers.
The Veeam solution
As a precursor to implementing Veeam Availability Suite, Peermont embarked on a virtualisation journey in 2005. Gradually it expanded to 70 hosts across the group, moving away from physical business applications.
By improving efficiencies, Peermont can also keep costs down and run analysis on data to better understand customer needs.
“We have a centralised data warehouse that collects information from all our casinos,” added Karner.
“Everything from supplier data and procurement systems to the requests we receive from tour operators requires several Extract, Transform, and Load (ETL) processes to manage effectively.”
From an intelligent data management perspective, Peermont is focused on delivering a completely integrated ecosystem designed to give customers the best value possible. It is about using the technology, data, and intelligence from the point of arrival to the point of departure.
“We identify guests from the moment they arrive at a property and tailor their experience according to the historical data and insights we have built on them,” said Karner.
“It is about using their preferences when it comes to where they dine, what they enjoy eating, what are their favourite tables to play at, and even what shows they prefer to watch, and creating a unique environment tailored to their needs.”
Customer satisfaction is therefore an essential deliverable, but this needs to be done without adding to the cost or making it a more complex environment. Beyond this balance, Peermont also needs to meet the regulatory requirements of the South African National and Provincial Gambling Boards.
Karner says this requires Peermont to deliver quarterly reports that show how data is backed up at casinos and illustrate its recoverability. This sees Peermont using a team of people to request a tape from its storage facility, restore it to a regulated part of the server, boot it up, and verify with the Gambling Board that the recovery was successful.
As part of these compliance fundamentals, Peermont must also make daily offsite backups of the regulated part of the server. Previously, this required tape cartridges to be delivered by courier to an external facility. And when the time came to restore the data, it was a time-consuming process of recalling the tapes. With Veeam Cloud Connect, this has changed as Peermont now has an agreement in place with an external cloud service for the secure storage of its regulated data. Veeam Cloud Connect completely automates the recovery and offsite storage of this data.
“Thanks to Veeam, we have been able to take a process that would take a day at each property, requiring numerous employees, and fully automate it to take 15 minutes,” said Karner.
“I sleep a lot better at night knowing I have a system that is reliable and can recover within minutes.”
The Results

  • Automation of backup environment results in time and money savings
    Using Veeam, backups that previously took Peermont a day to complete at each of its properties can now be done in under 15 minutes. And with the configuration and maintenance of its hyper-available environment centralised at the head office using Veeam, Peermont can rest assured that a lack of data access resulting in significant loss of revenue is now a thing of the past.
  • Reliability and recoverability of backups
    Using Veeam, Peermont ensures the integrity of its backups and guarantees being able to restore critical data in the event of a disaster. With Veeam, its backups work without fail, giving the company confidence that risk management is taken care of.
  • Scalability to cater for individual property requirements
    Veeam scales according to the unique needs of each property in the Peermont Group. Whether it is a smaller casino with a limited number of virtual machines or head offices with continuously expanding data that runs into the terabytes, Veeam delivers on both ends of the spectrum.

Kate Mollett, Regional Manager for Africa South at Veeam, said the company provided a strategic solution that could address all customer requirements and provide Peermont with the peace of mind needed to deliver continuous services to customers.
“They might operate casinos and hotels, but they needed to bet on a sure-thing, rather than take a roll of the dice,” she said.
“As an organisation, Peermont embraced hyper-availability from the day it opened its doors and required a partner to deliver on its own exacting expectations, to help it meet those of its high-class clientele. Veeam has dealt Peermont the strongest hand possible to realise its data management goals and ensure the integrity of its backups.
“Veeam provides Peermont with the ability to restore critical data in the event of just about any disaster, rapidly. When it comes to data management, you don’t want to gamble with your reputation or your service delivery.”

Original Article: http://www.intelligentcio.com/africa/2018/12/12/hospitality-group-turns-to-veeam-for-round-the-clock-availability/

DR

Why it’s so easy to get started with Veeam


Veeam Software Official Blog  /  Melissa Palmer


When I first started working with Veeam Software before becoming a Veeam Vanguard and joining the Veeam Product Strategy Team, one of the things that was so appealing was that it was so easy to get started with and use.
Before I knew it, I was backing up and restoring virtual machines with ease. Now, I want to take a closer look at what makes Veeam products so easy to use and get started with.

Veeam just works

Like many people who love technology, I tend to leap before I look. Many times, beyond the installation requirements, I will not actually read any how-to-get-started guides if I am working in a lab. Has this come back and bitten me at times? Sure it has, but with Veeam, it has not.
This goes beyond Veeam’s flagship product, Veeam Backup & Replication, and extends to things like Veeam ONE and Veeam Availability Orchestrator, as well as advanced features like storage integration. I haven’t met a Veeam product I found difficult to install.

Top-notch product documentation

After installing Veeam Backup & Replication and backing up and restoring a couple of virtual machines, I wondered what else I was missing. As it turns out, I was missing quite a lot! The good news is that Veeam product documentation is top notch.
When I was getting started with Veeam, I primarily worked with Veeam Backup & Replication for VMware vSphere. There were two documents I read cover to cover:

  • The Veeam Backup & Replication Evaluator’s Guide for VMware vSphere
  • The Veeam Backup & Replication User Guide

Here is why these two documents are so useful.
The Veeam Backup & Replication Evaluator’s Guide for VMware vSphere (there is also one for Hyper-V if you prefer that hypervisor) covers key getting-started information like system requirements, among other things. It also walks you through the complete setup of Veeam Backup & Replication, and more importantly, it walks you through the process of performing your first backups and restores.
Beyond simply restoring a full VM, which as we know is not always required, the Evaluator’s Guide shows you how to restore:

  • Microsoft SQL databases
  • Guest OS files
  • VM virtual disks
  • VM files (VMDKs, VMXs, etc.)

Once you are comfortable with the basics of Veeam Backup & Replication, the Veeam Backup & Replication User Guide has the full details of all the great things Veeam can do.
Some of my favorite things beyond the basics of Veeam Backup & Replication are:

The Veeam community

One of the greatest parts of Veeam products is the community surrounding them. The Veeam community is huge, and there are a couple of ways to start interacting. First of all is the Veeam Community Forums, which is your one-stop shop on the web for interacting with other Veeam users, as well as some great technology experts who work at Veeam. You can sign up for free for the Veeam Community Forums right here.
Next, we have the Veeam Vanguards. They are a great group of Veeam enthusiasts and IT pros who share their knowledge not only on the Veeam Community Forums, but on Twitter and on their blogs. Simply search for the #VeeamVanguard hashtag on Twitter to get started interacting with these great folks.
It could not be easier to get started using Veeam products. Veeam offers a free 30-day trial of many of their products, including the flagship Veeam Availability Suite, which includes Veeam Backup & Replication and Veeam ONE. You can download the free trial here.
If you are an IT pro, you can also get a completely free NFR license for Veeam Availability Suite. Be sure to fill out the request form here, and get started with Veeam in your lab.
The post Why it’s so easy to get started with Veeam appeared first on Veeam Software Official Blog.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/zRVJupF3tog/why-its-so-easy-to-get-started-with-veeam.html

DR

10 Tips for a Solid AWS Disaster Recovery Plan


N2WS  /  Cristian Puricica

AWS is a scalable and high-performance computing infrastructure used by many organizations to modernize their IT. However, no system is invulnerable, and if you want to ensure business continuity, you need to have some kind of insurance in place. Disaster events do occur, whether they are a malicious hack, natural disaster, or even an outage due to a hardware failure or human error. And while AWS is designed in such a way to greatly offset some of these events, you still need to have a proper disaster recovery plan in place. Here are 10 tips you should consider when building a DR plan for your AWS environment.

1. Ship Your EBS Volumes to Another AZ/Region

By default, EBS volumes are automatically replicated within the Availability Zone (AZ) where they were created, in order to increase durability and offer high availability. And while this is a great way to avoid relying on a lone copy of your data, you are still tied to a single point of failure, since your data is located only in one AZ. In order to properly secure your data, you can either replicate your EBS volumes to another AZ, or even better, to another region.
To copy EBS volumes to another AZ, you simply create a snapshot of it, and then recreate a volume in the desired AZ from that snapshot. And if you want to move a copy of your data to another region, take a snapshot of your EBS, and then utilize the “copy” option and pick a region where your data will be replicated.
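A minimal boto3 sketch of that snapshot-and-copy flow follows; the region names and volume ID are assumptions for illustration only.

```python
import boto3

# Illustrative only: region names and the volume ID are assumptions.
source_region, dest_region = "us-east-1", "us-west-2"
ec2_src = boto3.client("ec2", region_name=source_region)
ec2_dst = boto3.client("ec2", region_name=dest_region)

# 1. Snapshot the EBS volume in its home region.
snap = ec2_src.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="DR copy for cross-region protection",
)
ec2_src.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Copy the completed snapshot into the DR region.
copy = ec2_dst.copy_snapshot(
    SourceRegion=source_region,
    SourceSnapshotId=snap["SnapshotId"],
    Description="Cross-region DR copy",
)
print("DR snapshot:", copy["SnapshotId"])

# 3. To restore in another AZ, create a volume from the snapshot, e.g.:
# ec2_dst.create_volume(SnapshotId=copy["SnapshotId"], AvailabilityZone="us-west-2a")
```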

2. Utilize Multi-AZ for EC2 and RDS

Just like your EBS volumes, your other AWS resources are susceptible to local failures. Making sure you are not relying on a single AZ is probably the first step you can take when setting up your infrastructure. For your database needs covered by RDS, there is a Multi-AZ option you can enable in order to create a backup RDS instance, which will be used in case the primary one fails by switching the CNAME DNS record of your primary RDS instance. Keep in mind that this will generate additional costs, as AWS charges you double if you want a multi-AZ RDS setup compared to having a single RDS instance.
Your EC2 instances should also be spread across more than one AZ, especially the ones running your production workloads, to make sure you are not seriously affected if a disaster happens. Another reason to utilize multiple AZs with your EC2 instances is the potential lack of available resources in a given AZ, which can occur sometimes. To properly spread your instances, make sure AutoScaling Groups (ASG) are used, along with an Elastic Load Balancer (ELB) in front of them. ASG will allow you to choose multiple AZs in which your instances will be deployed, and ELB will distribute the traffic between them in order to properly balance the workload. If there is a failure in one of the AZs, ELB will forward the traffic to others, therefore preventing any disruptions.
With EC2 instances, you can even go across regions, in which case you would have to utilize Route53 (a highly available and scalable cloud DNS service) to route the traffic, as well as do the load balancing between regions.
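As a hedged illustration of spreading instances across AZs, the boto3 sketch below creates an Auto Scaling group spanning two subnets in different AZs behind a load balancer target group; the launch template name, subnet IDs and target group ARN are assumptions, not values from this article.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Illustrative only: the launch template, subnet IDs (one per AZ), and the
# load balancer target group ARN are assumptions.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    # Two subnets in different AZs so the group survives a single-AZ failure.
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
)
```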

3. Sync Your S3 Data to Another Region

When we consider storing data on AWS, S3 is probably the most commonly used service. That is the reason why, by default, S3 duplicates your data behind the scenes to multiple locations within a region. This creates high durability, but data is still vulnerable if your region is affected by a disaster event. For example, there was a full regional S3 outage back in 2017 (which actually hit a couple of other services as well), which led to many companies being unable to access their data for almost 13 hours. This is a great (and painful) example of why you need a disaster recovery plan in place.
In order to protect your data, or just provide even higher durability and availability, you can use the cross-region replication option which allows you to have your data copied to a designated bucket in another region automatically. To get started, go to your S3 console and enable cross-region replication (versioning must be enabled for this to work). You will be able to pick the source bucket and prefix but will also have to create an IAM role so that your S3 can get objects from the source bucket and initiate transfer. You can even set up replication between different AWS accounts if necessary.
Do note though that the cross-region sync starts from the moment you enable it, so any data that already exists in the bucket prior to this will have to be synced by hand.
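A minimal boto3 sketch of enabling versioning and cross-region replication is shown below; the bucket names and IAM role ARN are assumptions, and the role must grant S3 the replication permissions described above.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative only: bucket names and the IAM role ARN are assumptions; the
# role must allow S3 to read the source bucket and replicate into the
# destination bucket (which lives in another region).
SOURCE_BUCKET = "example-source-bucket"
DEST_BUCKET_ARN = "arn:aws:s3:::example-dr-bucket"
REPLICATION_ROLE_ARN = "arn:aws:iam::123456789012:role/s3-crr-role"

# Versioning is a prerequisite for cross-region replication.
s3.put_bucket_versioning(
    Bucket=SOURCE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE_ARN,
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Prefix": "",  # replicate all objects created from now on
                "Destination": {"Bucket": DEST_BUCKET_ARN},
            }
        ],
    },
)
```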

4. Use Cross-Region Replication for Your DynamoDB Data

Just like your data residing in S3, DynamoDB only replicates data within a region. For those who want to have a copy of their data in another region, or even support for multi-master writes, DynamoDB global tables should be used. These provide a managed solution that deploys a multi-region multi-master database and propagates changes to various tables for you. Global tables are not only great for disaster recovery scenarios but are also very useful for delivering data to your customers worldwide.
Another option would be to use scheduled (or one-time) jobs which rely on EMR to back up your DynamoDB tables to S3, which can be later used to restore them to, not only another region, but also another account if needed. You can find out more about it here.
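For the global tables option, a hedged boto3 sketch follows; the table name and regions are assumptions, and for the original (2017.11.29) flavour of global tables, identical empty tables with streams enabled must already exist in each listed region.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Illustrative only: the table name and regions are assumptions. The table
# must already exist (empty, with streams enabled) in each listed region
# before the global table can be created.
dynamodb.create_global_table(
    GlobalTableName="orders",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
```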

5. Safely Store Away Your AWS Root Credentials

It is extremely important to understand the basics around security on AWS, especially if you are the owner of the account or the company. AWS root credentials should ONLY be used to create initial users with admin privileges, who would take over from there. The root password should be stored away safely, and programmatic keys (Access Key ID and Secret Access Key) should be disabled if already created.
Somebody getting access to your admin keys would be very bad, especially if they have malicious intentions (disgruntled employee, rival company, etc.), but getting your root credentials would be even worse. If a hack like this happens, your root user is the one you would use to recover, whether to disable all other affected users, or contact AWS for help. So, one of the things you should definitely consider is protecting your account with multi-factor authentication (MFA), preferably a hardware version.
The advice to protect your credentials sometimes sounds like a broken record, but many don’t understand the actual severity of this, and companies have gone out of business because of this oversight.
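As a small, hedged illustration of auditing the root account programmatically, the boto3 sketch below checks the IAM account summary for root MFA and root access keys; it is a quick sanity check under those assumptions, not a full security review.

```python
import boto3

iam = boto3.client("iam")

# Illustrative only: a quick sanity check that the root account has MFA
# enabled and that no root access keys exist.
summary = iam.get_account_summary()["SummaryMap"]
if summary.get("AccountMFAEnabled", 0) != 1:
    print("WARNING: the AWS root account does not have MFA enabled")
if summary.get("AccountAccessKeysPresent", 0) != 0:
    print("WARNING: the AWS root account still has active access keys")
```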

6. Define your RTO and RPO

Recovery Time Objective (RTO) represents the allowed time it would take to restore a process back to service, after a disaster event occurs. If you guarantee an RTO of 30 minutes to your clients, it means that if your service goes down at 5 p.m., your recovery process should have everything up and running again within half an hour. RTO is important to help determine the disaster recovery strategy. If your RTO is 15 minutes or less, it means that you potentially don’t have time to reprovision your entire infrastructure from scratch. Instead, you would have some instances up and running in another region, ready to take over.
When looking at recovering data from backups, RTO defines which AWS services can be used as part of disaster recovery. For example, if your RTO is 8 hours, you will be able to utilize Glacier as backup storage, knowing that you can retrieve the data within 3–5 hours using standard retrieval. If your RTO is 1 hour, you can still opt for Glacier, but expedited retrieval costs more, so you might choose to keep your backups in S3 standard storage instead.
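To tie the retrieval tiers back to RTO, here is a hedged boto3 sketch that restores an archived object from Glacier using the Standard tier; the bucket, key and retention days are assumptions, and you would switch the tier to Expedited only when your RTO demands it.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative only: bucket and key are assumptions. Pick the retrieval tier
# that fits your RTO: "Standard" (roughly 3-5 hours) when the RTO allows it,
# "Expedited" (minutes, at a higher cost) when it does not.
s3.restore_object(
    Bucket="example-archive-bucket",
    Key="backups/db-2018-12-01.dump",
    RestoreRequest={
        "Days": 2,  # how long the restored copy stays available in S3
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```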
Recovery Point Objective (RPO) defines the acceptable amount of data loss measured in time, prior to a disaster event happening. If your RPO is 2 hours, and your system has gone down at 3 p.m., you must be able to recover all the data up until 1 p.m. The loss of data from 1 p.m. to 3 p.m. is acceptable in this case. RPO determines how often you have to take backups, and in some cases continuous replication of data might be necessary.

7. Pick the Correct DR Scenario for Your Use Case

The AWS Disaster Recovery white paper goes to great lengths to describe various aspects of DR on AWS, and does a good job of covering four basic scenarios (Backup and Restore, Pilot Light, Warm Standby and Multi Site) in detail. When creating a plan for DR, it is important to understand your requirements, but also what each scenario can provide for you. Your needs are also closely related to your RTO and RPO, as those determine which options are viable for your use case.
These DR plans can be very cheap (if you rely on simple backups only for example), or very costly (multi-site effectively doubles your cost), so make sure you have considered everything before making the choice.

8. Identify Mission Critical Apps and Data and Design Your DR Strategy Around Them

While all your applications and data might be important to you or your company, not all of them are critical for running a business. In most cases not all apps and data are treated equally, due to the additional cost it would create. Some things have to take priority, both when making a DR plan, and when restoring your environments after a disaster event. An improper prioritization will either cost you money, or simply risk your business continuity.

9. Test your Disaster Recovery

Disaster Recovery is more than just a plan to follow in case something goes wrong. It is a solution that has to be reliable, so make sure it is up to the task. Test your entire DR process thoroughly and regularly. If there are any issues, or room for improvement, give it the highest possible priority. Also don’t forget to focus on your technical people as well; they too need to be up to the task. Have procedures in place to familiarize them with every piece of the DR process.

10. Consider Utilizing 3rd-party DR Tools

AWS provides a lot of services, and while many companies won’t ever use the majority of them, for most use cases you are being provided with options. But having options doesn’t mean that you have to solely rely on AWS. Instead, you can consider using some 3rd-party tools available in AWS Marketplace, whether for disaster recovery or something else entirely. N2WS Backup & Recovery is the top-rated backup and DR solution for AWS that creates efficient backups and meets aggressive recovery point and recovery time objectives with lower TCO. Starting with the latest version, N2WS Backup & Recovery offers the ability to move snapshots to S3 and store them in a Veeam repository format. This new feature enables organizations to achieve significant cost-savings and a more flexible approach toward data storage and retention. Learn more about this here.

Summary

Disaster recovery planning should be taken very seriously; nonetheless, many companies don’t invest enough time and effort to properly protect themselves, leaving their data vulnerable. And while people will often learn from their mistakes, it is much better not to make them in the first place. Make disaster recovery planning a priority and consider the tips we have covered here, but also do further research.
Try N2WS Backup & Recovery 2.4 for FREE!


Original Article: https://n2ws.com/blog/aws-cloud/how-to-create-a-disaster-recovery-plan-for-aws

DR

Chaining plans in Veeam Availability Orchestrator


Notes from MWhite  /  Michael White


So you have installed VAO, and have tested your plans and you are ready for a crisis.  But you want to know how you can trigger one plan, and have them all go.  Right?  I can help with that.
The overview is: you do plans, test them, and when happy you chain them. But you need to make sure the order of recovery is what it needs to be. For example, databases should be recovered before the applications that need them. In a crisis it is best to have email working so management can reassure the market, customers and employees.
In another article I am working on, I will show you a method of recovering multi-tier applications fast, rather than the slow method – which I suspect will be the default for many as it is the obvious one, and yet there is a much better way to do it.

Starting Point

In my screenshot below you can see my plans.  I want to recover EMail Protection first, followed by SQL, and finally View.  View requires SQL, and I need email recovered first so I can tell my customers, and employees that we are OK.

And yes, my EMail protection plan is not verified but I am having trouble with it so it needs more time. But we can still chain plans.  When you do this, make sure all your plans are verified.
The next step is to right click on Email Protection Plan and select Schedule.  You can also do that under the Launch menu too.

Yes, we are not going to schedule a failover, but this is the path we must take to chain these plans together so we do the failover on the first and they all go one after another.
We need to authenticate – we don’t want just anyone doing this, after all. Next we see a schedule screen, so you need to schedule a failover – in the future. Not to worry, we will remove it later.

You choose Next, and Finish.
We now right click on the next plan and select Schedule.

This time, we enable the checkbox, but rather than selecting a date, we will choose a plan to fail over after. So select the Choose a Plan button. In our case, we are going to have this second plan fire after the first one.

Now we Next, and Finish.
Now we right-click on the last plan and select Schedule. Then we enable the checkbox and select the previous plan.


Then you do the same old Next, and Finish. Now when we look at our plans we see a little of what we have done – but we are not yet complete.

You can see in the Failover Scheduled column how the email plan – first to be recovered – occurs on a particular date, but then we have View executing after SQL, and SQL executing after Email.
Now, we must right click on EMail and select Schedule.  We must disable the scheduled failover.

As you see above, remove the checkbox, and the typical Next and Finish.

Now we see no scheduled failover for the EMail protection plan, but we still see the other two plans with Run after so if I execute a failover for EMail the other two will execute in the order we set.
So things are good. Now one plan triggered will cause the next to execute. If you select EMail, it will trigger SQL, which will then trigger View.

Things to be aware of

  • The plans need to be enabled for this to work.
  • Test failovers are not impacted by this order of execution.
  • When you do a failover with any of these plans, you will be prompted to execute the other plans in the appropriate order. This will be enabled by default but you can disable it.

Once you have your plans chained, you should arrange a day and time to do a real failover to your DR site. It is the best way to confirm things are working. BTW, all of my VAO technical articles can be found at this link.
Michael
=== END ===

Original Article: https://notesfrommwhite.net/2018/12/02/chaining-plans-in-veeam-availability-orchestrator/

DR

Top Veeam Forum Posts and Content

TOP CONTENT
HELP!.. Error connecting Veeam 9.5 with ESXi 5.5   [VIEWS: 266 • REPLIES: 16]
Hi, I’m new with Veeam Backup and I have the following problem. When I try to connect Veeam Backup 9.5 with the vCenter ESXi 5.5 server, I get a proxy error (407): Error on the proxy server (407). Proxy authentication is required. How can I solve it to connect with my ESXi? more

Confused about which ReFS version to use   [VIEWS: 153 • REPLIES: 3]
There are two Veeam Knowledge Base articles about ReFS that state which version is good to use with ReFS. This one is the latest, from 2018-12-26: https://www.veeam.com/kb2854
It says that the good version is like this: more

V2V support   [VIEWS: 140 • REPLIES: 6]
Hi. Does Veeam v9.5 U4 support V2V?
Hello all, I would like to edit a backup copy job’s schedule via PowerShell. I made a script as follows with reference to other forums. more

Backup to a Mapped Drive?   [VIEWS: 94 • REPLIES: 3]
I have a NAS set up as a mapped drive on Windows 10. I cannot correctly configure my backup to a shared folder on my NAS. What could I be doing wrong? Veeam continues to tell me it Failed to get disk free space. more

Veeam Availability Console 2.0 Update 1 – New patch release

Recently, a new patch was released for Veeam Availability Console 2.0 Update 1. This new patch resolves several issues surrounding core VAC server functionality, discovery rules, monitoring, alarms and more. Read this blog post from Anthony Spiteri, Global Technologist of Product Strategy at Veeam, to learn about these issues and how they will be resolved.

Recorded Webinar NEW Veeam Availability Suite 9.5 Update 4

Get an exclusive look at the NEW Veeam® Availability Suite™ 9.5 Update 4 and learn how it opens the opportunity for you to close more deals with Veeam!
What’s new in Veeam Availability Suite 9.5 Update 4:

  • Veeam Cloud Tier – delivers up to 10 times the savings on long-term data retention costs using native object storage integration
  • NEW Veeam Cloud Mobility – provides easy portability and recovery of any on-premises or cloud-based workloads to AWS, Azure and Azure Stack
  • Veeam DataLabs™ – increases security and data governance, including GDPR compliance and malware prevention
  • And more!

Veeam Co-Founder Ratmir Timashev On Strategic Growth, And Co-CEO Peter McKay’s Departure – CRN


veeam – Google News

Changing Of The Guard
Veeam co-founder Ratmir Timashev (pictured) expects the data availability software company to outpace the market for years to come. A 100 percent channel sales strategy, several powerful alliances with top hardware vendors including Hewlett Packard Enterprise, Cisco Systems and NetApp, and a market hungry for cloud solutions are all working in the company’s favor, he said.
The Baar, Switzerland, company’s co-CEO, Peter McKay, said Tuesday that he was leaving the company, kicking off a broader executive restructuring. Timashev said McKay made the decision to leave the company after accelerating its enterprise channel sales strategy, as well as its strategic alliance strategy.
In the wake of McKay’s departure, Veeam made Timashev, who has in the past served as CEO, executive vice president of worldwide sales and marketing. Co-founder Andrei Baronov, formerly co-CEO, is now the company’s sole chief executive.
Timashev said Veeam has grown its data protection and replication business in the mid-20 percent range in recent years, and he expects to maintain that pace over the next couple of years. Cloud service providers are the company’s fastest-growing partner segment, he said, and an increasing number of traditional resellers lured by customer demand and huge margins are getting in on that action.
What follows is an edited excerpt of CRN’s conversation with Timashev.

Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNG2qpEm5zNhzrInBIBbbd–JUT9Xw&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=hh_bW7gXgfeoAs65pKAG&url=https://www.crn.com/slide-shows/data-center/veeam-co-founder-ratmir-timashev-on-strategic-growth-and-co-ceo-peter-mckay-s-departure

DR

Prepare your vLab for your VAO DR planning


CloudOasis  /  HalYaman


A good disaster plan needs careful preparation. But after your careful planning, how do you validate your DR strategy without disrupting your daily business operation? Veeam Availability Orchestrator just might be your solution.
This blog post focuses on one aspect of the VBR and VAO configuration; to learn more about Veeam Availability Orchestrator, Veeam Replication and Veeam Virtual Labs, continue reading.
Veeam Availability Orchestrator is a very powerful tool to help you implement and document your DR strategy. However, this product relies on Veeam Backup & Replication to:

  • perform replication, and
  • provide the Virtual Lab.

Therefore, to successfully configure the Veeam Availability Orchestrator, you must master Replication and the Virtual Lab. Don’t worry, I will take you through the important steps to successfully configure your Veeam Availability Orchestrator, and to implement your DR Plan.
The best way to get started is to share with you a real-life DR scenario I experienced last week during my VAO POC.

Scenario

A customer wanted to implement Veeam Availability Orchestrator with the following objectives:

  • Replication between the production and DR datacentres across the country,
  • Re-mapping the network attached to each VM at the DR site,
  • Re-IP of each VM’s IP address at the DR site,
  • Scheduling the DR testing to run every Saturday morning,
  • Documenting the DR plan.

As you might already be aware, all those objectives can be achieved using Veeam VBR & VAO.
So let’s get started.

The Environment and the Design

To understand the design and the configuration, let’s first introduce the customer’s network subnets at the PRODUCTION and DR sites.
PRODUCTION Site

  • At the PRODUCTION site, the customer used the 192.168.33.x/24 subnet,
  • Virtual Distribution Switch and Group: SaSYD-Prod.

DR Site

  • Production network at the DR site uses the 192.168.48.x/24 subnet
  • Prod vDS: SP-DSWProd
  • DR Re-IP subnet & Network name: vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch

To accomplish the configuration and requirements at those sites, the configuration must consider the following steps:

  • Replication between the PRODUCTION and the DR Sites
  • Re-Mapping the VM Network from ProdNet to vxw-dvs-662-virtualwire-4-sid-5000-SP_Prod_Switch
  • A vLab with the configuration listed above.

The following diagram and design is what we are going to discuss in this blog post:

Replication Job Configuration

To prepare for a disaster and to implement failover, we must create a replication job that will replicate the intended VMs from the PRODUCTION site to the DR site. In this scenario, to achieve the requirements above, we must use Veeam replication with the Network Mapping and Re-IP options when configuring the replication job. To do this, we have to tick the checkboxes for Separate virtual networks (enable network mapping) and Different IP addressing scheme (enable re-IP):

At the Network stage, we will specify the source and destination networks:

Note: to follow the diagram, the source network must be: Prod_Net and the Target network will be the DR_ReIP network.
On the Re-IP, enter the original IP address and the new Re-IP address to be used at the DR site:
Next, continue with the replication job configuration as normal.

Virtual Lab Configuration

To use Veeam Availability Orchestrator to check our DR planning, and to make sure our VMs will start on the DR site as expected, we must create a Veeam Virtual Lab to test our configuration. First, let’s create a Veeam vLab, starting with the name of the vLab and the ESXi host at the DR site which will host the Veeam Proxy appliance. In the following screenshot, the hostname is spesxidr.veeamsalab.org.
Choose a datastore where you will keep the redo log files. After you have selected the datastore, press Next. You must configure the proxy appliance specifically for the network it will be attached to. In our example, the network is the PRODUCTION network at the DR site named SP-DSWProd, and it has a static DR site IP address. See below.
Next, we must configure the Networking as Advanced multi-host (manual configuration), and then select the appropriate Distributed virtual switch; in our case, SP-ProdSwitch.

This leads us to the next configuration stage, Isolated Network. At this stage, we must assign the DR network that each replicated VM will be connected to.
Note: This network must be the same as the Re-Mapped network you selected as a destination network during the replication job configuration. The Isolation network is any name you assign to the temporary network used during the DR plan check.

Next, we must configure the temporary DR network. As shown in the following screenshot, I chose the Omnitra_VAO_vLab network I named in the previous step (Isolated network). The IP address is the same as the DR PRODUCTION gateway address. Also in the screenshot, you can see the masquerade network address I can use to access each of the VMs from the DR PRODUCTION network:


Finally, let’s create a static Mapping to access the VMs during the DR testing. We will use the Masquerade IPs as shown in the following screenshot.
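To make the masquerade addressing concrete, here is a small, purely illustrative PowerShell sketch; the addresses are hypothetical and simply follow the subnets used in this post. Veeam forms the masquerade address by keeping the host bits of the replica’s IP and swapping in the masquerade network bits:

# Hypothetical replica IP inside the isolated network, and a hypothetical /24 masquerade network
$replicaIp     = [IPAddress]'192.168.48.25'
$masqueradeNet = [IPAddress]'172.17.48.0'

# Keep the host octet, swap in the masquerade network octets
$testIp = ($masqueradeNet.GetAddressBytes()[0..2] + $replicaIp.GetAddressBytes()[3]) -join '.'
$testIp   # 172.17.48.25 -- the address used to reach the VM from the DR PRODUCTION network during testing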

Conclusion

Veeam Availability Orchestrator is a very powerful tool to help companies streamline and simplify their DR planning. The initial configuration of VAO DR planning is not complex, but it is a little involved: you must move between two products, Veeam Backup & Replication and Veeam Availability Orchestrator.
After you have grasped the DR concept, your next steps in DR planning will be smooth and simple. You may also have noted that to configure DR planning with Veeam Availability Orchestrator, you must be familiar with Veeam vLabs and with networking in general. I highly recommend that you read more about Veeam Virtual Labs before starting your DR plan configuration.
I hope this blog post helps you get started with vLab and VAO configuration. Stay tuned for my next blog or recording about the complete configuration of the DR plan. Until then, I hope you find this post informative; please post your questions or suggestions in the comments below.
The post Prepare you vLab for your VAO DR planning appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2018/11/20/prepare-you-vlab-for-your-vao-dr-planning/

DR

Migration is never fun – Backups are no exception

Migration is never fun – Backups are no exception

Veeam Software Official Blog  /  Rick Vanover


One of the interesting things I’ve seen over the years is people switching backup products. Additionally, it is reasonable to say that the average organization has more than one backup product. At Veeam, we’ve seen this over time as organizations started with our solutions. This was especially the case before Veeam had any solutions for the non-virtualized (physical server and workstation device) space. Especially in the early days of Veeam, effectively 100% of business was displacing other products — or sitting next to them for workloads where Veeam would suit the client’s needs better.
The question of migration is something that should be discussed, as it is not necessarily easy. It reminds me of personal collections of media such as music or movies. For movies, I have VHS tapes, DVDs and DVR recordings, and use them each differently. For music, I have CDs, MP3s and streaming services — used differently again. Backup data is, in a way, similar. This means that the work to change has to be worth the benefit.
There are many reasons people migrate to a new backup product. This can be due to a product being too complicated or error-prone, too costly, or discontinued (a current example is VMware vSphere Data Protection). Even at Veeam we’ve deprecated products over the years. In my time here at Veeam, I’ve observed that backup products in the industry come, change and go. Further, almost all of Veeam’s most strategic partners have at least one backup product — yet we forge a path built on joint value, strong capabilities and broad platform support.
When the migration topic comes up, it is very important to have a clear understanding about what happens if a solution no longer fits the needs of the organization. As stated above, this can be because a product exits the market, drops support for a key platform or simply isn’t meeting expectations. How can the backup data over time be trusted to still meet any requirements that may arise? This is an important forethought that should be raised in any migration scenario. This means that the time to think about what migration from a product would look like, actually should occur before that solution is ever deployed.
Veeam takes this topic seriously, and the ability to handle it is built into the backup data. My colleagues and I on the Veeam Product Strategy Team have casually referred to Veeam backups as “self-describing data.” This means that you can open a backup (which can be done easily) and clearly see what it is. One way to realize this is the fact that Veeam backup products have an extract utility available. The extract utility is very helpful for recovering data from the command line, which is a good use case if an organization is no longer a Veeam client (but we all know that won’t be the case!). Here is a blog by Vanguard Andreas Lesslhumer on this little-known tool.
Why do I bring up the extract utility when it comes to switching backup products? Because it hits on something that I have taken very seriously of late. I call it Absolute Portability. This is a significant topic in a world where organizations passionately want to avoid lock-in. Take the example I mentioned before of VMware vSphere Data Protection going end-of-life: Veeam Vanguard Andrea Mauro highlights how users can migrate to a new solution, but chances are that will be a different experience. Lock-in can occur in many ways: cloud lock-in, storage device lock-in, or services lock-in. Veeam is completely against lock-in, and arguably so agnostic that it is sometimes hard for us to make a specific recommendation!
I want to underscore the ability to move data — in, out and around — as organizations see fit. For organizations who choose Veeam, there are great capabilities to keep data available.
So, why move? Because expanded capabilities will give organizations what they need.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/EjQm2rSAvsU/backup-product-migration-lock-in.html

DR

The closing speed of transformation

The closing speed of transformation

Veeam Executive Blog – The Availability Lounge  /  Dave Russell

We all know that software and infrastructure don’t typically go away in the data center. You very rarely decommission something and bring in all new gear or stop the application to completely transition over to something else. Everyone’s in some sort of hybrid state. Some think that they’re transitioning, and many hopefully even have a plan for this.
 
Some of these changes are bigger, take longer, and you almost need to try them and experience them to have success in order to proceed. I’ve heard people say, “Well, we’re going to get around to doing X, Y, and Z.” But they are often lacking a sense of urgency to experiment early.
 
A major contributor to this type of procrastination is that changes that appear to be far off arrive suddenly. The closing speed of transformation is the issue. You seldom have the luxury of time, but early on you are seldom compelled to decide. You don’t sense the urgency.
 
It is not in your best interest to say, “We’re just going to try to manage what we’ve got, because we’re really busy. We’ll get to that when we can.” Because then, boom, you’re unprepared for when you really need to actually get going on something.
 
A perfect example is the impact of the cloud on every business and every IT department. The big challenge is that organizations know they should be doing something today but are still struggling with exactly what to do. In terms of a full cloud strategy, it’s oftentimes very disaggregated. And while we’re going on roughly a decade in the cloud-era, as an industry overall, we’re still really in the infancy of having a complete cloud strategy.
 
In December of last year, when I asked people, “What’s your cloud strategy? Do you have one? Are you formulating one?” the answer was unfortunately the same response they’ve been giving for the last eight years: “Next year.” The problem is that this is next year, and they are still in the same state.
 
When it comes to identifying a cloud strategy, a big challenge for IT departments and CIOs is that it used to be easy to peer into the future because the past was fairly predictable — whether it was the technology in data centers, the transformation that was happening, the upgrade cycle, or the movement from one platform to another. The way you had fundamentally been doing something in the past wasn’t changing with the future. Nor did it require radically different thinking. And it likely did not require a series of conversations across all of IT, and the business as well.
 
But when it comes to the cloud, we’re dealing with a fundamental transformation in the ways that businesses operate when it comes to IT: compute, storage, backup, all of these are impacted.
 
Which means organizations working on a cloud strategy have little, to no historical game plan to refer to. Nothing they can fall back on for a reference. The approach of, “Well this is what we did in the past, so let’s apply that to the future,” no longer applies. Knowing what you have, knowing what your resources are, and knowing what to expect are not typically well understood with regards to a complete cloud transformation. In part, this is because the business is often transforming, or seeking to become more digital, at the same time.
 
With the cloud, you often have limited, or isolated experiences. You have people who are trying to make business decisions that have never been faced with these issues, in this way, before.
 
Moving to absolute virtualization and a hybrid, multi-cloud deployment means that when it comes time to set a strategy you have a series of questions that need to be answered:
 

  • Do you understand what infrastructure resources are going to be required? No.
  • Do you understand what skills are going to be needed from your people? No.
  • Do you know how much budget to allocate with clarity, today, and over time? No.
  • Do you know what technologies are going to impact your business immediately, and in the near-term future? No.

 
Okay, go ahead and make a strategy now based on the information you just gave me, four No answers in a row. That’s pretty scary.
 
On top of this, data center people tend to be very risk averse by design. There’s a paralysis that creeps in. “Well, we’re not sure how we should get started.” And people just stay in pause mode. That’s part of why we see Shadow IT or Rogue IT. Someone says, “Well, I’m going to go out and I’m just going to get my own SaaS-based license for some capability that I’m looking for, because the IT department says they’re investigating it.”
 
Typically, what happens is the IT department is trying to figure that out, trying to form a strategy and investigate the request. But in the meantime, they say, “No.” Now IT becomes the department of “no” and is not perceived as being helpful.
 
To address this issue head on, you need to apply an engineering mindset. Meaning, that you learn more about a problem by trying to solve it. In absence of having a great reference base, with something that can easily be compared to, we should at least get going on what is visible to us, and that looks to be achievable in the short term.
 
An excellent example in the software as a service (SaaS) world is Microsoft Office 365. Getting the on-premises IT people participating in this can still be a challenge. As SaaS solutions become more and more widely implemented, they sometimes happen outside the purview of what goes on in the data center. This can lead to security, pricing, performance and Availability issues.
 
Percolate that up: what’s the effect of that? What does it actually mean? It means the worst-case scenario is that infrastructure and operations people are increasingly viewed as less and less strategic, because if you take this to the extreme, they end up as custodians of legacy implementations and older solutions. All while numerous other projects are being championed, piloted, put into production and ultimately owned by somebody else; perhaps a line of business outside of IT.
 
That’s where you see CIOs self-report that they think more than 29% of their IT spending is happening outside of their purview. If you think about that, it’s concerning. You’re the Chief Information Officer (CIO). You should know pretty close to 100% of what’s going on as it relates to IT. If your belief is that approaching a third of IT spending happens elsewhere, outside of your control, and that this outside spending is not really an issue, then what are you centralizing? What are you the Chief of, if this trend continues?
 
The previous way of developing a strategic IT plan worked well in the past when you had an abundance of time. But that is no longer the case. Transformation is happening all around us and inside of each organization. You can’t continue to defer decisions. IT is now a critical part of the business; every organization has become digital and the cloud is touching everything. It is time to step up, work with vendors you trust, and move boldly to the cloud.
 
Please join me on December 5th for VeeamON Virtual, where I discuss these challenges as well as 2019 predictions, Digital Transformation, Hyper-Availability and much more. Go ahead and register for VeeamON Virtual.
 


Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/IG0mZ-tB1tA/speed-of-cloud-transformation.html

DR

QNAP NAS Verified Veeam Ready for Efficient Disaster Recovery – HEXUS

QNAP NAS Verified Veeam Ready for Efficient Disaster Recovery – HEXUS

veeam – Google News

PRESS RELEASE

Taipei, Taiwan, November 19, 2018 – QNAP® Systems, Inc. today announced that multiple Enterprise-class QNAP NAS systems, including the ES1640dc v2 and TES-1885U, have been verified as Veeam® Ready. Veeam Software, the leader in Intelligent Data Management for the Hyper-Available Enterprise™, has granted QNAP NAS systems with the Veeam Ready Repository distinction, verifying these systems achieve the performance levels for efficient backup and recovery with Veeam® Backup & Replication™ for virtual environments built on VMware® vSphere™ and Microsoft® Hyper-V® hypervisors.
“Veeam provides industry-leading Availability solutions for virtual, physical and cloud-based workloads, and verifying performance ensures that organizations can leverage Veeam advanced capabilities with QNAP NAS systems to improve recovery time and point objectives and keep their businesses up and running,” said Jack Yang, Associate Vice President of Enterprise Storage Business Division of QNAP.
Veeam Backup & Replication helps achieve Availability for ALL virtual, physical and cloud-based workloads and provides fast, flexible and reliable backup, recovery and replication of all applications and data. Organizations can now choose among several QNAP systems verified with Veeam for backup and recovery, including the ES1640dc v2 and TES-1885U.
For more information, please visit https://www.veeam.com/ready.html.
About QNAP Systems, Inc.
QNAP Systems, Inc., headquartered in Taipei, Taiwan, provides a comprehensive range of cutting-edge Network-attached Storage (NAS) and video surveillance solutions based on the principles of usability, high security, and flexible scalability. QNAP offers quality NAS products for home and business users, providing solutions for storage, backup/snapshot, virtualization, teamwork, multimedia, and more. QNAP envisions NAS as being more than “simple storage”, and has created many NAS-based innovations to encourage users to host and develop Internet of Things, artificial intelligence, and machine learning solutions on their QNAP NAS.

Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNGr1B0Zhq5QDEVsTvQ-qQhfGevzSA&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=4xIAXKjpHpXnhAGrrL2oBg&url=https://hexus.net/tech/items/network/124439-qnap-nas-verified-veeam-ready-efficient-disaster-recovery/

DR

On-Premises Object Storage for Testing

On-Premises Object Storage for Testing

CloudOasis  /  HalYaman


Many customers and partners want to deploy on-premises object storage for testing and learning purposes. How can you deploy an object storage instance in your own lab that is completely free of charge, with no time limit and no capacity restriction?


In this blog post, I will take you through the deployment and configuration of an on-premises object storage instance for test purposes, so that you can learn more about the new feature.
For this test, I will use a product called Minio. You can download it from this link for free, and it is available for Windows, Linux and macOS.
To get started, download Minio and run the installation.
In my CloudOasis lab, I decided to use a Windows Server Core installation to act as a test platform for my object storage. Read on for the configuration steps I used:

Deployment

By default, Minio Server installs and runs as an unsecured (HTTP) service. The downside is that many applications require a secure HTTPS URL to interact with object storage. To fix this, we download GnuTLS and use it to generate a private key and a public certificate to secure our object storage connection.

Preparing Certificate

After GnuTLS has been downloaded and extracted to a folder on your drive (for example, C:\GnuTLS), add that folder to the Windows PATH with the following command:
setx path "%path%;C:\gnutls\bin"

Private Key

The next step is to generate the private.key:
certtool.exe --generate-privkey --outfile private.key
After the private.key has been generated, create a new file called cert.cnf and paste the following template into it:
# X.509 Certificate options
#
# DN options

# The organization of the subject.
organization = "CloudOasis"

# The organizational unit of the subject.
#unit = "CloudOasis Lab"

# The state of the certificate owner.
state = "NSW"

# The country of the subject. Two letter code.
country = "AU"

# The common name of the certificate owner.
cn = "Hal Yaman"

# In how many days, counting from today, this certificate will expire.
expiration_days = 365

# X.509 v3 extensions

# DNS name(s) of the server
dns_name = "miniosrv.cloudoasis.com.au"

# (Optional) Server IP address
ip_address = "127.0.0.1"

# Whether this certificate will be used for a TLS server
tls_www_server

# Whether this certificate will be used to encrypt data (needed
# in TLS RSA cipher suites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key

Public Key

Now we are ready to generate the public certificate using the following command:
certtool.exe --generate-self-signed --load-privkey private.key --template cert.cnf --outfile public.crt
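Optionally, before copying the keys into place, you can inspect the generated certificate to confirm that the common name, DNS name and expiry date came out as intended. This uses the same certtool already on the PATH:
certtool.exe --certificate-info --infile public.crt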
After you have completed those steps, you must copy the private and the public keys to the following path:
C:\Users\administrator.VEEAMSALAB\.minio\certs\
Note: if you already have your own certificates, all you have to do is copy them into the .minio\certs folder.

Run Minio Secure Service

Now we are ready to run a secured Minio Server using the following command. Here, we are assuming your minio.exe has been installed on your C:\ drive:
C:\minio.exe server S:\bucket
Note: S:\bucket is a second volume I created and configured for Minio Server to store the saved objects.
After minio.exe has started, the Minio console output displays the endpoint URL and the access and secret keys.
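Once the server is listening, a quick sanity check is to confirm the HTTPS port is reachable. This assumes Minio’s default port 9000 and the DNS name used in cert.cnf:
Test-NetConnection -ComputerName miniosrv.cloudoasis.com.au -Port 9000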

Conclusion

This blog was prepared following several requests from customers and partners who wanted to familiarise themselves with object storage integration during their private beta experience.
As you saw, the steps we have described here will help you to deploy an on-premises object storage for your own testing, without the cost, time limit, or storage size limit.
The steps to deploy the object storage are very simple, but the outcome is huge, and very helpful to you in your learning journey.

The post On-Premises Object Storage for Testing appeared first on CloudOasis.

Original Article: https://cloudoasis.com.au/2018/11/09/on-premises-s3/

DR

Veeam honored with 2018 CRN Tech Innovator Award

Veeam honored with 2018 CRN Tech Innovator Award
CRN, a brand of The Channel Company, recognized Veeam® with a 2018 CRN Tech Innovator Award. Veeam Availability Suite™ 9.5 took top honors in the Data Management category!
These annual awards honor standout hardware, software and services that are moving the IT industry forward. In compiling the 2018 Tech Innovator Award list, CRN editors evaluated 300 products across 34 technology categories using several criteria, including technological advancements, uniqueness of features, and potential to help solution providers solve end users’ IT challenges.

UPDATED: Cisco + Veeam Digital Hub
Cisco and Veeam offer unmatched price and performance along with hyperconverged Availability. Have you seen the latest updates we’ve made to the Cisco + Veeam Digital Hub? Subscribe now to get easy and unlimited access to Cisco + Veeam reports, best practices, webinars and much more.

VeeamON Virtual 2018 – December 4

VeeamON Virtual 2018
04 December, Tuesday
VeeamON Virtual is a unique online conference designed to deliver the latest insights on Intelligent Data Management — all from the comfort of your own office.
Every year, VeeamON Virtual brings together more than 2,500 industry experts to showcase the latest technology solutions providing the Hyper-Availability of data.
Join us on our virtual journey to explore:
  • The challenges of data growth and inevitable data sprawl
  • Threats to data Availability and protection
  • The steps you can take to achieve Intelligent Data Management
  • And more!
Enter for a chance to win a Virtual Reality Kit and gain access to the Veeam® Availability Library!

On-prem or in-cloud? This platform provides holistic security in-cloud, on-prem and everywhere in between – SiliconANGLE News

On-prem or in-cloud? This platform provides holistic security in-cloud, on-prem and everywhere in between – SiliconANGLE News

veeam – Google News

Three years ago the big question was: “On-prem or in-cloud?” Today’s answer is both, as the majority of companies adopt hybrid solutions. However, the question turns to the complexity of securing data that is dispersed over multiple platforms. Taking steps to solve this issue is N2W Software Inc.’s Backup & Recovery version 2.4, announced during AWS re:Invent 2018 this week in Las Vegas, Nevada.
“You have a single platform that can manage data protection both on- and off-premises so that you can leverage where is the best place for this workload and protect it across no matter where it chooses to live,” said Danny Allan (pictured, left), vice president of product strategy at Veeam Software Inc., which acquired N2W Software this past January.
Allan and Andy Langsam (pictured, right), chief operating officer at N2W Software, spoke with John Walls (@JohnWalls21), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, and guest host Justin Warren (@jpwarren), chief analyst at PivotNine Pty Ltd, during AWS re:Invent in Las Vegas. They discussed the release of N2WS version 2.4 and the changing face of the back-up industry as it expands from security into predictive analysis. (* Disclosure below.)

Slashing storage costs

N2WS version 2.4 introduces snapshot decoupling into the AWS S3 repository, allowing customers to move backup EC2 snapshots into the much cheaper S3 storage, according to Langsam. Real customer savings are estimated to be 40 to 50 percent a month, he added.
“Anytime you can talk about cost reduction from five cents a gig on EC2 storage to two cents on S3, it’s a tremendous savings for our customer base,” Langsam said.
A recent survey conducted by N2WS showed that more than half of the company’s customers spend $10,000 a month or more on AWS storage costs. “If they can save 40 percent on that, that’s real, real savings,” Langsam stated. “[It’s] more than the cost of the software alone.”
N2WS has reported 189-percent growth in revenue since its January 2018 acquisition by Veeam, according to Allan. “We’ve got customers recently like Notre Dame and Cardinal Health, and then we have people getting into the cloud for the very first time,” he said.
Allan attributes this growth to the financial stability provided by having a parent company. “Being acquired has allowed us to focus on the customer and innovation versus going out and raising money from investors,” he stated.
Security concerns have catapulted back-up services into the spotlight, but their capabilities are starting to expand outside of just protection and recovery. “We’re moving away from just being reactive to business need to being proactive in driving the business forward,” Allan said.
Applying new technologies available through advances in machine learning and artificial intelligence allows data protection companies to offer suggestions to their clients that will increase productivity or reduce costs. “We [can] leverage a lot of the algorithms that are existing in clouds like AWS to help analyze the data and make decisions that [our clients] don’t even know that they need to make,” Allan said. “That decision could be, you need to run this analysis at 2:00 a.m. in the morning because the instances are cheaper.”
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of AWS re:Invent. (* Disclosure: Veeam Software Inc. sponsored this segment of theCUBE. Neither Veeam nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)



Original Article: http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNHfBiD2bON-7L0l4XY3tdnBfY63Kg&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=1UgAXJCDGYnihAGZ5qHwDA&url=https://siliconangle.com/2018/11/28/prem-cloud-platform-provides-holistic-security-cloud-prem-everywhere-reinvent/

DR

Veeam and N2WS: Accelerating Growth and Innovation

Veeam and N2WS: Accelerating Growth and Innovation

Veeam Executive Blog – The Availability Lounge  /  Yesica Schaaf

Who can forget that wonderful line from Forrest Gump in which Forrest talks about the beginning of his friendship with Jenny and how, since that day, they were like “peas and carrots”? That’s how we like to think of Veeam and N2WS.
 
Veeam, a company born on virtualization, is the clear leader in Intelligent Data Management with significant investments and growth in the public cloud space. As part of our strategy to deliver the most robust data management and protection across any environment, any cloud, we’re always looking at ways to innovate and expand our offerings.  And last year, we acquired N2WS, a leading provider of cloud-native backup and disaster recovery for Amazon Web Services (AWS); since the acquisition, Veeam and N2WS have accelerated N2WS’ revenue growth by 186% year over year, including 179% growth of installed trials of N2WS’ product, N2WS Backup & Recovery.
 
Today, N2WS announces version 2.4 of N2WS Backup & Recovery, which enables businesses to reduce the overall cost of protecting cloud data by extending Amazon EC2 snapshots to Amazon S3 for long-term data retention — reducing backup storage costs by up to 40%.
 
This latest release builds on the previous announcement where N2WS and Veeam launched new innovations to help businesses automate backup and recovery operations for Amazon DynamoDB, a cloud database service used by more than 100,000 AWS customers for mobile, web, gaming, ad tech, IoT, and many other applications that need low-latency data access. By extending N2WS Backup & Recovery to Amazon DynamoDB, businesses are now able to ensure continuous Availability and protect against accidental or malicious deletion of critical data in Amazon DynamoDB.
 
While the cloud delivers significant business benefits, based on the AWS shared responsibility model, businesses must still take direct action to guard data and enable business continuity in the event of an outage or disaster. We are now seeing unprecedented numbers of new customers turn to N2WS and Veeam to ensure their AWS cloud data is secure, including the University of Notre Dame, Cardinal Health and more.
 
As Veeam and N2WS continue to invest in the public cloud space, new innovations specifically designed for AWS environments will be available in early 2019, innovations that will deliver major advancements to customers across the globe. I’d love to share this with you now, but you’ll have to wait for Q1 2019 for the details.
 
But if you are at AWS re:Invent this week, please join our session “A Deeper Dive on How Veeam is Evolving Availability on AWS” on Wednesday, November 28 or visit us at booth #1011 throughout the week to hear more about our innovative approach to the public cloud, including where we are headed with archiving, mobility and more.
 
With Veeam’s momentum and growth across AWS data protection solutions, the company is well positioned as the market leader in Intelligent Data Management with an innovative focus on the public cloud. Ultimately, from being the peas and carrots and better together, we want to be the meat and potatoes for all our new joint customers where we are an integral part of their business and IT operations.


Original Article: http://feedproxy.google.com/~r/veeam-executive-blog/~3/Fiwmc_ygDWA/new-version-n2ws-backup-recovery.html

DR

Stateful containers in production, is this a thing?

Stateful containers in production, is this a thing?

Veeam Software Official Blog  /  David Hill


As the new-world debate of containers vs virtual machines continues, there is also a debate raging about stateful vs stateless containers. Is this really a thing? Is it really happening in production environments? Do we really need to back up containers, or can we just back up the data sets they access? Containers are not meant to be stateful, are they? This debate rages daily on Twitter, Reddit and in pretty much every conversation I have with customers.
The debate typically starts with the question: why run a stateful container? To answer it, we first need to understand the difference between a stateful and a stateless container and the purpose behind each.

What is a container?

“Containers enable abstraction of resources at the operating system level, enabling multiple applications to share binaries while remaining isolated from each other” (quote from Actual Tech Media).
A container is an application and dependencies bundled together that can be deployed as an image on a container host. This allows the deployment of the application to be quick and easy, without the need to worry about the underlying operating system. The diagram below helps explain this:
When you look at the diagram above, you can see that each application is deployed with its own libraries.

What about the application state?

When we think about applications in general, they all have persistent data and they all have application state data. It doesn’t matter what the application is: it has to store data somewhere, otherwise what would be the point of the application? Take a CRM application: all that customer data needs to be kept somewhere. Traditionally these applications use database servers to store all the information. Nothing has changed in that regard. But when we think about the application state, this is where the discussion about stateful containers comes in. Typically, an application has five state types:

  1. Connection
  2. Session
  3. Configuration
  4. Cluster
  5. Persistent

For the purposes of this blog, we won’t go into depth on each of these states, but for applications being written today, native to containers, these states are all offloaded to a database somewhere. The challenge comes when existing applications have been containerized: the process of taking a traditional application that is installed on top of an OS and turning it into a containerized application so that it can be deployed in the model shown earlier. These applications save their state locally somewhere; exactly where depends on the application and the developer. Another increasingly common approach is running databases themselves as containers, and as a consequence, these exhibit many of the state types listed above.

Stateful containers

A container’s stateful data is typically either written to persistent storage or kept in memory, and this is where the challenges come in. Being able to recover the applications in the event of an infrastructure failure is important. If everything that is backed up lives in databases, then as mentioned earlier, that is an easy solution; but if it doesn’t, how do you orchestrate the recovery of these applications without interruption to users? If you have load-balanced applications running and you have to restore one of them, but it doesn’t know the connection or session state, the end user is going to face issues.

If we look at the diagram, we can see that “App 1” has been deployed twice across different hosts. We have multiple users accessing these applications through a load balancer. If “App 1” on the right crashes and is then restarted without any application state awareness, User 2 will not simply reconnect to that application. That application won’t understand the connection and will more than likely ask the user to re-authenticate. Really frustrating for the user, and terrible for the company providing that service to the user. Now of course this can be mitigated with different types of load balancers and other software, but the challenge is real. This is the challenge for stateful containers. It’s not just about backing up data in the event of data corruption, it’s how to recover and operate a continuous service.
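To illustrate the persistent-storage side of this, here is a minimal sketch, assuming Docker and a throwaway PostgreSQL container (the names and version are arbitrary examples): data written to a named volume survives the loss of the container itself, while connection and session state held in memory, as in the load-balancer scenario just described, is exactly the part that does not.

# Create a named volume to hold the database files outside the container
docker volume create appdata
# Run the database with its data directory on the persistent volume
docker run -d --name appdb -e POSTGRES_PASSWORD=example -v appdata:/var/lib/postgresql/data postgres:11
# Simulate an infrastructure failure: destroy the container...
docker rm -f appdb
# ...then start a replacement, which picks up the same persistent data
docker run -d --name appdb -e POSTGRES_PASSWORD=example -v appdata:/var/lib/postgresql/data postgres:11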

Stateless containers

Now, with stateless containers it’s extremely easy. Taking the diagram above, the session data would be stored in a database somewhere. In the event of a failure, the application is simply redeployed and picks up where it left off. Exactly how containers were designed to work.

So, are stateful containers really happening?

When we think of containerized applications, we typically think of the new-age, cloud-native, born-in-the-cloud, serverless [insert latest buzzword here] application. But when we dive deeper and look at the simplicity containers bring, we can understand how businesses are leveraging containers to reduce the complex infrastructure required to run their applications. It makes sense that many existing applications that require consistent state data are appearing everywhere in production.
Understanding how to orchestrate the recovery of stateful containers is what needs to be focused on, not whether they are happening or not.

Original Article: http://feedproxy.google.com/~r/VeeamSoftwareOfficialBlog/~3/n1e1aGaEEoU/stateful-containers-in-production-is-this-a-thing.html

DR

Veeam / Cisco Events for December

12/2/18 Partner Exchange — Richmond, VA
12/5/18 Cisco Connect — Denver, CO
12/12/18 Cisco Connect — Virginia Beach, VA
12/12/18 Cisco Connect — Los Angeles, CA
12/13/18 Cisco Connect — Columbus, OH
12/13/18 Partner Exchange — Pittsburgh, PA