Thursday, October 26, 2017

Planning for VM backup

Before installing DPM 2012 R2 and performing the series of required configuration tasks, you should devise a strategy for what to back up, when to back up, and where to back up. This section helps answer some of the questions you might have before you start protecting data.


What to back up (host level vs. guest level)
It is recommended that you protect your Hyper-V environment by combining host-level backup of Hyper-V VMs with the existing backup strategy for your in-guest applications, like Microsoft SQL Server, Microsoft Exchange, and Microsoft SharePoint. Host-level backup of VMs is equivalent to protecting a physical server using bare-metal recovery. It is recommended that you protect your application data more frequently than your VMs. For example, VMs can have a schedule that backs up data once per day or once per week, while Microsoft SQL Server databases could be backed up as frequently as every 15 minutes.
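
This frequency contrast can be expressed through Windows PowerShell. The following is a minimal sketch, assuming a DPM server named DPM01 and an application protection group named SQL-PG (both hypothetical), that sets a short-term disk objective synchronizing every 15 minutes:

$pg  = Get-DPMProtectionGroup -DPMServerName "DPM01" | Where-Object { $_.FriendlyName -eq "SQL-PG" }
$mpg = Get-DPMModifiableProtectionGroup $pg

# Keep 10 days of recovery points on disk, synchronizing every 15 minutes.
Set-DPMPolicyObjective -ProtectionGroup $mpg -RetentionRangeInDays 10 -SynchronizationFrequencyMinutes 15

# Commit the change to the protection group.
Set-DPMProtectionGroup -ProtectionGroup $mpg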


When to back up
Typically, production workloads experience peak load during specific time windows in the work day and are less loaded during off-peak hours (the maintenance window). Customers (usually hosting service providers) need the flexibility to run backups during off-peak hours, especially for backup at scale, and they need the ability to contain the overhead of backup because it affects the performance characteristics of the production workloads. IOPS consumption, network bandwidth, and CPU utilization during backup are some of the key performance parameters that affect production workload performance. For this reason, the concept of a backup window for VM data sources was introduced in DPM 2012 R2 Update Rollup 3 (UR3).

With UR3, you can now create specific time windows within which the scheduled backup jobs and consistency check (CC) jobs run, so that you can meet your SLAs. These windows are configured through Windows PowerShell at the protection-group level to ensure that all backup and CC jobs run only during the set window. Jobs that are actively transferring backup data when the window ends are allowed to continue, but all other jobs are automatically cancelled. The backup/CC windows do not affect ad hoc jobs triggered by the user.

The following Windows PowerShell commands demonstrate how you can configure the backup window. Set the values for $pgName, $startTime, and $duration. The backup schedule should align with the StartTime parameter used in the Set-DPMBackupWindow command.
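A minimal sketch follows, assuming a DPM server named DPM01 and a protection group named HyperV-PG (both hypothetical); the Set-DPMConsistencyCheckWindow call for the CC window is an assumption alongside the Set-DPMBackupWindow cmdlet named above:

$pgName    = "HyperV-PG"
$startTime = "20:00"   # the window opens at 8 P.M.
$duration  = 10        # window length in hours

$pg  = Get-DPMProtectionGroup -DPMServerName "DPM01" | Where-Object { $_.FriendlyName -eq $pgName }
$mpg = Get-DPMModifiableProtectionGroup $pg

# Restrict scheduled backup and consistency check jobs to the window.
# Note: the protection group's backup schedule should start at the same $startTime.
Set-DPMBackupWindow -ProtectionGroup $mpg -StartTime $startTime -DurationInHours $duration
Set-DPMConsistencyCheckWindow -ProtectionGroup $mpg -StartTime $startTime -DurationInHours $duration

# Commit the changes to the protection group.
Set-DPMProtectionGroup -ProtectionGroup $mpg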


Where to back up
DPM has supported longer retention (multiple years) since the release of DPM 2012 R2 UR5. It is recommended that you use Azure Backup as bottomless storage for long-term retention of host-level VM backups. You can still keep a few days' worth of backups on-premises to ensure faster operational recovery. Offloading backup data to Azure gives you effectively infinite storage capacity, and you no longer need to worry about managing tape infrastructure.
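
A long-term online retention policy can also be set through Windows PowerShell. The following is a minimal sketch, reusing the hypothetical DPM01 server and HyperV-PG protection group from above with illustrative retention values; confirm the RetentionRange type name and cmdlet parameters against your DPM version before relying on them:

$pg  = Get-DPMProtectionGroup -DPMServerName "DPM01" | Where-Object { $_.FriendlyName -eq "HyperV-PG" }
$mpg = Get-DPMModifiableProtectionGroup $pg

# Keep daily recovery points in Azure for 180 days and monthly points for 60 months.
$rrList = @()
$rrList += New-Object Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.RetentionRange -ArgumentList 180, "Days"
$rrList += New-Object Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.RetentionRange -ArgumentList 60, "Months"

Set-DPMPolicyObjective -ProtectionGroup $mpg -OnlineRetentionRangeList $rrList
Set-DPMProtectionGroup -ProtectionGroup $mpg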


How to back up
With the enhancements delivered in UR3, a single DPM server can protect up to 800 VMs. For customers running thousands of Hyper-V VMs in a cluster or rack, a single DPM server does not suffice; multiple DPM servers are needed to protect these VMs, and the DPM scale-out architecture allows this. The deployed DPM servers can be managed and monitored through the System Center Operations Manager console.

A DPM server can protect up to 800 VMs with co-location turned on and up to 300 VMs with co-location turned off. In general, you have one replica volume and one recovery point volume per protected data source. Co-location enables multiple data sources to map to a single replica and recovery point volume, and it allows you to locate data from different protection groups on the same disk or tape storage.

Here are some high-level issues with the co-location feature and the workarounds:

• Keeping the replica volume too small defeats the purpose of co-location. The default replica volume size is 250 GB, which is debatable because one size doesn't fit all customers and data sources. This number is configurable through a registry setting, HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Collocation\HyperV\CollocatedReplicaSize; the only condition is that it must be a multiple of 10 MB. It is very difficult for a customer to know the optimal number. (See the sketch after this list for how to adjust the co-location registry settings.)

• Keeping the replica volume too big means that many data sources will be co-located, which gives the customer less flexibility due to the nature of disk co-location:
• If one data source is unable to create a recovery point for a number of days equal to the retention range, it loses all of its recovery points due to garbage collection of shadow copies. In non-co-located cases this does not happen, since at least one recovery point is always preserved.
• A user can't stop protection with retain data and re-protect a co-located data source in a different protection group more than twice; the third attempt produces an error, and the user must wait several days before the operation can be performed a third time.

• When protecting very few data sources, large replica volumes lead to wasted storage space. You can tweak the number of data sources on a volume by changing the registry value HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Collocation\HyperV\DSCollocationFactor to fit more VMs on the replica; it is safe to increase it to a large number (for example, 50), as shown in the sketch below.
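
The following is a minimal sketch of inspecting and adjusting these settings on the DPM server through Windows PowerShell. The key path and the example value of 50 come from the list above; confirm the value type and unit of CollocatedReplicaSize for your DPM version before changing it:

# Co-location settings live under this key on the DPM server.
$key = 'HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Collocation\HyperV'

# Inspect the current values before changing anything.
Get-ItemProperty -Path $key | Select-Object CollocatedReplicaSize, DSCollocationFactor

# Allow up to 50 data sources to be co-located on a single replica volume.
Set-ItemProperty -Path $key -Name 'DSCollocationFactor' -Value 50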

If you understand the implications and find the right settings, the recommendation is to use the co-location feature. Why not leverage it and protect more data sources per DPM server?


How to control costs
Since backup is all about data, storage needs to be optimized for backup purposes. Double parity (and not three-way mirroring) is recommended for backup storage, leveraging Windows Server deduplication to reduce the amount of backup storage consumed. Deduplication provides a great opportunity to realize storage savings; these savings will vary depending on the workloads running within the VM and the amount of churn created. The high-level steps to realize these savings are as follows (see the sketch after the list for enabling deduplication):

1. Run DPM in a virtualized deployment.

2. Provision backup storage through VHDs residing on scale-out file server (SOFS) shares.

3. Enable the Data Deduplication role on the SOFS volumes hosting the backup storage VHDs.

4. Use DPM VHD/VHDX files of 1 TB each.
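
The following is a minimal sketch of step 3, run on the SOFS node, assuming the backup storage VHDs reside on a hypothetical D: volume:

# Install the Data Deduplication feature if it is not already present.
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on the volume hosting the backup storage VHDs.
Enable-DedupVolume -Volume 'D:' -UsageType HyperV

# Kick off an optimization job rather than waiting for the background schedule.
Start-DedupJob -Volume 'D:' -Type Optimization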

Source of Information: Microsoft System Center
