Wednesday, October 25, 2017

Considerations for performance and scale

Using DPM in Azure to protect your Azure workloads also means that you need to pay for the additional resources consumed by DPM. This includes the VMs, the Azure disks used as backup storage, and Azure Backup for longer retention. IT administrators have the dual responsibility of providing a performant backup environment and of controlling the costs. With the pay-as-you-use model in Azure, it is possible to “right size” the DPM setup and grow your resources as you need them, without having to reserve capacity up front. This section delves deeper into the parameters that tune the performance and scale of DPM in Azure.


Recommendations for better performance
DPM is an I/O-intensive application, and though it is not sensitive to latency, it consumes significant network bandwidth. It is therefore important that the right resources are made available to meet your backup needs. Here is a list of recommendations based on performance tests run with DPM in various configurations:

1. Use the Standard tier when creating a VM in which DPM will be installed. The IOPS per attached disk is higher for the Standard tier (500 IOPS) than for the Basic tier (300 IOPS).

2. Do not use the same storage account for the DPM disks and the disks attached to the production VMs. Azure places a limit of 20,000 IOPS per storage account. If the data to be backed up is read and written to the same storage account, you have effectively halved the IOPS available from the storage account to the workloads or DPM.

3. The minimum VM size should be A2 with 3.5 GB of RAM. DPM needs at least 2 GB of RAM to work correctly, and A2 is the smallest VM size where the RAM is greater than 2 GB.

4. Deploy DPM in the same region as the workloads it protects. Because backup and restore traffic then stays within the region, no network egress cost is incurred.
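The IOPS figures in recommendations 1 and 2 can be combined into a quick capacity check. Here is a minimal sketch in Python using the 20,000 IOPS-per-account and 500/300 IOPS-per-disk figures quoted above; the helper function names are illustrative, not part of any Azure API:

```python
# Sketch: IOPS budgeting for a DPM backup setup in Azure.
# Figures come from the recommendations above; helper names are illustrative.

STORAGE_ACCOUNT_IOPS_LIMIT = 20_000   # per-account limit cited above
STANDARD_DISK_IOPS = 500              # per attached disk, Standard tier
BASIC_DISK_IOPS = 300                 # per attached disk, Basic tier

def max_disks_per_account(disk_iops=STANDARD_DISK_IOPS,
                          account_limit=STORAGE_ACCOUNT_IOPS_LIMIT):
    """Number of disks that can run at full IOPS in one storage account."""
    return account_limit // disk_iops

def effective_iops_when_shared(account_limit=STORAGE_ACCOUNT_IOPS_LIMIT):
    """If production and DPM disks share one account, each side can count
    on roughly half the account's IOPS during a backup window."""
    return account_limit // 2

print(max_disks_per_account())        # 40 Standard-tier disks per account
print(effective_iops_when_shared())   # 10000 IOPS left for each side
```

This makes recommendation 2 concrete: sharing one storage account between production and DPM disks halves the IOPS budget available to each side.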


Scaling up vs. scaling out
DPM in Azure supports both scale out and scale up. This section provides guidance on when to scale up and when to scale out.

Scaling up your DPM setup deals primarily with the need for more backup storage. Different VM sizes in Azure support a different number of attached virtual disks. Given that the maximum possible size of an Azure disk is just shy of 1 terabyte (TB), each VM size has a limited amount of storage that can be used as directly attached virtual disks.

The other factor that influences the scaling decision is the number of servers that are being protected. A single VM cannot scale up infinitely to provide resources for backing up workloads, and when you reach the limit for a specified size, you need to choose whether to scale up or scale out.

The maximum number of protected servers that a given size can support is derived from scale tests that have been run, while the maximum raw backup storage is a limit imposed by Azure and can be explored in greater detail at http://msdn.microsoft.com/en-us/library/azure/dn197896.aspx.

For example, assume you start with an A2 VM having a single 500-GB virtual disk attached, protecting 10 servers. If you need more storage, the first step is to exhaust the storage capability of the A2 VM. The easiest way to do this is to create a new virtual disk and attach it to the running VM. If you need to protect more servers, then you can install up to 20 agents on various Azure VMs to protect the workloads on those servers. When you reach the maximum storage capacity of the VM or exhaust the number of servers that the DPM instance should be handling, you can either scale up or scale out:

Change the VM size. Click the VM to open the detailed view. Click the Configure tab, and scroll down to the Virtual Machine Size drop-down list. Change the VM size to the next higher size, and click Save.

Create another VM, and configure DPM by following the same steps highlighted earlier.

The advantage of changing the VM size is that the software is already set up and you have one less backup server to manage. However, when you reach the A4 size and exhaust either the maximum raw backup storage or the number of protected servers, you have to scale out and create a new VM to accommodate your backup growth.
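The scale-up-then-scale-out decision described above can be sketched as a simple rule. The size ladder below is an illustrative placeholder that ends at A4, the largest size discussed here; consult the Azure limits page linked earlier for the real per-size figures:

```python
# Sketch of the scale-up vs. scale-out decision described above.
# The size ladder is an illustrative placeholder, not an official list.

SIZE_LADDER = ["A2", "A3", "A4"]  # A4 is the largest size discussed above

def next_step(current_size, storage_exhausted, servers_exhausted):
    """Scale up while a larger size exists; otherwise scale out."""
    if not (storage_exhausted or servers_exhausted):
        return "no change"
    idx = SIZE_LADDER.index(current_size)
    if idx + 1 < len(SIZE_LADDER):
        return f"scale up to {SIZE_LADDER[idx + 1]}"
    return "scale out: create a new DPM VM"

print(next_step("A2", storage_exhausted=True, servers_exhausted=False))
# -> scale up to A3
print(next_step("A4", storage_exhausted=True, servers_exhausted=False))
# -> scale out: create a new DPM VM
```

The design point is simply that scaling up is preferred while a larger size remains, because it avoids managing an additional backup server.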


Tiering data to Azure Backup
Protecting large amounts of data can quickly degenerate into a situation where a large number of DPM VMs are needed for longer retention. For example, protecting 100 GB of data for 30 days with 5 GB of daily incremental backup data means that DPM ends up storing 250 GB of data. (This is based on an initial size of 100 GB plus 30 days of incremental backups at 5 GB per backup.) Working backwards with the same assumptions, the backup storage is 2.5 times the protected data, so with roughly 4 TB of maximum attached disk on an A2 VM you cannot protect more than about 1.6 TB of data. This data growth can be severely limiting as you incur additional costs of spinning up and running new VMs.
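The retention arithmetic above can be made concrete. A small sketch, assuming (as an illustration) that an A2 VM can attach roughly 4 TB of raw backup disk:

```python
# Worked version of the retention arithmetic above: 100 GB protected,
# 30 days of 5-GB daily incrementals -> 250 GB of backup storage on DPM.

def dpm_storage_gb(initial_gb, daily_incremental_gb, retention_days):
    """Total backup storage DPM must hold for one protected data source."""
    return initial_gb + daily_incremental_gb * retention_days

def max_protectable_gb(raw_backup_storage_gb, initial_gb=100,
                       daily_incremental_gb=5, retention_days=30):
    """Invert the formula: how much source data fits in a given amount of
    raw backup storage, assuming the same growth ratio (here 2.5x)."""
    ratio = dpm_storage_gb(initial_gb, daily_incremental_gb,
                           retention_days) / initial_gb
    return raw_backup_storage_gb / ratio

print(dpm_storage_gb(100, 5, 30))    # 250 GB
print(max_protectable_gb(4 * 1024))  # ~1638 GB, i.e. the ~1.6 TB above
```

Shortening the DPM retention window (and tiering the rest to Azure Backup, as recommended next) lowers the growth ratio and raises the amount of data one VM can protect.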

The recommendation is to use Azure Backup as the bottomless storage pit for anything more than a few days. Thus, the data retained with DPM is always the freshest and can be used quickly for any immediate recovery scenarios. This also postpones the decision to scale up or scale out with respect to hitting the maximum raw backup storage limit.

Source of Information: Microsoft System Center
