When it comes to CIOs and their enterprise cloud strategies, more and more of their focus is turning to public cloud solutions. The usage of public cloud services by IT organizations continues to grow, thanks to cost savings and resource sharing that private cloud solutions often can't match.
Spending on cloud storage is expected to rise from $2.4 billion to $8.7 billion in the next 3-5 years, with 23% of that being spent on cloud backup. Through Amazon Web Services (AWS) and other similar solutions, CIOs have been able to avoid the efficiency and security issues that sometimes plague in-house backup.
In this article, we’ll cover the top five criteria that CIOs should look for in a public cloud solution, and discuss some of the key factors to consider when assessing cloud backup and recovery.
1. Data Security

Whether it's hacking or phishing attacks, ransomware, or accidental corruption or deletion of data, CIOs and their enterprises face ever-increasing data and security risks. When Code Spaces' AWS control panel was hacked, for example, the attackers demanded a large sum of money in exchange for returning control of the company's data. But it was too late: by the time Code Spaces regained its account, most of its data and backups – including offsite backups – had been deleted, at least in part.
Enterprises like Code Spaces are learning the hard way that keeping backup data secure takes more than offsite copies. One way to protect backup data from attacks and vulnerabilities is to store your backups in a "vault": a secondary site in a separate data center, accessible only to a separate, limited set of users.
When running your service on the public cloud, it's recommended to back up across AWS accounts. This creates a tight separation between production and your backup vault. Ordinary security measures, such as keeping software versions up to date and encrypting your backup repositories, are other important ways to keep backup data secure. Check the AWS security white paper for more detailed guidelines.
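As a minimal sketch of that cross-account separation, assuming boto3 and a hypothetical vault account ID, the snapshot-sharing step might look like this:

```python
# Hypothetical account ID -- replace with your real backup vault account.
VAULT_ACCOUNT_ID = "111111111111"

def share_snapshot_params(snapshot_id, vault_account):
    """Build the EC2 ModifySnapshotAttribute parameters that let the
    vault account create volumes from a production snapshot."""
    return {
        "SnapshotId": snapshot_id,
        "Attribute": "createVolumePermission",
        "OperationType": "add",
        "UserIds": [vault_account],
    }

def share_snapshot(snapshot_id):
    """Share a production snapshot with the vault account; that account
    can then copy it into its own storage, separated from production."""
    import boto3  # imported lazily so the helper above works offline too
    boto3.client("ec2").modify_snapshot_attribute(
        **share_snapshot_params(snapshot_id, VAULT_ACCOUNT_ID)
    )
```

The vault account then copies the shared snapshot into a repository that production credentials cannot touch, so a compromised production account cannot delete the backups.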
2. Automation

Automation is about making sure everything is in place when disaster strikes. You can't trust a human to take a snapshot of a volume every hour, so it's crucial to automate the snapshot process, including auto-creation of your backup repositories and secondary site. Backup should also be automated with a flexible solution that makes recovery-policy updates easy – for example, increasing your backup frequency as your user base grows.
AWS EBS snapshots are an IT operations team's friend. However, they are worth very little if they are not scheduled, automated, and managed to support a robust backup strategy. Cumbersome manual processes – such as updating backup policies with each release – should be avoided. Simply put, manually provisioning instances and volumes puts you at risk of not having a good cloud disaster recovery (DR) solution.
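The scheduling and pruning logic behind such automation can be sketched in a few lines. The retention windows below (hourly snapshots kept for 24 hours, dailies for 7 days) are illustrative assumptions, not a prescribed policy:

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, now, keep_hours=24, keep_days=7):
    """Given (snapshot_id, timestamp) pairs, return the ids that fall
    outside the retention windows and can be pruned automatically."""
    hourly_cutoff = now - timedelta(hours=keep_hours)
    daily_cutoff = now - timedelta(days=keep_days)
    expired = []
    for snap_id, ts in snapshots:
        if ts < daily_cutoff:
            expired.append(snap_id)   # older than every retention window
        elif ts < hourly_cutoff and ts.hour != 0:
            expired.append(snap_id)   # keep only the daily (midnight) copy
    return expired
```

A scheduler (cron, or a backup product's policy engine) would run this pruning pass after each snapshot cycle, so retention is enforced without anyone remembering to clean up.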
3. Recovery Drills
Recovery drills are a great way for CIOs to make sure their secondary site does the job when a disaster occurs. Running recovery drills also helps with compliance, since many regulatory standards require periodic reports on system performance, security, and availability. Together with the flexibility of the AWS cloud, recovery drills can be automated and scheduled.
Your backup solution vendor should ensure these drills run frequently and provide a summary report: whether your cloud instances are up and running, whether the network was configured correctly, and whether the application on your secondary site is running with all the data required to meet your RPO and RTO.
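A drill summary of this kind boils down to two comparisons. Here is a minimal sketch; the default RPO and RTO values are hypothetical placeholders:

```python
from datetime import timedelta

def drill_report(last_backup, recovery_started, recovery_finished,
                 rpo=timedelta(hours=1), rto=timedelta(minutes=30)):
    """Summarize an automated recovery drill: did the restored data
    and the recovery time meet the stated RPO and RTO?"""
    data_loss_window = recovery_started - last_backup   # worst-case lost data
    recovery_time = recovery_finished - recovery_started  # downtime during restore
    return {
        "rpo_met": data_loss_window <= rpo,
        "rto_met": recovery_time <= rto,
        "data_loss_window": data_loss_window,
        "recovery_time": recovery_time,
    }
```

Feeding each scheduled drill's timestamps through a check like this turns the drill from a box-ticking exercise into a measurable pass/fail result you can report to auditors.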
4. Application Level Backup
Building on the previous point about complete data inclusion, application-level support is an often overlooked but important factor for CIOs using the public cloud today. AWS EC2 instance and EBS volume snapshots are a great building block at the infrastructure level; however, when it comes to protecting an application, you should think not only of your resource stacks and dependencies but also of your data backup consistency. A backup copy that is truly application-consistent reflects a state of the dataset in which all application transactions are complete (i.e., no open transactions).
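As a small-scale analogy, SQLite's backup API illustrates the idea: it produces a transactionally consistent copy, whereas copying the raw database file could capture a half-applied write. SQLite here is purely a stand-in for any transactional application:

```python
import sqlite3

def consistent_backup(db_path, backup_path):
    """Copy a SQLite database using its backup API, which yields a
    transactionally consistent snapshot -- no open or half-applied
    transactions end up in the copy, unlike a raw file copy."""
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    try:
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```

At cloud scale, the same principle means quiescing or flushing the application (or using its native dump/backup hooks) before the snapshot is taken, rather than snapshotting the disk blindly.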
5. Consistent Monitoring
Scheduling a backup and simply letting it run automatically isn’t enough. Monitoring your backup solution closely lets you know that your recent backups have succeeded, and tells you when certain failure scenarios take place. Services such as Amazon CloudWatch can be utilized, for example, to collect and track metrics, collect and monitor log files, set alarms and automatically react to changes in your AWS resources.
In addition to monitoring the health of your backup site, the solution provider should constantly monitor for security threats such as unauthorized file updates (i.e., file integrity), and alert teams immediately so data leaks and breaches can be stopped before they spread.
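A basic form of file-integrity monitoring is a hash baseline compared against the current state. This sketch uses SHA-256 and is a simplified stand-in for what a solution provider would run continuously:

```python
import hashlib
from pathlib import Path

def fingerprint(root):
    """Map each file under root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def unauthorized_changes(baseline, current):
    """Files whose contents changed, or that disappeared, since the baseline."""
    return sorted(
        name for name, digest in baseline.items()
        if current.get(name) != digest
    )
```

In production this check would run on a schedule, with any non-empty result feeding an alerting service such as Amazon CloudWatch so the team is paged before tampered data makes it into a backup.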
As infrastructure evolves and the emphasis on security and data protection grows, companies are outgrowing their in-house backup solutions and looking more and more to the public cloud.
A full-featured enterprise backup and recovery solution is the best approach for ensuring that your data is protected, secure and reliable. Options like Cloud Protection Manager have therefore become an attractive alternative to traditional onsite backup systems. CPM provides flexible backup policies and scheduling, rapid recovery of instances, and a simple, intuitive and user-friendly web interface to easily manage your backup operations.
CPM has a Windows agent to consistently back up Windows applications and allows users to manage multiple AWS accounts and configure policies and schedules to take automated snapshot backups. With CPM, you can recover a volume from a snapshot, increase its size and switch it with an existing attached volume in a single step.