I have had the pleasure of witnessing and being an integral part of many developments in the world of backup and data protection solutions over the years. It is from my observations and understanding that I present to you the following 5 predictions regarding enterprise-grade data protection solutions in the coming years.
1 – Seamless Data Center Protection
One of the most important baseline features of a future backup solution will be the ability to deal with a variety of infrastructures and environments. While future enterprises will still employ on-premises resources, such as physical servers, much of their environment will be divided between one or more private clouds and possibly multiple vendors within the public cloud. Data and applications stored across these various infrastructures will have to be backed up and managed in a seamless manner. The ultimate backup management solution will cope with each environment individually, yet provide a single interface for users. Regardless of where an application is located, this single console will enable users to back up resources with the click of a mouse. Because resources will migrate between these different environments and infrastructures, DevOps or IT managers will be able to back up a given application without worrying about its original location.
2 – Backup on a Per-Role Basis
Already well on their way, servers and applications will certainly become much more versatile in the future. An application that runs today on server ‘x’ in an on-premises infrastructure will be able to run tomorrow in the cloud; similarly, an instance running in a public cloud will be able to be shut down and restarted in a different environment. Nonetheless, users will have to be assured that their data is protected, whether by backup or replication, which will shift the focus of backup toward application roles and away from individual servers. These roles will define all aspects of data protection: whether or not the data is replicated and under which SLA, as well as backup scheduling, the retention window, and how application consistency is handled. That way, if a certain virtual machine is shut down and a new one with the same role is started, the new one will be backed up in the same manner.
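The per-role idea can be sketched in a few lines: policies are keyed by application role rather than by server, so any replacement instance with a known role inherits the right protection automatically. The role names, policy fields, and `Instance`-style shapes below are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch of role-based backup policy resolution.
# Role names and policy fields are hypothetical examples.
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    schedule: str          # cron-style backup schedule
    retention_days: int    # how long backups are retained
    replicate: bool        # whether to replicate off-site
    app_consistent: bool   # quiesce the application before snapshot

# Policies are keyed by application role, not by server name.
ROLE_POLICIES = {
    "oracle-db": BackupPolicy("0 * * * *", retention_days=35,
                              replicate=True, app_consistent=True),
    "web-front": BackupPolicy("0 3 * * *", retention_days=7,
                              replicate=False, app_consistent=False),
}

def policy_for(role: str) -> BackupPolicy:
    """A new instance with a known role inherits that role's policy,
    regardless of which cloud or server it runs on."""
    return ROLE_POLICIES[role]

# A freshly started replacement VM with the same role gets the same policy:
print(policy_for("oracle-db").retention_days)  # → 35
```

The point of the sketch is that the lookup key never mentions a hostname: shut one VM down, start another with the same role, and the policy follows the role.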
3 – Recover Everywhere
This prediction is a bit more complicated in terms of implementation; however, from the users’ point of view, it has to be seamless. A user may have a backed-up resource in one environment or infrastructure, yet want the ability to recover it anywhere, even on another infrastructure. For instance, if an application is backed up on the Amazon cloud, a user may want to recover it on the enterprise’s private cloud, and vice versa. The recovery process can serve high availability and disaster recovery: if there is an outage in one infrastructure, work can continue on another, which is critical to data protection. Even in these elaborate environments, rapid recovery will still be expected, as recovery time objectives (RTOs) will only become shorter in the future.
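One way to picture the implementation complexity hinted at above is as a catalog of backups plus a table of conversion steps between infrastructure pairs. The provider names, catalog entries, and converter names here are purely illustrative assumptions:

```python
# Sketch: recovering a backup onto a different infrastructure than the
# one it was taken from. All names below are hypothetical examples.
BACKUP_CATALOG = {
    "billing-app": {"source": "aws", "image": "s3://backups/billing.img"},
}

# Each (source, target) pair needs an image-format conversion step.
CONVERTERS = {
    ("aws", "private-cloud"): "ami_to_qcow2",
    ("private-cloud", "aws"): "qcow2_to_ami",
}

def recover(app: str, target: str) -> str:
    """Return the recovery plan for an app onto the target infrastructure."""
    entry = BACKUP_CATALOG[app]
    source = entry["source"]
    if source == target:
        return f"restore {entry['image']} in place on {source}"
    step = CONVERTERS[(source, target)]
    return f"convert {entry['image']} via {step}, then boot on {target}"

print(recover("billing-app", "private-cloud"))
```

The seams the user never sees are exactly the `CONVERTERS` table: recovering "anywhere" means the solution maintains a conversion path between every pair of supported infrastructures.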
4 – Automatic and Granular Backup and Recovery
The concept of “application-aware backup” already exists in modern data protection. Each application has an optimal backup configuration, ensuring consistency for the recovery process (as well as maintaining an optimal RTO and RPO). In the future, however, backup solutions will be able to detect which applications or roles are running and which resources are being used, automatically knowing how to back each one up correctly. For example, with an Oracle database, clicking a button to begin a backup will translate into a consistent backup, without requiring the user to define and configure the right policy or adjust it as the environment changes. The agnostic way VSS allows backup solutions to create consistent backups of applications in Windows is a good start, but future solutions will need to be even broader.
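The detection step described above amounts to mapping what is running to a consistency method. A toy sketch, where the process names and method labels are illustrative assumptions rather than any product's real detection logic:

```python
# Sketch of application-aware backup: detect what is running and pick
# the consistency method automatically. Names are hypothetical.
DETECTION_RULES = {
    "oracle": "rman-hot-backup",     # database-native consistent backup
    "mysqld": "flush-and-snapshot",  # quiesce, snapshot, resume
    "nginx":  "crash-consistent",    # stateless: a plain snapshot suffices
}

def backup_method(running_processes: list) -> dict:
    """Map each detected application to its consistent backup method;
    unknown processes are simply skipped."""
    return {p: DETECTION_RULES[p]
            for p in running_processes if p in DETECTION_RULES}

print(backup_method(["oracle", "nginx", "cron"]))
# → {'oracle': 'rman-hot-backup', 'nginx': 'crash-consistent'}
```

In a real product the rules table would be far richer than a process-name lookup, but the shape is the same: the user clicks “back up”, and the solution decides how.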
Today, these processes require complicated configuration. Automating the backup management process will also remove the human error that causes faulty configurations. Applications will be able to set their own schedules automatically according to specific parameters (e.g., group membership and level of activity). For example, an application with many transactions will automatically back up at a more frequent rate than a less active one. Additionally, while granular recovery is a feature that already exists in backup solutions, I believe it will become much more dominant in the years to come. Recovering whole stacks, servers and instances is already possible in many solutions; however, certain applications will require ever more granular recovery, retrieving specific databases, tables, rows within tables, and so on.
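The activity-based scheduling mentioned above can be reduced to a simple rule: derive the backup interval from observed transaction rates. The thresholds below are illustrative assumptions, not recommended values:

```python
# Sketch: deriving a backup interval from observed activity, so busier
# applications are protected more often. Thresholds are hypothetical.
def backup_interval_minutes(transactions_per_hour: float) -> int:
    if transactions_per_hour > 10_000:
        return 15           # near-continuous protection for hot databases
    if transactions_per_hour > 100:
        return 60           # hourly for moderately active applications
    return 24 * 60          # daily is enough for near-static data

print(backup_interval_minutes(50_000))  # → 15
print(backup_interval_minutes(10))      # → 1440
```

A production system would presumably smooth the measured rate over time and tie the interval to an RPO target rather than fixed cutoffs, but the principle is the same: schedule frequency follows activity, not manual configuration.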
5 – Long Term and Seamless Archiving
Archiving takes a different angle than operational backup. Operational backups are generally kept for a short period of time, in case of data loss or a crash. Long-term archiving, by contrast, is not concerned with loss; it matters chiefly for compliance with regulations that require institutions, and even ordinary companies, to keep records for extended periods for legal reasons. Regardless of where information is archived, be it in Amazon S3, Glacier, on-premises or in another service altogether, there will be a need to move even archived data seamlessly between infrastructures, no matter which storage solution was initially used. Competition is constantly increasing, giving customers more freedom to move between vendors, and that mobility will become the norm before we know it.
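Moving archives seamlessly between vendors implies a vendor-neutral layer with a common put/get interface, so a migration is just copy-and-verify. The in-memory stores below are stand-ins, not real S3 or Glacier client code:

```python
# Sketch of a vendor-neutral archive layer. ArchiveStore is a toy
# stand-in for a real storage back end (S3, Glacier, on-premises, ...).
class ArchiveStore:
    def __init__(self, name: str):
        self.name = name
        self.objects = {}

    def put(self, key: str, blob: bytes):
        self.objects[key] = blob

    def get(self, key: str) -> bytes:
        return self.objects[key]

    def keys(self):
        return list(self.objects)

def migrate(src: ArchiveStore, dst: ArchiveStore) -> int:
    """Copy every archived object to the destination, then verify each
    copy before the source is ever considered for deletion."""
    for key in src.keys():
        dst.put(key, src.get(key))
    verified = sum(dst.get(k) == src.get(k) for k in src.keys())
    return verified

glacier = ArchiveStore("glacier")
glacier.put("2009-ledger", b"...")
on_prem = ArchiveStore("on-prem")
print(migrate(glacier, on_prem))  # → 1
```

Because every back end speaks the same interface, switching vendors is a bulk copy plus verification rather than a format-by-format rescue operation, which is precisely what makes the mobility predicted above plausible.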
Assuming future enterprises will continue to develop into highly versatile environments, all the while utilizing different infrastructures, backup solutions will have to adapt to that reality. Seamless solutions will have to be provided for backup and data protection regardless of the complexity of the environment, whilst maintaining simple user interactions.