Dedup-Based Backup
Commvault / Symantec / EMC
The backup and recovery process is becoming a major challenge for most organizations. IT departments face demands to provide a higher-quality backup and recovery service at a time when data volumes are exploding and budgets are being squeezed. Backup and recovery are a particular concern for IT managers, who must now handle larger and more complex volumes of data while reducing cost and increasing service to the business.

Many organizations are simply not equipped to meet the backup and recovery challenge because they have outdated infrastructures. Their IT networks and systems cannot accommodate the increasing numbers of mission-critical applications that demand lower downtime, nor can they deal with the complexity that arises from tailoring service levels to the needs of individual business applications.

Despite the availability of technology to centralize backup and recovery operations, many companies still deploy multiple backup and recovery environments, primarily for legacy reasons: the need to separate UNIX and Windows® systems, to accommodate multiple solutions obtained through mergers and acquisitions, or to wall off critical applications with their own infrastructure. Scalability issues with existing technology are another problem.

While each individual backup and recovery decision may well have been reasonable, the cumulative impact can be damaging. The results can include multiple tape libraries, growing numbers of backup servers, multiple enterprise backup products, and the need for multiple management teams. These undesirable outcomes carry significant cost implications: increased hardware and software expenditures, large teams of backup administrators and operators, and excessive use of data-center space. Many organizations find themselves spending considerable amounts of money on high-performance tape drive resources, and then struggling to integrate and optimize them.
The drives’ lack of scalability can lead to backup-window overruns, prompting further capital expenditures on storage. Furthermore, high-volume servers often suffer from excessively long backup times across LAN links, which are simply unable to meet the recovery point objective (RPO) and recovery time objective (RTO) requirements of many modern businesses.
Data deduplication (often called “intelligent compression” or “single-instance storage”) is a method of reducing storage needs by eliminating redundant data. Only one unique instance of the data is actually retained on storage media, such as disk or tape. Redundant data is replaced with a pointer to the unique data copy. For example, a typical email system might contain 100 instances of the same one-megabyte (MB) file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is simply referenced back to the one saved copy. In this example, a 100 MB storage demand is reduced to only 1 MB.

Data deduplication offers other benefits. Lower storage space requirements save money on disk expenditures. More efficient use of disk space also allows for longer disk retention periods, which provides better recovery time objectives (RTO) over a longer window and reduces the need for tape backups. Data deduplication also reduces the amount of data that must be sent across a WAN for remote backups, replication, and disaster recovery.
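The email-attachment example above can be sketched in a few lines of Python. This is a minimal, illustrative model of single-instance storage, not any vendor's implementation: the class name, block size, and file-naming scheme are assumptions made for the sketch. Each block is keyed by its SHA-256 content hash; a block already in the store is not written again, and the "file" keeps only a list of pointers (hashes).

```python
import hashlib
import os

class DedupStore:
    """Illustrative single-instance store: each unique block is kept once,
    keyed by its SHA-256 content hash; duplicates become pointers."""

    def __init__(self):
        self.blocks = {}   # content hash -> block data, stored once
        self.files = {}    # file name -> list of content hashes (pointers)

    def backup(self, name, data, block_size=4096):
        pointers = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:   # keep only one copy per unique block
                self.blocks[digest] = block
            pointers.append(digest)         # redundant data -> pointer
        self.files[name] = pointers

    def restore(self, name):
        # Follow the pointers back to the single stored copy of each block.
        return b"".join(self.blocks[h] for h in self.files[name])

# 100 "emails" all carrying the same 1 MB attachment, as in the example above.
store = DedupStore()
attachment = os.urandom(1024 * 1024)
for n in range(100):
    store.backup(f"email-{n}", attachment)

stored = sum(len(b) for b in store.blocks.values())
print(stored // (1024 * 1024))               # 1 (MB actually stored, not 100)
print(store.restore("email-7") == attachment)  # True
```

Real products add details this sketch omits: variable-length chunking so that small insertions don't shift every block boundary, on-disk hash indexes, and reference counting so blocks can be reclaimed when no backup points at them.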
The explosion of data triggered by the mobile and social media revolutions has created extraordinary opportunities for enterprises to collect and analyze digital information. Naturally, enterprises also need ways to store that growing amount of valuable data. Between the business requirement for scalable data storage and the perpetual constraints of IT budgets, storage management—or storage optimization—is more than a best practice; it is an imperative.

Storage optimization enables data centers and the businesses they support to use storage resources more efficiently, saving money on additional storage hardware, cooling equipment and electricity, storage administration, and possible expansion of the data center’s floor space.

Further, by implementing a tiered storage model as part of an optimization strategy, enterprises can ensure that data is stored in the right place. For example, frequently accessed data—such as payroll information or customer accounts—should be stored on faster storage devices such as hybrid-flash and all-flash arrays, while infrequently accessed data (such as compliance and regulatory reporting data) is better off stored on slower spinning magnetic disk drives.

While data storage can be optimized to some extent through processes, frameworks, and storage management software, modern storage technologies such as flash provide dramatically better performance for enterprises that require it, such as those deploying virtual desktops, those using enterprise resource planning software and data analytics, and those with large-scale e-commerce platforms. As enterprises continue to amass digital data from website traffic, mobile devices, social media, and the Internet of Things, older technology such as spinning magnetic disks will become increasingly inadequate.
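The tiered-placement idea above can be reduced to a simple policy decision: route data to a tier based on how recently it was accessed. The sketch below is a hypothetical illustration; the tier names, the 30-day cutoff, and the function name are all assumptions, and real tiering engines weigh access frequency, I/O patterns, and cost rather than recency alone.

```python
from datetime import datetime, timedelta

def choose_tier(last_access, now, hot_cutoff_days=30):
    """Illustrative tiering policy: data touched within the cutoff window
    goes to flash; colder data goes to spinning magnetic disk."""
    age = now - last_access
    return "all-flash" if age <= timedelta(days=hot_cutoff_days) else "magnetic-disk"

now = datetime(2024, 1, 31)
# Payroll data accessed this month stays on the fast tier...
print(choose_tier(datetime(2024, 1, 20), now))  # all-flash
# ...while months-old compliance reports move to the cheap tier.
print(choose_tier(datetime(2023, 6, 1), now))   # magnetic-disk
```

The value of expressing placement as a policy function is that the cutoff and tier set become tunable parameters rather than manual migration decisions.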
If your business requires both scalable storage to handle vast amounts of data and high performance to meet the demands of customers and employees in the digital, mobile economy, it makes sense to start planning a storage optimization strategy that includes a refresh with modern technologies such as flash and hybrid-flash arrays.

Over the long run, these new storage technologies should provide higher-quality, optimized storage that makes data easier and faster to access, at a lower cost per gigabyte. Storage optimization can transform your data center from a storage facility into a strategic asset.