Modern Data Management and Protection Challenges
Customers of all types and sizes are seeking new and innovative ways to overcome challenges associated with data growth and storage management. While these challenges are not necessarily new, they continue to become more complex and more difficult to overcome due to the following:
- Pace of data growth has accelerated
- Location of data has become more dispersed
- Linkages between data sets have become more complex
Data and storage management challenges are compounded by the need for companies to protect critical data assets against disaster through backup and recovery solutions. In order to maintain backups of critical data assets, additional secondary storage resources are required. This additional layer of backup storage must be implemented wherever backups occur, including central data centers and remote offices.
Storage Efficiencies through Data Deduplication
Backup Exec 2012 includes advanced data deduplication technology that allows companies to dramatically reduce the amount of storage required for backups, and to more efficiently centralize backup data from multiple sites for assured disaster recovery. These data deduplication capabilities are available in the Backup Exec 2012 Deduplication Option.
Backup Exec 2012 Data Deduplication Technology
The data deduplication technology within Backup Exec 2012 breaks down streams of backup data into “blocks.” Each data block is identified as either unique or non-unique, and a tracking database is used to ensure that only a single copy of a data block is saved to storage by that Backup Exec server. For subsequent backups, the tracking database identifies which blocks have been protected and only stores the blocks that are new or unique. For example, if five different client systems are sending backup data to a Backup Exec server and a data block is found in backup streams from all five of those client systems, only a single copy of the data block is actually stored by the Backup Exec server. This process of reducing the redundant data blocks saved to backup storage leads to a significant reduction in the storage space needed for backups.
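To make the block-level process concrete, the sketch below implements the general idea in Python: split the stream into blocks, fingerprint each block, and store a block only the first time its fingerprint is seen. The fixed 128 KB block size, SHA-256 fingerprints, and in-memory tracking index are illustrative assumptions, not Backup Exec’s actual design.

```python
import hashlib
import io

BLOCK_SIZE = 128 * 1024  # assumed fixed block size; real products tune or vary this

class DedupStore:
    """Toy deduplicating block store: at most one copy of each unique block."""

    def __init__(self):
        self.blocks = {}  # fingerprint -> block bytes (the single-copy storage)
        self.saved = 0    # bytes skipped because the block was already stored

    def ingest(self, stream):
        """Split a backup stream into blocks, store only unseen blocks,
        and return the recipe of fingerprints needed to rebuild the stream."""
        recipe = []
        while True:
            block = stream.read(BLOCK_SIZE)
            if not block:
                break
            fp = hashlib.sha256(block).hexdigest()  # identify the block
            if fp not in self.blocks:
                self.blocks[fp] = block             # unique: store it
            else:
                self.saved += len(block)            # non-unique: skip it
            recipe.append(fp)
        return recipe

    def restore(self, recipe):
        """Rebuild the original stream from the stored blocks."""
        return b"".join(self.blocks[fp] for fp in recipe)

# Two backups of identical data: the second consumes no new block storage.
store = DedupStore()
store.ingest(io.BytesIO(b"operating system files " * 20000))
store.ingest(io.BytesIO(b"operating system files " * 20000))
print(len(store.blocks), "unique blocks;", store.saved, "bytes skipped")
```

If five clients back up the same operating-system files, their streams share fingerprints, so the store keeps a single copy of each shared block; that is the effect described above.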
Figure 1: Deduplication Process
The deduplication technology within Backup Exec is applied across all backups managed by a deduplication-enabled Backup Exec server.
Deduplication Methods within Backup Exec 2012
The Backup Exec 2012 Deduplication Option gives administrators the flexibility to choose when and where deduplication calculations take place. Three deduplication methods are supported by Backup Exec 2012. These are as follows:
Client-side Deduplication
The client-side deduplication method is a software-driven process. Deduplication takes place at the source, or protected client, and backup data is sent over the network in deduplicated form to the Backup Exec server. Only unique blocks of backup data are sent to the backup server and saved to backup storage; non-unique blocks are skipped.
Backup Exec Server-side Deduplication
The Backup Exec server-side deduplication method is also a software-driven process. Deduplication takes place after backup data has arrived at the Backup Exec server and just before data is stored to disk (also known as inline deduplication). Only unique blocks of backup data are stored; non-unique blocks are skipped.
Appliance Deduplication
The appliance deduplication method is a hardware-driven process. Deduplication takes place on the deduplication appliance itself and can be either inline or post-process, depending on the device (for example, ExaGrid or Quantum). The third-party deduplication device handles all aspects of deduplication.
Administrators can mix and match deduplication methods to fit their unique needs. For example, a single Backup Exec server enabled for deduplication can simultaneously use client-side deduplication for some jobs, Backup Exec server-side deduplication for others, and appliance deduplication for yet another set of jobs.
Figure 2: Deduplication Methods
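To make the difference between the methods concrete, here is a rough sketch of the client-side exchange: the client fingerprints blocks locally and asks the server which ones it already holds, so only unique data crosses the network. This is illustrative Python with hypothetical helper names (has_block, put_block, put_recipe), not Backup Exec’s actual protocol.

```python
import hashlib

def client_side_backup(stream, server, block_size=128 * 1024):
    """Client-side deduplication sketch: the client fingerprints blocks
    locally and sends only the blocks the server does not already have.
    server.has_block / put_block / put_recipe are hypothetical calls."""
    recipe = []
    while True:
        block = stream.read(block_size)
        if not block:
            break
        fp = hashlib.sha256(block).hexdigest()
        recipe.append(fp)
        if not server.has_block(fp):      # tiny metadata query over the network
            server.put_block(fp, block)   # only unique block data crosses the wire
    server.put_recipe(recipe)             # lets the server rebuild the stream
```

Server-side deduplication runs the same fingerprint check, but only after every block has already traveled over the network, just before the data is written to disk; appliance deduplication pushes that check down into the storage hardware.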
The different deduplication methods supported by Backup Exec 2012 are each best suited to particular configurations. The benefits of each method, as well as the configurations for which each is best suited, will be detailed over the coming weeks.
By Kate Lewis
Agents. Agentless. VADP integration. VSS integration. Image-based backups. File-based backups. Hypervisor-based snapshots. Array-based snapshots. Host-based backups. Guest-based backups. It’s no wonder backup professionals are confused about the best approach for backing up their virtual machines. With a myriad of vendors each positioning their own approach as the “best” way, what’s left in the wake is ambiguity and a good dose of confusion about what an agent is, or does.
With the help of my technical experts here at Symantec, this blog cuts through the confusion with unbiased information so you can make the right choice for your environment. By looking at each method and highlighting its pros and cons, you can make informed decisions without the distraction of smoke and mirrors.
Caution: Before we dive in, it’s important to mention that the phrases agentless backup and agent-based backup can mean different things to different vendors. To truly determine the best approach for your organization, you need to look under the covers and weigh the pros and cons of each method. Don’t worry; we have done the hard work for you at Symantec. Now let us jump in and take a look at each one in turn.
1) Traditional Agent-Based Backup (also known as guest based backup)
With traditional agent-based backup, an agent is installed in every virtual machine, and each virtual machine is treated as if it were a physical server. The agent in this scenario reads data from disk and streams it to the backup server. This method should not be confused with the agent-assisted backups we will cover later.
There are many people today using this approach to protect their virtual machines. According to ESG1, 46% of all environments are utilizing guest based protection methods with a backup agent running inside the guest OS. Although there are newer methods available, you may be asking yourself why so many people are still using this method.
Pros:
- Both physical and virtual machines are protected using the same method
- Application owners can manage backups and restores from guest OS
- Time tested and proven solution
- Meets their recovery needs
- This is the only way to protect VMware Fault Tolerant virtual machines
Cons:
- Significantly higher CPU, memory, I/O and network resource utilization on virtual host machines when backups run
- Need to install and manage agents on each virtual machine
- Cost may be high for solutions that license on a per-agent basis as opposed to per-hypervisor licensing
- Cannot accommodate virtual machine sprawl; lacks visibility into a changing virtual infrastructure
- No visibility into backups from the VM administrators’ point of view; for example, backups are not visible at the vSphere client level
- May need multiple kinds of backup and recovery methods; for example, separate backup policies may be needed for file and folder backups, Microsoft Exchange backups, bare-metal recovery, etc.
- Complex disaster recovery strategies
- No SAN transport backups to offload backup processing from the virtual infrastructure
- No protection for offline virtual machines and virtual machine templates
- Slow file-by-file backup, with the agent sending even unchanged data over and over again
Verdict: A cumbersome, traditional backup and recovery method, but offers flexible recovery features.
2) Agentless backup (also known as host-based backup)
Agentless backup, also known as host-based backup, refers to solutions that do not require an agent to be installed on each VM by the administrator. However, it’s important to note that the software may be injecting an agent onto the guest machines without your knowledge.
These solutions integrate with the VMware vStorage APIs for Data Protection (VADP) or Microsoft VSS, which create fast, high-performance snapshots of the virtual disks attached to VMs. The backup software communicates with VADP or VSS and tells it what to back up. VADP or VSS carries out a number of steps and in turn prepares the data to be backed up: the VSS/VADP provider snaps the volume and gives the backup solution access to the snapshot by feeding the file to the backup server. The backup solution then backs up the snapshot.
While this approach provides recovery for full VMs, files and folders, the recovery of applications and application data can be complex and time-consuming, because it requires additional processing that engages resources external to the virtual machines. Applications on these hypervisors won’t truncate their transaction logs or perform other database maintenance tasks. Exchange Server is a perfect example: without an agent or agent-like executable in the VM gathering metadata about the Exchange information store, additional processing external to the Exchange VM is needed to map mailbox data. Ignoring this process can result in transactional applications that must be manually managed by the application owner, and data that might only be recovered by first restoring the entire VM and its virtual disks. One of the key differences between agentless and agent-assisted backups is therefore how this transactional post-processing happens.
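The overall choreography looks roughly like the sketch below. The function and attribute names are hypothetical stand-ins, not the real VADP or VSS interfaces; the point is the sequence of steps, and what is absent from it.

```python
def agentless_vm_backup(vm, backup_server):
    """Agentless flow: snapshot at the hypervisor, copy the frozen virtual
    disks, release the snapshot. All API names here are hypothetical
    stand-ins for the real VADP/VSS calls."""
    snapshot = vm.create_snapshot(quiesce=True)   # VADP/VSS freezes the disks
    try:
        for vdisk in snapshot.virtual_disks:
            backup_server.store(vm.name, vdisk.read_all())  # image-level copy
    finally:
        vm.remove_snapshot(snapshot)              # always release the snapshot
    # Note what never happened: no code inside the guest truncated application
    # logs or captured application metadata, which is exactly why application
    # recovery from a purely agentless backup can be complex.
```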
Pros:
- VMs can be backed up online or offline
- Less CPU, memory, I/O and network impact on the virtual host
- An agentless architecture doesn’t require the management of agent software
- No per VM agent licensing fees
Cons:
- Extremely difficult to recover granular object data; you must first restore the entire VM and its virtual disks
- Relies on traditional login techniques to log into the server
- Temporary “injected” drivers can destabilize the system and compromise data integrity
- Troubleshooting is more complex when using injected (temporary) agents
- A centralized controller is a single point of failure
- Requires a fully virtualized environment; physical machines still require agent-based backup, so if you have both physical and virtual machines you will need two backup solutions, one for each
- Additional processing (e.g., post-backup scripts and log truncation) engages resources external to the virtual machines
Verdict: Good method for protecting file and print servers, but not an optimal solution for VMs with applications. Recovery is operationally painful for applications and application data.
3) Agent-Assisted Backup: Next-generation backup (also known as host-based backup)
Agent-assisted backups, also known as host-based backups, integrate with VMware’s VADP and Microsoft’s VSS to provide fast and efficient online and offline backups of ESX, vSphere and Hyper-V. The primary difference from the agentless design is perspective: this method pairs VMware VADP or Microsoft VSS with an agent that gathers application metadata to enable multiple avenues of recovery (full VM, applications, databases, files, folders and granular objects). The agent-like executable in this instance does not carry out the backup itself and thus does not impact the performance of the VM; it simply handles metadata and necessary post-backup processing like log truncation (sketched in code after the benefits list below).
- The backup is of the entire virtual machine. This is important because it means the entire VM can be recovered from the image. It also means that products like Backup Exec & NetBackup can offer “any level” of recovery from the image contents: files/folders, databases, and granular database contents like email and documents.
- The backup can be offloaded from both the VM and the hypervisor. This means that Backup Exec & NetBackup have the flexibility to offload VM backup onto an existing backup server instead of burdening the hypervisor. It also means that users have the option of deploying a dedicated VM, e.g. a virtual appliance, when a physical backup server is not practical.
- Application owners can self-serve restore requests: the application owner can request restores directly from the guest operating system.
- Enhanced security: the agent installed for assisting with VM backup can be managed by the application owner, avoiding the need to share guest OS credentials with the backup administrator.
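Set against the agentless sketch earlier, the agent-assisted flow adds two small guest-side steps around the same image-level backup: metadata collection before the snapshot and post-processing after it. Again, the names below are hypothetical stand-ins, not the actual Backup Exec agent interface.

```python
def agent_assisted_vm_backup(vm, guest_agent, backup_server):
    """Agent-assisted flow: the same image-level backup as the agentless
    sketch, plus a lightweight in-guest agent that supplies application
    metadata before the backup and post-processing after it.
    All API names are hypothetical."""
    metadata = guest_agent.collect_app_metadata()  # e.g. Exchange store layout
    snapshot = vm.create_snapshot(quiesce=True)
    try:
        for vdisk in snapshot.virtual_disks:
            backup_server.store(vm.name, vdisk.read_all())
        backup_server.store_metadata(vm.name, metadata)  # enables granular restore
    finally:
        vm.remove_snapshot(snapshot)
    guest_agent.truncate_logs()  # the post-backup maintenance agentless flows skip
```

The agent never streams the backup data itself, which is why the performance impact on the VM stays close to the agentless case.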
The most resource-efficient backups are those performed at the hypervisor level, which offer the following advantages:
- Backup is still image-based, leveraging VADP/VSS
- Ability to recover files and folders directly back to a virtual machine
- Automatic discovery of applications inside the VM
- Granular application recovery
- For VSS-compliant applications, the backup is application-consistent via VSS integration
- For non-VSS-compliant applications, the backup is crash-consistent
- Less performance and I/O impact on the virtual machines
- Can use a LAN or SAN interface
Verdict: Excellent method for VMs with applications like AD, Exchange, SQL and SharePoint.
Before I close out this blog, it is very important to understand that backup vendors who utilize VMware VADP or Microsoft VSS to perform backups are not all the same; some are better than others, and how good the backup software is will depend on its validation phase. So don’t be fooled into thinking that all solutions with VMware VADP or Microsoft VSS integration offer the same functionality and benefits. A quality backup application that integrates with VMware VADP or Microsoft VSS to perform backups should, at the very minimum:
- Validate snapshots prior to backup
- Use VMware’s Changed Block Tracking (CBT) mechanism to reduce the storage footprint by backing up only changed data (see the sketch after this list)
- Verify backup data
- Back up VMs directly from the storage location (for example, SAN, iSCSI or NAS) without having to install any software (a.k.a. an agent) inside the VMs
- Offer flexible recovery options (full VM recovery, file/folder-level recovery, application-level recovery and granular object recovery)
- Centralized backups for virtual machines
- Dynamic inclusion of VMs
- Ability to transport your data offsite for disaster recovery
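To illustrate the CBT point from the list above: with changed-block tracking, an incremental backup asks the hypervisor which disk regions changed since the last backup and reads only those. The sketch below uses hypothetical call names standing in for the real vSphere CBT queries.

```python
def incremental_backup(vdisk, backup_server, last_change_id):
    """Back up only the disk regions changed since the previous backup,
    as reported by the hypervisor's changed-block tracking.
    query_changed_areas / store_extent are hypothetical names."""
    changed = vdisk.query_changed_areas(since=last_change_id)
    for offset, length in changed:            # usually a small fraction of
        data = vdisk.read(offset, length)     # the whole virtual disk
        backup_server.store_extent(vdisk.id, offset, data)
    return vdisk.current_change_id()          # remember for the next run
```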
As you embrace virtualization or increase your virtual footprint, selecting a backup solution that integrates with VMware VADP and Microsoft VSS will provide fast, snapshot-based image backups of online and offline guest virtual machines. For superior recovery capabilities, a solution that gathers metadata and executes post-processing tasks is a must. So we’ve learned that the question isn’t whether you have an agent or other differently named binary in the guest; the question is what the agent’s function is. Jason Buffington, a Senior Analyst at ESG, wrote a great blog on “good agents” and “bad agents”. If you would like to learn more, you can check out his blog here: http://www.esg-global.com/briefs/agent-best-practices-for-host-based-backups/.
In summary, an agent-assisted solution that integrates with VMware VADP and Microsoft VSS is the clear winner in today’s environments where physical and virtual machines require a holistic approach.
Finally, I couldn’t close out this blog without mentioning Backup Exec. Yes – I am biased on this one, but Backup Exec provides a perfect solution for virtual environments, with technology that was designed for VMware and Hyper-V. Not only does Backup Exec provide superior data protection for virtual environments, it also provides market-leading technology for physical environments too. With Backup Exec you get it all in a single solution. So here’s my pitch: Backup Exec 2012 dramatically reduces the time to recover from small or major disasters by protecting all of your virtual machines and/or physical servers through a single-pass backup, while still allowing for individual file, folder and granular object-level recovery. In short, it’s powerful, efficient, reliable and fast.
If you have any questions or would like to know more, email me at: email@example.com.
1 Source: ESG Research Report, 2012 Trends in Data Protection Modernization, August 2012.
When searching for a backup and recovery solution for virtual environments, here are a few “must-have” features to consider:
1) Granular Recovery
Granular and application-level recovery is paramount to any virtual backup strategy. If you can’t restore what you need, when you need it, then your entire backup strategy is flawed from day one. Make sure your chosen solution provides all levels of recovery: full virtual machine, individual virtual disks, and virtualized application and database servers, along with standards like files, folders and granular objects such as an individual email.
Backup Exec leverages Symantec’s patented Granular Recovery Technology (GRT) to provide all the recovery methods mentioned above. The innovative GRT feature helps IT Administrators save time and headaches by enabling them to restore individual files, folders and granular objects within a guest virtual machine from a single-pass image backup. In addition, Backup Exec also provides the ability to recover an entire VM or virtual disk, virtualized applications and databases. Backup Exec even includes physical to virtual conversion technology, so you can accelerate your transition to virtual environments. Overall, Backup Exec provides one product and any recovery.
2) Application Awareness
Application awareness is an essential component of virtual machine backup. While most backup products can provide consistent backups through integration with technologies like Microsoft’s VSS, many do not perform the required post-process functions, like log truncation, that ensure you are protecting the application completely. Many backup applications can’t perform granular recovery of those virtualized applications either.
Many business-critical applications, like Microsoft Exchange or SQL Server, will perform certain types of maintenance only when a successful backup occurs. Application-aware backup solutions ensure this maintenance can take place. Usually, this requires some sort of software (i.e., an agent, whether it’s deployed beforehand or injected and uninstalled on demand) in the virtualized application server. The most capable backup applications, such as Backup Exec, are able to index, catalog, or otherwise capture important application metadata that is necessary for fast search and recovery of granular application items.
3) Data Deduplication
We’ve all heard the saying that VMs are multiplying like bunny rabbits. According to a recent ESG survey, companies have about 16 virtual machines per physical host, with plans to grow to 26 per host. This number will continue to move upwards as hardware is built to accommodate the trend. It’s no surprise that across all these guest machines there is significant duplication of data from both applications and operating systems.
To manage data growth and storage costs while improving network bandwidth utilization, data deduplication is a must. However, not all data deduplication solutions are equal. Look for a solution that offers source-side deduplication. Why? Removing redundant data as close to the source as possible maximizes the benefits of deduplication: it decreases network traffic, reduces the storage footprint and lowers memory usage, thereby helping you beat backup windows and make backup strategies more successful.
Also, ensure your data deduplication solution works across everything you protect – all virtual machines and any physical servers too – otherwise the storage savings from deduplication will be severely diminished. You want to deduplicate your data as effectively as possible, and having multiple backup jobs containing the same data isn’t very efficient. For example, if you are protecting 100 VMs and 50 physical servers running Windows, true global data deduplication would reduce the backup to just one stored instance of the operating system as opposed to 150.
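To put rough numbers on that example, assume each Windows OS image occupies about 10 GB (an illustrative figure, not from the article):

```python
os_image_gb = 10      # assumed size of one Windows OS image (illustrative)
machines = 100 + 50   # 100 VMs plus 50 physical Windows servers

without_dedup = machines * os_image_gb  # every job stores its own OS copy
with_global_dedup = 1 * os_image_gb     # one stored instance across all jobs

print(f"OS data stored without dedup: {without_dedup} GB")          # 1500 GB
print(f"OS data stored with global dedup: {with_global_dedup} GB")  # 10 GB

# A siloed solution that deduplicates VM backups and physical-server backups
# separately would store at least two instances, and often many more.
```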
Backup Exec enables customers to choose the deduplication method that best suits their environment. Backup Exec’s Deduplication Option offers three methods for deduplicating data across the enterprise (across all backup jobs). These methods are Client (or source) Deduplication, Media Server Deduplication, and Appliance Deduplication.
4) Physical Server and Multi-Hypervisor Support
More and more organizations are running multiple hypervisors within their environment, especially as alternatives to VMware gain popularity, particularly Microsoft’s Hyper-V. Finding a single solution that supports all of your hypervisors will simplify backup complexity and licensing, streamline management and reduce costs.
While some IT organizations have invested in multiple separate backup tools – one for physical servers and another for virtual servers – customers have consistently asked for a single vendor to manage both environments. This is because a differing approach to backup leads to inconsistent data management, backup confusion, increased cost, and even conflict between IT groups. The solution is for IT to bring together the virtualization and backup teams, assigning ownership, authority and resources for backup of both physical and virtual machines.
With the release of Backup Exec 2012, now you can eliminate backup complexity and the need for specialized point products through a single solution that unifies virtual and physical, deduplication, and replication while offering the choice of on-premise software, appliance, or cloud delivery models. Unlike other solutions, Backup Exec is powered by Symantec V-Ray technology, which enables visibility across both virtual and physical environments for fast and efficient backup and recovery.
What are your must haves in a backup and recovery solution for VMs and why?
Cancer research group endorses Backup Exec 2012 upgrade, by Dave Raffo, Tech Target, September 6, 2012
Not all Backup Exec users hated the new Backup Exec 2012 interface or demanded Symantec Corp. make changes before they upgraded. Scott Gould, senior network and systems analyst at the Gynecological Oncology Group’s statistical data center in Buffalo, N.Y., said he found the new version easier to use than Backup Exec 2010 practically from the start.
The Gynecological Oncology Group (GOG) uses Backup Exec to protect clinical trial data for cancer research generated by about 10,000 GOG members at more than 700 institutions. The group installed Backup Exec (BE) 2010 about two years ago and performed a Backup Exec 2012 upgrade early this year. Gould said the new interface was no problem for him or his two-person support staff to master.
“I liked it; no ifs, ands or buts about it,” he said. “There was a learning curve, but by the time I was setting up my fifth protected resource, I realized how easy it was to use. And it wasn’t just easy for me. I was responsible for designing and setting up the system from the ground up, but other users [at GOG] have picked it up easily. They can do simple restores on their own. That says a lot about the user interface.”
Not everyone agreed with that assessment, though, as Symantec found soon after releasing the BE 2012 upgrade. There was an angry user backlash, placated only by the vendor’s promise to fix some of the issues via service packs. But Symantec execs said newer users and heavily virtualized shops should take to the new interface more quickly than those who have used the product for many years.
Gould said he quickly realized how to create a resource group that showed him what he could see in the old interface. “It wasn’t that difficult to get a view similar to what we had before,” he said. “The whole job staging flow from resource to disk or tape has gotten easier to manipulate or manage.”
Gould said GOG’s servers are about 80% virtualized, with about 40 virtual machines on a five-host VMware cluster. GOG switched from CA ARCserve to Backup Exec 2010 at the same time it moved to Dell Compellent SANs. Gould said Backup Exec handled virtual machine backups better than ARCserve and has superior data deduplication. He said BE’s one-pass backup lets him protect virtual and physical machines while backing up only once.
Gould said GOG struggled to do full backups in 24 hours with ARCserve, so it had to wait until the weekends to do full backups on some systems.
He said GOG now starts full backups Monday through Friday at around 5 p.m., and completes them by around 9 a.m. the next morning. GOG backs up data directly to a Dell PowerEdge server, and then copies it out to tape at night.
He said his dedupe ratio is about 32:1, with roughly 4.5 terabytes (TB) of disk space protecting around 150 TB of uncompressed data.
Backup Exec 2012 introduced the first revolutionary change to its administrative paradigm in over nine years, moving to a resource-centric model that enables data protection lifecycle flexibility not possible in previous versions. Designed and optimized for physical, virtual, tape, disk, deduplication and the cloud, the new Backup Exec 2012 experience gives users the power to tailor the right protection approach in a stepped, logical fashion.
To help our more established customers make the transition to the new paradigm, the Backup Exec team has reacted quickly to customer feedback on how to improve the migration experience. Available now in the Backup Exec 2012 SP1 release is the Job View button, which gives users a dedicated view of all jobs that have been configured in their Backup Exec environment.
This feature is similar to the Job Monitor found in previous versions of Backup Exec. The Job View allows users to view, and take action on, all backup jobs managed by their Backup Exec server through a simple one-click button. Going forward, we will focus on various improvements, such as the ability to group servers to provide a “job” analogue, prioritize server backup order and target multiple server backups to the same tape; the next-generation Job Monitor will also be made available to all Backup Exec 2012 customers.
What does this really mean?
Well, in spite of what I might think, there have been some grumblings on the ground that we have gone “A Bridge Too Far” with the new Backup Exec 2012 User Interface (Marketing would refer to this as an “Exxxxxperrriance!”). I love it. Loads of you do too, but it is a big leap for the more hardened amongst us. However, things are not so bad: with SP1a we’ve introduced a “Job View”, a bit like the old Job Monitor. You can already group assets and therefore edit multiple policies (for multiple assets) at the same time, as described in an earlier blog. All good stuff!
The bottom line is: when you are migrating from BE 2010 to BE 2012, make sure you read the wording in the wizards, which explains what is going to happen next. Blindly clicking through (click, click, clickerty, click) will give you a shock or two.
Upgrading is usually a painful and arduous process. This is true of any software upgrade, especially when an approach is reengineered to enhance a product dramatically, and it is certainly not limited to BE. A number of you have found that after the move and upgrade to BE 2012, policies appear to have disappeared and you have to have a job for each server.
Policies haven’t gone; they appear per server, and because the UI is asset-centric (per server) you do have one policy for each server. But that doesn’t mean you have to write hundreds of policies, one for each server. The BE 2012 experience includes, in a single policy for each server, all the jobs you need to back up that server and to duplicate its data from one storage device to another, i.e. disk to tape.
I’ve heard that customers are having issues when they have a bunch of servers that they want to configure in the same way. In the past you would have written a policy and then associated it with a bunch of servers. In the new version of BE you can still do that: you can build a group and then write a policy for that group. You will still end up with a policy for each server, but you won’t have to write the same policy for a hundred servers or more. The thing is that it really isn’t that obvious how to make the group in the first place. Actually, when you think about it, it really is obvious … Ctrl+select!
If with a previous version of BE you had a single job that protected several servers (sometimes tens or hundreds), when you upgrade to BE 2012 it splits that single policy into separate jobs for each protected server. If you have already migrated and need to change any settings, you will need to make that change across all those new jobs. Painful? Not at all: you can change the setting for all of your backups at the same time and end up with one job per server, with all those jobs updated in a single process (just like in BE 2010) and carrying an identical configuration. If you want to change a configuration for a group of servers or an individual server, you can do this.
One issue we are aware of, and will be fixing in a service pack coming out pretty soon: where customers had the overwrite option set to “Overwrite media” prior to migration, the migration creates a new job for each asset, and every one of those new jobs retains all the attributes of the original policy, including the “Overwrite media” setting. This causes all of those new jobs to request new media.
To fix it, select all the servers that have the overwrite media option set by Shift+clicking or Ctrl+clicking, or by selecting a containing group. This is the way you create groups in BE 2012: although you end up with a job per server, you can still create a single policy and associate it with multiple assets, so you can create or change policies or configurations for multiple instances without having to go through each individual server. The steps are:
- Gather all the servers you want to amend into a single group (Ctrl+Click)
- Click “Edit Backups” on the toolbar
- You get a list of all the server assets you selected; click the checkbox in the header to select all the servers
- Select “OK” to go into the “multi-edit” view, where you can make changes across all the selected backups at one time
- In this case, click “Edit” on the backup stage
- Click the Storage tab and change the media overwrite option from “Overwrite media” to “Append to media, overwrite if no appendable media is available”
- Click OK twice; this changes the option for all the selected servers
This changes only that option and will not affect any other changes or customisations you may have made to those server assets – cool!
Over the last 12 months it’s been hard to miss all the messages from vendors promising to modernize your backup infrastructure. I particularly like the messages from vendors that champion solutions addressing just one or two aspects of backup, such as deduplication, snapshots, or tools for backing up only VMware and Hyper-V environments. These are quick fixes, not modern data protection. Throwing more solutions at a problem as a quick fix is the cause of backup complexity and cost, and a solution that increases complexity, risk or cost is hardly one that will modernize your backups. Why? Because the very nature of modernization is moving forward: reducing complexity, risk and cost. Yet these vendors, ironically, still try to position themselves as the answer to backup modernization without really offering a solution that delivers it.
So what does it really mean to modernize your backup infrastructure? How do you know if your backup infrastructure is out of date? Is it the software, hardware or network that needs updating or is it a combination of all three?
To answer these questions and more, the Backup Exec team at Symantec has teamed up with leading analyst firm ESG to bring you the latest backup trends and pragmatic advice on how to apply those ideas in ways that will truly modernize your environment. On Wednesday 9th May, during Symantec Vision 2012 at the MGM in Las Vegas (http://www.symantec.com/vision), Jason Buffington from ESG will be presenting How to Modernize Your Backups in 2012 (session number IM B01). In this session Jason will uncover his research on the latest trends in data protection. Not only will Jason give pragmatic advice on how to apply those ideas to your environment, he’ll also share backup and recovery best practices that will radically improve your backup and recovery performance. From virtualization to cloud, and from dedupe to replication, Jason will squeeze everything into 55 minutes.
If you are unable to attend Vision this year, don’t worry! The session will be recorded, so if you would like to get your hands on the replay, simply reply to this post and we will send you a link to view the recording as soon as it becomes available.
In the meantime, if you have a question relating to modernizing your backups in 2012, send it via Twitter to @JBUFF. Jason will be answering these questions and more during the session.
Before I sign off, I wanted to leave you with one closing thought. While many vendors talk about modernizing backup, there is only one company that is really delivering on that promise, and that’s Symantec. If you are attending Vision, stop by the Backup Exec booth and find out how.
Virtualization – what can I say other than it has “virtually” changed the IT world in which we all work and play? Why is virtualization so attractive to IT administrators? In short, the answer is easy: there are many uses and benefits that we gain through virtualization. For starters, the thought of having a single server’s physical footprint represent many servers on the network has been a boon to administrators looking to consolidate space and reduce operating costs. And the ability to quickly stand up a VM copy of a major application or work server for patch testing, allowing administrators to test during business hours, is simply a game changer.
So, how else can we leverage this exciting technology? Well…how about recovery? What if I said we were talking about both physical and virtual environments?
I often speak with administrators who look for ways to simply protect their virtualized assets for the purpose of full recovery in the event of a disaster – i.e., their backup solution is only working to back up their virtual environment, not truly embrace it. What if we began taking the approach of having the backup software actually use virtualization as a true extension of the recovery plan? Can we take virtualization from being a resource that is typically only backed up to one that can be leveraged as the platform for recovery for both physical and virtual servers alike?
We at Symantec say YES! …and Backup Exec 2012 is just the catalyst needed to truly and finally unite virtual and physical environments.
Sure, the world is going virtual in a strong way, but this is not something that is going to happen overnight. Although many early adopters have moved forward to become nearly 100% virtualized, most administrators are still governing environments comprising both physical and virtual server assets. As such, administrators need a solution that is not only purpose-built to work for their entire environment but one that takes advantage of the virtual infrastructure specifically, allowing them to further leverage their IT investments.
Backup Exec 2012 is just the solution to deliver the proverbial goods. Not only does Backup Exec 2012 offer a fresh new user experience, it also leverages a company’s existing virtual infrastructure for capabilities including instant recovery of any protected physical or virtual server, and it can leverage the cloud to recover, test or migrate any VMware virtual machine in your environment. And for good measure: imagine that when it is time to migrate a physical server to a new virtual body, you simply power on the virtual copy that was created and maintained as part of that server’s standard backup with Backup Exec 2012. Well, it’s time to stop imagining. The time has come to fully realize and embrace the power and flexibility of your virtual environment, and Backup Exec 2012 is here to help you do just that.
So again I ask … have you hugged your VM lately? If not, you should – it can help you in more ways than you know!
Interested yet? If so, please take a few moments to visit us virtually on the web, or in person at Symantec VISION 2012 in May, to learn more about how Backup Exec 2012 can help you truly embrace your virtual infrastructure.
In the meantime, check out this short video on leveraging “No Hardware” recovery with Backup Exec 2012:
Planning on attending Symantec VISION US this year? Come learn more in our BE 2012 session!
Discover the power of your virtual environments with Backup Exec 2012
Session IM B15 @ Symantec Vision 2012
MGM Grand – Las Vegas – May 7-10
Come join us at VISION US and learn more about:
- How to embrace and leverage your virtual infrastructure for near-line recovery, DR and sandbox testing
- Migrating to virtual? Learn how to simply use your existing and future backups to make the process painless
- Need offsite DR for VMs? Come see how we are leveraging the cloud for DR and much more!
At Lotus F1 Team, information is the true currency of business, but the by-product, data, is gold bullion
Many say that a company’s most valuable asset is its human capital. But a strong workforce without ideas is not really strong, or much of a force. So we would say human capital is nothing without information, or, to put it another way, knowledge. And it’s generally accepted that knowledge is power.
Power is the currency traded at Lotus F1 Team, a company that Symantec is proud to have worked in partnership with for over 15 years, a partnership set to continue into the foreseeable future.
When we say power, we’re not just referring to the engines in its cars, though for sure they are some of the most powerful in the world. Before an engine makes it onto the track, hundreds of engineers and scientists pour thousands of hours into engineering and testing designs before completing a final blueprint. The blueprint is the real prize. It forms the basis of the most valuable asset a business can possess – intellectual property. This is the stuff that sets your business apart from competitors and can be sold or licensed, providing an important revenue stream.
But the kind of innovation taking place on a daily basis at Lotus F1 Team produces a by-product: data – reams of it. Anything not directly related to racing and winning is going to be a distraction, but get sloppy with how you keep and manage data and you may as well hand over the blueprints to your competitor in person. Lotus F1 Team has, for example, developed wind tunnel technology and Computational Fluid Dynamics (CFD) techniques used in the development and measurement of aerodynamic forces. This is an area of research important not only to car racing but also to the aerospace, road and wind turbine industries, and one pioneered and perfected by F1 over the years.
As well as building and driving fast cars, there’s also a commercial and business side, all of which produces data and business processing that needs to be managed effectively, efficiently and securely. Ultimately, the organisation needs to know its power (information, knowledge, deals and ideas) is safe and handled with the highest integrity.
This is why we work with companies like Lotus F1 Team – because we are in the business of protecting and managing data at every level so that innovators can focus on the business they are in.
Have you ever seen pictures of the control room in a power generation plant? It’s an entire wall of dials, knobs, and gauges, all telling you important bits of information about the system. That’s great – and probably necessary! – if your only job were to manage that power plant. But as an IT professional, you have lots of daily jobs; some of you manage Exchange, SQL Server, firewalls, security, storage, servers, you name it. Backup and recovery might be only one of the many jobs you do on a daily basis. And here at Symantec, we don’t want your backup application to be like that power plant – we want you to be able to sit down, do what you need to do, and get on with your day.
Backup Exec has an entirely new user interface. You’re going to like it – it’s simpler, more intuitive, and much easier to navigate. We’ve also taken a lot of time to keep all those great features you are used to from previous versions of Backup Exec – so this interface is going to appeal to the new Backup Exec user and the seasoned professional alike. At-a-glance status is easily available in Backup Exec 2012, both for the servers you are protecting and for the storage you are using to store your backups. The latest version of Backup Exec is simply a cleaner, more intuitive way to manage your backup and recovery environment.
We’ve also included a new way to create and manage backup jobs and policies. No longer do you need an advanced degree in Data Protection to set up disk-to-disk-to-tape backups or replicate data between sites – the new Backup Stages feature shows you, in graphical detail, how your backup data will be transferred, when it will be backed up, and where it’s going to be transferred to.
Speaking of backups, how many of you just want to create backups that protect your critical servers and applications without any headaches? How many of you would rather not pore over every detail of application backups? Well, if that sounds like you, Backup Exec has made it much easier to set up backup jobs, because we have included some seriously intelligent defaults – based on our own expertise in data protection and on the most successful backup configurations from our customers and partners – and built them into Backup Exec. If you want to get into the nuts and bolts of backup job creation, however, Backup Exec has all the same great features and customizability you have used before – so you have the right tools to get the job done.
With Backup Exec, we’ve also stepped up our “telemetry” program – gathering non-personally identifiable information from our customers and partners who choose to participate in the program. This gives us invaluable intelligence about how backups and restores are working in the field, and we have used that information extensively to make Backup Exec the easiest to use, full-featured data protection application for physical or virtual environments on the market.
Be one of the first to find out what other groundbreaking backup and recovery features are coming soon in Backup Exec.
Visit the Countdown to Better Backup web site here: http://bit.ly/yenx3z
By Aidan Finley … Symantec’s Aidan Finley talks about simplifying intelligent backup: http://bit.ly/vHJXoa #BetterBackup [Video]