You should be able to configure a Secondary Target on the original File Backup job. You can configure more than one secondary repository, and you can configure different retention policies for each secondary copy.
Yes! I found this. But I need something like ‘GFS’.
The idea is to run this job once a month/year with full backups.
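To make it concrete, here's roughly the keep/prune behavior I'm after -- a minimal illustrative sketch in Python, not anything Veeam actually exposes, and the retention counts are made up:

```python
# Illustrative only: the GFS-style selection I'd like the job to do.
# Keeps the most recent N monthly and M yearly full backups.
from datetime import date

def gfs_keep(restore_points, monthly_keep=12, yearly_keep=3):
    """Return the set of full-backup dates a simple GFS policy would retain."""
    monthlies, yearlies = {}, {}
    for rp in sorted(restore_points):
        monthlies[(rp.year, rp.month)] = rp  # last full seen in each month
        yearlies[rp.year] = rp               # last full seen in each year
    keep = set(sorted(monthlies.values())[-monthly_keep:])
    keep |= set(sorted(yearlies.values())[-yearly_keep:])
    return keep

# Example: 18 months of monthly fulls -> last 12 monthlies plus yearlies survive
points = [date(2023, m, 28) for m in range(1, 13)] + \
         [date(2024, m, 28) for m in range(1, 7)]
for rp in sorted(gfs_keep(points)):
    print(rp)
```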
If the secondary target does not work, then the only other way would be two more jobs scheduled to run monthly and yearly.
You’re talking about backup jobs, Chris? Not a backup copy job, right?
I tried this way too. It works, but it consumes the license twice.
Yes, a normal file backup job. I just checked my lab, as I have some file copy jobs, and the secondary repo will not do GFS, so the only way around this I see is two more jobs scheduled when you need them. Maybe Support can help?
Yeah, there’s already a ticket open with Support.
If you have tape in your environment (Or want to consider using a virtual tape library), you could configure a Backup to Tape job, and schedule it to run monthly or yearly. It looks like you can configure it to create virtual full backups on specific days.
Proper way to do this right now is with Archive Copies for the Unstructured Data Backup. You can get pretty granular in what gets archived and for how long.
I’m actually not clear on whether your source job is an Unstructured Data job (NAS Backup) or a backup of the file server itself, but since you can’t select it for a regular backup copy job, I’m guessing it is indeed an Unstructured Data Backup. This matters because Unstructured Data has a unique backup format -- it’s not a single monolithic file, it’s essentially a NoSQL database. Similarly, there’s no such thing as multiple fulls for Unstructured Data Backups; there’s just the one large, growing backup.
Aside from @Tommy O'Shea’s option with tape (which isn’t an exact match, but still a good one), a few things you can do now:
- Periodic new backup chains for the source job -- not a great solution IMO, as it means a lot of space used, but you’d need that space with GFS copies anyway. Keep in mind that background retention does not apply to Unstructured Data Backups, which means manually deleting old chains when you’re done.
- Archive Copies as mentioned above -- while you won’t have the explicit GFS markings, you will be able to store the archive copies alongside the primary copy, and the archive copy will have its own custom retention.
- Periodically Copy Backups (not backup copy!) -- effectively the same as the first option, but avoids putting workload on the production file server. Same warning about retention applies (see the cleanup sketch at the end of this post).
- Protect the production file server itself with an image/volume-level backup and utilize GFS like you normally would. There are some potential tradeoffs with this strategy, but it could still achieve GFS easily and allow 3-2-1 with more flexibility.
Lots of options here, none of which will say “GFS” on them anywhere, but you can meet 3-2-1 with the options recommended in this topic.
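And since retention is the manual part in the first and third options, here’s a rough sketch of what that cleanup could look like -- the paths and retention window are hypothetical, and this is not a Veeam-supported tool, just the shape of the idea:

```python
# Sketch of "periodic copy + manual retention": copy the backup folder to a
# dated archive directory, then prune copies older than the retention window.
# SOURCE, ARCHIVE, and KEEP_FOR are hypothetical -- adapt to your repo layout.
import shutil
from datetime import datetime, timedelta
from pathlib import Path

SOURCE = Path(r"D:\Backups\FileShareJob")   # hypothetical repository folder
ARCHIVE = Path(r"E:\Archive\FileShareJob")  # hypothetical archive root
KEEP_FOR = timedelta(days=365)              # hypothetical yearly retention

def archive_copy():
    """Copy the current backup folder to a dated archive directory."""
    stamp = datetime.now().strftime("%Y-%m-%d")
    shutil.copytree(SOURCE, ARCHIVE / stamp)

def prune():
    """Background retention does not apply here, so delete old copies ourselves."""
    cutoff = datetime.now() - KEEP_FOR
    for d in ARCHIVE.iterdir():
        try:
            if datetime.strptime(d.name, "%Y-%m-%d") < cutoff:
                shutil.rmtree(d)
        except ValueError:
            pass  # skip anything that isn't a dated copy

if __name__ == "__main__":
    archive_copy()
    prune()
```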