Not only am I reading each file for the archive bit, but I'm probably doing it across multiple clients to improve throughput and shrink my backup window.
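To make that per-file scan cost concrete, here is a minimal sketch of what a traditional archive-bit walk looks like. It assumes a Windows client and Python 3.5+ (where os.stat exposes st_file_attributes); the root path is purely illustrative, and this is not NetBackup's actual scanner.

```python
import os
import stat

def files_with_archive_bit(root):
    """Walk a directory tree and yield files whose Windows archive
    bit is set, i.e. candidates for a traditional incremental.
    Every single file must be stat()ed, which is exactly the
    per-file read cost the comment above describes."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                attrs = os.stat(path).st_file_attributes  # Windows-only field
            except OSError:
                continue  # unreadable entry; skip it
            if attrs & stat.FILE_ATTRIBUTE_ARCHIVE:
                yield path

# Hypothetical usage: count incremental candidates under D:\data.
if __name__ == "__main__":
    print(sum(1 for _ in files_with_archive_bit(r"D:\data")))
```

Multiply that walk by every client in the backup window and the I/O adds up quickly, which is the overhead Accelerator is meant to avoid.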
Sorry I can't help further on that matter. NetBackup Accelerator is designed to work on any file system.
How do you solve extremely high random reads when backing up client-mounted shares with NBU Accelerator? Interesting, but what I would like to know is whether my deduplicating storage units are supported.
That is a true off-host backup solution. When I try to run an Accelerator backup, the job fails immediately with error 154, "storage unit characteristics mismatched to request".
If you have follow-up questions or feedback, post them as comments. For our competitors, turning on source-side deduplication would imply resource consumption on the production client system to support dedupe fingerprinting. If so, contact your sales rep or customer care center to issue new license keys for use with NetBackup 7. I am listing them for the sake of awareness.
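To see why fingerprinting consumes client resources, here is a minimal sketch of the kind of per-segment hashing a source-side dedupe client performs. This is not any vendor's actual algorithm: the fixed 128 KB segment size and SHA-256 are assumptions for illustration only.

```python
import hashlib

SEGMENT_SIZE = 128 * 1024  # assumed segment size; real products vary

def fingerprints(path, segment_size=SEGMENT_SIZE):
    """Read a file in fixed-size segments and compute one SHA-256
    fingerprint per segment. Each segment read and hashed burns
    production-client CPU and I/O, which is the resource cost
    being discussed above."""
    with open(path, "rb") as f:
        while True:
            segment = f.read(segment_size)
            if not segment:
                break
            yield hashlib.sha256(segment).hexdigest()

# Only segments whose fingerprint the dedupe server has not seen
# before would then need to travel over the wire.
```

The bandwidth savings come precisely from this hashing step, so the CPU cost and the network savings are two sides of the same trade-off.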
Stay tuned, we've got you covered! In any case, I shall get back to you if our developers have a different opinion. See the "cost of doing backups" section in the FAQ; it is recommended reading.
And one more: I understand that the track log's size depends on file count and file sizes, but is there some type of scale for estimating the Accelerator track log? To detect changed files, Accelerator compares information from the track log against information from the file system for each file.
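Here is a minimal sketch of that comparison. The track-log format shown (a dict of path to (size, mtime)) is a made-up stand-in; NetBackup's real track log is proprietary and stores more than this.

```python
import os

def changed_files(root, track_log):
    """Compare current file-system metadata against a previous
    track log and yield files that appear new or modified.
    track_log maps path -> (size, mtime) from the prior backup."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # vanished or unreadable; skip it
            if track_log.get(path) != (st.st_size, st.st_mtime):
                yield path  # changed since the last backup
```

Under this toy model each file contributes one small fixed-size record, so the log would grow roughly linearly with file count; the real track log's scaling is a question for the vendor.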
I won't be able to reveal the actual IP behind these processes. The media server is not involved in this data path.