At 07:43 PM, you wrote:
> I have a folder on my email server that I use Backup Exec on,
> "D:\Program Files\VERITAS\Backup Exec\NT\Catalogs", and it is
> currently over 10 GB. It has files such as _3.fh.FIL.tmp in it. I am
> thinking the previous admin on this server redirected these files to
> the D: drive because it has more space available. I was wondering if
> it's needed and what its purpose is.

Based on the file extensions that you mention, you are using Backup Exec 10.x. The .tmp files are created while jobs are running, but they should auto-purge. As is the case across the board, I would back up ALL of the files in the directory first; after that, you can delete the .tmp files, as long as they are not part of a job that is currently running. You may need to stop the services before you are able to delete them. Remember too that the date you need to look at is the Modified date, not the Created date. If you delete the .img files, you will not have the data you need to perform a restore. Another option is to run a special backup of the catalog files, then delete the older files. Keep that special backup handy in case you need to restore from them. If you are willing to take the chance of not needing the files, you can delete them.

I was hoping to leverage jumbo frames to speed up my IP backups and other large file transfers (our users move a lot of data around from subnet to subnet), but I haven't been able to get this to work in my environment. In general I've found our servers' gigabit performance to be rather lacking: a single session seems to max out around 250-400 Mbit. I know it's not our switch gear (Nortel 5510); these babies have a 72 Gbit fabric, and we have all new wiring. It seems like a Win2K3, TCP/IP stack, or Broadcom teaming issue.

I'm planning on repeating this test using SnapView, only comparing backup to LTO2 vs. backup-to-disk. Unfortunately, using SnapView requires so many carefully coordinated scripts that I've not implemented it in production yet.
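The pruning rule in the reply above (back everything up first, then delete only .tmp files whose Modified date is old and that are not part of a running job) can be sketched as follows. The directory path comes from the thread; the 30-day cutoff and all function names are my own assumptions, not anything Veritas documents:

```python
import time
from pathlib import Path

# Assumptions (not from the thread): a 30-day cutoff and these helper names.
CATALOG_DIR = Path(r"D:\Program Files\VERITAS\Backup Exec\NT\Catalogs")
MAX_AGE_DAYS = 30

def stale_entries(entries, max_age_days, now):
    """Pure decision logic: entries is an iterable of (name, mtime_epoch).
    Only .tmp files count, and only if the *Modified* time (not the
    Created time) is older than the cutoff -- a recently modified file
    may belong to a job that is still running."""
    cutoff = now - max_age_days * 86400
    return [name for name, mtime in entries
            if name.lower().endswith(".tmp") and mtime < cutoff]

def stale_catalog_tmp_files(directory=CATALOG_DIR, max_age_days=MAX_AGE_DAYS):
    """Scan the catalog directory and report (not delete) stale .tmp files.
    Actually deleting them may still require stopping the Backup Exec
    services first, as the reply notes."""
    now = time.time()
    entries = [(p.name, p.stat().st_mtime) for p in directory.glob("*.tmp")]
    return stale_entries(entries, max_age_days, now)
```

Keeping the decision logic separate from the filesystem scan makes the age rule easy to check before pointing it at a live catalog directory.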
To be fair, I have not completed every possible BU2D scenario, but in general I've found the EMC ATA drives to be terribly slow. I've tried everything I can think of: file system block alignment with the RAID5 stripe element, changing the NTFS allocation unit (I typically use 64K on Win2K3), changing the RAID5 element size (usually 128 sectors = 64K). In general I'm getting about the following (both are to LTO2 on my PV136T w/BE 9.1 SP1 w/Dell device drivers):

IP-based backup to tape (dual gigabit on host and media server): Job rate: 3,616.00 MB/min (byte count divided by elapsed time for the job)

SnapView SAN-based backup of the same data (SnapShot -> SAN-attached media server -> SAN-attached tape): Job rate: 1,877.00 MB/min (byte count divided by elapsed time for the job)

This dataset was EXTREMELY compressible (~30:1), as Oracle tablespaces are typically pretty sparse.
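The "byte count divided by elapsed time" arithmetic behind those job rates can be sketched like this. The function name and the 1 MB = 1024 × 1024 bytes convention are my assumptions; Backup Exec's exact rounding may differ:

```python
def job_rate_mb_per_min(byte_count, elapsed_seconds):
    """Job rate the way the thread computes it: byte count divided by
    elapsed time, expressed in MB/min (assuming 1 MB = 1024 * 1024 bytes)."""
    return byte_count / (1024 * 1024) / (elapsed_seconds / 60.0)

# Illustrative: a 60 GB job that finishes in 17 minutes works out to
# 61,440 MB / 17 min, roughly 3,614 MB/min -- close to the IP-to-tape figure.
rate = job_rate_mb_per_min(60 * 1024**3, 17 * 60)
```

By this measure the SnapView SAN path above runs at roughly half the speed of the IP path (1,877 vs. 3,616 MB/min), despite staying on the SAN end to end.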
I was advised to exclude the Backup Exec folder from being actively scanned on the media server (our backup performance was slow even during media server backups), but that did not appear to solve the problem. We resolved this issue by configuring our McAfee on-access scanner to exclude the Backup Exec-related executables from being scanned (bkupexec.exe, bengine.exe, beserver.exe, etc.), and now our rates when running 2 concurrent jobs on 2 LTO-II drives are around 1,000 MB/min each. This is a preferred method compared to completely shutting down the virus scanner during the backup window.

Glad to hear that Symantec's scanner is not affecting the backup rate, although I wonder whether having a SAN-based backup library vs. a SCSI-based one has an effect. We have not so far really experimented with backup-to-disk on a large scale, but it is surprising that it would lag behind a linear tape drive. Other limiting factors must apply, I guess (network bandwidth, source disk read speed, the backup software itself).
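That "other limiting factors" point — a backup runs no faster than its slowest stage — can be sketched as a minimal bottleneck estimate. The stage names and the example numbers are illustrative assumptions, not measurements from this thread:

```python
def bottleneck(stage_rates_mb_min):
    """Return the (stage, rate) pair for the slowest stage of a backup
    pipeline; end-to-end throughput cannot exceed this rate."""
    stage = min(stage_rates_mb_min, key=stage_rates_mb_min.get)
    return stage, stage_rates_mb_min[stage]

# Illustrative numbers only: gigabit wire speed is about 7,500 MB/min,
# LTO2 native is about 2,100 MB/min, and a slow source disk at
# 900 MB/min would cap the whole job regardless of network or drive.
slowest = bottleneck({"network": 7500, "source_disk": 900, "tape": 2100})
```

If the measured job rate sits well below every plausible stage limit, the remaining suspect is the backup software itself (or, as above, an on-access scanner in the data path).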