So, a little back-story: this is a VMware-virtualized Server 2008 R2 machine. I have been using CBB for just over a year and never had any issues until the last couple of months. I have 2 jobs set up: 1 file-level and 1 image-based.
For the last couple of months, the jobs have not been finishing. I can start them, and it looks like they complete: the status says "190 of 190 files" and the overlay at the bottom says "STOPPING", but it never actually stops.
I have let it run just to see if it would finish; after 10 days, no dice.
I have removed all VSS snapshots, cleared the recycle bin, tried changing VSS providers, etc. The jobs always hang at the end.
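For anyone hitting the same thing, here is a rough sketch of the VSS sanity checks worth running first. The vssadmin subcommands are the standard built-in Windows tooling; wrapping them in Python and running from an elevated prompt on the guest are just assumptions for illustration:

```python
# Sketch only: shells out to the built-in vssadmin tool to confirm the VSS side
# looks clean; assumes an elevated prompt on the Server 2008 R2 guest.
import subprocess

def vss(args: str) -> str:
    """Run a vssadmin subcommand and return its text output."""
    result = subprocess.run("vssadmin " + args, capture_output=True, text=True, shell=True)
    return result.stdout

print(vss("list shadows"))    # any leftover shadow copies from earlier jobs?
print(vss("list writers"))    # writers stuck in a failed/waiting state are worth ruling out
print(vss("list providers"))  # which VSS providers are registered (system vs. third-party)
```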
What would you check if this were happening to you? I can't think of anything else to try to make these jobs finish on their own. I am going to do a reboot and will schedule a full re-install during the next available maintenance window, but I'd welcome any other helpful ideas to try in the meantime.
Sorry for your experience. You definitely shouldn't have waited for 10 days; please reach out sooner next time.
This requires a thorough investigation. Please go to Tools -> Diagnostic, add a comment with the link to this thread, and click "Send to Support".
Let's work this issue out, and I'll update the thread in case somebody else faces the same problem.
Yeah, it wasn't ideal, but I wanted to let CBB do its thing. It has always finished in the past, so I thought more time might help. I have had a couple of other backup solutions in place since the failure (not a failure, I suppose, just a lack of completion) started happening.
I have added the link to this thread as requested. I have also updated the instance; it had been running the previous build.
I'm going to update this again after the next backup job runs, but the last job actually did finish. It ran from when I made the post until last night, 7 days total, and it finally finished.
I will see whether the jobs keep finishing and update accordingly. I still think 7 days is far too long to wait for a full backup.
So, assuming full 75 Mbps bandwidth, a full backup (non-synthetic) would take a little more than a day. If the server is sharing bandwidth with other computers, the effective throughput drops and the backup time grows proportionally. That's also without compression, so we can adjust the numbers a bit based on the compression percentage. For a synthetic full, however, the number of bits moved is generally much lower, so total backup time should be shorter even counting the time the synthetic full runs on the cloud storage. My recommendation is to continue with Support and have them analyze the log data to see exactly where the bottleneck is. I cannot stress enough how important this step is to get to the bottom of the issue. Have you opened a support case? If not, please do so as soon as you can.
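To make the arithmetic explicit, here is a rough sketch of the transfer-time estimate. The 75 Mbps link speed comes from the discussion above; the data sizes are purely illustrative assumptions, and compression, shared bandwidth, and synthetic fulls all shift the real number:

```python
# Back-of-the-envelope transfer time; ignores compression, protocol overhead,
# and bandwidth sharing. The data sizes used below are illustrative assumptions.
def full_backup_hours(size_gb: float, link_mbps: float) -> float:
    """Hours to move size_gb over a dedicated link_mbps connection."""
    size_megabits = size_gb * 1000 * 8        # GB -> megabits (decimal units)
    return size_megabits / link_mbps / 3600   # seconds -> hours

print(full_backup_hours(800, 75))  # ~23.7 h: on the order of a day at a dedicated 75 Mbps
print(full_backup_hours(400, 75))  # ~11.9 h: half the data, half the time
```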
Yes, I have done the math as well and came to roughly the same conclusion, so that makes me feel good. I scheduled the job to run as soon as everyone leaves for the day, so it literally starts at about 5:15 PM. Funny thing, it has not hung since last week. The last successful notification I have is from 5/5/19: a good image and a successful purge from the media. Total file size for the image was 73 GB.
Maybe I should open a ticket with them and close it if the issue is moot by the time they get around to me? I don't know, but I would hate to say this is resolved and then hit another hang the next time I try for a successful job.