[Box Backup] Restore & Compare Issue ** UPDATE **

Matt Brown boxbackup@fluffy.co.uk
Tue, 24 Apr 2007 17:45:30 +0100


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

>
>> Well, the file in question is a 20GB SQL backup produced via NT
>> Backup on a Windows 2003 Server and then rsync'd (via Cygwin) to
>> a 2.5TB central backup store (a Linux box with a large storage
>> array). This data store then needs an offsite backup, so I am
>> running the bbackup client to send it to the remote host. As I am
>> trying to keep the transfer small(ish), I gzip it first, otherwise
>> we would be sending a 20GB file each night (and growing). 3.5GB a
>> night is not too much of a problem for us, as the pipe carrying
>> the data is big enough and quiet at night; my main concern is
>> that the gzip may get corrupted when sending or restoring, as I
>> do not like depending on compressed data where I can help it.
>
> If I understand correctly, you are doing SQL Server -> offsite
> store running bbackupd -> bbstored server?
>
> I would send the file from the SQL server to the offsite store  
> uncompressed using rsync. That will use less than 3.5 GB of  
> bandwidth and take less time too.
>
> Alternatively, gunzip it on the offsite store before running  
> bbackupd on it.
>
> The problem is not just the 3.5 GB of bandwidth per day, but the
> fact that every update stored on the server will be 3.5 GB as
> well, since the gzipped file changes completely each night. You
> will use a lot of space very quickly on the server if you do this.
>
>> I have tried changing the LCD to other paths, i.e. /data, /home
>> and /tmp, and still hit the same issue. I tried /tmp to make sure
>> this was not a directory perms issue.
>>
>> The only thing I can conclude so far is that it would be something  
>> to do with the size of the file :(
>
> Please try the dd as well to make sure that it's not a local  
> filesystem issue.

Hi Chris,

Well, I ran the command and this is what was returned:

root@io:/tmp# dd if=/dev/zero of=/tmp/bigfile bs=1M count=4k
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 60.7408 seconds, 70.7 MB/s
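
For reference, the failing check itself is just bbackupquery's
built-in compare, run along these lines (the exact invocation here
is from memory rather than copied from my shell):

root@io:/tmp# bbackupquery "compare -a" quit

so I guess the next step is to see whether a backup and compare
cycle on this 4GB test file trips over in the same way.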

In relation to the rsync of the actual file(s), the only way we are
able to do that is if the server running SQL Server 2005 (I am no
Windows expert, this is second hand) is told to disconnect the
database so we can rsync the actual SQL DB file(s), and then
reconnect it again afterwards. However, because this is in a
cluster environment, I am told the cluster would think SQL Server
had failed or stopped and would fail over to the other SQL Server
:( Therefore an automated backup is performed each night using SQL
Server or NT Backup to create this one large file; once it has been
created we simply use rsync to shift it over to the storage array.
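
For completeness, the nightly transfer itself is nothing exotic -
something along these lines from the Cygwin side, with the hostname
and paths invented here for illustration:

rsync -av --partial --inplace /cygdrive/e/backups/sqlfull.bkf backup@io:/data/sql/

(--inplace updates the existing copy directly, so rsync's delta
algorithm only sends the changed blocks of the large file instead
of rebuilding a whole 20GB temporary copy at the far end.)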

In addition, I was under the assumption that BB will use the
soft/hard space limits available on the server, and once usage is
close to full, housekeeping would free up space. Currently we are
only keeping about 7 days of backups of this file offsite; we have
3 months + in house if we needed to roll back, plus transaction
logs.
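
As I understand it, the per-account limits are set on the store
with bbstoreaccounts, along these lines (account number and sizes
invented for the example):

bbstoreaccounts setlimit 00000001 80G 100G

where housekeeping starts removing old file versions once usage
passes the soft limit (80G here), and the hard limit (100G) is
never exceeded.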

However, as always, I am open to suggestions and ways around these
kinds of issues, as there is usually more than one way to skin a
cat :-)

Kind Regards

Matt Brown

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.5 (Darwin)

iD8DBQFGLjQ56vWewLkSmagRAtDEAJ4iHcT9XgzwW06a7n2BhijaRGUNlwCfYGqN
jMvj3cH9DutNo54csaArWng=
=g8nz
-----END PGP SIGNATURE-----