Discussion:
HELP - need way to restore only blobs
Warren Little
2004-02-17 19:40:44 UTC
I recently posted a similar message but left some key info out:

I migrated my database from 7.3 to 7.4 this weekend using the pg_dumpall
tool, which I now realize does not capture blobs (large objects).

I now need to move only the blob data to the 7.4 database.
The problem with redoing the dump with pg_dump -b is that the database is now
in production, and overwriting the changes made since the migration is not
acceptable. The blob data is very static, so the ideal fix would be some way to
copy the physical files from disk and adjust a few records in the system
catalogs so the blob records are located correctly. Another option I was
looking at was to restore the archived database with the blobs intact and then
restore the production version over the top without destroying the blob data.

All suggestions welcome; I'm dying here.
--
Warren Little
Senior Vice President
Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763
Fax: 866-849-8082

Jeff Boes
2004-02-18 12:54:23 UTC
Post by Warren Little
I migrated my database from 7.3 to 7.4 this weekend using the pg_dumpall
tool, which I now realize does not capture blobs (large objects).
I now need to move only the blob data to the 7.4 database.
The problem with redoing the dump with pg_dump -b is that the database is now
in production, and overwriting the changes made since the migration is not
acceptable. The blob data is very static, so the ideal fix would be some way to
copy the physical files from disk and adjust a few records in the system
catalogs so the blob records are located correctly. Another option I was
looking at was to restore the archived database with the blobs intact and then
restore the production version over the top without destroying the blob data.
In a similar situation, we wrote a script that constructs "\lo_export" commands
to dump each large object to a file. The file name for each contains the
information needed to reconstruct the object at the other end. For instance, if
you have a table like this:

CREATE TABLE foobar (primary_id INTEGER PRIMARY KEY, large_obj OID, ...)

you'd want a series of commands that look like this:

\lo_export <large_obj> <primary_id>.dmp
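
Something along these lines could generate and run those commands (only a
sketch, not our actual script; "olddb" is a placeholder for your 7.3 database,
and foobar/primary_id/large_obj are just the example names above):

#!/bin/sh
# Sketch: export every large object referenced by foobar.large_obj from the
# old 7.3 database into <primary_id>.dmp files in the current directory.
# "olddb" and the table/column names are placeholders from the example above.
psql -d olddb -At -c \
  "SELECT primary_id, large_obj FROM foobar WHERE large_obj IS NOT NULL" |
while IFS='|' read id lobj; do
    # one psql call per object keeps the sketch simple; generating a single
    # script with all the \lo_export lines and running it in one session
    # would be faster
    psql -d olddb -c "\lo_export $lobj $id.dmp"
done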

Then, given a directory full of such files, you construct another series of
commands that look like this:

\lo_import <primary_id>.dmp

UPDATE foobar SET large_obj = <new_value> WHERE primary_id = <primary_id>;

The trick is capturing the output of the \lo_import command and parsing it to
get the large object OID after it is created.
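
One way to do that capture-and-update loop, again only as a sketch ("newdb" is
a placeholder for the 7.4 database, and the table/column names are from the
example above):

#!/bin/sh
# Sketch: for each <primary_id>.dmp, import it into the new 7.4 database,
# parse the new large-object OID out of psql's output, and point
# foobar.large_obj at it.
for f in *.dmp; do
    id=${f%.dmp}
    # psql normally reports something like "lo_import 152801" on success,
    # but check the exact output of your psql version before relying on it
    newoid=$(psql -d newdb -c "\lo_import $f" | awk '/lo_import/ {print $2}')
    if [ -n "$newoid" ]; then
        psql -d newdb -c \
          "UPDATE foobar SET large_obj = $newoid WHERE primary_id = $id"
    else
        echo "import of $f failed, skipping $id" >&2
    fi
done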

I don't know yet whether I have permission to post or email the script, but if
you contact me offline I should know by then.

jboes at n/e/x/c/e/r/p/t/d/o/t/c/o/m

--
~~~~~~~~~~~~~~~~| Genius may have its limitations, but stupidity is not
Jeff Boes | thus handicapped.
***@qtm.net | --Elbert Hubbard (1856-1915), American author
Warren Little
2004-02-18 16:57:29 UTC
Thanks to all who responded.
I found the pg_dumplo tool in contrib, which did exactly what I needed.
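
For the archives, the invocation was roughly as follows (quoting from memory
of the contrib/pg_dumplo README, so double-check the options against the
README in your own source tree before relying on them):

pg_dumplo -a -d olddb -s /tmp/lo_backup    # dump all large objects, plus an index file
pg_dumplo -i -d newdb -s /tmp/lo_backup    # re-import them and fix up the referencing columns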
--
Warren Little
Senior Vice President
Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763
Fax: 866-849-8082
