Discussion:
Connection pooling/sharing software help
Kris Kiger
2004-07-15 15:31:50 UTC
Permalink
Here is the scenario I am running into:
I have a cluster of databases. I also have another cluster of
machines that will have to talk with some of the database machines.
Rather than set up a connection to each database from each of the
'talking' machines, is it possible to share a single connection to a
database between machines? Does such connection pooling/sharing
software exist? Thanks in advance for the advice!

Kris



---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ***@postgresql.org so that your
message can get through to the mailing list cleanly
Kris Kiger
2004-07-15 19:35:11 UTC
Permalink
This looks exactly like what we want. Currently we are using JDBC to
maintain persistent connections. The advantage of this is that you can
use the JDBC driver written specifically for your version of Postgres
(a Type 4 driver, i.e. pure Java speaking the native wire protocol). Do
you have any idea how close SQL Relay comes to that, or whether there is
other pooling software that fits the description but plugs into JDBC somehow?
I appreciate your help!
I appreciate your help!

Kris
Post by Kris Kiger
I have a cluster of databases. I also have another cluster of
machines that will have to talk with some of the database machines.
Rather than set up a connection to each database from each of the
'talking' machines, is it possible to share a single connection to a
database between machines? Does such connection pooling/sharing
software exist? Thanks in advance for the advice!
Yep, it's called SQL Relay and can be found at
http://sqlrelay.sourceforge.net/
I find it especially handy with Oracle, as OCI connections incur
pretty big overhead.
-- Mitch
Kris Kiger
2004-07-19 19:28:48 UTC
Permalink
I've got a database consisting of a single table with 5 integers, a
timestamp with time zone, and a boolean. The table is 170 million rows long.
The tar'd dump file produced by:
pg_dump -U postgres -Ft test > test_backup.tar
contains 8.dat (approximately 8 GB), a TOC, and restore.sql.

No errors are reported on dump, however, when a restore is attempted I get:

ERROR: unexpected message type 0x58 during COPY from stdin
CONTEXT: COPY test_table, line 86077128: ""
ERROR: could not send data to client: Broken pipe
CONTEXT: COPY test_table, line 86077128: ""

I am doing the dump & restore on the same machine.

Any ideas? If the file is too large, is there any way Postgres could
break it up into smaller chunks for the tar when backing up? Thanks for
the help!

Kris




Scott Marlowe
2004-07-19 20:07:54 UTC
Permalink
Post by Kris Kiger
I've got a database that is a single table with 5 integers, a timestamp
with time zone, and a boolean. The table is 170 million rows in length.
pg_dump -U postgres -Ft test > test_backup.tar
is: 8.dat (approximately 8GB), a toc, and restore.sql.
ERROR: unexpected message type 0x58 during COPY from stdin
CONTEXT: COPY test_table, line 86077128: ""
ERROR: could not send data to client: Broken pipe
CONTEXT: COPY test_table, line 86077128: ""
I am doing the dump & restore on the same machine.
Any ideas? If the file is too large, is there anyway postgres could
break it up into smaller chunks for the tar when backing up? Thanks for
the help!
How, exactly, are you restoring? Doing things like:

cat file | pg_restore ...

can cause problems because cat is often limited to 2 gigs on many OSes.
Just use a redirect:

psql dbname <file
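Scott's advice can be sketched as follows (database and file names here are hypothetical, matching the earlier messages; these commands need a running PostgreSQL server). The point is that pg_restore and psql should open the file themselves, so no 2 GB-limited cat pipe sits in the middle:

```shell
# Tar-format dump: let pg_restore read the archive directly and
# connect to the target database with -d, instead of piping via cat.
pg_restore -U postgres -d test test_backup.tar

# Plain-SQL dump: use a shell redirect rather than cat.
psql -U postgres test < test_backup.sql
```

The shell redirect and the direct file argument both let the tool seek in the file with normal 64-bit file I/O, sidestepping the pipe entirely.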



Kris Kiger
2004-07-20 15:09:19 UTC
Permalink
Thanks Tom, -Fc worked great! I appreciate your help.

Kris
Post by Kris Kiger
I've got a database that is a single table with 5 integers, a timestamp
with time zone, and a boolean. The table is 170 million rows in length.
pg_dump -U postgres -Ft test > test_backup.tar
is: 8.dat (approximately 8GB), a toc, and restore.sql.
Try -Fc instead. I have some recollection that tar format has a
hard-wired limit on the size of individual members.
regards, tom lane
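Tom's fix, spelled out with the hypothetical names from this thread: pg_dump's custom format (-Fc) is a single compressed archive with no per-member size limit, unlike tar format, and pg_restore reads it directly:

```shell
# Dump in custom format; -Fc avoids tar's hard-wired member-size limit
pg_dump -U postgres -Fc test > test_backup.dump

# Restore by pointing pg_restore at the archive and the target database
pg_restore -U postgres -d test test_backup.dump
```

Custom format also allows selective and reordered restores via pg_restore's table-of-contents options, which tar format only partially supports.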
