Iain
2003-12-18 09:23:57 UTC
Hi All,
I had been having some problems since yesterday with dumping/loading data
and even running vacuum analyse. The good news is that I've sorted it out
and things seem to be running smoothly again.
There may be a bug in there somewhere, or at least a degree of fragility
that may deserve some attention. I'm writing this now in the hope of getting
any feedback anyone might have on what happened, and perhaps it will
help someone out in the future.
After a successful stress test of backup and restore 2 days ago, nothing was
working today.
The problem seemed to stem from some (I thought) innocuous changes I made to
the postgresql.conf file.
Reverting to the original file has rectified the problem.
This is typical of the errors I got:
pg_dump: ERROR: canceling query due to user request
pg_dump: SQL command to dump the contents of table "d_hi_mise_item_uri"
failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR: canceling query due to user
request
pg_dump: The command was: COPY public.d_hi_mise_item_uri (ymd_uri, tenpo_cd,
item_id, urisuu, urikin, araridaka, lst_genka, ave_genka, logic_tana_suu,
henpintyuu_suu, tenji_suu, tuika_suu, idou_in_suu, idou_out_suu,
rank_new_suu, rank_old_suu) TO stdout;
The key part is "ERROR: canceling query due to user request". It
occurred during vacuum analyse too.
I set up a new server from scratch and got similar errors. It was significant
that I had copied the conf files from the previous installation.
These are the settings that I changed:
tcpip_socket = true
shared_buffers = 2048
wal_buffers = 16
checkpoint_segments = 8
effective_cache_size = 50000
random_page_cost = 2
statement_timeout = 30000
I thought these were pretty conservative settings, even for my system,
but maybe not.
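One thing I'll check first (just a guess on my part): statement_timeout is
in milliseconds, so 30000 is only 30 seconds, and any statement that runs
longer than that gets cancelled. A long COPY during pg_dump, or a vacuum
analyse on a big table, could easily exceed that. Assuming the timeout is
what's doing it, a quick test from psql would be something like:

    SHOW statement_timeout;        -- confirm the 30000 ms value is in effect
    SET statement_timeout = 0;     -- 0 disables the timeout for this session
    VACUUM ANALYSE;                -- retry the command that was being cancelled

If that runs cleanly, the per-statement timeout in postgresql.conf is the
likely culprit and the other settings are probably innocent.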
The old server is version 7.4 running on Turbolinux 7 (kernel 2.4.18-15,
libc 2.2.4-13) under Virtual PC under Windows 2000. The Linux kernel
settings are default, and the PC has 1GB of RAM, about half of which
should be available to the Linux system.
I'm going to experiment with the conf settings one at a time from here and
see what happens. I'm also hoping to remove the need for Virtual PC, as it
seems to complicate configuration. Unfortunately the development server
doesn't resemble the intended production configuration in the least, so it
seems somewhat pointless anyway.
regards
Iain