Christopher Browne
2004-05-31 03:00:17 UTC
Hi
I am running PostgreSQL 7.2.2 on Red Hat 9 and I am suffering a
growing performance problem. The problem shows up as a slowing of
queries and an increase in system CPU usage. Queries that took
less than 6 seconds climb to take more than 5 minutes, and as the
system is driven by Apache through Perl scripts, the web server
times out. Clearly I could raise the Apache timeouts; however, this
would just hide the problem for a little longer, and of course once
the problem starts to happen the system tends to cascade because the
users try again and the orphaned processes continue to use processor
time until they complete.
I use cron to 'VACUUM ANALYZE' the system every night, and this
greatly improved the performance but has not stopped the delay from
growing. The strange thing is that apart from the delay everything
seems fine.
Rebooting does not remove the problem.
Restarting Apache and/or Postgres makes no difference.
Other than the ANALYZE mentioned above, the system has not been tuned in
any way.
The following sequence does, however, cure the problem:
pg_dump database > temp.db
dropdb database
createdb database
psql -e database < temp.db
I have not tried leaving out the dropdb and createdb, but I do not need
to stop Postgres to do this.
Has anybody any idea how to fix my problem? Is it something I have or
have not done? Please do not tell me to upgrade to the latest
version of Postgres unless it is a clearly identifiable Postgres
problem that is at the root of my dilemma.
Well, there certainly have been _major_ improvements in performance
between 7.2 and 7.4, so it is more than plausible that that could have
a significant impact on performance.
It sounds as though the traffic on the system is causing query plans
to get out of whack.
But it is not clear what your problem actually is; you have not
explained the nature of the queries that are performing badly.
I would imagine that it would be beneficial to run VACUUM and/or
ANALYZE somewhat more often.
- Let's suppose you run VACUUM on tables getting lots of UPDATEs and
DELETEs (note, I did _not_ say INSERT; tables that only see INSERTs
essentially NEVER need to be vacuumed) once every 4 hours.
- Tables seeing frequent updates of any sort (INSERT/DELETE/UPDATE)
perhaps should get an ANALYZE every hour (a sample crontab sketch
follows below).
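As a very rough sketch (the table names here are made up, and this
assumes crontab entries for a user that is allowed to connect to the
database, which I'm calling "database" as in your commands above),
the schedule might look like:

# VACUUM the heavily UPDATEd/DELETEd tables every 4 hours
0 */4 * * *   psql -d database -c "VACUUM orders"
0 */4 * * *   psql -d database -c "VACUUM sessions"
# ANALYZE the whole database every hour (or just the busy tables)
30 * * * *    psql -d database -c "ANALYZE"

That is only an illustration; the right tables and frequencies depend
entirely on your update patterns.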
There are significant merits to jumping to a newer version in this
regard:
1. As of 7.3, you can run pg_autovacuum, which will automatically
do VACUUMs and ANALYZEs, which hopefully would prevent things
from going downhill.
2. As of 7.4, there is a new logging parameter that allows logging
just those queries that are running slowly, which should help you
to find just the offending queries (see the postgresql.conf sketch
below).
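For what it is worth, the 7.4 parameter in question is
log_min_duration_statement; a postgresql.conf setting along these
lines would log anything slower than five seconds:

# log the text (and duration) of any statement taking >= 5000 ms
log_min_duration_statement = 5000

That threshold is just an example; pick whatever "slow" means for
your application.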
The sort of information we'd actually need to help fix the problem is
the queries and the output provided by running EXPLAIN ANALYZE on some
of the slow queries.
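For instance, in psql you would run something like the following
against one of the offending queries (the table and condition here
are purely hypothetical):

EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;

and post the complete plan it prints, along with the table
definitions and approximate row counts.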
There are some tables that we have in our applications that we happen
to ANALYZE _and_ VACUUM every couple of minutes, as they are UPDATEd
almost continually. Waiting an hour between VACUUMS would lead to
_really_ bad performance. You might have an instance of a table like
that, but you haven't said anything that would indicate that.
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/sgml.html
Rules of the Evil Overlord #149. "Ropes supporting various fixtures
will not be tied next to open windows or staircases, and chandeliers
will be hung way at the top of the ceiling."
<http://www.eviloverlord.com/>