Discussion: pg_hba.conf and groups
b***@biology.nmsu.edu
2004-05-18 20:26:32 UTC
I am trying to allow individuals within a PostgreSQL (v7.4) group to
connect to the backend. The relevant line in pg_hba.conf is (I
believe):

host <db> +<group> <IP address>/32 md5

Note that I have replaced the real values with <...>, but they are
consistent across everything that follows. My understanding is that the
+ prefix grants access to all members of the named group.
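
For concreteness, a filled-in line of that form (the database, group,
and address here are made up, not my real values) would look like:

host    projectdb    +labusers    192.168.1.42/32    md5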

The contents of the system catalogs include the following:

# select * from pg_catalog.pg_shadow;
usename | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
-------------+----------+-------------+----------+-----------+--------+----------+-----------
<name> | 105 | f | f | f | md5... | |

# select * from pg_catalog.pg_group;
groname | grosysid | grolist
---------+----------+-----------------------
<group> | 100 | {100,101,102,104,105}

Nevertheless, the following command fails:

psql --host <dbhost> --dbname=<db> --username=<name>
psql: FATAL: no pg_hba.conf entry for host "<IP address>", user "<name>", database "<db>", SSL off

Have I set this up incorrectly? Is there any way to query the backend
in order to identify who it thinks has access to what?
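
Something like the following is the only query I could think to try
(untested; I believe 7.4 allows = ANY over an array). It would at least
list who pg_group thinks belongs to the group, though it says nothing
about pg_hba.conf itself:

SELECT s.usename
  FROM pg_catalog.pg_group g, pg_catalog.pg_shadow s
 WHERE g.groname = '<group>'
   AND s.usesysid = ANY (g.grolist);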

Thanks for your help.

Cheers,
Brook

Steve Lane
2004-05-18 23:42:10 UTC
Hello all:


I have a database that is exhibiting sluggishness under load. Suspecting
that some queries may be poorly optimized, I turned on a fair amount of
debugging output in the logs. But I could use some help interpreting it.

For the record, this is Postgres 7.2.1. I've already been rightly chastised
for using such an old version, and the upgrade is scheduled.

Here's some sample output:

2004-05-18 15:46:04 [27129] DEBUG: connection: host=127.0.0.1
user=postgres database=iep_db
2004-05-18 15:46:04 [26914] DEBUG: query: SELECT CAST(name_first || ' '
|| name_last AS TEXT) FROM iep_personnel WHERE id_personnel = $1
2004-05-18 15:46:08 [27015] DEBUG: query: select * from iep_student where
id_student = '1002863';
2004-05-18 15:46:08 [26914] DEBUG: query: SELECT $1
2004-05-18 15:46:08 [26914] DEBUG: query: SELECT CAST(name_county AS
TEXT) FROM iep_county WHERE id_county = $1
2004-05-18 15:46:05 [26892] DEBUG: QUERY STATISTICS
! system usage stats:
! 106.317726 elapsed 21.080000 user 0.240000 system sec
! [21.110000 user 0.250000 sys total]
! 0/0 [0/0] filesystem blocks in/out
! 7959/191 [8515/747] page faults/reclaims, 0 [0] swaps
! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
! 0/0 [0/0] voluntary/involuntary context switches
! postgres usage stats:
!       Shared blocks:          0 read,          0 written, buffer hit rate = 100.00%
!       Local  blocks:          0 read,          0 written, buffer hit rate = 0.00%
!       Direct blocks:          0 read,          0 written

Some questions:

1) The statistics appear to be reporting on a series of queries. These
queries are all from inside a function, hence a single transaction. Are the
query statistics being reported here on a per-transaction basis?

Two numbers leap out: 106 seconds for the query, and a high number on the
page faults line.

2) 106 seconds for the operation is too horrible for words. All of the
operations are simple WHERE clauses on single, indexed id columns. The
server load is heavy, but I can't attribute slowness like this to badly
written queries. Does anyone see anything suspect in the queries
themselves?

3) I don't know how to read the page faults line. Is this referring to
system virtual memory, or postgres buffer management? What does this line
mean, and what parameter does it imply needs tuning?

I have RAM to spare on this box, as follows:

[***@iep log]# cat /proc/meminfo
        total:       used:       free:     shared:   buffers:     cached:
Mem:  6161747968  3970502656  2191245312          0   13320192  3259764736
Swap: 2089209856    47116288  2042093568
MemTotal: 6017332 kB
MemFree: 2139888 kB
MemShared: 0 kB
Buffers: 13008 kB
Cached: 3138220 kB
SwapCached: 45144 kB
Active: 1362996 kB
Inactive: 2210704 kB
HighTotal: 5177216 kB
HighFree: 1643716 kB
LowTotal: 840116 kB
LowFree: 496172 kB
SwapTotal: 2040244 kB
SwapFree: 1994232 kB

Some key postgresql.conf parameters:

max_connections = 150
shared_buffers = 175000 # 2*max_connections, min 16
sort_mem = 32000 # min 32
effective_cache_size = 100000 # default in 8k pages
log_connections = true
log_timestamp = true
log_pid = true
debug_print_query = true
show_query_stats = true
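
For reference, if I have the units right (shared_buffers and
effective_cache_size are counted in 8 kB pages, sort_mem in kB), those
settings work out to roughly:

shared_buffers        175000 x 8 kB  ~= 1.4 GB of shared memory
sort_mem              32000 kB       ~= 31 MB per sort, per backend
effective_cache_size  100000 x 8 kB  ~= 780 MB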

Sorry for the big dump. I'm feeling a bit at sea in this information and I
need to know if I'm swimming in the right direction. Thanks for any help.

-- sgl



Tom Lane
2004-05-19 01:06:43 UTC
Post by Steve Lane
I have a database that is exhibiting sluggishness under load. Suspecting
that some queries may be poorly optimized, I turned on a fair amount of
debugging output in the logs. But I could use some help interpreting it.
I think you're going at this all wrong. EXPLAIN ANALYZE should be the
first tool you turn to, not low-level stats. Browsing the archives of
the pgsql-performance mailing list may help you get started.
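
For instance, take one of the queries from your own log and run it
through the planner:

EXPLAIN ANALYZE SELECT * FROM iep_student WHERE id_student = '1002863';

The estimated-versus-actual row counts and per-node timings in the plan
will tell you far more about a slow query than the per-backend resource
stats do.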

regards, tom lane

Steve Lane
2004-05-19 03:08:42 UTC
Date: Tue, 18 May 2004 21:06:43 -0400
Subject: Re: [ADMIN] Interpreting query debug output
Post by Steve Lane
I have a database that is exhibiting sluggishness under load. Suspecting
that some queries may be poorly optimized, I turned on a fair amount of
debugging output in the logs. But I could use some help interpreting it.
I think you're going at this all wrong. EXPLAIN ANALYZE should be the
first tool you turn to, not low-level stats. Browsing the archives of
the pgsql-performance mailing list may help you get started.
regards, tom lane
Hmm. When I do a process listing I can see that there are postgres processes
occupying large chunks of CPU, sometimes in the 60-99% range, for long
enough to be noticeable in the process list. I'd like to capture those
process IDs and then correlate them with the stats captured in the log to
see why they take so much CPU.
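
As an aside: something like this, assuming stats_command_string is
turned on and I have the 7.2 column names right, would let me match a
busy PID from ps against its current query, though it only shows what a
backend is doing right now rather than the historical stats in the log:

SELECT procpid, usename, current_query FROM pg_stat_activity;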

If I want to EXPLAIN ANALYZE, I have to pick individual queries. The query
logic of the application is distributed across many source files. I'd have
to do quite some combing to recover a list of all queries the system runs. I
figure distilling the log output might be the best approach.

Given these points, is this still the wrong approach? Even if so, I'd still
love to know how to read the debug output.

-- sgl

