How to fix: connection limit exceeded for non-superusers


(Nic Hubbard) #1

We used to get this error a while back, so we put more RAM into our DB server, but sadly just got the error again:

    Fatal error: Uncaught exception 'Exception' with message 'Could not create database connection: DBError!:SQLSTATE[08006] [7] FATAL:  connection limit exceeded for non-superusers' in /home/websites/mysource_matrix/core/include/mysource.inc:3267
    Stack trace:
    #0 /home/websites/mysource_matrix/core/include/mysource.inc(220): MySource->changeDatabaseConnection('db')
    #1 /home/websites/mysource_matrix/core/include/init.inc(243): MySource->init()
    #2 /home/websites/mysource_matrix/core/cron/run.php(36): require_once('/home/websites/...')
    #3 {main}
     thrown in /home/websites/mysource_matrix/core/include/mysource.inc on line 3267


Currently, in our postgres config file, we have max_connections = 70. This seems super low. Should I bump this up? If so, to what number? We currently have 3369928kB (about 3.2GB) of RAM on the server.

Thanks!

(Keith Brown) #2

We have max_connections set at 100. But the effective_cache_size and shared_buffers have also been tweaked in line with this guide:


http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm
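For reference, the relevant lines in postgresql.conf would look something like this. The values here are only a sketch for a server with roughly 3GB of RAM (a common rule of thumb is shared_buffers at about 25% of RAM); the guide above should drive the actual numbers:

    # postgresql.conf -- illustrative values only
    max_connections = 100          # hard cap on concurrent client connections
    shared_buffers = 768MB         # roughly 25% of RAM is a common starting point
    effective_cache_size = 2GB     # planner hint: RAM the OS can use for disk caching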



Dunno if that's any help…



K


(David Schoen) #3

[quote]
Currently, in our postgres config file, we have max_connections = 70. This seems super low. Should I bump this up? If so, to what number? We currently have 3369928kB of RAM on the server.

[/quote]



70 will probably allow for about 24 concurrent page generations (most pages initiate 3 connections to the DB: a db, db2 and db3 connection), which is actually quite a bit of work to be doing all at once (assuming you have a fairly normal number of CPU cores).
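As a quick sanity check on how close you are to the cap, you can compare the current connection count against max_connections. pg_stat_activity and current_setting() are standard PostgreSQL; the query itself is just a sketch:

    -- how many connections are open right now, and what is the limit?
    SELECT count(*)                            AS current_connections,
           current_setting('max_connections')  AS max_connections
    FROM pg_stat_activity;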



From memory you mentioned recently that there's no Proxy cache in front?



I would seriously be looking in to implementing that now to remove a lot of the load from repeat views.


(Nic Hubbard) #4

[quote]
70 will probably allow for about 24 concurrent page generations (most pages initiate 3 connections to the DB: a db, db2 and db3 connection), which is actually quite a bit of work to be doing all at once (assuming you have a fairly normal number of CPU cores).



From memory you mentioned recently that there's no Proxy cache in front?



I would seriously be looking in to implementing that now to remove a lot of the load from repeat views.

[/quote]



Correct, we don't have a proxy set up.



Is that something that is difficult to set up? And does that hardware have to be powerful?


(David Schoen) #5

[quote]
Correct, we don't have a proxy set up.



Is that something that is difficult to set up? And does that hardware have to be powerful?

[/quote]

Well I find it pretty easy, but I've done a lot of them now :slight_smile:



Generally the hardware can be pretty low spec. We often install it on the same server as Apache.



If you contact support directly they can probably email you our template squid.conf file (I'm not sure if we can publish it on the forum or not).


(Dan Simmons) #6

[quote]
If you contact support directly they can probably email you our template squid.conf file (I'm not sure if we can publish it on the forum or not).

[/quote]



Here it is (well, the UK version) :slight_smile:

http://forums.squizsuite.net/index.php?showtopic=8042&view=findpost&p=39739


(Nic Hubbard) #7

[quote]
Here it is (well, the UK version) :slight_smile:

http://forums.squizsuite.net/index.php?showtopic=8042&view=findpost&p=39739

[/quote]



Thanks. So is that all it takes to get it up and running: install it on our Apache server, add the config, and it's good to go? Or is it more complicated than that?


(David Schoen) #8

[quote]
Thanks. So is that all it takes to get it up and running: install it on our Apache server, add the config, and it's good to go? Or is it more complicated than that?

[/quote]



Very slightly more complicated.



Apache will need to be listening on 127.0.0.1:80 and Squid will need to listen on <public ips>:80.



Then tell Squid to find Apache on the localhost IP:

    
    cache_peer 127.0.0.1 parent 80 0 originserver no-query no-digest login=PASS default


As long as you only have one Matrix instance this should pretty much always work. It gets more complicated if you have multiple instances on one IP because Squid has to decide which parent server to use based on the incoming Host header.

This can be done with fairly minimal down time, even without a UAT environment (but if you have UAT that'd be a REALLY good place to try it out).

Install Squid and configure it to listen on a non-standard port (e.g. 8080). Set the <public ip> and 8080 as your browser proxy and confirm it's ok. Once you are comfortable the site renders ok through the Squid instance, adjust Squid to listen on port 80 on the public ips only and Apache to listen on 127.0.0.1 only, then do a restart on both daemons.
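Putting those pieces together, a minimal squid.conf for the final state might look something like the following. The IP and hostname are placeholders, and the support template mentioned above is the authoritative version:

    # squid.conf -- minimal reverse-proxy sketch
    # listen on the public IP only (placeholder address)
    http_port 203.0.113.10:80 accel defaultsite=www.example.com

    # forward everything to Apache on the loopback interface
    cache_peer 127.0.0.1 parent 80 0 originserver no-query no-digest login=PASS default

with the matching change on the Apache side so it no longer binds the public interface:

    # httpd.conf
    Listen 127.0.0.1:80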

Make sure to backup your Apache config just in case.

[u]If you get everything right[/u], down time should be less than 30 seconds in the worst case.

Your backout plan should be:
* shutdown squid
* restore apache config
* restart apache

p.s. once you have Squid working, make sure to set up a clear squid cache trigger (and confirm it actually works).
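On that last point, one common way to wire up a clear-cache trigger is an HTTP PURGE request via squidclient. This is only a sketch, and it assumes you add acl/http_access rules to squid.conf permitting PURGE from the local machine:

    # squid.conf: allow the PURGE method from localhost only
    acl purge_method method PURGE
    acl from_localhost src 127.0.0.1/32
    http_access allow purge_method from_localhost
    http_access deny purge_method

Then evicting a single URL from the cache looks like:

    squidclient -m PURGE http://www.example.com/some/page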