Recently our site has been running out of memory and freezing up for about five minutes at a time. I raised the PHP memory limit from 64 MB to 96 MB; does this seem too high?
Also, the error was coming from the following file; does this seem out of the ordinary?
[03-Sep-2008 15:45:48] PHP Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 37979827 bytes) in /home/websites/puc_matrix_3-16-2/php_includes/HTTP/Request.php on line 729
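(For reference: 67,108,864 bytes is exactly the previous 64 MB limit, and the single failed allocation was about 36 MB on its own.)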
I just want to make sure that nothing strange is causing this.
The file that is trying to allocate that amount of memory is the PEAR HTTP client module (not written by Squiz). This is primarily used in Remote Content assets.
I'd be worried that a script like this would need to use more than 64 MB of memory…
Also, generally speaking, I would say that setting memory_limit to anywhere near half your system memory is dangerous. The limit is there to prevent scripts from eating all available memory and causing your system to stop responding. If memory_limit is set to half your system memory, it could theoretically take only 2-3 executions of a buggy or rogue script to bring your server to its knees.
It largely depends on how many requests your server handles at a time. If it handles a maximum of 20 requests at once, multiply 20 by your memory_limit setting to get a rough idea of the theoretical maximum memory that PHP could use under that load. That figure should be far less than your total system memory, because other processes need memory too.
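For example, at the new 96 MB limit, 20 simultaneous requests could theoretically consume 20 × 96 MB = 1,920 MB between them, which is almost 2 GB for PHP alone.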
Would there be any way to figure out which Remote Content asset could be causing this? We do use Remote Content in a number of places, so if this is the case, it would be nice to track it down.
Where the actual error is thrown may have nothing to do with where the memory is being used. It is just the line on which PHP needed to allocate some memory and it ran out. It is quite possible that it has nothing to do with PEAR::HTTP_Client and it probably has nothing to do with mysource.inc.
You need to see if you can pinpoint a particular page or function that you can use to replicate the error. Then we can attempt to replicate it in our environment or debug it directly on yours.
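If it helps, one rough way to narrow things down (a sketch only, assuming your PHP build has memory_get_usage() available; on PHP 4 that requires compiling with --enable-memory-limit) is to drop checkpoints into any code you suspect:
[code]
<?php
// Hypothetical debugging aid (not part of Matrix): log the memory in
// use at checkpoints sprinkled through code suspected of ballooning.
function mem_checkpoint($label)
{
    error_log(sprintf('%s: %d bytes in use', $label, memory_get_usage()));
}

mem_checkpoint('before remote fetch');
// ... the suspect code, e.g. fetching the remote content ...
mem_checkpoint('after remote fetch');
[/code]
Comparing successive checkpoints shows where the memory is actually being consumed, which may be well before the line the fatal error names.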
[quote]Where the actual error is thrown may have nothing to do with where the memory is being used. It is just the line on which PHP needed to allocate some memory and it ran out. It is quite possible that it has nothing to do with PEAR::HTTP_Client and it probably has nothing to do with mysource.inc.
You need to see if you can pinpoint a particular page or function that you can use to replicate the error. Then we can attempt to replicate it in our environment or debug it directly on yours.[/quote]
Thanks Greg. I was looking through the logs more closely, and I found something interesting:
[2008-08-26 13:56:11][7:Public User][2:php warning][R] (/usr/share/php/Net/Socket.php:106) - fsockopen(): php_network_getaddresses: getaddrinfo failed: Name or service not known
[2008-08-26 13:56:11][7:Public User][2:php warning][R] (/usr/share/php/Net/Socket.php:106) - fsockopen(): unable to connect to thinkgreen.cp:80
[2008-08-26 13:56:11][7:Public User][512:mysource warning][R] (/packages/cms/page_templates/page_remote_content/page_remote_content.inc:380) - Cannot connect to server while attempting to access "http://thinkgreen.cp/cs/store_find.aspx?txtmemberno=906505&" - Success [CMS0063]
[26-Aug-2008 14:24:04] PHP Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 28210298 bytes) in /home/websites/puc_matrix_3-16-2/core/include/mysource.inc on line 543
[26-Aug-2008 14:30:06] PHP Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 57516782 bytes) in /home/websites/puc_matrix_3-16-2/php_includes/HTTP/Request.php on line 729
It almost seems that somehow one of our remote content pages was being used to access content that was not on our site. I checked all our remote content assets and found one that had no tunneling rules set, so I changed it to only tunnel URLs in the same domain.
Each time I see the PHP memory error in the logs, it comes right after a remote content page tried to connect to some random page that has nothing to do with our site...
If you have configured your remote content page to allow tunneling of any URL, users can craft a specially encoded URL that tells the remote content page to fetch it.
Open tunneling should only be used inside a secure, firewalled network where external URLs can't be reached anyway. For other systems, the default option is to only tunnel URLs from the same domain; everything else is simply ignored.
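As a rough illustration of what that same-domain rule amounts to (a sketch only, not the actual Matrix code; the site host name below is a placeholder):
[code]
<?php
// Illustrative sketch only: refuse to tunnel any URL whose host does
// not match the site's own host.
function same_domain($url, $site_host)
{
    $parts = @parse_url($url);
    if (!is_array($parts) || empty($parts['host'])) {
        return FALSE;   // malformed or relative URL - do not tunnel it
    }
    return strtolower($parts['host']) === strtolower($site_host);
}

// e.g. the request seen in the logs above would be rejected:
$url = 'http://thinkgreen.cp/cs/store_find.aspx?txtmemberno=906505&';
if (!same_domain($url, 'www.your-site.example')) {
    // ignore it instead of proxying arbitrary third-party content
}
[/code]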
[quote]If you have configured your remote content page to allow tunneling of any URL, users can craft a specially encoded URL that tells the remote content page to fetch it.
Open tunneling should only be used inside a secure, firewalled network where external URLs can't be reached anyway. For other systems, the default option is to only tunnel URLs from the same domain; everything else is simply ignored.[/quote]
Well, I found the remote content asset that was being accessed, and realized I had not set it to only allow URLs in the same domain. So I had to change its web path to make the requests stop.
But we are still getting PHP memory errors and the site still freezes at random times. Our IT admin is looking into it. He did find that PostgreSQL is running autovacuum about every 5 minutes, which sounds strange, but maybe this is normal.
Is there no other way to monitor and track down the cause of a PHP memory error?
We also had some strange Apache errors; not sure if they are related:
[Fri Sep 12 12:01:16 2008] [warn] child process 819 still did not exit, sending a SIGTERM
[Fri Sep 12 12:01:18 2008] [warn] child process 819 still did not exit, sending a SIGTERM
[Fri Sep 12 12:01:20 2008] [warn] child process 819 still did not exit, sending a SIGTERM
[Fri Sep 12 12:01:22 2008] [error] child process 819 still did not exit, sending a SIGKILL
[Fri Sep 12 12:01:23 2008] [notice] caught SIGTERM, shutting down
[Fri Sep 12 12:01:25 2008] [notice] Apache/2.0.54 (Debian GNU/Linux) PHP/4.3.10-22 mod_ssl/2.0.54 OpenSSL/0.9.7e configured -- resuming normal operations
[Fri Sep 12 12:01:30 2008] [error] server reached MaxClients setting, consider raising the MaxClients setting
Postgres's autovacuum process runs according to the rules set in the PostgreSQL config file.
The PostgreSQL manual has more specific instructions, but 5 minutes sounds reasonable for a fairly stock install.
There are a lot of different ways to hunt down what's causing the memory ceiling to be reached, but the most important is to identify the URLs that trigger the event.
Each time it happens, the response should be logged in Apache's access log as a '500', which will give you the URL that's causing the error.
Once you have that, you can collect the assetid for that URL either through Matrix or by consulting the DB directly.
From that point, the process should be fairly straightforward.
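If you want PHP itself to leave a trail, a rough option (again assuming memory_get_usage() is available in your build) is to log each request's memory from a shutdown function. It may not fire cleanly on the request that actually exhausts the limit, but it will show which URLs run close to it:
[code]
<?php
// Rough per-request memory logger (a sketch, not a Matrix feature).
// /tmp/php_memory.log is just an example path.
function log_request_memory()
{
    $line = sprintf("[%s] %s - %d bytes\n",
                    date('Y-m-d H:i:s'),
                    isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '(cli)',
                    memory_get_usage());
    error_log($line, 3, '/tmp/php_memory.log');
}
register_shutdown_function('log_request_memory');
[/code]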
We only had 1 GB of RAM in our web server, so we upped that to 4 GB and also raised the MaxClients setting in Apache, since we were getting errors about that too. So far, this seems to have fixed the problem.
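(By Greg's earlier arithmetic: with a 96 MB memory_limit, 4 GB of RAM covers roughly 4096 ÷ 96 ≈ 42 simultaneous PHP requests in the theoretical worst case, before counting what Apache, PostgreSQL and the OS themselves need, which is worth keeping in mind when raising MaxClients.)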