Running into a problem with Squid — urgent!
Excerpted from www.unix.org.ua (http://www.unix.org.ua/squid/FAQ-10.html); the original text is quoted below:
---------------------------------------------
If you see the Too many open files error message, you are most likely running out of file descriptors. This may be due to running Squid on an operating system with a low filedescriptor limit. This limit is often configurable in the kernel or with other system tuning tools. There are two ways to run out of file descriptors: first, you can hit the per-process limit on file descriptors. Second, you can hit the system limit on total file descriptors for all processes.
For Linux, have a look at filehandle.patch.linux (http://www.linux.org.za/filehandle.patch.linux) by Michael O'Reilly.
For Solaris, add the following to your /etc/system file to increase your maximum file descriptors per process:
set rlim_fd_max = 4096
set rlim_fd_cur = 1024
You should also #define SQUID_FD_SETSIZE in include/config.h to whatever you set rlim_fd_max to. Going beyond 4096 may break things in the kernel.
Solaris' select(2) only handles 1024 descriptors, so if you need more, edit src/Makefile and enable $(USE_POLL_OPT). Then recompile Squid.
For FreeBSD (by Torsten Sturm <torsten.sturm@axis.de>):
1.How do I check my maximum filedescriptors?
Do sysctl -a and look for the value of kern.maxfilesperproc.
2.How do I increase them?
sysctl -w kern.maxfiles=XXXX
sysctl -w kern.maxfilesperproc=XXXX
Warning: You probably want maxfiles > maxfilesperproc if you're going to be pushing the limit.
3.What is the upper limit?
I don't think there is a formal upper limit inside the kernel. All the data structures are dynamically allocated. In practice there might be unintended metaphenomena (kernel spending too much time searching tables, for example).
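The maxfiles > maxfilesperproc warning above is easy to check mechanically before applying anything with sysctl -w. A minimal sketch, using hypothetical target values (not values from the FAQ):

```shell
# Hypothetical target values -- substitute the limits you actually intend to set.
maxfiles=8192
maxfilesperproc=4096

# Sanity-check the relationship before running:
#   sysctl -w kern.maxfiles=$maxfiles
#   sysctl -w kern.maxfilesperproc=$maxfilesperproc
if [ "$maxfiles" -gt "$maxfilesperproc" ]; then
    echo "ok: maxfiles ($maxfiles) > maxfilesperproc ($maxfilesperproc)"
else
    echo "warning: maxfiles should exceed maxfilesperproc"
fi
```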
For most BSD-derived systems (SunOS, 4.4BSD, OpenBSD, FreeBSD, NetBSD, BSD/OS, 386BSD, Ultrix) you can also use the "brute force" method to increase these values in the kernel (requires a kernel rebuild):
1.How do I check my maximum filedescriptors?
Do pstat -T and look for the files value, typically displayed as current/maximum.
2.How do I increase them the easy way?
One way is to increase the value of the maxusers variable in the kernel configuration file and build a new kernel. This method is quick and easy but also has the effect of increasing a wide variety of other variables that you may not need or want increased.
3.Is there a more precise method?
Another way is to find the param.c file in your kernel build area and change the arithmetic behind the relationship between maxusers and the maximum number of open files.
Here are a few examples which should lead you in the right direction:
1.SunOS
Change the value of nfile in /usr/kvm/sys/conf.common/param.c by altering this equation:
int nfile = 16 * (NPROC + 16 + MAXUSERS) / 10 + 64;
Where NPROC is defined by:
#define NPROC (10 + 16 * MAXUSERS)
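To get a feel for what this equation yields, here is a quick shell arithmetic check, using a hypothetical MAXUSERS of 32 (an example value, not one from the FAQ):

```shell
MAXUSERS=32                              # hypothetical example value
NPROC=$(( 10 + 16 * MAXUSERS ))          # SunOS NPROC definition -> 522
NFILE=$(( 16 * (NPROC + 16 + MAXUSERS) / 10 + 64 ))
echo "NPROC=$NPROC nfile=$NFILE"         # prints NPROC=522 nfile=976
```

So with 32 configured users the kernel would allow 976 open files system-wide; raising MAXUSERS raises nfile roughly linearly.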
2.FreeBSD (from the 2.1.6 kernel)
Very similar to SunOS, edit /usr/src/sys/conf/param.c and alter the relationship between maxusers and the maxfiles and maxfilesperproc variables:
int maxfiles = NPROC*2;
int maxfilesperproc = NPROC*2;
Where NPROC is defined by:
#define NPROC (20 + 16 * MAXUSERS)
The per-process limit can also be adjusted directly in the kernel configuration file with the following directive:
options OPEN_MAX=128
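Working the FreeBSD formula through with a hypothetical MAXUSERS of 64 (an example value, not one from the FAQ):

```shell
MAXUSERS=64                            # hypothetical example value
NPROC=$(( 20 + 16 * MAXUSERS ))        # FreeBSD NPROC definition -> 1044
MAXFILES=$(( NPROC * 2 ))              # maxfiles = maxfilesperproc -> 2088
echo "NPROC=$NPROC maxfiles=$MAXFILES" # prints NPROC=1044 maxfiles=2088
```

Note that with the stock arithmetic, maxfiles and maxfilesperproc come out equal, which is exactly the situation the earlier warning cautions against if you run near the limit.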
3.BSD/OS (from the 2.1 kernel)
Edit /usr/src/sys/conf/param.c and adjust the maxfiles math here:
int maxfiles = 3 * (NPROC + MAXUSERS) + 80;
Where NPROC is defined by:
#define NPROC (20 + 16 * MAXUSERS)
You should also set the OPEN_MAX value in your kernel configuration file to change the per-process limit.
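The BSD/OS math can be checked the same way; with a hypothetical MAXUSERS of 32 (an example value, not one from the FAQ):

```shell
MAXUSERS=32                              # hypothetical example value
NPROC=$(( 20 + 16 * MAXUSERS ))          # BSD/OS NPROC definition -> 532
MAXFILES=$(( 3 * (NPROC + MAXUSERS) + 80 ))
echo "NPROC=$NPROC maxfiles=$MAXFILES"   # prints NPROC=532 maxfiles=1772
```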
NOTE: After you rebuild/reconfigure your kernel with more filedescriptors, you must then recompile Squid. Squid's configure script determines how many filedescriptors are available, so you must make sure the configure script runs again as well. For example:
cd squid-1.1.x
make realclean
./configure --prefix=/usr/local/squid
make