Operating system kernels typically enforce configurable limits on System V Shared Memory usage. On Linux, these limits can be seen by running the following command:
$ ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 67108864
max total shared memory (kbytes) = 67108864
min seg size (bytes) = 1
The tunable values that affect shared memory are:
SHMMAX - This parameter defines the maximum size, in bytes, of a single shared memory segment. It should be set to at least the largest desired memory size for nodes using System V Shared Memory.
SHMALL - This parameter sets the total number of shared memory pages that can be used system wide. It should be set to at least SHMMAX/PAGE_SIZE. To see the page size for a particular system, run the following command:
$ getconf PAGE_SIZE
4096
SHMMNI - This parameter sets the system wide maximum number of shared memory segments. It should be set to at least the number of nodes that are to be run on the system using System V Shared Memory.
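The SHMALL rule of thumb above (SHMALL &gt;= SHMMAX/page size) can be checked with a small script. This is a sketch assuming the standard Linux procfs paths; it only reads values, so no root is needed:

```shell
# Sketch: verify that the current SHMALL covers SHMMAX (Linux, standard /proc paths).
shmmax=$(cat /proc/sys/kernel/shmmax)
shmall=$(cat /proc/sys/kernel/shmall)      # counted in pages
page_size=$(getconf PAGE_SIZE)
needed=$(( shmmax / page_size ))
if [ "$shmall" -ge "$needed" ]; then
    echo "SHMALL ok: $shmall pages >= $needed pages required"
else
    echo "SHMALL too low: $shmall pages, need at least $needed"
fi
```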
These values may be changed either at runtime (in several different ways) or system boot time.
Change SHMMAX to 16 GiB (17179869184 bytes), at runtime, as root, by writing the value directly to /proc:
# echo 17179869184 > /proc/sys/kernel/shmmax
Change SHMALL to 4 million pages, at runtime, as root, via the sysctl program:
# sysctl -w kernel.shmall=4194304
Change SHMMNI to 4096 automatically at boot time:
# echo "kernel.shmmni=4096" >> /etc/sysctl.conf
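To make all three settings survive a reboot, they can be appended to /etc/sysctl.conf and applied in one step. This is a sketch, run as root; the values are the examples from above, not recommendations:

```shell
# Sketch: persist the example values from above (run as root).
cat >> /etc/sysctl.conf <<'EOF'
kernel.shmmax = 17179869184
kernel.shmall = 4194304
kernel.shmmni = 4096
EOF
sysctl -p    # re-read /etc/sysctl.conf so the values take effect immediately
```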
On Linux, the runtime attempts to use the huge page TLB support when allocating System V Shared Memory for sizes that are even multiples of 256 megabytes. If the support is not present, or not sufficiently configured, the runtime automatically falls back to normal System V Shared Memory allocation.
The kernel must have huge page (hugetlbpage) support enabled. This is present in 2.6 kernels and later. See http://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt.
The system must have huge pages available. They can be reserved:
At boot time via /etc/sysctl.conf:
vm.nr_hugepages = 512
Or at runtime:
echo 512 > /proc/sys/vm/nr_hugepages
Or the kernel can attempt to allocate them from the normal memory pools as needed:
At boot time via /etc/sysctl.conf:
vm.nr_overcommit_hugepages = 512
Or at runtime:
echo 512 > /proc/sys/vm/nr_overcommit_hugepages
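The number of pages to reserve depends on the huge page size, which varies by architecture. As a sketch, the count needed for a given segment size can be derived from /proc/meminfo; the 256 MiB figure below is an example matching the allocation granularity described above:

```shell
# Sketch: compute how many huge pages a segment of a given size requires.
# 256 MiB is an example segment size, not a recommendation.
segment_bytes=$(( 256 * 1024 * 1024 ))
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
pages=$(( segment_bytes / (hugepage_kb * 1024) ))
echo "$pages huge pages needed for a ${segment_bytes}-byte segment"
```

With the common 2048 kB huge page size, a 256 MiB segment needs 128 huge pages.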
Non-root users require group permission. This can be granted:
At boot time via /etc/sysctl.conf:
vm.hugetlb_shm_group = 1000
Or at runtime by:
echo 1000 > /proc/sys/vm/hugetlb_shm_group
where 1000 is the desired group id.
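The group id can be looked up by name rather than hard-coded. This is a sketch assuming a group named "hugetlb" exists (the name is an example); the write to /proc requires root:

```shell
# Sketch: resolve a group name to its numeric id and grant it huge page shm access.
# "hugetlb" is an example group name; the echo into /proc requires root.
gid=$(getent group hugetlb | cut -d: -f3)
echo "$gid" > /proc/sys/vm/hugetlb_shm_group
```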
On earlier kernels in the 2.6 series, the user ulimit on maximum locked memory (memlock) must also be raised to a level equal to or greater than the System V Shared Memory size. On RedHat systems, this involves changing /etc/security/limits.conf and enabling PAM support for limits in whatever login mechanism is being used. See the operating system vendor documentation for details.
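Whether the memlock limit is already high enough can be checked from a shell before editing limits.conf. A sketch, with the planned shared memory size as an example value:

```shell
# Sketch: compare the memlock ulimit with a planned shared memory size.
planned_kb=$(( 16 * 1024 * 1024 ))   # example: 16 GiB, in kB as ulimit -l reports
current=$(ulimit -l)
if [ "$current" = "unlimited" ] || [ "$current" -ge "$planned_kb" ]; then
    echo "memlock limit is sufficient ($current)"
else
    echo "memlock limit too low ($current kB); raise it in /etc/security/limits.conf"
fi
```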
A system-imposed user limit on the maximum number of processes may impact the ability to deploy multiple JVMs concurrently to the same machine, or even a single JVM if it uses a large number of threads. The limit for the current user can be seen by running:
$ ulimit -u
16384
Many RedHat systems ship with a limit of 1024:
$ cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          -    nproc     1024
This 1024 limit should be raised if you see errors like the following:
EAGAIN The system lacked the necessary resources to create another thread, or the system-imposed limit on the total number of threads in a process {PTHREAD_THREADS_MAX} would be exceeded.
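For the current shell session, the soft limit can be raised as far as the hard limit allows without editing any file. A sketch; for a persistent change, the nproc line in /etc/security/limits.d/90-nproc.conf shown above must be edited instead:

```shell
# Sketch: raise the per-user process soft limit up to the hard limit
# for the current shell session only.
hard=$(ulimit -Hu)
ulimit -u "$hard"
echo "nproc soft limit is now $(ulimit -u)"
```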