Is there a memory leak problem in 1.2.8? Several terminals are registered on it, and the server's memory increases by three or four GB in a day or two.
Our server is running 1.2.8 and has used about 3 GB of RAM ever since I first noticed the memory usage over a year ago. The virtual machine has 8 GB of RAM just to avoid using swap as much as possible; swap is set to 1 GB but not really used.
Does it stop increasing at this point? Or does it keep consuming 3-4 GB every day or so? If it stops consuming, then it's probably not a memory leak.
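One way to tell the two cases apart is to record total node memory over time (a sketch; the log path is just an example, and plain RSS slightly overestimates compared with ps_mem because shared pages are counted once per process):

```shell
#!/bin/sh
# Append a timestamped total RSS (KiB) of all node processes to a log.
# Run hourly from cron: a number that keeps climbing for days points to
# a leak, while one that plateaus suggests normal working-set growth.
ps -C node -o rss= \
  | awk -v ts="$(date -Is)" '{ sum += $1 } END { printf "%s %d KiB\n", ts, sum }' \
  >> /var/log/node-rss.log   # example path, adjust to taste
```

Plotting or even just eyeballing that log over a week gives much stronger evidence than a single snapshot.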
Here are some details after 24 hours; this machine has 8 GB of RAM.
Yeah, but that does not prove GenieACS is behind it.
root@acs3:~# free -h
total used free shared buff/cache available
Mem: 7.8Gi 3.2Gi 3.5Gi 3.0Mi 1.1Gi 5.1Gi
Swap: 974Mi 0B 974Mi
root@acs3:~# uptime
09:26:02 up 97 days, 16:27, 1 user, load average: 0.04, 0.01, 0.00
I found a Python script to get per-process memory usage:
root@acs3:~/install/util# curl -sL https://raw.githubusercontent.com/pixelb/ps_mem/master/ps_mem.py -o /usr/local/bin/ps_mem.py
root@acs3:~/install/util# chmod 755 /usr/local/bin/ps_mem.py
root@acs3:~/install/util# python
Python 2.7.18 (default, Jul 14 2021, 08:11:37)
[GCC 10.2.1 20210110] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> exit
Use exit() or Ctrl-D (i.e. EOF) to exit
>>>
root@acs3:~/install/util# ps_mem.py -S
Private + Shared = RAM used Swap used Program
116.0 KiB + 93.5 KiB = 209.5 KiB 0.0 KiB sleep
316.0 KiB + 40.5 KiB = 356.5 KiB 0.0 KiB agetty
304.0 KiB + 132.5 KiB = 436.5 KiB 0.0 KiB cron
300.0 KiB + 449.5 KiB = 749.5 KiB 0.0 KiB daemon_tr069.sh
856.0 KiB + 605.5 KiB = 1.4 MiB 0.0 KiB systemd-timesyncd
1.3 MiB + 298.5 KiB = 1.6 MiB 0.0 KiB dbus-daemon
1.1 MiB + 634.5 KiB = 1.7 MiB 0.0 KiB systemd-logind
2.5 MiB + 193.5 KiB = 2.7 MiB 0.0 KiB rsyslogd
3.1 MiB + 115.5 KiB = 3.2 MiB 0.0 KiB systemd-udevd
2.8 MiB + 589.5 KiB = 3.4 MiB 0.0 KiB vmtoolsd
2.9 MiB + 920.5 KiB = 3.8 MiB 0.0 KiB VGAuthService
3.2 MiB + 1.1 MiB = 4.2 MiB 0.0 KiB bash (2)
3.5 MiB + 2.8 MiB = 6.3 MiB 0.0 KiB sshd (3)
3.7 MiB + 5.9 MiB = 9.6 MiB 0.0 KiB systemd (3)
4.7 MiB + 15.9 MiB = 20.5 MiB 0.0 KiB apache2 (6)
7.8 MiB + 14.1 MiB = 21.8 MiB 0.0 KiB systemd-journald
528.7 MiB + 279.5 KiB = 529.0 MiB 0.0 KiB mongod
2.4 GiB + 33.5 MiB = 2.4 GiB 0.0 KiB node (44)
---------------------------------------------
3.0 GiB 0.0 KiB
=============================================
root@acs3:~/install/util#
The numbers do make sense to me. And of course, Python has to be installed for it to work.
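If Python is not available, a rough one-shot equivalent for just the node processes can be done with ps and awk (a sketch; unlike ps_mem it sums plain RSS, so shared pages are double-counted and the total reads slightly high):

```shell
# Sum the resident set size (RSS) of all node processes, in MiB.
# RSS counts shared libraries once per process, so this overestimates
# a little compared with ps_mem's private+shared accounting.
ps -C node -o rss= | awk '{ sum += $1 } END { printf "node total: %.1f MiB\n", sum / 1024 }'
```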
I suspect that it is a memory leak caused by sessions not ending normally.
I also have CPEs not ending sessions, and I think it's because of physical problems, but memory usage remains the same. My GenieACS version is "as is": code unmodified, installed via npm as the docs suggest, and running on a dedicated virtual machine. Over 4k CPEs are registered.
I see you have 75 node processes while I run 45. I suspect you must have 16 cores, which could explain your 4.7 GB of RAM used. Is that a physical machine? Worst case, I would try to isolate GenieACS on a virtual machine with no other services running.
root@acs3:~# lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
CPU(s): 8
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
I never tried it, but you could use the *_WORKER_PROCESSES variables to limit the number of worker processes per service; see "Environment Variables" in the GenieACS 1.2.9 documentation. That, or increase total RAM.
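As a sketch, those variables could go in the genieacs.env file the docs have the services read (variable names are from the GenieACS environment-variable documentation; the path and the values here are only examples, and 0, the default, means one worker per CPU core):

```shell
# /opt/genieacs/genieacs.env (example path)
# Cap workers per service instead of spawning one per CPU thread.
GENIEACS_CWMP_WORKER_PROCESSES=4
GENIEACS_NBI_WORKER_PROCESSES=2
GENIEACS_FS_WORKER_PROCESSES=2
GENIEACS_UI_WORKER_PROCESSES=2
```

Fewer workers means less baseline RAM, at the cost of less parallelism for concurrent CPE sessions.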
Also:
Extraordinary claims require extraordinary evidence; when you make a claim like this one, you must back it up with clear and exhaustive documentation of the failure case.
It was installed via Docker, using docker-compose.
lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
CPU(s): 16
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 16