It would be useful to log a case like this with TAC, as mentioned earlier; this could be a known defect.
Original Message:
Sent: Sep 15, 2023 02:51 AM
From: zsentinel
Subject: ArubaCX 6100: High CPU utilization after update to 10.12.0006
Correction: we updated to 10.12.1000 ;)
------------------------------
Matthias Zeitler (mz@vfm-gruppe.de)
Original Message:
Sent: Sep 15, 2023 02:12 AM
From: zsentinel
Subject: ArubaCX 6100: High CPU utilization after update to 10.12.0006
Good Morning,
we experienced the same result after updating to 10.23.1000: very high CPU consumption, with the SNMP daemon contributing a lot!
We removed the SNMP configuration and re-applied it, and the high CPU utilization problem was gone.
Maybe this tip could help you ;)
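For anyone wanting to try the same workaround, a minimal sketch on the AOS-CX CLI might look like the lines below. The community string is a placeholder; the exact `no ...` lines depend on whatever SNMP configuration your switch actually has, so check the running config first:

```
switch# show running-config | include snmp
snmp-server community public
switch# configure terminal
switch(config)# no snmp-server community public
switch(config)# snmp-server community public
```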
Best Regards,
Matthias
------------------------------
Matthias Zeitler (mz@vfm-gruppe.de)
Original Message:
Sent: Sep 14, 2023 11:42 AM
From: higher_ed_admin_0978734512
Subject: ArubaCX 6100: High CPU utilization after update to 10.12.0006
I have noticed this on our 6100s on 10.10, and currently have one doing it on 10.10.1030. We saw this on quite a few a while back, but it has been a while. The `hpe-snmpd` process is normally the issue, as it would consume a large percentage of the CPU.
Did you open a case with TAC? If so, any resolution?
I am planning on opening a ticket with TAC.
Original Message:
Sent: Jun 17, 2023 04:30 AM
From: Daniel
Subject: ArubaCX 6100: High CPU utilization after update to 10.12.0006
Hi,
I just updated a CX 6100 switch to PL.10.12.0006 and now the CPU usage is extremely high.

Before the update everything was fine; after the update, the ovsdb-server and snmpd processes are fighting over the CPU.
# top cpu
top - 10:23:20 up 11:20, 1 user, load average: 2.17, 2.38, 2.44
Tasks: 199 total, 7 running, 192 sleeping, 0 stopped, 0 zombie
%Cpu(s): 65.0 us, 25.0 sy, 2.5 ni, 7.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3422.6 total, 1216.8 free, 982.0 used, 1223.8 buff/cache
MiB Swap: 1024.0 total, 1024.0 free, 0.0 used. 2227.2 avail Mem

  PID USER     PR NI   VIRT    RES   SHR S %CPU %MEM    TIME+ COMMAND
553 root 20 0 21000 14400 4552 S 50.0 0.4 113:35.55 /usr/sbin/ovsdb-server --config-file=/e+
13713 remote_+ 20 0 3036 1880 1504 R 35.0 0.1 0:00.20 /usr/bin/top -b -n 2 -c -o %CPU -w 110 +
4099 root 20 0 69276 24440 19620 S 25.0 0.7 528:33.37 /usr/bin/hpe-snmpd -x /var/agentx/maste+
13711 root 20 0 24872 8012 4964 R 25.0 0.2 0:00.33 /usr/bin/dns_get_srcip 13708 2558 13708
2168 root 20 0 810324 131668 23732 S 20.0 3.8 51:14.80 /usr/bin/switchd_agent -s 1 -p 1943 -m 1
610 root 20 0 289768 41932 24248 S 15.0 1.2 9:15.03 /usr/sbin/ops-switchd --no-chdir --pidf+
10 root 20 0 0 0 0 S 5.0 0.0 0:39.94 [ksoftirqd/0]
2318 root 20 0 43008 14360 11708 S 5.0 0.4 0:56.54 /usr/bin/l2mac-mgrd --pidfile -vSYSLOG:+
2319 root 20 0 343036 16048 12556 S 5.0 0.5 7:12.82 /usr/bin/ndmd --pidfile -vSYSLOG:INFO
2341 root 20 0 51644 14100 9824 R 5.0 0.4 21:09.38 /usr/bin/poe-hald --detach --pidfile -v+
2649 root 20 0 95420 15532 12260 S 5.0 0.4 6:39.05 /usr/bin/ipsavd --pidfile -vSYSLOG:INFO
2680 root 20 0 104260 14320 11532 S 5.0 0.4 12:42.85 /usr/bin/hpe-mgmdd --detach --pidfile -+
2722 root 20 0 101928 16788 11680 S 5.0 0.5 4:05.06 /usr/bin/mtmd --detach --pidfile -vSYSL+
12669 root 30 10 19316 14956 6360 R 5.0 0.4 0:50.85 python /usr/bin/ops-gen-logrotate
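If you want to keep an eye on this across several switches, one hypothetical approach is to capture the `top` batch output (e.g. over SSH) and flag any daemon above a CPU threshold. The sketch below is not an official tool; it simply assumes the standard `top` column layout shown above (PID first, %CPU in the ninth column, COMMAND last):

```python
# Sketch: flag processes above a %CPU threshold in captured `top -b` output.
# Assumes the usual column order: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND

def high_cpu_processes(top_output: str, threshold: float = 20.0):
    """Return (pid, %cpu, command) tuples for process lines at or above the threshold."""
    hits = []
    seen_header = False
    for line in top_output.splitlines():
        cols = line.split()
        if not seen_header:
            # The process table starts right after the header row ending in COMMAND.
            seen_header = cols[-1:] == ["COMMAND"]
            continue
        if len(cols) < 12:
            continue  # skip blank or truncated lines
        try:
            pid = int(cols[0])
            cpu = float(cols[8])
        except ValueError:
            continue  # not a process row
        if cpu >= threshold:
            hits.append((pid, cpu, " ".join(cols[11:])))
    return hits
```

Feeding it the output above with a threshold of, say, 30% would single out ovsdb-server; run it periodically and a persistently flagged `hpe-snmpd` is a good sign you are hitting this issue.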
SNMP is in use; otherwise the config of the switch is vanilla: no routing, no additional VRFs, just VLANs and an uplink.