Long Delay in Login to Desktop on Kali 6.4.0 and Ubuntu 22.04

In recent weeks, both operating systems have shown a long delay between login and the appearance of the session desktop. The delay increased by 300-600%, to 140+ seconds, on an older HP workstation that normally took less than 20 seconds. The problem is present whether the machine is rebooted or resumed from a suspended state. The cause was a good learning experience and not what was initially expected! No IPv6 tables were injured in the resolution of this issue…

Table of Contents

1: Operating systems involved.
2: Diagnostic tools used.
3: Commonalities between examples.
4: Kali 6.4.0 use case.
5: Ubuntu 22.04 use case.
6: Lessons Learned.

Operating systems involved:

The problem manifests on both Kali Linux 6.4.0 and Ubuntu 22.04.

Both systems had been logging in nominally until roughly two weeks ago.

Diagnostic Tools

The journal was examined on both operating systems. Either of the following commands will do:

less /var/log/syslog
-- AND/OR --
journalctl --system -S today | less

Commonalities Between Examples

The system is a dual boot HP wx8600 workstation with boot partitions on separate HDDs.

Processors, 2x: Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
Memory, 8GB: DDR2-667/PC2-5300.

In both instances we start from a suspended state.

Kali 6.4.0


On Kali, journalctl was used to audit the exit from the suspended state through login. The journal showed that the lynis program started a vulnerability scan of the boot drive as the system left the sleep state, causing the delay.

Attempted Resolution:

Lynis installs a service, lynis.service, with normal start and stop control, but disabling lynis.service is not possible. Stopping lynis.service warns the operator that it can be restarted by lynis.timer. The workaround was to hide lynis.timer by renaming its unit file, i.e.

sudo mv lynis.timer .lynis.timer 

and then stopping lynis.service.
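An alternative to renaming the unit file, assuming a standard systemd layout (my suggestion, not the step taken in the workaround above), is to mask the timer so that nothing can start it:

```shell
# Stop the running service, then mask the timer so it cannot be
# started again, even by another unit:
sudo systemctl stop lynis.service
sudo systemctl mask lynis.timer

# To undo later:
# sudo systemctl unmask lynis.timer
```

Masking points the unit at /dev/null rather than moving the file, so it survives package updates and is easily reversed.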

Though this improved the delay between login and the session desktop, the problem was not fully cured. See the Kali 6.4 Difference section below.

Ubuntu 22.04


Again using journalctl to observe the wake from suspend, lynis was not visible in the log, and nothing else obvious stood out. What was needed was a tool like systemd-analyze blame, but for login.

Linux provides just such a command:

sudo /usr/bin/systemd-analyze critical-chain

Running the above command results in the following output:

The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.

graphical.target @1min 29.272s
└─multi-user.target @1min 29.272s
 └─mysql.service @46.746s +42.525s
  └─network.target @46.736s
   └─NetworkManager.service @38.972s +7.762s
    └─dbus.service @38.967s
     └─basic.target @38.781s
      └─sockets.target @38.780s
       └─snapd.socket @38.777s +2ms
        └─sysinit.target @38.649s
         └─systemd-timesyncd.service @38.302s +347ms
          └─systemd-tmpfiles-setup.service @37.328s +963ms
           └─systemd-journal-flush.service @6.108s +31.217s
            └─systemd-journald.service @5.603s +503ms
             └─systemd-journald.socket @5.588s
              └─-.mount @5.557s
               └─-.slice @5.557s

The long start-up time for mysql.service is unusual, as the database is small, well under 1GB. A similarly long delay for NetworkManager.service is also unusual. Trying to enter the database using mysql-workbench-community failed. Many hours of online research, spread over a couple of weeks, went into solving this login-to-graphical-target delay.

During this time I had been working with fwbuilder and had added iptables-persistent, which restores the last saved iptables state on reboot. I had created a tight firewall, and while testing I was watching the firewall fall-throughs. Fall-throughs occur at the bottom of the INPUT, FORWARD, and OUTPUT chains, when nothing earlier traps the packet. Then we have

Chain RULE_VERYLAST (3 references)
num  target  prot opt source     destination
1    LOG     all  --  anywhere   anywhere     LOG flags 0 level 5 prefix "FALLTHRU "
2    DROP    all  --  anywhere   anywhere

Fall-throughs can be watched live via either command:

tail -f /var/log/syslog | grep 'FALLTHRU'
-- OR --
journalctl -f --system -S today | grep 'FALLTHRU'

The tight firewall setting blocked port 3306 in both directions, even for the local host. Flushing the firewall while logged in allowed entry into the databases with mysql-workbench-community.

However, that did not resolve the login-to-session-desktop delay. netfilter-persistent is the cause of the delay: at boot it loads the previously saved tight firewall setting, resulting in a timed-out mysql.service and probably other undiscovered timeouts as well.

Implemented Resolution

The easiest solution is to clear the firewall before logging out of the user account. The best process is to edit the file in the GNOME Display Manager PostSession folder, /etc/gdm3/PostSession/Default, adding a firewall-clearing command. The Default file runs as root at the end of a user’s session. This solution decreased the login-to-session-desktop time from ~144 seconds to ~7-12 seconds.
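As a sketch, the firewall-clearing addition to PostSession/Default could look like the following. The exact contents of the shipped file vary by distribution, so treat this as an illustration of the idea rather than the precise script used:

```shell
#!/bin/sh
# /etc/gdm3/PostSession/Default -- runs as root at the end of a user's
# session. Assumption: the flush lines are added before the final exit.

# Reset default policies to ACCEPT, then flush all rules and delete
# user-defined chains, leaving an open firewall for the next boot.
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
iptables -X

exit 0
```

With this in place, iptables-persistent saves and restores an open state, so mysql.service no longer times out waiting on the network at the next boot.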

The user may clear the firewall with any suitable tool. One possible method is to create an open firewall rule set, i.e.

# Generated by iptables-save v1.8.7 on Sun Sep 10 18:45:54 2023
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Sun Sep 10 18:45:54 2023

Save this to a file, e.g. /etc/iptables/rules.v4, then load it with iptables-restore:

sudo iptables-restore < /etc/iptables/rules.v4
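To confirm the restore took effect (an illustrative check, not a step from the original process), list the rules and policies:

```shell
# All three built-in chains should report 'policy ACCEPT' with no rules:
sudo iptables -L -n -v
```

If any chain still shows a DROP policy or leftover rules, the restore did not run or a later tool re-applied the tight rule set.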

This is not the best solution, as it leaves the firewall wide open. Another drawback is that the user must reinstall the target firewall rule set after reaching the session desktop. However, I did not have time to determine precisely which open ports and addresses were mandatory during login. That will be the topic of updates to this post.

Kali 6.4 Difference

This solution did not work on Kali 6.4, partly because Kali runs the Xfce desktop on lightdm. There is also another interesting difference: the firewall state is kept separate between multi-user mode and the graphical session. The system boots in multi-user mode until the graphical display manager takes over. Tinkering with the firewall through iptables-persistent broke this mechanism of separation.

To resolve the issue, open a virtual terminal outside of the graphical interface while in multi-user mode, and load the open rule set, above, using iptables-restore. Once a user logs in graphically, the target firewall can be loaded with whatever tool the user prefers. In the first case we have the default multi-user firewall rule set; in the second, the default graphical-user firewall rule set.

Lessons Learned

The iptables firewall is a complex, insidious beast whose configuration has significant ramifications for system operation. Your best intentions can quickly lead you astray. Incorrect or improperly applied firewall rules can have hard-to-diagnose effects that are inconvenient at best. At worst, they can brick your system, especially cloud systems that lack virtual or console terminal access. Beware at all times, tread lightly, and if possible always have a way out…
