Playbook reference

How StackPatch turns a CVE finding into the exact command to run

Five playbook classes cover ~95% of the Ubuntu USN feed today. The matcher decides which class applies based on the affected package and the fixed-version shape, then emits the recommended action with the literal shell command. This page is the authoritative reference — it doubles as our customer onboarding doc.

apt_upgrade — the most common case (urgency: now)

A standard package vulnerability where the patched version is in the regular apt repos for your release codename.

How the matcher decides

The matcher reads the USN's release_packages[<your_codename>] list, looks up your installed version via dpkg-query, then runs dpkg --compare-versions <installed> lt <fixed>. If the installed version is strictly less than the fixed one, the finding fires.
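A minimal sketch of that comparison from the shell (the package name and fixed version here are illustrative, not pulled from a live finding):

# Illustrative values; substitute the package and fixed version from the USN
pkg=openssh-server
fixed='1:9.6p1-3ubuntu13.16'
installed=$(dpkg-query -W -f='${Version}' "$pkg")
if dpkg --compare-versions "$installed" lt "$fixed"; then
  echo "finding fires: $pkg $installed < $fixed"
fi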

Expected downtime

Typically zero. Most apt upgrades restart their service automatically (sshd, nginx, postgres, etc.) and existing connections survive.
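One way to confirm the restart actually happened, assuming a systemd-managed service (sshd's unit on Ubuntu is ssh):

# A start timestamp later than the upgrade means the daemon restarted onto the new binary
systemctl show -p ExecMainStartTimestamp ssh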

Recommended command

sudo apt-get update
sudo apt-get install --only-upgrade -y <package-name>

Real example from the MSS VPS audit log

USN-8222-1 OpenSSH 9.6p1-3ubuntu13.15 → 13.16 on Ubuntu noble. We applied this on the MSS VPS this morning; existing SSH session survived; new connections used the patched binary immediately.

When this fails / edge cases

If apt list --upgradable doesn't show the package, the patched version isn't in your enabled repos yet. Wait for the security mirror to sync (usually <1 hour after a USN drops), then retry.
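A quick way to check whether your mirror has the fix yet (<package-name> is the placeholder from the command above):

sudo apt-get update
apt-cache policy <package-name>
# Compare the 'Candidate:' line against the fixed version from the USN;
# an old Candidate means the mirror hasn't synced yet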

apt_upgrade_esm — Ubuntu Pro / ESM required (urgency: soon)

The fixed version carries the ~esmN suffix (e.g. 0.42.0-2ubuntu0.1~esm1). An Ubuntu Pro / ESM subscription is required to access the patched package.

How the matcher decides

The matcher detects ~esm in the fixed-version string and routes to apt_upgrade_esm instead of plain apt_upgrade. Without Pro, a plain apt-get install --only-upgrade reports 'already the newest version' even though a fix exists in the ESM repo.
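The routing itself reduces to a string test; a minimal sketch using the python-wheel version from the example below:

fixed='0.42.0-2ubuntu0.1~esm1'
case "$fixed" in
  *~esm*) playbook=apt_upgrade_esm ;;
  *)      playbook=apt_upgrade ;;
esac
echo "$playbook"   # prints: apt_upgrade_esm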

Expected downtime

Zero for the upgrade itself. The pro attach step is one-time per server.

Recommended command

# Free for personal + small-team (up to 5 machines)
# Sign up: https://ubuntu.com/pro
sudo pro attach <your-token>
sudo apt-get update
sudo apt-get install --only-upgrade -y <package-name>=<fixed-version>

Real example from the MSS VPS audit log

USN-8221-1 python-wheel 0.42.0-2 → 0.42.0-2ubuntu0.1~esm1. The MSS VPS quickscan today flagged this as apt_upgrade_esm. Recommended action routes to ubuntu.com/pro instead of plain apt-get.

When this fails / edge cases

If you cannot enable Pro (commercial use beyond 5 machines), this finding stays open until the patch reaches the standard repos (sometimes never for old packages). Document the risk and consider a package-level mitigation (e.g., for python-wheel, only install wheels from trusted sources), as sketched below.
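For the python-wheel case specifically, one option is hash-pinned installs, so pip rejects any artifact you haven't vetted. This is a generic pip feature, not a StackPatch action:

# Every requirement must carry a --hash entry in requirements.txt for this to pass
pip install --require-hashes -r requirements.txt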

kernel_reboot — apt then reboot (urgency: next-window)

Affected package starts with linux-image, linux-modules, or linux-headers. The new kernel installs to disk via apt but the running kernel stays vulnerable until you reboot.

How the matcher decides

The matcher checks whether the package name matches the kernel pattern above. If it does, it emits the kernel_reboot kind and includes 'sudo reboot' in the command. The inventory's reboot_pending field flips to true after the install.
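The pattern check is a prefix match on the package name; a sketch (the package value is an example):

pkg='linux-image-generic'
case "$pkg" in
  linux-image*|linux-modules*|linux-headers*) echo kernel_reboot ;;
  *) echo apt_upgrade ;;
esac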

Expected downtime

~30-60 seconds while the box reboots. Existing TCP connections drop. systemd services restart automatically. Containers self-restart. Schedule for a maintenance window if you have user traffic.

Recommended command

sudo apt-get update
sudo apt-get install --only-upgrade -y linux-image-generic
sudo reboot

Real example from the MSS VPS audit log

CVE-2026-31431 (Copy Fail) is the canonical case. Our VPS has the patched kernel installed but is still pending a reboot — kernel-patch-watcher.sh fires hourly, comparing the running kernel against the newest installed one.
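The heart of that check fits in three lines. This is an assumption about what kernel-patch-watcher.sh does internally, not its verbatim source:

running=$(uname -r)
newest=$(ls /boot/vmlinuz-* | sed 's|.*/vmlinuz-||' | sort -V | tail -1)
[ "$running" = "$newest" ] || echo "reboot pending: running $running, installed $newest"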

When this fails / edge cases

If unattended-upgrades installed the kernel automatically but you didn't reboot, your running kernel stays vulnerable indefinitely. The kernel-patch-watcher cron we ship for free flags this case via Telegram so you get a reminder.

modprobe_blacklist — block the vulnerable module before reboot (urgency: now)

Kernel module CVE where the entry path is loadable but unloaded by default. You can block the module without rebooting; the running kernel stays vulnerable but the exploit cannot reach it.

How the matcher decides

Manually authored playbook (V0). For CVE-2026-31431 the matcher flags any kernel older than the 2026-05-01 cutoff and emits the modprobe blacklist as the 'now' action. The kernel_reboot playbook is the 'eventually' action — chain them.

Expected downtime

Zero. modprobe reads the mitigation file on the next load attempt, and the install /bin/false directive makes that attempt fail, so the module never loads again. No service restarts.

Recommended command

echo -e 'blacklist <module-name>\ninstall <module-name> /bin/false' \
  | sudo tee /etc/modprobe.d/cve-YYYY-NNNNN.conf
sudo rmmod <module-name> 2>/dev/null || true
# Verify
sudo modprobe <module-name> 2>&1 | grep -q 'install command' && echo OK || echo FAIL

Real example from the MSS VPS audit log

Our /etc/modprobe.d/cve-2026-31431-copyfail.conf blocks algif_aead. modprobe algif_aead now fails because the install directive runs /bin/false. Verified in our public audit log — the file appears under "Active mitigations".

When this fails / edge cases

If something on your box has the module already loaded AND in use (not common for niche crypto modules like algif_aead), rmmod fails. Workaround: schedule a kernel_reboot to drop the module fresh, OR keep the blacklist + accept the in-memory exposure until reboot.
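You can check ahead of time whether rmmod will succeed; lsmod's third column is the module's use count (algif_aead is the Copy Fail module from the example above):

lsmod | awk '$1 == "algif_aead" {print "loaded, use count:", $3}'
# No output: not loaded, the blacklist alone covers you
# Use count 0: rmmod succeeds
# Use count > 0: rmmod fails; fall back to kernel_reboot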

mitigated — already protected (urgency: informational)

A new finding lands but our matcher detects an active mitigation that already covers the CVE (e.g., /etc/modprobe.d/cve-NNN.conf is present and lists the relevant directive).

How the matcher decides

The matcher reads inventory.mitigations from the snapshot. For each CVE on the finding, it checks whether any mitigation's cve field matches by substring. If one does, the finding kind flips to usn_already_mitigated and the urgency becomes 'informational'.
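A rough shell stand-in for that check, assuming the mitigation filename encodes the CVE as in the modprobe example (the real matcher reads the inventory snapshot, not the filesystem):

cve='cve-2026-31431'
if ls /etc/modprobe.d/ | grep -qi "$cve"; then
  echo "usn_already_mitigated (urgency: informational)"
fi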

Expected downtime

None.

Recommended command

No action required. The mitigation file blocks the exploit path; the finding auto-clears once the patched package is installed and dpkg --compare-versions no longer reports the installed version as older than the fix.

Real example from the MSS VPS audit log

CVE-2026-31431 on our VPS appears in the public audit log as Active mitigation (modprobe blacklist) — the kernel itself is still vulnerable but the exploit path is blocked. Once we reboot to the patched kernel, the row clears entirely.

When this fails / edge cases

If you remove the mitigation file (e.g., during a server rebuild or modprobe.d cleanup), the matcher will re-flag the finding as active with full urgency. This is by design — our state is derived from the inventory, never cached past one hour.

See it running on a real server

The MindSparkStack VPS is StackPatch's customer #0. The audit log shows live playbook output across all five classes — including the Copy Fail mitigation (modprobe_blacklist), the OpenSSH apt_upgrade we ran this morning, and the python-wheel apt_upgrade_esm that's correctly outstanding.

Roadmap — playbook classes coming in V1.5 / V2

  • container_rebuild — Docker image vulnerabilities. Rebuild the affected image with a patched base layer.
  • config_change — Config-only mitigations (e.g., disabling a dangerous CipherSuite, restricting an nginx directive).
  • rpm_upgrade / dnf_upgrade — RHEL / AlmaLinux / Rocky / Amazon Linux family.
  • apk_upgrade — Alpine Linux family.
  • language_runtime_pin — Python / Node / Ruby vulnerabilities at the runtime layer (vs distro package).