SSD/NVMe Tweaks (TRIM/Discard)

Note: within the examples in the following sections, /dev/sda refers to an SSD device while /dev/nvme0n1 refers to an NVMe device.

Discarding Unused Blocks

You can perform or schedule discard operations on block devices that support them. Block discard operations discard blocks that are no longer in use by a mounted file system. They are useful on:

  • Solid-state drives (SSDs)
  • Thinly-provisioned storage

You can run discard operations using different methods:

  • Batch discard: run explicitly by the user; discards all unused blocks in the selected file systems.
  • Online discard: specified at mount time; runs in real time without user intervention and discards only blocks that are transitioning from used to free.
  • Periodic discard: batch operations run regularly by a systemd service.

All types are supported by the XFS and ext4 file systems and by VDO.

Recommendations: Red Hat recommends that you use batch or periodic discard. Use online discard only if:

  • the system's workload is such that batch discard is not feasible, or
  • online discard operations are necessary to maintain performance.
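
As a quick orientation, the three methods map onto the commands below; each one is covered in detail in the sections that follow (the device and mount point names are only placeholders).

sudo fstrim -av                           # batch discard: one-off trim of all mounted filesystems
sudo mount -o remount,discard /           # online discard: the "discard" mount option (see the fstab section)
sudo systemctl enable --now fstrim.timer  # periodic discard: systemd timer, weekly by default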

Test Whether a Device Supports TRIM

To test whether the SSD device supports the TRIM/Discard option, you can use either of the following commands.

sudo hdparm -I /dev/sda | grep TRIM
* Data Set Management TRIM supported (limit 8 blocks)
sudo smartctl --all /dev/sda | grep TRIM
TRIM Command:     Available

To test whether an NVMe device supports TRIM (physical discard), the output of the following command must be greater than 0. This approach also applies to SSD devices.

cat /sys/block/nvme0n1/queue/discard_max_bytes
cat /sys/block/sda/queue/discard_max_bytes
grep . /sys/block/sd*/queue/discard_*   # Inspect all devices

In addition, if the DISC-GRAN and DISC-MAX columns in the output of lsblk -D have non-zero values, the relevant device supports TRIM.

lsblk -dD
NAME    DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda            0        4K       2G         0   # SSD
sdb            0        0B       0B         0   # HDD
sr0            0        0B       0B         0   # DVD ROM
nvme0n1        0      512B       2T         0   # NVMe
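
For scripting, the same check can be wrapped in a small test. A minimal sketch, assuming the device name (here nvme0n1) is substituted as needed:

dev=nvme0n1                     # example device name; adjust to your device
if [ "$(cat /sys/block/$dev/queue/discard_max_bytes)" -gt 0 ]; then
    echo "$dev supports TRIM/discard"
else
    echo "$dev does not support TRIM/discard"
fi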

Get Full SSD/NVMe/HDD Device Info

For SSD/HDD devices use:

smartctl -x /dev/sda            # -x, --xall; -a, --all
hdparm -I /dev/sda              # doesn't support NVMe

For NVMe devices use:

smartctl -x /dev/nvme0n1        # -x, --xall; -a, --all
nvme smart-log -H /dev/nvme0n1  # apt install nvme-cli
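
To survey all detected drives at once, the health summary can be looped over the output of smartctl --scan; this loop is a sketch, not part of the original article:

sudo smartctl --scan | awk '{print $1}' | while read -r dev; do
    echo "== $dev =="
    sudo smartctl -H "$dev"     # -H, --health: overall health self-assessment only
done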

[Batch Discard] TRIM from CLI

sudo fstrim -av             # trim/discard all mounted devices, verbose output
sudo fstrim -v /            # trim/discard the root fs, verbose output
sudo fstrim /mount-point    # trim/discard a device mounted at a specific mount point
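
Recent versions of util-linux also provide a dry-run mode, useful for checking what would be trimmed without touching the device:

sudo fstrim -nv /           # -n, --dry-run: do everything except the actual trim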

[Periodic Discard] Set the TRIM Job to Daily

sudo systemctl enable fstrim.timer
sudo mkdir /etc/systemd/system/fstrim.timer.d
sudo nano /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=daily
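
Equivalently, systemctl edit creates the same drop-in directory and override file for you:

sudo systemctl edit fstrim.timer    # opens an editor on the override.conf drop-in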

Reboot the system or do a daemon-reload, then check the updated value.

sudo systemctl daemon-reload
systemctl cat fstrim.timer
# /lib/systemd/system/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=daily

sudo systemctl status fstrim.service
sudo systemctl status fstrim.timer
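
The timer's next and last trigger times can also be checked with:

systemctl list-timers fstrim.timer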

Inspect the logs.

journalctl -u fstrim.service
# Output
Aug 25 01:18:25 example.com systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab...
Aug 25 01:18:56 example.com fstrim[2387712]: /mnt/500GB-HT: 108.2 GiB (116132655104 bytes) trimmed on /dev/disk/by-uuid/b6ec78e6-bf2e-4cf0-bb1e-0042645af13b
Aug 25 01:18:56 example.com fstrim[2387712]: /mnt/2TB-WD-2: 149.6 GiB (160665714688 bytes) trimmed on /dev/disk/by-uuid/5d0b4c70-8936-41ef-8ac5-eba6f3142d9d
Aug 25 01:18:56 example.com fstrim[2387712]: /mnt/2TB-WD-1: 36.9 GiB (39626182656 bytes) trimmed on /dev/disk/by-uuid/9aa5e7bb-a0ca-463c-856b-86a3b9b4944a
Aug 25 01:18:56 example.com fstrim[2387712]: /: 0 B (0 bytes) trimmed on /dev/disk/by-uuid/09e7c8ed-fb55-4a44-8be4-18b1696fc714
Aug 25 01:18:56 example.com systemd[1]: fstrim.service: Deactivated successfully.
Aug 25 01:18:56 example.com systemd[1]: Finished Discard unused blocks on filesystems from /etc/fstab.
Aug 25 01:18:56 example.com systemd[1]: fstrim.service: Consumed 1.213s CPU time.

Undo the change if needed.

sudo rm -v /etc/systemd/system/fstrim.timer.d/override.conf
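
Then reload systemd so the default weekly schedule from the packaged unit applies again.

sudo systemctl daemon-reload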

[Online Discard] Enable TRIM in /etc/fstab

This approach is not recommended by most manuals.

sudo nano /etc/fstab
#/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 defaults 0 0
#/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 discard,defaults 0 0
#/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 discard,async,noatime,nodiratime,errors=remount-ro 0 1
/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 discard,errors=remount-ro 0 1
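
After editing /etc/fstab, the new option can be applied and verified without a reboot; a small sketch, assuming the root filesystem:

sudo mount -o remount /     # re-mount with the options from /etc/fstab
findmnt -no OPTIONS /       # "discard" should now appear in the list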

Warning: Users need to be certain that their block device supports TRIM before attempting to use the discard mount option. Otherwise data loss can occur!

ProxmoxVE Node

Batch discard of the guests

For unprivileged Linux containers use the following command on the ProxmoxVE host side.

pct fstrim 151              # Where 151 is the ID of the container

For privileged containers, fstrim can also be executed from the guest's side. For virtual machines, enable the Discard and SSD emulation options in the storage device configuration, use VirtIO SCSI as the controller, and trim from the guest side. Alternatively, one of the following commands can be used on the ProxmoxVE host side.

qm guest cmd 251 fstrim     # Where 251 is the ID of the VM
qm agent 251 fstrim         # Where 251 is the ID of the VM
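
Both commands rely on the QEMU guest agent running inside the VM; whether it responds can be checked first (251 is the same example VM ID):

qm agent 251 ping           # succeeds only if the guest agent responds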

Notes about the PVE Node (the PVE host)

  1. Schedule fstrim to run weekly on PVE hosts.
  2. Use the VirtIO SCSI driver for VMs when possible, enabling the discard option for each virtual disk.
  3. Schedule TRIM (e.g. fstrim in Linux) to run weekly in guest VMs.
  4. Schedule fstrim to run weekly in PVE containers (see the sketch below).
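
A minimal sketch of the weekly container trim from item 4, assuming a cron-based scheduler on the PVE host (the file name is hypothetical):

# /etc/cron.weekly/pct-fstrim  (hypothetical path; any weekly scheduler works)
#!/bin/sh
# Trim every container known to pct; awk skips the header line of `pct list`
for id in $(pct list | awk 'NR>1 {print $1}'); do
    pct fstrim "$id"
done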

Tweak the APM Level of SSD

Most Linux distributions use the Linux kernel's "Advanced Power Management (APM)" API to handle configuration, optimize performance, and ensure stability of storage devices. These devices are assigned an APM value between 1 and 255 to control their power management thresholds. A value of 254 indicates best performance, while a value of 1 indicates better power management. Assigning a value of 255 will disable APM altogether. By default, SSDs are assigned an APM of 254 when the system is running on external power. In battery mode, the APM level is set to 128, reducing the read and write speeds of SSDs. This section explains how to increase the SSD APM level to 254 when your Linux laptop is running in battery mode.

sudo hdparm -B254 /dev/sda      # -B: set the APM level (254 = maximum performance)
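
Note that a level set this way does not survive a reboot. On Debian-based systems it can be persisted via /etc/hdparm.conf; a sketch, assuming the Debian hdparm package is installed:

/dev/sda {
    apm = 254           # APM level on AC power
    apm_battery = 254   # APM level on battery
}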

Get the current APM value.

sudo hdparm -B /dev/sda

Test the performance.

sudo hdparm -tT /dev/sda
