SSD/NVMe Tweaks (TRIM/Discard)

Note: within the examples in the following sections, /dev/sda refers to an SSD device while /dev/nvme0n1 refers to an NVMe device.
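
To check which of your block devices are actually SSDs or NVMe drives, the rotational flag reported by lsblk is a quick indicator; a small sketch (a ROTA value of 0 means non-rotating media, i.e. SSD or NVMe):

lsblk -d -o NAME,ROTA,TRAN,MODEL    # ROTA=0 -> non-rotational; TRAN shows sata/nvme/usb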

Get Full SSD/NVMe Device Info

For SSD devices use:

smartctl -x /dev/sda            # -x, --xall; -a, --all
hdparm -I /dev/sda              # hdparm does not support NVMe devices

For NVMe devices use:

smartctl -x /dev/nvme0n1        # -x, --xall; -a, --all
nvme smart-log -H /dev/nvme0n1  # apt install nvme-cli
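
Besides the SMART data, the nvme-cli tool (see the references below) can also enumerate the NVMe devices present and dump the full controller identify data; a brief sketch:

nvme list                       # enumerate all NVMe devices and namespaces
nvme id-ctrl /dev/nvme0n1       # full controller identify data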

References

Unix and Linux: How to evaluate the wear level of a NVMe SSD? (https://unix.stackexchange.com/q/652623)
NVMexpress.org: NVMe Current Specifications (https://nvmexpress.org/)
Linux NVMe at GitHub: nvme-cli (https://github.com/linux-nvme/nvme-cli)

Block Discard Operations

[Batch discard] TRIM from CLI

fstrim command

sudo fstrim -av             # trim/discard all mounted devices, verbose output
sudo fstrim -v /            # trim/discard the root fs, verbose output
sudo fstrim /mount-point    # trim/discard a device mounted at a specific mount point

ProxmoxVE Node

For unprivileged Linux containers, use the following command on the ProxmoxVE host side.

pct fstrim 151              # Where 151 is the ID of the container

For privileged containers, fstrim can also be executed from the guest's side. For virtual machines, enable the Discard and SSD emulation options in the storage device configuration, use VirtIO SCSI as the controller, and trim from the guest side (a CLI example is shown below). Alternatively, we can use one of the following commands on the ProxmoxVE host side.

qm guest cmd 251 fstrim     # Where 251 is the ID of the virtual machine
qm agent 251 fstrim         # Same as above; both require the QEMU guest agent in the VM
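
The Discard and SSD emulation options mentioned above can also be enabled from the host's CLI instead of the web UI. A minimal sketch, assuming VM 251 has a single SCSI disk named vm-251-disk-0 on the local-lvm storage (adjust to your setup):

qm set 251 --scsihw virtio-scsi-pci                          # use the VirtIO SCSI controller
qm set 251 --scsi0 local-lvm:vm-251-disk-0,discard=on,ssd=1  # enable Discard and SSD emulation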

References

Red Hat Docs: RHEL 9 > Managing file systems > Chapter 37. Discarding unused blocks (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_file_systems/discarding-unused-blocks_managing-file-systems)

[Periodic Discard] Set the automatic TRIM job to daily

sudo systemctl enable fstrim.timer
sudo mkdir /etc/systemd/system/fstrim.timer.d
sudo nano /etc/systemd/system/fstrim.timer.d/override.conf

Put the following content into the override file. The empty OnCalendar= line clears the packaged weekly schedule before the daily one is applied.

[Timer]
OnCalendar=
OnCalendar=daily
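
If you prefer to script this instead of editing the file interactively, the same override can be written in one go; a sketch of an equivalent approach:

sudo mkdir -p /etc/systemd/system/fstrim.timer.d
printf '[Timer]\nOnCalendar=\nOnCalendar=daily\n' | sudo tee /etc/systemd/system/fstrim.timer.d/override.conf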

Reboot the system or do a daemon-reload, then check the updated value.

sudo systemctl daemon-reload
systemctl cat fstrim.timer
# /lib/systemd/system/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=daily

Check the status of the service and the timer.

sudo systemctl status fstrim.service
sudo systemctl status fstrim.timer
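
To confirm when the trimming will actually run next, the timer can also be listed:

systemctl list-timers fstrim.timer    # shows the NEXT and LAST trigger times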

Undo the change if you need to.

sudo rm -v /etc/systemd/system/fstrim.timer.d/override.conf

References

[Online Discard] Enable TRIM in fstab (not recommended)

TRIM (the Trim command lets an OS tell an SSD which blocks are no longer in use, so they can be cleared internally).

sudo nano /etc/fstab
#/dev/disk/by-uuid/09e7c8ed-fb55-4a44-8be4-18b1696fc714 / ext4 defaults 0 0
/dev/disk/by-uuid/09e7c8ed-fb55-4a44-8be4-18b1696fc714 / ext4 discard,async,noatime,nodiratime,errors=remount-ro 0 1
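
The new options take effect on the next mount; to apply them without rebooting and verify that discard is active, something like the following should work:

sudo mount -o remount /      # re-read the options for / from /etc/fstab
findmnt -no OPTIONS /        # the active mount options should now include "discard"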

Warning: Users need to be certain that their SSD supports TRIM before attempting to use it. Data loss can occur otherwise! To test whether the SSD device supports the TRIM/Discard option, you can use either of the following commands.

sudo hdparm -I /dev/sda | grep TRIM
* Data Set Management TRIM supported (limit 8 blocks)
sudo smartctl --all /dev/sda | grep TRIM
TRIM Command:     Available

To test whether an NVMe device supports TRIM (the physical discard option), the output of the following command must be greater than 0.

cat /sys/block/nvme0n1/queue/discard_max_bytes
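
Alternatively, lsblk can summarize the discard capabilities of all block devices at once; non-zero DISC-GRAN and DISC-MAX values mean the device and its driver support TRIM/Discard:

lsblk --discard    # or the short form: lsblk -D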

References

Tweak the APM value of an SSD

Most Linux distributions use the Linux kernel's Advanced Power Management (APM) API to handle configuration, optimize performance, and ensure stability of storage devices. These devices are assigned an APM value between 1 and 255 to control their power management thresholds. A value of 254 indicates best performance, while a value of 1 indicates better power management. Assigning a value of 255 disables APM altogether. By default, SSDs are assigned an APM of 254 when the system is running on external power. In battery mode, the APM level is set to 128, reducing the read and write speeds of SSDs. This section explains how to raise the SSD APM level to 254 when your Linux laptop is running in battery mode.

sudo hdparm -B254 /dev/sda      # set the APM level to 254 (maximum performance)

Get the current APM value.

sudo hdparm -B /dev/sda         # read the current APM level

Test the performance.

sudo hdparm -tT /dev/sda        # -T cached reads, -t buffered device reads
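
The value set with hdparm -B is not persistent across reboots. On Debian/Ubuntu-based systems, one way to persist it is /etc/hdparm.conf, which the hdparm package evaluates at boot; a sketch, assuming that configuration format is available on your distribution:

/dev/sda {
    apm = 254
    apm_battery = 254    # APM level while on battery, if supported by your hdparm version
}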

References

See also