SSD/NVMe Tweaks (TRIM/Discard)
Note: within the examples in the following sections, /dev/sda refers to an SSD device, while /dev/nvme0n1 refers to an NVMe device.
Discarding Unused Blocks
You can perform or schedule discard operations on block devices that support them. Block discard operations discard blocks that are no longer in use by a mounted file system. They are useful on:
- Solid-state drives (SSDs)
- Thinly-provisioned storage
You can run discard operations using different methods:
- Batch discard: run explicitly by the user; discards all unused blocks in the selected file systems.
- Online discard: specified at mount time; runs in real time without user intervention and discards only the blocks that are transitioning from used to free.
- Periodic discard: batch operations that are run regularly by a systemd service.
All types are supported by the XFS and ext4 file systems and by VDO.
Recommendations. Red Hat recommends that you use batch or periodic discard. Use online discard only if:
- the system’s workload is such that batch discard is not feasible, or
- online discard operations are necessary to maintain performance.
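A quick way to see which of these methods a system is currently using, as a minimal sketch (the timer and unit names below are the usual systemd defaults and may differ per distribution):
findmnt -O discard                  # filesystems mounted with the online discard option
systemctl is-enabled fstrim.timer   # whether the periodic discard timer is enabled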
References
- Red Hat Docs: Red Hat > [8] > Managing storage devices > Chapter 8. Discarding unused blocks
- Red Hat Docs: Red Hat > [9] > Managing storage devices > Chapter 10. Enabling multipathing on NVMe devices
- Red Hat Docs: Red Hat > [9] > Managing storage devices > Chapter 37. Discarding unused blocks
- Red Hat Docs: Red Hat > [9] > Deduplicating and compressing LVs > Chapter 4. Trim options on an LVM-VDO volume
- Arch Linux Wiki: Solid state drive
- Unix and Linux: Fstrim "the discard operation is not supported" with ext4 on LVM
- Red Hat Customer Portal: Does fstrim work on a LVM filesystem that has a snapshot or on a snapshot itself?
- Red Hat Bugzilla: Bug 1390148 – Discard operation is not supported on SSD disk
- Red Hat Bugzilla: Bug 1400824 – Fstrim/Discard operation is not supported for LVMRAID1 in RHEL7.3
Test Whether a Device Supports TRIM
To test whether an SSD device supports the TRIM/Discard option, you can use either of the following commands.
sudo hdparm -I /dev/sda | grep TRIM
* Data Set Management TRIM supported (limit 8 blocks)
sudo smartctl --all /dev/sda | grep TRIM
TRIM Command: Available
To test whether an NVMe device supports TRIM (the physical discard option), check that the output of the following command is greater than 0. This approach is also applicable to SSD devices.
cat /sys/block/nvme0n1/queue/discard_max_bytes
cat /sys/block/sda/queue/discard_max_bytes
grep . /sys/block/sd*/queue/discard_* # Inspect all devices
In addition, if the DISC-GRAN and DISC-MAX columns in the output of lsblk -D have non-zero values, the relevant device supports TRIM.
lsblk -dD
NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda 0 4K 2G 0 # SSD
sdb 0 0B 0B 0 # HDD
sr0 0 0B 0B 0 # DVD ROM
nvme0n1 0 512B 2T 0 # NVMe
Get Full SSD/NVMe/HDD Device Info
For SSD/HDD devices use:
smartctl -x /dev/sda # -x, --xall; -a, --all
hdparm -I /dev/sda # doesn't support NVMe
For NVMe devices use:
smartctl -x /dev/nvme0n1 # -x, --xall; -a, --all
nvme smart-log -H /dev/nvme0n1 # apt install nvme-cli
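To pull only the wear-related fields out of the NVMe health log, a small sketch; the field names below match the output of recent nvme-cli versions and may differ on yours:
nvme smart-log /dev/nvme0n1 | grep -Ei 'percentage_used|data_units_written|media_errors'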
References
- Chia Decentral at YouTube: Reading NVMe SMART in Windows and Linux
- NVMexpress.org: NVMe Current Specifications
- NVMexpress.org: NVMe Command Line Interface (NVMe-CLI)
- Linux NVMe at GitHub: NVMe-CLI
- SmartMonTools: NVMe support
- Percona: Using NVMe Command Line Tools to Check NVMe Flash Health
- Unix and Linux: How to evaluate the wear level of a NVMe SSD?
[Batch discard] TRIM from CLI
sudo fstrim -av # trim/discard all mounted devices, verbose output
sudo fstrim -v / # trim/discard the root fs, verbose output
sudo fstrim /mount-point # trim/discard a device mounted at a specific mount point
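For a whole block device that is not mounted (for example before re-provisioning it), blkdiscard from util-linux discards every block on the device. This is a destructive sketch for illustration only: /dev/sdX is a placeholder and all data on the target device is lost.
sudo blkdiscard /dev/sdX # WARNING: discards (wipes) the entire device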
[Periodic Discard] Set the TRIM Job to Daily
sudo systemctl enable fstrim.timer
sudo mkdir /etc/systemd/system/fstrim.timer.d
sudo nano /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=daily
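The same drop-in can also be created interactively with systemctl edit, which (on a reasonably recent systemd) opens an editor and writes the override to the same path:
sudo systemctl edit fstrim.timer # creates /etc/systemd/system/fstrim.timer.d/override.conf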
Reboot the system or run a daemon-reload, then check the updated value.
sudo systemctl daemon-reload
systemctl cat fstrim.timer
# /lib/systemd/system/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container
[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true
[Install]
WantedBy=timers.target
# /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=daily
sudo systemctl status fstrim.service
sudo systemctl status fstrim.timer
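To see when the timer last fired and when the next run is scheduled:
systemctl list-timers fstrim.timer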
Inspect the logs.
journalctl -u fstrim.service
Aug 25 01:18:25 example.com systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab...
Aug 25 01:18:56 example.com fstrim[2387712]: /mnt/500GB-HT: 108.2 GiB (116132655104 bytes) trimmed on /dev/disk/by-uuid/b6ec78e6-bf2e-4cf0-bb1e-0042645af13b
Aug 25 01:18:56 example.com fstrim[2387712]: /mnt/2TB-WD-2: 149.6 GiB (160665714688 bytes) trimmed on /dev/disk/by-uuid/5d0b4c70-8936-41ef-8ac5-eba6f3142d9d
Aug 25 01:18:56 example.com fstrim[2387712]: /mnt/2TB-WD-1: 36.9 GiB (39626182656 bytes) trimmed on /dev/disk/by-uuid/9aa5e7bb-a0ca-463c-856b-86a3b9b4944a
Aug 25 01:18:56 example.com fstrim[2387712]: /: 0 B (0 bytes) trimmed on /dev/disk/by-uuid/09e7c8ed-fb55-4a44-8be4-18b1696fc714
Aug 25 01:18:56 example.com systemd[1]: fstrim.service: Deactivated successfully.
Aug 25 01:18:56 example.com systemd[1]: Finished Discard unused blocks on filesystems from /etc/fstab.
Aug 25 01:18:56 example.com systemd[1]: fstrim.service: Consumed 1.213s CPU time.
Undo the change if needed.
sudo rm -v /etc/systemd/system/fstrim.timer.d/override.conf
References
- Easy Linux Tips Project: SSD – How to optimize your Solid State Drive for Linux Mint and Ubuntu
[Online Discard] Enable TRIM in /etc/fstab
This approach is not recommended by most manuals.
sudo nano /etc/fstab
#/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 defaults 0 0
#/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 discard,defaults 0 0
#/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 discard,async,noatime,nodiratime,errors=remount-ro 0 1
/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 discard,errors=remount-ro 0 1
Warning: Users need to be certain that their block device supports TRIM before attempting to use the discard mount option. Otherwise data loss can occur!
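After editing /etc/fstab the new option can be applied without a reboot by remounting the filesystem and verifying it took effect; a minimal sketch for the root filesystem:
sudo mount -o remount / # re-reads the options for / from /etc/fstab
findmnt -no OPTIONS /   # the list should now contain 'discard'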
ProxmoxVE Node
Batch discard of the guests
For unprivileged Linux containers use the following command on the ProxmoxVE host side.
pct fstrim 151 # Where 151 is the ID of the container
For privileged containers fstrim can also be executed from the guest's side. For the virtual machines, enable the discard and SSD emulation options in the storage device configuration, use VirtIO SCSI as the controller, and trim on the guest side. We can also use one of the following commands on the ProxmoxVE host side.
qm guest cmd 251 fstrim # Where 251 is the ID of the virtual machine
qm agent 251 fstrim # Where 251 is the ID of the virtual machine
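To batch-trim every container on a node rather than a single ID, a small loop sketch (it assumes the default pct list output, where the first column after the header row is the container ID):
for id in $(pct list | awk 'NR > 1 {print $1}'); do pct fstrim "$id"; done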
Notes about the PVE Node (the PVE host)
- Schedule fstrim to run weekly on PVE hosts (see the sketch after this list).
- Use the VirtIO SCSI driver for VMs when possible, enabling the discard option on each virtual disk.
- Schedule TRIM (e.g. fstrim in Linux) to run weekly in guest VMs.
- Schedule fstrim to run weekly for PVE containers (from the host for unprivileged containers, as shown above).
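On the PVE host itself the stock weekly timer is usually sufficient; note that the guest-side fstrim.service is skipped inside containers because of its ConditionVirtualization=!container setting, which is why trimming containers from the host with pct fstrim is the more reliable route.
systemctl enable --now fstrim.timer # weekly by default on the PVE host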
References
- Proxmox Forum: TRIM SSDs
Tweak the APM Level of SSD
Most Linux distributions use the Linux kernel’s “Advanced Power Management (APM)” API to handle configuration, optimize performance, and ensure stability of storage devices. These devices are assigned an APM value between 1 and 255 to control their power management thresholds. A value of 254 indicates best performance, while a value of 1 indicates better power management. Assigning a value of 255 will disable APM altogether. By default, SSDs are assigned an APM of 254 when the system is running on external power. In battery mode, the APM level is set to 128, which reduces the read and write speeds of SSDs. This section explains how to raise the SSD APM level to 254 when your Linux laptop is running in battery mode.
sudo hdparm -B254 /dev/sda
Get the current APM value.
sudo hdparm -B /dev/sda
Test the performance.
sudo hdparm -tT /dev/sda
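Note that hdparm -B only changes the runtime value, so the setting is typically lost after a reboot or a suspend/resume cycle. One way to make it persistent is a udev rule; the rule below is a sketch with an assumed device name and hdparm path, not an official recipe (tools such as TLP, referenced below, can manage the same setting):
# /etc/udev/rules.d/90-ssd-apm.rules (sketch)
ACTION=="add|change", KERNEL=="sda", RUN+="/usr/sbin/hdparm -B 254 /dev/sda"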
References
- Ask Ubuntu: Ubuntu SSD – Was fast, is now extremely slow
- LinuxHint: How to Improve SSD Performance in Linux Laptops
- Linux Magazine: Tune Your Hard Disk with hdparm
- Linrunner: TLP FAQ: Disk Drives
See also
- Linux Swap and Swapfile
- Preload Tool for Better System Performance
- Arch Linux Wiki: Improving performance (Need to read carefully!)
- Arch Linux Wiki: Ext4 (Need to read carefully!)
- Website for Students: Improve Nginx Cache Performance with tmpfs on Ubuntu
- Ask Ubuntu: How do I optimize the OS for SSDs?
- How-To Geek: How to Tweak Your SSD in Ubuntu for Better Performance
- How-To Geek: What Is the Linux fstab File, and How Does It Work?
- SSD Sphere: Why you should not worry about SSD’s TBW and MTBF?
- GBMB.org: PB to TB Conversion