SSD/NVMe Tweaks (TRIM/Discard)

Note: within the examples in the following sections, /dev/sda refers to an SSD device while /dev/nvme0n1 refers to an NVMe device.

Discarding Unused Blocks

You can perform or schedule discard operations on block devices that support them. Block discard operations discard blocks that are no longer in use by a mounted file system. They are useful on:

  • Solid-state drives (SSDs)
  • Thinly-provisioned storage

You can run discard operations using different methods; a short example of each is sketched below, after the recommendations:

  • Batch discard: run explicitly by the user; discards all unused blocks in the selected file systems.
  • Online discard: specified at mount time; runs in real time without user intervention and discards only the blocks that are transitioning from used to free.
  • Periodic discard: batch operations that are run regularly by a systemd service.

All types are supported by the XFS and ext4 file systems and by VDO.

Recommendations: Red Hat recommends that you use batch or periodic discard. Use online discard only if:

  • the system’s workload is such that batch discard is not feasible, or
  • online discard operations are necessary to maintain performance.
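
A minimal sketch of the three methods side by side; the mount command is illustrative (it assumes an ext4 filesystem on /dev/sda1) rather than a recommendation.

# Batch discard: trim all mounted filesystems that support it, on demand
sudo fstrim --all --verbose

# Online discard: mount with the discard option for real-time TRIM
sudo mount -o discard /dev/sda1 /mnt

# Periodic discard: let the stock systemd timer run fstrim on a schedule
sudo systemctl enable --now fstrim.timer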

References

  • Red Hat Docs: RHEL 9 > Managing storage devices > Chapter 10. Enabling multipathing on NVMe devices (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/enabling-multipathing-on-nvme-devices_managing-storage-devices)
  • Red Hat Docs: RHEL 9 > Managing file systems > Chapter 37. Discarding unused blocks (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_file_systems/discarding-unused-blocks_managing-file-systems)
  • Red Hat Docs: RHEL 9 > Deduplicating and compressing LVs > Chapter 4. Trim options on an LVM-VDO volume (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/deduplicating_and_compressing_logical_volumes_on_rhel/trim-options-on-an-lvm-vdo-volume_deduplicating-and-compressing-logical-volumes-on-rhel)
  • Arch Linux Wiki: Solid state drive (https://wiki.archlinux.org/title/Solid_state_drive#Continuous_TRIM)
  • Unix and Linux: Fstrim "the discard operation is not supported" with ext4 on LVM (https://unix.stackexchange.com/questions/226445/fstrim-the-discard-operation-is-not-supported-with-ext4-on-lvm)
  • Red Hat Customer Portal: Does fstrim work on a LVM filesystem that has a snapshot or on a snapshot itself? (https://access.redhat.com/solutions/3921661)
  • Red Hat Bugzilla: Bug 1390148 - Discard operation is not supported on SSD disk (https://bugzilla.redhat.com/show_bug.cgi?id=1390148)
  • Red Hat Bugzilla: Bug 1400824 - Fstrim/Discard operation is not supported for LVMRAID1 in RHEL7.3 (https://bugzilla.redhat.com/show_bug.cgi?id=1400824)

Test Whether a Device Supports TRIM

To test whether an SSD device supports the TRIM/Discard option, you can use either of the following commands.

sudo hdparm -I /dev/sda | grep TRIM
* Data Set Management TRIM supported (limit 8 blocks)
sudo smartctl --all /dev/sda | grep TRIM
TRIM Command:     Available

To test whether an NVMe device supports TRIM (the physical discard option), the output of the following command must be greater than 0. This approach is also applicable to SSD devices.

cat /sys/block/nvme0n1/queue/discard_max_bytes
cat /sys/block/sda/queue/discard_max_bytes
grep . /sys/block/sd*/queue/discard_*   # Inspect all devices
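
For a scripted yes/no check, the same sysfs value can be tested numerically. A minimal sketch, assuming the device names used above exist on your system:

# A value of 0 in discard_max_bytes means the device does not support discard
for dev in nvme0n1 sda; do
    max=$(cat /sys/block/$dev/queue/discard_max_bytes)
    [ "$max" -gt 0 ] && echo "$dev: TRIM supported" || echo "$dev: TRIM not supported"
done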

In addition, if the DISC-GRAN and DISC-MAX columns in the output of lsblk -D have non-zero values, the relevant device supports TRIM.

lsblk -dD
NAME    DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda            0        4K       2G         0   # SSD
sdb            0        0B       0B         0   # HDD
sr0            0        0B       0B         0   # DVD ROM
nvme0n1        0      512B       2T         0   # NVMe

Get Full SSD/NVMe/HDD Device Info

For SSD/HDD devices use:

smartctl -x /dev/sda            # -x, --xall; -a, --all
hdparm -I /dev/sda              # doesn't support NVMe

For NVMe devices use:

smartctl -x /dev/nvme0n1        # -x, --xall; -a, --all
nvme smart-log -H /dev/nvme0n1  # apt install nvme-cli
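
If you are not sure which device nodes exist, both toolsets can enumerate them first; these are standard smartmontools and nvme-cli commands:

sudo smartctl --scan            # list devices smartctl can address
sudo nvme list                  # list NVMe namespaces (nvme-cli)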

References

[Batch discard] TRIM from CLI

sudo fstrim -av             # trim/discard all mounted filesystems, verbose output
sudo fstrim -v /            # trim/discard the root fs, verbose output
sudo fstrim /mount-point    # trim/discard the filesystem at a specific mount point
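
fstrim also supports a dry-run flag that does everything apart from the actual trim, which is a safe first test on a new machine:

sudo fstrim -anv            # -n, --dry-run: skip the actual discard operation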

[Periodic Discard] Set the TRIM Job to Daily

sudo systemctl enable fstrim.timer
sudo mkdir /etc/systemd/system/fstrim.timer.d
sudo nano /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=daily
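
The same drop-in can be created without the manual mkdir and nano steps; systemctl's standard edit command opens the override file for you:

sudo systemctl edit fstrim.timer    # creates/edits override.conf for the timer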

Reboot the system, or do a daemon-reload, and check the updated value.

sudo systemctl daemon-reload
systemctl cat fstrim.timer
# /lib/systemd/system/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=daily

Check the status of the service and the timer.

sudo systemctl status fstrim.service
sudo systemctl status fstrim.timer
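
To confirm that the daily schedule is active, list the timer and check the NEXT and LEFT columns:

systemctl list-timers fstrim.timer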

Inspect the logs.

journalctl -u fstrim
Aug 25 01:18:25 example.com systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab...
Aug 25 01:18:56 example.com fstrim[2387712]: /mnt/500GB-HT: 108.2 GiB (116132655104 bytes) trimmed on /dev/disk/by-uuid/b6ec78e6-bf2e-4cf0-bb1e-0042645af13b
Aug 25 01:18:56 example.com fstrim[2387712]: /mnt/2TB-WD-2: 149.6 GiB (160665714688 bytes) trimmed on /dev/disk/by-uuid/5d0b4c70-8936-41ef-8ac5-eba6f3142d9d
Aug 25 01:18:56 example.com fstrim[2387712]: /mnt/2TB-WD-1: 36.9 GiB (39626182656 bytes) trimmed on /dev/disk/by-uuid/9aa5e7bb-a0ca-463c-856b-86a3b9b4944a
Aug 25 01:18:56 example.com fstrim[2387712]: /: 0 B (0 bytes) trimmed on /dev/disk/by-uuid/09e7c8ed-fb55-4a44-8be4-18b1696fc714
Aug 25 01:18:56 example.com systemd[1]: fstrim.service: Deactivated successfully.
Aug 25 01:18:56 example.com systemd[1]: Finished Discard unused blocks on filesystems from /etc/fstab.
Aug 25 01:18:56 example.com systemd[1]: fstrim.service: Consumed 1.213s CPU time.
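
On a long-running system the unit log grows, so a relative time filter keeps the output to recent runs; the time syntax below is standard systemd.time notation:

journalctl -u fstrim.service --since "7 days ago"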

Undo the change if you need to.

sudo rm -v /etc/systemd/system/fstrim.timer.d/override.conf

References

[Online Discard] Enable TRIM in /etc/fstab

This approach is not recommended by most manuals; the periodic discard described above is usually preferred.

sudo nano /etc/fstab
#/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 defaults 0 0
#/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 discard,defaults 0 0
#/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 discard,async,noatime,nodiratime,errors=remount-ro 0 1
/dev/disk/by-uuid/09e7c--18b1696fc714 / ext4 discard,errors=remount-ro 0 1
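
After editing fstab, the new option can be applied and verified without a reboot. A small sketch, assuming the root filesystem from the example above:

sudo mount -o remount /    # re-read the mount options for /
findmnt -no OPTIONS /      # the options list should now include "discard"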

Warning: users need to be certain that their block device supports TRIM before attempting to use the discard mount option. Otherwise data loss can occur!

ProxmoxVE Node

Batch discard of the guests

For unprivileged Linux containers, use the following command on the ProxmoxVE host side.

pct fstrim 151              # Where 151 is the ID of the container
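
To trim every container on the node in one pass, a minimal loop sketch; it assumes the default pct list output, where the first column after the header row is the container ID:

for id in $(pct list | awk 'NR > 1 {print $1}'); do
    pct fstrim "$id"
done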

For privileged containers, fstrim can also be executed from the guest's side. For the virtual machines: enable the discard and SSD emulation options in the storage device configuration, use VirtIO SCSI as the controller, and trim on the guest side. We can also use one of the following commands on the ProxmoxVE host side.

qm guest cmd 251 fstrim     # Where 251 is the ID of the virtual machine
qm agent 251 fstrim         # Where 251 is the ID of the virtual machine

Notes about the PVE Node (the PVE host)

  1. Schedule fstrim to run weekly on PVE hosts.
  2. Use the VirtIO SCSI driver for VMs when possible, enabling the discard option each time.
  3. Schedule TRIM (e.g. fstrim in Linux) to run weekly in guest VMs.
  4. Schedule fstrim to run weekly in PVE containers.
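
For item 3, scheduling inside a VM usually reduces to enabling the stock timer, since its default schedule is already weekly. Note that the stock unit refuses to run inside containers (ConditionVirtualization=!container, shown earlier), which is why pct fstrim from the host is used for them.

sudo systemctl enable --now fstrim.timer   # OnCalendar=weekly by default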

References

  • Proxmox Forum: TRIM SSDs (https://forum.proxmox.com/threads/trim-ssds.46398/)

Tweak the APM Level of SSD

Most Linux distributions use the Linux Kernel’s “Advanced Power Management (APM)” API to handle configuration, optimize performance, and ensure stability of storage devices. These devices are assigned an APM value between 1 and 255 to control their power management thresholds. A value of 254 indicates best performance, while a value of 1 indicates better power management. Assigning a value of 255 will disable APM altogether. By default, SSDs are assigned an APM of 254 when the system is running on external power. In battery mode, the APM level is set to 128, reducing the read and write speeds of SSDs. This section explains how to increase the SSD APM level to 254 when your Linux laptop is running in battery mode.

sudo hdparm -B254 /dev/sda

Get the current APM value.

sudo hdparm -B /dev/sda
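
A level set with hdparm -B does not survive a reboot or a suspend/resume cycle. On Debian-based systems, one way to persist it is the hdparm package's /etc/hdparm.conf; the sketch below assumes /dev/sda, and the apm_battery keyword should be verified against your distribution's template file:

/dev/sda {
    apm = 254            # APM level on AC power
    apm_battery = 254    # APM level on battery
}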

Test the performance.

sudo hdparm -tT /dev/sda

References

  • Ask Ubuntu: Ubuntu SSD - Was fast, is now extremely slow (https://askubuntu.com/a/879291/566421)
  • LinuxHint: How to Improve SSD Performance in Linux Laptops (https://linuxhint.com/improve_ssd_performance_linux_laptops/)
  • Linux Magazine: Tune Your Hard Disk with hdparm (https://www.linux-magazine.com/Online/Features/Tune-Your-Hard-Disk-with-hdparm)
  • Linrunner: TLP FAQ: Disk Drives (https://linrunner.de/tlp/faq/disks.html)

See also

  • Linux Swap and Swapfile
  • Preload Tool for Better System Performance
  • Arch Linux Wiki: Improving performance (https://wiki.archlinux.org/title/Improving_performance#Storage_devices) (Need to read carefully!)
  • Arch Linux Wiki: Ext4 (https://wiki.archlinux.org/title/Ext4#Improving_performance) (Need to read carefully!)
  • Website for Students: Improve Nginx Cache Performance with tmpfs on Ubuntu (https://websiteforstudents.com/improve-nginx-cache-performance-with-tmpfs-on-ubuntu/)
  • Ask Ubuntu: How do I optimize the OS for SSDs? (https://askubuntu.com/questions/1400/how-do-i-optimize-the-os-for-ssds)
  • How-To Geek: How to Tweak Your SSD in Ubuntu for Better Performance (https://www.howtogeek.com/62761/how-to-tweak-your-ssd-in-ubuntu-for-better-performance/)
  • How-To Geek: What Is the Linux fstab File, and How Does It Work? (https://www.howtogeek.com/howto/38125/htg-explains-what-is-the-linux-fstab-and-how-does-it-work/)
  • SSD Sphere: Why you should not worry about SSD’s TBW and MTBF? (https://ssdsphere.com/why-you-should-not-worry-about-ssds-tbw-and-mtbf/)
  • GBMB.org: PB to TB Conversion (https://www.gbmb.org/pb-to-tb)