Ubuntu – NVMe SSD slow write speed

kernel nvme pcie performance ssd

I have a NUC (BEH model) with an M.2 PCIe Gen3 NVMe SSD (Samsung 970 Pro 512 GB), and I get both slow and fast write speeds in Ubuntu 18.04.3 depending on the kernel. I used ukuu to switch kernels: on kernel 5.0+, which comes standard with the Ubuntu installer, I get around 600 MiB/s write speed (sad), while on the older kernel 4.9.190 I get around 2200 MiB/s with the benchmark tool in Ubuntu. I have tried the latest 5.2 kernel and the problem is still there. I have also tried Linux Mint 19.2 and get the same slow write speed there, because it uses a later kernel than 4.9.

Here is my benchmark result on kernel 4.9.190.
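
For a rough cross-check of the GNOME Disks numbers, a sequential write test can also be run from a terminal on each kernel; this is only a minimal sketch (the test file path is an assumption, and the file should be removed afterwards):

# show which kernel is currently running
uname -r
# write 4 GiB sequentially, bypassing the page cache
dd if=/dev/zero of=~/ddtest.img bs=1M count=4096 oflag=direct conv=fdatasync status=progress
rm ~/ddtest.img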

I think this and this are related problems, and a quick Google search turns up plenty of SSD write-performance complaints. Could this be a broader Linux kernel performance issue?

Any help or fix would be greatly welcome!

Best Answer

  • The problem described is about the same here. I have a computer with an ASUS Z10PE motherboard, which has a built-in M.2 NVMe slot. I also added a PCIe card that holds one NVMe drive, and I modded the BIOS to enable bifurcation so that one PCIe slot is split into x4x4x4x4, letting me fit the ASUS Hyper M.2 PCIe card that takes up to 4 NVMe drives.

    If I use the GNOME Disks tool, which allows running a performance test, the best-case scenario is on the ASUS PCIe card with the Samsung PM981 NVMe drives:

    • 3.3 GB/s READ speed (as advertised)
    • 600 MB/s WRITE speed (about 4 times less than advertised, with a really significant drop in performance once the cache gets filled, at about 40 GB).

    I then put the Samsung PM981 NVMe drives on the ASUS PCIe card into a software RAID (see the sketch after this list). Speeds are now as follows:

    • READ: 5.6 GB/s (that is OK... even if not quite double the single-drive speed);
    • WRITE: 1.2 GB/s, which is exactly double the single-drive performance.
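
    For reference, "software RAID" here means a striped array; a minimal sketch of one common way to build it with mdadm, assuming the two drives show up as /dev/nvme0n1 and /dev/nvme1n1 (device names and mount point are assumptions, and this destroys any data on the drives):

    # create a 2-disk RAID 0 (striped) array across the NVMe drives
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    # put a filesystem on the array and mount it
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /mnt/raid
    sudo mount /dev/md0 /mnt/raid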

    It is as if the kernel or the MoBo set the speed to AHCI speed (as if it were a SATA drive).
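
    One way to check that theory is to look at the negotiated PCIe link speed and width for the NVMe controllers; a quick sketch (the PCI address 01:00.0 is just an example, yours will differ):

    # find the PCI addresses of the NVMe controllers
    lspci | grep -i nvme
    # compare the advertised (LnkCap) and negotiated (LnkSta) link speed/width
    sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'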

    Now, if I use the above method (dd), the results are quite different:

    dd if=/dev/zero of=tempfile bs=1M count=16384 conv=fdatasync,notrunc status=progress oflag=direct
    15183380480 bytes (15 GB, 14 GiB) copied, 5 s, 3.0 GB/s
    16384+0 records in
    16384+0 records out
    17179869184 bytes (17 GB, 16 GiB) copied, 5.63686 s, 3.0 GB/s

    dd if=tempfile of=/dev/null bs=1M count=4096 status=progress iflag=direct
    4096+0 records in
    4096+0 records out
    4294967296 bytes (4.3 GB, 4.0 GiB) copied, 1.00056 s, 4.3 GB/s

    So the results are totally inconsistent between the two tools: GNOME Disks and dd...
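
    When two tools disagree like this, a third, more controlled measurement can help arbitrate; a hedged fio sketch for a direct sequential write test (fio may need installing first, and the target path is an assumption):

    sudo apt install fio
    # 1 MiB sequential writes, direct I/O, 16 GiB total, with a final fsync
    fio --name=seqwrite --rw=write --bs=1M --size=16G --direct=1 --ioengine=libaio --end_fsync=1 --filename=/mnt/raid/fio.test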

    In the real world: if I move a really large (about 20 GB) file from one NVMe drive to another, I hardly get more than 850 MB/s even on the software-RAIDed drives, which is really MUCH slower than expected... In theory: 2 × 2400 MB/s = 4800 MB/s. In reality: 6 to 7 times less.
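
    To put a comparable number on that real-world copy, the transfer can be timed including the final flush to disk; a rough sketch, with source and destination paths as assumptions:

    # time the copy of the large file, including sync, then divide the file size by the elapsed time
    time sh -c 'cp /mnt/nvme0/bigfile.bin /mnt/raid/bigfile.bin && sync'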

    If you ask me: I think there is a real problem either in the MoBo or in Linux.

    I'll have to install Windows just to check whether the problem is with the MoBo or with the OS.

    Regards.
