Hi, Ubuntu Server 20.04 does not install on my new PC.
It boots into the GRUB menu, and when I choose "Install Ubuntu" it puts out a large log ending in the following message:
"end Kernel panic - not syncing: Attempted to kill the idle task!"
I have built a new PC using the following kit:
AMD Ryzen 9 5950X Processor
MSI MAG X570S TOMAHAWK MAX WIFI
128 GB Crucial Pro RAM DDR4
Noctua NH-D15 heatsink.
Generic AMD chipset graphics card in PCIe slot 2.
Crucial Gen 4 PCIe M.2 2TB SSD.
I am looking to use a PCIe adaptor with 16 lanes to the CPU, adding four 1TB drives in a RAID 10 array. To do this I have been using an ASUS Hyper M.2 x16 PCIe 4.0 x4 Expansion Card (supports 4 NVMe M.2 drives).
I entered the BIOS and set up RAID 10 OK after some messing about, but the computer would not install Ubuntu from my USB stick.
I have removed the PCIe adapter and tried to install Ubuntu on the M.2 drive, but the installer dumps out errors.
I have tried using a USB 2 port for the USB stick, with no luck.
I'm at a bit of a loss as to what's causing it. Other people seem to have used this motherboard, and I can't see that the RAM or SSD are an issue. The graphics card is obviously working well enough to get a display.
Hope someone can help.
I'm not sure how I can get a better dump.
Any ideas?
Next is the ISO. The Desktop Edition has both kernel tracks in the ISO (GA & HWE). Having the newer HWE kernel series gives it a step up in being able to boot and run on newer hardware. The Server ISO only has the GA kernel series. For Focal Fossa, 20.04, when it was released, the GA kernel series was 5.4. HWE, when it was released, was 5.8... Then much later, a decision was made to accelerate that HWE kernel series to 5.15. (The biggest factor was hardware considerations.)
Put the release numbers above together... Now can you see why the Server Edition ISO will not boot, but the Desktop Edition of 20.04 will?
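If you do get a 20.04 system installed and booted, you can check which kernel series you are on, and opt in to the HWE stack, like this (a sketch; the HWE metapackage name is the one for 20.04):
Code:
# Which kernel series am I running? 5.4.x = GA, 5.15.x = the later HWE rollup for 20.04
uname -r
# Opt in to the HWE kernel stack on an installed 20.04 system:
sudo apt-get install --install-recommends linux-generic-hwe-20.04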
You have 2 choices. Install a minimal system from the 20.04 Desktop ISO, then convert that back to Ubuntu Server (only takes one command), or install Ubuntu Server 22.04 LTS.
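A sketch of that Desktop-to-Server conversion, assuming you did the minimal desktop install first (whether you also strip out the desktop packages afterwards is up to you):
Code:
# Pull in the server metapackage on top of the minimal desktop install:
sudo apt-get install ubuntu-server
# Boot to a text console instead of the graphical session:
sudo systemctl set-default multi-user.target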
*** Next... Does that PCIe NVMe quad card work with your motherboard? Does that motherboard have a PLX bifurcation chip on it, with the MSI "PCIe Configuration" menu option? I thought the MSI boards that have that onboard were the Unify, x670e Carbon, x670e ACE, x670e Godlike and the TRX40-A PRO... I know the chipsets that support PCIe bifurcation: Intel platform X299, C422, C621, Z390, Z370, Z490, Z590, Z690, X99; AMD platform TRX40, X399, X570, X470, B550, B450, etc. And just because the chipset supports it doesn't mean that the BIOS supports it. That is usually only in their higher-end boards.
With some mobo manufacturers, if you do not use the top x16 slot for that card, you might only get 2 of the 4 drives coming up. I use a quad card with the bifurcation chip on the card itself; a rough check is sketched below.
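Once you can boot a live session, this is one way to confirm how many of the quad card's drives actually enumerated, and at what link width (the 01:00.0 address is just an example; substitute your own from the first command):
Code:
# One line per NVMe controller the kernel detected -- expect 4 for the quad card:
lspci -nn | grep -i 'non-volatile'
# Negotiated PCIe link speed/width for one of them:
sudo lspci -vv -s 01:00.0 | grep -i 'lnksta'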
*** Next is, once you get it booting, you may find that Linux will not see your drives at all. None of the ones you have set up behind the BIOS RAID. That is not an HBA RAID controller. If you look at MSI's instructions for that BIOS RAID, the options all relate to Microsoft Windows, right? They are using APIs, and drivers within Windows, to do that... Their driver downloads do not include Linux drivers to do the same...
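From a live USB session you can see what the kernel actually detects, with nothing MSI-specific involved (nvme-cli may need installing first):
Code:
# Do the raw disks show up at all?
lsblk -o NAME,SIZE,MODEL
# NVMe-specific view, from the nvme-cli package:
sudo nvme list
# Kernel probe messages, useful when a drive is missing:
sudo dmesg | grep -i nvme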
If you really think you need RAID, then I would recommend using mdadm, LVM2 RAID, or ZFS RAIDZ. RAID is not a replacement for backups, but there are reasons for RAID. One is uptime, meaning staying running. With hardware RAID and software mdadm, what I found out long ago doing disaster management in a crisis (for servers) is that you should create arrays with hot spares. Then, if the worst happens, expect and plan to destroy your old arrays and create new ones to restore to. For better adaptability, portability, and flexibility... I converted everything I had to LVM2 RAID, then later on to ZFS RAIDZ.
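A sketch of the mdadm route with a hot spare (device names are examples only; note that a hot spare means one more drive than the four in the array):
Code:
# 4-drive RAID 10 plus one hot spare (adjust device names to your system):
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 \
    /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
# Persist the array definition so it assembles at boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u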
I test daily builds. I run LTSes as daily drivers. You are planning to run an LTS, but from a previous release, from before your hardware came out. 22.04 has been out for over a year and is well proven in production. I run new hardware, so I run the HWE stack on my servers. I would try 22.04 and see if it boots and runs. Then take another look at your plan to verify what will work with the hardware you have.
This is one of my servers, with a PCIe-to-M.2 bifurcation card:
Code:
mafoelffen@Mikes-ThinkPad-T520:~$ ssh mafoelffen@10.0.0.3
Welcome to Ubuntu 22.04.2 LTS (GNU/Linux 5.19.0-50-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Expanded Security Maintenance for Applications is not enabled.
79 updates can be applied immediately.
66 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
Last login: Sun Jul 30 06:18:49 2023 from 10.0.0.170
mafoelffen@Mikes-B460M:~$ lsblk -e7 -o name,label,size,fstype,mountpoint,model
NAME    LABEL      SIZE FSTYPE     MOUNTPOINT MODEL
sda                1.8T                       Samsung SSD 870 EVO 2TB
└─sda1  datapool   1.8T zfs_member
sdb                1.8T                       Samsung SSD 870 EVO 2TB
└─sdb1  datapool   1.8T zfs_member
sdc                1.8T                       Samsung SSD 870 EVO 2TB
└─sdc1  datapool   1.8T zfs_member
sdd              476.9G                       Crucial_CT512MX100SSD1
├─sdd1              16M ext4
Re: Ubuntu Server 20.04 Installation Failed with "End Kernel Panic: not synching"
Part 2:
I have two reasons for using RAID: reliability and a speed/performance increase. My throughput benchmarks on my server (above) are in the multi-GiB/s range... I can run VM guests that 'pop' better than most people can run on normal physical hardware.
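If you want to measure your own throughput, a quick sequential-read test with fio looks something like this (a sketch; the target device and the parameters are just examples, and the test is read-only):
Code:
# Sequential read benchmark against a block device:
sudo fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M \
         --direct=1 --ioengine=libaio --iodepth=32 \
         --runtime=30 --time_based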
With hardware RAID and Linux software RAID, there are no warnings before the array goes degraded or fails. You do get warnings from LVM2 RAID and ZFS RAIDZ. They are also more flexible: you can make changes (in structure, size, etc.) and do maintenance on a live filesystem. With ZFS, even more so. With both, I can export and migrate the data members to other places or machines. With both, I can take snapshots. Both are production-ready. ZFS has the steeper learning curve of the two. Check into both of these volume managers to see which might fit your needs.
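A taste of the ZFS side (the pool and snapshot names here are made up for illustration):
Code:
# Create a RAIDZ pool from four drives (device names are examples):
sudo zpool create tank raidz /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
# Point-in-time snapshot:
sudo zfs snapshot tank@before-upgrade
# Export the pool, then 'sudo zpool import tank' on the destination machine:
sudo zpool export tank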
Here is my workstation, still in the rebuilding stage:
Code:
root@msi-ubuntu:~# lsblk -e7 -o name,label,size,fstype,mountpoint,model
NAME        LABEL             SIZE FSTYPE     MOUNTPOINT             MODEL
sda                           3.6T                                   WDC WD40EZAZ-00S
├─sda1                         16M
├─sda2      Win_G             3.4T ntfs       /home/mafoelffen/WIN_G
└─sda3      Home_LTS        295.4G ext4
sdb                         465.8G                                   Samsung SSD 870
├─sdb1      System Reserved    50M ntfs
├─sdb2                      365.2G ntfs
├─sdb3                          1K
└─sdb4                        508M ntfs
sdc                           3.6T                                   WDC WD40EZRZ-22G
├─sdc1                        128M
└─sdc2      WIN_F             3.6T ntfs
sdd                           3.6T                                   WDC WD40EZAZ-00S
├─sdd1                         16M
└─sdd2      20210604          3.6T ntfs
sde                           4.5T                                   ST5000DM000-1FK1
├─sde1                        128M
└─sde2      WIN_H             4.5T ntfs       /Media_H
sdf         varpool           7.3T zfs_member
└─sdf1      mpool             1.5T zfs_member
nvme2n1                       1.8T                                   Samsung SSD 990 PRO 2TB
├─nvme2n1p1 kpool             1.8T zfs_member
└─nvme2n1p9                     8M
nvme0n1                       1.8T                                   Samsung SSD 990 PRO 2TB
├─nvme0n1p1 EFI               750M vfat       /boot/efi