Apr 4 2007
One step forwards, two steps back – hardware support for Linux
We’ve recently been looking at buying a new low-cost server for running demos from. To reduce costs, we went for a standard consumer motherboard, and the system will, naturally, be running Linux.
I’ve become fairly disillusioned with PCs in general (get a Mac!), so I haven’t been keeping up to date with the latest developments in the PC world… and what I found shocked me.
From what I can tell, the amount of hardware that works with Linux – through no fault of the kernel developers – is dropping rapidly. It now seems that we’re back to the bad old days when even the motherboard may not work correctly.
The problem lies squarely at the feet of the manufacturers: nVidia and ATI are trying to get everyone to use pre-compiled closed-source binary blobs (and yes, I realise that that’s like saying “PIN number”) even for core functionality; Intel’s latest chipsets seem to break their own specifications; AMD doesn’t do (non-ATI) chipsets any more… so the only choice for a new (consumer) motherboard is SiS or VIA!
To demonstrate the depth of this problem, our new system was based around a Gigabyte GA-965GM-S2 with a Core 2 Duo processor. Try to boot this with a recent LiveCD and the system stops early on, saying that it can’t find any storage devices: it turns out that getting SATA working requires a BIOS option to be changed to a non-default value, and that IDE won’t work without a nasty hack.
As detailed here, here, here, and here, this has caught a lot of people out.
In fact, it seems that the only way to reliably get this board to boot is to pass the following parameters to the kernel:
acpi=off pci=nommconf routeirq irqpoll all-generic-ide
… which is decidedly user-unfriendly. Even then, the kernel still complains about broken IRQ routing (and suggests using irqpoll), the FireWire port is detected as present but doesn’t work, and the integrated NIC drops out every few minutes. On top of that, all of these hacks apparently incur quite a severe performance hit.
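If you’re stuck with this board in the meantime, the least-painful way to apply these workarounds is to add them to the boot loader’s kernel line rather than typing them in by hand at every boot. A rough sketch for GRUB (legacy) follows – the kernel image, initrd, and root device below are just placeholders for whatever your own install uses:

# /boot/grub/menu.lst – append the workaround flags to the existing kernel line
title  Linux (GA-965GM-S2 workarounds)
root   (hd0,0)
kernel /boot/vmlinuz root=/dev/sda1 ro acpi=off pci=nommconf routeirq irqpoll all-generic-ide
initrd /boot/initrd.img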
An interesting post from the end of 2006 (here) discusses exactly the same problem: without these options, the kernel won’t boot; with them, it boots, but complains about IRQ routing and reports that the CD drive is confused every few seconds.
I don’t blame the kernel developers for this, as many might be tempted to do – those guys do a great job under frustrating conditions, and Linux has the best hardware support of any UNIX or UNIX-like OS. The problem lies squarely with the hardware manufacturers: if they actually documented their chipsets, then this would not be a problem. Instead, they are actively shoring up Microsoft’s monopoly in the OS market by preventing any other solution from working.
Even worse, the only consumer motherboards available which don’t support dual graphics cards are the very low-end cheap ones. Even mid-range boards are now SLI/Crossfire enabled – which ties them to one particular graphics vendor – and in both cases that means either non-functional hardware or closed black-box binary drivers which break on every kernel upgrade. So far as I can tell, this has also pushed VIA and SiS out of contention. Worse still, it means that if someone wants to upgrade their graphics card, they may also need to upgrade their motherboard, their processor, and soon perhaps their memory too.
This problem looks as if it is going to get worse before it gets better, and I can see only two solutions for the home user who wants to run a real OS: buy an (expensive) server-class motherboard, which will likely have limited graphics options and may perform poorly as a desktop, or get a Mac.
The Mac Pro and MacBook ranges have pretty good Linux support (binary graphics drivers are still an issue) – but, in all honesty, MacOS is now good enough that, on the desktop, Linux is no longer really needed by the home user who doesn’t want to be kept in thrall to Microsoft. OS X is more usable than Windows, whilst retaining a UNIX-like core.
As MacOS has gcc, non-proprietary software development is still easily done – and Fink, MacPorts, and Gentoo Portage all run very well to provide a GNU userland.
In addition, it’s simple enough that my parents have been using a Mac for over a year now – and I’ve been called for technical help only once, because of a Canon scanner (Canon have been very lazy with their Mac drivers, which don’t actually work; installing a Mac-built version of SANE fixed this easily) – compared to once or twice a week when they had a PC. This is even more impressive when you consider that they only used the PC when they absolutely had to, whereas they are actually choosing to use their iMac!
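For anyone hitting the same scanner problem: the fix really was nothing more exotic than a Mac build of SANE. From memory, something along these lines via MacPorts did the job (the exact package name may differ between the ports systems, all of which should carry an equivalent):

sudo port install sane-backends   # SANE scanner backends – package name from memory
scanimage -L                      # list the scanners SANE can see, to confirm the backend works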
People criticise Apple hardware for being proprietary and designed solely to run MacOS… the sad thing is that the PC has become a clumsily-assembled, ill-fitting collection of non-interchangeable proprietary parts which exist solely to run Windows.
“The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.”
Stuart
12th April 2007 @ 10:11 am
Update:
I managed to bootstrap the Intel (Gigabyte) motherboard far enough to allow it to reboot under its own steam – and with a 2.6.19 kernel things are much improved.
The SATA and IDE controllers are now both recognised, and the board boots correctly without having to pass the kernel lots of hack-ish arguments.
Firewire still doesn’t appear to work, and the on-board NIC isn’t detected. Having said this, I put in a generic Intel e1000-based card to bootstrap the system, and I’ve seen cases before where having multiple identical NICs in the same system results in only one being detected – so this may not be (entirely) Intel’s fault.
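The obvious next step is to check whether the on-board NIC even appears on the PCI bus, which would separate a dead or disabled device from a driver mix-up. Something along these lines should tell the story (the output will obviously vary from system to system):

lspci | grep -i ethernet   # does the on-board controller show up on the bus at all?
dmesg | grep -i eth        # which driver claimed which interface during boot?
ifconfig -a                # every interface the kernel knows about, up or down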
Stuart
23rd April 2007 @ 9:46 pm
Seek and ye shall find:
http://www.phoronix.com/ – GNU/Linux & Solaris Reviews 😀