Rollin' on da (Linux) River
Posted by Tech at 1:27 p.m. on Jan. 25th, 2007
Seeing as how, in a matter of weeks, I’ll be working in an exclusively Linux-based environment, I figured I’d fire up my Fedora installation again and see how’s tricks. Linux has been playing the Ike to my Tina for over a decade, always disappointing and abusing me every chance it gets, and I just keep coming back for more. This time, though, it’s for real; I can feel it. Anyway, I don’t have a choice, so let’s just tell the neighbors I fell off the swing, and concentrate on the good times.
One of the biggest gripes I’ve had about Linux was its sound card support*. Every Linux distribution you can think of still defaults to 1993’s concept of sound hardware. Namely, only one application can play a sound at any given time, and it locks the sound hardware from being used by other applications, for what reason not even God knows. This is behavior that belongs in a single-user, single-tasking operating system like DOS, not in a modern multiuser environment like Linux.
This problem was addressed back in the day by using “sound servers”, like ESD or artsd, to provide a layer between the applications and the OSS-supported hardware. This introduced a huge amount of latency, however, and was generally considered a stopgap measure even at its inception back in 1998 (or maybe even earlier, I can’t find that info). Even worse, software had to be written to explicitly support it or it was useless.
The limitations of the OSS-plus-sound-server architecture were overcome by a project known as ALSA, which replaced OSS in the kernel and, through its dmix plugin, added software mixing in user space. This gave applications an OSS-compatible, multi-stream abstraction over the sound hardware and obviated the need for sound daemons altogether. That was back in, oh, 2000 or so, and to this day, every single Linux distribution still ships with software mixing disabled.
Ubuntu, Fedora, Debian, et al. still configure sound hardware to be accessible by one application at a time, despite ALSA’s hard work. This means, for instance, that your Flash player will freeze your browser if you happen to be listening to music in the background. Or a system beep will lock you out of your sound card for as long as the artsd/ESD timeout lasts. Games will not work. This sucks, of course, and is no way to compute in 2007.
The fix is fairly simple: put the following into /etc/asound.conf (system-wide), or into your ~/.asoundrc (per-user):
pcm.!default {
    type plug
    slave.pcm "swmixer"
}

pcm.swmixer {
    type dmix
    ipc_key 1234
    slave {
        pcm "hw:0,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
    }
}
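If you’d rather not paste that by hand, here’s a small shell sketch that writes the config out for you and keeps a backup of anything already there. Note the TARGET path defaults to a scratch file in the current directory for a dry run; point it at your real ~/.asoundrc yourself once you’re satisfied. (The "hw:0,0" device and 1234 ipc_key are just the values from the snippet above; adjust for your card.)

```shell
# Sketch: install the dmix config, backing up any existing file first.
# TARGET defaults to a local scratch file so you can dry-run it safely.
TARGET="${TARGET:-./asoundrc.test}"

[ -f "$TARGET" ] && cp "$TARGET" "$TARGET.bak"   # keep a backup

cat > "$TARGET" <<'EOF'
pcm.!default {
    type plug
    slave.pcm "swmixer"
}

pcm.swmixer {
    type dmix
    ipc_key 1234
    slave {
        pcm "hw:0,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
    }
}
EOF

grep -q 'type dmix' "$TARGET" && echo "config written"
```

Changes take effect the next time an application opens the default PCM device; no daemon to restart, which is rather the point.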
Now, I confess that I neither wrote that snippet nor have the faintest idea what it does. But since putting that magical incantation into use, I’ve had no problems whatsoever with sound-card locking, or Flash plugins, or anything else. It just works.
So why haven’t the distributions adopted this behavior as the default, instead of their asinine insistence on single-stream sound? The reason is simple: Linux distributions hate their users, and want to destroy them by any means possible.
This would also explain their sticking with the X Window System.
* See also:
Kubuntu: Sucks
Linux Sound: Still suckin’ after all these years