Hyperthreading CPUs and User Experience

Brian has an article on his blog about Hyperthreaded CPUs and their effects on “the user experience”, by which I’m sure he means the typical response on a graphical desktop to a user’s actions — something like moving the mouse, dragging a window, opening up a menu, etc.

I disagree with a few of his assertions, namely that HT itself is responsible for improving the user experience. For example, take a single (and non-HT) processor running some CPU-intensive process: a compiler, a complex graphical manipulation that doesn’t take advantage of your graphics processor, or a poorly written program that runs away with your CPU in a tight loop. That process is going to eat cycles that would otherwise be used to redraw your mouse pointer (hardware-drawn cursors went away with Microsoft Windows 3.1), draw the menus in your spreadsheet program, or drag your windows around the screen. That’s what makes your graphical desktop feel sluggish.

The reason this happens is that a CPU can only do one thing at a time. Fortunately, it typically does things reeeeeally fast, so you don’t notice that only one thing is happening at any instant: the OS rapidly switches the CPU between tasks, doing a little bit of work here and a little bit of work there, and it magically looks like everything is getting done “all at the same time”.

With HT, the CPU itself can actually do more than one thing at a time. Sure, the CPU still does that frenzied-switcheroo dance, except that it can — ostensibly, anyway — do work on two whole tasks at once. Brian mentions that HT isn’t as nice as actually having two equally-fast processors, but let’s ignore that fact for the moment.

I assert that the responsiveness of the graphical desktop has more to do with the way the desktop functions than with the way the CPU works. Evidence? Compare a machine running any version of Microsoft Windows with a similar machine running Linux and any one of the graphical desktops that run atop it. When you launch a program under Microsoft Windows, you get an hourglass mouse pointer, the computer churns for a while, and the program window eventually opens. The next time you do that, move the mouse around… try to open another application… try to drag another window around. For the most part, your desktop will respond quite favorably. The mouse cursor will smoothly follow your hand motions, the windows will redraw, and the second application will also eventually open.

My experience with Linux is not the same. If I open an application, the mouse cursor immediately starts jerking around and loses its smoothness. With the mouse jerking around, dragging windows gets jerky as well. Other apps will still start, of course, but it’s the little things, like dropping menus down and moving the mouse, that people really notice.

Note to Linux zealots: I totally love Linux. I run it on everything except the computer that I use as my primary desktop, mostly because of games that I want to play. Yeah, Wine just doesn’t work for me. Get over it.

Anyway, these observations lead me to believe that Microsoft Windows, no doubt through some kind of unholy voodoo, has gone to great pains to schedule the user interface at the highest possible priority. Linux, in typical pragmatic style, has chosen not to hijack the CPU for such trivial details as turning your mouse pointer into the Energizer Bunny.
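If that’s true, the voodoo itself is pretty mundane. Here’s a minimal Win32 sketch in C of the kind of call I’d imagine is involved; this is purely illustrative, and I have no idea whether explorer actually does anything like it:

```c
/* Illustrative only: any Win32 process can ask the scheduler for a
 * better priority class than the default NORMAL_PRIORITY_CLASS. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Ask to be scheduled ahead of normal-priority processes (like,
     * hypothetically, a compiler chewing through a big build). */
    if (!SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS)) {
        fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Now running ahead of every normal-priority process.\n");
    return 0;
}
```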

As for Brian’s compiler running in a virtual machine, it’s a shame that VMware doesn’t properly expose both of the processors available to the host Microsoft Windows system to the OS running inside the virtual machine. I would expect a decent virtualization environment to let you choose how many CPUs to expose to the guest OS; I would have expected his Gentoo compile to be able to peg both of his virtual CPUs.
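To be fair, VMware versions with virtual SMP support do let you hand a guest a second virtual CPU; if memory serves, it’s a one-line setting in the machine’s .vmx configuration file (I’m quoting from memory, so treat the exact key name as an assumption):

```
numvcpus = "2"
```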

But back to CPU utilization versus user experience. I would bet that if he were using a threaded compiler (which almost doesn’t make any sense) directly in his Microsoft Windows environment, and compiling the same code (or at least performing a compile that was equally CPU-intensive), then both HT CPUs would be pegged, and he’d still be able to move the mouse around, click on things (with a slight delay), etc.
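You don’t actually need a threaded compiler to run that experiment; any two busy threads will peg both logical CPUs. Here’s a throwaway Win32 C test of my own (not anything Brian ran):

```c
/* Spin one thread per logical CPU. On an HT processor, Task Manager
 * should show both logical CPUs pegged at 100% -- while the mouse,
 * if my scheduling theory holds, keeps moving smoothly. */
#include <windows.h>

static DWORD WINAPI spin(LPVOID arg)
{
    (void)arg;
    for (;;)
        ;               /* burn cycles forever */
}

int main(void)
{
    HANDLE threads[2];
    int i;

    for (i = 0; i < 2; i++)
        threads[i] = CreateThread(NULL, 0, spin, NULL, 0, NULL);

    /* Never returns; kill the process to get your CPU back. */
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    return 0;
}
```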

I think it comes down to scheduling. Your OS can always interrupt your compiler for any reason. Your compiler (well, really your VM) is probably scheduled in “normal” mode, whatever that means for your OS. I’m willing to bet dollars-to-doughnuts (mmmm… doughnuts) that Microsoft Windows’s graphical shell itself (explorer?) is scheduled at a higher-than-normal priority, or that all the UI calls it makes run either at the kernel level (which wouldn’t surprise me one bit for MS Windows, honestly) or at a higher-than-normal priority. It’s not the CPU, it’s the scheduler.
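And if you don’t have Microsoft’s unholy voodoo handy, you can attack the problem from the other direction: demote the batch job instead of boosting the shell. Here’s a sketch in C using POSIX setpriority(); the wrapper and its name are my own invention, purely for illustration, and it does roughly what the standard nice utility already does:

```c
/* runlow.c -- hypothetical wrapper: demote a batch job (like a compiler)
 * to the lowest scheduling priority before running it, so the scheduler
 * hands the CPU to anything interactive the moment it wants it. */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    /* Nice value 19 is the lowest priority; the exec'd command inherits it. */
    if (setpriority(PRIO_PROCESS, 0, 19) != 0)
        perror("setpriority");

    execvp(argv[1], &argv[1]);
    perror("execvp");   /* only reached if the exec failed */
    return 1;
}
```

Running ./runlow make is the moral equivalent of nice -n 19 make; either way, the compiler still pegs the CPU, but the mouse pointer wins every argument over who gets the next time slice.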

There are a lot of folks out there who say that HT is actually hurting performance. I haven’t read any of those reports, ’cause I’m honestly not that interested in looking at the numbers. After reading the Ars article linked above a few years ago, I thought that some really smart dudes got really high one day and had themselves a fantastic idea. I figured that it wasn’t as cool as the hype would suggest, but hey… why not squeeze as much out of the CPU as you possibly can? My gut reaction is that you can find data either to support or to deride HT technology. I do know one thing: lots of Java developers used to complain that HT CPUs would crash all the time with very strange errors, and that turning off HT would solve their problems. >shrug< You gotta do what you gotta do. Too bad those folks paid extra for their super-sexy HT processors.

I had a friend at Rose-Hulman who used to play Unreal Tournament with a couple of friends and me. He had just gotten a dual-CPU machine and decided to play with it a little: he created a dedicated server and set its processor affinity to his second processor (i.e. not the primary one). Then he started UT in client mode so he could play, and set the processor affinity for that process to the primary CPU. I’m not sure it made any real difference compared to just running the two with no tweaks, but it was an interesting idea.
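For the record, what he clicked through in Task Manager boils down to a single Win32 call. The affinity mask is just a bit field: bit 0 is the first logical processor, bit 1 the second. A hypothetical sketch, with mask values chosen for his two-CPU box:

```c
/* Hypothetical sketch: pin the current process (say, the UT client)
 * to CPU 0. The dedicated server would use mask 0x2 (CPU 1) instead. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                GetLastError());
        return 1;
    }

    printf("This process will now only ever be scheduled on CPU 0.\n");
    return 0;
}
```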

When I heard that he had done that, it occurred to me that since the OS itself actually needs very little CPU time to do its stuff, an OS that monopolized a considerably lower-powered processor and scheduled all user tasks on a much higher-powered one would be great. Super-fast memory allocation (not that it’s particularly slow in the first place), buffer management, DMA, etc. For most OSs, this would also mean that the various hardware drivers run on a CPU that isn’t being used for applications. That would speed up graphics processing, since even a computer with the latest monster GPU still needs the graphics driver to actually send the data to the GPU to do the work.

Who knows. Maybe someone will steal my idea and make a jillion dollars. That would really suck for me.
