Archive for the ‘Tech’ Category

(Yet Another) Microsoft Internet Explorer Rendering Bug

Tuesday, March 21st, 2006

For years, standard operating procedure for developing a web application would be to design and implement it with Microsoft’s Internet Explorer as the test bed. You’d pick a version and tell everyone that they needed to use it and that was that. I have to admit that even I committed such sins.

These days, I want my pages to be usable on as wide a variety of web browsers as possible, so I use Mozilla Firefox for development, and then just check MSIE at the end to see if anything is amiss. Yet, with MSIE, something is almost always amiss…

I’ve had trouble with good-looking logos and mastheads for a long time. Back in the day, tables were the way to go. More recently, CSS has become the preferred (and really the only) way to do layouts. The trouble is that MSIE is schizophrenic when it comes to CSS: the folks who wrote MSIE implemented only part of the CSS specifications, and often took shortcuts wherever they wanted.

A few days ago, I noticed that MSIE was acting strangely when viewing one of my current projects. It appeared that the text in my masthead wasn’t being displayed. Perhaps I had some conflicting CSS styles that were giving my text the same background and foreground colors. I checked, and everything was okay. Reload after reload, application server restart after application server restart didn’t solve the problem. Mozilla Firefox never blinked an eye and rendered the (quite simple) page without a problem.

Then, I noticed that scrolling the page past the fold and back again would mysteriously reveal the text. That’s not something that can be done using CSS.

Check out this movie that demonstrates the problem:

Screenshot of MSIE Rendering Bug

It’s definitely a bug. The markup validates as strict XHTML 1.0, and the CSS is also spick-and-span clean.

The markup is fairly straightforward; there’s a div surrounding the entire masthead (both the topmost blue bar and the bar containing the tri-colored regions), and then a div containing everything within the topmost blue bar. The blue bar contains the login form. All the other text elements are plain-old h1 and h2 elements. The tri-colored regions live in their own div and are made up of the surrounding div (black background) and two nested div elements with appropriate background colors.

The styles are also straightforward: colors, margins, borders, etc. I didn’t even have to use the ‘line-height’ trick to get MSIE to display an empty div (for the tri-colored regions).

It turns out that the problem can be solved by adding a simple non-breaking space between the blue-bar and tri-colored-bar div elements. With that change, MSIE finally gives me what I wanted in the first place. Unfortunately, it adds a small vertical space before the tri-colored bar which I would prefer not to have… it looks like unnecessary padding in the blue div.

I have a virtual machine running Microsoft Windows XP with MSIE 7 beta running on it, so I decided to comfort myself with the fact that MSIE 7 had probably fixed this bug. It hasn’t, at least not yet. I hope the MSIE engineers really try to get CSS right this time.

Hyperthreading CPUs and User Experience

Tuesday, November 29th, 2005

Brian has an article on his blog about Hyperthreaded CPUs and their effects on “the user experience”, by which I’m sure he means the typical response on a graphical desktop to a user’s actions — something like moving the mouse, dragging a window, opening-up a menu, etc.

I disagree with a few of his assertions… namely that HT itself is responsible for improving the user experience. For example, if you have a single (and non-HT) processor and you run some CPU-intensive process (such as a compiler, a complex graphical manipulation that doesn’t take advantage of your graphics processor, or a poorly written program that runs away with your CPU in a tight loop), that process is going to eat cycles that would otherwise be used to redraw your mouse pointer (hardware-drawn cursors went away with Microsoft Windows 3.1), draw the menus in your spreadsheet program, or drag your windows around the screen. This makes the responsiveness of your graphical desktop seem sluggish.

The reason this happens is that CPUs can only do one thing at a time. Fortunately, they typically do things reeeeeally fast, so you don’t notice that it’s only doing that one thing at a time because it switches tasks and does a little bit of work here and a little bit of work there, and it magically looks like everything is getting done “all at the same time”.

With HT, the CPU itself can actually do more than one thing at a time. Sure, the CPU still does that frenzied-switcheroo dance, except that it can — ostensibly, anyway — do work on two whole tasks at once. Brian mentions that HT isn’t as nice as actually having two equally-fast processors, but let’s ignore that fact for the moment.

I assert that the responsiveness of the graphical desktop has more to do with the way that the desktop functions than the way the CPU works. Evidence? Compare any version of Microsoft Windows with a similar machine running Linux and any one of the graphical desktops that run atop it. When you launch a program under Microsoft Windows, you get an hourglass mouse pointer, the computer churns for a while, and the program window eventually opens. The next time you do that, move the mouse around… try to open another application…. try to drag another window around. For the most part, your desktop will respond quite favorably. The mouse cursor will smoothly follow your hand motions, the windows will redraw, and the second application will also eventually open.

My experience with Linux is not the same. If I open an application, the mouse cursor immediately starts jerking around and loses its smoothness. With the mouse jerking around, the windows jerk around as well. Other apps will start, of course, but it’s things like dropping menus down and moving the mouse around that people really notice.

Note to Linux zealots: I totally love Linux. I run it on everything except the computer that I use as my primary desktop, mostly because of games that I want to play. Yeah, Wine just doesn’t work for me. Get over it.

Anyway, these observations lead me to believe that Microsoft Windows, no doubt through some kind of unholy voodoo, has taken great pains to schedule the user interface at the highest possible priority. Linux, in typical pragmatic style, has chosen not to hijack the CPU for such trivial details as turning your mouse pointer into the Energizer Bunny.

As for Brian’s compiler running in a virtual machine, it’s a shame that VMware doesn’t properly expose both of the processors available to Microsoft Windows to the OS running in the virtual machine. I would expect that a decent virtualization environment would allow you to set the number of CPUs to expose to the guest OS. I would have expected his Gentoo compile to be able to peg both of his virtual CPUs.

But back to CPU utilization versus user experience. I would bet that if he were using a threaded compiler (which almost doesn’t make any sense) directly in his Microsoft Windows environment, and compiling the same code (or at least performing a compile that was equally CPU-intensive), then both HT CPUs would be pegged, and he’d still be able to move the mouse around, click on things (with a slight delay), etc.

I think it comes down to scheduling. Your OS can always interrupt your compiler for any reason. Your compiler (well, really your VM) is probably scheduled in “normal” mode, whatever that means for your OS. I’m willing to bet dollars-to-doughnuts (mmmm… doughnuts) that Microsoft Windows’s graphical shell itself (explorer?) is scheduled at a higher-than-normal priority, or that all the UI calls that it makes are either running at the kernel level (which wouldn’t surprise me one bit for MS Windows, honestly) or at a higher-than-normal priority. It’s not the CPU, it’s the scheduler.

There are a lot of folks out there that say that HT is actually hurting performance. I haven’t read any of them, ’cause I’m honestly not that interested in looking at those numbers. After reading the ARS article linked above a few years ago, I thought that some really smart dudes got really high one day and had themselves a fantastic idea. I figured that it wasn’t as cool as the hype would suggest, but hey… why not squeeze as much out of the CPU as you possibly can? My gut reaction is that you can find data to either support or deride HT technology. I do know one thing: lots of Java developers were complaining in the past that HT CPUs would crash all the time with very strange errors, and turning off HT would solve their problems. >shrug<. You gotta do what you gotta do. Too bad those folks paid extra for their super-sexy HT processors.

I had a friend at Rose-Hulman that used to play Unreal Tournament with a couple of friends and me. He had just gotten a dual-CPU machine and decided to play with it a little: he created a dedicated server and set the processor affinity to his second processor (i.e. not the primary one). Then, he started UT in client mode so he could play it, and set the processor affinity for that process to the primary CPU. I’m not sure if it really made any difference than just running them separately with no tweaks, but it was an interesting idea.

When I heard that he had done that, I decided that since the OS itself actually needs very little CPU time to do its stuff, an OS that could monopolize a considerably lower-powered processor and then schedule all user tasks on a much higher-powered processor would be great. Super-fast memory allocation (not that it’s particularly slow in the first place), buffer management, DMA, etc. For most OSs, this also means that the various hardware drivers would run on a CPU that wasn’t being used for applications. That would speed-up graphics processing since even a computer with the latest monster GPU still needs the graphics driver to actually send the data to the GPU to do the work.

Who knows. Maybe someone will steal my idea and make a jillion dollars. That would really suck for me.

Character Assassination

Friday, November 18th, 2005

At the dawn of (computer) time, someone decided that computers being able to deal with letters as well as numbers would be a great idea. And it turned out to be a big ‘ole mess.

The problem is that you have to decide how to encode these letters (or characters) into numbers, which are the only things that computers can handle. EBCDIC and ASCII were two of the first, and while EBCDIC has effectively died, ASCII has turned into a few (relatively compatible) standards such as US-ASCII and ISO-8859-1 (also called “Latin-1”). These jumbles of letters are called character sets, and they describe how to take the concept of a letter and turn it into one or more 8-bit bytes for processing within the computer.

One of the most flexible character sets is called UTF-8, and represents an efficient packing of bytes by only using the minimum necessary. For example, there are jillions of characters out there in human language if you take into account written languages like Chinese, Sanskrit, etc. We would need many bytes to represent all character possibilities (maybe 4 or 5), but UTF-8 has a trick up its sleeve that helps reduce the number of bytes taken up by common (read: Latin-1) characters. It’s also completely backward-compatible with ASCII, which makes it super-handy to use in places where ASCII was already being used, and it’s time to add support for international characters.
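To make that variable-width packing concrete, here’s a quick sketch showing how many bytes a few characters occupy in UTF-8 (the StandardCharsets class used here is a much later addition to Java than anything from this era; the specific characters are just illustrative):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Widths {
    public static void main(String[] args) {
        // Plain ASCII: one byte, identical to its ASCII encoding
        System.out.println("A".getBytes(StandardCharsets.UTF_8).length);  // 1

        // Latin-1's inverted exclamation mark (U+00A1): two bytes (0xC2 0xA1)
        System.out.println("¡".getBytes(StandardCharsets.UTF_8).length);  // 2

        // A CJK character (U+6C49): three bytes
        System.out.println("汉".getBytes(StandardCharsets.UTF_8).length); // 3
    }
}
```

The ASCII range costing exactly one byte is the whole trick: a pure-ASCII document is byte-for-byte identical in UTF-8.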

Now that the history lesson is over, it’s time to complain.

I’m writing an application in the Java programming language, which is generally highly touted as having excellent internationalization (or i18n) support: it has encoding and decoding capability for a number of different character sets (ASCII, UTF-8, Big5, Shift_JIS, any number of ISO-xyz-pdq encodings, etc.), natively uses Unicode (actually, UTF-16, which is a specific encoding of Unicode), and has some really sexy ways to localize content (that’s the process of managing translations of your stuff into non-native languages — such as Spanish being non-native to me, an English speaker).

I was trying to do something very simple: get my application to accept a “funny” (or “international” or non-Latin-1… I’ll just say “funny”, since I don’t use those characters very often) character. I love the Spanish use of open-exclamation and open-question characters. They’re upside-down versions of ! and ? and precede questions and exclamations. It makes sense when you think about it. Anyhow, I was trying to take the string “¡Bienvenidos!”, put it into my database, and get it back out successfully, using a web browser as the client and my own software to move the data back and forth.

It wasn’t working. Repeated submissions/views/re-submissions were resulting in additional characters being inserted before the “¡”. Funny stuff that I had clearly not entered.
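Looking back, that garbage is the classic signature of a charset mismatch. Here’s a sketch of what I suspect was happening: the server decoding my UTF-8 bytes as ISO-8859-1 (that fallback charset is my assumption, not something the logs told me):

```java
import java.nio.charset.StandardCharsets;

public class Mojibake {
    public static void main(String[] args) {
        String original = "¡Bienvenidos!";

        // The browser encodes the string as UTF-8 ("¡" becomes 0xC2 0xA1)...
        byte[] sent = original.getBytes(StandardCharsets.UTF_8);

        // ...but the server decodes those bytes as ISO-8859-1, where each
        // byte is its own character, so "¡" turns into two characters.
        String mangled = new String(sent, StandardCharsets.ISO_8859_1);
        System.out.println(mangled); // Â¡Bienvenidos!

        // Store-and-redisplay repeats the damage: each round trip turns
        // every non-ASCII byte into two bytes, so the prefix keeps growing.
        byte[] resent = mangled.getBytes(StandardCharsets.UTF_8);
        System.out.println(resent.length - sent.length); // 2
    }
}
```

Each submit/view cycle adds another layer of “Â”-style junk, which matches the extra characters piling up before the “¡”.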

I’ve done this before, but the mechanics are miserable and I pretty much block out the painful memories each time it happens.

The problem is that many pieces of code get their grubby little hands on the data from the time you type it on your keyboard and the time it gets into my database. Here is a short list of code that handles those characters, and where opportunities for cock-ups occur.

  • Keyboard controller. Your keyboard has to be able to “type” these characters correctly so that the operating system can read them. I can’t type a “¡” on my keyboard, so I need to take other steps.
  • Your operating system. MS-DOS in its default configuration in the US isn’t going to handle Kanji characters very well.
  • Your web browser. The browser has to take your characters and submit them in a request to the web server. Guess what? There’s a character encoding that is used in the request itself, which can complicate matters.
  • The web server, which may or may not perform any interpretation of the bytes being sent from the web browser.
  • The application server, which provides the code necessary to convert incoming request data into Java strings.
  • My database driver, which shuttles data back and forth between Java and the database server.
  • The database itself, which has to store strings and retrieve them.

I can pretty much absolve the keyboard and operating system at this point. If I can see the “¡” on the screen, I’m pretty happy. I can also be reasonably sure that the web browser knows what character I’m talking about, since it’s being displayed in the text area where I’m entering this stuff. My web server is actually ignoring request content and just piping it through to my app server. The database and driver should be okay, as I have specified that I want UTF-8 to be used both as the storage format of characters in the database, and for communication between the Java database driver and the database server.

That leaves 2 possibilities: the request itself (made by the web browser) or the application server (converts bytes into Java strings).

The first step in determining the problem is research: what happens when the web browser submits the form, and how is it accepted and converted into a Java string?

  1. The web browser creates a request by converting all the data in a form into bytes. It does this by using the content-type “application/x-www-form-urlencoded” and some character encoding. You can ignore the content-type for now.
  2. The request is sent to the server.
  3. The application uses the ServletRequest.getParameter method to get a String value for a request parameter.
  4. The application server reads the parameter out of the request using some character encoding, and converts it into a String.
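Steps 1 and 4 can be simulated outside a servlet container with java.net.URLEncoder and URLDecoder, which perform the same application/x-www-form-urlencoded conversion (the Charset overloads used here are far newer than this post). The charset chosen at decode time is exactly where things can go wrong:

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class FormEncoding {
    public static void main(String[] args) {
        // Step 1: the browser URL-encodes the form data using some charset.
        String encoded = URLEncoder.encode("¡Bienvenidos!", StandardCharsets.UTF_8);
        System.out.println(encoded); // %C2%A1Bienvenidos%21

        // Steps 3-4: decoding with the same charset round-trips cleanly...
        System.out.println(URLDecoder.decode(encoded, StandardCharsets.UTF_8));

        // ...but guessing ISO-8859-1 mangles the leading character into "Â¡".
        System.out.println(URLDecoder.decode(encoded, StandardCharsets.ISO_8859_1));
    }
}
```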

So, it looks like the possibilities for confusion are where the character sets are chosen. The W3C says that <form> elements can specify their preferred character set by using the accept-charset attribute. The default value for that attribute is “UNKNOWN”, which means that the browser is free to choose an arbitrary character set. A semi-tacit recommendation is that the browser use the character encoding that was used to provide the form (i.e. the charset of the current page) as the charset to use to make the request.

That seems relatively straightforward. My responses are currently using UTF-8 as their only charset, so the forms ought to be submitted as UTF-8. Perfect! “¡” ought to successfully be transmitted in UTF-8 format, and go straight-through to my database without ever being mangled. Since this wasn’t happening, there was obviously a problem. What character set *was* the browser using? A quick debug log message ought to help:

DEBUG - request charset=null 

Uh, oh. A null charset means that the app server has to do some of its own thinking, and that usually spells trouble.

Time to take a look at the ‘ole API specification. First stop, ServletRequest.getParameter(), which is the first place my code gets a crack at reading data. There’s no mention of charsets, but it does mention that if you’re using POST (which I am), that calling getInputStream or getReader before calling getParameter might cause problems. That’s a tip-off that one of those methods gets called in order to read the parameter values themselves. Since InputStreams don’t care about character sets (they deal directly with bytes), I can ignore that one. ServletRequest.getReader() claims to throw UnsupportedEncodingException if the encoding is (duh) unsupported, so it must be applying the encoding itself. There is no indication of how the API determines the charset to use.

The HTTP specification has a header field which can be used to communicate the charset to be used to decode the request. The header is “content-type”, and has the form: “Content-Type: major/minor; charset=[charset]”. I already mentioned that the content-type of a form submission was “application/x-www-form-urlencoded”, so I should expect something like “Content-Type: application/x-www-form-urlencoded; charset=UTF-8” to be included in the headers from the browser. Let’s have a look:

DEBUG - Header['host']=[deleted]
DEBUG - Header['user-agent']=Mozilla/5.0 [etc...]
DEBUG - Header['accept']=text/xml, [etc...]
DEBUG - Header['accept-language']=en-us,en;q=0.5
DEBUG - Header['accept-encoding']=gzip,deflate
DEBUG - Header['accept-charset']=ISO-8859-1,utf-8;q=0.7,*;q=0.7
DEBUG - Header['keep-alive']=300
DEBUG - Header['connection']=keep-alive
DEBUG - Header['referer']=[deleted]
DEBUG - Header['cookie']=JSESSIONID=[deleted]
DEBUG - Header['content-type']=application/x-www-form-urlencoded
DEBUG - Header['content-length']=121

Huh? The Content-Type line doesn’t contain a charset. That means that the application server is free to choose one arbitrarily. Again, the unspecified charset comes back to haunt me.

So, the implication is that the web browser is submitting the form using UTF-8, but that the app server is choosing its own character set. Since things aren’t working, I’m assuming that it’s choosing incorrectly. Since the Servlet spec doesn’t say what to do in the absence of a charset in the request, only reading the code can help you figure out what’s going on. Unfortunately, Tomcat’s code is so byzantine that you don’t get very far into the request wrapping and facade classes before you go crazy.

So, you try other things. Maybe the app server is using the default file encoding for the environment (it happens to be “ANSI_X3.4-1968” for me). Setting the “file.encoding” system property changes the file encoding used in the system, so I tried that. No change. The last-ditch effort was to simply smack the request into submission by explicitly setting the character encoding in the request if none was provided by the client (in this case, the browser).

The best way to do this is with a servlet filter, which gets ahold of the request before it is processed by any servlet. I simply check for a null charset and set it to UTF-8 if it’s missing.

public class EncodingFilter
    implements Filter
{
    public static final String DEFAULT_ENCODING = "UTF-8";

    private String _encoding;

    /**
     * Called by the servlet container to indicate to a filter that it is
     * being put into service.
     *
     * @param config The Filter configuration.
     */
    public void init(FilterConfig config)
    {
        _encoding = config.getInitParameter("encoding");
        if(null == _encoding)
            _encoding = DEFAULT_ENCODING;
    }

    protected String getDefaultEncoding()
    {
        return _encoding;
    }

    /**
     * Performs the filtering operation provided by this filter.
     *
     * This filter performs the following:
     *
     * Sets the character encoding on the request to that specified in the
     * init parameters, but only if the request does not already have
     * a specified encoding.
     *
     * @param request The request being made to the server.
     * @param response The response object prepared for the client.
     * @param chain The chain of filters providing request services.
     */
    public void doFilter(ServletRequest request,
                         ServletResponse response,
                         FilterChain chain)
        throws IOException, ServletException
    {
        request.setCharacterEncoding(getCharacterEncoding(request));

        chain.doFilter(request, response);
    }

    protected String getCharacterEncoding(ServletRequest request)
    {
        String charset = request.getCharacterEncoding();

        if(null == charset)
            return this.getDefaultEncoding();
        else
            return charset;
    }

    /**
     * Called by the servlet container to indicate that a filter is being
     * taken out of service.
     */
    public void destroy()
    {
    }
}

This filter has been written before: at least here and here.

It turns out that adding this filter solves the problem. It’s very odd that browsers are not notifying the server about the charset they used to encode their requests. Remember the “accept-charset” attribute from the HTML <form> element? If you specify that to be “ISO-8859-1”, Mozilla Firefox will happily submit using ISO-8859-1 and not tell the server which encoding was used. Same thing with Microsoft Internet Explorer.

I can understand why the browser might choose not to include the charset in the content type header because the server ought to “know” what to expect, since the browser is likely to re-use the charset from the page containing the form. But what if the form comes from one server and submits to another? Neither of these two browsers provide the charset if the form submits to a different page, so it’s not just an “optimization”… it’s an oversight.

There’s actually a bug in Mozilla related to this. Unfortunately, the fix for it was removed because of incompatibilities that the addition of the charset to the content type was causing. Since Mozilla doesn’t want to get the reputation that their browser doesn’t work very well, they decided to drop the charset. :(

The bottom line is that, due to some bad implementations out there that ruin things for everyone, I’m forced to use this awful forced-encoding hack. Fortunately, it “degrades” nicely if and when browsers start enforcing the HTTP specification a little better. My interpretation is that “old” implementations always expect ISO-8859-1 and can’t handle the “charset” portion of the header. Fine. But, if a browser is going to submit data in any format other than ISO-8859-1, then they should include the charset in the header. It’s the only thing that makes sense.

How old are you, really?

Sunday, August 7th, 2005

When a man sits with a pretty girl for an hour, it seems like a minute. But let him sit on a hot stove for a minute–and it’s longer than any hour. That’s relativity.

-Albert Einstein

Reckoning time has always been a problem for humans, it seems. We have argued over which calendar to use for quite a long time. Even worse is trying to figure out how long ago something happened.

Many “how long ago” questions can be answered with a certain degree of slop. For example, “how long ago was Jesus of Nazareth born?” could be answered, “about 2000 years ago”. “When was peace declared at the end of World War II?” “60 years ago”. But what about a question to which the answer should be more specific, such as “how long ago was I born?” I want to know the years, months, and days for that figure, and here’s why.

As part of my continuing work with The Center for Promotion of Child Development Through Primary Care, I have to be able to display ages for the patients that our doctors will be treating. More often than not, these patients are young, so we’re talking about newborns through adolescents. For the newborns, the number of months and days is very important, while the ages of adolescent patients are okay to round off to years and months, and maybe just years.

It turns out that it’s somewhat difficult to answer the question “how old are you?”. It doesn’t really seem all that hard, until you actually try to do it. The problem is that people disagree about a lot of things. For example, you won’t get much argument that there are 10 days separating 2000-01-01 and 2000-01-11, or that there is 1 month separating 2000-01-01 and 2000-02-01. But what about the date difference between 2000-01-31 and 2000-03-01? Is that 30 days or is it 1 month?
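For what it’s worth, Java 8’s java.time package (which arrived long after this post) gives both answers, depending on which question you ask. Taking the pair 2000-01-31 and 2000-03-01:

```java
import java.time.LocalDate;
import java.time.Period;
import java.time.temporal.ChronoUnit;

public class DateAmbiguity {
    public static void main(String[] args) {
        LocalDate a = LocalDate.of(2000, 1, 31);
        LocalDate b = LocalDate.of(2000, 3, 1);

        // Counted in plain days, the gap is 30 (2000 was a leap year).
        System.out.println(ChronoUnit.DAYS.between(a, b)); // 30

        // Counted in calendar fields, it's 1 month and 1 day, because
        // Jan 31 plus one month clamps to Feb 29.
        System.out.println(Period.between(a, b)); // P1M1D
    }
}
```

Neither answer is wrong; they’re answers to different questions, which is the whole point.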

Julian Bucknall is a guy who studies algorithms, at least as a hobby. He has a discussion of time reckoning in software including a sample implementation in C#. Although I appreciate his discussion (and created a few new unit tests based upon some of the problematic date ranges he presents), I don’t entirely agree with how he did his implementation. I happen to be using Java for my purposes, but I did my own implementation because I needed to, not because I’m just a Java wonk.

Before I start, those without a programming background have to realize that most programming languages have very poor tools for handling dates. Mostly they center around counting milliseconds since a certain date (usually 1970-01-01). This is great for quick calculations of numbers of days between events, since a day has a fixed number of milliseconds (1000 ms/sec * 60 sec/min * 60 min/hr * 24 hr/day = 86400000 ms/day).

For those of you who are too smart for your own good, I’m going to be ignoring leap seconds and things like that for the time being, since computers generally don’t handle those, anyway. If you want your computer’s time to be correct to the nearest leap second mandated by the IERS, you should just manually adjust your clock whenever it’s convenient… no date library is going to worry about keeping a list of all leap seconds ever added to civil time.

So, back to dates in software. Since the number of milliseconds in a day is fixed, and computers often represent dates as a number of milliseconds from a fixed date (generally known as the epoch), it’s very easy to calculate the difference between two dates as a number of days. For example, I was born on 1977-10-27. That means that I am 10146 days old (wow, that doesn’t seem like a lot…). But how many years, months, and days old am I?
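Both of those figures are easy to check. Here’s a sketch using java.time (a far more pleasant date API than anything Java offered when I wrote this):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class DaysOld {
    public static void main(String[] args) {
        // 86,400,000 ms per day, as derived above
        long msPerDay = 1000L * 60 * 60 * 24;
        System.out.println(msPerDay); // 86400000

        // Whole days between 1977-10-27 and the day this entry was written
        long days = ChronoUnit.DAYS.between(LocalDate.of(1977, 10, 27),
                                            LocalDate.of(2005, 8, 7));
        System.out.println(days); // 10146
    }
}
```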

Fortunately, for discussion purposes, I’m writing this entry on 2005-08-07, which has both the day-of-month, as well as the month itself, less than the same numbers in my birth date (that is, 8 is less than 10, and 7 is less than 27). That’s good because it makes the math harder. If I had been born on 1977-08-01, then you could count on your fingers that I am 28 years, 0 months, and 6 days old. Since I was born later in the month and later in the year, there are all kinds of fun things that have to happen.

If you were to perform these calculations on your fingers, you’d probably start with the birth date and keep adding years until you couldn’t add them anymore without going over. You’d easily get to 27 and stop (if you had that many fingers). But then, you have to figure out what the differences are between the months and days. Exactly 27 years after my birth would be 2004-10-27. In order to get yourself to 2005-08-07, you need to add a bunch of months. If you add 10 months, you’ll get 2005-08-27, which is too much. So, you have to add 9 months instead, and then figure the days. Exactly 27 years and 9 months after my birth would be 2005-07-27. In order to get to today, you have to add days. If you add 11 days, you’ll get to 2005-08-07. Ta-da!

Now, that didn’t seem too bad, did it? Actually, an implementation which basically follows this on-your-fingers calculation is the one proposed by Julian Bucknall as well as many others on the web. I don’t like this implementation because it is computational overkill (you have to do lots of looping, and most Date object implementations that exist out there will re-calculate a bunch of stuff whenever you update a single field, such as the year or month). I actually wrote mine before I read his article, and I don’t have a C# compiler handy to run his algorithm through my test cases, so I can’t be sure that they yield the same results. At any rate, I have an implementation that should be a little more efficient and meets my needs.

Oh, one last note: we had been using a Java library called BigDate to do our date calculations. I knew it was going to be a pain in the neck to write our own, so we found a library that would do it for us. Unfortunately, it fails with Java Date objects representing dates before 1970-01-01. The author claims that his library handles dates prior to 1970 in contrast to Java’s Date, but it appears that he is wrong on two counts: Java’s Date class does, in fact, handle dates before 1970, and his library trips over them. I was able to use his library by passing-in the year, month, and date separately, but that required me to use deprecated methods in the Date API, and I was already starting to look down my nose at it, slightly. Just for the heck of it, I tried to use BigDate to calculate the date delta between a BCE date and today, and BigDate ignored the era, so I got the wrong answers there, too.

So, I wrote my own implementation (in Java) that quickly calculates deltas for all three fields (I’m not concerned with time, just the date), possibly adjusts them for BCE dates, and then runs a fairly simple algorithm to move the date, then month and year to their correct values. We use a class called DiffDate which just stores a year, month, and date as a return value. I have one method that accepts a pair of Date objects, and one that accepts a pair of Calendars. Use of the Calendar avoids deprecation warnings during compilation, and offers two methods for client code, making it easier to use in situations that call for either Dates or Calendars.

    //
    // Copyright and licence notice: I intend for this code to be freely copied, edited, improved, etc.
    // Please give me (Chris Schultz, http://www.christopherschultz.net/) credit as the source of
    // this code, and let me know if you find ways to improve it.
    //
    public static DiffDate diffDates(Date earlier, Date later)
    {
      Calendar c_e = Calendar.getInstance();
      c_e.setTime(earlier);
      Calendar c_l = Calendar.getInstance();
      c_l.setTime(later);
      return diff(c_e, c_l);
    }

    public static DiffDate diff(Calendar earlier, Calendar later)
    {
      int y1 = earlier.get(Calendar.YEAR);
      int m1 = earlier.get(Calendar.MONTH);
      int d1 = earlier.get(Calendar.DATE);
      int y2 = later.get(Calendar.YEAR);
      int m2 = later.get(Calendar.MONTH);
      int d2 = later.get(Calendar.DATE);

      // Adjust years across eras (BC dates should be negative, here).
      if(java.util.GregorianCalendar.BC == earlier.get(Calendar.ERA))
        y1 = -y1;
      if(java.util.GregorianCalendar.BC == later.get(Calendar.ERA))
        y2 = -y2;

      int d_y = y2 - y1;
      int d_m = m2 - m1;
      int d_d;

      // Now that we've got deltas, start with the days and work backward
      // changing any negatives into positives, and rippling up to larger
      // fields.
      if(d2 >= d1)
      {
        d_d = d2 - d1; // Easy
      }
      else
      {
        // A scratch calendar, cloned from 'later', used to determine
        // how long each month is.
        Calendar work = (Calendar)later.clone();
        while(d1 > d2)
        {
          // Move backward through the months, adding a whole month's
          // worth of days until we have enough days to cover the deficit.
          --m2;  // track our progress backward through the months
          --d_m; // there's now one less month between the dates
          if(0 > m2)
          {
            // We wrapped past January: borrow from the year.
            --d_y;
            work.set(Calendar.YEAR, work.get(Calendar.YEAR) - 1);
            m2 = Calendar.DECEMBER;
          }

          work.set(Calendar.MONTH, m2);
          d2 += work.getActualMaximum(Calendar.DAY_OF_MONTH);
        }

        d_d = d2 - d1;
      }

      // Adjust the months and years
      while(0 > d_m)
      {
        d_m += 12;
        d_y -= 1;
      }

      return new DiffDate(d_y, d_m, d_d);
    }
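
The DiffDate class isn’t shown above; it’s nothing but a dumb holder for the three computed fields. A minimal sketch (only the (years, months, days) constructor is implied by the return statement above; the field and accessor names here are my own invention):

```java
// Hypothetical sketch of the DiffDate value class used above.
// Only the (years, months, days) constructor is implied by diff();
// the accessors and toString() are guesses for illustration.
class DiffDate
{
  private final int years;
  private final int months;
  private final int days;

  public DiffDate(int years, int months, int days)
  {
    this.years = years;
    this.months = months;
    this.days = days;
  }

  public int getYears()  { return years; }
  public int getMonths() { return months; }
  public int getDays()   { return days; }

  public String toString()
  {
    return years + " years, " + months + " months, " + days + " days";
  }
}
```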

The whole thing is very straightforward, with the notable exception of the big “else” block in the middle of the code. That’s where we handle cases where the earlier date has a day-of-month later in the month than the later date. In that case, we need to count backwards, enlisting the help of a Calendar object to give me the lengths of the various months. That ‘work’ calendar really exists only to help with leap-year determination. I suppose I could have used the old “years evenly divisible by 4, except every 100, except every 400” rule, but that would have complicated my code even further and, I think, been inaccurate for old dates because of changes to the calendar. Then again, I think that GregorianCalendar (the default calendar in my locale) uses those same rules, so I’d get the same results in both cases. If you want to calculate dates in October of 1582, you’re on your own.
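
For completeness, the divisibility rule itself is a one-liner. This sketch is mine, not part of the original code; it gives the same answers as GregorianCalendar’s own isLeapYear() for years where the Gregorian rules apply:

```java
// The "divisible by 4, except every 100, except every 400" rule.
// Matches GregorianCalendar.isLeapYear() for proleptic Gregorian
// years; it knows nothing about the missing days of October 1582.
static boolean isLeapByRule(int year)
{
  return (0 == year % 4) && (0 != year % 100 || 0 == year % 400);
}
```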

You may have noticed that this implementation does not handle time zones in any way. That’s because it is intended for age calculation. If you were born in Sydney on 2000-01-01, then it might still have been 1999-12-31 in New York. However, you’re certainly not going to maintain that your birthday is 1999-12-31 when you’re in the US and 2000-01-01 when you’re in Sydney. Or, at least, most of us won’t ;)

It occurs to me that I’d like to write an entirely new Date implementation for Java, to handle things like bizarre missing dates (like October 1582) and a few other things that bother me about the Date class, but it’s just not going to happen. There are too many APIs that already use Date (or Calendar), and they’re not likely to change. Also, one of the things I haven’t liked about those APIs is that they can neither calculate nor store date deltas. I have solved both problems with a delta-date implementation and a simple delta-date class.

So, how old are you, exactly? My code says that I’m 27 years, 9 months, and 11 days old. But I feel much younger than that.
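
For what it’s worth, the no-borrowing case is easy to check by hand. This self-contained sketch uses a made-up birthday of 1978-06-10, chosen only because it reproduces the numbers above for a post dated 2006-03-21:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

// Raw field deltas for the easy case, where the later day-of-month is
// not smaller than the earlier one (so no day borrowing is needed).
Calendar born  = new GregorianCalendar(1978, Calendar.JUNE, 10);
Calendar today = new GregorianCalendar(2006, Calendar.MARCH, 21);

int d_y = today.get(Calendar.YEAR)  - born.get(Calendar.YEAR);   // 2006 - 1978 = 28
int d_m = today.get(Calendar.MONTH) - born.get(Calendar.MONTH);  // March - June = -3
int d_d = today.get(Calendar.DATE)  - born.get(Calendar.DATE);   // 21 - 10 = 11

// Ripple negative months up into the years, as in diff() above.
while(0 > d_m)
{
  d_m += 12;
  d_y -= 1;
}
// Result: 27 years, 9 months, 11 days.
```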

Can I Get A Witness?

Monday, July 18th, 2005

On the 13th of July, Slashdot had a post about a guy who had, using some Javascript and interaction with the HTML DOM, replaced standard checkboxes and radio buttons on web forms with nicer looking versions; basically, he has a way to use nicer looking widgets on a web form. They’re quite attractive. His research can be found on this page.

He mentions in “The Code” section that this is — as far as he knows — an original idea. There’s no date on the page itself, so it’s difficult to tell when he had the idea.

What I can tell you is this: I had this working somewhere between 2005-02-14 and 2005-02-16, according to my CVS logs. I’m working on an online questionnaire delivery and decision-processing system for CHADIS, and I developed this technique over a period of time, finally checking the code into my CVS repository over two days, including updates and fixes. Of course, I have root access on the servers which host our CVS repositories, so I could be falsifying that information; however, I don’t really have any motivation to do so.


Radio buttons replaced with stylized buttons. The orange-lit button is the currently-selected button, while the light-blue-lit one is under the cursor. Update: 2005-07-18 15:09: Oops. When I did a screenshot, the “light-blue” button that I describe disappeared, probably because the cursor disappears when you take a screenshot. I promise that it works ;)

My code is all in Javascript, kept in a separate file with a smattering of onclick-style hooks into the library that I’ve written. All of the form elements are standard HTML form elements, and the whole thing degrades gracefully on non-CSS/Javascript browsers (yes, I test my software on Lynx, thank you very much).


The same page with styles disabled.

I’m sure that someone else has identified this problem (of not being able to adequately style HTML radio buttons and checkboxes) and developed a similar solution as well. Still, it gives me a sense of validation that someone else came to the same conclusions that I did: that this is a problem best solved with Javascript and DOM manipulation, and that degrading usefully on browsers that don’t support all the whiz-bang features is not only possible, but a worthwhile goal.


The same page when viewed with Lynx v2.8.5rel.1.

Spyware Sucks

Wednesday, June 8th, 2005

Although I have never managed to fall victim to spyware or other variants, I am the de-facto Help Desk for my family, as I’m sure many tech-savvy family members are.

My brother-in-law lives in Atlanta, and whenever I visit, I check their computers, apply any required updates and patches, and install anything I think they need. This time, I had known for a few months that his computer was totally hosed, since he had described the problems to me over the phone. I instructed him to disconnect his network cable so that the problems wouldn’t get any worse. About all he wanted to do was play video games as a single player, so losing network access wasn’t a big deal.

Oh, and this is a post about cleaning up Spyware, so of course, he’s running Windows (2000).

When I got here, I expected the clean-up effort to be pretty easy: a little virus scan here, two or three runs of Spybot: Search & Destroy, and that would be that.

Boy, was I wrong.

It turned out that, not only had he fallen victim to some spyware and adware, but that he had been successfully attacked by several Trojan Horses. These programs were running on startup and basically re-installing all the stuff that I kept uninstalling every time.

My first mistake was not running the virus scanner as my first order of business. I ran Spybot a few times between reboots, and some things couldn’t be removed for one reason or another, so I ended up performing some boot-time scans as well. Apparently, this pissed off whatever was installed, and everything seemed to get worse: there was a new program running called SpySheriff, which is a thinly-veiled effort to get you to buy something that you don’t need (i.e. a spyware cleaner that installs more spyware).

After wising up, I installed my favorite anti-virus package, Avast!, which detected viruses running in memory and recommended a boot-time scan. Of course, I accepted the offer and rebooted.

Along with a host of other files that were infected with things like win32:Trojano, win32:Trojan-XYZ (where XYZ is a random number between 1 and 1000), win32:Beavis-A, and a pile of festering, adware-style trash, I was dismayed to learn that explorer.exe was infected with something. I tried to “repair” the file, but there was nothing to be done: it had to be deleted. I did that with a heavy heart, since I didn’t know how Windows would act with its primary shell gone. I figured that I would have to reinstall the OS if it was that badly damaged anyway, so I’d better just delete the damned thing.

After all that, Windows started up, but with no desktop (as I would have expected). Fortunately, CTRL-ALT-DEL still worked, and I was able to run the command prompt and get some real work done. Windows still comes with expand.exe, and I had conveniently copied all of the original files from the installation CD to his hard drive. Using those, I was able to restore explorer.exe and make some more progress.

It took several grueling rounds of reboot, virus scan, killing evil-looking resident programs, spyware scan, and then cleanup of Internet Explorer’s “Temporary Internet Files” folder, because it ended up containing adware on every reboot. But I was finally able to exorcise the machine of all the crap that was on it. Below is a list of extremely useful programs that I keep in my bag of tricks to clean computers:

  • Avast! AntiVirus. Nice, ’cause it’s free for home use and very reliable.
  • Spybot: Search & Destroy. There’s no better spyware cleaner if you ask me.
  • Process Explorer from Sysinternals. This puppy shows you everything, and will even let you find which program is using a particular file and snoop what resources a program is using.
  • Autoruns, also from Sysinternals. This one was new to me, but I had to root out lots of stuff that was re-installing itself on boot, and Windows has something like 12 different places where run-on-boot programs can be specified.

Now, it was time to fix everything that was broken. For example, SpySheriff (or another program, it’s pretty much impossible to tell) changed the Desktop to be a web page telling me that my computer had been “stopped” due to spyware and virus activity. Now that I had cleaned it out (no thanks to SpySheriff), I was going to restore the desktop to its previous state. Unfortunately, the options to change any desktop settings were mysteriously greyed-out. This was obviously the work of nefarious software. I set about finding out what happened and how to fix it. Along the way, I discovered that the following items had been hosed by all the software that had taken over my brother-in-law’s computer.

  • ActiveDesktop permanently* enabled
  • Desktop wallpaper permanently disabled
  • Windows AutoUpdate permanently disabled

* By permanently, I mean that changing these settings back was not possible through the usual user interfaces.

After much searching, I found out how to fix each of these problems. Below, I describe them so that others might have an easier time finding this information.

ActiveDesktop is Disabled

I feel bad that I can’t remember precisely where I found this information, but I believe that checking this registry value will help you out a lot:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\ForceActiveDesktopOn (you’ll want this to be set to “0” — zero).

Desktop Wallpaper is Disabled

Using the information found at this site (http://www.bleepingcomputer.com/forums/How_to_remove_the_Smitfraud_or_Wpexe_bswexe_WindowsFY-t17258.html), I found a registry script that you can run to clean up after a handful of evil programs. I was leery of blindly running that script on my own registry, so I picked out those changes that seemed to make sense for me. Here they are for convenience:

Delete the following keys and their values:

In HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System:
NoDispAppearancePage, Wallpaper, WallpaperStyle, and NoDispBackgroundImage

In HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer:
NoActiveDesktopChanges

In HKEY_CURRENT_USER\Control Panel\Desktop:
Wallpaper, WallpaperStyle
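
If you’d rather not click through regedit by hand, the deletions above, plus the ForceActiveDesktopOn value from the previous section, can be rolled into a single .reg file. This is my own reconstruction, not the script from that site, so back up your registry before merging anything like it (a trailing "=-" deletes the named value):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"ForceActiveDesktopOn"=dword:00000000
"NoActiveDesktopChanges"=-

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System]
"NoDispAppearancePage"=-
"Wallpaper"=-
"WallpaperStyle"=-
"NoDispBackgroundImage"=-

[HKEY_CURRENT_USER\Control Panel\Desktop]
"Wallpaper"=-
"WallpaperStyle"=-
```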

AutoUpdate is Disabled

I first found this site (http://www.amset.info/windows/auto-updates.asp), which told me which registry keys were significant (search for “greyed out”). Unfortunately, there was more to it than their suggested fix, so I went looking for more information about those registry values. I found this site (http://www.susserver.com/FAQs/FAQ-InterpretingAUStateValues.asp), which describes all the possible values and their meanings. That didn’t help too much, so I kept looking and found this site (http://snakefoot.fateback.com/tweak/winnt/service/abc.html#AUTOMATIC_UPDATES), which pretty much lays it all out for you. Using that information, I was able to correct all of the registry values that had been hosed.

Since I was both working and enjoying my family — including my nephew who was celebrating his 1st birthday — it took me 4 days to do all this. *Sigh*

Needless to say, he’s now running with all the latest OS patches, Avast!, Spybot, and, of course, Mozilla Firefox as a replacement for Internet Explorer.

Questo addatore é no bene!

Tuesday, March 15th, 2005

Before leaving for Italy, I double-checked to make sure that the power adaptor (brick) was capable of handling European voltages. Sure enough, the good people at HP have furnished me with a power supply capable of accepting 100-240V at 2A and 50-60Hz, which covers most of the world. What I did not have was the adaptor necessary to convert the North American square, slender prongs into the round ones used in Italy. Italy also has two different sizes: picolo and grande.

No problem. My dad has been overseas quite a bit, and he even has extra ones because not all of his trips have been as well-planned (or packed) as they could have been. So, I borrowed a pair of adaptors which, when used appropriately, can get me plugged in just about anywhere.

There’s only one problem: they’re dual-pronged instead of tri-pronged.

One day, maybe a year ago, I was sitting on Brian’s balcony, enjoying the first few warm days of spring. It didn’t matter that I couldn’t see my own laptop screen… I wasn’t intending to get any great amount of work done that day. Brian has a pair of outlets on the balcony, which was nice, because our laptops can’t stay alive very long with the wireless network in constant use. So, we were plugged in. Brian happened to be using an extension cord from inside the house, and I was using one of the external outlets with one of those “grounded outlets are for suckers” electrical adaptors that simply eliminate the ground line (my laptop has a 3-pronged plug, and the outdoor outlets are dual-pronged).

Occasionally, I felt like I was being bitten on the inside of my forearm by a small insect or something. It seemed strange that an insect would situate itself directly between my arms and my laptop, where I was resting my wrists on the keyboard. Yes, it’s not very good posture, but I’m pretty well-insured.

It seemed to be getting worse. I was scratching my wrists and trying to locate the bug, which I assumed was too small to see. Then, the biting stopped. I concluded that the bug was either dead or gone, and I didn’t care which. So, I continued with my work.

I was about to get up to go inside for something to drink, and when I put my stocking feet down on the floor, I was bitten again, this time very sharply. I immediately took my feet off the floor and the biting stopped. Eureka!

The mystery was solved: without a ground plug, my laptop was grounding itself through me: a nice briny conductor connected more directly to natural ground (the concrete, in this case) than anything else. So, I was shocking myself over and over. With my feet off the ground, I was safe from the circuit created by Brian’s outlet, my body, and the ground.

In a house full of tech gear, this was easy to fix; not so in Italy.

So, I have been to many stores trying, in broken English or, like this morning, broken Italian, to describe what I want. It is very difficult. I can communicate that I’m looking for a 3-pronged adaptor (addatore electrica), but asking for one with a continuous ground line is proving to be nearly impossible. I told one woman this morning that I was being shocked by my computer with my current adaptor. She said she understood, handed me two adaptors that didn’t seem to connect the ground lines, and said “no shock”. So, I said “okay” and bought one of them, since I already had one that matched the other.

At home, I tested it out and shocked myself. Molto grazie, signorina.

Yesterday, I found a guy who had precisely what I was looking for. It was €8,50, and I told the salesman that I’d look for a better deal. Today, I was back in his shop, and after I asked him for the addatore electrica, he went right for the one on which we had settled the day before. I am currently sitting, barefoot, with my wrists resting on my keyboard, typing this post.

I am not being shocked.

Far from Routine

Saturday, March 12th, 2005

It’s been almost one week and I’ve only had a single emergency at work. Apparently, an unscheduled reboot of our intranet server irreparably hosed our LDAP database, which needs to work in order for everyone to use our intranet web site, including some of the software demonstrations that we have available, our on-line DAV-based file server, and our bug database.

Fortunately, the last time something terrible happened to our intranet server, I actually took the time to schedule regular backups of everything we have, including major databases such as our LDAP directory. I just had to take the time to figure out what was going wrong. I thought that restarting the LDAP server process would help, so I did that, and I was able (I think) to get into our intranet web site. I must not have used HTTPS, because I found out later that it was still broken. Of course, when I only check my email twice a day, and only once while anyone in the States is awake, it’s hard to find out that things are still broken.

Fortunately, most problems in a UNIX operating system are fixed by this short and sweet process:

  1. Murder the process
  2. Delete the files (in this case, the database)
  3. Re-start the process
  4. Re-load the database from a backup

That took me about 45 seconds to do. Too bad it took me 24 hours to figure out the problem. At least I didn’t do the opposite and use a 45-second investigation to effect a 24-hour solution ;)

Having not been disconnected from the umbilical cord I typically maintain between myself and the Internet for quite some time, I’m still getting used to the idea of doing things offline. For example, I don’t want to write these blog entries while sitting in an Internet Point, because it costs me money to do so. Therefore, I write them at home and then upload them quickly. There’s a way I can send an email to WordPress to post the entry for me, but I haven’t set that up yet, so I have to use good old metapad to write them, in HTML. I started out using OpenOffice.org, but then realized that, although great for writing normal documents, it can’t easily export to HTML — at least not HTML without tons of junk in the resulting document.

Sending and receiving email is also strange, since I end up just syncing everything when I connect, and then leaving to go somewhere else. I read the email at my leisure, and write back when I read the message and have something to say. The next time I connect, everything gets sent, and a new batch of mail comes in. On top of that, my first trip of the day occurs at about four o’clock in the morning on the east coast of the US, so nobody’s going to read anything anytime soon. I find myself having difficulty phrasing some things, especially when time is involved. If I have to say “I’m about to do [whatever]”, then, by the time the recipient reads the message, whatever it was will likely be done. So, should I say “I’ve already done [whatever]”? Probably not, because, as I write the message, I haven’t actually done whatever it is that needs to be done.

Now I know why there are all of these obscure kinds of cases in languages. Describing the past in the future tense is bizarre. “By the time you read this, I will have completed the task I am about to start.” It’s a head-scratcher.

It’s still somewhat cold here in Florence, so we have to make the most of the time when the sun is available for warmth. That means that we get up, have an espresso, and then get out into the city to do whatever. The past few days have been spent going to markets to get food for a single day. We come home and Katie makes something to eat for lunch while I do some work so I can keep my job. Then, we try to go out and do something enjoyable in the city. Usually, it’s nothing more exciting than a stroll, which is actually quite nice.

For at least two reasons, I find myself in the unexpected position of not wanting to go into any of the classic Florentine historical sites. Katie and I hit most of the big ones when we were here on our honeymoon: the Uffizi, Bargello, and Accademia galleries, most of the basilicas, the Palazzo Pitti and attached Boboli Gardens, and most of the piazzas where people mostly hang out and try to sell you sunglasses and prints of famous works of art. Around Easter, my parents will be coming to visit, followed by my sister and brother-in-law and my new nephew, Joshua. During their respective visits, I’m sure we’ll play Florentine host to them, taking them from one point of interest to the next, so there’s really no reason for me to do all of that now. The real question is how to convince them that they don’t need to see the big sites, but that they should help us fill in the gaps that we missed in the past…

On our strolls, we try to wander aimlessly through the city, especially in, around, and through places we’ve never ventured. For example, today we went across the Arno (to the “left” side) and then west to see Santa Maria del Carmine. My mother was interested in the frescoes there, so we wanted to see how long the walk would take and whether it was a nice one. What she doesn’t realize is that you can’t walk 10 meters here without seeing a fresco!

With the afternoon waning, we return home and I generally work from then until dinnertime. Another trip to La Ch@t for an email refresh (this time, while my colleagues are actually awake!) and I return to work. It helps keep my mind off the fact that it’s still pretty cold.

There’s a warm front coming in, and it rained this morning, so hopefully things will be warming up somewhat soon. I’d rather not wait for Easter to roll around before I can wear fewer than 3 shirts plus my jacket when I go out.

Nice Rack

Thursday, December 16th, 2004

In an effort to reduce the physical volume of computer hardware that I keep in my home, I have decided to convert my servers from towers to 1U rack-mount cases. I don’t need anything too powerful, so 1U machines are large enough for my purposes.

So, where does one start when one is going to replace an army of towers with an army of rack-mounts? eBay, of course! I located a (working) machine for sale on eBay which ended up costing me about US$300 after shipping. It was an AMD Athlon 1700+ with 256MB RAM and a 40GB hard disk. Not too shabby, given that a lot of the stuff on eBay consists of barebones 4U Intel boxes for $250. I was quite happy with my purchase, particularly because the seller shipped it quickly and the machine was as advertised (which was as-is, but hey, at least it booted!).

The first thing I noticed after I had started it up to determine its DOA status was that there was a tag hanging from one of the case handles. It read: “W2 Memory Bad”. Sure, the machine booted (into a basic install of Windows XP), but that’s no indication that the hardware doesn’t suck. So, I got out my trusty memtest86 CD and checked out the machine.

I’m not sure exactly which test was running when the machine died, but this was the result:

Hosed Memtest86

Wow! That’s crazy. The machine was totally hosed, too — not just the display. Since I have 4 machines around that have similar processors (AMD Socket 462’s) as well as compatible RAM (PC2100 and PC2700 DDR SDRAM), I had some hardware to use for a process-of-elimination game. The easiest component to check is the RAM, so I put the possibly bad RAM into another machine with known good hardware. Memtest86 says everything’s okay. :( So, I put known good RAM into the failing machine, and memtest86 indicates that things are no good. :(

In this way, I tested a total of four components: RAM, CPU, CPU fan and motherboard. Sadly, it appeared that the motherboard was the problem. Good thing for me, I was planning to install the operating system from scratch, so I could pick pretty much any hardware-compatible motherboard. The plan was to get something as soon as possible at the lowest price.

I’ve lived in Arlington, Virginia (USA) for more than four years now, and I have yet to locate a computer shop that sells components — i.e. not just off-the-shelf brand-name systems. Other than the occasional “computer show and sale”, I rarely go anywhere to just graze among the available hardware to see if anything interests me. Back in Gaithersburg, though, there is a place that has all that: The Computer Place. TCP has supplied my computer-hardware habit a number of times, both in and out of travelling computer shows. Their stuff is a touch on the expensive side, but always rock-solid quality.

They have a store in Falls Church (just west of Arlington), but I had other things to do last night and the Gaithersburg store was more convenient, so I decided to drop by. I had identified a motherboard that met my specifications (AMD Thoroughbred 1700+, DDR memory, built-in VGA — it’s a rack, remember?): the ASUS A7V400-MX. It even physically fit into the rack case: 24.5cm per side.

I bought the board and took it home. Immediately, I noticed that there was a problem:

Audio Header is Too Tall

The audio header sticks up almost 1cm above the case opening. Just in case you hadn’t guessed, this is a major problem: if the motherboard doesn’t fit, it’s pretty much a failure. Taking the motherboard back wasn’t an option: they’d charge me a 15% restocking fee, which I’m not about to pay given that the board is perfectly good; I’d sooner craigslist it and recoup the entire cost. However, I was in a hurry. I want this machine running now, because it’s going to be replacing all the network services running on a machine destined to be a holiday gift for someone in the family. It’s got to be now.

I had a plan that was just stupid enough to work.

Motherboard soldering [1 of 4]: preparation Motherboard soldering [2 of 4]: close-up of the pins to be de-soldered Motherboard soldering [3 of 4]: header cover removed Motherboard soldering [4 of 4]: audio header completely removed

Now, this isn’t going to win any IEEE awards in…

  • The good idea category
  • The steady hands category
  • The well-ventilated room category
  • and certainly not
  • The safety equipment and proper use thereof category

But, this was a pretty successful hardware hack if I do say so myself. I give myself an enormous amount of credit for this mini-project due to the following considerations:

  1. I haven’t held a soldering iron in my hand since I was about 12 years old.
  2. I did not burn down my condo, let alone the entire building.
  3. I did not inflict any permanent and/or unsightly injuries to either myself or my wife, who was willing to hold the board at an angle for me for several minutes before losing interest and moving on to other things
  4. I soldered neither myself nor any other object to the board in any way (which, sadly, I cannot claim regarding previous incidents involving “hot-glue” and other objects)
  5. I did not solder any two parts together on the board that were not intended to be connected.
  6. The motherboard still works (except for the audio plugs, of course)
  7. and, of course
  8. The motherboard now fits into the rack case

Motherboard sans audio header: a perfect fit

I finished my game of Operation this afternoon and I’m writing this entry in homage to my ‘1337 h4x0rz 5k331z. However, there’s more to the story: I’m also trying out a new Linux distribution, Gentoo Linux.

My all-time favorite distro was Red Hat Linux, but they switched over to their “Fedora Core” product, which isn’t ready for prime time yet. I liked Red Hat because of its super-easy package management. I would have stuck with Red Hat Linux 9.0, except that Ximian’s red-carpet feed stopped making updated packages available. I’ve also had experience with debian‘s package manager, but I have to say that I found it overly complex and usually out of date. Gentoo looks like it’s got pretty good package management, and they’re relatively up-to-date with versions, etc. The only problem is that the gentoo folks are all about two things:

  1. More voices and more choices (or is that Ralph Nader?)
  2. Compile early, compile often (this must be a distro based in Chicago)

Now, I love a good all-day compile as much as the next guy (that is, not at all), but this “everything gets compiled specifically for your unique processor” stuff may not be for me. The first painful compile I’ve had to endure (that would be the one that’s still compiling) is mysql. But you can’t really do anything about that: mysql is just huge.

Anyhow, I’ll be happy when I’ve got the system to the point where I can stash it at the back of the closet where it belongs.

PC Tablets

Sunday, November 7th, 2004

My company provides software to pediatricians to improve pediatric care. The doctor) use our software during patient visits on PC tablets which have pens to capture text input. T-‘incurrent (y i up.s,.,,….of a tablet that F am testing so I can have experience with take tablets themselves.

I am writing this entire post using the pen input device. As you can see, the first part of the message was quite successful but things went awry when I tried to say “I’m currently in possession…”. I thi nkt hat’ she cause I was trying top u-i too much in the little handwritin, input box; plus, I have awful handwriting, so the tablet should actually be applauded for being able to read my chicken-sc rate h in the first place. (I’ve decided not to correct any of the mistakes made by the recognition system so you can see how well the system works, and where it fails).

So, my analysis is that the handwriting recognition on the tablet is pretty darned good. However, there’s a lot more than hander writing recognition necessary to make a pen-based interface work. Sometimes, the white space needs to be tweaked. When that happens, the pen-based interface breaks down. Also, if you screw-up the strokes of a letter and change your for mind, then there’s no recourse other than resorting to the keyboard or using a small number of buttons to the right (armed to return to the mistake and correct it using “bksp” and “del” buttons that you can tap with the pen–it acts just as a mouse in that context.

I’ve usedadi iferen.itablet in the past (this one is an HP, and I’ve also used a Toshiba) which had a be double -ended, Jen. The other end let you erase your strokes, which helped quite a bit. This one does not seem to have such a feature, which I miss.

Simply entering tee* seems to work relatively well, but when there’s something to be done that’s not handwriting -related,” the process gets bogged-down with exception cases.

Another problem that I see with the handwriting interface is entering passwords. Since are deal with medical data, which is very sensitive, most of our applications have password authentication. Entering a password using the pen can be done in two ways: by writing on the screen and using the handwriting. recognition system, or by switching to an oh-screen keyboard, where goa can enter your password, hunt-and-peck style, into the password field. Both choices make it painfully easy for someone to serrepticiorsly view the password being entered-i either by simply reading it off the screen, or by watching you type-in the password trey-by-key. Both of these options pretty much suck.

All in all, the technology in use here is pretty sexy. I’m hoping th a-ii in-‘.’me, th chard w.: l, any System will get better and be able to understand any chicken-scratch I can throw at it. -l-also hope that the non-handwriting stuff gets better, as well as’ the ways that sensitive information is entered (such as passwords).

Wow. The HW recognition system finally gut a pair of parenthesis right. Things are already getting better!