Archive for the ‘Java’ Category

Properly Handling Pooled JDBC Connections

Monday, March 16th, 2009

I’m an active member of the Tomcat-users mailing list, and I see lots of folks post questions about not being able to get a new database connection. The answer is simple: you have exhausted your JDBC connection pool. The answer is also not so simple, because the reasons for that situation can vary, but most likely your application is not properly handling pooled connections. Read on for information on how to code your app to properly handle such pooled connections.
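The short version, as a minimal sketch (the UserDao class, the users table, and the DataSource wiring are placeholders for illustration, not code from the full article): get the connection from the pool, use it, and return it in a finally block so that it goes back to the pool even when the query throws.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

public class UserDao
{
    private DataSource _dataSource; // the pooled DataSource, e.g. looked up via JNDI

    public void printUserName(long userId)
        throws SQLException
    {
        Connection conn = null;
        PreparedStatement ps = null;
        ResultSet rs = null;

        try
        {
            conn = _dataSource.getConnection();
            ps = conn.prepareStatement("SELECT name FROM users WHERE id = ?");
            ps.setLong(1, userId);
            rs = ps.executeQuery();

            while(rs.next())
                System.out.println(rs.getString("name"));
        }
        finally
        {
            // Close in reverse order of acquisition. For a pooled connection,
            // close() returns the connection to the pool rather than closing
            // the underlying physical connection.
            if(null != rs)   try { rs.close(); }   catch(SQLException e) { /* log it */ }
            if(null != ps)   try { ps.close(); }   catch(SQLException e) { /* log it */ }
            if(null != conn) try { conn.close(); } catch(SQLException e) { /* log it */ }
        }
    }
}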

(more…)

Character Assassination

Friday, November 18th, 2005

At the dawn of (computer) time, someone decided that computers being able to deal with letters as well as numbers would be a great idea. And it turned out to be a big ‘ole mess.

The problem is that you have to decide how to encode these letters (or characters) into numbers, which are the only things that computers can handle. EBCDIC and ASCII were two of the first encodings, and while EBCDIC has effectively died, ASCII has turned into a few (relatively compatible) standards such as US-ASCII and ISO-8859-1 (also called “Latin-1”). These jumbles of letters are called character sets, and they describe how to take the concept of a letter and turn it into one or more 8-bit bytes for processing within the computer.

One of the most flexible character sets is called UTF-8, and it represents an efficient packing of bytes by using only the minimum necessary. For example, there are jillions of characters out there in human language if you take into account written languages like Chinese, Sanskrit, etc. We would need many bytes (maybe 4 or 5) to represent all character possibilities, but UTF-8 has a trick up its sleeve that helps reduce the number of bytes taken up by common (read: Latin-1) characters. It’s also completely backward-compatible with ASCII, which makes it super-handy to use in places where ASCII was already being used, and it’s time to add support for international characters.
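To make that concrete, here’s a tiny sketch of my own (not from the application described below) showing how the same string takes up a different number of bytes in different encodings:

import java.io.UnsupportedEncodingException;

public class EncodingSizes
{
    public static void main(String[] args)
        throws UnsupportedEncodingException
    {
        String s = "\u00a1Bienvenidos!"; // "¡Bienvenidos!", 13 characters

        // The 12 plain-ASCII characters take one byte each in UTF-8; only
        // the "¡" (U+00A1) needs two bytes (0xC2 0xA1), for 14 bytes total.
        System.out.println(s.getBytes("UTF-8").length);      // 14

        // In ISO-8859-1, "¡" is the single byte 0xA1, so 13 bytes total.
        System.out.println(s.getBytes("ISO-8859-1").length); // 13
    }
}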

Now that the history lesson is over, it’s time to complain.

I’m writing an application in the Java programming language, which is generally highly touted as having excellent internationalization (or i18n) support: it has encoding and decoding capability for a number of different character sets (ASCII, UTF-8, Big5, Shift_JIS, any number of ISO-xyz-pdq encodings, etc.), it natively uses Unicode (actually UTF-16, which is a specific encoding of Unicode), and it has some really sexy ways to localize content (that’s the process of managing translations of your stuff into languages that aren’t native to you, such as Spanish for me, an English speaker).

I was trying to do something very simple: get my application to accept a “funny” (or “international” or non-Latin-1… I’ll just say “funny”, since I don’t use those characters very often) character. I love the Spanish use of the open-exclamation and open-question characters. They’re upside-down versions of ! and ? and precede questions and exclamations. It makes sense when you think about it. Anyhow, I was trying to successfully take the string “¡Bienvenidos!”, put it into my database, and get it back out successfully, using a web browser as the client and my own software to move the data back and forth.

It wasn’t working. Repeated submissions/views/re-submissions were resulting in additional characters being inserted before the “¡”. Funny stuff that I had clearly not entered.
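Extra characters that multiply with every round trip are the classic sign of bytes being decoded with the wrong charset and then re-encoded. Here’s a small sketch of my own (not code from the application) that reproduces the effect:

import java.io.UnsupportedEncodingException;

public class Mojibake
{
    public static void main(String[] args)
        throws UnsupportedEncodingException
    {
        String original = "\u00a1Bienvenidos!"; // "¡Bienvenidos!"

        // Encode as UTF-8 (what the browser sent), then decode as ISO-8859-1
        // (what the server guessed): the two-byte UTF-8 sequence for "¡"
        // becomes the two characters "Â¡".
        String garbled = new String(original.getBytes("UTF-8"), "ISO-8859-1");
        System.out.println(garbled); // Â¡Bienvenidos!

        // Submit, store, and redisplay again, and the junk keeps growing.
        String worse = new String(garbled.getBytes("UTF-8"), "ISO-8859-1");
        System.out.println(worse); // two more bogus characters in front
    }
}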

I’ve done this before, but the mechanics are miserable and I pretty much block out the painful memories each time it happens.

The problem is that many pieces of code get their grubby little hands on the data between the time you type it on your keyboard and the time it gets into my database. Here is a short list of the code that handles those characters, and where opportunities for cock-ups occur.

  • Keyboard controller. Your keyboard has to be able to “type” these characters correctly so that the operating system can read them. I can’t type a “¡” on my keyboard, so I need to take other steps.
  • Your operating system. MS-DOS in its default configuration in the US isn’t going to handle Kanji characters very well.
  • Your web browser. The browser has to take your characters and submit them in a request to the web server. Guess what? There’s a character encoding that is used in the request itself, which can complicate matters.
  • The web server, which may or may not perform any interpretation of the bytes being sent from the web browser.
  • The application server, which provides the code necessary to convert incoming request data into Java strings.
  • My database driver, which shuttles data back and forth between Java and the database server.
  • The database itself, which has to store strings and retrieve them.

I can pretty much absolve the keyboard and operating system at this point. If I can see the “¡” on the screen, I’m pretty happy. I can also be reasonably sure that the web browser knows what character I’m talking about, since it’s being displayed in the text area where I’m entering this stuff. My web server is actually ignoring request content and just piping it through to my app server. The database and driver should be okay, as I have specified that I want UTF-8 to be used both as the storage format of characters in the database, and for communication between the Java database driver and the database server.

That leaves 2 possibilities: the request itself (made by the web browser) or the application server (converts bytes into Java strings).

The first step in determining the problem is research: what happens when the web browser submits the form, and how is it accepted and converted into a Java string?

  1. The web browser creates a request by converting all the data in a form into bytes. It does this by using the content-type “application/x-www-form-urlencoded” and some character encoding. You can ignore the content-type for now.
  2. The request is sent to the server.
  3. The application uses the ServletRequest.getParameter method to get a String value for a request parameter.
  4. The application server reads the parameter out of the request using some character encoding, and converts it into a String.

So, it looks like the possibilities for confusion are where the character sets are chosen. The W3C says that <form> elements can specify their preferred character set by using the accept-charset attribute. The default value for that attribute is “UNKNOWN”, which means that the browser is free to choose an arbitrary character set. A semi-tacit recommendation is that the browser use the character encoding that was used to provide the form (i.e. the charset of the current page) as the charset to use to make the request.

That seems relatively straightforward. My responses are currently using UTF-8 as their only charset, so the forms ought to be submitted as UTF-8. Perfect! “¡” ought to successfully be transmitted in UTF-8 format, and go straight-through to my database without ever being mangled. Since this wasn’t happening, there was obviously a problem. What character set *was* the browser using? A quick debug log message ought to help:

DEBUG - request charset=null 

Uh, oh. A null charset means that the app server has to do some of its own thinking, and that usually spells trouble.
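(A quick aside on where output like this comes from: the logging code isn’t shown here, but something along these lines in a servlet or filter will print both the declared request charset and the incoming headers. The Logger is just an example; use whatever logging your app already has.)

import java.util.Enumeration;

import javax.servlet.http.HttpServletRequest;

import org.apache.log4j.Logger;

public class RequestDebugger
{
    private static final Logger log = Logger.getLogger(RequestDebugger.class);

    public static void dump(HttpServletRequest request)
    {
        // The charset the client declared for the request body, or null
        // if the Content-Type header didn't include one.
        log.debug("request charset=" + request.getCharacterEncoding());

        // All of the headers that arrived with the request.
        for(Enumeration names = request.getHeaderNames(); names.hasMoreElements(); )
        {
            String name = (String)names.nextElement();
            log.debug("Header['" + name + "']=" + request.getHeader(name));
        }
    }
}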

Time to take a look at the ‘ole API specification. First stop, ServletRequest.getParameter(), which is the first place my code gets a crack at reading data. There’s no mention of charsets, but it does mention that if you’re using POST (which I am), calling getInputStream or getReader before calling getParameter might cause problems. That’s a tip-off that one of those methods gets called in order to read the parameter values themselves. Since InputStreams don’t care about character sets (they deal directly with bytes), I can ignore that one. ServletRequest.getReader() claims to throw UnsupportedEncodingException if the encoding is (duh) unsupported, so it must be applying the encoding itself. There is no indication of how the API determines the charset to use.

The HTTP specification has a header field which can be used to communicate the charset to be used to decode the request. The header is “content-type”, and has the form: “Content-Type: major/minor; charset=[charset]”. I already mentioned that the content-type of a form submission was “application/x-www-form-urlencoded”, so I should expect something like “Content-Type: application/x-www-form-urlencoded; charset=UTF-8” to be included in the headers from the browser. Let’s have a look:

DEBUG - Header['host']=[deleted]
DEBUG - Header['user-agent']=Mozilla/5.0 [etc...]
DEBUG - Header['accept']=text/xml, [etc...]
DEBUG - Header['accept-language']=en-us,en;q=0.5
DEBUG - Header['accept-encoding']=gzip,deflate
DEBUG - Header['accept-charset']=ISO-8859-1,utf-8;q=0.7,*;q=0.7
DEBUG - Header['keep-alive']=300
DEBUG - Header['connection']=keep-alive
DEBUG - Header['referer']=[deleted]
DEBUG - Header['cookie']=JSESSIONID=[deleted]
DEBUG - Header['content-type']=application/x-www-form-urlencoded
DEBUG RequestDumper- Header['content-length']=121

Huh? The Content-Type line doesn’t contain a charset. That means that the application server is free to choose one arbitrarily. Again, the unspecified charset comes back to haunt me.

So, the implication is that the web browser is submitting the form using UTF-8, but that the app server is choosing its own character set. Since things aren’t working, I’m assuming that it’s choosing incorrectly. Since the Servlet spec doesn’t say what to do in the absence of a charset in the request, only reading the code can help you figure out what’s going on. Unfortunately, Tomcat’s code is so byzantine, you don’t get very far into the request wrapping and facade classes before you go crazy.

So, you try other things. Maybe the app server is using the default file encoding for the environment (it happens to be “ANSI_X3.4-1968” for me). Setting the “file.encoding” system property changes the file encoding used in the system, so I tried that. No change. The last-ditch effort was to simply smack the request into submission by explicitly setting the character encoding on the request if none was provided by the client (in this case, the browser).

The best way to do this is with a servlet filter, which gets ahold of the request before it is processed by any servlet. I simply check for a null charset and set it to UTF-8 if it’s missing.

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class EncodingFilter
    implements Filter
{
    public static final String DEFAULT_ENCODING = "UTF-8";

    private String _encoding;

    /**
     * Called by the servlet container to indicate to a filter that it is
     * being put into service.
     *
     * @param config The Filter configuration.
     */
    public void init(FilterConfig config)
    {
	_encoding = config.getInitParameter("encoding");
	if(null == _encoding)
	    _encoding = DEFAULT_ENCODING;
    }

    protected String getDefaultEncoding()
    {
	return _encoding;
    }

    /**
     * Performs the filtering operation provided by this filter.
     *
     * This filter performs the following:
     *
     * Sets the character encoding on the request to that specified in the
     * init parameters, but only if the request does not already have
     * a specified encoding.
     *
     * @param request The request being made to the server.
     * @param response The response object prepared for the client.
     * @param chain The chain of filters providing request services.
     */
    public void doFilter(ServletRequest request,
			 ServletResponse response,
			 FilterChain chain)
	throws IOException, ServletException
    {
	request.setCharacterEncoding(getCharacterEncoding(request));

	chain.doFilter(request, response);
    }

    protected String getCharacterEncoding(ServletRequest request)
    {
	String charset=request.getCharacterEncoding();

	if(null == charset)
	    return this.getDefaultEncoding();
	else
	    return charset;
    }

    /**
     * Called by the servlet container to indicate that a filter is being
     * taken out of service.
     */
    public void destroy()
    {
    }
}

This filter has been written before: at least here and here.

It turns out that adding this filter solves the problem. It’s very odd that browsers are not notifying the server about the charset they used to encode their requests. Remember the “accept-charset” attribute from the HTML <form> element? If you specify that to be “ISO-8859-1”, Mozilla Firefox will happily submit using ISO-8859-1 and not tell the server which encoding was used. Same thing with Microsoft Internet Explorer.

I can understand why the browser might choose not to include the charset in the content type header because the server ought to “know” what to expect, since the browser is likely to re-use the charset from the page containing the form. But what if the form comes from one server and submits to another? Neither of these two browsers provide the charset if the form submits to a different page, so it’s not just an “optimization”… it’s an oversight.

There’s actually a bug in Mozilla related to this. Unfortunately, the fix for it was removed because of incompatibilities that the addition of the charset to the content type was causing. Since Mozilla doesn’t want to get the reputation that their browser doesn’t work very well, they decided to drop the charset. :(

The bottom line is that, due to some bad implementations out there that ruin things for everyone, I’m forced to use this awful forced-encoding hack. Fortunately, it “degrades” nicely if and when browsers start enforcing the HTTP specification a little better. My interpretation is that “old” implementations always expect ISO-8859-1 and can’t handle the “charset” portion of the header. Fine. But, if a browser is going to submit data in any format other than ISO-8859-1, then they should include the charset in the header. It’s the only thing that makes sense.

How old are you, really?

Sunday, August 7th, 2005

When a man sits with a pretty girl for an hour, it seems like a minute. But let him sit on a hot stove for a minute–and it’s longer than any hour. That’s relativity.

-Albert Einstein

Reckoning time has always been a problem for humans, it seems. We have argued over which calendar to use for quite a long time. Even worse is trying to figure out how long ago something happened.

The answers to many “how long ago” questions can be given with a certain degree of slop. For example, “how long ago was Jesus of Nazareth born?” could be answered, “about 2000 years ago”. “When was peace declared at the end of World War II?”, “60 years ago”. But what about a question to which the answer should be more specific, such as “how long ago was I born?”. I want to know the years, months, and days for that figure, and here’s why.

As part of my continuing work with The Center for Promotion of Child Development Through Primary Care, I have to be able to display ages for patients that our doctors will be treating. More often than not, these patients are young, so we’re talking about newborns through adolescents. For the newborns, the number of months and days is very important, while the ages of adolescent patients can be rounded off to years and months, or maybe just years.

It turns out that it’s somewhat difficult to answer the question “how old are you?”. It doesn’t really seem all that hard, until you actually try to do it. The problem is that people disagree about a lot of things. For example, you won’t get much argument that there are 10 days separating 2000-01-01 and 2000-01-11, or that there is 1 month separating 2000-01-01 and 2000-02-01. But what about the date difference between 2000-01-31 and 2000-02-30? Is that 30 days or is it 1 month?

Julian Bucknall is a guy who studies algorithms, at least as a hobby. He has a discussion of time reckoning in software including a sample implementation in C#. Although I appreciate his discussion (and created a few new unit tests based upon some of the problematic date ranges he presents), I don’t entirely agree with how he did his implementation. I happen to be using Java for my purposes, but I did my own implementation because I needed to, not because I’m just a Java wonk.

Before I start, those without a programming background have to realize that most programming languages have very poor tools for handling dates. Mostly they center around counting milliseconds since a certain date (usually 1970-01-01). This is great for quick calculations of numbers of days between events, since a day has a fixed number of milliseconds (1000 ms/sec * 60 sec/min * 60 min/hr * 24 hr/day = 86400000 ms/day).

For those of you who are too smart for your own good, I’m going to be ignoring leap seconds and things like that for the time being, since computers generally don’t handle those, anyway. If you want your computer’s time to be correct to the nearest leap second mandated by the IERS, you should just manually adjust your clock whenever it’s convenient… no date library is going to worry about keeping a list of all the leap seconds ever added to civil time.

So, back to dates in software. Since the number of milliseconds in a day is fixed, and computers often represent dates as a number of milliseconds from a fixed date (generally known as the epoch), it’s very easy to calculate the difference between two dates as a number of days. For example, I was born on 1977-10-27. That means that I am 10146 days old (wow, that doesn’t seem like a lot…). But how many years, months, and days old am I?
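(That 10146 figure is just the epoch-millisecond arithmetic described above. Here’s a quick sketch of my own, with both dates pinned to midnight UTC so that daylight-saving shifts don’t throw off the division:)

import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class DayCount
{
    public static void main(String[] args)
    {
        TimeZone utc = TimeZone.getTimeZone("UTC");

        Calendar birth = new GregorianCalendar(utc);
        birth.clear();
        birth.set(1977, Calendar.OCTOBER, 27);

        Calendar today = new GregorianCalendar(utc);
        today.clear();
        today.set(2005, Calendar.AUGUST, 7);

        long msPerDay = 24L * 60 * 60 * 1000; // 86,400,000 ms/day
        long days = (today.getTimeInMillis() - birth.getTimeInMillis()) / msPerDay;

        System.out.println(days + " days"); // prints "10146 days"
    }
}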

Fortunately, for discussion purposes, I’m writing this entry on 2005-08-07, which has both the day-of-month and the month itself less than the corresponding numbers in my birth date (that is, 8 is less than 10, and 7 is less than 27). That’s good because it makes the math harder. If I had been born on 1977-08-01, then you could count on your fingers that I am 28 years, 0 months, and 6 days old. Since I was born later in the month and later in the year, there are all kinds of fun things that have to happen.

If you were to perform these calculations on your fingers, you’d probably start with the birth date and keep adding years until you couldn’t add them anymore without going over. You’d easily get to 27 and stop (if you had that many fingers). But then, you have to figure out what the differences are between the months and days. Exactly 27 years after my birth would be 2004-10-27. In order to get yourself to 2005-08-07, you need to add a bunch of months. If you add 10 months, you’ll get 2005-08-27, which is too much. So, you have to add 9 months instead, and then figure the days. Exactly 27 years and 9 months after my birth would be 2005-07-27. In order to get to today, you have to add days. If you add 11 days, you’ll get to 2005-08-07. Ta-da!
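For the curious, that on-your-fingers procedure translates almost directly into code. This is a rough sketch of my own of what such a loop-based approach looks like (it is neither Julian’s implementation nor the one shown below), using Calendar so that month lengths and leap years are handled for us; it ignores time-of-day and assumes the first date is not after the second:

import java.util.Calendar;

public class FingerCount
{
    // Returns { years, months, days } between two dates, computed the
    // "count on your fingers" way: add years while they fit, then months,
    // then days.
    public static int[] diff(Calendar earlier, Calendar later)
    {
        Calendar work = (Calendar)earlier.clone();

        int years  = addWhileItFits(work, later, Calendar.YEAR);
        int months = addWhileItFits(work, later, Calendar.MONTH);
        int days   = addWhileItFits(work, later, Calendar.DATE);

        return new int[] { years, months, days };
    }

    // Keep adding one unit of 'field' to 'work' as long as doing so does
    // not pass 'limit'; returns the number of units added.
    private static int addWhileItFits(Calendar work, Calendar limit, int field)
    {
        int count = 0;
        while(true)
        {
            Calendar probe = (Calendar)work.clone();
            probe.add(field, 1);
            if(probe.after(limit))
                break;
            work.add(field, 1);
            ++count;
        }
        return count;
    }
}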

Now, that didn’t seem too bad, did it? Actually, an implementation which basically follows this on-your-fingers calculation is the one proposed by Julian Bucknall, as well as by many others on the web. I don’t like this implementation because it is computational overkill (you have to do lots of looping, and most Date object implementations that exist out there will re-calculate a bunch of stuff whenever you update a single field, such as the year or month). I actually wrote mine before I read his article, and I don’t have a C# compiler handy to run his algorithm through my test cases, so I can’t be sure that they yield the same results. At any rate, I have an implementation that should be a little more efficient and meets my needs.

Oh, one last note: we had been using a Java library called BigDate to do our date calculations. I knew it was going to be a pain in the neck to write our own, so we found a library that would do it for us. Unfortunately, it fails with Java Date objects representing dates before 1970-01-01. The author claims that his library handles dates prior to 1970 in contrast to Java’s Date, but it appears that he is wrong on two counts: Java’s Date class does, in fact, handle dates before 1970, and his library trips over them. I was able to use his library by passing-in the year, month, and date separately, but that required me to use deprecated methods in the Date API, and I was already starting to look down my nose at it, slightly. Just for the heck of it, I tried to use BigDate to calculate the date delta between a BCE date and today, and BigDate ignored the era, so I got the wrong answers there, too.

So, I wrote my own implementation (in Java) that quickly calculates deltas for all three fields (I’m not concerned with time, just the date), adjusts them for BCE dates if necessary, and then runs a fairly simple algorithm to move the date, then the month and year, to their correct values. We use a class called DiffDate which just stores a year, month, and date as a return value. I have one method that accepts a pair of Date objects, and one that accepts a pair of Calendars. Using Calendar avoids deprecation warnings during compilation, and offering both methods to client code makes it easier to use in situations that call for either Dates or Calendars.

    //
    // Copyright and licence notice: I intend for this code to be freely copied, edited, improved, etc.
    // Please give me (Chris Schultz, http://www.christopherschultz.net/) credit as the source of
    // this code, and let me know if you find ways to improve it.
    //
    public static DiffDate diffDates(Date earlier, Date later)
    {
      Calendar c_e = Calendar.getInstance();
      c_e.setTime(earlier);
      Calendar c_l = Calendar.getInstance();
      c_l.setTime(later);
      return diff(c_e, c_l);
    }

    public static DiffDate diff(Calendar earlier, Calendar later)
    {
      int y1 = earlier.get(Calendar.YEAR);
      int m1 = earlier.get(Calendar.MONTH);
      int d1 = earlier.get(Calendar.DATE);
      int y2 = later.get(Calendar.YEAR);
      int m2 = later.get(Calendar.MONTH);
      int d2 = later.get(Calendar.DATE);

      // Adjust years across eras (BC dates should be negative, here).
      if(java.util.GregorianCalendar.BC == earlier.get(Calendar.ERA))
        y1 = -y1;
      if(java.util.GregorianCalendar.BC == later.get(Calendar.ERA))
        y2 = -y2;

      int d_y = y2 - y1;
      int d_m = m2 - m1;
      int d_d;

      // Now that we've got deltas, start with the days and work backward
      // changing any negatives into positives, and rippling up to larger
      // fields.
      if(d2 >= d1)
      {
        d_d = d2 - d1; // Easy
      }
      else
      {
        // To determine how big the months are.
        Calendar work = (Calendar)later.clone();
        while(d1 > d2)
        {
          // Move backward through the months, adding a whole month
          // until we have enough days to cover the deficit.
          --m2;  // Track our progress backward through the months.
          --d_m; // Now, there's one less month between the dates.

          // If we've backed up past January, wrap the working month around
          // to December of the previous year. (The year borrow itself is
          // handled by the normalization loop below, so d_y must not also
          // be decremented here, or the year would be subtracted twice.)
          if(0 > m2)
          {
            work.set(Calendar.YEAR, work.get(Calendar.YEAR) - 1);
            m2 = Calendar.DECEMBER;
          }

          work.set(Calendar.MONTH, m2);
          d2 += work.getActualMaximum(Calendar.DAY_OF_MONTH);
        }

        d_d = d2 - d1;
      }

      // Adjust the months and years
      while(0 > d_m)
      {
        d_m += 12;
        d_y -= 1;
      }

      return new DiffDate(d_y, d_m, d_d);
    }

The whole thing is very straightforward, with the notable exception of the big “else” block in the middle of the code. It is here that we handle cases where the earlier date has a day-of-month that is later in the month than the later date’s. In that case, we need to count backwards, enlisting the help of a Calendar object to give me the lengths of the various months. That ‘work’ calendar actually exists only to help me with leap-year determination. I suppose I could have used the old “years evenly divisible by 4, except every 100, except every 400” rule, but that would have complicated my code even further and, I think, been inaccurate for old dates because of changes to the calendar. Then again, I think that GregorianCalendar (the default calendar in my locale) has those same rules, so I’d get the same results in both cases. If you want to calculate dates in October of 1582, you’re on your own.
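For reference, that divisibility rule is a one-liner anyway; a small sketch (with the same caveat about the 1582 calendar change):

    // The Gregorian leap-year rule alluded to above: every fourth year,
    // except century years, unless the year is divisible by 400.
    public static boolean isGregorianLeapYear(int year)
    {
      return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }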

You may have noticed, but this implementation does not handle time zones in any way. The reason is that this is intended to be for age calculation. If you were born in Sydney on 2000-01-01, then it might still have been 1999-12-31 in New York. However, you’re certainly not going to maintain your birthday to be 1999-12-31 when you’re in the US and 2000-01-01 when you’re in Sydney. Or, at least, we won’t ;)

It occurs to me that I’d like to write an entirely new Date implementation for Java, to handle things like bizarre missing dates (like October 1582) and a few other things that bother me about the Date class, but it’s just not going to happen. There are too many APIs that already use Date (or Calendar) and they’re not likely to change. Also, one of the things that I haven’t liked about the existing APIs is that they can neither calculate nor store delta dates. I have solved both with a delta-date implementation and a simple delta-date class.

So, how old are you, exactly? My code says that I’m 27 years, 9 months, and 11 days old. But I feel much younger than that.