Archive for the ‘Tech’ Category

Read-only CVS access for only certain projects

Friday, June 15th, 2012

I’ve recently decided to move from CVS to Subversion, but I’m not ready to move everything at once. Thus, I have decided to move some things and save others for later. In order to do that properly, I need to make some projects in CVS read-only while allowing others to remain active.

A bit of Googling easily reveals how to make CVS entirely read-only (add an empty ‘writers’ file to the CVSROOT project) or how to make projects read-only on a project-by-project basis (by setting group file permissions on the server), but those techniques are too heavy-handed for me: I want some projects to be writable, and I want to be able to check out old copies of read-only projects. Neither of the above techniques will do that for me.

So, I looked into something I should have known all about after running a CVS repository for nearly 9 years: the commitinfo file. You can read all about the commitinfo file online. I decided that since I could use commit-time “checks” to veto any commit for virtually any reason, I could certainly effect read-only access on a project-by-project basis.

So, I set my commitinfo file up like this:

^read-write-project\(/\|$\) true %n
^otherreadwriteproject\(/\|$\) true %n
DEFAULT /path/to/read-only-project.sh "%p" %s
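
When a commit comes in, CVS expands those format strings and runs the configured script with the %p path as its first argument and the %s file names after it. Hypothetically, committing two files to a retired project would invoke the script something like this:

/path/to/read-only-project.sh "old-project/subdir" foo.c bar.c

That is why the script below treats its first argument specially and loops over the rest.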

Here is the script called read-only-project.sh:

#!/bin/sh
#
# This script is intended to be used as a failure script for CVS projects
# that are now in a read-only state.
#
# Configure this in CVSROOT/commitinfo like this:
#
# ^projectname\(/\|$\) /path/to/script "%p" %s
#
# Or, to make all projects read-only, you can do this:
# DEFAULT /path/to/script "%p" %s
#
echo "========================================================================="
echo "Cannot commit the following files because '${1}' is now read-only:"
shift
# Each remaining argument is the name of a file in the commit.
while [ "$#" -gt 0 ] ; do
  echo "  $1"
  shift
done
echo "========================================================================="

# Exit with a non-zero return code
false

The magic is all in the last line of the script:

false

This allows the script to exit with a non-zero status and therefore block the commit from occurring. Since commitinfo only runs at commit time, all projects can still be checked out, updated, etc.: there are just no commits to the read-only projects, which is exactly what I want.

Updated 2012-06-15 15:17 EDT – fixed regular expression for project-selector.

Updated 2012-06-19 15:16 EDT – Fixed regular expression again.

Cure for OpenOffice.org Calc “the maximum number of rows has been exceeded”

Tuesday, November 30th, 2010

Today, I was working with a colleague to generate a report in CSV format and she was having trouble opening it in OpenOffice.org’s calc (spreadsheet) program. She was getting the error “the maximum number of rows has been exceeded”, yet the file had only about 2800 rows in it. I tried it and got the same error. Hrm.

It turns out that the CSV file itself had an error in it: an odd number of " characters in the file (an unclosed quoted field), which is not legal, or at least not a good idea. We fixed the code that generated the CSV file to properly escape those " characters and all was well.
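
For anyone fixing a similar generator, the standard escaping rule is to double any embedded quote characters and wrap the whole field in quotes. Here is a minimal sketch in Javascript, purely illustrative (our generator was in another language, and the function name is mine):

// Quote a single CSV field: double any embedded " characters and wrap
// the whole field in quotes so embedded commas and newlines are safe, too.
function escapeCsvField(value) {
  return '"' + String(value).replace(/"/g, '""') + '"';
}

// Example: escapeCsvField('say "hi"') returns '"say ""hi"""'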

I’m posting this in the hope that others searching for answers will look at their files and see that there is a problem with them. There seem to be a bunch of sites and fora on the web where people ask these questions and don’t really get much of an answer, so this post may help those people.

Oh, and I should file a bug against OpenOffice.org calc saying that this is a really bad error message.

Update: http://www.openoffice.org/issues/show_bug.cgi?id=78926

Trapped DVD after a failed Snow Leopard Install on a Mac Mini

Wednesday, December 2nd, 2009

Yeah, that’s a long title, but I want it to be easily searchable on the web.

I have an Intel-based Mac Mini running Mac OS X Tiger. I recently procured a Snow Leopard DVD and read that you can, in fact, upgrade from Tiger to Snow Leopard. Snow Leopard has some cool new features that I thought I’d like, so I gave it a shot.

I inserted the DVD and the finder window came up showing me the “Install Mac OS X” option and I double-clicked on it. It told me it had to reboot in order to perform the installation, so I said okay and it rebooted.

(Note that I didn’t care what happened to my existing installation, files, etc… this machine is used for web application testing, so I don’t care if I wipe everything or not).

After a few minutes, the installer came up and asked me what language I wanted to use (English) and I continued the install. It thought about things for a while and then told me:

Mac OS X cannot be installed on this machine because it does not have enough memory.

My options at this point were: Restore from Backup and Reboot. I chose the latter, thinking that the DVD would eject and I’d be back to using my old Tiger install.

Instead, the DVD stayed in the drive and, after the reboot, the whole process repeated — basically, I was asked what language I wanted and then told that my computer didn’t have enough memory to install Snow Leopard. :(

So, I tried the most obvious thing any Mac user would do: I pressed the eject button on the keyboard. No dice. I held the eject button down for what must have been 2 minutes. Nothing.

I tried Googling for answers. Lots of people gave various suggestions, but none of them worked for me. CTRL-COMMAND-O-F apparently doesn’t work on Intel Macs. Holding OPTION during boot did nothing. Holding EJECT did nothing. Holding the mouse button down did nothing. I was seriously contemplating cracking open the machine to extract the DVD.

Someone suggested plugging the mouse directly into the Mac, because some USB hubs don’t work quite right at initial boot. I have a Mac keyboard with my mouse plugged directly into that. That’s about as directly-plugged as you can get, right?

Well, apparently not. I moved my USB mouse from the keyboard to the back of the Mac Mini and held down the mouse button during boot. Voilà! Out popped the DVD.

So, anyone having similar problems can try this trick. It may save you from tearing open your Mac Mini, or tearing out your hair.

Properly Handling Pooled JDBC Connections

Monday, March 16th, 2009

I’m an active member of the Tomcat-users mailing list, and I see lots of folks post questions about not being able to get a new database connection. The answer is simple: you have exhausted your JDBC connection pool. The answer is also not so simple, because the reasons for the situation can differ, but most likely, your application is not properly handling pooled connections. Read on for information on how to code your app to properly handle such pooled connections.

(more…)

Indenting HTML <select> options

Tuesday, February 10th, 2009

In CHADIS, we had a requirement to indent certain <option> elements within a <select> dropdown list. Until recently, only team administrators had to use these pages, so we just made it work in Mozilla Firefox and left the other browsers out in the cold. Now that we have some new features that need this capability, I decided to write up a quick page to demonstrate each technique we tried and how each browser renders them. You can see the investigation and results below. Enjoy.

(more…)

Disabling Password-less SSH Connections

Tuesday, June 10th, 2008

I run Ubuntu on a server I use for software development over a VNC session. This is how I learned to do Java software development way back in 2000 at one of my first jobs, and the habit stuck. I recently upgraded to Ubuntu 8.04 LTS (via a completely straightforward and painless upgrade process, I might add) and noticed that something strange was happening: after entering my ssh key password once in a session, I was not asked for it again. Ever.

I leave myself logged into this machine for months at a time, and I never lock it. The only locking you might consider is that I disconnect from the VNC session and re-connecting requires a password. However, I have access to some sensitive information for my job and I’m a little leery when I’m allowed access to things unchallenged.

First, I thought something really fishy was going on, until I started reading man pages and doing a little digging. I found out about ssh-agent, which I had never heard of before (since I usually use command-line-only interfaces) and which did not appear to be running. Oddly enough, using ssh-add to flush all cached keys did work, so I was certain that an ssh agent was lurking somewhere — I just hadn’t found it, yet.

Finally, today, I got fed up and apparently entered the magic phrase into Google. The reason I couldn’t see it running is that the process is gnome-keyring-daemon, not ssh-agent or *agent* or even *ssh*, which makes it tough to find if you don’t know that it’s a gnome component that is performing this service. I found the answer in the Gnome Keyring SSH Agent page on the Gnome Live website, which includes instructions for disabling ssh key caching.

One thing they missed is that you can use gconf-editor to tweak the value indicated in the gconftool-2 instructions. Also, you can change the behavior of a running Gnome system by doing a ‘kill -HUP’ on the existing gnome-keyring-daemon process and then restarting it with only the ‘--components’ that you want (i.e. without the ‘ssh’ component).

Now, I get to enter my password every time I make an ssh connection, just as it should be ;)

Finding a decent laptop

Friday, September 21st, 2007

I’m a tech geek, but I’m a cheap one. I’m willing to pay for quality, but I am not one of those people who waits overnight in front of stores to get the latest Shiny Thing™ so I can show all my friends how cool I am. I casually look for things all the time, and get excited by them, but I rarely actually buy.

Witness the (somewhat) recent release of the Apple iPhone, over which I have lusted since I first read reliable descriptions back in January. As the release date got closer and closer, the inadequacies of the platform became more and more pronounced (crappy EDGE network, only AT&T plans, can’t use your own SIM card, can’t install your own software, phone costs $600, etc.) and Apple failed to get me as a customer. Fortunately for me, I didn’t pay the $200 “aren’t I cool?” tax like a lot of folks did. Oh, well. At least those folks helped Apple beta-test their platform.

Recently, it’s become more and more clear that I need a replacement for my computer(s). My recent canine acquisition has effectively moved my home office from our actual office to my couch, since it offers superior surveillance capabilities. I had always worked nearly exclusively on my desktop computer, a great AMD Athlon XP workhorse that has been reliable and stood by me lo these many years since I bought it for my wife so she could play Diablo II with my brother-in-law and me (at which point I decided to take the new computer for myself because hey, what does my wife need with all that processing power?). When I was out and about, I used my somewhat less-trusty 17-inch HP electric blanket notebook, but it never really felt right, since I’ve always been a desktop kinda guy. Using the laptop more and more (on the couch) has made it clear that both computers need to go: I need a laptop, and this one is falling apart; if I need a laptop, why do I need a desktop at all — as long as I can have a nice, big screen to plug into when I’m actually at my desk.

Thus begins my quest to find a suitable laptop to take over all my computing needs.

Don’t forget: I said I was willing to pay for quality, but I also said I was cheap. I also didn’t say it, but I’m not going to lug around 10 pounds of laptop anymore. No, sir.

My actual needs are few: anything that can outperform my existing 3 GHz hyper-threaded processor without setting my legs on fire is adequate. I also need lots of RAM since I like to run a thousand things at once. The games that I do actually play are old in terms of graphics requirements, so I don’t exactly need a top-of-the-line gaming platform.

Given my requirements, why is it so hard to find a decent laptop these days? Apparently, my requirements are more strict than I had first let on. What I really want is:

  • 800-MHz FSB with matching-speed memory
  • A high-resolution screen (WSXGA+ would be preferred)
  • Discrete graphics memory on a good mobile graphics board
  • Gigabit Ethernet
  • Digital video output
  • Low weight
  • Reasonable price (less than $1500 including 1-yr warranty)

Actually, I’m willing to sacrifice a little weight to meet the other criteria. Ideally, I’d like to make it under 7 pounds including the power brick, but that appears to be difficult to accomplish in the 15-inch screen size.

So, what are the problems?

  • Many companies will allow you to select the new 800MHz FSB processors, but they won’t give you matching-speed memory. So much for a faster FSB.
  • I have been able to find WSXGA+ on only a few laptops. I realize this is pretty expensive, so most vendors don’t even give you the option. I can give this up if necessary, especially since I’ll mostly be using higher-resolutions on my external monitor, anyway.
  • Mobile graphics cards just suck in comparison to their desktop-based brethren: it’s a fact. It still shouldn’t stop me from getting something nice in the graphics department. Every single laptop in these price ranges should have the option of discrete graphics memory (with reasonable on-board memory sizes: 128MB is not enough these days, guys!).
  • Virtually nobody has gigabit Ethernet. Why? I can’t even imagine. You can get a desktop gigabit card for five bucks. I should be able to get a mobile one for fifty. It’s sad that the wireless options for laptops are faster than the wired ones these days.
  • Many companies (Dell, I’m looking at you) don’t support HDMI or even DVI video output yet. Why? Especially Dell: they sell these big, fat displays that all have DVI and HDMI inputs on them, and their laptops need special adapters to utilize the superior-quality digital signals.
  • Weight is always a problem: sturdy construction plus lots of components equals many pounds. I get it. Why can I get the same components in 3 different systems and have the weights all be wildly different? Sigh.

I can get various combinations of the above on different units from different manufacturers (except gigabit Ethernet), but I can’t find the one unit that has all of them. It’s always a trade-off: do I want proper speed-matched memory and CPU or do I want a decent graphics card? Do I want a slick hi-def screen or do I want HDMI output? It’s maddening.

I have given up the laptop search for this month. Maybe around Thanksgiving, when hardware manufacturers completely lose their minds just so they can move inventory regardless of the cost, I’ll be able to get something whose flaws I don’t mind accepting because I’m getting such an insane deal on the price.

Interesting new WWW attack vector

Friday, February 23rd, 2007

While I suppose that using javascript for evil purposes isn’t exactly a new idea, Bruce Schneier has written a piece (also covered on Slashdot and, I’m sure, other places) about three guys who have developed an attack that royally screws most users’ ability to use their Internet connection again.

AJAX, the magic pixie dust used heavily on sites like Google Mail, is really just javascript with the ability to make HTTP requests and parse the results of those requests. Javascript has been available in browsers for years and has recently been enjoying renewed interest from web developers because of that last (somewhat) new capability. The use of this technology for evil is nearly indistinguishable from legitimate use, so it’s hard for any software to detect it and prevent it.

Basically, the attacker sets up a web site with some javascript code (see below) and tricks you into visiting that site. It’s not all that hard to get people to look at a rogue site: you can either spam the entire world and expect that a certain percentage of email readers are suckers who will click on the links in those messages, or you can hack a major site (such as Dolphin Stadium) and insert the exploit into it.

Now, the fun begins. This piece of javascript code (which, as I mentioned earlier, is pretty much impossible to identify as evil) attempts to make a connection to your router. If you are like most home users, your router is still sitting there with its default, factory-set password (probably something stupid like “admin”). That means that this piece of javascript code can log in to your router and start playing around. This particular attack is designed to change your DNS settings such that all requests for named Internet addresses go to malicious servers. Those requests will be answered with fraudulent IP addresses, which can be used to either emulate your favorite website or simply serve nothing but pop-up ads and porn. This little hack could even change your router’s password, locking you out of your own hardware.

Imagine if you were to fall victim to this exploit… the next time you tried to access, say, www.bankofamerica.com, the rogue DNS server sends you to what really is www.evilbankofamerica.com. The site looks like Bank of America’s real site, and you fall for the bait. You enter your username and password for online banking, and bang! – the bad guys have your online banking credentials.

SSL certificates might save you, since VeriSign (and others) are unlikely to issue an SSL cert for “www.bankofamerica.com” to an entity that is not Bank of America. But what do you think most people do when they get a security warning these days? My guess is that most people do whatever they have to do in order to get the security warning to go away and let them look at their website. That is a recipe for disaster.

Since we’re talking about folks who have never changed their router’s password, they probably wouldn’t know how to recover from this problem, either. If the attack included changing the router’s password, they’d have to reset the router to factory defaults in order to get back up and running again.

I’m guessing most home users will ask friends what to do if every site they visit is just porn and popups. The advice they are going to get is to reinstall their operating system (statistically it will be Microsoft Windows, which has a bad reputation for becoming easily infested). Many users aren’t willing to do that, and will pay someone else to do it. Re-installing the OS won’t work, so those users are likely to do the next best thing: go out and buy a new computer. That won’t work either.

What a pain in the ass.

What a great exploit.

Blog moved to virtual host

Tuesday, February 13th, 2007

The whine of my rack server has finally gotten to me.

So, in spite of the home heating advantages, after more than 2 years of hosting my own website, blog, and mail server in my home, it’s time to get it out of here.

I found a relatively low-cost virtual hosting plan where I have free rein over my own virtual machine. Sadly, I couldn’t use Gentoo – my preferred Linux distribution – so I had to settle for Debian, whose package manager is absolutely maddening to me.

At any rate, we’ll see how things go. So far, the MySQL upgrade (Debian’s latest stable version is 4.0? WTF?) and WordPress installation (no stable version available through Debian?!) have been relatively painless. Let’s just hope that everything else goes well.

Attempting AJAX

Monday, October 23rd, 2006

So, AJAX is one of the more recent causes of excitement in the web-based application delivery world. The first major site to feature AJAX (as far as I know) was Google’s gmail. Assuming that you are okay with writing tons of Javascript, it’s quite a nice way to spice up your application enough to make your users feel a little less like they’re using a web-based app.

I have to admit that I generally like to use the minimum amount of Javascript that can possibly work on my sites, because it usually doesn’t. Javascript is notorious for failing on various browsers and platforms with no specific rhyme or reason. Most often, it is because the application developer did not take the time to test and debug the Javascript on various browsers, or failed to use standards-compliant code and chose a single target browser (this more often happens when an application is targeted towards Microsoft Internet Explorer).

My rule of thumb is that Javascript should not be used unless there is some non-Javascript backup for the same functionality. Basically, Javascript may only be used to enhance the user interface; it cannot be used to drive the user interface.

We have a page that allows users to tick items off of a list. The list is paginated, so it can be quite long. They can either tick them or un-tick them, and the page reflects the current status of each item. There is one big problem with this type of interface: the user expects to interact with a standard widget (the checkbox), but the page itself does not respond to checking or unchecking the checkbox: you have to submit a form. This basically won’t work because users are not going to tolerate having to check these items and then click a button, especially when moving from one page to another. They might also tick an item and then leave the page entirely. There is no way to stop them, and the only way to capture the event of ticking the checkbox is to use Javascript.

I am unhappy with the Javascript-only solution because it breaks down when the user’s browser does not have Javascript available. The user might not have Javascript available because of the browser (think lynx), or because they have turned off Javascript for security reasons, or because their Javascript implementation is buggy and isn’t going to work for some reason. Another approach is required.

My solution in the past was to make the checkbox into an image that looks like a checkbox, but is actually a link. That link points to the URL that selects (or de-selects) the item in question, which then re-displays the page with the proper status update. This works very well, except for the fact that the page view often resets itself back to the top of the page. That is inconvenient when you want to select multiple items from the same page, and you have to scroll down the page to see them.

This is a perfect example of when I have a non-Javascript solution working that could be significantly improved through the use of Javascript.

Enter AJAX.

If all the planets are aligned (i.e. the user has Javascript available and enabled, and it supports AJAX, and nothing else goes wrong), we can use some AJAX magic to improve the user experience.

AJAX is little more than a single (at least, from an AJAX developer’s point of view) very useful Javascript object called XMLHttpRequest. Its job is to make an asynchronous HTTP request to some URL and provide the response either as text or as an XML document. If you elect to use the response as an XML document, then you can use the standard DOM methods to traverse everything.

I started out by consulting Mozilla’s AJAX:Getting Started page. It gives a fairly straightforward example of, well, getting started with AJAX. Using the information presented on that page, I was able to get something up and working relatively quickly. They had even listed the changes I would have to make in order to use the code under Microsoft Internet Explorer, so I figured I was covered when I went to test on MSIE.
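
The basic pattern from those articles boils down to something like this (a simplified sketch; the URL is invented for this example):

var request;
if (window.XMLHttpRequest) {
    // Mozilla, Safari, Opera, and friends
    request = new XMLHttpRequest();
} else if (window.ActiveXObject) {
    // MSIE 5 and 6 provide the same thing through ActiveX
    request = new ActiveXObject("Microsoft.XMLHTTP");
}

request.onreadystatechange = function() {
    // readyState 4 means the request has completed
    if (request.readyState == 4 && request.status == 200) {
        var doc = request.responseXML; // the response parsed as XML
        // ... walk doc with the standard DOM methods ...
    }
};
request.open("GET", "/tick-item?id=42", true); // true = asynchronous
request.send(null);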

Unfortunately, there’s a relatively large caveat when working with XML in MSIE up through version 6: the document.getElementsByTagName function is not namespace-aware. That means that those of us who came out from under the non-namespace-using rock several years ago have to deal with some pretty stupid code in order to work around it.

At this point, AJAX pros are saying to themselves “why doesn’t this guy just use one of the dozen or so cross-platform AJAX libraries that are out there — then he won’t have this problem”. Well, I’ll tell you: because I wanted to solve the problem myself in order to understand what was going wrong. It did take quite a while, and I ended up using information presented in Apple’s Dynamic HTML and XML: The XMLHttpRequest object article. That was the only place where I saw any mention of MSIE’s failure to support namespaces.

Working around non-namespace-aware Javascript is pretty ugly. Under normal circumstances, one would simply call document.getElementsByTagName and pass the “local name” of the element to that function. You’d get an array of nodes back and everything would be fine. But, since MSIE sees “foo:bar” as the local name (instead of just “bar”), you’d have to change your code to look for “foo:bar”. But, that wouldn’t work in browsers that are namespace-aware, and it’s difficult, if not impossible, to tell at runtime which way a browser will behave.

So, I was forced to implement my own function that loops through the children of a particular node and looks for matching elements. :(
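
The workaround looks roughly like this (a simplified sketch, not our exact code):

// Collect child elements whose tag name matches localName, ignoring
// any namespace prefix (so both <bar> and <foo:bar> match "bar").
function getChildElementsByLocalName(parent, localName) {
    var matches = [];
    for (var i = 0; i < parent.childNodes.length; i++) {
        var node = parent.childNodes[i];
        if (node.nodeType != 1) continue; // only element nodes
        var name = node.nodeName;
        var colon = name.indexOf(":");
        if (colon >= 0) name = name.substring(colon + 1);
        if (name == localName) matches.push(node);
    }
    return matches;
}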

It occurred to me just now that I could probably get away with making two calls: one that uses the preferred method, and then, if the call returns no nodes, another that calls the same method with the namespace prefix attached. The only problem with this option is that you have to know the text of the prefix that is being used. Typically, you only have to deal with the namespace URI instead of the actual prefix being used (such as “foo” in the example above). In this case, I’ll have to hard-code the prefix into the Javascript, which is non-ideal. My existing solution has no such restrictions, so I’ll probably keep it for the time being.
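
In other words, something like this, where the caller has to supply the hard-coded prefix (again, just a sketch):

// Try the namespace-aware lookup first; if the browser (MSIE) treats
// "prefix:localName" as the whole tag name, fall back to that form.
function getElementsByLocalName(doc, prefix, localName) {
    var nodes = doc.getElementsByTagName(localName);
    if (nodes.length == 0) {
        nodes = doc.getElementsByTagName(prefix + ":" + localName);
    }
    return nodes;
}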

A special thanks to the MSIE team for once again stepping outside of the standard (which, in all fairness, may or may not have existed at the time of implementation) and spicing up my day.

Now, time to test on MSIE 7. And Opera. And Safari….