Friday, December 5, 2008

Cross-Browser Testing Made Easy

One of the challenges of building a web site is ensuring that it will function the same way when viewed by many different web browsers and versions of browsers on different operating systems. Something that works fine in the latest version of IE or Firefox may not work the same way (or at all) in an older browser or on a different O/S. This is not as much of an issue if you are just using very vanilla HTML. However, if you start using JavaScript, DHTML, or some of the more exotic CSS, you may be in for a nasty surprise when you try to view the page on a different system.

What really got me thinking about this recently was my interest in weaning myself away from table-based design in favor of more CSS-driven design. I know it's supposed to be so much better, the wave of the future and all that. But my concern has always been how to be sure it would work on older browsers.

I try to keep the latest versions of IE, Firefox, Netscape Navigator, Opera and now Google Chrome on my local machine for some basic cross-browser testing. But I can't keep every other version and O/S configuration around to test. I don't even own a Mac at present.

I started thinking it would be cool if there was a software tool out there that would grab a page and render it just like a specified browser, version of browser, etc.

As it happens, I found something even better: a web site that allows you to log into any of their many and varied machine images to test your page. It's free, with some restrictions that I'll get to in a bit. But you can hop onto a Mac, Ubuntu or Windows (98, XP, Vista, etc.) system and try out many different browsers, versions of browsers, etc. It's brilliant. It's not some kind of program that mimics the browser. It's the actual browser. And it's fully functional, so you can test all your JavaScript, DHTML, whatever.

There are a few restrictions on the "free" part. You can only stay logged into a session for 5 minutes at a time. But you can launch as many sessions as you like. So for quick tests, it is perfectly adequate. Also, paying customers get preference for access when the site gets busy. If you need to do more complex testing and need more than 5 minutes on an image, you can buy little blocks of time. It's all very well thought-out.

Now, if they could just do something similar to test wireless devices...

Monday, November 17, 2008

Thumbs Up for Sphider

The model club site that I run has been steadily growing. We have been adding lots of great content - articles, tips and photos, lots of photos. Our gallery now contains nearly 5000 of them. I am using a gallery application called Coppermine (PHP front-end, MySQL back-end). It's very nice. Each photo has a title and many have more detailed descriptions.

With all this content I felt that the site really could benefit from having some kind of comprehensive search mechanism. Coppermine has its own search, which works fine, but I wanted a way to search the whole site at a go.

My first thought was to use Google Custom Search, which I had implemented with some success on another site. I was able to implement it on the club site without any trouble. The issue that I had was getting it to re-index in a timely fashion when I made changes. I decided that I wanted a mechanism that gave me more control over the indexing. As I have no budget for the club site, I also wanted something that was free.

I found no shortage of free search engines out there and tried a few. But the problem I kept running into was that the free versions had a limit on the number of pages they would index. The limit was high - usually several thousand - but I kept exceeding it. The reason was the photo gallery: nearly 5000 photos, each of which gets indexed as its own page, plus gallery sub-area pages, etc. That ends up being a lot of pages to index.

I finally found a search engine that I could run locally on my site that was free and had no page limit - Sphider. Sphider uses PHP and requires a MySQL database to store its indexes. It is really quite nice. Not only can you re-index at will, you can choose to index just certain parts of your site by setting up "sites" in the admin panel that limit their inclusion to just certain areas. This was especially useful for me because I often want to re-index everything except the gallery, which is pretty time-consuming due to its sheer size.

It took me a little time to get the filters right for indexing the gallery. I had to keep it from indexing certain ancillary pages that had no business showing up in a search result. But Sphider has some decent include/exclude filtering mechanisms to facilitate that. It also respects any directives in your robots text file.

Sphider also provides some nice statistics on what search terms your visitors are entering, the most popular searches and so on.

Implementation was fairly easy. It uses a template with a header, footer, etc., which gives you enough flexibility to make it a seamless part of your site. Once your MySQL database is in place, you just give Sphider's admin panel your db user credentials and it takes it from there.

All in all, really not bad. I have had it in place for about 2 months now and it seems to work really well.

Wednesday, November 5, 2008

Back in the Mud

After spending nearly 9 months on two airplane models, it's refreshing to be working on an armor subject again. My latest project is Tamiya's 1:35 scale M16 Half Track, US WWII. Here is a picture of the real deal:

It's a fun subject with a lot more detail to it than you would think at first glance.

It isn't that armor is easier. It has its own set of challenges, just a different set than building an aircraft subject. And there are a whole lot more possibilities for making it look like it has been in the field a while.

An airplane is fragile. There is a very finite limit to the amount of damage and dirt an airplane can endure before it just won't fly. So, if you want your model airplane to represent an operable subject, you have to use a light hand when it comes to weathering and simulated damage.

Not so with a tank or other armored vehicle. You can cover a tank in dust, dirt, mud, sand, snow, and just about anything else nature might throw at it. You can load it down with all manner of stowage: bags, tarps, infantry, and even random bits of civilian detritus. You can dent it up, rip parts of it off, shoot small holes in it, and basically just abuse it to no end and the odds are that the thing will still keep going. That makes for all sorts of fun possibilities when you model one because you can do all this stuff to it and it can still represent an operable front-line vehicle, albeit clearly one with a more interesting history. This is especially true if you model WWII subjects like I do.

I am still in the early stages of this project, so it is mostly building in preparation for the base paint coat. But it won't be long before I can break out the oils, pastels, washes, powder pigments, etc. and go to town.

Monday, November 3, 2008

New Window or Same Window?

Any time you create a link on a page, you have the option to launch the target page in the same window (the default option) or in a new window. We aren't talking about those obnoxious automatic pop-ups here. We are talking about links the visitor actually clicks on. There seem to be several schools of thought on when it is appropriate to launch a new window.

Some say you should never launch a link in a new window for a couple of reasons:
  • The user can always launch in a new window by choice by holding CTRL when they click the link or by middle-clicking the link with the mouse wheel.

  • By contrast, there is no easy way to make a link coded to launch in a new window launch in the same window. So, you are essentially taking the choice away from the visitor.

I'm not sure I agree with that. For one thing, many users don't know how to launch in a new window. Also, in many cases, a user may want to check out something you are linking to without leaving your site. I like the following rule of thumb, which seems to be the consensus from what I have read:

  • Make internal links (links to stuff on your site) open in the same window.

  • Make external links (links to other sites) open in a different window.

There are some exceptions to the first point. For example, if you are displaying a short form or a Flash piece or something and you want better control over the window like size or toolbar/no toolbar, etc. But for the most part I would think these rules would work.
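In markup terms, the rule just comes down to whether the link carries a target attribute (the URLs here are placeholders):

```html
<!-- Internal link - opens in the same window, the default -->
<a href="/gallery/index.php">Our Photo Gallery</a>

<!-- External link - opens in a new window -->
<a href="http://www.example.com/" target="_blank">Another Site</a>
```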

I'd be curious if anyone agrees/disagrees either from a designer standpoint or from a user's standpoint. Let me know...

Friday, October 31, 2008

Validating Form Entries with JavaScript PS

OK. Being new to Blogger, I didn't realize how to correctly post code samples to an entry. Hence the screen captures in my last post. Below is a copy-and-paste-friendly version of the whole page of code I used for that example.


<?php
if (isset($_POST['test_val'])){
    process_form();
} else {
    print_form();
}

function print_form(){
print <<< FORM_TEXT
<html>
<head>
<script type="text/JavaScript">
function check_form_data(){
    // Reset any old highlight first, in case this is a second pass
    document.getElementById("email").style.backgroundColor = "";
    if (document.getElementById("email").value == ""){
        alert("Email is a required field. Please fill it in.");
        document.getElementById("email").focus();
        document.getElementById("email").style.backgroundColor = "#EEF111";
        return;
    }
    // Validation passed - submit the form for real
    document.the_form.submit();
}
</script>
</head>
<body>
<!--Note that the action just points back to the same page-->
<form name="the_form" method="POST" action="this_page.php">
<!--Here is our test value-->
<input type="hidden" value="1" name="test_val">
Email: <input type="text" name="email" id="email" size="20"><br>
<input type="button" onClick="check_form_data();" value="Submit">
</form>
</body>
</html>
FORM_TEXT;
}

function process_form(){
    //Process the form data - send an email, write to a DB, whatever.
}
?>

Thursday, October 30, 2008

Validating Form Entries with JavaScript

I use HTML forms quite a lot on the web sites I build for a variety of purposes. Most of the ones I build these days are self-handling PHP forms. This means that the form and the PHP that handles the form are on the same page. This is easy to do. You just put a hidden value in the form that gets set when the visitor submits the form. The PHP code checks for that value. If it's set, it does the processing. If it's not set, it displays the form. The basic code layout looks something like this:
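A sketch of that basic layout (the hidden field test_val gets set in the form itself):

```php
<?php
// Self-handling form: this same page both displays and processes the form
if (isset($_POST['test_val'])){
    process_form();   // hidden value is set - the visitor submitted the form
} else {
    print_form();     // not set - first visit, so just display the form
}
?>
```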

In the first part of the PHP, we check to see if the hidden value "test_val" is set. If it is, we call process_form(); otherwise, print_form(). Nothing to it.

We can have process_form() do just about anything with the submitted data. What we don't want it doing is trying to process form data that cannot, or should not, be processed. We may have certain fields that are required. We may have fields where the input has to be in a certain format or match certain criteria. Of course, we also don't want to be processing anything that is clearly the work of a spambot. We could handle all of this on the server side and in some cases where more complex validation is needed, this might be appropriate. But why have our server waste processing cycles doing simple checks for missing or garbage data? Let the requesting browser do it by giving the data a quick once-over with client-side JavaScript first.

The first thing that we have to do is intercept the form data before it gets sent to the server. To do this, we will replace the usual HTML form SUBMIT button with a generic button that launches our JavaScript when clicked, like this:
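Instead of an input of type "submit", we use a plain button whose onClick runs the validation function:

```html
<input type="button" onClick="check_form_data();" value="Submit">
```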

For illustration, let's plug this into a simple form that collects an email address:
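Something like this (note the action points right back at the same page):

```html
<form name="the_form" method="POST" action="this_page.php">
  <input type="hidden" value="1" name="test_val">
  Email: <input type="text" name="email" id="email" size="20"><br>
  <input type="button" onClick="check_form_data();" value="Submit">
</form>
```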

Now let's assume that we want to do a simple validation to make sure that the email field is not blank when submitted. The basic layout of our JavaScript function (placed in the HEAD) could look something like this:
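Here is a minimal version of that function:

```javascript
function check_form_data(){
    // Required-field check: the email box cannot be left blank
    if (document.getElementById("email").value == ""){
        alert("Email is a required field. Please fill it in.");
        return;
    }
    // Validation passed - submit the form for real
    document.the_form.submit();
}
```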

We check to see if the email value is blank. If it is, we throw an alert and exit the function. Otherwise, we call the JavaScript submit function, which sends the validated form data on its merry way.

Of course, the validation we have done here is very simple. We could check to see that the value of the email field matches the proper format for an email address. We could write a loop to systematically check multiple required fields. We could check to see that multiple fields do not contain the same value - an annoying spambot symptom. We could even dynamically add additional fields to the form based on the visitor's initial inputs. There is loads more we could do.
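As one example, a format check for the email field can be done with a simple pattern (a rough check, not full RFC validation; the function name is just illustrative):

```javascript
// Returns true if the value roughly looks like an email address:
// some non-space characters, an @, more characters, a dot, more characters
function looks_like_email(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}
```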

In our simple example, we could help the visitor even more by sending their cursor directly to the email field:
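One extra line inside the blank-field branch, right after the alert, does it:

```javascript
document.getElementById("email").focus();  // put the visitor's cursor in the field
```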

We could even highlight the field in yellow to further alert the visitor to the problematic field:
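Again, just one line, setting the field's background color:

```javascript
document.getElementById("email").style.backgroundColor = "#EEF111";  // flag the problem field
```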

This is especially useful if you are checking and flagging multiple fields. You can focus the visitor's cursor on the first problem field, but highlight them all so that they see all the problems. Just remember to reset the highlight color back to the default for all fields at the start of your function, otherwise the fields that your visitor did fix will still be highlighted.
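The reset is simply a matter of putting the background color back to the default at the top of the function:

```javascript
document.getElementById("email").style.backgroundColor = "";  // clear any old highlight
```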

Certainly nothing groundbreaking here, but useful stuff. There are tons of examples of this out there and the complexity of your validation is really only limited by your knowledge of JavaScript.

Monday, October 27, 2008

Why I Like Building Models

So, after several months of a little time one evening here and a little time one afternoon there, a few dozen bits of plastic are now one little six-inch long model airplane collecting dust on my shelf.

What was the point, exactly? What is it that compels me to spend so much of my spare time cutting, sanding, painting and gluing to produce something that in the final analysis is, well, completely useless. Then I go to the hobby shop and buy more box loads of plastic bits so that I can spend many more hours hard at work on...more dust bunny bait. Then I go online and spend hours finding reference photos so that I can see just what the right flap extension angle is for a Boeing 747 on final. Or I am online tracking down that extensive photo-etch detail set for the USS Missouri kit that already has hundreds of parts.

In contrast, my wife throws pots. At least when she is done we have something that, in addition to being nice to look at, is also useful like a mug or a bowl or a plate. She also sews. She has made me shirts, made clothing for our sons, etc. Again, the end result is useful. Not my models. They just sit on the shelf.

But maybe that's exactly why I like doing it. It is, in the truest sense, a hobby. A total waste of time done purely for enjoyment.

Well, off to work on my WWII half track...

Friday, October 24, 2008

Fun With Google Maps API

I was a Geography major in college and I have always loved maps. I like looking at them and I like working with them. It is something I often have to do when I build web sites for small businesses, particularly those with physical locations that customers need to get to.

In the past, I always just put together a static map graphic and added a hyperlink to MapQuest or some other site for directions. Then I read about Google allowing people to tap into their maps API to display fully functional Google maps on their own pages. (I read about it in Quest Software's Knowledge Xpert for MySQL, of all things, the latest version of which includes a very nice tutorial on this subject.) Anyway, it's really pretty easy. Just a matter of signing up with Google to get a key and then adding some JavaScript to your page - not that dissimilar to Google Analytics, another wonderful and inexplicably free Google web toy.

To start, you need to register and get a key from Google. You need one key for each domain you want to put your map on. You can get a key here:

Google will provide you with the code you need to drop into your page and they have loads of docs on the various options available to you. Alternatively, you can get some nice code from the aforementioned Knowledge Xpert product, which is free.

But it's straightforward enough. Some JavaScript in your HEAD, then a DIV tag where you want to actually place the map. You can include or exclude all the various controls like pan, zoom, Map/Satellite/Hybrid toggles, etc. It can be easily sized.
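For reference, a bare-bones version of that pattern under the version 2 API looks roughly like this (the key and coordinates are placeholders):

```html
<!-- In the HEAD: load the API with your key -->
<script src="http://maps.google.com/maps?file=api&v=2&key=your_google_key_here" type="text/javascript"></script>

<!-- In the BODY: the DIV where the map gets drawn -->
<div id="map" style="width: 500px; height: 300px"></div>
<script type="text/javascript">
if (GBrowserIsCompatible()) {
  var map = new GMap2(document.getElementById("map"));
  map.setCenter(new GLatLng(30.2711, -97.7437), 14);  // initial center (lat, long) and zoom
  map.addControl(new GSmallMapControl());             // pan/zoom buttons
  map.addControl(new GMapTypeControl());              // Map/Satellite/Hybrid toggle
}
</script>
```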

One thing that's not as obvious: some of the Knowledge Xpert examples center the map using a Google geocode (the lat and long) for the point where you want the map centered initially. Their instructions for obtaining that geocode for a specific address were a little fuzzy for me. As it happens, there is a URL that you can ping to get the code for any address. The example below finds the geocode for 810 Guadalupe Street, Austin (if you put in your Google Maps key at the end):,+Austin,+TX&output=csv&key=your_google_key_here
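Assuming the response comes back as a single CSV line of status,accuracy,latitude,longitude, pulling out the coordinates is simple string work (the function name is just illustrative):

```javascript
// Parse a geocoder CSV response, e.g. "200,8,30.2711,-97.7437"
// (assumed format: status code, accuracy, latitude, longitude)
function parseGeocodeCsv(csv) {
  var parts = csv.split(",");
  if (parts[0] != "200") {   // anything other than 200 means the lookup failed
    return null;
  }
  return { lat: parseFloat(parts[2]), lng: parseFloat(parts[3]) };
}
```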

To see the example where I used Google Maps, visit and have a peek at the source.

Knowledge Xpert also provides additional information on tying this into a MySQL database to store and map multiple locations, if you are so inclined.

Lots of fun to play with and a much nicer solution to providing a map on your web site.