Web design is a lot of fun. I’ve been coding websites for some time, and while HTML has remained fairly static, my design philosophies have not. As much as I would love for this post to detail a new way to design better websites, it doesn’t.
Back when I marked up my first piece of HTML, I used a really loose version of the HTML 3.2 standard, though back then hypertext rendering was pretty elementary in 3rd-generation browsers. Layouts were all tables or frames, with content mixed right in with the markup.
Instead of presenting a new solution, I put forth my dilemma. I have coded web pages holding to web standards compliance, writing syntactically correct code and markup (see the W3C). But I have also coded web pages using as many hacks (standards-non-compliant tricks used just to get stuff working) as necessary to get them displaying consistently in the majority of browsers.
Example of standards compliant design: http://www.hillsofjapan.com/
Example of hacked for consistency design: http://www.jeffothy.com/
Both of these design tactics are used when the audience of the website is not known: What OS do they use? What web browser? What screen resolution and color depth?
Standards compliant design fails when it comes to users who still use outdated browsers (which are not usually standards compliant in their hypertext rendering). Another drag of standards compliant design is that even the newest browsers (Mozilla Firefox 1.0 and Microsoft Internet Explorer 6) are not consistent in how the standards are interpreted and implemented. This means even standards compliant design may fail to render consistently, and that in turn means broad-ranged testing is required.
Hacks for consistency have their own problems as well. The method is basically trial and error, unless you keep a thick book of tricks that make certain code render one way in one browser and a different way in another. But even once you’ve got a design that you’ve tested to work in all (or most) of your audience’s browsers, you still have to worry about new browser versions coming out and breaking what these hacks seemed to fix.
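To make that concrete, here is a sketch of one well-known hack of the era, the “star html” filter (the selector name and widths are my own illustration, not code from either site above). IE 6 and earlier wrongly match `* html` because they give the root `<html>` element an anonymous parent, so a rule can be aimed at those browsers alone, for example to compensate for the old broken box model:

```css
#sidebar {
  width: 180px;   /* content-box width for standards browsers */
  padding: 10px;  /* total rendered width: 200px */
}

/* Only IE <= 6 matches "* html", so this hands IE a width that
   includes the padding, matching the old quirks-mode box model. */
* html #sidebar {
  width: 200px;
}
```

Note the fragility: IE 6 in standards mode fixed the box model but still matches the selector, which is exactly the kind of breakage-by-upgrade these hacks invite.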
I know I am not the only one with this dilemma. Many, many times I’ve caught myself trying to make a perfectly consistent page using every browser I can get my hands on. After long struggles with the code, I often think: why am I trying to make this page work in old browsers that barely anyone uses?
I have grown to truly hate websites that are “optimized for” Internet Explorer or 1024×768 screen resolution. It usually means the site looks like crap in my preferred browser or at a different resolution. That’s one reason I aim to make the code and markup I write work in as many browsers as possible; backward compatibility is a great fundamental. I mind less if a nifty feature or two is supported in one browser but not another. The site layout and look and feel, however, *need* to be cross-browser compatible.
These days, standards compliant websites are usually coded using validated XHTML and CSS. One of the frustrations I have when developing with XHTML/CSS is lacking consistent control over layout. Many things that seem quite simple using tables and traditional HTML 4.0 can become challenging brain-teasers when going with CSS.
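As one illustration of the kind of brain-teaser I mean (a sketch of my own, not code from either site above): two equal-height columns are trivial with a table, because cells in a row automatically share the row’s height, while the obvious CSS float version leaves each column only as tall as its own content.

```html
<!-- Table version: both cells stretch to the height of the row -->
<table width="100%">
  <tr>
    <td width="150" bgcolor="#eeeeee">navigation</td>
    <td>main content</td>
  </tr>
</table>

<!-- CSS version: the floated div is only as tall as its own
     content, so the shorter column's background stops early -->
<div style="float: left; width: 150px; background: #eee;">navigation</div>
<div style="margin-left: 150px;">main content</div>
```

Getting the CSS version to *look* like the table version takes workarounds of its own, which is the frustration in a nutshell.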
Google has developed a number of web applications that are cross-browser compatible but definitely not standards compliant. Being such a large-scale web company, their audience is diverse, and so the applications they develop need to work almost everywhere, if not everywhere. Google Maps, for instance, supports a handful of newish browsers but is explicit in stating that it does not support all of them. Beyond the simplest of designs, websites that rely on certain features simply cannot support every browser.
I think I need some comments here. Remind us why we shouldn’t code a website using “old technology” (tables in HTML 4.0) even though we know that, for the most part, newer browsers support it through backwards compatibility.