Accessibility: Web Standards vs Hacks

Jeffrey Larson

Web design is a lot of fun. I’ve been coding websites for some time, and while HTML has remained fairly static, my design philosophies have not. As much as I would love for this post to detail a new way to design better websites, it doesn’t.

Back when I marked up my first piece of HTML I used a really loose version of the HTML 3.2 standard, although back then hypertext rendering was pretty elementary in third-generation browsers. Layouts were all tables or frames, with content mixed right in with the presentational markup.

Instead of presenting a new solution, I put forth my dilemma. I have coded web pages for standards compliance, holding to syntactically correct code and markup (see the W3C), but I have also coded web pages using as many hacks (standards-non-compliant tricks used just to get stuff working) as necessary to make them work and display consistently in the majority of browsers.

Example of standards compliant design: http://www.hillsofjapan.com/
Example of hacked for consistency design: http://www.jeffothy.com/

Both of these design tactics are used when the audience of the website is not known: What OS do they use? What web browser? What screen resolution and color depth?

Standards-compliant design fails when it comes to users who are still on outdated browsers, which usually are not standards-compliant in their hypertext rendering. Another drag is that even the newest browsers (Mozilla Firefox 1.0 and Microsoft Internet Explorer 6) do not interpret and implement the standards consistently, which means even standards-compliant design may fail to render consistently, and broad-ranged testing is still required.
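
The classic case is the CSS box model. The class name and the numbers below are made up, but the disagreement is real: older IE (and IE6 without a proper DOCTYPE) fits padding and borders inside the declared width, while Firefox adds them outside, so the same rule produces two differently sized boxes.

    .sidebar {
      width:   200px;
      padding: 10px;
      border:  5px solid #999;
    }
    /* Firefox 1.0 (and IE6 in standards mode): 200px of content,
       230px of box once the padding and border are added on. */
    /* IE5, or IE6 in quirks mode: the whole box is 200px wide,
       leaving only 170px for the content. */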

Hacks for consistency have their own problems as well. The method is basically trial and error, backed by a thick book of tricks that make certain code render one way in one browser and a different way in another browser. And once you’ve got a design that you’ve tested to work in all (or most) of your audience’s browsers, you still have to worry about new versions of browsers coming out and breaking whatever those hacks seemed to fix.
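
For the curious, a typical page out of that thick book of tricks looks something like the snippet below. The “star html” selector is a real, widely used hack: standards-compliant browsers know the root element has no parent and so never match the rule, while IE6 and earlier match it anyway. The widths are just illustrative numbers that hand IE a value compensating for its box model.

    /* Every browser applies this: */
    .sidebar { width: 200px; }

    /* Only IE6 and earlier apply this: */
    * html .sidebar { width: 230px; }

The catch is exactly the one above: the hack relies on a specific parser bug, so a future browser version that fixes (or introduces) the bug changes which rule it picks up.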

I know I am not the only one with this dilemma. Many, many times I’ve caught myself trying to make a perfectly consistent page using every browser I can get my hands on. After long struggles with the code I often think: why am I trying to make this page work in old browsers that barely anyone uses?

I have grown to truly hate websites that are “optimized for” Internet Explorer or a 1024×768 screen resolution. That usually means the site looks like crap in my preferred browser or at a different screen resolution. So here’s one of the reasons I aim to make the code/markup I write work in as many browsers as possible: backward compatibility is a great fundamental. I mind less if a nifty feature or two is supported in one browser but not in another; site layout and look and feel, however, *need* to be cross-browser compatible.

These days, standards-compliant websites are usually coded in validated XHTML and CSS. One of the frustrations I have when developing with XHTML/CSS is the lack of consistent control over layout. Many things that seem quite simple using tables and traditional HTML 4.0 can become challenging brain-teasers when going with CSS.
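
A two-column page is the usual example. With a table it’s one row and two cells and you’re done; with CSS it’s floats, widths, and a clearing element. A minimal sketch (the ids are made up):

    <!-- the markup -->
    <div id="nav">navigation links</div>
    <div id="content">the actual page</div>
    <div style="clear: both;"></div>

    /* the stylesheet */
    #nav     { float: left;  width: 25%; }
    #content { float: right; width: 75%; }

This works until somebody adds a border or a margin to one of the columns; then the two widths no longer fit side by side in some browsers and one column drops below the other, which is exactly the kind of brain-teaser I mean.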

Google has developed a number of web applications that are cross-browser compatible but definitely not standards compliant. As such a large-scale web company, Google has a diverse audience, and the applications it develops need to work almost everywhere, if not everywhere. Google Maps, for instance, supports a handful of newish browsers, but is explicit in stating that it does not support them all. Aside from the simplest of designs, websites that want to use certain features cannot support every browser.

I think I need some comments here. Remind us why we shouldn’t code a website using “old technology” (tables in HTML 4.0) even though we know that, for the most part, newer browsers support it through backwards compatibility.


4 Responses to “Accessibility: Web Standards vs Hacks”

  • Brantone Says:

    I can relate … I’ve pulled my hair out trying to get some site to work across multiple browsers, damned frustrating … especially how widths are calculated when it comes to borders, padding, and margins in IE, the bane of my coding existence.
    The reason we shouldn’t use “old technology” is because … uh, it’s old (?) Actually I’m not too sure why not, but I could regurgitate some of the stuff I’ve read over the years. Separation of content and layout: if you want to redo the layout down the road, it’s easier to do so if there’s a separation; with tables, things are nested and it gets ugly (not to mention it takes longer to load). Plus, if you have CSS it’s only loaded once, and the code looks cleaner.
    In my mind those are just excuses for someone who got confused with tables. Although I agree nested tables can get ugly (and Netscape 4 had a nasty habit of really screwing them up), I still think that some stuff can be done a hell of a lot easier with tables rather than stupid p.i.t.a. CSS hacks.
    With regards to the “separation” issue, designers constantly say it’ll be easier to redesign later, that the content is separate, or that you can have multiple style sheets for the same content. While in theory this may be a novel idea, in practice pages aren’t redesigned often enough to deem it a worthy cause.

    Personally, I’m not really sure which side of the argument I’ll take on this one. Both arguments, for and against, are valid. I like looking at clean, concise HTML code not cluttered with td’s and such, and I like having all my stuff in a CSS file. On the other hand, trying to ensure the page renders properly in each conceivable permutation of platform and browser can be extremely frustrating, sometimes having to “tough luck” some browser and simply have it degrade to something ugly but readable.

    That being said, and now that I’ve rambled on aimlessly … I think I’ve just talked myself back to your conclusion: “why bother?”
    If anything it provides a good challenge, and possibly a big headache.

  • Jeffrey Larson Says:

    [quote] sometimes having to “tough luck” some browser and simply have it degrade to something ugly but readable [/quote]

    I think having a site degrade nicely on older browsers is a good thing. Aside from the simplest CSS, older browsers (like NN4) don’t have a solid implementation of CSS2 for layout and such.

    Currently I only own x86 computers but yeah, I do wish I had a Mac to test that partition of permutations.

    Good point on the infrequency of redesign.

    Going further than just HTML/CSS, can’t there also be separation of content and layout using server-side scripting for templating? If a site consists of a bunch of repeated modules, these could be printed from functions, which would only have to be edited once in order to change the markup (there’s a sketch at the bottom of this comment). That doesn’t give you complete separation in the rendered source, though, and it doesn’t give you the download-once benefit of an external CSS file either.

    I’m glad you don’t think this is clear cut either.
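
    To sketch what I mean by module functions: the server-side language doesn’t really matter, so here it is as a plain JavaScript-style function (the function name and the markup inside it are made up). The point is that the markup for a repeated module lives in exactly one place.

        // One function owns the markup for a "news item" module.
        // Change the markup here and every page that prints news
        // items picks up the change.
        function newsItem(title, body) {
          return '<div class="news">'
               + '<h3>' + title + '</h3>'
               + '<p>' + body + '</p>'
               + '</div>';
        }

        // Each page just calls it for the items it has:
        document.write(newsItem('First post', 'Hello world.'));
        document.write(newsItem('Second post', 'More of the same.'));

    You still send the repeated markup to the browser on every page, which is why this isn’t the complete separation (or the download-once saving) that CSS gives you.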

  • Derek Field Says:

    Jeff, I’m still a total rookie, but could you possibly detect what browser the surfer is using and then relocate them to a page specifically designed for them and their browser generation? Can you code multiple versions of a page and have browsers ignore the stuff that isn’t meant for them? That way maybe you could type out the content only once? That sounds like a lot of extra work and maybe even a stupid suggestion, but is it possible?

    I know in my ‘idiot’s guide to HTML’ that I’m working through there was a section on the JavaScript statements, ‘navigator.appName’ and ‘navigator.appVersion’. You could then send the user somewhere else using the ‘location.href’ statement right? Or, you could at least notify them that they’re using an old browser and that if they want to see the site as it’s meant to be, to download the most updated version? And you could provide a link for them.

    I agree with your comments about the small percentage of your viewing audience using old browsers and maybe not worrying about them. Also, don’t most new computers with Windows XP automatically download any updates for IE? Maybe it will be less of a problem in the future? I know I’m limited to Microsoft experience right now, but wouldn’t Netscape and Mozilla also at least notify you to update your version when possible?

    I wish tag worked with Netscape. I also know that Netscape didn’t view my FTP very well either but I’m aware that people should be using real FTP clients if they wanted full functionality.

    Anyway, those are my limited knowledge thoughts.

  • Jeffrey Larson Says:

    You are talking about two distinct methods.

    One: browser sensing to forward people to a site built specifically for their browser. This means coding some number of web pages (or even full sites) multiplied by the number of browsers your audience uses, so the amount of code grows pretty much in proportion to the number of browsers. That is unacceptable everywhere except for small single pages with a specific purpose. It’s typically not worth the effort, and when you think about it, it makes maintenance ridiculously painful.

    Two: a single coded page with sections that are ignored by browsers that don’t support certain functionality. This is hack design. It comes back to browser-specific code that _should_ break on purpose in certain browsers and work in others, which again means little tweaks to the code can require huge amounts of testing across all of your users’ browser/OS combinations. (There are quick sketches of both methods at the bottom of this comment.)

    Your point about Windows XP doesn’t quite hold, though. Windows XP is just one operating system out of dozens, and even if it is the most common OS, users can still run the browser of their choice; for instance, I use Windows XP but browse with Mozilla Firefox. IMHO, it is frustrating and just not fair to require users to upgrade their software in order to view a website. I’ve had experiences where stuff wouldn’t work with old versions of software and yet the computer couldn’t meet the newer version’s system requirements. Just not fair, and that’s where backwards compatibility becomes key.

    Thanks for your comments man!
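
    P.S. For what it’s worth, here is roughly what the two methods look like in code. First, the sniffing you describe, using the navigator properties from your book. This is only a sketch and the filename is made up, but it is the general shape of method one:

        // Send Netscape 4.x visitors to a simpler page.
        // NN4 reports appName "Netscape" and an appVersion starting
        // with "4"; Mozilla/Firefox also report "Netscape" but claim
        // version 5, so they are not caught by this test.
        if (navigator.appName == 'Netscape' &&
            parseFloat(navigator.appVersion) < 5) {
          location.href = 'simple.html'; // made-up filename
        }

    The tidiest version of method two that I know of is IE’s conditional comments: every other browser sees an ordinary HTML comment and skips it, so only IE loads the extra stylesheet (the filenames are made up):

        <link rel="stylesheet" type="text/css" href="main.css">
        <!--[if IE]>
          <link rel="stylesheet" type="text/css" href="ie-fixes.css">
        <![endif]-->

    Either way you are still testing every browser you care about, which is the maintenance headache I was complaining about above.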
