molly.com

Tuesday 2 June 2009

The Real “Why XHTML” Discussion

The previous post was a document on XHTML2, sent in error. I noticed that Steven’s document didn’t match our conversation, but I made the honest mistake of assuming that what he had sent in error was what he wanted to use to address the concerns.

So, I’ve left the other post up, but please know that this is the real discussion, with far more detail and insight than the other document, which is more an overview of XHTML2’s core principles.

Forgive me, readers, and behold! It’s the real “Why XHTML” overview!

The following information is kindly provided by Steven Pemberton of CWI, Amsterdam, and the W3C.

Why XHTML

Molly Holzschlag asked me if I’d try to explain clearly and simply why XML parsing is advantageous and why XHTML is still relevant. This was my answer.

Firstly, some background. I sometimes give talks on why books are doomed. I think books are doomed for the same reasons that I used to think that the VCR was doomed, or that film cameras were doomed. People present at the talks make the mistake of thinking that because I think books are doomed, I want them to be doomed, and get very cross with me. Very cross. But in fact, I love books, have thousands of them … and think they are doomed.

Similarly, people make the mistake of thinking that because I am the voice behind XHTML, I therefore think that XML is totally perfect, the answer to all the world’s problems, etc.

I don’t think that, but:

  1. I was chartered to create XHTML, and so I did.
  2. XML is not perfect; in fact I think the designers were too print-oriented and failed to anticipate properly its use for applications. As Tim Bray said, “You know, the people who invented XML were a bunch of publishing technology geeks, and we really thought we were doing the smart document format for the future. Little did we know that it was going to be used for syndicated news feeds and purchase orders.”
  3. I have often tried to get some of XML’s worst errors fixed (not always successfully).
  4. I believe that you should row with the oars you have, and not wish that you had some other oars.
  5. XML is there, there are loads of tools for it, it is interoperable, and it really does solve some of the world’s problems.

Parsing

So, parsing. Everyone has grown up with HTML’s lax parsing and got used to it. It is meant to be user-friendly; “Grandma’s markup” is what I call it in talks. But there is an underlying problem that is often swept under the carpet: there is a sort of contract between you and the browser; you supply markup, it processes it. Now, if you get the markup wrong, it tries to second-guess what you really meant and fixes it up. But then the contract is not fully honoured.

If the page doesn’t work properly, it is your fault, but you may not know it (especially if you are grandma), and since different browsers fix up in different ways, you are forced to try it in every browser to make sure it works properly everywhere. In other words, interoperability gets forced back to being the user’s responsibility. (This is the same for the C programming language, by the way, for similar but different reasons.)

Now, if HTML had never had a lax parser, but had always been strict, there wouldn’t be an incorrect (syntax-wise) HTML page on the planet, because everyone uses a ‘suck it and see’ approach:

  1. Write your page.
  2. Look at it in the browser; if there is a problem, fix it, and look again.
  3. Is it OK? Then I’m done.

and thus everyone keeps iterating on their page until it (looks) right. If that iteration also included getting the syntax right, no one would have complained. No one complains that compilers report syntax errors, but in the web world there is no feedback that a page has an error or has been fixed up.
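
To make the feedback gap concrete, here is a minimal sketch using Python’s standard library (my choice purely for illustration, not anything from Steven’s text). The strict XML parser rejects the broken markup and says where it broke; the lax HTML parser streams right past the same mistake without a word:

    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    bad = "<p>An <b>unclosed element</p>"  # the <b> is never closed

    # Strict contract: the XML parser refuses the markup and reports a position.
    try:
        ET.fromstring(bad)
    except ET.ParseError as err:
        print("XML parser:", err)  # e.g. "mismatched tag: line 1, column 25"

    # Lax contract: the HTML parser just streams events and never complains.
    class Echo(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("HTML parser saw start tag:", tag)

    Echo().feed(bad)  # prints p, then b; the missing </b> passes silently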

Laxness was actually tried once with programming languages. PL/I was lax, and many programs did something other than what the programmer intended, and the programmer just didn’t know. Luckily, other programming languages haven’t followed its example.

For programming languages laxness is a disaster; for HTML pages it is an inconvenience, though with Ajax it would be better if you really knew that the DOM was what you thought it was.

So for XML the designers said, “Let us not make that mistake a second time,” and if everyone had stuck to the agreement, it would have worked out fine. But in the web world, as soon as one player doesn’t honour the agreement, you get an arms race, and everyone starts being lax again. So the chance was lost.

But, still, being told that your page is wrong, even if the processor goes on to fix it up for you, is better than not knowing. And I believe that draconian error handling doesn’t have to be as draconian as some people would like us to think it is. I would like to know, without having to go to the extra lengths that I have to nowadays.

So I am a moderate supporter of strict parsing, just as I am with programming languages. I want the browsers to tell me when my pages are wrong, and to fix up other people’s wrong pages, which I have no control over, so I can still see them.

There is one other thing on parsing. The world isn’t only browsers. XML parsing is really easy; it is rather trivial to write an XML parser. HTML parsing is less easy because of all the junk HTML out there that you have to deal with, so if you are going to write a tool to do something with HTML, you have to go to a lot of trouble to get it right (as I saw in a research project where I watched some people struggling with exactly this).
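
As a rough illustration of that claim, here is a toy parser, invented for this note, that handles only a well-formed subset of XML: elements, double-quoted attributes, and text. It deliberately skips comments, CDATA, processing instructions, entities, DTDs, and namespaces, which is where a real parser does much of its work, but the core “match every start tag with its end tag or fail” loop really is small:

    import re

    # One token is either a tag (start, end, or self-closing, with
    # double-quoted attributes only) or a run of text.
    TOKEN = re.compile(r'<(/?)([^\s/>]+)((?:\s+[^\s=]+="[^"]*")*)\s*(/?)>|([^<]+)')
    ATTR = re.compile(r'([^\s=]+)="([^"]*)"')

    def parse(xml):
        root = ("#document", {}, [])   # node = (name, attributes, children)
        stack = [root]
        for m in TOKEN.finditer(xml):
            closing, name, attrs, selfclose, text = m.groups()
            if text is not None:
                if text.strip():
                    stack[-1][2].append(text)
            elif closing:  # end tag: must match the innermost open element
                if stack[-1][0] != name:
                    raise SyntaxError(f"mismatched tag </{name}> at offset {m.start()}")
                stack.pop()
            else:          # start tag: push unless self-closing
                node = (name, dict(ATTR.findall(attrs)), [])
                stack[-1][2].append(node)
                if not selfclose:
                    stack.append(node)
        if len(stack) != 1:
            raise SyntaxError(f"unclosed element <{stack[-1][0]}>")
        return root[2][0]

    print(parse('<a href="x"><b>hi</b></a>'))
    # ('a', {'href': 'x'}, [('b', {}, ['hi'])])
    parse('<a><b></a>')  # raises SyntaxError: mismatched tag </a> at offset 6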

Let me tell a story. I was once editor-in-chief of a periodical, and we accepted articles in just about any format, because we had filters that transformed the input into the publishing package we used. One of the formats we accepted was HTML, and the filter of course fixed up wrong input, as it had to. Once we had published the paper version of the periodical, we would then transform the articles from the publishing package into a website. One of the authors complained that the links in his article on the website weren’t working, and asked me to fix them. The problem turned out to be that his HTML was incorrect: the input filters were fixing it up, but in a slightly different way from how his browser had been doing it. And I had to put in work to deal with this problem.

Another example was in a publishing pipeline where one of the programs in the pipeline was producing HTML that was being fixed up but in a way that broke the pipeline later on. Our only option was to break open the pipeline, feed the output into a file, edit the file by hand, and feed it into the second part of the pipeline.

Usability is about making people’s lives better by easing their task: making the task quicker, error-free, and enjoyable. By this definition, HTML’s attempt to be more usable completely failed me in these cases.

XHTML

The relevance of XHTML also starts with the observation that not everything is a browser. A lot of the producers of XHTML do it because they have a long XML-based tool pipeline that spits out XHTML at the end, because it is an XML pipeline. Their databases talk XML, their production line produces and validates XML, and at the end out comes XML, in the form of XHTML. They just want browsers to render their XHTML, since that is what they produce. That is why I believe it is perfectly acceptable to send XHTML to a browser using the media type text/html. All they want is their document rendered, and with care there is nothing in XHTML that breaks the HTML processing model.
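
Here is a minimal sketch of that pipeline idea, with an invented record format and Python’s standard XML tooling standing in for whatever a real production line would use; the point is that the XHTML at the end is produced by the same XML machinery as everything before it:

    import xml.etree.ElementTree as ET

    # A made-up record, standing in for whatever the upstream tools emit.
    record = ET.fromstring(
        "<article><title>Why XHTML</title><body>Pipelines talk XML.</body></article>"
    )

    XHTML = "http://www.w3.org/1999/xhtml"
    ET.register_namespace("", XHTML)  # serialize with a default xmlns

    html = ET.Element(f"{{{XHTML}}}html")
    head = ET.SubElement(html, f"{{{XHTML}}}head")
    ET.SubElement(head, f"{{{XHTML}}}title").text = record.findtext("title")
    body = ET.SubElement(html, f"{{{XHTML}}}body")
    ET.SubElement(body, f"{{{XHTML}}}h1").text = record.findtext("title")
    ET.SubElement(body, f"{{{XHTML}}}p").text = record.findtext("body")

    # Well-formed XML that is also unobjectionable HTML syntax, so it can
    # be served as text/html.
    print(ET.tostring(html, encoding="unicode"))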

But there is more. The design of XML is to allow distributed markup design. Each bit of the markup story can be designed by domain experts in that area: graphics experts, maths experts, multi-media experts, forms experts and so on, and there is an architecture that allows these parts to be plugged together.

SVG, MathML, SMIL, XForms, etc. are the results of that distributed design, and if anyone else has a niche that they need a markup language for, they are free to create one. It is a truly open process, and there are simple, open, well-defined ways for them to integrate their markup with the existing markups. (One of the problems with the current HTML5 process is that it is being designed as a monolithic lump, by people who are not experts in the areas they need to be experts in.)
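
Seen from the consuming side, that plug-in architecture comes down to dispatching on namespaces: each element carries the URI of the specification that owns it. A small sketch, where the compound document and the “engines” are made up for illustration:

    import xml.etree.ElementTree as ET

    compound = ET.fromstring("""\
    <html xmlns="http://www.w3.org/1999/xhtml">
      <body>
        <p>A circle drawn by the graphics vocabulary:</p>
        <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20">
          <circle cx="10" cy="10" r="8"/>
        </svg>
      </body>
    </html>""")

    HANDLERS = {
        "http://www.w3.org/1999/xhtml": "hypertext engine",
        "http://www.w3.org/2000/svg": "graphics engine",
    }

    # ElementTree spells qualified names as {namespace}local; every element
    # in this document is namespace-qualified.
    for el in compound.iter():
        ns, _, local = el.tag[1:].partition("}")
        print(f"<{local}> is handled by the {HANDLERS.get(ns, 'unknown')}")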

So anyway, the reason behind the need for XHTML is that the XML architecture needs the hypertext bit to plug in. Many misunderstood this, because XHTML 1.* itself offered next to no new functionality: the new functionality was SVG, SMIL, MathML and so on.

And my poster child for that architecture was Joost (alas, no longer available), which combined whole bunches of those technologies to make an extremely functional IP TV player, and you just didn’t realise it was actually running in a browser (Mozilla, in that case).

Anyway, out on the intranets, there are loads of companies using that architecture to do their work, and then having to do extra work to push the results out to the world’s browsers by making the results monolithic again.

What I anticipate is that we will see the emergence of XML javascript libraries that will allow you to push your XML documents to the browsers, which are then just used as shells supplying javascript processors and renderers that process the XML and make it visible. HTML will become the assembly language of the web. HTML is just not addressing the use cases of the real world any more. We need higher levels of markup.

So in brief, XHTML is needed because 1) XML pipelines produce it; 2) there really are people taking advantage of the XML architecture.

Filed under:   general
Posted by:   Molly | 15:10 | Comments (28)

Comments (28)

  1. Pingback: molly.com » W3C’s Steven Pemberton on XHTML2

  2. So now we know “why”, which leaves the questions of “when does it become a standard recommendation” and “why does it take so long”? I first heard of XHTML 2 and XForms eight or nine years ago. The examples in the previous post are from 2002. Even the revision of XHTML 1.1 (with target as a core module, yay!) had to be taken back because of oversights in the process. The amount of time this takes reminds me of non-profit organizations where people contribute voluntarily in their spare time; I expected these standards to proceed much faster, which leaves the question of whether the priorities of the working group members or the W3C standardization process is flawed. No offense, I appreciate the work, but something seems to be wrong here.

  3. Before the other thread was closed, I was about to post stuff about how the existence of HTML has created a mindset where we don’t need to code everything perfectly. I made some references to computer code, and it seems I’m thinking in the same vein as Steven.

    Also, to me, XHTML sounds like something that fits on top of HTML. I checked out the demo you linked to, and it seems to use HTML5 elements. Would the group basically have to wait for HTML5 to come out before they got XHTML2 finished? I would hate to think I wouldn’t have access to things like the video tag.

  4. Steve Pemberton says: “Another example was in a publishing pipeline where one of the programs in the pipeline was producing HTML that was being fixed up but in a way that broke the pipeline later on. Our only option was to break open the pipeline, feed the output into a file, edit the file by hand, and feed it into the second part of the pipeline.”

    And yet we have almost exactly this problem with XML. RSS, for example, is littered with such problems despite XML’s unambiguity, toolset and interoperability. SOAP is another classic example of XML failing to be interoperable. I find it inconceivable that, despite two rather large failures of XML, XHTML2 will be any different.

    It’s also the case that if Steve chooses to ignore the draconian error handling of the XML specification, isn’t _that_ breaching the XML contract? Why is it not okay for HTML authors not to opt in to supporting the contract, but okay to ignore the XML specification when it suits us?

    And XML as a datasource? Those days have passed, and JSON has emerged as the generic way of sharing data between applications.

  5. Pingback: Why XHTML? « Sharovatov’s Weblog

  6. Wow, cool! There’s been a rip in the space/time continuum and this article has slipped through from 1998!

  7. mattur, the reasons given for supporting XHTML are still viable today. There are a considerable number of people who want to take advantage of an XML architecture. It is still the only way to support embedded MathML, SVG, SMIL, RDFa, and so on.

  8. Well, I certainly have no problem with “XHTML 2.0: a markup language designed for an XML pipeline behind a company firewall.” But I think one of your esteemed spec-writing colleagues didn’t get the memo, because when I go to http://www.w3.org/TR/xhtml2/ , the first sentence I read is “XHTML 2 is a general-purpose markup language designed for representing documents for a wide range of purposes across the World Wide Web.” If you’d like to post a followup explaining why XHTML is still relevant to the *public* web, I’d be sincerely interested to read that.

    Re: parsing, Steve seems to be completely unaware of the parsing section of the HTML 5 specification, or its reference implementation, html5lib. Plus, you know, http://validator.w3.org/

    Re: intranets, would these be the same companies that have standardized on IE 6, with its thorough implementation of XHTML?

    Re: XHTML as text/html, of course Steve knows (but doesn’t say) that if the W3C had had their way, all of that distributed extensible goodness — “SVG, SMIL, MathML and so on” — would have only ever worked if you served XHTML as application/xhtml+xml, with all the draconianness and lack-of-IE-compatibility that comes along with that. Steve wants to have his (extensibility) cake and (have browsers) eat it too — and ironically, the WHATWG has tackled that exact issue recently, allowing SVG and MathML to be embedded directly in text/html. No thanks to the XHTML2 Working Group.

    The rest of this post (and pretty much all of the previous post) is just too ridiculous to contemplate. “It is rather trivial to write an XML parser”? “We need higher levels of markup”? I LOL’d.

  9. Thanks for posting this, Molly. It’s interesting to see Steven make the case for off-the-shelf HTML parsing libraries (now blooming thanks to the HTML5 parsing algorithm) and for XML representations of HTML.

    However, the XML representation of HTML 4 is XHTML 1.0 and the XML representation of HTML5 is XHTML5. I don’t see how XHTML 2.0 fits this picture.

    Argumentum ad intranet is convenient, because the public can’t verify the claims. But then, I care more about technologies for the World-Wide Web than for company-wide webs.

  10. I am going to stick my neck out along with my ass and say that the rigidity requirements of XHTML foster a mindset for producing clean code. That mindset, along with the structured code, may play an important role in detecting and mitigating security vulnerabilities.

  11. Why XHTML?

    To be honest, I don’t really care if I send my HTML+SVG files as application/xhtml+xml or as text/html. What matters in the end is the SVG support in the browsers. Um, now wait: could it be that the only (major) browser not supporting the XHTML MIME type is also the only browser not supporting SVG? Some nasty people might use this fact to contradict the compatibility argument for text/html. If all browsers supported SVG, they would probably also support the XHTML MIME type. And as long as they don’t, allowing SVG fragments in HTML files (as HTML5 does) doesn’t help me anyway.

    But I agree, namespaces are for extensibility freaks, browsers need ages to support one or two XML dialects, so tag name clashes are quite unlikely (and HTML5’s and SVG’s video element are roughly the same, anyway). So why not put all that stuff into one big pot? (But please, then stop complaining about PHP’s early lack of namespaces and why Python is so much better with its separated modules.)

    But given all that, I still think there is one technique I would not know how to use without XHTML: XSL-T. And XSL-T is not only for transforming data into HTML output. You can perfectly well take XHTML documents, and transform them into new XHTML documents. CSS at a higher level, if you want. So XSL-T is for the server side? Who pushed Opera towards implementing it, then?

  12. Pingback: links for 2009-06-03 « SkunkWorks? No – GovWonks!

  13. Whoa, some very strong views in the comments, so I will tread carefully. I am just glad that Steve says that XHTML as text/html is fine. Admittedly I hadn’t lost much sleep over this before, but now I can officially ignore some of the more dogmatic viewpoints on things like that.

  14. I am of the opinion that XHTML is preferable to HTML, as it is XML.

    Any move towards higher inter-compatibility of technologies is a good idea.

    One of the main cool ideas about XHTML is that it can be used for ‘interface as an API’, since serving XML into your UI essentially exposes a machine-parseable ‘feed’ which any other machine should be able to process if it is done in a standard way (hence RSS works so well).

    This is also why it’s important to create standards-compliant (XML-compliant) documents.

  15. Still learning about XHTML, HTML and PHP.

  16. I don’t think it’s trivial to write an XML parser, even prima facie. (Compare, for example, with writing a parser for line-oriented formats.) I’ve always been struck by this passage from:

    XML is markedly more difficult to parse than it is commonly thought. It is by no means sufficient for a parser to merely follow the Extended BNF grammar of XML. Besides the grammar the XML Recommendation specifies a great number of rules (e.g., whitespace handling, attribute value normalization, entity references expansion) as well as well-formedness and validity constraint checks which a parser must implement. Whitespace handling rules in particular require an unusually tight coupling between tokenizing and parsing.

    Even if you pull out validity, there’s a bunch of stuff in XML which is decidedly non-trivial (consider encodings). Now, given the prevalence of XML parsers, one could take a different tack and say that XML parsing is a solved problem with off-the-shelf tools. But even putting aside the huge cognitive burden of selecting a parser *and an API* to that parser, adding things like application-specific error messages and handling can be quite tricky.

  17. Has anybody here tried to use JSON instead of XML? It is fairly easy to use, understand, parse and so on. JSON (JavaScript Object Notation) is designed for the web, and I think it could be a much better alternative to the XML way.

  18. Pingback: Reinventing Fire » Blog Archive » Platform Games

  19. This is a beautiful piece of writing, and who cares if it sounds like it’s from 1998? Apparently, that message, now well over eleven years old, hasn’t quite sunk in yet. In my (Draconian) opinion, when we put up with browsers that don’t render web pages in strict compliance with the standards that are out there, then we’re not truly using these languages the way they were supposed to be used.

  20. Hi!

    There is a related discussion about XML syntax in HTML (see comments):
    http://crisp.tweakblogs.net/blog/321/html5-why-not-use-xml-syntax.html

    And you have pointed out that often a web page is just the end or beginning of processes that are not using HTML, because it is too complicated/expensive for machines.

  21. I was sold on XHTML. I still think it’s a good idea. Sure, there were flaws, but nothing insurmountable (most everybody knew the right solutions too). The main problem was the W3C. Had they got their finger out, had they properly embraced the browser companies, had they properly embraced the development community, had they moved quicker than a snail’s pace, then I think we’d have had an HTML5-style spec by now. But now people just confuse the W3C with XHTML. XHTML was OK. The W3C was not. And I think that’s a real shame.
