XHTML5 in a nutshell
The WHATWG Wiki portal has a nice section describing HTML vs. XHTML differences, as well as the specifics of a polyglot HTML document: an HTML5 document that can also be served as a valid XML document. I'd like to review what it takes to transform an HTML5 polyglot document into a valid XHTML5 document: it appears that 'XHTML5' has finally become an official name.
The W3C's first public working draft of the "Polyglot Markup" recommendation describes a polyglot HTML document as one that conforms to both HTML and XHTML syntax by using a common subset of the two. In a nutshell, an HTML5 polyglot document has:
- HTML5 doctype/namespace
- XHTML well-formed syntax
- XML MIME type: application/xhtml+xml
In a nutshell, an XHTML5 document has:
- HTML5 doctype/namespace: the <!DOCTYPE html> definition is optional, but it is useful in a polyglot document because it prevents browser quirks mode.
- XHTML well-formed syntax
- XML MIME type: application/xhtml+xml. This MIME declaration is not visible in the source code, but it appears in the HTTP Content-Type header, which can be configured on the server. Of course, the XML MIME type is not yet supported by the current version of Internet Explorer, though IE can render XHTML documents.
- Default XHTML namespace: <html xmlns="http://www.w3.org/1999/xhtml">
- Secondary namespaces such as SVG, MathML, XLink, etc. To me this is like a litmus test: if you don't need these namespaces in your document, then using XHTML is overkill in the first place.
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title></title>
  <meta charset="UTF-8" />
</head>
<body>
  <svg xmlns="http://www.w3.org/2000/svg">
    <rect stroke="black" fill="blue" x="45px" y="45px" width="200px" height="100px" stroke-width="2" />
  </svg>
</body>
</html>
The XML declaration <?xml version="1.0" encoding="UTF-8"?> is not required if the default UTF-8 encoding is used: an XHTML5 validator will not mind if it is omitted. However, it is strongly recommended to configure the encoding via the server's HTTP Content-Type header; alternatively, the character encoding can be included in the document as part of a meta tag: <meta charset="UTF-8" />. This encoding declaration is needed in a polyglot document so that it is treated as UTF-8 whether served as HTML or as XHTML.
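For example, if the page is generated by a PHP script, a minimal sketch of setting that header looks like this (the single header() call is the whole trick):

<?php
// Serve the document as XHTML5: XML MIME type plus an explicit
// UTF-8 encoding in the HTTP Content-Type header.
header('Content-Type: application/xhtml+xml; charset=UTF-8');
?>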
The Total Validator Tool (a Firefox plugin/desktop app) now has a user-selectable option for XHTML5-specific validation.
I would say that the main advantage of using XHTML5 is the ability to extend HTML5 with XML-based technologies such as SVG and MathML. The disadvantages are the lack of Internet Explorer support, more verbose code, and stricter error handling. Unless we need that extensibility, HTML5 is the way to go.
You are overdoing it with “don’t use XHTML” several times in a row.
The doctype is useless in non-polyglot XHTML5. It’s used for validation but with the HTML5 doctype the only thing you can validate is the name of the root element. Triggering quirks mode is impossible in XHTML.
As far as I know, rendering XHTML in IE8 requires a trick with XSLT.
Specifying the charset in the meta tag does not work in XHTML. This is only useful if it's a polyglot. Then again, it doesn't make much sense, since the polyglot-handling script on the server (e.g. .htaccess or PHP) already sets the Content-Type header, so for text/html it can easily include the charset parameter. Note that the XML declaration is illegal in text/html.
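A minimal sketch of such a polyglot-handling script in PHP (illustrative only; a real implementation would also honor the q-values in the Accept header):

<?php
// Send XHTML to clients that advertise support for it in their
// Accept header; fall back to text/html otherwise.
$accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '';
if (strpos($accept, 'application/xhtml+xml') !== false) {
    header('Content-Type: application/xhtml+xml; charset=UTF-8');
} else {
    // No meta tag needed: the charset parameter travels in the header.
    header('Content-Type: text/html; charset=UTF-8');
}
?>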
Except for IE8 and below, all browsers support XHTML. In some cases XHTML even renders faster than HTML. If we had switched to XHTML 1 a long time ago, it would now be a lot easier to add new elements to the language without parsing problems. For programmers, scraping XHTML is a lot easier than HTML due to the wide availability of XML parsers (see the sketch below). Also, the strict error handling makes it easier to write valid pages. It's always been denied that XHTML is the future, but since IE9 has support, XHTML should eventually win, for the web's sake.
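To illustrate the scraping point, a quick sketch using PHP's DOM extension (the URL is hypothetical, and the page must be well-formed XML for this to work):

<?php
// An XHTML page is well-formed XML, so a stock XML parser can load it
// directly; no tag-soup recovery code is needed.
$doc = new DOMDocument();
$doc->load('http://example.com/page.xhtml'); // strict XML parse

// XHTML elements live in a namespace, so query by namespace URI.
$links = $doc->getElementsByTagNameNS('http://www.w3.org/1999/xhtml', 'a');
foreach ($links as $a) {
    echo $a->getAttribute('href'), "\n";
}
?>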
Aren't SVG and MathML rather insignificant motivators, since they're both allowed in plain HTML5 documents as well? To a certain degree, XLink too.
There are certainly reasons to use XHTML documents, but the ability to embed SVG and MathML isn’t necessarily one of them.
XHTML5.NL, I have already said that the charset in the meta tag is useful if it's a polyglot document, but I did not mention that about the DOCTYPE tag – I will correct that, thank you.
I wonder how you came up with the idea that "XHTML even renders faster than HTML"? XHTML markup is more verbose and therefore it would render slower.
Sebastian, thanks for the input. Yes, SVG and MathML are supported inline by the HTML5 specification, but browser support for them in HTML5 is limited even among the latest non-beta browser versions: no support in IE8, FF3.6, Chrome 5.0, or Safari 5.0. This could be a problem, since large corporations get stuck with old browsers for years.
And of course, there are other XML-based technologies not yet supported at all by HTML5.
The doctypes and the meta charset tags are useful if you want to test and validate your polyglot page before you have the opportunity to put it on a server.
And XHTML can be faster for certain pages depending on the XML parser used. Since XML cannot be malformed there isn’t any code wasted on making it degrade gracefully. I don’t know that any current browsers use a fast XML parser though. Loading the file from the server would definitely be slower for the XML page, if the HTML page didn’t need all those IE hacks.
"I wonder how you came up with the idea that 'XHTML even renders faster than HTML'? XHTML markup is more verbose and therefore it would render slower."
I said that because of Chrome’s Accept header (something like application/xhtml+xml,text/html;q=0.9), but apparently that’s just a bug.
I think, though, that the issue with extensibility isn't whether you need it, but whether you might need it. Say some other markup language becomes as popular as, say, MathML, and it would be really cool to start embedding it into documents. But maybe this markup had been developed outside of browsers for a while and used tags that conflict with some portions of HTML. Without namespaces, the only way to add this new functionality would be to make old pages obsolete or to force detection of old pages.
Take a page, for example, from CSS. Every browser has its own extensions, and they are supposed to use the namespace-like concept of browser prefixes. Except IE didn't, which might have been fine except that SVG has a CSS rule called "filter". Suddenly a lot of stylesheets are in trouble.
To those who say XHTML could be faster than HTML:
XHTML documents may indeed be a bit faster to parse, but the bottleneck is never the parser. It's those stupid scripts (from advertisers) and network speed (which neither side may be able to control).
Currently browsers rely heavily on incremental rendering to give users the best experience. Internet connections and server conditions are not always perfect. There are times when the whole HTML/XHTML file takes multiple seconds, or a significant fraction of a minute, to arrive. In the worst case, the connection ends and the file is incomplete and truncated.
In the case of HTML, the issue is minor, as the browser provides graceful degradation. But if the XHTML parser enforces strict conformance and won't produce DOM trees for truncated documents, all the user can see is a blank page. I've experienced it once before and it doesn't feel good. "You can't show me what you've downloaded yet because there's still 1% of the XHTML file stuck somewhere in the network tube and not yet received?" See, it is very annoying when one is using an unstable connection.
So in the end, that "no graceful degradation allowed" wonderland is never acceptable in the real world. Just stick with HTML; we need reliability. And forced-conformance XHTML will never feel faster to the general public, never.
(Oh, by the way, is there anybody willing to make a gracefully degrading XHTML parser and/or parsing rule? This may be the only opportunity for XHTML to shine.)
“And XHTML can be faster for certain pages depending on the XML parser used.”
In fact, Nehalem introduced the SSE4.2 extension for fast XML parsing.
Shouldn't you require HTML to be well-formed as well? Why allow people to write broken code and then expect browser vendors to waste time figuring out how to handle it?
“Why allow people to write broken code…”.
Who says that you, me, or we are the ones who may allow or disallow?
You can disallow whatever you want, but it will always be here; lots of people start writing websites without even knowing what validation means, and lots and lots of web-developer websites are written in invalid code. That's the reality and that will stay the reality: try validating 10 websites and see how many errors you will encounter… it's a fairytale to think that in the future we will be looking at a 100% valid WWW.
But that's not the point of this article: I myself have hundreds of websites online; some of them are valid, others are on the to-do list (some are really disgusting if you look at the code), and for months now I have been trying to decide what to do: XHTML or HTML? Almost all of my sites are written in HTML, and some articles, including this one, make me move towards HTML5.
Or can someone tell me 100% for sure that I am making the wrong choice and that in the future I will have to convert all of my HTML pages to XHTML again? If so, then convince me, because I was never convinced about using XHTML instead of HTML, and I have never seen the advantage of XHTML myself…
It would be more productive for browsers to break a page and declare what the problem was, or what was expected in the XHTML, than to subjectively attempt to guess what people intended, thus ending up with differing quirks modes and creating unnecessary complications at so many levels.
Why would you WANT to use HTML over XHTML? If you have an error in XHTML the page breaks, and in Gecko- and Presto-based browsers you get an error message, so you're able to correct the issue on the spot, whereas in HTML such errors in the vast majority of situations will go undetected and end up on live websites, potentially causing problems for people. Additionally, you can extend XHTML to support various other technologies, whereas HTML you cannot.
The way I see it, XHTML not only has advantages over HTML; HTML is itself a liability.
By using an XML-based technology you get a lot of advantages: using XQuery and XPath to query the data, using XInclude and XLink for much more powerful linking, and embedding other XML technologies such as SMIL, SVG, MathML, and RDF.
The vision of the W3C was much more theoretical and the WHATWG is much more practical; however, I think both are needed. If you write a website you should probably choose HTML5, just because a lot of browsers out there suck. But in an ideal world all browsers would implement XHTML, and that is what you should use, imho.
Erwin, the choice between HTML5 and XHTML5 boils down to the choice of a MIME/content type: the media type you choose determines what type of document you should be using. It's pretty simple: if you use an XML MIME type such as 'application/xhtml+xml', then it's an XHTML5 document; if you use 'text/html', then it's HTML5.
Unlike XHTML1 vs. HTML4, the XHTML5 vs. HTML5 choice is defined exclusively by the MIME type, rather than by the DOCTYPE: HTML5 is no longer formally based on SGML and it is no longer specified by a DTD.
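To illustrate, the very same markup is interpreted two different ways depending solely on the header it is served with (example headers):

Content-Type: application/xhtml+xml; charset=UTF-8  (parsed as XHTML5, by the XML parser)
Content-Type: text/html; charset=UTF-8  (parsed as HTML5, by the HTML parser)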
John, as for the well-formed syntax, it can be used in an HTML5 document, making it conform to both the HTML and XHTML syntactic requirements. This kind of 'polyglot' document is useful when it is intended to be served as either the XML MIME type or the HTML type; otherwise you don't have to use strict XML syntax in an HTML5 document.
HTML5 syntax is not "broken"; it has a very specific set of syntax rules which must be followed. For instance, the opening <head> tag is not required if the head element is empty or if the first thing inside it is an element. A closing tag is not required if the element is immediately followed by another element or if it is the last item within its parent element. Here is an example of a perfectly valid HTML5 document along those lines: it has no head tag, no body tag, no tr and td closing tags, and no quotes around the table title attribute:
<!DOCTYPE html>
<html>
<title>Valid HTML5</title>
<table title=example>
<tr><td>Cell one
<tr><td>Cell two
</table>
</html>
@Erwin: XHTML doesn't require validity, only well-formedness. The basic well-formedness requirement for XML syntax sets the bar pretty low: closed tags, lowercase tags, no illegal characters, no attribute minimization, and namespaces that follow proper syntax. That's basically it. Given that most people generate XML in a way that makes malformed syntax from a typo impossible, and given that a person serving application/xhtml+xml should know what they're doing, there's a very good chance that a UA receiving malformed XML syntax indicates something went wrong that the user should know about. I would certainly hope that UAs fail loudly rather than silently guessing at the correct interpretation of an ambiguously broken data structure containing possibly critical data. HTML's silent behavior is doubtlessly inferior to XML's, despite HTML5's slight improvement over its predecessor by standardizing that wrong behavior so it's at least wrong consistently.
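For instance, an illustrative pair: the first input tag below is fine in HTML but malformed as XML (minimized attribute, unclosed void element), while the second clears the well-formedness bar:

<input type=checkbox checked>
<input type="checkbox" checked="checked" />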
That said, the only browser which notifies users of broken pages regardless of serialization is Konqueror. The only browser which offers users the choice of fixing up broken XHTML is Opera, but it does so in the stupidest way possible, by re-parsing the broken XML as text/html. Implementers must just not understand that the XML spec doesn't require them to work against users. I use Privoxy extensively to modify behavior because there isn't a browser around that gives me the control I want (Greasemonkey gets you halfway there, but with no control over parsing/pre-processing). I'm working on a proxy of my own using Python's LXML and HTTP libs, since it gives easy access to libxml2 and several other parsers rather than pure regex matching; unfortunately, Privoxy seems the best/only option available at the moment.
In the future the most important step towards fixing markup will be the implementation of XBL2. HTML5 solves problems by simply statically dictating both semantics and behavior in a global namespace while disallowing extensibility. I'm fine with that so long as there's a more extensible long-term solution. XML can dynamically bind semantics to tags via RDF and RDFa 1.1 profiles; the last remaining step is to dynamically bind tags to behaviors, so that we can formally define vocabularies rather than the xhtml namespace becoming a big mishmash of every feature of every HTML variant, and rather than having to break semantics by transforming vocabularies into (X)HTML via XSLT in order to access browser features implicitly triggered by tags in a given namespace (without scripting).
@Tjerk: God I wish browsers processed XIncludes. That’s probably my #1 feature wish. No idea why nobody does since all the major XML libraries do – I’m sure the very same ones used in browsers themselves. http://www.w3.org/TR/xml-model/ isn’t a bad idea either.
I'm wondering if XHTML5 should not have its own namespace, since it would require its own DTD…
EVERYONE JUST USE XHTML. WHY ALLOW TAG SOUP!?
All you need is to serve the document with a MIME content type of "application/xhtml+xml" and to ensure the html tag has the xmlns attribute, so that the browser renders the page instead of the document tree. XHTML is easier to parse and debug, and has all the benefits of an XML document.
I should point out that moving all HTML documents to XHTML would be a short-term disaster for the following reasons:
1) Most “XHTML” pages served using “text/html” are not well-formed, let alone valid.
2) Even those pages that validate under normal circumstances may break in specific cases, depending on server-side data or user input.
3) More than a third of all browsers still don’t support XHTML, so you’d alienate a massive number of users immediately.
In the long term, I don't see the advantage. The error handling model essentially holds the user accountable for the sins of the Webmaster without actually notifying the Webmaster.
It would make more sense to create a hybrid syntax that has all the advantages of both XHTML and HTML while allowing for HTML-style error handling. Call it "XPolyglot" (short for "eXtensible Polyglot"). The browser could notify the user when there's a parser error using one of those top notification bars, but would handle errors like HTML and display the full page. You could also have a <meta> tag for the Webmaster's email so you can email her/him any errors that occurred on the page at the click of a button.
You could say that parsing would be faster with pure XHTML, but I suspect you could design a parser with near-identical performance using some clever parser optimization.
It would also be nice to use attribute minimization for a series of semantic boolean values rather than having to use the role attribute:
<p htmlbool1 htmlbool2>…
<p ns:boolean1 ns:boolean2>…
…Versus…
<p role="htmlbool1 htmlbool2">…</p>
<p role="ns:boolean1 ns:boolean2">…</p>
Anyway, just a thought.
From the above post…”3) More than a third of all browsers still don’t support XHTML, so you’d alienate a massive number of users immediately.”
Where did you dream this up???
All recent browser versions support XHTML. It's only been out for around 12 years or so now. The biggest obstacle is IE 8 and below, only because MS decided to stay in the stone age and not add XHTML/XML support until IE9 (March 2011). MS had their head up their rear for a long time. This should have been supported a long, long, long time ago.
I see HTML5 as a band-aid for the time being. Eventually, XHTML will be the future at some point in time. It may not be called XHTML, perhaps HTML8 or something along those lines, but well-formed syntax will win in the long run, largely due to handhelds. With the need to keep size down and resources limited, whopper-size browsers with tons of checking/correcting for malformed syntax (5 different ways to write the same tag in HTML5) aren't going to be the future.
I still find it amusing how many web designers think it's because of them that HTML5 became the dominator over XHTML. It isn't. The big boys like Google and Apple were the driving forces. Imagine how many iPhones would have become bricks without mandatory updates to support the non-backward-compatible XHTML 2.
Thank the heavens! There are some rational people still working on this.
It is painfully obvious that XHTML is the future. I agree with Jon and many others above in saying that HTML5 is a (really loud, really obnoxious) band-aid solution that serves little more than to make my server-side web-scraping life a living hell.
However, it's not all bad. HTML5 has some pretty nifty tags (and a bunch of useless ones) that I'll be using no less than 5 years from now (when IE9/IE10 becomes the new IE6). The new doctype is very nice, and I find myself using it everywhere I can. The audio and video elements are equally interesting.
Regardless, the lion’s share of the glory (hogged by HTML5) belongs to our good friends JavaScript 2.0 and CSS3; the future of the web so unscrupulously coupled with the HTML5 brand.
Hah. I just briefly imagined a world where C++ syntax errors were “allowed” and the compilers just “guessed what people wanted.” Ah, I crack myself up.
Am I the only one who thinks that any XML parser will have a problem validating an HTML5 document that uses the new HTML5 elements like video, audio, nav, footer, and so on against the XHTML specification from 1999? I think it's a good idea to join the two efforts of XHTML and HTML5, and I have no idea why the HTML5 working group decided not to build on XHTML from the very beginning, but it is not quite as easy as the blog post suggests.
@Sergey Mavrody: Maybe XHTML5.NL said it would render faster because it's parsed faster. It's just plain XML parsing, unlike HTML5, which has standalone tags that each have to be handled differently.
Any developer favors XML over anything else, because it's consistent in structure and easy to extend. Script kiddies like to leave tags open or use standalone attributes. Standards should always have been enforced by browsers, until web designers/developers learned the f'in trade.
Cheers!
PS: Totally agree with Xunnamius’s comment about C++. Real coders appreciate standards, rigidity and consistency.
From a year-old comment but why do people like to point this out as an anti-strict argument?
“1) Most ‘XHTML’ pages served using “text/html” are not well-formed, let alone valid.”
Like all web pages? Since the dawn of the web in 1994 like 6-7 years before anybody had even heard of XHTML? How is this a relevant argument? Nobody’s forcing anybody to do it the easy way. If you want sloppy markup I won’t get in your way.
The meta charset tag will be ignored if the document is actually treated as XML, as it is for XHTML.