The WHATWG Blog

Please leave your sense of logic at the door, thanks!

Archive for the ‘Browsers’ Category

Newline normalizations in form submission

Thursday, May 27th, 2021

If you work with form submissions, you might have noticed that form values containing newlines are normalized to CRLF, no matter whether the DOM value had LF or CR instead:

<form action="./post" method="post" enctype="application/x-www-form-urlencoded">
  <input type="hidden" name="hidden" value="a&#x0D;b&#x0A;c&#x0D;&#x0A;d" />
  <input type="submit" />
</form>

<script>
  // Checking that the DOM has the correct newlines and the normalization
  // happens during the form submission.
  const hiddenInput = document.querySelector("input[type=hidden]");
  console.log("%s", JSON.stringify(hiddenInput.value)); // "a\rb\nc\r\nd"
</script>
Submitting this form produces the following application/x-www-form-urlencoded payload:

hidden=a%0D%0Ab%0D%0Ac%0D%0Ad

But although it might seem simple on the surface, newline normalization in form submissions is a topic that runs deeper than I thought, with bugs in the spec and differences across browsers. This post goes through what the spec used to do, what browsers (used to) implement, and how we went about fixing it.

First, some background on form submission

The data to submit from a form is modeled as an entry list – entries being pairs of names (strings) and values (either strings or File objects). This is a list rather than a map because a form can have multiple values for each name – which is how <input type="file" multiple> and <select multiple> work – and their order relative to the rest of the form entries matters.
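
As a quick illustration of why a list is needed (using the FormData wrapper covered in more detail below; the names and values here are made up):

const entryList = new FormData();
entryList.append("choice", "a");
entryList.append("choice", "b"); // same name again – both entries are kept
console.log(entryList.getAll("choice")); // ["a", "b"], in insertion order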

The algorithm that does the job of going through every submittable element associated with a particular form and collecting their corresponding form entries is the "construct the entry list" algorithm. This algorithm does what you'd expect: it discards disabled controls and buttons that weren't pressed, and then for each remaining control it calls the "append an entry" algorithm, which used to replace any newlines in the entry name and value (if that value is a string) with CRLF before appending the entry.
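
In JavaScript terms, that old normalization amounts to the following replacement (a sketch, not the spec's literal prose):

// Lone CR, lone LF, and CRLF all become CRLF.
const normalizeToCRLF = (s) => s.replace(/\r\n?|\n/g, "\r\n");
normalizeToCRLF("a\rb\nc\r\nd"); // "a\r\nb\r\nc\r\nd"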

"Construct the entry list" is called early into the form submission algorithm, and the resulting entry list is then passed to the encoding algorithm for the corresponding enctype. Since only the multipart/form-data enctype supports uploading files, the algorithms for both application/x-www-form-urlencoded and text/plain encode the value's filename instead.

First signs of trouble

My first foray into the encoding of form payloads was in defining precisely how entry names (and filenames of file entry values) had to be escaped in multipart/form-data payloads, and since LF and CR have to be percent escaped, newlines came up during testing.

One thing I noticed is that, if you have newlines inside a filename – yes, that is something you can do – they're normalized differently than for an entry name or a string value.

<form id="form" action="./post" method="post" enctype="multipart/form-data">
  <input type="hidden" name="hidden a&#x0D;b" value="a&#x0D;b" />
  <input id="fileControl" type="file" name="file a&#x0D;b" />
</form>

<script>
  // A file with filename "a\rb", empty contents, and "application/octet-stream"
  // MIME type.
  const file = new File([], "a\rb");

  const dataTransfer = new DataTransfer();
  dataTransfer.items.add(file);
  document.getElementById("fileControl").files = dataTransfer.files;

  document.getElementById("form").submit();
</script>

Here is the resulting multipart/form-data payload in Chrome and Safari (newlines are always CRLF):

------WebKitFormBoundaryjlUA0jn3NUYxIh2A
Content-Disposition: form-data; name="hidden a%0D%0Ab"

a
b
------WebKitFormBoundaryjlUA0jn3NUYxIh2A
Content-Disposition: form-data; name="file a%0D%0Ab"; filename="a%0Db"
Content-Type: application/octet-stream


------WebKitFormBoundaryjlUA0jn3NUYxIh2A--

And this is in Firefox 88 (the current stable version as of this writing):

-----------------------------26303884030012461673680556885
Content-Disposition: form-data; name="hidden a b"

a
b
-----------------------------26303884030012461673680556885
Content-Disposition: form-data; name="file a b"; filename="a b"
Content-Type: application/octet-stream


-----------------------------26303884030012461673680556885--

As you can see, Firefox substitutes a space for any newlines (CR, LF or CRLF) in the multipart/form-data encoding of entry names and filenames, rather than percent-encoding them as Chrome and Safari do. This behavior was made illegal in the spec in pull request #6282, but it couldn't be fixed in Firefox until the spec decided on a normalization behavior. In the case of values, Firefox normalizes to CRLF as the other browsers do.

As for Chrome and Safari, here we see that newlines in entry names and string values are normalized to CRLF, but filenames are not normalized. Given the entry list construction algorithm described above, this makes sense: entry values are only normalized to CRLF when they are strings – files are unchanged, and so are their filenames.

Except that, if you change the form's enctype in the above example to application/x-www-form-urlencoded, you get this in every browser:

hidden+a%0D%0Ab=a%0D%0Ab&file+a%0D%0Ab=a%0D%0Ab

Since multipart/form-data is the only enctype that allows file uploads, other enctypes use their filenames instead. But here it seems like every browser is CRLF-normalizing the filenames, even though in the spec that substitution happens long after constructing the entry list.

Normalizations with FormData and fetch

The FormData class started out as a way to send multipart/form-data form payloads through the XMLHttpRequest and fetch APIs without having to generate that payload in JavaScript. As such, FormData instances are basically a JS-accessible wrapper over an entry list.

So let's try the same with FormData:

const formData = new FormData();
formData.append("hidden a\rb", "a\rb");
formData.append("file a\rb", new File([], "a\rb"));

// FormData objects in fetch will always be serialized as multipart/form-data.
await fetch("./post", { method: "POST", body: formData });

Safari sends the same form payload as above, with names and values normalized to CRLF, and so does Firefox 88, with values normalized to CRLF (and newlines in names and filenames escaped as spaces). But Chrome keeps names, filenames and values unnormalized (here the ? character stands for CR):

------WebKitFormBoundarySMGkMfD8mVOnmGDP
Content-Disposition: form-data; name="hidden a%0Db"

a?b
------WebKitFormBoundarySMGkMfD8mVOnmGDP
Content-Disposition: form-data; name="file a%0Db"; filename="a%0Db"
Content-Type: application/octet-stream


------WebKitFormBoundarySMGkMfD8mVOnmGDP--

Since FormData is just a wrapper over an entry list, and fetch simply calls the multipart/form-data encoding algorithm, no normalizations should take place. So it looks like Chrome was following the spec here, while Firefox and Safari were apparently doing some newline normalization (for Firefox, on string values only) at the time of serializing as multipart/form-data.

With FormData you can also investigate what the "construct the entry list" algorithm does, since if you pass a <form> element to the FormData constructor, it will call that algorithm outside of a form submission context, and let you inspect the resulting entry list.

<form id="form">
  <input type="hidden" name="a&#x0D;b" value="a&#x0D;b" />
</form>

<script>
  const formData = new FormData(document.getElementById("form"));
  for (const [name, value] of formData.entries()) {
    console.log("%s %s", JSON.stringify(name), JSON.stringify(value));
  }
  // Firefox and Safari print: "a\rb" "a\rb"
  // Chrome prints: "a\r\nb" "a\r\nb"
  // These results don't depend on the form's enctype.
</script>

So it seems like Firefox and Safari are not normalizing as they construct the entry list, and instead normalize names and values at the time that they encode the form into an enctype. In particular, since the application/x-www-form-urlencoded and text/plain enctypes don't allow file uploads, file entry values are substituted with their filenames before the normalization. Entry lists that aren't created from the "construct an entry list" algorithm get normalized all the same.

Chrome instead follows the specification (as it used to be) in normalizing in "construct an entry list" and not normalizing later, even for entry lists created through other means. But that doesn't explain why filenames in the application/x-www-form-urlencoded and text/plain enctypes are normalized. Does Chrome also have an additional normalization layer?

Investigating late normalization with the formdata event

It would be great to investigate in more detail what Chrome and other browsers do after constructing the entry list. Since the entry list construction already normalizes entries, any additional normalizations that happen further down the line are obscured in the common case.

In the case of multipart/form-data, we can test this because using a FormData object with fetch doesn't invoke "construct an entry list", so we can see what happens to unnormalized entries. For other enctypes there is no way to create an entry list that doesn't go through "construct an entry list", but as it turns out, the "construct an entry list" algorithm itself offers two ways to end up with unnormalized entries: form-associated custom elements (only implemented in Chrome so far) and the formdata event (implemented in Chrome and Firefox). Here we'll only cover the latter, since the results are equivalent.

One thing I skipped when I covered the "construct an entry list" algorithm above is that, at the end of the algorithm, after all entries corresponding to controls have been added to the entry list, a formdata event is fired on the relevant <form> element. This event has a formData attribute which allows you not only to inspect the entry list at that point, but to modify it.

<form
  id="form"
  action="./post"
  method="post"
  enctype="application/x-www-form-urlencoded"
>
  <!-- Empty -->
</form>

<script>
  const form = document.getElementById("form");
  form.addEventListener("formdata", (evt) => {
    evt.formData.append("string a\rb", "a\rb");
    evt.formData.append("file a\rb", new File([], "a\rb"));
  });
  form.submit();
</script>

For both Chrome and Firefox (not Safari because it doesn't support the formdata event), trying this with the application/x-www-form-urlencoded enctype gets you a normalized result:

string+a%0D%0Ab=a%0D%0Ab&file+a%0D%0Ab=a%0D%0Ab

Firefox shows the same normalizations for the text/plain enctype; Chrome instead normalizes only filenames, not names and values. And with multipart/form-data we get the same result as with fetch and FormData above: Chrome doesn't normalize anything, Firefox normalizes string values (with names and filenames being replaced with spaces).

So in short, for entries that reach the serializers unnormalized:

  - application/x-www-form-urlencoded: all browsers normalize names, filenames and string values to CRLF.
  - text/plain: Firefox and Safari normalize names, filenames and string values; Chrome normalizes only filenames.
  - multipart/form-data: Chrome doesn't normalize at all; Safari normalizes names and string values but not filenames; Firefox normalizes string values to CRLF and replaces newlines in names and filenames with a space.

Remember that these differences across browsers don't really affect the encoding of normal forms; they only matter if you're using FormData with fetch, the formdata event, or form-associated custom elements.

Fixing the spec

So which behavior do we choose? Firefox replacing newlines in multipart/form-data names and values with a space is illegal as per PR #6282, but anything else is fair game.

For text/plain, we have Firefox and Safari behaving in the same way, and Chrome disagreeing. Since text/plain cannot represent inputs unambiguously, is little used in any case, and you would need either form-associated custom elements or the formdata event to see a difference, it seems extremely unlikely that any web content depends on either behavior. So it makes more sense to treat text/plain just like application/x-www-form-urlencoded and normalize names, filenames and values.

For multipart/form-data, there is the added compatibility risk that you can observe this case by using FormData and fetch, so it's more likely to cause webcompat issues no matter whether we went with Safari's or Chrome's behavior. In the end we chose to go with Safari's, in order to be consistent in normalizing across all enctypes – although multipart/form-data has to differ in not normalizing filenames, of course.

So in pull request #6287 we fixed this by:

  1. Adding a new algorithm that runs before the application/x-www-form-urlencoded and text/plain serializers. This algorithm first extracts a filename from the entry value, if it's a file, and then CRLF-normalizes both the name and value.
  2. Changing the multipart/form-data encoding algorithm to have a first step that CRLF-normalizes names and string values, leaving file values intact. (Both steps are sketched below.)
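
In rough JavaScript terms, the two transformations look like this (a sketch that treats the entry list as an array of [name, value] pairs and reuses the normalizeToCRLF helper from earlier; the function names are mine, not the spec's):

// Runs before the application/x-www-form-urlencoded and text/plain
// serializers: files are replaced by their filenames, then names and
// values are CRLF-normalized.
function convertForUrlencodedOrTextPlain(entries) {
  return entries.map(([name, value]) => [
    normalizeToCRLF(name),
    normalizeToCRLF(value instanceof File ? value.name : value),
  ]);
}

// First step of the multipart/form-data serializer: names and string
// values are CRLF-normalized; file values (and their filenames) are
// left intact.
function convertForMultipart(entries) {
  return entries.map(([name, value]) => [
    normalizeToCRLF(name),
    typeof value === "string" ? normalizeToCRLF(value) : value,
  ]);
}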

Do we need the early normalization though?

At the time we decided on the above spec changes, I still thought that the spec's (and Chrome's) behavior of normalizing in "construct the entry list" was correct. But later on I realized that, once we have the late normalizations mentioned above, the early normalization in "construct the entry list" doesn't matter for form submission, since the late normalizations do everything the early one can do and more. The only way you could observe whether that early normalization is present or not is through the FormData constructor. So it would make sense to remove that early normalization and standardize on Firefox and Safari's behavior here, as we did in pull request #6624.

One kink remained, though: <textarea> elements support the wrap="hard" attribute, which adds linebreaks into the submitted value corresponding to how the text is linewrapped in the UI. In the spec, this is done through the "textarea wrapping transformation", which takes the textarea's "raw value", normalizes it to CRLF, and when wrap="hard", adds CRLF newlines to wrap the contents. But if you test this on Safari, all newlines (both normalized and added) are LF – and Firefox currently doesn't implement wrap="hard", but it does normalize newlines to LF. So should this be changed?
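
For reference, here is the spec's "textarea wrapping transformation" as just described, sketched in JavaScript (the wrapping step depends on layout, so it is only hinted at in a comment):

function textareaWrappingTransformation(rawValue, wrap) {
  // 1. Normalize every newline to CRLF. This is the step browsers don't
  //    match: Safari normalizes to LF, and Firefox (which doesn't
  //    implement wrap="hard") also normalizes to LF.
  let value = rawValue.replace(/\r\n?|\n/g, "\r\n");
  // 2. If wrap="hard", insert CRLF pairs wherever the UI line-wraps the
  //    text (layout-dependent, so not reproducible in a sketch).
  if (wrap === "hard") {
    /* insert U+000D U+000A at each visual wrapping point */
  }
  return value;
}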

I thought it was better to align on Firefox's and Safari's behavior, especially since this could simplify the mess that is the difference between "raw value", "API value" and "value" for <textarea> in the spec. Chrome disagreed at first – but as it turns out, Chrome's implementation of the textarea wrapping transformation normalizes to LF, and it's only the normalization in "construct an entry list" that normalizes those newlines to CRLF. So Chrome would align with Firefox and Safari in that area by just removing the early normalization.

Pull request #6697 fixes this issue with <textarea> in the spec, and the follow-up issue #6662 will take care of simplifying the textarea wrapping transformation in the spec, as well as ensure that it matches implementations.

Implementation fixes

After those three pull requests, the current spec mandates that no normalization happen in "construct the entry list", and that entry lists to be encoded with some enctype go through a transformation that depends on the enctype:

  - application/x-www-form-urlencoded and text/plain: file values are replaced by their filenames, and then names and values are CRLF-normalized.
  - multipart/form-data: names and string values are CRLF-normalized; file values and their filenames are left untouched.

Safari implements the current spec behavior, and so doesn't need any fixes as a result of these spec changes. However, when working on them I noticed that Safari had a preexisting bug in that it wasn't converting entry names and string values into scalar value strings in the "construct an entry list" algorithm, which could lead to FormData objects containing DOMString values (i.e., with lone surrogates) despite the WebIDL declaration, which says USVString. What's more, those lone surrogates would remain there until the time of serializing the form, and might show up in the form payload as WTF-8 surrogate byte sequences. This is bug 225299.
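
For context, a scalar value string is one with no lone surrogates – the conversion just replaces each lone surrogate with U+FFFD. A sketch of that conversion (newer JavaScript engines expose it directly as String.prototype.toWellFormed()):

function toScalarValueString(s) {
  let out = "";
  for (const ch of s) { // for...of iterates by code point
    const cp = ch.codePointAt(0);
    // Anything still in the surrogate range after code point iteration
    // is a lone surrogate; replace it with U+FFFD.
    out += cp >= 0xd800 && cp <= 0xdfff ? "\uFFFD" : ch;
  }
  return out;
}
toScalarValueString("a\uD800b"); // "a\uFFFDb"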

Firefox acts much like Safari, except that it escapes newlines in multipart/form-data names and filenames as spaces, rather than CRLF-normalizing (in the case of names) and then percent-encoding them. With the normalization now defined in the spec, this could finally be changed; this is bug 1686765. I also found the same bug as Safari's with not converting strings into scalar value strings, except in this case the normalization did take place when encoding (bug 1709066). Both issues are now fixed in Firefox Nightly and will ship in Firefox 90.

Finally, Chrome would need to remove the normalization it performs in "construct an entry list", update the normalization it performs on the text/plain enctype to cover not only filenames but also names and values, and add an additional normalization for multipart/form-data names and values. This is covered in issue 1167095. Since Chrome is the browser that needs the most changes as a result of these spec changes, and they are understandably concerned with backwards compatibility, these fixes are expected to ship behind a flag so they can be quickly rolled back in case of trouble.

Conclusion

Although form submission is not necessarily seen as an easy topic, newline normalization in that context might not seem like such a big deal from the outside. But on a platform like the web, where spec and implementation decisions can easily snowball into compatibility concerns, even things that look simple can have a history that makes standardizing on one behavior hard.

Posted in Browsers, Forms, WHATWG | Comments Off on Newline normalizations in form submission

The state of fieldset interoperability

Wednesday, September 19th, 2018

As part of my work at Bocoup, I recently started working with browser implementers to improve the state of fieldset, a 21-year-old HTML feature that provides form accessibility benefits to assistive technologies like screen readers. It suffers from a number of interoperability bugs that make it difficult for web developers to use.

Here is an example form grouped with a <legend> caption in a <fieldset> element:

[Rendered example: a fieldset captioned "Pronouns", containing radio buttons and a text field]

And here is the corresponding markup for the above example:

<fieldset>
 <legend>Pronouns</legend>
 <label><input type=radio name=pronouns value=he> He/him</label>
 <label><input type=radio name=pronouns value=she> She/her</label>
 <label><input type=radio name=pronouns value=they> They/them</label>
 <label><input type=radio name=pronouns value=other> Write your own</label>
 <input type=text name=pronouns-other placeholder=&hellip;>
</fieldset>

The element is defined in the HTML standard, along with rendering rules in the Rendering section. Further developer documentation is available on MDN.

Usage

Based on a query of the HTTP Archive data set, which contains the raw content of the top 1.3 million web pages, we can find the relative usage of each HTML element. The fieldset element is used on 8.41% of the web pages, which is higher than other popular features, such as the video and canvas elements; however, the legend element is used on only 2.46% of web pages, which is not ideal for assistive technologies. Meanwhile, the form element appears on 70.55% of pages, and we believe that if the interoperability bugs were fixed, correct and semantic fieldset and legend use would increase, with a positive impact on form accessibility across the web.

Fieldset standards history

In January 1997, HTML 3.2 introduces forms and some form controls, but does not include the fieldset or legend elements.

In July 1997, the first draft of HTML 4.0 introduces the fieldset and legend elements:

The FIELDSET element allows form designers to group thematically related controls together. Grouping controls makes it easier for users to understand their purpose while simultaneously facilitating tabbing navigation for visual user agents and speech navigation for speech-oriented user agents. The proper use of this element makes documents more accessible to people with disabilities.

The LEGEND element allows designers to assign a caption to a FIELDSET. The legend improves accessibility when the FIELDSET is rendered non-visually. When rendered visually, setting the align attribute on the LEGEND element aligns it with respect to the FIELDSET.

In December 1999, HTML 4.01 is published as a W3C Recommendation, without changing the definitions of the fieldset and legend elements.

In December 2003, Ian Hickson extends the fieldset element with the disabled and form attributes in the Proposed XHTML Module: XForms Basic, later renamed to Web Forms 2.0.

In September 2008, Ian Hickson adds the fieldset element to the HTML standard.

In February 2009, Ian Hickson specifies rendering rules for the fieldset element. The specification has since gone through some minor revisions, e.g., specifying that fieldset establishes a block formatting context in 2009 and adding min-width: min-content; in 2014.

In August 2018, I proposed a number of changes to the standard to better define how it should work, and resolve ambiguity between browser implementer interpretations.

Current state

As part of our work at Bocoup to improve the interoperability of the fieldset element and its legend child, we talked to web developers and browser implementers, proposed changes to the standard, and wrote a lot of tests. At the time of this writing, 26 issues have been reported on the HTML specification for the fieldset element, and the tests that we wrote show a clear lack of interoperability among browser engines.

The results for fieldset and legend tests show some tests failing in all browsers, some tests passing in all browsers, and some passing and failing in different browsers.

Of the 26 issues filed against the specification, 17 are about rendering interoperability. These rendering issues affect use cases such as making a fieldset scrollable, which currently results in broken scroll rendering in some browsers. They also affect consistent legend rendering, which causes web developers to avoid using the fieldset element altogether. Since the fieldset element is intended to help people who use assistive technologies to navigate forms, the current situation is less than ideal.

HTML spec rendering issues

In April of this year, Mozilla developers filed a meta-issue on the HTML specification “Need to spec fieldset layout” to address the ambiguities which have been leading to interoperability issues between browser implementations. During the past few weeks of work on fieldset, we made initial proposed changes to the rendering section of the HTML standard to address these 17 issues. At the time of this writing, these changes are under review.

Proposal to extend -webkit-appearance

Web developers also struggle with changing the default behaviors of fieldset and legend, and seek ways to turn off the "magic" so the elements render as normal elements. To address this, we created a proposal to extend the -webkit-appearance CSS property with a new value called fieldset and a new property called legend, which together are capable of giving grouped rendering behavior to regular elements, as well as resetting fieldset/legend elements to behave like normal elements.

fieldset {
  -webkit-appearance: none; /* proposed new value: drop the grouped rendering */
  margin: 0;
  padding: 0;
  border: none;
  min-inline-size: 0;
}
legend {
  legend: none; /* proposed new property: drop the magic legend rendering */
  padding: 0;
}

The general-purpose proposed specification for an "unprefixed" CSS ‘appearance’ property has been blocked by Mozilla's statement that it is not web-compatible as currently defined, meaning that implementing appearance would break the existing behavior of websites that are currently using CSS appearance in a different way.

We asked the W3C CSS working group for feedback on the above approach, and they had some reservations and will develop an alternative proposal. When there is consensus for how it should work, we will update the specification and tests accordingly.

We had also considered defining new display values for fieldset and legend, but care needs to be taken to preserve web compatibility. There are thousands of pages in HTTP Archive that set ‘display’ to something on fieldset or legend, but browsers typically behave as if display: block were set. For example, specifying display: inline on the legend needs to render the same as it does by default (see the snippet below).
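
A minimal illustration of that compatibility constraint (the rule is illustrative, not taken from a real site):

/* Found in the wild: authors set ‘display’ on legend, but browsers
   today render the legend as if display: block were specified, so any
   new display-based model must keep this rendering unchanged. */
legend { display: inline; }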

In parallel, we authored an initial specification for the ‘-webkit-appearance’ property in Mike Taylor's WHATWG Compatibility standard (which reverse-engineers web-platform quirks into status quo specifications), along with accompanying tests. More work needs to be done for ‘-webkit-appearance’ (or unprefixed ‘appearance’) to define what the values mean and to reach interoperability on the supported values.

Accessibility Issues

We have started looking into testing accessibility explicitly, to ensure that the elements remain accessible even when they are styled in particular ways.

This work has uncovered ambiguities in the specification, which we have submitted a proposal to address. We have also identified interoperability issues in the accessibility mappings in implementations, which we have reported.

Implementation fixes

Meta bugs have been reported for each browser engine (Gecko, Chromium, WebKit, EdgeHTML), which depend on more specific bugs.

As of September 18, 2018, the following issues have been fixed in Gecko:

In Gecko, the bug Implement fieldset/legend in terms of '-webkit-appearance' currently has a work-in-progress patch.

The following issues have been fixed in Chromium:

The WebKit and Edge teams are aware of bugs, and we will follow up with them to track progress.

Conclusion

The fieldset and legend elements are useful to group related form controls, in particular to aid people who use assistive technologies. They are currently not interoperable and are difficult for web developers to style. With our work and proposal, we aim to resolve the problems so that they can be used without restrictions and behave the same in all browser engines, which will benefit browser implementers, web developers, and end users.

(This post is cross-posted on Bocoup's blog.)

Posted in Browsers, Forms, WHATWG | 1 Comment »

Implementation progress on the HTML5 <ruby> element

Friday, November 13th, 2009

If you don't know what the HTML5 ruby element is, you might want to take a minute to first read the section about the ruby element in the HTML5 specification and/or the Wikipedia article on ruby characters. To quote from the HTML5 description of the ruby element:

The ruby element allows one or more spans of phrasing content to be marked with ruby annotations. Ruby annotations are short runs of text presented alongside base text, primarily used in East Asian typography as a guide for pronunciation or to include other annotations. In Japanese, this form of typography is also known as furigana.

I give a specific example further down, but for now I want to first say that the really great news about the ruby element is that last week, Google Chrome developer Roland Steiner checked in a change (r50495, and see also related bug 28420) that adds ruby support to the trunk of the WebKit source repository, thus making the ruby feature available in WebKit nightlies and Chrome dev-channel releases.

A simple example

The following is a simple example of what you can do with the ruby element; make sure to view it in a recent WebKit nightly or Chrome dev-channel release. Note that the text is an excerpt from the source of a ruby-annotated online copy of the short story Run, Melos, Run by the writer Osamu Dazai, which I came across by way of Piro's info page for his XHTML Ruby add-on for Firefox (and which I mention a bit more about further below).

?????????????<ruby>??<rp>?</rp>
<rt>????</rt><rp>?</rp></ruby>????
<ruby>??<rp>?</rp><rt>????</rt><rp>?</rp>
</ruby>???? ??????????????????? 
??????????<ruby>????<rp>?</rp>
<rt>??????</rt><rp>?</rp></ruby>?<ruby>??
<rp>?</rp><rt>????</rt><rp>?</rp></ruby>
??????????

If you don't happen to have Japanese fonts installed, here's a screenshot of the source for reference:

ruby source markup

Notice that the actual annotative ruby text (which I've highlighted in yellow in the source just for the sake of emphasis) is marked up using the rt element as a child of the ruby element, and the text being annotated is the node that's a previous sibling of that rt content as a child of the ruby element. The final new element in the mix is the rp element, which is simply a way to wrap the annotative ruby text in parentheses, for graceful fallback in browsers that don't support ruby.
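
Here is a minimal standalone example of the same structure (the word and its reading are my own choice, not taken from the story):

<ruby>漢字<rp>（</rp><rt>かんじ</rt><rp>）</rp></ruby>

In a supporting browser this renders かんじ above 漢字; in other browsers it falls back to 漢字（かんじ）.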

So here's the rendered view of that same text:

??????????????????????????????????????????????????????????????????????????????????????????????????????????

And here is a screenshot of how it should look in a recent WebKit nightly or Chrome dev-channel release:

ruby rendered view

Notice that the annotative ruby text is displayed above the ruby base it annotates. If you instead view this page in a browser that doesn't support the ruby feature, you'll see that the ruby text is just shown inline, in parentheses following the ruby base it annotates. So the feature falls back gracefully in older browsers.

Support in other browsers

Current versions of Microsoft Internet Explorer also have native support for ruby, and you can also get ruby support in Firefox by installing Piro's XHTML Ruby add-on (and for more details, see his XHTML ruby add-on info page) — so we are well on the way to seeing the HTML5 ruby feature supported across a range of browsers. If you're not accustomed to reading printed books and magazines and such in Japanese, that might not sound like such a big deal. But for authors and developers and content providers in Japan who want to finally be able to use on the Web this very common feature of Japanese page layout from the print world, getting ruby support into another major browser engine is a huge win, and something to be very excited about.

Posted in Browsers, Elements | 3 Comments »

Sniffing for RSS 1.0 feeds served as text/html

Tuesday, September 29th, 2009

I recently found myself testing how browsers sniff for RSS 1.0 feeds that are served with an incorrect MIME type. (Yes, my life is full of delicious irony.) I thought I'd share my findings so far.

Firefox

Firefox's feed sniffing algorithm is located in nsFeedSniffer.cpp. As you can see, starting at line 353, it takes the first 512 bytes of the page, looks for a root tag called rss (for RSS 2.0), atom (for Atom 0.3 and 1.0), or rdf:RDF (for RSS 1.0). The RSS 1.0 marker is really a generic RDF marker, so it then does some additional checks for the two required namespaces of an RSS 1.0 feed, http://www.w3.org/1999/02/22-rdf-syntax-ns# and http://purl.org/rss/1.0/. This check is quite simple; it literally just checks for the presence of both strings, not caring whether they are the value of an xmlns attribute (or indeed any attribute at all).
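
In JavaScript terms, the RSS 1.0 branch of that check amounts to something like the following (a sketch, assuming the first 512 bytes have already been decoded into head; the real code works on raw bytes and also handles the rss and atom root tags):

function sniffsAsRSS10(head) {
  if (!head.includes("<rdf:RDF")) return false;
  // Plain substring checks: the namespaces don't have to be xmlns
  // attribute values, or attribute values at all.
  return head.includes("http://www.w3.org/1999/02/22-rdf-syntax-ns#") &&
         head.includes("http://purl.org/rss/1.0/");
}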

Firefox has an additional feature which tripped up my testing until I understood it. IE and Safari both have a mode where they essentially say "I detected this page as a feed and tried to parse it, but I failed, so now I'm giving up, and here's an error message describing why I gave up." Firefox does not have a mode like this. As far as I can tell, if it decides that a resource is a feed but then fails to parse the resource as a feed, it reparses the resource with feed handling disabled. So a non-well-formed feed served as application/rss+xml will actually trigger a "Do you want to download this file" dialog, because Firefox tried to parse it as a feed, failed, then reparsed it as some-random-media-type-that-I-don't-handle. A non-well-formed feed served as text/html will actually render as HTML, but only after Firefox silently tries (and fails) to parse it as a feed.

There's nothing wrong with this approach; in fact, it seems much more end-user-friendly than throwing up an incomprehensible error message. I just mention it because it tripped me up while testing.

Internet Explorer

Internet Explorer's feed sniffing algorithm is documented by the Windows RSS team. About RSS 1.0, it states:

IE7 detects a RSS 1.0 feed using the content types application/xml or text/xml. ... The document is checked for the strings <rdf:RDF, http://www.w3.org/1999/02/22-rdf-syntax-ns# and http://purl.org/rss/1.0/. IE7 detects that it is a feed if all three strings are found within the first 512 bytes of the document. ... IE7 also supports other generic Content-Types by checking the document for specific Atom and RSS strings.

Now that I understand IE's algorithm, I have to concede that this documentation is 100% accurate. However, it doesn't tell the full story. Here's what actually happens: if the Content-Type is one of the XML or "generic" types that documentation alludes to, then IE will trigger its feed sniffing. Once IE triggers its feed sniffing, it will never change its mind (unlike Firefox). If feed parsing fails, IE will throw up an error message complaining of feed coding errors or an unsupported feed format. The presence or absence of a charset parameter in the Content-Type header made absolutely no difference in any of the cases I tested.

And how exactly does IE detect an RSS 1.0 feed, once it decides to sniff? The documentation on MSDN is literally true: "The document is checked for the strings <rdf:RDF, http://www.w3.org/1999/02/22-rdf-syntax-ns# and http://purl.org/rss/1.0/. IE7 detects that it is a feed if all three strings are found within the first 512 bytes of the document." Combined with our knowledge of which Content-Types IE considers "generic," we can conclude that the following page, served as text/html, will be treated as a feed in IE:

<!-- <rdf:RDF -->
<!-- http://www.w3.org/1999/02/22-rdf-syntax-ns# -->
<!-- http://purl.org/rss/1.0/ -->
<script>alert('Hi!');</script>

[live demonstration]

Why Bother?

I am working with Adam Barth and Ian Hickson to update draft-abarth-mime-sniff-01 (the content sniffing algorithm referenced by HTML5) to sniff RSS 1.0 feeds served as text/html. It is unlikely that we will adopt IE's algorithm, since it seems unnecessarily pathological. I am proposing the following change, which would bring the content sniffing specification in line with Firefox's sniffing algorithm:

In the "Feed or HTML" section, insert the following steps between step 10 and step 11:

10a. Initialize /RDF flag/ to 0.

10b. Initialize /RSS flag/ to 0.

10c. If the bytes with positions pos to pos+23 in s are exactly equal to 0x68, 0x74, 0x74, 0x70, 0x3A, 0x2F, 0x2F, 0x70, 0x75, 0x72, 0x6C, 0x2E, 0x6F, 0x72, 0x67, 0x2F, 0x72, 0x73, 0x73, 0x2F, 0x31, 0x2E, 0x30, 0x2F respectively (ASCII for "http://purl.org/rss/1.0/"), then:

  1. Increase pos by 23.
  2. Set /RSS flag/ to 1.

10d. If the bytes with positions pos to pos+42 in s are exactly equal to 0x68, 0x74, 0x74, 0x70, 0x3A, 0x2F, 0x2F, 0x77, 0x77, 0x77, 0x2E, 0x77, 0x33, 0x2E, 0x6F, 0x72, 0x67, 0x2F, 0x31, 0x39, 0x39, 0x39, 0x2F, 0x30, 0x32, 0x2F, 0x32, 0x32, 0x2D, 0x72, 0x64, 0x66, 0x2D, 0x73, 0x79, 0x6E, 0x74, 0x61, 0x78, 0x2D, 0x6E, 0x73, 0x23 respectively (ASCII for "http://www.w3.org/1999/02/22-rdf-syntax-ns#"), then:

  1. Increase pos by 42.
  2. Set /RDF flag/ to 1.

10e. Increase pos by 1.

10f. If /RDF flag/ is 1, and /RSS flag/ is 1, then the /sniffed type/ of the resource is "application/rss+xml". Abort these steps.

10g. If pos points beyond the end of the byte stream s, then continue to step 11 of this algorithm.

10h. Jump back to step 10c of this algorithm.
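
Translated into JavaScript, the proposed steps amount to this scanning loop (a sketch; s is the byte stream as a string, and pos is wherever step 10 of the "Feed or HTML" section left it):

function sniffRSS10(s, pos) {
  let rdfFlag = false; // step 10a
  let rssFlag = false; // step 10b
  while (true) {
    if (s.startsWith("http://purl.org/rss/1.0/", pos)) { // step 10c
      pos += 23;
      rssFlag = true;
    }
    if (s.startsWith("http://www.w3.org/1999/02/22-rdf-syntax-ns#", pos)) { // step 10d
      pos += 42;
      rdfFlag = true;
    }
    pos += 1; // step 10e
    if (rdfFlag && rssFlag) return "application/rss+xml"; // step 10f
    if (pos >= s.length) return null; // step 10g: fall through to step 11
  }
}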

Further Reading

You can see the results of my research to date and test the feeds for yourself. Because my research results are plain text with embedded HTML tags, I have added 512 bytes of leading whitespace to the page to foil browsers' plain-text-or-HTML content sniffing. Mmmm -- delicious, delicious irony.

Update: Belorussian translation is available.

Posted in Browsers | 4 Comments »

Help Test HTML5 Parsing in Gecko

Wednesday, July 8th, 2009

The HTML5 parsing algorithm is meant to demystify HTML parsing and make it uniform across implementations in a backwards-compatible way. The algorithm has had “in the lab” testing, but so far it hasn’t been tested inside a browser by a large number of people. You can help change that now!

A while ago, an implementation of the HTML5 parsing algorithm landed on mozilla-central preffed off. Anyone who is testing Firefox nightly builds can now opt to turn on the HTML5 parser and test it.

How to Participate?

First, this isn’t release-quality software. Testing the HTML5 parser carries all the same risks as testing a nightly build in general, and then some. It may crash, it may corrupt your Firefox profile, etc. If you aren’t comfortable with taking the risks associated with running nightly builds, you shouldn’t participate.

If you are still comfortable with testing, download a trunk nightly build, run it, navigate to about:config and flip the preference named html5.enable to true. This makes Gecko use the HTML5 parser when loading pages into the content area and when setting innerHTML. The HTML5 parser is not used for HTML embedded in feeds, Netscape bookmark import, View Source, etc., yet.

The html5.enable preference doesn’t require a restart to take effect. It takes effect the next time you load a page.

What to Test?

The main thing is getting the HTML5 parser exposed to a wide range of real Web content that people browse. This may turn up crashes or compatibility problems.

So the way to help is to use nightly builds with the HTML5 parser for browsing as usual. If you see no difference, things are going well! If you see a page misbehaving—or, worse, crashing—with the HTML5 parser turned on but not with it turned off, please report the problem.

Reporting Bugs

Please file bugs in the “Core” product under “HTML: Parser” component with “[HTML5] ” at the start of the summary.

Known Problems

First and foremost, please refer to the list of known bugs.

However, I’d like to highlight a particular issue: Support for comments ending with --!> is in the spec, but the patch hasn’t landed, yet. Support for similar endings of pseudo-comment escapes within script element content is not in the spec yet. The practical effect is that the rest of the page may end up being swallowed up inside a comment or a script element.
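
A minimal illustration of the first issue (the content is made up):

<!-- a comment closed with the nonstandard sequence --!>
<p>In a parser that does not support that ending, this paragraph is swallowed by the comment above.</p>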

Another issue is that the new parser doesn’t yet inhibit document.write() in places where it shouldn’t be allowed per spec but where the old parser allowed it.

Is There Anything New?

So where's the fun, if success means that you notice no change? There are important technical things under the hood—like TCP packet boundaries not affecting the parse result and there never being unnotified nodes in the tree when the event loop spins—but you aren’t supposed to notice.

However, there is a major new visible feature, too: with the HTML5 parser, you can use SVG and MathML in text/html pages.

And yes, you can even put SVG inside MathML <annotation-xml> or MathML inside <foreignObject>. The mixing you’ve seen in XML is now supported in HTML, too.

If you aren’t concerned with taking the steps to make things degrade nicely in browsers that don’t support SVG and MathML in HTML, you can simply copy and paste XML output from your favorite SVG or MathML editor into your HTML source as long as the editor doesn’t use namespace prefixes for elements and uses the prefix xlink for XLink attributes.

If you don’t use the XML empty-element syntax and you put your SVG text nodes in CDATA sections, the page will degrade gracefully in older HTML browsers: the image simply disappears but the rest of the page stays intact. You can even put a fallback bitmap as <img> inside <desc>. Unfortunately, there isn’t a similar technique for MathML, though if you want to develop one, I suggest experimenting with <annotation> as your <desc>-like container. A rough sketch of the SVG technique follows.
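
Here is what such markup might look like (the drawing and fallback image are illustrative):

<svg xmlns="http://www.w3.org/2000/svg" width="100" height="120">
  <desc><img src="circle-fallback.png" alt="A red circle"></desc>
  <circle cx="50" cy="50" r="40" fill="red"></circle>
  <text x="10" y="115"><![CDATA[A red circle]]></text>
</svg>

Older HTML browsers render just the fallback <img> (the CDATA marker opens a bogus comment that hides the label), while an HTML5 parser renders the circle and its label and leaves the <desc> contents unrendered.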

There are known issues with matching camelCase names with Selectors or getElementsByTagName, though.

Posted in Browsers, Processing Model, Syntax | 8 Comments »