IDN Display Algorithm
This page explains the plan for changing the mechanism by which Firefox decides whether to display a given IDN domain label (a domain name is made up of one or more labels, separated by dots) in its Unicode or Punycode form.
Background
The Problem
If we simply display every possible IDN domain label as Unicode, we open ourselves up to IDN homograph attacks, where one identical-looking domain can spoof another. So we need some mechanism for deciding which labels to display as Unicode and which as Punycode, one which does not involve comparing the domain in question against every other domain in existence (which is impossible).
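To make the attack concrete, here is a small Python illustration (not part of the proposal itself) using the same example pair as the Downsides section below: the Latin label "scope" and its all-Cyrillic lookalike are completely different code point sequences even though they render identically.

 latin = "scope"
 cyrillic = "\u0455\u0441\u043e\u0440\u0435"  # all-Cyrillic lookalike of "scope"
 print(latin == cyrillic)              # False: the strings share no code points
 for a, b in zip(latin, cyrillic):
     # e.g. U+0073 (LATIN SMALL LETTER S) vs U+0455 (CYRILLIC SMALL LETTER DZE)
     print(f"U+{ord(a):04X} vs U+{ord(b):04X}")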
Current Algorithm
Our current algorithm is to display as Unicode all IDN labels within TLDs on our whitelist, and display as Punycode otherwise. We check the anti-spoofing policies of a registry before adding their TLD to the whitelist. The TLD operator must apply directly (they cannot be nominated by another person), and on several occasions we have required policy updates or implementation as a condition of getting in.
We also have a character blacklist - characters we will never display under any circumstances. This includes those which could be used to spoof the separators "/" and ".", and invisible characters. (XXX Do we need to update this to remove some of those, like ZWJ/ZWNJ, for IDNA2008?)
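For reference, the current behaviour amounts to something like the following sketch. TLD_WHITELIST and CHAR_BLACKLIST are illustrative placeholders, not the real preference values shipped in Firefox.

 TLD_WHITELIST = {"de", "jp", "se"}               # illustrative subset only
 CHAR_BLACKLIST = {"\u2044", "\u2215", "\u00a0"}  # e.g. fraction slash, division slash, no-break space
 def current_display_form(label: str, tld: str) -> str:
     """Return 'unicode' or 'punycode' under the whitelist-only policy."""
     if any(ch in CHAR_BLACKLIST for ch in label):
         return "punycode"       # blacklisted characters are never displayed
     if tld.lower() in TLD_WHITELIST:
         return "unicode"        # registry's anti-spoofing policy has been vetted
     return "punycode"           # all other TLDs fall back to Punycode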
Need For Change
This strategy provides pretty good user protection, and it provides consistency - every Firefox everywhere works the same. However, it does mean that IDNs do not work at all in many TLDs, because the registry (for whatever reason) has not applied for inclusion, or because we do not think they have sufficiently strong protections in place. In addition, ICANN is about to open a large number of new TLDs. So either maintaining a whitelist is going to become burdensome, or the list will become wildly out of date and we will not be serving our users.
Other Browsers
The Chromium IDN page has a good summary of the policies of Chrome/Chromium and the other browsers. Unfortunately, no consensus has emerged on how to do this. Those other mechanisms were considered, but many of them depend on the configuration of the user's computer (e.g. installed languages), so they give site owners no confidence that their IDN domain name will be displayed correctly for all their visitors, and no way of telling when it is not.
Proposal
The plan is to augment our whitelist with a check based on ascertaining whether all the characters in a label come from the same script, or from one of a limited and defined number of allowable combinations. The hope is that any intra-script near-homographs will be recognisable to people who understand that script.
We will retain the whitelist as well, because a) removing it might break some domains which worked previously, and b) if a registry submits a good policy, we have the ability to give them more freedom than the default restrictions do. So an IDN domain would be shown as Unicode if the TLD was on the whitelist or, if not, if it met the criteria above.
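In pseudocode terms, the proposed decision might look like the sketch below; it reuses the placeholder TLD_WHITELIST from the earlier sketch and defers the script check to a label_meets_script_criteria() helper, which is sketched under Algorithm below.

 def proposed_display_form(label: str, tld: str) -> str:
     if tld.lower() in TLD_WHITELIST:           # vetted registry: always Unicode
         return "unicode"
     if label_meets_script_criteria(label):     # single script or an allowed combination
         return "unicode"
     return "punycode"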
Algorithm
If a TLD is in the whitelist, we will unconditionally display Unicode. If it is not, the following algorithm will apply.
Unicode Technical Report 36 defines a "Moderately Restrictive" profile. It says the following (with edits for clarity):
- No characters in the label can be outside of the Identifier Profile (defined for us by the IDNA2008 standard, RFC 5892).
- All characters in each label must be from Common + Inherited + a single script, or from one of the following combinations:
- Common + Inherited + Latin + Han + Hiragana + Katakana; or
- Common + Inherited + Latin + Han + Bopomofo; or
- Common + Inherited + Latin + Han + Hangul; or
- Common + Inherited + Latin + any single other script except Cyrillic, Greek, or Cherokee
Unicode Technical Report 39 defines how we detect whether a string is "single script". Some Common or Inherited characters are only used in a small number of scripts (but more than one). Mark Davis writes: "The Unicode Consortium in U6.1 (due out soon) is adding the property Script_Extensions, to provide data about characters which are only used in a few (but more than one) script. The sample code in #39 should be updated to include that, so handling such cases." We should take this enhancement when the data becomes available; in the meantime, Common and Inherited characters are permitted without restriction.
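As a rough illustration only, the following Python sketch shows how the restriction might be evaluated. The script table is a tiny hand-rolled stand-in for the real UCD Scripts.txt data, Script_Extensions is ignored (matching the interim treatment of Common and Inherited described above), and the function names are invented for this page; none of this is the actual Firefox implementation.

 # Illustrative subset of Unicode script ranges; real code would load Scripts.txt
 # (and eventually Script_Extensions) rather than hard-coding anything.
 SCRIPT_RANGES = [
     (0x0041, 0x005A, "Latin"), (0x0061, 0x007A, "Latin"),
     (0x0370, 0x03FF, "Greek"), (0x0400, 0x04FF, "Cyrillic"),
     (0x3040, 0x309F, "Hiragana"), (0x30A0, 0x30FF, "Katakana"),
     (0x3100, 0x312F, "Bopomofo"), (0x4E00, 0x9FFF, "Han"),
     (0xAC00, 0xD7AF, "Hangul"),
 ]
 def script_of(ch: str) -> str:
     cp = ord(ch)
     for start, end, script in SCRIPT_RANGES:
         if start <= cp <= end:
             return script
     return "Common"    # toy fallback; the real property data is far richer
 # The explicit combinations from the Moderately Restrictive profile above.
 ALLOWED_COMBINATIONS = [
     {"Latin", "Han", "Hiragana", "Katakana"},
     {"Latin", "Han", "Bopomofo"},
     {"Latin", "Han", "Hangul"},
 ]
 CONFUSABLE_WITH_LATIN = {"Cyrillic", "Greek", "Cherokee"}
 def label_meets_script_criteria(label: str) -> bool:
     # Common and Inherited characters never restrict the result (see above).
     scripts = {script_of(ch) for ch in label} - {"Common", "Inherited"}
     if len(scripts) <= 1:
         return True                                  # a single script
     if any(scripts <= combo for combo in ALLOWED_COMBINATIONS):
         return True                                  # an allowed CJK + Latin combination
     if len(scripts) == 2 and "Latin" in scripts:
         other = next(iter(scripts - {"Latin"}))
         return other not in CONFUSABLE_WITH_LATIN    # Latin + one other non-confusable script
     return False

Note that under this check the all-Cyrillic lookalike from The Problem section still passes as a single-script label, which is exactly the whole-script confusable case discussed under Downsides.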
Additional checks:
- Display as Punycode labels which use more than one numbering system (we would need a list of numbering systems in Unicode)
- Display as Punycode labels which contain both simplified-only and traditional-only Chinese characters, using the Unihan data in the Unicode Character Database (should be < 16k of data for a simple binary test)
- Display as Punycode labels which have sequences of the same nonspacing mark (we would need a list of, or the name of a class containing, all such marks)
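The first and third of these checks can be sketched with nothing more than the standard Unicode character database properties; the function names below are invented for illustration, and the simplified/traditional Han check is omitted because it needs the Unihan data mentioned above.

 import unicodedata
 def mixes_numbering_systems(label: str) -> bool:
     """True if the label uses decimal digits from more than one numbering system.
     For any Nd character, ord(ch) minus its decimal value is the code point of
     that system's zero, which serves as a cheap identifier for the system."""
     zeros = {ord(ch) - unicodedata.decimal(ch)
              for ch in label if unicodedata.category(ch) == "Nd"}
     return len(zeros) > 1
 def has_repeated_nonspacing_mark(label: str) -> bool:
     """True if the same nonspacing mark (category Mn) occurs twice in a row."""
     return any(a == b and unicodedata.category(a) == "Mn"
                for a, b in zip(label, label[1:]))
 # e.g. mixes_numbering_systems("1\u0662") is True (ASCII plus Arabic-Indic digits)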
Possible Issues and Open Questions
The following issues are open, but should not block initial implementation.
Suggestion from TR#39:
- Check to see that all the characters are in the sets of exemplar characters for at least one language in the Unicode Common Locale Data Repository. [XXX What does this mean? -- Gerv]
Also:
- Should we document our character hard-blacklist as part of this exercise? It's already visible in the prefs. Are any characters in it legal in IDNA2008 anyway?
- Do we want to allow the user to choose between multiple "restriction levels", or have a hidden pref? There are significant downsides to allowing this...
- Do we ever want to display errors other than just by using Punycode? I suggest not...
- Should we add Armenian to the list of scripts which cannot mix with Latin?
Downsides
This system would permit whole-script confusables (All-Latin "scope.tld" vs all-Cyrillic "ѕсоре.tld"). However, so do the solutions of the other browsers, and it has not proved to be a significant problem so far. If there is a problem, every browser is equally affected.
If problems arose in the future (e.g. whole-script confusables, or homographs between a particular single script and Latin), our response would be that in the end, it is up to registries to make sure that their customers cannot rip each other off. Browsers can put some technical restrictions in place, but we are not in a position to do this job for them while still maintaining a level playing field for non-Latin scripts on the web. The registries are the only people in a position to implement the proper checking here. For our part, we want to make sure we don't treat non-Latin scripts as second-class citizens.
Transition
In between adopting this plan and shipping a Firefox with the restrictions implemented, we will admit into the whitelist any TLD whose anti-spoofing policies at registration time were at least as strong as those outlined above.