Talk:Software Update


bsmedberg says: Why is the update a separate executable? What we would need to add to xpinstall is

  1. Binary-patch functionality
  2. Ability to do the xpinstall at shutdown/startup (which it cannot do now)

Since we're planning on coding these features anyway, let's do it right! I can't see that it would take a lot more time than creating separate update executables for each update.

darin: There is a strong desire to avoid the complexity of xpinstall. We only need the ability to add, remove, replace, and patch files, and that can be done simply and reliably without xpinstall. Most of the complexity of software update is not addressed by xpinstall. The separate executable idea was motivated by three reasons: 1) no need for the user to download the update utility until they need to update their app, 2) the updater would be very small, and 3) we might want to change the updater in the future.


Silver says: on NT-based OSes, you can at least rename files that are loaded as part of a running application. This would allow you to rename the existing, in-use files and put down the new ones, all with Firefox running. Then restart it, and clean up the old ones in the background.
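
A minimal sketch of this rename-while-running staging idea, assuming a plain file copy stands in for whatever the real installer would do (the function and its names are invented for illustration):

  import os
  import shutil

  def stage_update(target_path, new_file_path):
      # On NT-based Windows, a file that is loaded by a running process can
      # still be renamed (though not deleted or overwritten in place).
      old_path = target_path + ".old"
      os.rename(target_path, old_path)           # move the in-use file aside
      shutil.copy2(new_file_path, target_path)   # drop the new version in place
      return old_path  # the caller deletes this after the app restarts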

darin: There's no real benefit to this approach, since users still need to restart Firefox to get the new version. From the user's point of view, it is easy enough to apply the changes between the time Firefox shuts down and when it restarts. We can easily make the updater show a progress meter, and that should do the trick.


Comments from bsmedberg: are we sure that the mozilla mirror network supports byte-range requests properly? Is there some other way to gate bandwidth?

darin: Yes, I have tested this out, and it seems to work. The only problem is that the various web servers do not all compute ETags in the same way, so we cannot issue If-Match requests, but that should be okay since the URLs of the files being downloaded should be sufficient to make the entities unique.
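
As a rough illustration of a ranged chunk fetch along these lines (the helper and its parameters are hypothetical, and the requests library stands in for the real HTTP stack):

  import requests

  def fetch_chunk(url, start, size):
      # Ask for a small byte range of the update file. If-Match validation
      # is deliberately skipped because mirrors compute ETags differently;
      # the file's URL is assumed to identify the entity uniquely.
      headers = {"Range": "bytes=%d-%d" % (start, start + size - 1)}
      resp = requests.get(url, headers=headers, timeout=30)
      if resp.status_code == 206:   # Partial Content: the range was honored
          return resp.content
      if resp.status_code == 200:   # server ignored the Range header
          return resp.content[start:start + size]
      resp.raise_for_status()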

justdave: NO!!! The primary mirrors support it, but there's no way to guarantee that the secondaries do, and the primary mirrors don't have enough available bandwidth to support every Firefox user downloading at once when we have an update. Anything the update service downloads needs to be obtained via our download redirector on download.mozilla.org, so the bandwidth gets distributed to all of the primary and secondary mirrors according to who has bandwidth available.

darin: I think you're worrying about something that we can easily solve. Fetching the data via redirect is not a problem for this system. The idea here is that we will try to fetch the files in small chunks. We need to worry about all of the Firefoxes trying to update themselves at once, but we can be smart about ensuring the load is balanced out. The download redirector could even return an error if the load is too high; the Firefoxes would then wait until some timeout before trying again. Are you sure that the mirrors do not support byte-range requests? Even very old versions of Apache support them for static files. Are you concerned about using HTTP instead of FTP?
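
A sketch of the back-off behavior darin suggests, assuming the redirector signals overload with an HTTP 503 (the function names and the status-code choice are assumptions, not the actual design):

  import random
  import time
  import requests

  def download_with_backoff(url, max_retries=5):
      # If the redirector sheds load, wait with jittered exponential backoff
      # so the Firefoxes do not all retry at the same moment.
      for attempt in range(max_retries):
          resp = requests.get(url, allow_redirects=True, timeout=30)
          if resp.status_code != 503:
              resp.raise_for_status()
              return resp.content
          time.sleep((2 ** attempt) + random.uniform(0, 1))
      raise RuntimeError("update servers busy; try again at the next check")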

justdave: Yeah, solvable easily enough. :) The problem is that some of the mirrors are actually using non-Apache web servers. HTTP is very much preferred over FTP, though. Whether a server supports byte ranges ought to be testable, so we could have Bouncer's sentry script check the servers to make sure they support it (so ones that don't get removed from the channel the update service will end up using). Bug 292942 has been filed for this.
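
Such a sentry test could be as simple as requesting a single byte and checking for a 206 response (a hedged sketch, not Bouncer's actual script):

  import requests

  def supports_byte_ranges(url):
      # A compliant server answers a 1-byte Range request with 206 Partial
      # Content; one that ignores Range answers 200 with the full body.
      resp = requests.get(url, headers={"Range": "bytes=0-0"},
                          stream=True, timeout=15)
      return resp.status_code == 206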


Comments from bsmedberg: We should think carefully about how we handle these signatures. I presume we want mozilla updates to be signed *by mozilla.org*, not just signed in general. How do we identify which cert/certchains are appropriate?

darin: I'm not sure. I suspect that dougt will have some good ideas about this problem.

jmdesp: We just had a discussion about that in news:npm.crypto. It may require a little more work to verify which CA the extension is signed under, but it's not really hard. The main point here is that Mozilla would then act as a CA. I floated the idea that what matters is not the identity of the person who wants the certificate, but whether they have a valuable extension to distribute. We could require that the extension first be made publicly available unsigned, with the certificate granted only after a positive community review.

One question then is, if we end up issuing many certificates, won't some of them end up being misused? One solution is to tie each certificate to an extension, not an individual: we could use the extension's GUID as the ID in the cert, so the certificate can only be used for that one specific extension.

Then at installation time, we could check the extension using the update mechanism, which would keep us from installing it if it is known to be dangerous or is a version that has security holes. Peter Gutmann raised the issue that for ActiveX, the problem is more the exploitation of vulnerabilities in legitimate ActiveX controls than people deliberately signing evil components. The mechanism above would cover both.
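
A rough sketch of that install-time check, combining the GUID-in-the-cert idea with a known-bad-version lookup (the endpoint URL, the response shape, and the helper names are all invented for illustration):

  import requests

  BLOCKLIST_URL = "https://updates.example.org/blocklist"  # placeholder URL

  def may_install(extension_guid, version, cert_subject_cn):
      # The cert's subject is assumed to carry the GUID of the one extension
      # it was issued for; a mismatch means the cert is being reused.
      if cert_subject_cn != extension_guid:
          return False
      # Ask the update service whether this version is known to be dangerous;
      # this covers both deliberately evil and merely vulnerable versions.
      resp = requests.get(BLOCKLIST_URL,
                          params={"guid": extension_guid, "version": version},
                          timeout=15)
      resp.raise_for_status()
      return not resp.json().get("blocked", False)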

The one remaining problem is the risk that checking extension validity would become a major load on the servers. If you doubt that could happen, check the story of Class3SoftwarePublishers.crl at VeriSign: http://www.verisign.com/verisign-inc/news-and-events/news-archive/us-news-2004/page_000738.html

Class3SoftwarePublishers.crl was only 7 KB, but VeriSign would have needed up to 4 Gbit/s of bandwidth to keep up that day. They normally keep the bandwidth down by restricting requests for that file to one per month, but that throttling failed that day and they received ten times the usual number of requests.


Comments from chofmann: capturing some things from a discussion this morning...

-how to deal with situations requiring multiple updates. options are to apply patches in sequence, possibly running the browser between installations to sanity check, or not. or maybe just initially force multi-patch users down the full-update path (see the sketch after this list).

-experience shows we need to have a pretty exhaustive list of OS combinations to test against so we capture behavioral diffs between win98, win2k, xp, osX releases, linux distros, etc...

-roll back: details to work out about what to do when a roll back situation is encountered. maybe try again, or roll from patch to full upgrade.

-look at options for compression of the packages...
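
A sketch of the multi-patch strategy from the first item above: apply pending partial patches in order and fall back to the full update if one fails (the two callables are placeholders for the real operations):

  def apply_updates(patches, apply_patch, apply_full_update):
      # Try each partial patch in sequence; on any failure, abandon the
      # patch chain and take the full-update path (one roll-back option).
      for patch in patches:
          try:
              apply_patch(patch)
          except Exception:
              apply_full_update()
              return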


Comments from jmdesp: Currently the software update uses an SSL connection to get the information. How will this scale if we have several million clients? Bouncer will only redirect them after one initial exchange, and with SSL that exchange involves a non-negligible amount of data. It might be better to switch to a model where the security is ensured not through SSL but by sending a signed answer over plain HTTP, with DNS-level distribution of requests instead of HTTP redirects.
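
A hedged sketch of that signed-answer-over-HTTP idea, using a detached RSA signature verified against a public key shipped with the client (the URLs, the use of the cryptography library, and the algorithm choices are all assumptions, not the actual design):

  import requests
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding
  from cryptography.hazmat.primitives.serialization import load_pem_public_key

  MANIFEST_URL = "http://aus.example.org/update.rdf"  # placeholder
  SIGNATURE_URL = MANIFEST_URL + ".sig"               # placeholder

  def fetch_verified_manifest(pinned_public_key_pem):
      manifest = requests.get(MANIFEST_URL, timeout=30).content
      signature = requests.get(SIGNATURE_URL, timeout=30).content
      key = load_pem_public_key(pinned_public_key_pem)
      # verify() raises InvalidSignature if the manifest was tampered with,
      # so plain HTTP plus a pinned key replaces the SSL handshake cost.
      key.verify(signature, manifest, padding.PKCS1v15(), hashes.SHA256())
      return manifest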

darin: I believe that the current application update system serves an RDF manifest file via HTTPS. The current plan does not change that. Do you believe that the current system places too much load on the HTTPS server providing these manifest files today?

justdave: we're already getting several million clients, and it's handling it just fine, and we have plenty of infrastructure to scale now. For example, on May 19 there were 9,873,623 hits on the AUS servers. AUS is using DNS round-robining right now (and will soon have a set of load balancers so multiple servers will share an IP, as well). Bouncer is only used when there's actually an update to retrieve; it's not used for checking for updates.