= Logical diagram =
Can I see an example of such a diagram? I'm unclear what's requested here.
= Physical diagram =
Not finalized yet.
= Hardware =
(what is going to be used here, physically?)
The hardware design will be worked out by IT once the design document here is complete; see [https://bugzilla.mozilla.org/show_bug.cgi?id=677346 bug 677346].
= OS =
(self explanatory, note any exceptions)
Any Unix variant that Apache, PHP, Python, and MongoDB run on should work. IT will decide what is actually used.
= Interface settings and IP allocations =
This section needs to be filled in by IT.
=== VLANs ===
(firewall needs?)
See [[Tinderboxpushlog/ArchitectureAndDependencies]].
=== Load balancing ===
(round robin? VIP? GLB?)
Undecided. It should be possible to have one shared server for MongoDB and multiple servers serving the PHP / HTML / JS / CSS. All state is stored in MongoDB; local data on the PHP servers is only used for caching. (Cached files live in cache/ and summaries/.)
=== Health checks ===
(how will the app be checked for validity from the lb?)
Since the web servers don't store state, they can't be invalid. (Does that make sense?)
=== Front end caching ===
(http caching)
No caching.
=== Back end caching ===
(memcache etc)
No idea.
= Database =
(what database server(s) - rw & ro, db name(s), db username, other requirements)
TBPL currently uses a read-write MongoDB database, configured to use the default settings (localhost:27017).
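For illustration, a minimal connection sketch (in Python, using pymongo) against those default settings; the database name shown is a placeholder, not something specified anywhere yet:
<pre>
from pymongo import MongoClient

# Connect to the MongoDB instance on its default host/port (localhost:27017),
# matching the default settings described above.
client = MongoClient("localhost", 27017)
client.admin.command("ping")  # simple connectivity check

# "tbpl" is a placeholder database name; the actual name is up to the deployment.
db = client["tbpl"]
</pre>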
Rewriting TBPL to use Elasticsearch instead would only take a few hours, but it's unclear whether that's worth doing.
= File storage =
(internally or externally mounted filesystems.. where will static data for this service live?)
Results from the more expensive PHP scripts (getParsedLog.php, getTinderboxSummary.php, getLogExcerpt.php) are stored in directories called "cache" and "summaries" in the TBPL root directory. The "summaries" directory will go away once we get rid of Tinderbox mode.
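As a rough sketch of that pattern (this is not the actual PHP code; the function name and hashing scheme here are made up), the expensive scripts effectively do something like:
<pre>
import os
import hashlib

CACHE_DIR = "cache"  # the same directory the PHP scripts write into

def get_or_compute(key, compute):
    # Hypothetical illustration of the "compute once, store under cache/" pattern;
    # getParsedLog.php etc. are PHP and differ in the details.
    path = os.path.join(CACHE_DIR, hashlib.sha1(key.encode()).hexdigest())
    if os.path.exists(path):
        with open(path) as f:
            return f.read()
    result = compute()
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "w") as f:
        f.write(result)
    return result
</pre>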
= Automation =
=== Cron jobs ===
(if the cron jobs run from an admin machine, please specify where they will run)
(what's an admin machine?)
There needs to be a cron job that periodically runs tbpl/dataimport/import-buildbot-data.py in order to import Buildbot data into MongoDB. The import frequency hasn't been decided yet, but it will probably be between one and five minutes. (The Buildbot source data is regenerated every minute.)
The importer is idempotent; it never destroys data and it doesn't insert duplicates.
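For example, a crontab entry along these lines would cover it (the five-minute interval and the installation path are placeholders, since neither has been decided):
<pre>
# Hypothetical schedule: import Buildbot data every 5 minutes.
*/5 * * * * cd /path/to/tbpl && python dataimport/import-buildbot-data.py
</pre>
Because the importer is idempotent, running it more often than strictly necessary is harmless.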
=== Puppet ===
(what modules/classes will be used?)
(what's "Puppet"?)
= Monitoring =
= Backups =
(where are backups stored if any? How can someone else fix this site in a disaster?)
I don't know. Only MongoDB would need to be backed up, but how would that be done?
Most of TBPL's data comes from other sources and can be reassembled by simply running the importer (dataimport/import-buildbot-data.py) again. The two exceptions are job comments (also called "build stars") and the hidden builder list, which are stored directly in the TBPL MongoDB and don't come from other sources.
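If a standard approach is acceptable, a periodic mongodump of the TBPL database would presumably cover this (the database name and output path below are placeholders):
<pre>
# Hypothetical backup command; dumps the whole TBPL database, which includes
# the build stars and the hidden builder list. Run from a shell or cron so
# $(date +%F) expands to today's date.
mongodump --host localhost:27017 --db tbpl --out /backups/tbpl-$(date +%F)
</pre>
After restoring such a dump, everything else could be rebuilt by re-running the importer.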
= Staging site =
= Developer Contacts =
* Markus Stange <mstange@themasta.com> (original creator)
* Arpad Borsos <arpad.borsos@googlemail.com> (active contributor)
* Ehsan Akhgari <eakhgari@mozilla.com> (contributor who is also a MoCo employee)