Apps/ServerArchitecture
http://ziade.org/appsync.png
Overview
- The appsync server provides
  - the appsync APIs
  - the static myapps website
  - a MySQL database that mirrors all writes
- The sauropod server is a Node.js server that
  - keeps sessions in memory (DB tokens)
  - proxies calls to the HBase server (sketched below)
- The HBase cluster manages the data
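To make the session/proxy behavior concrete, here is a minimal sketch of the pattern described above. Sauropod itself is written in Node.js; this Python sketch is illustrative only, and every name in it (SessionProxy, the hbase_client interface) is hypothetical.

 import uuid

 class SessionProxy(object):
     """Hypothetical sketch of the sauropod pattern: session tokens live
     in process memory, data calls are proxied to the HBase-backed store."""

     def __init__(self, hbase_client):
         self.sessions = {}          # token -> userid, held in memory only
         self.hbase = hbase_client   # assumed client for the HBase cluster

     def start_session(self, userid):
         token = uuid.uuid4().hex
         self.sessions[token] = userid
         return token

     def put(self, token, key, value):
         userid = self.sessions[token]        # KeyError = unknown/expired token
         self.hbase.put(userid, key, value)   # proxy the write through

     def get(self, token, key):
         userid = self.sessions[token]
         return self.hbase.get(userid, key)

Because the tokens live only in one process's memory, restarting sauropod drops all sessions; that is part of why load balancing it is still an open question (see below).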
appsync/myapps
- Points of contact
  - Ops: Gozer
  - Devs: Tarek
- Servers
  - appsync-stage1.vm1.labs.sjc1.mozilla.com (https://stage-myapps.mozillalabs.com/)
- stack
  - CentOS 6
  - Python 2.6
  - nginx
  - gunicorn
  - appsync server
- status - DEPLOYED
- CI / CD - http://hudson.build.mtv1.svc.mozilla.com/view/9.%32Apps
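For reference, gunicorn reads its settings from a plain Python file. The actual settings used on appsync-stage1 are not documented here, so every value below is an assumption; this is only a sketch of how the nginx + gunicorn pairing above is typically wired.

 # gunicorn.conf.py -- illustrative values only, not the deployed config
 bind = "127.0.0.1:5000"   # nginx terminates public traffic and proxies here
 workers = 4               # sync workers; tune to the VM's core count
 proc_name = "appsync"
 accesslog = "-"           # request log to stdout
 errorlog = "-"            # error log to stderr

gunicorn would load it with something like gunicorn -c gunicorn.conf.py appsync.wsgi:application, where the appsync.wsgi module path is likewise a guess.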
memcached
- Points of contact
  - Ops: Gozer
- Servers
  - appsync-stage1.vm1.labs.sjc1.mozilla.com
- stack
  - CentOS 6
  - Python 2.6
  - nginx
  - gunicorn
  - appsync server
- status - DEPLOYED
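The page does not say what the appsync server caches in memcached, but for illustration, a Python process on this box could talk to a local memcached like this, using the python-memcached client. The host, key scheme, and TTL are all assumptions.

 import memcache  # python-memcached package

 mc = memcache.Client(["127.0.0.1:11211"])

 def cache_token(token, userid, ttl=300):
     mc.set("token:%s" % token, userid, time=ttl)  # expires after ttl seconds

 def lookup_token(token):
     return mc.get("token:%s" % token)  # None if missing or expired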
Sauropod Node.js server
- Points of contact
  - Ops: Gozer
  - Devs: ?
- Servers
  - sauropod-stage1.vm1.labs.sjc1.mozilla.com
- stack
  - CentOS 6
  - node.js 0.6.3 from the mozilla-services package repository
  - direct checkout from github
- status - Hackishly deployed
- CI / CD - not yet
Sauropod HBase stack
- Points of contact
  - Ops: Gozer
  - Devs: rtilder
- Servers
  - appsync-hbase-stage1.vm1.labs.sjc1.mozilla.com
    - "master": runs the HDFS namenode
    - ZooKeeper quorum node
    - also runs as an HDFS data node and HBase region server
  - appsync-hbase-stage2.vm1.labs.sjc1.mozilla.com
    - "slave": runs an HDFS data node and HBase region server
    - ZooKeeper quorum node
- stack
  - CentOS 6
  - Sun JVM 1.6.0
  - Cloudera HBase distro, version CDH3U2
- status - DEPLOYED
- CI / CD - none
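As a rough illustration of talking to the cluster from Python over HBase's Thrift gateway (assuming the Thrift server is enabled, which this page does not confirm), using the happybase library. The table and column family names are invented, since sauropod's actual schema is not documented here.

 import happybase

 conn = happybase.Connection("appsync-hbase-stage1.vm1.labs.sjc1.mozilla.com")
 table = conn.table("sauropod")  # hypothetical table name

 # HBase rows are addressed by a row key plus column-family:qualifier.
 table.put("user123/somekey", {"data:value": "hello"})
 print(table.row("user123/somekey"))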
open questions
- how to load balance sauropod? client-side? Zeus?
- backups on the sauropod nodes?
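On the first question, the simplest client-side option would be a round-robin rotation over the sauropod hosts, sketched below. Only sauropod-stage1 exists today, so the node list is hypothetical, and a real balancer would also need health checks and retry logic.

 import itertools

 SAUROPOD_NODES = [
     "sauropod-stage1.vm1.labs.sjc1.mozilla.com",
     # additional nodes are hypothetical for now
 ]

 _rotation = itertools.cycle(SAUROPOD_NODES)

 def pick_node():
     """Round-robin choice of a sauropod host for the next request."""
     return next(_rotation)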