CIDuty/How To/Troubleshoot AWS

Sometimes AWS spins up bad instances. Usually the sheriffs notify CiDuty about these, but if you see one, escalate to CiDuty in #ci. A job may appear as failed if the instance it was running on disappears; spot instances can disappear when they are outbid.


= Bad Instances =
To understand whether a job failure is caused by a spot instance or not, it's best to first understand the various ways a task can be resolved. See [https://docs.taskcluster.net/docs/reference/platform/taskcluster-queue/references/api#status this page] for more information.
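You can query the queue directly to see how each run of a task ended. Below is a minimal sketch using the taskcluster Python client; the root URL and task ID are placeholders. A run resolved as an exception with a reason like <code>claim-expired</code> or <code>worker-shutdown</code> usually means the worker went away (for example, an outbid spot instance) rather than the task itself failing.

<syntaxhighlight lang="python">
# Minimal sketch: inspect how each run of a task was resolved.
# Requires the taskcluster client (pip install taskcluster); the task ID
# below is a placeholder for the task you are investigating.
import taskcluster

queue = taskcluster.Queue({"rootUrl": "https://taskcluster.net"})

task_id = "REPLACE_WITH_TASK_ID"
status = queue.status(task_id)["status"]

for run in status["runs"]:
    # "completed"/"failed" reflect the task itself; an "exception" with
    # reasonResolved "claim-expired" or "worker-shutdown" points at the worker.
    print(run["runId"], run["state"], run.get("reasonResolved"))
</syntaxhighlight>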
 
When AWS spins up a bad instance (usually identified by the fact that it fails every job), find it in the worker explorer of the [https://tools.taskcluster.net/provisioners/aws-provisioner-v1/worker-types AWS Provisioner] and quarantine it. Its inactivity will cause the worker to be terminated, and AWS will spin up a new one. You can do this even while a task is running, thanks to the built-in mechanism for retrying jobs. To further understand the interaction between the queue and a worker, check out the [https://docs.taskcluster.net/docs/reference/platform/taskcluster-queue/docs/worker-interaction official docs].
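Quarantining can also be done through the queue API rather than the worker explorer UI. Below is a minimal sketch, assuming Taskcluster credentials that carry the quarantine scope for the worker type; the worker type, worker group, and worker ID are placeholders.

<syntaxhighlight lang="python">
# Minimal sketch: quarantine a misbehaving worker so it stops claiming tasks.
# Assumes credentials with the queue:quarantine-worker scope; all identifiers
# except the provisioner are placeholders.
import taskcluster

queue = taskcluster.Queue({
    "rootUrl": "https://taskcluster.net",
    "credentials": {"clientId": "...", "accessToken": "..."},
})

queue.quarantineWorker(
    "aws-provisioner-v1",    # provisionerId
    "gecko-t-linux-large",   # workerType (placeholder)
    "us-east-1",             # workerGroup (placeholder)
    "i-0123456789abcdef0",   # workerId (placeholder EC2 instance ID)
    # Quarantined for a week; the idle worker is then terminated.
    {"quarantineUntil": taskcluster.stringDate(taskcluster.fromNow("1 week"))},
)
</syntaxhighlight>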