Apache Mesos API(s)

This section covers how to interact with the components of Mesos, its frameworks, and their respective APIs.

See Apache Mesos for additional information.

Links

External links to the API docs for each of these projects

Mesos Execute Example

Submit jobs from any master or slave. This is not recommended for regular use, as the other frameworks do a better job of task scheduling.

mesos execute --command="/opt/test.py" --master="master001.example.com:5050" --name="test"

Singularity Example

For Singularity you must submit both a request and a deploy to run a job.

Substitute unique identifiers for $Ruuid and $Duuid.
#make request
curl -i -X POST -H 'Content-Type: application/json' -d '{"id": "'"$Ruuid"'", 
   "owners": ["[email protected]"], "daemon": false, "state": "ACTIVE", 
   "instances": 1, "hasMoreThanOneInstance": false, "canBeScaled": false }' 
    master001:8082/singularity/api/requests

#deploy request
curl -i -X POST -H 'Content-Type: application/json' -d '{"deploy":{"requestId": 
   "'"$Ruuid"'", "id":"'"$Duuid"'", "command":"/exec/mesos_test/dev/test.py", 
   "resources":{"cpus":0.1,"memoryMb": 128, "numPorts":0} } }' 
    master001:8082/singularity/api/deploys

#run request
curl -i -X POST master001:8082/singularity/api/requests/request/$Ruuid/run

#Remove a task
curl -i -X DELETE http://master001.example.com:8082/singularity/api/tasks/task/test-new_test-1433015100646-1-master001.example.com-DEFAULT
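Before deleting, it helps to list what Singularity is actively running so you can find the full task ID. A minimal sketch: the active-tasks endpoint returns a JSON array, and the taskId.id field layout is an assumption about the response schema, so verify it against your Singularity version. The helper reads the JSON on stdin so it can be fed from curl.

```shell
#!/bin/sh
# Print one task ID per line from the Singularity active-task list JSON
# (GET /singularity/api/tasks/active), read on stdin.
singularity_task_ids() {
  python3 -c '
import json, sys
for t in json.load(sys.stdin):
    print(t["taskId"]["id"])
'
}

# Usage against a live master:
# curl -s master001:8082/singularity/api/tasks/active | singularity_task_ids
```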

Chronos Example

Chronos's syntax is nearly identical to Singularity's. The main drawback is that you cannot query for the success or failure of an individual job; you can only query the status of ALL jobs. Also, its date syntax is a little "weird".

# date syntax (ISO 8601 repeating interval: repetitions/start/period)
date=`date -u +%Y-%m-%d`
time=`date -u +%T`
# the resulting schedule string is R0/${date}T${time}Z/PT2S
#submit task to chronos
curl -i -H 'Content-Type: application/json' -X POST -d '{ "schedule": 
   "R0/'"$date"'T'"$time"'Z/PT2S", "name": "'$uuid'", "epsilon": "PT30S", 
   "command": "/exec/mesos_test/dev/test.py", "owner": "[email protected]", 
   "async": false }' master001:8081/scheduler/iso8601

#run task
curl -i -X PUT master001:8081/scheduler/job/$uuid
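Since Chronos only reports status for all jobs at once, the /scheduler/jobs JSON has to be filtered client-side for the job you care about. A sketch of that filtering: the successCount and errorCount field names are assumptions about the Chronos job schema, so check them against your version. The helper reads the JSON on stdin so it can be fed from curl.

```shell
#!/bin/sh
# Pull one job's success/error counts out of the full Chronos job list
# (GET /scheduler/jobs), read on stdin.
chronos_job_status() {
  # $1 = job name
  python3 -c '
import json, sys
for j in json.load(sys.stdin):
    if j["name"] == sys.argv[1]:
        print("%s successes=%s errors=%s" % (
            j["name"], j.get("successCount", 0), j.get("errorCount", 0)))
' "$1"
}

# Usage against a live master:
# curl -s master001:8081/scheduler/jobs | chronos_job_status "$uuid"
```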


Marathon Example

We could experiment with PHP 5.4's built-in web server and run a million tiny versions of websites. Coupled with HAProxy, this would increase our ability to scale our webservices/APIs as we add customers. command: /usr/bin/php -S `hostname`:$PORT0 /opt/info.php
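That PHP idea could be sketched as a Marathon app definition along the lines of the docker example below. This is a minimal sketch only; the app id, resource sizes, and port count are placeholder assumptions:

```json
{
  "id": "php-info",
  "instances": 1,
  "cpus": 0.1,
  "mem": 64,
  "cmd": "/usr/bin/php -S `hostname`:$PORT0 /opt/info.php",
  "ports": [0]
}
```

It would be POSTed to http://master001:8080/v2/apps the same way as the docker example.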

Deploy a docker image with marathon 
vim testdev.json 
{ 
  "container": { 
    "type": "DOCKER", 
    "docker": { 
      "image": "docker001.example.com:5000/testdev", 
      "network": "BRIDGE", 
      "portMappings": [ 
        { "containerPort": 80, "hostPort": 0, "protocol": "tcp"} 
        ] 
    } 
  }, 
  "id": "testdev", 
  "instances": 1, 
  "cpus": 0.5, 
  "mem": 512, 
  "uris": [], 
  "ports": [49153], 
  "cmd": "" 
} 
  
curl -X POST -H "Content-Type: application/json" http://master001:8080/v2/apps [email protected]

Using It

I have created sample bash scripts to show how all of this can be managed and executed from anywhere in the cluster.

Queue_task_singularity.sh queues 15 jobs at once (can be set to any arbitrary number)

#!/bin/sh
## queues up numerous tasks for singularity/mesos stress test

i=0
while [ $i -lt 15 ]; do

# create uuid and strip the dashes; singularity does not allow them in names
uuid=`uuidgen | sed 's/-//g'`
echo $uuid
Rname=DEVreqTEST$uuid
Dname=DEVdepTEST$uuid
##send IDs to tmp for cleanup/referencing in other scripts
echo "----------$i--------------" >> /tmp/singhistory
echo "request ID is $Rname" >> /tmp/singhistory
echo "deploy ID is $Dname" >> /tmp/singhistory
#make request
curl -i -X POST -H 'Content-Type: application/json' -d '{"id": "'"$Rname"'", "owners": ["[email protected]"], "daemon": false, "state": "ACTIVE", "instances": 1, "hasMoreThanOneInstance": false, "canBeScaled": false }' master001:8082/singularity/api/requests

#deploy request
curl -i -X POST -H 'Content-Type: application/json' -d '{"deploy":{"requestId": "'"$Rname"'", "id":"'"$Dname"'", "command":"/exec/mesos_test/dev/test.py", "resources":{"cpus":0.1,"memoryMb": 128, "numPorts":0} } }' master001:8082/singularity/api/deploys

#run request
curl -i -X POST master001:8082/singularity/api/requests/request/$Rname/run

i=$((i + 1))
done

run_singularity.sh will re-run the last 15 jobs submitted.

#!/bin/bash
# I made this very early in my understanding of API's which is 
# why there are some "strange" things happening
#command to get history on deploys
#curl -i -X GET http://master001:8082/singularity/api/history/request/requestTEST/deploy/hafu4waoihaehroaehf0aer0f003428502409430hq0oia980

r_array=(`grep "request ID" /tmp/singhistory | awk 'match($0,"is"){print substr($0,RSTART+3,50)}'`)
echo "request array is ${r_array[*]}"


#d_array=(`grep "deploy ID" /tmp/history | awk 'match($0,"is"){print substr($0,RSTART+3,50)}'`)
#echo "Deploy array is ${d_array[*]}"

num=${#r_array[@]}
for (( i=0; i<${num}; i++));
        do
        #for deploy in "${d_array[@]}"
        #do
        echo $i

                Rname=${r_array[$i]}
echo $Rname
                echo "curl -i -X POST master001:8082/singularity/api/requests/request/$Rname/run"
                curl -i -X POST master001:8082/singularity/api/requests/request/$Rname/run



done

cleanup_singularity.sh removes tasks/deploys/requests that are no longer valid or needed. It checks job status to make sure each job has succeeded before removing it.

#!/bin/bash

#command to get history on deploys
#curl -i -X GET http://master001:8082/singularity/api/history/request/requestTEST/deploy/hafu4waoihaehroaehf0aer0f003428502409430hq0oia980

r_array=(`grep "request ID" /tmp/singhistory | awk 'match($0,"is"){print substr($0,RSTART+3,50)}'`)
echo "request array is ${r_array[*]}"


d_array=(`grep "deploy ID" /tmp/singhistory | awk 'match($0,"is"){print substr($0,RSTART+3,50)}'`)
echo "Deploy array is ${d_array[*]}"

num=${#r_array[@]}
for (( i=0; i<${num}; i++));
        do
        #for deploy in "${d_array[@]}"
        #do
        echo $i

                request=${r_array[$i]}
                deploy=${d_array[$i]}
                echo $request
                echo $deploy
                echo "checking if successful"
                curl -i -X GET http://master001:8082/singularity/api/history/request/$request/deploy/$deploy > /tmp/singcheck
                success=`awk 'match($0,"SUCCEEDED") {print substr($0,RSTART+0,9)}' /tmp/singcheck`
                echo "checking if failed"
                failed=`awk 'match($0,"FAILED") {print substr($0,RSTART+0,6)}' /tmp/singcheck`

                #find deploy state in history file
                #answer=`awk 'match($0,"deployState") {print substr($0,RSTART+14,9)}' check`
                if [[ "$success" == "SUCCEEDED" ]];
                then
                #removing successful request
                echo "you can delete request $request with deply $deploy"
curl -i -X DELETE http://master001:8082/singularity/api/requests/request/$request
                echo "curl -i -X DELETE http://master001:8082/singularity/api/requests/request/$request"
                else
echo "investigate: request $request with deploy $deploy did not complete successfully" > sing_investigate
                sendmail -F [email protected] -it <<END_MESSAGE
                To: [email protected]
                Subject: tasks failed on singularity

                $(cat sing_investigate)
END_MESSAGE

fi
done
## clean up temp text files
cat /dev/null > /tmp/singhistory
cat /dev/null > /tmp/singcheck
cat /dev/null > /tmp/singrequest

I created similar examples for interacting with Chronos.

Watching It

You can view the status of all jobs by navigating to http://master001:5050/#/. If jobs do not show up, a communication error is most likely the culprit. For old jobs, view the "sandbox", which has links to stderr and stdout for individual tasks and makes debugging very easy.
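The same information shown in the web UI is available as JSON from the master's state endpoint, which is handy for scripting. A sketch, assuming /master/state.json as the endpoint name on the Mesos version in use here (newer releases also serve /master/state): the helper reads the state JSON on stdin so it can be fed from curl, and summarizes task counts by state.

```shell
#!/bin/sh
# Count tasks per state (TASK_RUNNING, TASK_FAILED, ...) across all
# frameworks in the Mesos master state JSON, read on stdin.
mesos_task_summary() {
  python3 -c '
import json, sys
state = json.load(sys.stdin)
counts = {}
for fw in state.get("frameworks", []):
    for t in fw.get("tasks", []):
        counts[t["state"]] = counts.get(t["state"], 0) + 1
for s, n in sorted(counts.items()):
    print(s, n)
'
}

# Usage:
# curl -s http://master001:5050/master/state.json | mesos_task_summary
```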