
Container Orchestration Engines (COEs) assist with deploying and managing containers. Popular COEs include Docker Swarm, Kubernetes, and Mesos. The Rapid Access Cloud can assist with deploying COE infrastructure by way of the OpenStack Magnum project.

Supported COEs

Right now, the Rapid Access Cloud supports Docker Swarm and Kubernetes.

Available Templates

At this time, we offer the following templates:

Name                  | Flavor Used | IPv6 Accessible | Floating IP Accessible
swarm-floating-medium | m1.medium   | Yes             | Yes
swarm-ipv6-medium     | m1.medium   | Yes             | No
swarm-floating-large  | m1.large    | Yes             | Yes
swarm-ipv6-large      | m1.large    | Yes             | No
swarm-floating-xlarge | m1.xlarge   | Yes             | Yes
swarm-ipv6-xlarge     | m1.xlarge   | Yes             | No
k8s-floating-medium   | m1.medium   | Yes             | Yes
k8s-ipv6-medium       | m1.medium   | Yes             | No
k8s-floating-large    | m1.large    | Yes             | Yes
k8s-ipv6-large        | m1.large    | Yes             | No
k8s-floating-xlarge   | m1.xlarge   | Yes             | Yes
k8s-ipv6-xlarge       | m1.xlarge   | Yes             | No

Creating a Swarm Cluster

Install Docker if it is not already installed.

On the Rapid Access Cloud Dashboard

Coming Soon.

On the Command-Line

Make sure you have the OpenStack command-line tools installed.

Next, install the python-magnumclient package:

Code Block
languagebash
# On Linux
$ sudo pip install python-magnumclient


# On Mac
$ pip install --user python-magnumclient
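

If the install worked, the openstack command should now recognize the coe subcommands provided by python-magnumclient. A quick, credential-free sanity check is to ask for the help text:

Code Block
languagebash
# Should print usage information rather than an "unknown command" error
$ openstack coe cluster list --help
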

Next, choose a Cluster Template. You can view the available templates by doing:

Code Block
languagebash
$ source /path/to/your/rc/file
$ openstack coe cluster template list

+--------------------------------------+-----------------------+
| uuid                                 | name                  |
+--------------------------------------+-----------------------+
| 22484703-51e3-468d-b829-64fe19fec7b6 | swarm-floating-medium |
| 416e127b-8377-4d17-bffb-049dd92bba39 | swarm-floating-large  |
| 82151b50-53d6-4a6a-8cb8-7adcf3bd7353 | swarm-floating-xlarge |
| ace2c106-b14d-4cb6-9138-9533fab03453 | swarm-ipv6-medium     |
| 438ed95c-2197-4ea5-b5db-26f2c9d50cea | swarm-ipv6-large      |
| cb442f1d-9ffd-471c-b158-bc2320b0fccb | swarm-ipv6-xlarge     |
+--------------------------------------+-----------------------+
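

If you want more detail about a particular template (flavor, network settings, and so on), you can inspect it by name or UUID. For example:

Code Block
languagebash
# Substitute any template name or uuid from the list above
$ openstack coe cluster template show swarm-floating-medium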

...


Code Block
languagebash
$ openstack server list -c Name -c Networks
+---------------------------------------------+---------------------------------------------------------------------------+
| Name                                        | Networks                                                                  |
+---------------------------------------------+---------------------------------------------------------------------------+
| swarm-cluster-aq5dpyxwpte5-node-1           | default=2605:fd00:4:1000:f816:3eff:feb5:293d, 10.1.2.187                  |
| swarm-cluster-aq5dpyxwpte5-node-0           | default=2605:fd00:4:1000:f816:3eff:fe09:ca6a, 10.1.2.188                  |
| swarm-cluster-aq5dpyxwpte5-node-2           | default=2605:fd00:4:1000:f816:3eff:fe15:8882, 10.1.2.185                  |
| swarm-cluster-aq5dpyxwpte5-primary-master-0 | default=2605:fd00:4:1000:f816:3eff:fe2a:b323, 10.1.2.184, 162.246.156.5   |


Note either the Floating IP (162.246.156.5 in the example above) or the IPv6 address of the Master node. Then do:

Code Block
languagebash
$ export DOCKER_HOST=tcp://162.246.156.5:2375
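

If you noted the IPv6 address instead of the Floating IP, the same variable works with the address wrapped in brackets. A sketch using the master's IPv6 address from the example listing above (substitute your own):

Code Block
languagebash
$ export DOCKER_HOST=tcp://[2605:fd00:4:1000:f816:3eff:fe2a:b323]:2375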


You now have access to a fully functional Docker Swarm cluster:

Code Block
languagebash
$ docker node ls
ID                            HOSTNAME                                                STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jqm3pmupwxlcdy4bodp3ak7n2     swarm-cluster-aq5dpyxwpte5-node-0.novalocal             Ready               Active                                  1.13.1
qseo32fewgiprqqxmf9otg7mb     swarm-cluster-aq5dpyxwpte5-node-1.novalocal             Ready               Active                                  1.13.1
vegmr4raoiyq4j5upkvy9o6pi     swarm-cluster-aq5dpyxwpte5-node-2.novalocal             Ready               Active                                  1.13.1
zpgho4soqk3qk74dbrny3r4es *   swarm-cluster-aq5dpyxwpte5-primary-master-0.novalocal   Ready               Active              Leader              1.13.1


$ docker run hello-world
Unable to find image 'hello-world:latest' locally
Trying to pull repository docker.io/library/hello-world ...
sha256:3e1764d0f546ceac4565547df2ac4907fe46f007ea229fd7ef2718514bcec35d: Pulling from docker.io/library/hello-world
9bb5a5d4561a: Pull complete
Digest: sha256:3e1764d0f546ceac4565547df2ac4907fe46f007ea229fd7ef2718514bcec35d
Status: Downloaded newer image for docker.io/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.


$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
a375d7b710d8        hello-world         "/hello"            25 seconds ago      Exited (0) 23 seconds ago                       confident_kirch
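

To go beyond a single container, you can also schedule a replicated service across the cluster. The following is an illustrative sketch; the service name, replica count, and published port are arbitrary examples:

Code Block
languagebash
# Run three nginx replicas and publish port 8080 on the cluster
$ docker service create --name web --replicas 3 --publish 8080:80 nginx

# See where the replicas were scheduled
$ docker service ps web


Note that a published port is not reachable from the Internet until you open it in your cluster's security group (see Security Groups below).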

Creating a Kubernetes Cluster

First, install kubectl if it isn't already installed.

On the Rapid Access Cloud Dashboard

Coming Soon.

On the Command-Line

Make sure you have the OpenStack command-line tools installed.

Next, install the python-magnumclient package:

Code Block
languagebash
# On Linux
$ sudo pip install python-magnumclient


# On Mac
$ pip install --user python-magnumclient


Next, choose a Cluster Template. You can view the available templates by doing:

Code Block
languagebash
$ source /path/to/your/rc/file
$ openstack coe cluster template list

+--------------------------------------+-----------------------+
| uuid                                 | name                  |
+--------------------------------------+-----------------------+
| e27d8d86-b051-4a8d-98cf-b8cd6afa7df4 | k8s-ipv6-medium       |
| 258b877b-8256-4612-ba2c-1cde512570c8 | k8s-ipv6-large        |
| d397c572-b93b-47ed-8ea0-d824ea90cea9 | k8s-ipv6-xlarge       |
| 6b1af9f0-c073-464b-9552-eb233ab993b2 | k8s-floating-medium   |
| 1e1d58d7-e6b1-4922-a44b-790cc2cc5b66 | k8s-floating-large    |
| fd52ee79-c7a7-43d9-b3a6-fe4080ad7143 | k8s-floating-xlarge   |
+--------------------------------------+-----------------------+

Next, create a cluster:

Code Block
languagebash
$ openstack coe cluster create kubernetes-cluster --cluster-template k8s-floating-medium --master-count 1 --node-count 3 --keypair mykey --docker-volume-size=10
Request to create cluster 54b4d4e5-1952-415b-b2ac-af0cfcdb9af2 accepted


There are a few things to note about this command:

  • The above command will create a cluster of 4 total nodes: 1 master and 3 workers. All will be m1.medium instances.
  • Clusters only support a single master at this time, so you always need to use --master-count 1.
  • --keypair must be an existing Key Pair (see the sketch after this list if you need to create one).
  • --docker-volume-size is required. The example above will create four 10 GB volumes, one attached to each node of your cluster.
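
If you do not already have a Key Pair uploaded, one way to create one from an existing SSH public key is the following (the key path and the name mykey are examples; adjust them to your setup):

Code Block
languagebash
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey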

Note

You must make sure you have the appropriate quota available to create a cluster. In the above example, you would need to be able to create the following:

  • 1 x Floating IP
  • 4 x m1.medium instances
  • 4 x volumes
  • 40 GB of block storage

The above is possible with the Rapid Access Cloud's default quota, provided you have no other resources running.
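
One way to see how much of your quota is currently in use (assuming a reasonably recent python-openstackclient):

Code Block
languagebash
# Shows used vs. maximum values for instances, cores, volumes, and so on
$ openstack limits show --absolute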


You can watch the status of the cluster creation by taking the printed UUID and doing:

Code Block
languagebash
$ openstack coe cluster show 54b4d4e5-1952-415b-b2ac-af0cfcdb9af2

+---------------------+------------------------------------------------------------+
| Field               | Value                                                      |
+---------------------+------------------------------------------------------------+
| status              | CREATE_IN_PROGRESS                                         |
| cluster_template_id | 6b1af9f0-c073-464b-9552-eb233ab993b2                       |
| node_addresses      | []                                                         |
| uuid                | 54b4d4e5-1952-415b-b2ac-af0cfcdb9af2                       |
| stack_id            | dae1f0c4-6118-4ef2-9aab-80a5b6958ef6                       |
| status_reason       | None                                                       |
| created_at          | 2018-07-11T05:56:38+00:00                                  |
| updated_at          | 2018-07-11T05:56:54+00:00                                  |
| coe_version         | None                                                       |
| labels              | {u'cert_manager_api': u'true'}                             |
| faults              |                                                            |
| keypair             | cybera                                                     |
| api_address         | None                                                       |
| master_addresses    | []                                                         |
| create_timeout      | 60                                                         |
| node_count          | 2                                                          |
| discovery_url       | https://discovery.etcd.io/56688682e6f32db3f45bfc6b2b82d1a1 |
| master_count        | 1                                                          |
| container_version   | None                                                       |
| name                | kubernetes-cluster                                         |
| master_flavor_id    | m1.medium                                                  |
| flavor_id           | m1.medium                                                  |
+---------------------+------------------------------------------------------------+


Note

Wait until the status reads CREATE_COMPLETE before proceeding to the next step.
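
If you would rather not re-run the show command by hand while you wait, one option is to poll just the status field (a sketch; assumes the watch utility is installed):

Code Block
languagebash
# Re-checks the cluster status every 30 seconds
$ watch -n 30 "openstack coe cluster show 54b4d4e5-1952-415b-b2ac-af0cfcdb9af2 -c status"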


Next, download the COE authentication information:

Code Block
languagebash
$ mkdir kubernetes-cluster
$ $(openstack coe cluster config kubernetes-cluster --dir kubernetes-cluster)


The above command will generate a Kubernetes configuration file and install the SSL certificates for your cluster to the kubernetes-cluster directory.
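
The $( ... ) wrapper simply evaluates the command's output, which is an export statement for KUBECONFIG. If you prefer to set the variable yourself, the equivalent is roughly the following (a sketch, assuming you are still in the directory where you ran the commands above):

Code Block
languagebash
$ export KUBECONFIG=$(pwd)/kubernetes-cluster/config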

By default, the configuration file is set to communicate with the Kubernetes cluster by its private IP address. Private IPs are not accessible unless you use the RAC VPN. Alternatively, you can change the configuration file to use the Floating IP or IPv6 address of your master node. To do this, first list your cluster's servers to find the master's addresses:


Code Block
languagebash
$ openstack server list -c Name -c Networks
+---------------------------------------------+---------------------------------------------------------------------------+
| Name                                        | Networks                                                                  |
+---------------------------------------------+---------------------------------------------------------------------------+
| kubernetes-cluster-brs4edkzdppp-minion-1    | default=2605:fd00:4:1000:f816:3eff:fe8f:a992, 10.1.2.251                  |
| kubernetes-cluster-brs4edkzdppp-minion-0    | default=2605:fd00:4:1000:f816:3eff:fe2e:546d, 10.1.2.250                  |
| kubernetes-cluster-brs4edkzdppp-master-0    | default=2605:fd00:4:1000:f816:3eff:fe46:f51c, 10.1.2.249, 162.246.156.70   |


Note either the Floating IP (162.246.156.70 in the example above) or the IPv6 address of the Master node. Then edit the configuration file found under kubernetes-cluster/config and find the line that starts with server. Change this line to read:

Code Block
languagebash
server: https://162.246.156.70:6443
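

If you would rather make that change from the shell, something along these lines works (a sketch using GNU sed and the example address above; on macOS use sed -i '' instead of sed -i):

Code Block
languagebash
$ sed -i 's|server:.*|server: https://162.246.156.70:6443|' kubernetes-cluster/config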


You now have access to a fully functional Kubernetes cluster:

Code Block
languagebash
$ kubectl -n kube-system get pods
NAME                                    READY     STATUS    RESTARTS   AGE
coredns-5864cfd79d-86wgm                1/1       Running   0          12m
heapster-68b976dd7-fkckb                1/1       Running   0          12m
kubernetes-dashboard-846b8b6844-5p54j   1/1       Running   0          12m


$ kubectl run nginx --image=nginx --replicas=5
deployment.apps/nginx created


$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
nginx-8586cf59-2g4bp   1/1       Running   0          30s
nginx-8586cf59-cw4x2   1/1       Running   0          30s
nginx-8586cf59-gnz6m   1/1       Running   0          30s
nginx-8586cf59-hr7x4   1/1       Running   0          30s
nginx-8586cf59-mhcv5   1/1       Running   0          30s
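

If you want to reach one of these deployments from outside the cluster, you also need to expose it. An illustrative sketch using a NodePort service (the deployment name matches the nginx example above):

Code Block
languagebash
# Expose the nginx deployment on a NodePort on every node
$ kubectl expose deployment nginx --port=80 --type=NodePort

# Find out which port was assigned
$ kubectl get service nginx


The assigned NodePort is still blocked by your cluster's security group until you add a matching rule; see Security Groups below.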


Security Groups

By default, your COE cluster has a very strict security group configured. This is to prevent security mishaps such as accidentally exposing ElasticSearch, MongoDB, or similar services to the Internet. When you want to make a service publicly accessible, edit your cluster's security groups. You can see them by doing the following:

Code Block
languagebash
$ openstack security group list | grep cluster

| c22cd9ba-6098-42b9-8a36-ff00b48924b4 | swarm-cluster-aq5dpyxwpte5-secgroup_swarm_manager-rzqnqkkw4mfd |
| ee5d6145-373e-42bf-9b5d-57372f8f20a3 | swarm-cluster-aq5dpyxwpte5-secgroup_swarm_node-iyghtvtxxg3l    |

You can then add a rule to the "manager" group using either the command-line or the dashboard, as in the example below.
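
For example, to allow TCP port 8080 from anywhere on the example Swarm cluster's manager group shown above (an illustrative sketch; substitute your own group name and port, and use the master or manager group that matches your cluster):

Code Block
languagebash
$ openstack security group rule create \
    --protocol tcp \
    --dst-port 8080 \
    --remote-ip 0.0.0.0/0 \
    swarm-cluster-aq5dpyxwpte5-secgroup_swarm_manager-rzqnqkkw4mfd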