Commit 5d5f8693 authored by Domenico Giordano

merge k8s doc

parent d47e1c96
# Restart the system
You might want to restart the VM or change cluster because something bad happened.
## Restart the VM
1. Go to openstack.cern.ch, select your instance and click on hot restart
1. Mount the two volumes
```
mount /dev/vdb /opt/data_repo_volume
mount /dev/vdc /var/lib/docker
```
1. Start docker
```
systemctl start docker
```
1. Restart eos
```
umount /eos/project
systemctl restart autofs
```
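After these steps a quick sanity check can confirm that the volumes, docker and the EOS mount are back. This is only a sketch, using the mount points from the steps above:
```
# Verify the data volumes are mounted
df -h /opt/data_repo_volume /var/lib/docker
# Verify docker is running again
systemctl is-active docker
# Verify the EOS automount works after the autofs restart
ls /eos/project
```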
## Attach a brand new cluster
1. Create cluster on openstack.cern.ch
1. Go to your VM and make sure you are able to use the kubectl and openstack commands (install the Python libraries below).
Check by trying this command to see the list of all clusters in the project:
```
pip install python-openstackclient
pip install python-magnumclient
openstack coe cluster list
```
1. Access the project:
```
source script_downloadable_from_web_ui_openstack.sh
```
1. Create the config file for accessing your specific cluster with kubectl and export it in the KUBECONFIG variable:
```
openstack coe cluster config cluster-name
export KUBECONFIG=<absolute path to your newly generated config file>
```
Check by trying this command to see all the resources in the cluster (you should see the ones there by default in every cluster):
```
kubectl get all -A
```
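To double-check that KUBECONFIG points at the right cluster, the node list can be compared with what Magnum reports; `cluster-name` is the same placeholder as above:
```
# Nodes as seen by kubectl (should match the node count of the cluster)
kubectl get nodes -o wide
# Cluster details as seen by OpenStack Magnum
openstack coe cluster show cluster-name
```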
1. Install secrets with yaml file **user_secret.yaml**
```
apiVersion: v1
...
```
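The actual content of user_secret.yaml is not shown here. As an illustration only, a generic Kubernetes secret can be generated and applied as below; the secret name, key and value are placeholders, not the ones expected by this setup:
```
# Generate a generic secret manifest (placeholder name/key/value) and save it
kubectl create secret generic user-secret \
  --from-literal=password=CHANGE_ME \
  --dry-run=client -o yaml > user_secret.yaml
# Install it in the cluster
kubectl apply -f user_secret.yaml
# Confirm it exists
kubectl get secrets
```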
1. Install eosxd in the cluster with helm:
```
helm install myeos cern/eosxd
```
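The `cern/eosxd` chart has to come from a chart repository configured in helm. If it is not already configured, something along these lines should make the install work; the repository URL is an assumption here, verify it against the registry documentation:
```
# Add the CERN chart repository (URL is an assumption, check it for your setup)
helm repo add cern https://registry.cern.ch/chartrepo/cern
helm repo update
# Install the eosxd chart under the release name "myeos"
helm install myeos cern/eosxd
```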
## Monitoring
# How to make my Airflow schedule tasks on the cluster?
Usually the Airflow machine spawns new tasks as containers on the same VM where Airflow itself is running. Unfortunately this exposes the VM to being saturated by the jobs it schedules, and it does not scale at all.
A solution is to spawn the new containers directly in the Kubernetes cluster via the kubectl command, off-loading the jobs to the cluster.
# Requirements
1. Install Openstack packages:
```
pip install python-openstackclient
pip install python-magnumclient
```
1. Create an executable script **start_developer_project.sh** so that, once executed, it connects you to the project containing your cluster. It contains the OpenStack code that you can download from the top right corner of the OpenStack web page: Tools > OpenStack RC v3.
```
vim start_developer_project.sh
(paste the content of OpenStack RC v3, then save and close)
chmod 711 start_developer_project.sh
```
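For reference, the downloaded RC v3 script is essentially a set of environment variable exports along the lines of the sketch below; the exact URL, project and username come from your own download, the values here are placeholders:
```
# Placeholder values - use the real RC v3 file downloaded from the OpenStack web UI
export OS_AUTH_URL=https://keystone.cern.ch/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME="your-project-name"
export OS_USERNAME="your-username"
# The real RC file prompts for the password instead of hard-coding it
read -s -p "OpenStack password: " OS_PASSWORD
export OS_PASSWORD
```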
1. To access the project containing the clusters call the following:
```
source start_developer_project.sh
```
Check by trying this command to see the list of all clusters in the project:
```
openstack coe cluster list
```
1. Create the config file for accessing your specific cluster with kubectl:
```
openstack coe cluster config cluster-name
export KUBECONFIG=<absolute path to your newly generated config file>
```
Check by trying this command to see all the resources in the cluster (you should see the ones there by default in every cluster):
```
kubectl get all -A
```
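Once kubectl works against the cluster, off-loading a job from the Airflow VM boils down to spawning the task as a container in the cluster. The sketch below only illustrates that idea; the pod name, image and command are placeholders rather than the actual pipeline tasks:
```
# Run one task as a pod in the cluster instead of on the Airflow VM
# (pod name, image and command are placeholders)
kubectl run my-airflow-task \
  --image=registry.cern.ch/my-project/my-task:latest \
  --restart=Never \
  -- python run_task.py
# Follow the task output and clean up when it is done
kubectl logs -f pod/my-airflow-task
kubectl delete pod my-airflow-task
```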