Commit a73b6073 authored by Vasileios Dimakopoulos

Add fixes to README

parent 9fc4204d
Branch: example_fixes
Merge request !1: Add fixes to README
@@ -60,28 +60,28 @@ $ chmod +x sparkctl
 $ ./sparkctl --help
 ```
-**Managing simple application**
+###### Managing a simple application
-Edit yaml file with SparkApplication.
+Check the YAML file of the SparkApplication.
-```bash
-$ vi ./examples/spark-pi.yaml
+```
+$ cat ./examples/spark-pi.yaml
 ```
 The most important sections of your SparkApplication are (a combined sketch follows this list):
 - Application name
-```bash
+```
 metadata:
   name: spark-pi
 ```
 - Application file
-```bash
+```
 spec:
   mainApplicationFile: "local:///opt/spark/examples/jars/spark-service-examples.jar"
 ```
 - Application main class
-```bash
+```
 spec:
   mainClass: ch.cern.sparkrootapplications.examples.SparkPi
 ```
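Put together, a minimal manifest combining these three sections might look like the following sketch; the `apiVersion`, `kind`, and exact field nesting are assumptions based on the operator's `v1alpha1` API rather than content shown above.
```
# Sketch of a minimal SparkApplication manifest.
# apiVersion/kind are assumed from the operator's v1alpha1 API.
apiVersion: sparkoperator.k8s.io/v1alpha1
kind: SparkApplication
metadata:
  name: spark-pi
spec:
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-service-examples.jar"
  mainClass: ch.cern.sparkrootapplications.examples.SparkPi
```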
@@ -98,6 +98,12 @@ To check application status
 $ ./sparkctl status spark-pi
 ```
+To get application logs
+```bash
+$ ./sparkctl log spark-pi
+```
 Alternatively, to check application status (or check created pods and their status)
 ```
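To inspect the created pods directly, a typical kubectl sequence is sketched below; it assumes the default namespace, and the driver pod name pattern (`spark-pi-<timestamp>-driver`) is taken from the example elsewhere in this README.
```bash
# List pods belonging to the spark-pi application (default namespace assumed)
$ kubectl get pods | grep spark-pi

# Inspect one pod in detail; take the exact name from the listing above
$ kubectl describe pod spark-pi-1528991055721-driver
```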
@@ -110,12 +116,6 @@ or
 $ kubectl logs spark-pi-1528991055721-driver
 ```
-To get application logs
-```bash
-$ ./sparkctl log spark-pi
-```
 To delete application
 ```bash
@@ -128,7 +128,7 @@ please visit [sparkctl user-guide](https://github.com/cerndb/spark-on-k8s-operator/tree/v1alpha1/sparkctl)
 **Creating an application with local dependencies**
 In order to submit an application with local dependencies, and have your Spark job fully resilient to failures,
-they need to be staged at e.g. S3.
+they need to be staged at e.g. S3. For more information, please check [sparkctl local dependencies](https://github.com/cerndb/spark-on-k8s-operator/tree/v1alpha1/sparkctl)
 You would need to create an authentication file with credentials on your filesystem:
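As an illustration only: S3A access is often configured through an AWS-format credentials file, and the hypothetical sketch below assumes that convention; the path and key names are placeholders, not documented sparkctl requirements.
```bash
# Hypothetical illustration: AWS-format credentials file commonly used for S3A access.
# The actual file sparkctl expects is not shown in this excerpt.
$ cat ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key>
aws_secret_access_key = <your-secret-key>
```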
@@ -185,8 +185,7 @@ $ vi ./examples/scalability-test-eos-datasets.csv
 ```
 Submit your application with a custom Hadoop config directory to authenticate to EOS
 ```
-$ export HADOOP_CONF_DIR=~/hadoop-conf-dir
-$ ./sparkctl create ./examples/scalability-test-eos.yaml --upload-to s3a://spark-on-k8s-cluster --override --endpoint-url "https://cs3.cern.ch"
+$ HADOOP_CONF_DIR=~/hadoop-conf-dir ./sparkctl create ./examples/scalability-test-eos.yaml --upload-to s3a://spark-on-k8s-cluster --override --endpoint-url "https://cs3.cern.ch"
 ```
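A note on the inline form: `VAR=value command` sets the variable for that single invocation only, while `export` keeps `HADOOP_CONF_DIR` set for the rest of the shell session. A small bash sketch of the difference:
```bash
# Exported: HADOOP_CONF_DIR remains set for every later command in this shell
$ export HADOOP_CONF_DIR=~/hadoop-conf-dir
$ ./sparkctl create ./examples/scalability-test-eos.yaml --upload-to s3a://spark-on-k8s-cluster --override --endpoint-url "https://cs3.cern.ch"

# Inline: the variable is visible to this one sparkctl invocation only
$ HADOOP_CONF_DIR=~/hadoop-conf-dir ./sparkctl create ./examples/scalability-test-eos.yaml --upload-to s3a://spark-on-k8s-cluster --override --endpoint-url "https://cs3.cern.ch"
```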
### Building examples jars