diff --git a/README.md b/README.md
index 31b743c6fd82004e3ea6b7756361b32393e3f43b..3d8b73f61a761de1d8adc99ad647d485020b8c9b 100644
--- a/README.md
+++ b/README.md
@@ -60,28 +60,28 @@ $ chmod +x sparkctl
 $ ./sparkctl --help
 ```
 
-**Managing simple application**
+###### Managing a simple application
 
-Edit yaml file with SparkApplication. 
+Check the SparkApplication YAML file:
 
 ```bash
-$ vi ./examples/spark-pi.yaml
+$ cat ./examples/spark-pi.yaml
 ```
 
 The most important sections of your SparkApplication are:
 
 - Application name
-    ```bash
+    ```yaml
     metadata:
       name: spark-pi
     ```
 - Application file
-    ```bash
+    ```yaml
     spec:
       mainApplicationFile: "local:///opt/spark/examples/jars/spark-service-examples.jar"
     ```
 - Application main class
-    ```bash
+    ```yaml
     spec:
       mainClass: ch.cern.sparkrootapplications.examples.SparkPi
     ```
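+
+Taken together, these fields form the skeleton of a complete manifest. Below is a minimal sketch (assuming the operator's `v1alpha1` API group; the `type`, `mode`, and `image` values are illustrative placeholders, not taken from this repository):
+
+```yaml
+apiVersion: sparkoperator.k8s.io/v1alpha1
+kind: SparkApplication
+metadata:
+  name: spark-pi
+spec:
+  type: Scala                     # assumed: a Scala application
+  mode: cluster                   # assumed: cluster deploy mode
+  image: <your-spark-image>       # placeholder: the Spark image used by your cluster
+  mainClass: ch.cern.sparkrootapplications.examples.SparkPi
+  mainApplicationFile: "local:///opt/spark/examples/jars/spark-service-examples.jar"
+```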
@@ -98,6 +98,12 @@ To check application status
 $ ./sparkctl status spark-pi
 ```
 
+To get application logs
+
+```bash
+$ ./sparkctl log spark-pi
+```
+
-Alternatively, to check application status (or check created pods and their status)
+Alternatively, to check the application status (or inspect the created pods and their status)
 
 ```
@@ -110,12 +116,6 @@ or
 $ kubectl logs spark-pi-1528991055721-driver
 ```
 
-To get application logs
-
-```bash
-$ ./sparkctl log spark-pi
-```
-
-To delete application
+To delete the application
 
 ```bash
@@ -128,7 +128,7 @@ please visit [sparkctl user-guide](https://github.com/cerndb/spark-on-k8s-operat
-**Creating application with local dependencies**
+**Creating an application with local dependencies**
 
-In order to submit application with local dependencies, and have your spark-job fully-resilient to failures, 
-they need to be staged at e.g. S3.
+In order to submit an application with local dependencies and have your Spark job fully resilient to failures,
+the dependencies need to be staged externally, e.g. in S3. For more information, please check [sparkctl local dependencies](https://github.com/cerndb/spark-on-k8s-operator/tree/v1alpha1/sparkctl).
 
-You would need to create an authentication file with cretentials on your filesystem:
+You would need to create an authentication file with credentials on your filesystem:
 
@@ -185,8 +185,7 @@ $ vi ./examples/scalability-test-eos-datasets.csv
 ```
 ```
-Submit your application with custom hadoop config directory to authenticate EOS
+# Submit your application with a custom Hadoop config directory to authenticate against EOS
-$ export HADOOP_CONF_DIR=~/hadoop-conf-dir
-$ ./sparkctl create ./examples/scalability-test-eos.yaml --upload-to s3a://spark-on-k8s-cluster --override --endpoint-url "https://cs3.cern.ch"
+$ HADOOP_CONF_DIR=~/hadoop-conf-dir ./sparkctl create ./examples/scalability-test-eos.yaml --upload-to s3a://spark-on-k8s-cluster --override --endpoint-url "https://cs3.cern.ch"
 ```
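+
+Setting `HADOOP_CONF_DIR` inline scopes it to this single `sparkctl` invocation, so the override does not linger in your shell session the way a separate `export` would.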
 
-### Building examples jars
+### Building the examples JARs