In order to submit an application with local dependencies and have your Spark job fully resilient to failures, the dependencies need to be staged at an external storage location, e.g. S3. For more information, please check [sparkctl local dependencies](https://github.com/cerndb/spark-on-k8s-operator/tree/v1alpha1/sparkctl).
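As an illustration, local dependencies can be staged automatically at submission time. This is a minimal sketch, assuming sparkctl's `--upload-to` staging flag; the bucket name and YAML path are hypothetical placeholders:

```
# Upload local dependencies (file:// paths referenced in the spec) to the
# given bucket, rewriting their URIs in the submitted SparkApplication:
$ sparkctl create ./examples/spark-app.yaml --upload-to s3a://my-bucket
```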
You will need to create an authentication file with credentials on your filesystem:
...
...
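The exact contents are elided above and depend on your storage provider. As a purely hypothetical illustration for S3 staging, an AWS-style credentials file (commonly `~/.aws/credentials`, picked up by the default AWS credential chain) could look like this; the profile name and keys are placeholders:

```
# Placeholder S3 credentials; substitute the values issued for your bucket.
[default]
aws_access_key_id = <ACCESS_KEY>
aws_secret_access_key = <SECRET_KEY>
```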
Prepare the list of EOS datasets for the scalability test:

```
$ vi ./examples/scalability-test-eos-datasets.csv
```
Submit your application with a custom Hadoop config directory to authenticate with EOS:
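A minimal sketch of one way to do this, assuming the operator's `hadoopConfigMap` spec field and hypothetical names (`hadoop-conf`, a local `./hadoop-conf-dir` holding the EOS client configuration):

```
# Create a ConfigMap from a local Hadoop configuration directory
# (hypothetical path; it would contain e.g. core-site.xml with the
# settings needed to authenticate against EOS):
$ kubectl create configmap hadoop-conf --from-file=./hadoop-conf-dir

# Reference it from the SparkApplication spec via `hadoopConfigMap: hadoop-conf`,
# which mounts the files and sets HADOOP_CONF_DIR in the driver and executors,
# then submit as usual:
$ sparkctl create ./examples/spark-app.yaml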