diff --git a/README.md b/README.md
index 0e95fc8961c590b2295e9f6afe84856536519a94..1eb5a21c4f9766a4c7986731a08c043bf7fc6fe5 100644
--- a/README.md
+++ b/README.md
@@ -6,6 +6,56 @@ In order to be used in other application.
 
 ## Installation
 
+For now, it only works on Linux and Windows Professional,
+and it has not been tested on macOS.
+
+To run it, you have to download CERN's Spark bundle
+and set up a working NXCALS environment.
+See the <a href="https://nxcals-docs.web.cern.ch">NXCALS documentation</a>.
+
+You then need to export two environment variables, for Spark and Python.
+
+On Linux (Ubuntu):
+```bash
+export SPARK_HOME=<location_of_the_bundle>/spark-<version>-bin-hadoop<version>
+export PYTHONPATH=$SPARK_HOME/nxcals-python3-env/lib/python3.6/site-packages
+```
+On Windows, from a shell with administrator rights:
+```shell script
+setx SPARK_HOME <location_of_the_bundle>/spark-<version>-bin-hadoop<version>
+setx PYTHONPATH %SPARK_HOME%/nxcals-python3-env/lib/python3.6/site-packages
+```
+Note that `setx` only affects new sessions, so open a new shell between the
+two commands (or replace `%SPARK_HOME%` with the full path).
+
+Then install the pyspark package if it is not already installed:
+```shell script
+pip install pyspark
+```
+
+You may have to replace the nxcals-jars directory with the one found in the
+bundle at `<bundle_path>/nxcals-jars`, as well as the
+nxcals-hadoop-pro-config-<version>.jar found at
+`<bundle_path>/jars/nxcals-hadoop-pro-config-<version>.jar`.
+
+All links and directory paths were up to date at the time of writing;
+there may have been updates since.
+
+After that, you need to create a `pyconf.py` file holding the MySQL
+configuration. The template is:
+```python
+mysql = {
+    'host': '',
+    'port': '',
+    'user': '',
+    'passwd': '',
+    'database': '',
+}
+```
+
+You will need to request the database information and access rights in order
+to complete this configuration file.
+
+You can request them by creating an issue on this GitLab repository or by
+sending an email to one of the project managers, stating your name, section,
+and why you want to use the project.
 
 ## Usage
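+
+Once the installation above is done, a quick way to check that `SPARK_HOME`,
+`PYTHONPATH` and the pyspark package are picked up correctly is to open a
+Python shell and create a plain Spark session. This is only a minimal sketch:
+the application name is a placeholder, and the NXCALS bundle may provide its
+own, pre-configured way of starting Spark, so adapt it to your setup.
+```python
+from pyspark.sql import SparkSession
+
+# Build a plain local Spark session just to confirm the environment works.
+spark = (
+    SparkSession.builder
+    .appName("installation-check")  # placeholder name, not used by the project
+    .getOrCreate()
+)
+print(spark.version)
+
+spark.stop()
+```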
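+
+In the same way, the MySQL settings from `pyconf.py` can be checked with any
+MySQL client. The sketch below assumes PyMySQL (`pip install pymysql`), which
+is not necessarily the driver the project uses internally, so treat it only as
+a way to validate the configuration file.
+```python
+import pymysql
+
+from pyconf import mysql  # the configuration dict created during installation
+
+# Open a connection with the values from the template and run a trivial query.
+connection = pymysql.connect(
+    host=mysql['host'],
+    port=int(mysql['port']),  # the template stores the port as a string
+    user=mysql['user'],
+    password=mysql['passwd'],
+    database=mysql['database'],
+)
+with connection.cursor() as cursor:
+    cursor.execute("SELECT 1")
+    print(cursor.fetchone())
+connection.close()
+```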