Hi all @multivac-dsl,
Up until now we were using Apache Livy to connect interactively to the Apache Spark cluster. This meant missing out on some Apache Zeppelin features such as [ZeppelinContext](https://zeppelin.apache.org/docs/latest/interpreter/spark.html#zeppelincontext),
which lets you exchange data between different interpreters (e.g. create a Map from Spark results in the Angular interpreter).
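For example, here is a rough sketch of the kind of thing this enables; the table, column, and variable names below are just placeholders:

```scala
%spark.spark
// Rough sketch: expose Spark results to other interpreters via ZeppelinContext.
// "mydb.events" and the column names are placeholders.
val countsByCountry: Map[String, Long] = spark
  .sql("SELECT country, count(*) AS cnt FROM mydb.events GROUP BY country")
  .collect()
  .map(row => row.getString(0) -> row.getLong(1))
  .toMap

z.put("countsByCountry", countsByCountry)          // readable via z.get("countsByCountry") in other interpreters
z.angularBind("countsByCountry", countsByCountry)  // or as {{countsByCountry}} in an %angular paragraph
```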
From now on we are going to use the native Spark interpreter. This change doesn’t require you to change anything in your code!
Please make sure you have Spark selected in the interpreter settings.
Then you will get the multiple-output feature (before, it was only possible to see the last output).
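For instance, in a paragraph like the quick sketch below, each statement’s result is echoed:

```scala
%spark.spark
// Each statement below produces its own output,
// instead of only the last one being shown.
val xs = spark.range(1, 6)   // echoes the Dataset definition
xs.count()                   // echoes: res..: Long = 5
xs.show()                    // prints the table as well
```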
NOTE: You still have access to all the sub-interpreters such as Python, SQL, etc.
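For example, a SQL paragraph could look like the sketch below (the table name is only a placeholder):

```sql
%spark.sql
-- Runs against the same SparkSession as your Scala paragraphs.
-- "mydb.events" is a placeholder table name.
SELECT country, count(*) AS cnt
FROM mydb.events
GROUP BY country
ORDER BY cnt DESC
LIMIT 10
```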
Or you can read the same Hive table directly from Spark code.
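A minimal sketch, assuming Hive support is enabled for the interpreter and using the same placeholder table name:

```scala
%spark.spark
// Read the placeholder Hive table through the SparkSession.
val events = spark.table("mydb.events")
events.printSchema()
events.show(10)
```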
You can also follow the same multi-line style you have in IntelliJ.
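For example, leading-dot method chains split across lines work as you would expect; the table and column names below are made up:

```scala
%spark.spark
import org.apache.spark.sql.functions.col

// Multi-line method chains, written exactly as you would in IntelliJ.
val activePerCountry = spark.table("mydb.events")   // placeholder table
  .filter(col("status") === "active")
  .groupBy(col("country"))
  .count()
  .orderBy(col("count").desc)

activePerCountry.show()
```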
Please let me know if you have any questions. I hope this solves some of the crashes we experienced while using Apache Livy.
As a quick reference, the interpreter bindings are:
Scala: %spark.spark
SQL: %spark.sql
Best,
Maziyar