I’m in the process of adding 30 servers to the Multivac Hadoop cluster. This expansion adds 240 CPU cores and 480 GB of memory to the currently available resources, which also means you will be able to request larger Spark executors for your jobs.
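As a sketch of what the extra capacity allows, here is a hypothetical `spark-submit` invocation requesting larger executors on YARN; the resource values and the application name `my_job.py` are illustrative, not cluster policy:

```shell
# Illustrative only: submit a Spark application to YARN with larger
# executors than the old capacity would have allowed. Adjust the
# numbers to your job's needs and the cluster's queue limits.
spark-submit \
  --master yarn \
  --num-executors 20 \
  --executor-cores 8 \
  --executor-memory 16g \
  my_job.py
```

Actual limits depend on the YARN container maximums configured after the expansion, so check with `yarn node -list` or the ResourceManager UI before sizing executors this aggressively.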
During this process the cluster will not be accessible for any Hadoop (YARN, HDFS, Hue, Zeppelin, etc.) or Spark related operations.