Spark Values
The following settings are added to the Spark submission in alpine.conf, where you can edit them as needed. They define overall defaults; any Spark tuning you define at the operator level takes precedence.
Name | Default value | Notes |
---|---|---|
spark.yarn.api.timeout | 5000 | |
spark.yarn.report.interval | 2000 | |
spark.executor.extraJavaOpts | | |
spark.rdd.compress | true | |
spark.io.compression.codec | "org.apache.spark.io.SnappyCompressionCodec" | |
spark.eventLog.enabled | false | |
spark.dynamicAllocation.enabled | true | Dynamic allocation can be disabled at the operator level. Regardless of this value, dynamic allocation is used only if your cluster is correctly configured to support it. |
spark.driver.maxResultSize | 2g | A Spark value that sets the maximum total size of results that can be returned to the driver. |
spark.driver.extraJavaOptions | -XX:MaxPermSize=256m | |
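As an illustration, the sketch below shows how these defaults might look as a HOCON-style fragment in alpine.conf. The `spark.*` keys and values come from the table above, but the enclosing block name and nesting are assumptions; check the alpine.conf shipped with your installation for the actual structure.

```
# Hypothetical alpine.conf fragment. The enclosing "spark" block is an
# assumption for illustration; only the spark.* keys and their defaults
# are taken from the table above.
spark {
  spark.yarn.api.timeout = 5000
  spark.yarn.report.interval = 2000
  spark.rdd.compress = true
  spark.io.compression.codec = "org.apache.spark.io.SnappyCompressionCodec"
  spark.eventLog.enabled = false
  spark.dynamicAllocation.enabled = true
  spark.driver.maxResultSize = "2g"
  spark.driver.extraJavaOptions = "-XX:MaxPermSize=256m"
}
```

Keep in mind that a value set at the operator level (for example, raising spark.driver.maxResultSize for a single operator) overrides whatever is configured here.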
Copyright © 2021. Cloud Software Group, Inc. All Rights Reserved.