sparklyr does not create executors

I am trying to create a Spark connection with the following parameters:

library(sparklyr)

conf <- spark_config()
conf$`sparklyr.cores.local` <- 6
conf$`sparklyr.shell.driver-memory` <- "16G"
conf$`spark.executor.cores` <- 2
conf$`spark.executor.memory` <- "2G"
conf$`sparklyr.verbose` <- TRUE
conf$`sparklyr.log.console` <- TRUE
conf$`spark.executor.instances` <- 4
conf$`spark.dynamicAllocation.enabled` <- FALSE

sc <- spark_connect(master = "local", config = conf, log = "console", version = "3.0.0")
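
Side note: as far as I understand, with master = "local" sparklyr derives the local[N] master string from sparklyr.cores.local (although spark_session_config() below reports local[16], not local[6]). A hypothetical variant of the call above that pins the core count in the master URL itself would be:

# Hypothetical variant: fix the core count in the master string
# directly rather than via sparklyr.cores.local.
sc <- spark_connect(master = "local[6]", config = conf, version = "3.0.0")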

It does connect, and spark_session_config(sc) shows the settings correctly:

$spark.executor.instances
[1] "4"

$spark.executor.cores
[1] "2"

$spark.driver.memory
[1] "16G"

$spark.master
[1] "local[16]"

$spark.sql.shuffle.partitions
[1] "16"

$spark.sql.legacy.utcTimestampFunc.enabled
[1] "true"

$spark.dynamicAllocation.enabled
[1] "false"

$spark.driver.port
[1] "65404"

$spark.submit.deployMode
[1] "client"

$spark.executor.id
[1] "driver"

$spark.jars
[1] "file:/C:/Users/B2623385/Documents/R/win-library/3.6/sparklyr/java/sparklyr-3.0-2.12.jar"

$spark.submit.pyfiles
[1] ""

$spark.app.id
[1] "local-1600432415127"

$spark.env.SPARK_LOCAL_IP
[1] "127.0.0.1"

$spark.sql.catalogImplementation
[1] "hive"

$spark.executor.memory
[1] "2G"

$spark.spark.port.maxRetries
[1] "128"

$spark.app.name
[1] "sparklyr"

$spark.home
[1] "C:\\Users\\B2623385\\AppData\\Local\\spark\\spark-3.0.0-bin-hadoop2.7"

$spark.driver.host
[1] "127.0.0.1"

However, when I go to http://127.0.0.1:4040/executors/, it shows that only the driver executor is running:

[Screenshot of the Executors tab: a single entry with executor ID "driver" and no worker executors]
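
For reference, the same thing can be checked from R by invoking the JVM SparkContext directly (a sketch; I am assuming getExecutorMemoryStatus mirrors what the Executors tab lists):

ctx <- spark_context(sc)

# One map entry per executor known to the context; with only the
# driver running, the size should come back as 1.
status <- invoke(ctx, "getExecutorMemoryStatus")
invoke(status, "size")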

I have tried switching Spark versions and setting up a bare-bones environment, but I still run into the same problem. What am I missing?

My end goal is to copy_to() a data frame into the Spark connection, but R just keeps running, and at http://127.0.0.1:4040/executors/ it looks as if nothing is happening.
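
For completeness, a minimal sketch of the copy_to() step (mtcars and the table name "mtcars_spark" are stand-ins for my real, much larger data frame):

library(dplyr)

# Copy a local data frame into Spark and get back a dplyr reference;
# with my real data, this is the call that never returns.
tbl <- copy_to(sc, mtcars, name = "mtcars_spark", overwrite = TRUE)
src_tbls(sc)  # should list "mtcars_spark"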
