Question: with Spark on Kubernetes, the --conf spark.kubernetes.driver.podTemplateFile setting does not take effect, but spark.kubernetes.executor.podTemplateFile does.
Spark version: 3.2.0
Kubernetes version: 1.16
Submit command:
bin/spark-submit \
--master k8s://https://10.x.x.x:6443 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.streaming.HdfsWordCount \
--conf spark.executor.instances=1 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark1 \
--conf spark.kubernetes.container.image=spark-example:v5 \
--conf spark.kubernetes.driver.podTemplateFile=/opt/spark/host_add.yaml \
--conf spark.kubernetes.executor.podTemplateFile=/opt/spark/host_add.yaml \
local:///opt/spark/examples/jars/spark-examples_2.12-3.2.0.jar hdfs://x.x.x.xx:8020/jars/wordcount.txt
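One thing worth ruling out first (an assumption about the setup, not something the question confirms): in cluster mode, spark-submit builds the driver pod spec on the submission machine, so the driver template path is resolved on the client, not only inside the container image. If /opt/spark/host_add.yaml exists in the image but not on the host running spark-submit, that alone can break the driver template. A minimal readability check, with check_template as a hypothetical helper:

```shell
# Sketch: verify the driver pod template is readable on the submission
# host, since spark-submit resolves spark.kubernetes.driver.podTemplateFile
# locally when it builds the driver pod spec in cluster mode.
# check_template is a hypothetical helper, not part of Spark.
check_template() {
    if [ -r "$1" ]; then
        echo "readable: $1"
    else
        echo "NOT readable on this host: $1"
    fi
}

# Path taken from the spark-submit command above:
check_template /opt/spark/host_add.yaml
```

If the file is only present inside the image, copy it to the submission host (or point the driver setting at a host-local path) and resubmit.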
The YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "10.x.xx.xx"
    hostnames:
    - "cdh104"
  containers:
  - name: cat-hosts
    image: spark-example:v5
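Since the executor pod did pick up hostAliases, the file on disk presumably parses, but subtle pod-template slips (a missing space after a YAML list dash, or a typo in restartPolicy) are easy to make and worth linting for. A grep-based sketch with lint_template as a hypothetical helper; the authoritative client-side validator is kubectl apply --dry-run -f file (or --dry-run=client on kubectl >= 1.18):

```shell
# Hypothetical lint helper for two easy-to-make pod template slips:
# a YAML list item needs a space after "-" ("- ip:", not "-ip:"), and
# restartPolicy must be exactly Always, OnFailure, or Never.
lint_template() {
    bad=0
    # Flag list items with no space after the dash, e.g. "-ip:".
    if grep -nE '^[[:space:]]*-[^ -]' "$1"; then
        echo 'list item missing a space after "-"'
        bad=1
    fi
    # Flag a restartPolicy value outside the three valid spellings.
    if grep -q 'restartPolicy:' "$1" && \
       ! grep -qE 'restartPolicy:[[:space:]]*(Always|OnFailure|Never)[[:space:]]*$' "$1"; then
        echo 'restartPolicy is not Always/OnFailure/Never'
        bad=1
    fi
    return $bad
}
```

lint_template /opt/spark/host_add.yaml prints what it found and returns non-zero if either slip is present in the real file.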
Inspecting the driver and executor pod YAML separately:
kubectl get po spark-pi-1640312803968-driver -o yaml (not applied: no hostAliases section)
kubectl get po hdfswordcount-ad53e97dea43281c-exec-1 -o yaml (applied: hostAliases section is present)
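Rather than eyeballing two full YAML dumps, the field in question can be pulled out directly with a jsonpath query; host_aliases_of below is a hypothetical helper around kubectl:

```shell
# Sketch: query only the hostAliases field of a pod. An empty result
# means the pod has no hostAliases, i.e. the template field was dropped.
host_aliases_of() {
    kubectl get pod "$1" -o jsonpath='{.spec.hostAliases}'
}

# Against the cluster above (pod names from the kubectl commands shown):
#   host_aliases_of spark-pi-1640312803968-driver
#   host_aliases_of hdfswordcount-ad53e97dea43281c-exec-1
```

An empty result for the driver alongside a populated list for the executor confirms the asymmetry precisely.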
A similar question:
https://stackoverflow.com/questions/58169780/pod-template-for-specifying-tolerations-when-running-spark-on-kubernetes
I tried both Spark 3.0 and 3.2; in both versions the driver pod template was not applied.