I'm trying to submit a Spark job in which I set a date parameter in the conf property, and I run it through a script in NiFi. However, when I run the script, I get an error.
The spark-submit code in the script:
aws emr add-steps --cluster-id "$1" --steps '[{"Args":["spark-submit","--deploy-mode","cluster","--jars","s3://tvsc-lumiq-edl/jars/ojdbc7.jar","--executor-memory","10g","--driver-memory","10g","--conf","spark.hadoop.yarn.timeline-service.enabled=false","--conf","currDate='\"$5\"'","--class",'\"$2\"','\"$3\"','\"$4\"'],"Type":"CUSTOM_JAR","ActionOnFailure":"CONTINUE","Jar":"command-runner.jar","Properties":"","Name":"Spark application"}]' --region "$6"
After running it, I get the following error:
ExecuteStreamCommand[id=5b08df5a-1f24-3958-30ca-2e27a6c4becf] Transferring flow file StandardFlowFileRecord[uuid=00f844ee-dbea-42a3-aba3-0edcabfc50a2,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1607082757752-507103, container=default, section=223], offset=29, length=-1],offset=0,name=6414901712887990,size=0] to nonzero status. Executable command /bin/bash ended in an error:
Error parsing parameter '--steps': Invalid JSON:
[{"Args":["spark-submit","--deploy-mode","cluster","--jars","s3://tvsc-lumiq-edl/jars/ojdbc7.jar","--executor-memory","10g","--driver-memory","10g","--conf","spark.hadoop.yarn.timeline-service.enabled=false","--conf","currDate="Fri
Where am I going wrong?
Answered on 2020-12-04 12:49:41
You can validate your JSON with JSONLint, which makes it much easier to see what went wrong.
In your case, you wrapped the last 3 values in single quotes ( ' ) instead of double quotes ( " ).
Your steps JSON should look like this:
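A quick illustration (with a hypothetical date value, not from the original post) of why that quoting fails: the pattern '...'\"$5\"'...' closes the single-quoted string, so $5 expands unquoted; a value containing spaces is then word-split, and the --steps value is cut off at the first space, which is exactly why the error shows the JSON truncated at currDate="Fri.

d="Fri Dec 04"                        # stand-in for the $5 date parameter
printf '<%s>\n' 'currDate='\"$d\"''   # same quoting pattern as the script
# <currDate="Fri>
# <Dec>
# <04">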
[{
"Args": [
"spark-submit",
"--deploy-mode",
"cluster",
"--jars",
"s3://tvsc-lumiq-edl/jars/ojdbc7.jar",
"--executor-memory",
"10g",
"--driver-memory",
"10g",
"--conf",
"spark.hadoop.yarn.timeline-service.enabled=false",
"--conf",
"currDate='\"$5\"'",
"--class",
"\"$2\"",
"\"$3\"",
"\"$4\""
],
"Type": "CUSTOM_JAR",
"ActionOnFailure": "CONTINUE",
"Jar": "command-runner.jar",
"Properties": "",
"Name": "Spark application"
}]
Specifically, these 3 lines:
"\"$2\"",
"\"$3\"",
"\"$4\""
instead of the original:
'\"$2\"',
'\"$3\"',
'\"$4\"'
Source: https://stackoverflow.com/questions/65143114
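As an aside (not part of the answer above), a more robust pattern is to have jq build the steps JSON, so every shell variable is injected as a properly escaped JSON string and the quote-nesting problem disappears entirely. A minimal sketch, assuming jq is installed and the same positional parameters as the original script ($1 cluster id, $2 main class, $3 jar, $4 extra argument, $5 date, $6 region):

# Build the steps JSON with jq; --arg passes shell values in as JSON strings,
# so spaces and special characters in $5 cannot break the JSON.
steps_json=$(jq -n \
  --arg class "$2" --arg jar "$3" --arg extra "$4" --arg currDate "$5" \
  '[{
      Args: [
        "spark-submit", "--deploy-mode", "cluster",
        "--jars", "s3://tvsc-lumiq-edl/jars/ojdbc7.jar",
        "--executor-memory", "10g", "--driver-memory", "10g",
        "--conf", "spark.hadoop.yarn.timeline-service.enabled=false",
        "--conf", ("currDate=" + $currDate),
        "--class", $class, $jar, $extra
      ],
      Type: "CUSTOM_JAR",
      ActionOnFailure: "CONTINUE",
      Jar: "command-runner.jar",
      Properties: "",
      Name: "Spark application"
    }]')

# The JSON is now a single well-formed shell word; quote it normally.
aws emr add-steps --cluster-id "$1" --steps "$steps_json" --region "$6"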