I'm running an Apache Spark Java job on Google Dataproc. The job creates a SparkContext, analyzes logs, and then closes the SparkContext; it then creates another SparkContext for the next set of analyses. This repeats 50-60 times. Sometimes I repeatedly get the error "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources". According to answers on SO, this happens when there aren't enough resources available at the time a job is launched, but in my case it usually happens midway through the job.
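
For reference, here is a minimal sketch of the pattern I described, assuming one analysis round per context (the app name, bucket path, and filter are placeholders, not my actual job code):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class RepeatedAnalysisJob {
    public static void main(String[] args) {
        // 50-60 rounds; each round gets a fresh SparkContext
        for (int i = 0; i < 60; i++) {
            SparkConf conf = new SparkConf().setAppName("log-analysis-" + i);
            JavaSparkContext sc = new JavaSparkContext(conf);
            try {
                // placeholder for the real log analysis done in this round
                long errors = sc.textFile("gs://my-bucket/logs/round-" + i + "/*")
                               .filter(line -> line.contains("ERROR"))
                               .count();
                System.out.println("Round " + i + ": " + errors + " error lines");
            } finally {
                sc.stop(); // shut this context down before creating the next one
            }
        }
    }
}
```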