在安装"hadoop“之后,"pyspark”不起作用,但是“火花-shell”仍然有效。为什么?
EN

Stack Overflow user
Asked 2022-11-21 06:57:24
Answers: 2 · Views: 71 · Followers: 0 · Votes: 0

I have Spark 3.3.1 installed, and it previously ran fine with both the spark-shell and pyspark commands. However, after I installed Hadoop 3.3.1, the pyspark command no longer seems to work properly. This is the result of running it:

C:\Users\A>pyspark2 --num-executors 4 --executor-memory 1g
[I 2022-11-20 22:36:09.100 LabApp] JupyterLab extension loaded from C:\Users\A\AppData\Local\Programs\Python\Python311\Lib\site-packages\jupyterlab
[I 2022-11-20 22:36:09.100 LabApp] JupyterLab application directory is C:\Users\A\AppData\Local\Programs\Python\Python311\share\jupyter\lab
[I 22:36:09.107 NotebookApp] Serving notebooks from local directory: C:\Users\A
[I 22:36:09.107 NotebookApp] Jupyter Notebook 6.5.2 is running at:
[I 22:36:09.107 NotebookApp] http://localhost:8888/?token=0fca9f0378976c7af19886970c9e801ac27a8d1a209528db
[I 22:36:09.108 NotebookApp]  or http://127.0.0.1:8888/?token=0fca9f0378976c7af19886970c9e801ac27a8d1a209528db
[I 22:36:09.108 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 22:36:09.189 NotebookApp]

    To access the notebook, open this file in a browser:
        file:///C:/Users/A/AppData/Roaming/jupyter/runtime/nbserver-8328-open.html
    Or copy and paste one of these URLs:
        http://localhost:8888/?token=0fca9f0378976c7af19886970c9e801ac27a8d1a209528db
     or http://127.0.0.1:8888/?token=0fca9f0378976c7af19886970c9e801ac27a8d1a209528db
0.01s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.

It opens a Jupyter notebook, but the Spark logo is not shown, and Python is no longer available in CMD the way it was before. However, spark-shell still works, like this:

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://168.150.8.52:4040
Spark context available as 'sc' (master = local[*], app id = local-1669062477403).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.3.1
      /_/

Using Scala version 2.12.15 (OpenJDK 64-Bit Server VM, Java 11.0.16.1)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 22/11/21 12:28:12 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped


scala>

Edit: these are all the system variables and paths relevant to my setup:

JAVA_HOME : C:\ProgramData\OpenJDK
HADOOP_HOME : C:\ProgramData\hadoop
SPARK_HOME : C:\ProgramData\spark
PYSPARK_PYTHON : python
PYSPARK_DRIVER_PYTHON : jupyter
PYSPARK_DRIVER_PYTHON_OPTS : notebook
PYTHONPATH : %SPARK_HOME%\python;%SPARK_HOME%\python\lib\py4j-0.10.9.5-src.zip;%PYTHONPATH%

System PATH:

C:\ProgramData\OpenJDK\bin
C:\ProgramData\spark\bin
C:\ProgramData\hadoop\bin
C:\ProgramData\hadoop\sbin
C:\Users\A\AppData\Local\Programs\Python\Python311
C:\Users\A\AppData\Local\Programs\Python\Python311\Lib\site-packages
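With those settings, one quick check (a minimal sketch, not part of the original question; the expected paths assume the SPARK_HOME above) is whether PYTHONPATH actually resolves Spark's bundled PySpark:

# Illustrative check: confirm PYTHONPATH picks up Spark's bundled PySpark.
# Expected values assume SPARK_HOME=C:\ProgramData\spark as listed above.
import pyspark

print(pyspark.__file__)     # expected somewhere under C:\ProgramData\spark\python
print(pyspark.__version__)  # expected 3.3.1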

2 Answers

Stack Overflow user

Answered 2022-11-22 14:42:46

"It opens a Jupyter notebook, but the Spark logo is not shown, nor is the Python shell available"

That is Jupyter, which is Python (by default).

By the way, the pyspark2.cmd command is just a wrapper around pyspark.cmd. Also, Jupyter only opens by default because a specific environment variable (PYSPARK_DRIVER_PYTHON) is set.
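A quick way to see which driver pyspark will launch is to print those variables (an illustration of the point above, not part of the original answer):

# Illustrative sketch: print the variables that decide which driver pyspark starts.
# With PYSPARK_DRIVER_PYTHON=jupyter and PYSPARK_DRIVER_PYTHON_OPTS=notebook (as in
# the question), the launcher starts "jupyter notebook" instead of the plain REPL.
import os

for var in ("PYSPARK_PYTHON", "PYSPARK_DRIVER_PYTHON", "PYSPARK_DRIVER_PYTHON_OPTS"):
    print(var, "=", os.environ.get(var, "<not set>"))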

The logo is not necessary to tell you it works. Try creating a session:

from pyspark.sql import SparkSession

# Build (or reuse) a local Spark session; if this returns, PySpark itself works.
spark = SparkSession.builder \
    .master("local") \
    .appName("test") \
    .getOrCreate()
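If that succeeds, a short sanity check (an addition here, not from the original answer) confirms the session is actually usable:

# Hypothetical follow-up check: run a tiny job that needs no external data.
print(spark.version)    # should print 3.3.1
spark.range(5).show()   # numbers 0..4 in a one-column DataFrame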
Votes: 1

Stack Overflow user

Answered 2022-11-22 14:26:29

Your PATH has been changed to use Spark's Python distribution. You can learn more about this here.

Try: echo $PATH

See how many Pythons you have. I would bet you have more than one.
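On Windows the equivalent is where python; alternatively, a small Python sketch (an illustration, not part of the answer) walks PATH and lists every python.exe:

# Illustration: list every python.exe on PATH to spot duplicate interpreters.
import os

for entry in os.environ.get("PATH", "").split(os.pathsep):
    candidate = os.path.join(entry, "python.exe")
    if os.path.isfile(candidate):
        print(candidate)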

Votes: 0
The original page content is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/74515230