【Hue】Hue: A Big Data Web Tool

Copyright notice: this is an original article by the author; please credit the source when reposting. https://blog.csdn.net/gongxifacai_believe/article/details/81125718

1. Installing Hue

(1) Extract the Hue installation package.
cdh]$ tar -zxf hue-3.7.0-cdh5.3.6-build.tar.gz -C /opt/app/
(2) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini.

[desktop]

  # Set this to a random string, the longer the better.
  # This is used for secure hashing in the session store.
  secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o

  # Webserver listens on this address and port
  http_host=hadoop-senior.ibeifeng.com
  http_port=8888

  # Time zone name
  time_zone=Asia/Shanghai

(3) Go to the /opt/app/hue-3.7.0-cdh5.3.6 directory and start Hue (a quick reachability check is sketched after the documentation links below).
hue-3.7.0-cdh5.3.6]$ build/env/bin/supervisor
(4) Hue official documentation:
http://gethue.com/
http://archive.cloudera.com/cdh5/cdh/5/hue-3.7.0-cdh5.3.6/manual.html#_install_hue
https://github.com/cloudera/hue
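As a quick sanity check (a minimal sketch; the host and port are the ones set in hue.ini above), you can confirm the Hue web server is listening before opening it in a browser:

$ curl -I http://hadoop-senior.ibeifeng.com:8888/
# Any HTTP response (200 or a redirect to the login page) means supervisor brought Hue up.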

2. Integrating Hue with HDFS

(1) Edit the configuration file /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/etc/hadoop/hdfs-site.xml and add the following property.

        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>

(2) Edit the configuration file /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/etc/hadoop/core-site.xml and add the following properties.

        <property>
                <name>hadoop.proxyuser.hue.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.hue.groups</name>
                <value>*</value>
        </property>
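Once HDFS has been restarted with these settings, a quick REST call can confirm that WebHDFS is enabled and that the hue proxy user may impersonate other users (a sketch; the beifeng user name follows this article's environment):

$ curl -s "http://hadoop-senior.ibeifeng.com:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue&doas=beifeng"
# A JSON FileStatuses listing (rather than an authorization error) means both settings took effect.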

(3) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini. The HDFS part of the configuration is as follows:

[hadoop]

  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    # HA support by using HttpFs

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://hadoop-senior.ibeifeng.com:8020

      # NameNode logical name.
      ## logical_name=

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      webhdfs_url=http://hadoop-senior.ibeifeng.com:50070/webhdfs/v1

      # Change this if your HDFS cluster is Kerberos-secured
      ## security_enabled=false

      # Default umask for file and directory creation, specified in an octal value.
      ## umask=022

      # Directory of the Hadoop configuration
      hadoop_conf_dir=/opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/etc/hadoop

The YARN part of the configuration is as follows:

  # Configuration for YARN (MR2)
  # ------------------------------------------------------------------------
  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=hadoop-senior.ibeifeng.com

      # The port where the ResourceManager IPC listens on
      resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      resourcemanager_api_url=http://hadoop-senior.ibeifeng.com:8088

      # URL of the ProxyServer API
      proxy_api_url=http://hadoop-senior.ibeifeng.com:8088

      # URL of the HistoryServer API
      history_server_api_url=http://hadoop-senior.ibeifeng.com:19888

      # In secure mode (HTTPS), if SSL certificates from Resource Manager's
      # Rest Server have to be verified against certificate authority
      ## ssl_cert_ca_verify=False
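The API URLs above can be checked directly against the ResourceManager and JobHistoryServer REST endpoints (a minimal sketch using the hosts and ports from this configuration):

$ curl -s http://hadoop-senior.ibeifeng.com:8088/ws/v1/cluster/info
$ curl -s http://hadoop-senior.ibeifeng.com:19888/ws/v1/history/info
# Both should return JSON once the ResourceManager and JobHistoryServer are running.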

The cache directory for uploads is configured as follows:

###########################################################################
# Settings to configure the Filebrowser app
###########################################################################

[filebrowser]
  # Location on local filesystem where the uploaded archives are temporarily stored.
  archive_upload_tempdir=/tmp

Start the NameNode, DataNode, ResourceManager, NodeManager, and JobHistoryServer.
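A minimal sketch of those start commands, assuming the Hadoop installation directory used throughout this article:

hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start namenode
hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start datanode
hadoop-2.5.0-cdh5.3.6]$ sbin/yarn-daemon.sh start resourcemanager
hadoop-2.5.0-cdh5.3.6]$ sbin/yarn-daemon.sh start nodemanager
hadoop-2.5.0-cdh5.3.6]$ sbin/mr-jobhistory-daemon.sh start historyserver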

3. Integrating Hue with Hive

(1) Edit the configuration file /opt/cdh-5.3.6/hive-0.13.1-cdh5.3.6/conf/hive-site.xml.

        <!-- HiveServer2 -->
        <property>
                <name>hive.server2.thrift.port</name>
                <value>10000</value>
        </property>
        <property>
                <name>hive.server2.thrift.bind.host</name>
                <value>hadoop-senior.ibeifeng.com</value>
        </property>

Start HiveServer2:
hive-0.13.1-cdh5.3.6]$ bin/hiveserver2
(2) Edit the configuration file /opt/cdh-5.3.6/hive-0.13.1-cdh5.3.6/conf/hive-site.xml.

        <!-- Remote MetaStore -->
        <property>
                <name>hive.metastore.uris</name>
                <value>thrift://hadoop-senior.ibeifeng.com:9083</value>
        </property>

Start the metastore:
hive-0.13.1-cdh5.3.6]$ bin/hive --service metastore
(3) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini.

[beeswax]

  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=hadoop-senior.ibeifeng.com

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000

  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/opt/cdh-5.3.6/hive-0.13.1-cdh5.3.6/conf

  # Timeout in seconds for thrift calls to Hive service
  server_conn_timeout=120
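To confirm that the HiveServer2 endpoint configured above is reachable, a Beeline connection test helps (a sketch; the JDBC URL uses the host and port from hive-site.xml, and beifeng is the user name assumed in this article):

hive-0.13.1-cdh5.3.6]$ bin/beeline -u jdbc:hive2://hadoop-senior.ibeifeng.com:10000 -n beifeng
# At the beeline prompt, running "show databases;" verifies the connection end to end.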

4. Integrating Hue with an RDBMS

(1) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini.

[librdbms]
  # The RDBMS app can have any number of databases configured in the databases
  # section. A database is known by its section name
  # (IE sqlite, mysql, psql, and oracle in the list below).

  [[databases]]
    # sqlite configuration.
    [[[sqlite]]]
      # Name to show in the UI.
      nice_name=SQLite

      # For SQLite, name defines the path to the database.
      name=/opt/app/hue-3.7.0-cdh5.3.6/desktop/desktop.db

      # Database backend to use.
      engine=sqlite

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}

    # mysql, oracle, or postgresql configuration.
    [[[mysql]]]
      # Name to show in the UI.
      nice_name="My SQL DB"

      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, Name is instance of the Oracle server. For express edition
      # this is 'xe' by default.
      name=test

      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      engine=mysql

      # IP or hostname of the database to connect to.
      host=hadoop-senior.ibeifeng.com

      # Port the database server is listening to. Defaults are:
      # 1. MySQL: 3306
      # 2. PostgreSQL: 5432
      # 3. Oracle Express Edition: 1521
      port=3306

      # Username to authenticate with when connecting to the database.
      user=root

      # Password matching the username to authenticate with when
      # connecting to the database.
      password=123456
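The mysql entry above assumes that a database named test already exists and that root/123456 can connect from the Hue host. A quick sketch to create and verify it (the credentials are the ones assumed in this configuration):

$ mysql -h hadoop-senior.ibeifeng.com -u root -p123456 -e "CREATE DATABASE IF NOT EXISTS test; SHOW DATABASES;"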

5. Integrating Hue with Oozie

(1) Edit the configuration file /opt/app/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini.

###########################################################################
# Settings to configure liboozie
###########################################################################

[liboozie]
  # The URL where the Oozie service runs on. This is required in order for
  # users to submit jobs. Empty value disables the config check.
  oozie_url=http://hadoop-senior.ibeifeng.com:11000/oozie

  # Requires FQDN in oozie_url if enabled
  ## security_enabled=false

  # Location on HDFS where the workflows/coordinator are deployed when submitted.
  remote_deployement_dir=/user/beifeng/examples/apps


###########################################################################
# Settings to configure the Oozie app
###########################################################################

[oozie]
  # Location on local FS where the examples are stored.
  local_data_dir=/opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/examples

  # Location on local FS where the data for the examples is stored.
  sample_data_dir=/opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/examples/input-data

  # Location on HDFS where the oozie examples and workflows are stored.
  remote_data_dir=/user/beifeng/examples/apps

  # Maximum number of Oozie workflows or coordinators to retrieve in one API call.
  oozie_jobs_count=100

  # Use Cron format for defining the frequency of a Coordinator instead of the old frequency number/unit.
  enable_cron_scheduling=true

(2) Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/conf/oozie-site.xml so that the Oozie share library path points to the oozie user instead of the beifeng user.

    <property>
        <name>oozie.service.WorkflowAppService.system.libpath</name>
        <value>/user/oozie/share/lib</value>
        <description>
            System library path to use for workflow applications.
            This path is added to workflow application if their job properties sets
            the property 'oozie.use.system.libpath' to true.
        </description>
    </property>

(3) Upload the Oozie share library.
oozie-4.0.0-cdh5.3.6]$ bin/oozie-setup.sh sharelib create -fs hdfs://hadoop-senior.ibeifeng.com:8020 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz
(4) Start the Oozie service.
oozie-4.0.0-cdh5.3.6]$ bin/oozied.sh start
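Once Oozie is up, the server status can be checked from the Oozie CLI against the URL configured in hue.ini above (a minimal sketch):

oozie-4.0.0-cdh5.3.6]$ bin/oozie admin -oozie http://hadoop-senior.ibeifeng.com:11000/oozie -status
# Expected output: System mode: NORMAL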

6. Starting Hue

Go to the directory /opt/app/hue-3.7.0-cdh5.3.6/ and run:
hue-3.7.0-cdh5.3.6]$ build/env/bin/supervisor
Alternatively, you can restart Hue after each of the steps above (press Ctrl+C to stop the service, then use the command above to start it again).
