
Tags: hadoop, big data

Getting Started with Hadoop: Environment Setup

1 Steps

  • Hardware environment preparation
  • Resource downloads
  • Environment deployment

2 Distributed Cluster Environment Deployment

2.1 Hardware Environment Preparation

This walkthrough uses three servers (for learning purposes only), named Hadoop102, Hadoop103, and Hadoop104; allocate more resources to each if they are available.

Note: you can build one template VM and clone it for the others.

2.2 Resource Downloads

Resource list:

Apache product downloads: Apache Distribution Directory

  • JDK 1.8
  • Hadoop 3.2.3
  1. Go to the official Hadoop download page (Index of /hadoop/common (apache.org)); this walkthrough uses hadoop-3.2.3.
  2. Download JDK 8 from Java Downloads | Oracle.

2.3 Environment Deployment

Prerequisite: make sure the servers and resources above are fully prepared.

It is recommended to disable the server firewalls (otherwise you must configure inbound and outbound firewall rules) so that clients can reach the relevant ports.
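If you prefer to keep firewalld running, you can open just the externally accessed ports instead. A minimal sketch (not exhaustive; 8020 is the NameNode RPC port and 9870/9868 are the NameNode/SecondaryNameNode web UIs configured later; the many internal daemon ports are the reason disabling the firewall is simpler in a lab):

sudo firewall-cmd --permanent --add-port=8020/tcp --add-port=9870/tcp --add-port=9868/tcp
sudo firewall-cmd --reload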

Planned placement of the cluster services (matching the configuration in the following sections):

  • Hadoop102: NameNode, DataNode, NodeManager
  • Hadoop103: ResourceManager, DataNode, NodeManager
  • Hadoop104: SecondaryNameNode, DataNode, NodeManager

2.3.1 Server Environment Configuration

Perform the following steps on every server (Hadoop102, Hadoop103, Hadoop104).

Hostname configuration

Confirm that the hostname matches the plan:

[my@Hadoop102 ~]$ hostname

Hadoop102

[my@Hadoop102 ~]$

If it does not match, edit the hostname file and enter the desired hostname (a reboot is required for the change to take effect):

[my@Hadoop102 ~]$ sudo vi /etc/hostname

[sudo] password for my:

Hadoop102

~                                                                                                                           

~                                                                                                                                                                                                                                                       

~                                                                                                                           

"/etc/hostname" 1L, 10C

Name resolution configuration

Edit the hosts file (so hostnames resolve to the corresponding IPs) and append the Hadoop cluster host list as follows:

[my@Hadoop102 ~]$ sudo vi /etc/hosts

[sudo] password for my:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# Hadoop cluster host list

192.168.10.102 Hadoop102

192.168.10.103 Hadoop103

192.168.10.104 Hadoop104

~                                                                                                                                                                                                                                                       

"/etc/hosts" [noeol] 6L, 258C

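Optionally verify that name resolution works (a quick sanity check; run from any node):

ping -c 1 Hadoop103
ping -c 1 Hadoop104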
Disable the firewall

Switch to the root user:

[my@Hadoop102 ~]$ su - root

Password:

Last login: Sat Apr 16 23:00:28 CST 2022 on pts/1

Stop the firewall:

[root@Hadoop102 ~]# systemctl stop firewalld

Check the firewall status:

[root@Hadoop102 ~]# systemctl status firewalld

● firewalld.service - firewalld - dynamic firewall daemon

   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)

   Active: inactive (dead)

     Docs: man:firewalld(1)

Disable firewall autostart on boot:

[root@Hadoop102 ~]# systemctl disable firewalld

[root@Hadoop102 ~]#

2.3.2 Installing the JDK

Install on Hadoop102.

Upload the JDK package

Open an sftp session to Hadoop102 and upload the file:

sftp> lcd D:\soft\Hadoop

sftp> lls

hadoop-3.2.3.tar.gz  jdk-8u321-linux-x64.tar.gz

sftp> cd /opt/Hadoop/

sftp> ls

hadoop-3.2.3

sftp> put jdk-8u321-linux-x64.tar.gz

Uploading jdk-8u321-linux-x64.tar.gz to /opt/Hadoop/jdk-8u321-linux-x64.tar.gz

  100% 480705KB  48070KB/s 00:00:10    

D:/soft/Hadoop/jdk-8u321-linux-x64.tar.gz: 592241961 bytes transferred in 18 seconds (48070 KB/s)

sftp>

Install the JDK

Go to the directory containing the package and extract it to the target path:
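Note that tar -C does not create a missing target directory, so create it first; the path here is chosen to match the JAVA_HOME configured below (adjust if your layout differs):

mkdir -p /opt/java/jdk1.8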

tar zxvf jdk-8u321-linux-x64.tar.gz -C /opt/java/jdk1.8

Configure the Java environment variables:

[root@Hadoop102 java]# cd /etc/profile.d/

[root@Hadoop102 profile.d]# ls

256term.csh                   bash_completion.sh  colorls.csh  lang.csh  less.sh        vim.csh  which2.csh

256term.sh                    colorgrep.csh       colorls.sh   lang.sh          vim.sh   which2.sh

abrt-console-notification.sh  colorgrep.sh        flatpak.sh   less.csh  PackageKit.sh  vte.sh

Create myenv.sh under /etc/profile.d/ and add the environment variables:

[root@Hadoop102 profile.d]# vi myenv.sh

# JAVA JDK environment variables

export JAVA_HOME=/opt/java/jdk1.8/jdk1.8.0_321

export PATH=$PATH:$JAVA_HOME/bin

~                                                                                                                            

~                                                                                                                          

"myenv.sh" 6L, 220C

Apply the environment variables:

[root@Hadoop102 profile.d]# source /etc/profile

[root@Hadoop102 profile.d]#

Check the installation; output like the following indicates success:

[root@Hadoop102 profile.d]# java

Usage: java [-options] class [args...]
           (to execute a class)
   or  java [-options] -jar jarfile [args...]
           (to execute a jar file)

where options include:

    -d32          use a 32-bit data model (if available)
    -d64          use a 64-bit data model (if available)
    -server       to select the "server" VM
                  The default VM is server.
    -cp <class search path of directories and zip/jar files>
    -classpath <class search path of directories and zip/jar files>
                  A : separated list of directories, JAR archives,
                  and ZIP archives to search for class files.
    -D<name>=<value>
                  set a system property
    -verbose:[class|gc|jni]
                  enable verbose output
    -version      print product version and exit
    -version:<value>
                  Warning: this feature is deprecated and will be removed
                  in a future release.
                  require the specified version to run
    -showversion  print product version and continue
    -jre-restrict-search | -no-jre-restrict-search
                  Warning: this feature is deprecated and will be removed
                  in a future release.
                  include/exclude user private JREs in the version search
    -? -help      print this help message
    -X            print help on non-standard options
    -ea[:<packagename>...|:<classname>]
    -enableassertions[:<packagename>...|:<classname>]
                  enable assertions with specified granularity
    -da[:<packagename>...|:<classname>]
    -disableassertions[:<packagename>...|:<classname>]
                  disable assertions with specified granularity
    -esa | -enablesystemassertions
                  enable system assertions
    -dsa | -disablesystemassertions
                  disable system assertions
    -agentlib:<libname>[=<options>]
                  load native agent library <libname>, e.g. -agentlib:hprof
                  see also -agentlib:jdwp=help and -agentlib:hprof=help
    -agentpath:<pathname>[=<options>]
                  load native agent library by full pathname
    -javaagent:<jarpath>[=<options>]
                  load Java programming language agent, see java.lang.instrument
    -splash:<imagepath>
                  show splash screen with specified image
See http://www.oracle.com/technetwork/java/javase/documentation/index.html for more details.

[root@Hadoop102 profile.d]#
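For a shorter check than the full help text, java -version prints only the runtime version (it should report 1.8.0_321 for this install):

[root@Hadoop102 profile.d]# java -version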

2.3.3 Installing Hadoop

Install on Hadoop102.

Upload the Hadoop package

Open an sftp session to Hadoop102 and upload the file:

sftp> lcd D:\soft\Hadoop

sftp> lls

hadoop-3.2.3.tar.gz  jdk-8u321-linux-x64.tar.gz

sftp> cd /opt/Hadoop/

sftp> ls

hadoop-3.2.3

sftp> put hadoop-3.2.3.tar.gz

Uploading hadoop-3.2.3.tar.gz to /opt/Hadoop/hadoop-3.2.3.tar.gz

  100% 480705KB  48070KB/s 00:00:10    

D:/soft/Hadoop/hadoop-3.2.3.tar.gz: 492241961 bytes transferred in 10 seconds (48070 KB/s)

sftp>

Install Hadoop

Go to the directory containing the package and extract it to the target path:

tar zxvf hadoop-3.2.3.tar.gz -C /opt/Hadoop

Change the owner of the Hadoop files:

chown -R my:my /opt/Hadoop

Configure the Hadoop environment variables:

[root@Hadoop102 Hadoop]# cd /etc/profile.d/

[root@Hadoop102 profile.d]# ls

256term.csh                   bash_completion.sh  colorls.csh  lang.csh  less.sh        vim.csh  which2.csh

256term.sh                    colorgrep.csh       colorls.sh   lang.sh          vim.sh   which2.sh

abrt-console-notification.sh  colorgrep.sh        flatpak.sh   less.csh  PackageKit.sh  vte.sh

Append the Hadoop variables to myenv.sh under /etc/profile.d/:

[root@Hadoop102 profile.d]# vi myenv.sh

# JAVA JDK environment variables

export JAVA_HOME=/opt/java/jdk1.8/jdk1.8.0_321

export PATH=$PATH:$JAVA_HOME/bin

# Hadoop environment variables

export HADOOP_HOME=/opt/Hadoop/hadoop-3.2.3

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

~                                                                                                                           

~                                                                                                                           

"myenv.sh" 6L, 220C

Apply the environment variables:

[root@Hadoop102 profile.d]# source /etc/profile

[root@Hadoop102 profile.d]#

Check the installation; output like the following indicates success:

[root@Hadoop102 profile.d]# hadoop

Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]

 or    hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]

  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:

--buildpaths                       attempt to add class files from build tree

--config dir                       Hadoop config directory

--debug                            turn on shell script debug mode

--help                             usage information

--hostnames list[,of,host,names]   hosts to use in slave mode

--hosts filename                   list of hosts to use in slave mode

--loglevel level                   set the log4j level for this command

--workers                          turn on worker mode

  SUBCOMMAND is one of:

    Admin Commands:

daemonlog     get/set the log level for each daemon

    Client Commands:

archive       create a Hadoop archive

checknative   check native Hadoop and compression libraries availability

classpath     prints the class path needed to get the Hadoop jar and the required libraries

conftest      validate configuration XML files

credential    interact with credential providers

distch        distributed metadata changer

distcp        copy file or directories recursively

dtutil        operations related to delegation tokens

envvars       display computed Hadoop environment variables

fs            run a generic filesystem user client

gridmix       submit a mix of synthetic job, modeling a profiled from production load

jar <jar>     run a jar file. NOTE: please use "yarn jar" to launch YARN applications, not this command.

jnipath       prints the java.library.path

kdiag         Diagnose Kerberos Problems

kerbname      show auth_to_local principal conversion

key           manage keys via the KeyProvider

rumenfolder   scale a rumen input trace

rumentrace    convert logs into a rumen trace

s3guard       manage metadata on S3

trace         view and modify Hadoop tracing settings

version       print the version

    Daemon Commands:

kms           run KMS, the Key Management Server

SUBCOMMAND may print help when invoked w/o parameters or with -h.

[root@Hadoop102 profile.d]#

2.3.4 Configuring the Hadoop Cluster

Configure on Hadoop102.

Edit the main configuration files, located under etc/hadoop in the Hadoop installation directory (default versions of each file ship with the Hadoop source on the official site, and Hadoop falls back to those defaults for any property you do not override):

  • core-site.xml
  • hdfs-site.xml
  • mapred-site.xml
  • yarn-site.xml
  • workers
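One more file in the same directory worth knowing about is hadoop-env.sh. The start scripts launch remote daemons over ssh, where /etc/profile.d is not always sourced, so it is common practice to also set JAVA_HOME there explicitly; a minimal sketch, assuming the JDK path used above:

export JAVA_HOME=/opt/java/jdk1.8/jdk1.8.0_321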

Configure core-site.xml

[root@Hadoop102 hadoop]# pwd

/opt/Hadoop/hadoop-3.2.3/etc/hadoop

[root@Hadoop102 hadoop]# vi core-site.xml

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

  Licensed under the Apache License, Version 2.0 (the "License");

  you may not use this file except in compliance with the License.

  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License. See accompanying LICENSE file.

-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- NameNode address and internal RPC port (fs.defaultFS; the old name fs.default.name is deprecated) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://Hadoop102:8020</value>
    </property>
    <!-- Base directory for HDFS data files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/Hadoop/hadoop-3.2.3/Datas</value>
    </property>
</configuration>

~                                                                                                                            

"core-site.xml" 30L, 1102C

Configure hdfs-site.xml

[root@Hadoop102 hadoop]# ls

capacity-scheduler.xml      hadoop-user-functions.sh.example  kms-log4j.properties        ssl-client.xml.example

configuration.xsl           hdfs-site.xml                     kms-site.xml                ssl-server.xml.example

container-executor.cfg      httpfs-env.sh                     log4j.properties            user_ec_policies.xml.template

core-site.xml               httpfs-log4j.properties           mapred-env.cmd              workers

hadoop-env.cmd              httpfs-signature.secret           mapred-env.sh               yarn-env.cmd

hadoop-env.sh               httpfs-site.xml                   mapred-queues.xml.template  yarn-env.sh

hadoop-metrics2.properties  kms-acls.xml                      mapred-site.xml             yarnservice-log4j.properties

hadoop-policy.xml           kms-env.sh                        shellprofile.d              yarn-site.xml

[root@Hadoop102 hadoop]# vi hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

  Licensed under the Apache License, Version 2.0 (the "License");

  you may not use this file except in compliance with the License.

  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License. See accompanying LICENSE file.

-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>Hadoop104:9868</value>
    </property>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>Hadoop102:9870</value>
    </property>
</configuration>

~                                                                                                                           

"hdfs-site.xml" 30L, 1049C

Configure mapred-site.xml

[root@Hadoop102 hadoop]# vi mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

  Licensed under the Apache License, Version 2.0 (the "License");

  you may not use this file except in compliance with the License.

  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License. See accompanying LICENSE file.

-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Execution framework for MapReduce; this walkthrough uses YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

~                                                                                                                            

~                                                                                                                           

~                                                                                                                            

~                                                                                                                           

~                                                                                                                           

~                                                                                                                           

"mapred-site.xml" 25L, 900C

Configure yarn-site.xml

[root@Hadoop102 hadoop]# vi yarn-site.xml

<?xml version="1.0"?>

<!--

  Licensed under the Apache License, Version 2.0 (the "License");

  you may not use this file except in compliance with the License.

  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License. See accompanying LICENSE file.

-->

<configuration>
<!-- Site specific YARN configuration properties -->
    <!-- Host that runs the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>Hadoop103</value>
    </property>
    <!-- Auxiliary service used by the NodeManager -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

~                                                                                                                            

~                                                                                                                           

~                                                                                                                            

~                                                                                                                           

"yarn-site.xml" 27L, 888C

Configure the cluster worker list

The workers file lists the hostnames (or IPs) of the DataNode hosts; the start scripts use it to launch the Hadoop services on every node in the cluster.

[root@Hadoop102 hadoop]# vi workers

Hadoop102

Hadoop103

Hadoop104

~                                                                                                                                                                                                                                                        

~                                                                                                                           

~                                                                                                                            

"workers" [noeol] 3L, 29C

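Note: entries in workers must be exact hostnames with no trailing spaces or blank lines; stray whitespace makes the start scripts attempt to ssh to a malformed host name.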
Configure passwordless SSH between cluster hosts

Generate an RSA key pair:

ssh-keygen -t rsa

Change into the .ssh directory under the user's home directory:

cd ~/.ssh

Send the public key to the other hosts, so that Hadoop102 can ssh to Hadoop103 and Hadoop104 without a password:

ssh-copy-id -i id_rsa.pub my@Hadoop103

ssh-copy-id -i id_rsa.pub my@Hadoop104
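Hadoop102 also needs passwordless access to itself, because start-dfs.sh connects over ssh to every host listed in workers, including the local one (section 3.1.2 shows the errors you get if this step is skipped):

ssh-copy-id -i id_rsa.pub my@Hadoop102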

Distribute the Hadoop cluster configuration

Copy the Hadoop installation and configuration to the remaining cluster hosts, Hadoop103 and Hadoop104 (alternatively, repeat the same setup on each host as on Hadoop102). Passwordless login is per-host and must be configured separately on each machine, following "Configure passwordless SSH between cluster hosts".

Use scp for a full, first-time copy.

Use rsync for incremental updates (it only transfers changed files).

Since this is the first full copy, scp is used here.
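For reference, the incremental rsync form (used in section 3.1.1 below to push a corrected file) looks like this, a sketch against the same layout:

rsync -av /opt/Hadoop/hadoop-3.2.3/etc/hadoop/ my@Hadoop103:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/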

Copy the Hadoop installation and configuration:

[root@Hadoop102 hadoop]# scp -r /opt/Hadoop my@Hadoop103:/opt

[root@Hadoop102 hadoop]# scp -r /opt/Hadoop my@Hadoop104:/opt

Copy the environment variable file:

[root@Hadoop102 hadoop]# scp /etc/profile.d/myenv.sh root@Hadoop103:/etc/profile.d/

[root@Hadoop102 hadoop]# scp /etc/profile.d/myenv.sh root@Hadoop104:/etc/profile.d/

Change the file owner (keep it consistent on every cluster host):

[root@Hadoop103 hadoop]# chown -R my:my /opt/Hadoop/

[root@Hadoop104 hadoop]# chown -R my:my /opt/Hadoop/

After the copy completes, verify on each host that Hadoop is installed correctly.

Start the Hadoop cluster services

Format HDFS (first start only):

[my@Hadoop102 hadoop-3.2.3]$ hdfs namenode -format
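Formatting is a one-time operation for a new cluster. If you ever need to re-format (for example after fixing the configuration error in section 3.1.1 below), stop the cluster and clear the Datas and logs directories on every node first, otherwise the DataNodes keep the old cluster ID and refuse to join.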

[my@Hadoop102 hadoop-3.2.3]$ ll

total 180
drwxr-xr-x. 2 my my    203 Mar 20 09:58 bin
drwxrwxr-x. 3 my my     17 Apr 17 18:20 Datas
drwxr-xr-x. 3 my my     20 Mar 20 09:20 etc
drwxr-xr-x. 2 my my    106 Mar 20 09:58 include
drwxr-xr-x. 3 my my     20 Mar 20 09:58 lib
drwxr-xr-x. 4 my my    288 Mar 20 09:58 libexec
-rw-rw-r--. 1 my my 150571 Mar 10 13:39 LICENSE.txt
drwxrwxr-x. 2 my my     35 Apr 17 18:20 logs
-rw-rw-r--. 1 my my  21943 Mar 10 13:39 NOTICE.txt
-rw-rw-r--. 1 my my   1361 Mar 10 13:39 README.txt
drwxr-xr-x. 3 my my   4096 Mar 20 09:20 sbin
drwxr-xr-x. 4 my my     31 Mar 20 10:17 share

Start the HDFS services:

[my@Hadoop102 ~]$ $HADOOP_HOME/sbin/start-dfs.sh

Starting namenodes on [Hadoop102]

Starting datanodes

Starting secondary namenodes [Hadoop104]

Start the YARN services (on Hadoop103, where the ResourceManager is planned):

[my@Hadoop103 ~]$ $HADOOP_HOME/sbin/start-yarn.sh

Starting resourcemanager

Starting nodemanagers

Check the running services (confirm each service runs on the planned host):

[my@Hadoop102 ~]$ jps

9328 Jps

8774 DataNode

9225 NodeManager

8653 NameNode

[my@Hadoop102 ~]$

[my@Hadoop103 ~]$ jps

7989 ResourceManager

7670 DataNode

8104 NodeManager

8473 Jps

[my@Hadoop103 ~]$

[my@Hadoop104 ~]$ jps

7968 NodeManager

7590 DataNode

7704 SecondaryNameNode

8104 Jps

[my@Hadoop104 ~]$

After a successful start, open the HDFS web UI in a browser to verify (http://Hadoop102:9870, the NameNode address configured in hdfs-site.xml).

3 Notes

3.1 Common Errors

3.1.1 Cluster configuration file errors

If an XML configuration file is malformed, startup (here, the NameNode format) aborts with an error naming the offending file, for example:

ration>; expected . at [row,col,system-id]: [24,15,"file:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml"]
2022-04-17 18:17:48,721 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Hadoop102/192.168.10.102
************************************************************/
2022-04-17 18:17:48,767 ERROR conf.Configuration: error parsing conf mapred-site.xml
com.ctc.wstx.exc.WstxParsingException: Unexpected close tag ; expected .
 at [row,col,system-id]: [24,15,"file:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml"]
        at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:634)
        at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:504)
        at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:488)
        at com.ctc.wstx.sr.BasicStreamReader.reportWrongEndElem(BasicStreamReader.java:3352)
        at com.ctc.wstx.sr.BasicStreamReader.readEndElem(BasicStreamReader.java:3279)
        at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2900)
        at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1121)
        at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3336)
        at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3130)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3023)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2984)
        at org.apache.hadoop.conf.Configuration.loadProps(Configuration.java:2862)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2844)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1200)
        at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1812)
        at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1789)
        at org.apache.hadoop.util.ShutdownHookManager.getShutdownTimeout(ShutdownHookManager.java:183)
        at org.apache.hadoop.util.ShutdownHookManager.shutdownExecutor(ShutdownHookManager.java:145)
        at org.apache.hadoop.util.ShutdownHookManager.access$300(ShutdownHookManager.java:65)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:102)
Exception in thread "Thread-1" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Unexpected close tag ; expected .
 at [row,col,system-id]: [24,15,"file:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml"]
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3040)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2984)
        at org.apache.hadoop.conf.Configuration.loadProps(Configuration.java:2862)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2844)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1200)
        at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1812)
        at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1789)
        at org.apache.hadoop.util.ShutdownHookManager.getShutdownTimeout(ShutdownHookManager.java:183)
        at org.apache.hadoop.util.ShutdownHookManager.shutdownExecutor(ShutdownHookManager.java:145)
        at org.apache.hadoop.util.ShutdownHookManager.access$300(ShutdownHookManager.java:65)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:102)
Caused by: com.ctc.wstx.exc.WstxParsingException: Unexpected close tag ; expected .
 at [row,col,system-id]: [24,15,"file:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml"]
        at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:634)
        at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:504)
        at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:488)
        at com.ctc.wstx.sr.BasicStreamReader.reportWrongEndElem(BasicStreamReader.java:3352)
        at com.ctc.wstx.sr.BasicStreamReader.readEndElem(BasicStreamReader.java:3279)
        at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2900)
        at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1121)
        at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3336)
        at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3130)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3023)
        ... 10 more

Fix the configuration file, then redistribute it to the other cluster hosts:

[my@Hadoop102 hadoop-3.2.3]$ rsync -av /opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml Hadoop103:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/

my@hadoop103's password:
sending incremental file list
mapred-site.xml

sent 296 bytes  received 43 bytes  96.86 bytes/sec
total size is 900  speedup is 2.65

[my@Hadoop102 hadoop-3.2.3]$ rsync -av /opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml Hadoop104:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/

The authenticity of host 'hadoop104 (192.168.10.104)' can't be established.
ECDSA key fingerprint is SHA256:fE+HBwM03RQA+TNPrpQvWYHV46mYltvqrh9psMUXwos.
ECDSA key fingerprint is MD5:38:8a:6a:6c:a2:f9:43:2e:e4:99:58:53:aa:84:cc:13.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop104,192.168.10.104' (ECDSA) to the list of known hosts.
my@hadoop104's password:
sending incremental file list
mapred-site.xml

sent 296 bytes  received 43 bytes  61.64 bytes/sec
total size is 900  speedup is 2.65

Format HDFS again:

[my@Hadoop102 hadoop-3.2.3]$ hdfs namenode -format

2022-04-17 18:20:30,242 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Hadoop102/192.168.10.102
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.2.3
STARTUP_MSG:   classpath = /opt/Hadoop/hadoop-3.2.3/etc/hadoop:... (long jar list omitted)
STARTUP_MSG:   build = https://github.com/apache/hadoop -r abe5358143720085498613d399be3bbf01e0f131; compiled by 'ubuntu' on 2022-03-20T01:18Z
STARTUP_MSG:   java = 1.8.0_321
************************************************************/
2022-04-17 18:20:30,255 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2022-04-17 18:20:30,438 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-98655ee1-8960-41ad-bc6d-2bb95651b12a
2022-04-17 18:20:31,781 INFO namenode.FSEditLog: Edit logging is async:true
2022-04-17 18:20:31,851 INFO namenode.FSNamesystem: KeyProvider: null
2022-04-17 18:20:31,853 INFO namenode.FSNamesystem: fsLock is fair: true
2022-04-17 18:20:31,853 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2022-04-17 18:20:31,872 INFO namenode.FSNamesystem: fsOwner             = my (auth:SIMPLE)
2022-04-17 18:20:31,872 INFO namenode.FSNamesystem: supergroup          = supergroup
2022-04-17 18:20:31,872 INFO namenode.FSNamesystem: isPermissionEnabled = true
2022-04-17 18:20:31,872 INFO namenode.FSNamesystem: HA Enabled: false
2022-04-17 18:20:31,952 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2022-04-17 18:20:31,968 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2022-04-17 18:20:31,968 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2022-04-17 18:20:31,974 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2022-04-17 18:20:31,981 INFO blockmanagement.BlockManager: The block deletion will start around 2022 Apr 17 18:20:31
2022-04-17 18:20:31,985 INFO util.GSet: Computing capacity for map BlocksMap
2022-04-17 18:20:31,985 INFO util.GSet: VM type       = 64-bit
2022-04-17 18:20:31,986 INFO util.GSet: 2.0% max memory 481.4 MB = 9.6 MB
2022-04-17 18:20:31,987 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2022-04-17 18:20:32,001 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2022-04-17 18:20:32,001 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2022-04-17 18:20:32,011 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2022-04-17 18:20:32,011 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2022-04-17 18:20:32,011 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2022-04-17 18:20:32,012 INFO blockmanagement.BlockManager: defaultReplication         = 3
2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: maxReplication             = 512
2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: minReplication             = 1
2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2022-04-17 18:20:32,050 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2022-04-17 18:20:32,050 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2022-04-17 18:20:32,050 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2022-04-17 18:20:32,050 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2022-04-17 18:20:32,067 INFO util.GSet: Computing capacity for map INodeMap
2022-04-17 18:20:32,068 INFO util.GSet: VM type       = 64-bit
2022-04-17 18:20:32,068 INFO util.GSet: 1.0% max memory 481.4 MB = 4.8 MB
2022-04-17 18:20:32,068 INFO util.GSet: capacity      = 2^19 = 524288 entries
2022-04-17 18:20:32,069 INFO namenode.FSDirectory: ACLs enabled? false
2022-04-17 18:20:32,069 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2022-04-17 18:20:32,069 INFO namenode.FSDirectory: XAttrs enabled? true
2022-04-17 18:20:32,069 INFO namenode.NameNode: Caching file names occurring more than 10 times
2022-04-17 18:20:32,074 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2022-04-17 18:20:32,079 INFO snapshot.SnapshotManager: SkipList is disabled
2022-04-17 18:20:32,097 INFO util.GSet: Computing capacity for map cachedBlocks
2022-04-17 18:20:32,098 INFO util.GSet: VM type       = 64-bit
2022-04-17 18:20:32,098 INFO util.GSet: 0.25% max memory 481.4 MB = 1.2 MB
2022-04-17 18:20:32,098 INFO util.GSet: capacity      = 2^17 = 131072 entries
2022-04-17 18:20:32,108 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2022-04-17 18:20:32,108 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2022-04-17 18:20:32,108 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2022-04-17 18:20:32,112 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2022-04-17 18:20:32,112 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2022-04-17 18:20:32,120 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2022-04-17 18:20:32,120 INFO util.GSet: VM type       = 64-bit
2022-04-17 18:20:32,120 INFO util.GSet: 0.029999999329447746% max memory 481.4 MB = 147.9 KB
2022-04-17 18:20:32,120 INFO util.GSet: capacity      = 2^14 = 16384 entries
2022-04-17 18:20:32,173 INFO namenode.FSImage: Allocated new BlockPoolId: BP-56363176-192.168.10.102-1650190832146
2022-04-17 18:20:32,204 INFO common.Storage: Storage directory /opt/Hadoop/hadoop-3.2.3/Datas/dfs/name has been successfully formatted.
2022-04-17 18:20:32,254 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/Hadoop/hadoop-3.2.3/Datas/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2022-04-17 18:20:32,406 INFO namenode.FSImageFormatProtobuf: Image file /opt/Hadoop/hadoop-3.2.3/Datas/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 397 bytes saved in 0 seconds .
2022-04-17 18:20:32,420 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2022-04-17 18:20:32,438 INFO namenode.FSNamesystem: Stopping services started for active state
2022-04-17 18:20:32,438 INFO namenode.FSNamesystem: Stopping services started for standby state
2022-04-17 18:20:32,442 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2022-04-17 18:20:32,443 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Hadoop102/192.168.10.102
************************************************************/

[my@Hadoop102 hadoop-3.2.3]$ ll

total 180
drwxr-xr-x. 2 my my    203 Mar 20 09:58 bin
drwxrwxr-x. 3 my my     17 Apr 17 18:20 Datas
drwxr-xr-x. 3 my my     20 Mar 20 09:20 etc
drwxr-xr-x. 2 my my    106 Mar 20 09:58 include
drwxr-xr-x. 3 my my     20 Mar 20 09:58 lib
drwxr-xr-x. 4 my my    288 Mar 20 09:58 libexec
-rw-rw-r--. 1 my my 150571 Mar 10 13:39 LICENSE.txt
drwxrwxr-x. 2 my my     35 Apr 17 18:20 logs
-rw-rw-r--. 1 my my  21943 Mar 10 13:39 NOTICE.txt
-rw-rw-r--. 1 my my   1361 Mar 10 13:39 README.txt
drwxr-xr-x. 3 my my   4096 Mar 20 09:20 sbin
drwxr-xr-x. 4 my my     31 Mar 20 10:17 share

3.1.2 Passwordless SSH not configured for the cluster

Without passwordless login, starting the HDFS services fails with errors like these:

[my@Hadoop102 current]$ $HADOOP_HOME/sbin/start-dfs.sh

Starting namenodes on [Hadoop102]

Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Starting datanodes

Hadoop104: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Hadoop103: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Starting secondary namenodes [Hadoop104]

Hadoop104: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

[my@Hadoop102 current]$ $HADOOP_HOME/sbin/start-dfs.sh

Starting namenodes on [Hadoop102]

Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Starting datanodes

Hadoop103: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Hadoop104: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Starting secondary namenodes [Hadoop104]

Hadoop104: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Generate an RSA key and distribute it for passwordless authentication:

[my@Hadoop102 current]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/my/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/my/.ssh/id_rsa.

Your public key has been saved in /home/my/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:hrIi7CAZx5vqX0VertHrvbDi7lO1eKMPqECOaaJ6auw my@Hadoop102

The key's randomart image is:

+---[RSA 2048]----+

| |

| |

| . . |

| . o.+ . |

|. o.. .+Soo . |

|.+=o o..+o.+ |

|B=+o.. o.+o . |

|B=..o .o..= |

|XE.. .++oo.+. |

+----[SHA256]-----+

[my@Hadoop102 current]$ cd ~

[my@Hadoop102 ~]$ ll

total 0
drwxr-xr-x. 2 my my 6 Apr 16 13:59 公共
drwxr-xr-x. 2 my my 6 Apr 16 13:59 模板
drwxr-xr-x. 2 my my 6 Apr 16 13:59 视频
drwxr-xr-x. 2 my my 6 Apr 16 13:59 图片
drwxr-xr-x. 2 my my 6 Apr 16 13:59 文档
drwxr-xr-x. 2 my my 6 Apr 16 13:59 下载
drwxr-xr-x. 2 my my 6 Apr 16 13:59 音乐
drwxr-xr-x. 2 my my 6 Apr 16 13:59 桌面

[my@Hadoop102 ~]$ ls -a

.   .bash_history  .bash_profile  .cache   .esd_auth      .local    .ssh      公共  视频  文档  音乐
..  .bash_logout   .bashrc        .config  .ICEauthority  .mozilla  .viminfo  模板  图片  下载  桌面

[my@Hadoop102 ~]$ cd .ssh

[my@Hadoop102 .ssh]$ ls

id_rsa id_rsa.pub known_hosts

[my@Hadoop102 .ssh]$ ssh-copy-id -i id_rsa.pub Hadoop103

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub"

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

my@hadoop103's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'Hadoop103'"

and check to make sure that only the key(s) you wanted were added.

[my@Hadoop102 .ssh]$ ssh-copy-id -i id_rsa.pub Hadoop104

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub"

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

my@hadoop104's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'Hadoop104'"

and check to make sure that only the key(s) you wanted were added.

[my@Hadoop102 .ssh]$ $HADOOP_HOME/sbin/start-dfs.sh

Starting namenodes on [Hadoop102]

Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Starting datanodes

Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Hadoop103: WARNING: /opt/Hadoop/hadoop-3.2.3/logs does not exist. Creating.

Hadoop104: WARNING: /opt/Hadoop/hadoop-3.2.3/logs does not exist. Creating.

Starting secondary namenodes [Hadoop104]

[my@Hadoop102 .ssh]$ ssh-copy-id -i id_rsa.pub Hadoop102

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub"

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

my@hadoop102's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'Hadoop102'"

and check to make sure that only the key(s) you wanted were added.

3.1.3 Starting cluster services as a different, non-cluster user

[root@Hadoop102 hadoop-3.2.3]# $HADOOP_HOME/sbin/start-dfs.sh

Starting namenodes on [Hadoop102]

ERROR: Attempting to operate on hdfs namenode as root

ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.

Starting datanodes

ERROR: Attempting to operate on hdfs datanode as root

ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.

Starting secondary namenodes [Hadoop104]

ERROR: Attempting to operate on hdfs secondarynamenode as root

ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.

In this case, simply start the cluster services as the cluster user configured earlier (my).
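Alternatively, if the daemons really must run as root (not recommended), Hadoop 3 honors per-daemon user variables, which are exactly what the error messages above refer to; a sketch, e.g. in /etc/profile.d/myenv.sh:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root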

Copyright notice: this is an original post by the blogger, released under the CC 4.0 BY-SA license; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/JangBingYang/article/details/124259228
