Hadoop Deployment Modes - High-Availability Cluster Deployment (High Availability)

Author: Yin Zhengjie

Copyright notice: this is original work; reproduction is declined! Violators will be held legally liable.

The high-availability cluster in this post is built on top of the fully distributed deployment; for details see https://www.cnblogs.com/yinzhengjie/p/9065191.html. In addition, one more Linux server is needed to act as the standby NameNode.

I. Preparing the experiment environment

Five servers running Linux are needed, ideally with identical configurations. Since my virtual machines were carried over from the earlier fully distributed deployment, all of my environments are identical.

1. NameNode server (s101)
2. DataNode server (s102)
3. DataNode server (s103)
4. DataNode server (s104)
5. Standby NameNode server (s105)

II. Modify the configuration files on s101 and distribute them to the other nodes

For the parameters involved in configuring Hadoop high availability, see the official documentation: http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

1. Copy the configuration directory on s101 and repoint the symbolic link

[yinzhengjie@s101 ~]$ ll /soft/hadoop/etc/
total 12
drwxr-xr-x. 2 yinzhengjie yinzhengjie 4096 Jun 8 05:36 full
lrwxrwxrwx. 1 yinzhengjie yinzhengjie 21 Jun 8 05:54 hadoop -> /soft/hadoop/etc/full
drwxr-xr-x. 2 yinzhengjie yinzhengjie 4096 May 25 09:15 local
drwxr-xr-x. 2 yinzhengjie yinzhengjie 4096 May 25 20:51 pseudo
[yinzhengjie@s101 ~]$ cp -r /soft/hadoop/etc/full /soft/hadoop/etc/ha
[yinzhengjie@s101 ~]$ ln -sfT /soft/hadoop/etc/ha /soft/hadoop/etc/hadoop
[yinzhengjie@s101 ~]$ ll /soft/hadoop/etc/
total 16
drwxr-xr-x. 2 yinzhengjie yinzhengjie 4096 Jun 8 05:36 full
drwxr-xr-x. 2 yinzhengjie yinzhengjie 4096 Jun 8 05:54 ha
lrwxrwxrwx. 1 yinzhengjie yinzhengjie 19 Jun 8 05:54 hadoop -> /soft/hadoop/etc/ha
drwxr-xr-x. 2 yinzhengjie yinzhengjie 4096 May 25 09:15 local
drwxr-xr-x. 2 yinzhengjie yinzhengjie 4096 May 25 20:51 pseudo
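Because the active configuration directory is just a symbolic link, you can switch between configuration sets without touching any Hadoop scripts; for example, rolling back to the fully distributed configuration later would again be a single command (the same ln -sfT trick as above):

[yinzhengjie@s101 ~]$ ln -sfT /soft/hadoop/etc/full /soft/hadoop/etc/hadoop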
2. Configure passwordless SSH login to s105

[yinzhengjie@s101 ~]$ ssh-copy-id yinzhengjie@s105
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
yinzhengjie@s105's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'yinzhengjie@s105'"
and check to make sure that only the key(s) you wanted were added.

[yinzhengjie@s101 ~]$ who
yinzhengjie pts/0        2018-06-08 05:29 (172.16.30.1)
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ ssh s105
Last login: Fri Jun 8 05:37:20 2018 from 172.16.30.1
[yinzhengjie@s105 ~]$ 
[yinzhengjie@s105 ~]$ who
yinzhengjie pts/0        2018-06-08 05:37 (172.16.30.1)
yinzhengjie pts/1        2018-06-08 05:56 (s101)
[yinzhengjie@s105 ~]$ exit
logout
Connection to s105 closed.
[yinzhengjie@s101 ~]$ 
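Before moving on, it is worth confirming that s101 can log in to every node without a password, since both start-dfs.sh and the sshfence method configured below depend on it. A minimal check over the host names used here:

[yinzhengjie@s101 ~]$ for (( i=102;i<=105;i++ )); do ssh -o BatchMode=yes s$i hostname || echo "s$i: passwordless login NOT working"; done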
3. Edit the core-site.xml configuration file

[yinzhengjie@s101 ~]$ more /soft/hadoop/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://mycluster</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/yinzhengjie/ha</value>
        </property>
        <property>
                <name>hadoop.http.staticuser.user</name>
                <value>yinzhengjie</value>
        </property>
</configuration>

<!--
The role of the core-site.xml configuration file:
        It defines system-level parameters, such as the HDFS URL, Hadoop's temporary directory, and the configuration used for rack awareness in the cluster; parameters defined here override the defaults in core-default.xml.

The role of the fs.defaultFS parameter:
        The default path prefix used by clients when connecting to HDFS. Since the nameservice ID was configured earlier with the value mycluster, it can be used here as part of the authority.

The role of the hadoop.tmp.dir parameter:
        Declares the location of Hadoop's working directory.

The role of the hadoop.http.staticuser.user parameter:
        The user name used when accessing data through the web interface.
-->
[yinzhengjie@s101 ~]$ 
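To confirm that the symlinked ha configuration set is the one actually in effect, hdfs getconf can read the value back; this is just a quick sanity check:

[yinzhengjie@s101 ~]$ hdfs getconf -confKey fs.defaultFS
hdfs://mycluster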
4. Edit the hdfs-site.xml configuration file

[yinzhengjie@s101 ~]$ more /soft/hadoop/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/yinzhengjie/ha/dfs/name1,/home/yinzhengjie/ha/dfs/name2</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/yinzhengjie/ha/dfs/data1,/home/yinzhengjie/ha/dfs/data2</value>
        </property>
        <!-- High-availability settings -->
        <property>
                <name>dfs.nameservices</name>
                <value>mycluster</value>
        </property>
        <property>
                <name>dfs.ha.namenodes.mycluster</name>
                <value>nn1,nn2</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.mycluster.nn1</name>
                <value>s101:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.mycluster.nn2</name>
                <value>s105:8020</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.mycluster.nn1</name>
                <value>s101:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.mycluster.nn2</name>
                <value>s105:50070</value>
        </property>
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://s102:8485;s103:8485;s104:8485/mycluster</value>
        </property>
        <property>
                <name>dfs.client.failover.proxy.provider.mycluster</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <!-- Fence the active namenode when failover occurs -->
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>sshfence
                shell(/bin/true)</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/home/yinzhengjie/.ssh/id_rsa</value>
        </property>
</configuration>

<!--
The role of the hdfs-site.xml configuration file:
        HDFS-related settings, such as the number of file replicas, the block size, and whether to enforce permissions; parameters defined here override the defaults in hdfs-default.xml.

The role of the dfs.replication parameter:
        For data availability and redundancy, HDFS keeps multiple replicas of each data block on multiple nodes; the default is 3. In a pseudo-distributed environment with only one node, a single replica is enough, which can be set through this property. It is software-level redundancy.

The role of the dfs.namenode.name.dir parameter:
        Local disk directories where the NameNode stores the fsimage file. It can be a comma-separated list of directories, and the fsimage is written to all of them for redundancy; ideally the directories sit on different disks. If one of the disks fails it does not bring the system down; the bad disk is simply skipped. Since HA is in use, configuring only one directory is recommended; if you are particularly concerned about safety you can configure two.

The role of the dfs.datanode.data.dir parameter:
        Local disk directories where HDFS stores blocks. It can be a comma-separated list, typically with each directory on a different disk. The directories are used in turn: one block goes into one directory, the next block into the next directory, and so on. Each block is stored only once on a given machine. Directories that do not exist are ignored, so the folders must be created beforehand or they are treated as nonexistent.

The role of the dfs.nameservices parameter:
        The list of nameservices, comma-separated.

The role of the dfs.ha.namenodes.mycluster parameter:
        dfs.ha.namenodes.[nameservice ID] lists the unique identifiers of all NameNodes in the namespace, comma-separated. This is how DataNodes learn about every NameNode in the cluster. Currently at most two NameNodes can be configured per cluster.

The role of the dfs.namenode.rpc-address.mycluster.nn1 parameter:
        dfs.namenode.rpc-address.[nameservice ID].[name node ID] is the RPC address each NameNode listens on.

The role of the dfs.namenode.http-address.mycluster.nn1 parameter:
        dfs.namenode.http-address.[nameservice ID].[name node ID] is the HTTP address each NameNode listens on.

The role of the dfs.namenode.shared.edits.dir parameter:
        The URI of the JournalNode group through which the NameNodes read and write the edit log. The URI format is qjournal://host1:port1;host2:port2;host3:port3/journalId, where host1, host2 and host3 are the JournalNode addresses (there must be an odd number of them, at least 3) and journalId is the unique identifier of the cluster; multiple federated namespaces also use the same journalId.

The role of the dfs.client.failover.proxy.provider.mycluster parameter:
        The Java class that HDFS clients use to connect to the active NameNode.

The role of the dfs.ha.fencing.methods parameter:
        The classes that handle a failed active NameNode. When the active NameNode fails, its process usually has to be killed; the handling can be done over ssh or via a shell command.

The role of the dfs.ha.fencing.ssh.private-key-files parameter:
        The SSH private key file used by sshfence; it is required when sshfence is used.
-->
[yinzhengjie@s101 ~]$ 
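With this file in place, hdfs getconf can also resolve the NameNodes of the HA configuration, which quickly catches typos in the dfs.ha.namenodes.* and dfs.namenode.rpc-address.* keys:

[yinzhengjie@s101 ~]$ hdfs getconf -namenodes
s101 s105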
5. Distribute the configuration files

[yinzhengjie@s101 ~]$ more `which xrsync.sh`
#!/bin/bash
#author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

# check whether the user passed an argument
if [ $# -lt 1 ];then
        echo "请输入参数"    # "please provide an argument"
        exit
fi

# get the file path
file=$1

# get the base name
filename=`basename $file`

# get the parent path
dirpath=`dirname $file`

# get the full physical path
cd $dirpath
fullpath=`pwd -P`

# synchronize the file to the DataNodes
for (( i=102;i<=105;i++ ))
do
        # turn the terminal green
        tput setaf 2
        echo s$i %file
        # switch the terminal back to its normal grey-white color
        tput setaf 7
        # run rsync remotely (-l keeps symlinks as symlinks, -r recurses)
        rsync -lr $filename `whoami`@s$i:$fullpath
        # report whether the command succeeded
        if [ $? == 0 ];then
                echo "命令执行成功"    # "command executed successfully"
        fi
done
[yinzhengjies101 ~]$ xrsync.sh /soft/hadoop/etc/s102 %file
命令执行成功s103 %file
命令执行成功s104 %file
命令执行成功s105 %file
命令执行成功
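After the distribution, a quick loop confirms that every node now resolves the hadoop symlink to the ha directory (the -l option of rsync carries the symlink over as a symlink); each node should print /soft/hadoop/etc/ha:

[yinzhengjie@s101 ~]$ for (( i=102;i<=105;i++ )); do echo -n "s$i: "; ssh s$i readlink /soft/hadoop/etc/hadoop; done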
III. Starting the HDFS distributed file system

1. Start the journalnode processes

[yinzhengjie@s101 ~]$ hadoop-daemons.sh start journalnode
s104: starting journalnode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-journalnode-s104.out
s103: starting journalnode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-journalnode-s103.out
s102: starting journalnode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-journalnode-s102.out
[yinzhengjie@s101 ~]$ xcall.sh jps
s101 jps
2855 Jps
命令执行成功
s102 jps
2568 Jps
2490 JournalNode
命令执行成功
s103 jps
2617 Jps
2539 JournalNode
命令执行成功
s104 jps
2611 Jps
2532 JournalNode
命令执行成功
s105 jps
2798 Jps
命令执行成功
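Besides jps, each JournalNode also serves a small web UI, by default on port 8480 (dfs.journalnode.http-address), so a quick HTTP probe can confirm that all quorum members are actually serving; each node should return HTTP code 200:

[yinzhengjie@s101 ~]$ for i in s102 s103 s104; do curl -s -o /dev/null -w "$i: %{http_code}\n" http://$i:8480; done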
[yinzhengjie@s101 ~]$ more `which xcall.sh`
#!/bin/bash
#author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

# check whether the user passed an argument
if [ $# -lt 1 ];then
        echo "请输入参数"    # "please provide an argument"
        exit
fi

# get the command the user typed
cmd=$*

for (( i=101;i<=105;i++ ))
do
        # turn the terminal green
        tput setaf 2
        echo s$i $cmd
        # switch the terminal back to its normal grey-white color
        tput setaf 7
        # run the command remotely
        ssh s$i $cmd
        # report whether the command succeeded
        if [ $? == 0 ];then
                echo "命令执行成功"    # "command executed successfully"
        fi
done
[yinzhengjie@s101 ~]$ 

2. Format the name node

[yinzhengjie@s101 ~]$ hdfs namenode -format
18/06/08 10:39:42 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = s101/172.16.30.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG: classpath /soft/hadoop-2.7.3/etc/hadoop:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/
hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty
-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/had
oop/mapreduce/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_131
************************************************************/
18/06/08 10:39:42 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/06/08 10:39:42 INFO namenode.NameNode: createNameNode [-format]
18/06/08 10:39:43 WARN common.Util: Path /home/yinzhengjie/ha/dfs/name1 should be specified as a URI in configuration files. Please update hdfs configuration.
18/06/08 10:39:43 WARN common.Util: Path /home/yinzhengjie/ha/dfs/name2 should be specified as a URI in configuration files. Please update hdfs configuration.
18/06/08 10:39:43 WARN common.Util: Path /home/yinzhengjie/ha/dfs/name1 should be specified as a URI in configuration files. Please update hdfs configuration.
18/06/08 10:39:43 WARN common.Util: Path /home/yinzhengjie/ha/dfs/name2 should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-d8db65bd-2f38-48df-bd5f-be7b8d9d362c
18/06/08 10:39:43 INFO namenode.FSNamesystem: No KeyProvider found.
18/06/08 10:39:43 INFO namenode.FSNamesystem: fsLock is fair:true
18/06/08 10:39:43 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/06/08 10:39:43 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/06/08 10:39:43 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/06/08 10:39:43 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jun 08 10:39:43
18/06/08 10:39:43 INFO util.GSet: Computing capacity for map BlocksMap
18/06/08 10:39:43 INFO util.GSet: VM type       = 64-bit
18/06/08 10:39:43 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
18/06/08 10:39:43 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/06/08 10:39:43 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/06/08 10:39:43 INFO blockmanagement.BlockManager: defaultReplication         = 3
18/06/08 10:39:43 INFO blockmanagement.BlockManager: maxReplication             = 512
18/06/08 10:39:43 INFO blockmanagement.BlockManager: minReplication             = 1
18/06/08 10:39:43 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/06/08 10:39:43 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/06/08 10:39:43 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/06/08 10:39:43 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/06/08 10:39:43 INFO namenode.FSNamesystem: fsOwner             = yinzhengjie (auth:SIMPLE)
18/06/08 10:39:43 INFO namenode.FSNamesystem: supergroup          = supergroup
18/06/08 10:39:43 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/06/08 10:39:43 INFO namenode.FSNamesystem: Determined nameservice ID: mycluster
18/06/08 10:39:43 INFO namenode.FSNamesystem: HA Enabled: true
18/06/08 10:39:43 INFO namenode.FSNamesystem: Append Enabled: true
18/06/08 10:39:43 INFO util.GSet: Computing capacity for map INodeMap
18/06/08 10:39:43 INFO util.GSet: VM type       = 64-bit
18/06/08 10:39:43 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
18/06/08 10:39:43 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/06/08 10:39:43 INFO namenode.FSDirectory: ACLs enabled? false
18/06/08 10:39:43 INFO namenode.FSDirectory: XAttrs enabled? true
18/06/08 10:39:43 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/06/08 10:39:43 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/06/08 10:39:43 INFO util.GSet: Computing capacity for map cachedBlocks
18/06/08 10:39:43 INFO util.GSet: VM type       = 64-bit
18/06/08 10:39:43 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
18/06/08 10:39:43 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/06/08 10:39:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/06/08 10:39:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/06/08 10:39:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18/06/08 10:39:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/06/08 10:39:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/06/08 10:39:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/06/08 10:39:43 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/06/08 10:39:43 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/06/08 10:39:43 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/06/08 10:39:43 INFO util.GSet: VM type       = 64-bit
18/06/08 10:39:43 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
18/06/08 10:39:43 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/06/08 10:39:44 INFO namenode.FSImage: Allocated new BlockPoolId: BP-455103259-172.16.30.101-1528479584722
18/06/08 10:39:44 INFO common.Storage: Storage directory /home/yinzhengjie/ha/dfs/name1 has been successfully formatted.
18/06/08 10:39:44 INFO common.Storage: Storage directory /home/yinzhengjie/ha/dfs/name2 has been successfully formatted.
18/06/08 10:39:44 INFO namenode.FSImageFormatProtobuf: Saving image file /home/yinzhengjie/ha/dfs/name1/current/fsimage.ckpt_0000000000000000000 using no compression
18/06/08 10:39:44 INFO namenode.FSImageFormatProtobuf: Saving image file /home/yinzhengjie/ha/dfs/name2/current/fsimage.ckpt_0000000000000000000 using no compression
18/06/08 10:39:44 INFO namenode.FSImageFormatProtobuf: Image file /home/yinzhengjie/ha/dfs/name1/current/fsimage.ckpt_0000000000000000000 of size 358 bytes saved in 0 seconds.
18/06/08 10:39:44 INFO namenode.FSImageFormatProtobuf: Image file /home/yinzhengjie/ha/dfs/name2/current/fsimage.ckpt_0000000000000000000 of size 358 bytes saved in 0 seconds.
18/06/08 10:39:45 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid 0
18/06/08 10:39:45 INFO util.ExitUtil: Exiting with status 0
18/06/08 10:39:45 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at s101/172.16.30.101
************************************************************/
[yinzhengjie@s101 ~]$ echo $?
0
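Formatting wrote a fresh fsimage into both name directories; the cluster ID and block-pool ID it generated (matching the format log above) can be read back from the VERSION file:

[yinzhengjie@s101 ~]$ grep -E "clusterID|blockpoolID" /home/yinzhengjie/ha/dfs/name1/current/VERSION
clusterID=CID-d8db65bd-2f38-48df-bd5f-be7b8d9d362c
blockpoolID=BP-455103259-172.16.30.101-1528479584722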
3. Synchronize the working directory from s101 to s105

[yinzhengjie@s101 ~]$ scp -r /home/yinzhengjie/ha yinzhengjie@s105:~
VERSION 100% 205 0.2KB/s 00:00
seen_txid 100% 2 0.0KB/s 00:00
fsimage_0000000000000000000.md5 100% 62 0.1KB/s 00:00
fsimage_0000000000000000000 100% 358 0.4KB/s 00:00
VERSION 100% 205 0.2KB/s 00:00
seen_txid 100% 2 0.0KB/s 00:00
fsimage_0000000000000000000.md5 100% 62 0.1KB/s 00:00
fsimage_0000000000000000000 100% 358 0.4KB/s 00:00
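For reference, Hadoop also provides a built-in way to initialize the standby instead of copying the directory by hand: with the freshly formatted NameNode on s101 running, executing the following on s105 pulls the current metadata over automatically, equivalent in effect to the scp above:

[yinzhengjie@s105 ~]$ hdfs namenode -bootstrapStandby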
4. Start the HDFS processes

[yinzhengjie@s101 ~]$ start-dfs.sh
Starting namenodes on [s101 s105]
s105: starting namenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-namenode-s105.out
s101: starting namenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-namenode-s101.out
s104: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s104.out
s103: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s103.out
s102: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s102.out
Starting journal nodes [s102 s103 s104]
s102: journalnode running as process 2490. Stop it first.
s104: journalnode running as process 2532. Stop it first.
s103: journalnode running as process 2539. Stop it first.
[yinzhengjie@s101 ~]$ xcall.sh jps
s101 jps
3377 Jps
3117 NameNode
命令执行成功
s102 jps
2649 DataNode
2490 JournalNode
2764 Jps
命令执行成功
s103 jps
2539 JournalNode
2700 DataNode
2815 Jps
命令执行成功
s104 jps
2532 JournalNode
2693 DataNode
2809 Jps
命令执行成功
s105 jps
3171 NameNode
3254 Jps
命令执行成功
[yinzhengjie@s101 ~]$ 
5. Manually transition s101 to the active state

[yinzhengjie@s101 ~]$ hdfs haadmin -transitionToActive nn1        //manually switch s101 to the active state
[yinzhengjie@s101 ~]$ 
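Both NameNodes come up in the standby state after start-dfs.sh, which is why this manual transition is needed. The current state of each NameNode can be checked at any time:

[yinzhengjie@s101 ~]$ hdfs haadmin -getServiceState nn1
active
[yinzhengjie@s101 ~]$ hdfs haadmin -getServiceState nn2
standby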
With this, the high-availability setup is essentially complete. The one imperfection is that the NameNode state has to be switched by hand, which is rather tedious. Fortunately, the Hadoop ecosystem maintains a dedicated coordination tool, ZooKeeper, which we can use to manage the cluster far more conveniently; for details see https://www.cnblogs.com/yinzhengjie/p/9154265.html.