
Hadoop Cluster Installation and Verification

2013-09-28 

1. Upload the Hadoop package to /usr on the master

Version: hadoop-1.2.1.tar.gz

Extract it:

tar -zxvf hadoop-1.2.1.tar.gz

This produces a hadoop-1.2.1 directory in the current directory. Enter it and create a tmp directory for later use:

[root@master hadoop-1.2.1]# mkdir tmp

Back in /usr, give the hadoop user read/write ownership of hadoop-1.2.1:

[root@master usr]# chown -R hadoop:hadoop hadoop-1.2.1/

Aside: in my actual run, the tmp directory was created after granting permissions on the hadoop directory, so formatting the namenode failed with:

[hadoop@master conf]$ hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.

13/09/08 00:33:06 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master.hadoop/192.168.70.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.6.0_45
************************************************************/
13/09/08 00:33:06 INFO util.GSet: Computing capacity for map BlocksMap
13/09/08 00:33:06 INFO util.GSet: VM type       = 32-bit
13/09/08 00:33:06 INFO util.GSet: 2.0% max memory = 1013645312
13/09/08 00:33:06 INFO util.GSet: capacity      = 2^22 = 4194304 entries
13/09/08 00:33:06 INFO util.GSet: recommended=4194304, actual=4194304
13/09/08 00:33:06 INFO namenode.FSNamesystem: fsOwner=hadoop
13/09/08 00:33:06 INFO namenode.FSNamesystem: supergroup=supergroup
13/09/08 00:33:06 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/09/08 00:33:06 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/09/08 00:33:06 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/09/08 00:33:06 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
13/09/08 00:33:06 INFO namenode.NameNode: Caching file names occuring more than 10 times 
13/09/08 00:33:07 ERROR namenode.NameNode: java.io.IOException: Cannot create directory /usr/hadoop-1.2.1/tmp/dfs/name/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1337)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1356)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1261)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1467)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
13/09/08 00:33:07 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master.hadoop/192.168.70.101
************************************************************/
[hadoop@master conf]$ 

Fixed after correcting the permissions.
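The underlying pitfall: chown -R only affects directories that already exist, so tmp must be created before the recursive chown (or re-chowned afterwards). A minimal sketch of the correct order, using a scratch prefix instead of /usr so it runs without root; HADOOP_PREFIX is just an illustrative variable, and chown to the current user stands in for chown -R hadoop:hadoop:

```shell
# Create tmp BEFORE the recursive chown so the chown covers it.
HADOOP_PREFIX=$(mktemp -d)/hadoop-1.2.1
mkdir -p "$HADOOP_PREFIX/tmp"                      # tmp exists first
chown -R "$(id -un):$(id -gn)" "$HADOOP_PREFIX"    # recursive chown now covers tmp
stat -c '%U' "$HADOOP_PREFIX/tmp"                  # prints the owning user
```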

2. Configure the Hadoop environment variables (as root), on the master and all slaves:

[root@master conf]# vi /etc/profile

HADOOP_HOME=/usr/hadoop-1.2.1
export HADOOP_HOME
PATH=$PATH:$HADOOP_HOME/bin
export PATH

Load the environment variables:

[root@master conf]# source /etc/profile

Test them:

[root@master conf]# hadoop
Warning: $HADOOP_HOME is deprecated.

Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  namenode -format     format the DFS filesystem
...

Done.
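A quick scripted sanity check that the variables took effect in the current shell (a sketch mirroring the /etc/profile lines above):

```shell
# Re-create the profile settings and verify them in this shell.
export HADOOP_HOME=/usr/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin
echo "$HADOOP_HOME"
# Check that $HADOOP_HOME/bin is really on PATH.
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH ok" ;;
  *)                      echo "PATH missing $HADOOP_HOME/bin" ;;
esac
```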


3. Set the JAVA_HOME path in hadoop-env.sh:

[root@slave01 conf]# vi hadoop-env.sh 

# The java implementation to use.  Required.
export JAVA_HOME=/usr/jdk1.6.0_45


4. Edit core-site.xml:

<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/hadoop-1.2.1/tmp</value>
        </property>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://master.hadoop:9000</value>
        </property>
</configuration>

5. Edit hdfs-site.xml:


[hadoop@master conf]$ vi hdfs-site.xml 

<configuration>
        <property>
                <name>dfs.data.dir</name>
                <value>/usr/hadoop-1.2.1/data</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
</configuration>

6. Edit mapred-site.xml:

[hadoop@master conf]$ vi mapred-site.xml 

<configuration>
        <property>
                <name>mapred.job.tracker</name>
                <value>master.hadoop:9001</value>
        </property>
</configuration>

7. Edit masters and slaves


[hadoop@master conf]$ vi masters 

Add the hostname (or IP):

master.hadoop


[hadoop@master conf]$ vi slaves 

slave01.hadoop
slave02.hadoop

8. Distribute the configured Hadoop tree to the slaves. The hadoop user on the slaves does not yet have write permission under /usr, so copy as root on the destination side (the source user does not matter):

[root@master usr]# scp -r hadoop-1.2.1/ root@slave01.hadoop:/usr
...
[root@master usr]# scp -r hadoop-1.2.1/ root@slave02.hadoop:/usr

Then, on each slave, fix the ownership of the hadoop-1.2.1 directory.
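With more slaves, the copy-then-chown sequence is easier to keep consistent as a loop. A dry-run sketch that only prints the commands (hostnames taken from this article), so it can be reviewed before running for real:

```shell
# Generate the distribution commands for each slave (dry run; execute
# the printed lines, or drop the printf indirection, to run for real).
cmds=$(for h in slave01.hadoop slave02.hadoop; do
  printf 'scp -r hadoop-1.2.1/ root@%s:/usr\n' "$h"
  printf 'ssh root@%s chown -R hadoop:hadoop /usr/hadoop-1.2.1\n' "$h"
done)
echo "$cmds"
```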


9. Format the HDFS filesystem


hadoop namenode -format

Success is indicated by "...successfully formatted" in the output; see the aside in Step 1 for the failure mode.
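For scripted installs it helps to grep the format output for that marker rather than eyeballing it. A sketch; here a sample line modeled on Hadoop 1.x output stands in for actually running the command:

```shell
# Succeeds iff the namenode-format log contains the success marker.
format_ok() {
  grep -q "successfully formatted"
}

# Example against a sample log line; in a real run you would pipe:
#   hadoop namenode -format 2>&1 | format_ok
sample="13/09/08 00:40:12 INFO common.Storage: Storage directory /usr/hadoop-1.2.1/tmp/dfs/name has been successfully formatted."
echo "$sample" | format_ok && echo "format OK"
```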


10. Start Hadoop

Before starting, stop iptables on the master and all slaves; otherwise jobs may fail:


[root@master usr]# service iptables stop
iptables: Flushing firewall rules: [  OK  ]
iptables: Setting chains to policy ACCEPT: filter [  OK  ]
iptables: Unloading modules: [  OK  ]
[root@master usr]# 

Aside (the slave firewalls had been left running at first):

[hadoop@master hadoop-1.2.1]$ hadoop jar hadoop-examples-1.2.1.jar pi 10 100
Warning: $HADOOP_HOME is deprecated.

Number of Maps  = 10
Samples per Map = 100
13/09/08 02:17:05 INFO hdfs.DFSClient: Exception in createBlockOutputStream 192.168.70.102:50010 java.net.NoRouteToHostException: No route to host
13/09/08 02:17:05 INFO hdfs.DFSClient: Abandoning blk_9160013073143341141_4460
13/09/08 02:17:05 INFO hdfs.DFSClient: Excluding datanode 192.168.70.102:50010
13/09/08 02:17:05 INFO hdfs.DFSClient: Exception in createBlockOutputStream 192.168.70.103:50010 java.net.NoRouteToHostException: No route to host
13/09/08 02:17:05 INFO hdfs.DFSClient: Abandoning blk_-1734085534405596274_4461
13/09/08 02:17:05 INFO hdfs.DFSClient: Excluding datanode 192.168.70.103:50010
13/09/08 02:17:05 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/PiEstimator_TMP_3_141592654/in/part0 could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)

Resolved after stopping the firewalls. Also, prefer IP addresses in the configuration where possible.

Start the daemons:


[root@master usr]# su hadoop
[hadoop@master usr]$ start-all.sh 
Warning: $HADOOP_HOME is deprecated.

starting namenode, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-master.hadoop.out
slave01.hadoop: starting datanode, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave01.hadoop.out
slave02.hadoop: starting datanode, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave02.hadoop.out
The authenticity of host 'master.hadoop (192.168.70.101)' can't be established.
RSA key fingerprint is 6c:e0:d7:22:92:80:85:fb:a6:d6:a4:8f:75:b0:96:7e.
Are you sure you want to continue connecting (yes/no)? yes
master.hadoop: Warning: Permanently added 'master.hadoop,192.168.70.101' (RSA) to the list of known hosts.
master.hadoop: starting secondarynamenode, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-master.hadoop.out
starting jobtracker, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-master.hadoop.out
slave02.hadoop: starting tasktracker, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave02.hadoop.out
slave01.hadoop: starting tasktracker, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave01.hadoop.out
[hadoop@master usr]$ 

The log shows the startup order: namenode (master) -> datanode (slave01, slave02) -> secondarynamenode (master) -> jobtracker (master) -> finally tasktracker (slave01, slave02).


11. Verification

Check the Hadoop processes with jps on the master and each slave.

master:


[hadoop@master tmp]$ jps
6009 Jps
5560 SecondaryNameNode
5393 NameNode
5627 JobTracker
[hadoop@master tmp]$ 

slave01:


[hadoop@slave01 tmp]$ jps
3855 Jps
3698 TaskTracker
3636 DataNode

slave02:


[root@slave02 tmp]# jps
3628 TaskTracker
3748 Jps
3567 DataNode
[root@slave02 tmp]# 

Check the cluster status with hadoop dfsadmin -report:


[hadoop@master tmp]$ hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.

Configured Capacity: 14174945280 (13.2 GB)
Present Capacity: 7577288704 (7.06 GB)
DFS Remaining: 7577231360 (7.06 GB)
DFS Used: 57344 (56 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Name: 192.168.70.103:50010
Decommission Status : Normal
Configured Capacity: 7087472640 (6.6 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 3298820096 (3.07 GB)
DFS Remaining: 3788623872(3.53 GB)
DFS Used%: 0%
DFS Remaining%: 53.46%
Last contact: Sun Sep 08 01:19:18 PDT 2013

Name: 192.168.70.102:50010
Decommission Status : Normal
Configured Capacity: 7087472640 (6.6 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 3298836480 (3.07 GB)
DFS Remaining: 3788607488(3.53 GB)
DFS Used%: 0%
DFS Remaining%: 53.45%
Last contact: Sun Sep 08 01:19:17 PDT 2013
[hadoop@master tmp]$ 
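The report is also easy to check mechanically, for example to alert when a datanode drops out. A sketch that extracts the live-node count; here a line copied from the report above stands in for running the command:

```shell
# Extract the number of available datanodes from a dfsadmin report line.
# In a real check you would feed it:
#   hadoop dfsadmin -report | grep '^Datanodes available'
line="Datanodes available: 2 (2 total, 0 dead)"
live=$(printf '%s\n' "$line" | sed -n 's/^Datanodes available: \([0-9]*\).*/\1/p')
echo "$live"
```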

Web management pages (on the master's IP):


http://192.168.70.101:50030

http://192.168.70.101:50070/


12. Run a job: estimate Pi

[hadoop@master hadoop-1.2.1]$ hadoop jar hadoop-examples-1.2.1.jar pi 10 100

The first argument (10) is the number of map tasks to run; the second (100) is the number of samples per map.
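What each map does, in miniature: scatter random points in a square and use the fraction landing inside the inscribed circle to estimate Pi. The sketch below uses awk's rand(), whereas Hadoop's PiEstimator uses a quasi-random Halton sequence, so this only illustrates the idea:

```shell
# Monte Carlo estimate of Pi: the fraction of random points inside the
# unit circle approximates pi/4.
pi=$(awk 'BEGIN {
  srand(42); n = 200000; inside = 0
  for (i = 0; i < n; i++) {
    x = rand() * 2 - 1; y = rand() * 2 - 1
    if (x * x + y * y <= 1) inside++
  }
  printf "%.3f", 4 * inside / n
}')
echo "$pi"
```

With 200000 samples the estimate is typically within a few hundredths of 3.14159, which is why the 10x100-sample cluster run above only gets 3.148.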

Normal output:

[hadoop@master hadoop-1.2.1]$ hadoop jar hadoop-examples-1.2.1.jar pi 10 100
Warning: $HADOOP_HOME is deprecated.

Number of Maps  = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
13/09/08 02:21:50 INFO mapred.FileInputFormat: Total input paths to process : 10
13/09/08 02:21:52 INFO mapred.JobClient: Running job: job_201309080221_0001
13/09/08 02:21:53 INFO mapred.JobClient:  map 0% reduce 0%
13/09/08 02:24:06 INFO mapred.JobClient:  map 10% reduce 0%
13/09/08 02:24:07 INFO mapred.JobClient:  map 20% reduce 0%
13/09/08 02:24:21 INFO mapred.JobClient:  map 30% reduce 0%
13/09/08 02:24:28 INFO mapred.JobClient:  map 40% reduce 0%
13/09/08 02:24:31 INFO mapred.JobClient:  map 50% reduce 0%
13/09/08 02:24:32 INFO mapred.JobClient:  map 60% reduce 0%
13/09/08 02:24:38 INFO mapred.JobClient:  map 70% reduce 0%
13/09/08 02:24:41 INFO mapred.JobClient:  map 80% reduce 13%
13/09/08 02:24:44 INFO mapred.JobClient:  map 80% reduce 23%
13/09/08 02:24:45 INFO mapred.JobClient:  map 100% reduce 23%
13/09/08 02:24:47 INFO mapred.JobClient:  map 100% reduce 26%
13/09/08 02:24:53 INFO mapred.JobClient:  map 100% reduce 100%
13/09/08 02:24:54 INFO mapred.JobClient: Job complete: job_201309080221_0001
13/09/08 02:24:54 INFO mapred.JobClient: Counters: 30
13/09/08 02:24:54 INFO mapred.JobClient:   Job Counters 
13/09/08 02:24:54 INFO mapred.JobClient:     Launched reduce tasks=1
13/09/08 02:24:54 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=638017
13/09/08 02:24:54 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/09/08 02:24:54 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/09/08 02:24:54 INFO mapred.JobClient:     Launched map tasks=10
13/09/08 02:24:54 INFO mapred.JobClient:     Data-local map tasks=10
13/09/08 02:24:54 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=44458
13/09/08 02:24:54 INFO mapred.JobClient:   File Input Format Counters 
13/09/08 02:24:54 INFO mapred.JobClient:     Bytes Read=1180
13/09/08 02:24:54 INFO mapred.JobClient:   File Output Format Counters 
13/09/08 02:24:54 INFO mapred.JobClient:     Bytes Written=97
13/09/08 02:24:54 INFO mapred.JobClient:   FileSystemCounters
13/09/08 02:24:54 INFO mapred.JobClient:     FILE_BYTES_READ=226
13/09/08 02:24:54 INFO mapred.JobClient:     HDFS_BYTES_READ=2460
13/09/08 02:24:54 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=623419
13/09/08 02:24:54 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=215
13/09/08 02:24:54 INFO mapred.JobClient:   Map-Reduce Framework
13/09/08 02:24:54 INFO mapred.JobClient:     Map output materialized bytes=280
13/09/08 02:24:54 INFO mapred.JobClient:     Map input records=10
13/09/08 02:24:54 INFO mapred.JobClient:     Reduce shuffle bytes=280
13/09/08 02:24:54 INFO mapred.JobClient:     Spilled Records=40
13/09/08 02:24:54 INFO mapred.JobClient:     Map output bytes=180
13/09/08 02:24:54 INFO mapred.JobClient:     Total committed heap usage (bytes)=1414819840
13/09/08 02:24:54 INFO mapred.JobClient:     CPU time spent (ms)=377130
13/09/08 02:24:54 INFO mapred.JobClient:     Map input bytes=240
13/09/08 02:24:54 INFO mapred.JobClient:     SPLIT_RAW_BYTES=1280
13/09/08 02:24:54 INFO mapred.JobClient:     Combine input records=0
13/09/08 02:24:54 INFO mapred.JobClient:     Reduce input records=20
13/09/08 02:24:54 INFO mapred.JobClient:     Reduce input groups=20
13/09/08 02:24:54 INFO mapred.JobClient:     Combine output records=0
13/09/08 02:24:54 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1473769472
13/09/08 02:24:54 INFO mapred.JobClient:     Reduce output records=0
13/09/08 02:24:54 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=4130349056
13/09/08 02:24:54 INFO mapred.JobClient:     Map output records=20
Job Finished in 184.973 seconds
Estimated value of Pi is 3.14800000000000000000
[hadoop@master hadoop-1.2.1]$ 

The earlier failure was caused by the slave firewalls being left on; see the aside in Step 10.
