Hadoop 2.2.0 Single-Node Installation (Notes)
Note: This new Hadoop release differs considerably from earlier versions, and many of its daemons are unfamiliar to me. These are just notes from a first installation, so some points are vague. Installing Java, adding the hadoop user, and configuring SSH are not covered in detail here.
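Since SSH setup is skipped above, here is the usual single-node recipe for reference: passwordless SSH from the hadoop user to localhost, which the start scripts rely on. This is a sketch I am adding, not part of the original notes:

```shell
# Create the hadoop user's .ssh directory and a passphrase-less key
# (the key generation is skipped if a key already exists)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -q -t rsa -P '' -f ~/.ssh/id_rsa
# Authorize the key for logins to this machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify: this should not prompt for a password
# ssh localhost hostname
```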
Environment variables:
$ vi .bash_profile
export JAVA_HOME=/usr/java/jdk1.7.0_45
export HADOOP_HOME=/app/hadoop/hadoop-2.2.0
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export CLASSPATH=./:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
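To apply the variables to the current session and sanity-check them, something like the following works (a sketch; the paths mirror the .bash_profile entries above, so adjust them to your own install):

```shell
# Set the variables in this shell (same values as in .bash_profile above)
export JAVA_HOME=/usr/java/jdk1.7.0_45
export HADOOP_HOME=/app/hadoop/hadoop-2.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Confirm the Hadoop directories actually landed on PATH
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
  *)                      echo "hadoop bin missing from PATH" ;;
esac
```

Once Hadoop is extracted, `java -version` and `hadoop version` are the usual smoke tests.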
Download Hadoop:
http://apache.fayea.com/apache-mirror/hadoop/common/hadoop-2.2.0/
Upload and extract:
[hadoop@hadoop01 ~]$ tar -xvf hadoop-2.2.0.tar.gz
(I don't understand what the official package with "src" in its name is; its directory structure is completely different and doesn't look like an installation package at all. But this release seems to be an unbuilt version, meaning the daemons will start while many features don't work; and compiling it with ant isn't possible since there is no build.xml. Something to investigate later. For now, just get Hadoop running.)
(Extract it wherever you like; don't copy these paths verbatim. Create your own directories, including the ones referenced in the configuration below, but be sure to create them as the hadoop user, otherwise the daemons won't have permission to use them.)
Move the extracted tree into the software directory:
/app/hadoop/hadoop-2.2.0
Edit the Hadoop configuration files.
Installation reference:
http://wenku.baidu.com/view/1681511a52d380eb62946df6.html
All of the configuration edits below are made in the following directory:
[hadoop@hadoop01 hadoop]$ pwd
/app/hadoop/hadoop-2.2.0/etc/hadoop
core-site.xml
[hadoop@hadoop01 hadoop]$ vi core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:8020</value>
<description>The name of the default file system. Either the literal string "local" or a host:port for NDFS.
</description>
<final>true</final>
</property>
</configuration>
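A side note that is my addition, not part of the original notes: fs.default.name still works in 2.2.0 but is deprecated; the 2.x property name is fs.defaultFS, so the same setting can also be written as:

```xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:8020</value>
</property>
```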
hdfs-site.xml
[hadoop@hadoop01 hadoop]$ vi hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/dfs/name</value>
<description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/dfs/data</value>
<description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
</description>
<final>true</final>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
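The name and data directories configured above must exist and be writable by the hadoop user before the NameNode is formatted. A sketch (DFS_BASE is a convenience variable I've introduced; its default matches the paths used above):

```shell
# Create the NameNode and DataNode storage directories from hdfs-site.xml.
# Run as the hadoop user so the daemons end up owning the directories.
DFS_BASE=${DFS_BASE:-/home/hadoop/dfs}
mkdir -p "$DFS_BASE/name" "$DFS_BASE/data"
chmod 700 "$DFS_BASE/name" "$DFS_BASE/data"  # HDFS warns if data dirs are too open
ls -ld "$DFS_BASE/name" "$DFS_BASE/data"
```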
mapred-site.xml
[hadoop@hadoop01 hadoop]$ vi mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>/home/hadoop/mapred/system</value>
<final>true</final>
</property>
<property>
<name>mapred.local.dir</name>
<value>/home/hadoop/mapred/local</value>
<final>true</final>
</property>
</configuration>
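One thing worth noting (my addition, not from the original notes): a fresh 2.2.0 extract ships only mapred-site.xml.template, so the file may need to be created from the template before editing:

```shell
# Create mapred-site.xml from the shipped template if it does not exist yet.
# HADOOP_CONF_DIR defaults to the configuration directory used in these notes.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/app/hadoop/hadoop-2.2.0/etc/hadoop}
if [ -f "$HADOOP_CONF_DIR/mapred-site.xml.template" ] && [ ! -f "$HADOOP_CONF_DIR/mapred-site.xml" ]; then
  cp "$HADOOP_CONF_DIR/mapred-site.xml.template" "$HADOOP_CONF_DIR/mapred-site.xml"
  echo "created mapred-site.xml"
else
  echo "nothing to do"
fi
```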
yarn-site.xml: the defaults are fine.
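The defaults are enough to start the YARN daemons, but 2.2.0 usually also needs the shuffle auxiliary service configured in yarn-site.xml before MapReduce jobs will actually run. This addition is mine, not part of the original notes:

```xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
```

Note that in 2.2.0 the value is mapreduce_shuffle; some earlier 2.x releases used mapreduce.shuffle.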
hadoop-env.sh
[hadoop@hadoop01 hadoop]$ vi hadoop-env.sh
Add:
export JAVA_HOME=/usr/java/jdk1.7.0_45
Start Hadoop
Format the NameNode:
[hadoop@hadoop01 ~]$ hdfs namenode -format
(This differs from earlier versions: the hdfs command now lives in bin.)
Start the daemons. (The individual commands below can be replaced by a single command. I don't yet understand what each of them does, so I'm recording them here anyway; feel free to skip them and just run sbin/start-all.sh, which in 2.x simply delegates to start-dfs.sh and start-yarn.sh.)
[hadoop@hadoop01 ~]$ hadoop-daemon.sh start namenode
[hadoop@hadoop01 ~]$ hadoop-daemon.sh start datanode
Start the YARN daemons:
[hadoop@hadoop01 ~]$ yarn-daemon.sh start resourcemanager
[hadoop@hadoop01 ~]$ yarn-daemon.sh start nodemanager
[hadoop@hadoop01 ~]$ start-yarn.sh
Check that the processes started:
[hadoop@hadoop01 ~]$ jps
2912 NameNode
5499 ResourceManager
2981 DataNode
6671 Jps
6641 NodeManager
6473 SecondaryNameNode
If you see output like the above, everything is up.
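The jps check can also be scripted. A small sketch (check_daemons is my own helper name, not a Hadoop command) that compares jps output against the daemon list shown above:

```shell
# check_daemons: report any expected Hadoop daemon missing from jps output.
check_daemons() {
  missing=""
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    # -w avoids e.g. "SecondaryNameNode" satisfying the "NameNode" check
    printf '%s\n' "$1" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "all daemons running"
  else
    echo "missing:$missing"
  fi
}

# Typical use on the cluster node:
# check_daemons "$(jps)"
```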
Check the Hadoop resource-management page:
http://localhost:8088
Note: once the cluster is up, the numbers on this page should not all be 0; if they are, something is wrong. (The NameNode web UI is at http://localhost:50070.)
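A quick way to confirm the page is actually being served from the command line (a sketch; RM_URL is just a variable I've introduced to parameterize the default address used above):

```shell
# Probe the ResourceManager web UI; a successful response means it is serving pages.
RM_URL=${RM_URL:-http://localhost:8088}
if curl -sf -o /dev/null --max-time 5 "$RM_URL"; then
  echo "ResourceManager UI reachable at $RM_URL"
else
  echo "ResourceManager UI not reachable at $RM_URL"
fi
```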