2 - Cluster Installation

1) Prepare the environment
hostname        ip              role
station1        192.168.80.51   NameNode, JobTracker, DataNode, TaskTracker
station2        192.168.80.52   DataNode, TaskTracker

User: cloud
2) Install Hadoop on each node
Create the install directory on each node:
[root@station1 ~]# mkdir /cloud
[root@station1 ~]# chmod 777 /cloud
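chmod 777 works, but it leaves the directory world-writable. A tighter alternative is to hand the directory to the cloud user outright. A sketch, run here against a stand-in path under /tmp so it works without root (on the real host, as root: chown cloud:cloud /cloud):

```shell
# Stand-in for /cloud so this sketch runs unprivileged anywhere.
DIR=/tmp/cloud.demo
mkdir -p "$DIR"
chown "$(id -un)" "$DIR"   # on the real host (as root): chown cloud:cloud /cloud
chmod 755 "$DIR"           # owner-writable only, instead of wide-open 777
```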
[cloud@station1 cloud]$ pwd
/cloud
[cloud@station1 cloud]$ ll
total 4
drwxr-xr-x 13 cloud cloud 4096 May  9 04:35 hadoop-1.0.3
Edit the configuration files on the NameNode:
[cloud@station1 conf]$ pwd
/cloud/hadoop-1.0.3/conf
[cloud@station1 conf]$ vi core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://station1:9000</value>
        </property>
</configuration>
[cloud@station1 conf]$ vi hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
</configuration>
[cloud@station1 conf]$ vi /cloud/hadoop-1.0.3/conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_09
Configure the other node the same way.
Next, configure the masters and slaves files:
[cloud@station1 conf]$ more masters
station1
[cloud@station1 conf]$ more slaves
station1
station2
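In Hadoop 1.x these two files drive the start scripts: slaves lists the hosts that each run a DataNode and a TaskTracker, masters lists the host(s) that run the SecondaryNameNode, and the NameNode and JobTracker run on whichever host start-all.sh is invoked from. A small sketch of that mapping, using illustrative copies of the two files under /tmp so it runs anywhere:

```shell
# Illustrative copies of the two conf files above.
printf 'station1\nstation2\n' > /tmp/slaves.demo   # contents of conf/slaves
printf 'station1\n' > /tmp/masters.demo            # contents of conf/masters

# slaves: each listed host runs a DataNode and a TaskTracker.
while read -r h; do
  echo "$h: DataNode TaskTracker"
done < /tmp/slaves.demo

# masters: each listed host runs a SecondaryNameNode.
while read -r h; do
  echo "$h: SecondaryNameNode"
done < /tmp/masters.demo

# The NameNode and JobTracker themselves run wherever start-all.sh
# is invoked (station1 in this setup).
```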
3) Edit the /etc/hosts file on both nodes

[root@station1 ~]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               station1 localhost.localdomain localhost
192.168.80.51           station1
192.168.80.52           station2
::1             localhost6.localdomain6 localhost6
4) Set up SSH trust
First create the cloud user on station2:
[root@station2 ~]# useradd cloud
[root@station2 ~]# passwd cloud
Changing password for user cloud.
New UNIX password:
BAD PASSWORD: it is too simplistic/systematic
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[cloud@station2 ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
/home/cloud/.ssh/id_dsa already exists.
Overwrite (y/n)? y
Your identification has been saved in /home/cloud/.ssh/id_dsa.
Your public key has been saved in /home/cloud/.ssh/id_dsa.pub.
The key fingerprint is:
4c:89:95:c5:89:cc:d8:a9:de:ce:e8:8e:7f:13:75:0f cloud@station2
[cloud@station2 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[cloud@station2 ~]$ chmod 644 ~/.ssh/authorized_keys
[cloud@station2 ~]$ scp ~/.ssh/authorized_keys station1:/home/cloud/.ssh/
cloud@station1's password:
authorized_keys                                              100% 1812     1.8KB/s   00:00
[cloud@station2 ~]$ ssh station1
Last login: Tue Oct 30 13:44:56 2012 from station2
[cloud@station1 ~]$ hostname
station1
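The session above only gives station2 passwordless login to station1. For start-all.sh to work, station1 must also reach station2 (and itself) without a password, so the same key-generation and append steps are repeated in the other direction, with each node's public key ending up in every node's authorized_keys. A local sketch of that pattern, with throwaway paths so it runs anywhere (the real files are ~/.ssh/id_dsa and ~/.ssh/authorized_keys):

```shell
# Throwaway paths; rsa is used here, while the session above used dsa.
KEY=/tmp/demo_id_rsa                  # stands in for ~/.ssh/id_dsa
AUTH=/tmp/demo_authorized_keys        # stands in for ~/.ssh/authorized_keys
rm -f "$KEY" "$KEY.pub" "$AUTH"
ssh-keygen -t rsa -N '' -f "$KEY" -q  # non-interactive key generation
cat "$KEY.pub" >> "$AUTH"             # append, so already-trusted keys survive
chmod 600 "$AUTH"                     # 600 is safest; 644 as above also works
```

On the real nodes the append step targets the remote ~/.ssh/authorized_keys, e.g. with ssh-copy-id or cat piped over ssh.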
5) Start the cluster
On the master node, format the NameNode and start the daemons:
[cloud@station1 conf]$ hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
12/10/30 14:59:03 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = station1/192.168.80.51
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
************************************************************/
12/10/30 14:59:03 INFO util.GSet: VM type       = 32-bit
12/10/30 14:59:03 INFO util.GSet: 2% max memory = 19.33375 MB
12/10/30 14:59:03 INFO util.GSet: capacity      = 2^22 = 4194304 entries
12/10/30 14:59:03 INFO util.GSet: recommended=4194304, actual=4194304
12/10/30 14:59:04 INFO namenode.FSNamesystem: fsOwner=cloud
12/10/30 14:59:04 INFO namenode.FSNamesystem: supergroup=supergroup
12/10/30 14:59:04 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/10/30 14:59:04 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/10/30 14:59:04 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/10/30 14:59:04 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/10/30 14:59:04 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/10/30 14:59:04 INFO common.Storage: Storage directory /tmp/hadoop-cloud/dfs/name has been successfully formatted.
12/10/30 14:59:04 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at station1/192.168.80.51
************************************************************/
[cloud@station1 hadoop-1.0.3]$ ./bin/start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /cloud/hadoop-1.0.3/libexec/../logs/hadoop-cloud-namenode-station1.out
station2: starting datanode, logging to /cloud/hadoop-1.0.3/libexec/../logs/hadoop-cloud-datanode-station2.out
station1: starting datanode, logging to /cloud/hadoop-1.0.3/libexec/../logs/hadoop-cloud-datanode-station1.out
station1: Error: JAVA_HOME is not set.
station2: Error: JAVA_HOME is not set.
station1: starting secondarynamenode, logging to /cloud/hadoop-1.0.3/libexec/../logs/hadoop-cloud-secondarynamenode-station1.out
station1: Error: JAVA_HOME is not set.
starting jobtracker, logging to /cloud/hadoop-1.0.3/libexec/../logs/hadoop-cloud-jobtracker-station1.out
station2: starting tasktracker, logging to /cloud/hadoop-1.0.3/libexec/../logs/hadoop-cloud-tasktracker-station2.out
station1: starting tasktracker, logging to /cloud/hadoop-1.0.3/libexec/../logs/hadoop-cloud-tasktracker-station1.out
station2: Error: JAVA_HOME is not set.
station1: Error: JAVA_HOME is not set.
[cloud@station1 hadoop-1.0.3]$ ./bin/stop-all.sh

The "JAVA_HOME is not set" errors above mean hadoop-env.sh was not picked up on those nodes: set export JAVA_HOME in /cloud/hadoop-1.0.3/conf/hadoop-env.sh on every node, then run start-all.sh again.
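After restarting, it is worth scanning the daemon .out files for that error before moving on. A sketch, using a stand-in log directory so it runs anywhere (the real one is /cloud/hadoop-1.0.3/logs):

```shell
# Stand-in for /cloud/hadoop-1.0.3/logs, populated with a sample error line.
LOGDIR=/tmp/hadoop-logs.demo
mkdir -p "$LOGDIR"
echo "Error: JAVA_HOME is not set." > "$LOGDIR/hadoop-cloud-datanode-station1.out"

# Any file still containing the error names a daemon whose hadoop-env.sh
# was not fixed.
grep -rl "JAVA_HOME is not set" "$LOGDIR"
```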
Installation complete.
Next, test the cluster.

Upload a test file:
[cloud@station1 cloud]$ hadoop fs -put eclipse-java-helios-SR2-linux-gtk.tar.gz /
The file uploaded without problems; it can be confirmed with hadoop fs -ls /.