
Exceptions You May Encounter in Hadoop

2012-08-27 

1.question

  2011-08-15 13:07:42,558 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server0/192.168.2.10:9000. Already tried 5 time(s).
  2011-08-15 13:07:42,558 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server0/192.168.2.10:9000. Already tried 5 time(s).
  2011-08-15 13:07:42,558 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server0/192.168.2.10:9000. Already tried 5 time(s).
  2011-08-15 13:07:42,558 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server0/192.168.2.10:9000. Already tried 5 time(s).
  2011-08-15 13:07:42,558 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server0/192.168.2.10:9000. Already tried 5 time(s).
 
    answer:
 
    The NameNode did not start. Check the NameNode log to find and fix the cause.
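
    A quick way to act on this advice is to pull only the failure lines out of the NameNode log. The snippet below is a sketch: it builds a sample log in a temp file (a real log would be under something like `$HADOOP_HOME/logs/hadoop-*-namenode-*.log`; the `BindException` line is an invented example of a startup failure) and filters it:

```shell
# Stand-in for the real NameNode log; on a real node point LOG at
# $HADOOP_HOME/logs/hadoop-<user>-namenode-<host>.log instead.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2011-08-15 13:05:01,100 INFO  namenode.NameNode - STARTUP_MSG
2011-08-15 13:05:02,200 ERROR namenode.NameNode - java.net.BindException: Address already in use
EOF
# Surface only the failure lines to see why the NameNode did not come up.
grep -E 'ERROR|FATAL' "$LOG"
```

Once the failing line is identified (port in use, bad config path, permission problem, ...), fix that cause and restart the NameNode; the client retries will then succeed.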
 
2.question

   ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /server/bin/hadoop/data: namenode namespaceID = 1866210138; datanode namespaceID = 629576566 
  
   answer:
   
   The namespaceIDs conflict, typically because the NameNode was reformatted while the DataNodes still hold the old ID. Stop the cluster, delete the stale name/data directories, then reformat the NameNode; doing the same on all slaves is optional.
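
   An alternative to deleting the data directory is to edit the DataNode's `VERSION` file so its namespaceID matches the NameNode's. The sketch below simulates that file in a temp directory (on a real node it sits under the DataNode data dir from the error, e.g. `/server/bin/hadoop/data/current/VERSION`) and rewrites the ID with `sed`:

```shell
# Simulated DataNode data dir; on a real node edit the actual
# <dfs.data.dir>/current/VERSION in place (with the DataNode stopped).
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/current"
cat > "$DATA_DIR/current/VERSION" <<'EOF'
namespaceID=629576566
storageType=DATA_NODE
EOF
# Rewrite the DataNode's namespaceID to the NameNode's value
# (1866210138 in the error message above).
sed -i 's/^namespaceID=.*/namespaceID=1866210138/' "$DATA_DIR/current/VERSION"
grep '^namespaceID=' "$DATA_DIR/current/VERSION"
```

This keeps the existing blocks on the DataNode, whereas deleting the data directory discards them.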
  
3.question

 
  2011-08-15 17:26:57,748 ERROR namenode.NameNode - java.lang.NullPointerException
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:136)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:176)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:206)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:240)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)  
  
   answer:
    hdfs://server0:9000/
  
    The problem is the trailing "/" after 9000: no URI or path in the Hadoop configuration files should end with a trailing "/".
    **Be sure to sync the change to all slaves afterwards.
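
    Concretely, the NameNode URI in core-site.xml (or hadoop-site.xml on older releases) should look like this sketch, with no trailing "/":

```xml
<!-- core-site.xml fragment: fs.default.name has NO trailing slash -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://server0:9000</value>
</property>
```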
   
4.question

 
  Exception in thread "main" java.io.IOException: Call to server0/192.168.2.10:9000 failed on local exception: java.io.EOFException
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
        at org.apache.hadoop.ipc.Client.call(Client.java:743)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:94)
       
       
        2011-08-15 18:24:57,507 WARN  ipc.Server - Incorrect header or version mismatch from 192.168.2.10:42413 got version 3 expected version 4
       
  answer:
    
     If a slave cannot reach the master even though the configuration files are correct, this error indicates a Hadoop version mismatch: the Hadoop bundled with Nutch 1.2 is a different version from the one running on the cluster (hence "got version 3 expected version 4" on the RPC layer).
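
     A quick sanity check is to compare the version embedded in the client's and the cluster's `hadoop-core` jar filenames. The snippet below is a sketch with invented sample names; substitute the jars actually found under your Nutch `lib/` directory and your Hadoop install:

```shell
# Sample jar names (assumptions); replace with e.g.
#   ls $NUTCH_HOME/lib/hadoop-core-*.jar  and  ls $HADOOP_HOME/hadoop-core-*.jar
client_jar="hadoop-core-0.20.2.jar"       # jar shipped with the client (e.g. Nutch)
cluster_jar="hadoop-core-0.20.203.0.jar"  # jar running on the cluster
# Extract the version part of a hadoop-core jar filename.
ver() { echo "$1" | sed 's/hadoop-core-\(.*\)\.jar/\1/'; }
if [ "$(ver "$client_jar")" != "$(ver "$cluster_jar")" ]; then
  echo "version mismatch: $(ver "$client_jar") vs $(ver "$cluster_jar")"
fi
```

If the versions differ, replace the client-side jar with the cluster's so both speak the same RPC protocol version.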
    
5.question

     2011-08-16 17:07:00,946 ERROR datanode.DataNode - DatanodeRegistration(192.168.2.12:50010, storageID=DS-1678238992-127.0.0.2-50010-1313485333243, infoPort=50075, ipcPort=50020):DataXceiver
        org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block blk_6201731654815689582_1003 is valid, and cannot be written to.
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:983)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:98)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
        at java.lang.Thread.run(Thread.java:662)
       
   answer:
  
       The cause here is name resolution: /etc/hosts must map each node's IP
       address to its hostname.

       For example:
       #hadoop master
192.168.2.10    server0
192.168.2.11    server1
192.168.2.12    server2
192.168.2.13    server3

**If the problem persists after that change:

In /etc/HOSTNAME, the name must be the actual hostname of the master or slave
machine, never localhost.
For example, on the server1 machine,
HOSTNAME must contain server1.
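
A simple check (sketch) that the local hostname resolves to the node's real IP rather than a loopback address: the snippet builds a sample hosts file in a temp location (on a real node inspect /etc/hosts itself, and set NODE from `hostname`):

```shell
# Simulated /etc/hosts; on a real node read /etc/hosts directly.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
127.0.0.1       localhost
192.168.2.11    server1
EOF
NODE=server1   # this machine's hostname (normally: NODE=$(hostname))
ip=$(awk -v h="$NODE" '$2 == h {print $1}' "$HOSTS")
echo "resolved $NODE -> $ip"
case "$ip" in 127.*) echo "BAD: $NODE maps to loopback" ;; esac
```

If the hostname resolves to 127.x.x.x, DataNodes register with an address other nodes cannot reach, which produces errors like the DataXceiver failure above.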
