Common Hadoop Errors and Solutions (continuously compiled and updated, 2012-13)
1. Error log: Cannot create directory ... Name node is in safe mode. The NameNode is stuck in safe mode (here caused by forcing a shutdown with Ctrl+C).
Fix: $HADOOP_HOME/bin/hadoop dfsadmin -safemode leave
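The check-then-leave step above can be sketched as a small shell helper. `in_safe_mode` is a hypothetical name, and the parsed phrase assumes the usual "Safe mode is ON" text that `hadoop dfsadmin -safemode get` prints:

```shell
# Hypothetical helper: report (via exit status) whether the NameNode
# is in safe mode, judging from the text on stdin.
in_safe_mode() {
    grep -q 'Safe mode is ON'
}

# Intended use (needs a running cluster, so shown commented out):
# if $HADOOP_HOME/bin/hadoop dfsadmin -safemode get | in_safe_mode; then
#     $HADOOP_HOME/bin/hadoop dfsadmin -safemode leave
# fi
```

Only leave safe mode this way when you are sure the cluster is healthy; the NameNode normally exits safe mode on its own once enough block reports arrive.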
2. Error log: could only be replicated to 0 nodes, instead of 1
Run hadoop dfsadmin -report to check cluster status. The output shows:
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: ?%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
The number of available nodes is 0, so the DataNode has probably not started.
Run jps:
20211 SecondaryNameNode
14205 Jps
20043 NameNode
13996 RunJar
20290 JobTracker
This confirms the DataNode is indeed not running.
Fix: run $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
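The diagnosis above can be scripted by parsing the live-datanode count out of the report. The helper name `live_datanodes` and the exact report wording ("Datanodes available: N (N total, N dead)") are assumptions based on typical Hadoop 1.x output:

```shell
# Hypothetical helper: extract the live-datanode count from
# `hadoop dfsadmin -report` text fed on stdin.
live_datanodes() {
    grep -o 'Datanodes available: [0-9]*' | grep -o '[0-9][0-9]*$'
}

# Intended use (needs a running cluster, so shown commented out):
# n=$($HADOOP_HOME/bin/hadoop dfsadmin -report | live_datanodes)
# if [ "$n" -eq 0 ]; then
#     $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
# fi
```

If the DataNode still fails to start, check its log under $HADOOP_HOME/logs for the underlying cause rather than restarting repeatedly.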
3. Using a Hive reserved keyword as a column name causes a parse error.
Fix: wrap the keyword in backticks, e.g.: create table if not exists test(`update` bigint);