7 - Getting Started with MapReduce
1) You must select a MAIN (main class), otherwise the program will not run.
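Selecting a main class amounts to writing a Main-Class entry into the JAR's manifest; a minimal META-INF/MANIFEST.MF for this project might look like this (assuming WordCount is in the default package, as in the code below):

Manifest-Version: 1.0
Main-Class: WordCount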
2) (screenshot not preserved)
3) The output directory is created by Hadoop itself. If this directory already exists, it must be deleted first, as follows:
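The deletion can be done with the HDFS shell; a plausible form, assuming the same output path as the test command in step 4 (this is the Hadoop 1.x syntax; newer releases use hadoop fs -rm -r):

hadoop fs -rmr hdfs://station1:9000/out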
4) Test command:
java -jar WordCount.jar hdfs://station1:9000/input/ hdfs://station1:9000/out
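Note that java -jar only works when the Hadoop libraries are available on the classpath (for example, bundled into WordCount.jar); the more common way to launch a job on a cluster is hadoop jar, sketched here with the same assumed paths:

hadoop jar WordCount.jar WordCount hdfs://station1:9000/input/ hdfs://station1:9000/out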
5) The core code is as follows:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static void main(String[] args) throws Exception {
        Job job = new Job();
        job.setJarByClass(WordCount.class);
        job.setJobName("WordCount");

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(Map.class);
        job.setReducerClass(MyReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // The program exits only after the job has finished running.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    /**
     * LongWritable, IntWritable and Text are Hadoop classes that wrap the
     * corresponding Java data types. They implement the WritableComparable
     * interface, so they can be serialized for data exchange in a
     * distributed environment; think of them as replacements for
     * long, int and String respectively.
     */
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        /**
         * The map method maps a single input key/value pair to intermediate
         * key/value pairs. The output pairs need not have the same types as
         * the input pair, and one input pair may map to zero or more output
         * pairs, each emitted via context.write(k, v).
         *
         * key   - the byte offset of the current line within the file
         * value - the content of the current line
         */
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString().toLowerCase(); // convert everything to lower case
            // StringTokenizer splits a String into tokens, similar to VB's split function.
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken()); // emit this word
                context.write(word, one);        // the word occurred once
            }
        }
    }

    public static class MyReduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}
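To make the map/reduce data flow concrete, here is a small illustrative trace (the input lines are made up for this example, not taken from the original test):

input line 1:   the quick fox
input line 2:   the fox
map output:     (the,1) (quick,1) (fox,1) (the,1) (fox,1)
reduce input:   (fox,[1,1]) (quick,[1]) (the,[1,1])
job output:     fox 2
                quick 1
                the 2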
6) Result:
(screenshot of the job output not preserved)
The result is identical to the original statistics; there are no problems at all:
[hadoop@station1 bin]$ grep the stop-all.sh |wc
     10     107     639
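The matching count from the job itself can be read back from HDFS; assuming the default output file name used by the new API, something like:

hadoop fs -cat hdfs://station1:9000/out/part-r-00000 | grep -w the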