Setting Up a Hadoop 2.7.6 Eclipse Development Environment on Windows 10


1. Install the Hadoop Environment

Installing the latest version is not recommended; it can cause problems. This article installs version 2.7.6. For the installation steps, see: https://blog.csdn.net/goodmentc/article/details/80946431

2. Install Eclipse

2.1 Download Eclipse

Download from: https://www.eclipse.org/downloads/ After the download finishes, double-click to install. The version used here is: Luna Service Release 2 (4.4.2)

2.2 Download the Hadoop Eclipse plugin

Download from: https://download.csdn.net/download/goodmentc/10527519 Unpack the archive, find the jar in the release directory (hadoop-eclipse-plugin-2.6.0.jar), and copy it into the plugins directory under the Eclipse installation. Different Hadoop versions need a matching plugin version; if one does not work, try a few others.

3. Configuration

1. Open Eclipse and go to Window -> Preferences to configure the plugin: set the path to the Hadoop installation directory.

2. Bring up the Map/Reduce settings view to make configuration easier: Window -> Show View -> Other, find Map/Reduce Locations, and click OK. A new Map/Reduce Locations view then appears in Eclipse.

Click the "elephant" icon in that view to open the settings dialog.

Port settings: the first port is 50010, taken from the Namenode console window after starting Hadoop, which shows 127.0.0.1:50010.

The second port is 9000, the value configured earlier in the core-site.xml configuration file.
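For reference, the relevant entry in core-site.xml typically looks like the sketch below (this assumes the standard Hadoop 2.x fs.defaultFS key; older setups may use the deprecated fs.default.name alias instead):

<configuration>
  <property>
    <!-- Default filesystem URI; its port (9000) is what the second port above must match. -->
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>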

Enter any name you like in the "Location name" field, then click "Finish".

4. Start Hadoop

In the sbin directory of the Hadoop installation, run the command: start-all.cmd
Once startup succeeds, four console windows open automatically. Run jps to check the running processes: besides the jps process itself, there should be four Hadoop processes.

Create the input directory:

D:\develop\hadoop-2.7.6\bin>hdfs dfs -mkdir hdfs://localhost:9000/testdir

Upload a file:

D:\develop\hadoop-2.7.6\bin>hadoop fs -put yarn.cmd hdfs://localhost:9000/testdir/input
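To confirm that the upload worked, you can list the target path with a standard HDFS shell command (the listing output will depend on your setup):

D:\develop\hadoop-2.7.6\bin>hadoop fs -ls hdfs://localhost:9000/testdir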

5. Create the Project

5.1 Create a Map/Reduce project named wordcount

The steps are omitted here; with the plugin installed, a Map/Reduce Project wizard is typically available under File -> New -> Project.

5.2 Create the test class MyWordCount

package hdp.test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class MyWordCount {

    // Mapper: splits each input line into tokens and emits a (word, 1) pair per token.
    public static class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer stn = new StringTokenizer(value.toString());
            while (stn.hasMoreTokens()) {
                word.set(stn.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sums the counts emitted for each word.
    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // Strip generic Hadoop options; what remains are the input and output paths.
        String[] cliArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (cliArgs.length != 2) {
            System.err.println("Usage: mywordcount <in> <out>");
            System.exit(2);
        }
        Job myJob = Job.getInstance(conf, "My first job");
        myJob.setJarByClass(MyWordCount.class);
        myJob.setMapperClass(WordCountMapper.class);
        myJob.setReducerClass(WordCountReducer.class);
        myJob.setCombinerClass(WordCountReducer.class);
        myJob.setOutputKeyClass(Text.class);
        myJob.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(myJob, new Path(cliArgs[0]));
        FileOutputFormat.setOutputPath(myJob, new Path(cliArgs[1]));
        boolean isSucced = myJob.waitForCompletion(true);
        System.out.println("result:" + isSucced);
        System.exit(isSucced ? 0 : 1);
    }
}
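One design note: the job reuses WordCountReducer as the combiner (setCombinerClass). This is safe here because summing counts is associative and commutative, so computing partial sums on the map side yields the same final totals; a reducer whose logic does not have that property cannot be reused as a combiner.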

Code note: the String[] cliArgs array holds the run arguments configured in Eclipse:

[hdfs://localhost:9000/testdir/input, hdfs://localhost:9000/testdir/out]

5.3 Set the WordCount run arguments

Right-click the wordcount project, choose Run As -> Run Configurations, and enter the program arguments: hdfs://localhost:9000/testdir/input hdfs://localhost:9000/testdir/out These are the job's input and output paths. The local file was uploaded to the input path earlier (it already exists); out is the HDFS output directory and must not exist yet. If it already exists, delete it first or use a different name, otherwise the job fails at run time.

5.4 Run the program

On success, the program prints: result:true
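To inspect the actual word counts, print the reduce output file; part-r-00000 is the standard name Hadoop gives the first (here, the only) reducer's output:

D:\develop\hadoop-2.7.6\bin>hadoop fs -cat hdfs://localhost:9000/testdir/out/part-r-00000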

6. Errors Encountered

1. The out directory already exists:

Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:9000/testdir/out already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at hdp.test.MyWordCount.main(MyWordCount.java:66)

This error occurs when the program is run again after a successful run. Cause: the out directory already exists. Fix: delete the out directory and rerun the program:

D:\develop\hadoop-2.7.6\bin>hadoop fs -rm -r hdfs://localhost:9000/testdir/out
18/07/07 19:10:37 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted hdfs://localhost:9000/testdir/out

2. The input directory does not exist:

Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/testdir/input
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at hdp.test.MyWordCount.main(MyWordCount.java:66)

Fix: create the input directory:

D:\develop\hadoop-2.7.6\bin>hadoop fs -mkdir hdfs://localhost:9000/testdir/input
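Note that this creates input as a directory, whereas the earlier -put created it as a single file; both work, since FileInputFormat accepts either a file or a directory of files. If the newly created directory is empty, upload a file into it (for example with the hadoop fs -put command from section 4) before rerunning, or the job will have no input to process.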

7. Other

The program and steps above were personally verified. This article references: https://blog.csdn.net/houjingjun/article/details/70198223 . Thanks!

