Pseudo-Distributed Hadoop Installation on Mac


Environment: Mac OS X 10.11.6, Hadoop 2.8.0, Java 1.8.0. The steps below assume this setup.

1. Enable SSH and remote login: System Preferences -> Sharing -> Remote Login -> Allow access for all users. Then set up passwordless SSH to localhost:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
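As a quick sanity check, SSH to localhost once; it should log in without prompting for a password (and this also adds localhost to known_hosts):

ssh localhost
exit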

2. Install Hadoop

brew install hadoop
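To verify the installation and see which version brew actually installed (the 2.8.0 paths used below assume that is the version you got), you can run:

hadoop version
brew info hadoop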

3. Configure pseudo-distributed mode. A series of configuration files needs to be edited; they are located in /usr/local/Cellar/hadoop/2.8.0/libexec/etc/hadoop.

Edit hadoop-env.sh: comment out the line

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

and add:

export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf=/dev/null"

Edit yarn-env.sh and add:

YARN_OPTS="$YARN_OPTS -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"

Edit core-site.xml:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

Edit hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>${user.home}/hadoop/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>${user.home}/hadoop/data/datanode</value>
  </property>
</configuration>

Edit mapred-site.xml (copy it from the template first):

cp /usr/local/Cellar/hadoop/2.8.0/libexec/etc/hadoop/mapred-site.xml.template /usr/local/Cellar/hadoop/2.8.0/libexec/etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Edit yarn-site.xml:

<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:8032</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
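With the configuration files in place, HDFS normally has to be formatted once before the very first start; a minimal sketch, assuming the brew-installed hadoop binaries are on your PATH:

hdfs namenode -format

This only needs to be done once; re-running it wipes the NameNode metadata.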

4. Run Hadoop

$ cd /usr/local/Cellar/hadoop/2.8.0/libexec/sbin
$ ./start-dfs.sh
$ ./start-yarn.sh
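Once both scripts finish, a small smoke test confirms that HDFS accepts reads and writes; the file name and paths here are only illustrative:

echo "hello hadoop" > /tmp/hello.txt
hdfs dfs -mkdir -p /user/$(whoami)
hdfs dfs -put /tmp/hello.txt /user/$(whoami)/
hdfs dfs -cat /user/$(whoami)/hello.txt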

Tips: if the following warning appears

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

it only means the pre-built native Hadoop library does not match your platform (32-bit library vs. 64-bit system). It is harmless and can be ignored.

5. Check the running status

jps
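If everything came up, the jps output should list roughly the following daemons (the process IDs below are made up):

1234 NameNode
2345 DataNode
3456 SecondaryNameNode
4567 ResourceManager
5678 NodeManager
6789 Jps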

6. Status can also be checked in a browser at the following URLs:

Cluster status: http://localhost:8088
HDFS status: http://localhost:50070
Secondary NameNode status: http://localhost:50090
