Hadoop 3.3.0 Installation and Configuration

I. Configure DNS
vim /etc/hosts

192.168.2.33    hadoop01
192.168.2.34    hadoop02
192.168.2.35    hadoop03
192.168.2.36    hadoop04
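
To confirm the mappings resolve, a quick check from any node (hostnames as configured above):

ping -c 1 hadoop01
ping -c 1 hadoop04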
II. Disable the Firewall
# stop the firewall
systemctl stop firewalld

# disable it at boot
systemctl disable firewalld
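
A quick status check (optional):

systemctl is-active firewalld
systemctl is-enabled firewalld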
III. Configure the JDK

Upload jdk-8u231-linux-x64.tar.gz to the server.

tar -zxvf jdk-8u231-linux-x64.tar.gz
mv jdk1.8.0_231 /opt/
# create a symlink
ln -s /opt/jdk1.8.0_231 /opt/jdk

# configure the Java environment
vim /etc/profile

# append at the end of the file
# Java
export JAVA_HOME=/opt/jdk
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin

# apply the environment variables
source /etc/profile

# verify the Java installation
java -version
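
If the JDK is wired up correctly, the output should look roughly like this (exact build strings may differ):

java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)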
IV. Build a Fully Distributed Hadoop Cluster

1. Download Hadoop
wget https://apache.website-solution.net/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
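
Optionally verify the archive before unpacking. This sketch assumes the matching .sha512 file is taken from the Apache archive, which publishes one for official releases:

wget https://archive.apache.org/dist/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz.sha512
sha512sum -c hadoop-3.3.0.tar.gz.sha512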
2. Configure Hadoop environment variables
# extract to /opt
tar -zxvf hadoop-3.3.0.tar.gz -C /opt/

vim /etc/profile
# hadoop
export HADOOP_HOME=/opt/hadoop-3.3.0
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

# after saving, apply the profile
source /etc/profile
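
To confirm the variables took effect:

hadoop version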
3. Set the JAVA_HOME parameter in the Hadoop environment scripts
cd /opt/hadoop-3.3.0/etc/hadoop
# add or modify the following parameter in hadoop-env.sh, mapred-env.sh, and yarn-env.sh
vim hadoop-env.sh
vim mapred-env.sh
vim yarn-env.sh
export JAVA_HOME="/opt/jdk"
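
Rather than editing the three files one by one, the same export can be appended in a loop; a convenience sketch, assuming the current directory is still /opt/hadoop-3.3.0/etc/hadoop:

for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
    echo 'export JAVA_HOME="/opt/jdk"' >> $f
done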
4. Modify the Hadoop configuration files

Create the temporary-data directory:

mkdir -p /opt/hadoop/tmp

In etc/hadoop under the Hadoop installation directory, the following files need to be modified: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, and workers.

cd /opt/hadoop-3.3.0/etc/hadoop
(1) core-site.xml (Common component properties)

<configuration>
    <property>
        <!-- the default file system URI -->
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop01:9000</value>
    </property>
    <property>
        <!-- base directory for Hadoop temporary files -->
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/tmp</value>
    </property>
</configuration>

(2) hdfs-site.xml (HDFS component properties)

<configuration>
    <property>
        <!-- NameNode web UI address -->
        <name>dfs.namenode.http-address</name>
        <value>hadoop01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop/dfs/data</value>
    </property>
    <property>
        <!-- number of block replicas -->
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>true</value>
        <description>When set to false, files can be written to HDFS without permission checks. Convenient, but guard against accidental deletion.</description>
    </property>
</configuration>

(3) mapred-site.xml (MapReduce component properties)

<configuration>
    <property>
        <!-- run MapReduce on YARN -->
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop01:19888</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>
</configuration>

(4) yarn-site.xml (resource scheduling properties)

<configuration>
    <property>
        <!-- ResourceManager host -->
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop01</value>
    </property>
    <property>
        <!-- auxiliary shuffle service for MapReduce -->
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop01:8088</value>
        <description>For external access, replace with the real external IP; otherwise it defaults to localhost:8088.</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
        <description>Maximum memory per container request, in MB; the default is 8192 MB.</description>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
        <description>Skip the virtual-memory check. Useful when running in virtual machines; avoids spurious container failures later.</description>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

(5) workers (replaces the slaves file of Hadoop 2.x)
hadoop01
hadoop02
hadoop03
hadoop04
(6) Configure the startup scripts: add HDFS and YARN user variables
# Add HDFS users: edit the following scripts, inserting these lines at the blank second line
cd /opt/hadoop-3.3.0/sbin
vim start-dfs.sh 
vim stop-dfs.sh

HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
# Add YARN users: edit the following scripts, inserting these lines at the blank second line
cd /opt/hadoop-3.3.0/sbin
vim start-yarn.sh 
vim stop-yarn.sh 

YARN_RESOURCEMANAGER_USER=root
HDFS_DATANODE_SECURE_USER=yarn
YARN_NODEMANAGER_USER=root
5. Clone hadoop01

(1) Clone hadoop01 to hadoop02, hadoop03, and hadoop04.
(2) Change the IP address (and hostname) of hadoop02, hadoop03, and hadoop04. An scp-based alternative for physical machines is sketched below.
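
If the nodes are separate physical machines rather than VM clones, copying the prepared directories achieves the same result; a sketch, assuming /etc/hosts is already identical everywhere (password prompts are expected, since SSH keys come in the next step):

scp -r /opt/jdk1.8.0_231 /opt/hadoop-3.3.0 root@hadoop02:/opt/
ssh root@hadoop02 'ln -s /opt/jdk1.8.0_231 /opt/jdk'
scp /etc/profile root@hadoop02:/etc/profile
# repeat for hadoop03 and hadoop04, then run source /etc/profile on each node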

6. Configure passwordless SSH
# generate an SSH key pair (run on every node, accepting the defaults)
ssh-keygen -t rsa

cd /root/.ssh
ls 

# on the master node (hadoop01), copy the public key into authorized_keys
cp id_rsa.pub authorized_keys

# copy authorized_keys to hadoop02
scp authorized_keys root@hadoop02:/root/.ssh/

# log in to hadoop02
cd .ssh/
cat id_rsa.pub >> authorized_keys
# then copy authorized_keys to hadoop03
scp authorized_keys root@hadoop03:/root/.ssh/

# log in to hadoop03
cd .ssh/
cat id_rsa.pub >> authorized_keys
# then copy authorized_keys to hadoop04
scp authorized_keys root@hadoop04:/root/.ssh/

# log in to hadoop04
cd .ssh/
cat id_rsa.pub >> authorized_keys
# copy the completed authorized_keys back to hadoop01, hadoop02, and hadoop03
scp authorized_keys root@hadoop01:/root/.ssh/
scp authorized_keys root@hadoop02:/root/.ssh/
scp authorized_keys root@hadoop03:/root/.ssh/

# verify passwordless login with ssh user@hostname or ssh <ip>
ssh root@hadoop02
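
The copy-and-append round trip above can also be replaced with ssh-copy-id, which appends the key and fixes file permissions in one step (run on every node, once per target host):

ssh-copy-id root@hadoop01
ssh-copy-id root@hadoop02
ssh-copy-id root@hadoop03
ssh-copy-id root@hadoop04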
7. Initialize & Start
cd /opt/hadoop-3.3.0
# format the NameNode
bin/hdfs namenode -format wmqhadoop

# start HDFS and YARN
sbin/start-dfs.sh
sbin/start-yarn.sh

# afterwards, everything can be started at once
sbin/start-all.sh

# stop
sbin/stop-all.sh
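
Once the daemons are up, cluster membership can also be checked from the command line:

# should report four live DataNodes
hdfs dfsadmin -report

# should list four NodeManagers
yarn node -list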
8. Verify that Hadoop started successfully
[root@hadoop01 ~]# jps
2400 ResourceManager
1929 DataNode
2537 NodeManager
3817 Jps
1787 NameNode
2155 SecondaryNameNode

[root@hadoop02 ~]# jps
1666 DataNode
2028 Jps
1773 NodeManager

[root@hadoop03 ~]# jps
1813 NodeManager
2072 Jps
1706 DataNode

[root@hadoop04 ~]# jps
1736 NodeManager
1996 Jps
1629 DataNode

Open http://hadoop01:8088 in a browser to view the ResourceManager page.

Open http://hadoop01:50070 to view the Hadoop NameNode page (the port configured in dfs.namenode.http-address above).
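
As a final smoke test, the bundled MapReduce example can be submitted to YARN; the jar path below matches the stock 3.3.0 layout, adjust if your install differs:

yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 2 10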

V. Install MySQL 5.7

1. Download
wget http://repo.mysql.com/yum/mysql-5.7-community/el/7/x86_64/mysql57-community-release-el7-10.noarch.rpm

rpm -ivh mysql57-community-release-el7-10.noarch.rpm
2. Install with yum
1) Install:
yum -y install mysql-community-server

2) Start MySQL:
systemctl start mysqld

3) Retrieve the temporary password generated at install time (use it for the first login):
grep 'temporary password' /var/log/mysqld.log
# example output (yours will differ): sGpt=V+8f,qv

3. Enable start at boot
systemctl enable mysqld
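
To confirm the service is running and enabled:

systemctl status mysqld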

4. Log in
mysql -uroot -p
# enter the temporary password from above

5. Change the password
ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';

# the following error may appear:
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
# the password is too simple for the default policy; MySQL's password rules are governed by validate_password_policy

# the minimum password length is set by validate_password_length and can be lowered:
set global validate_password_length=4;

# validate_password_policy controls the validation strategy; the default level is MEDIUM, and the following lowers it to LOW:
set global validate_password_policy=0;
# then rerun the ALTER USER statement above

6. Grant privileges to allow remote login
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
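
After granting, reload the privilege tables and test the remote login from another node; this assumes port 3306 is reachable (the firewall was disabled earlier):

FLUSH PRIVILEGES;

# from another machine:
mysql -h hadoop01 -uroot -proot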
