
MapReduce Quick Start Series (13) | Implementing Reduce-Side Join and Map-Side Join in MapReduce

不温卜火 · Published 2020-10-28

In this post, I'll walk you through several ways of implementing joins in MapReduce.

1. Reduce Join

1.1 How Reduce Join Works

  The Map side's main job: tag the key/value pairs coming from different tables or files so that records from different sources can be told apart. Then emit each record with the join field as the key, and the rest of the record plus the new tag as the value.

  The Reduce side's main job: the shuffle has already grouped records by the join-field key, so within each group we only need to separate the records that came from different files (tagged during the Map stage) and then merge them.
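As a quick worked example with pid as the join field (the pid value 01 is assumed for illustration; an explicit tag is shown here, although the implementation below encodes it differently):

order.txt record  1001  01  1   →  map emits  key=01, value=(tag=order, 1001, 1)
pd.txt record     01    小米     →  map emits  key=01, value=(tag=pd, 小米)
reduce receives   key=01, values=[(tag=pd, 小米), (tag=order, 1001, 1), (tag=order, 1004, 4)]
reduce emits      1001  小米  1   and   1004  小米  4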

1.2 Reduce Join Example

1. Requirement

(Figure: sample input data, order.txt and pd.txt)

Merge the product information table into the order table by product pid. The final result should look like this:

id      pname   amount
1001    小米    1
1004    小米    4
1002    华为    2
1005    华为    5
1003    格力    3
1006    格力    6
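For reference, a pair of tab-separated input files consistent with this result might look as follows; the original figures showing the input are not reproduced here, so the pid values 01/02/03 are assumptions:

order.txt (id  pid  amount):      pd.txt (pid  pname):
1001  01  1                       01  小米
1002  02  2                       02  华为
1003  03  3                       03  格力
1004  01  4
1005  02  5
1006  03  6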

2. Requirement Analysis

  Using the join field as the Map output key, the rows from both tables that satisfy the join condition are sent, together with information about which file each row came from, to the same ReduceTask, where the data is stitched together in the Reduce phase, as shown in the figure below.

(Figure: reduce-side join data flow)

3. Code

  • 1. Create the OrderBean class that holds the merged order and product fields
package com.buwenbuhuo.reducejoin;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
/**
 * @author 卜温不火
 * @create 2020-04-25 17:24
 * com.buwenbuhuo.reducejoin - the name of the target package where the new class or interface will be created.
 * mapreduce0422 - the name of the current project.
 */
public class OrderBean implements WritableComparable<OrderBean> {
    private String id;
    private String pid;
    private int amount;
    private String pname;

    @Override
    public String toString() {
        return id + "\t" + pname + "\t" + amount;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getPid() {
        return pid;
    }

    public void setPid(String pid) {
        this.pid = pid;
    }

    public int getAmount() {
        return amount;
    }

    public void setAmount(int amount) {
        this.amount = amount;
    }

    public String getPname() {
        return pname;
    }

    public void setPname(String pname) {
        this.pname = pname;
    }

    @Override
    public int compareTo(OrderBean o) {
        // Primary sort: by pid, so records with the same pid end up adjacent
        int compare = this.pid.compareTo(o.pid);

        if (compare == 0) {
            // Secondary sort: pname descending, so the product record
            // (non-empty pname) sorts before the order records (empty pname)
            return o.pname.compareTo(this.pname);
        } else {
            return compare;
        }
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(id);
        out.writeUTF(pid);
        out.writeInt(amount);
        out.writeUTF(pname);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.id = in.readUTF();
        this.pid = in.readUTF();
        this.amount = in.readInt();
        this.pname = in.readUTF();
    }
}
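Note the descending secondary sort on pname in compareTo: the product record from pd.txt carries a non-empty pname while the order records leave it empty, so within each pid group the product record always sorts first. RJReducer below relies on this ordering.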
  • 2. Write the RJMapper class
package com.buwenbuhuo.reducejoin;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

import java.io.IOException;
/**
 * @author 卜温不火
 * @create 2020-04-25 17:24
 * com.buwenbuhuo.reducejoin - the name of the target package where the new class or interface will be created.
 * mapreduce0422 - the name of the current project.
 */
public class RJMapper extends Mapper<LongWritable, Text, OrderBean, NullWritable> {

    private OrderBean orderBean = new OrderBean();

    private String filename;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        FileSplit fs = (FileSplit) context.getInputSplit();
        filename = fs.getPath().getName();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String[] fields = value.toString().split("\t");
        if (filename.equals("order.txt")) {
            // order.txt line: id \t pid \t amount
            orderBean.setId(fields[0]);
            orderBean.setPid(fields[1]);
            orderBean.setAmount(Integer.parseInt(fields[2]));
            orderBean.setPname("");
        } else {
            // pd.txt line: pid \t pname
            orderBean.setPid(fields[0]);
            orderBean.setPname(fields[1]);
            orderBean.setId("");
            orderBean.setAmount(0);
        }
        context.write(orderBean, NullWritable.get());
    }
}
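Note that no explicit flag field is added here: the mapper tells the two sources apart by filename, and the fields it leaves empty (pname for orders, id/amount for products), together with OrderBean's secondary sort, play the role of the tag.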
  • 3. Write the RJReducer class
package com.buwenbuhuo.reducejoin;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;
import java.util.Iterator;
/**
 * @author 卜温不火
 * @create 2020-04-25 17:24
 * com.buwenbuhuo.reducejoin - the name of the target package where the new class or interface will be created.
 * mapreduce0422 - the name of the current project.
 */
public class RJReducer extends Reducer<OrderBean, NullWritable, OrderBean, NullWritable> {

    @Override
    protected void reduce(OrderBean key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
        // Get an iterator over the group of records sharing this pid
        Iterator<NullWritable> iterator = values.iterator();
        // Advance to the first OrderBean: thanks to the secondary sort this is
        // the product record from pd.txt
        iterator.next();
        // Take the product name from that first record
        String pname = key.getPname();

        // Iterate over the remaining OrderBeans (the order records); note that
        // advancing the values iterator also updates the fields of 'key'
        while (iterator.hasNext()) {
            iterator.next();
            key.setPname(pname);
            context.write(key, NullWritable.get());
        }
    }
}
  • 4. Write the RJComparator class (the grouping comparator)
package com.buwenbuhuo.reducejoin;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
/**
 * @author 卜温不火
 * @create 2020-04-25 17:24
 * com.buwenbuhuo.reducejoin - the name of the target package where the new class or interface will be created.
 * mapreduce0422 - the name of the current project.
 */
public class RJComparator extends WritableComparator {

    protected RJComparator() {
        // true: create OrderBean instances so compare() can deserialize the keys
        super(OrderBean.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        // Group by pid only, so all records with the same pid reach one reduce() call
        OrderBean oa = (OrderBean) a;
        OrderBean ob = (OrderBean) b;
        return oa.getPid().compareTo(ob.getPid());
    }
}
  • 5. Write the RJDriver class
package com.buwenbuhuo.reducejoin;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
/**
 * @author 卜温不火
 * @create 2020-04-25 17:24
 * com.buwenbuhuo.reducejoin - the name of the target package where the new class or interface will be created.
 * mapreduce0422 - the name of the current project.
 */
public class RJDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());

        job.setJarByClass(RJDriver.class);

        job.setMapperClass(RJMapper.class);
        job.setReducerClass(RJReducer.class);

        job.setMapOutputKeyClass(OrderBean.class);
        job.setMapOutputValueClass(NullWritable.class);

        job.setOutputKeyClass(OrderBean.class);
        job.setOutputValueClass(NullWritable.class);

        // Group map output by pid only (via RJComparator), so each pid's records
        // share a single reduce() call
        job.setGroupingComparatorClass(RJComparator.class);

        FileInputFormat.setInputPaths(job, new Path("d:\\input"));
        FileOutputFormat.setOutputPath(job, new Path("d:\\output"));

        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}

4. Run and Check the Results

  • 1. Run

(Figure: running the reduce join job)

  • 2. Results

(Figure: output of the reduce-side join)
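Given the target table above, the output file (part-r-00000) should contain the following lines (the order within each pid group may vary):

1001	小米	1
1004	小米	4
1002	华为	2
1005	华为	5
1003	格力	3
1006	格力	6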

The results are correct, which means our reduce-side join is working!

2. Map Join

2.1 When to Use It

Map Join is suited to the case where one table is very small and the other is very large.

2.2 Advantages

Think about it: joining many tables on the Reduce side makes data skew very likely. What can we do? Cache the small tables on the Map side and handle the join logic there; this adds work on the Map side and takes pressure off the Reduce side, reducing data skew as much as possible.

2.3 The Approach: Use DistributedCache

  • (1) In the Mapper's setup phase, read the cached file into an in-memory collection (see the sketch after the snippet below).
  • (2) Load the file into the cache in the driver, as shown below.
// Cache a regular file on every node where a task runs
job.addCacheFile(URI.create("file:///d:/cache/pd.txt"));
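For step (1), here is a minimal sketch of the setup() side, meant to drop into a Mapper subclass; it assumes a pMap field like the one in MJMapper below and a tab-separated cache file of pid/pname pairs (the complete version appears in MJMapper):

@Override
protected void setup(Context context) throws IOException, InterruptedException {
    // Locate the file the driver registered with addCacheFile()
    URI[] cacheFiles = context.getCacheFiles();
    Path path = new Path(cacheFiles[0].getPath());
    // Read the small table into an in-memory map for lookups in map()
    FileSystem fs = FileSystem.get(context.getConfiguration());
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
        String line;
        while ((line = reader.readLine()) != null) {
            String[] fields = line.split("\t");
            pMap.put(fields[0], fields[1]);   // pid -> pname
        }
    }
}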

2.4 Map Join Example

1. Requirement

(Figure: sample input data, order.txt and pd.txt)

Merge the product information table into the order table by product pid.

id      pname   amount
1001    小米    1
1004    小米    4
1002    华为    2
1005    华为    5
1003    格力    3
1006    格力    6

2. Requirement Analysis

Map Join fits the case where one of the joined tables is small.

(Figure: map-side join data flow)

3. Code

  • 1. Create the MJMapper class
package com.buwenbuhuo.mapjoin;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
/**
 * @author 卜温不火
 * @create 2020-04-25 17:54
 * com.buwenbuhuo.mapjoin - the name of the target package where the new class or interface will be created.
 * mapreduce0422 - the name of the current project.
 */
public class MJMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    private Map<String, String> pMap = new HashMap<>();

    private Text k = new Text();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // Locate the cached pd.txt registered by the driver
        URI[] cacheFiles = context.getCacheFiles();
        String path = cacheFiles[0].getPath();
        FileSystem fileSystem = FileSystem.get(context.getConfiguration());
        // Wrap the stream in a UTF-8 reader; DataInputStream.readLine() is
        // deprecated and mangles multi-byte characters such as the product names
        BufferedReader bufferedReader = new BufferedReader(
                new InputStreamReader(fileSystem.open(new Path(path)), StandardCharsets.UTF_8));
        String line;
        // Read pd.txt into the in-memory map: pid -> pname
        while (StringUtils.isNotEmpty(line = bufferedReader.readLine())) {
            String[] fields = line.split("\t");
            pMap.put(fields[0], fields[1]);
        }
        IOUtils.closeStream(bufferedReader);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // order.txt line: id \t pid \t amount
        String[] fields = value.toString().split("\t");
        // Look up the product name by pid; fall back to "NULL" if missing
        String pname = pMap.get(fields[1]);
        if (pname == null) {
            pname = "NULL";
        }
        // Emit the joined line: id \t pname \t amount
        k.set(fields[0] + "\t" + pname + "\t" + fields[2]);
        context.write(k, NullWritable.get());
    }
}
  • 2. Create the MJDriver class
package com.buwenbuhuo.mapjoin;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.net.URI;
/**
 * @author 卜温不火
 * @create 2020-04-25 17:54
 * com.buwenbuhuo.mapjoin - the name of the target package where the new class or interface will be created.
 * mapreduce0422 - the name of the current project.
 */
public class MJDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());

        job.setJarByClass(MJDriver.class);

        job.setMapperClass(MJMapper.class);
        // Map-only job: the join finishes on the map side, so no reducers are needed
        job.setNumReduceTasks(0);

        // Ship the small table to every task node via the distributed cache
        job.addCacheFile(URI.create("file:///d:/input/pd.txt"));

        FileInputFormat.setInputPaths(job, new Path("d:\\input\\order.txt"));
        FileOutputFormat.setOutputPath(job, new Path("d:\\output"));

        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}

4. Run and Check the Results

  • 1. Run

(Figure: running the map join job)

  • 2. Results

(Figure: output of the map-side join)
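Because this is a map-only job, the output (part-m-00000) preserves the input order of order.txt; with the assumed inputs above it should read:

1001	小米	1
1002	华为	2
1003	格力	3
1004	小米	4
1005	华为	5
1006	格力	6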

The results are correct, which means our map-side join works too!
