源码网商城

A Java Example: Connecting to HDFS HA and Invoking a MapReduce Jar

  • Date: 2022-04-08 22:51
Connecting to HDFS HA with the Java API
The code is as follows:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Use the logical nameservice as the default filesystem URI, not a single NameNode host
    conf.set("fs.defaultFS", "hdfs://hadoop2cluster");
    conf.set("dfs.nameservices", "hadoop2cluster");
    conf.set("dfs.ha.namenodes.hadoop2cluster", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn1", "10.0.1.165:8020");
    conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn2", "10.0.1.166:8020");
    // Client-side proxy provider that fails over between nn1 and nn2
    conf.set("dfs.client.failover.proxy.provider.hadoop2cluster",
            "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    FileSystem fs = null;
    try {
        fs = FileSystem.get(conf);
        FileStatus[] list = fs.listStatus(new Path("/"));
        for (FileStatus file : list) {
            System.out.println(file.getPath().getName());
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (fs != null) {  // guard against an NPE when FileSystem.get itself failed
            try {
                fs.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
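Instead of hard-coding these properties in Java, the same HA settings can be kept in XML configuration files on the classpath, which Hadoop's Configuration object loads automatically. A minimal sketch mirroring the properties set above (in a real deployment, fs.defaultFS normally lives in core-site.xml and the rest in hdfs-site.xml; shown together here for brevity):

```xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop2cluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.hadoop2cluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoop2cluster.nn1</name>
    <value>10.0.1.165:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoop2cluster.nn2</name>
    <value>10.0.1.166:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.hadoop2cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

With these files on the classpath, `new Configuration()` picks up the HA settings and the conf.set calls above become unnecessary.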
Invoking a MapReduce Program from the Java API
The code is as follows:
import org.apache.hadoop.util.RunJar;

String[] args = new String[24];
args[0] = "/usr/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar";
args[1] = "wordcount";
args[2] = "-D";
args[3] = "yarn.resourcemanager.address=10.0.1.165:8032";
args[4] = "-D";
args[5] = "yarn.resourcemanager.scheduler.address=10.0.1.165:8030";
args[6] = "-D";
args[7] = "fs.defaultFS=hdfs://hadoop2cluster/";
args[8] = "-D";
args[9] = "dfs.nameservices=hadoop2cluster";
args[10] = "-D";
args[11] = "dfs.ha.namenodes.hadoop2cluster=nn1,nn2";
args[12] = "-D";
args[13] = "dfs.namenode.rpc-address.hadoop2cluster.nn1=10.0.1.165:8020";
args[14] = "-D";
args[15] = "dfs.namenode.rpc-address.hadoop2cluster.nn2=10.0.1.166:8020";
args[16] = "-D";
args[17] = "dfs.client.failover.proxy.provider.hadoop2cluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider";
args[18] = "-D";
args[19] = "fs.hdfs.impl=org.apache.hadoop.hdfs.DistributedFileSystem";
args[20] = "-D";
args[21] = "mapreduce.framework.name=yarn";
args[22] = "/input";   // HDFS input directory
args[23] = "/out01";   // HDFS output directory (must not already exist)
RunJar.main(args);
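Filling 24 fixed array indices by hand is easy to get wrong: one off-by-one shifts every "-D" pair. As a hedged alternative (RunJarArgs and buildArgs are hypothetical names introduced here for illustration, not part of any Hadoop API), the same argument array can be assembled from an ordered map of properties:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helper: builds the argument array expected by RunJar.main
// from a jar path, a program name, ordered "-D" properties, and I/O paths.
public class RunJarArgs {
    public static String[] buildArgs(String jar, String program,
                                     Map<String, String> props, String... ioPaths) {
        List<String> args = new ArrayList<>();
        args.add(jar);
        args.add(program);
        for (Map.Entry<String, String> e : props.entrySet()) {
            args.add("-D");                             // each property becomes a "-D key=value" pair
            args.add(e.getKey() + "=" + e.getValue());
        }
        for (String p : ioPaths) {
            args.add(p);                                // input/output paths come last
        }
        return args.toArray(new String[0]);
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();  // insertion order preserved
        props.put("fs.defaultFS", "hdfs://hadoop2cluster/");
        props.put("mapreduce.framework.name", "yarn");
        String[] built = buildArgs(
                "/usr/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar",
                "wordcount", props, "/input", "/out01");
        System.out.println(String.join(" ", built));
    }
}
```

The resulting array has the same shape as the hand-built one above, and mirrors the command line `hadoop jar <jar> wordcount -D key=value ... /input /out01`.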