  • Spark: reading a 33 GB table

    Spark UI: http://192.168.2.51:4041

    YARN application proxy (Executors page): http://hadoop1:8088/proxy/application_1512362707596_0006/executors/

    from pyspark.sql import SparkSession

    my_spark = SparkSession \
        .builder \
        .appName("myAppYarn-10g") \
        .master('yarn') \
        .config("spark.mongodb.input.uri", "mongodb://pyspark_admin:admin123@192.168.2.50/recommendation.article") \
        .config("spark.mongodb.output.uri", "mongodb://pyspark_admin:admin123@192.168.2.50/recommendation.article") \
        .getOrCreate()
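    The same connector settings can also be supplied at submit time instead of being hard-coded in the script. A sketch, assuming a Spark 2.x deployment: the connector package coordinates and the script name `my_job.py` are illustrative guesses, not from the original post.

    ```shell
    # Submit the job to YARN with the MongoDB connector pulled in as a package.
    # Package version and my_job.py are assumptions; adjust for your cluster.
    spark-submit \
      --master yarn \
      --name myAppYarn-10g \
      --packages org.mongodb.spark:mongo-spark-connector_2.11:2.2.0 \
      --conf "spark.mongodb.input.uri=mongodb://pyspark_admin:admin123@192.168.2.50/recommendation.article" \
      --conf "spark.mongodb.output.uri=mongodb://pyspark_admin:admin123@192.168.2.50/recommendation.article" \
      my_job.py
    ```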

    # Note: collect() ships every row of the 33 GB table back to the driver,
    # which can easily exhaust driver memory; prefer count(), show(), or
    # limit(n) when you only need to inspect the data.
    db_rows = my_spark.read.format("com.mongodb.spark.sql.DefaultSource").load().collect()
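    A quick back-of-the-envelope check makes the risk concrete. With Spark's common default split size of 128 MB (an assumption; only the 33 GB table size comes from this post), the read fans out across a few hundred partitions, but collect() still funnels all of them into the single driver JVM:

    ```python
    import math

    def estimate_partitions(table_gb, partition_mb=128):
        """Rough number of ~128 MB input splits for a table of the given size."""
        return math.ceil(table_gb * 1024 / partition_mb)

    # The 33 GB table reads in parallel as a few hundred partitions,
    # yet collect() gathers every one of them onto the driver.
    print(estimate_partitions(33))  # 264
    ```

    The app name "myAppYarn-10g" suggests roughly 10 GB of memory was configured, well under the 33 GB the driver would need to hold the collected rows, which is why monitoring the Executors page (as the links above do) matters for a job like this.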


    (Spark UI screenshots: executor Summary and Executors tables)

  • Original post: https://www.cnblogs.com/rsapaper/p/7976610.html