Spark Basics: Introductory Script Practice 1

    1、Creating DataFrames


    val df = spark.read.json("file:///usr/local/spark/examples/src/main/resources/people.json");
    df.show();

    Write to an HDFS path:
    df.select("age", "name").write.save("examples/src/main/resources/peopleOUT.json")

    Read it back:
    val peopleDF = spark.read.format("parquet").load("/user/root/examples/src/main/resources/peopleOUT.json")
    The format is Parquet, the columnar storage format used when saving to HDFS; it is a general-purpose storage format.
    Note: trying to read this file back as JSON will only produce garbage.
    You can show() the result directly, or run SQL against the file path:
    val sqlDF = spark.sql("SELECT * FROM parquet.`examples/src/main/resources/peopleOUT.json`")
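
    write.save() stores data in Parquet by default, which is why the file above has to be read back as Parquet despite its .json name. To really get JSON output, name the format explicitly; a minimal sketch (the output path is made up):

    df.select("age", "name").write.format("json").save("examples/src/main/resources/peopleOUT_json")
    val jsonBackDF = spark.read.format("json").load("examples/src/main/resources/peopleOUT_json")
    jsonBackDF.show()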


    2、If the data has not been written to the HDFS filesystem, try this statement


    val df = spark.read.json("examples/src/main/resources/people.json")

    It fails with the following error:
    org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://localhost:9000/user/root/examples/src/main/resources/people.json;
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:382)
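
    The relative path is resolved against the default filesystem, here hdfs://localhost:9000/user/root/, so Spark looks for the file on HDFS rather than on the local disk. A sketch of two ways around this (paths assume the stock Spark install under /usr/local/spark):

    // 1) Point at the local filesystem explicitly
    val localDF = spark.read.json("file:///usr/local/spark/examples/src/main/resources/people.json")

    // 2) Or copy the file into HDFS first, after which the relative path resolves correctly:
    //    hdfs dfs -mkdir -p /user/root/examples/src/main/resources
    //    hdfs dfs -put /usr/local/spark/examples/src/main/resources/people.json examples/src/main/resources/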

    3、Untyped Dataset queries


    import spark.implicits._

    // Print the schema
    df.printSchema()
    // root
    // |-- age: long (nullable = true)
    // |-- name: string (nullable = true)

    // Select only the "name" column
    df.select("name").show()
    // +-------+
    // |   name|
    // +-------+
    // |Michael|
    // |   Andy|
    // | Justin|
    // +-------+

    // Select everybody, but increment the age by 1
    df.select($"name", $"age" + 1).show()
    // +-------+---------+
    // |   name|(age + 1)|
    // +-------+---------+
    // |Michael|     null|
    // |   Andy|       31|
    // | Justin|       20|
    // +-------+---------+

    // Select people older than 21
    df.filter($"age" > 21).show()
    // +---+----+
    // |age|name|
    // +---+----+
    // | 30|Andy|
    // +---+----+

    // Count people by age
    df.groupBy("age").count().show()
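
    With the sample people.json (ages null, 30 and 19) the expected output is:

    // +----+-----+
    // | age|count|
    // +----+-----+
    // |  19|    1|
    // |null|    1|
    // |  30|    1|
    // +----+-----+

    Other built-in aggregates work the same way; a small extra sketch (avg/max are not in the original):

    import org.apache.spark.sql.functions.{avg, max}
    df.agg(avg("age"), max("age")).show()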


    4、Querying with SQL


    Running SQL Queries

    df.createOrReplaceTempView("people")

    val sqlDF = spark.sql("SELECT * FROM people")
    sqlDF.show()

    Global temporary views:
    df.createGlobalTempView("people")

    // Global temporary view is tied to a system preserved database `global_temp`
    spark.sql("SELECT * FROM global_temp.people").show()
    // +----+-------+
    // | age|   name|
    // +----+-------+
    // |null|Michael|
    // |  30|   Andy|
    // |  19| Justin|
    // +----+-------+

    // Global temporary view is cross-session
    spark.newSession().sql("SELECT * FROM global_temp.people").show()
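
    By contrast, the ordinary temporary view created with createOrReplaceTempView is tied to the session that created it, so the same query from a new session should fail:

    // spark.newSession().sql("SELECT * FROM people").show()   // AnalysisException: Table or view not found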

    5、Defining a custom type (case class)


    case class Person(name: String, age: Long)
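
    A small sketch of using the case class right away (the sample value is made up; the $-free Dataset API needs import spark.implicits._, which spark-shell provides by default):

    val caseClassDS = Seq(Person("Andy", 32)).toDS()
    caseClassDS.show()

    // A JSON DataFrame can also be mapped onto the case class by column name
    val peopleDS = spark.read.json("examples/src/main/resources/people.json").as[Person]
    peopleDS.show()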

    6、Inferring the schema using reflection

    import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
    import org.apache.spark.sql.Encoder

    // For implicit conversions from RDDs to DataFrames
    import spark.implicits._

    // Create an RDD of Person objects from a text file, convert it to a Dataframe
    val peopleDF = spark.sparkContext
      .textFile("examples/src/main/resources/people.txt")
      .map(_.split(","))
      .map(attributes => Person(attributes(0), attributes(1).trim.toInt))
      .toDF()
    // Register the DataFrame as a temporary view
    peopleDF.createOrReplaceTempView("people")

    // SQL statements can be run by using the sql methods provided by Spark
    val teenagersDF = spark.sql("SELECT name, age FROM people WHERE age BETWEEN 13 AND 19")

    // The columns of a row in the result can be accessed by field index
    teenagersDF.map(teenager => "Name: " + teenager(0)).show()
    // +------------+
    // |       value|
    // +------------+
    // |Name: Justin|
    // +------------+

    // or by field name
    teenagersDF.map(teenager => "Name: " + teenager.getAs[String]("name")).show()
    // +------------+
    // |       value|
    // +------------+
    // |Name: Justin|
    // +------------+

    // No pre-defined encoders for Dataset[Map[K,V]], define explicitly
    implicit val mapEncoder = org.apache.spark.sql.Encoders.kryo[Map[String, Any]]
    // Primitive types and case classes can be also defined as
    // implicit val stringIntMapEncoder: Encoder[Map[String, Any]] = ExpressionEncoder()

    // row.getValuesMap[T] retrieves multiple columns at once into a Map[String, T]
    teenagersDF.map(teenager => teenager.getValuesMap[Any](List("name", "age"))).collect()
    // Array(Map("name" -> "Justin", "age" -> 19))


    7、Programmatically specifying the schema

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types._

    // Create an RDD
    val peopleRDD = spark.sparkContext.textFile("examples/src/main/resources/people.txt")

    // The schema is encoded in a string
    val schemaString = "name age"

    // Generate the schema based on the string of schema
    val fields = schemaString.split(" ")
      .map(fieldName => StructField(fieldName, StringType, nullable = true))
    val schema = StructType(fields)

    // Convert records of the RDD (people) to Rows
    val rowRDD = peopleRDD
      .map(_.split(","))
      .map(attributes => Row(attributes(0), attributes(1).trim))

    // Apply the schema to the RDD
    val peopleDF = spark.createDataFrame(rowRDD, schema)

    // Creates a temporary view using the DataFrame
    peopleDF.createOrReplaceTempView("people")

    // SQL can be run over a temporary view created using DataFrames
    val results = spark.sql("SELECT name FROM people")

    // The results of SQL queries are DataFrames and support all the normal RDD operations
    // The columns of a row in the result can be accessed by field index or by field name
    results.map(attributes => "Name: " + attributes(0)).show()
    // +-------------+
    // |        value|
    // +-------------+
    // |Name: Michael|
    // |   Name: Andy|
    // | Name: Justin|
    // +-------------+
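
    Here every field is declared as StringType for simplicity. A small sketch (not in the original) of declaring a typed age column instead:

    val typedSchema = StructType(Seq(
      StructField("name", StringType, nullable = true),
      StructField("age", IntegerType, nullable = true)))
    val typedRowRDD = peopleRDD
      .map(_.split(","))
      .map(attributes => Row(attributes(0), attributes(1).trim.toInt))
    val typedPeopleDF = spark.createDataFrame(typedRowRDD, typedSchema)
    typedPeopleDF.printSchema()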


    8、User-defined aggregate functions

    http://spark.apache.org/docs/latest/sql-programming-guide.html#untyped-user-defined-aggregate-functions
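
    The linked page shows an untyped UDAF built by extending UserDefinedAggregateFunction. A condensed sketch along the same lines (MyAverage computes an average of the age column; it assumes the people view from section 4 still exists):

    import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
    import org.apache.spark.sql.types._
    import org.apache.spark.sql.Row

    object MyAverage extends UserDefinedAggregateFunction {
      // Data type of the input argument
      def inputSchema: StructType = StructType(StructField("inputColumn", LongType) :: Nil)
      // Data types of the values in the aggregation buffer
      def bufferSchema: StructType = StructType(StructField("sum", LongType) :: StructField("count", LongType) :: Nil)
      // Data type of the returned value
      def dataType: DataType = DoubleType
      // Whether the function always returns the same output for the same input
      def deterministic: Boolean = true
      // Initialize the buffer
      def initialize(buffer: MutableAggregationBuffer): Unit = { buffer(0) = 0L; buffer(1) = 0L }
      // Fold one input row into the buffer
      def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
        if (!input.isNullAt(0)) {
          buffer(0) = buffer.getLong(0) + input.getLong(0)
          buffer(1) = buffer.getLong(1) + 1
        }
      }
      // Merge two partial buffers
      def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
        buffer1(0) = buffer1.getLong(0) + buffer2.getLong(0)
        buffer1(1) = buffer1.getLong(1) + buffer2.getLong(1)
      }
      // Compute the final result
      def evaluate(buffer: Row): Double = buffer.getLong(0).toDouble / buffer.getLong(1)
    }

    // Register it and call it from SQL
    spark.udf.register("myAverage", MyAverage)
    spark.sql("SELECT myAverage(age) AS average_age FROM people").show()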


    9、Reading and writing Parquet


    import spark.implicits._

    val peopleDF = spark.read.json("examples/src/main/resources/people.json")

    // DataFrames can be saved as Parquet files, maintaining the schema information
    peopleDF.write.parquet("people.parquet")

    // Read in the parquet file created above
    // Parquet files are self-describing so the schema is preserved
    // The result of loading a Parquet file is also a DataFrame
    val parquetFileDF = spark.read.parquet("people.parquet")

    // Parquet files can also be used to create a temporary view and then used in SQL statements
    parquetFileDF.createOrReplaceTempView("parquetFile")
    val namesDF = spark.sql("SELECT name FROM parquetFile WHERE age BETWEEN 13 AND 19")
    namesDF.map(attributes => "Name: " + attributes(0)).show()
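
    A small extra sketch (not part of the original): Parquet output can also be partitioned by a column, producing one sub-directory per distinct value (age=19, age=30, ...):

    peopleDF.write.partitionBy("age").parquet("people_partitioned.parquet")
    spark.read.parquet("people_partitioned.parquet").printSchema()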


    10、A hand-written JSON data source


    val otherPeopleRDD = spark.sparkContext.makeRDD(
      """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
    val otherPeople = spark.read.json(otherPeopleRDD)
    otherPeople.show()
    // +---------------+----+
    // |        address|name|
    // +---------------+----+
    // |[Columbus,Ohio]| Yin|
    // +---------------+----+
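
    Fields of the nested address struct can be selected with dot notation; a small sketch (not in the original, assumes import spark.implicits._ for the $ syntax):

    otherPeople.select($"name", $"address.city", $"address.state").show()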


    11、JSON data source

    val path = "examples/src/main/resources/people.json"
    val peopleDF = spark.read.json(path)

    // The inferred schema can be visualized using the printSchema() method
    peopleDF.printSchema()
    // root
    // |-- age: long (nullable = true)
    // |-- name: string (nullable = true)

    // Creates a temporary view using the DataFrame
    peopleDF.createOrReplaceTempView("people")

    // SQL statements can be run by using the sql methods provided by spark
    val teenagerNamesDF = spark.sql("SELECT name FROM people WHERE age BETWEEN 13 AND 19")
    teenagerNamesDF.show()
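
    With the sample people.json the expected output is:

    // +------+
    // |  name|
    // +------+
    // |Justin|
    // +------+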

    12、Hive data source


    import org.apache.spark.sql.Row
    import org.apache.spark.sql.SparkSession

    case class Record(key: Int, value: String)

    // warehouseLocation points to the default location for managed databases and tables
    val warehouseLocation = "spark-warehouse"

    val spark = SparkSession
      .builder()
      .appName("Spark Hive Example")
      .config("spark.sql.warehouse.dir", warehouseLocation)
      .enableHiveSupport()
      .getOrCreate()

    import spark.implicits._
    import spark.sql

    sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
    sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

    // Queries are expressed in HiveQL
    sql("SELECT * FROM src").show()
    // +---+-------+
    // |key|  value|
    // +---+-------+
    // |238|val_238|
    // | 86| val_86|
    // |311|val_311|
    // ...

    // Aggregation queries are also supported.
    sql("SELECT COUNT(*) FROM src").show()
    // +--------+
    // |count(1)|
    // +--------+
    // |     500|
    // +--------+

    // The results of SQL queries are themselves DataFrames and support all normal functions.
    val sqlDF = sql("SELECT key, value FROM src WHERE key < 10 ORDER BY key")

    // The items in DataFrames are of type Row, which allows you to access each column by ordinal.
    val stringsDS = sqlDF.map {
      case Row(key: Int, value: String) => s"Key: $key, Value: $value"
    }
    stringsDS.show()
    // +--------------------+
    // |               value|
    // +--------------------+
    // |Key: 0, Value: val_0|
    // |Key: 0, Value: val_0|
    // |Key: 0, Value: val_0|
    // ...

    // You can also use DataFrames to create temporary views within a SparkSession.
    val recordsDF = spark.createDataFrame((1 to 100).map(i => Record(i, s"val_$i")))
    recordsDF.createOrReplaceTempView("records")

    // Queries can then join DataFrame data with data stored in Hive.
    sql("SELECT * FROM records r JOIN src s ON r.key = s.key").show()
    // +---+------+---+------+
    // |key| value|key| value|
    // +---+------+---+------+
    // |  2| val_2|  2| val_2|
    // |  4| val_4|  4| val_4|
    // |  5| val_5|  5| val_5|
    // ...
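
    A further sketch in the same direction (not in the original): DataFrames can also be written back as Hive managed tables:

    import org.apache.spark.sql.SaveMode
    recordsDF.write.mode(SaveMode.Overwrite).saveAsTable("records_hive")
    sql("SELECT COUNT(*) FROM records_hive").show()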


    13、JDBC data source: reading and writing whole tables

    For example with MySQL: first place the JDBC driver JAR (e.g. mysql-connector-java-5.1.26.jar) in the jars directory, then start spark-shell with the driver options:

    spark-shell --driver-class-path /usr/local/spark/jars/mysql-connector-java-5.1.21.jar --jars /usr/local/spark/jars/mysql-connector-java-5.1.21.jar

    val jdbcDF = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/openfire").option("dbtable", "ofUser").option("user", "root").option("password", "mysql").load()


    // Saving data to a JDBC source
    jdbcDF.write
      .format("jdbc")
      .option("url", "jdbc:postgresql:dbserver")
      .option("dbtable", "schema.tablename")
      .option("user", "username")
      .option("password", "password")
      .save()

    // Shorthand form (connectionProperties is a java.util.Properties holding "user" and "password")
    jdbcDF.write
      .jdbc("jdbc:postgresql:dbserver", "schema.tablename", connectionProperties)
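
    A sketch of the matching shorthand for reading, with the Properties object spelled out (the MySQL URL, table and credentials are the same hypothetical values as in the read above):

    import java.util.Properties
    val connectionProperties = new Properties()
    connectionProperties.put("user", "root")
    connectionProperties.put("password", "mysql")
    val jdbcDF2 = spark.read.jdbc("jdbc:mysql://localhost:3306/openfire", "ofUser", connectionProperties)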

Original article: https://www.cnblogs.com/starcrm/p/7010047.html