  • Spark: The Definitive Guide, Chapter 12 notes

    What Are the Low-Level APIs?

    There are two sets of low-level APIs: there is one for manipulating distributed data (RDDs), and another for distributing and manipulating distributed shared variables (broadcast variables and accumulators).

    When to Use the Low-Level APIs?

    You should generally use the lower-level APIs in three situations:

    You need some functionality that you cannot find in the higher-level APIs; for example, if you need very tight control over physical data placement across the cluster.

    You need to maintain some legacy codebase written using RDDs.

    You need to do some custom shared variable manipulation.

    When you’re calling a DataFrame transformation, it actually just becomes a set of RDD transformations. This understanding can make your task easier as you begin debugging more and more complex workloads.

    How to Use the Low-Level APIs?

    A SparkContext is the entry point for low-level API functionality. You access it through the SparkSession, which is the tool you use to perform computation across a Spark cluster.

    spark.sparkContext
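
    As a minimal sketch of getting to this entry point from scratch (the master and application name below are arbitrary choices for local experimentation, not anything the book prescribes):

    // in Scala: a hypothetical local-mode session for illustration
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("low-level-api-notes")
      .getOrCreate()

    val sc = spark.sparkContext // entry point for RDDs, broadcast variables, and accumulators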

    About RDDs

    Virtually all Spark code you run, whether DataFrames or Datasets, compiles down to an RDD. The Spark UI, covered in the next part of the book, also describes job execution in terms of RDDs.

    In short, an RDD represents an immutable, partitioned collection of records that can be operated on in parallel. Unlike DataFrames though, where each record is a structured row containing fields with a known schema, in RDDs the records are just Java, Scala, or Python objects of the programmer’s choosing.

    RDDs give you complete control because every record in an RDD is just a Java, Scala, or Python object. You can store anything you want in these objects, in any format you want. This gives you great power, but not without potential issues. Every manipulation and interaction between values must be defined by hand, meaning that you must “reinvent the wheel” for whatever task you are trying to carry out. Also, optimizations are going to require much more manual work, because Spark does not understand the inner structure of your records as it does with the Structured APIs. For instance, Spark’s Structured APIs automatically store data in an optimized, compressed binary format, so to achieve the same space-efficiency and performance, you’d also need to implement this type of format inside your objects and all the low-level operations to compute over it. Likewise, optimizations like reordering filters and aggregations that occur automatically in Spark SQL need to be implemented by hand.

    The RDD API is similar to the Dataset API, which we saw in the previous part of the book, except that RDDs are not stored in, or manipulated with, the structured data engine. However, it is trivial to convert back and forth between RDDs and Datasets, so you can use both APIs and take advantage of each one’s strengths.

    Types of RDDs

    If you look through Spark’s API documentation, you will notice that there are lots of subclasses of RDD. For the most part, these are internal representations that the DataFrame API uses to create optimized physical execution plans. As a user, however, you will likely only be creating two types of RDDs: the “generic” RDD type or a key-value RDD that provides additional functions, such as aggregating by key. For your purposes, these will be the only two types of RDDs that matter. Both just represent a collection of objects, but key-value RDDs have special operations as well as a concept of custom partitioning by key.

    Internally, each RDD is characterized by five main properties:

    • A list of partitions

    • A function for computing each split

    • A list of dependencies on other RDDs

    • Optionally, a Partitioner for key-value RDDs (e.g., to say that the RDD is hash-partitioned)

    • Optionally, a list of preferred locations on which to compute each split (e.g., block locations for a Hadoop Distributed File System [HDFS] file)

    Note: The Partitioner is probably one of the core reasons why you might want to use RDDs in your code. Specifying your own custom Partitioner can give you significant performance and stability improvements if you use it correctly.

    Different kinds of RDDs implement their own versions of each of the aforementioned properties, allowing you to define new data sources.
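
    To make these five properties concrete, here is a hedged sketch of a tiny custom RDD over a local sequence. The class names (LocalSeqRDD, SlicePartition) and the slicing logic are invented for illustration; this is not from the book and not production-grade code.

    // in Scala: an illustrative custom RDD exposing the five properties
    import org.apache.spark.{Partition, Partitioner, SparkContext, TaskContext}
    import org.apache.spark.rdd.RDD

    // One partition holding a slice of the local data (hypothetical helper class).
    class SlicePartition(override val index: Int, val values: Seq[String]) extends Partition

    class LocalSeqRDD(sc: SparkContext, data: Seq[String], numSlices: Int)
      extends RDD[String](sc, Nil) { // Nil: property 3, no dependencies on other RDDs

      // Property 1: a list of partitions
      override protected def getPartitions: Array[Partition] = {
        val sliceSize = math.max(1, math.ceil(data.length.toDouble / numSlices).toInt)
        data.grouped(sliceSize).zipWithIndex
          .map { case (slice, idx) => new SlicePartition(idx, slice): Partition }
          .toArray
      }

      // Property 2: a function for computing each split
      override def compute(split: Partition, context: TaskContext): Iterator[String] =
        split.asInstanceOf[SlicePartition].values.iterator

      // Property 4: optional Partitioner (none; this is not a key-value RDD)
      override val partitioner: Option[Partitioner] = None

      // Property 5: optional preferred locations (none for purely local data)
      override protected def getPreferredLocations(split: Partition): Seq[String] = Nil
    }

    Using it is just new LocalSeqRDD(spark.sparkContext, Seq("a", "b", "c"), 2).collect(); the built-in RDD subclasses differ only in how they fill in these same five properties.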

    RDDs follow the exact same Spark programming paradigms that we saw in earlier chapters. They provide transformations, which evaluate lazily, and actions, which evaluate eagerly, to manipulate data in a distributed fashion. The RDD APIs are available in Python as well as Scala and Java. For Scala and Java, performance is for the most part the same, with the main cost coming from manipulating raw objects. Python, however, can lose a substantial amount of performance when using RDDs. Running Python RDDs equates to running Python user-defined functions (UDFs) row by row.

    We serialize the data to the Python process, operate on it in Python, and then serialize it back to the Java Virtual Machine (JVM). This causes a high overhead for Python RDD manipulations.

    When to Use RDDs?

    In general, you should not manually create RDDs unless you have a very, very specific reason for doing so. They are a much lower-level API that provides a lot of power but also lacks a lot of the optimizations that are available in the Structured APIs. For the vast majority of use cases, DataFrames will be more efficient, more stable, and more expressive than RDDs.

    The most likely reason you’ll want to use RDDs is that you need fine-grained control over the physical distribution of data (custom partitioning of data).

    Datasets and RDDs of Case Classes

    The difference is that Datasets can still take advantage of the wealth of functions and optimizations that the Structured APIs have to offer. With Datasets, you do not need to choose between only operating on JVM types or on Spark types; you can choose whatever is easiest or most flexible. You get the best of both worlds.
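
    A small sketch of this point follows; the Flight case class and its fields are invented here for illustration (not from this chapter), and a SparkSession named spark is assumed:

    // in Scala: the import enables toDS() on local collections
    import spark.implicits._

    case class Flight(origin: String, destination: String, count: Long)

    val flightsDS = Seq(Flight("SFO", "JFK", 10), Flight("SEA", "JFK", 3)).toDS()

    // Structured API on the Dataset: typed filter plus an optimized relational aggregation
    flightsDS.filter(_.count > 5).groupBy("destination").count()

    // The same data as an RDD of plain JVM objects: no Catalyst optimizations apply
    val flightsRDD = flightsDS.rdd
    flightsRDD.filter(_.count > 5).map(f => (f.destination, 1)).countByKey()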

    Creating RDDs

    Interoperating Between DataFrames, Datasets, and RDDs

    One of the easiest ways to get RDDs is from an existing DataFrame or Dataset. Converting these to an RDD is simple: just use the rdd method on any of these data types. You’ll notice that if you do a conversion from a Dataset[T] to an RDD, you’ll get the appropriate native type T back (remember this applies only to Scala and Java):

    
    // in Scala: converts a Dataset[Long] to RDD[Long]
    spark.range(500).rdd
    
    

    Converting a DataFrame, by contrast, yields an RDD of type Row. To operate on this data, you will need to convert each Row object to the correct data type or extract values out of it, as shown in the example that follows:

    
    // in Scala
    spark.range(10).toDF().rdd.map(rowObject => rowObject.getLong(0))
    
    

    You can use the same methodology to create a DataFrame or Dataset from an RDD. All you need to do is call the toDF method on the RDD:

    
    // in Scala
    spark.range(10).rdd.toDF()
    
    

    The toDF call here produces a DataFrame, that is, a Dataset of Row objects; Row is the internal Catalyst format that Spark uses to represent data in the Structured APIs. This functionality makes it possible for you to jump between the Structured and low-level APIs as it suits your use case. The RDD API will feel quite similar to the Dataset API in Chapter 11 because the two are extremely similar (RDDs are a lower-level representation of Datasets), except that RDDs lack much of the convenient functionality and the interfaces that the Structured APIs provide.

    From a Local Collection

    To create an RDD from a collection, you will need to use the parallelize method on a SparkContext (within a SparkSession). This turns a single-node collection into a parallel collection. When creating this parallel collection, you can also explicitly state the number of partitions into which you would like to distribute this array.

    
    // in Scala
    val myCollection = "Spark The Definitive Guide : Big Data Processing Made Simple"
      .split(" ")
    val words = spark.sparkContext.parallelize(myCollection, 2)
    
    

    An additional feature is that you can then name this RDD so that it shows up in the Spark UI under the given name:

    
    // in Scala
    words.setName("myWords")
    words.name // myWords
    
    

    From Data Sources

    Although you can create RDDs from data sources or text files, it’s often preferable to use the Data Source APIs. RDDs do not have a notion of “Data Source APIs” like DataFrames do; they primarily define their dependency structures and lists of partitions. The Data Source API that we saw in Chapter 9 is almost always a better way to read in data. That being said, you can also read data as RDDs using sparkContext. For example, let’s read a text file line by line:

    spark.sparkContext.textFile("/some/path/withTextFiles")

    This creates an RDD for which each record in the RDD represents a line in that text file or files. Alternatively, you can read in data for which each text file should become a single record. The use case here would be one in which each file consists of a large JSON object or some document that you will operate on as an individual record:

    spark.sparkContext.wholeTextFiles("/some/path/withTextFiles")

    In this RDD, each record is a pair: the name of the file is the first element and the contents of the text file are the second.
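
    For example (the path below is the same placeholder), you can pattern-match on the (fileName, contents) pairs directly:

    // in Scala: each record is a (fileName, fileContents) pair
    val filesRDD = spark.sparkContext.wholeTextFiles("/some/path/withTextFiles")
    filesRDD.map { case (fileName, contents) => (fileName, contents.length) }.take(5)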

    Manipulating RDDs

    You manipulate RDDs in much the same way that you manipulate DataFrames. As mentioned, the core difference is that you manipulate raw Java or Scala objects instead of Spark types. There is also a dearth of “helper” methods or functions that you can draw upon to simplify calculations. Rather, you must define each filter, map function, aggregation, or any other manipulation that you want as a function.

    Transformations

    Just as you do with DataFrames and Datasets, you specify transformations on one RDD to create another. In doing so, we define one RDD as a dependency of another, along with some manipulation of the data contained in that RDD.

    • distinct

    • filter

    • map

    • flatMap

    • sort

    To sort an RDD you must use the sortBy method, and just like any other RDD operation, you do this by specifying a function to extract a value from the objects in your RDDs and then sort based on that.

    words.sortBy(word => word.length() * -1).take(2)
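
    The other transformations listed above follow the same pattern of passing plain Scala functions over your objects; a few brief sketches on the words RDD:

    // in Scala
    words.distinct().count()                                  // number of unique words
    words.filter(word => word.startsWith("S")).collect()      // keep words beginning with "S"
    words.map(word => (word, word(0), word.startsWith("S")))  // word, first character, boolean flag
    words.flatMap(word => word.toSeq).take(5)                 // explode each word into characters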

    Random Splits

    We can also randomly split an RDD into an Array of RDDs by using the randomSplit method, which accepts an array of weights and a random seed:

    // in Scala
    val fiftyFiftySplit = words.randomSplit(Array[Double](0.5, 0.5))

    This returns an array of RDDs that you can manipulate individually.
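
    For instance (a small sketch), the two splits can be destructured and used independently; because every element lands in exactly one split, their counts add back up to the original:

    // in Scala
    val Array(firstHalf, secondHalf) = fiftyFiftySplit
    firstHalf.count() + secondHalf.count() // equals words.count()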

    Actions

    Just as we do with DataFrames and Datasets, we specify actions to kick off our specified transformations. Actions either collect data to the driver or write to an external data source.

    reduce

    You can use the reduce method to specify a function to “reduce” an RDD of any kind of value to one value.

    
    // in Scala
    spark.sparkContext.parallelize(1 to 20).reduce(_ + _) // 210

    // in Scala
    def wordLengthReducer(leftWord:String, rightWord:String): String = {
      if (leftWord.length > rightWord.length)
        return leftWord
      else
        return rightWord
    }

    words.reduce(wordLengthReducer)
    
    

    This reducer is a good example because you can get one of two outputs. Because the reduce operation on the partitions is not deterministic, you can have either “definitive” or “processing” (both of length 10) as the “left” word. This means that sometimes you can end up with one, whereas other times you end up with the other.

    • count

    • countApprox

    • countApproxDistinct

    • countByValue

    This method counts the number of occurrences of each value in a given RDD. However, it does so by finally loading the result set into the memory of the driver. You should use this method only if the resulting map is expected to be small, because the entire thing is loaded into the driver’s memory. Thus, this method makes sense only in a scenario in which either the total number of rows is low or the number of distinct items is low:

    words.countByValue()

    • countByValueApprox

    • first

    • max and min

    • take

    take and its derivative methods take a number of values from your RDD. This works by first scanning one partition and then using the results from that partition to estimate the number of additional partitions needed to satisfy the limit. There are many variations on this function, such as takeOrdered, takeSample, and top. You can use takeSample to specify a fixed-size random sample from your RDD, controlling whether sampling is done withReplacement, the number of values, and the random seed. top is effectively the opposite of takeOrdered in that it selects the top values according to the implicit ordering.
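
    Brief sketches of these actions on the words RDD (the sample size and seed below are arbitrary):

    // in Scala
    words.count()                                 // total number of words
    words.first()                                 // the first element
    spark.sparkContext.parallelize(1 to 20).max() // 20
    spark.sparkContext.parallelize(1 to 20).min() // 1
    words.take(5)                                 // five elements, in partition order
    words.takeOrdered(5)                          // five smallest words by the implicit ordering
    words.top(5)                                  // five largest words by the implicit ordering
    val withReplacement = true
    words.takeSample(withReplacement, 6, 100L)    // random sample of six words with seed 100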

    Saving Files

    Saving files means writing to plain-text files. With RDDs, you cannot actually “save” to a data source in the conventional sense. You must iterate over the partitions in order to save the contents of each partition to some external database. This is a low-level approach that reveals the underlying operation that is being performed in the higher-level APIs. Spark will take each partition, and write that out to the destination.

    saveAsTextFile

    To save to a text file, you just specify a path and optionally a compression codec:

    words.saveAsTextFile("file:/tmp/bookTitle")

    To set a compression codec, we must import the proper codec from Hadoop. You can find these in the org.apache.hadoop.io.compress library:

    
    // in Scala
    import org.apache.hadoop.io.compress.BZip2Codec
    words.saveAsTextFile("file:/tmp/bookTitleCompressed", classOf[BZip2Codec])
    
    

    SequenceFiles

    Spark originally grew out of the Hadoop ecosystem, so it has a fairly tight integration with a variety of Hadoop tools. A sequenceFile is a flat file consisting of binary key–value pairs. It is used extensively in MapReduce as an input/output format.

    Spark can write to sequenceFiles using the saveAsObjectFile method or by explicitly writing key–value pairs, as described in Chapter 13:

    
    words.saveAsObjectFile("/tmp/my/sequenceFilePath")
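
    As a hedged sketch of the second approach (explicit key–value pairs; the path is a placeholder), a pair RDD of Writable-convertible types can be written with saveAsSequenceFile:

    // in Scala
    words.map(word => (word, 1)).saveAsSequenceFile("file:/tmp/wordsSeqFile")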
    
    

    Hadoop Files

    There are a variety of different Hadoop file formats to which you can save. These allow you to specify classes, output formats, Hadoop configurations, and compression schemes. (For information on these formats, read Hadoop: The Definitive Guide [O’Reilly, 2015].) These formats are largely irrelevant unless you’re working deeply in the Hadoop ecosystem or with some legacy MapReduce jobs.

    Caching

    The same principles apply for caching RDDs as for DataFrames and Datasets. You can either cache or persist an RDD. By default, cache and persist only handle data in memory. We can name it if we use the setName function that we referenced previously in this chapter:

    words.cache()

    We can specify a storage level as any of the storage levels in the singleton object: org.apache.spark.storage.StorageLevel, which are combinations of memory only; disk only; and separately, off heap.

    words.getStorageLevel
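
    As a small sketch, you can also ask persist for a specific storage level. This is done on a fresh RDD below because words was already cached above, and Spark does not allow changing an RDD’s storage level once one has been assigned:

    // in Scala
    import org.apache.spark.storage.StorageLevel

    val wordsOnDisk = words.map(identity)             // a new RDD, so a different level can be chosen
    wordsOnDisk.persist(StorageLevel.MEMORY_AND_DISK) // spill to disk when memory is insufficient
    wordsOnDisk.getStorageLevel                       // reports the level just assigned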

    Checkpointing

    One feature not available in the DataFrame API is the concept of checkpointing. Checkpointing is the act of saving an RDD to disk so that future references to this RDD point to those intermediate partitions on disk rather than recomputing the RDD from its original source. This is similar to caching except that it’s not stored in memory, only disk. This can be helpful when performing iterative computation, similar to the use cases for caching:

    
    spark.sparkContext.setCheckpointDir("/some/path/for/checkpointing")
    words.checkpoint()
    
    

    Now, when we reference this RDD, it will derive from the checkpoint instead of the source data. This can be a helpful optimization.

    Pipe RDDs to System Commands

    The pipe method is probably one of Spark’s more interesting methods. With pipe, you can return an RDD created by piping elements to a forked external process. The resulting RDD is computed by executing the given process once per partition. All elements of each input partition are written to a process’s stdin as lines of input separated by a newline. The resulting partition consists of the process’s stdout output, with each line of stdout resulting in one element of the output partition. A process is invoked even for empty partitions.

    We can use a simple example and pipe each partition to the command wc. Each row will be passed in as a new line, so if we perform a line count, we will get the number of lines, one per partition:

    words.pipe("wc -l").collect()

    mapPartitions

    The previous command revealed that Spark operates on a per-partition basis when it comes to actually executing code. You also might have noticed earlier that the return signature of a map function on an RDD is actually MapPartitionsRDD. This is because map is just a row-wise alias for mapPartitions, which makes it possible for you to map an individual partition (represented as an iterator). That’s because physically on the cluster we operate on each partition individually (and not on a specific row). A simple example creates the value “1” for every partition in our data, so the sum of the following expression counts the number of partitions we have:

    words.mapPartitions(part => Iterator[Int](1)).sum() // 2

    Naturally, this means that we operate on a per-partition basis, which allows us to perform an operation on an entire partition. This is valuable for performing something on an entire sub-dataset of your RDD: you can gather all records belonging to some class or group into one partition and then operate on that entire group using arbitrary functions and controls. An example use case would be to pipe each group through some custom machine learning algorithm and train an individual model for that group’s (for example, a single company’s) portion of the dataset. A Facebook engineer gave an interesting demonstration of their particular implementation of the pipe operator with a similar use case at Spark Summit East 2017.
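
    A hedged sketch of operating on an entire partition at once: in a single pass over each partition, emit one (word count, character count) summary record rather than transforming row by row:

    // in Scala
    val perPartitionStats = words.mapPartitions { iter =>
      var wordCount = 0
      var charCount = 0
      iter.foreach { word =>
        wordCount += 1
        charCount += word.length
      }
      Iterator((wordCount, charCount)) // one summary tuple per partition
    }
    perPartitionStats.collect()        // two (count, characters) pairs, one per partition of words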

    Other functions similar to mapPartitions include mapPartitionsWithIndex. With this you specify a function that accepts a partition index and an iterator that goes through all items within the partition. The partition index is the partition number in your RDD, which identifies where each record in the dataset sits (and potentially allows you to debug). You might use this to test whether your map functions are behaving correctly:

    
    // in Scala
    def indexedFunc(partitionIndex:Int, withinPartIterator: Iterator[String]) = {
      withinPartIterator.toList.map(
        value => s"Partition: $partitionIndex => $value").iterator
    }
    words.mapPartitionsWithIndex(indexedFunc).collect()
    
    

    foreachPartition

    Although mapPartitions needs a return value to work properly, this next function does not. foreachPartition simply iterates over all the partitions of the data. The difference is that the function has no return value. This makes it great for doing something with each partition, like writing it out to a database. In fact, this is how many data source connectors are written. You can create your own text file source if you want by specifying outputs to the temp directory with a random ID:

    
    words.foreachPartition { iter =>
      import java.io._
      import scala.util.Random
      val randomFileName = new Random().nextInt()
      val pw = new PrintWriter(new File(s"random-file-${randomFileName}.txt"))
      while (iter.hasNext) {
        pw.write(iter.next())
      }
      pw.close()
    }
    
    

    You’ll find the two output files (one per partition of words) if you scan your directory.

    glom

    glom is an interesting function that takes every partition in your dataset and converts it to an array. This can be useful if you’re going to collect the data to the driver and want to have an array for each partition. However, this can cause serious stability issues because if you have large partitions or a large number of partitions, it’s simple to crash the driver.

    In the following example, you can see that we end up with two partitions, and each word falls into its own partition:

    spark.sparkContext.parallelize(Seq("Hello", "World"), 2).glom().collect()

    Conclusion

    This chapter covered the basics of the single-RDD APIs: what RDDs are, when to prefer them over the Structured APIs, how to create them, and how to work with them through transformations, actions, caching, checkpointing, and per-partition operations such as pipe, mapPartitions, foreachPartition, and glom. The next chapter covers key-value RDDs and more advanced topics such as custom partitioning.
