  • Accessing data in Hadoop using dplyr and SQL

    If your primary objective is to query your data in Hadoop to browse, manipulate, and extract it into R, then you probably want to use SQL. You can write SQL code explicitly to interact with Hadoop, or you can write SQL code implicitly with dplyr. The dplyr package has a generalized backend for data sources that translates your R code into SQL. You can use RStudio and dplyr to work with several of the most popular software packages in the Hadoop ecosystem, including Hive, Impala, HBase, and Spark.
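
    As a quick illustration of that translation, the dbplyr package (which dplyr uses for its SQL backends) ships with simulated connections that let you preview the SQL dplyr would generate without touching a cluster. The sketch below is illustrative only; the table and column names are made up.

    library(dplyr)
    library(dbplyr)

    # A simulated Hive backend: no cluster needed, only SQL translation
    flights <- lazy_frame(carrier = "AA", dep_delay = 10, con = simulate_hive())

    flights %>%
      filter(dep_delay > 15) %>%
      group_by(carrier) %>%
      summarise(mean_delay = mean(dep_delay, na.rm = TRUE)) %>%
      show_query()  # prints the HiveQL that dplyr would send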

    There are two methods for accessing data in Hadoop using dplyr and SQL.

    ODBC

    You can connect R and RStudio to Hadoop with an ODBC connection. This effectively treats Hadoop like any other data source (i.e., as if Hadoop were a relational database). You will need a data-source-specific driver (e.g., Hive, Impala, HBase) installed on your desktop or your server. You will also need a few R packages. We recommend using these R packages: DBI, dplyr, and odbc. Note that the dplyr package may also reference the dbplyr package to help translate R into specific variants of SQL. You can use the odbc package to create a connection with Hadoop and run queries:

    library(odbc)   # ODBC interface for R
    library(DBI)    # dbConnect(), dbGetQuery(), dbDisconnect()
    library(dplyr)  # tbl()

    # Replace the <placeholders> with the values for your driver (e.g., Hive, Impala)
    con <- dbConnect(odbc::odbc(), driver = <driver>, host = <host>,
                     dbname = <dbname>, user = <user>, password = <password>,
                     port = 10000)

    tbl(con, "mytable")                       # dplyr
    dbGetQuery(con, "SELECT * FROM mytable")  # SQL

    dbDisconnect(con)
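
    While the connection is open (i.e., before the dbDisconnect() call above), dplyr builds queries lazily and only pulls data into R when you ask for it. A small hypothetical sketch, where "mytable" and its columns are placeholders:

    mytable <- tbl(con, "mytable")

    result <- mytable %>%
      filter(amount > 100) %>%   # translated to SQL and executed in Hadoop
      count(category) %>%
      collect()                  # brings the (hopefully small) result into R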

    Spark

    If you are running Spark on Hadoop, you may also elect to use the sparklyr package to access your data in HDFS. Spark is a general engine for large-scale data processing, and it supports SQL. The sparklyr package communicates with the Spark API to run SQL queries, and it also has a dplyr backend. You can use sparklyr to create a connection with Spark and run queries:

    library(sparklyr)
    library(dplyr)  # tbl()
    library(DBI)    # dbGetQuery()

    # Connect to Spark running on YARN
    con <- spark_connect(master = "yarn-client")

    tbl(con, "mytable")                       # dplyr
    dbGetQuery(con, "SELECT * FROM mytable")  # SQL

    spark_disconnect(con)
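
    For a quick end-to-end check while the connection is open, you can copy a small local data frame into Spark and query it both ways. This sketch uses R's built-in iris data set and a made-up table name, so it assumes nothing about the tables already on your cluster:

    # copy_to() registers the data as a temporary Spark table named "iris_spark"
    iris_tbl <- copy_to(con, iris, "iris_spark", overwrite = TRUE)

    iris_tbl %>%
      count(Species)  # dplyr; the work runs in Spark

    dbGetQuery(con, "SELECT Species, COUNT(*) AS n FROM iris_spark GROUP BY Species")  # SQL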


    Source: https://support.rstudio.com/hc/en-us/articles/115008241668-Accessing-data-in-Hadoop-using-dplyr-and-SQL
  • Original post: https://www.cnblogs.com/payton/p/8758893.html