
Drill 1.0: configuring the Hive storage plugin and testing it



As of this writing, the latest released version of Apache Drill is 1.0.0. The data sources and file formats supported by this version are:

  • avro
  • parquet
  • hive
  • hbase
  • csv, tsv, psv
  • file system

For my current requirement, data stored on HDFS as Snappy-compressed SequenceFiles, Drill has no direct support. Hive, however, can query Snappy-compressed SequenceFiles, and Drill supports Hive, which raised the question: can Drill read Snappy + SequenceFile data through the Hive storage plugin? After checking, the answer is yes. The configuration is as follows.
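For context, the kind of Hive table this setup targets can be created roughly as below. This is only a hypothetical sketch: the column list is invented for illustration, and only the table name metric_data_entity and the pt partition column correspond to the query shown later in this post.

``` shell
# Hypothetical DDL for a Snappy-compressed SequenceFile table in Hive.
# Column names are invented; only the table name and the pt partition
# column match the query used later in this post.
hive -e "
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
CREATE TABLE IF NOT EXISTS metric_data_entity (
  metric_name  STRING,
  metric_value DOUBLE
)
PARTITIONED BY (pt STRING)
STORED AS SEQUENCEFILE;
"
```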

1. Enable the Hive metastore Thrift service by adding the following to hive-site.xml:

``` xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://10.170.250.47:9083</value>
</property>
<property>
  <name>hive.metastore.local</name>
  <value>false</value>
</property>
```

Start the metastore service:

``` shell
[hadoop@gateway local]$ ../hive-1.2.1/bin/hive --service metastore &
```
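Before moving on, it does not hurt to confirm that the Thrift service is actually listening. This quick check is my addition, not part of the original setup; adjust the host and port if yours differ.

``` shell
# Check that the Hive metastore Thrift service is listening on port 9083.
netstat -nltp 2>/dev/null | grep 9083
# Alternatively, just test the TCP connection:
nc -z 10.170.250.47 9083 && echo "metastore reachable"
```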
2. Configure the Hive plugin from Drill's web UI:

``` json
{
  "type": "hive",
  "enabled": true,
  "configProps": {
    "hive.metastore.uris": "thrift://10.170.250.47:9083",
    "javax.jdo.option.ConnectionURL": "jdbc:mysql://xxx:3306/hive_database",
    "hive.metastore.warehouse.dir": "/user/hive/warehouse",
    "fs.default.name": "hdfs://xxx:9000",
    "hive.metastore.sasl.enabled": "false"
  }
}
```

Here hive.metastore.uris is the address and port of the Hive metastore service, and hive.metastore.warehouse.dir is the Hive warehouse directory on HDFS.

After saving, restart the drillbit service:

``` shell
[hadoop@gateway drill-1.1.0]$ bin/drillbit.sh restart
```
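Once the drillbit is back up, the stored plugin definition can be read back through Drill's web server, which also exposes a REST interface. This is an optional check of my own; it assumes the default web port 8047, and <drillbit-host> is a placeholder for your drillbit address.

``` shell
# Read back the hive storage plugin configuration from the Drill web server
# (default port 8047; replace <drillbit-host> with your drillbit address).
curl http://<drillbit-host>:8047/storage/hive.json
```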
3. Query the SequenceFile data to test:

``` shell
[hadoop@gateway drill-1.1.0]$ bin/sqlline -u jdbc:drill:zk=10.172.171.229:2181
apache drill 1.0.0
"the only truly happy people are children, the creative minority and drill users"
0: jdbc:drill:zk=10.172.171.229:2181> use hive.ai;
+-------+--------------------------------------+
|  ok   |               summary                |
+-------+--------------------------------------+
| true  | Default schema changed to [hive.ai]  |
+-------+--------------------------------------+
1 row selected (0.188 seconds)
0: jdbc:drill:zk=10.172.171.229:2181> !table
+------------+---------------------+---------------------+-------------+----------+-----------+-------------+------------+----------------------------+-----------------+
| TABLE_CAT  | TABLE_SCHEM         | TABLE_NAME          | TABLE_TYPE  | REMARKS  | TYPE_CAT  | TYPE_SCHEM  | TYPE_NAME  | SELF_REFERENCING_COL_NAME  | REF_GENERATION  |
+------------+---------------------+---------------------+-------------+----------+-----------+-------------+------------+----------------------------+-----------------+
| DRILL      | INFORMATION_SCHEMA  | CATALOGS            | TABLE       |          |           |             |            |                            |                 |
| DRILL      | INFORMATION_SCHEMA  | COLUMNS             | TABLE       |          |           |             |            |                            |                 |
| DRILL      | INFORMATION_SCHEMA  | SCHEMATA            | TABLE       |          |           |             |            |                            |                 |
| DRILL      | INFORMATION_SCHEMA  | TABLES              | TABLE       |          |           |             |            |                            |                 |
| DRILL      | INFORMATION_SCHEMA  | VIEWS               | TABLE       |          |           |             |            |                            |                 |
| DRILL      | hive.ai             | metric_data_entity  | TABLE       |          |           |             |            |                            |                 |
| DRILL      | sys                 | boot                | TABLE       |          |           |             |            |                            |                 |
| DRILL      | sys                 | drillbits           | TABLE       |          |           |             |            |                            |                 |
| DRILL      | sys                 | memory              | TABLE       |          |           |             |            |                            |                 |
| DRILL      | sys                 | options             | TABLE       |          |           |             |            |                            |                 |
| DRILL      | sys                 | threads             | TABLE       |          |           |             |            |                            |                 |
| DRILL      | sys                 | version             | TABLE       |          |           |             |            |                            |                 |
+------------+---------------------+---------------------+-------------+----------+-----------+-------------+------------+----------------------------+-----------------+
0: jdbc:drill:zk=10.172.171.229:2181> SELECT count(1) FROM metric_data_entity where pt='2015080510';
+-----------+
|  EXPR$0   |
+-----------+
| 40455402  |
+-----------+
1 row selected (14.482 seconds)
0: jdbc:drill:zk=10.172.171.229:2181>
```
The queries above show that plain SequenceFile data already works, but querying Snappy-compressed files fails with the following error:

```
2015-08-05 16:34:49,067 [WorkManager-2] ERROR o.apache.drill.exec.work.WorkManager - org.apache.drill.exec.work.WorkManager$WorkerBee$1.run() leaked an exception.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
        at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_85]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_85]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_85]
2015-08-05 16:39:05,781 [UserServer-1] INFO o.a.drill.exec.work.foreman.Foreman - State change requested. RUNNING --> CANCELLATION_REQUESTED
```

Clearly the Snappy native library needs to be made available via the LD_LIBRARY_PATH environment variable; see step 4 below.
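Before touching the environment, it is worth confirming that the local Hadoop build actually ships Snappy support. Hadoop 2.x provides `hadoop checknative` for this; the path below is the one used in step 4.

``` shell
# List which native codecs this Hadoop build supports (snappy should report true).
/oneapm/local/hadoop-2.7.1/bin/hadoop checknative -a
```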

4. Set LD_LIBRARY_PATH=/oneapm/local/hadoop-2.7.1/lib/native as a system environment variable and add it to the CLASSPATH, as sketched below.
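A minimal sketch of one way to do this, assuming the variables are exported in the shell that starts the drillbit (for example via the user's profile or Drill's conf/drill-env.sh); the Hadoop path is the one used in this post, so adjust it to your installation:

``` shell
# Export the Hadoop native library directory before starting the drillbit,
# and append it to CLASSPATH as described in step 4 of this post.
export LD_LIBRARY_PATH=/oneapm/local/hadoop-2.7.1/lib/native:$LD_LIBRARY_PATH
export CLASSPATH=$CLASSPATH:/oneapm/local/hadoop-2.7.1/lib/native

# Restart so the drillbit picks up the new environment.
bin/drillbit.sh restart
```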
References:
  • https://drill.apache.org/docs/hive-storage-plugin/
  • https://gist.github.com/vicenteg/7e060e79603f1e7ed3b4
  • http://blog.csdn.net/reesun/article/details/8556078