Spark installation (Windows 7, standalone mode)

Update 2016/06/16
On Windows you additionally need to download winutils.

1. Environment
Note first that different versions of Spark require matching versions of Scala and Python.
The combination I verified is as follows:

Windows 7
JDK 1.7.0_72
scala 2.10.5
python 2.7.8
spark 1.4.1
winutils

2. Downloads

2.1 Download Spark + Hadoop
Download from the official website and choose the following:
Choose a Spark release: 1.4.1
Choose a package type: pre-built for hadoop 2.6 and later
Choose a download type: any
Download Spark: spark-1.4.1-bin-hadoop2.6.tgz

Extract the archive after downloading.

2.2 Download winutils and hadoop.dll
To run Spark successfully on Windows you also need to download winutils
(the files are also attached at the end of this post).

Copy winutils.exe and hadoop.dll into the spark-1.4.1-bin-hadoop2.6\bin directory.
Set the environment variable HADOOP_HOME to the root directory of spark-1.4.1-bin-hadoop2.6.
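
If you would rather not set a system-wide environment variable, a standalone program can point Hadoop at the winutils location programmatically. A minimal Scala sketch, assuming the install path used in this post (the path and app name are only examples):

import org.apache.spark.{SparkConf, SparkContext}

object WinutilsCheck {
  def main(args: Array[String]): Unit = {
    // Point Hadoop at the directory that contains bin\winutils.exe;
    // this must happen before the SparkContext is created.
    System.setProperty("hadoop.home.dir", "D:\\opensource\\hadoop\\spark-1.4.1-bin-hadoop2.6")

    val conf = new SparkConf().setAppName("WinutilsCheck").setMaster("local[*]")
    val sc = new SparkContext(conf)
    println(sc.textFile("README.md").count())  // simple sanity check
    sc.stop()
  }
}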

3. Shell tests
Both shells below print some error messages on startup, but these do not affect the main functionality.

3.1 Python shell

D:\opensource\hadoop\spark-1.4.1-bin-hadoop2.6>bin\pyspark

>>> lines = sc.textFile("README.md")
>>> lines.count()
[Stage 0:>                                    (0 + 2) / 2]

98
>>> lines.first()
u'# Apache Spark'
>>>

3.2 Scala shell

D:\opensource\hadoop\spark-1.4.1-bin-hadoop2.6>bin\spark-shell


scala> val lines = sc.textFile("README.md")
lines: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[1] at textFile at <console>:21

scala> lines.count()
res0: Long = 98

scala> lines.first()
res1: String = # Apache Spark

scala>


3.3 Viewing the Spark UI
Open either of the two shells above, then browse to http://localhost:4040.


4. Example program: computing Pi

D:\opensource\hadoop\spark-1.4.1-bin-hadoop2.6>bin\run-example org.apache.spark.examples.SparkPi
16/06/16 15:42:54 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[Stage 0:>                                                          (0 + 2) / 2]
[Stage 0:=============================>                             (1 + 1) / 2]

Pi is roughly 3.1416
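
For reference, SparkPi estimates Pi by Monte Carlo sampling: it scatters random points in the unit square and counts how many fall inside the quarter circle. A minimal sketch of the same idea that can be pasted into the Scala shell (the sample count is arbitrary):

// Count points with x*x + y*y < 1; the ratio inside/n approximates Pi/4.
val n = 100000
val inside = sc.parallelize(1 to n).map { _ =>
  val x = math.random
  val y = math.random
  if (x * x + y * y < 1) 1 else 0
}.reduce(_ + _)
println("Pi is roughly " + 4.0 * inside / n)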

5. A standalone Java program
The source code for the book Learning Spark is available at
https://github.com/databricks/learning-spark
After compiling it, run:

D:\opensource\hadoop\spark-1.4.1-bin-hadoop2.6>bin\spark-submit --class com.oreilly.learningsparkexamples.java.WordCount ./java-0.0.2.jar local ./README.md ./wordCount.txt
16/06/30 16:48:54 WARN SparkConf: null jar passed to SparkContext constructor
16/06/30 16:48:54 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

A wordCount.txt folder is created under the Spark root directory, containing the word count result.
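
The submitted job is just a word count. A rough Scala equivalent that can be run in the Scala shell (the output directory name is only an example):

// Split README.md into words, count each word, and write the result
// as text files under the given output directory.
val counts = sc.textFile("README.md")
  .flatMap(line => line.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
counts.saveAsTextFile("wordCountScala")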