Technology changes the world; reading shapes life! - shaogx.com


Spark + Openfire Secondary Development

Spark + Openfire Secondary Development (Part 1). Category: Java Programming. 1. Preparation: download Openfire 3.6.4 from the official site, and fetch the source code of openfire, Spark, and SparkWeb via svn. The official site address is as follows: ... Full text

The Complete Guide to Building Spark from Source

This question is actually covered in great detail on Spark's official site, www.igniterealtime.org, so most of this article is drawn from the English documentation there, along with some personal experience. For more articles about "Spark", click here. ... Full text

RDD Dependencies Explained

One of the most important properties of an RDD is its lineage, which describes how an RDD is computed from its parent RDDs. Think of it as an analogy to human evolution: each stage in the progression from ape to modern human corresponds to one RDD. If an RDD is lost, it can be recomputed from its parent RDDs by following the lineage. In summary: an RDD can be described as a vector of partitions together with its dependencies on parent RDDs. ... Full text
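To make the lineage idea concrete, here is a minimal sketch (assuming a spark-shell session, where a SparkContext named sc is already in scope); toDebugString prints the dependency chain Spark would replay to rebuild a lost partition:

```scala
// Assumes a spark-shell session: a SparkContext named `sc` is in scope.
val nums   = sc.parallelize(1 to 100, 4)     // base RDD with 4 partitions
val evens  = nums.filter(_ % 2 == 0)         // narrow dependency on nums
val paired = evens.map(n => (n % 10, n))     // narrow dependency on evens
val counts = paired.reduceByKey(_ + _)       // wide (shuffle) dependency

// Prints the lineage graph; if a partition of `counts` is lost,
// Spark recomputes it from the parents along this chain.
println(counts.toDebugString)
```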

spark rdd scala

Spark: Implementing WordCount in Scala and Java

http://www.cnblogs.com/byrhuangqiang/p/4017725.html To write Scala in IDEA, I installed and configured the IDEA IDE today. IDEA really is excellent; once you learn it, it is very comfortable to use. For how to set up a Scala and IDEA development environment, see the references at the end of the article. ... Full text
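For reference, a minimal Scala WordCount along the lines of the linked post (a sketch, assuming spark-core on the classpath; the input path is an illustrative placeholder):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)
    sc.textFile("input.txt")                 // placeholder input path
      .flatMap(_.split("\\s+"))              // split lines into words
      .map(word => (word, 1))
      .reduceByKey(_ + _)                    // sum the counts per word
      .collect()
      .foreach { case (w, c) => println(s"$w\t$c") }
    sc.stop()
  }
}
```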

Application Areas of Big Data

Clustering models are unsupervised mining models. They take user attributes, behavior, spending, and other feature data as input and automatically group users into clusters; they are typically used to discover potential target customer segments, and can also be applied in big-data marketing tools, CRM tools, and anti-fraud solutions. Classification and prediction models learn from historical data in order to predict how future data will develop. A model whose output is discrete classes is called a classification model; a model whose output is numeric is called a numeric prediction model. A classification model learns classification rules from the class-label attribute of a training data set to build a classifier, which is then used to classify new data. A numeric prediction model fits a model to the training data and ultimately builds a continuous-valued function. Typical applications of classification and prediction models include fraud detection, market targeting, performance prediction, medical diagnosis, and price prediction. ... Full text
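As an illustration of the clustering use case, here is a hedged sketch using Spark MLlib's KMeans; the feature vectors, cluster count, and iteration count are made-up values, not from the article:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object UserClustering {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("UserClustering").setMaster("local[*]"))
    // Toy user features: (visits per week, average spend). Illustrative only.
    val users = sc.parallelize(Seq(
      Vectors.dense(1.0, 20.0), Vectors.dense(2.0, 25.0),
      Vectors.dense(9.0, 300.0), Vectors.dense(8.0, 280.0)))
    val model = KMeans.train(users, 2, 20)   // k = 2 clusters, 20 iterations
    model.clusterCenters.foreach(println)    // one center per customer segment
    sc.stop()
  }
}
```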

big data Spark

Using Spark for Data Analysis and Performance Improvement

Original article: Using Spark for Data Analysis and Performance Improvement, http://www.ibm.com/developerworks/cn/linux/l-sparkdataanalysis/?cmp=dwnpr&cpb=dw&ct=dwcon&cr=cn_51CTO_dl&ccy=cn ... Full text

ibmdw

A Brief Summary of Storm

1. What Storm is: simply put, Storm is a distributed real-time computation system. In the words of Storm's author, Storm matters to real-time computation the way Hadoop matters to batch processing. 2. Storm's main features: ... Full text

spark storm hadoop

Spark Internals: Stage Division and Submission, a Source Code Analysis

http://blog.csdn.net/anzhsoft/article/details/39859463 When an action is triggered on an RDD (take count as an example), the call chain is as follows: ... Full text
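As a quick sketch of what the linked analysis walks through (assuming a SparkContext named sc; this is not the article's own code): an action such as count hands the final RDD to the DAGScheduler, which cuts the lineage into stages at shuffle boundaries:

```scala
// Assumes a SparkContext named `sc`; the input path is a placeholder.
val words = sc.textFile("data.txt")
  .flatMap(_.split(" "))
  .map((_, 1))                          // narrow dependencies -> same stage
val counts = words.reduceByKey(_ + _)   // shuffle dependency -> stage boundary

// count() is the action that triggers the job: the DAGScheduler splits
// the lineage into two stages here and submits them as task sets.
println(counts.count())
```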

Spark Internals: Client, Master, and Worker Communication, a Source Code Analysis

http://blog.csdn.net/anzhsoft/article/details/30802603 Spark's Cluster Manager can run in several deployment modes: ... Full text
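For orientation, the deployment mode is selected through the master URL handed to SparkConf. A hedged sketch follows: the host names and ports are placeholders, and the list is the common Spark 1.x set, not taken from the article:

```scala
import org.apache.spark.SparkConf

// Pick exactly one master URL; the commented lines show the alternatives.
val conf = new SparkConf()
  .setAppName("deploy-modes")
  .setMaster("spark://master:7077")    // standalone cluster (Master/Worker)
  // .setMaster("local[*]")            // local mode, one JVM, all cores
  // .setMaster("mesos://host:5050")   // Apache Mesos cluster manager
  // .setMaster("yarn-client")         // Hadoop YARN (Spark 1.x syntax)
```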

Big Data Analytics Beyond Hadoop

1. Introduction. Google's seminal paper on Map-Reduce [1] was the trigger that led to a lot of developments in the big data space. Though the Map-Reduce paradigm was known in the functional programming literature, the paper provided scalable implementations of the paradigm on a cluster of nodes. The paper, along with Apache Hadoop, the open source implementation of the MR paradigm, enabled end users to process large data-sets on a cluster of nodes – a usability paradigm shift. Hadoop, which comprises the MR implementation along with the Hadoop Distributed File System (HDFS), has now become the de-facto standard for data processing, with industrial game-changers such as Disney, Sears, Walmart, and AT&T running their own Hadoop cluster installations.

Hadoop is no doubt very useful for a number of use cases, especially those where the data can be split into independent chunks, and certain computations need to run on the chunks and be aggregated for a final result. This fits the Map-Reduce (MR) paradigm: it allows the computations to be parallelized, with near-linear speed-ups across a cluster of nodes. There are, however, a number of cases where Hadoop may not be appropriate – this has been highlighted by, among others, Vincent Granville (http://www.analyticbridge.com/profiles/blogs/what-mapreduce-can-t-do). To summarize, the cases where MR is not appropriate are those where the data cannot be partitioned into independent chunks – either the computation spans the chunks, or intermediate results need to be exchanged between them.

Moreover, Hadoop is also not well suited to realizing iterative Machine Learning (ML) algorithms such as kernel support vector machines, multivariate logistic regression, etc. This is reflected clearly in Mahout (the open source machine learning library written over Hadoop), which has only sequential implementations of some iterative algorithms. The point has been reinforced by several others – see Prof. Srirama's paper [2], for instance. That paper outlines that Hadoop is suited to simpler iterative algorithms, where the algorithm can be expressed as a single execution of an MR model or a sequential execution of a constant number of MR models. Hadoop is not well suited to cases where the algorithm can only be expressed so that each iteration is a single MR model, or each iteration comprises multiple MR models – conjugate gradient descent, for instance, falls in the last category. Mapscale (http://www.cs.ucsb.edu/~cgb/mapscale.html) likewise shows that Hadoop is not well suited to iterative algorithms such as conjugate gradient descent, block tridiagonal systems, and fast Fourier transforms. ... Full text
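To see why iteration hurts on MR (each pass re-reads its input from disk) and favors in-memory engines such as Spark, consider this hedged Scala sketch of an iterative least-squares gradient descent over a cached RDD; the file path, feature count, step size, and iteration count are illustrative assumptions, and a SparkContext named sc is presumed in scope:

```scala
// Assumes a SparkContext named `sc`; "points.txt" holds lines of
// "x1,x2,label". All constants below are illustrative.
val data = sc.textFile("points.txt").map { line =>
  val v = line.split(",").map(_.toDouble)
  (v.init, v.last)                          // (features, label)
}.cache()                                   // key difference from MR: the data
                                            // stays in memory across iterations
val n = data.count()
var w = Array(0.0, 0.0)                     // weights for two features
for (_ <- 1 to 100) {                       // each pass reuses the cached RDD
  val grad = data.map { case (x, y) =>
    val err = x.zip(w).map { case (xi, wi) => xi * wi }.sum - y
    x.map(_ * err)                          // per-point least-squares gradient
  }.reduce((a, b) => a.zip(b).map { case (ai, bi) => ai + bi })
  w = w.zip(grad).map { case (wi, gi) => wi - 0.1 * gi / n }
}
println(w.mkString("w = (", ", ", ")"))
```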

hadoop spark pregel tachyon storm

Setting Up a Scala Environment: Eclipse

Because of Spark, let's take a look at another language: Scala. Why this language? Well, you don't actually have to, it's just that Spark's core is written in Scala (Spark also provides programming models for other languages)... follow your own preference. 1. Download ... Full text
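Once the Scala plugin for Eclipse is installed, a one-file sanity check like this (a hypothetical snippet, not from the article) confirms the toolchain works:

```scala
// Run as a Scala application inside Eclipse after setting up the plugin.
object Hello {
  def main(args: Array[String]): Unit =
    println("Scala works: 1 + 2 + ... + 5 = " + (1 to 5).sum)
}
```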

scala spark sdk eclipse development tools

Google Builds Spark: A Web-Based Application Development Tool

Google loves web apps, but in the programming-tools space, desktop software still rules. To change that, Google unveiled a project named Spark on Thursday. According to Francois Beaufort, Spark is a web IDE (integrated development environment) that runs inside the Chrome browser, which should make it ideal for writing Chrome apps. It also means Chromebook developers no longer need to move over to Windows, Mac, or Linux. Spark is a development tool Google is building for writing web applications in the browser. ... Full text

Google Web development tools

Big Data Scala Programming Q&A (01)

Big Data Scala Programming Q&A (01), by Gao Huantang (高焕堂), chief architect of the Dongting International Smart Hardware Testing Base & Zhongyun Big Data Center (IDC). Weibo: @高焕堂_台北. Q-01: How do you use Scala's singleton mechanism to express class-level data? Answer: ... Full text
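The excerpt cuts off before the answer, but the standard idiom is worth sketching (a hedged example, not the author's own answer): Scala has no static members, so class-level data lives in a companion object, which the language guarantees is a singleton:

```scala
class Counter {
  Counter.created += 1              // each instance updates the shared state
  val id: Int = Counter.created     // instance-level data
}

object Counter {                    // companion object: one copy per class
  private var created = 0           // class-level data, shared by instances
  def count: Int = created          // companions can access each other's privates
}

object Demo extends App {
  val a = new Counter
  val b = new Counter
  println(Counter.count)            // prints 2
}
```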

Scala Spark BigData

Installing and Configuring a LAN Chat Server (Openfire)

Installing and Configuring a LAN Chat Server (Openfire). 1. Install MySQL:

tar -zxvf mysql-5.1.44.tar.gz
cd mysql-5.1.44
./configure --prefix=/usr/local/mysql/
make && make install
cp support-files/my-medium.cnf /etc/my.cnf
/usr/local/mysql/bin/mysql_install_db --user=mysql
chown -R root:mysql /usr/local/mysql/
chown -R mysql /usr/local/mysql/var/
echo "/usr/local/mysql/lib/mysql" >> /etc/ld.so.conf
ldconfig
/usr/local/mysql/bin/mysqld_safe --user=mysql &
netstat -tupln | grep 3306
echo "/usr/local/mysql/bin/mysqld_safe --user=mysql &" >> /etc/profile

2. Install Openfire: download openfire.tar.gz (a package I have already prepared; just extract it and it installs without errors) from http://www.kuaipan.cn/file/id_27533970182769603.html (choose the regular download). Create a database in MySQL named openfire. After the download finishes, extract openfire.tar.gz into /usr/local and rename it:

mv openfire.bak openfire
cd /usr/local/openfire/bin
./openfire start    # start the service

Open http://ip:9090 in a browser to reach the admin page (if it cannot be accessed, restart the service). ... Full text

openfire leisure spark workplace mysql

Flex SDK 4 (Gumbo): A Look at the Default Spark and Halo Styles in SDK 4

From the three articles in the series "Flex SDK 4 (Gumbo): Easier Custom Styles and Custom SparkSkins", we can draw one conclusion: Spark components and Halo components can share a single set of skins. In this article, let's analyze the default skins that ship with Flex SDK 4 (Gumbo), mainly those of the Spark and Halo components. Studying the default skins helps us understand how to build skins with SparkSkin, Skin, and related classes.

[Figure: all of the Flex SDK 4 (Gumbo) CSS styles.]

The sdks\4.0.0\frameworks\themes directory holds the theme styles added in Flash Builder 4, including the AeonGraphical, Halo, HaloClassic, Ice, Institutional, Smoke, Spark, Wireframe, and Wooden themes. The ones we use most are Halo, HaloClassic, and Spark. Under \sdks\4.0.0\frameworks\projects, the four folders halo, haloclassic, sparkskins, and wireframe contain the source code for the corresponding themes in the themes folder.

So what is the sparkskins folder under \sdks\4.0.0\frameworks\projects for? Look closely at its path: sdks\4.0.0\frameworks\projects\sparkskins\src\mx\skins\spark. The mx\skins segment tells us that this folder holds skin files for the Halo components. It contains the following files: AccordionHeaderSkin.mxml, BorderSkin.mxml, ButtonBarFirstButtonSkin.mxml, ButtonBarLastButtonSkin.mxml, ButtonBarMiddleButtonSkin.mxml, ButtonSkin.mxml, CheckBoxSkin.mxml, ColorPickerSkin.mxml, ComboBoxSkin.mxml, DataGridHeaderBackgroundSkin.mxml, DataGridHeaderSeparatorSkin.mxml, DateChooserNextMonthSkin.mxml, DateChooserNextYearSkin.mxml, DateChooserPrevMonthSkin.mxml, DateChooserPrevYearSkin.mxml, DateChooserRollOverIndicatorSkin.mxml, DateChooserSelectionIndicatorSkin.mxml, DateChooserTodayIndicatorSkin.mxml, DefaultButtonSkin.mxml, EditableComboBoxSkin.mxml, LinkButtonSkin.mxml, MenuItemSkin.mxml, MenuSeparatorSkin.mxml, PanelBorderSkin.mxml, PopUpButtonSkin.mxml, ProgressBarSkin.mxml, ProgressBarTrackSkin.mxml, ProgressIndeterminateSkin.as, ProgressMaskSkin.as, RadioButtonSkin.mxml, ScrollBarDownButtonSkin.mxml, ScrollBarThumbSkin.mxml, ScrollBarTrackSkin.mxml, ScrollBarUpButtonSkin.mxml, SliderThumbSkin.mxml, SliderTrackHighlightSkin.mxml, SliderTrackSkin.mxml, SparkSkinForHalo.as, StepperDecrButtonSkin.mxml, StepperIncrButtonSkin.mxml, TabSkin.mxml, and TextInputBorderSkin.mxml.

Open any of them, say ButtonSkin.mxml:

<?xml version="1.0" encoding="utf-8"?>
<!--
ADOBE SYSTEMS INCORPORATED
Copyright 2008 Adobe Systems Incorporated
All Rights Reserved.
NOTICE: Adobe permits you to use, modify, and distribute this file
in accordance with the terms of the license agreement accompanying it.
-->
<!--- The Spark skin class for the Halo Button component. -->
<local:SparkSkinForHalo xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark"
    xmlns:local="mx.skins.spark.*"
    minWidth="21" minHeight="19" alpha.disabled="0.5">
...

Looks familiar, doesn't it? "The Spark skin class for the Halo Button component" tells us that this is a Spark skin applied to a Halo component. One interesting point, though: I could not find the CSS files for this skin set anywhere in sdks\4.0.0\. In other words, when we use the Halo components in Flex SDK 4 (Gumbo), the contents of sdks\4.0.0\frameworks\projects\sparkskins\src\mx\skins\spark are not used as the default skins. The Halo components in Flex SDK 4 (Gumbo) default to the skins under sdks\4.0.0\frameworks\projects\halo\src\mx\skins\halo, that is, skins built the traditional way.

To sum up, the three most important skin sets live at:
Spark components' default skins: sdks\4.0.0\frameworks\projects\flex4\src\spark\skins\default
Halo components' default skins: sdks\4.0.0\frameworks\projects\halo\src\mx\skins\halo
Spark skins for Halo components: sdks\4.0.0\frameworks\projects\sparkskins\src\mx\skins\spark

So how do we apply the Spark skins to Halo components? With the following code:

<fx:Style>
Button {
    skin: ClassReference("mx.skins.spark.ButtonSkin");
}
</fx:Style>
<mx:Button label="I am a Halo component" />

The skin is referenced as mx.skins.spark.ButtonSkin, and the mx.skins.spark.* package corresponds to sdks\4.0.0\frameworks\projects\sparkskins\src\mx\skins\spark.

That covers the default skins in Flex SDK 4 (Gumbo); I hope it helps. This post comes from the blog "My Blog"; please keep this attribution: http://wonlen.blog.51cto.com/939068/204439 ... Full text

Flex Gumbo FB4 new experience sparkskin halo sparkskins

Flex SDK 4 (Gumbo): Easier Custom Styles and Custom SparkSkins (Part 1)

Flex SDK 4 (Gumbo) adds a new package, spark.skins, containing a single class: SparkSkin. Through this class we (programmers, not designers) can customize the look of any control.

[Figure: the inheritance hierarchy of SparkSkin.]

From that hierarchy we can draw two conclusions: 1. SparkSkin is a Group-type container (it extends Group). 2. It is the base class for Spark skins; in other words, the skins of all spark visual controls are subclasses of SparkSkin. Also note another class, Skin: it is the parent class of SparkSkin and likewise extends Group. So what is the difference between Skin and SparkSkin?

[Figure: the inheritance hierarchy of Skin.]

SparkSkin is the base class of the Spark component skins. Skin is its parent class; ButtonBarSkin, for example, is a subclass of Skin, so to customize the styles of such components you need to use Skin. In short, wherever SparkSkin can be used, Skin achieves the same effect.

[Figure: the effect of a customized Button.]

Back in the Flex SDK 3.x or 2.x era, to achieve the effect above we would have had to "draw" the shape ourselves, or ask a designer for help. In Flex SDK 4 (Gumbo), we only need to derive the button's skin from SparkSkin or Skin and add the content we want, as in the following code (the RichText attributes were garbled in the original; horizontalCenter is a reconstruction):

<?xml version="1.0" encoding="utf-8"?>
<s:SparkSkin xmlns:s="library://ns.adobe.com/flex/spark"
             xmlns:fx="http://ns.adobe.com/mxml/2009">
    <s:states>
        <s:State name="up"/>
        <s:State name="over"/>
        <s:State name="down"/>
        <s:State name="disabled"/>
    </s:states>
    <fx:Metadata>[HostComponent("spark.components.Button")]</fx:Metadata>
    <s:Ellipse width="100%" height="100%">
        <s:fill>
            <s:SolidColor color="0x131313" color.over="#191919" color.down="#ffffff"/>
        </s:fill>
        <s:stroke>
            <s:SolidColorStroke color="0x0c0d0d"/>
        </s:stroke>
    </s:Ellipse>
    <s:RichText color="0xBBBBBB" textAlign="center"
                horizontalCenter="0" verticalCenter="1" width="100%"/>
</s:SparkSkin>

Where do we apply this skin? Any of the following three ways:

1. In CSS:
Button {
    skinClass: ClassReference("com.rianote.flex.skin.KButton");
}

2. In MXML: <s:Button skinClass="com.rianote.flex.skin.KButton"/>

3. In ActionScript: myButton.setStyle("skinClass", Class(KButton));

Here skinClass is also new in Flex SDK 4 (Gumbo); its role is to set the skin of the component. Now let's look at the main application:

<?xml version='1.0' encoding='UTF-8'?>
<s:Application xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:fx="http://ns.adobe.com/mxml/2009"
               height="254" width="576" backgroundColor="#222222">
    <fx:Script>
        <![CDATA[
            import com.rianote.flex.skin.Button;
        ]]>
    </fx:Script>
    <s:Button x="54" y="56" height="32" width="77" label="Button"
              skinClass="com.rianote.flex.skin.KButton"/>
</s:Application>

Since this example is fairly simple I won't upload the source; in the next installment I will describe the contents of KButton in detail. :) This post comes from the blog "My Blog"; please keep this attribution: http://wonlen.blog.51cto.com/939068/204604 ... Full text

Flex Flash Builder Gumbo FB4 new experience spark
