
Data transformations on-the-fly


Tags: mapping, apache-spark, storm

Is there a way, other than a manual mapping, to translate related values on the fly? I know this sounds vague, but what I am looking for is a way to take an input value of, say, "2015 Ford" and translate it according to a mapping provided by a client that indicates how the same value is represented in their system. Say they provide us with a mapping of how each car is represented in their system, something like this:

"Ford" -> 1111 "BMW" -> 2222 "Ferrari" -> 5050

I would like to see if something like a Storm bolt could be used to achieve this (obviously, I am not familiar with Storm beyond its data enrichment capabilities), or whether there is another data system that provides it, because building a one-off mapping for each of my clients does not really feel feasible.

Thanks in advance for any advice!


Basically you want to join your mapping file with the live stream (or batch data).

In Spark you can broadcast your mapping file and then use it like a hash map to enrich your live stream.
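For example, a minimal sketch in Spark Streaming (Scala) could look like the following; the socket source, record layout, and -1 fallback are placeholder assumptions rather than anything the question specifies:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object MappingEnrichment {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("MappingEnrichment").setMaster("local[2]")
        val ssc  = new StreamingContext(conf, Seconds(10))

        // Client-provided mapping, broadcast once so every executor can read it locally.
        val makeToCode = ssc.sparkContext.broadcast(
          Map("Ford" -> 1111, "BMW" -> 2222, "Ferrari" -> 5050))

        // Incoming records such as "2015 Ford": look up the make in the broadcast map.
        val lines = ssc.socketTextStream("localhost", 9999)
        val enriched = lines.map { line =>
          val make = line.split(" ").last
          (line, makeToCode.value.getOrElse(make, -1)) // -1 marks an unmapped make
        }
        enriched.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }

Each client's mapping then becomes just a different broadcast value rather than a separate hand-written translation step.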


spark-mllib: Error “reassignment to val” in source code

I'm using an IDEA SBT project to test spark-mllib code. Here is build.sbt: name := "SparkTest" version := "1.0" scalaVersion := "2.11.6" libraryDependencies ++= Seq( "org.apache.spark" %% "spark-core" % "1.2.0", "org.apache.spark" %% "spark-mllib" % "1.2.0" ) After all the import and compile work was done, I found some errors in lib...

Removing duplicates from Spark RDDPair values

I am new to Python and also to Spark. I have a pair RDD containing (key, List), but some of the values are duplicates. The RDD is of the form (zipCode, streets). I want a pair RDD which does not contain duplicates. I am trying to achieve it using Python. Can anyone please help...

Join files using Apache Spark / Spark SQL

I am trying to use Apache Spark to compare two different files based on some common field, get the values from both files, and write them to an output file. I am using Spark SQL for joining both files (after storing the RDD as a table). Is this the correct approach?...

File Processing with Spark and Cassandra

Right now I'm working on loading a table from a Cassandra cluster into a Spark cluster with the Datastax Cassandra Spark Connector. The Spark program performs a simple MapReduce job that counts the number of rows in the Cassandra table. Everything is set up and runs locally. The...

Connecting from Spark/pyspark to PostgreSQL

I've installed Spark on a Windows machine and want to use it via Spyder. After some troubleshooting the basics seem to work: import os os.environ["SPARK_HOME"] = "D:\Analytics\Spark\spark-1.4.0-bin-hadoop2.6" from pyspark import SparkContext, SparkConf from pyspark.sql import SQLContext spark_config = SparkConf().setMaster("local[8]") sc = SparkContext(conf=spark_config) sqlContext = SQLContext(sc) textFile = sc.textFile("D:\\Analytics\\Spark\\spark-1.4.0-bin-hadoop2.6\\") textFile.count() textFile.filter(lambda...

Spark streaming on YARN executor's logs not available

I'm running the following code: .map{x => Logger.fatal("Hello World") x._2 } It's a Spark Streaming application that runs on YARN. I updated log4j and provided it with spark-submit (using --files). My log4j configuration was loaded, which I can see from the logs, and applied to the driver's logs (I see my log level only and...

Spark streaming transform function

I am having compilation errors in the transform function for Spark Streaming. Specifically, I seem to be missing finalizing the DStream variable or something similar. I have copied from the AMPLab tutorials, so I am slightly confused... Here is the code; the problem is in the transform function towards the end. Here is...

Include package in Spark local mode

I'm writing some unit tests for my Spark code in python. My code depends on spark-csv. In production I use spark-submit --packages com.databricks:spark-csv_2.10:1.0.3 to submit my python script. I'm using pytest to run my tests with Spark in local mode: conf = SparkConf().setAppName('myapp').setMaster('local[1]') sc = SparkContext(conf=conf) My question is, since...

Which Spark MLlib algorithm to use?

I'm a newbie to machine learning and would like to understand what algorithm (a classification algorithm or a correlation algorithm?) to use in order to understand the relationship between one or more attributes. For example, consider that I have the following set of attributes: Bill No, Bill Amount, Tip amount, Waiter Name and...

R Creating a Character Column from a Numeric Column w/o using For Loop

I am trying to create a column of characters based on an existing column of numbers, preferably without using a for loop. I have come up with a variety of ways to do this, but I keep feeling like I'm making this far more complicated than it needs to be....

Call Distinct on 'pyspark.resultiterable.ResultIterable'

I am writing some Spark code and I have an RDD which looks like [(4, <pyspark.resultiterable.ResultIterable at 0x9d32a4c>), (1, <pyspark.resultiterable.ResultIterable at 0x9d32cac>), (5, <pyspark.resultiterable.ResultIterable at 0x9d32bac>), (2, <pyspark.resultiterable.ResultIterable at 0x9d32acc>)] What I need to do is to call distinct on the pyspark.resultiterable.ResultIterable. I tried this: def distinctHost(a, b): p...

Retrieving TriangleCount

I'm trying to retrieve the number of triangles in a graph using GraphX. As I'm new to both Scala and GraphX, I'm currently quite stuck. I'm creating a graph from an edge file: 1 2 1 3 2 3 This should be 1 triangle. Next I'm using the built-in function...

Apache Spark: Error while starting PySpark

On a CentOS machine with Python v2.6.6 and Apache Spark v1.2.1, I am getting the following error when trying to run ./pyspark. It seems to be some issue with Python, but I am not able to figure it out. 15/06/18 08:11:16 INFO spark.SparkContext: Successfully stopped SparkContext Traceback (most recent call last): File "/usr/lib/spark_1.2.1/spark-1.2.1-bin-hadoop2.4/python/pyspark/", line 45, in <module> sc =...

SparkR and Packages

How does one call packages from Spark to be utilized for data operations with R? For example, I am trying to access my test.csv in HDFS as below: Sys.setenv(SPARK_HOME="/opt/spark14") library(SparkR) sc <- sparkR.init(master="local") sqlContext <- sparkRSQL.init(sc) flights <- read.df(sqlContext,"hdfs:// /user/root/test.csv","com.databricks.spark.csv", header="true") but I am getting an error as below: Caused by: java.lang.RuntimeException: Failed to...

Spark-submit class not found exception

I am trying to run this simple Spark application with the spark-submit command using this quick start tutorial. When I try to run it using spark-1.4.0-bin-hadoop2.6\bin>spark-submit --class " SimpleApp" --master local[4] C:/.../Documents/Sparkapp/target/scala-2.10/simple-project_2.10-1.0.jar I get the following exception: java.lang.ClassNotFoundException: SimpleApp at$ Source) at$

Convert RDD[Map[String,Double]] to RDD[(String,Double)]

I did some calculation and returned my values in an RDD containing a Scala Map, and now I want to remove this Map and collect all keys and values in an RDD. Any help will be appreciated....

How to un-nest a spark rdd that has the following type ((String, scala.collection.immutable.Map[String,scala.collection.immutable.Map[String,Int]]))

It's a nested map with contents like this when I print it onto the screen: (5, Map ( "ABCD" -> Map("3200" -> 3, "3350.800" -> 4, "200.300" -> 3) (1, Map ( "DEF" -> Map("1200" -> 32, "1320.800" -> 4, "2100" -> 3) I need to get something like this Case...

How to get rid of “Spark assembly has been built with Hive, including Datanucleus jars on classpath” message?

I am running an Apache Spark app as a cron job, but I keep getting emails with the following message: "Spark assembly has been built with Hive, including Datanucleus jars on classpath". My cron entry is something like the following: ...home/ > /home/SaprkJobOperation-`date +\%Y\%m\%d\%H`-cron.log I understand that there are...

Access key from mapValues or flatMapValues?

In Spark 1.3, is there a way to access the key from mapValues? Specifically, if I have val y = x.groupBy(someKey) val z = y.mapValues(someFun) can someFun know which key of y it is currently operating on? Or do I have to do val y = x.map(r => (someKey(r), r)).groupBy(_._1)...
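A common workaround (just a sketch, not from the snippet above: it assumes y is the RDD[(K, Iterable[V])] produced by x.groupBy(someKey) and a two-argument variant of someFun that also accepts the key) is to map over the whole pair instead of using mapValues:

    // mapValues never exposes the key, but a plain map over the (key, values) pairs does.
    val z = y.map { case (key, values) => (key, someFun(key, values)) }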

Graphx EdgeRDD count taking long time to compute

I am running standalone Spark, and I have the code below related to EdgeRDD. These are graph edges loaded from a text file. There are around 67 million records. val edges: RDD[Edge[Int]] = => {val x = line.split("\\s+") Edge(x(0).toLong, x(1).toLong, x(2).toInt); }) val edges1: EdgeRDD[Int] = EdgeRDD.fromEdges(edges) println(edges1.count) The...

Using EF6 code-first, reference same property name

How do I reference both the ShippingAddressId and BillingAddressId properties in the Customer class to the Address class, which has a different key named AddressId? Running update-database -verbose causes the error: Unable to determine the principal end of an association between the types 'Project1.Customer' and 'Project1.Address'. The principal end of this association must be explicitly...

Spark on yarn jar upload problems

I am trying to run a simple Map/Reduce Java program using Spark over YARN (Cloudera Hadoop 5.2 on CentOS). I have tried this in two different ways. The first way is the following: YARN_CONF_DIR=/usr/lib/hadoop-yarn/etc/hadoop/; /var/tmp/spark/spark-1.4.0-bin-hadoop2.4/bin/spark-submit --class MRContainer --master yarn-cluster --jars /var/tmp/spark/spark-1.4.0-bin-hadoop2.4/lib/spark-assembly-1.4.0-hadoop2.4.0.jar simplemr.jar This method gives the following error: diagnostics: Application application_1434177111261_0007...

Spark executors with different amounts of memory on Mesos

Is it possible to have executors with different amounts of memory on a Mesos cluster? Or am I bound by the machine with the least memory? (Assuming I want to use all available CPUs.)

How to extract application ID from the PySpark context

A previous question recommends sc.applicationId, but it is not present in PySpark, only in Scala. So, how do I figure out the application ID (for YARN) of my PySpark process?...

Shuffled vs non-shuffled coalesce in Apache Spark

What is the difference between the following transformations when they are executed right before writing an RDD to a file? coalesce(1, shuffle = true) coalesce(1, shuffle = false) Code example: val input = sc.textFile(inputFile) val filtered = input.filter(doSomeFiltering) val mapped = filtered.map(doSomeMapping) mapped.coalesce(1, shuffle = true).saveAsTextFile(outputFile) vs mapped.coalesce(1, shuffle = false).saveAsTextFile(outputFile)...

Does the DStream returned by updateStateByKey contain only one RDD?

Does the DStream returned by the updateStateByKey function contain only one RDD? If not, under what circumstances will the DStream contain more than one RDD?

Error when running job that queries against Cassandra via Spark SQL through Spark Jobserver

So I'm trying to run a job that simply runs a query against Cassandra using Spark SQL. The job is submitted fine and it starts fine. This code works when it is not being run through Spark Jobserver (when simply using spark-submit). Could someone tell me what is wrong with...

reduceByKey with two columns in Spark

I am trying to group by two columns in Spark and am using reduceByKey as follows: pairsWithOnes = ( input: (input.column1,input.column2, 1))) print pairsWithOnes.take(20) The above map command works fine and produces three columns with the third one being all ones. I tried summing the third by the first...

Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.CanSetDropBehind issue in Eclipse

I have the below Spark word count program: package com.sample.spark; import java.util.Arrays; import java.util.List; import java.util.Map; import org.apache.spark.SparkConf; import*; import; import; import; import; import; import...

Pyspark: using filter for feature selection

I have an array of dimensions 500 x 26. Using the filter operation in pyspark, I'd like to pick out the columns which are listed in another array at row i. Ex: if a[i]= [1 2 3] Then pick out columns 1, 2 and 3 and all rows. Can this...

How do I flatMap a row of arrays into multiple rows?

After parsing some jsons I have a one-column DataFrame of arrays scala> val jj =sqlContext.jsonFile("/home/aahu/jj2.json") res68: org.apache.spark.sql.DataFrame = [r: array<bigint>] scala> jj.first() res69: org.apache.spark.sql.Row = [List(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)] I'd like to explode each row out into several rows. How? edit: Original json file:...

JavaScript objects: 'addEventListener is not a function'

I create an array of JavaScript objects called elements and loop through it in order to draw every one on a canvas as well as hooking up a few event handlers to it. However, I get an error stating that 'Uncaught TypeError: elements[i].addEventListener is not a function' Here is the...

OutOfMemoryError creating fat jar with sbt assembly

We are trying to make a fat jar file containing one small scala source file and a ton of dependencies (simple mapreduce example using spark and cassandra): import org.apache.spark.SparkContext import org.apache.spark.SparkContext._ import com.datastax.spark.connector._ import org.apache.spark.SparkConf object VMProcessProject { def main(args: Array[String]) { val conf = new SparkConf() .set("", "") .set("spark.executor.extraClassPath",...

Passing a function foreach key of an Array

I have an array like this: val pairs: Array[(Int, ((VertexId, Seq[Int]), Int))] which generates this output: (11,((11,ArraySeq(2, 5, 4, 5)),1)) (11,((12,ArraySeq(7, 7, 8, 2)),1)) (11,((13,ArraySeq(5, 9, 8, 7)),1)) (1,((1,ArraySeq(1, 2, 3, 4)),1)) (1,((4,ArraySeq(1, 5, 1, 1)),1)) I want to build a Graph for each pairs._1. That means for...

How to transform a tabular data into transactions in spark(scala)?

I have an order transaction dataset, which looks like the following table 1,John,iPhone Cover,9.99 2,Jack,iPhone Cover,9.99 4,Jill,Samsung Galaxy Cover,9.95 3,John,Headphones,5.49 5,Bob,iPad Cover,5.45 I am considering grouping data within certain differences into different transactions. For example, I would group product 1,2,4 into transaction list List(1,2,4) for their absolute differences in price...

Conditionally Combining/Reducing key-pairs

I've had this issue for some time now, and I think it has to do with my lack of understanding of how to use combineByKey and reduceByKey, so hopefully somebody can clear this up. I am working with DNA sequences, so I have a procedure to produce a bunch of...

In sbt, how can we specify the version of Hadoop on which Spark depends?

Well, I have an sbt project which uses Spark and Spark SQL, but my cluster uses Hadoop 1.0.4 and Spark 1.2 with Spark SQL 1.2. Currently my build.sbt looks like this: libraryDependencies ++= Seq( "com.datastax.cassandra" % "cassandra-driver-core" % "2.1.5", "com.datastax.cassandra" % "cassandra-driver-mapping" % "2.1.5", "com.datastax.spark" % "spark-cassandra-connector_2.10" % "1.2.1", "org.apache.spark" %...

Populate csv with Scala

I have a CSV file which is more or less "semi-structured": rowNumber;ColumnA;ColumnB;ColumnC; 1;START; b; c; 2;;;; 4;;;; 6;END;;; 7;START;q;x; 10;;;; 11;END;;; Now I would like to get the data of this row --> 1;START; b; c; populated until it finds an 'END' in ColumnA. Then it should take this row -->...

Install SparkR that comes with Spark 1.4

The newest version of Spark (1.4) now comes with SparkR. Does anyone know how to go about installing the SparkR implementation on Windows? The sparkR.R script is currently located in C:/spark-1.4.0/R/pkgs/R/ This appears to be a step in the right direction, but the instructions don't work for Windows as there...

Issue with UDF on a column of Vectors in PySpark DataFrame

I am having trouble using a UDF on a column of Vectors in PySpark which can be illustrated here: from pyspark import SparkContext from pyspark.sql import Row from pyspark.sql.types import DoubleType from pyspark.sql.functions import udf from pyspark.mllib.linalg import Vectors FeatureRow = Row('id', 'features') data = sc.parallelize([(0, Vectors.dense([9.7, 1.0, -3.2])), (1,...

Spark: use reduceByKey instead of groupByKey and mapByValues

I have an RDD with duplicate values in the following format: [ {key1: A}, {key1: A}, {key1: B}, {key1: C}, {key2: B}, {key2: B}, {key2: D}, ..] I would like the new RDD to have the following output and to get rid of duplicates: [ {key1: [A,B,C]}, {key2: [B,D]}, ..]...
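A minimal sketch (spark-shell style, Scala) of the reduceByKey approach the title asks about, assuming the data is an RDD of plain (key, value) pairs; the sample data and names below are made up for illustration:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._  // pair-RDD operations on older Spark versions

    val sc = new SparkContext(new SparkConf().setAppName("dedupe").setMaster("local[2]"))
    val pairs = sc.parallelize(Seq(("key1", "A"), ("key1", "A"), ("key1", "B"),
                                   ("key1", "C"), ("key2", "B"), ("key2", "D")))

    // Wrap each value in a Set, then merge the sets per key: duplicates collapse
    // inside the Set, so a groupByKey shuffle of every raw value is never needed.
    val deduped = pairs.mapValues(Set(_)).reduceByKey(_ ++ _).mapValues(_.toList)
    deduped.collect().foreach(println)  // e.g. (key1,List(A, B, C)), (key2,List(B, D))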

Count of second dimension in two dimension data in Spark

I have the data in this format: (apple, laptop) (apple, laptop) (apple, ipad) (dell, laptop) I want the output to be (apple, laptop, 2) (apple, ipad, 1) (dell, laptop, 1) I wanted to do this using groupBy and then count, but groupBy does not allow grouping based on two columns....
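One common way around that limitation (a sketch with made-up variable names, assuming an RDD of (brand, product) tuples) is to treat the two columns together as the key and count with reduceByKey:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._  // pair-RDD operations on older Spark versions

    val sc = new SparkContext(new SparkConf().setAppName("pairCounts").setMaster("local[2]"))
    val records = sc.parallelize(Seq(("apple", "laptop"), ("apple", "laptop"),
                                     ("apple", "ipad"), ("dell", "laptop")))

    // Key by the (brand, product) pair, count occurrences,
    // then flatten back into (brand, product, count) triples.
    val counts = records
      .map { case (brand, product) => ((brand, product), 1) }
      .reduceByKey(_ + _)
      .map { case ((brand, product), n) => (brand, product, n) }
    counts.collect().foreach(println)  // (apple,laptop,2), (apple,ipad,1), (dell,laptop,1)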

how to parse a custom log file in scala to extract some key value pairs using patterns

I am building a spark streaming app that takes in logs coming out of a server. A log line looks something like this. 2015-06-18T13:53:46.606-0400 CustomLog v4 INFO: source="ABCD" type="type1" <xml some xml here attr1='value1' attr2='value2' > </xml> <some more xml></> time ="232" I am trying to follow the sample app...

Mapping from One Sheet to Another Fill Unfilled chracter

So I have one excel document that has partially filled character array ("|" shows excel column line): Current Result ("_" is a space): 1111GGH80100022190 1112QQH80100023201 1113GGH80100045201 1114AAH80100025190 So my current code outputs this above result. The problem is that characters 1-5 and 21-24 get skipped over. In general if there...

Subtracting an RDD from another RDD doesn't work correctly

I want to subtract an RDD from another RDD. I looked into the documentation and found that subtract can do that. However, when I tested subtract, the final RDD remains the same and the values are not removed! Is there any other function to do that? Or...

Pyspark StructType is not defined

I'm trying to construct a schema for DB testing, and StructType apparently isn't working for some reason. I'm following a tutorial, and it doesn't import any extra module. <type 'exceptions.NameError'>, NameError("name 'StructType' is not defined",), <traceback object at 0x2b555f0>) I'm on Spark 1.4.0 and Ubuntu 12, if that has anything...

“remoteContext object has no attribute”

I'm running Spark 1.4 in Databricks Cloud. I loaded a file into my S3 instance and mounted it. Mounting worked, but I'm having trouble creating an RDD: dbutils.fs.mount("s3n://%s:%s@%s" % (ACCESS_KEY, SECRET_KEY, AWS_BUCKET_NAME), "/mnt/%s" % MOUNT_NAME) Any ideas? sc.parallelize([1,2,3]) rdd = sc.textFiles("/mnt/GDELT_2014_EVENTS/GDELT_2014.csv") ...

PySpark No suitable driver found for jdbc:mysql://dbhost

I am trying to write my DataFrame to a MySQL table. I am getting "No suitable driver found for jdbc:mysql://dbhost" when I try to write. As part of the preprocessing I read from other tables in the same DB and have no issues doing that. I can do the full run...

Selenium WebDriver: How to map HTML elements to Java objects

As part of my Selenium WebDriver learning I came across a scenario. Please let me know the professional approach to proceed. I am testing an eCommerce application where, when I click on the Mobile link, all mobile phones are displayed. I want to check whether they are sorted based on name and...

Profiling a Scala Spark application

I would like to profile my Spark Scala applications to figure out the parts of the code which I have to optimize. I enabled -Xprof in --driver-java-options, but this is not of much help to me as it gives a lot of granular detail. I am just interested to know how...