
SparkException: Values to assemble cannot be null

I want to use StandardScaler to normalize the features.

Here is my code:

val Array(trainingData, testData) = dataset.randomSplit(Array(0.7, 0.3))

val vectorAssembler = new VectorAssembler()
  .setInputCols(inputCols)
  .setOutputCol("features")
  .transform(trainingData)

val stdscaler = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(false)
  .fit(vectorAssembler)

but it raised an exception when I tried to fit the StandardScaler:

[Stage 151:==>                                                    (9 + 2) / 200]16/12/28 20:13:57 WARN scheduler.TaskSetManager: Lost task 31.0 in stage 151.0 (TID 8922, slave1.hadoop.ml): org.apache.spark.SparkException: Values to assemble cannot be null.
    at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:159)
    at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:142)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
    at org.apache.spark.ml.feature.VectorAssembler$.assemble(VectorAssembler.scala:142)
    at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:98)
    at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:97)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
    at scala.collection.AbstractIterator.aggregate(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1093)
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1093)
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1094)
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1094)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Is there a problem with the VectorAssembler?

I checked a few rows of the VectorAssembler output and they seemed OK.

vectorAssembler.take(5)
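Note: take(5) only inspects a handful of rows, so scattered nulls can slip past it. A fuller check is to count the nulls per column over the whole training set; a sketch, assuming inputCols is the same Array[String] passed to the VectorAssembler above:

import org.apache.spark.sql.functions.{col, count, when}

// Count null entries per input column across the entire DataFrame;
// when(cond, c) is null unless cond holds, and count skips nulls
trainingData.select(
  inputCols.map(c => count(when(col(c).isNull, c)).alias(c)): _*
).show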

Spark >= 2.4

Since Spark 2.4, VectorAssembler extends HasHandleInvalid. This means you can skip:

assembler.setHandleInvalid("skip").transform(df).show
+---+---+---------+
| x1| x2| features|
+---+---+---------+
|3.0|4.0|[3.0,4.0]|
+---+---+---------+
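Applied to the question's setup, handleInvalid is set on the assembler itself before it is chained with StandardScaler. A minimal sketch, assuming Spark >= 2.4 and reusing inputCols and trainingData from the question (the stage names are arbitrary):

import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.ml.feature.{StandardScaler, VectorAssembler}

val assemblerStage = new VectorAssembler()
  .setInputCols(inputCols)
  .setOutputCol("features")
  .setHandleInvalid("skip")  // silently drops rows containing nulls

val scalerStage = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(false)

// fit runs the assembler first, so the scaler never sees null-bearing rows
val model = new Pipeline()
  .setStages(Array[PipelineStage](assemblerStage, scalerStage))
  .fit(trainingData)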

keep (note that ML algorithms are unlikely to handle the resulting NaN values correctly):

assembler.setHandleInvalid("keep").transform(df).show
+----+----+---------+
|  x1|  x2| features|
+----+----+---------+
| 1.0|null|[1.0,NaN]|
|null| 2.0|[NaN,2.0]|
| 3.0| 4.0|[3.0,4.0]|
+----+----+---------+
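If you go with keep, the NaN values usually still have to be imputed before training. A sketch using the built-in Imputer (available since Spark 2.2) on the df and columns defined in the Spark < 2.4 section below; the *_imputed output names are arbitrary:

import org.apache.spark.ml.feature.Imputer

val imputer = new Imputer()
  .setInputCols(Array("x1", "x2"))
  .setOutputCols(Array("x1_imputed", "x2_imputed"))
  .setStrategy("mean")  // replace nulls/NaNs with the column mean

val imputed = imputer.fit(df).transform(df)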

or default to error.

Spark < 2.4

There is nothing wrong with VectorAssembler. A Spark Vector simply cannot contain null values.

import org.apache.spark.ml.feature.VectorAssembler
import spark.implicits._  // required for toDF; already in scope in spark-shell

val df = Seq(
  (Some(1.0), None), (None, Some(2.0)), (Some(3.0), Some(4.0))
).toDF("x1", "x2")

val assembler = new VectorAssembler()
  .setInputCols(df.columns).setOutputCol("features")

assembler.transform(df).show(3)
org.apache.spark.SparkException: Failed to execute user defined function($anonfun$3: (struct<x1:double,x2:double>) => vector)
...
Caused by: org.apache.spark.SparkException: Values to assemble cannot be null.

Null is not meaningful for ML algorithms and cannot be represented using scala.Double.

You have to either drop the nulls:

assembler.transform(df.na.drop).show(2)
+---+---+---------+
| x1| x2| features|
+---+---+---------+
|3.0|4.0|[3.0,4.0]|
+---+---+---------+
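By default na.drop removes every row that has a null in any column; it can be restricted to just the columns the assembler consumes. A sketch on the toy x1/x2 schema (here equivalent to the default, since those are the only columns):

// Only consider nulls in the columns that feed the assembler
assembler.transform(df.na.drop(Seq("x1", "x2"))).show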

or fill/impute them (see also Replace missing values with mean - Spark Dataframe):

// For example, fill with the (hard-coded) column averages
val replacements: Map[String, Any] = Map("x1" -> 2.0, "x2" -> 3.0)
assembler.transform(df.na.fill(replacements)).show(3)
+---+---+---------+
| x1| x2| features|
+---+---+---------+
|1.0|3.0|[1.0,3.0]|
|2.0|2.0|[2.0,2.0]|
|3.0|4.0|[3.0,4.0]|
+---+---+---------+
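Instead of hard-coding the replacements, the means can be derived from the data itself; a sketch (avg ignores nulls, so the computed values match the hard-coded ones above):

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.avg

// Compute the per-column means, then fill the nulls with them
val Row(m1: Double, m2: Double) = df.select(avg("x1"), avg("x2")).first
assembler.transform(df.na.fill(Map("x1" -> m1, "x2" -> m2))).show(3)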


— user6910411