Asked by: 小点点

Converting JavaRDD<Row> to JavaRDD<Vector>


I am trying to run LDA on a Wikipedia XML dump. After getting an RDD of the raw text, I create a DataFrame and transform it through a pipeline of Tokenizer, StopWordsRemover, and CountVectorizer. I intend to pass the RDD of Vectors output by the CountVectorizer to OnlineLDA in MLlib. Here is my code:

 // Configure an ML pipeline
 RegexTokenizer tokenizer = new RegexTokenizer()
   .setInputCol("text")
   .setOutputCol("words");

 StopWordsRemover remover = new StopWordsRemover()
          .setInputCol("words")
          .setOutputCol("filtered");

 CountVectorizer cv = new CountVectorizer()
          .setVocabSize(vocabSize)
          .setInputCol("filtered")
          .setOutputCol("features");

 Pipeline pipeline = new Pipeline()
          .setStages(new PipelineStage[] {tokenizer, remover, cv});

// Fit the pipeline to train documents.
 PipelineModel model = pipeline.fit(fileDF);

 JavaRDD<Vector> countVectors = model.transform(fileDF)
          .select("features").toJavaRDD()
          .map(new Function<Row, Vector>() {
            public Vector call(Row row) throws Exception {
                Object[] arr = row.getList(0).toArray();

                double[] features = new double[arr.length];
                int i = 0;
                for(Object obj : arr){
                    features[i++] = (double)obj;
                }
                return Vectors.dense(features);
            }
          });

I am getting a ClassCastException because of this line:

Object[] arr = row.getList(0).toArray();


Caused by: java.lang.ClassCastException: org.apache.spark.mllib.linalg.SparseVector cannot be cast to scala.collection.Seq
at org.apache.spark.sql.Row$class.getSeq(Row.scala:278)
at org.apache.spark.sql.catalyst.expressions.GenericRow.getSeq(rows.scala:192)
at org.apache.spark.sql.Row$class.getList(Row.scala:286)
at org.apache.spark.sql.catalyst.expressions.GenericRow.getList(rows.scala:192)
at xmlProcess.ParseXML$2.call(ParseXML.java:142)
at xmlProcess.ParseXML$2.call(ParseXML.java:1)

I found the Scala syntax for doing this here, but could not find any example of doing it in Java. I tried row.getAs[Vector](0), but that is Scala syntax. Is there any way to do this in Java?


2 Answers

Anonymous user

So I was able to do it with a simple cast to Vector. I don't know why I didn't try the simple thing first!

    JavaRDD<Vector> countVectors = model.transform(fileDF)
        .select("features").toJavaRDD()
        .map(new Function<Row, Vector>() {
            public Vector call(Row row) throws Exception {
                // The "features" column already holds a Vector, so a plain cast is enough.
                return (Vector) row.get(0);
            }
        });
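
One caveat: which Vector class that cast resolves to depends on the Spark version. On Spark 1.x the features column already contains an org.apache.spark.mllib.linalg.Vector, so the result of the cast above can be fed straight to the LDA in spark.mllib; on Spark 2.x the pipeline outputs org.apache.spark.ml.linalg.Vector, which first has to be converted with Vectors.fromML. A minimal sketch of the Spark 2.x variant, using a Java 8 lambda:

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.mllib.linalg.Vector;
    import org.apache.spark.mllib.linalg.Vectors;

    // Cast to the ml Vector the column actually holds, then convert it
    // to the mllib Vector that OnlineLDA expects.
    JavaRDD<Vector> countVectors = model.transform(fileDF)
        .select("features").toJavaRDD()
        .map(row -> Vectors.fromML(
            (org.apache.spark.ml.linalg.Vector) row.get(0)));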

Anonymous user

You don't need to convert the DataFrame/Dataset to a JavaRDD for it to work with LDA. After a couple of hours of fiddling, I finally got it working with the native RDD in Scala.

Relevant imports:

import org.apache.spark.ml.feature.{CountVectorizer, RegexTokenizer, StopWordsRemover}
import org.apache.spark.ml.linalg.{Vector => MLVector}
import org.apache.spark.mllib.clustering.{LDA, OnlineLDAOptimizer}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.sql.{Row, SparkSession}

The rest of the code snippet is the same as in this example:

val cvModel = new CountVectorizer()
  .setInputCol("filtered")
  .setOutputCol("features")
  .setVocabSize(vocabSize)
  .fit(filteredTokens)

val countVectors = cvModel
  .transform(filteredTokens)
  .select("docId", "features")
  .rdd.map { case Row(docId: String, features: MLVector) =>
    (docId.toLong, Vectors.fromML(features))
  }

val mbf = {
  // add (1.0 / corpusSize) to the miniBatchFraction to be more robust on tiny datasets.
  val corpusSize = countVectors.count()
  2.0 / maxIterations + 1.0 / corpusSize
}

val lda = new LDA()
  .setOptimizer(new OnlineLDAOptimizer().setMiniBatchFraction(math.min(1.0, mbf)))
  .setK(numTopics)
  .setMaxIterations(2)
  .setDocConcentration(-1) // use default symmetric document-topic prior
  .setTopicConcentration(-1) // use default symmetric topic-word prior

val startTime = System.nanoTime()
val ldaModel = lda.run(countVectors)
val elapsed = (System.nanoTime() - startTime) / 1e9

// Print results: training time
println(s"Finished training LDA model. Summary:")
println(s"Training time (sec)\t$elapsed")
println("==========")

Thanks to the author of the code here.
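
For completeness, here is a minimal sketch of how the trained model could then be inspected from Java, assuming the names from the snippets above (ldaModel for the fitted mllib LDAModel, cvModel for the CountVectorizerModel); the printing loop is just illustrative:

    import scala.Tuple2;

    // describeTopics returns, per topic, term indices into the
    // CountVectorizer vocabulary together with their weights.
    Tuple2<int[], double[]>[] topics = ldaModel.describeTopics(10);
    String[] vocab = cvModel.vocabulary();

    for (int t = 0; t < topics.length; t++) {
        System.out.println("Topic " + t + ":");
        int[] terms = topics[t]._1();
        double[] weights = topics[t]._2();
        for (int i = 0; i < terms.length; i++) {
            System.out.printf("  %s\t%.4f%n", vocab[terms[i]], weights[i]);
        }
    }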