I'm looking for a simple way to get a list of the 5-10 most important terms that describe a particular document. It could even be based on a particular field, such as the item description.
I figured this would be fairly easy: Solr already scores every term by how often it occurs in a document relative to its overall occurrence across all documents (tf-idf).
However, I can't find a way to hand a document to Solr and get back the list of terms I'm after.
Posted on 2014-03-18 10:43:22
If you just need the top terms for a document, you can use the TermVectorComponent. Assuming your field has termVectors="true", you can request tv.tf_idf and take the top n terms by score.
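A minimal sketch of what such a request could look like, assuming a hypothetical local core named mycore, a document with id mydoc42, and a description field indexed with termVectors="true" (the /tvrh handler also has to be registered in solrconfig.xml):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SolrTermVectorQuery {

    public static void main(String[] args) throws Exception {
        // All names here (core, id, field) are placeholders; adjust to your schema.
        String url = "http://localhost:8983/solr/mycore/tvrh"
                + "?q=id:mydoc42"       // select the document of interest
                + "&tv.fl=description"  // field with termVectors="true"
                + "&tv.tf_idf=true"     // ask for tf-idf per term
                + "&wt=json";
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());
        // The "termVectors" section of the JSON response lists tf-idf per term;
        // sort those values descending and keep the first 5-10 entries.
        System.out.println(resp.body());
    }
}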
Posted on 2014-03-14 12:11:01
You are probably looking for the MoreLikeThis component, specifically with its interestingTerms option enabled.
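A hedged sketch along the same lines, again with hypothetical core/document/field names and assuming a MoreLikeThis handler is mapped to /mlt in solrconfig.xml; mlt.interestingTerms=details asks Solr to return the terms it found most distinctive for the matched document, together with their boosts:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SolrInterestingTerms {

    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8983/solr/mycore/mlt"
                + "?q=id:mydoc42"                 // the document to analyze
                + "&mlt.fl=description"           // field(s) to mine for terms
                + "&mlt.interestingTerms=details" // list the terms with boosts
                + "&mlt.mintf=1&mlt.mindf=1"      // loosen the frequency cutoffs
                + "&wt=json";
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());
        // The "interestingTerms" section of the response holds the
        // highest-weighted terms; take the top 5-10 of those.
        System.out.println(resp.body());
    }
}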
Posted on 2014-03-23 13:42:14
I think you may want to look at particular kinds of words, typically nouns. I once did something similar for a clustering routine: I used OpenNLP to extract all the noun phrases (using either the chunker or the POS tagger) and simply put each term into a HashMap. Below is some code that uses sentence chunking, but switching to plain part-of-speech tags would probably be a simple adjustment (a sketch of that variant follows the code, but let me know if you need help). What the code does is tag each word with its part of speech, chunk the tagged words, loop over the chunks to pick out the noun phrases, and add those to a term-frequency HashMap. Really simple. You could skip all the OpenNLP stuff entirely, but then you'd have to do a lot of noise removal and so on. Take a look:
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import opennlp.tools.chunker.ChunkerME;
import opennlp.tools.chunker.ChunkerModel;
import opennlp.tools.postag.POSModel;
import opennlp.tools.postag.POSTaggerME;
import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;
import opennlp.tools.util.Span;
/**
*
* Extracts noun phrases from a sentence. To create sentences using OpenNLP use
* the SentenceDetector classes.
*/
public class OpenNLPNounPhraseExtractor {

    static final int N = 2;

    public static void main(String[] args) {
        try {
            HashMap<String, Integer> termFrequencies = new HashMap<>();
            String modelPath = "c:\\temp\\opennlpmodels\\";
            TokenizerModel tm = new TokenizerModel(new FileInputStream(new File(modelPath + "en-token.zip")));
            TokenizerME wordBreaker = new TokenizerME(tm);
            POSModel pm = new POSModel(new FileInputStream(new File(modelPath + "en-pos-maxent.zip")));
            POSTaggerME posme = new POSTaggerME(pm);
            InputStream modelIn = new FileInputStream(modelPath + "en-chunker.zip");
            ChunkerModel chunkerModel = new ChunkerModel(modelIn);
            ChunkerME chunkerME = new ChunkerME(chunkerModel);
            // this is your sentence
            String sentence = "Barack Hussein Obama II is the 44th awesome President of the United States, and the first African American to hold the office.";
            // words is the tokenized sentence
            String[] words = wordBreaker.tokenize(sentence);
            // posTags are the parts of speech of every word in the sentence (the chunker needs this info, of course)
            String[] posTags = posme.tag(words);
            // chunks are the start/end "spans" indices into the words array
            Span[] chunks = chunkerME.chunkAsSpans(words, posTags);
            // chunkStrings are the actual chunks
            String[] chunkStrings = Span.spansToStrings(chunks, words);
            for (int i = 0; i < chunks.length; i++) {
                String np = chunkStrings[i];
                // keep only noun-phrase chunks and count their occurrences
                if (chunks[i].getType().equals("NP")) {
                    if (termFrequencies.containsKey(np)) {
                        termFrequencies.put(np, termFrequencies.get(np) + 1);
                    } else {
                        termFrequencies.put(np, 1);
                    }
                }
            }
            System.out.println(termFrequencies);
        } catch (IOException e) {
            // don't swallow model-loading failures silently
            e.printStackTrace();
        }
    }
}
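And for the plain part-of-speech variant mentioned above, a minimal sketch (countNouns is a hypothetical helper; words and posTags are the same arrays produced by wordBreaker.tokenize(...) and posme.tag(...) in the code above):

import java.util.HashMap;
import java.util.Map;

public class PosNounCounter {

    // Counts individual nouns instead of whole noun phrases.
    static Map<String, Integer> countNouns(String[] words, String[] posTags) {
        Map<String, Integer> freq = new HashMap<>();
        for (int i = 0; i < words.length; i++) {
            // Penn Treebank noun tags all start with "NN" (NN, NNS, NNP, NNPS)
            if (posTags[i].startsWith("NN")) {
                freq.merge(words[i], 1, Integer::sum);
            }
        }
        return freq;
    }
}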
https://stackoverflow.com/questions/22386160