I'm new to topic modeling and I'm trying to use the Mallet library, but I have a question.
I'm using the simple parallel threaded implementation of LDA (ParallelTopicModel) to find topics for some instances. My question is: what does the estimate() function in ParallelTopicModel do?
I have searched the API, but it has no description. I have also read this tutorial.
Can someone explain what this function does?
EDIT
This is an example of my code:
public void runModel(String[] str){
ParallelTopicModel model = new ParallelTopicModel(numTopics);
ArrayList<Pipe> pipeList = new ArrayList<Pipe>();
// Pipes: lowercase, tokenize, remove stopwords, map to features
pipeList.add(new CharSequenceLowercase());
pipeList.add(new CharSequence2TokenSequence(Pattern.compile("\\p{L}[\\p{L}\\p{P}]+\\p{L}")));
pipeList.add(new TokenSequence2FeatureSequence());
InstanceList instances = new InstanceList(new SerialPipes(pipeList));
instances.addThruPipe(new StringArrayIterator(str));
model.addInstances(instances);
model.setNumThreads(THREADS);
model.setOptimizeInterval(optimization);
model.setBurninPeriod(burninInterval);
model.setNumIterations(numIterations);
// model.estimate();
}
estimate() runs LDA, attempting to estimate the topic model given the data and settings you've already set up.
Have a look at the main() function of the ParallelTopicModel source for inspiration about what's needed to estimate a model.
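For example, here is a minimal sketch of how the commented-out call could be used, adapted from the MALLET topic-model developer example (it continues the runModel() method above and reuses its model, instances and numTopics variables; note that estimate() throws IOException, so the method needs to declare or catch it):

// extra imports: cc.mallet.types.Alphabet, cc.mallet.types.IDSorter,
// java.util.ArrayList, java.util.Iterator, java.util.TreeSet

model.estimate(); // runs the Gibbs sampler for numIterations iterations

// Print the top 5 words of each estimated topic
Alphabet dataAlphabet = instances.getDataAlphabet();
ArrayList<TreeSet<IDSorter>> topicSortedWords = model.getSortedWords();
for (int topic = 0; topic < numTopics; topic++) {
    Iterator<IDSorter> it = topicSortedWords.get(topic).iterator();
    StringBuilder sb = new StringBuilder("Topic " + topic + ": ");
    int rank = 0;
    while (it.hasNext() && rank < 5) {
        IDSorter idCountPair = it.next();
        sb.append(dataAlphabet.lookupObject(idCountPair.getID())).append(" ");
        rank++;
    }
    System.out.println(sb);
}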
I am using Word2Vec in DeepLearning4j.
How do I clear the vocab cache in Word2Vec? I want it to retrain on a new set of word patterns every time I reload Word2Vec. At the moment, the vocabulary of the previous set of word patterns persists and I get the same result even though I changed my input training file.
I tried to reset the model, but it doesn't work. Code:
Word2Vec vec = new Word2Vec.Builder()
.minWordFrequency(1)
.iterations(1)
.layerSize(4)
.seed(1)
.windowSize(1)
.iterate(iter)
.tokenizerFactory(t)
.resetModel(true)
.limitVocabularySize(1)
.build();
Can anyone help?
If you want to retrain from scratch (this is called training), I understand that you just want to completely ignore the previously learned model (vocabulary, word vectors, ...). To do that you should create another Word2Vec object and fit it with the new data, and you should also use new instances of the SentenceIterator and Tokenizer classes. Your problem could be the way you change your input training files.
It should be OK if you just change the SentenceIterator, i.e.:
SentenceIterator iter = new CollectionSentenceIterator(DataFetcher.getFirstDataset());
Word2Vec vec = new Word2Vec.Builder()
.iterate(iter)
....
.build();
vec.fit();
vec.wordsNearest("clear", 10); // you will see results from first dataset
SentenceIterator iter2 = new CollectionSentenceIterator(DataFetcher.getSecondDataset());
vec = new Word2Vec.Builder()
.iterate(iter2)
....
.build();
vec.fit();
vec.wordsNearest("clear", 10); // you will see results from second dataset, without any first dataset implication
If you run the code twice and change your input data between executions (say A and then B), you shouldn't get the same results. If you do, that means your model learned the same thing from input data A and B.
If instead you want to update the training (this is called inference), i.e. use the previously learned model plus new data to update that model, then you should use this example from the dl4j examples.
I'm trying to use Spark's MLlib Naive Bayes algorithm to make some predictions. Unfortunately I can't make it work because apparently the algorithm expects a "libsvm"-style input (label index:value pairs) that I can't get from my data set. I'm working in Java and I obtain the data from a MySQL database. Here's the code I'm using:
public class SparkML {
public static void main(String[] args) throws IOException {
// These two lines hide Spark logs
Logger.getLogger("org").setLevel(Level.ERROR);
Logger.getLogger("akka").setLevel(Level.ERROR);
//Here I create the spark session
SparkSession spark = SparkSession.builder().appName("Test").config("spark.master", "local[*]").getOrCreate();
// These three lines take care of the DB connection
Properties dbProperties = new Properties();
dbProperties.load(new FileInputStream(new File("properties.flat")));
String jdbcUrl = dbProperties.getProperty("jdbcUrl");
// Retrieving training data
String table = "spark_tests.sparkTrainData";
Dataset<Row> train = spark.read().jdbc(jdbcUrl, table, dbProperties);
// Retrieving test data
table = "spark_tests.sparkTrainData";
Dataset<Row> test = spark.read().jdbc(jdbcUrl, table, dbProperties);
NaiveBayes nb = new NaiveBayes();
NaiveBayesModel model = nb.fit(train); //When executing this, I get "java.lang.IllegalArgumentException: Field "features" does not exist."
}
}
Any idea how I can achieve this, or if there's another way to do it? I've already checked Spark's APIs and tutorials, and they only work with *.txt files.
MLlib supports local vectors and matrices stored on a single machine, as well as distributed matrices backed by one or more RDDs. Local vectors and local matrices are simple data models that serve as public interfaces. The underlying linear algebra operations are provided by Breeze. A training example used in supervised learning is called a “labeled point” in MLlib.
From Spark MLlib Guide
You have to transform your data into a "labeled point"; if you provide an example of your training data, maybe I can help you with the code...
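Since the code in the question uses the DataFrame-based API (nb.fit(Dataset<Row>)), the "Field "features" does not exist" error means the estimator expects a vector column named "features" and a numeric column named "label". A rough sketch of one way to get there with VectorAssembler; the column names "f1", "f2", "f3" and "category" are placeholders for whatever your table actually contains:

import org.apache.spark.ml.classification.NaiveBayes;
import org.apache.spark.ml.classification.NaiveBayesModel;
import org.apache.spark.ml.feature.VectorAssembler;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Assemble the numeric feature columns into a single "features" vector column
VectorAssembler assembler = new VectorAssembler()
    .setInputCols(new String[] {"f1", "f2", "f3"}) // placeholder column names
    .setOutputCol("features");

// NaiveBayes also expects the target column to be called "label"
Dataset<Row> trainReady = assembler.transform(train).withColumnRenamed("category", "label");
Dataset<Row> testReady = assembler.transform(test).withColumnRenamed("category", "label");

NaiveBayes nb = new NaiveBayes();
NaiveBayesModel model = nb.fit(trainReady); // no more missing "features" field
Dataset<Row> predictions = model.transform(testReady);
predictions.show();

Note that Naive Bayes requires the feature values to be non-negative, and the label has to be a numeric (double) class index.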
I'm quite familiar with Weka, as I've used the GUI. I'm doing some classification experiments that require the SpreadSubsample filter on both my training and testing data.
I'm learning Java and want to use the Weka API to do this. I've got to the point where I'm loading my training and testing data into Weka like so:
DataSource source = new DataSource("training.arff");
Instances trainingData = source.getDataSet();
if (trainingData.classIndex() == -1)
trainingData.setClassIndex(trainingData.numAttributes() - 1);
and I'm getting an output. Everything is working.
However, I have no idea how to implement a filter. I have the training and testing .arff files already produced and need to run them through the SpreadSubsample filter before loading them into Weka.
If anyone could help with a thorough explanation and answer, it'd be much appreciated. Thank you.
Here is some sample code:
SpreadSubsample ff = new SpreadSubsample();
String opt = " ";//any options you like, see documentation
String[] optArray = weka.core.Utils.splitOptions(opt);//right format for the options
ff.setOptions(optArray);
ff.setInputFormat(dataset);
Instances filteredInstances = Filter.useFilter(dataset, ff);
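Putting that together with the loading code from your question, a sketch might look like this (the "-M 1.0" option, which forces a uniform class distribution, and the "testing.arff" file name are just placeholders you can adjust; the Weka calls throw checked exceptions, so declare or catch them):

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.supervised.instance.SpreadSubsample;

// Load training and testing data and set the class attribute
Instances trainingData = new DataSource("training.arff").getDataSet();
trainingData.setClassIndex(trainingData.numAttributes() - 1);
Instances testingData = new DataSource("testing.arff").getDataSet();
testingData.setClassIndex(testingData.numAttributes() - 1);

// Configure the supervised SpreadSubsample filter
SpreadSubsample ff = new SpreadSubsample();
ff.setOptions(weka.core.Utils.splitOptions("-M 1.0"));
ff.setInputFormat(trainingData);

// Apply the same filter setup to both sets
Instances filteredTrain = Filter.useFilter(trainingData, ff);
Instances filteredTest = Filter.useFilter(testingData, ff);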
Hope it helped.
Hi there! I need help implementing the Naive Bayes text classification algorithm in Java, just to test my data set for research purposes. It is compulsory to implement the algorithm in Java rather than using Weka or RapidMiner to get the results!
My Data Set has the following type of Data:
Doc Words Category
This means that I have the training words and categories for each training string known in advance. Some of the data set is given below:
Doc Words Category
Training
1 Integration Communities Process Oriented Structures...(more string) A
2 Integration Communities Process Oriented Structures...(more string) A
3 Theory Upper Bound Routing Estimate global routing...(more string) B
4 Hardware Design Functional Programming Perfect Match...(more string) C
.
.
.
Test
5 Methodology Toolkit Integrate Technological Organisational
6 This test contain string naive bayes test text text test
So the data set comes from a MySQL database and it may contain multiple training strings and test strings as well! The thing is, I just need to implement the Naive Bayes text classification algorithm in Java.
The algorithm should follow the example mentioned here, Table 13.1
Source: Read here
The thing is that I could implement the algorithm in Java code myself, but I just need to know whether there is some kind of Java library, with source code and documentation available, that would allow me to just test the results.
The problem is that I only need the results one time; it's just a test of the results.
So, to come to the point: can somebody tell me about a good Java library that would help me code this algorithm in Java and make it possible to process my dataset and get the results, or give me any good ideas on how to do it easily... something that can help me.
I will be thankful for your help.
Thanks in advance
As per your requirement, you can use the machine learning library MLlib from Apache. MLlib is Spark's scalable machine learning library, consisting of common learning algorithms and utilities. There is also a Java code template for implementing the algorithm with the library. So to begin with, you can:
Implement the Java skeleton for Naive Bayes provided on their site, as given below.
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.mllib.classification.NaiveBayes;
import org.apache.spark.mllib.classification.NaiveBayesModel;
import org.apache.spark.mllib.regression.LabeledPoint;
import scala.Tuple2;
JavaRDD<LabeledPoint> training = ... // training set
JavaRDD<LabeledPoint> test = ... // test set
final NaiveBayesModel model = NaiveBayes.train(training.rdd(), 1.0);
JavaPairRDD<Double, Double> predictionAndLabel =
test.mapToPair(new PairFunction<LabeledPoint, Double, Double>() {
@Override public Tuple2<Double, Double> call(LabeledPoint p) {
return new Tuple2<Double, Double>(model.predict(p.features()), p.label());
}
});
double accuracy = predictionAndLabel.filter(new Function<Tuple2<Double, Double>, Boolean>() {
@Override public Boolean call(Tuple2<Double, Double> pl) {
return pl._1().equals(pl._2());
}
}).count() / (double) test.count();
For testing your datasets, there is no better solution here than to use Spark SQL. MLlib fits into Spark's APIs perfectly. To start using it, I would recommend you go through the MLlib API first and implement the algorithm according to your needs. This is pretty easy using the library.
For the next step, to make processing your datasets possible, just use Spark SQL.
I would recommend you stick to this. I too hunted down multiple options before settling on this easy-to-use library and its seamless support for interoperation with some other technologies. I would have posted the complete code here to perfectly fit your answer, but I think you are good to go.
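The skeleton above assumes you already have JavaRDD<LabeledPoint> training and test sets. Since your data is raw text plus a category, one way to build those is a simple bag-of-words representation with MLlib's HashingTF; the following is only a sketch, and the tab-separated "label<TAB>text" line format and the numeric category codes (e.g. A=0.0, B=1.0, C=2.0) are assumptions you would adapt to your MySQL schema:

import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.feature.HashingTF;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.regression.LabeledPoint;

// Hashes each document's tokens into a fixed-size term-frequency vector
final HashingTF tf = new HashingTF(10000); // 10,000 features; tune as needed

// One "label<TAB>text" line per document, e.g. built from your MySQL rows
JavaRDD<String> rows = ...; // placeholder: load and format your documents here

JavaRDD<LabeledPoint> training = rows.map(line -> {
    String[] parts = line.split("\t", 2);
    double label = Double.parseDouble(parts[0]); // numeric category code
    Vector features = tf.transform(Arrays.asList(parts[1].toLowerCase().split("\\s+")));
    return new LabeledPoint(label, features);
});

The test set can be built the same way, and both then plug directly into the NaiveBayes.train(...) skeleton above.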
You can use the Weka Java API and include it in your project if you do not want to use the GUI.
Here's a link to the documentation to incorporate a classifier in your code:
https://weka.wikispaces.com/Use+WEKA+in+your+Java+code
Please take a look at the Bow toolkit.
It has a GNU license and source code. Some of its code includes:
Setting word vector weights according to Naive Bayes, TFIDF, and several other methods.
Performing test/train splits, and automatic classification tests.
It's not a Java library, but you could compile the C code to check that your Java gives similar results for a given corpus.
I also spotted a decent Dr. Dobbs article that implements it in Perl. Once again, not the desired Java, but it will give you the one-time results that you are asking for.
Hi, I think Spark would help you a lot:
http://spark.apache.org/docs/1.2.0/mllib-naive-bayes.html
You can even choose the language you think is most appropriate to your needs: Java / Python / Scala!
You may want to take a look at this.
https://mahout.apache.org/users/classification/bayesian.html
You could use scikit-learn from Python. There is already an implementation of what you need:
class sklearn.naive_bayes.MultinomialNB(alpha=1.0, fit_prior=True, class_prior=None)
scikit-learn
You can use an algorithm platform like KNIME; it has a variety of classification algorithms (Naive Bayes included). You can run it with a GUI or through its Java API.
If you want to implement the Naive Bayes text classification algorithm in Java, then the WEKA Java API will be a better solution. The data set has to be in .arff format, and creating an .arff file from a MySQL database is very easy. Below are the Java code for the classifier and a link to a sample .arff file.
Create a new text document, open it with Notepad, copy and paste all the text from the link below, and save it as DataSet.arff. http://storm.cis.fordham.edu/~gweiss/data-mining/weka-data/weather.arff
Download Weka Java API: http://www.java2s.com/Code/Jar/w/weka.htm
Code for the classifier:
public static void main(String[] args) {
try {
StringBuilder txtAreaShow = new StringBuilder();
//reads the arff file
BufferedReader breader = null;
breader = new BufferedReader(new FileReader("DataSet.arff"));
//if there are 40 attributes, the last one (e.g. yes/no) is used as the class attribute
Instances train = new Instances(breader);
train.setClassIndex(train.numAttributes() - 1);
breader.close();
//
NaiveBayes nB = new NaiveBayes();
nB.buildClassifier(train);
Evaluation eval = new Evaluation(train);
eval.crossValidateModel(nB, train, 10, new Random(1));
System.out.println("Run Information\n=====================");
System.out.println("Scheme: " + train.getClass().getName());
System.out.println("Relation: ");
System.out.println("\nClassifier Model(full training set)\n===============================");
System.out.println(nB);
System.out.println(eval.toSummaryString("\nSummary Results\n==================", true));
System.out.println(eval.toClassDetailsString());
System.out.println(eval.toMatrixString());
//txtArea output
txtAreaShow.append("\n\n\n");
txtAreaShow.append("Run Information\n===================\n");
txtAreaShow.append("Scheme: " + train.getClass().getName());
txtAreaShow.append("\n\nClassifier Model(full training set)"
+ "\n======================================\n");
txtAreaShow.append("" + nB);
txtAreaShow.append(eval.toSummaryString("\n\nSummary Results\n==================\n", true));
txtAreaShow.append(eval.toClassDetailsString());
txtAreaShow.append(eval.toMatrixString());
txtAreaShow.append("\n\n\n");
System.out.println(txtAreaShow.toString());
} catch (FileNotFoundException ex) {
System.err.println("File not found");
System.exit(1);
} catch (IOException ex) {
System.err.println("Invalid input or output.");
System.exit(1);
} catch (Exception ex) {
System.err.println("Exception occurred!");
System.exit(1);
}
}
You can take a look at Blayze - It's a pretty minimal Naive Bayes library for the JVM written in Kotlin. Should be easy to follow.
Full disclosure: I'm one of the authors of Blayze
I would like to create a simple XMPP client in Java that shares its location (XEP-0080) with other clients.
I already know I can use the smack library for XMPP and that it supports PEP, which is needed for XEP-0080.
Does anyone have an example of how to implement this, or any pointers? I can't find anything using Google.
Thanks in advance.
Kristof's right, the docs are sparse - but they are getting better. There is a good, albeit hard-to-find, set of docs on extensions though. The PubSub one is at http://www.igniterealtime.org/fisheye/browse/~raw,r=11613/svn-org/smack/trunk/documentation/extensions/pubsub.html.
After going the from-scratch custom IQ Provider route with an extension, I found it was easier to use the managers as much as possible. The developers who wrote the managers have abstracted away a lot of the pain points.
Example (modified-for-geoloc version of one rcollier wrote on the Smack forum):
ConfigureForm form = new ConfigureForm(FormType.submit);
form.setPersistentItems(false);
form.setDeliverPayloads(true);
form.setAccessModel(AccessModel.open);
PubSubManager manager
= new PubSubManager(connection, "pubsub.communitivity.com");
Node myNode = manager.createNode("http://jabber.org/protocol/geoloc", form);
StringBuilder body = new StringBuilder(); //ws for readability
body.append("<geoloc xmlns='http://jabber.org/protocol/geoloc' xml:lang='en'>");
body.append(" <country>Italy</country>");
body.append(" <lat>45.44</lat>");
body.append(" <locality>Venice</locality>");
body.append(" <lon>12.33</lon>");
body.append(" <accuracy>20</accuracy>");
body.append("</geoloc>");
SimplePayload payload = new SimplePayload(
"geoloc",
"http://jabber.org/protocol/geoloc",
body.toString());
String itemId = "zz234";
Item<SimplePayload> item = new Item<SimplePayload>(itemId, payload);
// Required to receive the events being published
myNode.addItemEventListener(myEventHandler);
// Publish item
myNode.publish(item);
Or at least that's the hard way :). Just remembered there's a PEPManager now...
PEPProvider pepProvider = new PEPProvider();
pepProvider.registerPEPParserExtension(
"http://jabber.org/protocol/tune", new TuneProvider());
ProviderManager.getInstance().addExtensionProvider(
"event",
"http://jabber.org/protocol/pubsub#event", pepProvider);
Tune tune = new Tune("jeff", "1", "CD", "My Title", "My Track");
pepManager.publish(tune);
You'd need to write the GeoLocProvider and GeoLoc classes.
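As a rough sketch of what those two classes might look like (this assumes the Smack 3.x PacketExtension and PacketExtensionProvider interfaces and covers only a few XEP-0080 fields; it is illustrative, not a drop-in implementation, and each class would go in its own file):

import org.jivesoftware.smack.packet.PacketExtension;
import org.jivesoftware.smack.provider.PacketExtensionProvider;
import org.xmlpull.v1.XmlPullParser;

public class GeoLoc implements PacketExtension {
    private final double lat;
    private final double lon;
    private final String locality;

    public GeoLoc(double lat, double lon, String locality) {
        this.lat = lat;
        this.lon = lon;
        this.locality = locality;
    }

    public String getElementName() { return "geoloc"; }

    public String getNamespace() { return "http://jabber.org/protocol/geoloc"; }

    // Serializes the payload to the XEP-0080 XML form
    public String toXML() {
        return "<geoloc xmlns='" + getNamespace() + "'>"
                + "<lat>" + lat + "</lat>"
                + "<lon>" + lon + "</lon>"
                + "<locality>" + locality + "</locality>"
                + "</geoloc>";
    }
}

public class GeoLocProvider implements PacketExtensionProvider {
    // Parses an incoming geoloc stanza back into a GeoLoc object
    public PacketExtension parseExtension(XmlPullParser parser) throws Exception {
        double lat = 0, lon = 0;
        String locality = null;
        boolean done = false;
        while (!done) {
            int eventType = parser.next();
            if (eventType == XmlPullParser.START_TAG) {
                if ("lat".equals(parser.getName())) lat = Double.parseDouble(parser.nextText());
                else if ("lon".equals(parser.getName())) lon = Double.parseDouble(parser.nextText());
                else if ("locality".equals(parser.getName())) locality = parser.nextText();
            } else if (eventType == XmlPullParser.END_TAG && "geoloc".equals(parser.getName())) {
                done = true;
            }
        }
        return new GeoLoc(lat, lon, locality);
    }
}

You would then register GeoLocProvider for the "geoloc" element and namespace the same way the TuneProvider is registered above.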
I covered a pure PEP based approach as an alternative method in detail for Android here: https://stackoverflow.com/a/26719158/406920.
This will be very close to what you'd need to do with regular Smack.
Take a look at the existing code for implementations of other extensions. This will be your best example of how to develop with the current library. Unfortunately, there is no developers guide that I know of, so I just poked around to understand some of the basics myself until I felt comfortable with the environment. Hint: Use the providers extension facility to add custom providers for the extension specific stanzas.
You can ask questions on the developer forum for Smack, and contribute your code back to the project from here as well. If you produce an implementation of this extension, then you could potentially get commit privileges yourself if you want it.