I'm new to the Jahmm package, and I'm also new to Java.
I'm getting an error in KMeansLearner that says:
incompatible types: List<ObservationVector> cannot be converted to
List<? extends List<? extends ObservationVector>>
What does this mean? I only have observation vectors so far, and I declared that in the headers. Can anyone please tell me how to fix this? And if I want to use ObservationReal instead, how does that affect the code?
Here is my code:
package jahmm;
import be.ac.ulg.montefiore.run.jahmm.*;
import be.ac.ulg.montefiore.run.jahmm.ForwardBackwardCalculator;
import be.ac.ulg.montefiore.run.jahmm.Hmm;
import be.ac.ulg.montefiore.run.jahmm.KMeansCalculator;
import be.ac.ulg.montefiore.run.jahmm.ObservationVector;
import be.ac.ulg.montefiore.run.jahmm.OpdfDiscrete;
import be.ac.ulg.montefiore.run.jahmm.OpdfMultiGaussian;
import be.ac.ulg.montefiore.run.jahmm.ViterbiCalculator;
import be.ac.ulg.montefiore.run.jahmm.draw.GenericHmmDrawerDot;
import be.ac.ulg.montefiore.run.jahmm.io.ObservationReader;
import be.ac.ulg.montefiore.run.jahmm.io.ObservationSequencesReader;
import be.ac.ulg.montefiore.run.jahmm.io.ObservationVectorReader;
import be.ac.ulg.montefiore.run.jahmm.learn.KMeansLearner;
import be.ac.ulg.montefiore.run.jahmm.learn.BaumWelchLearner;
import be.ac.ulg.montefiore.run.jahmm.learn.BaumWelchScaledLearner;
import be.ac.ulg.montefiore.run.jahmm.toolbox.MarkovGenerator;
import be.ac.ulg.montefiore.run.jahmm.ObservationReal;
import be.ac.ulg.montefiore.run.jahmm.OpdfInteger;
import java.io.*;
import java.lang.*;
import java.util.*;
/**
*
* @author
*/
public class Jahmm {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
// TODO code application logic here
//Instances instances;
Reader reader;
int i, j, k;
try{
//String filename = argv[0];
String filex="dta_eat.seq";
//filex = "Desktop\R\dt_junto.csv";
String csvFileToRead = filex;
reader = new FileReader(filex);
List<ObservationVector> sequences =
ObservationSequencesReader.readSequences(new ObservationVectorReader(),
reader);
reader.close();
OpdfMultiGaussianFactory gMix = new OpdfMultiGaussianFactory(3);
KMeansLearner<ObservationVector> kml;
kml = new KMeansLearner<ObservationVector>(6,
gMix, sequences);
Hmm<ObservationVector> initHmm = kml.iterate();
//Hmm<ObservationVector> fittedHmm = kml.learn();
//Hmm<ObservationVector> initHmm = kml.iterate();
} catch(Exception e){
e.printStackTrace();
}
}
}
I would really appreciate your help.
You get a list of lists:
Reader reader = new FileReader("vectors.seq");
List<List<ObservationVector>> v = ObservationSequencesReader.
readSequences(new ObservationVectorReader(2), reader);
reader.close();
See this example.
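Applied to the question's code, a minimal sketch of the fix (hedged against the Jahmm 0.6.x API; the vector dimension passed to the reader is an assumption and must match your data file):
Reader reader = new FileReader("dta_eat.seq");
// readSequences returns one inner List per observation sequence,
// hence the nested List<List<...>> type:
List<List<ObservationVector>> sequences = ObservationSequencesReader
        .readSequences(new ObservationVectorReader(3), reader);
reader.close();

// KMeansLearner's constructor accepts List<? extends List<? extends O>>,
// so the nested list now matches:
KMeansLearner<ObservationVector> kml = new KMeansLearner<ObservationVector>(
        6, new OpdfMultiGaussianFactory(3), sequences);
Hmm<ObservationVector> initHmm = kml.iterate();
Switching to ObservationReal mostly changes the generic type and the helper classes: read with ObservationRealReader, use a scalar distribution factory such as OpdfGaussianFactory, and declare everything as <ObservationReal> instead of <ObservationVector> (class names hedged against the same Jahmm version).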
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.URI;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import javax.management.AttributeChangeNotification;
import org.apache.jena.datatypes.xsd.XSDDatatype;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.RDFNode;
import org.apache.jena.rdf.model.RDFReaderI;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.rdf.model.StmtIterator;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.system.StreamRDFWriter;
import org.apache.jena.vocabulary.VCARD;
import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;
import org.geotools.data.DataUtilities;
import org.geotools.data.FeatureSource;
import org.geotools.data.FileDataStore;
import org.geotools.data.FileDataStoreFinder;
import org.geotools.data.Query;
import org.geotools.data.ServiceInfo;
import org.geotools.data.shapefile.ShapefileDataStore;
import org.geotools.data.simple.SimpleFeatureCollection;
import org.geotools.data.simple.SimpleFeatureIterator;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.feature.FeatureCollection;
import org.geotools.feature.FeatureIterator;
import org.geotools.swing.data.JFileDataStoreChooser;
import org.opengis.feature.ComplexAttribute;
import org.opengis.feature.simple.SimpleFeature;
import org.opengis.feature.simple.SimpleFeatureType;
import org.opengis.feature.type.FeatureType;
import org.opengis.filter.Filter;
public class ShpToRdf {
public static void main(String[] args) throws IOException {
ArrayList<String> names = new ArrayList<String>();
ArrayList<String> values = new ArrayList<String>();
File file = JFileDataStoreChooser.showOpenFile("shp", null);
if (file == null) {
return;
}
FileDataStore myData = FileDataStoreFinder.getDataStore(file);
SimpleFeatureSource source = myData.getFeatureSource();
SimpleFeatureType schema = source.getSchema();
Query query = new Query(schema.getTypeName());
query.setMaxFeatures(100);
Model model = ModelFactory.createDefaultModel();
String shpURI = "http://www.shp.fake/";
Resource shapeFile = model.createResource(shpURI);
FeatureCollection<SimpleFeatureType, SimpleFeature> collection = source.getFeatures(query);
try (FeatureIterator<SimpleFeature> features = collection.features()) {
while (features.hasNext()) {
SimpleFeature feature = features.next();
model.setNsPrefix("shp", shpURI);
for (org.opengis.feature.Property attribute : feature.getProperties()) {
names.add(attribute.getName().toString());
values.add(attribute.getValue().toString());
}
}
}
ArrayList<Integer> ids = new ArrayList<Integer>();
for(int i=0; i<names.size();i++) {
if (names.get(i).equals("Id")) {
ids.add(i);
}
}
Property features = model.createProperty(shpURI,"features");
for(int i = 0; i<ids.size();i++) {
Property id = model.createProperty(shpURI,names.get(ids.get(i)));
shapeFile = model.createResource(shpURI)
.addProperty(features, model.createResource()
.addProperty(id,model.createResource()
.addProperty(id, values.get(ids.get(i)))
.addProperty(features, "feature1")
.addProperty(features, "feature2")
.addProperty(features, "feature3")));
}
RDFDataMgr.write(System.out, model, Lang.RDFXML);
}
}
I am trying to create an application that converts a shapefile (.shp) to RDF.
The problem is that I get two ArrayLists from the shapefile: one holds the names of the attributes (id, name, geometry, etc.) and the other holds the values.
To create the RDF, I have to match each Id with its values (e.g. Id = 1 has name = road 1, geometry = line, etc.).
Could you help me with this?
Thank you!
I think you should be able to do this by tweaking the following bit of logic:
for (org.opengis.feature.Property attribute : feature.getProperties()) {
names.add(attribute.getName().toString());
values.add(attribute.getValue().toString());
}
Instead of putting them in two lists, you can put them in a list of pairs. That way, when you iterate over the list, you know the mapping between the subject and the object.
It should look something like this:
// Pair is not in the JDK proper; javafx.util.Pair (or a small custom class) has this constructor.
List<Pair<String, String>> contentList = new ArrayList<Pair<String, String>>();
for (org.opengis.feature.Property attribute : feature.getProperties()) {
    Pair<String, String> subjectObjectPair = new Pair<String, String>(attribute.getName().toString(), attribute.getValue().toString());
    contentList.add(subjectObjectPair);
}
I'm not sure what the ids ArrayList is for, but you could move that logic into the for loop above to make sure you're only getting identifiers.
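If a Pair type is not handy, one map per feature also works and keeps each Id aligned with the values of the same feature. A minimal sketch under the question's own GeoTools/Jena setup (the attribute name "Id" and the URI scheme are taken from the question's code):
// Collect one name-to-value map per feature instead of two parallel lists.
ArrayList<Map<String, String>> featureAttrs = new ArrayList<>();
try (FeatureIterator<SimpleFeature> features = collection.features()) {
    while (features.hasNext()) {
        SimpleFeature feature = features.next();
        Map<String, String> attrs = new HashMap<>();
        for (org.opengis.feature.Property attribute : feature.getProperties()) {
            attrs.put(attribute.getName().toString(), String.valueOf(attribute.getValue()));
        }
        featureAttrs.add(attrs);
    }
}
// Build one RDF resource per feature from its own map, so Id and values stay together.
for (Map<String, String> attrs : featureAttrs) {
    Resource res = model.createResource(shpURI + attrs.get("Id"));
    for (Map.Entry<String, String> e : attrs.entrySet()) {
        res.addProperty(model.createProperty(shpURI, e.getKey()), e.getValue());
    }
}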
I am new to Corda and started with the provided tutorials.
In two different tutorials I ran into the same problem: the functions requireThat() and requireSingleCommand() are undefined.
Below is my code from the "Hello, World! Pt. 2" tutorial.
The requireThat() function produces the following error:
The method requireThat((<no type> require) -> {}) is undefined for the type SignTxFlow Java(67108964)
However, the requireThat() function is part of the net.corda.core.contracts package, which is imported in the code.
Thank you for your help!
package com.template.flows;
import com.template.states.IOUState;
import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.contracts.ContractState;
import net.corda.core.crypto.SecureHash;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.FlowSession;
import net.corda.core.flows.InitiatedBy;
import net.corda.core.flows.ReceiveFinalityFlow;
import net.corda.core.flows.SignTransactionFlow;
import net.corda.core.transactions.SignedTransaction;
import net.corda.core.contracts.*;
import net.corda.core.flows.*;
import net.corda.core.contracts.Command;
import net.corda.core.transactions.*;
import net.corda.core.utilities.ProgressTracker;
// ******************
// * Responder flow *
// ******************
@InitiatedBy(IOUFlow.class)
public class IOUFlowResponder extends FlowLogic<Void> {
private FlowSession otherPartySession;
public IOUFlowResponder(FlowSession otherPartySession) {
this.otherPartySession = otherPartySession;
}
@Suspendable
@Override
public Void call() throws FlowException {
class SignTxFlow extends SignTransactionFlow {
private SignTxFlow(FlowSession otherPartySession) {
super(otherPartySession);
}
@Override
protected void checkTransaction(SignedTransaction stx) {
requireThat(require -> {
ContractState output = stx.getTx().getOutputs().get(0).getData();
require.using("this must be an IOU transaction. ", output instanceof IOUState);
IOUState iou = (IOUState) output;
require.using("The IOU's value can't be too high.", iou.getValue() < 100);
return null;
});
}
}
SecureHash expectedTxId = subFlow(new SignTxFlow(otherPartySession)).getId();
subFlow(new ReceiveFinalityFlow(otherPartySession, expectedTxId));
return null;
}
}
The problem is solved if the static import
import static net.corda.core.contracts.ContractsDSL.requireThat;
is added at the top. requireThat() is a static method of ContractsDSL, so a class import of net.corda.core.contracts (even the wildcard form) does not bring the method into scope; only a static import does.
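For illustration, a minimal sketch of the two import forms side by side:
// The wildcard only covers the types in the package, not their static members:
import net.corda.core.contracts.*;
// The static import is what puts the requireThat method itself into scope:
import static net.corda.core.contracts.ContractsDSL.requireThat;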
For the requirements of my project, I need to build a class from the Confluent Java code that writes data from a Kafka topic to the HDFS filesystem.
It actually works in the CLI with connect-standalone, but I need to do the same thing from the source code, which I built successfully.
I have a problem with the SinkTask and HdfsConnector classes.
An exception shows up in the put method.
Here is my class code:
package io.confluent.connect.hdfs;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.SinkConnector;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTaskContext;
import io.confluent.connect.avro.AvroData;
import io.confluent.connect.hdfs.avro.AvroFormat;
import io.confluent.connect.hdfs.partitioner.DefaultPartitioner;
import io.confluent.connect.storage.common.StorageCommonConfig;
import io.confluent.connect.storage.partitioner.PartitionerConfig;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.config.ConfigDef;
public class main{
private static Map<String, String> props = new HashMap<>();
// assumed values: TOPIC and PARTITION were undefined in the original snippet and must exist for it to compile
protected static final String TOPIC = "test";
protected static final int PARTITION = 0;
protected static final TopicPartition TOPIC_PARTITION = new TopicPartition(TOPIC, PARTITION);
protected static String url = "hdfs://localhost:9000";
protected static SinkTaskContext context;
public static void main(String[] args) {
HdfsSinkConnector hk = new HdfsSinkConnector();
HdfsSinkTask h = new HdfsSinkTask();
props.put(StorageCommonConfig.STORE_URL_CONFIG, url);
props.put(HdfsSinkConnectorConfig.HDFS_URL_CONFIG, url);
props.put(HdfsSinkConnectorConfig.FLUSH_SIZE_CONFIG, "3");
props.put(HdfsSinkConnectorConfig.FORMAT_CLASS_CONFIG, AvroFormat.class.getName());
try {
hk.start(props);
Collection<SinkRecord> sinkRecords = new ArrayList<>();
SinkRecord record = new SinkRecord("test", 0, null, null, null, null, 0);
sinkRecords.add(record);
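// note: 'context' is declared above but never assigned, so initialize() below receives null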
h.initialize(context);
h.put(sinkRecords);
hk.stop();
} catch (Exception e) {
throw new ConnectException("Couldn't start HdfsSinkConnector due to configuration error", e);
}
}
}
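For reference, a hedged sketch of the lifecycle the Connect standalone runtime drives for a sink task; skipping the task's start() call, or handing initialize() a null SinkTaskContext as above, are common reasons for a NullPointerException inside put():
HdfsSinkTask task = new HdfsSinkTask();
task.initialize(context); // context must be a real SinkTaskContext, not null
task.start(props);        // opens the HDFS writers before any records arrive
task.put(sinkRecords);
task.stop();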
I want to run a particular test step of my SoapUI test case using Java code. My problem is that running at the test-step level needs a TestCaseRunner argument, which is an anonymous inner type, and a TestCaseRunContext, which is an interface. Do I have to implement both to run the step? If yes, can anyone please share a sample of how to do that?
Here's my code:
package com.testauto.soaprunner.soap.impl;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.Date;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.eviware.soapui.SoapUI;
import com.eviware.soapui.StandaloneSoapUICore;
import com.eviware.soapui.impl.wsdl.WsdlProject;
import com.eviware.soapui.impl.wsdl.WsdlTestSuite;
import com.eviware.soapui.impl.wsdl.testcase.WsdlTestCase;
import com.eviware.soapui.impl.wsdl.testcase.WsdlTestCaseRunner;
import com.eviware.soapui.impl.wsdl.teststeps.WsdlTestStep;
import com.eviware.soapui.model.TestPropertyHolder;
import com.eviware.soapui.model.iface.MessageExchange;
import com.eviware.soapui.model.propertyexpansion.PropertyExpansionUtils;
import com.eviware.soapui.model.testsuite.TestCase;
import com.eviware.soapui.model.testsuite.TestCaseRunContext;
import com.eviware.soapui.model.testsuite.TestProperty;
import com.eviware.soapui.model.testsuite.TestStepResult;
import com.eviware.soapui.model.testsuite.TestSuite;
import com.eviware.soapui.support.types.StringToObjectMap;
import com.eviware.soapui.support.types.StringToStringsMap;
import com.testauto.soaprunner.data.InputData;
import com.testauto.soaprunner.data.ReportData;
public class RunTestImpl{
static Logger logger = LoggerFactory.getLogger(RunTestImpl.class);
List<ReportData> reportDatList=new ArrayList<ReportData>();
public List<ReportData> process(Map<String, String> readDataMap, InputData input, Map<List<String>, String> configurationMap, List<String> configuration, WsdlTestSuite testSuite)
{
List<ReportData> report = new ArrayList<ReportData>();
logger.info("Into the Class for running test cases");
try{
report= getTestSuite(readDataMap,input,configurationMap,configuration,testSuite);
}
catch(Exception e)
{
logger.info(e.getMessage());
}
return report;
}
private List<ReportData> getTestSuite(Map<String, String> readDataMap, InputData input, Map<List<String>, String> configurationMap, List<String> configuration, WsdlTestSuite testSuite) throws Exception {
ReportData report=new ReportData();
logger.info("Into the Class for running test cases");
String suiteName = "";
String reportStr = "";
List<String> testCaseNameList= setPropertyValues(readDataMap,input);
WsdlTestCaseRunner runner = null;
List<TestSuite> suiteList = new ArrayList<TestSuite>();
List<TestCase> caseList = new ArrayList<TestCase>();
SoapUI.setSoapUICore(new StandaloneSoapUICore(true));
System.out.println("testcase name "+ configurationMap.get(configuration));
// WsdlTestCase testCase= testSuite.getTestCaseByName(input.getApiName()+"_"+testCaseName+"_TestCase");
WsdlTestCase testCase= testSuite.getTestCaseByName("my_TESTCASE");
WsdlTestStep tesStep=testCase.getTestStepByName(configurationMap.get(testCaseNameList));
System.out.println("test case name:"+testCase.getName());
report.setTestCase(testCase.getName());
suiteList.add(testSuite);
runner= tesStep.run(?,?);
return reportDatList;
}
private List<String> setPropertyValues(Map<String, String> readDataMap, InputData input) {
String testCaseName="";
TestPropertyHolder holder = PropertyExpansionUtils.getGlobalProperties();
List<String> dataConfigurationList=new ArrayList<String>();
Iterator entries = readDataMap.entrySet().iterator();
while (entries.hasNext()) {
Entry thisEntry = (Entry) entries.next();
String key = (String) thisEntry.getKey();
String value = (String) thisEntry.getValue();
testCaseName+=key;
holder.setPropertyValue(key, holder.getPropertyValue(key));
dataConfigurationList.add(key);
}
System.out.println("testCaseName"+testCaseName);
return dataConfigurationList;
}
}
After trying different things, I got something like this:
TestCaseRunContext context = new MockTestRunContext(new MockTestRunner(testStep.getTestCase()), testStep);
MockTestRunner runner = new MockTestRunner(testStep.getTestCase());
TestStepResult testStepResult= testStep.run(runner, context);
I don't know how it works, but this trick worked for me. If someone knows the reason behind it, please share.
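A plausible explanation (hedged): MockTestRunner and MockTestRunContext are SoapUI's stub implementations of the TestCaseRunner and TestCaseRunContext interfaces, so they satisfy run()'s two parameters without driving the rest of the test case. A minimal sketch of how that slots into getTestSuite() above, replacing the tesStep.run(?,?) placeholder (the package location is taken from SoapUI's open-source tree and may vary by version):
// Stub runner/context classes (location hedged; check your SoapUI version):
import com.eviware.soapui.impl.wsdl.panels.support.MockTestRunner;
import com.eviware.soapui.impl.wsdl.panels.support.MockTestRunContext;

// Inside getTestSuite():
MockTestRunner mockRunner = new MockTestRunner(tesStep.getTestCase());
MockTestRunContext mockContext = new MockTestRunContext(mockRunner, tesStep);
TestStepResult stepResult = tesStep.run(mockRunner, mockContext);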
I am trying to run a WordCount MapReduce program that reads and counts data stored in a Cassandra table (column family), but when I compile the program I get the same error repeated several times. Below are my source code and the errors I got. Can anyone help me solve this issue? Thanks in advance.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.cassandra.db.IColumn;
import org.apache.cassandra.hadoop.*;
import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.*;
import org.apache.cassandra.utils.ByteBufferUtil;
/**
* This sums the word count stored in the input_words_count ColumnFamily for the key "key-if-verse1".
*
* Output is written to a text file.
*/
public class WordCountCounters extends Configured implements Tool
{
private static final Logger logger = LoggerFactory.getLogger(WordCountCounters.class);
static final String COUNTER_COLUMN_FAMILY = "input_words";
private static final String OUTPUT_PATH_PREFIX = "/Users/Deepu/Documents/dse-3.2.4/dse-data/word_count_counters";
public static void main(String[] args) throws Exception
{
// Let ToolRunner handle generic command-line options
ToolRunner.run(new Configuration(), new WordCountCounters(), args);
System.exit(0);
}
public static class SumMapper extends Mapper<ByteBuffer, SortedMap<ByteBuffer, IColumn>, Text, LongWritable>
{
public void map(ByteBuffer key, SortedMap<ByteBuffer, IColumn> columns, Context context) throws IOException, InterruptedException
{
long sum = 0;
for (IColumn column : columns.values())
{
logger.debug("read " + key + ":" + column.name() + " from " + context.getInputSplit());
sum += ByteBufferUtil.toLong(column.value());
}
context.write(new Text(ByteBufferUtil.string(key)), new LongWritable(sum));
}
}
public int run(String[] args) throws Exception
{
Job job = new Job(getConf(), "wordcountcounters");
job.setJarByClass(WordCountCounters.class);
job.setMapperClass(SumMapper.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(LongWritable.class);
FileOutputFormat.setOutputPath(job, new Path(OUTPUT_PATH_PREFIX));
job.setInputFormatClass(ColumnFamilyInputFormat.class);
ConfigHelper.setRpcPort(job.getConfiguration(), "9160");
ConfigHelper.setInitialAddress(job.getConfiguration(), "localhost");
ConfigHelper.setPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.RandomPartitioner");
ConfigHelper.setInputColumnFamily(job.getConfiguration(), WordCount.KEYSPACE, WordCountCounters.COUNTER_COLUMN_FAMILY);
SlicePredicate predicate = new SlicePredicate().setSlice_range(
new SliceRange().
setStart(ByteBufferUtil.EMPTY_BYTE_BUFFER).
setFinish(ByteBufferUtil.EMPTY_BYTE_BUFFER).
setCount(100));
ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);
job.waitForCompletion(true);
return 0;
}
}
Compilation errors are:
Perhaps because you commented out these two lines:
//import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
//import org.apache.cassandra.hadoop.ConfigHelper;