Help! I'm looking to create a Java application that generates a graph in any one of these formats:
.graphml
.ygf
.gml
.tgf
I need to be able to open the file in the graph editor "yEd".
So far, I have found these solutions:
yFiles For Java
Pro: Export to graphml, able to open in yEd, Java based, perfect.
Why I can't use it: It would cost me more than $2000 to use :( even though it is exactly what I need.
Gephi
Pro: FREE, Export to graphml, Java based!
Why I can't use it: When I try to open the generated graphml file in yEd, the graph appears broken: everything is laid out on a single line, like this screenshot:
If I get it to work, then this is perfect
The graph I tried was generated using their example project
JGraphX
Pro: Able to generate a graph, Java based, FREE
Why I can't use it: How to export the generated graph to graphml? I couldn't figure it out...
Prefuse
Pro: Free, graph generation, Java based
Why I can't use it: It seems like I can only read graphml, not write it. Also, I built the demos fine with build.sh all, but when I tried to run demos.jar, I got "Failed to load Main-Class"...
Blueprints with GraphML Reader and Writer Library (Tinkerpop?)
Pro: Java, Free, seems like you can export graphml with it
Why I can't use it: I'm confused, do I need to use this in conjunction with one of the "Implementations" listed? How do I use this?
JGraphT with GraphMLExporter
Pro: Able to generate graph, Java based, free, can export to graphml I think
Why I can't use it: I can't figure out how to export it! When I tried to open the generated graphml in yEd, I got "yEd has encountered the following error: Could not import file test.graphml." I used their example project and did this:
JGraphT Code I Used:
UndirectedGraph<String, DefaultEdge> g =
        new SimpleGraph<String, DefaultEdge>(DefaultEdge.class);
String v1 = "v1";
String v2 = "v2";
String v3 = "v3";
String v4 = "v4";

// add the vertices
g.addVertex(v1);
g.addVertex(v2);
g.addVertex(v3);
g.addVertex(v4);

// add edges to create a circuit
g.addEdge(v1, v2);
g.addEdge(v2, v3);
g.addEdge(v3, v4);
g.addEdge(v4, v1);

FileWriter w;
try {
    GmlExporter<String, DefaultEdge> exporter =
            new GmlExporter<String, DefaultEdge>();
    w = new FileWriter("test.graphml");
    exporter.export(w, g);
} catch (IOException e) {
    e.printStackTrace();
}
Any ideas? Thanks!
It might be late to answer, but for solution number two:
Right after you import the graph into yEd, just click "Layout" and select one. yEd will not choose one for you by default, which is why the graph looked linear.
I also wanted to export JGraphT graphs for yEd but was not happy with the results. Therefore, I created an extended GMLWriter supporting yEd's specific GML format (groups, colours, different edges, ...).
GML-Writer-for-yED
I don't know if this fits your use case, but I use neo4j for creating a graph and then use the neo4j-shell-tools to export the graph as graphml. Perhaps this will work for you.
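For what it's worth, the graph-building part of that approach might look roughly like the sketch below. This is only a sketch against the Neo4j 3.x embedded Java API; the database path and the label/relationship names are placeholders, and the actual GraphML export would still be done afterwards with neo4j-shell-tools, outside this program.
import java.io.File;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class Neo4jGraphBuilder {
    public static void main(String[] args) {
        // "graph.db" is a placeholder database directory
        GraphDatabaseService db =
                new GraphDatabaseFactory().newEmbeddedDatabase(new File("graph.db"));
        try (Transaction tx = db.beginTx()) {
            // create two vertices and connect them
            Node v1 = db.createNode(Label.label("Vertex"));
            Node v2 = db.createNode(Label.label("Vertex"));
            v1.createRelationshipTo(v2, RelationshipType.withName("CONNECTED"));
            tx.success();
        }
        db.shutdown();
        // GraphML export is then done separately with neo4j-shell-tools.
    }
}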
Just replace every occurrence of GmlExporter with GraphMLExporter in your code; right now you are writing GML markup into a file named test.graphml, which is why yEd cannot import it. That should work.
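For reference, here is roughly what the corrected export from the question would look like. This is a sketch against the older JGraphT API the question's code uses (where the exporters live in org.jgrapht.ext and expose export(Writer, graph)); in newer JGraphT releases the exporter moved to org.jgrapht.io and the method names differ, so adjust accordingly.
import java.io.FileWriter;

import org.jgrapht.UndirectedGraph;
import org.jgrapht.ext.GraphMLExporter; // package/name may differ in newer JGraphT releases
import org.jgrapht.graph.DefaultEdge;
import org.jgrapht.graph.SimpleGraph;

public class GraphMLExportExample {
    public static void main(String[] args) {
        UndirectedGraph<String, DefaultEdge> g =
                new SimpleGraph<String, DefaultEdge>(DefaultEdge.class);
        g.addVertex("v1");
        g.addVertex("v2");
        g.addEdge("v1", "v2");

        try {
            // same call shape as the GmlExporter in the question, but this one emits GraphML
            GraphMLExporter<String, DefaultEdge> exporter =
                    new GraphMLExporter<String, DefaultEdge>();
            FileWriter w = new FileWriter("test.graphml");
            exporter.export(w, g);
            w.close();
        } catch (Exception e) { // the exporter can throw checked XML exceptions besides IOException
            e.printStackTrace();
        }
    }
}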
I'm using the Prefuse library, and you can generate a GraphML file from a Graph object with the class GraphMLWriter.
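A minimal sketch of that, assuming Prefuse's prefuse.data.io.GraphMLWriter and its writeGraph(Graph, String) method (check the class against your Prefuse version); the output file name is a placeholder:
import prefuse.data.Graph;
import prefuse.data.Node;
import prefuse.data.io.GraphMLWriter;

public class PrefuseGraphMLExample {
    public static void main(String[] args) throws Exception {
        Graph graph = new Graph(); // undirected graph with default schema
        Node a = graph.addNode();
        Node b = graph.addNode();
        graph.addEdge(a, b);

        // serialize the prefuse Graph to a GraphML file
        new GraphMLWriter().writeGraph(graph, "prefuse-graph.graphml");
    }
}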
I created a little tutorial/GitHub repo with sample code showing how to work with the JGraphT classes to export to GraphML and GML, and what the results can look like in yEd.
As already mentioned in another answer, if you don't want to do much configuration yourself, GML-Writer-for-yED might be handy.
I'd like to embed Tablesaw interactive graphs in Jupyter Notebook using the IJava kernel. I realize that Tablesaw may not be able to do this out of the box, but I'm willing to put in a little effort to make this happen. I'm a Java expert, but I'm new to Jupyter Notebook and to the IJava kernel, so I'm not sure where to start. Is there some API for Jupyter Notebook or for IJava for embedding objects?
First of all I installed Anaconda and then installed the IJava kernel in Jupyter Notebook with no problem. So far it's working without a hitch using OpenJDK 11 on Windows 10! Next I tried to use Tablesaw. I was able to add its Maven dependencies, load a CSV file, and create a plot. Very nice!
However, to produce a graph, Tablesaw generates a temporary HTML file using Plotly and invokes the browser to show the interactive plot. In other words, the graph does not appear inside Jupyter Notebook.
Tablesaw has an example with embedded graphs using the BeakerX kernel (not IJava), and as you can see (scroll down to "Play (Money)ball with Linear Regression"), they are embedding a Tablesaw Plot directly within Jupyter Notebook. So I know that conceptually embedding an interactive Tablesaw graph in Jupyter Notebook with a Java kernel is possible.
Is this capability something specific to BeakerX? I would switch to BeakerX, but from the documentation I didn't see anything about BeakerX supporting Java 9+. In addition, IJava seemed like a leaner implementation, built directly on top of JShell.
Where do I start to figure out how to embed a Tablesaw Plot object as an interactive graph in Jupyter Notebook using the IJava kernel, the way they are doing in BeakerX?
The IJava kernel has two main functions for display_data: display and render. Both hook into the Renderer from the base kernel. We can register a render function for the Figure type from Tablesaw.
Add the tablesaw-jsplot dependency (and all required transitive dependencies) via the loadFromPOM cell magic:
%%loadFromPOM
<dependency>
    <groupId>tech.tablesaw</groupId>
    <artifactId>tablesaw-jsplot</artifactId>
    <version>0.30.4</version>
</dependency>
Register a render function for IJava's renderer.
We create a registration for tech.tablesaw.plotly.components.Figure.
If an output type is not specified during the render then we want it to default to text/html (with the preferring call).
If an HTML render is requested, we build the target <div> as well as the JavaScript that invokes a Plotly render into that <div>. Most of this logic is done via Tablesaw's asJavascript method, but it hooks into the Jupyter notebook's require AMD module setup.
import io.github.spencerpark.ijava.IJava;

IJava.getKernelInstance().getRenderer()
    .createRegistration(tech.tablesaw.plotly.components.Figure.class)
    .preferring(io.github.spencerpark.jupyter.kernel.display.mime.MIMEType.TEXT_HTML)
    .register((figure, ctx) -> {
        ctx.renderIfRequested(io.github.spencerpark.jupyter.kernel.display.mime.MIMEType.TEXT_HTML, () -> {
            String id = UUID.randomUUID().toString().replace("-", "");
            figure.asJavascript(id);
            Map<String, Object> context = figure.getContext();

            StringBuilder html = new StringBuilder();
            html.append("<div id=\"").append(id).append("\"></div>\n");
            html.append("<script>require(['https://cdn.plot.ly/plotly-1.44.4.min.js'], Plotly => {\n");
            html.append("var target_").append(id).append(" = document.getElementById('").append(id).append("');\n");
            html.append(context.get("figure")).append('\n');
            html.append(context.get("plotFunction")).append('\n');
            html.append("})</script>\n");
            return html.toString();
        });
    });
Then we can create a table to display (taken from the Tablesaw docs).
import tech.tablesaw.api.*;
import tech.tablesaw.plotly.api.*;
import tech.tablesaw.plotly.components.*;
String[] animals = {"bear", "cat", "giraffe"};
double[] cuteness = {90.1, 84.3, 99.7};
Table cuteAnimals = Table.create("Cute Animals")
    .addColumns(
        StringColumn.create("Animal types", animals),
        DoubleColumn.create("rating", cuteness)
    );
cuteAnimals
Finally we can create a Figure for the table and display it via one of 3 methods for displaying things in IJava.
VerticalBarPlot.create("Cute animals", cuteAnimals, "Animal types", "rating");
This is equivalent to a render call where the "text/html" is implicit, because we set it as the preferred type (via preferring during registration):
render(VerticalBarPlot.create("Cute animals", cuteAnimals, "Animal types", "rating"), "text/html");
And if the figure is not the last expression in a cell, the display function is another option. For example, to display the chart and then cuteAnimals afterwards:
Figure figure = VerticalBarPlot.create("Cute animals", cuteAnimals, "Animal types", "rating");
display(figure);
cuteAnimals
Try using the command prompt and running pip install Tablesaw, then follow the other suggestions.
Hey there! I just need help implementing the Naive Bayes text classification algorithm in Java to test my data set for research purposes. It is compulsory to implement the algorithm in Java, rather than using Weka or RapidMiner tools, to get the results!
My Data Set has the following type of Data:
Doc Words Category
This means that I have the training words and the category for each training string known in advance. Some of the data set is given below:
Doc Words Category
Training
1 Integration Communities Process Oriented Structures...(more string) A
2 Integration Communities Process Oriented Structures...(more string) A
3 Theory Upper Bound Routing Estimate global routing...(more string) B
4 Hardware Design Functional Programming Perfect Match...(more string) C
.
.
.
Test
5 Methodology Toolkit Integrate Technological Organisational
6 This test contain string naive bayes test text text test
So the data set comes from a MySQL database, and it may contain multiple training strings and test strings as well. The thing is, I just need to implement the Naive Bayes text classification algorithm in Java.
The algorithm should follow the example given in Table 13.1, mentioned here.
Source: Read here
The thing is that I can implement the algorithm in Java code myself, but I just need to know whether there exists some kind of Java library, with source code and documentation available, that would allow me to just test the results.
The problem is that I need the results only once; it is just a test run for the results.
So, to come to the point: can somebody tell me about a good Java library that would help me code this algorithm in Java and process my data set for results, or give me any good ideas on how to do it easily... something that can help me.
I will be thankful for your help.
Thanks in advance
As per your requirement, you can use the machine learning library MLlib from Apache. MLlib is Spark's scalable machine learning library, consisting of common learning algorithms and utilities. There is also a Java code template for implementing the algorithm with the library. So to begin with, you can implement the Java skeleton for Naive Bayes provided on their site, as given below.
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.mllib.classification.NaiveBayes;
import org.apache.spark.mllib.classification.NaiveBayesModel;
import org.apache.spark.mllib.regression.LabeledPoint;
import scala.Tuple2;

JavaRDD<LabeledPoint> training = ... // training set
JavaRDD<LabeledPoint> test = ... // test set

final NaiveBayesModel model = NaiveBayes.train(training.rdd(), 1.0);

JavaPairRDD<Double, Double> predictionAndLabel =
    test.mapToPair(new PairFunction<LabeledPoint, Double, Double>() {
        @Override
        public Tuple2<Double, Double> call(LabeledPoint p) {
            return new Tuple2<Double, Double>(model.predict(p.features()), p.label());
        }
    });

double accuracy = predictionAndLabel.filter(new Function<Tuple2<Double, Double>, Boolean>() {
    @Override
    public Boolean call(Tuple2<Double, Double> pl) {
        return pl._1().equals(pl._2());
    }
}).count() / (double) test.count();
For testing your data sets, there is no better option here than Spark SQL. MLlib fits into Spark's APIs perfectly. To start, I would recommend going through the MLlib API first and implementing the algorithm according to your needs. This is pretty easy with the library.
For the next step, to make processing your data sets possible, just use Spark SQL.
I recommend you stick with this. I too hunted down multiple options before settling on this easy-to-use library and its seamless support for interoperation with other technologies. I would have posted the complete code here to fit your question exactly, but I think you are good to go.
You can use the Weka Java API and include it in your project if you do not want to use the GUI.
Here's a link to the documentation to incorporate a classifier in your code:
https://weka.wikispaces.com/Use+WEKA+in+your+Java+code
Please take a look at the Bow toolkit.
It has a GNU license and source code. Some of its functionality includes:
Setting word vector weights according to Naive Bayes, TFIDF, and several other methods.
Performing test/train splits, and automatic classification tests.
It's not a Java library, but you could compile the C code to ensure that your Java gives similar results for a given corpus.
I also spotted a decent Dr. Dobb's article that implements it in Perl. Once again, not the desired Java, but it will give you the one-time results that you are asking for.
Hi, I think Spark would help you a lot:
http://spark.apache.org/docs/1.2.0/mllib-naive-bayes.html
You can even choose the language you think is most appropriate for your needs: Java / Python / Scala!
You may want to take a look at this.
https://mahout.apache.org/users/classification/bayesian.html
Please use scikit-learn from Python. There is already an implementation of what you need:
class sklearn.naive_bayes.MultinomialNB(alpha=1.0, fit_prior=True, class_prior=None)
scikit-learn
You can use an algorithm platform like KNIME; it has a variety of classification algorithms (Naive Bayes included). You can run it with a GUI or through the Java API.
If you want to implement the Naive Bayes text classification algorithm in Java, then the WEKA Java API is a good solution. The data set has to be in .arff format, and creating an .arff file from a MySQL database is very easy. Below are the Java code for the classifier and a link to a sample .arff file.
Create a new text document, open it with Notepad, copy and paste all the text from the link below, and save it as DataSet.arff: http://storm.cis.fordham.edu/~gweiss/data-mining/weka-data/weather.arff
Download Weka Java API: http://www.java2s.com/Code/Jar/w/weka.htm
Code for the classifier:
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;

public static void main(String[] args) {
    try {
        StringBuilder txtAreaShow = new StringBuilder();

        // read the .arff file
        BufferedReader breader = new BufferedReader(new FileReader("DataSet.arff"));

        // if 40 attributes are available, then index 39 is the class attribute (yes/no)
        Instances train = new Instances(breader);
        train.setClassIndex(train.numAttributes() - 1);
        breader.close();

        // build the Naive Bayes classifier and evaluate it with 10-fold cross-validation
        NaiveBayes nB = new NaiveBayes();
        nB.buildClassifier(train);
        Evaluation eval = new Evaluation(train);
        eval.crossValidateModel(nB, train, 10, new Random(1));

        System.out.println("Run Information\n=====================");
        System.out.println("Scheme: " + train.getClass().getName());
        System.out.println("Relation: ");
        System.out.println("\nClassifier Model (full training set)\n===============================");
        System.out.println(nB);
        System.out.println(eval.toSummaryString("\nSummary Results\n==================", true));
        System.out.println(eval.toClassDetailsString());
        System.out.println(eval.toMatrixString());

        // text-area output
        txtAreaShow.append("\n\n\n");
        txtAreaShow.append("Run Information\n===================\n");
        txtAreaShow.append("Scheme: " + train.getClass().getName());
        txtAreaShow.append("\n\nClassifier Model (full training set)"
                + "\n======================================\n");
        txtAreaShow.append("" + nB);
        txtAreaShow.append(eval.toSummaryString("\n\nSummary Results\n==================\n", true));
        txtAreaShow.append(eval.toClassDetailsString());
        txtAreaShow.append(eval.toMatrixString());
        txtAreaShow.append("\n\n\n");
        System.out.println(txtAreaShow.toString());
    } catch (FileNotFoundException ex) {
        System.err.println("File not found");
        System.exit(1);
    } catch (IOException ex) {
        System.err.println("Invalid input or output.");
        System.exit(1);
    } catch (Exception ex) {
        System.err.println("Exception occurred!");
        System.exit(1);
    }
}
You can take a look at Blayze - It's a pretty minimal Naive Bayes library for the JVM written in Kotlin. Should be easy to follow.
Full disclosure: I'm one of the authors of Blayze
I have two .m files. One is the function, and the other one (read.m) reads the function and exports the results into an Excel file. I have a Java program that makes some changes to the .m files. After the changes, I want to automate running the .m files. I have downloaded matlabcontrol.jar and I am looking for a way to use it to invoke and run the read.m file, which then reads the function.
Can anyone help me with the code? Thanks
I have tried this code but it does not work.
public static void tomatlab() throws MatlabConnectionException, MatlabInvocationException {
    MatlabProxyFactoryOptions options = new MatlabProxyFactoryOptions.Builder()
            .setUsePreviouslyControlledSession(true)
            .build();

    MatlabProxyFactory factory = new MatlabProxyFactory(options);
    MatlabProxy proxy = factory.getProxy();

    proxy.eval("addpath('C:\\path_to_read.m')");
    proxy.feval("read");
    proxy.eval("rmpath('C:\\path_to_read.m')");

    // close connection
    proxy.disconnect();
}
Based on the official tutorial in the Wiki of the project, it seems quite straightforward to start with this API.
The path manipulation might be a bit tricky, but I would try loading the whole script into a string and passing it to eval (please note I have no prior experience with this specific MATLAB library). That could be done quite easily, for example by joining Files.readAllLines(); a rough sketch follows.
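Untested, but a sketch of that idea using matlabcontrol's MatlabProxy.eval might look like this (the script path is a placeholder):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import matlabcontrol.MatlabInvocationException;
import matlabcontrol.MatlabProxy;

public class ScriptRunner {
    // Read the whole .m file and hand it to MATLAB in a single eval call.
    static void evalScript(MatlabProxy proxy) throws IOException, MatlabInvocationException {
        String script = String.join("\n",
                Files.readAllLines(Paths.get("C:\\scripts\\read.m"), StandardCharsets.UTF_8));
        proxy.eval(script);
    }
}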
Hope that helps.
Is there a way to generate BPEL programmatically in Java?
I tried using the BPEL Eclipse Designer API to write this code:
Process process = null;
try {
    Resource.Factory.Registry reg = Resource.Factory.Registry.INSTANCE;
    Map<String, Object> m = reg.getExtensionToFactoryMap();
    m.put("bpel", new BPELResourceFactoryImpl()); // it works with XMLResourceFactoryImpl()

    // create resource
    URI uri = URI.createFileURI("myBPEL2.bpel");
    ResourceSet rSet = new ResourceSetImpl();
    Resource bpelResource = rSet.createResource(uri);

    // create/populate process
    process = BPELFactory.eINSTANCE.createProcess();
    process.setName("myBPEL");
    Sequence mySeq = BPELFactory.eINSTANCE.createSequence();
    mySeq.setName("mainSequence");
    process.setActivity(mySeq);

    // save resource
    bpelResource.getContents().add(process);
    Map<String, String> map = new HashMap<String, String>();
    map.put("bpel", "http://docs.oasis-open.org/wsbpel/2.0/process/executable");
    map.put("tns", "http://matrix.bpelprocess");
    map.put("xsd", "http://www.w3.org/2001/XMLSchema");
    bpelResource.save(map);
} catch (Exception e) {
    e.printStackTrace();
}
but I received an error:
INamespaceMap cannot be attached to an eObject ...
I read this message by Simon:
I understand that using the BPEL model outside of eclipse might be desirable, but it was never intended by us. Thus, this isn't supported
Is there any other API that can help?
You might want to give JAXB a try. It helps you transform the official BPEL XSD into Java classes; you then use those classes to construct your BPEL document and output it. A rough sketch of that approach is below.
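This sketch assumes you have already run xjc against the WS-BPEL 2.0 executable-process XSD; the generated names used here (ObjectFactory, TProcess, createProcess, setName) are assumptions about what xjc typically produces for that schema and may differ with your bindings:
import java.io.File;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

public class BpelJaxbSketch {
    public static void main(String[] args) throws Exception {
        // classes generated by xjc from the BPEL XSD (names are assumptions)
        ObjectFactory factory = new ObjectFactory();
        TProcess process = factory.createTProcess();
        process.setName("myBPEL");

        JAXBContext ctx = JAXBContext.newInstance(ObjectFactory.class);
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        // wrap the type in its root element and write the .bpel file
        m.marshal(factory.createProcess(process), new File("myBPEL2.bpel"));
    }
}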
I had exactly the same problem with BPELUnit [1], so I started a module in BPELUnit that has the basics necessary for generating and reading BPEL models [2], although it is far from complete. Only BPEL 2.0 is supported (1.1 will follow later), and handlers are currently not supported either (but will be added). It is under active development because BPELUnit's code coverage component will be based on it, so it will become BPEL-feature complete over time. You are welcome to contribute if you need to close gaps earlier.
You can check it out from GitHub or grab the Maven artifact.
As of now there is no documentation, but you can have a look at the JUnit tests that read and write processes.
If this is not suitable for you, I'd like to share some experiences:
Do not use JAXB: you will need to read and write XML namespaces, which are not preserved with JAXB. That's why I chose XMLBeans. DOM would be the other alternative I can think of.
The inheritance in the XML Schema is not really developer-friendly. That's why there are dedicated interface structures and wrappers around the XMLBeans-generated classes.
Daniel
[1] http://www.bpelunit.net
[2] https://github.com/bpelunit/bpelunit/tree/master/net.bpelunit.model.bpel
This has been solved using the Unify framework API after adding the necessary classes to handle correlation. BPELUnit, mentioned by @Daniel, seems to be another alternative.
The Eclipse BPEL API is based on an EMF Model. So you could generate your own artifacts using JET or Xpand based on that. This way there is no requirement to run inside Eclipse.
Although you may not be able to use the BPEL model outside of Eclipse, have you considered moving parts of your application inside it?
The BPEL XML Schemas are listed in the appendix of the spec. So you could also base your work on that and integrate with existing BPEL applications where necessary.
In case anyone is looking to solve the above problem while still running inside the Eclipse environment:
The problem can be resolved as stated by Luca Pino here by adding:
AdapterRegistry.INSTANCE.registerAdapterFactory( BPELPackage.eINSTANCE, BasicBPELAdapterFactory.INSTANCE );
before the resource creation line i.e.
Resource bpelResource = rSet.createResource(uri);
Note: another solution to the same problem, which also explains how to resolve the dependencies needed to make this code work, can be found in my other answer here.
Using Eclipse JDT facilities, you can traverse the AST of Java code snippets as follows:
ASTParser ASTparser = ASTParser.newParser(AST.JLS3);
ASTparser.setSource("package x;class X{}".toCharArray());
ASTparser.createAST(null).accept(...);
But when trying to perform code completion & code selection, it seems that I have to do it in a plug-in application, since I have to write code like:
IFile file = ResourcesPlugin.getWorkspace().getRoot().getFile(new Path(somePath));
ICodeAssist i = JavaCore.createCompilationUnitFrom(file);
i.codeComplete/codeSelect(...)
Is there any way that I can get a stand-alone Java application which incorporates the JDT code complete/select facilities?
thx a lot!
shi kui
I have noticed that using org.eclipse.jdt.internal.codeassist.complete.CompletionParser
I can parse a code snippet as well.
CompletionParser parser = new CompletionParser(
        new ProblemReporter(
                DefaultErrorHandlingPolicies.proceedWithAllProblems(),
                new CompilerOptions(null),
                new DefaultProblemFactory(Locale.getDefault())),
        false);

org.eclipse.jdt.internal.compiler.batch.CompilationUnit sourceUnit =
        new org.eclipse.jdt.internal.compiler.batch.CompilationUnit(
                "class T{f(){new T().=1;} \nint j;}".toCharArray(), "testName", null);

CompilationResult compilationResult = new CompilationResult(sourceUnit, 0, 0, 0);
CompilationUnitDeclaration unit = parser.dietParse(sourceUnit, compilationResult, 25);
But I have two questions:
1. How do I retrieve the assist information?
2. How can I specify a class path or source path for the compiler to look up type/method/field information?
I don't think so, unless you provide your own implementation of ICodeAssist.
As the "Performing code assist on Java code" documentation mentions, elements that allow this manipulation should implement ICodeAssist.
There are two kinds of manipulation:
Code completion - compute the completion of a Java token.
Code selection - answer the Java element indicated by the selected text of a given offset and length.
In the Java model there are two elements that implement this interface: IClassFile and ICompilationUnit.
Code completion and code selection only answer results for a class file if it has attached source.
You could try opening a File outside of any workspace (like this FAQ), but the result wouldn't implement ICodeAssist.
So the IFile most of the time comes from a workspace location.
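To make that concrete, here is a rough sketch of what a completion call looks like when you do run inside a launched Eclipse (plug-in/OSGi) environment with an open workspace; the project path "/MyProject/src/X.java" and the offset 42 are placeholders:
import org.eclipse.core.resources.IFile;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.Path;
import org.eclipse.jdt.core.CompletionProposal;
import org.eclipse.jdt.core.CompletionRequestor;
import org.eclipse.jdt.core.ICompilationUnit;
import org.eclipse.jdt.core.JavaCore;
import org.eclipse.jdt.core.JavaModelException;

// Must run inside a launched Eclipse workspace; this will not work stand-alone.
void completeAt() throws JavaModelException {
    IFile file = ResourcesPlugin.getWorkspace().getRoot()
            .getFile(new Path("/MyProject/src/X.java"));
    ICompilationUnit unit = JavaCore.createCompilationUnitFrom(file);
    unit.codeComplete(42, new CompletionRequestor() {
        @Override
        public void accept(CompletionProposal proposal) {
            // each proposal carries the completion text plus replacement-range information
            System.out.println(new String(proposal.getCompletion()));
        }
    });
}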