Python to Java translation - java

I have a fairly short algorithm written in Python, but I need to translate it to Java. I didn't find any program that does this, so I would really appreciate help translating it.
I learned only a little Python, just enough to understand how the algorithm works.
The biggest problem is that in Python everything is an object, and some things are made really confusing, like
sum(self.flow[(source, vertex)] for vertex, capacity in self.get_edges(source))
and "self.adj" is like hashmap with multiple values which i have no idea how to put all together. Is any better collection for this code in java?
code is:
class FlowNetwork(object):
    def __init__(self):
        self.adj, self.flow = {}, {}

    def add_vertex(self, vertex):
        self.adj[vertex] = []

    def get_edges(self, v):
        return self.adj[v]

    def add_edge(self, u, v, w=0):
        self.adj[u].append((v, w))
        self.adj[v].append((u, 0))
        self.flow[(u, v)] = self.flow[(v, u)] = 0

    def find_path(self, source, sink, path):
        if source == sink:
            return path
        for vertex, capacity in self.get_edges(source):
            residual = capacity - self.flow[(source, vertex)]
            edge = (source, vertex, residual)
            if residual > 0 and not edge in path:
                result = self.find_path(vertex, sink, path + [edge])
                if result != None:
                    return result

    def max_flow(self, source, sink):
        path = self.find_path(source, sink, [])
        while path != None:
            flow = min(r for u, v, r in path)
            for u, v, _ in path:
                self.flow[(u, v)] += flow
                self.flow[(v, u)] -= flow
            path = self.find_path(source, sink, [])
        return sum(self.flow[(source, vertex)] for vertex, capacity in self.get_edges(source))

g = FlowNetwork()
map(g.add_vertex, ['s', 'o', 'p', 'q', 'r', 't'])
g.add_edge('s', 'o', 3)
g.add_edge('s', 'p', 3)
g.add_edge('o', 'p', 2)
g.add_edge('o', 'q', 3)
g.add_edge('p', 'r', 2)
g.add_edge('r', 't', 3)
g.add_edge('q', 'r', 4)
g.add_edge('q', 't', 2)
print g.max_flow('s', 't')
The result of this example is "5".
The algorithm finds the max flow in a graph (linked list or whatever) from source vertex "s" to destination "t".
Many thanks for any ideas.

Java doesn't have anything like Python's comprehension syntax. You'll have to replace it with code that loops over the list and accumulates the sum as it goes.
Also, self.flow looks like a dictionary indexed by pairs. The only way to match this, AFAIK, is to create a class with two fields that implements hashCode and equals, and use it as a key for a HashMap.
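For illustration, here is a minimal sketch of both ideas in Java; the names Pair, FlowNetwork, adj, flow and maxFlowValue are mine, not part of any existing translation:

import java.util.*;

// Hypothetical key class for the flow map: two vertices, with equals/hashCode so it can be used as a HashMap key
final class Pair {
    final String u, v;
    Pair(String u, String v) { this.u = u; this.v = v; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof Pair)) return false;
        Pair p = (Pair) o;
        return u.equals(p.u) && v.equals(p.v);
    }
    @Override public int hashCode() { return Objects.hash(u, v); }
}

class FlowNetwork {
    // self.adj -> adjacency list: vertex -> list of (neighbour, capacity) entries
    final Map<String, List<Map.Entry<String, Integer>>> adj = new HashMap<>();
    // self.flow -> current flow per directed edge, keyed by the Pair class above
    final Map<Pair, Integer> flow = new HashMap<>();

    // Replacement for sum(self.flow[(source, vertex)] for vertex, capacity in self.get_edges(source)):
    // loop over the outgoing edges and accumulate the sum by hand
    int maxFlowValue(String source) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : adj.getOrDefault(source, Collections.emptyList())) {
            sum += flow.getOrDefault(new Pair(source, e.getKey()), 0);
        }
        return sum;
    }
}

Instead of Map.Entry you could also define a small Edge class with target and capacity fields, which reads more naturally than the Python tuples.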

Related

JGraphT: How to represent set of Vertices and Edges as efficiently as possible

I am new to graph theory, in addition to the JGraphT (Java) library I'm using to implement a solution to a logistics problem I'm trying to solve. As such, I'm a little lost on the best way to represent a shipment's path from Point A to Point C given the incoming data.
Given a list of conveyance segments, or ordered pairs, how do I represent this programmatically with the fewest possible edges?
Delivery 1 goes from Atlanta to Mumbai.
Delivery 2 goes from Atlanta to London.
Delivery 3 goes from London to Mumbai.
In my visual graph representation, I want to remove the explicit Atlanta to Mumbai edge and simply infer that from the other edges and represent it simply as:
Atlanta -> London -> Mumbai
I feel like there's likely an existing path algorithm that can be applied to solve this rather simple use case, but I'm struggling to figure out which one given my relative newness to the subject matter. If my requirement were to remove excessive vertices rather than edges, then it seems like ShortestPathAlgorithm would be of use here.
I can possibly identify the ultimate source and sink of my given pairs (i.e. Atlanta is the source and Mumbai is the sink), but I don't want to go down the path of manually removing the edges if possible.
Current representation (image): all three edges, including the direct Atlanta -> Mumbai edge.
Desired representation (image): Atlanta -> London -> Mumbai only.
I have created a class to get me close to implementing the alternative depth-first solution @JorisKinable mentions below, but I still don't understand why "Atlanta, Mumbai, and London" are listed in that order. If no weight is applied to the edges, what causes Mumbai to come before London in this scenario?
public final class Demo {

    public static void main(String[] args) throws Exception {
        // Create the graph object
        Graph<String, DefaultEdge> graph = new DefaultDirectedGraph<>(DefaultEdge.class);

        String atlanta = "Atlanta";
        String london = "London";
        String mumbai = "Mumbai";

        graph.addVertex(atlanta);
        graph.addVertex(london);
        graph.addVertex(mumbai);

        graph.addEdge(atlanta, london);
        graph.addEdge(london, mumbai);
        graph.addEdge(atlanta, mumbai);

        ComponentNameProvider<String> vertexIdProvider = name -> name;
        ComponentNameProvider<String> vertexLabelProvider = name -> name;

        String start = graph.vertexSet().stream().filter(r -> r.equals("Atlanta")).findAny().get();

        System.out.println("-- traverseGraph output");
        traverseGraph(graph, start);

        GraphExporter<String, DefaultEdge> exporter = new DOTExporter<>(vertexIdProvider, vertexLabelProvider, null);
        Writer writer = new StringWriter();
        exporter.exportGraph(graph, writer);
        System.out.println(writer.toString());
    }

    private static void traverseGraph(Graph<String, DefaultEdge> graph, String start) {
        Iterator<String> iterator = new DepthFirstIterator<>(graph, start);
        while (iterator.hasNext()) {
            String string = iterator.next();
            System.out.println(string);
        }
    }
}
Currently the question is not stated precisely enough to give an exact answer. It seems, however, that you can solve your problem through the following steps:
Construct a directed graph with all arcs included. Add one additional node 's' to the graph which has outgoing arcs to all other nodes.
Perform a Breadth First Search (BFS) starting from node 's'.
Finally, remove node 's' as well as all edges which are not part of the BFS tree.
You could also use Depth First Search instead of BFS, and remove all back edges, forward edges and cross edges.
All of this is easily accomplished in JGraphT, but that's a separate question.
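As a rough sketch of steps 2 and 3 in JGraphT, starting the BFS directly from Atlanta instead of an artificial 's' vertex (the helper name keepBfsTree is mine; only core Graph calls are used):

import org.jgrapht.Graph;
import org.jgrapht.graph.DefaultEdge;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public final class BfsPrune {

    // Keeps only the edges of the BFS tree rooted at 'start'; every other edge is removed from the graph.
    static void keepBfsTree(Graph<String, DefaultEdge> graph, String start) {
        Set<String> visited = new HashSet<>();
        Set<DefaultEdge> treeEdges = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        visited.add(start);
        queue.add(start);

        while (!queue.isEmpty()) {
            String u = queue.poll();
            for (DefaultEdge e : graph.outgoingEdgesOf(u)) {
                String v = graph.getEdgeTarget(e);
                if (visited.add(v)) { // first time we reach v, so e is a tree edge
                    treeEdges.add(e);
                    queue.add(v);
                }
            }
        }

        // Drop everything that is not a tree edge
        Set<DefaultEdge> toRemove = new HashSet<>(graph.edgeSet());
        toRemove.removeAll(treeEdges);
        graph.removeAllEdges(toRemove);
    }
}

With the Demo graph above you would call keepBfsTree(graph, atlanta) before exporting; whether the BFS tree or the DFS variant gives the exact shape you want depends on which edge reaches a vertex first.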

Java 8 stream map reusing assigned variable

I'm aware the title may be a little confusing, but I couldn't come up with a better one, so let me explain what I mean...
I have this piece of code:
int spacing = Integer.MAX_VALUE;
for (Edge edge : edges) {
    if (!union.connected(edge.getStart(), edge.getEnd())) {
        spacing = Math.min(spacing, edge.getWeight());
    }
}
Is there a way to turn this code into Java 8 code?
Of course, the first step is filtering, which is simple, but then it becomes trickier because I would have to reuse the computed variable spacing in Stream::map, and I have no idea whether that is possible.
I'm not entirely sure about the syntax (I can't test it), but it should work with something like this:
int spacing = edges.stream()
        .filter(edge -> !union.connected(edge.getStart(), edge.getEnd()))
        .min(Comparator.comparing(Edge::getWeight))
        .map(Edge::getWeight) // get the weight of the min Edge
        .orElse(Integer.MAX_VALUE);
Not sure about the Comparator passed to min.
In case the code above doesn't work, you can mapToInt first (assuming getWeight() returns an int):
int spacing = edges.stream()
        .filter(edge -> !union.connected(edge.getStart(), edge.getEnd()))
        .mapToInt(Edge::getWeight)
        .min()
        .orElse(Integer.MAX_VALUE);

Using of getSpectrum() in Libgdx library

I know the first thing you are thinking is "look for it in the documentation"; however, the documentation is not clear about it.
I use the library to get the FFT and I followed this short guide:
http://www.digiphd.com/android-java-reconstruction-fast-fourier-transform-real-signal-libgdx-fft/
The problem arises when it uses:
fft.forward(array);
fft_cpx=fft.getSpectrum();
tmpi = fft.getImaginaryPart();
tmpr = fft.getRealPart();
Both "fft_cpx", "tmpi", "tmpr" are float vectors. While "tmpi" and "tmpr" are used for calculate the magnitude, "fft_cpx" is not used anymore.
I thought that getSpectrum() was the union of getReal and getImmaginary but the values are all different.
Maybe, the results from getSpectrum are complex values, but what is their representation?
I tried without fft_cpx=fft.getSpectrum(); and it seems to work correctly, but I'd like to know if it is actually necessary and what is the difference between getSpectrum(), getReal() and getImmaginary().
The documentation is at:
http://libgdx-android.com/docs/api/com/badlogic/gdx/audio/analysis/FFT.html
public float[] getSpectrum()
Returns: the spectrum of the last FourierTransform.forward() call.
public float[] getRealPart()
Returns: the real part of the last FourierTransform.forward() call.
public float[] getImaginaryPart()
Returns: the imaginary part of the last FourierTransform.forward() call.
Thanks!
getSpectrum() returns the absolute values (magnitudes) of the complex numbers.
It is calculated like this:
for (int i = 0; i < spectrum.length; i++) {
    spectrum[i] = (float) Math.sqrt(real[i] * real[i] + imag[i] * imag[i]);
}

How to access ilog decision variable from java?

I have a linear problem modelled in IBM ILOG CPLEX Optimization Studio that returns correct solutions, i.e. objective values.
For simulation purposes I use an ILOG model file and a data file, both of which I call from Java:
IloOplFactory.setDebugMode(false);
IloOplFactory oplF = new IloOplFactory();
IloOplErrorHandler errHandler = oplF.createOplErrorHandler(System.out);
IloOplModelSource modelSource = oplF.createOplModelSource("CDA_Welfare_Examination_sparse2.mod");
IloCplex cplex = oplF.createCplex();
IloOplSettings settings = oplF.createOplSettings(errHandler);
IloOplModelDefinition def = oplF.createOplModelDefinition(modelSource, settings);
IloOplModel opl = oplF.createOplModel(def, cplex);
String inDataFile = path;
IloOplDataSource dataSource = oplF.createOplDataSource(inDataFile);
opl.addDataSource(dataSource);
opl.generate();
opl.convertAllIntVars(); // converts integer bounds into an LP-compatible format

if (cplex.solve()) {
} else {
    System.out.println("Solution could not be achieved, probably insufficient memory or some other weird problem.");
}
Now, I would like to access the actual decision variable match[Matchable] from java.
In ILOG CPLEX Optimization Studio I use the following nomenclature:
tuple bidAsk {
    int b;
    int a;
}
{bidAsk} Matchable = ...;
dvar float match[Matchable];
In Java I access the objective value in the following way (which works fine):
double sol = new Double(opl.getSolutionGetter().getObjValue());
Now, how do I access the decision variable "match"? So far I have started with
IloOplElement dVarMatch = opl.getElement("match");
but I can't seem to get any further. Help is very much appreciated! Thanks a lot!
You're on the right track. You need to get the tuples which represent each valid bidAsk in Matchable, then use the tuple as an index into the decision variable object. Here's some sample code in Visual Basic (which is what I happen to be writing in right now; it should be easy to translate to Java):
' Get the tuple set named "Matchable"
Dim matchable As ITupleSet = opl.GetElement("Matchable").AsTupleSet

' Get the decision variables named "match"
Dim match As INumVarMap = opl.GetElement("match").AsNumVarMap

' Loop through each bidAsk in Matchable
For Each bidAsk As ITuple In matchable
    ' This is the current bidAsk's 'b' value
    Dim b As Integer = bidAsk.GetIntValue("b")
    ' This is the current bidAsk's 'a' value
    Dim a As Integer = bidAsk.GetIntValue("a")

    ' This is another way to get bidAsk.b and bidAsk.a
    b = bidAsk.GetIntValue(0)
    a = bidAsk.GetIntValue(1)

    ' This is the decision variable object for match[<b,a>]
    Dim this_variable As INumVar = match.Get(bidAsk)

    ' This is the value of that decision variable in the current solution
    Dim val As Double = opl.Cplex.GetValue(this_variable)
Next
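A rough Java analogue of the VB sample, under the assumption that the Java OPL API mirrors these accessors (asTupleSet(), asNumVarMap(), getIntValue(...), get(tuple)); please double-check the exact names and types against the OPL Java reference before relying on this:

// Assumed Java counterparts of the VB calls above; verify against the OPL Java API docs.
IloTupleSet matchable = opl.getElement("Matchable").asTupleSet();
IloNumVarMap match = opl.getElement("match").asNumVarMap();

for (java.util.Iterator it = matchable.iterator(); it.hasNext(); ) {
    IloTuple bidAsk = (IloTuple) it.next();
    int b = bidAsk.getIntValue("b");        // bidAsk.b
    int a = bidAsk.getIntValue("a");        // bidAsk.a
    IloNumVar variable = match.get(bidAsk); // the decision variable match[<b,a>]
    double val = cplex.getValue(variable);  // its value in the current solution
}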
You can get the variable values through the IloCplex object like this:
cplex.getValue([variable reference]);
I never imported a model like that. When you create the model in Java, references to the decision variables are easily at hand, but there should be a way to obtain the variables. You could check the documentation:
cplex docu

Tips optimizing Java code

So, I've written a spellchecker in Java and things work as they should. The only problem is that if I use a word where the maximum allowed edit distance is too large (say, 9), then my code runs out of memory. I've profiled my code and dumped the heap into a file, but I don't know how to use it to optimize my code.
Can anyone offer any help? I'm more than willing to put up the file/use any other approach that people might have.
-Edit-
Many people asked for more details in the comments. I figured that other people would find them useful, and they might get buried in the comments. Here they are:
I'm using a Trie to store the words themselves.
In order to improve time efficiency, I don't compute the Levenshtein distance upfront, but I calculate it as I go. What I mean by this is that I keep only two rows of the LD table in memory. Since a trie is a prefix tree, every time I recurse down a node, the previous letters of the word (and therefore the distances for those prefixes) remain the same. Therefore, I only calculate the distance with that new letter included, with the previous row remaining unchanged.
The suggestions that I generate are stored in a HashMap. The rows of the LD table are stored in ArrayLists.
Here's the code of the function in the trie that leads to the problem. Building the trie is pretty straightforward, and I haven't included that code here.
/*
 * @param letter: the letter that is currently being looked at in the trie
 * @param word: the word that we are trying to find matches for
 * @param previousRow: the previous row of the Levenshtein Distance table
 * @param suggestions: all the suggestions for the given word
 * @param maxd: max distance a word can be from the query and still be returned as a suggestion
 * @param suggestion: the current suggestion being constructed
 */
public void get(char letter, ArrayList<Character> word, ArrayList<Integer> previousRow,
        HashSet<String> suggestions, int maxd, String suggestion) {
    // the new row of the Levenshtein table that is to be computed
    ArrayList<Integer> currentRow = new ArrayList<Integer>(word.size() + 1);
    currentRow.add(previousRow.get(0) + 1);

    int insert = 0;
    int delete = 0;
    int swap = 0;
    int d = 0;

    for (int i = 1; i < word.size() + 1; i++) {
        delete = currentRow.get(i - 1) + 1;
        insert = previousRow.get(i) + 1;
        if (word.get(i - 1) == letter)
            swap = previousRow.get(i - 1);
        else
            swap = previousRow.get(i - 1) + 1;
        d = Math.min(delete, Math.min(insert, swap));
        currentRow.add(d);
    }

    // if this node represents a word and the distance so far is <= maxd, then add this word as a suggestion
    if (isWord == true && d <= maxd) {
        suggestions.add(suggestion);
    }

    // if any of the entries in the current row are <= maxd, it means we can still find possible solutions.
    // recursively search all the branches of the trie
    for (int i = 0; i < currentRow.size(); i++) {
        if (currentRow.get(i) <= maxd) {
            for (int j = 0; j < 26; j++) {
                if (children[j] != null) {
                    children[j].get((char) (j + 97), word, currentRow, suggestions, maxd,
                            suggestion + String.valueOf((char) (j + 97)));
                }
            }
            break;
        }
    }
}
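For context, this is roughly how such a get() is typically kicked off from the trie root; the suggest() driver below is my assumption of the missing glue code (root and children[] follow the names used in the snippet above):

// Hypothetical driver, assuming a Trie class whose root node exposes the same children[] array
public HashSet<String> suggest(String query, int maxd) {
    ArrayList<Character> word = new ArrayList<Character>();
    for (char c : query.toCharArray()) {
        word.add(c);
    }

    // Row 0 of the Levenshtein table: the distance of each prefix of the query from the empty string
    ArrayList<Integer> firstRow = new ArrayList<Integer>();
    for (int i = 0; i <= word.size(); i++) {
        firstRow.add(i);
    }

    HashSet<String> suggestions = new HashSet<String>();
    for (int j = 0; j < 26; j++) {
        if (root.children[j] != null) {
            char letter = (char) (j + 'a');
            root.children[j].get(letter, word, firstRow, suggestions, maxd, String.valueOf(letter));
        }
    }
    return suggestions;
}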
Here's some code I quickly crafted showing one way to generate the candidates and to then "rank" them.
The trick is: you never "test" a non-valid candidate.
To me, your "I run out of memory when I've got an edit distance of 9" screams "combinatorial explosion".
Of course, to dodge a combinatorial explosion you don't do things like trying to generate yourself all the words that are at a distance of 9 from your misspelled word. You start from the misspelled word and generate (quite a lot of) possible candidates, but you refrain from creating too many candidates, for then you'd run into trouble.
(Also note that it doesn't make much sense to compute up to a Levenshtein edit distance of 9, because technically any word of fewer than 10 letters can be transformed into any other word of fewer than 10 letters in at most 9 transformations.)
Here's why you simply cannot test all words up to a distance of 9 without either getting an OutOfMemory error or a program that simply never terminates:
generating all the LED-1 candidates for the word "ptmizing" by only adding one letter (from a to z) already generates 9*26 variations (i.e. 234 variations) [there are 9 positions where you can insert one of 26 letters]
generating all the LED-2 candidates by only adding one letter to what we now have already generates 10*26*234 variations (60 840)
generating all the LED-3 candidates gives: 17 400 240 variations
And that is only considering the case where we add one, two or three letters (we're not counting deletions, swaps, etc.). And that is on a misspelled word that is only eight characters long. On "real" words, it explodes even faster.
Sure, you could get "smart" and generate these in a way that avoids too many dupes etc., but the point stands: it's a combinatorial explosion, and it explodes fast.
Anyway... Here's an example. I'm simply passing the dictionary of valid words (containing only four words in this case) to the corresponding method to keep this short.
You'll obviously want to replace the call to the LED with your own LED implementation.
The double-metaphone is just an example: in a real spellchecker, words that "sound alike" despite a larger LED should be considered "more correct" and hence often suggested first. For example, "optimizing" and "aupteemising" are quite far apart from an LED point of view, but using the double-metaphone you should get "optimizing" as one of the first suggestions.
(Disclaimer: the following was cranked out in a few minutes; it doesn't take uppercase, non-English words, etc. into account: it's not a real spell-checker, just an example.)
@Test
public void spellCheck() {
    final String src = "misspeled";
    final Set<String> validWords = new HashSet<String>();
    validWords.add("boing");
    validWords.add("Yahoo!");
    validWords.add("misspelled");
    validWords.add("stackoverflow");
    final List<String> candidates = findNonSortedCandidates(src, validWords);
    final SortedMap<Integer, String> res = computeLevenhsteinEditDistanceForEveryCandidate(candidates, src);
    for (final Map.Entry<Integer, String> entry : res.entrySet()) {
        System.out.println(entry.getValue() + " # LED: " + entry.getKey());
    }
}

private SortedMap<Integer, String> computeLevenhsteinEditDistanceForEveryCandidate(
        final List<String> candidates,
        final String mispelledWord
) {
    final SortedMap<Integer, String> res = new TreeMap<Integer, String>();
    for (final String candidate : candidates) {
        res.put(dynamicProgrammingLED(candidate, mispelledWord), candidate);
    }
    return res;
}

private int dynamicProgrammingLED(final String candidate, final String misspelledWord) {
    return Levenhstein.getLevenshteinDistance(candidate, misspelledWord);
}
Here you generate all possible candidates using several methods. I've only implemented one such method (and quickly, so it may be bogus, but that's not the point ;) ).
private List<String> findNonSortedCandidates(final String src, final Set<String> validWords) {
    final List<String> res = new ArrayList<String>();
    res.addAll(allCombinationAddingOneLetter(src, validWords));
    // res.addAll( allCombinationRemovingOneLetter(src) );
    // res.addAll( allCombinationInvertingLetters(src) );
    return res;
}

private List<String> allCombinationAddingOneLetter(final String src, final Set<String> validWords) {
    final List<String> res = new ArrayList<String>();
    for (char c = 'a'; c <= 'z'; c++) { // note: <= so that 'z' is included
        for (int i = 0; i < src.length(); i++) {
            final String candidate = src.substring(0, i) + c + src.substring(i, src.length());
            if (validWords.contains(candidate)) {
                res.add(candidate); // only adding candidates we know are valid words
            }
        }
        if (validWords.contains(src + c)) {
            res.add(src + c);
        }
    }
    return res;
}
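For completeness, here is one possible shape for the commented-out allCombinationRemovingOneLetter helper, following the same pattern (I pass validWords as well, mirroring the adding-one-letter variant; this is my sketch, not part of the original answer):

private List<String> allCombinationRemovingOneLetter(final String src, final Set<String> validWords) {
    final List<String> res = new ArrayList<String>();
    for (int i = 0; i < src.length(); i++) {
        // drop the letter at position i
        final String candidate = src.substring(0, i) + src.substring(i + 1);
        if (validWords.contains(candidate)) {
            res.add(candidate); // again, only keep candidates that are valid dictionary words
        }
    }
    return res;
}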
One thing you could try is to increase Java's heap size in order to overcome the "out of memory" error.
The following article will help you understand how to increase the heap size in Java:
http://viralpatel.net/blogs/2009/01/jvm-java-increase-heap-size-setting-heap-size-jvm-heap.html
But I think the better approach to address your problem is to find a better algorithm than the current one.
Well, without more information on the topic there is not much the community can do for you... You can start with the following:
Look at what your profiler says (after it has run a little while): does anything pile up? Are there a lot of objects? This should normally give you a hint about what is wrong with your code.
Publish your saved dump somewhere and link it in your question, so someone else can take a look at it.
Tell us which profiler you are using; then somebody can give you hints on where to look for valuable information.
After you have narrowed down your problem to a specific part of your code, and you cannot figure out why there are so many objects of $FOO in memory, post a snippet of the relevant part.
