I want to plot a graph for my genetic algorithm. How can I do this? The chart has two axes: X is the number of iterations, and Y is the minimum fitness value of the best chromosome at that iteration. I do replacement after mutation and then select the best chromosome. You can see my cycle below. How can I use a graphing library? I don't know anything about how to draw graphs.
for(int i=0; i<parameters.getMaxIteration(); i++){   // outer bound assumed; the original line was truncated
    for(int j=0; j<parameters.getMaxSelection(); j++){
        select.binaryTournamentSelection(pop.chromosome);
        for(int k=0; k<parameters.getMaxCrossover(); k++){
            crossing.onePointCrossover();
            for(int m=0; m<parameters.getMaxMutation(); m++){
                mut.bitFlip();
                mut.steadyStateSorting(pop);
            }
        }
    }
}
This may not be exactly what you are looking for, but this code might give you some ideas about drawing an XY graph: https://github.com/najikadri/ONSA
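If you are willing to add a charting library, here is a minimal sketch using JFreeChart (my own suggestion, not something from your code; any plotting library will do). The class and method names below are illustrative: you would call recordBestFitness once per iteration of your outer loop and show() after the GA finishes.

import javax.swing.JFrame;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartPanel;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.data.xy.XYSeries;
import org.jfree.data.xy.XYSeriesCollection;

public class FitnessPlot {
    private final XYSeries series = new XYSeries("Best fitness");

    // Call this once per iteration of the outer GA loop.
    public void recordBestFitness(int iteration, double bestFitness) {
        series.add(iteration, bestFitness);
    }

    // Build the chart and show it in a window after the GA has finished.
    public void show() {
        JFreeChart chart = ChartFactory.createXYLineChart(
                "GA convergence", "Iteration", "Best fitness",
                new XYSeriesCollection(series),
                PlotOrientation.VERTICAL, true, false, false);
        JFrame frame = new JFrame("GA convergence");
        frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
        frame.add(new ChartPanel(chart));
        frame.pack();
        frame.setVisible(true);
    }
}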
I am currently working on a project for my 2nd year. I am supposed to code a tuner in Java, and I have chosen to do a guitar tuner.
After looking around on the internet, I found Java code for an FFT. I changed it a bit, understood it, and tested it. I know it works fine (I graphed its output and looked at the different peaks using simple sine functions).
I am now trying to find the fundamental frequency. From what I understand, this frequency is given by the first peak.
I would thus like to create a method that finds for instance the first 5 peaks of my FFT and gives them to me with their indexes.
I first wrote a simple method that compares each point of my spectrum with its neighbour; where the sign of the difference changes is where I know there is a peak. This method works great with ideal signals (without any noise), but it becomes completely useless once I add noise.
I am really bad at Java (I actually started with this project, and the simple method I described above is basically my masterpiece... just so you get an idea of my level).
Can anyone help me? I would really appreciate it! :)
Thanks in advance!
Have a great day!
fireangel
I'd say your best bet is going to be to read in all the values as an array, then run over them and 'smooth' them using a rolling average of some kind.
Afterwards, you'll have a much smoother curve. Find your peaks using this curve, then go back to your original data and use the peak indexes to find the actual peak there.
pseudocode:
// Your raw data
int[] data = getData();
// This is an array to hold your 'smoothed' data
int[] newData = new int[data.length];
// Iterate over your data, smooth it with a 5-point rolling average, and
// write the result into the smoothed array (the two samples at each edge are skipped).
for (int i = 2; i < data.length - 2; i++) {
    newData[i] = (data[i-2] + data[i-1] + data[i] + data[i+1] + data[i+2]) / 5;
}
// Use your existing peak-finding function on the smoothed data to get
// an array of the indexes at which peaks occur.
int[] peakIndexes = yourPeakFindingFunction(newData);
// Create an array to hold your final values.
int[] peakValues = new int[peakIndexes.length];
// Iterate over the peak indexes and read the original data's value at each location.
for (int i = 0; i < peakIndexes.length; i++) {
    peakValues[i] = data[peakIndexes[i]];
}
Very basic and very brute-force, but it should get you on the right track for an assignment.
You'll need to play with the algorithms for smoothing the data so it's representative and for finding the actual peak at the location indicated by the smoothed data (as it won't be exact).
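The snippet above leaves yourPeakFindingFunction undefined. As a minimal sketch of the sign-change idea from the question (a hypothetical helper, not code from either post), a simple local-maximum detector could look like this:

import java.util.ArrayList;
import java.util.List;

// Returns the indexes where a sample is strictly greater than both of its
// neighbours, i.e. where the slope changes sign from positive to negative.
static int[] yourPeakFindingFunction(int[] samples) {
    List<Integer> peaks = new ArrayList<>();
    for (int i = 1; i < samples.length - 1; i++) {
        if (samples[i] > samples[i - 1] && samples[i] > samples[i + 1]) {
            peaks.add(i);
        }
    }
    int[] result = new int[peaks.size()];
    for (int i = 0; i < result.length; i++) {
        result[i] = peaks.get(i);
    }
    return result;
}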
I'm working on an AdaBoost implementation in Java.
It should work with "double" coordinates in 2D, 3D, or even 10D space.
Everything I found for Java handles only binary data (0/1), not a multi-dimensional space.
I'm looking for suggestions on how to represent the multidimensional space in Java and how to initialize the classifiers for boosting.
The data lies in [-15,+15], and the target values are 1 or 2.
To use a boosted decision tree on spatial data, the typical approach is to try to find a "partition point" on some axis that minimizes the residual information in the two subtrees. To do this, you find some value along some axis (say, the x axis) and then split the data points into two groups - one group of points whose x coordinate is below that split point, and one group of points whose x coordinate is above that split point. That way, you convert the real-valued spatial data into 0/1 data - the 0 values are the ones below the split point, and the 1 values are the ones above the split point. The algorithm is thus identical to AdaBoost, except that when choosing the axis to split on, you also have to consider potential splitting points.
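To make that concrete, here is a minimal sketch of such a one-axis "decision stump" weak learner: it thresholds a single coordinate, which is how the real-valued spatial data is reduced to the 0/1 decisions AdaBoost expects. The field and method names are illustrative, not from any particular library.

// A weak learner that splits on one axis at one threshold.
class DecisionStump {
    int axis;          // which coordinate to split on
    double threshold;  // the split point along that axis
    int belowLabel;    // label predicted when x[axis] < threshold (1 or 2)
    int aboveLabel;    // label predicted otherwise

    int predict(double[] x) {
        return x[axis] < threshold ? belowLabel : aboveLabel;
    }
}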
How about using JBoost? I think it has what you're looking for.
Why don't you use a double[] array for each object? That is the common way of representing feature vectors in Java.
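As a minimal sketch of that representation (all names illustrative): each sample is a double[] feature vector, the training set is a double[][], and the target values live in a parallel int[] array.

// Training data: one double[] feature vector per sample.
// In 2-D each row has 2 entries; in 10-D it would have 10.
double[][] samples = {
    { -3.2,  7.1 },
    { 12.0, -5.4 },
    {  0.5,  3.3 }
};
// Target values (1 or 2), parallel to the rows of 'samples'.
int[] labels = { 1, 2, 1 };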
I am attempting to recreate a board game in Java, which involves storing a set of valid places where pieces can be placed (for the AI). I thought that instead of storing a list of Points, it might run faster if I had an array/list/dictionary of the X coordinates, each holding an array/list of the Y coordinates, so that once you found the X coordinate you would only have to check its Ys, not all the remaining points.
The trouble is that I must change the valid points often. I came up with some possible solutions but have difficulty picking and implementing them:
HashMap<Integer, ArrayList<Integer>> with X as the integer key and its Ys in the ArrayList.
Problem: I would have to create a new ArrayList every time I add an X.
I am also unsure about the runtime performance of a HashMap.
int[X][Y] array sized to the board, with each point set to its relative location (point 2,3 sets [2][3]) and unset points holding an invalid integer.
Problem: I would have to iterate through the array and check every point.
List of Points: this would simply be a Linked/ArrayList of Points.
Problem: lists are slower than arrays.
How would a linked list of Points compare to checking the whole array as above?
Perhaps I should use a 2D linked list? What would be the fastest approach at runtime?
You're worrying about the wrong things. Accessing collection/map/array items is extremely fast. The graphical part will be way more performance-sensitive. Just use whatever data structure is most natural. It's unlikely that you're going to be storing enough items to really matter anyway. Build it first, then figure out where your performance problems really are.
If you use an ArrayList of Points you get nearly the same performance as with an array (in Java),
and I think this is the fastest solution: as you already mentioned, you would have to iterate through the complete int array, and a HashMap with its underlying ArrayLists has to be changed whenever coordinates are changed or added.
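As a minimal sketch of that idea (my own illustrative code, using java.awt.Point for the coordinates):

import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

public class ValidPlacesDemo {
    public static void main(String[] args) {
        // Keep the valid placements in a plain list of Points; java.awt.Point
        // implements equals(), so contains() and remove() work as expected.
        List<Point> validPlaces = new ArrayList<>();
        validPlaces.add(new Point(2, 3));                    // mark (2,3) as a valid placement
        boolean ok = validPlaces.contains(new Point(2, 3));  // true
        validPlaces.remove(new Point(2, 3));                 // unmark it again
        System.out.println(ok);
    }
}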
I am making a program where you can click on a map to see a "close-up view" of the area around it, such as on Google Maps.
When a user clicks on the map, it gets the X and Y coordinate of where they clicked.
Let's assume that I have an array of booleans of where these close-up view pictures are:
public static boolean[][] view_set=new boolean[Map.width][Map.height];
//The array of where pictures are. The map has a width of 3313, and a height of 3329.
The program searches through a folder, where images are named to where the X and Y coordinate of where it was taken on the map. The folder contains the following images (and more, but I'll only list five):
2377,1881.jpg, 2384,1980.jpg, 2389,1923.jpg, 2425,1860.jpg, 2475,1900.jpg
This means that:
view_set[2377][1881]=true;
view_set[2384][1980]=true;
view_set[2389][1923]=true;
view_set[2425][1860]=true;
view_set[2475][1900]=true;
If a user clicks at the X and Y of, for example, 2377,1882, then I need the program to figure out which image is closest (the answer in this case would be 2377,1881).
Any help would be appreciated,
Thanks.
Your boolean[][] is not a good data structure for this problem, at least if the points are not really dense (i.e. unless there is normally a point with a close-up view within the surrounding 3×3 or maybe 5×5 square).
You want a 2-D map with nearest-neighbour search. A useful data structure for this is the QuadTree, a tree of degree 4 used to represent spatial data. (I'm describing the "region quadtree with point data" here.)
Basically, it divides a rectangle in four about equal size rectangles, and subdivides each of the rectangles further if there is more than one point in it.
So a node in your tree is one of these:
an empty leaf node (corresponding to a rectangle without points in it)
a leaf node containing exactly one point (corresponding to a rectangle with one point in it)
an inner node with four child nodes (corresponding to a rectangle with more than one point in it)
(In implementations, we can replace empty leaf nodes with a null-pointer in its parent.)
To find a point (or "the node a point would be in"), we start at the root node, look if our point is north/south/east/west of the dividing point, and go to the corresponding child node. We continue this until we arrive at some leaf node.
For adding a new point, we search as above: if we wind up at an empty node, we can put the new point there. If we end up at a node that already contains a point, we create four child nodes (by splitting the rectangle) and add both points to the appropriate child node. (This might be the same child; then we repeat recursively.)
For the nearest-neighbour search, we search as above: if we wind up at an empty node, we back up one level and look at the other child nodes of its parent (comparing the distances). If we reach a leaf node with one point in it, we measure the distance from our search point to this point. If it is smaller than the distance to the edges of the node, we are done. Otherwise we also have to look at the points in the neighbouring nodes and take the minimum of the results. (We will have to look at at most four points, I think.)
For removal, after finding a point, we make its node empty. If the parent node now contains only one point, we replace it by a one-point leaf node.
Search, insertion and removal are O(depth), where the maximum depth is bounded by log((map width + height) / minimal distance between two points in your structure), and the average depth depends on the distribution of the points (e.g. on the average distance to the nearest point).
Space needed is depending on number of points and average depth of the tree.
There are some variants of this data structure (for example splitting a node only when there are more than X points in it, or splitting not necessarily in the middle), to optimize the space usage and avoid too large depths of the tree.
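To make the structure concrete, here is a minimal sketch of such a point quadtree in Java, showing only the node layout and insertion (nearest-neighbour search and removal would be built on top as described above). All names are my own, it assumes distinct points, and bounds checking is omitted.

class QuadTree {
    static class Node {
        final double cx, cy, half;   // centre of this node's square and half its side length
        double px, py;               // the single point stored in a one-point leaf
        boolean hasPoint;            // true only for a one-point leaf
        Node nw, ne, sw, se;         // children; all null while this node is a leaf

        Node(double cx, double cy, double half) {
            this.cx = cx; this.cy = cy; this.half = half;
        }

        boolean isLeaf() { return nw == null; }
    }

    private final Node root;

    QuadTree(double centreX, double centreY, double halfSize) {
        root = new Node(centreX, centreY, halfSize);
    }

    void insert(double x, double y) { insert(root, x, y); }

    private void insert(Node n, double x, double y) {
        if (n.isLeaf()) {
            if (!n.hasPoint) {               // empty leaf: the point lives here
                n.px = x; n.py = y; n.hasPoint = true;
                return;
            }
            split(n);                        // occupied leaf: split and push the old point down
            insert(child(n, n.px, n.py), n.px, n.py);
            n.hasPoint = false;
        }
        insert(child(n, x, y), x, y);        // recurse into the matching quadrant
    }

    // Divide the node's square into four equal quadrants.
    private void split(Node n) {
        double h = n.half / 2;
        n.nw = new Node(n.cx - h, n.cy + h, h);
        n.ne = new Node(n.cx + h, n.cy + h, h);
        n.sw = new Node(n.cx - h, n.cy - h, h);
        n.se = new Node(n.cx + h, n.cy - h, h);
    }

    // Pick the child quadrant containing the point (x, y).
    private Node child(Node n, double x, double y) {
        if (x < n.cx) return y < n.cy ? n.sw : n.nw;
        return y < n.cy ? n.se : n.ne;
    }
}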
Given the location the user clicked, you could search for the nearest image using a Dijkstra search.
Basically you start searching in increasingly larger rectangles around the clicked location for images. Of course you only have to search the boundaries of these rectangles, since you've already searched the body. This algorithm should stop as soon as an image is found.
Pseudocode (written as Java, using your view_set array and java.awt.Point):
int size = 0;
Point result = null;
while (result == null) {                      // assumes at least one image exists
    result = searchRectangleBoundary(size++, pointClicked);
}

// Scans the boundary of the square centred on 'centre' whose edges lie
// 'size' cells away, and returns the first marked point found (or null).
static Point searchRectangleBoundary(int size, Point centre) {
    Point p = new Point(centre.x - size, centre.y - size);  // bottom-left corner of the square
    for (int i = 0; i <= 2 * size; i++) {
        if (view_set[p.x + i][p.y])            return new Point(p.x + i, p.y);            // bottom edge
        if (view_set[p.x][p.y + i])            return new Point(p.x, p.y + i);            // left edge
        if (view_set[p.x + i][p.y + 2 * size]) return new Point(p.x + i, p.y + 2 * size); // top edge
        if (view_set[p.x + 2 * size][p.y + i]) return new Point(p.x + 2 * size, p.y + i); // right edge
    }
    return null;
}
Do note that I've left out range checking for brevity.
There is a slight problem, though depending on the application it might not matter: because the rings are squares, the search uses the Chebyshev (maximum) metric rather than Euclidean distances. So it doesn't necessarily find the closest image, but it finds an image at most the square root of 2 times as far away.
Based on
your comment that states you have 350-500 points of interest,
your question that states you have a map width of 3313, and a height of 3329
my calculator which tells me that that represents ~11 million boolean values
...you're going about this the wrong way. @JBSnorro's answer is quite an elegant way of finding the needle (350 points) in the haystack (11 million points), but really, why create the haystack in the first place?
As per my comment on your question, why not just use a Pair<Integer,Integer> class to represent coordinates, store them in a set, and scan them? It's simpler, quicker, less memory-hungry, and way more scalable for larger maps (assuming the points of interest are sparse, which seems a sensible assumption given that they're points of interest).
Trust me, computing the Euclidean distance ~425 times beats wandering around an 11-million-value boolean[][] looking for the 1 value in 25,950 that's of interest (especially in a worst-case analysis).
If you're really not thrilled with the idea of scanning ~425 values each time, then (i) you're more OCD than me (:P); (ii) you should check out nearest-neighbour search algorithms.
I do not know if this is what you are asking for. If the user's point is P1 {x1, y1} and you want to calculate its distance to P2 {x2, y2}, the distance is given by Pythagoras' theorem:
distance^2 = (x2-x1)^2 + (y2-y1)^2
If you only want to know which point is closest, you can avoid calculating the square root (the smaller the distance, the smaller its square, so the comparison gives the same result).
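Putting the scanning idea and the squared-distance trick together, a minimal sketch (my own illustrative code, using java.awt.Point for the coordinates) could look like this:

import java.awt.Point;
import java.util.List;

// Returns the point of interest closest (by Euclidean distance) to the click,
// or null if the list is empty.
static Point nearest(List<Point> pointsOfInterest, int clickX, int clickY) {
    Point best = null;
    long bestDistSq = Long.MAX_VALUE;
    for (Point p : pointsOfInterest) {
        long dx = p.x - clickX;
        long dy = p.y - clickY;
        long distSq = dx * dx + dy * dy;   // squared distance is enough for comparison
        if (distSq < bestDistSq) {
            bestDistSq = distSq;
            best = p;
        }
    }
    return best;
}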