I have a LinkedList<Point> points with random values:
10,20
15,30
13,43
...
I want to perform this kind of loop:
for (int i = points.get(0).x; i < points.get(0).y; i++) {
    for (int j = points.get(1).x; j < points.get(1).y; j++) {
        for (int k = points.get(2).x; k < points.get(2).y; k++) {
            ...
        }
    }
}
How can I do that if I don't know the size of the list?
There's probably a better way to solve equations like that with less CPU and memory consumption, but a brute-force approach like yours could be implemented via recursion or via some helper structure that keeps track of the state.
With recursion you could do it like this:
void permutate(List<Point> points, int pointIndex, int[] values) {
    Point p = points.get(pointIndex);
    for (int x = p.x; x < p.y; x++) {
        values[pointIndex] = x;
        // this assumes pointIndex to be between 0 and points.size() - 1
        if (pointIndex < points.size() - 1) {
            permutate(points, pointIndex + 1, values);
        } else { // pointIndex is equal to points.size() - 1 here
            // you have collected all intermediate values, so solve the equation
            // this is simplified, since you'd probably want to collect all values where the result is correct,
            // as well as pass the equation somehow
            int result = solveEquation(values);
        }
    }
}
// initial call
List<Point> points = ...;
int[] values = new int[points.size()];
permutate(points, 0, values);
This would first iterate over the points list using recursive calls and advancing the point index by one until you reach the end of the list. Each recursive call would iterate over the point values and add the current one to an array at the respective position. This array is then used to calculate the equation result.
Note that this might result in a stack overflow for huge equations (the meaning of "huge" depends on the environment, but is normally at several thousand points). Performance might be really low if you check all permutations in any non-trivial case.
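For the helper-structure variant mentioned at the start, a sketch could advance the index array like an odometer, one digit per point. This assumes the same hypothetical solveEquation method as above, and that each point satisfies x < y:
void permutateIteratively(List<Point> points, int[] values) {
    // start every position at its lower bound
    for (int i = 0; i < points.size(); i++) {
        values[i] = points.get(i).x;
    }
    while (true) {
        int result = solveEquation(values); // same hypothetical method as above
        // advance like an odometer: bump the last position, carrying
        // into earlier positions whenever an upper bound is reached
        int pos = points.size() - 1;
        while (pos >= 0 && values[pos] + 1 >= points.get(pos).y) {
            values[pos] = points.get(pos).x;
            pos--;
        }
        if (pos < 0) {
            break; // every combination has been visited
        }
        values[pos]++;
    }
}
This avoids the stack-depth limit of the recursive version, at the cost of slightly more bookkeeping.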
Given an array of integers, return indices of the two numbers such that they add up to a specific target.
You may assume that each input would have exactly one solution, and you may not use the same element twice.
Consider input [3,2,4] with target 6. I added (3,0) and (2,1) to the map, and when I come to 4, I calculate value as 6 - 4 = 2, but when I check whether 2 is a key present in the map, it does not go into the if block.
I should get output as [1,2], which are the indices of 2 and 4 respectively.
public int[] twoSum(int[] nums, int target) {
    int len = nums.length;
    int[] arr = new int[2];
    Map<Integer, Integer> map = new HashMap<Integer, Integer>();
    for (int i = 0; i < len; i++) {
        int value = nums[i] - target;
        if (map.containsKey(value)) {
            System.out.println("Hello");
            arr[0] = value;
            arr[1] = map.get(value);
            return arr;
        } else {
            map.put(nums[i], i);
        }
    }
    return null;
}
I don't see where the problem is; please help me out.
Okay, let's take a step back for a second.
You have a list of values, [3,2,4], and you need to know which two will add up to 6. Well, by looking at it we know that the answer should be [1,2] (values 2 and 4).
The question now is, how do you do that programmatically?
The solution is (to be honest) very simple: you need two loops, which allows you to compare each element in the list with every other element in the list:
for (int outer = 0; outer < values.length; outer++) {
    int outerValue = values[outer];
    for (int inner = 0; inner < values.length; inner++) {
        if (inner != outer) { // Don't want to compare the same index
            int innerValue = values[inner];
            if (innerValue + outerValue == targetValue) {
                // The outer and inner indices now form the answer
            }
        }
    }
}
While not highly efficient (yes, it would be easy to optimise the inner loop, as sketched below, but given the OP's current attempt, I kept the example plain), this is a VERY simple example of how you might solve what is actually a very common problem.
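For reference, the optimisation alluded to above is just to start the inner loop at outer + 1, which skips the self-comparison and avoids checking each pair twice (same assumed values and targetValue as above):
for (int outer = 0; outer < values.length - 1; outer++) {
    int outerValue = values[outer];
    // starting at outer + 1 means a pair is never checked twice
    // and an index is never compared with itself
    for (int inner = outer + 1; inner < values.length; inner++) {
        if (outerValue + values[inner] == targetValue) {
            // The outer and inner indices now form the answer
        }
    }
}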
int value = nums[i] - target;
Your subtraction is backwards, as nums[i] is probably smaller than target. So value is getting set to a negative number. The following would be better:
int value = target - nums[i];
(Fixing this won't fix your whole program, but it explains why you're getting the behavior that you are.)
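For what it's worth, once the subtraction is flipped and the result stores the two indices (rather than a value and an index, which is what the posted code does), a corrected version might look like this sketch:
public int[] twoSum(int[] nums, int target) {
    Map<Integer, Integer> map = new HashMap<>();
    for (int i = 0; i < nums.length; i++) {
        int complement = target - nums[i]; // the value we still need
        if (map.containsKey(complement)) {
            // return the stored index of the complement and the current index
            return new int[] { map.get(complement), i };
        }
        map.put(nums[i], i);
    }
    return null; // no pair found
}
For [3,2,4] with target 6 this returns [1,2], as the question expects.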
This code for twoSum might help you. For an input integer array, it will return the indices of two values whose sum equals target.
public static int[] twoSum(int[] nums, int target) {
    int[] indices = new int[2];
    outerloop:
    for (int i = 0; i < nums.length; i++) {
        // start at i + 1 so the same element is never used twice
        for (int j = i + 1; j < nums.length; j++) {
            if ((nums[i] + nums[j]) == target) {
                indices[0] = i;
                indices[1] = j;
                break outerloop;
            }
        }
    }
    return indices;
}
You can call the function using
int[] num = {1,2,3};
int[] out = twoSum(num,4);
System.out.println(out[0]);
System.out.println(out[1]);
Output:
0
2
You should update the way you compute the value as follows:
int value = target - nums[i];
I'm trying to solve the following problem: given a list of plate weights, find the combination whose total is as close to 1000 as possible (preferring the heavier total on a tie).
I feel like I've given it a lot of thought and tried a lot of things. I managed to solve it and produce correct values, but the problem is that my solution isn't time-efficient enough. It completes 2 of the Kattis tests and fails on the 3rd because the time limit of 1 second was exceeded. There is no way for me to see what input they tested with, I'm afraid.
I started out with a recursive solution and finished it, but then I realised it wasn't time-efficient enough, so I switched to an iterative solution instead.
I start by reading the input and adding the values to an ArrayList, and then I call the following method with target set to 1000.
public static int getCorrectWeight(List<Integer> platesArr, int target) {
    /* Creates two lists, one for storing completed values after each iteration,
       one for storing new values during iteration. */
    List<Integer> vals = new ArrayList<>();
    List<Integer> newVals = new ArrayList<>();
    // Inserts 0 as a first value so that we can start the first iteration.
    int best = 0;
    vals.add(best);
    for (int i = 0; i < platesArr.size(); i++) {
        for (int j = 0; j < vals.size(); j++) {
            int newVal = vals.get(j) + platesArr.get(i);
            if (newVal <= target) {
                newVals.add(newVal);
                if (newVal > best) {
                    best = newVal;
                }
            } else if ((Math.abs(target - newVal) < Math.abs(target - best))
                    || (Math.abs(target - newVal) == Math.abs(target - best) && newVal > best)) {
                best = newVal;
            }
        }
        vals.addAll(newVals);
    }
    return best;
}
My question is: is there some way I can reduce the time complexity of this for large amounts of data?
The main problem is that the sizes of vals and newVals can grow very quickly, as each iteration can double them. You only need to store 1000 or so distinct values, which should be manageable. You're limiting the values, but because they're stored in an ArrayList, it ends up with a lot of duplicate values.
If instead, you used a HashSet, then it should help the efficiency a lot.
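A minimal sketch of that idea, keeping the original method signature and the original tie-break preferring the heavier total (requires java.util.List, Set, and HashSet):
public static int getCorrectWeight(List<Integer> platesArr, int target) {
    // A set stores each reachable sum exactly once, so it can never
    // hold more than about 2 * target distinct values.
    Set<Integer> sums = new HashSet<>();
    sums.add(0);
    for (int plate : platesArr) {
        Set<Integer> next = new HashSet<>(sums);
        for (int s : sums) {
            int newSum = s + plate;
            if (newSum <= 2 * target) { // sums above 2 * target can never beat the empty sum
                next.add(newSum);
            }
        }
        sums = next;
    }
    // pick the sum closest to target, preferring the heavier one on a tie
    int best = 0;
    for (int s : sums) {
        if (Math.abs(target - s) < Math.abs(target - best)
                || (Math.abs(target - s) == Math.abs(target - best) && s > best)) {
            best = s;
        }
    }
    return best;
}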
You only need to store a DP table of size 2001 (0 to 2000).
Let dp[i] represent whether it is possible to form i kg of weights. If an index would go over the array bounds, ignore it.
For example:
int[] dp = new int[2001]; // all entries start at 0
dp[0] = 1;
for (int i = 0; i < values.length; i++) {
    for (int j = 2000; j >= values[i]; j--) {
        dp[j] = Math.max(dp[j], dp[j - values[i]]);
    }
}
Here, values is the array where all the original weights are stored. All entries of dp start at 0, except dp[0] which is set to 1.
Then, check whether it is possible to make 1000. If not, check 999 and 1001, and so on.
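One way to write that outward search, checking the heavier candidate first so ties break toward the heavier total (matching the tie-break in the question's code):
int best = -1;
for (int distance = 0; distance <= 1000; distance++) {
    if (dp[1000 + distance] == 1) { // heavier side checked first
        best = 1000 + distance;
        break;
    }
    if (dp[1000 - distance] == 1) {
        best = 1000 - distance;
        break;
    }
}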
This should run in O(1000n + 2000) time; since n is at most 1000, it should finish within the limit.
By the way, this is a modified knapsack algorithm; you might want to look up some of its other variants.
If you think too generally about this type of problem, you may think you have to check all possible combinations of input (each weight can be included or excluded), giving you 2^n combinations to test if you have n inputs. This is, however, rather beside the point. Rather, the key here is that all weights are integers, and that the goal is 1000.
Let's examine corner cases first, because that limits the search space.
If all weights are >= 1000, pick the smallest.
If there is at least one weight < 1000, that is always better than any weight >= 2000, so you can ignore any weight >= 1000 for combination purposes.
Then, apply dynamic programming. Keep a set (you got HashSet as a suggestion from another poster, but BitSet is even better since the maximum value in it is so small) of all sums obtainable from the first k inputs, and increase k by combining all previous solutions with the (k+1)-th input.
When you have considered all possibilities, just search the bit vector for the best response.
static int count() {
    int[] weights = new int[]{900, 500, 498, 4};
    // Check for corner case to limit search later
    int min = Integer.MAX_VALUE;
    for (int weight : weights) min = Math.min(min, weight);
    if (min >= 1000) {
        return min;
    }
    // Get all interesting combinations
    BitSet combos = new BitSet();
    for (int weight : weights) {
        if (weight < 1000) {
            // iterate downwards so newly set bits (always above the
            // current position) are never extended by the same weight twice
            for (int t = combos.previousSetBit(2000 - weight); t >= 0; t = combos.previousSetBit(t - 1)) {
                combos.set(weight + t);
            }
            combos.set(weight);
        }
    }
    // Pick best combo; checking 1000 + distance first prefers the heavier total on a tie
    for (int distance = 0; distance <= 1000; distance++) {
        if (combos.get(1000 + distance)) {
            return 1000 + distance;
        }
        if (combos.get(1000 - distance)) {
            return 1000 - distance;
        }
    }
    return 0;
}
I'm creating a voxel engine in Java using LWJGL just for practice, but I'm getting stuck on the chunk management system. More specifically, I'm trying to convert a Chunk, which is just a 3D array of integers for the block id, into an Octree for optimal rendering.
So far, I have the system working, but it's horribly inefficient.
Here's a screenshot of a 16*16*16 chunk with all positions below y=8 set to 1 (the red blocks)
https://raw.githubusercontent.com/ninthworld/Octree/master/screenshot0.png
I added a debugger to the OctNode generator code to find out how many times it needed to access the chunk array, and it came back with 8392704.
It accessed the chunk array over 8 million times just to generate 8 children nodes.
When I set the chunk array to only have blocks below y=4, the program shows a black screen for almost 30 seconds, and the debugger returns 1623199744 array accesses.
Over 1.6 billion array calls just to generate 68 children nodes.
I obviously need to reduce the number of array calls, but I'm not sure how I would go about doing that. Here's the GitHub page for the project if you'd like to see the entire source code.
Here are the important parts of my code:
Main.java
// Initialize Octree Object
// This is just an extended OctNode object
octree = new Octree(chunk);
octree.generate(lookup);
OctNode.java
public void generate() {
    int value = -1;
    // Loop through an area of width*width*width
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < width; y++) {
            for (int z = 0; z < width; z++) {
                // Get the value from the master array based on the node's
                // offset + the for-loop'd area
                int store = array[x + (int) offset.x][y + (int) offset.y][z + (int) offset.z];
                // Basically sets the value variable to the value at
                // 0, 0, 0 with respect to the offset
                if (value < 0)
                    value = store;
                // Then check if the current position's value is the
                // same as the first one found (int value); if it's not,
                // then this node needs to be subdivided
                if (store != value) {
                    // Create 8 children for this node
                    children = new OctNode[8];
                    // And for each of the children...
                    for (int i = 0; i < children.length; i++) {
                        // Set their offset equal to this node's offset +
                        // this node's width/2, all with respect to its
                        // individual octant (which is related to i)
                        Vector3f offS = new Vector3f(
                                offset.x + (width / 2f) * (i % 2),
                                offset.y + (width / 2f) * ((int) Math.floor(i / 2f) % 2),
                                offset.z + (width / 2f) * ((int) Math.floor(i / 4f) % 2));
                        // Initialize the new child node
                        children[i] = new OctNode(array, (int) (width / 2f), offS);
                        // And now do the same thing (recursion), but
                        // for a smaller area
                        children[i].generate();
                    }
                }
            }
        }
    }
    // This is only reached if the node is completely made of one value
    if (children == null) {
        data = value;
    }
}
That's the best I can explain it, unfortunately. If you could point out an easier, faster way to do the same thing, that would be amazing.
You are partitioning the tree too frequently. If you have a 16x16x16 array with all different values, you are going to recurse at every cell except the first, so you will call generate (16x16x16 - 1) times at the top level rather than just once; the value of the children array will be overwritten many times over; and of course you will repeat this unnecessary work at the next level down, etc.
You should move the decision to subdivide the current octree node outside the nested for loops. For example:
public void generate() {
    // Assuming that width >= 1.
    int minValue = Integer.MAX_VALUE;
    int maxValue = Integer.MIN_VALUE;
    // Loop through an area of width*width*width
    // looking for min and max values in that area.
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < width; y++) {
            for (int z = 0; z < width; z++) {
                int store = array[x + (int) offset.x][y + (int) offset.y][z + (int) offset.z];
                minValue = Math.min(minValue, store);
                maxValue = Math.max(maxValue, store);
            }
        }
    }
    if (minValue != maxValue) {
        // Subdivide if the min and max values are different,
        // as above.
        children = new OctNode[8];
        // etc.
    } else {
        data = minValue;
    }
}
My raw signal graph is as follows.
What I intend to do is "real peak" detection; that is, the sawtooth-like noise peaks in the raw signal should not be counted.
After applying the Chebyshev Type 2 LPF implemented in Python, the signal is smoothed into the following graph.
As can be seen, I can implement the LPF in Python.
But my problem is to implement it in Java.
Is there any readily-built LPF that suits my purpose?
Or anyone can teach me how to do this in Java?
The parameters are as follows:
Cut-off freq. = 4Hz.
Sampling rate = 350Hz.
There are a number of ways to implement a filter like this. Direct Form I is both straightforward and numerically stable, so I'll recommend that. I'll show double precision numbers for recursive variables to ensure accuracy. You may want to use doubles all around to avoid conversions, but I'll show with float and double so you can see where you really need the doubles.
I don't have code handy for a high-order filter like this, so this is untested, but the concepts here will get you to the answer. You can always compare your results to the Python results.
First, you should already have coefficients of the following form:
float[] a = { /* ... */ }; // 10 feedback (denominator) coefficients
float[] b = { /* ... */ }; // 10 feedforward (numerator) coefficients
Now, you'll want to make sure the coefficients are normalized, if they aren't already:
for (int i = 0; i < 10; ++i)
    b[i] /= a[0];
for (int i = 1; i < 10; ++i)
    a[i] /= a[0];
Your last setup step will be to create the memory buffers that store old inputs (x) and outputs (y):
float[] x = new float[10];  // past inputs, zero-initialized
double[] y = new double[10]; // past outputs, zero-initialized
When "reseting" the filter for a new dataset, remember to set the values of those to 0 again.
Now you are ready to start processing. This involves two steps: 1. calculating your output, and 2. updating your stored values.
float processOneValue(float in) {
    // calculate the new output:
    double out = in * b[0];
    for (int i = 0; i < 9; ++i)
        out += x[i] * b[i + 1];
    for (int i = 0; i < 9; ++i)
        out -= y[i] * a[i + 1];
    // update the stored values:
    for (int i = 9; i >= 1; --i)
        y[i] = y[i - 1];
    y[0] = out;
    for (int i = 9; i >= 1; --i)
        x[i] = x[i - 1];
    x[0] = in;
    return (float) out; // explicit narrowing cast, since out is a double
}
Since this is such a high order filter, it might be more efficient to use a ringbuffer rather than the "bucket-brigade" style updates I used for x and y, but this works and is simpler to read.
Now, to process an array of data, just loop on processOneValue(). You can obtain the output in place or in a new array.
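For example, into a new array (a sketch; readSignal() is a hypothetical source of your 350 Hz samples):
float[] signal = readSignal();
float[] filtered = new float[signal.length];
for (int i = 0; i < signal.length; i++) {
    filtered[i] = processOneValue(signal[i]);
}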
I essentially have a bunch of data objects which map timestamps in milliseconds to float values, and I'm looking to find the peak/max of the data in a given range. I've been using something like this:
float previousValue = 0;
for (int i = 0; i < data.size(); i++) {
    MyData value = data.get(i);
    if (value.getData() < previousValue) {
        // found the peak!
        break;
    } else {
        previousValue = value.getData();
    }
}
The only problem with this algorithm is that it doesn't account for noise. Essentially, I could have values like this:
[0.1025, 0.3000, 0.3025, 0.3500, 0.3475, 0.3525, 0.1025]
The actual peak is at 0.3525, but my algorithm above would see it as 0.3500, as it comes first. Due to the nature of my calculations, I can't just call max() on the array to find the largest value; I need to find the largest value that comes first, before the data falls.
How can I find the top of my peak, while accounting for some variance in noise?
There are two issues:
1. filtering out the noise;
2. finding the peak.
It seems like you already have a solution for 2, and need to solve 1.
To filter out the noise, you need some kind of low-pass filter. A moving average is one such filter. For example, exponential moving average is very easy to implement and should work well.
In summary: put your series through the filter, and then apply the peak finding algorithm.
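A minimal exponential moving average sketch in Java, where raw is your input series and alpha is a hypothetical smoothing factor you'd tune to your noise level (between 0 and 1; smaller values smooth more):
float[] smoothed = new float[raw.length];
float alpha = 0.3f; // hypothetical; tune for your data
smoothed[0] = raw[0];
for (int i = 1; i < raw.length; i++) {
    // each output is a blend of the new sample and the previous output
    smoothed[i] = alpha * raw[i] + (1 - alpha) * smoothed[i - 1];
}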
An easier method to find a single peak (or the highest value) in an array (any numeric array: int, double, etc.) is to loop through the array and keep a variable set to the highest value found so far.
Example (all examples use a float array called "data"):
float highest = 0; // use a number equal to or below the lowest possible value
for (int i = 0; i < data.length; i++) {
    if (data[i] > highest) {
        highest = data[i];
    }
}
To find multiple peaks in noisy data while filtering some of the noise out, I used this method:
boolean[] isPeak = new boolean[20]; // marks which of the 20 slots hold a real peak
float[] filter = new float[9];      // sliding window; a peak must be the maximum of 9 samples
float[] peaks = new float[20];      // the 20 highest peaks found so far
// first we cycle the filter window through the data
for (int i = 0; i < data.length; i++) {
    for (int a = filter.length - 1; a > 0; a--) {
        filter[a] = filter[a - 1];
    }
    filter[0] = data[i];
    // now we check to see if the centre of the window is a peak
    if (filter[4] > filter[0] && filter[4] > filter[1] && filter[4] > filter[2] &&
        filter[4] > filter[3] && filter[4] > filter[5] && filter[4] > filter[6] &&
        filter[4] > filter[7] && filter[4] > filter[8]) {
        // now we find the lowest stored peak
        float lowpeak = Float.MAX_VALUE;
        int lowIndex = 0;
        for (int x = 0; x < peaks.length; x++) {
            if (peaks[x] < lowpeak) {
                lowpeak = peaks[x];
                lowIndex = x;
            }
        }
        // and replace it if the new peak is higher
        if (filter[4] > lowpeak) {
            peaks[lowIndex] = filter[4];
            isPeak[lowIndex] = true;
        }
    }
}
This may not be the most efficient way to do it, but it gets the job done!