I am working on a problem. Out of 17 test cases, 10 run fine and give the result in less than a second, but the other 7 take 2 seconds, which is beyond the time limit. Following is the code:
import java.util.*;
import java.io.*;

class TestClass
{
    static PrintWriter wr = new PrintWriter(System.out);

    public static void func1(int arr[], int n)
    {
        int temp = arr[0];
        for (int jj = 0; jj < n; jj++)
        {
            if (jj == (n - 1))
                arr[jj] = temp;
            else
                arr[jj] = arr[jj + 1];
        }
    }

    public static void func2(int arr[], int n, int rt)
    {
        int count = 0;
        for (int a = 0; a < n; a++)
        {
            for (int b = a; b < n; b++)
            {
                if (arr[a] > arr[b])
                    count++;
            }
        }
        if (rt == (n - 1))
            wr.print(count);
        else
            wr.print(count + " ");
    }

    public static void main(String args[]) throws Exception
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String str = br.readLine().trim();
        StringTokenizer st = new StringTokenizer(str);
        int t = Integer.parseInt(st.nextToken());
        for (int i = 0; i < t; i++) // for test cases
        {
            str = br.readLine().trim();
            st = new StringTokenizer(str);
            int n = Integer.parseInt(st.nextToken());
            int arr[] = new int[n];
            str = br.readLine().trim();
            st = new StringTokenizer(str);
            for (int j = 0; j < n; j++) // to take input of array for each test case
            {
                arr[j] = Integer.parseInt(st.nextToken());
            }
            for (int rt = 0; rt < n; rt++) // for number of times circular shifting of array is done
            {
                func1(arr, n); // circularly shifts the array by one position
                func2(arr, n, rt); // prints the number of inversion counts
            }
            if (i != (t - 1))
                wr.println();
        }
        wr.close();
        br.close();
    }
}
Can someone suggest how to optimize the code so that it takes less time to execute?
I know BufferedReader and PrintWriter take less time than Scanner and System.out.print. I was using Scanner and System.out.print earlier, but changed them later in the hope of getting a faster time; it didn't help. I also tried doing all the operations in main without func1 and func2, and the time in both cases remains the same.
I am getting the correct output in all the cases, so the code is correct; I just need help optimizing it.
The website you are using acquires questions from past programming competitions; I recognize this as a familiar problem.
Like most optimization questions, the preferred steps are:
Do less.
Do the same in fewer instructions.
Don't use functions.
Use faster instructions.
In your case, you have an array, and you wish to rotate it a number of times, and then to process it from the rotated position.
Rotating an array is an incredibly expensive operation, because you typically need to copy every element in the array into a new location. What is worse for you is that you are doing it the simplest way: you are rotating the array one step for every step needing rotation.
So, if you have a 100-element array that needs to be rotated 45 steps, you would then have (at 3 copies per element swap) 100 * 45 * 3 copies to perform your rotation.
In the above example, a better approach would be to figure out a routine that rotates an array 45 elements at a time. There are a number of ways to do this. The easiest is to double the RAM requirements and just use two arrays:
b[x] = a[mod(x + 45, a.length)]
An even faster "do less" would be to never rotate the array, but to perform the calculation in reverse: conceptually, a function from the desired index in the rotated array to the actual index in the pre-rotated array. This avoids all copying, and the index numbers (by virtue of being heavily manipulated in the math processing unit) will already be stored in CPU registers, which are the fastest storage a computer has.
Note that once you have the starting index in the original array, you can then calculate the next index without going through the calculation again.
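To make the index arithmetic concrete, here is a rough sketch (the class and method names are mine, not from your code) of counting inversions for a given left-rotation r without ever moving an element:

```java
// Sketch: count inversions of arr as if it had been rotated left by r
// positions. The modulo maps each rotated position back into the original
// array, so no copying happens at all.
class RotatedInversions {
    static int count(int[] arr, int r) {
        int n = arr.length;
        int inversions = 0;
        for (int a = 0; a < n; a++) {
            int va = arr[(a + r) % n]; // element at rotated position a
            for (int b = a + 1; b < n; b++) {
                if (va > arr[(b + r) % n]) inversions++;
            }
        }
        return inversions;
    }
}
```

Each rotation still costs O(n^2) comparisons here, but the copying work disappears entirely.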
I might have read this problem a bit wrong, because the question is not written to highlight the problem being solved. However, the core principles above apply, and it will be up to you to apply them to the exact specifics of your programming challenge.
An example of a faster rotate that does less:
public static void func1(int arr[], int shift) {
    int offset = shift % arr.length;
    int[] rotated = new int[arr.length];
    // walk up until we must start copying from the head of arr
    for (int index = 0; index < arr.length - offset; index++) {
        rotated[index] = arr[index + offset];
    }
    // copy the front of the array into the tail of the rotated result
    for (int index = arr.length - offset; index < arr.length; index++) {
        rotated[index] = arr[index + offset - arr.length];
    }
    System.arraycopy(rotated, 0, arr, 0, arr.length);
}
This copies the array into its rotated position in one pass, instead of doing a pass per index to be rotated.
The first rule of optimisation (having decided it is necessary) is to use a profiler. This counts how many times methods are invoked, and measures the accumulated time within each method, and gives you a report.
It doesn't matter if a method is slow if you only run it a few times. If you run it hundreds of thousands of times, you need to either make it faster, or run it fewer times.
If you're using a mainstream IDE, you already have a profiler. Read its documentation and use it.
The other first rule of optimisation is, if there's already literature about the problem you're trying to solve, read it. Most of us might have invented bubble-sort independently. Fewer of us would have come up with QuickSort, but it's a better solution.
It looks as if you're counting inversions in the array. Your implementation is about as efficient as you can get, given that naive approach.
for (int i = 0; i < array.length; i++) {
    int n1 = array[i];
    for (int j = i + 1; j < array.length; j++) {
        int n2 = array[j];
        if (n1 > n2) {
            count++;
        }
    }
}
For an array of length l this will take (l - 1) + (l - 2) + ... + 1 comparisons -- that's a triangular number, and it grows proportionally to the square of l.
So for l=1000 you're doing ~500,000 comparisons. Then since you're repeating the count for all 1000 rotations of the array, that would be 500,000,000 comparisons, which is definitely the sort of number where things start taking a noticeable amount of time.
Googling for inversion count reveals a more sophisticated approach, which is to perform a merge sort, counting inversions as they are encountered.
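A sketch of that idea (the names are mine): during the merge, whenever an element from the right half is placed before remaining left-half elements, it forms an inversion with each of them.

```java
// Merge sort that counts inversions as a side effect. Sorts the array
// in place and returns the number of inverted pairs.
class InversionCounter {
    static long count(int[] a) {
        return mergeCount(a, 0, a.length);
    }

    // Counts inversions in a[lo..hi) while sorting that range.
    private static long mergeCount(int[] a, int lo, int hi) {
        if (hi - lo < 2) return 0;
        int mid = (lo + hi) / 2;
        long count = mergeCount(a, lo, mid) + mergeCount(a, mid, hi);
        int[] merged = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) {
            if (a[i] <= a[j]) {
                merged[k++] = a[i++];
            } else {
                merged[k++] = a[j++];
                count += mid - i; // a[j] jumps ahead of every remaining left element
            }
        }
        while (i < mid) merged[k++] = a[i++];
        while (j < hi) merged[k++] = a[j++];
        System.arraycopy(merged, 0, a, lo, merged.length);
        return count;
    }
}
```

This runs in O(n log n) per count instead of O(n^2); for this specific task it is also worth noticing how the count changes when a single element moves from the front of the array to the back.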
Otherwise, we need to look for opportunities for huge numbers of loop iterations. A loop inside a loop makes for big numbers. A loop inside a loop inside another loop makes for even bigger numbers.
You have:
for (int i = 0; i < t; i++) {
    // stuff removed
    for (int rt = 0; rt < n; rt++) {
        // snip
        func2(arr, n, rt); //prints the number of inversion counts
    }
    // snip
}

public static void func2(int arr[], int n, int rt) {
    // snip
    for (int a = 0; a < n; a++) {
        for (int b = a; b < n; b++) {
            // stuff
        }
    }
    // snip
}
That's four levels of looping. Look at the input values for your slow tests, and work out what n * n * n * t is -- that's an indicator of how many times it'll do the work in the innermost block.
We don't know what your algorithm is supposed to achieve. But think about whether you're doing the same thing twice in any of these loops.
It looks as if func1() is supposed to rotate an array. Have a look at System.arraycopy() for moving whole chunks of an array at a time. Most CPUs can do this very quickly.
class Check {
static void countOddEven(int a[], int n) {
int countEven = 0, countOdd = 0;
for (int item : a) {
if (item % 2 == 0) {
countEven++;
}
}
countOdd = n - countEven;
System.out.println(countOdd + " " + countEven);
}
}
The code is meant to count the even and odd numbers in an array. Please help me optimise it.
Your code is not correct.
If you were meant to count the even and odd numbers in a, then you are counting the even numbers correctly. If n is not equal to the length of a, then your calculation of the count of odd numbers is incorrect.
If on the other hand — and I’m just guessing — you were meant to count the even and odd numbers among the first n elements, then you are counting the even numbers incorrectly since you are iterating over all of a. Also in this case, if n is much smaller than the length of a, there is an optimization in only iterating over the first n elements as you should.
Finally you may try the following version. I doubt that it buys you anything, but I am leaving the measurements to you.
int countOdd = 0;
for (int ix = 0; ix < n; ix++) {
countOdd += a[ix] & 1;
}
int countEven = n - countOdd;
The trick is: a[ix] & 1 gives you the last bit of a[ix]. This is 1 for odd numbers and 0 for even numbers (positive or negative). So we are really adding a 1 for each odd number.
You should try running the code with a for loop instead of a for-each loop, because:
When accessing arrays, at least with primitive data, a for loop is dramatically faster.
However, when accessing collections, a for-each loop is significantly faster than the basic for loop's get-by-index access.
But if you are getting some other errors, you might have done something wrong while calling the method (make sure n equals the length of your array).
Here is the whole code with a main method.
class Check {
    static void countOddEven(int a[], int n) {
        int countEven = 0, countOdd = 0;
        for (int i = 0; i < n; i++) {
            if (a[i] % 2 == 0) {
                countEven++;
            }
        }
        countOdd = n - countEven;
        System.out.println(countOdd + " " + countEven);
    }

    public static void main(String[] arg) {
        int a[] = {2, 3, 4, 5, 6};
        int n = 5;
        countOddEven(a, n);
    }
}
I'm trying to solve the following problem:
I feel like I've given it a lot of thought and tried a lot of things. I manage to solve it and produce correct values, but the problem is that my solution isn't time-efficient enough. It passes 2 of the Kattis tests and fails on the third because the time limit of 1 second was exceeded. There is no way for me to see what input they tested with, I'm afraid.
I started out with a recursive solution and finished that, but then I realised it wasn't time-efficient enough, so I switched to an iterative solution instead.
I start by reading the input and adding the values to an ArrayList, and then I call the following method with target set to 1000.
public static int getCorrectWeight(List<Integer> platesArr, int target) {
/* Creates two lists, one for storing completed values after each iteration,
one for storing new values during iteration. */
List<Integer> vals = new ArrayList<>();
List<Integer> newVals = new ArrayList<>();
// Inserts 0 as a first value so that we can start the first iteration.
int best = 0;
vals.add(best);
for(int i=0; i < platesArr.size(); i++) {
for(int j=0; j < vals.size(); j++) {
int newVal = vals.get(j) + platesArr.get(i);
if (newVal <= target) {
newVals.add(newVal);
if (newVal > best) {
best = newVal;
}
} else if ((Math.abs(target-newVal) < Math.abs(target-best)) || (Math.abs(target-newVal) == Math.abs(target-best) && newVal > best)) {
best = newVal;
}
}
vals.addAll(newVals);
}
return best;
}
My question is, is there some way that I can reduce the time complexity on this one for large number of data?
The main problem is that the size of vals and newVals can grow very quickly, as each iteration can double their size. You only need to store about 1000 distinct values, which should be manageable. You are capping the values, but because they're stored in an ArrayList, you end up with a lot of duplicates.
If instead, you used a HashSet, then it should help the efficiency a lot.
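A rough sketch of that change, keeping the shape of your method (the class and method names here are mine):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Same idea as getCorrectWeight, but reachable sums live in a HashSet,
// so duplicates collapse and each round stays bounded in size.
class PlateSum {
    static int closestToTarget(List<Integer> plates, int target) {
        Set<Integer> sums = new HashSet<>();
        sums.add(0);
        for (int plate : plates) {
            Set<Integer> next = new HashSet<>(sums);
            for (int s : sums) {
                int v = s + plate;
                // Anything above 2*target is farther from target than 0 is,
                // so it can never win (ties prefer the heavier side, and
                // 2*target itself still qualifies).
                if (v <= 2 * target) next.add(v);
            }
            sums = next;
        }
        int best = 0;
        for (int s : sums) {
            int d = Math.abs(target - s);
            int bd = Math.abs(target - best);
            if (d < bd || (d == bd && s > best)) best = s;
        }
        return best;
    }
}
```

With at most about 2 * target + 1 distinct values per round, the inner loop stays bounded no matter how many plates there are.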
You only need to store a DP table of size 2001 (indices 0 to 2000).
Let dp[i] represent whether it is possible to form exactly i kg of weights. If a sum would go over the table bounds, ignore it.
For example:
dp[0] = 1;
for (int i = 0; i < values.size(); i++) {
    for (int j = 2000; j >= values.get(i); j--) {
        dp[j] = Math.max(dp[j], dp[j - values.get(i)]);
    }
}
Here, values is where all the original weights are stored. All values of dp are to be set to 0 except for dp[0].
Then check whether it is possible to make 1000. If not, check 999 and 1001, and so on.
This should run in O(1000n + 2000) time; since n is at most 1000, this should run in time.
By the way, this is a modified knapsack algorithm, you might want to look up some other variants.
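Pieced together as runnable Java (a sketch: boolean flags instead of ints, and the names are mine), the whole approach might look like:

```java
import java.util.List;

class KnapsackWeights {
    // dp[i] == true iff some subset of the plates sums to exactly i kg.
    // Sums above 2000 are ignored, per the observation above.
    static int bestWeight(List<Integer> values) {
        boolean[] dp = new boolean[2001];
        dp[0] = true;
        for (int v : values) {
            for (int j = 2000; j >= v; j--) {
                dp[j] = dp[j] || dp[j - v];
            }
        }
        // Walk outward from 1000, preferring the heavier side on ties.
        for (int d = 0; d <= 1000; d++) {
            if (dp[1000 + d]) return 1000 + d;
            if (dp[1000 - d]) return 1000 - d;
        }
        return 0; // unreachable: dp[0] is always true
    }
}
```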
If you think too generally about this type of problem, you may think you have to check all possible combinations of input (each weight can be included or excluded), giving you 2^n combinations to test if you have n inputs. This is, however, rather beside the point. Rather, the key here is that all weights are integers, and that the goal is 1000.
Let's examine corner cases first, because that limits the search space.
If all weights are >= 1000, pick the smallest.
If there is at least one weight < 1000, it is always better than any weight or sum >= 2000, so you can ignore any weight >= 2000 for combination purposes.
Then, apply dynamic programming. Keep a set (you got HashSet as suggestion from other poster, but BitSet is even better since the maximum value in it is so small) of all combinations of the first k inputs, and increase k by combining all previous solutions with the k+1'th input.
When you have considered all possibilities, just search the bit vector for the best response.
static int count() {
    int[] weights = new int[]{900, 500, 498, 4};
    // Check for corner case to limit search later
    int min = Integer.MAX_VALUE;
    for (int weight : weights) min = Math.min(min, weight);
    if (min >= 1000) {
        return min;
    }
    // Get all interesting combinations
    BitSet combos = new BitSet();
    for (int weight : weights) {
        if (weight < 2000) { // anything >= 2000 can never beat a weight below 1000
            for (int t = combos.previousSetBit(2000 - weight); t >= 0; t = combos.previousSetBit(t - 1)) {
                combos.set(weight + t);
            }
            combos.set(weight);
        }
    }
    // Pick best combo
    for (int distance = 0; distance <= 1000; distance++) {
        if (combos.get(1000 + distance)) {
            return 1000 + distance;
        }
        if (combos.get(1000 - distance)) {
            return 1000 - distance;
        }
    }
    return 0;
}
I am looking to find the sum of all subsquares of an extremely large matrix. For example, the 2D matrix
{ {1, 2}
  {3, 4} }
would return 20 (the four 1x1 subsquares sum to 10, and the single 2x2 subsquare sums to 10). I have achieved this in Java, but the program is very slow. Is there a faster way to do it, or another language that would be faster?
public class insaneMat
{
    public static void main(String[] args)
    {
        int[][] mat = new int[10000][10000];
        try
        {
            Scanner input = new Scanner(new File("file.in"));
            int count = 0;
            for (int r = 0; r < mat.length; r++)
            {
                for (int c = 0; c < mat[r].length; c++)
                {
                    mat[r][c] = input.nextInt();
                    count++;
                    System.out.println("Loaded num " + count);
                }
            }
        }
        catch (Exception e)
        {
            System.out.println("ERROR!");
        }
        int n = mat.length;
        int k;
        int sum = 0;
        for (k = 1; k <= n; k++) // k <= n so the full-size subsquare is included
        {
            System.out.println("Calculating with subsquare of size " + k);
            for (int i = 0; i < n - k + 1; i++)
            {
                for (int j = 0; j < n - k + 1; j++)
                {
                    for (int p = i; p < k + i; p++)
                        for (int q = j; q < k + j; q++)
                            sum += mat[p][q];
                }
            }
        }
        System.out.println("Number = " + sum);
    }
}
This works as intended in Java, except that it is very, very slow. The program works for the 2x2 matrix provided above, but only gets through about 30 subsquare sizes per hour on the large input. Can it be done faster, regardless of whether I use Java or a different language? Would Matlab be useful?
The way to solve this is to work out analytically how many subsquares of a matrix of size N each element[i,j] will belong to ... call this function C(n, i,j). Then work out the sum of C(n, i, j) * element[i, j] over all i, j.
The complexity of the computation will be O(N^2) ... compared with your current algorithm's O(N^5) (And I suspect that the algorithm might be wrong ... if you want all subsquares from 1 to N-1 ... and the true complexity is O(N!).)
Anyhow, all you need is a bit of mathematics :-)
Warning, these sums are going to get exceedingly large.
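A sketch of what that looks like (the names are mine; the counting formula is standard combinatorics: a k-by-k subsquare with top-left row r contains row i iff max(0, i-k+1) <= r <= min(i, n-k), and likewise for columns). The inner loop over k makes this O(N^3) as written; collapsing that sum over k into a closed form is the remaining bit of mathematics that gets you to O(N^2):

```java
// Weight each element by the number of subsquares (of every size k)
// that contain it, then take one weighted pass over the matrix.
class SubsquareSum {
    static long total(int[][] mat) {
        int n = mat.length;
        long sum = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                long c = 0;
                for (int k = 1; k <= n; k++) {
                    long rows = Math.min(i, n - k) - Math.max(0, i - k + 1) + 1;
                    long cols = Math.min(j, n - k) - Math.max(0, j - k + 1) + 1;
                    if (rows > 0 && cols > 0) c += rows * cols;
                }
                sum += c * mat[i][j]; // element counted once per containing subsquare
            }
        }
        return sum;
    }
}
```

Note the long accumulator, in line with the warning about the sums getting large.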
What you have is an O(n^5) algorithm; those tend to be very slow. Matlab will help because it is heavily optimized for matrix operations. However, to get the maximum speed you will need to write vectorized code, meaning that two of your inner loops should be compressed into something like sum(sum(M[i:k+i, j:k+j])). If you don't have lots of money for Matlab, these days numpy tends to be very fast.
Another observation, which Stephen C mentioned, is that you are computing the same thing over and over again. For example, the sum of the elements of submatrix M[2:3,3:4] is also useful for submatrices M[1:3,3:4] and M[1:5,1:8], etc. So you need to keep tabs on those intermediate results; look up memoization and dynamic programming for the exact techniques and the theory behind them.
The easiest, almost automated, way to do it is through memoization (https://en.wikipedia.org/wiki/Memoization), which essentially means turning your inner loops into separate methods. These methods keep a record of the arguments they were called with and the results they returned, so that the next time they are called with the same parameters they can just look up the result without doing the actual calculation. Many functional languages have a library that turns a regular function into a memoized one, and it is now possible in Java 8 as well with the new functional extensions (https://dzone.com/articles/java-8-automatic-memoization).
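The classic hand-rolled form of that bookkeeping for this problem is a 2D prefix-sum table: pre[i][j] holds the sum of the rectangle from (0,0) to (i-1,j-1), after which any subsquare sum is four lookups. A sketch (names mine) that brings the total work down to O(n^3) from O(n^5):

```java
class PrefixSubsquares {
    // pre[i][j] = sum of mat[0..i)[0..j); any rectangle sum is 4 lookups.
    static long sumOfAllSubsquares(int[][] mat) {
        int n = mat.length;
        long[][] pre = new long[n + 1][n + 1];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                pre[i + 1][j + 1] = mat[i][j] + pre[i][j + 1] + pre[i + 1][j] - pre[i][j];
            }
        }
        long total = 0;
        for (int k = 1; k <= n; k++) {         // every subsquare size
            for (int i = 0; i + k <= n; i++) { // every top-left corner
                for (int j = 0; j + k <= n; j++) {
                    total += pre[i + k][j + k] - pre[i][j + k]
                           - pre[i + k][j] + pre[i][j];
                }
            }
        }
        return total;
    }
}
```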
I've passed the sample input, and also some inputs I made up myself (even the extreme cases), but I don't know why I got "Incorrect." Also, ignore the quickSort() call; I've implemented it and it works fine. In case you need an overview, here's the link to the problem: https://code.google.com/codejam/contest/2434486/dashboard#s=p0&a=0
I even checked the editorial and I can't see where I went wrong.
Scanner in = new Scanner(System.in);
int t = in.nextInt();
for (int i = 0; i < t; i++) {
int size = in.nextInt();
int n = in.nextInt();
int ops = 0;
int[] sizes = new int[n];
for (int j = 0; j < n; j++) {
sizes[j] = in.nextInt();
}
quickSort(sizes);
for (int j = 0; j < n; j++) {
if (sizes[j] < size) {
size += sizes[j];
} else {
int newSize = (size * 2) - 1;
if (sizes[j] < newSize) {
size = newSize;
}
ops++;
}
}
System.out.println("Case #" + (i + 1) + ": " + ops);
}
So let's run through the logic of the problem you are trying to solve.
You have two options:
Remove a mote
Add a mote
Currently you are only adding motes, represented by newSize = (size * 2) - 1, effectively (almost) doubling the size of your mote each time.
What if the new size of your mote is still insufficient to capture the mote represented by the current j iteration of your loop? You need to make the choice again: add another mote, or remove that one.
So let's say your mote has a size of 2 and the other motes have sizes of 1, 1, 3, 5, 8, 1000, 1001. You could either continue to add lots of motes until yours can absorb 1000 and 1001, or you could just remove the two that are absurdly large.
With this revelation it seems that straight iteration will not always lead to the optimal answer. Each choice you make at each step is another possible path, and you want to find the decision path that leads to the least number of operations. It's as if you want to follow all possible paths and then choose the best solution found.
Luckily there are plenty of resources online about this. I recommend googling pathfinding and pathfinding algorithms. There is a bit to learn on the topic, but it's all interesting and fun.
Good luck!
In addition to @br3nt's answer, let's consider this:
If I were to add a simple greedy step to your algorithm, it would be the check

if (sizes[j] < size) {
    // ...
} else {
    deleteMote(j);
}

where deleteMote(j) does exactly what its name implies and deletes the mote whenever more than one step (the addition of a mote) would be required to grow large enough to eat the mote at index j.
This may not be the perfect solution, but it's a first step towards your goal.
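If this is Code Jam's "Osmos" (which the linked dashboard suggests), the add-or-delete choice can in fact be resolved greedily after sorting: whenever your mote is blocked, compare deleting every remaining mote against adding a mote of size one less than yours, and keep the cheaper running total. A sketch under that assumption (the names are mine):

```java
import java.util.Arrays;

class Motes {
    static int minOps(int size, int[] motes) {
        Arrays.sort(motes);
        if (size == 1) return motes.length; // can never grow: delete everything
        long s = size;
        int adds = 0;
        int best = motes.length; // worst case: delete every mote
        for (int j = 0; j < motes.length; j++) {
            while (s <= motes[j]) {
                // Blocked: either delete motes j..end now, or add a mote of
                // size s-1 (the biggest one we can absorb) and keep going.
                best = Math.min(best, adds + (motes.length - j));
                s += s - 1;
                adds++;
            }
            s += motes[j];
        }
        return Math.min(best, adds);
    }
}
```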
I've written two different implementations of heapsort today, and both have given me the same results:
Object i: 18
Object i: 11
Object i: 10
Object i: 9
Object i: 8
Object i: 3
Object i: 7
Object i: 1
Object i: 4
Now I've checked my code against this page here, and I believe one of my implementations is exactly what the pseudo-code suggests, whilst the other is very similar to one of the Java implementations.
I'd like to stress that I actually wrote two different versions and checked them against implementations I could find! So I am really stumped at the moment. I've stepped through it with a debugger a few times, but I must have missed something along the way. I even made a debugging function which just loops through the list and outputs the contents using System.out.println(), but this still didn't help a great deal!
The algorithm works on a List, and I have done nothing to optimise it at this stage; it is merely a bit of an experiment at the moment. I have working implementations of QuickSort, BubbleSort and Insertion Sort, but this one has left me stumped!
My first implementation is below:
public static List<Integer> execSort(List<Integer> s) {
    int n = (s.size() - 1);
    Integer t;
    for (int i = n / 2; i > 0; i--) {
        s = downheap(s, i, n);
    }
    while (n >= 1) {
        t = s.remove(0);
        s.add(0, s.remove(n - 1));
        s.add(n - 1, t);
        n--;
        s = downheap(s, 1, n);
    }
    return s;
}

public static List<Integer> downheap(List<Integer> s, int i, int n) {
    Integer t = s.get(i - 1);
    int j;
    while (i <= n / 2) {
        j = i * 2;
        if ((j < n) && (s.get(j - 1) < s.get(j)))
            j++;
        if (t >= s.get(j - 1)) {
            break;
        } else {
            /* Swap them, without using a third variable
               - although with all the get()/set() methods
               it would be better to have a third one, doh! */
            s.set(i - 1, (s.get(i - 1) + s.get(j - 1)));
            s.set(j - 1, (s.get(i - 1) - s.get(j - 1)));
            s.set(i - 1, (s.get(i - 1) - s.get(j - 1)));
            i = j;
        }
    }
    s.set(i - 1, t);
    return s;
}
You can also see them on Github as Gists:
- Implementation 1
- Implementation 2
Any ideas as to why some of the elements don't want to sort? I'm aware that this implementation is going to be sub-optimal, that a List<> isn't the best data structure for this, and that I should probably look at using primitive types as opposed to (ab)using auto-boxing... but that's for another post! I just want a working version before I try to improve it ;)
In the gist (you accidentally linked both to the same one), you have a few typos.
public static List<Integer> execSort(List<Integer> s) {
    int start = (s.size() / 2) - 1;
    int end = s.size() - 1;
    while (start >= 0) {
        s = sift(s, start, end);
sift takes the count as last argument, not the last index, so the argument ought to be s.size() (or end+1) instead of end.
public static List<Integer> sift(List<Integer> s, int start, int count) {
    int root = start;
    while (((root * 2) + 1) < 2) {
That must be while(root*2+1 < count), and not < 2.
In the code you have here, you have in part the same problem (caused by an odd indexing strategy, I suspect):
for(int i = n/2; i>0; i--){
s = downheap(s, i, n);
Since you always get(i-1) resp. j-1 in downheap, you need an upper bound of s.size() or n+1 while building the initial heap.
}
while(n >= 1){
This loop should only run while n > 1, or you'll swap the smallest element out of place.
t= s.remove(0);
s.add(0, s.remove(n-1));
s.add(n-1, t);
The old root must go in the last place, that's place n, not n-1, s.add(n,t).
n--;
s = downheap(s, 1, n);
}
In downheap, the final
s.set(i-1, t);
is superfluous, you always swap t, so when that line is reached, the element at i-1 already is t.
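For reference, here is what the algorithm can look like with those fixes folded in, rewritten with 0-based indexing to sidestep the off-by-one traps entirely (a sketch, not a patch to your exact gist; the names are mine):

```java
import java.util.Collections;
import java.util.List;

class HeapSortDemo {
    static List<Integer> heapSort(List<Integer> s) {
        int n = s.size();
        // Build a max-heap: sift down every non-leaf node.
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(s, i, n);
        for (int end = n - 1; end > 0; end--) {
            Collections.swap(s, 0, end); // move current max to its final place
            siftDown(s, 0, end);         // restore the heap over s[0..end)
        }
        return s;
    }

    static void siftDown(List<Integer> s, int root, int count) {
        while (2 * root + 1 < count) {
            int child = 2 * root + 1;
            if (child + 1 < count && s.get(child) < s.get(child + 1)) child++;
            if (s.get(root) >= s.get(child)) return; // heap property holds
            Collections.swap(s, root, child);
            root = child;
        }
    }
}
```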