Extremely slow code when using big array sizes [closed] - java

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I have a program that runs extremely slowly when using big arrays.
I use an int[3000][3000], a String[27000], and a String[5000] array in the final code. This code takes forever to run. Could this be because the arrays take up too much space?

It depends a good deal on the complexity of the algorithms you are using to manipulate the data. That determines how the running time grows as you throw more data at them (by making the arrays larger and larger). If you are just iterating through the data, it is on the order of O(n), meaning the time is proportional to the amount of data given; if you doubled the length of your arrays, the program would take twice as long to run. If you were, say, comparing every element with every other element, it would be on the order of O(n^2), so doubling the length of your arrays would make processing take around four times longer.
You would have to post your program for us to have any idea whether your algorithm is simply too complex for your computer to handle.
See also: Big O notation
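As a rough illustration of the difference (the class and variable names below are my own, not from the question), compare a single pass over the data with a pairwise comparison of its elements:

public class ComplexityDemo {
    public static void main(String[] args) {
        int n = 3000 * 3000;                 // about the element count of an int[3000][3000]
        int[] data = new int[n];

        // O(n): one pass over the data; doubling n roughly doubles the time.
        long sum = 0;
        for (int value : data) {
            sum += value;
        }

        // O(n^2): comparing every element with every other element; doubling
        // the size roughly quadruples the time. Kept small here so it finishes.
        int m = 3000;
        long equalPairs = 0;
        for (int i = 0; i < m; i++) {
            for (int j = i + 1; j < m; j++) {
                if (data[i] == data[j]) {
                    equalPairs++;
                }
            }
        }
        System.out.println(sum + " " + equalPairs);
    }
}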

Many factors:
Processor speed.
Memory allocation: 3000 * 3000 * 4 bytes = 36,000,000 bytes (about 34 MB) for the int[3000][3000] alone.
Consider using a List instead.
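For a rough sense of the numbers, here is a small sketch (my own, not from the answer; it ignores object headers, the per-row overhead of a 2-D array, and the memory of the String contents themselves):

public class MemoryEstimate {
    public static void main(String[] args) {
        long intBytes = 3000L * 3000L * 4;       // int is 4 bytes -> 36,000,000 bytes (~34 MB)
        long refBytes = (27000L + 5000L) * 8;    // String references, roughly 8 bytes each on a 64-bit JVM
        System.out.printf("int array:  %d bytes (~%d MB)%n", intBytes, intBytes / (1024 * 1024));
        System.out.printf("references: %d bytes%n", refBytes);

        // Sanity check: allocate the big array and look at the used heap.
        Runtime rt = Runtime.getRuntime();
        int[][] big = new int[3000][3000];
        System.out.printf("used heap after allocation: ~%d MB%n",
                (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
        System.out.println(big.length);          // keep 'big' reachable
    }
}

Either way, ~34 MB fits comfortably in a typical JVM heap, so the slowness is more likely the algorithm than the allocation.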

Related

Is there a programmatic way or Eclipse plugin to calculate big-O notation for a Java method? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Is there a programmatic way or an Eclipse plugin to calculate big-O notation for a Java method?
No, there is no such plugin, and if there were, it would be a mere approximation. Even determining whether a program will ever finish running is undecidable in general - see the Halting problem.
Now, about the possible approximation. Let's say you have a plugin that tests your program with a small dataset (e.g. N = 1000) and a medium dataset (e.g. N = 10000). If your program runs 10 times longer on the medium dataset than on the small one, should the plugin conclude that your program is O(N)? Not quite. What about the best/average/worst case? For example, quicksort's worst case is O(N^2), but it is generally considered an O(N*logN) sorting algorithm; if the plugin happens to hit the special input, it will give a wrong result. What about constants? A program whose running time is O(N + k*logN) is considered O(N), but if the constant k is large enough compared to N, the plugin would not be able to reach this conclusion, and so on.
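To make the empirical idea concrete, here is a rough sketch (my own, not an actual plugin) that times the same code at two input sizes and compares the ratio; it suffers from exactly the caveats above, since it only sees the inputs it happens to run and constants blur the picture:

import java.util.Arrays;
import java.util.Random;

public class GrowthProbe {
    // Time sorting an array of n random ints (nanoseconds).
    static long timeSort(int n) {
        int[] data = new Random(42).ints(n).toArray();
        long start = System.nanoTime();
        Arrays.sort(data);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long small = timeSort(1_000);
        long medium = timeSort(10_000);
        // For an O(N*logN) sort, the ratio for a 10x larger input is a bit above 10.
        System.out.printf("small: %d ns, medium: %d ns, ratio: %.1f%n",
                small, medium, (double) medium / small);
    }
}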
Regarding your comment:
If anybody has tried Codility challenges: they evaluate your solution against performance using big-O notation, and I'm sure they are not calculating it manually; that's why I'm asking this question.
Authors of Codility challenges have solutions to their problems with a well-known time complexity (they analyzed it manually). When they measure the running time of your solution for various inputs and compare it with the running time of their solutions for the same inputs, they can automatically determine the time complexity of your program (of course, taking into account the programming language you have chosen and certain deviations in the measured time).

Best data structure to store 10,000 records in java [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I was asked this in an interview. I thought the question was too generic to call for one particular data structure.
Still, if we narrow the question down to the following criteria, what would be the best data structure to use:
If insertion speed should be fastest?
If search of particular data is to be the fastest?
The HashSet provides both O(1) insertion and O(1) search, which is hard to top from a theoretical point of view.
In practice, for a size of 10,000 references, a sorted ArrayList might still outperform HashSet, even though insertion is O(n) and search is O(log(n)). Why? Because it stores the data (the references, at least) in a contiguous memory range and can therefore take advantage of hardware memory caching.
The issue with big-O notation is that it completely ignores the time required for a single operation. That's fine for asymptotic considerations and very large data sets, but for a size of 10,000 it might be misleading.
I haven't tried it, though. And I bet your interviewer hasn't either :).
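If you did want to try it, a deliberately naive micro-benchmark sketch could look like the following (class names are made up, and the usual JIT warm-up and measurement caveats apply):

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class LookupComparison {
    public static void main(String[] args) {
        // 10,000 random Integers as the stored "records".
        Random rnd = new Random(1);
        List<Integer> values = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            values.add(rnd.nextInt(1_000_000));
        }

        Set<Integer> hashSet = new HashSet<>(values);    // O(1) expected lookup
        List<Integer> sorted = new ArrayList<>(values);
        Collections.sort(sorted);                        // enables O(log n) binary search

        long t0 = System.nanoTime();
        int hits1 = 0;
        for (int i = 0; i < 1_000_000; i++) {
            if (hashSet.contains(i)) hits1++;
        }
        long t1 = System.nanoTime();
        int hits2 = 0;
        for (int i = 0; i < 1_000_000; i++) {
            if (Collections.binarySearch(sorted, i) >= 0) hits2++;
        }
        long t2 = System.nanoTime();

        System.out.printf("HashSet: %d ms (%d hits), sorted ArrayList: %d ms (%d hits)%n",
                (t1 - t0) / 1_000_000, hits1, (t2 - t1) / 1_000_000, hits2);
    }
}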

Does it cost a huge amount of time because the numbers are huge? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I have a program that involves a bunch of huge numbers (I have to store them in a big-number type). The time complexity is unexpectedly huge too. So I was wondering: are these two factors connected? Any comments are greatly appreciated.
Do they have a connection to each other? Probably not.
You can have a high-complexity algorithm working on small numbers (such as calculating the set of all subsets of ten thousand numbers, all in the range 0..30000) and you can have very efficient algorithms working on large numbers (such as simply adding up ten thousand BigInteger variables).
However, they'll probably both have a compounding effect on the time it takes your program to run. Large numbers will add a bit, and a high-complexity algorithm will add a bit more. I say 'add', but the effect is likely to be multiplicative, which is worse - for example, an inefficient algorithm may make your code take 30% longer, and the use of BigInteger may add another 30% on top of that, giving you a 69% overall hit:
t * 1.3 * 1.3 = 1.69t
Sorry for the general answer but, without more specifics in the question, a general answer is the best you'll probably get. In any case, I believe (or at least hope) it answers the question you asked.
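As a small illustration of the "large numbers add a bit" point, here is a sketch (my own) that runs the same O(n) summation with primitive longs and with BigInteger; the algorithm is identical, only the per-operation cost differs:

import java.math.BigInteger;

public class BigNumberCost {
    public static void main(String[] args) {
        int n = 10_000;

        long t0 = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;                                    // cheap primitive addition
        }
        long t1 = System.nanoTime();

        BigInteger bigSum = BigInteger.ZERO;
        for (int i = 0; i < n; i++) {
            bigSum = bigSum.add(BigInteger.valueOf(i));  // object allocation on every step
        }
        long t2 = System.nanoTime();

        System.out.printf("long: %d ns, BigInteger: %d ns (sums %d / %s)%n",
                t1 - t0, t2 - t1, sum, bigSum);
    }
}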

Complexity and big-O notation of algorithms [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I need some help understanding how this works! How do I go about calculating the complexity of 'computing the first half of an array of n items' or 'displaying the third element in a linked list'? Can someone explain how this works? These are just examples, so feel free to use your own if it helps. Thank you.
You should look at how the processing time of the algorithm grows as the size of the input grows. I'll take your two concrete examples:
Computing the first half of an array of n items
We need to process n/2 items. If n doubles, then the processing time should also double. Consequently, this is a linear operation (i.e. O(n)).
displaying the third element in a linked list
We always want to display the third element, so the size of the list doesn't actually matter. If it doubles, we don't care; the processing time is not affected. Consequently, this is a constant-time operation (i.e. O(1)); it doesn't depend on the size of the input.
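A small sketch of both examples (the method and class names are my own):

import java.util.LinkedList;
import java.util.List;

public class ComplexityExamples {
    // O(n): touches n/2 elements, so the work grows linearly with the array size.
    static long sumFirstHalf(int[] items) {
        long sum = 0;
        for (int i = 0; i < items.length / 2; i++) {
            sum += items[i];
        }
        return sum;
    }

    // O(1): always follows at most three links, no matter how long the list is.
    static <T> T thirdElement(List<T> list) {
        return list.get(2);
    }

    public static void main(String[] args) {
        int[] array = new int[1_000_000];
        List<String> linked = new LinkedList<>(List.of("a", "b", "c", "d"));
        System.out.println(sumFirstHalf(array));
        System.out.println(thirdElement(linked));
    }
}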

Clustering: Finding an Average Reading [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am looking for an algorithm in the area of clustering or machine learning that will facilitate creating a typical data reading for a group of readings. The issue is that it must handle time-series data, so some traditional techniques (such as k-means) are not as useful.
Can anyone recommend places to look, or particular algorithms, that would produce a typical reading and be relatively simple to implement (in Java), manipulate and understand?
As an idea: try to convert all data types into time; then you will have vectors of the same type (time), and any clustering strategy will work fine.
By converting to time I mean that any measurement or data type we know of has time in its nature. Time is not a fourth dimension, as many think! Time is actually dimension zero. Even a point with no physical dimensions, which may not exist in space, exists in time.
Distance, weight, temperature, pressure, direction, speed... all the measurements we make can be converted into certain functions of time.
I have tried this approach on several projects and it paid off with really nice solutions.
Hope this might help you here as well.
For most machine learning problems in Java, Weka usually works pretty well.
See, for example: http://facweb.cs.depaul.edu/mobasher/classes/ect584/weka/k-means.html
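For what it's worth, a minimal Weka sketch along those lines might look like the following; it assumes the Weka jar is on the classpath, and the file name readings.arff and the cluster count are made up for illustration:

import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TypicalReading {
    public static void main(String[] args) throws Exception {
        // Load numeric features for each reading from an ARFF file (hypothetical file name).
        Instances data = new DataSource("readings.arff").getDataSet();

        SimpleKMeans kMeans = new SimpleKMeans();
        kMeans.setNumClusters(3);          // choose a cluster count that fits your data
        kMeans.buildClusterer(data);

        // Each centroid can serve as the "typical reading" for its cluster.
        System.out.println(kMeans.getClusterCentroids());
    }
}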
