I have a program that involves a bunch of huge numbers (I have to store them in a big-number type). The time complexity is unexpectedly huge too. So I was wondering: do these two factors have a connection? Any comments are greatly appreciated.
Do they have a connection to each other? Probably not.
You can have a high-complexity algorithm working on small numbers (such as computing the set of all subsets of ten thousand numbers, all in the range 0..30000), and you can have very efficient algorithms working on large numbers (such as simply adding up ten thousand BigInteger variables).
However, they'll both probably have a cascading effect on the time it takes your program to run. Large numbers will add a bit, and a high-complexity algorithm will add a bit more. I say 'add', but the effect is likely to be multiplicative, which is much worse - for example, an inefficient algorithm may make your code take 30% longer, and the use of BigInteger may add 30% on top of that, giving you a 69% overall hit:
t * 1.3 * 1.3 = 1.69t
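As a rough illustration of the two independent costs, here is a minimal sketch (the class and the literal values are just for demonstration; real timings depend heavily on JIT warm-up and hardware). It performs the same O(n) summation once with BigInteger and once with a primitive long:

import java.math.BigInteger;

public class CostDemo {
    public static void main(String[] args) {
        int n = 10_000;

        // Low-complexity work on big numbers: O(n) additions of BigInteger.
        BigInteger bigSum = BigInteger.ZERO;
        BigInteger huge = new BigInteger("123456789012345678901234567890");
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            bigSum = bigSum.add(huge);       // each add allocates and is costlier than a long add
        }
        long bigTime = System.nanoTime() - t0;

        // The same O(n) work on primitive longs, for comparison.
        long smallSum = 0;
        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            smallSum += 1_234_567_890L;      // roughly a single machine instruction
        }
        long longTime = System.nanoTime() - t0;

        System.out.println("BigInteger loop: " + bigTime + " ns, long loop: " + longTime + " ns");
    }
}

The point is only that the per-operation cost of big numbers multiplies with however many operations your algorithm's complexity demands.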
Sorry for the general answer but, without more specifics in the question, a general answer is the best you'll probably get. In any case, I believe (or at least hope) it answers the question you asked.
I need a hint on an interview question that I came across. I tried to find a solution, but I need advice from the experts over here. What different strategies would you employ if you came across this situation? The question and my thoughts are as follows:
Q. You want to store a huge number of objects in a list in Java. The number of objects is very large and gradually increasing, but you have very limited memory available. How would you do that?
A. I answered by saying that, once the number of elements in the list gets over a certain threshold, I would dump them to a file. I would then build a cache-like data structure that would hold the most frequently used or most recently added elements. I gave an analogy to the page swapping employed by the OS.
Q. But this would involve disk access, and it would be slower and affect the execution.
I did not know the solution for this and could not think properly during the interview. I tried to answer as:
A. In this case, I would think of horizontally scaling the system or adding more RAM.
Immediately after I answered this, my telephone interview ended. I felt that the interviewer was not happy with the answer. But then, what should the answer have been?
Moreover, I am not only curious about the answer; I would like to learn about the different ways in which this can be handled.
I am not sure, but this sounds somewhat like the Flyweight pattern. It is the same pattern that is used for the String pool, and an efficient implementation of it is a must. Apart from that, we need to focus on database-related tasks in order to persist the data once the threshold is surpassed. Another technique is to serialize it, but as you said, the interviewer was not satisfied and wanted some other explanation.
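For illustration, here is a minimal Flyweight sketch (the class names Glyph and GlyphStyle are hypothetical). Many objects share one interned, immutable state object instead of each carrying its own copy, which is the same idea the String pool uses for interned strings:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Intrinsic, shared state - immutable, so it can safely be shared.
final class GlyphStyle {
    final String font;
    final int size;
    GlyphStyle(String font, int size) { this.font = font; this.size = size; }
}

final class GlyphStyleFactory {
    private static final Map<String, GlyphStyle> POOL = new ConcurrentHashMap<>();

    // Returns a shared instance, creating it only on the first request.
    static GlyphStyle of(String font, int size) {
        return POOL.computeIfAbsent(font + "#" + size, k -> new GlyphStyle(font, size));
    }
}

class Glyph {
    final char ch;              // extrinsic, per-object state
    final GlyphStyle style;     // intrinsic, shared state from the pool
    Glyph(char ch, GlyphStyle style) { this.ch = ch; this.style = style; }
}

With this layout, millions of Glyph objects reference only a handful of GlyphStyle instances, which can dramatically reduce the memory footprint when the objects have a lot of repeated state.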
Is there a programmatic way, or an Eclipse plugin, to calculate big-O notation for a Java method?
No, there isn't such a plugin, and if there were, it would be a mere approximation. Even determining whether a program will ever finish running is undecidable - see the Halting problem.
Now, about the possible approximation. Let's say you have a plugin that tests your program with a small dataset (e.g. N = 1000) and a medium dataset (e.g. N = 10000). If your program runs 10 times longer with the medium dataset than with the small one, should the plugin conclude that your program is O(N)? Not quite. What about the best/average/worst case? For example, quicksort's worst case is O(N^2), but it is generally considered an O(N*logN) sorting algorithm. So if the plugin happens to hit a special input, it will give a wrong result. What about constants? A program whose running time is O(N + k*logN) is considered O(N), but if the constant k is large enough compared to N, the plugin would not be able to reach this conclusion, and so on.
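To make the approximation concrete, here is a hedged sketch of what such a tool could attempt: time the method at two input sizes and fit a growth exponent. The method work(n) is just a placeholder for whatever you are measuring, and all the caveats above (warm-up, special cases, constants) still apply:

public class ComplexityGuess {
    // Placeholder for the method whose growth rate we want to estimate.
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                acc += i ^ j;               // deliberately O(n^2) work
        return acc;
    }

    static long time(int n) {
        long t0 = System.nanoTime();
        work(n);
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        int small = 1_000, medium = 10_000;
        time(small);                         // crude JIT warm-up
        double ratio = (double) time(medium) / time(small);
        // If T(n) ~ n^k, then ratio ~ (medium/small)^k, so k ~ log(ratio) / log(medium/small).
        double k = Math.log(ratio) / Math.log((double) medium / small);
        System.out.printf("Estimated exponent: %.2f (1 ~ linear, 2 ~ quadratic)%n", k);
    }
}

This only characterizes the behaviour on the particular inputs you feed it, which is exactly why it can never replace a manual analysis.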
Regarding your comment:
If anybody has tried Codility challenges: they evaluate your solution against performance using big O notation, and I'm sure that they are not calculating it manually; that's why I'm asking this question.
The authors of Codility challenges have solutions to their problems with a well-known time complexity (they analyzed it manually). When they measure the running time of your solution for various inputs and compare it with the running times of their solutions for the same inputs, they can automatically determine the time complexity of your program (of course, taking into account the programming language you have chosen and certain deviations in the measured time).
I am looking for an algorithm in the area of clustering or machine learning that will facilitate creating a typical data reading for a group of readings. The issue is that it must handle time-series data; thus some traditional techniques (e.g. k-means) are not as useful.
Can anyone recommend places to look, or particular algorithms, that would provide a typical reading and be relatively simple to implement (in Java), manipulate, and understand?
As an idea: try converting all data types into time. Then you will have vectors of the same type (time), and any clustering strategy will work fine.
By converting to time I actually mean that any measurement or data type we know about has time in its nature. Time is not a 4th dimension, as many think! Time is actually the 0th dimension. Even a point with no physical dimensions, which may not exist in space, exists in time.
Distance, weight, temperature, pressure, direction, speed... all the measurements we take can be converted into certain functions of time.
I have tried this approach on several projects and it paid off with really nice solutions.
Hope this helps you here as well.
For most machine learning problems in Java, Weka usually works pretty well.
See, for example: http://facweb.cs.depaul.edu/mobasher/classes/ect584/weka/k-means.html
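As a minimal sketch of that route (assuming your readings have already been flattened into fixed-length feature vectors in an ARFF file; readings.arff is a hypothetical file name, and raw time series would first need feature extraction or a time-series distance such as DTW):

import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TypicalReading {
    public static void main(String[] args) throws Exception {
        // Hypothetical file: each row is one reading, flattened to fixed-length attributes.
        Instances data = new DataSource("readings.arff").getDataSet();

        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(3);           // number of "typical" readings you want
        kmeans.buildClusterer(data);

        // Each cluster centroid can serve as the "typical reading" for its group.
        Instances centroids = kmeans.getClusterCentroids();
        for (int i = 0; i < centroids.numInstances(); i++) {
            System.out.println("Typical reading " + i + ": " + centroids.instance(i));
        }
    }
}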
I have a program that runs extremely slowly when using large arrays.
I use an int[3000][3000], a String[27000], and a String[5000] array in the final code. This code takes forever to run. Could this be because the arrays take too much space?
It depends a good deal on the complexity of the algorithms with which you are manipulating the data. That determines how the running time grows as you start throwing more data at the program (by making the arrays larger and larger). If you are just iterating through the data, it would be on the order of O(n), meaning the time is proportional to the amount of data given; so if you doubled the length of your arrays, your program would take twice as long to execute. If you were, say, comparing every element with every other element, it would be on the order of O(n^2), so if you doubled the length of your arrays, it would take around four times longer to process them.
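As a sketch of the difference on data roughly the size in the question (the array contents are placeholders), note how a single pass is harmless while an all-pairs comparison on nine million elements would effectively never finish:

public class GrowthDemo {
    public static void main(String[] args) {
        int[] data = new int[3000 * 3000];   // about the amount of data in the question

        // O(n): one pass over the data - doubling n doubles the time.
        long sum = 0;
        for (int v : data) sum += v;
        System.out.println("Linear pass done, sum = " + sum);

        // O(n^2): comparing every element with every other - doubling n roughly
        // quadruples the time. With n = 9,000,000 that is on the order of 4 * 10^13
        // comparisons, so this loop is left commented out on purpose.
        // for (int i = 0; i < data.length; i++)
        //     for (int j = i + 1; j < data.length; j++)
        //         if (data[i] == data[j]) { /* ... */ }
    }
}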
You would have to post your program for us to have any idea whether your algorithm is just too complex for your computer to handle.
see also: Big O notation
Many factors:
Processor speed.
Memory allocation: 3000 * 3000 * 4 bytes = 36,000,000 bytes, roughly 34 MB, for the int array alone.
Consider using a List instead of huge arrays.
There's a question posted about the Fibonacci series, which I am quite familiar with. But there are multiple answers and linked questions for it. As I was digging through it with some interest, there's a solution which is linked to here.
This algorithm solves the problem in O(log(n)), which is quite impressive. But I couldn't understand the logic and the so-called matrix exponentiation [I looked at the wiki, but was unable to relate it to this].
So could anyone kindly explain exactly how this is achieved, with more detail and a better explanation [if you can explain with code, Java would be preferred and much appreciated].
Thanks :)
What you need to understand is the algorithm, not the implementation.
The first thing you need to understand is that the squaring trick alone will not give you all the Fibonacci numbers directly, only those with an n that is a power of 2; for arbitrary n you combine the squared powers according to the binary representation of n.
The second thing is that a multiplication of constant-sized matrices of course takes constant (O(1)) time.
The trick now is to correctly note that the n-th Fibonacci number can be read off from the n-fold product of the matrix described in your link, which I will call M.
You get the logarithmic complexity by "reordering" the matrix operations, for example from M*(M*(M*M)) to (M*M)*(M*M). With each matrix squaring you jump from M^n to M^(2n), instead of only from M^n to M^(n+1).
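Here is a sketch in Java of that idea (not necessarily the exact code from the linked answer): M = [[1,1],[1,0]], and M^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so F(n) can be read from the off-diagonal entry. The exponentiation walks the binary representation of n, squaring at each step, which gives O(log n) matrix multiplications:

import java.math.BigInteger;

public class FibMatrix {
    // Multiply two 2x2 matrices - constant work per call.
    static BigInteger[][] mul(BigInteger[][] a, BigInteger[][] b) {
        BigInteger[][] c = new BigInteger[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                c[i][j] = a[i][0].multiply(b[0][j]).add(a[i][1].multiply(b[1][j]));
        return c;
    }

    // Compute m^n by repeated squaring, following the binary representation of n.
    static BigInteger[][] pow(BigInteger[][] m, long n) {
        BigInteger[][] result = {{BigInteger.ONE, BigInteger.ZERO},
                                 {BigInteger.ZERO, BigInteger.ONE}};   // identity matrix
        while (n > 0) {
            if ((n & 1) == 1) result = mul(result, m);   // include this power of two
            m = mul(m, m);                               // square: M, M^2, M^4, M^8, ...
            n >>= 1;
        }
        return result;
    }

    // F(n) is the [0][1] entry of M^n.
    static BigInteger fib(long n) {
        if (n == 0) return BigInteger.ZERO;
        BigInteger[][] m = {{BigInteger.ONE, BigInteger.ONE},
                            {BigInteger.ONE, BigInteger.ZERO}};
        return pow(m, n)[0][1];
    }

    public static void main(String[] args) {
        System.out.println(fib(10));   // 55
        System.out.println(fib(90));   // 2880067194370816120
    }
}

BigInteger is used so that large indices do not overflow a long; the log-n behaviour comes entirely from the squaring loop in pow.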