I need some help understanding how this works. How do I go about calculating the complexity of 'computing the first half of an array of n items' or 'displaying the third element in a linked list'? These are just examples, so feel free to use your own if it helps the explanation. Thank you.
You should look at how the processing time of the algorithm grows as the size of the input grows. I'll take your two concrete examples:
Computing the first half of an array of n items
We need to process n/2 items. If n doubles, then the processing time should also double. Consequently, this is a linear operation (i.e. O(n)).
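As a rough sketch of that reasoning in Java (the summing and the method name are just placeholders for whatever 'computing' means here), the loop below touches n/2 elements, so its running time grows linearly with n:

```java
// Processes n/2 items: doubling n doubles the work, so this is O(n).
static int sumFirstHalf(int[] items) {
    int sum = 0;
    for (int i = 0; i < items.length / 2; i++) {
        sum += items[i];
    }
    return sum;
}
```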
displaying the third element in a linked list
We always want to display the third element, so the size of the list doesn't actually matter. If it doubles, we don't care; the processing time is not affected. Consequently, this is a constant-time operation (i.e. O(1)); it doesn't depend on the size of the input.
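A similar sketch for the linked-list case (the method name is illustrative, and it assumes the list has at least three elements): no matter how long the list gets, we take exactly three iterator steps.

```java
import java.util.Iterator;
import java.util.LinkedList;

// Always performs a fixed number of steps, regardless of list size: O(1).
static <T> T thirdElement(LinkedList<T> list) {
    Iterator<T> it = list.iterator();
    it.next();        // first element
    it.next();        // second element
    return it.next(); // third element -- three hops however long the list is
}
```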
I was asked this in an interview. I thought the question was too generic to pin down to a particular data structure.
Still, if we narrow the question down to the following criteria, what would be the best data structure to use:
If insertion speed should be the fastest?
If searching for a particular piece of data should be the fastest?
The HashSet provides both O(1) insertion and O(1) search on average, which is hard to top from a theoretical point of view.
In practice, for a size of 10,000 references, a sorted ArrayList might still outperform a HashSet, although insertion is O(n) and search is O(log(n)). Why? Because it stores the data (the references, at least) in a contiguous memory range and can therefore take advantage of hardware memory caching.
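If it helps to see the two approaches side by side, here is a minimal sketch (the class name, the values, and the 10,000-element size are just illustrative; it is not a benchmark):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of HashSet insertion/search versus a sorted ArrayList with binary search.
public class LookupComparison {
    public static void main(String[] args) {
        Set<Integer> hashSet = new HashSet<>();
        List<Integer> sortedList = new ArrayList<>();

        for (int value = 0; value < 10_000; value++) {
            hashSet.add(value);                              // O(1) expected insertion

            int pos = Collections.binarySearch(sortedList, value);
            int insertAt = (pos < 0) ? -pos - 1 : pos;
            sortedList.add(insertAt, value);                 // O(n): shifts later elements
        }

        int needle = 7_342;
        boolean inSet = hashSet.contains(needle);                           // O(1) expected
        boolean inList = Collections.binarySearch(sortedList, needle) >= 0; // O(log n), cache-friendly
        System.out.println(inSet + " " + inList);
    }
}
```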
The issue with big-O notation is that it completely ignores the time required for a single operation. That's fine for asymptotic considerations and very large data sets, but for a size of 10,000 it might be misleading.
I haven't tried it, though. And I bet your interviewer hasn't either :).
I've been asked to write a program that finds the largest area of equal neighbour elements in a rectangular matrix and prints its size. I tried to construct a 2D array with some numbers, but I think I might need to switch to a tree or some other structure to solve this problem. Could somebody suggest a possible way of solving it?
For example:
"Hint: use the algorithm Depth-first search or Breadth-first search."
Sounds like a standard maze-search problem. I suggest you use recursion to visit all the elements you haven't been to before that have the same number as the one you started from. You can either update the matrix as you go or create a copy to keep track of the cells you have visited. So you don't need a tree, or even any additional complex data structure.
use the algorithm Depth-first search or Breadth-first search
These are two standard search strategies; depth-first search is naturally recursive, while breadth-first search is usually written with a queue. You could implement both and see how they behave.
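Here is a minimal sketch of the recursive (depth-first) version of that idea; the grid contents, method names, and the boolean visited array are my own assumptions for illustration:

```java
// Flood-fill each unvisited cell and keep the largest region of equal neighbours.
public class LargestEqualArea {

    static int largestArea(int[][] grid) {
        int rows = grid.length, cols = grid[0].length;
        boolean[][] visited = new boolean[rows][cols];
        int best = 0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (!visited[r][c]) {
                    best = Math.max(best, fill(grid, visited, r, c, grid[r][c]));
                }
            }
        }
        return best;
    }

    // Depth-first search: count this cell, then recurse into equal, unvisited neighbours.
    static int fill(int[][] grid, boolean[][] visited, int r, int c, int value) {
        if (r < 0 || r >= grid.length || c < 0 || c >= grid[0].length) return 0;
        if (visited[r][c] || grid[r][c] != value) return 0;
        visited[r][c] = true;
        return 1
                + fill(grid, visited, r - 1, c, value)
                + fill(grid, visited, r + 1, c, value)
                + fill(grid, visited, r, c - 1, value)
                + fill(grid, visited, r, c + 1, value);
    }

    public static void main(String[] args) {
        int[][] grid = {
                {1, 3, 2, 2},
                {3, 3, 2, 2},
                {4, 3, 1, 2}
        };
        System.out.println(largestArea(grid)); // 5 (the block of 2s)
    }
}
```

A breadth-first version would replace the recursion with an explicit queue of cells, which also avoids deep recursion on very large regions.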
I have a program that involves a bunch of huge numbers (I have to store them in a big-number type). The time complexity is unexpectedly high, too. So I was wondering: do these two factors have a connection? Any comments are greatly appreciated.
Do they have a connection to each other? Probably not.
You can have a high-complexity algorithm working on small numbers (such as calculating the set of all subsets of ten thousand numbers, all in the range 0..30000), and you can have very efficient algorithms working on large numbers (such as simply adding up ten thousand BigInteger variables).
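As a small sketch of the second case (the value and the count of ten thousand are placeholders): the operands are huge, but the algorithm is a single linear pass, so it stays cheap.

```java
import java.math.BigInteger;

// Large operands, simple algorithm: one pass over the values, O(n) additions.
public class BigSum {
    public static void main(String[] args) {
        BigInteger huge = new BigInteger("123456789012345678901234567890");
        BigInteger sum = BigInteger.ZERO;
        for (int i = 0; i < 10_000; i++) {
            sum = sum.add(huge); // each addition is cheap compared to the overall run
        }
        System.out.println(sum);
    }
}
```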
However, they'll both probably have a cascading effect on the time it takes your program to run. Large numbers will add a bit, and a high-complexity algorithm will add a bit more. I say 'add', but the effect is likely to be multiplicative, which is worse. For example, using an inefficient algorithm may make your code take 30% longer, and the use of BigInteger may add another 30% on top of that, giving you a 69% overall hit:
t * 1.3 * 1.3 = 1.69t
Sorry for the general answer but, without more specifics in the question, a general answer is the best you'll probably get. In any case, I believe (or at least hope) it answers the question you asked.
I have a program that runs extremely slowly when using large arrays of numbers.
I use an int[3000][3000], a String[27000], and a String[5000] array in the final code. The code takes forever to run. Could this be because the arrays take up too much space?
It depends a good deal on the complexity of the algorithms with which you are manipulating the data. That determines how much time the program takes as you throw more data at it (by making the arrays larger and larger). If you are just iterating through the data, the work is on the order of O(n), meaning it is proportional to the amount of data given; so if you doubled the length of your arrays, the program would take roughly twice as long to run. If you were, say, comparing every element with every other element, the work would be on the order of O(n^2), so if you doubled the length of your arrays, it would take around four times longer to process them.
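To make the two growth rates concrete, here is a minimal sketch (the method names and what they compute are just illustrative):

```java
// Two toy methods showing how work grows with the input size n.
public class GrowthRates {

    // O(n): one pass over the data; doubling n roughly doubles the time.
    static long sum(int[] data) {
        long total = 0;
        for (int value : data) {
            total += value;
        }
        return total;
    }

    // O(n^2): every element compared with every other; doubling n roughly quadruples the time.
    static int countEqualPairs(int[] data) {
        int pairs = 0;
        for (int i = 0; i < data.length; i++) {
            for (int j = i + 1; j < data.length; j++) {
                if (data[i] == data[j]) {
                    pairs++;
                }
            }
        }
        return pairs;
    }
}
```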
You would have to post your program for us to have any idea whether your algorithm is just too complex for your computer to handle.
see also: Big O notation
Many factors:
Processor speed.
Memory allocation: an int[3000][3000] alone is 3000 × 3000 × 4 bytes = 36,000,000 bytes ≈ 34 MB.
Consider using a list instead.
There's a question posted about the Fibonacci series, which I am quite familiar with. But there are multiple answers and linked questions for it. As I was digging through them with some interest, I came across a solution which is linked here.
This algorithm solves the problem in O(log(n)), which is quite impressive. But I couldn't understand the logic and the so-called matrix exponentiation [I looked at the wiki, but was unable to relate it to the solution].
So could anyone explain exactly how this is achieved, with more detail and a better explanation? [If you can explain with code, Java is preferred; that would be very helpful.]
Thanks :)
What you need to understand is the algorithm, not the implementation.
The first thing to understand is that this algorithm (plain repeated squaring) will not give you all the Fibonacci numbers directly, only those whose index n is a power of 2.
The second thing is that multiplying two constant-sized matrices of course takes constant (O(1)) time.
The trick is to note that the n-th Fibonacci number can be formed by multiplying the matrix described in your link, which I will call M, by itself n times.
You get the logarithmic complexity by "reordering" the matrix operations, for example from M*(M*(M*M)) to (M*M)*(M*M). With each matrix squaring you go from M^n to M^(2n), instead of just M^(n+1).
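Since you asked for Java, here is a minimal sketch of the idea, generalized to any n by walking the bits of the exponent (square-and-multiply). The class and method names are mine, not the linked answer's, and long overflows past F(92), so treat it as an illustration only:

```java
// O(log n) Fibonacci via fast exponentiation of the 2x2 matrix M = [[1,1],[1,0]].
// M^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so F(n) is the top-right entry of M^n.
public class MatrixFibonacci {

    static long fib(int n) {
        if (n == 0) return 0;
        long[][] result = {{1, 0}, {0, 1}};  // identity matrix
        long[][] base = {{1, 1}, {1, 0}};    // the matrix M
        int e = n;
        while (e > 0) {                      // one iteration per bit of n
            if ((e & 1) == 1) {
                result = multiply(result, base);
            }
            base = multiply(base, base);     // M -> M^2 -> M^4 -> ...
            e >>= 1;
        }
        return result[0][1];                 // F(n)
    }

    // Multiplying two fixed-size 2x2 matrices is a constant-time (O(1)) operation.
    static long[][] multiply(long[][] a, long[][] b) {
        return new long[][]{
                {a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]},
                {a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]}
        };
    }

    public static void main(String[] args) {
        for (int i = 0; i <= 10; i++) {
            System.out.print(fib(i) + " ");  // 0 1 1 2 3 5 8 13 21 34 55
        }
    }
}
```

The loop runs once per bit of n, and each multiply() call works on fixed 2x2 matrices, which is where the O(log n) bound comes from.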