What is Complexity of a program and how to calculate it? [closed] - java

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I am a Java developer, and I want to learn about the complexity of a program and how to calculate it. (I am a beginner, so please answer in simple terms that I can understand.
Thanks in advance!)

Generally, complexity is the number of operations you have to perform to achieve your goal, expressed as a function of the input size.
Complexity is written in big-O notation as O(f(n)), where n is the size of the input. For example, the complexity of an assignment is O(1).
The complexity of accessing an array element is O(1) too. Iterating over all elements of an array, collection, map, etc. is O(n), where n is the number of elements in the collection. For example, if you want to find the sum of all elements of an n-element array, you have to perform an operation with complexity O(n).
Please note that the complexity of searching an array for a specific element is also O(n), even though the average number of operations is about n/2: the element may be at the first, the last, or any other position, and big-O describes growth up to a constant factor, so n/2 is still O(n).
The complexity of sorting depends on the algorithm. Simple algorithms sort arrays in O(n^2), while better algorithms like quicksort run in O(n*log(n)) on average.
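As a concrete illustration of the O(1) and O(n) cases above, here is a small Java sketch (the class and method names are made up for the example):

```java
public class ComplexityDemo {
    // O(n): touches every element exactly once
    static int sum(int[] a) {
        int total = 0;
        for (int x : a) total += x;
        return total;
    }

    // O(n) worst case: the target may sit at the last position, or be absent
    static int linearSearch(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i;
        }
        return -1;
    }
}
```

Accessing `a[0]` is a single O(1) step, while both methods above must visit up to n elements, which is what makes them O(n).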

There are two types of complexity:
1. Space complexity
2. Time complexity
The time required for the execution of a program (or a loop or statement) is referred to as time complexity. The space or memory required by the program is referred to as space complexity. Both complexities are expressed in big-O notation.
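To make the time/space distinction concrete, here is a hedged Java sketch (names are my own): both methods take O(n) time, but one uses O(1) extra space and the other O(n).

```java
public class SpaceDemo {
    // O(n) time, O(1) extra space: reverses in place, no new array
    static void reverseInPlace(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    // O(n) time, O(n) extra space: allocates a second array of size n
    static int[] reversedCopy(int[] a) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            out[a.length - 1 - i] = a[i];
        }
        return out;
    }
}
```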

Related

optimal maximum difference within subarrays [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 4 years ago.
I have been given a question that I have been stuck on for quite a while.
Question:
Given a sorted array of N integers, divide the array into a maximum of R adjacent and non-overlapping subarrays with a maximum of M elements each, so that the difference between the largest and smallest value within each subarray is minimized. The output should be the maximum difference within any of the subarrays after minimizing the difference in each of them.
Example:
N = 7
R = 4
M = 3
Original Array: [1,2,3,3,3,5,6]
Optimal subarrays (one possible case): [1], [2], [3,3,3], [5,6]
Correct Output: 1
I was thinking of testing every possible value for the minimum difference, checking each candidate in O(N) time, but this would lead to a runtime worse than O(N log N).
What would be the most time and memory efficient solution to solve this problem?
I suggest using bisection to find the largest difference for which it is possible to divide the array in the desired way.
To test if the division is possible, greedily assign elements to subarrays starting from the left while the constraints are met.
Only start a new subarray when forced (either by the difference getting too large, or by reaching the maximum number of elements allowed in a subarray).
I believe this greedy assignment should always find a solution if one exists for a particular value of difference.
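A minimal Java sketch of this bisection-plus-greedy idea (class and method names are my own; it assumes a valid partition exists, i.e. R*M >= N):

```java
public class MinMaxDifference {
    // Greedy feasibility check: with allowed difference d, can the sorted
    // array be covered by at most r subarrays of at most m elements each?
    static boolean feasible(int[] a, int r, int m, long d) {
        int groups = 0;
        int i = 0;
        while (i < a.length) {
            groups++;
            int start = i;
            // extend the current subarray while both constraints hold
            while (i < a.length && i - start < m && (long) a[i] - a[start] <= d) {
                i++;
            }
        }
        return groups <= r;
    }

    // Bisection on the answer: smallest d for which the greedy check passes
    static long solve(int[] a, int r, int m) {
        long lo = 0, hi = (long) a[a.length - 1] - a[0];
        while (lo < hi) {
            long mid = (lo + hi) / 2;
            if (feasible(a, r, m, mid)) hi = mid; else lo = mid + 1;
        }
        return lo;
    }
}
```

Each feasibility check is O(N) and the bisection adds a logarithmic factor over the value range, so this stays well under the O(N^2) brute force.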

Is it possible to insert a number into a sorted array, time: log(n) [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I have a sorted array of integers (of size n) and would like to insert a new element into the array. Is it possible to do so in O(log(n))? I need both insertion and find to have a computational complexity of O(log(n)).
Right now the only idea I have is to do binary search to find the desired index for insertion - this would take O(log(n)), but then I would have to create a new array and copy all the elements over, which would take O(n).
EDIT:
It was solved by using an AVL tree instead; that way, adding a new element takes O(log(n)) and finding an element takes O(log(n)).
"Is it possible to do so in log(n)?" - in short, no. From my recollection and experience, inserting into an arbitrary position in an array is always O(n), almost by definition, since the elements after the insertion point must be shifted. If you want faster performance, use something like the TreeMap class.
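For completeness, a small sketch using TreeSet (a red-black-tree-backed relative of TreeMap; the helper method is my own). Both add() and contains() are O(log n). Note that a TreeSet discards duplicates; use a TreeMap of value to count if duplicates must be kept.

```java
import java.util.TreeSet;

public class SortedInsertDemo {
    // Builds a sorted set from the given values; each add() is O(log n)
    static TreeSet<Integer> buildSorted(int[] values) {
        TreeSet<Integer> set = new TreeSet<>();
        for (int v : values) set.add(v);
        return set;
    }
}
```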

Time complexity of LinkedList, HashMap, TreeSet? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am a student of CS, learning about Java Collections.
At my school we have received a chart with the time complexity of different operations on data structures. There are some things in the chart that don't make sense to me.
Linked List
It says that time complexity for inserting at the end and finding the number of elements is implementation dependent. Why is that? Why isn't it O(n)?
HashMap
For the HashMap, it says that the time complexity for finding the number of elements, or for determining whether the HashMap is empty, is also implementation dependent.
TreeMap
The same goes for the TreeMap. According to the chart, the time complexity of the operation that determines the number of elements is implementation dependent. Which implementations do they mean?
Some implementations of a linked list keep a count of the number of elements as well as a pointer to the last element for quick tail insertion. With those fields, both operations are O(1), since each is a single field access or pointer update.
Similar reasoning applies to implementations of HashMap and TreeSet: if they keep a count of the elements alongside the data structure, then methods such as isEmpty() and size() also run in O(1).
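A minimal sketch of this idea, assuming a hand-rolled list (names are my own; java.util.LinkedList does the same internally):

```java
public class CountingLinkedList<T> {
    private static class Node<T> {
        T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head, tail;
    private int size; // maintained on every insertion

    // O(1): the tail pointer avoids walking the whole list
    public void addLast(T value) {
        Node<T> node = new Node<>(value);
        if (tail == null) {
            head = tail = node;
        } else {
            tail.next = node;
            tail = node;
        }
        size++;
    }

    // O(1): the counter is kept up to date, no traversal needed
    public int size() { return size; }

    public boolean isEmpty() { return size == 0; }
}
```

Without the `tail` and `size` fields, both addLast() and size() would have to traverse the list, making them O(n) - which is why the chart says "implementation dependent".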

Sorting algorithm implementation [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
First time poster, sorry if I break any etiquette.
I'm studying for my Data structures and algorithms midterm and I have a practice problem I don't really understand.
Suppose you are given a sorted list of N elements
followed by f(N) randomly-ordered elements.
How would you sort the entire list if f(N) = O(1)?
How would you sort the entire list if
f(N) = O(log N)?
We have gone over lots of sorting algorithms, but the exam focuses on insertion sort, quicksort and merge sort. I don't really understand what it means by f(N) = O(log N). Is that using big-O notation to say that the number of random elements at the end would be either a constant or log(N), in each respective case?
Thanks for any insight.
Edit: Fixed my obvious mistake in big-O notation; not sure where to go from here, though.
In the first example you are given a problem where a constant number of non-ordered elements follows the sorted sequence. In essence, this means you may implement an algorithm that inserts a single non-ordered element into the sorted part and then repeat it a constant number of times; the overall complexity of inserting all f(N) = O(1) elements stays the same as for one insertion. One of the algorithms you mention is best at performing this operation.
In the second case, the number of elements to be inserted is on the order of log(n). Here you cannot ignore this number, as it depends on the input size. Thus you need to think of a smarter way to merge them with the remaining part that is already sorted. (TIP: maybe the operation you need to perform will help you choose an algorithm?)
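One way to realize the second case in Java (this spells out one possible answer to the hint, and the names are my own): sort the small tail separately, then do a single merge pass, as merge sort does.

```java
import java.util.Arrays;

public class TailSort {
    // The first `sortedLen` elements of `a` are already sorted; the
    // remaining k = a.length - sortedLen elements are in random order.
    // Sort the tail in O(k log k), then merge prefix and tail in O(n).
    // For k = O(log N) the total cost is dominated by the O(N) merge.
    static int[] sortTailAndMerge(int[] a, int sortedLen) {
        int[] tail = Arrays.copyOfRange(a, sortedLen, a.length);
        Arrays.sort(tail); // O(k log k) on the small tail only

        int[] out = new int[a.length];
        int i = 0, j = 0, k = 0;
        while (i < sortedLen && j < tail.length) {
            out[k++] = (a[i] <= tail[j]) ? a[i++] : tail[j++];
        }
        while (i < sortedLen) out[k++] = a[i++];
        while (j < tail.length) out[k++] = tail[j++];
        return out;
    }
}
```

For the f(N) = O(1) case, repeated single insertions (i.e. insertion sort on a nearly sorted array) already give O(N) overall.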

Sort absolute values of array in less than O(n^2) JAVA [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Given a sorted array of both positive and negative numbers (example: -9, -7, -4, -0.9, 1, 2, 3, 8), I need to output the elements in sorted order of their absolute values in less than O(N^2), without using any built-in function.
Does anyone know any acceptable solution for this simple problem?
I was thinking of modifying the quicksort algorithm to make it check the abs values for elements.
I would binary-search for 0, conceptually split the list into two parts, and merge the two lists into a single new one, walking the negative part in reverse (so its absolute values ascend) and the positive part forward.
O(log n) for the binary search, O(n) for the merge of the two sorted lists.
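A Java sketch of this split-and-merge approach (class and method names are my own; it assumes the input is sorted ascending):

```java
public class AbsSort {
    // `a` is sorted ascending and may contain negatives. Walk the negative
    // part backwards (so its absolute values ascend) and the non-negative
    // part forwards, merging by absolute value in O(n).
    static double[] sortByAbs(double[] a) {
        // binary search for the first non-negative element: O(log n)
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (a[mid] < 0) lo = mid + 1; else hi = mid;
        }

        double[] out = new double[a.length];
        int neg = lo - 1, pos = lo, k = 0;
        while (neg >= 0 && pos < a.length) {
            out[k++] = (-a[neg] <= a[pos]) ? a[neg--] : a[pos++];
        }
        while (neg >= 0) out[k++] = a[neg--];
        while (pos < a.length) out[k++] = a[pos++];
        return out;
    }
}
```

On the example input this yields -0.9, 1, 2, 3, -4, -7, 8, -9: each element keeps its sign but the sequence is ordered by absolute value.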
Pick any two-way comparison sorting algorithm of your pleasure with a runtime bound below O(N^2). Quicksort is a valid choice.
When doing comparisons in that sorting algorithm, instead of comparing the values a and b, compare abs(a) and abs(b). Just make sure that you don't replace a and b with abs(a) and abs(b); only use the absolute values when doing comparisons.
Just have two pointers, one at each end of the sorted array, compare absolute values, and fill a result array from the back with whichever is larger:
start = 0
end = array.length - 1
out = array.length - 1
while start <= end
    if abs(array[start]) >= abs(array[end])
        result[out--] = array[start++]
    else
        result[out--] = array[end--]
I might be missing some pieces, but that's the idea. An O(n) solution.
