I have been given a question that I have been stuck on for quite a while.
Question:
Given a sorted array of N integers, divide the array into at most R adjacent, non-overlapping subarrays of at most M elements each, so as to minimize the difference between the largest and smallest value within each subarray. The output should be the maximum such difference over any of the subarrays after this minimization.
Example:
N = 7
R = 4
M = 3
Original Array: [1,2,3,3,3,5,6]
Optimal subarrays (one possible case): [1], [2], [3,3,3], [5,6]
Correct Output: 1
I was thinking of testing every possible value for the minimum difference, checking each candidate in O(N) time, but this would lead to a runtime more costly than O(N log N).
What would be the most time and memory efficient solution to solve this problem?
I suggest using bisection (binary search on the answer) to find the smallest difference for which it is possible to divide the array in the desired way.
To test if the division is possible, greedily assign elements to subarrays starting from the left while the constraints are met.
Only start a new subarray when forced (either by the difference getting too large, or by reaching the maximum number of elements allowed in a subarray).
I believe this greedy assignment should always find a solution if one exists for a particular value of difference.
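A minimal Java sketch of this bisection-plus-greedy approach, assuming a 0-based sorted array; the method names solve and canSplit are mine, not part of the problem statement:

static long solve(long[] a, int r, int m) {
    long lo = 0, hi = a[a.length - 1] - a[0];
    while (lo < hi) {                              // bisect on the answer
        long mid = lo + (hi - lo) / 2;
        if (canSplit(a, r, m, mid)) hi = mid;      // feasible: try a smaller difference
        else lo = mid + 1;                         // infeasible: the answer is larger
    }
    return lo;
}

// Greedy feasibility check: walk left to right and start a new subarray only
// when forced, either by exceeding m elements or the allowed difference.
static boolean canSplit(long[] a, int r, int m, long maxDiff) {
    int used = 1;        // subarrays opened so far
    int start = 0;       // first index of the current subarray
    for (int i = 1; i < a.length; i++) {
        if (i - start + 1 > m || a[i] - a[start] > maxDiff) {
            used++;
            start = i;
            if (used > r) return false;
        }
    }
    return true;
}

On the example above, canSplit succeeds for a difference of 1 but fails for 0, so the bisection returns 1. Each check is O(N) and there are O(log(max - min)) checks in total.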
I am a Java developer, and I wanted to know about the complexity of a program and how it is calculated. (I am a beginner, so please answer in simple terms that I can understand. Thanks in advance!)
Generally, complexity is the number of operations you have to perform to achieve your goal.
Complexity is written as O(n), where n describes how the amount of work grows with the size of the input. For example, the complexity of an assignment is O(1).
Access to an array element is O(1) too. Iterating over all elements of an array, collection, map, etc. is O(n), where n is the number of elements in the collection. For example, if you want to find the sum of all elements of an n-element array, you have to perform an operation with complexity O(n).
Please note that the complexity of searching for a specific element in an array is also O(n), even though the average number of operations is n/2, because the element may be at the first, last or any other position and constant factors are dropped.
The complexity of sorting depends on the algorithm. Simple algorithms sort arrays with complexity O(n^2), while better algorithms like quicksort have O(n log n).
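To make the difference between O(1) and O(n) concrete, here is a small illustrative Java snippet (the array and names are just examples, not from the question):

int[] numbers = {4, 8, 15, 16, 23, 42};

int third = numbers[2];       // array element access: O(1), independent of the array's length

int sum = 0;
for (int value : numbers) {   // summing all elements: O(n), one addition per element
    sum += value;
}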
There are 2 types of complexity:
1. Space complexity
2. Time complexity
The time required for the execution of a program (or a loop, or a statement) is referred to as time complexity. The space or memory required by the program is referred to as space complexity. Both are measured in Big O notation.
Suppose we have the following collection of integers: {1,1,2}. We can arrange the order of this collection in 3 possible ways:
1,1,2
1,2,1
2,1,1
How can we calculate the number of ways we can arrange a collection of integers in general? Suppose the size of the collection is very large (10^5 in worst case scenario) but the answer is always small enough to fit into a long. Does an efficient solution to this problem exist, and if so, how could one implement it in Java?
Suppose you have n integers in total, made up of n_i copies of each distinct integer x_i.
To work out the number of arrangements simply compute
t = n!
However, as you don't care how numbers with the same value are arranged among themselves, for each value of i divide the total:
t = t / n_i!
In your case, you have 3 integers, with 1 copy of 2 and 2 copies of 1.
You compute:
t = 3! = 6
t = t/1! = 6/1 = 6
t = t/2! = 6/2 = 3
so the answer is 3.
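A small Java sketch of that computation, assuming the final answer fits in a long as the question states; countArrangements and factorial are my own names:

import java.util.HashMap;
import java.util.Map;

static long countArrangements(int[] values) {
    Map<Integer, Integer> counts = new HashMap<>();
    for (int v : values) {
        counts.merge(v, 1, Integer::sum);      // n_i: number of copies of each distinct value
    }
    long t = factorial(values.length);         // t = n!
    for (int copies : counts.values()) {
        t /= factorial(copies);                // t = t / n_i! for each distinct value
    }
    return t;
}

static long factorial(int n) {
    long result = 1;
    for (int i = 2; i <= n; i++) result *= i;
    return result;
}

Note that n! itself overflows a long for n > 20, so this direct form only works for small collections; for n up to 10^5 you would divide as you go rather than form n! first.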
How can we calculate the number of ways we can arrange a collection of integers in general?
Loop through the collection, counting the number of times each number appears in the collection.
The formula for the number of permutations is the size factorial divided by the product of the factorials of the duplicate counts. In your example of 1, 1, 2, the size is 3 and the value 1 appears twice.
3! / 2! = (3 * 2 * 1) / (2 * 1) = 3
Another, more computer friendly way of calculating 3! / 2! is to divide out the 2 factorial first, which leaves you with 3.
If the collection has no duplicate integers, then the number of permutations is size factorial.
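A hedged sketch of the "divide out the factorial first" trick mentioned above (the method name is mine): computing n! / k! as the product (k+1) * (k+2) * ... * n avoids ever forming n!:

// n! / k! without computing n! first; assumes the quotient fits in a long.
static long factorialQuotient(int n, int k) {
    long result = 1;
    for (int i = k + 1; i <= n; i++) {
        result *= i;
    }
    return result;
}

For example, factorialQuotient(3, 2) returns 3, matching 3! / 2! above; with several groups of duplicates you would divide out the largest group this way and then divide by the factorials of the remaining counts.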
First time poster, sorry if I break any etiquette.
I'm studying for my Data structures and algorithms midterm and I have a practice problem I don't really understand.
Suppose you are given a sorted list of N elements
followed by f(N) randomly-ordered elements.
How would you sort the entire list if f(N) = O(1)?
How would you sort the entire list if
f(N) = O(log N)?
We have gone over lots of sorting algorithms, but the exam focuses on insertion, quick and merge sort. I don't really understand what it means for f(N) to be O(log N). Is that using Big O notation to say that the number of random elements at the end would be either a constant or on the order of log N, in each respective case?
Thanks for any insight.
Edited: Fixed my obvious mistake in terms of Big O notation; not sure where to go from here, though.
In the first case you are given a problem where a constant number of non-ordered elements follows the sorted sequence. In essence this means you can implement an algorithm that inserts a single non-ordered element and repeat it a constant number of times; the overall complexity of inserting all f(N) = O(1) elements stays the same. One of the algorithms you mention is best suited to this operation.
In the second case the number of elements to be inserted is on the order of log(n). Here you cannot ignore that number, as it depends on the input size, so you need a smarter way to merge those elements with the part that is already sorted. (Tip: maybe the operation you need to perform will help you choose an algorithm?)
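For the f(N) = O(1) case, one concrete way to do the single-element insertion described above is the inner step of insertion sort; a rough Java sketch assuming 0-based indices (the method name is mine):

// Insert the element at position pos into the already-sorted prefix a[0..pos-1].
// One call is O(N), so repeating it for a constant number of trailing elements stays O(N).
static void insertIntoSortedPrefix(int[] a, int pos) {
    int value = a[pos];
    int i = pos - 1;
    while (i >= 0 && a[i] > value) {   // shift larger elements one slot to the right
        a[i + 1] = a[i];
        i--;
    }
    a[i + 1] = value;
}

For the f(N) = O(log N) case this per-element cost would add up to O(N log N), which is why the tip above points toward a different, merge-style operation.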
Given a sorted array of both positive and negative numbers (example: -9, -7, -4, -0.9, 1, 2, 3, 8), I need to output the elements in sorted order of their absolute values in less than O(N^2), without using any built-in function.
Does anyone know any acceptable solution for this simple problem?
I was thinking of modifying the quicksort algorithm to make it check the abs values for elements.
I would binary search for 0, conceptually split the list into 2 parts, and merge the two lists (treating each negative value as its absolute value) into a single new one by walking the negative part in reverse and the positive part forward.
O(log n) for binary search, O(n) for the merge of 2 sorted lists.
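A rough Java sketch of that idea, assuming the input is sorted ascending and the result is written to a new array (the method and variable names are mine):

static double[] sortByAbs(double[] a) {
    int n = a.length;
    // Binary search for the first non-negative position: O(log n).
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (a[mid] < 0) lo = mid + 1; else hi = mid;
    }
    int neg = lo - 1;              // walks the negatives backwards: increasing absolute value
    int pos = lo;                  // walks the non-negatives forwards: increasing absolute value
    double[] out = new double[n];
    for (int k = 0; k < n; k++) {  // standard merge of two sorted runs: O(n)
        if (neg < 0) out[k] = a[pos++];
        else if (pos >= n) out[k] = a[neg--];
        else if (-a[neg] <= a[pos]) out[k] = a[neg--];   // -a[neg] is its absolute value
        else out[k] = a[pos++];
    }
    return out;
}

On the example input this produces -0.9, 1, 2, 3, -4, -7, 8, -9; take absolute values at the end if the output should be the magnitudes themselves.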
Pick any 2-way comparison sorting algorithm of your pleasure with runtime bounds less than O(N^2). Quicksort is a valid choice.
When doing comparisons (which will show up in the 2-way comparison sorting algorithm), instead of comparing the values a and b, compare abs(a) and abs(b). Just make sure that you don't replace a and b with abs(a) and abs(b); only use the latter two when doing comparisons.
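As one illustration (not the only valid choice), here is a plain quicksort whose comparisons go through Math.abs while the stored values themselves are left untouched; the method name is mine:

static void absQuicksort(double[] a, int lo, int hi) {
    if (lo >= hi) return;
    double pivot = Math.abs(a[lo + (hi - lo) / 2]);        // compare by absolute value only
    int i = lo, j = hi;
    while (i <= j) {
        while (Math.abs(a[i]) < pivot) i++;
        while (Math.abs(a[j]) > pivot) j--;
        if (i <= j) {
            double tmp = a[i]; a[i] = a[j]; a[j] = tmp;    // swap the original values, not their abs
            i++; j--;
        }
    }
    absQuicksort(a, lo, j);
    absQuicksort(a, i, hi);
}

Call it as absQuicksort(a, 0, a.length - 1). This is O(n log n) on average; note that the earlier merge-based answer exploits the existing sorted order and is O(n).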
Just keep two pointers, one at each end of the array, and compare the absolute values they point at; since the array is sorted, the largest remaining absolute value is always at one of the two ends, so you can fill a result array from its back:

start = 0
end = array.length - 1
out = array.length - 1
while start <= end
    if abs(array[start]) >= abs(array[end])
        result[out--] = array[start++]
    else
        result[out--] = array[end--]

I might be missing some pieces but that's the idea.
O(n) solution.
Can you please tell me how to find the largest 2 elements among 10 elements with low complexity in Java?
I tried the following, but its complexity is too high; I need to reduce it.
Algo:
max = a[0];
for (int i = 0; i < 10; i++) {
    if (a[i] > max)
        max = a[i];
}
A second way: sort the array using bubble sort and then take the last 2 elements?
Instead of sorting the array, you can simply do the following:
Keep a largestValue and a secondLargestValue
Loop through the entire array once, for each element:
Check to see if the current element is greater than largestValue:
If so, assign largestValue to secondLargestValue, then assign the current element to largestValue (think of it as shifting everything down by 1)
If not, check to see if the current element is greater than secondLargestValue
If so, assign the current element to secondLargestValue
If not, do nothing.
O(n) run time
O(1) space requirement
int max = a[0];
int max2 = Integer.MIN_VALUE;          // second largest seen so far
for (int i = 1; i < a.length; i++) {
    if (a[i] > max) {
        max2 = max;                    // the old maximum becomes the second largest
        max = a[i];
    } else if (a[i] > max2) {
        max2 = a[i];
    }
}