I have two linked lists, each of which represents a number.
For example, the first list represents the number 29 and the second represents the number 7.
I want to implement a less-equal operator for these two linked lists which works the following way:
if the number represented by the first linked list is less than or equal to the number represented by the second, return true; if not, return false.
The most important thing is to go over each linked list only once, so the size and get methods defined on the linked list are not useful.
The issue I face is when the two numbers represented by the linked lists have a different number of digits.
For example, 10 <= 1 should return false.
For 1 <= 10 it should return true, and also for 10 <= 10. Each number is represented by a linked list.
I want to go over one or both linked lists through an iterator to determine whether the number represented by the first linked list is less than or equal to the other.
I wrote code which works only when the numbers have the same length, for example 29 <= 45 or 7 <= 6, etc.:
Iterator<T> iter1 = a1.Iterator();
Iterator<T> iter2 = a2.Iterator();
while (iter1.hasNext()) {
    if (!(iter1.next().lessEqual(iter2.next()))) // if iter2 !hasNext this will throw an exception; for ex 1 <= 10
        return false;
}
return true;
How can I fix it for numbers of different sizes represented by the linked lists? Please note that every single digit of the whole number is allocated in its own link. For example, for 294, the digit 2 is in the first link, 9 in the second and 4 in the third link of the linked list.
To avoid an exception being thrown, you need to change the loop condition:
while (iter1.hasNext() && iter2.hasNext()) {
However, this is not enough to make your program give the correct answer; for that you need a bit more work.
Assuming that there are no leading zeros, notice that:
If one number has fewer digits than the other, the shorter number is the smaller one
If both numbers have the same number of digits, then the first different digit decides which one is smaller
The above should be fairly easy to implement if you are using LinkedList from the standard library, because it has a size() method to check the length without iterating over elements.
If you cannot use the LinkedList and you cannot get the size of the list in constant time, then you need to work a bit harder.
Consider these implementation steps:
Iterate over both lists until you reach the end of either of them (I already gave you the condition for that).
In each iteration, compare the digits: if you find a difference, then save that for later use, and do not overwrite it again in the loop. You could use, for example, a Boolean firstIsSmaller, initialized to null and set to true or false when the first differing digit is found; otherwise it stays null until the end of the loop.
At the end of the loop, if one of the lists did not reach its end, that list represents the bigger number.
If both lists reached the end at the same time, then use firstIsSmaller to decide which number is smaller. When firstIsSmaller is null, that means no difference was found and the numbers are equal. Otherwise the Boolean value decides whether the first is smaller or not.
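Here is a rough, untested sketch of those steps, reusing the Iterator() and lessEqual calls from your snippet (those names come from the question, not from any library); lessEqual is called in both directions to detect a strictly smaller digit:

Iterator<T> iter1 = a1.Iterator();
Iterator<T> iter2 = a2.Iterator();
Boolean firstIsSmaller = null;                   // null = no differing digit found yet

while (iter1.hasNext() && iter2.hasNext()) {
    T d1 = iter1.next();
    T d2 = iter2.next();
    if (firstIsSmaller == null) {
        if (!d1.lessEqual(d2)) {
            firstIsSmaller = false;              // d1 > d2
        } else if (!d2.lessEqual(d1)) {
            firstIsSmaller = true;               // d1 < d2
        }                                        // equal digits: keep going
    }
}

if (iter1.hasNext()) return false;               // first number has more digits, so it is bigger
if (iter2.hasNext()) return true;                // second number has more digits
return firstIsSmaller == null || firstIsSmaller; // equal, or decided by the first differing digit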
I am trying to think how to solve the Subset Sum problem with an extra constraint: the subset of the array needs to be continuous (the indexes need to be). I am trying to solve it using recursion in Java.
I know the solution for the non-constrained problem: Each element can be in the subset (and thus I perform a recursive call with sum = sum - arr[index]) or not be in it (and thus I perform a recursive call with sum = sum).
I am thinking about maybe adding another parameter for knowing whether or not the previous index is part of the subset, but I don't know what to do next.
You are on the right track.
Think of it this way:
For every entry you have to decide: do you want to start a new sum at this point, or skip it and reconsider the next entry.
a + b + c + d contains the sum of b + c + d. Do you want to recompute the sums?
Maybe a bottom-up approach would be better.
The O(n) solution that you asked for:
This solution requires keeping track of three numbers: the start and end indices, and the total sum of the span.
Starting from element 0 (or from the end of the list if you want), increase the end index until the total sum is greater than or equal to the desired value. If it is equal, you've found a subset sum. If it is greater, move the start index up one and subtract the value of the previous start index. Then, if the resulting total is still greater than the desired value, move the end index back until the sum is less than the desired value. In the other case (where the sum is less), move the end index forward until the sum is greater than or equal to the desired value. If no match is found, repeat.
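If it helps, here is a minimal sketch of that start/end-index idea (the method and variable names are mine, not from this answer). It assumes all array values are non-negative, which the linear-time argument relies on:

static boolean hasContiguousSubsetSum(int[] arr, int target) {
    int start = 0;
    int sum = 0;
    for (int end = 0; end < arr.length; end++) {
        sum += arr[end];                        // extend the span to the right
        while (sum > target && start <= end) {  // too big: shrink it from the left
            sum -= arr[start++];
        }
        if (sum == target && start <= end) {    // a non-empty span hit the target
            return true;
        }
    }
    return false;
}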
So, caveats:
Is this "fairly obvious"? Maybe, maybe not. I was making assumptions about order-of-magnitude similarity when I said both "fairly obvious" and O(n) in my comments.
Is this actually O(n)? It depends a lot on how similar (in terms of order of magnitude, i.e. digits in the number) the numbers in the list are. The closer all the numbers are to each other, the fewer steps you'll need to make on the end index to test whether a subset exists. On the other hand, if you have a couple of very big numbers (like in the thousands) surrounded by hundreds of pretty small numbers (1's and 2's and 3's), the solution I've presented will get closer to O(n^2).
This solution only works because of your restriction that the subset values are continuous.
Let's say you have two lists of ints: [1, 2, 8] and [3, 6, 7]. If the budget is 6, the int returned has to be 5 (2+3). I am struggling to find a solution faster than O(n^2).
The second part of the question is "how would you change your function if the number of lists is not fixed?" as in, there are multiple lists with the lists being stored in a 2d array. Any guidance or help would be appreciated.
I think my first approach would be to use for loops and add the elements of each array to those of the next, then take whatever is closest to the budget without exceeding it. I am new to Java and I don't get your solution though, so I don't know if this would help.
For the second part of your question, are you referring to the indefinite length of your array? If so, you can use an ArrayList for that.
I will provide an answer for the case of 2 sequences. You will need to ask a separate question for the extended case. I am assuming the entries are natural numbers (i.e. no negatives).
Store your values in a NavigableSet. The implementations of this interface (e.g. TreeSet) allow you to efficiently find (for example) the largest value less than an amount.
So if you have each of your 2 sets in a NavigableSet then you could use:
set1.headSet(total).stream()
    .map(v1 -> Optional.ofNullable(set2.floor(total - v1)).map(v2 -> v1 + v2))
    .flatMap(Optional::stream)
    .max(Integer::compareTo);
Essentially this streams all elements of the first set that are less than the total, then for each finds the largest element of the second set that is at most total - element (if any), adds them, and takes the maximum. It returns an Optional which is empty if there is no such pair.
You wouldn't necessarily need the first set to be a NavigableSet - you could use any sorted structure and then use Stream.takeWhile to only look at elements smaller than the target.
This should be O(n * log n) once the sets are constructed. You could perhaps do even better by navigating the second set rather than using floor.
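For example, with the lists from the question, the pipeline above could be driven like this (a quick sketch, assuming TreeSet as the NavigableSet implementation and the Java 9+ List.of factory; not part of the original answer):

NavigableSet<Integer> set1 = new TreeSet<>(List.of(1, 2, 8));
NavigableSet<Integer> set2 = new TreeSet<>(List.of(3, 6, 7));
int total = 6;

Optional<Integer> best = set1.headSet(total).stream()
        .map(v1 -> Optional.ofNullable(set2.floor(total - v1)).map(v2 -> v1 + v2))
        .flatMap(Optional::stream)
        .max(Integer::compareTo);
// best is Optional[5], i.e. 2 + 3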
The best way to approach the problem if there are multiple lists would be to use a hash table to store the pair or sequence of values that sum to the number you want (the budget).
You would simply print out the hash map key and values (the key being an integer, the values a list of integers) that sum to that number.
Time complexity is O(N), where N is the size of the largest list, with space complexity of O(N).
I have an array of integers which is updated every set interval of time with a new value (let's call it data). When that happens I want to check if that array contains any other array of integers from specified set (let's call that collection).
I do it like this:
separate a sub-array from the end of data of length X (arrays in the collection have a set max length of X);
iterate through the collection and check if any array in it is contained in the separated data chunk;
It works, though it doesn't seem optimal. But every other idea I have involves creating more collections (e.g. create a collection of all the arrays from the original collection that end with the same integer as data, repeat). And that seems even more complex (on the other hand, it looks like the only way to deal with arrays in collections without a limited max length).
Are there any standard algorithms to deal with such a problem? If not, are there any worthwhile optimizations I can apply to my approach?
EDIT:
To be precise, I:
separate a sub-array from the end of data of length X (arrays in the collection have a set max length of X, and if they don't, X is just the length of the longest one in the collection);
iterate through the collection and for every array in it:
separate a sub-array from the previous sub-array with length matching the current array in the collection;
use Java's List.equals to compare the arrays;
EDIT 2:
Thanks for all the replies, surely they'll come in handy some day. In this case I decided to drop the last steps and just compare the arrays in my own loop. That eliminates creating yet another sub-array, and it's already O(N), so in this specific case it will do.
Take a look at the KMP algorithm. It was designed with string matching in mind, but it really comes down to matching a given pattern against contiguous runs of an array. Since that algorithm has linear complexity (O(n)), it can be said that it's pretty optimal. It's also a basic staple among standard algorithms.
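In case it is useful, here is a minimal sketch of KMP adapted to int arrays (the method name kmpSearch is mine, not from any library); it returns the start index of the first occurrence of pattern in data, or -1 if there is none:

static int kmpSearch(int[] data, int[] pattern) {
    if (pattern.length == 0) return 0;
    // fail[i] = length of the longest proper prefix of pattern[0..i] that is also a suffix of it
    int[] fail = new int[pattern.length];
    for (int i = 1, k = 0; i < pattern.length; i++) {
        while (k > 0 && pattern[i] != pattern[k]) k = fail[k - 1];
        if (pattern[i] == pattern[k]) k++;
        fail[i] = k;
    }
    // scan the data once, falling back via the failure table on mismatches
    for (int i = 0, k = 0; i < data.length; i++) {
        while (k > 0 && data[i] != pattern[k]) k = fail[k - 1];
        if (data[i] == pattern[k]) k++;
        if (k == pattern.length) return i - pattern.length + 1;
    }
    return -1;
}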
dfens' proposal is smart in that it incurs no significant extra complexity if you keep the current product along with the main array, and it can be checked in O(1), but it is also quite fragile and produces many false positives. Just imagine a target array [1, 1, ..., 1], which will always produce a positive test for all non-trivial main arrays. It also breaks down when one bucket contains a 0. That means that passing his test is a necessary condition for a hit (0s aside), but it is never sufficient - with that method alone, you can never be sure of the validity of the result.
Look at the rsync algorithm... if I understand it correctly you could go about it like this:
You've got an immense array of data [length L].
At the end of that data, you've got N elements, and you want to know whether those N elements ever appeared before.
Precalculate:
For every offset in the array, calculate the checksum over the next N data elements.
Hold those checksums in a separate array.
Using a rolling checksum like rsync does, you can do this step in O(L) time for all offsets.
Whenever new data arrives:
Calculate the checksum over the last N elements. Using a rolling checksum, this can be O(1).
Check that checksum against all the precalculated checksums. If it matches, check equality of the subarrays (subslices, whatever...). If that matches too, you've got a match.
I think, in essence, this is the same as dfens' approach with the product of all numbers.
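For what it's worth, here is a tiny Rabin-Karp-style sketch of the rolling-checksum idea (the helper names and constants are mine; rsync's actual weak checksum is different). The weak hash only filters candidates, so an exact comparison is still needed whenever it matches:

static final long BASE = 31, MOD = 1_000_000_007L;

// checksum of a whole window, used to precompute the checksum of each candidate array
static long hashOf(int[] a) {
    long h = 0;
    for (int x : a) h = (h * BASE + Math.floorMod(x, MOD)) % MOD;
    return h;
}

// O(1) update when a new value arrives: drop the element leaving the last-N window
// and mix in the new one; pow is BASE^(N-1) mod MOD, computed once up front
static long roll(long hash, int leaving, int entering, long pow) {
    long withoutOld = Math.floorMod(hash - Math.floorMod(leaving, MOD) * pow, MOD);
    return (withoutOld * BASE + Math.floorMod(entering, MOD)) % MOD;
}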
I think you can keep the product of the array for immediate rejections.
So if your array is [n_1, n_2, n_3, ...] you can say that it is not a subarray of [m_1, m_2, m_3, ...] if the product m_1 * m_2 * ... = M is not divisible by the product n_1 * n_2 * ... = N.
Example
Let's say you have array
[6,7]
And comparing with:
[6,10,12]
[4,6,7]
The product of your array is 6 * 7 = 42.
6 * 10 * 12 = 720, which is not divisible by 42, so you can reject the first array immediately.
4 * 6 * 7 = 168 is divisible by 42, so you cannot reject it (but divisibility alone does not prove a match - the factors can come from other elements).
In each interval of time you can just multiply the product by the new number to avoid recomputing the whole product every time.
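A small sketch of that running-product rejection, assuming positive values as in the example (the names here are mine; java.math.BigInteger is used because the product of a growing array overflows long almost immediately - see the other comments about 0s and false positives):

BigInteger runningProduct = BigInteger.ONE;

// called whenever a new value is appended to data
void onNewValue(int value) {
    runningProduct = runningProduct.multiply(BigInteger.valueOf(value));
}

// quick rejection test: if the target's product does not divide the running product,
// the target cannot be a subarray; if it does divide, an exact check is still required
boolean mightContain(int[] target) {
    BigInteger targetProduct = BigInteger.ONE;
    for (int x : target) targetProduct = targetProduct.multiply(BigInteger.valueOf(x));
    return runningProduct.mod(targetProduct).equals(BigInteger.ZERO);
}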
Note that you don't have to allocate anything if you simulate List's equals yourself. Just one more loop.
Similar to dfens' answer, I'd offer other criteria:
As the product is too big to be handled efficiently, compute the GCD instead. It produces many more false positives, but surely fits in long or int or whatever your original datatype is.
Compute the total number of trailing zeros, i.e., the "product" ignoring all factors but powers of 2. Also pretty weak, but fast. Use this criterion before the more time-consuming ones.
But... this is a special case of DThought's answer. Use rolling hash.
I'm reading the code of Guava's Ints.max(int... array) (and similarly, Ints.min, Longs.min, etc.). They throw an IllegalArgumentException if array.length == 0 (this is Guava 15.0).
I wonder why they do not return the "identity element" in this case, instead of throwing an exception. By "identity element" I mean the element behaving like 1 for product, or 0 for sum.
That is, I would expect Ints.min() to be Integer.MAX_VALUE, Ints.max() to be Integer.MIN_VALUE, and so on.
The rationale behind this is that if you split an array in two, the min of the whole array must be the min between the mins of both sub arrays. Or, for the mathematically inclined, the sum over an empty set of real numbers is 0, the product is 1, the union of an empty collection of sets is the empty set, and so on.
Since Guava libraries tend to be carefully produced, I guess there must be an explanation for throwing an exception here. So the question is: why?
Edit: I understand that most people expect max and min of an array to be an element of the array, but this is because max/min of two elements is always one of them. On the other hand, if one regards max/min just as (commutative) binary operations, returning the identity element makes more sense. To me.
Because, IMHO, in 99.99% of cases, when you ask for the minimum element of an array, you want to get an element of this array, and not some arbitrarily large value. And thus, most of the time, an empty array is a special condition that needs specific treatment. Not handling this special condition is thus a bug, signalled by an exception.
You said it yourself -
The rationale behind this is that if you split an array in two, the min of the whole array must be the min between the mins of both sub arrays. Or, for the mathematically inclined, the sum over an empty set of real numbers is 0, the product is 1, the union of an empty collection of sets is the empty set, and so on.
So [-1] = [-1] union [] but max([-1]) != max([-1] union []). I agree that for product or sum it makes more sense to return the respective identity, but not max/min.
I also prefer the property that max/min(S) be an element of S. Not some element having no relevance with respect to less than and greater than.
In particular, if I'm working in a domain with a lot of negative numbers (say, temperatures in Northern Canada), then a day where my sample of temperatures is empty because the thermometer broke should not randomly show up as a relatively very warm day.
The minimum/maximum of array values must come from that array. If the array is empty, then there is no value to take. Returning Integer.MAX_VALUE or Integer.MIN_VALUE here would be wrong, because those values aren't in the array. Nothing is in the array. Mathematically, the answer is the empty set, but that isn't a valid value among possible int values. There is no possible correct int answer, so the only possible correct course of action is to throw an exception.
I have a series of numbers, i.e. 6, 5, 7, 8, 9, 1, which I will eventually obtain by repeating some mathematical process. For this reason, I want to use a vector to store the last six numbers yielded by this process and then compare the contents of that vector to the series of numbers above. If it identically matches the series of numbers, I will end the program, or if not continue the mathematical procedure.
My question is how I might go about comparing the series of numbers and the vector efficiently. Originally I was going to use a simple if/else conditional to compare each individual value in the vector with its counterpart in the series (e.g. where v is a Vector filled with six numbers, if (v.get(0) == 6 && v.get(1) == 5 ...)), but considering that this will be evaluated a number of times before the vector equals the series, I'm beginning to wonder if that would be a relatively costly calculation compared to some alternative procedure.
My other idea to do this is to store the series of numbers in a second vector and then compare the two vectors. However, being inexperienced with vectors and Java in general, I'm not sure as to how I might go about doing this and how it might be more efficient than the simple if and else clause mentioned above.
Any advice as to how comparisons between a bunch of numbers and a vector's contents might be done more efficiently? Or, if not a vector, then perhaps a list or array instead?
Make a hash value out of your sequence of numbers, for example (warning: this hash function is only for demonstration purposes):
[N1,N2,N3,N4,N5] -> N1 xor N2 xor N3 xor N4 xor N5.
Then you first only need to check the hash values of your result vector and the original vector, and only if they match do you need to compare each individual number.
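A tiny sketch of that idea (demonstration-quality hash only, as noted; the helper name is mine). The running XOR of the last-six window can also be updated in O(1) by XOR-ing out the value that leaves the window:

static int xorHash(List<Integer> values) {
    int h = 0;
    for (int v : values) h ^= v;
    return h;
}

// usage: if (xorHash(lastSix) == targetHash && lastSix.equals(target)) { /* found it */ }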
You should avoid Vector here, because it's implemented to be thread safe and has some overhead for this. Use LinkedList instead for maximum performance of insert and remove operations and ArrayList for maximum performance of random access.
Example:
// uses java.util.ArrayList, java.util.LinkedList, java.util.List and java.util.Random
void repeatedCompute() {
    // the target sequence, in the order it is expected to be produced
    List<Integer> triggerList = new ArrayList<Integer>() {{
        add(6);
        add(5);
        add(7);
        add(8);
        add(9);
        add(1);
    }};
    // holds the most recent results, oldest first, so it can be compared directly against triggerList
    List<Integer> intList = new LinkedList<Integer>();
    boolean found = false;
    while (!found) {
        int nextResult = compute();
        intList.add(nextResult);                 // append the newest result at the end
        if (intList.size() > triggerList.size()) {
            intList.remove(0);                   // drop the oldest result from the front
        }
        if (intList.equals(triggerList)) {
            found = true;
        }
    }
}

// stand-in for the actual mathematical process
int compute() {
    return new Random().nextInt(10);
}
Comparing the lists has the advantage that comparison stops after the first element mismatch; you don't have to touch all elements. Comparing lists is quite fast!
I would guess that it wouldn't be too expensive to just keep one vector around with your target numbers in it, populate a new one with your newly generated numbers, and then iterate through them comparing. It may well be that you get a failed comparison on the first number, so it only costs you one compare to detect failure.
It seems that you're going to have to collect your six numbers whichever method you use, so just comparing integers won't be too expensive.
Whatever you do, please measure!
You should generate at least two algorithms for your comparison task and compare their performance. Pick the fastest.
However, maybe the first step is to compare the run time of your mathematical process to that of the comparison. If your mathematical process takes 100 times as long as the comparison or more, just pick something simple and don't worry.
Note that the && operator short-circuits, i.e. the right operand is only evaluated if the left operand does not determine the result by itself. Hence the comparison will only look at those elements necessary.
I do not think, however, that this will in any case overshadow the time spent on the mathematical calculations used to generate the numbers you compare. Use jvisualvm to investigate where the time is spent.
If you only need to verify against one set of numbers, then comparing the numbers directly is more efficient. Computing the hash requires you to perform some arithmetic, and that is more expensive than comparison. If you need to verify against multiple sets of numbers, then the hash solution would be more efficient.