Trying to figure out the error in this code. It works for small samples but fails for huge numbers (I don't have a large sample at hand, though).
The solution worked for the following tests.
private static final int[] A = {9,3,9,3,9,7,9};
private static final int[] A2 = {9,3,9};
private static final int[] A3 = {9,3,9,3,9,7,7,2,2,11,9};
@Test
public void test(){
OddOccurance oddOccurance =new OddOccurance();
int odd=oddOccurance.solution(A);
assertEquals(7,odd);
}
@Test
public void test2(){
OddOccurance oddOccurance =new OddOccurance();
int odd=oddOccurance.solution(A2);
assertEquals(3,odd);
}
@Test
public void test3(){
OddOccurance oddOccurance =new OddOccurance();
int odd=oddOccurance.solution(A3);
assertEquals(11,odd);
}
The problem: an array with an odd number of integers is given; every integer except one can be repeated, and the task is to find the non-repeating integer. Any better (time- and space-optimized) ways to implement this are welcome as well.
public int solution(int[] A) {
// write your code in Java SE 8
Map<Integer, List<Integer>> map = new HashMap<>();
int value = 0;
// iterate through the array and, for each array value (the key in the map),
// track how often it appears via the size of the list stored as the map value
for (int key : A) {
if (map.containsKey(key)) {
map.get(key).add(value);
} else {
List<Integer> valueList = new ArrayList<>();
valueList.add(value);
map.put(key, valueList);
}
}
Set<Map.Entry<Integer, List<Integer>>> entrySet = map.entrySet();
// find the key whose list has exactly one entry
for (Map.Entry<Integer, List<Integer>> entry : entrySet) {
if (entry.getValue().size() == 1) {
return entry.getKey();
}
}
return 0;
}
Update
Looking at the failed outputs:
WRONG ANSWER, got 0 expected 42
WRONG ANSWER, got 0 expected 700
It seems it didn't even get into the for loop and just returned 0.
It's a standard problem, if the actual statement is the following:
each number except one appears even number of times; the remaining number appears once.
The solution is to take the xor of all the numbers. Since every repeating number occurs an even number of times, it cancels itself out. This works because xor is commutative and associative:
a xor b xor c = a xor c xor b = c xor b xor a = etc.
For example, in case of 1, 2, 3, 1, 2
1 xor 2 xor 3 xor 1 xor 2 =
(1 xor 1) xor (2 xor 2) xor 3 =
0 xor 0 xor 3 =
3
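If that is the actual statement, it would also explain the failed outputs above: a value that occurs three or five times never produces a list of size 1, so the original code falls through to the final return 0. A minimal sketch of the xor approach, reusing the signature of the question's solution method:

public int solution(int[] A) {
    int result = 0;
    // every value that appears an even number of times cancels itself out,
    // leaving only the value that appears an odd number of times
    for (int value : A) {
        result ^= value;
    }
    return result;
}

This runs in O(n) time and O(1) extra space.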
One approach would be to create a new array containing the frequency of each value. You could start by looping through your initial array to calculate the maximum value in it.
For example, the array {9,3,9,3,9,7,7,2,2,11,9} has a maximum value of 11. With this information, create a new array that can store the frequency of every possible value in your initial array. Then, assuming there is only one integer that occurs exactly once, return the index of the new array that has a frequency of 1. This method runs in O(n + m), where n is the size of the input array and m is its maximum value.
Here's an implementation:
public int solution(int[] inp)
{
int max = inp[0];
for(int i = 1; i < inp.length; i++)
{
if(inp[i] > max)
max = inp[i];
}
int[] histogram = new int[max + 1]; //We add 1 so we have an index for our max value
for(int i = 0; i < inp.length; i++)
histogram[inp[i]]++; //Update the frequency
for(int i = 0; i < histogram.length; i++)
{
if(histogram[i] == 1)
return i;
}
return -1; //Hopefully this doesn't happen
}
Hope this helps
It's hard to know why yours failed without the actual error message. Regardless, as your array input gets very large, your internal data structure grows accordingly, but it doesn't need to. Instead of a List of Integer as the value, we can just use one Integer:
public int solution(int[] a) {
Integer ONE = 1;
Map<Integer, Integer> map = new HashMap<>();
for (int key : a) {
Integer value = (map.containsKey(key)) ? map.get(key) + ONE : ONE;
map.put(key, value);
}
for (Map.Entry<Integer, Integer> entry : map.entrySet()) {
if (entry.getValue().equals(ONE)) {
return entry.getKey();
}
}
return -1;
}
I'm assuming the odd array length requirement is there to avoid an array of length two, where the two items would either both be unduplicated or both be duplicated.
Since we don't need the actual total, we can simplify this further and just consider parity. Here's a rework that does so, following the evolving rules of this question and looking for the odd one out:
public int solution(int[] a) {
Map<Integer, Boolean> odd = new HashMap<>();
for (int key : a) {
odd.put(key, (odd.containsKey(key)) ? ! odd.get(key) : Boolean.TRUE);
}
for (Map.Entry<Integer, Boolean> entry : odd.entrySet()) {
if (entry.getValue()) {
return entry.getKey();
}
}
return 0;
}
Returning zero on failure is safe now that we know:
each element of A is an integer within the range [1..1,000,000,000]
Related
I'm using a Map with eligible words for a hangman game I'm developing. The Integer in the Map stores the times a word has been chosen, so in the beginning the Map looks like this:
alabanza 0
esperanza 0
comunal 0
aprender 0
....
After some plays, the Map would look like this
alabanza 3
esperanza 4
comunal 3
aprender 1
....
I'd like to choose the next word randomly, but giving the less-chosen words a bigger probability of being chosen.
I've read Java - Choose a random value from a HashMap but one with the highest integer assigned, but mine is the opposite case.
I've also thought I could use a list with repeated words (the more times a word appears in the list, the higher its probability of being chosen), but I've only managed to get to this:
int numberOfWords=wordList.size(); //The Map
List<String> repeatedWords=new ArrayList<>();
for (Map.Entry<String,Integer> entry : wordList.entrySet()) {
for (int i = 0; i < numberOfWords-entry.getValue(); i++) {
repeatedWords.add(entry.getKey());
}
}
Collections.shuffle(repeatedWords); //Should it be get(Random)?
String chosenWord=repeatedWords.get(0);
I think this fails when the number of times a word has been chosen equals the number of words (that word then gets zero entries in the list).
Edit
In the end there's a problem with the probability of each word once the counts differ. I've changed the point of view: I first give each word a weight of 1000 (it could be any number), and every time I choose a word I reduce its weight by a certain amount (say, 20%), so I use:
wordList.put(chosen,(int)(wordList.get(chosen)*0.8)+1);
After that I choose the word with the recipe that Lajos Arpad or Ahmad Shahwan gave.
If the game were played many, many times, all the weights would tend to 1, but that's not my case.
Thanks all who answered.
Try this:
import java.util.Map;
import java.util.HashMap;
import java.util.Random;
public class MyClass {
public static void main(String args[]) {
Map<String, Integer> wordList = new HashMap<>();
wordList.put("alabanza", 3);
wordList.put("esperanza", 4);
wordList.put("comunal", 3);
wordList.put("aprender", 1);
Map<String, Integer> results = new HashMap<>(4);
for (int i = 0; i < 100; i++) {
String name = randomize(wordList);
Integer old = results.getOrDefault(name, 0);
results.put(name, old + 1);
}
for (Map.Entry<String, Integer> e : results.entrySet()) {
System.out.println(e.getKey() + "\t" + e.getValue());
}
}
private static String randomize(Map<String, Integer> wordList) {
final Integer sum = wordList.values().stream().reduce(Integer::sum).orElse(0);
final int grandSum = (wordList.size() - 1) * sum;
final int random = new Random().nextInt(grandSum + 1);
int index = 0;
for (Map.Entry<String, Integer> e: wordList.entrySet()) {
index += (sum - e.getValue());
if (index >= random) {
return e.getKey();
}
}
return null;
}
}
Output shows how many times each name was chosen over 100 trials:
aprender 37
alabanza 25
comunal 23
esperanza 15
You can try it yourself here.
I won't provide exact code, just the basic idea.
Iterate over wordList.values() to find the maximum weight M and sum of weights S.
Now let each word w have likelihood (like probability, but they don't have to sum to 1) to be chosen M + 1 - wordList.get(w), so a word with weight 1 is M times more likely to be chosen than a word with weight M.
The sum of likelihoods will be (M + 1) * wordList.size() - S (that's why we need S). Pick a random number R between 0 and this sum.
Iterate over wordList.entrySet(), summing likelihoods as you go. When the sum passes R, that's the word you want.
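A sketch of that idea in Java (the class and method names are mine, not part of the answer):

import java.util.Map;
import java.util.Random;

public class InverseWeightPicker {
    private static final Random RANDOM = new Random();

    // picks a word with likelihood M + 1 - count, so rarely chosen words are favoured
    public static String pick(Map<String, Integer> wordList) {
        int max = 0;
        int sum = 0;
        for (int count : wordList.values()) {
            max = Math.max(max, count);
            sum += count;
        }
        int totalLikelihood = (max + 1) * wordList.size() - sum;
        int r = RANDOM.nextInt(totalLikelihood);       // uniform in [0, totalLikelihood)
        int running = 0;
        for (Map.Entry<String, Integer> e : wordList.entrySet()) {
            running += max + 1 - e.getValue();         // this word's likelihood
            if (r < running) {
                return e.getKey();
            }
        }
        throw new IllegalStateException("unreachable for a non-empty map");
    }
}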
Your map values are your weights.
You need to pick an integer lower than the weights sum.
You go through each String entry with its weight; when the running weight sum passes the random integer, you are on THE String.
This will give you :
public static void main(String ... args){
Map<String, Integer> wordList = new HashMap<>();
wordList.put("foo", 4);
wordList.put("foo2", 2);
wordList.put("foo3", 7);
System.out.println(randomWithWeight(wordList));
}
public static String randomWithWeight(Map<String, Integer> weightedWordList) {
int sum = weightedWordList.values().stream().collect(Collectors.summingInt(Integer::intValue));
int random = new Random().nextInt(sum);
int i = 0;
for (Map.Entry<String, Integer> e : weightedWordList.entrySet()){
i += e.getValue();
if (i > random){
return e.getKey();
}
}
return null;
}
For the sake of simplicity let us suppose that you have an array called occurrences, which has int elements (you will easily translate this into your data structure).
Now, let's find the maximum value:
int max = 0;
for (int i = 0; i < occurrences.length; i++) {
if (max < occurrences[i]) max = occurrences[i];
}
Let's increment it:
max++;
Now, let's give a weight of max to the items which have 0 as value, a weight of max - 1 to the items which occurred once, and so on (no item will have a weight of 0 since we incremented max):
int totalWeight = 0;
for (int j = 0; j < occurrences.length; j++) {
totalWeight += max - occurrences[j];
}
Note that all items will have their weight. Now, let's suppose you have a randomized integer, called r, where 0 < r <= totalWeight:
int resultIndex = -1;
for (int k = 0; (resultIndex < 0) && k < occurrences.length; k++) {
if (r <= max - occurrences[k]) resultIndex = k;
else r -= max - occurrences[k];
}
and the result is the item at index resultIndex (in your case, the corresponding word).
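Assembled into a single method, the pieces above might look like this (a sketch; the method name and the java.util.Random parameter are mine, and mapping the returned index back to a word is left to your own data structure):

public static int pickIndex(int[] occurrences, Random random) {
    int max = 0;
    for (int occurrence : occurrences) {
        if (max < occurrence) max = occurrence;
    }
    max++;                                        // so even the most frequent item keeps weight 1

    int totalWeight = 0;
    for (int occurrence : occurrences) {
        totalWeight += max - occurrence;
    }

    int r = random.nextInt(totalWeight) + 1;      // 0 < r <= totalWeight, as above
    for (int k = 0; k < occurrences.length; k++) {
        if (r <= max - occurrences[k]) return k;
        r -= max - occurrences[k];
    }
    throw new IllegalStateException("r never exceeds totalWeight");
}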
Given a stream of numbers, like 1,3,5,4,6,9, I was asked to print them like 1,3-6,9. My approach was to hold the 2 smallest numbers in a maxHeap and the 2 largest numbers in a minHeap, and I came up with the following solution. Do you have any suggestions to make it more optimized? Its time complexity is O(n log n).
public static ArrayList<Integer> mergingMiddleNums (int[] arr){
if (arr == null || arr.length < 3){
throw new IllegalArgumentException();
}
ArrayList<Integer> result = new ArrayList<>();
Queue<Integer> minHeap = new PriorityQueue<>();
Queue<Integer> maxHeap = new PriorityQueue<Integer>(new Comparator<Integer>() {
@Override
public int compare(Integer num1, Integer num2) {
return num2-num1;
}
});
for (int i = 0 ; i < 2 ; i++){
minHeap.add(arr[i]);
}
for (int i = 0 ; i < 2 ; i++){
maxHeap.add(arr[i]);
}
for (int i = 2 ; i <arr.length; i++){
if(arr[i] > minHeap.peek()){
minHeap.poll();
minHeap.add(arr[i]);
}
}
result.add(minHeap.poll());
result.add(minHeap.poll());
for (int i = 2 ; i <arr.length; i++){
if(arr[i] < maxHeap.peek()){
maxHeap.poll();
maxHeap.add(arr[i]);
}
}
result.add(maxHeap.poll());
result.add(maxHeap.poll());
Collections.sort(result);
return result;
}
It depends on whether your output needs to stream or not. Let's start with non-streaming output, because your current implementation addresses this.
Your code's overall complexity will be, at best, O(n log n), but you can radically simplify your implementation: store every incoming number in a collection, convert it to an array, sort it, and then scan over the items sequentially to identify continuous ranges. The most expensive operation is the sort, which defines your runtime. To save space, you could use a set or heap to avoid storing duplicates (building it is also around O(n log n), so the total runtime stays O(n log n)).
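A sketch of that sort-then-scan approach (the method name is mine; it assumes the whole input is available as an int[], uses java.util.Arrays, and prints in the 1,3-6,9 style of the question):

public static String mergeRanges(int[] input) {
    int[] sorted = input.clone();
    Arrays.sort(sorted);                           // O(n log n) dominates the runtime

    StringBuilder sb = new StringBuilder();
    int i = 0;
    while (i < sorted.length) {
        int start = sorted[i];
        int end = start;
        // extend the range while the next value is a duplicate or directly adjacent
        while (i + 1 < sorted.length && sorted[i + 1] <= end + 1) {
            end = sorted[i + 1];
            i++;
        }
        if (sb.length() > 0) sb.append(',');
        sb.append(start);
        if (end > start) sb.append('-').append(end);
        i++;
    }
    return sb.toString();
}

For {1,3,5,4,6,9} this returns "1,3-6,9".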
If your code is expected to stream the output, that is, to print each range as it is formed and move on to the next range whenever the next number encountered is not directly adjacent to the current range, you can do it in O(n): store the numeric bounds of the current range as you go, and either print and reset them when the current number is not adjacent to or inside the bounds, or expand the bounds when it is.
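A sketch of that streaming variant (the class name is mine); it keeps only the bounds of the current range and prints a range as soon as a non-adjacent number arrives:

class RangePrinter {
    private long lo, hi;
    private boolean open = false;
    private boolean first = true;

    void accept(int value) {
        if (open && value >= lo - 1 && value <= hi + 1) {
            lo = Math.min(lo, value);              // expand the current range
            hi = Math.max(hi, value);
        } else {
            if (open) print();                     // close the previous range
            lo = hi = value;                       // start a new range
            open = true;
        }
    }

    void close() {                                 // call once at the end of the stream
        if (open) print();
        System.out.println();
    }

    private void print() {
        System.out.print((first ? "" : ",") + lo + (hi > lo ? "-" + hi : ""));
        first = false;
    }
}

Fed 1,3,4,5,6,9 it prints 1,3-6,9; fed the original order 1,3,5,4,6,9 it prints 1,3,4-6,9, since 5 arrives before its neighbour 4, which is the trade-off of the streaming approach.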
A possible implementation would be to use a hashtable to store whether each integer was present in the input values or not. Then it's simply a matter of iterating from the min value to the max and using the hashtable to find out where the number clusters are.
Such an implementation is basically O(n) with n = max - min (not the number of items in the list). So if you have many numbers within a reasonably small range of values, this can be better than a sort-based approach.
import java.util.HashMap;
import java.util.Map;
class Test {
private int min=0, max=-1;
private Map<Integer,Integer> map=new HashMap<Integer,Integer>();
public static void main(String args[]) {
int[] input={1,3,5,4,6,9};
Test t = new Test();
t.readNumbers(input);
t.outputRanges();
}
public void readNumbers(int[] values) {
// Get min and max values, and store all existing values in map
first=true; // the first value must initialize both min and max
for(int v:values) {
if(first || v<min) min=v;
if(first || v>max) max=v;
first=false;
map.put(v, 1);
}
}
public void outputRanges() {
// Iterate from min to max and use map to find out existing
// values
int last=min-2;
boolean inRange=false;
first=true;
for(int i=min;i<=max;++i) {
if(map.get(i)==null) continue;
if(i==last+1) {
inRange=true;
} else {
if(inRange) {
closeRange(last);
inRange=false;
}
output(i);
}
last=i;
}
if(inRange) closeRange(last);
}
private boolean first;
private void commaUnlessFirst() {
if(!first) System.out.printf(",");
first=false;
}
private void output(int i) {
commaUnlessFirst();
System.out.printf("%d", i);
}
private void closeRange(int i) {
System.out.printf("-%d", i);
}
}
As in the title, I want to use the Knuth-Fisher-Yates shuffle algorithm to select N random elements from a List, but without using List.toArray or changing the list. Here is my current code:
public List<E> getNElements(List<E> list, Integer n) {
List<E> rtn = null;
if (list != null && n != null && n > 0) {
int lSize = list.size();
if (lSize > n) {
rtn = new ArrayList<E>(n);
E[] es = (E[]) list.toArray();
//Knuth-Fisher-Yates shuffle algorithm
for (int i = es.length - 1; i > es.length - n - 1; i--) {
int iRand = rand.nextInt(i + 1);
E eRand = es[iRand];
es[iRand] = es[i];
//This is not necessary here as we do not really need the final shuffle result.
//es[i] = eRand;
rtn.add(eRand);
}
} else if (lSize == n) {
rtn = new ArrayList<E>(n);
rtn.addAll(list);
} else {
log("list.size < nSub! ", lSize, n);
}
}
return rtn;
}
It uses list.toArray() to make a new array so the original list isn't modified. However, my problem is that the list can be very big, e.g. 1 million elements, and then list.toArray() is too slow. My n can range from 1 to 1 million. When n is small (say 2), the function is very inefficient, as it still needs to call list.toArray() on a list of 1 million elements.
Can someone help improve the above code to make it more efficient when dealing with large lists? Thanks.
Here I assume the Knuth-Fisher-Yates shuffle is the best algorithm for selecting n random elements from a list. Am I right? I would be glad to hear of other algorithms that beat the Knuth-Fisher-Yates shuffle in terms of speed and the quality of the results (guaranteeing real randomness).
Update:
Here are some of my test results:
When selecting n from 1,000,000 elements:
When n < 1000000/4, the fastest way is to use Daniel Lemire's bitmap function to select n random ids first and then fetch the elements with those ids:
public List<E> getNElementsBitSet(List<E> list, int n) {
List<E> rtn = new ArrayList<E>(n);
int[] ids = genNBitSet(n, 0, list.size());
for (int i = 0; i < ids.length; i++) {
rtn.add(list.get(ids[i]));
}
return rtn;
}
The genNBitSet is using the code generateUniformBitmap from https://github.com/lemire/Code-used-on-Daniel-Lemire-s-blog/blob/master/2013/08/14/java/UniformDistinct.java
When n > 1000000/4, the reservoir sampling method is faster.
So I have built a function that combines these two methods.
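That combined function might look roughly like this (a sketch: the list.size() / 4 cutoff comes from the measurements above, and getNElementsReservoir is a hypothetical name for a reservoir-sampling implementation such as the one in the answers below):

public List<E> getNElementsCombined(List<E> list, int n) {
    // bitmap selection wins for small samples, reservoir sampling for large ones
    if (n < list.size() / 4) {
        return getNElementsBitSet(list, n);
    }
    return getNElementsReservoir(list, n);   // hypothetical reservoir-sampling counterpart
}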
You are probably looking for something like Reservoir Sampling.
Start with an initial array holding the first k elements, and then replace elements with decreasing probability:
Java-like pseudocode:
E[] r = new E[k]; //not really possible (cannot create an array of a generic type), but this is just pseudocode
int i = 0;
for (E e : list) {
    //assign the first k elements:
    if (i < k) { r[i++] = e; continue; }
    //replace an existing element with decreasing probability:
    i++;                    //i = number of elements seen so far, including the current one
    int j = random(i);      //a uniform number from 0 to i - 1 inclusive
    if (j < k) r[j] = e;
}
return r;
This requires a single pass on the data, with very cheap ops every iteration, and the space consumption is linear with the required output size.
If n is very small compared to the length of the list, take an empty set of ints and keep adding a random index until the set has the right size.
If n is comparable to the length of the list, do the same, but then return items in the list that don't have indexes in the set.
In the middle ground, you can iterate through the list, and randomly select items based on how many items you've seen, and how many items you've already returned. In pseudo-code, if you want k items from N:
for i = 0 to N-1
if random(N-i) < k
add item[i] to the result
k -= 1
end
end
Here random(x) returns a random number between 0 (inclusive) and x (exclusive).
This produces a uniformly random sample of k elements. You could also consider making an iterator to avoid building the results list to save memory, assuming the list is unchanged as you're iterating over it.
By profiling, you can determine the transition point where it makes sense to switch from the naive set-building method to the iteration method.
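A Java sketch of that middle-ground loop (the method name is mine; rnd is a java.util.Random):

public static <T> List<T> sampleK(List<T> items, int k, Random rnd) {
    List<T> result = new ArrayList<>(k);
    int remaining = k;                       // how many we still need to pick
    int notYetSeen = items.size();           // how many items are left to examine
    for (T item : items) {
        // keep this item with probability remaining / notYetSeen
        if (rnd.nextInt(notYetSeen) < remaining) {
            result.add(item);
            remaining--;
        }
        notYetSeen--;
        if (remaining == 0) break;
    }
    return result;
}

As noted above, this yields a uniformly random sample of k items, returned in their original order.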
Let's assume that you can generate n random indices out of m that are pairwise distinct and then look them up efficiently in the collection. If you don't need the order of the elements to be random, then you can use an algorithm due to Robert Floyd.
Random r = new Random();
Set<Integer> s = new HashSet<Integer>();
for (int j = m - n; j < m; j++) {
int t = r.nextInt(j + 1); // uniform in [0, j]; on a repeat, take j itself
s.add(s.contains(t) ? j : t);
}
If you do need the order to be random, then you can run Fisher-Yates where, instead of using an array, you use a HashMap that stores only those mappings where the key and the value are distinct. Assuming that hashing is constant time, both of these algorithms are asymptotically optimal (though clearly, if you want to randomly sample most of the array, then there are data structures with better constants).
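One way the map-backed Fisher-Yates could look (a sketch, not the answer's own code; it assumes the list supports fast random access):

public static <E> List<E> randomSample(List<E> list, int n, Random rnd) {
    Map<Integer, E> swapped = new HashMap<>();     // remembers only the positions that were moved
    List<E> result = new ArrayList<>(n);
    int m = list.size();
    for (int i = 0; i < n; i++) {
        int j = i + rnd.nextInt(m - i);            // swap target in [i, m)
        E atJ = swapped.getOrDefault(j, list.get(j));
        E atI = swapped.getOrDefault(i, list.get(i));
        result.add(atJ);                           // this is position i of the virtual shuffle
        if (j != i) {
            swapped.put(j, atI);                   // record the swap without touching the list
        }
    }
    return result;
}

The map never holds more than n entries, and the original list is left unchanged.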
Just for convenience: an MCVE with an implementation of the Reservoir Sampling proposed by amit (possible upvotes should go to him; I'm just hacking some code).
It seems this is indeed an algorithm that nicely covers both the cases where the number of elements to select is low compared to the list size and the cases where it is high compared to the list size (assuming that the properties about the randomness of the result stated on the Wikipedia page are correct).
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Random;
import java.util.TreeMap;
public class ReservoirSampling
{
public static void main(String[] args)
{
example();
//test();
}
private static void test()
{
List<String> list = new ArrayList<String>();
list.add("A");
list.add("B");
list.add("C");
list.add("D");
list.add("E");
int size = 2;
int runs = 100000;
Map<String, Integer> counts = new TreeMap<String, Integer>();
for (int i=0; i<runs; i++)
{
List<String> sample = sample(list, size);
String s = createString(sample);
Integer count = counts.get(s);
if (count == null)
{
count = 0;
}
counts.put(s, count+1);
}
for (Entry<String, Integer> entry : counts.entrySet())
{
System.out.println(entry.getKey()+" : "+entry.getValue());
}
}
private static String createString(List<String> list)
{
Collections.sort(list);
StringBuilder sb = new StringBuilder();
for (String s : list)
{
sb.append(s);
}
return sb.toString();
}
private static void example()
{
List<String> list = new ArrayList<String>();
for (int i=0; i<26; i++)
{
list.add(String.valueOf((char)('A'+i)));
}
for (int i=1; i<=26; i++)
{
printExample(list, i);
}
}
private static <T> void printExample(List<T> list, int size)
{
System.out.printf("%3d elements: "+sample(list, size)+"\n", size);
}
private static final Random random = new Random(0);
private static <T> List<T> sample(List<T> list, int size)
{
List<T> result = new ArrayList<T>(Collections.nCopies(size, (T) null));
int i = 0;
for (T element : list)
{
if (i < size)
{
result.set(i, element);
i++;
continue;
}
i++;
int j = random.nextInt(i);
if (j < size)
{
result.set(j, element);
}
}
return result;
}
}
If n is much smaller than size, you could use this algorithm, which is unfortunately quadratic in n but doesn't depend on the size of the array at all.
Example with size = 100 and n = 4.
Choose a random number from 0 to 99, let's say 42, and add it to the result.
Choose a random number from 0 to 98, let's say 39, and add it to the result.
Choose a random number from 0 to 97, let's say 41; since 41 is greater than or equal to 39, increment it to 42, but that is greater than or equal to the previously chosen 42, so increment again to 43.
...
In short, you choose from the remaining numbers and then compute which number you have actually chosen. I would use a linked list for this, but maybe there are better data structures.
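A sketch of that idea (names are mine; it returns n distinct indices in [0, size), which can then be looked up with list.get, and here the already-taken numbers are kept in a sorted ArrayList rather than a linked list):

public static List<Integer> pickDistinctIndices(int n, int size, Random rnd) {
    List<Integer> sortedChosen = new ArrayList<>(n);   // numbers already taken, kept sorted
    List<Integer> result = new ArrayList<>(n);         // the picks, in the order they were made
    for (int picked = 0; picked < n; picked++) {
        int r = rnd.nextInt(size - picked);            // position among the remaining numbers
        for (int taken : sortedChosen) {               // ascending, so each adjustment is applied once
            if (r >= taken) {
                r++;                                   // skip past a number that is already taken
            } else {
                break;
            }
        }
        result.add(r);
        int pos = Collections.binarySearch(sortedChosen, r);   // r is new, so pos is negative
        sortedChosen.add(-pos - 1, r);                 // insert while keeping the list sorted
    }
    return result;
}

For the example above (size = 100, picks 42, then 39, then 41), the third pick is shifted past 39 and then 42, giving 43, exactly as described.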
Summarizing Changwang's update: if you want more than 250,000 items, use amit's answer. Otherwise use the bitmap-based selection (Daniel Lemire's generateUniformBitmap) shown in its entirety here.
NOTE: The result is always in the original order as well.
public static <T> List<T> getNRandomElements(int n, List<T> list) {
List<T> subList = new ArrayList<>(n);
int[] ids = generateUniformBitmap(n, list.size());
for (int id : ids) {
subList.add(list.get(id));
}
return subList;
}
// https://github.com/lemire/Code-used-on-Daniel-Lemire-s-blog/blob/master/2013/08/14/java/UniformDistinct.java
private static int[] generateUniformBitmap(int num, int max) {
if (num > max) {
throw new IllegalArgumentException("Can't generate " + num + " distinct ints in [0, " + max + ")");
}
int[] ans = new int[num];
if (num == max) {
for (int k = 0; k < num; ++k) {
ans[k] = k;
}
return ans;
}
BitSet bs = new BitSet(max);
int cardinality = 0;
Random random = new Random();
while (cardinality < num) {
int v = random.nextInt(max);
if (!bs.get(v)) {
bs.set(v);
cardinality += 1;
}
}
int pos = 0;
for (int i = bs.nextSetBit(0); i >= 0; i = bs.nextSetBit(i + 1)) {
ans[pos] = i;
pos += 1;
}
return ans;
}
If you want them randomized, I use:
public static <T> List<T> getNRandomShuffledElements(int n, List<T> list) {
List<T> randomElements = getNRandomElements(n, list);
Collections.shuffle(randomElements);
return randomElements;
}
I needed something for this in C#, here's my solution which works on a generic List.
It selects N random elements of the list and places them at the front of the list.
So upon returning, the first N elements of the list are randomly selected. It is fast and efficient even when you're dealing with a very large number of elements.
static void SelectRandom<T>(List<T> list, int n)
{
if (n >= list.Count)
{
// n should be less than list.Count
return;
}
int max = list.Count;
var random = new Random();
for (int i = 0; i < n; i++)
{
int r = random.Next(max);
max = max - 1;
int irand = i + r;
if (i != irand)
{
T rand = list[irand];
list[irand] = list[i];
list[i] = rand;
}
}
}
I have a map as shown below:
Key   Value
23    20
32    20    (20+20 = 40, so min=23 and max=32)
43    18
45    24    (18+24 = 42; since 42 > 40, min and max are the same here, that is 43)
47    10
56    6     (24+10+6 = 40, so here min=45 and max=56)
43    50    (so how do we handle the case where the value is greater than the key, 50 > 43?)
So I have implemented the logic for:
1) the case where the running sum of the map values reaches exactly 40;
2) the case where the running sum, after adding a value, becomes greater than 40;
3) but I haven't implemented the scenario where the value is already greater at the very first instance, as shown above with the key 43 and the value 50.
Now please advise how to handle the third scenario. What I have implemented is:
create a Pair class that will hold the key and the value.
class Pair {
public int key;
public int value;
public Pair(int key, int value){
this.key = key;
this.value = value;
}
}
Then create a list of Pair and iterate through it. If the sum is 0, initialize the min and the max. Then, for each pair iterated, add its value to the sum. If the sum is below the limit, continue the loop and update the max key; otherwise you have two possible cases:
The sum is equal to the limit, so update the max key.
The sum is not equal to the limit (so it is above it): decrement the index and don't update the max key.
public static void main(String[] arg) {
final int LIMIT = 40;
Map<Integer, Integer> m = new LinkedHashMap<>();
//fill your map here
List<Pair> pairList = new ArrayList<>();
for(Map.Entry<Integer, Integer> entries : m.entrySet()){
pairList.add(new Pair(entries.getKey(), entries.getValue()));
}
//Now you have a list of Pair
int sum = 0;
int min = -1;
int max = -1;
for(int i = 0; i < pairList.size(); i++){
Pair p = pairList.get(i);
if(sum == 0){
min = p.key;
max = p.key;
}
sum += p.value;
if(sum < LIMIT){
max = p.key;
} else {
if(sum > LIMIT){
i--;
} else {
max = p.key;
}
System.out.println(min+"_"+max);
sum = 0;
}
}
}
Which prints:
23_32
43_43
45_56
Can you please advise how to handle the third scenario, where the value is greater at the very first instance (as shown above, the key is 43 and the value is 50)?
You still did not really specify clearly what you want to achieve. You did not say whether the keys of your input map are in ascending order (that is, whether it is a TreeMap). In the original question (linked in the first comment), your input was 2 lists. The example that you posted does not make sense at all, because it contains the key 43 twice, so it can't be a map. Your description sounds like there is some constraint between the keys and the values (the value 50 being greater than the key 43), but maybe this is just an artifact of your explanation.
Your solution approach, and how you updated some variables in your implementation, may be an attempt to give precise information. But maybe a verbal or semi-formal description of your actual goal would be more helpful here. Maybe you just cannot explain what you want to achieve; in the worst case, you don't know it.
At the moment, my interpretation is roughly the following: your input consists of two lists (!), and you are trying to find ranges of the first list so that the sum of the corresponding values in the second list is at least 40.
If this is the case, you can just compute the indices of the value list where the summation should start and where it should end. Once you have these indices, you can obtain the corresponding "keys" from the first list.
import java.util.ArrayList;
import java.util.List;
public class SumSplit
{
public static void main(String[] args)
{
List<Integer> keys = new ArrayList<Integer>();
keys.add(23);
keys.add(32);
keys.add(43);
keys.add(45);
keys.add(47);
keys.add(56);
keys.add(43);
List<Integer> values = new ArrayList<Integer>();
values.add(20);
values.add(20);
values.add(18);
values.add(24);
values.add(10);
values.add( 6);
values.add(50);
final int SPLIT_VALUE = 40;
List<Integer> minIndices = new ArrayList<Integer>();
List<Integer> maxIndices = new ArrayList<Integer>();
int sum = 0;
int minIndex = -1;
for (int i=0; i<keys.size(); i++)
{
Integer value = values.get(i);
sum += value;
if (minIndex == -1)
{
minIndex = i;
}
if (sum >= SPLIT_VALUE)
{
minIndices.add(minIndex);
maxIndices.add(i);
minIndex = -1;
sum = 0;
}
}
for (int i=0; i<minIndices.size(); i++)
{
Integer min = minIndices.get(i);
Integer max = maxIndices.get(i);
System.out.println("min: "+keys.get(min)+", max "+keys.get(max));
printInfo(min, max, keys, values);
}
}
private static void printInfo(int min, int max, List<Integer> keys, List<Integer> values)
{
int sum = 0;
for (int i=min; i<=max; i++)
{
Integer key = keys.get(i);
Integer value = values.get(i);
sum += value;
System.out.println(" "+key+" : "+value+" (sum until now: "+sum+")");
}
System.out.println("Sum: "+sum);
}
}
If this is not what you are trying to achieve, please describe clearly what your actual goal is.
I've written a Java program that stores some values:
public class array05 {
public static void main(String[] args) {
//placement of value
int arryNum[] = {2,3,4,5,4,4,3};
//placement of index, to start at 0
for(int counter=0;counter<arryNum.length;counter++){
System.out.println(counter + ":" + arryNum[counter]);
}
}
}
which generates this output:
0:2
1:3
2:4
3:5
4:4
5:4
6:3
and now I need to count how often each number occurs in output #1.
Output #2 should be this:
1: 0
2: 1
3: 2
4: 3
5: 1
It means there is ONE 2, TWO 3s, THREE 4s, and only one 5.
I am not sure how to write the code for output #2.
Is a binary search needed here?
Can anybody shed some light?
If you are expecting values between 1 and 5 in your array (I'm assuming this from your expected output):
int arryNum[] = { 2, 3, 4, 5, 4, 4, 3 };
int[] counter = new int[] { 0, 0, 0, 0, 0 };
for (int i = 0; i < arryNum.length; i++) {
counter[arryNum[i] - 1]++;
}
for (int i = 0; i < counter.length; i++)
System.out.println((i + 1) + ":" + counter[i]);
I advise you to use a Map:
If a number doesn't exist in it, add it with a value of 1.
If the number exists, add 1 to its value.
Then you print the map's keys and values.
For example, for your array {2,3,4,5,4,4,3} this will work as follows:
Does the map contain the key 2? No, so add it with value 1. (The same for 3, 4 and 5.)
Does the map contain 4? Yes! Add 1 to its value; now the key 4 has the value 2.
...
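A compact sketch of exactly those two steps, using a java.util.HashMap and Map.merge (Java 8+):

int arryNum[] = {2,3,4,5,4,4,3};
Map<Integer, Integer> counts = new HashMap<>();
for (int n : arryNum) {
    counts.merge(n, 1, Integer::sum);   // absent: put 1; present: add 1 to the existing value
}
counts.forEach((number, count) -> System.out.println(number + ":" + count));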
If you don't want to use a Map, this is how you would do it with arrays only (if you have numbers from 1 to 9 only):
int[] countArray = new int[10]; // int array elements are initialized to zero by default
for (int i = 0; i < arryNum.length; i++) {
int j = arryNum[i];
countArray[j]++;
}
countArray now holds the number of 0s at index 0, the number of 1s at index 1, and so on.
This is a solution to this problem:
import java.util.Arrays;
public class array05 {
public static void main(String[] args) {
//placement of value
int arryNum[] = {2,3,4,5,4,4,3};
// Sort the array so counting same objects is easy
Arrays.sort(arryNum);
int index = 0; // The current index
int curnum; // The current number
int count; // The count of this number
while (index < arryNum.length) {
// Obtain the current number
curnum = arryNum[index];
// Reset the counter
count = 0;
// "while the index is smaller than the amount of items
// and the current number is equal to the number in the current index,
// increase the index position and the counter by 1"
for (; index < arryNum.length && curnum == arryNum[index]; index ++, count++);
// count should contain the appropriate amount of the current
// number now
System.out.println(curnum + ":" + count);
}
}
}
People posted good solutions, so I figured I'd contribute one that will always work (not just for the current range of values), using a Map.
Something like this:
//numbers to count
int arryNum[] = {2,3,4,5,4,4,3};
//map to store results in
Map<Integer, Integer> counts = new HashMap<Integer, Integer>();
//Do the counting
for (int i : arryNum) {
if (counts.containsKey(i)) {
counts.put(i, counts.get(i)+1);
} else {
counts.put(i, 1);
}
}
//Output the results
for (int i : counts.keySet()) {
System.out.println(i+":"+counts.get(i));
}
Use Map to store count values:
import java.util.HashMap;
import java.util.Map;
class array05{
public static void main(String[] args){
// Container for count values
Map <Integer, Integer> result = new HashMap<Integer, Integer>();
int arryNum[] = {2,3,4,5,4,4,3};
for(int i: arryNum){ //foreach more correct in this case
if (result.containsKey(i)) result.put(i, result.get(i)+1);
else result.put(i, 1);
}
for (int i: result.keySet()) System.out.println(i + ":" + result.get(i));
}
}
Result below:
2:1
3:2
4:3
5:1
One approach is to use a map. As you read the array, check for each number whether it already exists in the map: if it does, just increment the value assigned to that number (the key); if not, create a new key in the map with the value 1.
Check out http://docs.oracle.com/javase/7/docs/api/java/util/HashMap.html
You can try this way too
int arrayNum[] = {2,3,4,5,4,4,3};
Map<Integer,Integer> valMap=new HashMap<>();
for(int i:arrayNum){ // the diamond operator above requires JDK 7 or newer
Integer val=valMap.get(i);
if(val==null){
val=0;
}
valMap.put(i,val+1);
}
Arrays.sort(arrayNum);
for(int i=0;i< arrayNum[arrayNum.length-1];i++){
System.out.println(i+1+" : "+((valMap.get(i+1)==null) ? 0:valMap.get(i+1)));
}
Output:
1 : 0
2 : 1
3 : 2
4 : 3
5 : 1
But the following way is better:
int arrayNum[] = {2,3,4,5,4,4,3};
Arrays.sort(arrayNum);
int countArray[]=new int[arrayNum[arrayNum.length-1]+1];
for(int i:arrayNum){
countArray[i]= countArray[i]+1;
}
for(int i=1;i<countArray.length;i++){
System.out.println(i+" : "+countArray[i]);
}
Output:
1 : 0
2 : 1
3 : 2
4 : 3
5 : 1
I would prefer some generic solution like this one:
public static <T> Map<T, Integer> toCountMap(List<T> itemsToCount) {
Map<T, Integer> countMap = new HashMap<>();
for (T item : itemsToCount) {
countMap.putIfAbsent(item, 0);
countMap.put(item, countMap.get(item) + 1);
}
return countMap;
}
//Count how many times each number is present in an array
private HashMap<Integer, Integer> countNumbersInArray(int[] array) {
HashMap<Integer, Integer> hashMap = new HashMap<>();
for (int item : array) {
if (hashMap.containsKey(item)) {
hashMap.put(item, hashMap.get(item) + 1);
} else {
hashMap.put(item, 1);
}
}
return hashMap;
}