I am reading a text and counting how many times each character occurs in it. To do this I am using an ArrayList: whenever a character that is already in my ArrayList is added again, I increment a counter. At the end of the method I can print each letter of the alphabet contained in the text together with its corresponding number of occurrences.
for i = 0; i < text.length; i++
    counter = 0
    if arraylist already contains text(i) then continue
    otherwise add text(i) to the arraylist
    for j = 0; j < text.length; j++
        if text(j) == text(i)
            counter++
    print the arraylist character + counter
This is pseudo code to give you an idea of how my program works, I don't want to post the actual code up as it is assessed and I'm conscious about people using it.
So, I'm looking for a way to find the highest- and lowest-occurring letters. I'm struggling for ideas, unless I pass both the counter and the character from the ArrayList to some sort of data structure such as a HashMap =/ I feel like I must really be overthinking it though, unless the way I've structured my program isn't the best for what I'm trying to do, because obviously I can't compare the counters on each loop iteration. I'm questioning whether a HashMap may be better and worth restarting everything for.
Anyway, any suggestions welcome! ( this is assessed so please don't give an answer, but more of a possibility for how it could be approached )
Try using a hashmap, i.e.
HashMap<Character, Integer> charMap;
Where the Integer is the count you would like to keep track of. Populate your HashMap with the appropriate characters; afterwards, you can simply fetch a character's count with the get(someChar) method and increase it by 1.
After you're done iterating through the characters, you can iterate through the hashmap to determine the character with the lowest/highest frequency.
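A minimal sketch of that approach (class and variable names are illustrative, not the asker's code):

```java
import java.util.HashMap;
import java.util.Map;

public class CharFrequency {
    // Count occurrences of each letter in the text
    static Map<Character, Integer> countChars(String text) {
        Map<Character, Integer> charMap = new HashMap<>();
        for (char c : text.toCharArray()) {
            if (Character.isLetter(c)) {
                // getOrDefault avoids a separate containsKey check
                charMap.put(c, charMap.getOrDefault(c, 0) + 1);
            }
        }
        return charMap;
    }

    public static void main(String[] args) {
        Map<Character, Integer> counts = countChars("hello world");
        // One pass over the map finds the most frequent letter
        char highest = ' ';
        int max = 0;
        for (Map.Entry<Character, Integer> e : counts.entrySet()) {
            if (e.getValue() > max) {
                max = e.getValue();
                highest = e.getKey();
            }
        }
        System.out.println("most frequent: " + highest + " (" + max + ")");
    }
}
```

The same kind of single pass, tracking a minimum instead of a maximum, finds the lowest-frequency letter.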
I've got an ArrayList consisting of card objects, which have the attributes (int) value, (String) symbol and (String) image, along with the corresponding getters.
ArrayList<card> cardDeck = new ArrayList<>();
cardDeck holds, self-explanatorily, 52 elements, each a card object.
Now, if I want to print out all the cards in cardDeck there are two simple solutions:
First solution:
for(int i = 0; i < cardDeck.size(); i++){
    System.out.println((i + 1) + ". card:");
    System.out.println(cardDeck.get(i).getValue());
    System.out.println(cardDeck.get(i).getSymbol());
    System.out.println(cardDeck.get(i).getImage());
}
Second solution:
for(int i = 0; i < cardDeck.size(); i++){
    card temp = cardDeck.get(i);
    System.out.println((i + 1) + ". card:");
    System.out.println(temp.getValue());
    System.out.println(temp.getSymbol());
    System.out.println(temp.getImage());
}
My question is whether there is any noticeable difference in either execution time or complexity.
On first thought, in the first solution the program would have to look up the card in the ArrayList every time before being able to print its info, which isn't the case in the second solution, as a temporary copy was made.
On second thought though, even in the second solution, the program would still need to look up the info of the temporary card object with every call.
Any help / ideas / advice appreciated!
So we have:
- 3 array lookups (with the same index and no modification of the array, so the compiler MAY optimize them) in the first solution
- error-prone code in the first solution (what happens if you need to change the index to i+1 and forget to correct the code in all 3 places?)
versus:
- 1 array lookup in the second solution, optimized without relying on the compiler
- more readable code in the second solution (if you replace temp by card, which you can do if you properly start the class name with an uppercase letter: Card card)
Array lookups are not that cheap in Java - the arrays are guarded (bounds checks) to prevent buffer overflow injection vulnerabilities.
So you have two very good reasons that tell you to go with the second solution.
Using Java 8,
cardDeck.parallelStream().forEach(card -> {
    System.out.println(card.getValue());
    System.out.println(card.getSymbol());
    System.out.println(card.getImage());
});
This does not guarantee better performance; that depends on the number of CPU cores available. Note also that a parallel stream gives no guarantee about the order in which the cards are printed.
Peter has already said what would be the better idea from the programming perspective.
I want to add that the OP asked about complexity. I interpret that in the sense of asymptotic time required relative to the size of the card deck.
The answer is that from a theoretical complexity perspective, both approaches are the same. Each array lookup only adds a constant factor to the required time; both are O(n), with n being the number of cards.
On another note, the OP asked about copying elements of the list. Just to make it clear: the statement card temp = cardDeck.get(i) does not cause the i-th list element to be copied. The temp variable simply points to the element located at the i-th position of cardDeck at the time the loop runs.
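A quick way to convince yourself that no copy is made (a small sketch using StringBuilder as a stand-in for a mutable card object):

```java
import java.util.ArrayList;
import java.util.List;

public class ReferenceDemo {
    public static void main(String[] args) {
        List<StringBuilder> list = new ArrayList<>();
        list.add(new StringBuilder("ace"));

        // No copy is made here: temp refers to the same object as list.get(0)
        StringBuilder temp = list.get(0);
        temp.append(" of spades");

        // The change is visible through the list, showing both point to one object
        System.out.println(list.get(0)); // prints "ace of spades"
    }
}
```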
First, you have other solutions, for example using a for-each loop or the forEach method with a lambda expression.
As for speed, you don't have to worry about it as long as your program runs on regular computers and you don't have to deal with weak or low-powered processors. In your case, you can make your app less complex by using a functional style, e.g.
cardDeck.forEach(card -> {
    System.out.println(card.getValue());
    System.out.println(card.getSymbol());
    System.out.println(card.getImage());
});
I have a String array that contains a lot of words. I wish to get the index of a word contained in the array (-1 if it is not contained).
I first made a loop to search through all elements in the array while incrementing a variable, and when I find the word, I return the variable's value.
However the array can be very very very big so searching through all elements is extremely slow. I have decided that before adding a new word in my string array, I would use hashCode() % arrayLength to get the index of where I should put it. Then, to get the index back, I would just reuse hashCode() % arrayLength to instantly know at what index it is.
The problem is that sometimes there are "clashes", and two elements can have the same index in the array.
Anyone has an idea how to deal with it? Or any other alternatives to get the index of an element faster?
You are trying to implement Open Addressing using an array. Unless this is a homework exercise, Java standard library already has classes to solve the search and collision problem.
You probably want to use a HashSet to check if the String exists. Behind the scenes it uses a HashMap, which implements Separate Chaining to resolve collisions.
String[] words = { "a" };
Set<String> set = new HashSet<>(Arrays.asList(words));
return set.contains("My Word") ? 1 : -1;
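Since the question asks for the index rather than mere membership, a HashMap from word to position, built once, gives O(1) average lookups. A sketch (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class WordIndex {
    // Build the index once: word -> position in the array
    static Map<String, Integer> buildIndex(String[] words) {
        Map<String, Integer> index = new HashMap<>();
        for (int i = 0; i < words.length; i++) {
            index.putIfAbsent(words[i], i); // keep the first occurrence
        }
        return index;
    }

    // Returns the word's position, or -1 if absent, as the question asks
    static int indexOf(Map<String, Integer> index, String word) {
        return index.getOrDefault(word, -1);
    }
}
```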
The technique you are referring to is one of the implementations of hash tables in general, a form of a general technique called Open Addressing. If you have calculated the index of the word based on hashCode() % array.length and find a conflict (the slot is non-empty and does not hold the element you are looking for), then you have three common ways to perform conflict resolution:
Linear Probing
This is done by incrementing the position and checking whether it is empty or holds the element you are looking for. That is, your second position will be (hashCode(input) + 1) % array.length, then (hashCode(input) + 2) % array.length, and so on. The problem with this approach is that insertion and lookup performance degrade towards linear O(n) as the array approaches being completely populated.
Quadratic Probing
This is an optimization of the above technique that jumps quadratically on a clash. So your second index will be (hashCode(input) + 1*1) % array.length, then (hashCode(input) + 2*2) % array.length, and so on, which helps spread the probes out and reach a free location faster.
Double Hashing
This is an even more effective way to handle resolution: introduce a second hashing function hashCode2() and use it in conjunction with the first. In that case, your next search index will be (hashCode(input) + 1*hashCode2(input)) % array.length, then (hashCode(input) + 2*hashCode2(input)) % array.length, and so on.
The more randomly distributed your probe sequence is, the better the performance over large hash tables.
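For illustration, a minimal fixed-size linear-probing table along these lines (purely a sketch: no resizing, no deletion):

```java
public class LinearProbing {
    private final String[] table;

    LinearProbing(int capacity) {
        table = new String[capacity];
    }

    // Probe sequence: hash, hash+1, hash+2, ... (mod table length)
    private int probe(String word, int attempt) {
        int h = word.hashCode() & 0x7fffffff; // force a non-negative hash
        return (h + attempt) % table.length;
    }

    void insert(String word) {
        for (int attempt = 0; attempt < table.length; attempt++) {
            int slot = probe(word, attempt);
            if (table[slot] == null || table[slot].equals(word)) {
                table[slot] = word;
                return;
            }
        }
        throw new IllegalStateException("table is full");
    }

    // Returns the slot index, or -1 if the word is absent
    int indexOf(String word) {
        for (int attempt = 0; attempt < table.length; attempt++) {
            int slot = probe(word, attempt);
            if (table[slot] == null) return -1; // empty slot: not present
            if (table[slot].equals(word)) return slot;
        }
        return -1;
    }
}
```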
Hope this helps.
I have a general programming question, that I have happened to use Java to answer. This is the question:
Given an array of ints write a program to find out how many numbers that are not unique are in the array. (e.g. in {2,3,2,5,6,1,3} 2 numbers (2 and 3) are not unique). How many operations does your program perform (in O notation)?
This is my solution.
int counter = 0;
for(int i = 0; i < theArray.length - 1; i++){
    for(int j = i + 1; j < theArray.length; j++){
        if(theArray[i] == theArray[j]){
            counter++;
            break; // go to the next i: we know theArray[i] isn't unique, so we don't need to keep comparing it
        }
    }
}
return counter;
Now, in my code every element is compared with every other element, so there are about n(n-1)/2 operations, giving O(n^2). Please tell me if you think my code is incorrect/inefficient or my O expression is wrong.
Why not use a Map as in the following example:
// NOTE: autoboxing converts the int elements of theArray to Integer
// automatically when they are used as Map keys and values
Map<Integer, Integer> elementCount = new HashMap<Integer, Integer>();
for (int i = 0; i < theArray.length; i++) {
    if (elementCount.containsKey(theArray[i])) {
        elementCount.put(theArray[i], elementCount.get(theArray[i]) + 1);
    } else {
        elementCount.put(theArray[i], 1);
    }
}
List<Integer> moreThanOne = new ArrayList<Integer>();
for (Integer key : elementCount.keySet()) {
    if (elementCount.get(key) > 1) {
        moreThanOne.add(key);
    }
}
// do whatever you want with the moreThanOne list
Notice that this method iterates twice (I'm sure there's a way to do it in one pass): once through theArray, and then implicitly again through the key set of elementCount, which, if no two elements are equal, will be exactly as large. However, iterating through the list twice serially is still O(n) instead of O(n^2), and thus has a much better asymptotic running time.
Your code doesn't do what you want. If you run it on the array {2, 2, 2, 2}, you'll find that it returns 3 instead of 1. You'll have to find a way to make sure that the counting is never repeated.
However, your Big O expression is correct as a worst-case analysis, since every element might be compared with every other element.
Your analysis is correct, but you could easily get it down to O(n) time. Try using a HashMap<Integer, Integer> to store previously seen values as you iterate through the array (the key is the number you've seen, the value is the number of times you've seen it). Each time you are about to add an integer to the HashMap, check whether it's already there; if it is, just increment that integer's counter. Then, at the end, loop through the map and count how many keys have a corresponding value higher than 1.
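A sketch of that map-based counting (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class NonUnique {
    // Counts how many distinct values occur more than once: O(n) average
    static int countNonUnique(int[] values) {
        Map<Integer, Integer> seen = new HashMap<>();
        for (int v : values) {
            // increment the counter for v, starting at 0 if unseen
            seen.put(v, seen.getOrDefault(v, 0) + 1);
        }
        int nonUnique = 0;
        for (int count : seen.values()) {
            if (count > 1) nonUnique++;
        }
        return nonUnique;
    }
}
```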
First, your approach is what I would call "brute force", and it is indeed O(n^2) in the worst case. It's also incorrectly implemented, since numbers that repeat n times are counted n-1 times.
Setting that aside, there are a number of ways to approach the problem. The first (which a number of answers have suggested) is to iterate the array, using a map to keep track of how many times each element has been seen. Assuming the map uses a hash table for its underlying storage, the average-case complexity is O(n), since gets and inserts into the map are O(1) on average, and you only need to iterate the list and the map once each. Note that this is still O(n^2) in the worst case, since there's no guarantee that the hashing will produce constant-time results.
Another approach is to sort the array first and then iterate the sorted array looking for duplicates. This approach is entirely dependent on the sort chosen, and can be anywhere from O(n^2) (for a naive bubble sort) to O(n log n) worst case (for a merge sort) to O(n log n) average and likely case (for a quicksort).
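For illustration, the sort-then-scan version, using the standard library's Arrays.sort (O(n log n)) followed by a single linear pass:

```java
import java.util.Arrays;

public class SortScan {
    // Sort first, then count runs of equal values that are longer than 1
    static int countNonUnique(int[] values) {
        int[] sorted = values.clone(); // leave the caller's array untouched
        Arrays.sort(sorted);
        int nonUnique = 0;
        int i = 0;
        while (i < sorted.length) {
            int j = i;
            while (j < sorted.length && sorted[j] == sorted[i]) j++; // skip the run
            if (j - i > 1) nonUnique++; // the run contained duplicates
            i = j;
        }
        return nonUnique;
    }
}
```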
That's the best you can do with the sorting approach assuming arbitrary objects in the array. Since your example involves integers, though, you can do much better by using radix sort, which will have worst-case complexity of O(dn), where d is essentially constant (since it maxes out at 9 for 32-bit integers.)
Finally, if you know that the elements are integers, and that their magnitude isn't too large, you can improve the map-based solution by using an array of size ElementMax, which would guarantee O(n) worst-case complexity, with the trade-off of requiring 4*ElementMax additional bytes of memory.
I think your time complexity of O(n^2) is correct.
If space complexity is not an issue, you can have an array of 256 counters (one per standard ASCII character) and start filling it with values. For example:
// Java initializes the int array to all zeros. The whole thing runs in
// O(n + m), where n is the length of theArray and m is the length of array.
int[] array = new int[256];
for(int i = 0; i < theArray.length; i++)
    array[theArray[i]]++;
for(int i = 0; i < array.length; i++)
    if(array[i] > 1)
        System.out.print(i);
As others have said, an O(n) solution is quite possible using a hash. In Perl:
my @data = (2,3,2,5,6,1,3);
my %count;
$count{$_}++ for @data;
my $n = grep $_ > 1, values %count;
print "$n numbers are not unique\n";
OUTPUT
2 numbers are not unique
So here I am with this simple question. Consider these two for loops and please explain to me whether there's any difference between the two ways of writing them.
method 1 :
for(i=(max-1) ; i>=0 ; i--){ do-some-stuff }
method 2 :
for(i=max ; i>0 ; i--) { do-some-stuff }
The reason I'm asking is that today at school, while we were looking at some Java functions, there was this palindrome method which used the length of the word passed to it as max, and it cycled through the for loop using the first method. Can anyone clarify why the person who wrote that piece of code preferred that method?
Yes, there's a big difference: in the first version, the range is [0, max-1]; in the second version, it's [1, max]. If you're trying to access a 0-based array with max elements, for example, the second version will blow up and the first won't.
If the order in which the loop ran didn't matter, I'd personally use the more idiomatic ascending sequence:
for (int i = 0; i < max; i++)
... but when descending, the first form gives the same range of values as this, just in the opposite order.
Both loops iterate max times, but the ranges are different:
The first loop's range is max - 1 down to 0 (both inclusive).
The second loop's range is max down to 1 (both inclusive).
Therefore, if you are using i as an array index, or doing some work that is a function of i, the terminal values will create problems (for example, 0 is visited by the first loop but not by the second). But if you simply want to iterate the loop max times and do work that is independent of the value of i, then there is no difference.
I'm doing a wee project (in Java) while uni is out just to test myself and I've hit a stumbling block.
I'm trying to write a program that will read in a text version of a dictionary, store it in a data structure, then ask the user for a random string (preferably a nonsense string; only letters and hyphens, no numbers or other punctuation, as I'm not interested in anything else), find all the anagrams of the input string, compare them to the dictionary data structure, and return a list of all the possible anagrams that are in the dictionary.
Okay, for step 1 and 2 (reading from the dictionary), when I'm reading everything in I stored it in a Map, where the keys are the letter of the alphabet and the values are ArrayLists storing all the words beginning with that letter.
I'm stuck at finding all the anagrams. I figured out how to calculate the number of possible permutations recursively (proudly), but I'm not sure how to go about actually doing the rearranging.
Is it better to break the string up into chars and play with it that way, or to split it up and keep the elements as strings? I've seen sample code online on different sites, but I don't want to see code; I'd like to know the kind of approach/ideas behind developing a solution, as I'm kind of stuck on how to even begin :(
I mean, I think I know how I'm going to go about the comparison to the dictionary ds once I've generated all permutations.
Any advice would be helpful, but not code if that'd be alright, just ideas.
P.S. If you're wanting to see my code so far (for whatever reason), I'll post what I've got.
public static String str = "overflow";
public static ArrayList<String> possibilities = new ArrayList<String>();

public static void main(String[] args)
{
    permu(new boolean[str.length()], "");
}

// cur holds the permutation built so far; used marks which
// positions of str are already part of it
public static void permu(boolean[] used, String cur)
{
    if (cur.length() == str.length())
    {
        possibilities.add(cur);
        return;
    }
    for (int a = 0; a < str.length(); a++)
    {
        if (!used[a])
        {
            used[a] = true;
            cur += str.charAt(a);
            permu(used, cur);
            used[a] = false; // backtrack
            cur = cur.substring(0, cur.length() - 1);
        }
    }
}
Simple, with a really horrible run-time, but it will get the job done.
EDIT: The more advanced version of this is something called a dictionary trie. Basically it's a tree in which each node has 26 child nodes, one for each letter of the alphabet, and each node also has a boolean telling whether or not it is the end of a word. With this you can easily insert words into the dictionary and easily check whether you are even on a correct path towards creating a word.
I will paste the code if you would like
Computing the permutations really seems like a bad idea in this case. The word "overflow" alone has 40320 permutations.
A better way to find out whether one word is a permutation of another is to count how many times each letter occurs (giving a 26-tuple) and compare those tuples against each other.
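A minimal sketch of that comparison (assuming plain a-z input; names are illustrative):

```java
import java.util.Arrays;

public class AnagramCheck {
    // Two words are anagrams iff their 26-entry letter counts match
    static boolean isAnagram(String a, String b) {
        int[] countA = new int[26];
        int[] countB = new int[26];
        for (char c : a.toLowerCase().toCharArray())
            if (c >= 'a' && c <= 'z') countA[c - 'a']++;
        for (char c : b.toLowerCase().toCharArray())
            if (c >= 'a' && c <= 'z') countB[c - 'a']++;
        return Arrays.equals(countA, countB);
    }
}
```

This makes each pairwise check linear in the word length, with no permutations generated at all.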
It might be helpful if you gave an example to clarify the problem. As I understand it, you are saying that if the user typed in, say, "islent", the program would reply with "listen", "silent", and "enlist".
I think the easiest solution would be to take each word in your dictionary and store it with both the word as entered, and with the word with the letters re-arranged into alphabetical order. Let's call this the "canonical value". Index on the canonical value. Then convert the input into the canonical value, and do a straight search for matches.
To pursue the above example: when we build the dictionary and see the word "listen", we translate it to "eilnst" and store "eilnst -> listen". We'd also store "eilnst -> silent" and "eilnst -> enlist". Then we take the input string, convert it to "eilnst", do a search, and immediately find the three hits.
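A sketch of that canonical-value dictionary (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AnagramDictionary {
    private final Map<String, List<String>> byCanonical = new HashMap<>();

    // Canonical value: the word's letters sorted alphabetically
    static String canonical(String word) {
        char[] letters = word.toLowerCase().toCharArray();
        Arrays.sort(letters);
        return new String(letters);
    }

    void add(String word) {
        byCanonical.computeIfAbsent(canonical(word), k -> new ArrayList<>()).add(word);
    }

    // All dictionary words that are anagrams of the input: one map lookup
    List<String> anagramsOf(String input) {
        return byCanonical.getOrDefault(canonical(input), Collections.emptyList());
    }
}
```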