Improve the performance of concatenating Strings in a loop in Java

I'm writing a function in Java which can be simplified as follows:
StringBuilder s = new StringBuilder(50);
Queue<String> q = new LinkedList<>();
for (/* some condition */) {
    s.setLength(0);
    s.append("...").append("...").append("...").append("..."); // add several strings to s
    q.add(s.toString());
}
I save those strings in the queue q, and when the size of the queue exceeds a certain value, I write q to a database. However, it becomes slow, especially when the number of loop iterations is huge (millions). I assume that is because the concatenation takes a large amount of time. So is there any better way to do the concatenation? Thanks in advance for your help!
Update:
I want to use the same StringBuilder to create the strings, so I call s.setLength(0) to reset it at the beginning of each iteration. These strings hold the information of new nodes, such as their IDs and some properties, so I need to retrieve this information by calling some functions and append it to the string s. The idea is that when the queue reaches a specific size, I pop the information from the queue to create the nodes in the Neo4j database, since it would cost more to use one transaction per new node.
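For context, the batching pattern described here could be sketched like this (BATCH_SIZE and writeBatch() are hypothetical, standing in for whatever wraps a single Neo4j transaction):
private static final int BATCH_SIZE = 10_000; // hypothetical threshold

void addNodeInfo(Queue<String> q, String nodeInfo) {
    q.add(nodeInfo);
    if (q.size() >= BATCH_SIZE) {
        writeBatch(q); // hypothetical helper: one transaction for the whole batch
        q.clear();
    }
}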

If you are calling functions to get the strings, then the fastest way is to not use the StringBuilder at all. Instead, use the concat() method, which is the best choice here performance-wise.
Here is an example:
Queue<String> q = new LinkedList<>();
for (/* any condition */) {
    q.add(getStringOne().concat(getStringTwo()).concat(getStringThree()));
}
Update: After a comment suggested that StringBuilder is better than concat() in this case, I ran some tests. StringBuilder takes more than 2x the time compared to concat(). This is because we are not using the loop to build up a single string; we create a new string on every iteration.
Here is the repl which I used to perform test: https://repl.it/#ankitbeniwal/StringBuilderVersusStringConcat
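For reference, the comparison is along these lines (a naive sketch of the same kind of test, not a rigorous JMH benchmark; JIT warm-up and dead-code elimination can easily distort such numbers):
public class ConcatVsBuilder {
    public static void main(String[] args) {
        final int N = 1_000_000;
        String a = "node-", b = "42", c = "-props";
        long sink = 0; // consume results so the JIT can't drop the work

        long t0 = System.nanoTime();
        for (int i = 0; i < N; i++) {
            sink += a.concat(b).concat(c).length();
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < N; i++) {
            sink += new StringBuilder().append(a).append(b).append(c).toString().length();
        }
        long t2 = System.nanoTime();

        System.out.println("concat:  " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("builder: " + (t2 - t1) / 1_000_000 + " ms");
        System.out.println(sink); // keep sink live
    }
}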

Related

Find word frequency from huge text file using self-balancing immutable Binary Search Tree in Java?

I am trying to understand how I might implement a BST that reads huge text files and stores the frequency of every word in Java. I'm also trying to make it work in a multi-threaded way, so I believe I'd have to make it thread-safe as well!
EDIT: Thank you for your answers. But I'm looking for Java code wherein we construct the BST as well as add the mentioned functionality, without libraries.
Just use a ConcurrentMap from String to AtomicInteger or LongAdder. Add each word the first time it is found and increment the counter after that. In Java 8, you can use computeIfAbsent to do this as a one-liner; in earlier versions you can use putIfAbsent. In either case, it's best to check whether the count object already exists with a get call first, since the methods that can modify the map are slower, even when they don't end up adding to the map. Only if the initial fast-path get() returns no existing element do you proceed to the ...ifAbsent call:
// chm is a ConcurrentHashMap<String, AtomicInteger>
for (String word : words) {
    AtomicInteger count = chm.get(word); // fast path: most words already have a counter
    if (count == null) {
        if ((count = chm.putIfAbsent(word, new AtomicInteger(1))) == null) {
            continue; // we won the race: counter created with value 1
        }
    }
    count.incrementAndGet();
}
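For reference, the Java 8 one-liner mentioned above could look like this (a sketch using LongAdder from java.util.concurrent.atomic, which is usually cheaper than AtomicInteger under heavy contention):
ConcurrentMap<String, LongAdder> counts = new ConcurrentHashMap<>();
for (String word : words) {
    // creates the adder only on the first sighting of the word, then increments
    counts.computeIfAbsent(word, k -> new LongAdder()).increment();
}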
This will be fast and concurrent. You can split the file into chunks at the top level, and process each chunk on a different thread.
That's if you insist on a shared structure for counting. It would likely be faster to simply have each thread keep its counts in a private HashMap and then reduce the per-thread results at the end by summing the maps.
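A minimal sketch of that reduction step, assuming each worker thread has filled its own Map<String, Long> (perThreadCounts is hypothetical):
Map<String, Long> total = new HashMap<>();
for (Map<String, Long> partial : perThreadCounts) {
    partial.forEach((word, n) -> total.merge(word, n, Long::sum));
}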
Maybe you should split your file into chunks, process each chunk with a non-thread-safe algorithm on a different thread, and then merge the results. That way you won't suffer any synchronization penalty.
Or just use a single thread, because the bottleneck is not the processor but the hard disk.
A self-balancing tree isn't immutable by definition.
You could look up AVL trees or another one from this list.
However, I recommend another approach: Use a Trie to store the words. It will save a lot of space and will be much faster than a binary search tree.
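A minimal trie sketch for word counting, assuming words are lowercase a-z only:
class TrieNode {
    final TrieNode[] children = new TrieNode[26];
    int count; // frequency of the word ending at this node
}

static void addWord(TrieNode root, String word) {
    TrieNode node = root;
    for (int i = 0; i < word.length(); i++) {
        int c = word.charAt(i) - 'a';
        if (node.children[c] == null) {
            node.children[c] = new TrieNode();
        }
        node = node.children[c];
    }
    node.count++;
}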

How do we compare a list of strings to just a string

I have:
String[] Value = {"Available to trade at 12 30", "I love sherlock"};
and I want to check whether "sherlock" is present in the array without using a for-each loop.
Java streams are handy for this
String[] value = {"Available to trade at 12 30", "I love sherlock"};
Stream.of(value).anyMatch(s -> s.contains("sherlock"));
If you want to get the string that has sherlock:
String[] value = {"Available to trade at 12 30", "I love sherlock"};
Stream.of(value).filter(s -> s.contains("sherlock")).findFirst().get();
Or use findAny(), if you don't care about order. Both findAny and findFirst return an Optional, which will be empty if there are no matches and .get() will throw.
You can do something like this:
Arrays.asList(Value).contains("string to be searched");
A better option is to convert the array to a list so that more methods are available. Note, though, that contains() checks whole-element equality, not substrings, so on its own it won't find "sherlock" inside a longer sentence.
The problem inherently requires you to iterate through all the elements in the array, essentially performing a for-each anyway. However, you can choose whether you want good memory performance or good execution time for the lookup.
If you want good memory performance, leave it as is and iterate through the list every time you perform the check. For good execution time, you can build a HashSet and populate it with every substring present in the list. This is time-consuming and memory-intensive up front, but once you have built the set you can keep it and reuse it, making each check an expected constant-time lookup.
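A sketch of that precomputation (every substring of every element; memory-hungry, so only sensible for small inputs):
Set<String> substrings = new HashSet<>();
for (String s : Value) {
    for (int i = 0; i < s.length(); i++) {
        for (int j = i + 1; j <= s.length(); j++) {
            substrings.add(s.substring(i, j));
        }
    }
}
boolean present = substrings.contains("sherlock"); // expected constant time per check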
You could convert the array into a single String and then use the String .contains method.
String entireArray = Arrays.toString(Value);
boolean sherlockPresent = entireArray.contains("sherlock");

Efficient Intersection and Union of Lists of Strings

I need to efficiently find the ratio of (intersection size / union size) for pairs of Lists of strings. The lists are small (mostly about 3 to 10 items), but I have a huge number of them (~300K) and have to do this on every pair, so I need this actual computation to be as efficient as possible. The strings themselves are short unicode strings -- averaging around 5-10 unicode characters.
The accepted answer to Efficiently compute Intersection of two Sets in Java? looked extremely helpful, but (likely because my sets are small?) I haven't gotten much improvement from the approach it suggests.
Here's what I have so far:
protected double uuEdgeWeight(UVertex u1, UVertex u2) {
    Set<String> u1Tokens = new HashSet<String>(u1.getTokenlist());
    List<String> u2Tokens = u2.getTokenlist();
    int intersection = 0;
    int union = u1Tokens.size();
    for (String s : u2Tokens) {
        if (u1Tokens.contains(s)) {
            intersection++;
        } else {
            union++;
        }
    }
    return ((double) intersection / union);
}
My question is: is there anything I can do to improve this, given that I'm working with Strings, which may be more time-consuming to compare for equality than other data types?
I think that because I'm comparing multiple u2's against the same u1, I could get some improvement by doing the cloning into a HashSet outside of the loop (which isn't shown), i.e. passing in the HashSet instead of the object from which I pull the list and then clone it into a set.
Anything else I can do to squeak out even a small improvement here?
Thanks in advance!
Update
I've updated the numeric specifics of my problem above. Also, due to the nature of the data, most (90%?) of the intersections will be empty. My initial attempt cloned one set, used retainAll with the other set to find the intersection, and then short-circuited before doing the clone-and-addAll to find the union. That was about as efficient as the code posted above, presumably because the slower overall algorithm was offset by being able to short-circuit much of the time. So I'm thinking about ways to take advantage of the infrequency of overlapping sets, and would appreciate any suggestions in that regard.
Thanks in advance!
You would get a large improvement by moving the HashSet outside of the loop.
If the HashSet really has only a few entries in it, then you may actually be just as fast using an array, since traversing an array is much simpler/faster. I'm not sure where the threshold lies, but I'd measure both, and be sure to do the measurements correctly (i.e. warm-up loops before timed loops, etc.).
One thing to try might be using a sorted array for the things to compare against. Scan until you go past the current element, and you can immediately abort the search. That improves processor branch prediction and reduces the number of comparisons a bit.
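A sketch of the sorted-array idea as a standard merge-style scan (each side advances independently, so the scan stops as soon as either array is exhausted):
static int intersectionSize(String[] a, String[] b) { // both sorted, no duplicates
    int i = 0, j = 0, count = 0;
    while (i < a.length && j < b.length) {
        int cmp = a[i].compareTo(b[j]);
        if (cmp == 0) { count++; i++; j++; }
        else if (cmp < 0) { i++; }
        else { j++; }
    }
    return count;
}
The union size then falls out as a.length + b.length - intersection.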
If you want to optimize this function (not sure whether it works in your context), you could assign each unique String an int ID; when the String is added to a UVertex, set that ID as a bit in a BitSet.
This function then becomes a set.and(otherSet) for the intersection and a set.or(otherSet) for the union. Depending on the number of unique Strings, that could be quite efficient.
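A minimal sketch of the ratio computation on that representation, assuming the String-to-bit-index mapping has already been built (e.g. with a HashMap<String, Integer>); note that and()/or() mutate the receiver, so work on clones:
static double ratio(BitSet a, BitSet b) {
    BitSet inter = (BitSet) a.clone();
    inter.and(b); // intersection
    BitSet union = (BitSet) a.clone();
    union.or(b); // union
    int u = union.cardinality();
    return u == 0 ? 0.0 : (double) inter.cardinality() / u;
}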

Appending Strings vs appending chars in Java

I am trying to solve an algorithmic task where speed is of primary importance. In the algorithm, I am using a DFS in a graph, and at every step I concatenate a char and a String. I am not sure whether this is the bottleneck of my algorithm (probably not), but I am curious what the fastest and most efficient way to do this is.
At the moment, I use this:
transPred.symbol + word
I think there might be a better alternative than the "+" operator, but most String methods only work with other Strings (would converting my char into a String and using one of them make a difference?).
Thanks for the answers.
EDIT:
for (Transition transPred : state.transtitionsPred) {
    walk(someParameters, transPred.symbol + word);
}
transPred.symbol is a char and word is a string
A very common problem / concern.
Bear in mind that each String in Java is immutable. Thus, if you modify a string, a new object is actually created. This results in one new object for each concatenation you're doing above. That isn't great, as it's simply creating garbage that will have to be collected at some point.
If your graph is very large, this generates a lot of garbage during your traversal logic, and it may slow down your algorithm.
To avoid creating a new String for each concatenation, use the StringBuilder. You can declare one outside your loop and then append each character with StringBuilder.append(char). This does not incur a new object creation for each append() operation.
After your loop you can use StringBuilder.toString(), this will create a new object (the String) but it will only be one for your entire loop.
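Applied to the loop from the question, that might look like the following sketch (walk() presumably needs its own String, so one String per iteration is still created, but the builder's buffer is reused):
StringBuilder sb = new StringBuilder(word.length() + 1);
for (Transition transPred : state.transtitionsPred) {
    sb.setLength(0); // reuse the builder's internal buffer
    sb.append(transPred.symbol).append(word);
    walk(someParameters, sb.toString());
}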
Since you only replace one char of the string at each iteration, I don't think there is anything faster than a simple + append operation. As mentioned, Strings are immutable, so when you append a char to one you will get a new String object, but this seems unavoidable in your case since you need a new string at each iteration.
If you really want to optimize this part, consider using something mutable like an array of chars. This would allow you to replace the first character without any excessive object creation.
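A sketch of that idea applied to the question's loop (the word's chars sit at positions 1..length and only buf[0] changes per transition; still one String per call if walk() requires one):
char[] buf = new char[word.length() + 1];
word.getChars(0, word.length(), buf, 1); // copy word into positions 1..length
for (Transition transPred : state.transtitionsPred) {
    buf[0] = transPred.symbol; // replace just the first char
    walk(someParameters, new String(buf));
}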
Also, I think you're right when you say that this probably isn't your bottleneck. And remember that premature optimization is the root of all evil etc. (Don't mind the irony that the most popular example of good optimization is avoiding excessive string concatenation).

The best way to store and access 120,000 words in Java

I'm programming a java application that reads strictly text files (.txt). These files can contain upwards of 120,000 words.
The application needs to store all +120,000 words. It needs to name them word_1, word_2, etc. And it also needs to access these words to perform various methods on them.
The methods all have to do with Strings. For instance, a method will be called to say how many letters are in word_80. Another method will be called to say what specific letters are in word_2200.
In addition, some methods will compare two words. For instance, a method will be called to compare word_80 with word_2200 and needs to return which has more letters. Another method will be called to compare word_80 with word_2200 and needs to return what specific letters both words share.
My question is: Since I'm working almost exclusively with Strings, is it best to store these words in one large ArrayList? Several small ArrayLists? Or should I be using one of the many other storage possibilities, like Vectors, HashSets, LinkedLists?
My two primary concerns are 1.) access speed, and 2.) having the greatest possible number of pre-built methods at my disposal.
Thank you for your help in advance!!
Wow! Thanks everybody for providing such a quick response to my question. All your suggestions have helped me immensely. I’m thinking through and considering all the options provided in your feedback.
Please forgive me for any fuzziness; and let me address your questions:
Q) English?
A) The text files are actually books written in English. The occurrence of a word in a second language would be rare – but not impossible. I’d put the percentage of non-English words in the text files at .0001%
Q) Homework?
A) I’m smilingly looking at my question’s wording now. Yes, it does resemble a school assignment. But no, it’s not homework.
Q) Duplicates?
A) Yes. And probably every five or so words, considering conjunctions, articles, etc.
Q) Access?
A) Both random and sequential. It’s certainly possible a method will locate a word at random. It’s equally possible a method will want to look for a matching word between word_1 and word_120000 sequentially. Which leads to the last question…
Q) Iterate over the whole list?
A) Yes.
Also, I plan on growing this program to perform many other methods on the words. I apologize again for my fuzziness. (Details do make a world of difference, do they not?)
Cheers!
I would store them in one large ArrayList and worry about (possibly unnecessary) optimisations later on.
Being inherently lazy, I don't think it's a good idea to optimise unless there's a demonstrated need. Otherwise, you're just wasting effort that could be better spent elsewhere.
In fact, if you can set an upper bound to your word count and you don't need any of the fancy List operations, I'd opt for a normal (native) array of string objects with an integer holding the actual number. This is likely to be faster than a class-based approach.
This gives you the greatest speed in accessing the individual elements whilst still retaining the ability to do all that wonderful string manipulation.
Note I haven't benchmarked native arrays against ArrayLists. They may be just as fast as native arrays, so you should check this yourself if you have less blind faith in my abilities than I do :-).
If they do turn out to be just as fast (or even close), the added benefits (expandability, for one) may be enough to justify their use.
Just confirming pax's assumptions, with a very naive benchmark:
// needs: import java.util.ArrayList; import java.util.Date; import java.util.Random;
public static void main(String[] args)
{
    int size = 120000;
    String[] arr = new String[size];
    ArrayList<String> al = new ArrayList<>(size);
    for (int i = 0; i < size; i++)
    {
        String put = Integer.toHexString(i);
        al.add(put);
        arr[i] = put;
    }
    Random rand = new Random();
    Date start = new Date();
    for (int i = 0; i < 10000000; i++)
    {
        int get = rand.nextInt(size);
        String fetch = arr[get];
    }
    Date end = new Date();
    long diff = end.getTime() - start.getTime();
    System.out.println("array access took " + diff + " ms");
    start = new Date();
    for (int i = 0; i < 10000000; i++)
    {
        int get = rand.nextInt(size);
        String fetch = al.get(get);
    }
    end = new Date();
    diff = end.getTime() - start.getTime();
    System.out.println("array list access took " + diff + " ms");
}
and the output:
array access took 578 ms
array list access took 907 ms
Running it a few times, the actual times vary somewhat, but in general array access is between 200 and 400 ms faster over 10,000,000 iterations.
If you will access these Strings sequentially, the LinkedList would be the best choice.
For random access, ArrayLists have a nice memory usage/access speed trade-off.
My take:
For a non-threaded program, an ArrayList is always fastest and simplest.
For a threaded program, a java.util.concurrent.ConcurrentHashMap<Integer,String> or java.util.concurrent.ConcurrentSkipListMap<Integer,String> is awesome. Perhaps you would later like to allow threads so as to make multiple queries against this huge thing simultaneously.
If you're going for fast traversal as well as compact size, use a DAWG (Directed Acyclic Word Graph.) This data structure takes the idea of a trie and improves upon it by finding and factoring out common suffixes as well as common prefixes.
http://en.wikipedia.org/wiki/Directed_acyclic_word_graph
Use a Hashtable? This will give you your best lookup speed.
ArrayList/Vector if order matters (it appears to, since you are calling the words "word_xxx"), or Hashtable/HashMap if it doesn't.
I'll leave the exercise of figuring out why you would want to use an ArrayList vs. a Vector, or a Hashtable vs. a HashMap, up to you, since I have a sneaking suspicion this is your homework. Check the Javadocs.
You're not going to get methods that do what you asked for in the examples above from any Collections Framework class, since none of them perform String comparison operations. Unless you just want to order them alphabetically or something, in which case you'd use one of the Tree implementations in the Collections framework.
How about a radix tree or Patricia trie?
http://en.wikipedia.org/wiki/Radix_tree
The only advantage of a linked list over an array or array list would be if there are insertions and deletions at arbitrary places. I don't think this is the case here: You read in the document and build the list in order.
I THINK that when the original poster talked about finding "word_2200", he meant simply the 2200th word in the document, and not that there are arbitrary labels associated with each word. If so, then all he needs is indexed access to all the words. Hence, an array or array list. If there really is something more complex, if one word might be labeled "word_2200" and the next word is labeled "foobar_42" or some such, then yes, he'd need a more complex structure.
Hey, do you want to give us a clue WHY you want to do any of this? I'm hard pressed to remember the last time I said to myself, "Hey, I wonder if the 1,237th word in this document I'm reading is longer or shorter than the 842nd word?"
Depends on what the problem is - speed or memory.
If it's memory, the minimum solution is to write a function getWord(n) which scans the whole file each time it runs, and extracts word n.
Now, that's not a very good solution. A better solution is to decide how much memory you want to use: let's say 1000 items. Scan the file for words once when the app starts, and store a series of bookmarks containing the word number and the position in the file where it is located. Do this in such a way that the bookmarks are more or less evenly spaced through the file.
Then, open the file for random access. The function getWord(n) now looks at the bookmarks to find the largest word number <= n (please use a binary search), seeks to the indicated location, and scans the file, counting the words, until it finds the requested word.
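A hedged sketch of that getWord(n); the Bookmark type (with wordIndex and filePos fields) and the RandomAccessFile are hypothetical, and words are assumed to be whitespace-delimited:
static String getWord(int n, Bookmark[] bookmarks, RandomAccessFile raf) throws IOException {
    // binary search for the largest bookmark with wordIndex <= n
    int lo = 0, hi = bookmarks.length - 1;
    while (lo < hi) {
        int mid = (lo + hi + 1) >>> 1;
        if (bookmarks[mid].wordIndex <= n) lo = mid; else hi = mid - 1;
    }
    raf.seek(bookmarks[lo].filePos);
    int current = bookmarks[lo].wordIndex;
    StringBuilder word = new StringBuilder();
    int ch;
    while ((ch = raf.read()) != -1) { // scan forward, counting words
        if (Character.isWhitespace(ch)) {
            if (word.length() > 0) {
                if (current == n) return word.toString();
                current++;
                word.setLength(0);
            }
        } else {
            word.append((char) ch);
        }
    }
    return (current == n && word.length() > 0) ? word.toString() : null;
}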
An even quicker solution, using rather more memory, is to build some sort of cache for the blocks, on the basis that getWord() requests usually come in clusters. You can rig things up so that if someone asks for word #X and it's not in the bookmarks, you seek to it and add it to the bookmarks, saving memory by evicting whichever bookmark was least recently used.
And so on. It depends, really, on what the problem is and on what kinds of retrieval patterns are likely.
I don't understand why so many people are suggesting Arraylist, or the like, since you don't mention ever having to iterate over the whole list. Further, it seems you want to access them as key/value pairs ("word_348"="pedantic").
For the fastest access, I would use a TreeMap, which will do binary searches to find your keys. Its only downside is that it's unsynchronized, but that's not a problem for your application.
http://java.sun.com/javase/6/docs/api/java/util/TreeMap.html
