I've got a list of objects with a value and want to summarise all these values. What is the preferred way to do this in Java 8?
public static void main(String[] args) {
    List<AnObject> longs = new ArrayList<AnObject>();
    longs.add(new AnObject());
    longs.add(new AnObject());
    longs.add(new AnObject());

    long mappedSum = longs.stream().mapToLong(AnObject::getVal).sum();
    long collectedSum = longs.stream().collect(Collectors.summingLong(AnObject::getVal));

    System.out.println(mappedSum);
    System.out.println(collectedSum);
}

private static class AnObject {
    private long val = 10;

    public long getVal() {
        return val;
    }
}
I think mapToLong is more straightforward, but I can't really explain why.
Edit: I've updated the question by changing from summarizeLong to summingLong, that's why some answers and comments might seem a bit off.
I think using Collectors.summarizingLong(AnObject::getVal) would do more work than you need it to, as it computes other statistics besides the sum (average, count, min, max, ...).
If you just need the sum, use the simpler and more efficient method:
long mappedSum = longs.stream().mapToLong(AnObject::getVal).sum();
After you changed Collectors.summarizingLong to Collectors.summingLong, it's hard to say which option would be more efficient. The first option has an extra step (mapToLong), but I'm not sure how much difference that would make, since the second option does more work in collect compared to what the first option does in sum.
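For reference, the summarizing collector is the right tool when you need several statistics in one pass, while mapToLong(...).sum() avoids building a statistics object at all. A small self-contained sketch, using a List<Long> instead of AnObject for brevity:

```java
import java.util.Arrays;
import java.util.List;
import java.util.LongSummaryStatistics;
import java.util.stream.Collectors;

public class SumDemo {

    // just the sum: the primitive stream avoids boxing a result object
    static long sumOf(List<Long> vals) {
        return vals.stream().mapToLong(Long::longValue).sum();
    }

    // several statistics computed in a single pass over the stream
    static LongSummaryStatistics statsOf(List<Long> vals) {
        return vals.stream().collect(Collectors.summarizingLong(Long::longValue));
    }

    public static void main(String[] args) {
        List<Long> vals = Arrays.asList(10L, 10L, 10L);
        System.out.println(sumOf(vals));        // 30
        LongSummaryStatistics stats = statsOf(vals);
        System.out.println(stats.getSum());     // 30
        System.out.println(stats.getAverage()); // 10.0
    }
}
```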
Related
This is Leetcode question no: 940. Distinct Subsequences II
My code uses recursion to generate the subsequences, and an external HashSet to count the unique ones. I subtract one from the size to account for the empty string, since per the problem we are not supposed to include it.
When I run the code on individual test cases, it works fine. But when the test cases are run together consecutively, my solution is judged wrong. Can anybody point me in the direction of where I could be going wrong? Is it the Set usage or the recursion itself? I am aware that this problem can be solved using dynamic programming (which I have yet to tackle), but I just wanted to know if a solution is possible through recursion.
Please refer to the attached images for the solution and test-case runs:
Code that I have written on Leet code
The individual test cases that I have run on Leetcode
The joint test cases run by Leetcode
Code:
class Solution {
    static Set<String> myset = new HashSet<String>(0);

    public int distinctSubseqII(String s) {
        int i = 0;
        String curr = "";
        subsets(s, curr, i);
        int val = myset.size() - 1;
        return val;
    }

    public static void subsets(String str, String curr, int i) {
        if (i == str.length()) {
            //System.out.println(curr);
            myset.add(curr);
            return;
        }
        subsets(str, curr, i + 1);
        subsets(str, curr + str.charAt(i), i + 1);
    }
}
Looking at your code, I can see that the static variable is the problem. The contents of that Set carry over from one call of distinctSubseqII to the next. That means that the size of the Set will be wrong ... for the second and all later calls.
Cutting to the chase ...
You need to do the recursion in a helper function.
In your example, it might look like this.
public class Solution {

    public int distinctSubseqII(String s) {
        Set<String> mySet = new HashSet<String>();
        return recursive(s, mySet);
    }

    // Helper function
    private int recursive(String s, Set<String> mySet) {
        ...
        // call 'recursive' ... recursively
        ...
    }
}
The "helper function" is a common pattern for recursive solutions.
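For illustration, here is one way that sketch might be fleshed out, keeping the original subset-generation approach but passing the set down as a parameter so nothing carries over between calls. Note this is still exponential (and Leetcode's real constraints need the DP solution with the modulus), so it only suits small inputs:

```java
import java.util.HashSet;
import java.util.Set;

public class Solution {

    public int distinctSubseqII(String s) {
        // the set is local, so each call to distinctSubseqII starts fresh
        Set<String> mySet = new HashSet<String>();
        subsets(s, "", 0, mySet);
        return mySet.size() - 1; // exclude the empty subsequence
    }

    // Helper: enumerate all subsequences into the caller-supplied set
    private void subsets(String str, String curr, int i, Set<String> mySet) {
        if (i == str.length()) {
            mySet.add(curr);
            return;
        }
        subsets(str, curr, i + 1, mySet);                 // skip str.charAt(i)
        subsets(str, curr + str.charAt(i), i + 1, mySet); // take str.charAt(i)
    }
}
```

Called twice in a row, this now returns the right answer both times, because no state survives between calls.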
Firstly, I'd like to say I am still learning java, so excuse any unconventional code and/or questions.
I'm trying to allow the HashMap to set 'unique' keys while only using one put method.
This is what I currently have:
static int killcount = 0;
static Map<Integer, Integer> enumMap = new HashMap<Integer, Integer>();
public static void incrementKillcount() {
enumMap.put(getId(), killcount++);
}
(Again, excuse any unconventional java code, I'm still learning).
In this instance, if I'm not mistaken, the key is interchangeable (or at least it was in my experimenting), so the key doesn't really matter all that much. But with only one put call, every key ends up sharing the same value. I'd like each key to have its own 'unique' value.
For example, if I wanted to increase the killcount by 1 (killcount++) until it reached 10, and then move to a different key to count to 10 again, the new key would start counting from 10 instead of 1.
Thanks in advance, and again, excuse me for my terrible java skills! :)
The problem is the static keyword on killcount:
static int killcount = 0;
killcount is initialized only once, since it is static, so the increment operator keeps incrementing its previous value across keys.
Solution to your problem: reset killcount to zero when changing keys, i.e. once it reaches 10 after saving it to the map.
public static void incrementKillcount() {
    enumMap.put(getId(), killcount++);
    if (killcount == 10) {
        // reset the static killcount value once it reaches 10
        killcount = 0;
    }
}
For the scenario in your comment:
public static void main(String[] args) {
    int sizeOfEachJar = 10;
    int numberOfJars = 2;
    Map<Integer, Integer> jarMap = new HashMap<Integer, Integer>(numberOfJars);

    // I am putting 10 cookies in each jar, which is the max size of each jar
    for (int i = 1; i <= numberOfJars; i++) {
        jarMap.put(i, sizeOfEachJar);
    }

    // now eating three cookies from the first jar
    for (int i = 0; i < 3; i++) {
        jarMap.put(1, jarMap.get(1) - 1);
    }

    // now eating 2 cookies out of the 2nd jar
    for (int i = 0; i < 2; i++) {
        jarMap.put(2, jarMap.get(2) - 1);
    }

    // now finding out how many cookies remain in all jars
    int remainingCookies = 0;
    for (int i = 1; i <= numberOfJars; i++) {
        remainingCookies += jarMap.get(i);
    }
    System.out.println(remainingCookies); // 15
}
You have to check whether your getId() code returns the same value each time. From your description, it sounds like it does; assuming a field same = 1:
int getId() {
    return same;
}
Update getId() with some useful logic so it returns a distinct key each time. For example:
int getId() {
    return same++;
}
You may make same static or not, whichever is convenient. I'd also recommend Effective Java for more on this.
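For what it's worth, the per-key counting the question describes can also be done without any static counter at all, by storing the count against the key itself. A sketch with hypothetical integer ids (KillCounter and its method names are illustrative, not from the OP's code):

```java
import java.util.HashMap;
import java.util.Map;

public class KillCounter {
    private final Map<Integer, Integer> kills = new HashMap<Integer, Integer>();

    // increment the count stored against this id; each key counts independently
    public void incrementKillcount(int id) {
        Integer current = kills.get(id);
        kills.put(id, current == null ? 1 : current + 1);
    }

    public int killcountFor(int id) {
        Integer c = kills.get(id);
        return c == null ? 0 : c;
    }

    public static void main(String[] args) {
        KillCounter kc = new KillCounter();
        for (int i = 0; i < 3; i++) kc.incrementKillcount(1);
        kc.incrementKillcount(2);
        System.out.println(kc.killcountFor(1)); // 3
        System.out.println(kc.killcountFor(2)); // 1
    }
}
```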
What is the simplest way to implement a parallel computation (e.g. on a multi-core processor) in Java?
I.e. the Java equivalent of this Scala code:
val list = aLargeList
list.par.map(_*2)
There is this library, but it seems overwhelming.
http://gee.cs.oswego.edu/dl/jsr166/dist/extra166ydocs/
Don't give up so fast, snappy! ))
From the javadocs (with changes to map to your f) the essential matter is really just this:
ParallelLongArray a = ... // you provide
a.replaceWithMapping(new LongOp() { public long op(long a) { return a * 2L; } });
is pretty much this, right?
val list = aLargeList
list.par.map(_*2)
And if you are willing to live with a bit less terseness, the above can be a reasonably clean and clear 3-liner (and of course, if you reuse functions, it's exactly the same as Scala's inline functions):
ParallelLongArray a = ... // you provide
LongOp f = new LongOp() { public long op(long a) { return a * 2L; } };
a.replaceWithMapping(f);
[edited above to show concise complete form ala OP's Scala variant]
and here it is in maximal verbose form where we start from scratch for demo:
import java.util.Random;

import jsr166y.ForkJoinPool;
import extra166y.Ops.LongGenerator;
import extra166y.Ops.LongOp;
import extra166y.ParallelLongArray;

public class ListParUnaryFunc {

    public static void main(String[] args) {
        int n = Integer.parseInt(args[0]);

        // create a parallel long array
        // with random long values
        ParallelLongArray a = ParallelLongArray.create(n - 1, new ForkJoinPool());
        a.replaceWithGeneratedValue(generator);

        // use it: apply unaryLongFuncOp in parallel
        // to all values in the array
        a.replaceWithMapping(unaryLongFuncOp);

        // examine it
        for (Long v : a.asList()) {
            System.out.format("%d\n", v);
        }
    }

    static final Random rand = new Random(System.nanoTime());

    static LongGenerator generator = new LongGenerator() {
        @Override final
        public long op() { return rand.nextLong(); }
    };

    static LongOp unaryLongFuncOp = new LongOp() {
        @Override final
        public long op(long a) { return a * 2L; }
    };
}
Final edit and notes:
Also note that a simple class such as the following (which you can reuse across your projects):
/**
* The very basic form w/ TODOs on checks, concurrency issues, init, etc.
*/
final public static class ParArray {
private ParallelLongArray parr;
private final long[] arr;
public ParArray (long[] arr){
this.arr = arr;
}
public final ParArray par() {
if(parr == null)
parr = ParallelLongArray.createFromCopy(arr, new ForkJoinPool()) ;
return this;
}
public final ParallelLongArray map(LongOp op) {
return parr.replaceWithMapping(op);
}
public final long[] values() { return parr.getArray(); }
}
and something like that will allow you to write more fluid Java code (if terseness matters to you):
long[] arr = ... // you provide
LongOp f = ... // you provide
ParArray list = new ParArray(arr);
list.par().map(f);
And the above approach can certainly be pushed to make it even cleaner.
Doing that on one machine is pretty easy, but not as easy as Scala makes it. The library you posted is already a part of Java 5 and beyond. Probably the simplest thing to use is an ExecutorService. That represents a pool of threads that can be run on any processor. You send it tasks and they return results.
http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ThreadPoolExecutor.html
http://www.fromdev.com/2009/06/how-can-i-leverage-javautilconcurrent.html
I'd suggest using ExecutorService.invokeAll(), which will return a list of Futures. Then you can check them to see if they're done.
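A minimal sketch of the invokeAll approach, doubling each element on a fixed thread pool (the class name and pool size are illustrative). invokeAll blocks until every task finishes and returns the Futures in task order, so the results come back ordered:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllDemo {

    // one Callable per element; for real workloads you'd batch elements instead
    static List<Long> doubleAll(long[] input) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Long>> tasks = new ArrayList<Callable<Long>>();
            for (final long v : input) {
                tasks.add(new Callable<Long>() {
                    public Long call() { return v * 2; }
                });
            }
            // invokeAll waits for completion; Futures arrive in task order
            List<Long> results = new ArrayList<Long>();
            for (Future<Long> f : pool.invokeAll(tasks)) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(doubleAll(new long[]{1, 2, 3, 4})); // [2, 4, 6, 8]
    }
}
```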
If you're using Java 7 then you could use the fork/join framework, which might save you some work. With all of these you can build something very similar to Scala's parallel arrays, so using it is fairly concise.
Short of using threads directly, Java doesn't have this sort of thing built in.
There will be an equivalent in Java 8: http://www.infoq.com/articles/java-8-vs-scala
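For reference, the Java 8 equivalent mentioned there is parallel streams, which map over almost directly from the Scala snippet in the question:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ParallelStreamDemo {

    // roughly list.par.map(_ * 2) from the question;
    // collect(toList()) preserves encounter order even on a parallel stream
    static List<Long> doubled(List<Long> list) {
        return list.parallelStream()
                   .map(x -> x * 2)
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(doubled(Arrays.asList(1L, 2L, 3L, 4L))); // [2, 4, 6, 8]
    }
}
```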
Is it (performance-wise) better to use arrays or HashMaps when the indexes of the array are known? Keep in mind that the 'objects array/map' in the example is just an example; in my real project it is generated by another class, so I can't use individual variables.
ArrayExample:
SomeObject[] objects = new SomeObject[2];
objects[0] = new SomeObject("Obj1");
objects[1] = new SomeObject("Obj2");

void doSomethingToObject(String Identifier) {
    SomeObject object;
    if (Identifier.equals("Obj1")) {
        object = objects[0];
    } else if (Identifier.equals("Obj2")) {
        object = objects[1];
    }
    //do stuff
}
HashMapExample:
HashMap objects = new HashMap();
objects.put("Obj1", new SomeObject());
objects.put("Obj2", new SomeObject());

void doSomethingToObject(String Identifier) {
    SomeObject object = (SomeObject) objects.get(Identifier);
    //do stuff
}
The HashMap one looks much much better but I really need performance on this so that has priority.
EDIT: Well, arrays it is then; suggestions are still welcome
EDIT: I forgot to mention, the size of the Array/HashMap is always the same (6)
EDIT: It appears that HashMaps are faster
Array: 128ms
Hash: 103ms
With fewer iterations, the HashMap was even twice as fast.
test code:
import java.util.HashMap;
import java.util.Random;

public class Optimizationsest {
    private static Random r = new Random();
    private static HashMap<String, SomeObject> hm = new HashMap<String, SomeObject>();
    private static SomeObject[] o = new SomeObject[6];
    private static String[] Indentifiers = {"Obj1", "Obj2", "Obj3", "Obj4", "Obj5", "Obj6"};
    private static int t = 1000000;

    public static void main(String[] args) {
        CreateHash();
        CreateArray();
        long loopTime = ProcessArray();
        long hashTime = ProcessHash();
        System.out.println("Array: " + loopTime + "ms");
        System.out.println("Hash: " + hashTime + "ms");
    }

    public static void CreateHash() {
        for (int i = 0; i <= 5; i++) {
            hm.put("Obj" + (i + 1), new SomeObject());
        }
    }

    public static void CreateArray() {
        for (int i = 0; i <= 5; i++) {
            o[i] = new SomeObject();
        }
    }

    public static long ProcessArray() {
        StopWatch sw = new StopWatch();
        sw.start();
        for (int i = 1; i <= t; i++) {
            checkArray(Indentifiers[r.nextInt(6)]);
        }
        sw.stop();
        return sw.getElapsedTime();
    }

    private static void checkArray(String Identifier) {
        SomeObject object;
        if (Identifier.equals("Obj1")) {
            object = o[0];
        } else if (Identifier.equals("Obj2")) {
            object = o[1];
        } else if (Identifier.equals("Obj3")) {
            object = o[2];
        } else if (Identifier.equals("Obj4")) {
            object = o[3];
        } else if (Identifier.equals("Obj5")) {
            object = o[4];
        } else if (Identifier.equals("Obj6")) {
            object = o[5];
        } else {
            object = new SomeObject();
        }
        object.kill();
    }

    public static long ProcessHash() {
        StopWatch sw = new StopWatch();
        sw.start();
        for (int i = 1; i <= t; i++) {
            checkHash(Indentifiers[r.nextInt(6)]);
        }
        sw.stop();
        return sw.getElapsedTime();
    }

    private static void checkHash(String Identifier) {
        SomeObject object = (SomeObject) hm.get(Identifier);
        object.kill();
    }
}
HashMap uses an array underneath so it can never be faster than using an array correctly.
Random.nextInt() is many times slower than what you are testing, so even the array lookup used to pick the identifier is going to bias your results.
The reason your array benchmark is so slow is due to the equals comparisons, not the array access itself.
HashTable is usually much slower than HashMap because it does much the same thing but is also synchronized.
A common problem with micro-benchmarks is the JIT, which is very good at removing code that doesn't do anything. If you are not careful you will only be testing whether you have confused the JIT enough that it cannot work out that your code doesn't do anything.
This is one of the reasons you can write micro-benchmarks which outperform C++ systems: Java is a simpler language, so it's easier to reason about and thus to detect code which does nothing useful. This can lead to tests which show that Java does "nothing useful" much faster than C++ ;)
Arrays, when the indexes are known, are faster (HashMap uses an array of linked lists behind the scenes, which adds a bit of overhead on top of the array accesses, not to mention the hashing operations that need to be done).
And FYI, HashMap<String, SomeObject> objects = new HashMap<String, SomeObject>(); makes it so you won't have to cast.
For the example shown, the HashMap wins, I believe. The problem with the array approach is that it doesn't scale. I imagine you want to have more than two entries in the table, and the conditional branch tree in doSomethingToObject will quickly get unwieldy and slow.
Logically, HashMap is definitely a fit in your case. From a performance standpoint it also wins, since with arrays you need to do a number of string comparisons (in your algorithm), while with a HashMap you just use the hash code if the load factor is not too high. Both arrays and HashMaps need to be resized if you add many elements, but a HashMap also needs to redistribute its elements. In that use case the HashMap loses.
Arrays will usually be faster than Collections classes.
PS. You mentioned Hashtable in your post. Hashtable has even worse performance than HashMap; I assume your mention of Hashtable was a typo.
"The HashTable one looks much much better"
The example is strange. The key problem is whether your data is dynamic. If it is, you could not write your program that way (as in the array case). In other words, comparing your array and hash implementations is not fair: the hash implementation works for dynamic data, but the array implementation does not.
If you only have static data (6 fixed objects), an array or a hash just works as a data holder. You could even define static objects.
Does anyone know how can I create a new Performance Counter (perfmon tool) in Java?
For example: a new performance counter for monitoring the number / duration of user actions.
I created such performance counters in C# and it was quite easy, however I couldn’t find anything helpful for creating it in Java…
If you want to develop your performance counter independently from the main code, you should look at aspect-oriented programming (AspectJ, Javassist).
You can plug your performance counter into the method(s) you want without modifying the main code.
Java does not immediately work with perfmon (but you should see DTrace under Solaris).
Please see this question for suggestions: Java app performance counters viewed in Perfmon
Not sure what you are expecting this tool to do, but I would create some data structures to record these times and counts, like:
class UserActionStats {
    int count;
    long durationMS;
    long start = 0;

    public void startAction() {
        start = System.currentTimeMillis();
    }

    public void endAction() {
        durationMS += System.currentTimeMillis() - start;
        count++;
    }
}
A collection for these could look like
private static final Map<String, UserActionStats> map =
        new HashMap<String, UserActionStats>();

public static UserActionStats forUser(String userName) {
    synchronized (map) {
        UserActionStats uas = map.get(userName);
        if (uas == null)
            map.put(userName, uas = new UserActionStats());
        return uas;
    }
}
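Usage would then be symmetric start/end calls around each action. A self-contained sketch (the wrapping class, user name, and sleep are illustrative stand-ins for a real action):

```java
import java.util.HashMap;
import java.util.Map;

public class ActionTimingDemo {

    // same shape as the UserActionStats above, trimmed for the demo
    static class UserActionStats {
        int count;
        long durationMS;
        private long start;

        void startAction() { start = System.currentTimeMillis(); }

        void endAction() {
            durationMS += System.currentTimeMillis() - start;
            count++;
        }
    }

    private static final Map<String, UserActionStats> map =
            new HashMap<String, UserActionStats>();

    // one stats object per user, created lazily on first lookup
    static UserActionStats forUser(String userName) {
        synchronized (map) {
            UserActionStats uas = map.get(userName);
            if (uas == null)
                map.put(userName, uas = new UserActionStats());
            return uas;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        UserActionStats stats = forUser("alice");
        stats.startAction();
        Thread.sleep(20); // stand-in for the user action being timed
        stats.endAction();
        System.out.println(stats.count); // 1
        // repeated lookups return the same accumulator
        System.out.println(forUser("alice") == stats); // true
    }
}
```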