How to measure performance of this piece of code? - java

I have the following piece of code which I want to measure the performance of, both in terms of time and memory usage:
public static void main(String[] args) throws IOException {
    final String bigFile = "large-file.txt";
    if (args.length == 1) {
        List<String> lines = Utils.readLinesFromFile(bigFile);
        Path path = Paths.get(bigFile);
        if (Files.exists(path)) {
            lines = Files.readAllLines(Paths.get(bigFile), StandardCharsets.UTF_8);
        }
        List<String> filteredLines = lines.stream().filter(x -> x.startsWith(args[0])).collect(Collectors.toList());
        for (String s : filteredLines) {
            System.out.println(s);
        }
    } else {
        System.out.println(String.format("Expected one argument"));
    }
}
where large-file.txt is a file with 2,000,000 lines.
I don't really know how to measure the performance. Should I be using a profiler (can I use a profiler given the program is so short-running?), or would it be adequate to use System.nanoTime() throughout the code and run the program multiple times to get some sort of average performance? This doesn't help in terms of memory usage though - unless I can use Runtime.totalMemory() calls to measure before and after the load of the large file into memory.

You can use a Guava Stopwatch to measure execution time (note that the `new Stopwatch()` constructor is no longer public in current Guava; use the static factory methods instead):
Stopwatch stopwatch = Stopwatch.createStarted();
methodCall();
stopwatch.stop();
System.out.println(stopwatch); // toString() reports the elapsed time in a human-readable unit
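Alternatively, the System.nanoTime()/Runtime approach mentioned in the question can be sketched as below. This is a rough, JVM-dependent measurement (the workload here is a hypothetical stand-in for the Files.readAllLines call), so run it several times and average; for anything more rigorous, a harness such as JMH handles warm-up and averaging for you.

```java
import java.util.ArrayList;
import java.util.List;

public class MeasureSketch {
    // Hypothetical workload standing in for the real file load.
    static List<String> loadLines() {
        List<String> lines = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            lines.add("line-" + i);
        }
        return lines;
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // best-effort: reduce noise in the before/after delta
        long memBefore = rt.totalMemory() - rt.freeMemory();
        long t0 = System.nanoTime();

        List<String> lines = loadLines();

        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        long memAfter = rt.totalMemory() - rt.freeMemory();
        System.out.println("lines loaded:      " + lines.size());
        System.out.println("elapsed ms:        " + elapsedMs);
        System.out.println("approx bytes used: " + (memAfter - memBefore));
    }
}
```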

Related

Log line taking 10's of milliseconds

I am seeing very high latencies when invoking java.util.logging.Logger.log() in some instances, in the following code:
private static Object[] NETWORK_LOG_TOKEN = new Object[] {Integer.valueOf(1)};
private final TimeProbe probe_ = new TimeProbe();

public void onTextMessagesReceived(ArrayList<String> msgs_list) {
    final long start_ts = probe_.addTs(); // probe A
    // Loop through the messages
    for (String msg : msgs_list) {
        probe_.addTs(); // probe B
        log_.log(Level.INFO, "<-- " + msg, NETWORK_LOG_TOKEN);
        probe_.addTs(); // probe C
        // Do some work on the message ...
        probe_.addTs(); // probe D
    }
    final long end_ts = probe_.addTs(); // probe E
    if (end_ts - start_ts >= 50) {
        // If the run was slow (>= 50 millis) we print all the recorded timestamps
        log_.info(probe_.print("Slow run with " + msgs_list.size() + " msgs: "));
    }
    probe_.clear();
}
The probe_ is simply an instance of this very basic class:
public class TimeProbe {
    final ArrayList<Long> timestamps_ = new ArrayList<>();
    final StringBuilder builder_ = new StringBuilder();

    public long addTs() {
        final long ts = System.currentTimeMillis();
        timestamps_.add(ts);
        return ts;
    }

    public String print(String prefix) {
        builder_.setLength(0);
        builder_.append(prefix);
        for (long ts : timestamps_) {
            builder_.append(ts);
            builder_.append(", ");
        }
        builder_.append("in millis");
        return builder_.toString();
    }

    public void clear() {
        timestamps_.clear();
    }
}
And here is the handler that logs the NETWORK_LOG_TOKEN entries:
final FileHandler network_logger = new FileHandler("/home/users/dummy.logs", true);
network_logger.setFilter(record -> {
    final Object[] params = record.getParameters();
    // This filter returns true if the params suggest that the record is a network log
    // We use Integer.valueOf(1) as our "network token"
    return (params != null && params.length > 0 && params[0] == Integer.valueOf(1));
});
In some cases, I am getting the following outputs (adding labels for probes A, B, C, D, E to make things clearer):
// A B C D B C D E
slow run with 2 msgs: 1616069594883, 1616069594883, 1616069594956, 1616069594957, 1616069594957, 1616069594957, 1616069594957, 1616069594957
Everything takes less than 1 ms, except for the line of code between B and C (during the first iteration of the for loop), which takes a whopping 73 milliseconds. This does not occur every time onTextMessagesReceived() is called, but the fact that it does at all is a big problem. I would welcome any ideas explaining where this lack of predictability comes from.
As a side note, I have checked that my disk IO is super low, and no GC pause occurred around this time. I would think my NETWORK_LOG_TOKEN setup is pretty flimsy at best in terms of design, but I still cannot think of reasons why sometimes, this first log line takes forever. Any pointers or suggestions as to what could be happening would be really appreciated :)!
Things to try:
Enable JVM safepoint logs. VM pauses are not always caused by GC.
If you use JDK < 15, disable Biased Locking: -XX:-UseBiasedLocking. There are many synchronized places in JUL framework. In a multithreaded application, this could cause biased lock revocation, which is a common reason for a safepoint pause.
Run async-profiler in wall-clock mode with .jfr output. Then, using JMC, you'll be able to see exactly what a thread was doing around a given moment in time.
Try putting the log file onto tmpfs to exclude disk latency, or use MemoryHandler instead of FileHandler to check whether file I/O affects the pauses at all.
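The MemoryHandler suggestion from the last point can be sketched like this (the log path, buffer size, and push level are assumptions):

```java
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.MemoryHandler;

public class MemoryHandlerSketch {
    public static void main(String[] args) throws Exception {
        Logger log = Logger.getLogger("network");
        log.setUseParentHandlers(false);
        // %t expands to the temp directory; the file name is an assumption.
        FileHandler file = new FileHandler("%t/dummy.log", true);
        // Keep up to 1000 records in a ring buffer; only push them to the
        // FileHandler when a SEVERE record arrives, so file I/O stays off
        // the hot logging path.
        MemoryHandler buffered = new MemoryHandler(file, 1000, Level.SEVERE);
        log.addHandler(buffered);

        log.info("<-- message logged without touching disk");
        buffered.push(); // or flush the buffer explicitly when convenient
        file.close();
    }
}
```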
Everything takes less than 1ms, except for the line of code between B and C (during the first iteration of the for loop), which takes a whopping 73 milliseconds. [snip] ...but I still cannot think of reasons why sometimes, this first log line takes forever.
The first log record that is published to the root logger or its handlers will
trigger lazy loading of the root handlers.
If you don't need to publish to the root logger handlers then call log_.setUseParentHandlers(false) when you add your FileHandler. This will make it so your log records don't travel up to the root logger. It also ensures that you are not publishing to other handlers attached to the parent loggers.
You can also load the root handlers by doing Logger.getLogger("").getHandlers() before you start your loop. You'll pay the price for loading them but at a different time.
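Both options can be sketched together; the logger and file names here are assumptions:

```java
import java.util.logging.FileHandler;
import java.util.logging.Handler;
import java.util.logging.Logger;

public class HandlerInitSketch {
    public static void main(String[] args) throws Exception {
        // Option 1: detach from the root logger, so the first publish never
        // triggers lazy loading of the root handlers.
        Logger log = Logger.getLogger("network");
        log.addHandler(new FileHandler("%t/network.log", true));
        log.setUseParentHandlers(false);

        // Option 2: pay the lazy-loading cost up front, before the hot loop.
        Handler[] rootHandlers = Logger.getLogger("").getHandlers();
        System.out.println("root handlers preloaded: " + rootHandlers.length);
    }
}
```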
log_.log(Level.INFO, "<-- " + msg, NETWORK_LOG_TOKEN);
The string concatenation in this line is going to do array copies and create garbage. Try to do:
log_.log(Level.INFO, msg, NETWORK_LOG_TOKEN);
The default log method will walk the current thread stack to infer the calling class and method. You can avoid that walk by using the logp methods in tight loops:
public class Foo {
    private static final String CLASS_NAME = Foo.class.getName();
    private static final Logger log_ = Logger.getLogger(CLASS_NAME);
    private final TimeProbe probe_ = new TimeProbe();

    public void onTextMessagesReceived(ArrayList<String> msgs_list) {
        final String methodName = "onTextMessagesReceived";
        // Loop through the messages
        for (String msg : msgs_list) {
            probe_.addTs(); // probe B
            log_.logp(Level.INFO, CLASS_NAME, methodName, msg, NETWORK_LOG_TOKEN);
            probe_.addTs(); // probe C
            // Do some work on the message ...
            probe_.addTs(); // probe D
        }
    }
}
In your code you are attaching a filter to the FileHandler. Depending on the use case, loggers also accept filters; sometimes it makes sense to install the filter on the logger instead of the handler if you are targeting a specific message.
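Moving the token check from the handler to the logger could look like this (same identity comparison as the question's filter; the logger name is an assumption):

```java
import java.util.logging.Filter;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggerFilterSketch {
    static final Object[] NETWORK_LOG_TOKEN = {Integer.valueOf(1)};

    // Same token check as the FileHandler filter, expressed as a reusable
    // Filter that can be installed on the logger itself.
    static final Filter NETWORK_FILTER = record -> {
        Object[] params = record.getParameters();
        return params != null && params.length > 0
                && params[0] == NETWORK_LOG_TOKEN[0];
    };

    public static void main(String[] args) {
        Logger log = Logger.getLogger("network");
        // On the logger, non-matching records are dropped before any
        // handler (and any formatting work) is reached.
        log.setFilter(NETWORK_FILTER);
        log.log(Level.INFO, "kept", NETWORK_LOG_TOKEN);
        log.log(Level.INFO, "dropped"); // no token parameter: filtered out
    }
}
```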

Is it possible to extend the Object class to know how many objects I currently have [duplicate]

How can I find the number of live objects on the heap in Java program?
jmap is the standard Java utility that you can use to capture heap dumps and statistics. I can't say what protocol jmap uses to connect to the JVM to get this info, and it's not clear whether this information is available to a program running in the JVM directly (though I'm sure the program can query its own JVM through some socket to get it).
JVM TI is a tool interface used by C code, and it has pretty much full access to the goings on of the JVM, but it is C code and not directly available by the JVM. You could probably write a C lib and then interface with it, but there's nothing out of the box.
There are several JMX MBeans, but I don't think any of them provide an actual object count. You can get memory statistics from these though (these are what JConsole uses). Check out the java.lang.management classes.
If you want something fast (easy to implement, though not necessarily quick to run, as jmap takes some time), I'd fork off a run of jmap and simply read the resulting file.
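To illustrate the java.lang.management route mentioned above: the MemoryMXBean exposes the same heap numbers JConsole shows (no object count, though):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryMXBeanSketch {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        // Heap statistics only; the platform MBeans expose no live-object count.
        System.out.println("heap used:      " + heap.getUsed());
        System.out.println("heap committed: " + heap.getCommitted());
        System.out.println("heap max:       " + heap.getMax());
    }
}
```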
The simplest way is to use the jmap tool. If you print an object histogram, you'll see the total number of instances as well as the accumulated size of all objects:
jmap -histo <PID> prints all classes with their instance counts and sizes. The last line contains the totals:
Total 2802946 174459656
The second column is the total instance count, and the last is the total bytes.
Use jvisualvm and take a memory sample; it will show the number of classes and instances.
There is a hack you can try:
create your own java.lang.Object (copy the original source)
count the created objects in the constructor (not called for arrays)
add the path to your classfile to the boot classpath
see this (old) article for a sample.
Probably there are better ways to do it using JPDA or JMX, but I've not found how...
As far as I know, you cannot. You can, however, get the amount of memory used for the program:
Runtime rt = Runtime.getRuntime();
System.out.println("Used: " + (rt.totalMemory() - rt.freeMemory()));
System.out.println("Free: " + rt.freeMemory());
System.out.println("Total: " + rt.totalMemory());
If all your objects are created through some kind of factory class, you can count the objects on the heap: increment a counter on creation, and decrement it in finalize() when an instance is collected. Of course, this cannot be done for all objects (the JDK library classes cannot be modified), but if you want the number of instances of a particular class you have created, you can potentially track that.
For debugging, you can use a profiler (like YourKit, a commercial java profiler). You'll find both open source and commercial variants of java profilers.
For integration with your code, you might look at using "Aspect Oriented Programming" technique. AOP frameworks (e.g. AspectWerkz) let you change the class files at class load time. This will let you modify constructors to register objects to your "all-my-runtime-objects-framework".
public class NumOfObjects {
    static int count = 0;

    {
        count++; // instance initializer runs for every new instance
    }

    public static void main(String[] args) {
        NumOfObjects no1 = new NumOfObjects();
        System.out.println("no1: " + count); // 1
        NumOfObjects no2 = new NumOfObjects();
        System.out.println("no2: " + count); // 2
        for (int i = 0; i < 10; i++) {
            NumOfObjects noi = new NumOfObjects();
        }
        System.out.println("Total objects: " + count); // 12
    }
}
public class ObjectCount
{
    static int i;

    ObjectCount()
    {
        System.out.println(++i);
    }

    public static void main(String args[])
    {
        ObjectCount oc = new ObjectCount();
        ObjectCount od = new ObjectCount();
        ObjectCount oe = new ObjectCount();
        ObjectCount of = new ObjectCount();
        ObjectCount og = new ObjectCount();
    }
}
class Test1
{
    static int count = 0;

    public Test1()
    {
        count++;
        System.out.println("Total Objects" + " " + count);
    }
}

public class CountTotalNumberOfObjects
{
    public static void main(String[] args)
    {
        Test1 t = new Test1();
        Test1 t1 = new Test1();
        Test1 t3 = new Test1();
        Test1 t11 = new Test1();
        Test1 t111 = new Test1();
        Test1 t13 = new Test1();
    }
}

Spark java.lang.StackOverflowError

I'm using Spark to calculate the PageRank of user reviews, but I keep getting java.lang.StackOverflowError when I run my code on a big dataset (40k entries). When running the code on a small number of entries it works fine, though.
Entry Example :
product/productId: B00004CK40 review/userId: A39IIHQF18YGZA review/profileName: C. A. M. Salas review/helpfulness: 0/0 review/score: 4.0 review/time: 1175817600 review/summary: Reliable comedy review/text: Nice script, well acted comedy, and a young Nicolette Sheridan. Cusak is in top form.
The Code:
public void calculatePageRank() {
    sc.clearCallSite();
    sc.clearJobGroup();
    JavaRDD<String> rddFileData = sc.textFile(inputFileName).cache();
    sc.setCheckpointDir("pagerankCheckpoint/");

    JavaRDD<String> rddMovieData = rddFileData.map(new Function<String, String>() {
        @Override
        public String call(String arg0) throws Exception {
            String[] data = arg0.split("\t");
            String movieId = data[0].split(":")[1].trim();
            String userId = data[1].split(":")[1].trim();
            return movieId + "\t" + userId;
        }
    });

    JavaPairRDD<String, Iterable<String>> rddPairReviewData = rddMovieData.mapToPair(new PairFunction<String, String, String>() {
        @Override
        public Tuple2<String, String> call(String arg0) throws Exception {
            String[] data = arg0.split("\t");
            return new Tuple2<String, String>(data[0], data[1]);
        }
    }).groupByKey().cache();

    JavaRDD<Iterable<String>> cartUsers = rddPairReviewData.map(f -> f._2());
    List<Iterable<String>> cartUsersList = cartUsers.collect();
    JavaPairRDD<String, String> finalCartesian = null;
    int iterCounter = 0;
    for (Iterable<String> out : cartUsersList) {
        JavaRDD<String> currentUsersRDD = sc.parallelize(Lists.newArrayList(out));
        if (finalCartesian == null) {
            finalCartesian = currentUsersRDD.cartesian(currentUsersRDD);
        } else {
            finalCartesian = currentUsersRDD.cartesian(currentUsersRDD).union(finalCartesian);
            if (iterCounter % 20 == 0) {
                finalCartesian.checkpoint();
            }
        }
    }

    JavaRDD<Tuple2<String, String>> finalCartesianToTuple = finalCartesian.map(m -> new Tuple2<String, String>(m._1(), m._2()));
    finalCartesianToTuple = finalCartesianToTuple.filter(x -> x._1().compareTo(x._2()) != 0);
    JavaPairRDD<String, String> userIdPairs = finalCartesianToTuple.mapToPair(m -> new Tuple2<String, String>(m._1(), m._2()));
    JavaRDD<String> userIdPairsString = userIdPairs.map(new Function<Tuple2<String, String>, String>() {
        // Tuple2<Tuple2<MovieId, userId>, Tuple2<movieId, userId>>
        @Override
        public String call(Tuple2<String, String> t) throws Exception {
            return t._1 + " " + t._2;
        }
    });

    try {
        // calculate pagerank using this https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/JavaPageRank.java
        JavaPageRank.calculatePageRank(userIdPairsString, 100);
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    sc.close();
}
I have multiple suggestions which will help you to greatly improve the performance of the code in your question.
Caching: caching should be used on data sets that you need to refer to again and again for the same or different operations (iterative algorithms).
An example is RDD.count — to tell you the number of lines in the
file, the file needs to be read. So if you write RDD.count, at
this point the file will be read, the lines will be counted, and the
count will be returned.
What if you call RDD.count again? The same thing: the file will be
read and counted again. So what does RDD.cache do? Now, if you run
RDD.count the first time, the file will be loaded, cached, and
counted. If you call RDD.count a second time, the operation will use
the cache. It will just take the data from the cache and count the
lines, no recomputing.
Read more about caching here.
In your code sample you are not reusing anything that you've cached, so you can remove the .cache() calls.
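Outside Spark, the recompute-versus-reuse behavior described above can be illustrated with a plain memoizing supplier (a simplified analogy, not the Spark API):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class CacheSketch {
    static final AtomicInteger reads = new AtomicInteger();

    // Stands in for reading the large input file from scratch.
    static String expensiveLoad() {
        reads.incrementAndGet();
        return "data";
    }

    // Minimal cache(): compute on first get(), serve from memory afterwards.
    static <T> Supplier<T> memoize(Supplier<T> source) {
        return new Supplier<T>() {
            private T value;
            private boolean loaded;

            @Override
            public synchronized T get() {
                if (!loaded) {
                    value = source.get();
                    loaded = true;
                }
                return value;
            }
        };
    }

    public static void main(String[] args) {
        // Uncached: every action re-runs the load, like count() on an uncached RDD.
        expensiveLoad();
        expensiveLoad();
        System.out.println("uncached loads: " + reads.get()); // 2

        reads.set(0);
        Supplier<String> cached = memoize(CacheSketch::expensiveLoad);
        cached.get(); // first action loads and caches
        cached.get(); // second action hits the cache
        System.out.println("cached loads: " + reads.get()); // 1
    }
}
```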
Parallelization: in the code sample, you've parallelized every individual element of your RDD, which is already a distributed collection. I suggest merging the rddFileData, rddMovieData and rddPairReviewData steps so that they happen in one pass.
Get rid of .collect(), since that brings the results back to the driver and may well be the actual reason for your error.
This problem occurs when your DAG grows big and too many levels of transformations happen in your code: the JVM cannot hold the chain of operations needed for lazy execution when an action is finally performed.
Checkpointing is one option. I would suggest implementing this kind of aggregation with Spark SQL: if your data is structured, load it into DataFrames and perform the grouping and other SQL-style functions there.
When your for loop grows really large, Spark can no longer keep track of the lineage. Enable checkpointing in your for loop to checkpoint your RDD every 10 iterations or so; checkpointing will fix the problem. Don't forget to clean up the checkpoint directory afterwards.
http://spark.apache.org/docs/latest/streaming-programming-guide.html#checkpointing
The things below fixed the StackOverflowError. As others have pointed out, it is caused by the lineage that Spark keeps building, especially when you have a loop/iteration in the code.
Set a checkpoint directory:
spark.sparkContext.setCheckpointDir("./checkpoint")
Checkpoint the DataFrame/RDD you are modifying in each iteration:
modifyingDf.checkpoint()
Cache DataFrames that are reused in each iteration:
reusedDf.cache()

Creating Performance Counters in Java

Does anyone know how I can create a new performance counter (for the perfmon tool) in Java?
For example: a new performance counter for monitoring the number / duration of user actions.
I created such performance counters in C# and it was quite easy; however, I couldn't find anything helpful for creating them in Java…
If you want to develop your performance counter independently of the main code, you should look at aspect-oriented programming (AspectJ, Javassist).
You can plug your performance counter into the method(s) you want without modifying the main code.
Java does not work with perfmon out of the box (but you should look at DTrace under Solaris).
Please see this question for suggestions: Java app performance counters viewed in Perfmon
Not sure what you are expecting this tool to do, but I would create some data structures to record these times and counts, like:
class UserActionStats {
    int count;
    long durationMS;
    long start = 0;

    public void startAction() {
        start = System.currentTimeMillis();
    }

    public void endAction() {
        durationMS += System.currentTimeMillis() - start;
        count++;
    }
}
A collection for these could look like:
private static final Map<String, UserActionStats> map =
        new HashMap<String, UserActionStats>();

public static UserActionStats forUser(String userName) {
    synchronized (map) {
        UserActionStats uas = map.get(userName);
        if (uas == null)
            map.put(userName, uas = new UserActionStats());
        return uas;
    }
}
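A hypothetical usage of the above could look like this (the classes are condensed here so the sketch compiles on its own):

```java
import java.util.HashMap;
import java.util.Map;

public class UserActionStatsDemo {
    // Condensed copy of the answer's stats class.
    static class UserActionStats {
        int count;
        long durationMS;
        long start;

        void startAction() {
            start = System.currentTimeMillis();
        }

        void endAction() {
            durationMS += System.currentTimeMillis() - start;
            count++;
        }
    }

    static final Map<String, UserActionStats> map = new HashMap<>();

    static UserActionStats forUser(String userName) {
        synchronized (map) {
            return map.computeIfAbsent(userName, k -> new UserActionStats());
        }
    }

    public static void main(String[] args) {
        UserActionStats stats = forUser("alice"); // "alice" is a made-up user
        stats.startAction();
        // ... the user action being measured runs here ...
        stats.endAction();
        System.out.println("alice actions: " + stats.count);
        System.out.println("alice total ms: " + stats.durationMS);
    }
}
```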

