I was trying out an example with Spring, and part of the code looks like this:
private List<Point> points;
long timeTakeninMilis = System.currentTimeMillis();

public List<Point> getPoints() {
    return points;
}

public void setPoints(List<Point> points) {
    this.points = points;
}

public void drawJava8() {
    points.stream().forEachOrdered(
            point -> System.out.println("Point : (" + point.getX() + ", "
                    + point.getY() + ")"));
    System.out.println("Total Time Taken drawJava8(): "
            + (System.currentTimeMillis() - timeTakeninMilis)
            + " miliseconds");
}

public void draw() {
    for (Point point : points) {
        System.out.println("Point = (" + point.getX() + ", " + point.getY()
                + " )");
    }
    System.out.println("Total Time Taken draw(): "
            + (System.currentTimeMillis() - timeTakeninMilis)
            + " miliseconds");
}
The output:
Jun 30, 2015 11:30:53 AM org.springframework.context.support.ClassPathXmlApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext#7daf6ecc: startup date [Tue Jun 30 11:30:53 IST 2015]; root of context hierarchy
Jun 30, 2015 11:30:53 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [spring.xml]
Point = (0, 0 )
Point = (-50, 0 )
Point = (0, 50 )
Total Time Taken draw(): 70 miliseconds
Point : (0, 0)
Point : (-50, 0)
Point : (0, 50)
Total Time Taken drawJava8(): 124 miliseconds
Jun 30, 2015 11:30:54 AM org.springframework.context.support.ClassPathXmlApplicationContext doClose
INFO: Closing org.springframework.context.support.ClassPathXmlApplicationContext#7daf6ecc: startup date [Tue Jun 30 11:30:53 IST 2015]; root of context hierarchy
Why is it taking more time?
Or am I doing something wrong?
I was expecting it to be faster, or at least of similar speed...
Please help me understand what the benefit of lambda expressions is.
Note: I ran the two methods in two different programs; the times are taken from those runs and merged here for brevity.
Adding this as an analysis per original poster's request.
We cannot really predict the sophisticated analysis and transformations that a modern JIT compiler performs on running code. Hence, when benchmarking things like this, you should not draw conclusions from a single pair of method calls.
Instead, create various sample input sets (including boundary cases) and check the performance by calling your test cases repeatedly without shutting down the JVM. In this case, for example:
for (int i = 0; i < 100; i++) { draw(); drawJava8(); }
Once you have the results, compute the average execution time; you can safely ignore the first execution's result, as it might not have been optimized yet.
So the conclusion you have drawn from your tests is not completely correct.
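To make this concrete, here is a minimal warm-up benchmark sketch; the harness, class name, and workload are illustrative, not the poster's code. It runs each variant through untimed warm-up iterations so the JIT can optimize, then averages the timed runs:

```java
import java.util.ArrayList;
import java.util.List;

public class WarmupBenchmark {

    // Runs 'task' repeatedly: 'warmup' untimed runs so the JIT can optimize,
    // then 'rounds' timed runs whose durations are averaged.
    static double averageNanos(Runnable task, int warmup, int rounds) {
        for (int i = 0; i < warmup; i++) {
            task.run();
        }
        long total = 0;
        for (int i = 0; i < rounds; i++) {
            long start = System.nanoTime();
            task.run();
            total += System.nanoTime() - start;
        }
        return (double) total / rounds;
    }

    public static void main(String[] args) {
        List<Integer> points = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            points.add(i);
        }
        long[] sink = new long[1]; // consume results so the JIT cannot eliminate the work

        double loopAvg = averageNanos(() -> {
            long sum = 0;
            for (int p : points) sum += p;
            sink[0] = sum;
        }, 50, 50);

        double streamAvg = averageNanos(
                () -> sink[0] = points.stream().mapToLong(Integer::longValue).sum(), 50, 50);

        System.out.printf("for-loop: %.0f ns, stream: %.0f ns%n", loopAvg, streamAvg);
    }
}
```

After warm-up the two variants typically land in the same ballpark, while the very first run of either can easily be an order of magnitude slower, which is exactly the effect the original single-shot measurement captured.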
Related
I'm writing an application to measure the speed of CRUD operations with Hibernate on Derby.
This is my function:
@Override
public BulkTestResult testBulkInsertScenario(Long volume, Integer repeat) {
    StopWatch sw = new StopWatch();
    BulkTestResult bulkTestResult = new BulkTestResult();
    bulkTestResult.setStartDate(Instant.now());
    bulkTestResult.setCountTest(volume);
    bulkTestResult.setTestRepeat(repeat);
    familyService.clear();
    for (int i = 0; i < repeat; i++) {
        List<ProjectEntity> projects = dataAnonymization.generateProjectEntityList(volume);
        runBulkTest(sw, bulkTestResult, projects, true);
    }
    bulkTestResult.setEndDate(Instant.now());
    return bulkTestResult;
}

private void runBulkTest(StopWatch sw, BulkTestResult bulkTestResult, List<ProjectEntity> projects, boolean resetAfter) {
    sw.reset();
    sw.start();
    familyService.save(projects);
    sw.stop();
    bulkTestResult.addMsSpeedResult(sw.getTime());
    if (resetAfter) familyService.clear();
    sw.reset();
}
The clear method removes all records from the DB.
The problem I have is with the values I receive as output from the application.
Test data: 1000 records and 10 repeats.
Example speed values received when running this test a few times:
311, 116, 87, (...) 38
32, 27, 30, (...) 24
22, 19, 18, (...) 21
19, 18, 18, (...) 19
Why is there so much variance, and why is the first insert always the slowest?
Could it be some kind of hardware acceleration?
I found a solution.
This issue is related to optimization. After disabling the JIT, the received values are consistent:
-Djava.compiler=NONE -Xint
I have 2 schedulers, which execute at a fixed delay of 5 s.
I have 2 use cases:
If the if-condition in the BusinessLogic class is true, I want both schedulers to sleep for 3 seconds, which means both schedulers should then execute after 8 seconds [5 s + 3 s].
If the code takes the else branch, both schedulers should continue to execute at a fixed delay of 5 seconds.
Code:
Scheduler class:
import java.util.Date;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
@Component
public class TestSchedulers {

    @Autowired
    private BusinessLogic businessLogic;

    @Scheduled(fixedDelay = 5000)
    public void scheduler1() {
        Date currentDate = new Date();
        System.out.println("Started Sceduler 1 at " + currentDate);
        String schedulerName = "Scheduler one";
        businessLogic.logic(schedulerName);
    }

    @Scheduled(fixedDelay = 5000)
    public void scheduler2() {
        Date currentDate = new Date();
        System.out.println("Started Sceduler 2 at " + currentDate);
        String schedulerName = "Scheduler two";
        businessLogic.logic(schedulerName);
    }
}
Business logic class:
import java.util.Random;
import org.springframework.stereotype.Service;
@Service
public class BusinessLogic {

    public void logic(String schedulerName) {
        if (randomGen() < 100) {
            System.out.println("\nExecuting If condition for [" + schedulerName + "]");
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } else if (randomGen() > 100) {
            System.out.println("\nExecuting Else condition for [" + schedulerName + "]");
        }
    }

    // Generate random numbers
    public int randomGen() {
        Random rand = new Random();
        int randomNum = rand.nextInt((120 - 90) + 1) + 90;
        return randomNum;
    }
}
The problem
Both schedulers do not start at the same time.
When the if-branch executes, only one scheduler sleeps for the extra 3 seconds, but I want both schedulers to do so.
Log for reference:
Started Sceduler 1 at Sun May 26 12:34:53 IST 2019
Executing If condition for [Scheduler one]
2019-05-26 12:34:53.266 INFO 9028 --- [ main] project.project.App : Started App in 1.605 seconds (JVM running for 2.356)
Started Sceduler 2 at Sun May 26 12:34:56 IST 2019
Executing If condition for [Scheduler two]
Started Sceduler 1 at Sun May 26 12:35:01 IST 2019
Executing Else condition for [Scheduler one]
Started Sceduler 2 at Sun May 26 12:35:04 IST 2019
Executing Else condition for [Scheduler two]
Started Sceduler 1 at Sun May 26 12:35:06 IST 2019
Executing If condition for [Scheduler one]
Started Sceduler 2 at Sun May 26 12:35:09 IST 2019
Executing Else condition for [Scheduler two]
Started Sceduler 1 at Sun May 26 12:35:14 IST 2019
Executing If condition for [Scheduler one]
Started Sceduler 2 at Sun May 26 12:35:17 IST 2019
Executing If condition for [Scheduler two]
Started Sceduler 1 at Sun May 26 12:35:22 IST 2019
Executing Else condition for [Scheduler one]
Started Sceduler 2 at Sun May 26 12:35:25 IST 2019
Executing Else condition for [Scheduler two]
Started Sceduler 1 at Sun May 26 12:35:27 IST 2019
Please help.
In each scheduler you invoke if (randomGen() < 100) independently of the other. So for one scheduler the result could be > 100 while for the other it is < 100, or both could happen to match. What you need to do is run randomGen() outside of the schedulers and store the single result somewhere both schedulers can access it. Then both will rely on the same value in their if (randomGenValue < 100) statement and will behave the same way.
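One way to sketch that shared value, using a plain shared object rather than any particular Spring wiring (the SharedDecision class and its method names are made up for illustration):

```java
import java.util.Random;

// Hypothetical shared component: the random number is drawn once per cycle,
// and both schedulers consult the same stored value.
class SharedDecision {
    private final Random rand = new Random();
    private volatile int currentValue;

    // Call this once per cycle (e.g. from a third scheduled method that
    // fires before the other two, or from whichever scheduler runs first).
    void refresh() {
        currentValue = rand.nextInt((120 - 90) + 1) + 90; // same range as randomGen()
    }

    boolean shouldSleep() {
        return currentValue < 100;
    }
}

public class SharedDecisionDemo {
    public static void main(String[] args) {
        SharedDecision decision = new SharedDecision();
        decision.refresh();
        // Both "schedulers" read the same value, so they always agree.
        System.out.println("Scheduler one sleeps: " + decision.shouldSleep());
        System.out.println("Scheduler two sleeps: " + decision.shouldSleep());
    }
}
```

In the real application, SharedDecision would be a singleton bean injected into TestSchedulers so that scheduler1 and scheduler2 both call the same instance.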
Following a question from a colleague about parallel streams, I wrote the following code to test something out.
import java.util.LinkedList;
import java.util.List;

public class Test {
    public static void main(String[] args) {
        List<Runnable> list = new LinkedList<>();
        list.add(() -> {
            try {
                Thread.sleep(10000);
                System.out.println("Time : " + System.nanoTime() + " " + "Slow task");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        for (int i = 0; i < 1000; i++) {
            int j = i;
            list.add(() -> System.out.println("Time : " + System.nanoTime() + " " + j));
        }
        list.parallelStream().forEach(r -> r.run());
    }
}
Strangely, the output is always something like the following.
Time : 4096118049370412 61
Time : 4096118049567530 311
Time : 4096118049480238 217
Time : 4096118049652415 405
Time : 4096118049370678 436
Time : 4096118049370575 155
Time : 4096118049720639 437
Time : 4096118049719368 280
Time : 4096118049804630 281
Time : 4096118049684148 406
Time : 4096118049660398 218
TRUNCATED
Time : 4096118070511768 669
Time : 4096118070675678 670
Time : 4096118070584951 426
Time : 4096118070704143 427
Time : 4096118070714441 428
Time : 4096118070722080 429
Time : 4096118070729569 430
Time : 4096118070736782 431
Time : 4096118070744069 432
Time : 4096118070751286 433
Time : 4096118070758554 434
Time : 4096118070765913 435
Time : 4096118070550370 930
Time : 4096118070800538 931
Time : 4096118070687425 671
Time : 4096118070813669 932
Time : 4096118070827794 672
Time : 4096118070866089 933
Time : 4096118070881358 673
Time : 4096118070895344 934
Time : 4096118070907608 674
Time : 4096118070920712 935
Time : 4096118070932934 675
Time : 4096118070945131 936
Time : 4096118070957850 676
Time : 4096118070982326 677
Time : 4096118070991158 678
Time : 4096118070999002 679
Time : 4096118071006501 680
Time : 4096118071017766 681
Time : 4096118071025766 682
Time : 4096118071033318 683
Time : 4096118071070603 684
Time : 4096118071080240 685
Time : 4096128063025914 Slow task
Time : 4096128063123940 0
Time : 4096128063148135 1
Time : 4096128063173285 2
Time : 4096128063176723 3
Time : 4096128063179939 4
Time : 4096128063183077 5
Time : 4096128063191001 6
Time : 4096128063194156 7
Time : 4096128063197273 8
Time : 4096128063200395 9
Time : 4096128063203581 10
Time : 4096128063206988 11
Time : 4096128063210155 12
Time : 4096128063213285 13
Time : 4096128063216411 14
Time : 4096128063219542 15
Time : 4096128063222733 16
Time : 4096128063232190 17
Time : 4096128063235653 18
Time : 4096128063238827 19
Time : 4096128063241962 20
Time : 4096128063245176 21
Time : 4096128063248296 22
Time : 4096128063251444 23
Time : 4096128063254557 24
Time : 4096128063257705 25
Time : 4096128063261566 26
Time : 4096128063264733 27
Time : 4096128063268115 28
Time : 4096128063272851 29
Process finished with exit code 0
That is, there are always some tasks waiting for the slow task to finish, even though all the other tasks have completed. I would expect the slow task to occupy only one thread, all the other tasks to finish without any problem, and only the slow task to take the full 10 seconds. I have 8 CPUs, so the parallelism level is 7.
What could the reason for this be?
To add more information: the code is only for understanding purposes; I am not putting it anywhere in production.
Work-stealing with parallel streams has limited capabilities: if a single thread has claimed a chunk of work and then blocks, the remaining tasks in that chunk wait until it finishes processing its other tasks.
You can visualize this by adding a few more debugging notes around your code...
import java.util.LinkedList;
import java.util.List;

class Test {
    public static void main(String[] args) {
        List<Runnable> list = new LinkedList<>();
        list.add(() -> {
            try {
                System.out.println("Long sleep - " + Thread.currentThread().getName());
                Thread.sleep(10000);
                System.out.println("Time : " + System.nanoTime() + " " + "Slow task");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        for (int i = 0; i < 1000; i++) {
            int j = i;
            list.add(() -> System.out.println("Time : " + System.nanoTime() + " " + j));
        }
        list.parallelStream().forEach(r -> {
            System.out.println(Thread.currentThread().getName());
            r.run();
            System.out.println();
        });
    }
}
Upon running this, I observe the following message come up:
Long sleep - ForkJoinPool.commonPool-worker-4
...and about ten seconds later...
Time : 11525122027429 Slow task
ForkJoinPool.commonPool-worker-4
Time : 11525122204035 0
ForkJoinPool.commonPool-worker-4
Time : 11525122245739 1
ForkJoinPool.commonPool-worker-4
Time : 11525122267015 2
ForkJoinPool.commonPool-worker-4
Time : 11525122286921 3
ForkJoinPool.commonPool-worker-4
Time : 11525122306266 4
ForkJoinPool.commonPool-worker-4
Time : 11525122338787 5
ForkJoinPool.commonPool-worker-4
Time : 11525122357288 6
ForkJoinPool.commonPool-worker-4
Time : 11525122376716 7
ForkJoinPool.commonPool-worker-4
Time : 11525122395218 8
ForkJoinPool.commonPool-worker-4
Time : 11525122414165 9
ForkJoinPool.commonPool-worker-4
Time : 11525122432755 10
ForkJoinPool.commonPool-worker-4
Time : 11525122452805 11
ForkJoinPool.commonPool-worker-4
Time : 11525122472624 12
ForkJoinPool.commonPool-worker-4
Time : 11525122491380 13
ForkJoinPool.commonPool-worker-4
Time : 11525122514417 14
ForkJoinPool.commonPool-worker-4
Time : 11525122534550 15
ForkJoinPool.commonPool-worker-4
Time : 11525122553751 16
So this implies that on my box, worker-4 had some work slated for it that couldn't be stolen, because the input was split into uneven chunks. Note: once a thread is processing the tasks in a chunk, that chunk isn't broken up any further. The chunk sizes looked like this:
[31, 31, 31, 32, 31, 31, 31, 32, 31, 31, 31, 32, 31, 31, 31, 32, 31, 31, 31, 32, 31, 31, 31, 32, 31, 31, 31, 32, 31, 32, 31, 32, 0]
If you are looking for a threading implementation that can steal work from threads that run long, it is best to use a work-stealing pool directly.
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class Test {
    public static void main(String[] args) throws InterruptedException {
        List<Runnable> list = new LinkedList<>();
        list.add(() -> {
            try {
                System.out.println("Long sleep - " + Thread.currentThread().getName());
                Thread.sleep(10000);
                System.out.println("Time : " + System.nanoTime() + " " + "Slow task");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        for (int i = 0; i < 1000; i++) {
            int j = i;
            list.add(() -> {
                System.out.println(Thread.currentThread().getName());
                System.out.println("Time : " + System.nanoTime() + " " + j);
                System.out.println();
            });
        }
        final ExecutorService stealingPool = Executors.newWorkStealingPool();
        list.forEach(stealingPool::execute);
        stealingPool.shutdown();
        stealingPool.awaitTermination(15, TimeUnit.SECONDS);
    }
}
The above prints a more reasonable result at the end of the list:
Time : 12210445469314 Slow task
...which implies that all available work has been processed in the time allotted (15 seconds).
I wanted to ask if anybody has had the same problem with the Quartz scheduler. I create jobs with trigger and job keys where I set the group names, but when I print out the group that has been set, it is always DEFAULT.
How can I set this group name so that I can finally group jobs together and, most importantly, cancel only specific groups? With code similar to this:
public void unscheduleByGroupname(String groupName) throws SchedulerException {
    for (JobKey jobKey : scheduler.getJobKeys(GroupMatcher.jobGroupEquals(groupName))) {
        scheduler.unscheduleJob(new TriggerKey(jobKey.getName(), jobKey.getGroup()));
    }
}
Input:
TriggerKey tKey = new TriggerKey("Trigger:" + jobName + "-Somename:" + object.toString(),
        "Group:" + jobName + "-Somename:" + object.toString());
JobKey jKey = new JobKey("Job:" + jobName + "-Somename:" + object.toString(),
        "Group:" + jobName + "-Somename:" + object.toString());
JobDetail job = JobBuilder.newJob(Somename.class).withDescription("Somename")
        .withIdentity(jKey).build();
Trigger trigger = TriggerBuilder.newTrigger().forJob(jKey).startAt(new Date()).withIdentity(tKey).build();
Output Function:
for (String groupName : scheduler.getJobGroupNames()) {
    for (JobKey jobKey : scheduler.getJobKeys(GroupMatcher.jobGroupEquals(groupName))) {
        String jobName = jobKey.getName();
        String jobGroup = jobKey.getGroup();
        // get the job's trigger
        List<Trigger> triggers = (List<Trigger>) scheduler.getTriggersOfJob(jobKey);
        Date nextFireTime = triggers.get(0).getNextFireTime();
        System.out.println("[jobName] : " + jobName + " [groupName] : " + jobGroup + " - " + nextFireTime);
    }
}
Output:
[jobName] : Job:-Somename:13 [groupName] : DEFAULT - Tue Jul 19 13:48:40 CEST 2016
[jobName] : Job:-Somename:14 [groupName] : DEFAULT - Tue Jul 19 13:49:11 CEST 2016
[jobName] : Job:-Somename:15 [groupName] : DEFAULT - Tue Jul 19 13:49:41 CEST 2016
[jobName] : Job:-Somename:16 [groupName] : DEFAULT - Tue Jul 19 13:50:11 CEST 2016
When you are setting up the job identity, you can add the group info. I chain the methods like below, and it works for me (I can see that the group is the desired name I set):
JobDetail job = JobBuilder.newJob(ScheduledJob.class)
        .withIdentity("JOB KEY", "GROUP NAME")
        .withDescription("Job description")
        .usingJobData(dataMap)
        .build();
I initialize the logger like this:
public static void init() {
    ConsoleHandler handler = new ConsoleHandler();
    handler.setFormatter(new LogFormatter());
    Logger.getLogger(TrackerConfig.LOGGER_NAME).setUseParentHandlers(false);
    Logger.getLogger(TrackerConfig.LOGGER_NAME).addHandler(handler);
}
The LogFormatter's format function:
@Override
public String format(LogRecord record) {
    StringBuilder sb = new StringBuilder();
    sb.append(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss Z").format(new Date(record.getMillis())))
            .append(" ")
            .append(record.getLevel().getLocalizedName()).append(": ")
            .append(formatMessage(record)).append(LINE_SEPARATOR);
    return sb.toString();
}
To log, I use the following method:
private static void log(Level level, String message) {
    Logger.getLogger(TrackerConfig.LOGGER_NAME).log(level, message);
    if (level.intValue() >= TrackerConfig.DB_LOGGER_LEVEL.intValue()) {
        DBLog.getInstance().log(level, message);
    }
}
The DBLog.log method:
public void log(Level level, String message) {
    try {
        this.logBatch.setTimestamp(1, new Timestamp(Calendar.getInstance().getTime().getTime()));
        this.logBatch.setString(2, level.getName());
        this.logBatch.setString(3, message);
        this.logBatch.addBatch();
    } catch (SQLException ex) {
        // if this happens, the code will exit anyway, so it will not cause a loop
        Log.logError("SQL error: " + ex.getMessage());
    }
}
A normal log line looks like this:
2013-04-20 18:00:59 +0200 INFO: Starting up Tracker
It works for some time, but the LogFormatter seems to get reset at some point. Sometimes only one log entry is displayed correctly; after that, entries are displayed like
Apr 20, 2013 6:01:01 PM package.util.Log log INFO:
Loaded 33266 database entries.
again.
What I tried:
For debugging purposes I added a thread that outputs the JVM's memory usage every x seconds.
The output kept the right log format until the reserved-memory value changed (a change in the free-memory value did not reset the format), like this:
2013-04-20 18:16:24 +0200 WARNING: Memory usage: 23 / 74 / 227 MiB
2013-04-20 18:16:25 +0200 WARNING: Memory usage: 20 / 74 / 227 MiB
2013-04-20 18:16:26 +0200 WARNING: Memory usage: 18 / 74 / 227 MiB
Apr 20, 2013 6:16:27 PM package.util.Log log WARNING:
Memory usage: 69 / 96 / 227 MiB
Apr 20, 2013 6:16:27 PM package.util.Log log INFO:
Scheduler running
Apr 20, 2013 6:16:27 PM package.Log log WARNING:
Memory usage: 67 / 96 / 227 MiB
Also note that the log level seems to be reset from warning to info here.
Where the problem seems to be:
When I comment out the database log function like this:
private static void log(Level level, String message) {
    Logger.getLogger(TrackerConfig.LOGGER_NAME).log(level, message);
    if (level.intValue() >= TrackerConfig.DB_LOGGER_LEVEL.intValue()) {
        // DBLog.getInstance().log(level, message);
    }
}
the log is formatted properly.
Any ideas what could be wrong with DBLog's log function, or why the log format suddenly resets?
I would not really call this a solution, but it works now.
The cause seemed to be the memory calculation itself:
even when I just calculated the values without logging them, the log format was reset.
I have no idea why it worked when I merely commented out the DBLog usage.
int mb = 1024 * 1024;
long freeMemory = Runtime.getRuntime().freeMemory() / mb;
long reservedMemory = Runtime.getRuntime().totalMemory() / mb;
long maxMemory = Runtime.getRuntime().maxMemory() / mb;
String memoryUsage = "Memory usage: " + freeMemory + " / " + reservedMemory + " / " + maxMemory + " MiB";
This is the code I used. As soon as I commented it out, the log format no longer reset, and now everything works as expected.
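A plausible explanation, which the post itself does not confirm, is that java.util.logging's LogManager holds Logger instances only by weak reference, so a logger configured via Logger.getLogger(...) can be garbage collected together with its handlers and formatter if nothing keeps a strong reference to it. That would line up with the format resetting exactly when the memory figures changed, i.e. when a GC ran. A sketch of the defensive fix (the LogHolder class and "tracker" logger name are illustrative):

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Logger;

public class LogHolder {
    // Keeping a strong static reference prevents the logger (and its
    // handler/formatter configuration) from being garbage collected:
    // LogManager only holds loggers by weak reference.
    static final Logger LOGGER = Logger.getLogger("tracker");

    public static void init() {
        ConsoleHandler handler = new ConsoleHandler();
        // handler.setFormatter(new LogFormatter()); // custom formatter as in the post
        LOGGER.setUseParentHandlers(false);
        LOGGER.addHandler(handler);
    }
}
```

With the strong reference held for the lifetime of the application, every subsequent Logger.getLogger("tracker") call returns the same configured instance, and records no longer fall back to the root logger's default SimpleFormatter output.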