I'm writing an application to measure the speed of CRUD operations with Hibernate on Derby.
This is my function:
@Override
public BulkTestResult testBulkInsertScenario(Long volume, Integer repeat) {
    StopWatch sw = new StopWatch();
    BulkTestResult bulkTestResult = new BulkTestResult();
    bulkTestResult.setStartDate(Instant.now());
    bulkTestResult.setCountTest(volume);
    bulkTestResult.setTestRepeat(repeat);
    familyService.clear();
    for (int i = 0; i < repeat; i++) {
        List<ProjectEntity> projects = dataAnonymization.generateProjectEntityList(volume);
        runBulkTest(sw, bulkTestResult, projects, true);
    }
    bulkTestResult.setEndDate(Instant.now());
    return bulkTestResult;
}

private void runBulkTest(StopWatch sw, BulkTestResult bulkTestResult, List<ProjectEntity> projects, boolean resetAfter) {
    sw.reset();
    sw.start();
    familyService.save(projects);
    sw.stop();
    bulkTestResult.addMsSpeedResult(sw.getTime());
    if (resetAfter) familyService.clear();
    sw.reset();
}
The clear method removes all records from the DB.
The problem I have is with the values I receive as output from the application.
Test data: 1000 records and 10 repeats.
Example speed values received when running this test a few times:
311, 116, 87, (...)38
32, 27, 30, (...) 24
22, 19, 18, (...) 21
19, 18, 18, (...) 19
Why is there so much variation, and why is the first insert always slower?
Could it be some kind of hardware acceleration?
I found a solution.
This issue is related to JIT optimization. After disabling the JIT, the values received are consistent:
-Djava.compiler=NONE -Xint
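Note that disabling the JIT also makes steady-state runs slower, so it mainly confirms the cause. If you want representative timings with the JIT enabled, another option is to run unmeasured warm-up rounds first and only time the later iterations. A minimal sketch reusing the services from the question; the WARMUP_ROUNDS constant and the wrapper method are my own additions:

private static final int WARMUP_ROUNDS = 3; // arbitrary; tune as needed

public BulkTestResult testBulkInsertScenarioWarmed(Long volume, Integer repeat) {
    // Unmeasured rounds give the JIT a chance to compile the hot save/clear paths.
    for (int i = 0; i < WARMUP_ROUNDS; i++) {
        List<ProjectEntity> projects = dataAnonymization.generateProjectEntityList(volume);
        familyService.save(projects);
        familyService.clear();
    }
    // Measured rounds then run exactly as before.
    return testBulkInsertScenario(volume, repeat);
}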
I'm using MongoDB 4.0.1 with the Java driver (mongodb-driver-sync) 3.8.0.
My collection has 564,039 documents with 13 key-value pairs each, 2 of which are maps with 10 more key-value pairs.
If I execute the following query in the console, it gives me the results in less than a second:
db.getCollection('tracking_points').find({
    c: 8, d: 11,
    t: {$gte: new Date("2018-08-10"), $lte: new Date("2018-09-10")}
})
But if I execute this in Java it takes more than 30 seconds:
collection.find(
    and(
        eq("c", clientId),
        eq("d", unitId),
        gte("t", start),
        lte("t", end)
    )
).forEach((Block<Document>) document -> {
    // nothing here
});
There is an index on "t" (timestamp); without it, the console find takes a few seconds.
How can this be fixed?
Edit: Here is the log from the DB after the Java query:
"2018-09-21T08:06:38.842+0300 I COMMAND [conn9236] command fleetman_dev.tracking_points command: count { count: \"tracking_points\", query: {}, $db: \"fleetman_dev\", $readPreference: { mode: \"primaryPreferred\" } } planSummary: COUNT keysExamined:0 docsExamined:0 numYields:0 reslen:45 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_msg 0ms",
"2018-09-21T08:06:38.862+0300 I COMMAND [conn9236] command fleetman_dev.tracking_points command: find { find: \"tracking_points\", filter: { c: 8, d: 11, t: { $gte: new Date(1536526800000), $lte: new Date(1536613200000) } }, $db: \"fleetman_dev\", $readPreference: { mode: \"primaryPreferred\" } } planSummary: IXSCAN { t: 1 } cursorid:38396803834 keysExamined:101 docsExamined:101 numYields:0 nreturned:101 reslen:24954 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { ",
"2018-09-21T08:06:39.049+0300 I COMMAND [conn9236] command fleetman_dev.tracking_points command: getMore { getMore: 38396803834, collection: \"tracking_points\", $db: \"fleetman_dev\", $readPreference: { mode: \"primaryPreferred\" } } originatingCommand: { find: \"tracking_points\", filter: { c: 8, d: 11, t: { $gte: new Date(1536526800000), $lte: new Date(1536613200000) } }, $db: \"fleetman_dev\", $readPreference: { mode: \"primaryPreferred\" } } planSummary: IXSCAN { t: 1 } cursorid:38396803834 keysExamined:33810 doc",
You are using the Java driver correctly, but your conclusion - that the Java driver is much slower than the console - is based on an invalid comparison. The two code blocks in your question are not equivalent. In the shell variant you retrieve a cursor. In the Java variant you retrieve a cursor and you walk over the contents of that cursor.
A valid comparison between the Mongo shell and the Java driver would either have to include walking over the cursor in the shell variant, for example:
db.getCollection('tracking_points').find({
    c: 8, d: 11,
    t: {$gte: new Date("2018-08-10"), $lte: new Date("2018-09-10")}
}).forEach(
    function(myDoc) {
        // nothing here
    }
)
Or it would have to remove walking over the cursor from the Java variant, for example:
collection.find(
    and(
        eq("c", clientId),
        eq("d", unitId),
        gte("t", start),
        lte("t", end)
    )
);
Both of these would be more valid forms of comparison. If you run either of them, you'll see that the elapsed times are much closer to each other. The follow-on question might be: 'why does it take 30s to read this data?'. If so, the fact that you can get the cursor back sub-second tells us that the issue is not about indexing; instead, it is likely related to the amount of data being read by the query.
To isolate where the issue occurs, you could gather elapsed times for the following (a rough timing sketch follows below):
1. Read the data, iterating over each document, but do not parse each document.
2. Read the data and parse each document while reading.
If the elapsed time for no. 2 is not much more than for no. 1, then you know that the issue is not in parsing and is more likely in network transfer. If the elapsed time for no. 2 is much greater than no. 1, then the issue is in parsing, and you can dig into the parse call to attribute the elapsed time. It could be constrained resources on the client (CPU and/or memory) or a suboptimal parse implementation. I can't tell at this remove, but using the above approach to isolate where the problem resides will at least help you direct your investigation.
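As one rough way to time those two cases with the 3.8 driver (a sketch, not part of the original answer): RawBsonDocument wraps the undecoded bytes and defers parsing, so iterating with it approximates "read without parsing". The collection, clientId, unitId, start and end names are assumed to match the question's code.

import static com.mongodb.client.model.Filters.*;

import java.util.Date;

import com.mongodb.Block;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import org.bson.RawBsonDocument;

public class ReadVsParseTimer {
    // Parameters are assumed to mirror the question's variables.
    static void timeReadVsParse(MongoCollection<Document> collection,
                                int clientId, int unitId, Date start, Date end) {
        // 1. Read without parsing: RawBsonDocument just wraps the raw BSON bytes.
        long t0 = System.currentTimeMillis();
        collection.withDocumentClass(RawBsonDocument.class)
                .find(and(eq("c", clientId), eq("d", unitId), gte("t", start), lte("t", end)))
                .forEach((Block<RawBsonDocument>) doc -> { /* nothing here */ });
        long readOnlyMs = System.currentTimeMillis() - t0;

        // 2. Read and decode each document into a Document (the original variant).
        long t1 = System.currentTimeMillis();
        collection.find(and(eq("c", clientId), eq("d", unitId), gte("t", start), lte("t", end)))
                .forEach((Block<Document>) doc -> { /* nothing here */ });
        long parsedMs = System.currentTimeMillis() - t1;

        System.out.println("read only: " + readOnlyMs + " ms, read + parse: " + parsedMs + " ms");
    }
}

If the two times are close, parsing is cheap and the elapsed time is dominated by transfer; if they diverge, the decoder is where to look.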
We are developing a tool that supports executing JMX files in our application.
Currently I have an issue with calculating the execution duration time based on the Stepping Thread Group values.
I computed the duration using Java:
public class SteppingThreadGroup {
    public static void main(String[] args) {
        int TotalThreads = 500,
            firstWait = 25,
            thenStart_threads = 5,
            nextAdd_threads = 25,
            threadsEverySeconds = 30,
            usingRamupSeconds = 5,
            holdForSeconds = 600,
            finallyStopThreads = 25,
            down_threadsEverySeconds = 25,
            // extra variables
            RemaingThreads = 0, Duration = 0;
        float RampDown = 0, rampupTime = 0;

        if (thenStart_threads > 0) {
            rampupTime = firstWait + usingRamupSeconds + threadsEverySeconds;
            RemaingThreads = TotalThreads - thenStart_threads;
            System.out.format("Initial_Remaining_threads=%S, Initial_rampupTime=%f%n%n",
                    RemaingThreads, rampupTime);
        } else {
            rampupTime = firstWait;
            RemaingThreads = TotalThreads;
        }

        while (RemaingThreads > 0) {
            if (RemaingThreads > nextAdd_threads)
                rampupTime += Duration + usingRamupSeconds + threadsEverySeconds;
            else if (RemaingThreads == nextAdd_threads)
                rampupTime += Duration + usingRamupSeconds;
            else
                // cast to float: the original integer division truncated the partial step to 0
                rampupTime += Duration + RemaingThreads * ((float) usingRamupSeconds / nextAdd_threads);
            RemaingThreads -= nextAdd_threads;
        }

        RampDown = (TotalThreads / finallyStopThreads - 1) * down_threadsEverySeconds;
        Duration = (int) (holdForSeconds + rampupTime + RampDown);
        System.out.format("RampupTime=%S && HoldForSeconds=%S%nRampDown=%S && Total_Duration=%S",
                rampupTime, holdForSeconds, RampDown, Duration);
    }
}
But is it possible to compute the duration with a closed-form formula?
I implemented one formula, but it's not working as I expected.
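A closed-form sketch derived from the loop above (my own derivation, so treat it as an assumption and verify it against the iterative code; it assumes at least one add step remains after the initial start, and that the partial final step uses the floating-point division noted in the code above):

static int totalDuration(int totalThreads, int firstWait, int thenStartThreads,
                         int nextAddThreads, int threadsEverySeconds,
                         int usingRampupSeconds, int holdForSeconds,
                         int finallyStopThreads, int downThreadsEverySeconds) {
    int remaining = thenStartThreads > 0 ? totalThreads - thenStartThreads : totalThreads;
    // Number of loop iterations that still leave more than nextAddThreads to start.
    int fullSteps = (remaining + nextAddThreads - 1) / nextAddThreads - 1; // ceil(remaining / n) - 1
    int rest = remaining % nextAddThreads;
    float lastStep = rest == 0
            ? usingRampupSeconds
            : rest * ((float) usingRampupSeconds / nextAddThreads);
    float rampupTime = firstWait
            + (thenStartThreads > 0 ? usingRampupSeconds + threadsEverySeconds : 0)
            + fullSteps * (usingRampupSeconds + threadsEverySeconds)
            + lastStep;
    float rampDown = (totalThreads / finallyStopThreads - 1f) * downThreadsEverySeconds;
    return (int) (holdForSeconds + rampupTime + rampDown);
}

With the inputs above (500 threads, 25s first wait, and so on) this returns the same total as the loop.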
Sample Stepping Thread Group value images:
Image 1:
Image 2:
I built a system that simulates memory paging, much like an MMU.
To better monitor and understand how it works, I am logging its operations.
My problem is that the log does not seem to accurately reflect the operations of the system; or rather it does, but then I have a big threading problem that I need help solving.
I'll try to explain.
public void run() // gets pages and writes to them
{
    // I printed the pageId of every process to check they run at the same time and compete for resources
    for (ProcessCycle currentCycle : processCycles.getProcessCycles()) {
        Long[] longArray = new Long[currentCycle.getPages().size()];
        try {
            for (int i = 0; i < currentCycle.getPages().size(); i++) {
                MMULogger.getInstance().write("GP:P" + id + " " + currentCycle.getPages().get(i)
                        + " " + Arrays.toString(currentCycle.getData().get(i)), Level.INFO);
            }
            Page<byte[]>[] newPages = mmu.getPages(currentCycle.getPages().toArray(longArray));
            List<byte[]> currentPageData = currentCycle.getData();
            System.out.println("process id " + id);
            for (int i = 0; i < newPages.length; i++) {
                byte[] currentData = currentPageData.get(i);
                newPages[i].setContent(currentData);
            }
            Thread.sleep(currentCycle.getSleepMs());
        } catch (ClassNotFoundException | IOException | InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
This code snippet is taken from a class called Process. Just like in a computer, I have multiple processes, and each needs to read and write pages, which it requests from the class called MMU. This is the mmu.getPages part.
We also write to our log file inside the getPages method:
public synchronized Page<byte[]>[] getPages(java.lang.Long[] pageIds)
        throws java.io.IOException, ClassNotFoundException {
    @SuppressWarnings("unchecked")
    Page<byte[]>[] toReturn = new Page[pageIds.length];
    for (int i = 0; i < pageIds.length; i++) {
        Long currentPage = algo.getElement(pageIds[i]);
        if (currentPage == null) { // page not found in RAM
            if (ram.getInitialCapacity() != ram.getNumOfPages()) { // RAM is NOT full
                MMULogger.getInstance().write("PF:" + pageIds[i], Level.INFO);
                algo.putElement(Long.valueOf(pageIds[i]), Long.valueOf(pageIds[i]));
                ram.addPage(HardDisk.getInstance().pageFault(pageIds[i]));
            } else { // RAM is full
                Long IDOfMoveToHdPage = algo.putElement(pageIds[i], pageIds[i]);
                Page<byte[]> moveToHdPage = ram.getPage((int) ((long) IDOfMoveToHdPage));
                Page<byte[]> moveToRAM = HardDisk.getInstance().pageReplacement(moveToHdPage, pageIds[i]);
                ram.removePage(moveToHdPage);
                ram.addPage(moveToRAM);
                MMULogger.getInstance().write("PR: MTH " + moveToHdPage.getPageId()
                        + " MTR " + moveToRAM.getPageId(), Level.INFO);
            }
        }
        toReturn[i] = ram.getPage((int) ((long) pageIds[i]));
    }
    return toReturn;
}
So, to recap: a process requests pages, I write to the log file which process requests which page and what it wants to write to it, and then I call mmu.getPages, and the logic of the system continues.
My problem is that the log looks like this:
GP:P2 5 [102, 87, -9, 85, -5]
GP:P1 1 [-9, -18, 50, -124, -102]
GP:P4 10 [79, -51, 67, 118, 111]
GP:P2 6 [-20, -22, 3, -74, -65]
GP:P3 7 [90, 56, 91, 71, -115]
PF:5
GP:P6 18 [28, -39, -3, 64, -117]
GP:P5 13 [72, -26, 52, -84, 6]
GP:P4 11 [-55, -70, -88, -9, 38]
GP:P1 2 [39, 112, -117, 5, 109]
GP:P5 12 [38, -31, 18, -40, 36]
which is not what I wanted. At the start you can see process 2 requested page 5 and wanted to write [102, 87, -9, 85, -5] to it.
After that line I would have expected to see "PF:5", but it's further down. I think this happens because process 2 ran out of time and didn't manage to finish the mmu.getPages operation, so it didn't print PF:5 to the file at that point.
That is a problem for me. I want the processes to run simultaneously in a multithreaded fashion, but I want the log to be, for example, of the form:
GP:P2 5 [1,1,1,1,1]
PF:5
GP:P2 7 [1,2,3,4,5]
PF:7
GP:P19 12 [0,0,0,0,0]
PF:12
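One way to get that ordering, sketched here as an assumption rather than taken from the original system: since getPages is a synchronized instance method of the MMU, taking the MMU's monitor around both the "GP" log lines and the getPages call makes them a single atomic step, so no other process can interleave its log lines in between.

// Inside Process.run(): hold the MMU's monitor across the "GP" logging and the
// getPages call, so the matching PF/PR lines land directly after the GP lines.
Page<byte[]>[] newPages;
synchronized (mmu) { // same lock that the synchronized getPages method uses
    for (int i = 0; i < currentCycle.getPages().size(); i++) {
        MMULogger.getInstance().write("GP:P" + id + " " + currentCycle.getPages().get(i)
                + " " + Arrays.toString(currentCycle.getData().get(i)), Level.INFO);
    }
    newPages = mmu.getPages(currentCycle.getPages().toArray(longArray));
}

The processes still run concurrently between page requests; only the log-and-fetch step is serialized, which is what the desired log format implies anyway.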
I was trying an example with Spring, and part of the code is like below:
private List<Point> points;
long timeTakeninMilis = System.currentTimeMillis(); // set once, when the bean is created

public List<Point> getPoints() {
    return points;
}

public void setPoints(List<Point> points) {
    this.points = points;
}

public void drawJava8() {
    points.stream().forEachOrdered(
            point -> System.out.println("Point : (" + point.getX() + ", "
                    + point.getY() + ")"));
    System.out.println("Total Time Taken drawJava8(): "
            + (System.currentTimeMillis() - timeTakeninMilis)
            + " miliseconds");
}

public void draw() {
    for (Point point : points) {
        System.out.println("Point = (" + point.getX() + ", " + point.getY()
                + " )");
    }
    System.out.println("Total Time Taken draw(): "
            + (System.currentTimeMillis() - timeTakeninMilis)
            + " miliseconds");
}
The output:
Jun 30, 2015 11:30:53 AM org.springframework.context.support.ClassPathXmlApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@7daf6ecc: startup date [Tue Jun 30 11:30:53 IST 2015]; root of context hierarchy
Jun 30, 2015 11:30:53 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [spring.xml]
Point = (0, 0 )
Point = (-50, 0 )
Point = (0, 50 )
Total Time Taken draw(): 70 miliseconds
Point : (0, 0)
Point : (-50, 0)
Point : (0, 50)
Total Time Taken drawJava8(): 124 miliseconds
Jun 30, 2015 11:30:54 AM org.springframework.context.support.ClassPathXmlApplicationContext doClose
INFO: Closing org.springframework.context.support.ClassPathXmlApplicationContext@7daf6ecc: startup date [Tue Jun 30 11:30:53 IST 2015]; root of context hierarchy
Why is it taking more time?
Or am I doing something wrong?
I was expecting it to be faster, or at least of similar speed...
Please help me understand what the benefit of lambda expressions is here.
Note: I ran the two versions in two different programs and took the times from those; I merged them here to keep it short.
Adding this as an analysis per the original poster's request.
We cannot really predict the sophisticated analysis and transformations that a modern JIT compiler performs on running code. Hence, when benchmarking items such as these, you should not draw a conclusion from just two method calls.
Instead, create various sample input sets (including boundary cases) and check the performance by repeatedly calling your test cases without shutting down the JVM. In this case, for example:
for (int i = 0; i < 100; i++) { draw(); drawJava8(); }
Once you have the results, compute the average execution time; you can safely ignore the first execution's result, as it may have run before any optimizations kicked in.
So the conclusion you have drawn from your tests is not completely correct.
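For what it's worth, a benchmark harness such as JMH (org.openjdk.jmh, a separate dependency) automates exactly this warm-up and averaging. A minimal sketch, assuming a Point class with an (x, y) constructor and getX()/getY() accessors as in the question:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Benchmark)
public class DrawBenchmark {
    List<Point> points;

    @Setup
    public void setup() {
        points = Arrays.asList(new Point(0, 0), new Point(-50, 0), new Point(0, 50));
    }

    @Benchmark
    public void forLoop(Blackhole bh) {
        for (Point point : points) {
            bh.consume(point.getX() + point.getY());
        }
    }

    @Benchmark
    public void stream(Blackhole bh) {
        points.stream().forEachOrdered(point -> bh.consume(point.getX() + point.getY()));
    }
}

Consuming values through Blackhole instead of System.out.println matters here: with println, both variants would mostly measure console I/O rather than the difference between the loop and the stream.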
I am using Aspose to automate a PPT generation process after reading some CSV files:
ITable tbl = sld.getShapes().addTable(20, 49, dbCols, dblRows);
dbCols has 7 entries and dblRows has 15, but it is throwing:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
Can someone suggest something? Does Aspose limit the number of rows in a table?
No, Aspose does not limit the number of rows or columns in a table.
Using the Aspose.Slides API, you can add a table with 7 columns and 15 rows.
double[] dblCols = { 30, 30, 30, 30, 30, 30, 30 };
double[] dblRows = { 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20 };
// Add table shape to slide
ITable tbl = slide.getShapes().addTable(100, 50, dblCols, dblRows);
Here, 30 is the width of each column and 20 is the height of each row; 100 and 50 are the x and y coordinates of the top-left corner of the table.
I work with Aspose as a Support Developer.