I'd like to know how long my SQL queries take to execute. The JDBC layer doesn't seem to report this, and I couldn't find it in the MyBatis logs either. I can't believe there is no easy way to get this?
You can use a StopWatch from the org.apache.commons.lang.time package. It runs in your Java code, so after you add the dependency and import StopWatch into your class, you'd have something like this:
import org.apache.commons.lang.time.StopWatch;

StopWatch sw = new StopWatch();
sw.start();
// run the query you want to measure, e.g. a mapper or DAO call
sw.stop();
long timeInMilliseconds = sw.getTime();
System.out.println("Time in ms is: " + timeInMilliseconds);
// or log it instead of printing, if you prefer
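If you'd rather see the timings next to the MyBatis statements themselves instead of sprinkling StopWatch calls through your code, one option is a MyBatis plugin that wraps the Executor. The sketch below assumes MyBatis 3.x; TimingInterceptor is just a name made up here, and you register it under <plugins> in mybatis-config.xml.

import java.util.Properties;
import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Plugin;
import org.apache.ibatis.plugin.Signature;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;

// Sketch of a MyBatis plugin that logs how long each mapped statement takes.
@Intercepts({
    @Signature(type = Executor.class, method = "query",
        args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class}),
    @Signature(type = Executor.class, method = "update",
        args = {MappedStatement.class, Object.class})
})
public class TimingInterceptor implements Interceptor {

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        long start = System.nanoTime();
        try {
            return invocation.proceed();               // run the actual statement
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            MappedStatement ms = (MappedStatement) invocation.getArgs()[0];
            System.out.println(ms.getId() + " took " + elapsedMs + " ms");
        }
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }

    @Override
    public void setProperties(Properties properties) {
        // no properties needed for this sketch
    }
}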
I want to check how fast the CRUD Operations are executing on a MongoDB.
Therefore I recorded the time with the following code:
long start = System.nanoTime();
FindIterable<Document> datasetFindIterable = this.collection.find(filter);
long finish = System.nanoTime();
long timeElapsed = finish - start;
I am aware that the FindIterable's explain output comes with "executionStats" and "executionTimeMillis":
JSONObject jsonObject = (JSONObject) parser.parse(datasetFindIterable.explain().toJson());
JSONObject executionStats = (JSONObject) jsonObject.get("executionStats");
Long executionTimeMillis = (Long) executionStats.get("executionTimeMillis");
However, I am a bit confused by the results I get:
start (ns):               582918161918004
finish (ns):              582918161932511
timeElapsed (ns):         14507
executionTimeMillis (ms): 1234
14507 ns are 0.014507 ms
How can it be that the executionTimeMillis (1234 ms) is so much larger than the difference between the System.nanoTime() calls (0.014507 ms)? Shouldn't it be the other way around, since the System.nanoTime() calls themselves also take some time to execute?
If I recall correctly, there are both asynchronous and synchronous MongoDB drivers available.
If you use an asynchronous driver, the issue could be that the line
long finish = System.nanoTime();
does not wait for
FindIterable<Document> datasetFindIterable = this.collection.find(filter);
to return with a value, so the measured time difference can be lower than the execution time stored in the FindIterable's explain output.
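Whichever driver is in use, note that with the synchronous driver find() only builds the FindIterable; the query is actually sent when the cursor is consumed. A minimal sketch (assuming the synchronous driver and that materializing the results is acceptable; collection and filter are the objects from the question) that times the full round trip:

import java.util.ArrayList;
import java.util.List;
import org.bson.Document;

long start = System.nanoTime();
// into() iterates the cursor, so the query is executed and the results are fetched
List<Document> docs = collection.find(filter).into(new ArrayList<>());
long finish = System.nanoTime();
System.out.println("round trip: " + (finish - start) / 1_000_000.0 + " ms for " + docs.size() + " documents");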
Inside the doGet method in my servlet I'm using a JPA TypedQuery to retrieve my data, and I'm able to get the data I want through an HTTP GET request. The method to get the data takes roughly 10 seconds, and when I make a single request all is good. The problem occurs when I get multiple requests at the same time: if I make 4 requests at the same time, all 4 queries are lumped together and it takes 40 seconds to get the data back for all of them. How can I get JPA to run the 4 queries in parallel? Is this something that needs to be set in persistence.xml, or is it a code-related issue? Note: I've also tried executing this code in a thread. A link and some appropriate terminology to increase my understanding would be appreciated.
Thanks!
String sequenceNo = request.getParameter("sequenceNo");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("mydbcon");
EntityManager em = emf.createEntityManager();
try {
    long startTime = System.currentTimeMillis();
    List<Myeo> returnData = methodToGetData(em);
    System.out.println(sequenceNo + " " + (System.currentTimeMillis() - startTime));
    String myJson = new Gson().toJson(returnData);
    resp.getOutputStream().print(myJson);
    resp.getOutputStream().flush();
} finally {
    resp.getOutputStream().close();
    if (em.isOpen())
        em.close();
}
4 simultaneous request samples:
localhost/myservlet/mycodeblock?sequenceNo=A
localhost/myservlet/mycodeblock?sequenceNo=B
localhost/myservlet/mycodeblock?sequenceNo=C
localhost/myservlet/mycodeblock?sequenceNo=D
resulting print statements
A 38002
B 38344
C 38785
D 39065
What I want
A 9002
B 9344
C 9785
D 10065
If you make 4 separate GET requests, they should be handled in parallel. They should not be lumped together, since they run in different transactions.
If it does not work the way you described, check whether you have configured a database connection pool size or a servlet thread pool size that serializes the calls to the DBMS.
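One thing worth ruling out (an assumption on my part, not something stated in the question): the code in the question creates a new EntityManagerFactory inside doGet(). Factory creation is expensive and, depending on the provider and pool settings, can dominate or serialize concurrent requests on its own. A sketch of creating it once per servlet and only opening a per-request EntityManager (MyDataServlet is a placeholder name; the query and JSON-writing code from the question would go where the comment is):

import java.io.IOException;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MyDataServlet extends HttpServlet {

    private EntityManagerFactory emf;   // thread-safe, shared by all requests

    @Override
    public void init() throws ServletException {
        // created once when the servlet is loaded, not once per request
        emf = Persistence.createEntityManagerFactory("mydbcon");
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse resp)
            throws ServletException, IOException {
        EntityManager em = emf.createEntityManager();   // cheap, one per request
        try {
            // run methodToGetData(em) and write the Gson JSON response here,
            // exactly as in the code from the question
        } finally {
            if (em.isOpen())
                em.close();
        }
    }

    @Override
    public void destroy() {
        emf.close();
    }
}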
I have a requirement to insert/update more than 15000 rows in 3 tables. So that's 45k total inserts.
I used a StatelessSession in Hibernate after reading online that it is best for batch processing, as it doesn't have a persistence context cache.
StatelessSession session = sessionFactory.openStatelessSession();
Transaction transaction = session.beginTransaction();
for (Employee e : emplList) {
    session.insert(e);
}
transaction.commit();
But this code takes more than an hour to complete.
Is there a way to save all the entity objects in one go?
Save the entire collection rather than doing it one by one?
Edit: Is there any other framework that can offer a quick insert?
Cheers!!
You should read this article by Vlad Mihalcea:
How to batch INSERT and UPDATE statements with Hibernate
You need to make sure that you've set the Hibernate property hibernate.jdbc.batch_size, so that Hibernate can batch these inserts; otherwise they'll be executed one at a time.
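For reference, a minimal sketch of setting this programmatically; the same properties can equally go in hibernate.cfg.xml or persistence.xml, and the value 50 is only an example:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// configure() reads hibernate.cfg.xml from the classpath
Configuration cfg = new Configuration().configure();
cfg.setProperty("hibernate.jdbc.batch_size", "50");   // example batch size
cfg.setProperty("hibernate.order_inserts", "true");   // group inserts by entity/table
cfg.setProperty("hibernate.order_updates", "true");
SessionFactory sessionFactory = cfg.buildSessionFactory();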
There is no way to insert all entities in one go. Even if you could do something like session.save(emplList), internally Hibernate would still save them one by one.
According to the Hibernate User Guide, StatelessSession does not use the batching feature:
The insert(), update(), and delete() operations defined by the StatelessSession interface operate directly on database rows. They cause the corresponding SQL operations to be executed immediately. They have different semantics from the save(), saveOrUpdate(), and delete() operations defined by the Session interface.
Instead, use a normal Session and clear the cache from time to time. Actually, I suggest you measure your code first and then make changes like using hibernate.jdbc.batch_size, so you can see how much each tweak improves your load.
Try to change it like this:
Session session = sessionFactory.openSession();
Transaction transaction = session.beginTransaction();
int count = 0;
int step = 0;
int stepSize = 1_000;
long start = System.currentTimeMillis();
for (Employee e : emplList) {
    session.save(e);
    count++;
    if (++step == stepSize) {
        long elapsed = System.currentTimeMillis() - start;
        long linesPerSecond = elapsed > 0 ? stepSize * 1_000L / elapsed : stepSize;
        StringBuilder msg = new StringBuilder();
        msg.append("Step time: ");
        msg.append(elapsed);
        msg.append(" ms Lines: ");
        msg.append(count);
        msg.append("/");
        msg.append(emplList.size());
        msg.append(" Lines/Second: ");
        msg.append(linesPerSecond);
        System.out.println(msg.toString());
        start = System.currentTimeMillis();
        step = 0;
        session.flush();   // push the batched inserts to the database
        session.clear();   // then evict them from the persistence context
    }
}
transaction.commit();
session.close();
About hibernate.jdbc.batch_size: you can try different values, including very large ones, depending on the underlying database and the network configuration. For example, I use a value of 10,000 on a 1 Gbps network between the app server and the database server, which gives me about 20,000 records per second.
Change stepSize to the same value as hibernate.jdbc.batch_size.
So I am using a Virtuoso SPARQL endpoint and querying it with Jena. I use QueryFactory and QueryExecution to create a SPARQL query:
Query query = QueryFactory.create(sparqlQueryString1);
QueryExecution qexec = QueryExecutionFactory.sparqlService("http://localhost:8890/sparql", query);
ResultSet results = qexec.execSelect();
Now I want to calculate the time taken to run this query. How does one find such a time using Jena on Virtuoso? Is that possible? I did look at methods like getTimeOut1() and getTimeOut2(), but they don't seem to be giving me any good direction. As a hack I tried using Java's built-in System.currentTimeMillis(), but I am not sure that is the right way. Any pointers as to how I can find the execution time would be appreciated!
Results come back as a stream, so the timing needs to span from just before qexec.execSelect() to just after the app has finished handling the results, not just the execSelect() call itself.
Timer timer = new Timer() ;
timer.startTimer() ;
ResultSet results = qexec.execSelect();
ResultSetFormatter.consume(results) ;
long x = timer.finishTimer() ; // Time in milliseconds.
It's not clear whether you want to time the full round-trip, or just the time Virtuoso spends on things...
Virtuoso 7 lets you get the compilation (query plan) and execution time of a query using the profile function.
You can also enable general query logging and profiling using the prof_enable function.
I want my Java program to run at a specific date and time requested by the user. The requested time will be stored in the database as a Timestamp, and the code should start running at that point in time.
Should I use the Timer class for this or the Quartz scheduler? Please advise me on the better solution. I am new to Java, so I am not familiar with these schedulers. If anyone can help me with a simple example it would be a great help, in particular how I can pass the Timestamp as a parameter to the Timer.
// for each of the four entries with time remaining, record its turn in execOrder
// and subtract one quantum (qtm) from bur[i], or zero it out if less than a quantum is left
for (int i = 0; i < 4; i++) {
    if (bur[i] > 0) {
        if (bur[i] > qtm) {
            execOrder.add(i + 1);
            bur[i] = bur[i] - qtm;
            flagClounter++;
        } else {
            execOrder.add(i + 1);
            bur[i] = 0;
            flagClounter++;
        }
    }
}
If the above is the relevant code, how can I run it with a Timer, and how do I pass the Timestamp there or in Quartz? Please help me.
Quartz Scheduler is a very good option for achieving this kind of functionality in Java. Go with it: http://www.tutorialsavvy.com/2012/12/quartz-scheduler-scheduling-job-in-java.html
You can use Quartz triggers.
Basically, Quartz has two types of triggers:
1. Simple Trigger
2. Cron Trigger
If you want to run your job on a particular date and time, use a Cron Trigger. A cron trigger accepts a cron expression, which looks like this:
expression=59 59 23 ? * FRI
This expression says the job should execute every Friday night at 11:59:59 PM.
More expressions can be found here.
A Simple Trigger, on the other hand, accepts a delay in milliseconds and fires after your application starts; i.e. if the specified delay is 10000 milliseconds, the job executes 10000 milliseconds after the application starts.
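To tie this back to the original question (firing a job once at a Timestamp read from the database), here is a hedged sketch using the Quartz 2.x builder API; MyJob, the identity names, and the hard-coded Timestamp are placeholders:

import java.sql.Timestamp;
import java.util.Date;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class ScheduleAtTimestamp {

    public static class MyJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // put the work you want to run at the requested time here,
            // e.g. the burst-time loop from the question
        }
    }

    public static void main(String[] args) throws SchedulerException {
        // normally loaded from the database; hard-coded here for the sketch
        Timestamp runAt = Timestamp.valueOf("2025-01-01 09:30:00");

        JobDetail job = JobBuilder.newJob(MyJob.class)
                .withIdentity("userJob", "userGroup")
                .build();

        // a trigger with startAt() and no repeat schedule fires exactly once at that time
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("userTrigger", "userGroup")
                .startAt(new Date(runAt.getTime()))
                .build();

        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        scheduler.scheduleJob(job, trigger);
    }
}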