I'm using MySQL 5.1, Apache Tomcat 7, MyBatis 3.1
I have a method with code like this:
for (Order o : orders) {
    List<Details> list = getDetails(o);
    // Create PDF report ...
}
Here getDetails is a method that executes a stored procedure which takes some time to run (1 to 2 seconds). The problem is that I have many orders (nearly 4,000) and need to execute this method for every one of them, and when I hit that loop, the CPU usage of the MySQL process goes up to 90-100%.
Is that normal? Do I need to call Thread.sleep() after getDetails is executed, or do I need to make some modifications to my query?
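To frame that last option, here is a hedged sketch of one batched alternative. It is only an illustration: the mapper method getDetailsForOrders and the getId()/getOrderId() accessors are assumed names, not part of my code, and it supposes the stored procedure could be replaced by a single SELECT ... WHERE order_id IN (...):

// Sketch: one query for all orders instead of ~4,000 procedure calls,
// then group the rows in memory per order.
// getDetailsForOrders, getId() and getOrderId() are assumed names.
List<Long> ids = new ArrayList<>();
for (Order o : orders) {
    ids.add(o.getId());
}
Map<Long, List<Details>> byOrder = new HashMap<>();
for (Details d : mapper.getDetailsForOrders(ids)) {
    List<Details> group = byOrder.get(d.getOrderId());
    if (group == null) {
        group = new ArrayList<>();
        byOrder.put(d.getOrderId(), group);
    }
    group.add(d);
}
for (Order o : orders) {
    List<Details> list = byOrder.get(o.getId());
    // Create PDF report ...
}

With 4,000 ids the IN list would likely need to be chunked into batches.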
PreparedStatement.executeQuery() is taking ~20x longer to execute than if it were run directly via the shell. I've logged with timers to determine that this method is the culprit.
The query and some DB info (ignoring the Java issue for the moment):
mysql> SELECT username from phpbb_users where user_id = 1; -- lightning fast
Running that same query 1,000 times via mysqlslap is also lightning fast.
mysqlslap --create-schema=mydb --user=root -p --query="select username from phpbb_users where user_id = 1" --number-of-queries=1000 --concurrency=1
Benchmark
Average number of seconds to run all queries: 0.051 seconds
Minimum number of seconds to run all queries: 0.051 seconds
Maximum number of seconds to run all queries: 0.051 seconds
Number of clients running queries: 1
Average number of queries per client: 1000
The Problem: Performing the same query via JDBC is significantly slower. Calling the queryUsername() method below 1,000 times in a for loop (from the main method, not shown here) takes around 872 ms. That's ~17x slower! I've tracked down the heavy usage by placing timers in various spots (some omitted for brevity). The primary suspect is stmt.executeQuery(), which took 776 ms of the 872 ms runtime.
public static String queryUsername() {
    String username = "";
    // DBCore.getConnection() returns HikariDataSource.getConnection() exactly as per https://www.baeldung.com/hikaricp
    try (Connection connection = DBCore.getConnection();
         PreparedStatement stmt = connection.prepareStatement(
                 "SELECT username from phpbb_users where user_id = ?")) {
        stmt.setInt(1, 1); // just looking for user_id 1 for now
        // Guava Stopwatch used to measure how long executeQuery() takes.
        // Another timer outside this method measures total execution time:
        // approximately 1 second for the loop calling this method 1,000 times.
        Stopwatch s = Stopwatch.createStarted();
        try (ResultSet rs = stmt.executeQuery()) {
            s.stop(); // stop the timer once executeQuery() has returned
            timeElapsed += s.elapsed(TimeUnit.MICROSECONDS); // class-level accumulator declared elsewhere
            while (rs.next()) {
                username = rs.getString("username"); // the query returns 1 record
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return username;
}
Additional context and things tried:
SHOW OPEN TABLES lists several open tables, but all have In_use=0 and Name_locked=0.
SHOW FULL PROCESSLIST looks healthy.
user_id is an indexed primary key.
The server is an UpCloud $5/month 1-core, 1 GB RAM instance running Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-66-generic x86_64). MySQL Ver 8.0.23-0ubuntu0.20.04.1 for Linux on x86_64 ((Ubuntu)).
The JDBC driver is mysql-connector-java_8.0.23.jar, obtained from mysql-connector-java_8.0.23-1ubuntu20.04_all via https://dev.mysql.com/downloads/connector/j/.
Don't reconnect each time. Open the connection at the start; reuse it until the web page (or program) is finished.
Chances are that you are comparing different realities.
When running mysqlslap you are most likely using Unix domain sockets for the communication between the tool and the MySQL server. Try changing that to TCP and you should observe an immediate performance drop. Connector/J, on the other hand, creates TCP-based connections by default (Unix domain sockets can be used, but only via a third-party library).
Also, in mysqlslap you are running a simple query directly, which is handled by a single COM_QUERY protocol command. In the Java sample you are preparing the query first and then executing it. Depending on how Connector/J is configured, this may result in a single COM_QUERY protocol command or a pair of commands, namely COM_STMT_PREPARE and COM_STMT_EXECUTE. Connector/J is also affected by how its statement caches are configured (and/or the connection pool's). However, you are only measuring the executeQuery part, so, theoretically, Connector/J could even be favored here.
Finally, unless you actually come up with a use case where you guarantee that both executions are effectively doing the same work under the same circumstances, you can compare results and point out differences, but you can't draw any conclusions from it. For example, it's not that hard to introduce caches and make those simple iterations skip communicating with the server entirely... that would make things extremely fast.
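To level the playing field you could run mysqlslap over TCP (--protocol=tcp) and pin down Connector/J's prepare behavior explicitly. A minimal sketch, assuming the HikariCP setup from the question (the URL and credentials are illustrative; the useServerPrepStmts, cachePrepStmts and prepStmtCacheSize property names are standard Connector/J options):

// Force client-side prepares (a single COM_QUERY per execution) and enable the
// driver's statement cache, so the Java path matches what mysqlslap sends.
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"
        + "?useServerPrepStmts=false" // false: client-side prepare -> one COM_QUERY
        + "&cachePrepStmts=true"      // cache PreparedStatement objects in the driver
        + "&prepStmtCacheSize=250");
config.setUsername("root");
config.setPassword("secret"); // illustrative
HikariDataSource dataSource = new HikariDataSource(config);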
Move the connection borrowing and the Stopwatch-related code out of the method, then measure like this:
Stopwatch s = Stopwatch.createStarted();
try (Connection con = DBCore.getConnection()) {
    for (int i = 0; i < 1000; i++) {
        queryUsername(con); // queryUsername now takes the shared connection
    }
}
s.stop();
System.out.println(s.elapsed(TimeUnit.MICROSECONDS));
We are trying to use MySQL's locking functions GET_LOCK and RELEASE_LOCK in our Spring Java application. The code for each is defined in a separate stored procedure, which we simply invoke from our Java code. The queries inside the procedures are given below.
We have been monitoring the execution time of these functions and found that at times a call takes a millisecond, but at other times it takes around 400 to 600 ms. I have tried the following approaches, but there hasn't been much of a difference:
1. Use "Do" in place of select with these functions .
2. Using an int data type of the key which we are using as lock string .
3. Decreasing the length of lock string .
I am using a timeout of 0 so that connections don't block waiting for the lock.
Can anyone please suggest a way to optimize this? Is there a way of optimizing the InnoDB buffer pool, or something related to these configurations?
Please let me know if any other input is required from my end.
Please find below some procedure code and stats for your reference.
Current MySQL code:
Proc get_Name_lock:
-- Using SELECT
SELECT GET_LOCK(Name, 0) INTO c_Name_flag;
-- Using DO
DO GET_LOCK(Name, 0);
Proc release_Name_lock:
-- Using SELECT
SELECT RELEASE_LOCK(Name) INTO c_Name_flag;
-- Using DO
DO RELEASE_LOCK(Name);
Request rate: around 10 requests/sec.
MySQL version: 5.7.19-log
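As a way to isolate where the time goes, a minimal sketch that times GET_LOCK directly over JDBC, bypassing the procedure (ds and the lock string are assumptions for illustration):

// Time GET_LOCK/RELEASE_LOCK directly to separate server-side lock time
// from pool checkout and network overhead. `ds` is an assumed DataSource.
try (Connection con = ds.getConnection();
     PreparedStatement get = con.prepareStatement("SELECT GET_LOCK(?, 0)");
     PreparedStatement rel = con.prepareStatement("SELECT RELEASE_LOCK(?)")) {
    get.setString(1, "name_lock_key"); // illustrative lock string
    rel.setString(1, "name_lock_key");
    long start = System.nanoTime();
    try (ResultSet rs = get.executeQuery()) {
        rs.next(); // 1 = acquired, 0 = already held elsewhere
    }
    long micros = (System.nanoTime() - start) / 1_000;
    System.out.println("GET_LOCK took " + micros + " µs");
    rel.executeQuery().close();
}

If the spikes show up here too, the time is going to the server (or the lock is contended) rather than to the stored-procedure layer.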
I have a PL/SQL function that is called from our Java code.
I have the SQL_ID of the PL/SQL function execution and I have access to the V$ views with my read-only DB user. The query takes quite some time to execute. Is there a way to profile the PL/SQL function execution to check where exactly it gets stuck?
I know how to do this for SQL queries with V$SQL, V$ACTIVE_SESSION_HISTORY and V$SESSION_LONGOPS, but I am unable to figure out how to do this for PL/SQL code.
The PL/SQL function takes 4 minutes to execute, so I can execute quite a few V$ queries manually in that time. What V$ views should I check to find a line in the execution plan/function? Is this even possible?
Maybe you can use DBMS_PROFILER for your problem, but if you want to use this method you have to install some infrastructure.
I won't describe the process of installing "proftab.sql" here; this link shows how it works.
It also shows some quick examples of how to trace a specific function.
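As a rough illustration of driving a profiler run from the Java side, here is a hedged sketch. It assumes the proftab.sql tables are installed for the connected user; my_slow_function, its argument, and the ds DataSource are hypothetical:

// Wrap the slow PL/SQL call in a DBMS_PROFILER run so line-level timings
// land in the plsql_profiler_* tables queried below.
try (Connection con = ds.getConnection()) {
    try (CallableStatement start = con.prepareCall(
            "BEGIN DBMS_PROFILER.START_PROFILER(run_comment1 => 'MYTEST'); END;")) {
        start.execute();
    }
    try (CallableStatement fn = con.prepareCall("{ ? = call my_slow_function(?) }")) {
        fn.registerOutParameter(1, Types.VARCHAR); // hypothetical return type
        fn.setInt(2, 42);                          // hypothetical argument
        fn.execute();
    }
    try (CallableStatement stop = con.prepareCall(
            "BEGIN DBMS_PROFILER.STOP_PROFILER; END;")) {
        stop.execute();
    }
}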
I can provide my version of the analysis query after some testing:
select ppr.runid,
       ppr.run_comment1,
       decode(nvl(ppd.total_occur, 0), 0, 0,
              ppd.total_time / ppd.total_occur / 1000000) as Avg_msek,
       ppd.total_time / 1000000 as totaltime_msek,
       ppd.total_occur,
       uc.name,
       uc.line,
       uc.text as Source
  from plsql_profiler_data ppd, plsql_profiler_units ppu, user_source uc, plsql_profiler_runs ppr
 where ppd.runid = ppu.runid
   and ppu.runid = ppr.runid
   -- and ppr.run_comment1 = 'MYTEST' -- show only a specific test run
   -- and ppr.runid = (select max(runid) from plsql_profiler_runs) -- to get the last run
   and ppd.unit_number = ppu.unit_number
   and ppu.unit_name = uc.name
   and ppd.line#(+) = uc.line
   and uc.type in ('PACKAGE BODY', 'TYPE BODY')
 -- order by uc.name, uc.line;    -- show all code by line
 -- order by totaltime_msek desc; -- sort by slowest lines
 order by total_occur desc, avg_msek desc; -- sort by call count, then slowest
I'm performing a test with Couchbase 4.0 and the Java SDK 2.2. I'm inserting 10 documents whose keys always start with "190".
After inserting these 10 documents I query them with:
cb.restore("190", cache);
Thread.sleep(100);
cb.restore("190", cache);
The query within the 'restore' method is:
Statement st = Select.select("meta(c).id, c.*").from(this.bucketName + " c").where(Expression.x("meta(c).id").like(Expression.s(callId + "_%")));
N1qlQueryResult result = bucket.query(st);
The first call to restore returns 0 documents:
Query 'SELECT meta(c).id, c.* FROM cache c WHERE meta(c).id LIKE "190_%"' --> Size = 0
The second call (100ms later) returns the 10 documents:
Query 'SELECT meta(c).id, c.* FROM cache c WHERE meta(c).id LIKE "190_%"' --> Size = 10
I tried adding PersistTo.MASTER to the 'insert' call, but that doesn't work either.
It seems that the 'insert' is not persisted immediately.
Any help would be really appreciated.
Joan.
You're using N1QL to query the data, and N1QL is only eventually consistent (by default), so the documents only show up after the index has been updated. This isn't related to whether or not the data is persisted (meaning: written from RAM to disk).
You can try changing the scan_consistency level from its default, NOT_BOUNDED, to get consistent results, but the query will then take longer to return.
Read more here
Java scan_consistency options
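If waiting for the index is acceptable, a minimal sketch with the SDK 2.x query API, reusing the st Statement and bucket from the question (only the consistency parameter is new):

// Ask the query service to wait until the index has caught up with all
// mutations made before the query (read-your-own-writes semantics).
N1qlParams params = N1qlParams.build().consistency(ScanConsistency.REQUEST_PLUS);
N1qlQueryResult result = bucket.query(N1qlQuery.simple(st, params));

REQUEST_PLUS trades latency for consistency, which is why the default is NOT_BOUNDED.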
I am trying to implement paging in Hibernate and I am seeing some weird behavior from Hibernate. I have tried two queries with the same result:
List<SomeData> dataList = (List<SomeData>) session.getCurrentSession()
        .createQuery("from SomeData ad where ad.bar = :bar order by ad.id.name")
        .setString("bar", foo)
        .setFirstResult(i * PAGE_SIZE)
        .setMaxResults(PAGE_SIZE)
        .setFetchSize(PAGE_SIZE) // PAGE_SIZE is 1000 in my case
        .list();
and
List<SomeData> dataList = (List<SomeData>) session.getCurrentSession()
        .createCriteria(SomeData.class)
        .addOrder(Order.asc("id.name"))
        .add(Expression.eq("bar", foo))
        .setFirstResult(i * PAGE_SIZE)
        .setMaxResults(PAGE_SIZE)
        .list();
I have this in a for loop, and the run time increases with each iteration: the first call returns in 100 ms, the second in 150 ms, the fifth takes 2 seconds, and so on.
Looking at the server (MySQL 5.1.36) logs, I see that the SELECT query is generated properly with the LIMIT clause, but for each record returned, Hibernate for some reason also emits an UPDATE query. After the first page it updates 1,000 records, after the second page 2,000 records, and so on. So for a page size of 1,000 and 5 iterations of the loop, the database gets hit with 15,000 queries (5K + 4K + 3K + 2K + 1K). Why is that happening?
I tried making a native SQL query and it worked as expected. The query is
List asins = (List) session.getCurrentSession()
        .createSQLQuery("SELECT * FROM some_data where foo = :foo order by bar "
                + "LIMIT :from, :page")
        .addScalar(..)
        .setInteger("page", PAGE_SIZE)
        .setInteger("from", (i * PAGE_SIZE))
        ... // set other params
        .list();
My mapping class has a setter/getter for the blob object:
void setSomeBlob(Blob blob) {
    this.someByteArray = this.toByteArray(blob);
}

Blob getSomeBlob() {
    return Hibernate.createBlob(someByteArray);
}
Turn on bound-parameter logging (you can do that by setting the "org.hibernate.type" log level to "TRACE") to see what specifically is being updated.
Most likely you're modifying the entities after they've been loaded, either explicitly or implicitly (e.g. returning a different value from a getter or using a default value somewhere).
Another possibility is that you've recently altered (one of) the table(s) you're selecting from, and a column default in the table doesn't match the default value in the entity.
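In this case the Blob getter shown in the question looks like a likely culprit: Hibernate.createBlob() returns a new Blob instance on every call, so the dirty check can consider the property changed and schedule an UPDATE for each loaded row. A minimal sketch of one workaround (property and accessor names are illustrative): map the column as byte[] and expose it directly, so repeated getter calls return the same value.

// Map the column as byte[] so repeated getter calls return the same value
// and the entity is not flagged dirty. Names here are illustrative.
private byte[] someByteArray;

public byte[] getSomeByteArray() {
    return someByteArray;
}

public void setSomeByteArray(byte[] someByteArray) {
    this.someByteArray = someByteArray;
}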