Oracle query performance in Java

My scenario: I have a big query with lots of joins and lots of DECODE/CASE calls in the SELECT list, and I pass one parameter to the WHERE condition from Java. For 150,000 rows the Java fetch is very slow, but the query runs fast in the SQL Developer client.
I thought of creating or replacing a view that takes one parameter and calling that view from Java.
I did not find any resource explaining how to pass parameters to a CREATE OR REPLACE VIEW statement from Java.
Can anyone suggest another approach that fetches rows quickly?
Using Oracle 12c, the ojdbc7 driver, and JDK 8.

First (and easiest):
Set the JDBC fetch size to a high number in your statement. There is a setFetchSize(int) method on Statement, PreparedStatement, CallableStatement, and ResultSet objects.
This defaults to something small like 10 rows. Set that to a reasonably high number, such as 500 or more.
A small fetch size will definitely slow down a query that pulls back hundreds of thousands of records.
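For illustration, a minimal sketch of that setting (an open Connection conn, the query text, and someId are assumed placeholders):
import java.sql.PreparedStatement;
import java.sql.ResultSet;

try (PreparedStatement ps = conn.prepareStatement(
        "SELECT col1, col2 FROM big_table WHERE some_id = ?")) {
    ps.setFetchSize(500);        // default is often 10; 500+ cuts network round trips
    ps.setInt(1, someId);
    try (ResultSet rs = ps.executeQuery()) {
        rs.setFetchSize(500);    // can also be set (or changed) on the ResultSet
        while (rs.next()) {
            // process the row
        }
    }
}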
Second:
Verify that the query is indeed running fast in SQL Developer, to the last row.
You can export to a file or try wrapping the query in a PL/SQL statement that will loop through all records.
If you wish, you can use AUTOTRACE in SQL*Plus to your advantage:
SET TIMING ON
SET AUTOTRACE TRACEONLY
<your query>
This will run the query to the end, pulling all records over the network but not displaying them.
The goal here is to prove that your SQL statement is indeed returning all records as quickly as needed.
If not, then you have a standard tuning exercise. Get it running to completion quickly in SQL Developer first.

Related

Should I use PreparedStatement for a repetitive query whose WHERE clause predicates change often, causing the chosen plan to change?

I have a Java application which executes queries on a PostgreSQL 9.3 server using JDBC. In my Java application, I have to execute the same query many times (thousands of times) with different arguments in the WHERE clause predicates alone. I have been using the Statement class till now. I recently read about the PreparedStatement class and I am wondering whether I should use it to speed up processing. But my doubt is this: since my query executes each time with different values in the WHERE clause predicates, the selectivity will change, and hence the plan chosen by the db server will change. In that case, will using PreparedStatement speed up the processing? Is the plan chosen when the PreparedStatement is created, or only when execute is called on the PreparedStatement object? If the plan is chosen when the PreparedStatement is created, how is that done, since the optimizer chooses plans based on selectivity calculated from actual predicate values?
My Query is a complex one involving many tables. Template is like,
select something from tables where predicate1 and predicate2 and price < X and date < Y;
where X and Y vary for each query.
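For concreteness, a minimal JDBC sketch of the pattern in question (QueryArgs is a hypothetical argument holder; an open Connection conn is assumed):
import java.sql.PreparedStatement;
import java.sql.ResultSet;

String sql = "select something from tables "
           + "where predicate1 and predicate2 and price < ? and date < ?";
// Prepare once, execute many times with different X and Y values.
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    for (QueryArgs args : allArgs) {         // hypothetical argument holder
        ps.setBigDecimal(1, args.maxPrice);  // X
        ps.setDate(2, args.maxDate);         // Y
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) { /* process a row */ }
        }
    }
}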
From the PostgreSQL docs:
PREPARE creates a prepared statement. A prepared statement is a
server-side object that can be used to optimize performance. When the
PREPARE statement is executed, the specified statement is parsed,
analyzed, and rewritten. When an EXECUTE command is subsequently
issued, the prepared statement is planned and executed. This division
of labor avoids repetitive parse analysis work, while allowing the
execution plan to depend on the specific parameter values supplied.
moe was right: preparing a query only removes the overhead of reparsing it again and again. The planning is done only when you execute the prepared query with its parameters.
In 9.3, it uses a heuristic. It does something like planning the query with the specific bind values the first 5 times the prepared statement is executed. If none of those plans turn out to be substantially better than the generic plan, it stops the individual planning and just uses the generic plan from then on.
But there is another wrinkle: just because your code told the driver to use a prepared statement doesn't mean the driver is actually doing so. A lot of drivers do weird things.
The real answer is test, test, test.

Oracle SQL from Java using Spring returns nothing, and doesn't throw an exception

I have Java code that uses Spring to connect to an Oracle DB and execute SQL on it. I have a query that takes a long time to execute (20 minutes or sometimes more). I have an ExecutorService with a thread that executes the query and processes the results. If I put a timeout on the DB and Spring, the system times out correctly but returns nothing before that. If I run the query from SQL*Plus, it returns values. The timeout is set to 3 times what the query takes to execute in SQL Developer.
Any ideas!?
Assuming that your Spring query is using bind variables, are you using bind variables when you execute the query in SQL*Plus/SQL Developer? Or are you using literals?
What version of Oracle are you using?
Have you checked to see whether the query plans for the two environments are different?
20 minutes for a query in Oracle? I'll bet you don't have appropriate indexes on the columns in your WHERE clause.
The dead giveaway is to do an EXPLAIN PLAN on the query. If you see a TABLE SCAN, take appropriate measures.
If you can run the same query in SQL*Plus and see it return in a reasonable time, then I'm incorrect and the problem is due to something else that you did in Java code.
I don't see why you need a separate thread for a query. I'd run the code straight, without a thread, and see how it behaves. If you aren't indexed properly, add some indexes; if the query brings back too much data, add WHERE clauses to restrict it. You've taken extraordinary measures without really understanding what the root cause is.
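As a hedged sketch, that EXPLAIN PLAN check can also be run from the Java side (the query text is a placeholder; DBMS_XPLAN.DISPLAY is standard Oracle, and an open Connection conn is assumed):
import java.sql.ResultSet;
import java.sql.Statement;

try (Statement st = conn.createStatement()) {
    // Explain the statement; bind placeholders are allowed, their types are assumed
    st.execute("EXPLAIN PLAN FOR select * from orders where customer_id = :1");
    try (ResultSet rs = st.executeQuery(
            "SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY())")) {
        while (rs.next()) {
            System.out.println(rs.getString(1)); // look for TABLE ACCESS FULL
        }
    }
}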

Hibernate Batch Insert. Would it ever use one insert instead of multiple inserts?

I've been looking around trying to determine some Hibernate behavior that I'm unsure about. In a scenario where Hibernate batching is properly set up, will it only ever use multiple insert statements when a batch is sent? Is it not possible to use a DB independent multi-insert statement?
I guess I'm trying to determine if I actually have the batching set up correctly. I see the multiple insert statements but then I also see the line "Executing batch size: 25."
There's a lot of code I could post but I'm trying to keep this general. So, my questions are:
1) What can you read in the logs to be certain that batching is being used?
2) Is it possible to make Hibernate use a multi-row insert versus multiple insert statements?
Hibernate uses multiple insert statements (one per entity to insert), but sends them to the database in batch mode (using Statement.addBatch() and Statement.executeBatch()). This is the reason you're seeing multiple insert statements in the log, but also "Executing batch size: 25".
The use of batched statements greatly reduces the number of roundtrips to the database, and I would be surprised if it were less efficient than executing a single statement with multiple inserts. Moreover, it also allows mixing updates and inserts, for example, in a single database call.
I'm pretty sure it's not possible to make Hibernate use multi-row inserts, but I'm also pretty sure it would be useless.
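For reference, a minimal plain-JDBC sketch of what Hibernate does under the hood when batching is on (table and entity names are placeholders; an open Connection conn is assumed):
import java.sql.PreparedStatement;

String sql = "insert into person (id, name) values (?, ?)";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    int count = 0;
    for (Person p : people) {      // hypothetical entity list
        ps.setLong(1, p.getId());
        ps.setString(2, p.getName());
        ps.addBatch();             // queue the parameter set; no round trip yet
        if (++count % 25 == 0) {   // matches "Executing batch size: 25"
            ps.executeBatch();     // one round trip sends the whole batch
        }
    }
    ps.executeBatch();             // flush the remainder
}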
I know that this is an old question, but I had the same problem: I thought that Hibernate batching means Hibernate would combine multiple inserts into one statement, which it doesn't seem to do.
After some testing I found this answer, that a batch of multiple inserts is just as good as a multi-row insert. I did a test inserting 1000 rows, one time using Hibernate batching and one time without. Both tests took about 20 s, so there was no performance gain from using Hibernate batching.
To be sure, I tried using the rewriteBatchedStatements option of MySQL Connector/J, which actually combines multiple inserts into one statement. It reduced the time to insert 1000 records down to 3 s.
So, after all, Hibernate batching seems to be useless, and a real multi-row insert much better. Am I doing something wrong, or what causes my test results?
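For reference, the Connector/J option used in the test above is enabled via the JDBC URL; a minimal sketch (host, database, and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;

String url = "jdbc:mysql://localhost:3306/testdb?rewriteBatchedStatements=true";
Connection conn = DriverManager.getConnection(url, "user", "password");
// Batched inserts on this connection are now rewritten into multi-row inserts.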
Oracle bulk insert collects an array of entities and passes it to the database in a single block, associating with it a single repeated insert/update/delete. That is the only way to speed up network throughput.
Oracle suggests doing it by calling a stored procedure from Hibernate, passing it an array of data.
http://biemond.blogspot.it/2012/03/oracle-bulk-insert-or-select-from-java.html?m=1
It is not only a software problem but an infrastructural one! The problem is network data flow optimization and TCP stack fragmentation. MySQL has the rewriteBatchedStatements function for this.
You have to do something like what is described in that article. Transferring the correct volume of data over the network is the solution.
You also have to verify the network MTU and the Oracle SDU/TDU utilization with respect to the data transferred between the application and the database.
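A hedged sketch of that array-to-stored-procedure approach with the Oracle JDBC driver (the NUM_TAB collection type and bulk_load procedure are hypothetical and must exist in the schema; an open Connection conn is assumed):
import java.sql.Array;
import java.sql.CallableStatement;
import oracle.jdbc.OracleConnection;

// Assumes: CREATE TYPE num_tab AS TABLE OF NUMBER;
//          CREATE PROCEDURE bulk_load(p_ids IN num_tab) ...
OracleConnection oconn = conn.unwrap(OracleConnection.class);
Array ids = oconn.createOracleArray("NUM_TAB", new int[] {1, 2, 3});
try (CallableStatement cs = conn.prepareCall("{call bulk_load(?)}")) {
    cs.setArray(1, ids);   // the whole array travels in a single round trip
    cs.execute();
}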

Fastest way to iterate through large table using JDBC

I'm trying to create a java program to cleanup and merge rows in my table. The table is large, about 500k rows and my current solution is running very slowly. The first thing I want to do is simply get an in-memory array of objects representing all the rows of my table. Here is what I'm doing:
pick an increment of say 1000 rows at a time
use JDBC to fetch a resultset on the following SQL query
SELECT * FROM TABLE WHERE ID > 0 AND ID < 1000
add the resulting data to an in-memory array
continue querying all the way up to 500,000 in increments of 1000, each time adding results.
This is taking way too long. In fact, it's not even getting past the second increment from 1000 to 2000. The query takes forever to finish (although when I run the same thing directly through a MySQL browser it's decently fast). It's been a while since I've used JDBC directly. Is there a faster alternative?
First of all, are you sure you need the whole table in memory? Maybe you should consider (if possible) selecting rows that you want to update/merge/etc. If you really have to have the whole table you could consider using a scrollable ResultSet. You can create it like this.
// make sure autocommit is off (postgres)
con.setAutoCommit(false);
Statement stmt = con.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE, // or ResultSet.TYPE_FORWARD_ONLY
        ResultSet.CONCUR_READ_ONLY);
ResultSet srs = stmt.executeQuery("select * from ...");
It enables you to move to any row you want by using 'absolute' and 'relative' methods.
One thing that helped me was Statement.setFetchSize(Integer.MIN_VALUE). I got this idea from Jason's blog. This cut execution time down by more than half. Memory consumption went down dramatically (as only one row is read at a time).
This trick doesn't work for PreparedStatement, though.
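A hedged sketch of that MySQL Connector/J streaming mode (the query is a placeholder; an open Connection con is assumed):
import java.sql.ResultSet;
import java.sql.Statement;

// Forward-only, read-only, fetch size Integer.MIN_VALUE: the driver streams
// rows one at a time instead of buffering the whole result set in memory.
Statement stmt = con.createStatement(
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
try (ResultSet rs = stmt.executeQuery("select * from big_table")) {
    while (rs.next()) {
        // process one row at a time
    }
}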
Although it's probably not optimal, your solution seems like it ought to be fine for a one-off database cleanup routine. It shouldn't take that long to run a query like that and get the results (I'm assuming that, since it's a one-off, a couple of seconds would be fine). Possible problems:
is your network (or at least your connection to MySQL) very slow? You could try running the process locally on the MySQL box if so, or on something better connected.
is there something in the table structure that's causing it? Pulling down 10k of data for every row? 200 fields? Calculating the id values to fetch based on a non-indexed column? You could try finding a more db-friendly way of pulling the data (e.g. just the columns you need, having the db aggregate values, etc.)
If you're not getting through the second increment, something is really wrong - efficient or not, you shouldn't have any problem dumping 2,000 or 20,000 rows into memory on a running JVM. Maybe you're storing the data redundantly or extremely inefficiently?

SQL Server query running slow from Java

I have a Java program that runs a bunch of queries against a SQL Server database. The first of these, which queries a view, returns about 750k records. I can run the query via SQL Server Management Studio and get results in about 30 seconds. However, I kicked off the program last night, and when I checked on it this morning this query still had not returned results to the Java program, some 15 hours later.
I have access to the database to do just about anything I want, but I'm really not sure how to begin debugging this. What should one do to figure out what is causing a situation like this? I'm not a dba, and am not intimately familiar with the sql server tool set, so the more detail you can give me on how to do what you might suggest would be appreciated.
Here's the code:
stmt = connection.createStatement();
clientFeedRS = stmt.executeQuery(queryBuffer.toString()); // queryBuffer (hypothetical name): the StringBuffer holding the SQL text
EDIT1:
Well, it's been a while, and this got sidetracked, but the issue is back. I looked into upgrading from JDBC driver v1.2 to v2.0, but we are stuck on JDK 1.4, and v2.0 requires JDK 1.5, so that's a non-starter. Now I'm looking at my connection string properties. I see 2 that might be useful.
SelectMethod=cursor|direct
responseBuffering=adaptive|full
Currently, with the latency issue, I am running with cursor as the selectMethod and with the default for responseBuffering, which is full. Is changing these properties likely to help? If so, what would be the ideal settings? I'm thinking, based on what I can find online, that using a direct select method and adaptive response buffering might solve my issue. Any thoughts?
EDIT2:
Well, I ended up changing both of these connection string params, using the default select method (direct) and specifying the responseBuffering as adaptive. This ends up working best for me and alleviates the latency issues I was seeing. Thanks for all the help.
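For reference, a hedged example of a connection URL with those two properties set (server and database names are placeholders):
String url = "jdbc:sqlserver://yourserver;databaseName=yourDb;"
           + "selectMethod=direct;responseBuffering=adaptive";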
I had a similar problem, with a very simple request (SELECT . FROM . WHERE = .) taking up to 10 seconds to return a single row when using a JDBC connection in Java, while taking only 0.01 s in sqlshell. The problem was the same whether I was using the official MS SQL driver or the jTDS driver.
The solution was to setup this property in the jdbc url :
sendStringParametersAsUnicode=false
Full example if you are using MS SQL official driver : jdbc:sqlserver://yourserver;instanceName=yourInstance;databaseName=yourDBName;sendStringParametersAsUnicode=false;
Instructions for different JDBC drivers and more detailed info about the problem here: http://emransharif.blogspot.fr/2011/07/performance-issues-with-jdbc-drivers.html
SQL Server differentiates its data types that support Unicode from the ones that just support ASCII. For example, the character data types that support Unicode are nchar, nvarchar, longnvarchar, whereas their ASCII counterparts are char, varchar and longvarchar respectively. By default, all of Microsoft's JDBC drivers send strings in Unicode format to SQL Server, irrespective of whether the data type of the corresponding column defined in SQL Server supports Unicode or not. In the case where the data types of the columns support Unicode, everything is smooth. But in cases where the data types of the columns do not support Unicode, serious performance issues arise, especially during data fetches. SQL Server tries to convert non-Unicode data types in the table to Unicode data types before doing the comparison. Moreover, if an index exists on the non-Unicode column, it will be ignored. This ultimately leads to a whole table scan during data fetch, thereby slowing down the search queries drastically.
In my case, I had 30M+ records in the table I was searching. The duration of the request went from more than 10 seconds to approximately 0.01 s after applying the property.
Hope this will help someone !
It appears this may not have applied to your particular situation, but I wanted to provide another possible explanation for someone searching for this problem.
I just had a similar problem where a query executed directly in SQL Server took 1 minute while the same query took 5 minutes through a Java prepared statement. I tracked it down to the fact that it was done as a prepared statement.
When you execute a query directly in SQL Server, you are providing it a non-parameterized query, in which it knows all of the search criteria at optimization time. In my case, my search criteria included a date range, and SQL server was able to look at it, decide "that date range is huge, let's not use the date index" and then it chose something much better.
When I execute the same query through a java prepared statement, at the time that SQL Server is optimizing the query, you haven't yet provided it any of the parameter values, so it has to make a guess which index to use. In the case of my date range, if it optimizes for a small range and I give it a large range, it will perform slower than it could. Likewise if it optimizes for a large range and I give it a small one, it's again going to perform slower than it could.
To demonstrate this was indeed the problem, as an experiment I tried giving it hints as to what to optimize for using SQL Server's "OPTIMIZE FOR" option. When I told it to optimize for a tiny date range, my Java query (which actually had a wide date range) took twice as long as before (10 minutes, as opposed to 5 minutes before, and to 1 minute in SQL Server). When I told it my exact dates to optimize for, the execution time was the same for the Java prepared statement as for direct execution.
So my solution was to hard code the exact dates into the query. This worked for me because this was just a one-off statement. The PreparedStatement was not intended to be reused, but merely to parameterize the values to avoid SQL injection. Since these dates were coming from a java.sql.Date object, I didn't have to worry about my date values containing injection code.
However, for a statement that DOES need to be reused, hard coding the dates wouldn't work. Perhaps a better option for that would be to create multiple prepared statements optimized for different date ranges (one for a day, one for a week, one for a month, one for a year, and one for a decade...or maybe you only need 2 or 3 options...I don't know) and then for each query, execute the one prepared statement whose time range best matches the range in the actual query.
Of course, this only works well if your date ranges are evenly distributed. If 80% of your records were in the last year, and 20% were spread out over the previous 10 years, then the "multiple queries based on range size" approach might not be best. You'd have to optimize your queries for specific ranges or something. You'd need to figure that out through trial and error.
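As a hedged illustration of the OPTIMIZE FOR family discussed above, the OPTIMIZE FOR UNKNOWN variant (SQL Server 2008+) tells the optimizer not to build the plan around the first parameter values it sees (table and column names are placeholders; an open Connection conn is assumed):
import java.sql.PreparedStatement;
import java.sql.ResultSet;

String sql = "SELECT id, amount FROM orders "
           + "WHERE order_date BETWEEN ? AND ? "
           + "OPTION (OPTIMIZE FOR UNKNOWN)";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setDate(1, startDate);   // java.sql.Date values, as in the answer above
    ps.setDate(2, endDate);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) { /* process a row */ }
    }
}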
Be sure that your JDBC driver is configured to use a direct connection and not a cursor-based connection. You can post your JDBC connection URL if you are not sure.
Make sure you are using a forward-only, read-only result set (this is the default if you are not setting it).
And make sure you are using updated JDBC drivers.
If all of this is not working, then you should look at the SQL profiler, try to capture the SQL query as the JDBC driver executes the statement, and run that statement in Management Studio to see if there is a difference.
Also, since you are pulling so much data, you should try to be sure you aren't having any memory/garbage collection slowdowns on the JVM (although in this case that doesn't really explain the time discrepancy).
If the query is parametrized, it can be a missing parameter or a parameter set with the wrong function, e.g. setLong for a string, etc.
Try to run your query with all parameters hardcoded into the query body, without any ?, to see if this is the problem.
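A hypothetical illustration of that mismatch (the table and column are placeholders): binding a VARCHAR column with a numeric setter can force an implicit conversion on the server and defeat an index.
PreparedStatement ps = conn.prepareStatement(
        "select * from customers where customer_id = ?");
// Wrong if CUSTOMER_ID is VARCHAR: the server may convert the column to compare
ps.setLong(1, 12345L);
// Correct: matches the column type, so an index on CUSTOMER_ID stays usable
ps.setString(1, "12345");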
I know this is an old question, but since it's one of the first results when searching for this issue, I figured I should post what worked for me. I had a query that took less than 10 seconds when I used the SQL Server JDBC driver but more than 4 minutes when using jTDS. I tried all the suggestions mentioned here and none of them made any difference. The only thing that worked was adding ";prepareSQL=1" to the URL.
See Here for more
I know this is a very old question but since it's one of the first results when searching for this issue I thought that I should post what worked for me.
I had a query that took about 3 seconds when I used SQL Server Management Studio (SSMS) but 3.5 minutes when run via the jTDS JDBC driver's executeQuery method.
None of the suggestions mentioned above worked for me, mainly because I was using a plain Statement and not a PreparedStatement. The only thing that worked for me was to specify the name of the initial or default database in the connection string, one in which the connecting user has at least the db_datareader database role membership. Having only the public role is not sufficient.
Here’s the sample connection string:
jdbc:jtds:sqlserver://YourSqlServer.name:1433/DefaultDbName
Please ensure that you have the ending /DefaultDbName specified in the connection string. Here DefaultDbName is the name of the database to which the user ID specified for making the JDBC connection has at least the db_datareader database role. If omitted, SQL Server defaults to using the master database. If the user ID used to make the JDBC connection only has the public role in the master database, the query takes exceptionally long.
I don’t know why this happens. However, I know a different query plan is used in such circumstances. I confirmed this using the SQL Profiler tool.
Environment details:
SQL Server version: 2016
jTDS driver version: 1.3.1
Java version: 11
Pulling back that much data is going to require lots of time. You should probably figure out a way to not require that much data in your application at any given time. Page the data or use lazy loading for example. Without more details on what you're trying to accomplish, it's hard to say.
The fact that it is quick when run from Management Studio could be due to an incorrectly cached query plan and out-of-date indexes (say, due to a large import or deletions). Is it returning all 750K records quickly in SSMS?
Try rebuilding your indexes (or if that would take too long, update your statistics); and maybe flushing the procedure cache (use caution if this is a production system...): DBCC FREEPROCCACHE
To start debugging this, it would be good to determine whether the problem area is in the database or in the app. Have you tried changing the query so that it returns a much smaller result? If that doesn't return, I would suggest targeting the way you are accessing the DB from Java.
Try adjusting the fetch size of the Statement, and try a selectMethod of cursor:
http://technet.microsoft.com/en-us/library/aa342344(SQL.90).aspx
We had issues with large result sets using MySQL and needed to make it stream the result set, as explained in the following link.
http://helpdesk.objects.com.au/java/avoiding-outofmemoryerror-with-mysql-jdbc-driver
Quote from the MS Adaptive buffer guidelines:
Avoid using the connection string property selectMethod=cursor to allow the application to process a very large result set. The adaptive buffering feature allows applications to process very large forward-only, read-only result sets without using a server cursor. Note that when you set selectMethod=cursor, all forward-only, read-only result sets produced by that connection are impacted. In other words, if your application routinely processes short result sets with a few rows, creating, reading, and closing a server cursor for each result set will use more resources on both client-side and server-side than is the case where the selectMethod is not set to cursor.
And
There are some cases where using selectMethod=cursor instead of responseBuffering=adaptive would be more beneficial, such as:
If your application processes a forward-only, read-only result set slowly, such as reading each row after some user input, using selectMethod=cursor instead of responseBuffering=adaptive might help reduce resource usage by SQL Server.
If your application processes two or more forward-only, read-only result sets at the same time on the same connection, using selectMethod=cursor instead of responseBuffering=adaptive might help reduce the memory required by the driver while processing these result sets.
In both cases, you need to consider the overhead of creating, reading, and closing the server cursors.
See more: http://technet.microsoft.com/en-us/library/bb879937.aspx
Sometimes it could be due to the way parameters are bound to the query object.
I found the following code to be very slow when executed from a Java program.
Query query = em().createNativeQuery(queryString)
        .setParameter("param", SomeEnum.DELETED.name());
Once I removed the parameter and directly appended the "DELETED" string to the query, it became super fast. It may be that SQL Server expects to have all the parameters bound before deciding on the optimized plan.
Two connections instead of two Statements
I had one connection to SQL Server and used it for running all the queries I needed, creating a new Statement in each method that needed DB interaction.
My application was traversing a master table and, for each record, fetching all related information from other tables, so the first and largest query would be running from beginning to end of the execution while iterating its result set.
Connection conn;
conn = DriverManager.getConnection("jdbc:jtds:sqlserver://myhostname:1433/DB1", user, password);
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("select * from MASTER;");
// iterating rs drives the whole run: entities are read from MASTER one by one
// ...
Statement st1 = conn.createStatement();
ResultSet rs1 = st1.executeQuery("select * from TABLE1 where id=" + masterId + ";");
// st1.executeQuery() causes rs to be cached entirely
// ...
Statement st2 = conn.createStatement();
ResultSet rs2 = st2.executeQuery("select * from TABLE2 where id=" + masterId + ";");
// ...
This meant that any subsequent query (reading single records from the other tables) caused the first result set to be cached entirely, and the other queries would not run at all before that happened.
The solution was to run all the other queries on a second connection. This left the first query and its result set alone and undisturbed, while the rest of the queries ran swiftly on the other connection.
Connection conn;
conn = DriverManager.getConnection("jdbc:jtds:sqlserver://myhostname:1433/DB1", user, password);
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("select * from MASTER;");
// ...
Connection conn2 = DriverManager.getConnection("jdbc:jtds:sqlserver://myhostname:1433/DB1", user, password);
Statement st1 = conn2.createStatement();
ResultSet rs1 = st1.executeQuery("select * from TABLE1 where id=" + masterId + ";");
// ...
Statement st2 = conn2.createStatement();
ResultSet rs2 = st2.executeQuery("select * from TABLE2 where id=" + masterId + ";");
// ...
Does it take a similar amount of time with SQLWB? If the Java version is much slower, then I would check a couple of things:
You should get the best performance with a forward-only, read-only ResultSet.
I recall that the older JDBC drivers from MSFT were slow. Make sure you are using the latest-n-greatest. I think there is a generic SQL Server one and one specifically for SQL 2005.
