Slow data fetching from Oracle DB using EclipseLink - java

In my application I'm using EclipseLink as the ORM for an Oracle DB, and I have run into a performance problem.
I'm executing code like this:
entityManager
.createNamedQuery(RoleToPermissionEntity.FIND_BY_APPLICATION_ROLE, RoleToPermissionEntity.class)
.setParameter(RoleToPermissionEntity.APPLICATION_ROLES_QUERY_PARAM, applicationRoles)
.getResultList();
with the named query:
SELECT mapping
FROM RoleToPermissionEntity mapping
WHERE mapping.applicationRole IN :applicationRoles
ORDER BY mapping.id
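For reference, the named query is presumably declared on the entity roughly like this (a sketch only; the constant values and field names are assumptions, not taken from the original code):
import javax.persistence.Entity;
import javax.persistence.NamedQuery;

@Entity
@NamedQuery(
        name = RoleToPermissionEntity.FIND_BY_APPLICATION_ROLE,
        query = "SELECT mapping FROM RoleToPermissionEntity mapping"
              + " WHERE mapping.applicationRole IN :applicationRoles"
              + " ORDER BY mapping.id")
public class RoleToPermissionEntity {

    public static final String FIND_BY_APPLICATION_ROLE = "RoleToPermissionEntity.findByApplicationRole";
    public static final String APPLICATION_ROLES_QUERY_PARAM = "applicationRoles";

    // id, applicationRole, the two timestamp columns and the remaining fields omitted for brevity
}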
The entity manager is injected via @PersistenceContext.
For 3 given application roles the query returns 123 rows (out of 393), with 9 columns each (2 timestamps with time zone, 3 numbers, 4 short varchars).
I measured the execution time as the difference between System.nanoTime() before and after running the code above. It is about 550 ms, regardless of whether it is the 1st or the 10th execution in a row, and my assumption is that it should be much faster.
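The measurement itself is just a wrapper around the call above, roughly like this (a sketch of the approach described, not the original code):
long start = System.nanoTime();
List<RoleToPermissionEntity> mappings = entityManager
        .createNamedQuery(RoleToPermissionEntity.FIND_BY_APPLICATION_ROLE, RoleToPermissionEntity.class)
        .setParameter(RoleToPermissionEntity.APPLICATION_ROLES_QUERY_PARAM, applicationRoles)
        .getResultList();
long elapsedMillis = (System.nanoTime() - start) / 1_000_000; // consistently around 550 ms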
My first guess was a problem with the query, so I checked the EclipseLink logs. The executed query is:
SELECT *all_columns*
FROM *table_name*
WHERE (APPLICATION_ROLE IN (?,?,?)) ORDER BY ID
bind => [3_application_roles]
That looks fine to me. I tried executing it as a native query, but the result is the same. I also tried other queries such as SELECT * FROM table_name, but the time is still about 500-600 ms.
I wanted a baseline to compare against, so I created a database connection manually and executed a query like this:
Class.forName("oracle.jdbc.driver.OracleDriver");
Connection connection = DriverManager.getConnection(database_args);
Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery(query);
I executed it several times; the first run (while the connection was being established) took quite a long time, but subsequent runs took about 50-60 ms.
My second guess was a problem with the connection pool. I searched the EclipseLink docs and found only that these parameters:
<property name="eclipselink.connection-pool.default.initial" value="1"/>
<property name="eclipselink.connection-pool.default.min" value="16"/>
<property name="eclipselink.connection-pool.default.max" value="16"/>
should be set. They are, but the problem still exists.
Content of my persistence.xml:
<persistence>
    <persistence-unit name="unit" transaction-type="JTA">
        <jta-data-source>datasource</jta-data-source>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
        <!-- cache needs to be deactivated for multiple pods -->
        <!-- https://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching -->
        <shared-cache-mode>NONE</shared-cache-mode>
        <properties>
            <property name="eclipselink.logging.level" value="FINE"/>
            <property name="eclipselink.logging.level.sql" value="FINE"/>
            <property name="eclipselink.logging.parameters" value="true"/>
            <!--<property name="eclipselink.ddl-generation" value="create-or-extend-tables"/>-->
            <property name="eclipselink.weaving" value="false"/>
            <property name="eclipselink.target-database"
                      value="org.eclipse.persistence.platform.database.oracle.Oracle12Platform"/>
            <property name="eclipselink.connection-pool.default.initial" value="1"/>
            <property name="eclipselink.connection-pool.default.min" value="16"/>
            <property name="eclipselink.connection-pool.default.max" value="16"/>
        </properties>
    </persistence-unit>
</persistence>
What can I do to fix this behavior?

After a few more hours I found the problem. The default fetch size of the Oracle JDBC driver is 10, so the time grows very quickly as the number of rows to fetch increases.
What is strange: this was my first idea, so I tried setting <property name="eclipselink.jdbc.fetch-size" value="100"/> in persistence.xml. It didn't work, so I moved on to other solutions. Today I set it on a single query via query.setHint("eclipselink.jdbc.fetch-size", 100) and it works.
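For anyone hitting the same problem, the per-query hint looks roughly like this (a sketch; QueryHints.JDBC_FETCH_SIZE is the EclipseLink constant for the "eclipselink.jdbc.fetch-size" string used above):
import org.eclipse.persistence.config.QueryHints;

List<RoleToPermissionEntity> result = entityManager
        .createNamedQuery(RoleToPermissionEntity.FIND_BY_APPLICATION_ROLE, RoleToPermissionEntity.class)
        .setParameter(RoleToPermissionEntity.APPLICATION_ROLES_QUERY_PARAM, applicationRoles)
        .setHint(QueryHints.JDBC_FETCH_SIZE, 100) // let the driver fetch 100 rows per round trip instead of the default 10
        .getResultList();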

Related

Hibernate - H2 database is not created

I want to use Hibernate with H2 and I want the schema to be created automatically. There are many examples online and my configuration seems fine, but the schema is not created. Previously I used Hibernate with MySQL and did not have any problems. Are there additional parameters that need to be included anywhere for H2?
My persistence unit is defined in persistence.xml as follows:
<persistence-unit name="some.jpa.name" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
    <!-- tried with and without class property
    <class>some.package.KeywordTask</class>
    -->
    <properties>
        <property name="javax.persistence.jdbc.driver" value="org.h2.Driver" />
        <property name="javax.persistence.jdbc.url" value="jdbc:h2:./test" />
        <property name="javax.persistence.jdbc.user" value="" />
        <property name="javax.persistence.jdbc.password" value="" />
        <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect" />
        <property name="hibernate.hbm2ddl.auto" value="create" />
        <property name="show_sql" value="true" />
    </properties>
</persistence-unit>
Since show_sql is set to true, I expect to see create statements but nothing happens, i.e. the schema is not created.
I keep my EntityManagerFactory as a final static variable:
public static EntityManagerFactory emf = Persistence.createEntityManagerFactory("some.jpa.name");
In some place in my code, I am trying to persist an entity:
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
KeywordTask task = new KeywordTask();
task.setKeyword(keywordTask.getKey());
task.setLimit(keywordTask.getValue());
em.persist(task);
em.getTransaction().commit();
em.close();
This throws exception with cause:
org.h2.jdbc.JdbcSQLException: Table "KEYWORDTASK" not found;
which is expected since the schema is not created.
How can I get the schema created?
The reason for this problem was quite unrelated! I am writing it here in case others run into it too and spend half a day on such a small thing.
First, I switched from H2 to Derby to check, and it worked. That way I was sure there was no problem with the persistence.xml configuration.
After digging through the logs, I realized that Hibernate was not able to create the table because one of the properties of the KeywordTask entity was limit, which is a reserved word! (Look at where I persist an instance and note the name of the setter: setLimit.) After renaming the property, it worked.
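If renaming the Java property is not desirable, another common option (not part of the original answer, just a sketch; field types are assumed) is to keep the property and map it to a column name that is not reserved:
import javax.persistence.Column;
import javax.persistence.Entity;

@Entity
public class KeywordTask {

    private String keyword;

    // "limit" is a reserved word in H2, so map the field to a differently named column
    @Column(name = "KEYWORD_LIMIT")
    private int limit;

    public String getKeyword() { return keyword; }
    public void setKeyword(String keyword) { this.keyword = keyword; }

    public int getLimit() { return limit; }
    public void setLimit(int limit) { this.limit = limit; }

    // id and any other fields omitted
}
Quoting the identifier (e.g. @Column(name = "`limit`")) should also work with Hibernate, since it then generates a delimited column name instead of tripping over the reserved word.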

Hibernate: unnecessary queries on detached objects - java

I'm inserting, updating and deleting many detached objects with Hibernate and a c3p0 connection pool.
The problem is that Hibernate does not batch the statements but instead issues a
select @@session.tx_read_only
between every session.persist/insert/update/delete(object). Profiling the SQL connection, it looks like this:
select @@session.tx_read_only
insert...
select @@session.tx_read_only
insert...
select @@session.tx_read_only
insert...
select @@session.tx_read_only
insert...
select @@session.tx_read_only
insert...
select @@session.tx_read_only
with select @@session.tx_read_only always returning "0" (of course). It doesn't matter whether I use a stateless or a stateful session. The resulting performance is not acceptable and far from any expectation.
My Hibernate configuration:
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://127.0.0.1:4040/xy?zeroDateTimeBehavior=convertToNull</property>
<property name="hibernate.connection.username">xy</property>
<property name="hibernate.connection.password">xy</property>
<property name="hibernate.connection.autocommit">false</property>
<property name="hibernate.show_sql">false</property>
<property name="hibernate.format_sql">false</property>
<property name="hibernate.use_sql_comments">false</property>
<property name="hibernate.query.factory_class">org.hibernate.hql.internal.classic.ClassicQueryTranslatorFactory</property>
<property name="hibernate.connection.provider_class">org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">250</property>
<property name="hibernate.c3p0.idle_test_period">3000</property>
<property name="hibernate.jdbc.batch_size">250</property>
<property name="hibernate.connection.release_mode">auto</property>
<property name="hibernate.order_inserts">true</property>
<property name="hibernate.order_updates">true</property>
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.region.factory_class">org.hibernate.cache.ehcache.EhCacheRegionFactory</property>
<property name="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</property>
<property name="hibernate.cache.use_query_cache">true</property>
<property name="net.sf.ehcache.configurationResourceName">hibernate_ehcache.xml</property>
I'm using:
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-c3p0</artifactId>
    <version>4.3.5.Final</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.31</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <version>4.3.4.Final</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>ejb3-persistence</artifactId>
    <version>1.0.2.GA</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-ehcache</artifactId>
    <version>4.3.4.Final</version>
</dependency>
I have nearly no experience with Hibernate, and it is a good guess that I made a huge mistake, so please feel free to suggest anything.
I switched to Hibernate for the ORM functionality, coming from plain JDBC prepared statements with blazing fast performance. The MySQL server is well configured.
Edit 1:
I'm aware of:
Unnecessary queries in Hibernate - MySql
I have no transactional annotations on my entities nor a defined isolation level anywhere.
Edit 2:
I changed my connection pool to BoneCP and the problem continues. It clearly seems to be a Hibernate configuration issue.
Edit 3:
I tried many different things and maybe found a trace of a hint:
If I manually call session.flush() every 5 inserts (= the batch size, for example) [I tried the batch example from the Hibernate docs AGAIN], the select @@session.tx_read_only query appears twice - every 5 queries. I therefore assume that select @@session.tx_read_only is related to flushing. Are there any ways to prevent Hibernate from flushing after every single insert/update?
So far I have tried session.setFlushMode(FlushMode.COMMIT/NEVER/etc.) without any change in behaviour. Maybe I misconfigured something... what makes Hibernate flush after every insert? A unique constraint on the tables? The Hibernate validation framework? Complex object graphs? Difficult concurrency? Maybe a locking issue (Hibernate isn't sure whether someone else has locked the tables, so instead of batching it checks for every single insert whether the table is read-only)?
I found nothing related to this extreme (I assume) flushing behaviour.
We solved this issue by just setting useLocalSessionState=true in the connection string.
The link below explains the details of the read-only related changes introduced with MySQL 5.6 and Connector/J 5.1.23.
http://dev.mysql.com/doc/relnotes/connector-j/en/news-5-1-23.html
You need to include all those CRUD operations in a single transaction so all statements are executed on the same DB connection (see the sketch after the configuration below).
You can also enable the following Hibernate configurations:
<property name="hibernate.order_inserts" value="true"/>
<property name="hibernate.order_updates" value="true"/>
<property name="hibernate.jdbc.batch_size" value="50"/>
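A minimal sketch of what that can look like with a plain Hibernate Session (the entity, the collection and the batch size are placeholders, not from the original post):
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();   // one transaction, one connection
int batchSize = 50;                            // keep in sync with hibernate.jdbc.batch_size
int i = 0;
for (MyEntity entity : entities) {
    session.persist(entity);
    if (++i % batchSize == 0) {
        // push the current batch of inserts to the database and detach the entities to keep memory bounded
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();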
Those queries don't mean you don't have batching; they are a MySQL driver thing.
Some drivers require special batch reordering directives:
<property name="hibernate.connection.url">jdbc:mysql://host:port/db?rewriteBatchedStatements=true</property>
I found the flaw in my configuration.
I had to change the MySQL connection string from
<property name="hibernate.connection.url">jdbc:mysql://127.0.0.1:4040/xy?zeroDateTimeBehavior=convertToNull</property>
to
<property name="hibernate.connection.url">jdbc:mysql://127.0.0.1:4040/xy?rewriteBatchedStatements=true</property>
This solved my problems.

Eclipse, Hibernate Tools: is there any way to preview the equivalent SQL query for the Criteria editor?

I am using Hibernate Tools 3.3 in Eclipse Indigo.
Is there any way to view the equivalent SQL query for the Criteria that I created?
There is a Hibernate Dynamic SQL view which shows an SQL preview for the HQL editor.
But I haven't found any preview for Criteria.
With the Hibernate Criteria API the only way to view the SQL output is to run the query; there is no preview. In order to view the generated SQL you must configure your persistence unit to log its SQL statements. Here is a persistence.xml example for the Hibernate MS SQL Server dialect. The ="true" properties are all instructions to be verbose when running Hibernate queries; needless to say, this has a big impact on performance.
<persistence-unit name="projectPU" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>jdbc/projectDS</jta-data-source>
    <properties>
        <property name="hibernate.dialect" value="org.hibernate.dialect.SQLServerDialect" />
        <property name="hibernate.show_sql" value="true" />
        <property name="hibernate.format_sql" value="true" />
        <property name="hibernate.use_sql_comments" value="true" />
    </properties>
</persistence-unit>
The SQL will only be generated when you run criteria.list():
Criteria crit = session.createCriteria(Foo.class);
// create aliases and projections etc. whose effects are not visible yet
List<Foo> fooList = crit.list(); // only now can you see errors!
See Logging hibernate SQL using log4j

JPA insert fails after 400 (or so) inserts - transaction error

Update: It seems it starts failing after one insert, #412, violates a not-null constraint at the database level; everything after it fails as well. The transaction is probably rolling itself back. Given this setup, is it possible to get a new transaction established?
I'm trying to insert a lot of rows into my Oracle database, and JPA works just fine until about the 400th insert. I expect to have several thousand rows to insert.
Here's my pseudo-code (shortened for clarity) and persistence.xml:
@Stateless
public class LocalContentService
{
    @Inject EntityManager em;

    public void mySavingMethod() {
        for (Foo foo : fooDao.fetchAllFoos()) {
            Bar bar = new Bar(foo);
            em.persist(bar);
            em.flush();
            em.clear();
            log.debug("Saved content for: " + bar.getId());
        }
    }
}
<persistence-unit name="databaseTest">
    <jta-data-source>java:/jdbc/testDS</jta-data-source>
    <class>org.myorg.Bar</class>
    <exclude-unlisted-classes>true</exclude-unlisted-classes>
    <properties>
        <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect" />
        <property name="hibernate.show_sql" value="false" />
        <property name="hibernate.format_sql" value="false" />
        <property name="hibernate.use_sql_comments" value="true" />
    </properties>
</persistence-unit>
After roughly 400 rows, I get this error message and all subsequent inserts fail:
ERROR [stderr] (http--127.0.0.1-8080-1)
javax.persistence.TransactionRequiredException: JBAS011469:
Transaction is required to perform this operation (either use a
transaction or extended persistence context)
So my question is twofold:
1) What on earth happened to my transaction midway through the process? Can it be avoided?
2) Is there a better approach to a bulk insert like this one (keeping in mind that I'm loading a bunch of Foos and they need to be transformed into Bars before persisting)?
I'm running inside JBoss AS 7.1.1.Final with hibernate-jpa-2.0-api.
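One common pattern for this kind of bulk load (not an answer from the original thread, just a sketch under the same JBoss/EJB assumptions; ChunkedContentSaver is a hypothetical helper bean) is to persist in chunks, each in its own transaction, so a single bad row only rolls back its own chunk:
import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.inject.Inject;
import javax.persistence.EntityManager;

@Stateless
public class ChunkedContentSaver {

    @Inject EntityManager em;

    // REQUIRES_NEW starts a fresh transaction for every call, so a constraint
    // violation only rolls back the current chunk instead of the whole import.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void saveChunk(List<Foo> foos) {
        for (Foo foo : foos) {
            em.persist(new Bar(foo));
        }
    }
}
The caller would then split the Foos into groups of, say, 100 and call saveChunk for each group through the injected bean (so the container proxy applies the transaction attribute), catching and logging the failure of any individual chunk before moving on to the next one.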

Connection could not be allocated because: User id length (0) is outside the range of 1 to 255

I'm creating a login interface using NetBeans with JSF, EJB and JPA. When I try to deploy the project, it throws the exception below:
Internal Exception: java.sql.SQLException:
Error in allocating a connection. Cause: Connection could not be allocated because: User id length (0) is outside the range of 1 to 255.
Error Code: 0. Please see server.log for more details.
C:\Users\Dell\Desktop\assignmenttask2\nbproject\build-impl.xml:1033: The module has not been deployed.
See the server log for details.
How is this caused and how can I solve it?
You need to configure the database credentials in persistence.xml:
<properties>
<property name="javax.persistence.jdbc.user" value="APP"/>
<property name="javax.persistence.jdbc.password" value="APP"/>
</properties>
see here
@PSR's answer did the trick for me; here's more on that:
NetBeans' JPA persistence unit creation wizard (reproduced on 7.4 build 201310111528) does not enforce providing a username and password.
The problem is that this does not work with Java DB (Derby). The bigger problem is that you get this awkward error about the user id length, which is another one of those really unhelpful error messages.
So, to solve this, either recreate the persistence unit (persistence.xml) with a username and password, or add the two lines manually under <properties> in the XML:
<properties>
<property name="javax.persistence.jdbc.user" value="APP"/>
<property name="javax.persistence.jdbc.password" value="APP"/>
</properties>
HTH
Often the problem occurs due to an empty string as the password. To solve that, put '()' in the value attribute of the password; for example, if the user is APP and there is no password set, the config would be:
<properties>
<property name="javax.persistence.jdbc.user" value="APP"/>
<property name="javax.persistence.jdbc.password" value="()"/>
</properties>
