I have a Spring Boot (1.5.5) application that uses Hibernate (5.4.0) to query an Oracle database. The application is deployed on WebLogic 12c. One of its queries sometimes (roughly 50% of the time) times out (the timeout is set to 2 minutes). The query joins two tables (one with 25 million rows, the other with 800K rows), aggregates, and gets the count on a column.
When the application runs on Tomcat (the one that comes with Spring Boot), this query always works fine. I also checked the execution plan, and the query is fast enough (10 seconds in Oracle SQL Developer).
I wonder why this query sometimes times out when the application runs on WebLogic.
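For context, here is a simplified sketch of the kind of query involved (entity and column names are placeholders, not the real schema, and the timeout hint is only one way such a 2-minute limit can be applied):
// Placeholder entities/columns for illustration only; the real schema differs.
TypedQuery<Long> query = entityManager.createQuery(
        "select count(distinct o.id) "
        + "from OrderRecord o join o.customer c "
        + "where c.region = :region", Long.class);
query.setParameter("region", "EU"); // placeholder value
// Standard JPA hint, in milliseconds (2 minutes here).
query.setHint("javax.persistence.query.timeout", 120000);
Long count = query.getSingleResult();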
Thank you in advance for your help.
We have a Java application (basically integration tests) that uses Hibernate (which uses JDBC) to read/write data to a MySQL database. Hibernate objects like sessions and transactions are created and configured by our own code (no Spring or other wrappers are used). The issue is that periodically (multiple times during test execution) we observe a "No database selected" exception. The database URL that we use for the DataSource configuration already contains the database name:
jdbc:mysql://localhost:3306/test?useSSL=false&createDatabaseIfNotExist=false&cacheServerConfiguration=true&cacheResultSetMetadata=true&useLocalSessionState=true&rewriteBatchedStatements=true&tcpNoDelay=true&tcpTrafficClass=16&alwaysSendSetIsolation=false&tcpSndBuf=1048576&tcpRcvBuf=1048576&characterEncoding=utf8&allowPublicKeyRetrieval=true
I tried to catch the exception and check the connection's selected database by running select database(), and it actually reports that the value is null on the database side.
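The check looks roughly like this (a simplified sketch; the surrounding exception handling is omitted and the session variable is whatever Hibernate Session the failing query ran on):
// Simplified diagnostic: ask the same JDBC connection which database it thinks is selected.
static void logSelectedDatabase(org.hibernate.Session session) {
    session.doWork(connection -> {
        try (java.sql.Statement st = connection.createStatement();
             java.sql.ResultSet rs = st.executeQuery("select database()")) {
            if (rs.next()) {
                // Intermittently this prints null even though the URL contains /test
                System.out.println("select database() = " + rs.getString(1));
            }
        }
    });
}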
Even stranger, subsequent queries on the same connection are executed against the normal database (so it somehow self-heals).
Does anybody know why MySQL connections can "lose" and then "restore" the selected database?
Or maybe there is a way to track the problem down. I will be grateful for any help or thoughts you can provide.
Versions:
Java 1.8.0_292
MySQL 5.6.31
Hibernate 5.4.2
JDBC mysql-connector-java 8.0.22
I have a Java EE application that is being migrated from Oracle to SQL Server 2016.
It uses Java 1.7, JBoss 4.2.3.GA and Hibernate 3.2.4.sp1.
The application uses the javax.persistence EntityManager for DB access, so queries look like this:
List<ServiceProvider> providers = entityManager
        .createQuery("FROM ServiceProvider sp order by sp.id")
        .setMaxResults(spCount)
        .getResultList();
But a SQL Trace shows the query being wrapped in exec sp_executesql.
For example, the above becomes exec sp_executesql N'SELECT TOP (50) ....'
If I trace a query coming from, say, an SSRS report, it is not wrapped in sp_executesql.
What is responsible for this transformation?
As @MarkRotteveel mentioned in his comment, it seems the MS JDBC driver uses sp_executesql when executing a prepared statement. Once we fixed our missing pool-size and prepared-statement-cache-size options, we saw no difference between Oracle 12c and SQL Server 2016, so I don't believe there is a performance hit from using sp_executesql, or if there is, it is very minimal.
<min-pool-size>20</min-pool-size>
<max-pool-size>220</max-pool-size>
<prepared-statement-cache-size>100</prepared-statement-cache-size>
Interestingly enough, Hibernate executes fewer queries when targeting MSSQL than Oracle: the query in my original post results in 12 queries on Oracle vs 10 on MSSQL.
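To illustrate the prepared-statement path (the table and column names here are made up): any parameterized JDBC statement is prepared by the Microsoft driver, and that is what shows up in the trace as sp_executesql.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TraceDemo {
    // A call like this appears in a SQL Server trace wrapped in exec sp_executesql.
    static void readProviders(Connection con) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id, name FROM service_provider WHERE id > ? ORDER BY id")) {
            ps.setLong(1, 0L);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // map rows here
                }
            }
        }
    }
}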
I have a Spring Boot application with Java 8, JPA, etc., and a JBoss application with J2EE applications that calls many SQL procedures to update the table.
I have a query like this in Spring Boot to get all the employees:
@Cacheable("employeeList")
List<Employee> findByAddressId(Long addressId);
But if someone inserts a new record into the Employee table for the same address id via a SQL procedure from the JBoss application, the Spring Boot application does not pick up the new records, because the result of that query is cached for that address id.
So I want to create a trigger on that table on insert and update, so that whenever an insert/update happens it updates the cache with the new records belonging to that address id.
Can somebody please tell me how to do this?
If I understand the question correctly, you have a Spring Boot app and a separate JBoss app that connect to the same database and insert/update the same database tables.
With Spring's @Cacheable you need to be able to tell Spring when to evict the cached item. For example, marking the method that updates the entity with @CacheEvict is an easy way to evict the entity from the cache. The problem here is that if the JBoss app updates a record, there is no way for the Spring Boot app to know this.
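For illustration, eviction is normally wired up roughly like this when all writes go through the Spring Boot app itself (the service and repository names below are made up):
import java.util.List;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class EmployeeService {

    private final EmployeeRepository employeeRepository;

    public EmployeeService(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    @Cacheable("employeeList")
    public List<Employee> findByAddressId(Long addressId) {
        return employeeRepository.findByAddressId(addressId);
    }

    // Evicting here only covers writes made by this application;
    // it does nothing for rows inserted by the JBoss app's stored procedures.
    @CacheEvict(value = "employeeList", allEntries = true)
    public Employee save(Employee employee) {
        return employeeRepository.save(employee);
    }
}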
Using a database trigger seems problematic, since you'd have to somehow make the trigger communicate with the Spring Boot app to allow the eviction to happen.
One solution may be to have both the JBoss and Spring Boot apps use a distributed cache, like Ehcache with Terracotta.
My app uses 3 other databases (MySQL) and 3 persistence units on Glassfish 3.1. While retrieving data from database 'A' with a relation (for example @OneToOne) to a model from database 'B', an error occurs:
SELECT command denied to user .. for table 'related_table'
This error DOES NOT occur when I name each entity this way:
@Table(name="db_name.table_name"), but this is not a good option, because I use an H2 database for tests and H2 cannot resolve such naming.
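A simplified sketch of the mapping and the workaround (entity and table names are placeholders, split across two persistence units):
// Profile.java - entity stored in database 'B'
@Entity
// The workaround Glassfish accepts but H2 cannot resolve in tests:
// @Table(name = "db_b.profile")
@Table(name = "profile")
public class Profile {
    @Id
    private Long id;
}

// UserAccount.java - entity stored in database 'A'
@Entity
@Table(name = "user_account")
public class UserAccount {
    @Id
    private Long id;

    // Loading this relation is what fails with
    // "SELECT command denied to user .. for table 'related_table'"
    @OneToOne
    @JoinColumn(name = "profile_id")
    private Profile profile;
}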
[edit]
An important thing: the app worked perfectly without this 'special table naming' in @Table before I reinstalled my Glassfish (same version), and I have not changed anything in the Glassfish settings for this purpose.
I have a Struts 2 application and a TopLink persistence provider running on Tomcat 6.0.20, with a MySQL 5.1.38 server, on a GNU/Linux machine. After committing the data, when I go to retrieve it, it has disappeared from the database.
I do an em.commit() and em.flush() after my queries have executed. How does the data disappear? I am using all standard configuration files. I have reduced the wait_timeout and interactive_timeout periods in MySQL. I am also using autoReconnectForPools in my persistence.xml.
I also invalidate the cache on every user's logout.
Any ideas?
Anyway, it does not matter; the problem was solved by removing SoftWeak from the entity cache type declaration in persistence.xml and adding HardWeak in its place.
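For reference, the same cache-type change expressed as persistence-unit properties passed programmatically (a sketch assuming the TopLink Essentials property name toplink.cache.type.default and a persistence unit named myPU; in persistence.xml it is the equivalent <property> element):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class EmfBootstrap {
    public static EntityManagerFactory create() {
        Map<String, String> props = new HashMap<String, String>();
        // Was "SoftWeak"; "HardWeak" keeps hard references to recently used
        // objects so cached entities are not dropped under memory pressure.
        props.put("toplink.cache.type.default", "HardWeak");
        return Persistence.createEntityManagerFactory("myPU", props);
    }
}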