I have a system based on Oracle 10g (datasource) + WebLogic 10.3 + EclipseLink.
If I insert or delete data via the Java software with DAOs, all the data is immediately available, but if I insert data manually (via SQL Developer or using the java.sql.Connection class), the new data is not picked up from the database.
Why does this happen, and how can I fix it?
It happens because EclipseLink cannot be aware of changes made to the database by other means. External tools do not inform EclipseLink about changes, and EclipseLink does not continuously poll the database for possible changes; such an implementation would kill performance. The same happens with changes made via JPA bulk operations, such as JPQL and native SQL DELETE and INSERT queries.
You cannot really fix the problem, but it is easier to live with when you turn off the shared cache. Be aware of the likely performance hit.
In JPA 1:
<property name="eclipselink.cache.shared.default" value="false"/>
and in JPA 2:
<shared-cache-mode>NONE</shared-cache-mode>
In addition to the shared cache, the EntityManager maintains its own cache. A single entity can be refreshed from the database via the refresh method.
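A minimal sketch of that, assuming em is an EntityManager and using a hypothetical Customer entity whose row was changed outside of EclipseLink:

Customer customer = em.find(Customer.class, customerId); // may return cached, stale state
em.refresh(customer);                                    // forces a re-read of the row from the database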
Related
For development and deployment of my WAR application I use the drop-and-create functionality: basically erasing everything from the database and then automatically recreating all the necessary tables and fields according to my @Entity classes.
Obviously, for production the drop-and-create functionality is out of the question. How would I have to create the database tables and fields?
The nice thing about @Entity classes is that, thanks to JPQL and the use of the EntityManager, all the database queries are generated, so the WAR application stays database independent. If I now had to write the queries by hand in SQL and let the application execute them, I would have to decide which SQL dialect to use (i.e. MySQL, Oracle, SQL Server, ...). Is there a way to create the tables database independently? Is there also a way to run structural database updates database independently (i.e. from database version 1 to database version 2), like altering field or table names, adding tables, dropping tables, etc.?
Thank you @Qwerky for mentioning Liquibase. This is absolutely a solution and perfect for my case, as I won't have to worry about versioning anymore. Liquibase is very easy to understand and can be learned in minutes.
For anyone looking for database versioning / schema management:
Liquibase
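For illustration, a minimal sketch of running Liquibase programmatically at application startup, assuming Liquibase 3.x on the classpath; the class name and the changelog path db/changelog.xml are hypothetical:

import java.sql.Connection;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class SchemaMigrator {

    // Applies all change sets from the change log that have not been run yet.
    public void migrate(Connection connection) throws Exception {
        Database database = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(connection));
        Liquibase liquibase = new Liquibase("db/changelog.xml",
                new ClassLoaderResourceAccessor(), database);
        liquibase.update(""); // empty contexts = apply everything still missing
    }
}

Liquibase records every applied change set in its DATABASECHANGELOG table, so running this repeatedly only applies the changes that are still pending.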
I am developing a Spring MVC application.
For now I am using InnoDB MySQL, but I have to develop the application to support other databases as well.
Can anyone please suggest how to handle concurrent SQL updates on a single record?
Suppose two users are trying to update the same record; how do I handle such a scenario?
Note: My database structure depends on some configuration (it can change at runtime) and my Spring controller is a singleton.
Thanks.
Update:
Just for reference, I am going to implement versioning as described in https://stackoverflow.com/a/3618445/3898076.
Transactions are the way to go when it comes to concurrent SQL updates; in Spring you can use a transaction manager.
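For illustration, a minimal sketch of a Spring-managed service using declarative transactions backed by the configured transaction manager; the AccountService and Account names are hypothetical:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    @PersistenceContext
    private EntityManager em;

    // The read and the write run in one transaction; on a runtime exception
    // the whole unit of work is rolled back by the transaction manager.
    @Transactional
    public void rename(long accountId, String newName) {
        Account account = em.find(Account.class, accountId);
        account.setName(newName); // flushed and committed when the method returns
    }
}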
As for the database structure, as far as I know MySQL does not support transactions for DDL commands; if you change the structure concurrently with updates, you're likely to run into problems.
To handle multiple users working on the same data, you need a manual "lock" or "version" field on the table to keep track of the last update.
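A minimal sketch of such a version field using JPA optimistic locking, in line with the versioning approach mentioned in the question's update (same hypothetical Account entity as above):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private String name;

    // JPA increments this on every update and adds it to the UPDATE's WHERE clause.
    @Version
    private long version;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

If two users load the same row and both try to update it, the second commit fails with an OptimisticLockException, which you can catch and translate into a retry or an error message. This works the same way on any database supported by your JPA provider.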
I am using Hibernate's multi-tenancy feature via JPA, with a database per tenant strategy. One of my requirements is to be able to run a query against a table that exists in each database but obviously with different data. Is this possible?
Thanks in advance for your time.
Nope, this is not possible, because when Hibernate runs queries it has already been initialized with a connection. Multi-tenancy support in Hibernate is basically done a little "outside of Hibernate" itself: it's a matter of feeding Hibernate a proper connection, and once it's fed :) it's bound to that connection.
If you need cross-tenant queries, you might want to reconsider multi-tenancy, or change JPA provider to one that supports the "shared schema" approach, e.g. EclipseLink. With the shared schema approach you have two choices (see the sketch after this list):
run a native query against the table containing the multi-tenant-aware entities
create an additional entity, don't mark it as multi-tenant, map it to the table containing the multi-tenant-aware entities, and run a JPQL query in the standard manner
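A rough sketch of the second option with EclipseLink's shared-schema (discriminator column) multitenancy; the entity and table names are hypothetical:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import org.eclipse.persistence.annotations.Multitenant;
import org.eclipse.persistence.annotations.TenantDiscriminatorColumn;

// Tenant-aware entity: EclipseLink appends "TENANT_ID = ?" to every query it generates,
// using the tenant id supplied via the "eclipselink.tenant-id" property.
@Entity
@Table(name = "ORDERS")
@Multitenant
@TenantDiscriminatorColumn(name = "TENANT_ID")
class TenantOrder {
    @Id
    Long id;
    String status;
}

// Second, plain mapping of the same table without @Multitenant,
// used only for cross-tenant reporting: "SELECT o FROM OrderAllTenants o".
@Entity
@Table(name = "ORDERS")
class OrderAllTenants {
    @Id
    Long id;
    String status;
}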
I need some clarification on the Hibernate second-level cache.
How does the Hibernate second-level cache work?
Does it load all the data from the tables whose entity classes carry the @Cacheable annotation (with respect to Hibernate annotations) on server startup in a Java EE environment?
Will the cache get synced up when there is an update on those tables, and how?
Lastly, is there any way for my DAO code to get notified when there is an update on some table which I am interested in? (Looking for any listener which can inform me about updates to the tables.)
How does the Hibernate second-level cache work?
When your entity is marked as cacheable and you have configured the second-level cache, Hibernate will cache the entity in the second-level cache after the first read.
Hibernate provides the flexibility to plug in any cache implementation that follows Hibernate's specification. Refer to the Hibernate manual for more details on the second-level cache and its configuration options.
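For illustration, a minimal sketch of a cacheable entity; the Product entity and the Ehcache region factory are assumptions, not taken from your setup:

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Assumes the second-level cache is enabled in the Hibernate configuration, e.g.:
//   hibernate.cache.use_second_level_cache = true
//   hibernate.cache.region.factory_class   = org.hibernate.cache.ehcache.EhCacheRegionFactory
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Product {

    @Id
    private Long id;

    private String name;
}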
Does it load all the data from the tables whose entity classes carry the @Cacheable annotation (with respect to Hibernate annotations) on server startup in a Java EE environment?
I don't think there is any configuration for achieving this. Indirectly you can achieve it by reading the entire table on startup, but this can adversely affect the system startup time (I don't recommend it). If an entity is modified externally, Hibernate can't sync it and you will end up getting stale data.
Will the cache get synced up when there is an update on those tables, and how?
The cache won't get updated instantly after the table update. The subsequent call to fetch the updated record will go to the database; Hibernate achieves this internally by using session timestamps.
Lastly, is there any way for my DAO code to get notified when there is an update on some table which I am interested in? (Looking for any listener which can inform me about updates to the tables.)
No, Hibernate doesn't support this.
That's too broad a question to be answered here.
No. It populates the cache lazily. Each time you get a cacheable entity from the database, using the Hibernate API or a query, this entity is stored in the cache. Later, when session.get() is called with the ID of an entity that is in the cache, no database query is necessary.
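A minimal sketch of that behavior, assuming a configured sessionFactory, a cacheable Product entity, and an existing row with ID 42:

Session s1 = sessionFactory.openSession();
Product first = (Product) s1.get(Product.class, 42L);  // SQL is issued; the entity is put into the L2 cache
s1.close();

Session s2 = sessionFactory.openSession();
Product second = (Product) s2.get(Product.class, 42L); // served from the L2 cache, no SQL issued
s2.close();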
If the update is made through Hibernate, then the cache is updated. If it's done using an external application, a SQL query, or even a bulk update HQL query, then the cache is unaware of the update. That's why you need to be careful about which entities you make cacheable, which time-to-live you choose, etc. Sometimes returning stale values is not problematic, and sometimes it is unacceptable.
No.
When I execute some queries inside a Hibernate transaction, the data is successfully updated in my MySQL database, but in my application there are still old values. When I restart, it's OK. If I set autocommit mode it works fine, but I have to use transactions ;-). Any ideas?
Thanks in advance.
Manipulating the database directly with UPDATE doesn't affect the objects cached in the session. You should clear the session (Session.clear()). Something like:
session.flush();        // push any pending changes to the database
session.clear();        // detach all cached objects from the session
query.executeUpdate();  // run the bulk update against the now-clean session
Or even better, you can avoid the problem by not using update queries and instead manipulating the object state in memory:
myobj.setName(newValue);      // change the state of the managed object
session.saveOrUpdate(myobj);  // let Hibernate issue the UPDATE itself
In Hibernate, whether you are using the JPA API or Hibernate's native API, you can run queries through the interfaces below:
Criteria (Hibernate Native API)
Query (Hibernate Native API)
EntityManager createQuery() (JPA)
These queries don't interact with the second-level or first-level cache; they hit the database directly. If your query updates entities that are currently in the persistence context, those entities will not reflect the changes. This is the default behavior.
In order to update your persistence context with the latest state of an entity, use refresh() on the Session or the EntityManager.
Read the docs below for more info
http://docs.oracle.com/javaee/7/api/javax/persistence/EntityManager.html#refresh-java.lang.Object-
https://docs.jboss.org/hibernate/orm/3.5/javadocs/org/hibernate/Session.html#refresh%28java.lang.Object%29
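For illustration, a minimal sketch of refreshing an entity after a bulk JPQL update; the Product entity and its price field are hypothetical, and em is assumed to be an EntityManager:

em.getTransaction().begin();

Product product = em.find(Product.class, 42L);
em.createQuery("UPDATE Product p SET p.price = p.price * 1.1").executeUpdate();
// 'product' still holds the old price here

em.refresh(product); // re-reads the row and overwrites the in-memory state
em.getTransaction().commit();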
Otherwise, as a convention, always run your DML before loading any data into the persistence context.
Hope this helps :D