Error while acquiring a ClientSession in TopLink - Java

I am facing a peculiar issue. Below is the stack trace of the error I am getting. Please help.
Exception [TOPLINK-7001] (Oracle TopLink - 11g Release 1 (11.1.1.1.0) (Build 090527)): oracle.toplink.exceptions.ValidationException
Exception Description: You must login to the ServerSession before acquiring ClientSessions.
at oracle.toplink.exceptions.ValidationException.loginBeforeAllocatingClientSessions(ValidationException.java:1155)
at oracle.toplink.threetier.ServerSession.acquireClientSession(ServerSession.java:313)
at oracle.toplink.threetier.ServerSession.acquireClientSession(ServerSession.java:303)
at com.ofss.elcm.domain.Session.fetchClientSession(Session.java:113)
at com.ofss.elcm.domain.Session.acquireUnitOfWork(Session.java:132)

EclipseLink has the facility to check for classloader changes in cases of application redeployment. This can cause issues when calling into the SessionManager for a particular session from both a Web container and an EJB container.
Ensure that you are using the API getSession(null, sessionName, classLoader, true, false) or the same method with the longer signature to disable this classloader checking. If you wish to construct an XMLSessionConfigLoader directly, you can disable the classloader checking through xmlSessionConfigLoader.setShouldCheckClassLoader(false), as in the sketch below.
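A rough sketch of what that can look like (EclipseLink package names assumed; on the TopLink 11g build the oracle.toplink.* equivalents would apply, and the helper class itself is made up):

import org.eclipse.persistence.sessions.Session;
import org.eclipse.persistence.sessions.factories.SessionManager;
import org.eclipse.persistence.sessions.factories.XMLSessionConfigLoader;
import org.eclipse.persistence.sessions.server.ClientSession;
import org.eclipse.persistence.sessions.server.ServerSession;

public class SessionHelper {

    public static ClientSession acquireClientSession(String sessionName) {
        XMLSessionConfigLoader loader = new XMLSessionConfigLoader();
        // Turn off the redeployment classloader check described above.
        loader.setShouldCheckClassLoader(false);

        // shouldLoginSession = true asks the manager to log the ServerSession in,
        // which is exactly what the TOPLINK-7001 "login before acquiring
        // ClientSessions" validation is complaining about.
        Session session = SessionManager.getManager().getSession(
                loader, sessionName,
                Thread.currentThread().getContextClassLoader(),
                true, false);

        return ((ServerSession) session).acquireClientSession();
    }
}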

Did you try using the SessionManager from a singleton object? That way you should always get the same manager instance and there should be no classloader issues.
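A trivial sketch of that idea (the holder class name is made up); the point is simply that every caller, whether web or EJB, goes through the same holder:

import org.eclipse.persistence.sessions.factories.SessionManager;

public final class ToplinkSessionHolder {

    // One SessionManager for the whole application, so web and EJB callers
    // share the same instance (and therefore the same named sessions).
    private static final SessionManager MANAGER = SessionManager.getManager();

    private ToplinkSessionHolder() {
    }

    public static SessionManager manager() {
        return MANAGER;
    }
}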

Session replication with VaadinSession not working

We have a web application that is using Spring Boot (1.5) with Vaadin (7.7), and is using Apache Shiro (1.4.0) for security.
The application is configured to use DefaultWebSessionManager to let Shiro handle the session management instead of the servlet container.
We are using the official Vaadin Spring integration (1.2.0), and after some configuration it all works as intended. The VaadinSession contains a wrapped ShiroHttpSession internally.
We want to achieve session replication by configuring Shiro to use a SessionDAO that is backed by an external cache, which means the sessions get (de)serialized.
As soon as we start using this SessionDAO, Vaadin will crash and stop working. When we replace the external cache with an in-memory Map for the sake of debugging, it works again.
It seems this is caused by the SpringVaadinServlet, as it stores the VaadinSession as a session attribute. VaadinSession is Serializable and the Javadoc shows:
Everything inside a VaadinSession should be serializable to ensure compatibility with schemes using serialization for persisting the session data.
Inside the VaadinSession there are some fields that are not Serializable, for example a Lock, and the wrapped HTTP session inside is also marked as transient.
Because of this, the session that Vaadin uses will be broken as soon as it is distributed, resulting in a lot of crashes.
So it turns out the VaadinSession is not actually usable in session replication? Why is this and how can we work around this?
Note: we also have a version of the application that is using Vaadin 8, and here the same thing happens. It seems that the issue is caused by the Vaadin Spring integration.
Inside the VaadinSession there are some fields that are not Serializable, for example a Lock, and the wrapped HTTP session inside is also marked as transient.
The wrapped HTTP session is not part of the Vaadin session; it is the HTTP session itself, and thus it is transient. The same can be said about the Lock, whose instance is stored in the HTTP session.
In order to implement session serialization correctly, you need to hook into serialization events and update the transients when the session is being deserialized. The VaadinSession should be loaded with VaadinService#loadSession, which calls VaadinSession#refreshTransients.
Everything inside a VaadinSession should be serializable to ensure compatibility with schemes using serialization for persisting the session data.
This statement does not imply that you can serialize your application out of the box. It just means that, provided your application is serializable as well, with careful engineering you can serialize the whole thing.
For example, for performance reasons Vaadin does not update the session attribute on every possible occasion. There is the method VaadinService#storeSession for that. So you need to either override the right method or set up a request filter; e.g. you could do this at VaadinService#endRequest, as in the sketch below.
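A rough sketch of the "store on every request" idea. It is not verified against a specific Vaadin version: the requestEnd/storeSession signatures and the subclassing approach are assumptions to check against your Vaadin 7.7/8 API, and wiring this service into the SpringVaadinServlet is not shown.

import com.vaadin.server.DeploymentConfiguration;
import com.vaadin.server.ServiceException;
import com.vaadin.server.VaadinRequest;
import com.vaadin.server.VaadinResponse;
import com.vaadin.server.VaadinServlet;
import com.vaadin.server.VaadinServletService;
import com.vaadin.server.VaadinSession;

public class ReplicationAwareServletService extends VaadinServletService {

    public ReplicationAwareServletService(VaadinServlet servlet,
            DeploymentConfiguration configuration) throws ServiceException {
        super(servlet, configuration);
    }

    @Override
    public void requestEnd(VaadinRequest request, VaadinResponse response,
            VaadinSession session) {
        super.requestEnd(request, response, session);
        if (session != null) {
            // Push the (possibly modified) VaadinSession back into the wrapped
            // HTTP session so the replicating SessionDAO sees the latest state.
            storeSession(session, request.getWrappedSession());
        }
    }
}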
Note that you need to use sticky sessions in order to get this to work with a moderate amount of effort. If your session is deserialized on a different machine, the re-entrant lock instances won't be valid. If you want to be able to deserialize the session on a different machine, your infrastructure would have to offer a distributed lock that you could use instead of Java's re-entrant Lock, and you would have to override Vaadin's getSessionLock and setSessionLock methods to use it.
Valuable sources of further info:
Generic notes from Vaadin's CTO
https://vaadin.com/blog/session-replication-in-the-world-of-vaadin
Testimonial from developer who did it with one stack
https://vaadin.com/learn/tutorials/hazelcast
Thoughts from another senior developer
https://mvysny.github.io/vaadin-14-session-replication/

Spring @Transactional not working in ApplicationServer

We have a Spring Boot application exposing some REST endpoints. We allow this application to be operated standalone (as an executable jar) or as a WAR deployed in a WildFly 11 application server.
The class defining the REST endpoints is annotated with @RestController and @Transactional(REQUIRES_NEW), both at class level. When running standalone everything works as expected, but when deployed in WildFly the rollback on exceptions does not work. We established this by sending the exact same REST message while operating on the exact same database.
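For reference, the setup looks roughly like this (the class and endpoint are made up, and Spring's @Transactional with Propagation.REQUIRES_NEW is assumed; the question's shorthand could equally refer to the JTA javax.transaction.Transactional variant):

import org.springframework.http.ResponseEntity;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Both annotations sit at class level, as described above.
@RestController
@Transactional(propagation = Propagation.REQUIRES_NEW)
public class LimitController {

    @PostMapping("/limits")
    public ResponseEntity<Void> createLimit(@RequestBody String payload) {
        // ... persist entities here; an exception thrown from this method is
        // expected to roll the whole transaction back
        return ResponseEntity.ok().build();
    }
}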
We have confirmed via debugging that the final frames of the stack trace are identical in both cases, and in particular that in both cases we see a transactional proxy around our REST controller bean.
One difference is that within WildFly the application uses a JNDI datasource prepared by WildFly, while standalone Spring Boot manages the database connections itself.
Any idea what is wrong here?
Edit
I just tried explicitly invoking setRollbackOnly on the JtaTransactionManager from within my code. The transaction still commits. This sort of looks like a bug in Spring Boot to me.
Edit 2
Further debugging reveals that the connection seems to be set to auto-commit: every statement is immediately written to the database. This seems to violate the @Transactional annotation, and also contradicts the fact that Spring creates a transactional proxy around my bean.
It's not a full answer, just reasoning. JNDI is usually used at the app server layer, whereas JDBC is used at the application layer. At the app server layer, global transaction settings apply and override the application's settings. Follow the Spring documentation to learn more.
For reasons beyond my understanding, the default transactional behaviour when deploying a Spring Boot webapp to an application server is auto-commit.
The solution to this problem is to add the property spring.datasource.tomcat.default-auto-commit=false to your application configuration.

Hibernate Exception during deploy

I'm using Hibernate 4.3.6 in my Vaadin project.
Every time I make changes to the source code, the application is expected to build again and the new source code to be deployed automatically to Tomcat. In other words, Tomcat should reload its context.
The problem is that during this operation hibernate throws an error:
GRAVE: Exception loading sessions from persistent storage
org.hibernate.HibernateException: registry does not contain entity manager factory: myproject
at org.hibernate.jpa.internal.EntityManagerFactoryRegistry.getNamedEntityManagerFactory
(...)
After that log, I get:
24/09/2014 13:14:43 org.apache.catalina.core.StandardContext reload
INFO: Reloading Context with name [/myproject] is completed
However, I cannot continue using the website, since I receive a message saying that the session is lost.
My question is: what is this hibernate exception and how can I solve it?
EDIT:
This error only happens when I store a JPA entity in the session, for example the logged-in user.
I don't know of any way to get what you want in Tomcat, except with JRebel. The Vaadin staff themselves use and recommend it. Here is a link with interesting information about Vaadin + JRebel: http://zeroturnaround.com/blog/jrebel-case-study-vaadin-eliminates-redeploys-and-saves-10-of-development-time/
If in the future you decide to use Jetty instead of Tomcat, you can configure it to get dynamic reloading of the application as suggested here: https://blog.oio.de/2012/08/23/dynamic-reloading-of-vaadin-applications-with-maven-and-eclipse/

WAS serialization exception from un-used package

I am seeing the following exception in my production WebSphere 8 server log:
WASSession E MTMBuffWrapper getBytes write object exception.
e= java.io.NotSerializableException: org.apache.commons.logging.impl.Jdk14Logger
However, the only logging package used in the deployed application is java.util.logging.Logger.
I am not seeing any serialization exception in my local RAD server, only in production environment.
Any idea?
You don't see the exceptions in RAD because you don't have session persistence enabled or a PMI counter collecting the session size (which is probably enabled in production).
Although you don't use org.apache.commons.logging.impl.Jdk14Logger in your code, there is a very high probability that some third-party framework used in your app is using it.
You have to check what objects you put into the session (search for all session.setAttribute() calls). You must be putting some third-party object into the session that uses that logger; see the illustration below.
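A contrived illustration of the pattern to look for (the class is hypothetical): an instance-level Commons Logging field drags the Jdk14Logger implementation into the serialized session state, while a static field does not.

import java.io.Serializable;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical object that ends up in the HTTP session via
// session.setAttribute("prefs", new UserPreferences());
public class UserPreferences implements Serializable {

    private static final long serialVersionUID = 1L;

    // Problematic: an instance field is part of the serialized state, and the
    // Jdk14Logger implementation behind it is not Serializable.
    // private final Log log = LogFactory.getLog(UserPreferences.class);

    // Safe: a static field is not serialized with the instance.
    private static final Log LOG = LogFactory.getLog(UserPreferences.class);

    private String theme = "default";

    public String getTheme() {
        LOG.debug("Returning theme " + theme);
        return theme;
    }
}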
Try to remove commons-logging-1.1.jar from your application, if you have it there.
WebSphere internally uses the commons-logging library, so a conflict arises. We faced a similar problem, and the IBM-provided solutions like https://www.ibm.com/support/pages/javaionotserializableexception-thrown-websphere-application-server-community-edition-when-applications-are-stopped did not help either. In our case, using the jcl-over-slf4j library solved the problem.

Disabling contextual LOB creation as createClob() method threw error

I am using Hibernate 3.5.6 with Oracle 10g. I am seeing the exception below during initialization, but the application itself is working fine. What is the cause of this exception, and how can it be corrected?
Exception
Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException
Info
Oracle version: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0
JDBC driver: Oracle JDBC driver, version: 11.1.0.7.0
Disable this warning by adding the property below.
For Spring application:
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults=false
Normal JPA:
hibernate.temp.use_jdbc_metadata_defaults=false
As you noticed, this exception isn't a real problem. It happens during boot, when Hibernate tries to retrieve some meta information from the database. If this annoys you, you can disable it:
hibernate.temp.use_jdbc_metadata_defaults=false
Looking at the comments in the source:
Basically here we are simply checking whether we can call the java.sql.Connection methods for LOB creation added in JDBC 4. We not only check whether the java.sql.Connection declares these methods, but also whether the actual java.sql.Connection instance implements them (i.e. can be called without simply throwing an exception).
So, it's trying to determine if it can use some new JDBC 4 methods. I guess your driver may not support the new LOB creation method.
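Roughly, the probe amounts to something like the following simplified sketch (this is not Hibernate's actual LobCreatorBuilder code, just the shape of the check):

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.sql.Connection;

public class LobSupportProbe {

    // Simplified capability check: does this Connection actually implement the
    // JDBC 4 createClob() method, or does it only declare it?
    public static boolean supportsContextualLobCreation(Connection connection) {
        try {
            Method createClob = Connection.class.getMethod("createClob");
            createClob.invoke(connection);
            return true;
        } catch (InvocationTargetException e) {
            // The driver declares createClob() but the call blew up at runtime;
            // this is the HHH000424 situation, and Hibernate falls back to
            // non-contextual LOB creation.
            return false;
        } catch (ReflectiveOperationException e) {
            // Method missing entirely (pre-JDBC 4 environment).
            return false;
        }
    }
}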
In order to hide the exception:
For Hibernate 5.2 (and Spring Boot 2.0), you can either use the use_jdbc_metadata_defaults property that the others pointed out:
# Meant to hide HHH000424: Disabling contextual LOB creation as createClob() method threw error
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults: false
Or, if you don't want any side effects from the above setting (there's a comment warning us about some Oracle side effects; I don't know whether it's valid or not), you can just disable the logging of the exception like this:
logging:
  level:
    # Hides HHH000424: Disabling contextual LOB creation as createClob() method threw error
    org.hibernate.engine.jdbc.env.internal.LobCreatorBuilderImpl: WARN
To get rid of the exception
INFO - HHH000424: Disabling contextual LOB creation as createClob() method threw error :java.lang.reflect.InvocationTargetException
In the hibernate.cfg.xml file, add the property below:
<property name="hibernate.temp.use_jdbc_metadata_defaults">false</property>
Update for Hibernate 4.3.x / 5.0.x: you can just set this property to true:
<prop key="hibernate.jdbc.lob.non_contextual_creation">true</prop>
to get rid of that error message. Same effect but without the "threw exception" detail.
See LobCreatorBuilder source for details.
Just add the line below to application.properties:
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults: false
As mentioned in other comments, using
hibernate.temp.use_jdbc_metadata_defaults = false
...will fix the annoying message, but can lead to many other surprising problems. A better solution is just to disable contextual LOB creation with this:
hibernate.jdbc.lob.non_contextual_creation = true
This will cause Hibernate (in my case, 5.3.10.Final) to skip probing the JDBC driver and just output the following message:
HHH000421: Disabling contextual LOB creation as hibernate.jdbc.lob.non_contextual_creation is true
So far it looks like this setting doesn't cause any problems.
Updating the JDBC driver to the latest version removed the nasty error message.
You can download it from here:
http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-112010-090769.html
Free registration is required though.
If you set:
hibernate.temp.use_jdbc_metadata_defaults: false
it can cause trouble with PostgreSQL when your table name contains a reserved word like user. After an insert, it will try to find the id sequence with:
select currval('"user"_id_seq');
which will obviously fail. This happens at least with Hibernate 5.2.13 and Spring Boot 2.0.0.RC1. I haven't found another way to prevent this message, so for now I'm just ignoring it.
When working with Spring Boot 2.1.x, this warning message appears when starting up the application.
As indicated here, maybe this problem didn't show up in earlier versions because the related property was set to true by default and now it is false:
https://github.com/spring-projects/spring-boot/issues/12007
Consequently, solving this is as simple as adding the following property to the Spring application.properties file:
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation = true
The problem occurs because you didn't choose the appropriate JDBC driver. Just download and use the JDBC driver for Oracle 10g rather than 11g.
I am using Hibernate 5.3.17 and it works fine after adding the following properties:
hibernate.default_entity_mode=dynamic-map
hibernate.temp.use_jdbc_metadata_defaults=true
hibernate.jdbc.lob.non_contextual_creation = true
Thanks
I hit this error when my web app was started on Linux by a user logged in with insufficient access rights. This error
org.hibernate.engine.jdbc.internal.LobCreatorBuilder - HHH000424:
Disabling contextual LOB creation as createClob() method threw error :
java.lang.reflect.InvocationTargetException
is usually preceded by other errors or exceptions, especially from your application server, e.g.
for Tomcat:
org.apache.catalina.LifecycleException: Failed to initialize component ...
or
java.lang.UnsatisfiedLinkError: ... cannot open shared object file: No such file or directory
Solution:
Stop your web app's current instance.
Log in as a superuser or a user with sufficient access rights, e.g. root.
Restart your web app or call the previous function again.
For anyone who is facing this problem with Spring Boot 2:
By default Spring Boot was using Hibernate 5.3.x; I added the following property to my pom.xml:
<hibernate.version>5.4.2.Final</hibernate.version>
and the error was gone. The reason for the error is already explained in the posts above.
As mentioned by Jacek Prucia, setting hibernate.temp.use_jdbc_metadata_defaults=false will bring other "surprising problems"; one of them is that batch inserts will stop working.
Remove @Temporal annotations if you use them with java.sql.* classes.
Check whether you are on a VPN. I had the same issue but realized the DB I was connecting to was remote!
