Sometimes when I use multiple ModeShape actions inside one function I get this error:
javax.jcr.RepositoryException: The session with an ID of '060742fc6' has been closed and can no longer be used.
I couldn't find any explanations of this on the web. Here is what I call:
myFunction(service.doSomething(service.getStuff(id, "en_EN").getPath()));
doSomething, getStuff:
@Interceptors({Foo.class, TraceInterceptor.class})
@Override
public Node doSomething(final String bar) throws RepositoryException {
    return modeshape.execute(new JcrHandler<Node>() {
        @Override
        public Node execute(Session session) throws RepositoryException {
            return session.getNode(bar);
        }
    });
}
@Interceptors(TraceInterceptor.class)
@Override
public ObjectExtended getStuff(final String queryStr, final String language)
        throws RepositoryException {
    return modeshape.execute(new JcrHandler<ObjectExtended>() {
        @Override
        public ObjectExtended execute(Session session)
                throws RepositoryException {
            QueryManager queryManager = session.getWorkspace().getQueryManager();
            ObjectExtended item = null;
            String queryWrapped =
                "select * from [th:this] as c where name(c)='lang_"
                + language + "' and c.[th:mylabel] "
                + "= '" + queryStr + "'";
            LOGGER.debug("Query: " + queryWrapped);
            Query query =
                queryManager.createQuery(queryWrapped, Query.JCR_SQL2);
            QueryResult result = query.execute();
            NodeIterator iter = result.getNodes();
            while (iter.hasNext()) {
                Node node = iter.nextNode().getParent();
                if (node.isNodeType("th:term")) {
                    item = new ObjectExtended();
                    item.setLabel(getLabel(language, node));
                    item.setPath(node.getPath());
                }
            }
            return item;
        }
    });
}
Why is this happening please? What am I doing wrong?
That error message means one of two things: either the repository is being shut down, or the Session.logout() method is being called.
None of the above code shows how your sessions are being managed, and you don't say whether you are using a framework. But I suspect that somehow you are holding onto a Session too long (perhaps after your framework is closing the session), or the Session is leaking to multiple threads, and one thread is attempting to use it after the other has closed it.
The latter could be a real problem: passing a single Session instance from one thread to another is okay (as long as the original thread no longer uses it), but per the JCR 2.0 specification, Session instances are not threadsafe and must not be used concurrently by multiple threads.
If you're creating the Session in your code, it's often good to use a try-finally block:
Session session = null;
try {
    session = ... // acquire the session
    // use the session, including 0 or more calls to 'save()'
} catch ( RepositoryException e ) {
    // handle it
} finally {
    if ( session != null ) {
        try {
            session.logout();
        } finally {
            session = null;
        }
    }
}
Note that logout() does not throw a RepositoryException, so the above form usually works well. Of course, if you know you're not using the session later on in the method, you don't need the inner try-finally to null the session reference:
Session session = null;
try {
    session = ... // acquire the session
    // use the session, including 0 or more calls to 'save()'
} catch ( RepositoryException e ) {
    // handle it
} finally {
    if ( session != null ) session.logout();
}
This kind of logic can easily be encapsulated.
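For example, the acquire/use/logout cycle can live in one small template class so that callers only supply the work to run inside the session. This is a minimal sketch, not the poster's code: the SessionTemplate and SessionWork names are made up, and it assumes you have a javax.jcr.Repository to log in to.

import javax.jcr.Repository;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

// Hypothetical helper: callers never touch the Session lifecycle directly.
public class SessionTemplate {

    /** Callback holding the work to run against a live session. */
    public interface SessionWork<T> {
        T execute(Session session) throws RepositoryException;
    }

    private final Repository repository;

    public SessionTemplate(Repository repository) {
        this.repository = repository;
    }

    public <T> T execute(SessionWork<T> work) throws RepositoryException {
        Session session = repository.login(); // acquire
        try {
            return work.execute(session);     // use, including any save() calls
        } finally {
            session.logout();                 // always released, exactly once
        }
    }
}

This is essentially what the JcrHandler/modeshape.execute(...) pair in the question already does, which is why the session should never need to be touched outside those callbacks.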
Related
I have written some REST APIs using Java Servlets on Tomcat. These are my first experiences with Java and APIs and Tomcat. As I research and read about servlets, methods and parameter passing, and more recently thread safety, I realize I need some review, suggestions, and tutorial guidance from those of you who I see are far more experienced. I have found many questions / answers that seem to address pieces but my lack of experience clouds the clarity I desire.
The code below shows the top portion of one servlet example along with an example private method. I have "global" variables defined at the class level so that I may track the success of a method and determine if I need to send an error response. I do this because the method(s) already return a value.
Are those global variables creating an unsafe thread environment?
Since the response is not visible in the private methods, how else might I determine the need to stop the process and send an error response if those global variables are unsafe?
Though clipped for space, should I be doing all of the XML handling in the doGet method?
Should I be calling all of the different private methods for the various data retrieval tasks and data handling?
Should each method that accesses the same database open a Connection, or should the doGet method create a Connection and pass it to each method?
Assist, suggest, teach, guide to whatever you feel appropriate, or point me to the right learning resources so I may learn how to do better. Direct and constructive criticism welcome -- bashing and derogatory statements not preferred.
@WebServlet(name = "SubPlans", urlPatterns = {"*omitted*"})
public class SubPlans extends HttpServlet {
private transient ServletConfig servletConfig;
private String planSpecialNotes,
planAddlReqLinks,
legalTermsHeader,
legalTermsMemo,
httpReturnMsg;
private String[] subPlanInd = new String[4];
private boolean sc200;
private int httpReturnStatus;
private static final long serialVersionUID = 1L;
{
httpReturnStatus = 0;
httpReturnMsg = "";
sc200 = true;
planAddlReqLinks = null;
planSpecialNotes = null;
legalTermsHeader = "";
legalTermsMemo = null;
}
@Override
public void init(ServletConfig servletConfig)
throws ServletException {
this.servletConfig = servletConfig;
}
@Override
public ServletConfig getServletConfig() {
return servletConfig;
}
@Override
public String getServletInfo() {
return "SubPlans";
}
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
List<HashMap<String, Object>> alSubDeps = new ArrayList<HashMap<String, Object>>();
String[] coverageDates = new String[6],
depDates = new String[8];
String eeAltId = null,
eeSSN = null,
carrier = null,
logosite = null,
fmtSSN = "X",
subSQL = null,
healthPlan = null,
dentalPlan = null,
visionPlan = null,
lifePlan = null,
tier = null,
healthGroupNum = null,
effdate = null,
holdEffDate = null,
planDesc = "",
planYear = "",
summaryBenefitsLink = null;
int[][] effdates = new int[6][4];
int holdDistrictNumber = 0,
districtNumber = 0,
holdUnit = 0,
unit = 0;
boolean districtHasHSA = false;
XMLOutputFactory outputFactory = XMLOutputFactory.newInstance();
try {
eeAltId = request.getParameter("*omitted*");
if ( eeAltId != null ) {
Pattern p = Pattern.compile(*omitted*);
Matcher m = p.matcher(eeAltId);
if ( m.find(0) ) {
eeSSN = getSSN(eeAltId);
} else {
httpReturnStatus = 412;
httpReturnMsg = "Alternate ID format incorrect.";
System.err.println("Bad alternate id format " + eeAltId);
sc200 = false;
}
} else {
httpReturnStatus = 412;
httpReturnMsg = "Alternate ID missing.";
System.err.println("alternate id not provided.");
sc200 = false;
}
if ( sc200 ) {
coverageDates = determineDates();
subSQL = buildSubSQLStatement(eeSSN, coverageDates);
alSubDeps = getSubDeps(subSQL);
if ( sc200 ) {
XMLStreamWriter writer = outputFactory.createXMLStreamWriter(response.getOutputStream());
writer.writeStartDocument("1.0");
writer.writeStartElement("subscriber");
// CLIPPED //
writer.writeEndElement(); // subscriber
writer.writeEndDocument();
if ( sc200 ) {
response.setStatus(HttpServletResponse.SC_OK);
writer.flush();
} else {
response.sendError(httpReturnStatus, httpReturnMsg);
}
}
}
} catch (Exception e) {
e.printStackTrace();
System.err.println("Error writing XML");
System.err.println(e);
}
}
@Override
public void destroy() {
}
private String getPlanDescription(String planID) {
String planDesc = null;
String sqlEE = "SELECT ...";
Connection connGPD = null;
Statement stGPD = null;
ResultSet rsGPD = null;
try {
connGPD = getDbConnectionEE();
try {
stGPD = connGPD.createStatement();
planDesc = "Statement error";
try {
rsGPD = stGPD.executeQuery(sqlEE);
if ( !rsGPD.isBeforeFirst() )
planDesc = "No data";
else {
rsGPD.next();
planDesc = rsGPD.getString("Plan_Description");
}
} catch (Exception rsErr) {
httpReturnStatus = 500;
httpReturnMsg = "Error retrieving plan description.";
System.err.println("getPlanDescription: " + httpReturnMsg + " " + httpReturnStatus);
System.err.println(rsErr);
sc200 = false;
} finally {
if ( rsGPD != null ) {
try {
rsGPD.close();
} catch (Exception rsErr) {
System.err.println("getPlanDescription: Error closing result set.");
System.err.println(rsErr);
}
}
}
} catch (Exception stErr) {
httpReturnStatus = 500;
httpReturnMsg = "Error creating plan description statement.";
System.err.println("getPlanDescription: " + httpReturnMsg + " " + httpReturnStatus);
System.err.println(stErr);
sc200 = false;
} finally {
if ( stGPD != null ) {
try {
stGPD.close();
} catch (Exception stErr) {
System.err.println("getPlanDescription: Error closing query statement.");
System.err.println(stErr);
}
}
}
} catch (Exception connErr) {
httpReturnStatus = 500;
httpReturnMsg = "Error closing database.";
System.err.println("getPlanDescription: " + httpReturnMsg + " " + httpReturnStatus);
System.err.println(connErr);
sc200 = false;
} finally {
if ( connGPD != null ) {
try {
connGPD.close();
} catch (Exception connErr) {
System.err.println("getPlanDescription: Error closing connection.");
System.err.println(connErr);
}
}
}
return planDesc.trim();
}
I have "global" variables defined at the class level
You have instance variables declared at the class level. There are no globals in Java.
so that I may track the success of a method and determine if I need to send an error response.
Poor technique.
I do this because the method(s) already return a value.
You should use exceptions for this if the return values are already taken.
Are those global variables creating an unsafe thread environment?
Those instance variables are creating an unsafe thread environment.
Since the response is not visible in the private methods, how else might I determine the need to stop the process and send an error response if those global variables are unsafe?
Via exceptions thrown by the methods, see above. If there is no exception, send an OK response, whatever form that takes, otherwise whatever error response is appropriate to the exception.
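As a sketch of that idea (the ServiceException class and its fields are hypothetical, not something from the servlet API or your code):

// Hypothetical exception type that carries the HTTP status to report.
public class ServiceException extends Exception {
    private final int status;

    public ServiceException(int status, String message) {
        super(message);
        this.status = status;
    }

    public int getStatus() {
        return status;
    }
}

Each private method would then declare throws ServiceException instead of setting sc200 and httpReturnStatus, and doGet() would wrap the work in a try/catch that calls response.sendError(e.getStatus(), e.getMessage()) when it catches one. No shared fields are needed, so concurrent requests cannot interfere.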
Though clipped for space, should I be doing all of the XML handling in the doGet method?
Not if it's long or repetitive (used in other places too).
Should I be calling all of the different private methods for the various data retrieval tasks and data handling?
Sure, why not?
Should each method that accesses the same database open a Connection, or should the doGet() method create a Connection and pass it to each method?
doGet() should open the connection, pass it to each method, and infallibly close it.
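A sketch of that shape, assuming Java 7+ try-with-resources, a hypothetical dataSource field for obtaining connections, and the ServiceException idea from above (planID stands in for whatever the clipped code derives from the request):

@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // One connection per request, closed on every path by try-with-resources.
    try (Connection conn = dataSource.getConnection()) {
        String planDesc = getPlanDescription(conn, planID); // helpers reuse conn
        // ... run the other queries and write the XML response ...
        response.setStatus(HttpServletResponse.SC_OK);
    } catch (SQLException | ServiceException e) {
        response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
    }
}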
NB: You don't need the ServletConfig field or the init() and getServletConfig() methods. If you remove them all, you can still get the config from the base class any time you need it via its getServletConfig() method, which you have pointlessly overridden.
The variables you have defined are instance members. They are not global and are not class-level. They are variables scoped to one instance of your servlet class.
The servlet container typically creates one instance of your servlet and sends all requests to that one instance. So you will have concurrent requests overwriting these variables’ contents unpredictably.
It can be ok for a servlet to have static variables or instance member variables, but only if their contents are thread safe and they contain no state specific to a request. For instance it would be normal to have a (log4j or java.util.logging) Logger object as a static member, where the logger is specifically designed to be called concurrently without the threads interfering with each other.
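For instance, in the servlet above the only field might be something like this (java.util.logging shown because it ships with the JDK; log4j works the same way):

import java.util.logging.Logger;

// Safe to share across request threads: Logger is designed for concurrent use.
private static final Logger LOGGER = Logger.getLogger(SubPlans.class.getName());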
For error handling use exceptions to fail fast once something goes wrong.
Servlets are painful to write and hard to test. Consider using a MVC web framework instead. Frameworks like spring or dropwizard provide built-in capabilities that make things like data access and error handling easier, but most importantly they encourage patterns where you write separate well-focused classes that each do one thing well (and can be reasoned about and tested independently). The servlet approach tends to lead people to cram disparate functions into one increasingly-unmanageable class file, which seems to be the road you’re headed down.
I have some entities using join-inheritance and I'm doing bulk operations on them. As explained in Multi-table Bulk Operations, Hibernate uses a temporary table to execute the bulk operations.
As I understand temporary tables, the data in them is temporary (deleted at the end of the transaction or session) but the tables themselves are permanent. What I see is that Hibernate tries to create the temporary table every time such a query is executed, which in my case is more than 35,000 times per hour. The create table statement obviously fails every time, because a table with that name already exists. This is really unnecessary and probably hurts performance, and the DBAs are not happy...
Is there a way that Hibernate remembers that it already created the temporary table?
If not, are there any workarounds? My only idea is to use single-table-inheritance instead to avoid using temporary tables completely.
Hibernate version is 4.2.8, DB is Oracle 11g.
I think this is a bug in TemporaryTableBulkIdStrategy, because the Oracle8iDialect says that temporary tables shouldn't be dropped after use:
@Override
public boolean dropTemporaryTableAfterUse() {
    return false;
}
But this check is made only when deleting the table:
protected void releaseTempTable(Queryable persister, SessionImplementor session) {
if ( session.getFactory().getDialect().dropTemporaryTableAfterUse() ) {
TemporaryTableDropWork work = new TemporaryTableDropWork( persister, session );
if ( shouldIsolateTemporaryTableDDL( session ) ) {
session.getTransactionCoordinator()
.getTransaction()
.createIsolationDelegate()
.delegateWork( work, shouldTransactIsolatedTemporaryTableDDL( session ) );
}
else {
final Connection connection = session.getTransactionCoordinator()
.getJdbcCoordinator()
.getLogicalConnection()
.getConnection();
work.execute( connection );
session.getTransactionCoordinator()
.getJdbcCoordinator()
.afterStatementExecution();
}
}
else {
// at the very least cleanup the data :)
PreparedStatement ps = null;
try {
final String sql = "delete from " + persister.getTemporaryIdTableName();
ps = session.getTransactionCoordinator().getJdbcCoordinator().getStatementPreparer().prepareStatement( sql, false );
session.getTransactionCoordinator().getJdbcCoordinator().getResultSetReturn().executeUpdate( ps );
}
catch( Throwable t ) {
log.unableToCleanupTemporaryIdTable(t);
}
finally {
if ( ps != null ) {
try {
session.getTransactionCoordinator().getJdbcCoordinator().release( ps );
}
catch( Throwable ignore ) {
// ignore
}
}
}
}
}
but not when creating the table:
protected void createTempTable(Queryable persister, SessionImplementor session) {
// Don't really know all the codes required to adequately decipher returned jdbc exceptions here.
// simply allow the failure to be eaten and the subsequent insert-selects/deletes should fail
TemporaryTableCreationWork work = new TemporaryTableCreationWork( persister );
if ( shouldIsolateTemporaryTableDDL( session ) ) {
session.getTransactionCoordinator()
.getTransaction()
.createIsolationDelegate()
.delegateWork( work, shouldTransactIsolatedTemporaryTableDDL( session ) );
}
else {
final Connection connection = session.getTransactionCoordinator()
.getJdbcCoordinator()
.getLogicalConnection()
.getConnection();
work.execute( connection );
session.getTransactionCoordinator()
.getJdbcCoordinator()
.afterStatementExecution();
}
}
As a workaround, you could extend the Oracle dialect and override the dropTemporaryTableAfterUse method to return false.
I filed the HHH-9744 issue for this.
With Vlad pointing me in the right direction, I came up with the following workaround to cache the names of already created temporary tables:
public class FixedTemporaryTableBulkIdStrategy extends TemporaryTableBulkIdStrategy {

    private final Set<String> tables = new CopyOnWriteArraySet<>();

    @Override
    protected void createTempTable(Queryable persister, SessionImplementor session) {
        final String temporaryIdTableName = persister.getTemporaryIdTableName();
        if (!tables.contains(temporaryIdTableName)) {
            super.createTempTable(persister, session);
            tables.add(temporaryIdTableName);
        }
    }
}
This can be used by setting the property hibernate.hql.bulk_id_strategy to the fully qualified name of this class.
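For example, with a programmatic Hibernate Configuration that could look like the following sketch (the com.example package is just a placeholder for wherever you put the class; the same key can be set in hibernate.cfg.xml or persistence.xml instead):

import org.hibernate.cfg.Configuration;

Configuration configuration = new Configuration();
// Tell Hibernate to use the caching bulk-id strategy defined above.
configuration.setProperty(
        "hibernate.hql.bulk_id_strategy",
        com.example.FixedTemporaryTableBulkIdStrategy.class.getName());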
Please note that this is not a general solution; it only works if the database/dialect uses global temporary tables (as opposed to session- or transaction-specific ones).
I want to use this construct:
import org.hibernate.Session;
...
try (Session session){
}
How can I do that?
Because "The resource type Session does not implement java.lang.AutoCloseable"
I know, that I need to extend Session and override AutoCloseable method, but when I've try to do that, there is error "The type Session cannot be the superclass of SessionDAO; a superclass must be a class"
Update
I've written my own DAO framework, but I will use Spring for that.
First, you should use a much more solid session/transaction handling infrastructure, like the one Spring offers. That way you can use the same Session across multiple DAO calls, and the transaction boundary is explicitly set by the @Transactional annotation.
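With Spring that typically looks something like the following sketch (assuming a SessionFactory is configured as a Spring bean; ParentDao and Parent are placeholder names, not part of your code):

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class ParentDao {

    @Autowired
    private SessionFactory sessionFactory;

    // Spring opens and closes the Session and commits or rolls back
    // the transaction around this method; no try/finally needed here.
    @Transactional
    public Long save(Parent parent) {
        sessionFactory.getCurrentSession().persist(parent);
        return parent.getId();
    }
}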
If this is for a test project of yours, you can use a simple utility like this one:
protected <T> T doInTransaction(TransactionCallable<T> callable) {
    T result = null;
    Session session = null;
    Transaction txn = null;
    try {
        session = sf.openSession();
        txn = session.beginTransaction();
        result = callable.execute(session);
        txn.commit();
    } catch (RuntimeException e) {
        if ( txn != null && txn.isActive() ) txn.rollback();
        throw e;
    } finally {
        if (session != null) {
            session.close();
        }
    }
    return result;
}
And you can call it like this:
final Long parentId = doInTransaction(new TransactionCallable<Long>() {
    @Override
    public Long execute(Session session) {
        Parent parent = new Parent();
        Child son = new Child("Bob");
        Child daughter = new Child("Alice");
        parent.addChild(son);
        parent.addChild(daughter);
        session.persist(parent);
        session.flush();
        return parent.getId();
    }
});
Check this GitHub repository for more examples like this one.
I have a DAO below, with a transactional delete per entity and in batch.
Deleting one entity at a time works just fine.
Batch delete has NO effect whatsoever:
The code below is simple and straightforward IMO, but the call to deleteMyObjects(Long[] ids) - which calls Objectify's delete(Iterable keysOrEntities) - has no effect!
public class MyObjectDao {
private ObjectifyOpts transactional = new ObjectifyOpts().setBeginTransaction(true);
private ObjectifyOpts nonTransactional = new ObjectifyOpts().setBeginTransaction(false);
private String namespace = null;
public MyObjectDao(String namespace) {
Preconditions.checkNotNull(namespace, "Namespace cannot be NULL");
this.namespace = namespace;
}
/**
* set namespace and get a non-transactional instance of Objectify
*
* @return
*/
protected Objectify nontxn() {
NamespaceManager.set(namespace);
return ObjectifyService.factory().begin(nonTransactional);
}
/**
* set namespace and get a transactional instance of Objectify
*
* @return
*/
protected Objectify txn() {
NamespaceManager.set(namespace);
Objectify txn = ObjectifyService.factory().begin(transactional);
log.log(Level.FINE, "transaction <" + txn.getTxn().getId() + "> started");
return txn;
}
protected void commit(Objectify txn) {
if (txn != null && txn.getTxn().isActive()) {
txn.getTxn().commit();
log.log(Level.FINE, "transaction <" + txn.getTxn().getId() + "> committed");
} else {
log.log(Level.WARNING, "commit NULL transaction");
}
}
protected void rollbackIfNeeded(Objectify txn) {
if (txn != null && txn.getTxn() != null && txn.getTxn().isActive()) {
log.log(Level.WARNING, "transaction <" + txn.getTxn().getId() + "> rolling back");
txn.getTxn().rollback();
} else if (txn == null || txn.getTxn() == null) {
log.log(Level.WARNING, "finalizing NULL transaction, not rolling back");
} else if (!txn.getTxn().isActive()) {
log.log(Level.FINEST, "transaction <" + txn.getTxn().getId() + "> NOT rolling back");
}
}
public void deleteMyObject(Long id) {
Objectify txn = null;
try {
txn = txn();
txn.delete(new Key<MyObject>(MyObject.class, id));
commit(txn);
} finally {
rollbackIfNeeded(txn);
}
}
public void deleteMyObjects(Long[] ids) {
Objectify txn = null;
List<Key<? extends MyObject>> keys = new ArrayList<Key<? extends MyObject>>();
for (long id : ids) {
keys.add(new Key<MyObject>(MyObject.class, id));
}
try {
txn = txn();
txn.delete(keys);
commit(txn);
} finally {
rollbackIfNeeded(txn);
}
}
}
When I call deleteMyObjects(Long[]), I see nothing suspicious in the logs below. The transaction commits just fine without errors, but the data is not affected. Looping through the same list of ids and deleting the objects one at a time works just fine.
Feb 29, 2012 8:37:42 AM com.test.MyObjectDao txn
FINE: transaction <6> started
Feb 29, 2012 8:37:42 AM com.test.MyObjectDao commit
FINE: transaction <6> committed
Feb 29, 2012 8:37:42 AM com.test.MyObjectDao rollbackIfNeeded
FINEST: transaction <6> NOT rolling back
But the data is unchanged and still present in the datastore!?
Any help welcome.
UPDATE
Stepping into the Objectify code, I wonder whether this has something to do with the namespace? Right here in the Objectify code:
@Override
public Result<Void> delete(Iterable<?> keysOrEntities)
{
// We have to be careful here, objs could contain raw Keys or Keys or entity objects or both!
List<com.google.appengine.api.datastore.Key> keys = new ArrayList<com.google.appengine.api.datastore.Key>();
for (Object obj: keysOrEntities)
keys.add(this.factory.getRawKey(obj));
return new ResultAdapter<Void>(this.ads.delete(this.txn, keys));
}
When I inspect this.factory.getRawKey(obj) in the debugger, I notice that the namespace of the key is empty. NamespaceManager.get(), however, returns the correct namespace!?
The namespace was not set when creating the keys.
The namespace must be set BEFORE creating a key!
So rewriting it like this fixed my problem:
public void deleteMyObjects(Long[] ids) {
Objectify txn = null;
try {
txn = txn();
List<Key<MyObject>> keys = new ArrayList<Key<MyObject>>();
for (long id : ids) {
keys.add(new Key<MyObject>(MyObject.class, id));
}
txn.delete(keys);
commit(txn);
} finally {
rollbackIfNeeded(txn);
}
}
Then I call this :
new MyObjectDao("somenamespace").deleteMyObjects(new Long[] { 1L, 34L, 116L });
In Hibernate, when I save() an object in a transaction and then roll it back, the saved object still remains in the DB. It's strange, because this issue doesn't happen with the update() or delete() methods, just with save().
Here is the code I'm using:
DbEntity dbEntity = getDbEntity();
HibernateUtil.beginTransaction();
Session session = HibernateUtil.getCurrentSession();
session.save(dbEntity);
HibernateUtil.rollbackTransaction();
And here is the HibernateUtil class (just the relevant functions; I guarantee the getSessionFactory() method works well - there is an Interceptor handler, but it doesn't matter here):
private static final ThreadLocal<Session> threadSession = new ThreadLocal<Session>();
private static final ThreadLocal<Transaction> threadTransaction = new ThreadLocal<Transaction>();
/**
* Retrieves the current Session local to the thread.
* <p/>
* If no Session is open, opens a new Session for the running thread.
*
* @return Session
*/
public static Session getCurrentSession()
throws HibernateException {
Session s = (Session) threadSession.get();
try {
if (s == null) {
log.debug("Opening new Session for this thread.");
if (getInterceptor() != null) {
log.debug("Using interceptor: " + getInterceptor().getClass());
s = getSessionFactory().openSession(getInterceptor());
} else {
s = getSessionFactory().openSession();
}
threadSession.set(s);
}
} catch (HibernateException ex) {
throw new HibernateException(ex);
}
return s;
}
/**
* Start a new database transaction.
*/
public static void beginTransaction()
throws HibernateException {
Transaction tx = (Transaction) threadTransaction.get();
try {
if (tx == null) {
log.debug("Starting new database transaction in this thread.");
tx = getCurrentSession().beginTransaction();
threadTransaction.set(tx);
}
} catch (HibernateException ex) {
throw new HibernateException(ex);
}
}
/**
* Rollback the database transaction.
*/
public static void rollbackTransaction()
throws HibernateException {
Transaction tx = (Transaction) threadTransaction.get();
try {
threadTransaction.set(null);
if ( tx != null && !tx.wasCommitted() && !tx.wasRolledBack() ) {
log.debug("Tyring to rollback database transaction of this thread.");
tx.rollback();
}
} catch (HibernateException ex) {
throw new HibernateException(ex);
} finally {
closeSession();
}
}
Thanks
Check if your database supports rollback, i.e. whether you're using InnoDB tables and not MyISAM (you can mix transactional and non-transactional tables, but in most cases you want all your tables to be InnoDB).
MySQL by default uses the MyISAM storage engine. As MyISAM does not support transactions, insert, update and delete statements are written directly to the database; commit and rollback statements are ignored.
In order to use transactions you need to change the storage engine of your tables. Use this command:
ALTER TABLE table_name ENGINE = InnoDB;
(note, however, that the two storage engines are different and you need to test whether your application still behaves as expected)
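Relatedly, if you let Hibernate generate the schema (hbm2ddl), you can have new tables created as InnoDB from the start by picking an InnoDB-aware MySQL dialect. A minimal sketch; this only affects tables Hibernate creates, existing tables still need the ALTER TABLE above:

import org.hibernate.cfg.Configuration;

Configuration configuration = new Configuration();
// MySQL5InnoDBDialect appends ENGINE=InnoDB to generated CREATE TABLE statements.
configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL5InnoDBDialect");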