I'm still new to the ehcache API, so I may be missing something obvious, but here's my current issue.
I currently have a disk-persistent cache stored on my server. I'm implementing a passive write-behind cache method that saves key/value pairs to a database table. In the event the disk-persistent cache is lost, I'd like to restore the cache from the database table.
Example I'm using for my write-behind logic:
http://scalejava.blogspot.com/2011/10/ehcache-write-behind-example.html
I'm building the disk-persistent cache using the following method:
import com.googlecode.ehcache.annotations.Cacheable;
import com.googlecode.ehcache.annotations.KeyGenerator;
import com.googlecode.ehcache.annotations.PartialCacheKey;
@Cacheable(cacheName = "readRuleCache", keyGenerator = @KeyGenerator(name = "StringCacheKeyGenerator"))
public Rule read(@PartialCacheKey Rule rule, String info) {
    System.out.print("Cache miss: " + rule.toString());
    // code to manipulate the Rule object using info
    try {
        String serializedRule = objectSerializer.convertToString(rule);
        readRuleCache.putWithWriter(new Element(rule.toString(), serializedRule));
    } catch (IOException ioe) {
        System.out.println("error serializing rule object");
        ioe.printStackTrace();
    }
    return rule;
}
The write method I'm overriding in my CacheWriter implementation works fine. Things are getting saved to the database.
@Override
public void write(final Element element) throws CacheException {
    // A parameterized statement keeps keys/values containing quotes from breaking the SQL
    String insertKeyValuePair = "INSERT INTO RULE_CACHE (ID, VALUE) VALUES (?, ?)";
    try {
        PreparedStatement statement = connection.prepareStatement(insertKeyValuePair);
        statement.setString(1, element.getObjectKey().toString());
        statement.setString(2, element.getObjectValue().toString());
        statement.executeUpdate();
        statement.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
Querying and deserializing the string back into an object works fine too. I've validated that all the values of the object are present. The disk-persistent cache is also repopulated when I delete the *.data file and restart the application:
public void preLoadCache() {
    CacheManager cacheManager = CacheManager.getInstance();
    readRuleCache = cacheManager.getCache("readRuleCache");

    Query query = em.createNativeQuery("select * from RULE_CACHE");
    @SuppressWarnings("unchecked")
    List<Object[]> resultList = query.getResultList();

    for (Object[] row : resultList) {
        try {
            System.out.println("Deserializing: " + row[1].toString());
            Rule rule = objectSerializer.convertToObject((String) row[1]);
            rule = RuleValidator.verify(rule);
            if (rule != null) {
                readRuleCache.putIfAbsent(new Element(row[0], rule));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Question
Everything looks OK. However, when I pass Rule objects whose keys should already exist in the cache, the read method is called regardless and the *.data file size increases. The database write method doesn't attempt to insert the existing keys again, though. Any ideas on what I'm doing wrong?
It turns out this was the culprit:
keyGenerator = @KeyGenerator(name = "StringCacheKeyGenerator")
The source material I read suggested that the toString() method I overrode would be used as the key for the cache key/value pair. After further research, it turns out that this is not true: the toString() value is used, but it is nested within class and method information to create a much larger key.
Reference:
http://code.google.com/p/ehcache-spring-annotations/wiki/StringCacheKeyGenerator
Example Expected key:
"[49931]"
Example Actual Key:
"[class x.y.z.WeatherDaoImpl, getWeather class x.y.z.Weather, [class java.lang.String], [49931]]"
Related
I have a transactional method in a CDI bean; on error, it creates an entry in the database with the exception message. This method can be called by a REST endpoint, possibly from multiple threads at once.
We have a SQL constraint to avoid duplicates in the database.
@Transactional
public RegistrationRuleStatus performCheck(RegistrationRule rule, User user) {
    try {
        // check if the rule depends on other rules and, if all are proved, perform the check
        List<RegistrationRule> rules = rule.getRuleParentDependencies();
        boolean parentDependenciesAreProved = true;
        if (!CollectionUtils.isEmpty(rules)) {
            parentDependenciesAreProved = ruleDao.areParentDependenciesProved(rule, user.getId());
        }

        if (parentDependenciesAreProved) {
            Object service = CDI.current().select(Object.class, new NamedAnnotation(rule.getProvider().name())).get();
            Method method = service.getClass().getMethod(rule.getProviderType().getMethod(), Long.class, RegistrationRule.class);
            return (RegistrationRuleStatus) method.invoke(service, user.getId(), rule);
        } else {
            RegistrationRuleStatus status = statusDao.getStatusByUserAndRule(user, rule);
            if (status == null) {
                status = new RegistrationRuleStatus(user, rule, RegistrationActionStatus.START, new Date());
                statusDao.create(status);
            }
            return status;
        }
    } catch (Exception e) {
        LOGGER.error("could not perform check {} for provider {}", rule.getProviderType().name(), rule.getProvider().name(), e.getCause() != null ? e.getCause() : e);
        return statusDao.createErrorStatus(user, rule, e.getCause() != null ? e.getCause().getMessage() : e.getMessage());
    }
}
The createErrorStatus method:
@Transactional
public RegistrationRuleStatus createErrorStatus(User user, RegistrationRule rule, String message) {
    RegistrationRuleStatus status = getStatusByUserAndRule(user, rule);
    if (status == null) {
        status = new RegistrationRuleStatus(user, rule, RegistrationActionStatus.ERROR, new Date());
        status.setErrorCode(CommonPropertyResolver.getMicroServiceErrorCode());
        status.setErrorMessage(message);
        create(status);
    } else {
        status.setStatus(RegistrationActionStatus.ERROR);
        status.setStatusDate(new Date());
        status.setErrorCode(CommonPropertyResolver.getMicroServiceErrorCode());
        status.setErrorMessage(message);
        update(status);
    }
    return status;
}
The problem is that the method is called twice at the same time, and the recorded error is a DuplicateException, which we don't want. We check at the beginning whether the object already exists, but I think both calls run at exactly the same time.
Stack: Java 8 / WildFly / CDI / JPA / EclipseLink.
Any idea?
I'd suggest you consider the following approaches:
1) Implement retry logic. Catch the exception and analyze it. If it indicates an unexpected duplicate (as you described), don't treat it as an error and just repeat the method call (see the sketch after this list). On the second attempt your code will behave differently: it will notice that a record already exists and will not create a duplicate.
2) Use isolation level SERIALIZABLE. Then, within a single transaction, you will "see" consistent behaviour: if a select hasn't found a particular record, then no other transaction can insert such a record until the end of this transaction, and there will be no duplicate-related exception. But the price is that the whole table is locked for each such transaction, which can significantly degrade application performance.
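A minimal sketch of the first approach, reusing the DAO from the question (isDuplicateKey is an assumed helper that walks the cause chain looking for your database's duplicate-key error; note that with container-managed transactions the violation may only surface at commit, so the retry must wrap the transactional boundary):

public RegistrationRuleStatus createErrorStatusWithRetry(User user, RegistrationRule rule, String message) {
    try {
        return statusDao.createErrorStatus(user, rule, message);
    } catch (RuntimeException e) {
        if (isDuplicateKey(e)) { // assumed helper, see above
            // a concurrent call inserted the row first; on retry,
            // getStatusByUserAndRule finds it and the method updates instead
            return statusDao.createErrorStatus(user, rule, message);
        }
        throw e;
    }
}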
It is well known that you must use the following pattern in order to update an order in ATG form handlers that don't inherit from PurchaseProcessFormHandler:
boolean acquireLock = false;
ClientLockManager lockManager = getLocalLockManager();
try {
    acquireLock = !lockManager.hasWriteLock(profile.getRepositoryId(), Thread.currentThread());
    if (acquireLock) {
        lockManager.acquireWriteLock(profile.getRepositoryId(), Thread.currentThread());
    }

    boolean shouldRollback = false;
    TransactionDemarcation transactionDemarcation = new TransactionDemarcation();
    TransactionManager transactionManager = getTransactionManager();
    transactionDemarcation.begin(transactionManager, TransactionDemarcation.REQUIRED);
    try {
        synchronized (getOrder()) {
            ...
        }
    } catch (final Exception ex) {
        shouldRollback = true;
        vlogError(ex, "There has been an exception during processing of order: {0}", getOrder().getId());
    } finally {
        try {
            transactionDemarcation.end(shouldRollback);
        } catch (final TransactionDemarcationException tde) {
            vlogError(tde, "TransactionDemarcationException during finally: {0}", tde.getMessage());
        } finally {
            vlogDebug("Ending Transaction for orderId: {0}", order.getId());
        }
    }
} catch (final DeadlockException de) {
    vlogError(de, "There has been an exception during processing of order: {0}", order.getId());
} catch (final TransactionDemarcationException tde) {
    vlogError(tde, "There has been an exception during processing of order: {0}", order.getId());
} finally {
    try {
        if (acquireLock) {
            lockManager.releaseWriteLock(getOrder().getProfileId(), Thread.currentThread(), true);
        }
    } catch (final Throwable th) {
        vlogError(th, "There has been an error during release of write lock: {0}", th.getMessage());
    }
}
In theory, any form handler that inherits from PurchaseProcessFormHandler already implements the following steps OOTB:
Acquire a lock from the local ClientLockManager, to prevent concurrent threads from modifying the same order:
try {
    acquireLock = !lockManager.hasWriteLock(profile.getRepositoryId(), Thread.currentThread());
    if (acquireLock) {
        lockManager.acquireWriteLock(profile.getRepositoryId(), Thread.currentThread());
    }
} catch (final DeadlockException de) {
    vlogError(de, "There has been an exception during processing of order: {0}", order.getId());
}
Create a new Transaction:
try {
    TransactionDemarcation transactionDemarcation = new TransactionDemarcation();
    TransactionManager transactionManager = getTransactionManager();
    transactionDemarcation.begin(transactionManager, TransactionDemarcation.REQUIRED);
} catch (final TransactionDemarcationException tde) {
    vlogError(tde, "There has been an exception during processing of order: {0}", order.getId());
}
End the transaction being used:
try {
    TransactionManager transactionManager = getTransactionManager();
    Transaction transaction = transactionManager.getTransaction();
    // If the transaction is eligible for committing:
    transactionManager.commit();
    transaction.commit();
    // otherwise:
    transactionManager.rollback();
    transaction.rollback();
} catch (final Exception ex) {
    error = true;
    vlogError(ex, "There has been an exception during processing of order: {0}", order.getId());
} finally {
    // handle the error
}
Release the lock being used for the transaction:
finally {
    ClientLockManager lockManager = getLocalLockManager();
    lockManager.releaseWriteLock(profile.getRepositoryId(), Thread.currentThread(), true);
}
As per the ATG documentation, the following methods implement the behaviour described above:
Method: beforeSet
Called before any setX methods on this form are set when a form that modifies properties of this form handler is submitted. Creates a transaction if necessary at the beginning of the form submission process, optionally obtaining a local lock to prevent multiple forms from creating transactions that may modify the same order.
Steps: 1 & 2
Method: afterSet
Called after any setX methods on this form are set when a form that modifies properties of this form handler is submitted. Commits or rolls back any transaction created in beforeSet, and releases any lock that was acquired at the time.
Steps: 3 & 4
Thus, you only have to handle the following procedures in order to update the order:
Synchronize the block of code that performs the order update, to avoid thread concurrency:
synchronized (getOrder()) {
    ...
}
Perform order modifications:
synchronized (getOrder()) {
    getOrder().setXXX();
    getOrder().removeXXX();
}
Update the order (updateOrder pipeline chain will be invoked):
synchronized (getOrder()) {
    ...
    getOrderManager().updateOrder(order);
}
This is pretty straightforward, unless you have to edit an order in any of the following scenarios:
Form handlers or custom form handlers that are not in the PurchaseProcessFormHandler hierarchy.
Helpers or Tools classes.
Processors
ATG REST Web Services
etc.
If so, you will have to implement the Transactional Pattern within your components.
Questions!
Is there any other pattern known to use instead of using the transactional pattern?
Would it be possible to implement/override the beforeSet & afterSet methods in form handlers the same way ATG does it in PurchaseProcessFormHandler?
Are you aware of any other approach?
The series of steps you have outlined above is the prescribed series of steps for updating an order.
Feel free to factor it out in any way you find useful. Just ensure that when you update an order, you, or your inherited code, have performed the requisite steps.
One common way ATG does similar factoring is that, for a given method, say X(...), you would have preX(...), doX(...), and postX(...) methods. You can create an abstract class with all your boilerplate code in the preX() and postX() methods (maybe even declared final) and have doX() declared abstract. Your component then inherits from the abstract class and must implement the doX() method. You may need to handle exceptions explicitly as well.
This is, essentially, what the standard form handlers do (under different names).
For example:
public final void X(...) {
    preX(...); // call the pre method
    try {
        doX(...); // call the do method
    } catch (XException xe) {
        // handle error
    }
    postX(...); // call the post method
}

protected final void preX(...) {
    // do everything you need to do before your custom code
}

protected final void postX(...) {
    // do everything you need to do after your custom code
}

protected abstract void doX(...) throws XException;
Another thing you could do, instead of inheriting from an abstract class, is to define an annotation that has all the boilerplate code.
A third thing you could do, in a similar way, but a lot harder to shoehorn into your ATG code, might be to define an aspect or a method invocation interceptor using third party frameworks.
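As a rough illustration of the aspect route (the @OrderUpdate annotation is hypothetical, and the lock/transaction boilerplate is the series of steps listed earlier, elided here):

// Hypothetical marker annotation for methods that update an order.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface OrderUpdate {}

// AspectJ-style around advice that supplies the boilerplate in one place.
@Aspect
public class OrderUpdateAspect {
    @Around("@annotation(OrderUpdate)")
    public Object wrapWithLockAndTransaction(ProceedingJoinPoint pjp) throws Throwable {
        // steps 1 & 2: acquire the write lock, begin the TransactionDemarcation
        try {
            return pjp.proceed(); // the component's own order modifications
        } finally {
            // steps 3 & 4: end the TransactionDemarcation, release the write lock
        }
    }
}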
However, once again, whatever you do, and however you do it, just ensure that you follow all the steps.
I'm trying to map a byte[] in Java to a BLOB field in my MySQL database. Here's the relevant code:
public void update(IUser data) {
    UserRecordExt user = <get user>;

    // copy other fields over
    user.from(data, USER.OTHERFIELD, USER.OTHERFIELD2);

    if (data instanceof IUserExt) {
        String avatar = ((IUserExt) data).getAvatarUrl();
        if (avatar != null) {
            user.setAvatarUrl(avatar);
        }
    }

    /* **** */
    user.update();
}
IUser is the interface generated by jOOQ for our SQL table.
IUserExt is an extension of that interface, with support for an avatarUrl used by our API to temporarily store data.
UserRecordExt extends UserRecord and implements IUserExt.
getAvatarUrl() receives a base64-encoded string from our API call.
setAvatarUrl() converts this string to a byte[] and stores it under UserRecordExt.avatar.
The field in my database which I'm trying to save to is avatar, and when I reach /* **** */ in the debugger, I can see the avatar property is present and is a populated byte[].
My problem is that when user.update() is called, the generated SQL query I see in the console has avatar set to NULL. I'm absolutely clueless as to what the reason could be, and I'm losing faith in jOOQ a little here, as I'd expect any fields present in my user object to be written to the DB when I call update.
Any ideas?
Here's the code for setAvatarUrl():
public void setAvatarUrl(String avatarUrl) {
    try {
        this.avatar = avatarUrl.getBytes("UTF-8");
    } catch (UnsupportedEncodingException e) {
        System.err.println("failed to getBytes() on avatar");
    }
}
(The error message isn't how the exception should be handled; however, looking at the logs, this isn't the point of failure. The byte[] is generated OK.)
The problem in your current approach is here:
public void setAvatarUrl(String avatarUrl) {
    try {
        this.avatar = avatarUrl.getBytes("UTF-8");
    } catch (UnsupportedEncodingException e) {
        System.err.println("failed to getBytes() on avatar");
    }
}
You apparently added some sort of avatar field in your subtype UserRecordExt. But how is jOOQ supposed to know about it when you call update()? That value isn't really part of the record.
A better implementation would be:
public void setAvatarUrl(String avatarUrl) {
    try {
        // This will actually set the AVATAR value on the record itself!
        super.setAvatar(avatarUrl.getBytes("UTF-8"));
    } catch (UnsupportedEncodingException e) {
        System.err.println("failed to getBytes() on avatar");
    }
}
The above probably answers your question right now. On the other hand:
This is not how you download a resource over the wire. This will just generate a binary representation of the URL string, not of its content ;-) (see the sketch after the next point)
I personally don't think that you should implement this kind of logic in the data access layer, but that discussion is out of scope for this question.
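As an aside, if that string really is base64-encoded image data (as the question states) rather than a URL to fetch, the decoding would presumably look more like this (Java 8's java.util.Base64; just a sketch):

public void setAvatarUrl(String avatarUrl) {
    // Assumes avatarUrl carries base64-encoded image bytes, not a URL to download.
    super.setAvatar(java.util.Base64.getDecoder().decode(avatarUrl));
}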
I am testing QueryDSL against the World database in MySQL. I can retrieve the data as a List<Object[]>, but I cannot get it to return as a List<QCountry>. I am querying via SQL, nothing else. This is what I have:
private void getSomething(Connection connection) {
    QCountry country = QCountry.country;
    SQLTemplates dialect = new HSQLDBTemplates();
    SQLQuery query = new SQLQueryImpl(connection, dialect);

    //List<Object[]> countries = query.from(country).list(country.all());
    List<QCountry> countries = query.from(country).list(country);
    System.out.println(countries);

    try {
        connection.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
I get this error:
java.lang.IllegalArgumentException: RelationalPath based projection can only be used with generated Bean types
You need to generate bean types, as described under "Bean class generation" here: http://blog.mysema.com/2011/01/querying-in-sql-with-querydsl.html
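If I recall the codegen setup from that post correctly, the key step is registering a BeanSerializer with the MetaDataExporter, after which the projection yields the generated bean type (a sketch; the package and folder names are placeholders):

// Code generation: emit Country beans alongside the QCountry query types.
MetaDataExporter exporter = new MetaDataExporter();
exporter.setPackageName("com.example.model");
exporter.setTargetFolder(new File("target/generated-sources/java"));
exporter.setBeanSerializer(new BeanSerializer()); // enables bean generation
exporter.export(connection.getMetaData());

// Querying: with beans generated, the projection returns Country instances.
List<Country> countries = query.from(country).list(country);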
This is a very simple example of Hibernate usage in Java: a function that, when called, creates a new object in the database. If everything goes fine, the changes are stored and visible immediately (no cache issues). If something fails, the database should be restored as if this function had never been called.
public String createObject() {
    PersistentTransaction t = null;
    try {
        t = PersistentManager.instance().getSession().beginTransaction();
        Foods f = new Foods(); // Foods is a Hibernate object
        // set some values on f
        f.save();
        t.commit();
        PersistentManager.instance().getSession().clear();
        return "everything allright";
    } catch (Exception e) {
        System.out.println("Error while creating object");
        e.printStackTrace();
        try {
            t.rollback();
            System.out.println("Database restored after the error.");
        } catch (Exception e1) {
            System.out.println("Error restoring database!");
            e1.printStackTrace();
        }
    }
    return "there was an error";
}
Is there any error? Would you change / improve anything?
I don't see anything wrong with your code here. As @Vinod mentioned, we rely on frameworks like Spring to handle the tedious boilerplate code. After all, you don't want code like this to exist in every possible DAO method you have; it makes things difficult to read and debug.
One option is to use AOP, where you apply AspectJ's "around" advice to your DAO methods to handle the transaction. If you don't feel comfortable with AOP, you can write your own boilerplate wrapper instead, if you are not using a framework like Spring.
Here's an example I crafted that might give you an idea:
// think of this as an anonymous block of code you want to wrap with a transaction
public abstract class CodeBlock {
    public abstract void execute();
}

// wraps a transaction around the CodeBlock
public class TransactionWrapper {
    public boolean run(CodeBlock codeBlock) {
        PersistentTransaction t = null;
        boolean status = false;
        try {
            t = PersistentManager.instance().getSession().beginTransaction();
            codeBlock.execute();
            t.commit();
            status = true;
        } catch (Exception e) {
            e.printStackTrace();
            try {
                t.rollback();
            } catch (Exception ignored) {
            }
        } finally {
            // close session
        }
        return status;
    }
}
Then, your actual DAO method will look like this:
TransactionWrapper transactionWrapper = new TransactionWrapper();

public String createObject() {
    boolean status = transactionWrapper.run(new CodeBlock() {
        @Override
        public void execute() {
            Foods f = new Foods();
            f.save();
        }
    });
    return status ? "everything allright" : "there was an error";
}
The save will go through a session rather than through the object, unless you have injected the session into the persistent object.
Also add a finally block and close the session there:
finally {
    // session.close()
}
Suggestion: if this code was posted for learning purposes, it is fine; otherwise I would suggest using Spring to manage this boilerplate and worry only about the save.
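For contrast, a minimal sketch of what the Spring-managed version might look like (assumes Spring transaction management and a Hibernate SessionFactory are configured; the sessionFactory field is illustrative):

// Spring demarcates the transaction; an unchecked exception triggers rollback.
@Transactional
public String createObject() {
    Foods f = new Foods();
    // set some values on f
    sessionFactory.getCurrentSession().save(f);
    return "everything allright";
}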