I have a simple Spring web application which is connected to a PostgreSQL database. I have a method in a DAO which is annotated with @Cacheable. Is there a way to log whether the method goes to the database or its result is loaded from the cache? For example, I'd like to see the following log:
The value is retrieved from db....
The value is retrieved from cache
You can enable trace logs for CacheAspectSupport (org.springframework.cache.interceptor.CacheAspectSupport). This is probably going to give you too much information, though.
In case of a cache hit you'll get:
"Cache entry for key '" + key + "' found in cache '" + cache.getName() + "'"
And in case of a cache miss:
"No cache entry for key '" + key + "' in cache(s) " + context.getCacheNames()
There is no hook point in the caching abstraction that calls you back when those things happen. You may want to look at your cache library to see if it offers one.
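If you are using Logback, one way to narrow the output down is to raise only that logger to TRACE; a minimal sketch doing it programmatically (a <logger> entry in logback.xml achieves the same):

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

// raise Spring's cache interceptor logger to TRACE so the hit/miss messages above show up
// (assumes Logback is the SLF4J backend; adjust if you use Log4j or another implementation)
Logger cacheLogger = (Logger) LoggerFactory.getLogger(
        "org.springframework.cache.interceptor.CacheAspectSupport");
cacheLogger.setLevel(Level.TRACE);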
In Java, fetching entities with a query sometimes returns fewer entities in rare cases. I am using the JDO PersistenceManager. Is it ideal to use it, or do I need to switch to a low-level Datastore fetch to solve this?
String query = "CUID == '" + cuidKey + "' && staffKey == '" + staffKey + "' && StartTimeLong >= " + startDate + " && StartTimeLong < " + endDate + " && status == 'confirmed'";
List<ResultJDO> tempResultList = jdoUtils.fetchEntitiesByQueryWithRangeOrder(ResultJDO.class, query, null, null, "StartTimeLong desc");
The query returns only 4 entities in rare cases, but most of the time it returns all 5 entities.
jdoUtils is a PersistenceManager object.
Do I need to switch to a low-level Datastore fetch to get exact results?
I have tried researching the library you mentioned and similar issues, and found nothing so far. It's hard to know why this is happening or how to fix it with so little information.
On the other hand, the recommended way to interact programmatically with Google Cloud Platform products is through Google's client libraries, since they are already tested and assured to work in almost all cases. Furthermore, using them allows you to open GitHub issues if you find any problem, so that the developers can address them. For the rare cases where you need some functionality that is not already covered, you can open a feature request or call the APIs directly.
In addition to Google's libraries there are two other options for Java that are under active development. One is Objectify and the other is Catatumbo.
I would suggest switching to the Java Datastore client libraries. You can find examples of how to interact with Datastore in link1 and link2. You can also find community-shared code samples on this programcreek page.
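For illustration, the query from your question would look roughly like this with the google-cloud-datastore client library (classes from com.google.cloud.datastore; the kind name "Result" and the property names are guesses based on your JDO code and may need adjusting):

Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

// same filters and ordering as the JDOQL string in the question
Query<Entity> query = Query.newEntityQueryBuilder()
        .setKind("Result")                                         // assumed kind name
        .setFilter(StructuredQuery.CompositeFilter.and(
                StructuredQuery.PropertyFilter.eq("CUID", cuidKey),
                StructuredQuery.PropertyFilter.eq("staffKey", staffKey),
                StructuredQuery.PropertyFilter.ge("StartTimeLong", startDate),
                StructuredQuery.PropertyFilter.lt("StartTimeLong", endDate),
                StructuredQuery.PropertyFilter.eq("status", "confirmed")))
        .setOrderBy(StructuredQuery.OrderBy.desc("StartTimeLong"))
        .build();

QueryResults<Entity> results = datastore.run(query);
while (results.hasNext()) {
    Entity entity = results.next();
    // map the entity to your ResultJDO equivalent here
}

Note that a query combining an inequality filter with several equality filters still needs a matching composite index, just like the JDO version.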
I have been using JSF, JPA and MySQL with EclipseLink for 5 years. I found that I want to shift to ObjectDB, as it is very fast, especially with very large datasets. During migration, I found this issue.
In JPA with EclipseLink, I passed objects as parameters. But in ObjectDB, I need to pass the id of the object to get the results. I would have to change this in several places. Can anyone help me overcome this issue?
This code worked fine with EclipseLink and MySQL. Here I pass the object salesRep as the parameter:
String j = "select b from "
+ " Bill b "
+ " where b.billCategory=:cat "
+ " and b.billType=:type "
+ " and b.salesRep=:rep ";
Map<String, Object> m = new HashMap<>();
m.put("cat", BillCategory.Loading);
m.put("type", BillType.Billed_Bill);
m.put("rep", getWebUserController().getLoggedUser());
I have to change it like this to make it work in ObjectDB. Here I have to pass the id (type long) of the object salesRep as the parameter:
String j = "select b from "
+ " Bill b "
+ " where b.billCategory=:cat "
+ " and b.billType=:type "
+ " and b.salesRep.id=:rep ";
Map<String, Object> m = new HashMap<>();
m.put("cat", BillCategory.Loading);
m.put("type", BillType.Billed_Bill);
m.put("rep", getWebUserController().getLoggedUser().getId());
There is a difference between EclipseLink and ObjectDB in handling detached entity objects. The default behaviour of ObjectDB is to follow the JPA specification and stop loading referenced objects by field access (transparent navigation) once an object becomes detached. EclipseLink does not treat detached objects this way.
This could make a difference in situations such as in a JSF application, where an object becomes detached before loading all necessary referenced data.
One solution (the JPA portable way) is to make sure that all the required data is loaded before objects become detached.
Another possible solution is to enable loading referenced objects by access (transparent navigation) for detached objects, by setting the objectdb.temp.no-detach system property. See #3 in this forum thread.
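For the first (portable) option, a fetch join is one common way to force a referenced object to load before detachment. A sketch based on the query from the question, assuming the salesRep reference is what is needed after the objects become detached:

// JOIN FETCH tells JPA to load b.salesRep together with each Bill,
// so the reference is already populated when the Bill objects become detached
String j = "select b from "
        + " Bill b "
        + " join fetch b.salesRep "
        + " where b.billCategory = :cat "
        + " and b.billType = :type "
        + " and b.salesRep = :rep ";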
As per the title, I'm currently using JDBC in Eclipse to connect to my PostgreSQL database.
I have been running EXPLAIN ANALYZE statements to retrieve query plans from Postgres itself. However, is it possible to store these query plans in a structure that resembles a tree, e.g. a main branch with sub-branches? I read somewhere that it is a good idea to store the plan in an XML document first and manipulate it from there.
Is there an API in Java for me to achieve this? Thanks!
Try using FORMAT XML, e.g.:
t=# explain (analyze, format xml) select * from pg_database join pg_class on true;
QUERY PLAN
----------------------------------------------------------------
<explain xmlns="http://www.postgresql.org/2009/explain"> +
<Query> +
<Plan> +
<Node-Type>Nested Loop</Node-Type> +
<Join-Type>Inner</Join-Type> +
<Startup-Cost>0.00</Startup-Cost> +
<Total-Cost>23.66</Total-Cost> +
<Plan-Rows>722</Plan-Rows> +
<Plan-Width>457</Plan-Width> +
<Actual-Startup-Time>0.026</Actual-Startup-Time> +
<Actual-Total-Time>3.275</Actual-Total-Time> +
<Actual-Rows>5236</Actual-Rows> +
<Actual-Loops>1</Actual-Loops> +
<Plans> +
<Plan> +
<Node-Type>Seq Scan</Node-Type> +
<Parent-Relationship>Outer</Parent-Relationship> +
...and so on
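On the Java side you don't need a PostgreSQL-specific API for the tree part: run the EXPLAIN through plain JDBC and feed the XML into the standard DOM parser; the nested <Plans>/<Plan> elements then already form the tree. A rough sketch (connection details are placeholders):

import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class ExplainToTree {
    public static void main(String[] args) throws Exception {
        String sql = "EXPLAIN (ANALYZE, FORMAT XML) SELECT * FROM pg_database JOIN pg_class ON true";
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/mydb", "user", "password");  // placeholders
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {

            // the plan comes back as query-result text; collect it into one XML string
            StringBuilder xml = new StringBuilder();
            while (rs.next()) {
                xml.append(rs.getString(1)).append('\n');
            }

            // the nested <Plans>/<Plan> elements mirror the plan tree, so the DOM is already tree-shaped
            Document plan = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml.toString())));
            System.out.println("Root node type: "
                    + plan.getElementsByTagName("Node-Type").item(0).getTextContent());
        }
    }
}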
I have some ids (ids from the database, for example 34645) that I currently log as "[34645] - something happened" using something like:
log.info("[" + id + "]" + foo);
Some logs, like "server starting" or "database connection bla", don't have an id and thus don't log one, and that's fine.
However, when I have an id I call methods that also log, but don't have the id, like:
void lookup(String name) {
    // do some lookup and stuff that produces 'result'
    log.info("[" + name + "] has some info we use somewhere: " + result);
}
Is there a (smart) way to get the id logged inside lookup() without passing the id to lookup() or refactoring class hierarchies? Different threads are logging, so setting/unsetting id values for Logback to use will probably be difficult to get right.
As per request (and I like credit): you can use MDC for that.
Info on that is here: http://logback.qos.ch/manual/mdc.html
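MDC values are stored per thread, so the multi-threading concern is exactly what MDC handles: each thread only sees the values it put in itself. A rough sketch, using the id and lookup() from the question (the value is then printed via %X{id} in the Logback pattern):

import org.slf4j.MDC;

MDC.put("id", String.valueOf(id));   // everything logged on this thread can now include the id
try {
    lookup(name);                    // log.info(...) inside lookup() picks it up via %X{id}
} finally {
    MDC.remove("id");                // clean up so the id does not leak into unrelated log lines
}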
Thanks :)
I am trying to enable the Hibernate query cache in my application, which runs inside WildFly 8.2. I have read through all the documentation without any result.
Can anybody please give me a minimalistic example of what I have to do? Please include all configuration such as persistence.xml, hibernate.cfg.xml, ...
After configuring the cache, I run the following code after the query has executed to check whether the cache works:
log.trace("getEntityLoadCount: " + statistic.getEntityLoadCount());
log.trace("getTransactionCount: " + statistic.getTransactionCount());
log.trace("getQueryCacheHitCount: " + statistic.getQueryCacheHitCount());
log.trace("getQueryCacheMissCount: "
+ statistic.getQueryCacheMissCount());
log.trace("getQueryCachePutCount: " + statistic.getQueryCachePutCount());
log.trace("getSecondLevelCacheHitCount: "
+ statistic.getSecondLevelCacheHitCount());
log.trace("getSecondLevelCacheMissCount: "
+ statistic.getSecondLevelCacheMissCount());
log.trace("getSecondLevelCachePutCount: "
+ statistic.getSecondLevelCachePutCount());
log.trace("getSecondLevelCacheRegionNames: "
+ Arrays.toString(statistic.getSecondLevelCacheRegionNames()));
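In case it matters: my current understanding is that, besides setting hibernate.cache.use_query_cache, hibernate.cache.use_second_level_cache and hibernate.generate_statistics, each query also has to be marked cacheable explicitly, roughly like this (entity and variable names are just placeholders):

// placeholder sketch; 'em' is the EntityManager and MyEntity a stand-in entity
TypedQuery<MyEntity> query = em.createQuery("select e from MyEntity e", MyEntity.class);
query.setHint("org.hibernate.cacheable", true);   // Hibernate's JPA hint for the query cache
List<MyEntity> result = query.getResultList();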