Oracle Coherence: Transaction Management between two Caches - java

I am new to Oracle Coherence. I need to add functionality to my sample project that enables transaction management across two caches.
Since the transaction object is obtained from a NamedCache, and each cache has its own NamedCache object, how can I achieve transaction management that spans both caches?
For a single cache I am able to do transaction management with the following code:
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Base;
import com.tangosol.util.TransactionMap;

import java.util.Collection;
import java.util.Collections;

public class TransactionExample extends Base {
    public static void main(String[] args) {
        // populate the cache
        NamedCache cache = CacheFactory.getCache("dist-extend");
        cache.clear();
        String key1 = "key1";
        String key2 = "key2";
        // create one TransactionMap per NamedCache
        TransactionMap mapTx = CacheFactory.getLocalTransaction(cache);
        mapTx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
        mapTx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
        // gather the cache(s) into a Collection
        Collection txnCollection = Collections.singleton(mapTx);
        boolean fTxSucceeded = false;
        try {
            // start the transaction
            mapTx.begin();
            mapTx.put(key1, new Integer(1001));
            //generateException();
            mapTx.put(key2, new Integer(2001));
            // commit the changes
            fTxSucceeded = CacheFactory.commitTransactionCollection(txnCollection, 1);
            int v1 = ((Integer) cache.get(key1)).intValue();
            int v2 = ((Integer) cache.get(key2)).intValue();
            out("Transaction " + (fTxSucceeded ? "succeeded" : "did not succeed"));
            out("After Insert into Tx Object Updated value for key 1: " + v1);
            out("After Insert into Tx Object Updated value for key 2: " + v2);
            //CacheFactory.shutdown();
        } catch (Exception t) {
            // rollback
            CacheFactory.rollbackTransactionCollection(txnCollection);
            t.printStackTrace();
        }
        out("Updated Value From Cache key 1: " + cache.get(key1));
        out("Updated Value From Cache key 2: " + cache.get(key2));
    }

    public static void generateException() throws Exception {
        throw new Exception("Manual Error Throw");
    }
}

According to the Oracle documentation, you can achieve this by using the Connection API: both caches must be transactional and obtained from the same instance of the Connection class. See the example here.
Note that if you're planning to synchronize the cache with a backing data source, this functionality may behave differently depending on the synchronization strategy (write-through, write-behind, etc.) you use.
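For reference, a rough sketch of the Transaction Framework approach described there. The cache names are placeholders, both caches must be configured with the transactional scheme, and the exact API should be verified against your Coherence version:
// Both caches come from the same Connection, so they participate in one transaction.
Connection con = new DefaultConnectionFactory().createConnection("TransactionalCache");
con.setAutoCommit(false);
try {
    OptimisticNamedCache cacheA = con.getNamedCache("tx-cache-a");
    OptimisticNamedCache cacheB = con.getNamedCache("tx-cache-b");
    cacheA.insert("key1", Integer.valueOf(1001));
    cacheB.insert("key2", Integer.valueOf(2001));
    con.commit();    // changes to both caches become visible atomically
} catch (Exception e) {
    con.rollback();  // neither cache keeps the uncommitted changes
    throw e;
} finally {
    con.close();
}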

Related

Improve performance of loading 100,000 records from database

We created a program to make using the database easier from other programs, so the code I'm showing gets used in multiple other programs.
One of those programs gets about 10,000 records from one of our clients and has to check whether they are already in our database. If not, we insert them (they can also change and then have to be updated).
To make this easy, we load every entry from the whole table (currently 120,000), create an object for each entry we get, and put all of them into a HashMap.
Loading the whole table this way takes around 5 minutes. We also sometimes have to restart the program because we run into a GC overhead error, since we work on limited hardware. Any ideas on how we can improve the performance?
Here is the code to load all entries (we have a global limit of 10,000 entries per query, so we use a loop):
public Map<String, IMasterDataSet> getAllInformationObjects(ISession session) throws MasterDataException {
IQueryExpression qe;
IQueryParameter qp;
// our main SDP class
Constructor<?> constructorForSDPbaseClass = getStandardConstructor();
SimpleDateFormat itaTimestampFormat = new SimpleDateFormat("yyyyMMddHHmmssSSS");
// search in standard time range (modification date!)
Calendar cal = Calendar.getInstance();
cal.set(2010, Calendar.JANUARY, 1);
Date startDate = cal.getTime();
Date endDate = new Date();
Long startDateL = Long.parseLong(itaTimestampFormat.format(startDate));
Long endDateL = Long.parseLong(itaTimestampFormat.format(endDate));
IDescriptor modDesc = IBVRIDescriptor.ModificationDate.getDescriptor(session);
// count once before to determine initial capacities for hash map/set
IBVRIArchiveClass SDP_ARCHIVECLASS = getMasterDataPropertyBag().getSDP_ARCHIVECLASS();
qe = SDP_ARCHIVECLASS.getQueryExpression(session);
qp = session.getDocumentServer().getClassFactory()
.getQueryParameterInstance(session, new String[] {SDP_ARCHIVECLASS.getDatabaseName(session)}, null, null);
qp.setExpression(qe);
qp.setHitLimitThreshold(0);
qp.setHitLimit(0);
int nrOfHitsTotal = session.getDocumentServer().queryCount(session, qp, "*");
int initialCapacity = (int) (nrOfHitsTotal / 0.75 + 1);
// MD sets; and objects already done (here: document ID)
HashSet<String> objDone = new HashSet<>(initialCapacity);
HashMap<String, IMasterDataSet> objRes = new HashMap<>(initialCapacity);
qp.close();
// do queries until hit count is smaller than 10.000
// use modification date
boolean keepGoing = true;
while(keepGoing) {
// construct query expression
// - basic part: Modification date & class type
// a. doc. class type
qe = SDP_ARCHIVECLASS.getQueryExpression(session);
// b. ID
qe = SearchUtil.appendQueryExpressionWithANDoperator(session, qe,
new PlainExpression(modDesc.getQueryLiteral() + " BETWEEN " + startDateL + " AND " + endDateL));
// 2. Query Parameter: set database; set expression
qp = session.getDocumentServer().getClassFactory()
.getQueryParameterInstance(session, new String[] {SDP_ARCHIVECLASS.getDatabaseName(session)}, null, null);
qp.setExpression(qe);
// order by modification date; hitlimit = 0 -> no hitlimit, but the usual 10.000 max
qp.setOrderByExpression(session.getDocumentServer().getClassFactory().getOrderByExpressionInstance(modDesc, true));
qp.setHitLimitThreshold(0);
qp.setHitLimit(0);
// Do not sort by modification date;
qp.setHints("+NoDefaultOrderBy");
keepGoing = false;
IInformationObject[] hits = null;
IDocumentHitList hitList = null;
hitList = session.getDocumentServer().query(qp, session);
IDocument doc;
if (hitList.getTotalHitCount() > 0) {
hits = hitList.getInformationObjects();
for (IInformationObject hit : hits) {
String objID = hit.getID();
if(!objDone.contains(objID)) {
// do something with this object and the class
// here: construct a new SDP sub class object and give it back via interface
doc = (IDocument) hit;
IMasterDataSet mdSet;
try {
mdSet = (IMasterDataSet) constructorForSDPbaseClass.newInstance(session, doc);
} catch (Exception e) {
// cause for this
String cause = (e.getCause() != null) ? e.getCause().toString() : MasterDataException.ERRMSG_PART_UNKNOWN;
throw new MasterDataException(MasterDataException.ERRMSG_NOINSTANCE_POSSIBLE, this.getClass().getSimpleName(), e.toString(), cause);
}
objRes.put(mdSet.getID(), mdSet);
objDone.add(objID);
}
}
doc = (IDocument) hits[hits.length - 1];
Date lastModDate = ((IDateValue) doc.getDescriptor(modDesc).getValues()[0]).getValue();
startDateL = Long.parseLong(itaTimestampFormat.format(lastModDate));
keepGoing = (hits.length >= 10000 || hitList.isResultSetTruncated());
}
qp.close();
}
return objRes;
}
Loading 120,000 rows (and more) each time will not scale, and your solution may stop working as the table grows. Instead, let the database server handle the problem.
Your table needs a primary key or unique key based on the columns of the records. Iterate through the 10,000 incoming records and perform a JDBC SQL update, modifying all field values with a where clause that exactly matches the primary/unique key:
update BLAH set COL1 = ?, COL2 = ? where PKCOL = ?  -- ... AND PKCOL2 = ? ...
This modifies an existing row or does nothing at all, and JDBC executeUpdate() returns 0 or 1 indicating the number of rows changed. If the number of rows changed is zero, you have detected a new record that does not exist yet, so perform an insert for that record only:
insert into BLAH (COL1, COL2, ... PKCOL) values (?, ?, ..., ?)
You can decide whether to run 10,000 updates followed by however many inserts are needed, or to do update + optional insert per record. Remember that JDBC batch statements and turning auto-commit off can help speed things up.
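A minimal sketch of that approach, assuming a plain JDBC Connection named connection and a hypothetical Record class holding the incoming client data (the table and column names are the placeholders from above):
connection.setAutoCommit(false);
try (PreparedStatement update = connection.prepareStatement(
         "update BLAH set COL1 = ?, COL2 = ? where PKCOL = ?");
     PreparedStatement insert = connection.prepareStatement(
         "insert into BLAH (COL1, COL2, PKCOL) values (?, ?, ?)")) {
    for (Record r : incomingRecords) {           // the ~10,000 records from the client
        update.setString(1, r.getCol1());
        update.setString(2, r.getCol2());
        update.setString(3, r.getKey());
        if (update.executeUpdate() == 0) {       // 0 rows changed -> record does not exist yet
            insert.setString(1, r.getCol1());
            insert.setString(2, r.getCol2());
            insert.setString(3, r.getKey());
            insert.addBatch();                   // collect new rows and insert them in one batch
        }
    }
    insert.executeBatch();
    connection.commit();
} catch (SQLException e) {
    connection.rollback();
    throw e;
}
This way only the 10,000 incoming records travel over the network, and the 120,000-row table never has to be loaded into the JVM at all.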

Database insertion synchronization

I have Java code that generates a request number based on data read from the database, and then updates the database with the newly generated number.
synchronized (this.getClass()) {
counter++;
System.out.println(counter);
System.out.println("start " + System.identityHashCode(this));
certRequest
.setRequestNbr(generateRequestNumber(certInsuranceRequestAddRq
.getAccountInfo().getAccountNumberId()));
System.out.println("outside funcvtion"+certRequest.getRequestNbr());
reqId = Utils.getUniqueId();
certRequest.setRequestId(reqId);
System.out.println(reqId);
ItemIdInfo itemIdInfo = new ItemIdInfo();
itemIdInfo.setInsurerId(certRequest.getRequestId());
certRequest.setItemIdInfo(itemIdInfo);
dao.insert(certRequest);
addAccountRel();
counter++;
System.out.println(counter);
System.out.println("end");
}
The output of the System.out.println() statements is:
1
start 27907101
com.csc.exceed.certificate.domain.CertRequest#a042cb
inside function request number66
outside funcvtion66
AF88172D-C8B0-4DCD-9AC6-12296EF8728D
2
end
3
start 21695531
com.csc.exceed.certificate.domain.CertRequest#f98690
inside function request number66
outside funcvtion66
F3200106-6033-4AEC-8DC3-B23FCD3CA380
4
end
In my case this code is called from two threads.
As you can see, both threads run independently, yet the generated request number is the same in both cases.
Is it possible that the second thread starts executing before the first thread's database update completes?
the code for generateRequestNumber() is as follows:
public String generateRequestNumber(String accNumber) throws Exception {
String requestNumber = null;
if (accNumber != null) {
String SQL_QUERY = "select CERTREQUEST.requestNbr from CertRequest as CERTREQUEST, "
+ "CertActObjRel as certActObjRel where certActObjRel.certificateObjkeyId=CERTREQUEST.requestId "
+ " and certActObjRel.certObjTypeCd=:certObjTypeCd "
+ " and certActObjRel.certAccountId=:accNumber ";
String[] parameterNames = { "certObjTypeCd", "accNumber" };
Object[] parameterVaues = new Object[] {
Constants.REQUEST_RELATION_CODE, accNumber };
List<?> resultSet = dao.executeNamedQuery(SQL_QUERY,
parameterNames, parameterVaues);
// List<?> resultSet = dao.retrieveTableData(SQL_QUERY);
if (resultSet != null && resultSet.size() > 0) {
requestNumber = (String) resultSet.get(0);
}
int maxRequestNumber = -1;
if (requestNumber != null && requestNumber.length() > 0) {
maxRequestNumber = maxValue(resultSet.toArray());
requestNumber = Integer.toString(maxRequestNumber + 1);
} else {
requestNumber = Integer.toString(1);
}
System.out.println("inside function request number"+requestNumber);
return requestNumber;
}
return null;
}
Databases allow multiple simultaneous connections, so unless you write your code carefully you can corrupt the data.
Since you only seem to need a unique, growing integer, you can generate one safely inside the database, for example with a sequence (if the database supports them). Databases without sequences usually provide some other mechanism, such as auto-increment columns in MySQL.
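A minimal sketch of the auto-increment variant, assuming a JDBC Connection named connection and an assumed CERT_REQUEST table whose REQUEST_NBR column is generated by the database (use a sequence instead on Oracle/PostgreSQL):
try (PreparedStatement ps = connection.prepareStatement(
        "insert into CERT_REQUEST (ACCOUNT_ID, REQUEST_ID) values (?, ?)",
        Statement.RETURN_GENERATED_KEYS)) {
    ps.setString(1, accNumber);
    ps.setString(2, reqId);
    ps.executeUpdate();
    try (ResultSet generated = ps.getGeneratedKeys()) {
        if (generated.next()) {
            // the number is assigned atomically by the database, so two threads can never get the same value
            certRequest.setRequestNbr(Long.toString(generated.getLong(1)));
        }
    }
}
With this, the synchronized block and the select-max-plus-one logic in generateRequestNumber() are no longer needed.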

How to add elements to ConcurrentHashMap using ExecutorService

I have a requirement to read user information from two different sources (databases) per userId and store the consolidated information in a Map keyed by userId. The number of users can vary based on the period they have opted for; groups of users may belong to different periods of the year, e.g. daily, weekly, or monthly users.
I used HashMap and LinkedHashMap to get this done. Since that slows down the process, I thought of using threading to make it faster.
After reading some tutorials and examples I am now using ConcurrentHashMap and an ExecutorService.
In some cases, based on validation, I want to skip the current iteration and move on to the next user's info, but the continue keyword cannot be used inside the Callable the way it can in a plain for loop. Is there another way to achieve the same thing in multithreaded code?
Moreover, although the code below works, it is not significantly faster than the code without threading, which makes me doubt whether the ExecutorService is implemented correctly.
Also, how do we debug errors in multithreaded code? Execution stops at a breakpoint, but not consistently, and it does not move to the next line with F6.
Can someone point out if I am missing something in the code? Any other example of a similar use case would also be of great help.
public void getMap() throws UserException
{
long startTime = System.currentTimeMillis();
Map<String, Map<Integer, User>> map = new ConcurrentHashMap<String, Map<Integer, User>>();
//final String key = "";
try
{
final Date todayDate = new Date();
List<String> applyPeriod = db.getPeriods(todayDate);
for (String period : applyPeriod)
{
try
{
final String key = period;
List<UserTable1> eligibleUsers = db.findAllUsers(key);
Map<Integer, User> userIdMap = new ConcurrentHashMap<Integer, User>();
ExecutorService executor = Executors.newFixedThreadPool(eligibleUsers.size());
CompletionService<User> cs = new ExecutorCompletionService<User>(executor);
int userCount=0;
for (UserTable1 eligibleUser : eligibleUsers)
{
try
{
cs.submit(
new Callable<User>()
{
public User call()
{
int userId = eligibleUser.getUserId();
List<EmployeeTable2> empData = db.findByUserId(userId);
EmployeeTable2 emp = null;
if (null != empData && !empData.isEmpty())
{
emp = empData.get(0);
}else{
String errorMsg = "No record found for given User ID in emp table";
logger.error(errorMsg);
//continue;
// conitnue does not work here.
}
User user = new User();
user.setUserId(userId);
user.setFullName(emp.getFullName());
return user;
}
}
);
userCount++;
}
catch(Exception ex)
{
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
}
for (int i = 0; i < userCount ; i++ ) {
try {
User user = cs.take().get();
if (user != null) {
userIdMap.put(user.getUserId(), user);
}
} catch (ExecutionException e) {
} catch (InterruptedException e) {
}
}
executor.shutdown();
map.put(key, userIdMap);
}
catch(Exception ex)
{
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
}
}
catch(Exception ex){
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
logger.info("Size of Map : " + map.size());
Set<String> periods = map.keySet();
logger.info("Size of periods : " + periods.size());
for(String period :periods)
{
Map<Integer, User> mapOfuserIds = map.get(period);
Set<Integer> userIds = mapOfuserIds.keySet();
logger.info("Size of Set : " + userIds.size());
for(Integer userId : userIds){
User inf = mapOfuserIds.get(userId);
logger.info("User Id : " + inf.getUserId());
}
}
long endTime = System.currentTimeMillis();
long timeTaken = (endTime - startTime);
logger.info("All threads are completed in " + timeTaken + " milisecond");
logger.info("******END******");
}
You really don't want to create a thread pool with as many threads as there are users read from the db. That rarely makes sense, because threads need to run somewhere: there are not many servers out there with 10, 100, or even 1000 cores reserved for your application. A much smaller value, maybe 5, is often enough, depending on your environment.
And as always with performance questions: first find out what your actual bottleneck is. Your application may simply not benefit from threading because, for example, you are reading from a db that only allows 5 concurrent connections at the same time. In that case all your other 995 threads will simply wait.
Another thing to consider is network latency: reading individual users from multiple threads may even increase the total round-trip time needed to fetch the data from the database. An alternative approach is not to read one user at a time, but to fetch the data for all 10,000 of them at once. That way the communication overhead with the database stays small, and a single answer may deliver all the data you need quickly.
So in short, in my opinion your question is about performance optimization of your problem in general, and you don't yet know enough to decide which way to go.
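To illustrate the bounded-pool point (and the continue question above), here is a sketch using the names from the question; the pool size of 5 is just an example, and returning null from the Callable plays the role of continue:
ExecutorService executor = Executors.newFixedThreadPool(5);          // small fixed pool, not one thread per user
CompletionService<User> cs = new ExecutorCompletionService<>(executor);
for (UserTable1 eligibleUser : eligibleUsers) {
    final int userId = eligibleUser.getUserId();
    cs.submit(() -> {
        List<EmployeeTable2> empData = db.findByUserId(userId);
        if (empData == null || empData.isEmpty()) {
            logger.error("No record found for user ID " + userId);
            return null;                                              // acts like 'continue': skip this user
        }
        User user = new User();
        user.setUserId(userId);
        user.setFullName(empData.get(0).getFullName());
        return user;
    });
}
for (int i = 0; i < eligibleUsers.size(); i++) {
    try {
        User user = cs.take().get();                                  // next finished task, in completion order
        if (user != null) {
            userIdMap.put(user.getUserId(), user);
        }
    } catch (InterruptedException | ExecutionException e) {
        logger.error("Error while building user map", e);
    }
}
executor.shutdown();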
You could try something like this (note that the outer map must be a ConcurrentHashMap, because it is written from multiple threads inside the parallel forEach):
List<String> periods = db.getPeriods(todayDate);
Map<String, Map<Integer, User>> hm = new ConcurrentHashMap<>();
periods.parallelStream().forEach(period -> {
    List<UserTable1> eligibleUsers = db.findAllUsers(period);
    hm.put(period, eligibleUsers.parallelStream()
            .collect(Collectors.toConcurrentMap(
                    UserTable1::getUserId,
                    u -> createUserForId(u.getUserId()))));
});
And in createUserForId you do your db reading:
private User createUserForId(Integer userId) {
    List<EmployeeTable2> empData = db.findByUserId(userId);
    User user = new User();
    user.setUserId(userId);
    if (empData != null && !empData.isEmpty()) {
        user.setFullName(empData.get(0).getFullName());
    }
    return user;
}
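Note that parallelStream() runs on the common ForkJoinPool, whose parallelism defaults to roughly the number of CPU cores; if the db calls block for a long time, the explicit ExecutorService variant sketched in the previous answer gives you direct control over how many queries run concurrently.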

How do I extract properties from Entities in Google App Engine Datastore using Java

I am using Google App Engine and trying to query / pull data from the Datastores. I have followed nearly 20 different tutorials without any luck.
Here is a picture of my Datastore and the respective sample data I have stored in there:
Here is some of the code I have to pull the data:
//To obtain the keys
final DatastoreService dss=DatastoreServiceFactory.getDatastoreService();
final Query query=new Query("Coupon");
List<Key> keys = new ArrayList<Key>();
//Put the keys into a list for iteration
for (final Entity entity : dss.prepare(query).asIterable(FetchOptions.Builder.withLimit(100000))) {
keys.add(entity.getKey());
}
try {
for (int i = 0; i < keys.size(); i++){
Entity myEntity = new Entity("Coupon", keys.get(i));
System.out.println("Size of the Keys array = " + keys.size());
String description = (String) myEntity.getProperty("desc");
String endDate = (String) myEntity.getProperty("endDate");
System.out.println("Description = " + description);
System.out.println("End Date: " + endDate);
//Map here is empty...
Map<String, Object> test = myEntity.getProperties();
System.out.println("MAP SIZE = " + test.size());
}
} catch (Exception e){
e.printStackTrace();
}
**OUTPUT:**
Size of the Keys array = 2
Description = null
End date = null
MAP SIZE = 0
I have no clue why the description and end date are null. It is clearly pulling in the right entities, as the size shows 2, which matches the picture shown. When I print the keys out they match as well; keys.get(i).toString() gives something like Entity [!global:Coupon(123)/Coupon(no-id-yet)], or Key String = !global:Coupon(5730827476402176).
I have followed the documentation (here) and some examples (here) to the best of my ability, but I cannot figure it out. Does anyone have any recommendations, or experience with obtaining properties from entities without them coming back null?
I have gone through the following Stackoverflow questions without any success so please do not close this with a simple duplicate question marker on it:
1) How do i get all child entities in Google App Engine (Low-level API)
2) Storing hierarchical data in Google App Engine Datastore?
3) How do you use list properties in Google App Engine datastore in Java?
4) Mass updates in Google App Engine Datastore
5) Checking if Entity exists in google app engine datastore. .
Have you tried this?
// Read the properties directly from the entities returned by the query
for (final Entity entity : dss.prepare(query).asIterable(FetchOptions.Builder.withLimit(100000))) {
    String description = (String) entity.getProperty("desc");
    String endDate = (String) entity.getProperty("endDate");
    System.out.println("Description = " + description);
    System.out.println("End Date: " + endDate);
}
In your example you are creating a brand-new Entity, so it is expected that its properties are empty.
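Alternatively, if you want to keep the key list, fetch the stored entity with DatastoreService.get() instead of constructing a new one; a minimal sketch using the keys list from the question:
for (Key key : keys) {
    try {
        Entity stored = dss.get(key);                     // loads the persisted entity for this key
        System.out.println("Description = " + (String) stored.getProperty("desc"));
        System.out.println("End Date: " + (String) stored.getProperty("endDate"));
    } catch (EntityNotFoundException e) {
        e.printStackTrace();
    }
}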
Eureka! Many thanks to all who answered. Patrice and user2649908 especially, thank you for leading me to the answer.
Patrice was entirely correct: I was querying to get the keys, building a new entity, and then trying to read properties from the newly created (empty) entity.
The solution was to use a JDO PersistenceManager to load the data and then read it through getter/accessor methods. The link for the persistence manager setup (which I more or less copied directly, as it worked perfectly) is here:
How to use JDO persistence manager?
Once I set up the persistence manager, I was able to pull the data using this code:
try {
for (int i = 0; i < keys.size(); i++){
//See the link for How to use JDO persistence manager on how to use this
PersistenceManager pm = MyPersistenceManagerClass.getPM();
//Need to cast it here because it returns an object
Coupon coupon = (Coupon) pm.getObjectById(Coupon.class, keys.get(i));
System.out.println("Created by = " + coupon.getCreatedBy());
System.out.println("Description = " + coupon.getDesc());
System.out.println("Modified by = " + coupon.getModifiedBy());
}
} catch (Exception e){
e.printStackTrace();
}

Get entity group count always return 0

Following the official GAE doc, I tried to test it in my local dev environment (unit test); unfortunately the entity group count always returns 0:
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
MemcacheService memcacheService = MemcacheServiceFactory.getMemcacheService();
Entity entity1 = new Entity("Simple");
Key key1 = ds.put(entity1);
Key entityGroupKey = Entities.createEntityGroupKey(key1);
//should print 1, but 0
showEntityGroupCount(ds, memcacheService, entityGroupKey);
Entity entity2 = new Entity("Simple", key1);
Key key2 = ds.put(entity2);
//should print 2, but still 0
showEntityGroupCount(ds, memcacheService, entityGroupKey);
Below is the code copied from the doc for quick reference:
// A simple class for tracking consistent entity group counts
class EntityGroupCount implements Serializable {
long version; // Version of the entity group whose count we are tracking
int count;
EntityGroupCount(long version, int count) {
this.version = version;
this.count = count;
}
}
// Display count of entities in an entity group, with consistent caching
void showEntityGroupCount(DatastoreService ds, MemcacheService cache, PrintWriter writer,
Key entityGroupKey) {
EntityGroupCount egCount = (EntityGroupCount) cache.get(entityGroupKey);
if (egCount != null && egCount.version == getEntityGroupVersion(ds, null, entityGroupKey)) {
// Cached value matched current entity group version, use that
writer.println(egCount.count + " entities (cached)");
} else {
// Need to actually count entities. Using a transaction to get a consistent count
// and entity group version.
Transaction tx = ds.beginTransaction();
PreparedQuery pq = ds.prepare(tx, new Query(entityGroupKey));
int count = pq.countEntities(FetchOptions.Builder.withLimit(5000));
cache.put(entityGroupKey,
new EntityGroupCount(getEntityGroupVersion(ds, tx, entityGroupKey), count));
tx.rollback();
writer.println(count + " entities");
}
}
Any ideas about this problem? Thanks in advance.
Entities.createEntityGroupKey() is being called twice as a result of method nesting. Change both occurrences of
showEntityGroupCount(ds, memcacheService, entityGroupKey);
to
showEntityGroupCount(ds, memcacheService, key1);
and the correct counts appear (in the development environment anyway).
