I am using a ConcurrentSkipListMap in my code, and I want to make sure a method is thread-safe.
public synchronized UserLogin getOrCreateLogin(Integer userId) {
    // new user
    if (!sessionCache.containsKey(userId)) {
        String sessionKey = sessionKeyPool.getKey();
        sessionCache.putIfAbsent(userId, sessionKey);
        return userSessions.computeIfAbsent(sessionKey, userLogin -> new UserLogin(userId, sessionKey));
    } else { // old user, check session
        String oldSessionKey = sessionCache.get(userId);
        UserLogin old = userSessions.get(oldSessionKey);
        if (!old.isExpired()) return old;
        log.info("Session expired! Create another. User id: " + userId + ", last login: " + old.getStartTime());
        String newKey = sessionKeyPool.getKey();
        UserLogin another = new UserLogin(userId, newKey);
        userSessions.replace(newKey, another);
        sessionCache.replace(userId, newKey);
        return another;
    }
}
I know that the map performs containsKey(), get() and so on atomically. But that does not mean the whole block cannot be entered by two threads at the same time; it only means the happens-before relationship holds for each individual call (the first thread to reach that call completes it before another thread's call takes effect). So how do I actually compose these methods to get a consistent result? How do I lock across these lines?
How can I lock at the finest possible granularity without compromising the consistency of the output data?
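One common pattern for this kind of composition (a sketch only; the lock map and the doGetOrCreateLogin helper are illustrative, not from the original code) is to lock per user rather than per method, so the whole check-then-act sequence is atomic for a given userId while threads working on different users never block each other:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: one lock object per userId; computeIfAbsent guarantees that all threads
// asking for the same userId receive the same lock instance.
private final ConcurrentMap<Integer, Object> userLocks = new ConcurrentHashMap<Integer, Object>();

public UserLogin getOrCreateLogin(Integer userId) {
    Object lock = userLocks.computeIfAbsent(userId, id -> new Object());
    synchronized (lock) {
        // the original body of getOrCreateLogin goes here unchanged;
        // every read and write for this userId now happens under the same lock
        return doGetOrCreateLogin(userId);
    }
}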
I am new to Oracle Coherence. I need to add functionality to my sample project that enables transaction management across two caches.
Since the transaction object is obtained from a NamedCache, and there is a separate NamedCache object for each of the two caches, how can I achieve transaction management that spans both caches?
For a single cache I am able to do transaction management with the following code.
public class TransactionExample extends Base {
    public static void main(String[] args) {
        // populate the cache
        NamedCache cache = CacheFactory.getCache("dist-extend");
        cache.clear();
        String key1 = "key1";
        String key2 = "key2";
        //cache.clear();

        // create one TransactionMap per NamedCache
        TransactionMap mapTx = CacheFactory.getLocalTransaction(cache);
        mapTx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
        mapTx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);

        // gather the cache(s) into a Collection
        Collection txnCollection = java.util.Collections.singleton(mapTx);
        boolean fTxSucceeded = false;
        try {
            // start the transaction
            mapTx.begin();
            mapTx.put(key1, new Integer(1001));
            //generateException()
            mapTx.put(key2, new Integer(2001));
            // commit the changes
            fTxSucceeded = CacheFactory.commitTransactionCollection(txnCollection, 1);
            int v1 = ((Integer) cache.get(key1)).intValue();
            int v2 = ((Integer) cache.get(key2)).intValue();
            out("Transaction " + (fTxSucceeded ? "succeeded" : "did not succeed"));
            out("After Insert into Tx Object Updated value for key 1: " + v1);
            out("After Insert into Tx Object Updated value for key 2: " + v2);
            //CacheFactory.shutdown();
        }
        catch (Exception t) {
            // rollback
            CacheFactory.rollbackTransactionCollection(txnCollection);
            t.printStackTrace();
        }
        /*azzert(v1 == 2, "Expected value for key1 == 2");
        azzert(v2 == 2, "Expected value for key2 == 2");*/
        //
        out("Updated Value From Cache key 1: " + cache.get(key1));
        out("Updated Value From Cache key 2: " + cache.get(key2));
    }

    public static void generateException() throws Exception {
        throw new Exception("Manual Error Throw");
    }
}
According to the Oracle documentation, you can achieve this by using the Connection API. Both caches should be transactional and obtained from the same instance of the Connection class. See example here.
Note that if you're planning to use synchronization between the cache and a data source, this functionality can work differently depending on the synchronization strategy (write-through, write-behind, etc.) you use.
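As a rough sketch of that Connection-based approach (the cache and service names below are placeholders, both caches must be configured with a transactional scheme, and the exact method names may differ between Coherence versions, so check the Transaction Framework documentation):

import com.tangosol.coherence.transaction.Connection;
import com.tangosol.coherence.transaction.DefaultConnectionFactory;
import com.tangosol.coherence.transaction.OptimisticNamedCache;

// Sketch only: both caches come from the same Connection, so they commit or
// roll back together.
Connection con = new DefaultConnectionFactory().createConnection("TransactionalCache");
con.setAutoCommit(false); // group the operations on both caches into one transaction
try {
    OptimisticNamedCache cache1 = con.getNamedCache("tx-cache1");
    OptimisticNamedCache cache2 = con.getNamedCache("tx-cache2");

    cache1.insert("key1", Integer.valueOf(1001));
    cache2.insert("key2", Integer.valueOf(2001));

    con.commit();   // both caches are committed together
} catch (Exception e) {
    con.rollback(); // neither cache keeps the changes
    throw e;
} finally {
    con.close();
}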
So I have a collection of emails and what I want to do is use them to output unique triplets (sender email, receiver email, timestamp) like so:
user1@stackoverflow.com user2@stackoverflow.com 09/12/2009 16:45
user1@stackoverflow.com user9@stackoverflow.com 09/12/2009 18:45
user3@stackoverflow.com user4@stackoverflow.com 07/05/2008 12:29
In the above example user 1 sent a single email to multiple recipients (user 2 and user 9). To store the recipients, I created a data structure EdgeWritable (implements WritableComparable) that will hold the Sender and Recipient email addresses as well as a Timestamp.
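For context, a rough sketch of what such an EdgeWritable might look like (the field and helper names are guesses based on how the class is used below, not the actual implementation):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;

// Sketch only: the real class may differ, but it must (de)serialize its fields in
// a fixed order and compare sender, recipient and timestamp so equal triplets group together.
public class EdgeWritable implements WritableComparable<EdgeWritable> {
    private final Text[] endpoints = { new Text(), new Text() }; // [0] = sender, [1] = recipient
    private long timestamp;                                      // milliseconds since the epoch

    public void set(int i, String address) { endpoints[i].set(address); }
    public Text get(int i)                 { return endpoints[i]; }
    public void setTS(long millis)         { timestamp = millis; }
    public long getTS()                    { return timestamp; }

    @Override
    public void write(DataOutput out) throws IOException {
        endpoints[0].write(out);
        endpoints[1].write(out);
        out.writeLong(timestamp);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        endpoints[0].readFields(in);
        endpoints[1].readFields(in);
        timestamp = in.readLong();
    }

    @Override
    public int compareTo(EdgeWritable o) {
        int cmp = endpoints[0].compareTo(o.endpoints[0]);
        if (cmp != 0) return cmp;
        cmp = endpoints[1].compareTo(o.endpoints[1]);
        if (cmp != 0) return cmp;
        return Long.compare(timestamp, o.timestamp);
    }
}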
My mapper looks like this:
private final EdgeWritable edge = new EdgeWritable(); // Data structure for triplets.
private final NullWritable noval = NullWritable.get();
...
@Override
public void map(Text key, BytesWritable value, Context context)
        throws IOException, InterruptedException {
    byte[] bytes = value.getBytes();
    Scanner scanner = new Scanner(new ByteArrayInputStream(bytes), "UTF-8");
    String from = null;                                     // Sender's Email address
    ArrayList<String> recipients = new ArrayList<String>(); // List of recipients' Email addresses
    long millis = -1;                                       // Date

    // Parse information from the file header.
    while (scanner.hasNext()) {
        String line = scanner.nextLine();
        if (line.startsWith("From:")) {
            from = procFrom(stripCommand(line, "From:"));          // Get sender e-mail address.
        } else if (line.startsWith("To:")) {
            procRecipients(stripCommand(line, "To:"), recipients); // Populate recipients into a list.
        } else if (line.startsWith("Date:")) {
            millis = procDate(stripCommand(line, "Date:"));        // Get timestamp.
        }
        if (line.equals("")) { // Empty line indicates the end of the header.
            break;
        }
    }
    scanner.close();

    // Emit EdgeWritable as intermediate key containing Sender, Recipient and Timestamp.
    if (from != null && recipients.size() > 0 && millis != -1) {
        // EdgeWritable has 2 Text values (ew[0] and ew[1]) and a Timestamp. ew[0] is the sender, ew[1] is a recipient.
        edge.set(0, from); // Set ew[0].
        for (int i = 0; i < recipients.size(); i++) {
            edge.set(1, recipients.get(i)); // Set edge from sender to each recipient i.
            edge.setTS(millis);             // Set date.
            context.write(edge, noval);     // Emit the edge as an intermediate key with a null value.
        }
    }
}
...
My reducer simply formats the date and outputs the edges:
public void reduce(EdgeWritable key, Iterable<NullWritable> values, Context context)
        throws IOException, InterruptedException {
    String date = MailReader.sdf.format(key.getTS());    // key is the EdgeWritable emitted by the mapper
    out.set(key.get(0) + " " + key.get(1) + " " + date); // out is a Text field of the reducer
    context.write(noval, out);                           // noval is a NullWritable field of the reducer
}
Using EdgeWritable as the intermediate key and NullWritable as the value (in the mapper) is a requirement; I'm not permitted to use other approaches. This is my first Hadoop/MapReduce program and I just want to know whether I'm going in the right direction. I have looked at plenty of MapReduce examples online and have never seen key/value pairs emitted in a for loop the way I have done it. I feel like I'm missing some sort of trick here, but emitting in a for loop is the only approach I can think of.
Is this 'bad'? I hope this is clear, but please let me know if any further clarification is needed.
The map method gets called once for each record, so your array list only holds the recipients for the current call. Declare the array list at class level so it can accumulate values across all records, and then move the emit logic you currently have inside map into the cleanup method. Try this and let me know if it works.
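A rough sketch of that suggestion (the class name MailMapper and the list field are illustrative; the mapper should add a new EdgeWritable per recipient rather than reusing one instance, since the objects are kept until cleanup):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MailMapper extends Mapper<Text, BytesWritable, EdgeWritable, NullWritable> {

    // Class-level state survives across map() calls within one task.
    private final List<EdgeWritable> edges = new ArrayList<EdgeWritable>();
    private final NullWritable noval = NullWritable.get();

    @Override
    public void map(Text key, BytesWritable value, Context context)
            throws IOException, InterruptedException {
        // ... parse the record and add one new EdgeWritable per recipient ...
        // edges.add(newEdge);
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Runs once after the last map() call: emit everything collected above.
        for (EdgeWritable edge : edges) {
            context.write(edge, noval);
        }
    }
}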
I have Java code that generates a request number based on data read from the database, and then updates the database with the newly generated request number.
synchronized (this.getClass()) {
    counter++;
    System.out.println(counter);
    System.out.println("start " + System.identityHashCode(this));
    certRequest.setRequestNbr(
            generateRequestNumber(certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
    System.out.println("outside funcvtion" + certRequest.getRequestNbr());
    reqId = Utils.getUniqueId();
    certRequest.setRequestId(reqId);
    System.out.println(reqId);
    ItemIdInfo itemIdInfo = new ItemIdInfo();
    itemIdInfo.setInsurerId(certRequest.getRequestId());
    certRequest.setItemIdInfo(itemIdInfo);
    dao.insert(certRequest);
    addAccountRel();
    counter++;
    System.out.println(counter);
    System.out.println("end");
}
The output of the System.out.println() statements is:
1
start 27907101
com.csc.exceed.certificate.domain.CertRequest#a042cb
inside function request number66
outside funcvtion66
AF88172D-C8B0-4DCD-9AC6-12296EF8728D
2
end
3
start 21695531
com.csc.exceed.certificate.domain.CertRequest#f98690
inside function request number66
outside funcvtion66
F3200106-6033-4AEC-8DC3-B23FCD3CA380
4
end
In my case this code is called from two threads. If you observe the output, both threads run independently; however, the generated request number is the same in both cases.
Is it possible that the second thread starts executing before the first thread's database update completes?
The code for generateRequestNumber() is as follows:
public String generateRequestNumber(String accNumber) throws Exception {
    String requestNumber = null;
    if (accNumber != null) {
        String SQL_QUERY = "select CERTREQUEST.requestNbr from CertRequest as CERTREQUEST, "
                + "CertActObjRel as certActObjRel where certActObjRel.certificateObjkeyId=CERTREQUEST.requestId "
                + " and certActObjRel.certObjTypeCd=:certObjTypeCd "
                + " and certActObjRel.certAccountId=:accNumber ";
        String[] parameterNames = { "certObjTypeCd", "accNumber" };
        Object[] parameterValues = new Object[] {
                Constants.REQUEST_RELATION_CODE, accNumber };
        List<?> resultSet = dao.executeNamedQuery(SQL_QUERY,
                parameterNames, parameterValues);
        // List<?> resultSet = dao.retrieveTableData(SQL_QUERY);
        if (resultSet != null && resultSet.size() > 0) {
            requestNumber = (String) resultSet.get(0);
        }
        int maxRequestNumber = -1;
        if (requestNumber != null && requestNumber.length() > 0) {
            maxRequestNumber = maxValue(resultSet.toArray());
            requestNumber = Integer.toString(maxRequestNumber + 1);
        } else {
            requestNumber = Integer.toString(1);
        }
        System.out.println("inside function request number" + requestNumber);
        return requestNumber;
    }
    return null;
}
Databases allow multiple simultaneous connections, so unless you write your code properly you can mess up the data.
Since you only seem to need a unique, growing integer, you can easily and safely generate one inside the database, for example with a sequence (if the database supports them). Databases that do not support sequences usually provide some other mechanism, such as auto-increment columns in MySQL.
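For illustration, a rough JDBC sketch of the sequence idea (the sequence name CERT_REQUEST_SEQ and the Oracle-style "from dual" syntax are assumptions; adapt the SQL to your database and DAO layer):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: the database hands out each sequence value exactly once, so two threads
// (or even two JVMs) can never receive the same request number.
public String nextRequestNumber(Connection conn) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
                 "select CERT_REQUEST_SEQ.nextval from dual");
         ResultSet rs = ps.executeQuery()) {
        rs.next();
        return Long.toString(rs.getLong(1));
    }
}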
I have a requirement to read user information from two different sources (databases) per userId and store the consolidated information in a Map keyed by userId. The number of users can vary based on the period they have opted for; groups of users may belong to different periods of the year, e.g. daily, weekly, or monthly users.
I used HashMap and LinkedHashMap to get this done. Since that slows down the process, I thought of using threading to make it faster.
After reading some tutorials and examples, I am now using ConcurrentHashMap and an ExecutorService.
In some cases, based on validation, I want to skip the current iteration and move on to the next user. The compiler does not allow the continue keyword there, because the statement sits inside the Callable's call() method rather than directly inside the for loop. Is there a way to achieve the same thing in multithreaded code?
Moreover, although the code below works, it is not significantly faster than the version without threading, which makes me doubt whether the ExecutorService is implemented correctly.
How do we debug multithreaded code when we get an error? Execution stops at the breakpoint, but not consistently, and it does not move to the next line with F6.
Can someone point out if I am missing something in the code? Any other example of a similar use case would also be a great help.
public void getMap() throws UserException
{
long startTime = System.currentTimeMillis();
Map<String, Map<Integer, User>> map = new ConcurrentHashMap<String, Map<Integer, User>>();
//final String key = "";
try
{
final Date todayDate = new Date();
List<String> applyPeriod = db.getPeriods(todayDate);
for (String period : applyPeriod)
{
try
{
final String key = period;
List<UserTable1> eligibleUsers = db.findAllUsers(key);
Map<Integer, User> userIdMap = new ConcurrentHashMap<Integer, User>();
ExecutorService executor = Executors.newFixedThreadPool(eligibleUsers.size());
CompletionService<User> cs = new ExecutorCompletionService<User>(executor);
int userCount=0;
for (UserTable1 eligibleUser : eligibleUsers)
{
try
{
cs.submit(
new Callable<User>()
{
public User call()
{
int userId = eligibleUser.getUserId();
List<EmployeeTable2> empData = db.findByUserId(userId);
EmployeeTable2 emp = null;
if (null != empData && !empData.isEmpty())
{
emp = empData.get(0);
}else{
String errorMsg = "No record found for given User ID in emp table";
logger.error(errorMsg);
//continue;
// conitnue does not work here.
}
User user = new User();
user.setUserId(userId);
user.setFullName(emp.getFullName());
return user;
}
}
);
userCount++;
}
catch(Exception ex)
{
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
}
for (int i = 0; i < userCount ; i++ ) {
try {
User user = cs.take().get();
if (user != null) {
userIdMap.put(user.getUserId(), user);
}
} catch (ExecutionException e) {
} catch (InterruptedException e) {
}
}
executor.shutdown();
map.put(key, userIdMap);
}
catch(Exception ex)
{
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
}
}
catch(Exception ex){
String errorMsg = "Error while creating map :" + ex.getMessage();
logger.error(errorMsg);
}
logger.info("Size of Map : " + map.size());
Set<String> periods = map.keySet();
logger.info("Size of periods : " + periods.size());
for(String period :periods)
{
Map<Integer, User> mapOfuserIds = map.get(period);
Set<Integer> userIds = mapOfuserIds.keySet();
logger.info("Size of Set : " + userIds.size());
for(Integer userId : userIds){
User inf = mapOfuserIds.get(userId);
logger.info("User Id : " + inf.getUserId());
}
}
long endTime = System.currentTimeMillis();
long timeTaken = (endTime - startTime);
logger.info("All threads are completed in " + timeTaken + " milisecond");
logger.info("******END******");
}
You really don't want to create a thread pool with as many threads as the users you've read from the db. That rarely makes sense, because threads need to run somewhere: there are not many servers out there with 10, 100, or even 1000 cores reserved for your application. A much smaller value, maybe 5, is often enough, depending on your environment.
And as always with performance topics: first measure what your actual bottleneck is. Your application may simply not benefit from threading because, for example, you are reading from a db that only allows 5 concurrent connections at the same time. In that case all your other 995 threads will simply wait.
Another thing to consider is network latency: reading users from multiple threads may even increase the round-trip time needed to get the data for one user from the database. An alternative approach is not to read one user at a time, but to fetch the data for all 10,000 of them at once. That way a fast (say 10 GBit) connection to your database can really speed things up, because the communication overhead stays small and the database can serve all the data you need in one answer.
In short, in my opinion your question is really about performance optimization in general, and you don't yet know enough about the bottleneck to decide which way to go.
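To make the first point concrete, a rough sketch with a small fixed pool (the pool size 5 and the helper loadUser(...) are illustrative, not part of the original code):

import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

// Sketch: a small, bounded pool instead of one thread per user.
Map<Integer, User> loadUsers(List<UserTable1> eligibleUsers)
        throws InterruptedException, ExecutionException {
    ExecutorService executor = Executors.newFixedThreadPool(5);
    CompletionService<User> cs = new ExecutorCompletionService<User>(executor);
    try {
        for (UserTable1 eligibleUser : eligibleUsers) {
            final int userId = eligibleUser.getUserId();
            cs.submit(new Callable<User>() {
                public User call() {
                    return loadUser(userId); // the per-user db lookup and mapping
                }
            });
        }
        Map<Integer, User> userIdMap = new ConcurrentHashMap<Integer, User>();
        for (int i = 0; i < eligibleUsers.size(); i++) {
            User user = cs.take().get(); // collect results as they complete
            if (user != null) {          // a null result means "skip this user"
                userIdMap.put(user.getUserId(), user);
            }
        }
        return userIdMap;
    } finally {
        executor.shutdown();
    }
}

Returning null from the Callable and filtering the nulls when collecting is also the usual substitute for the continue statement you cannot use inside call().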
You could try something like this:
List<String> periods = db.getPeriods(todayDate);
Map<String, Map<Integer, User>> hm = new ConcurrentHashMap<>(); // written to from parallel threads
periods.parallelStream().forEach(s -> {
    List<UserTable1> eligibleUsers = db.findAllUsers(s);
    hm.put(s, eligibleUsers.parallelStream()
            .collect(Collectors.toMap(UserTable1::getUserId,
                                      u -> createUserForId(u.getUserId()))));
});
And in createUserForId you do your db reading:
private User createUserForId(Integer userId) {
    List<EmployeeTable2> empData = db.findByUserId(userId);
    EmployeeTable2 emp = empData.get(0); // handle the "no record found" case as you need
    User user = new User();
    user.setUserId(userId);
    user.setFullName(emp.getFullName());
    return user;
}
This seems like a very strange problem. I'm stress testing my neo4j graph database, and so one of my tests requires creating a lot of users (in this specific test, 1000). So the code for that is as follows,
// Creates n users and measures the time taken to add one more
n = 1000;
tx = graphDb.beginTx();
try {
    for (int i = 0; i < n; i++) {
        dataService.createUser(BigInteger.valueOf(i));
    }
    start = System.nanoTime();
    dataService.createUser(BigInteger.valueOf(n));
    end = System.nanoTime();
    time = end - start;
    System.out.println("The time taken for createUser with " + n + " users is " + time + " nanoseconds.");
    tx.success();
}
finally {
    tx.finish();
}
And the code for dataService.createUser() is,
public User createUser(BigInteger identifier) throws ExistsException {
    // Verify that user doesn't already exist.
    if (this.nodeIndex.get(UserWrapper.KEY_IDENTIFIER, identifier)
            .getSingle() != null) {
        throw new ExistsException("User with identifier '"
                + identifier.toString() + "' already exists.");
    }
    // Create new user.
    final Node userNode = graphDb.createNode();
    final User user = new UserWrapper(userNode);
    user.setIdentifier(identifier);
    userParent.getNode().createRelationshipTo(userNode, NodeRelationships.PARENT);
    return user;
}
Now I need to call dataService.getUser() after I've made these Users. The code for getUser() is as follows,
public User getUser(BigInteger identifier) throws DoesNotExistException {
    // Search for the user.
    Node userNode = this.nodeIndex.get(UserWrapper.KEY_IDENTIFIER,
            identifier).getSingle();
    // Return the wrapped user, if found.
    if (userNode != null) {
        return new UserWrapper(userNode);
    } else {
        throw new DoesNotExistException("User with identifier '"
                + identifier.toString() + "' was not found.");
    }
}
So everything is going fine until I create the 129th user. I'm following along in the debugger and watching the value of dataService.getUser(BigInteger.valueOf(1)) which is the second node, dataService.getUser(BigInteger.valueOf(127)) which is the 128th node, and dataService.getUser(BigInteger.valueOf(i-1)) which is the last node created. And the debugger is telling me that after node 128 is created, node 129 and above aren't created because getUser() throws a DoesNotExistException for those nodes, but still gives values for node 2 and node 128.
The user id I'm passing to createUser() is autoindexed.
Any idea why it isn't making more nodes (or not indexing these nodes)?
It sounds suspiciously like a byte value conversion which flips around at 128. Could you make sure there isn't anything like that going on in your code?
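For what it's worth, here is a tiny illustration of the kind of conversion being hinted at (not taken from the question's code): a signed byte can only hold -128..127, so 128 wraps around to -128.

public class ByteWrapDemo {
    public static void main(String[] args) {
        for (int i = 126; i <= 130; i++) {
            byte b = (byte) i; // narrowing conversion truncates to 8 signed bits
            System.out.println(i + " -> " + b);
        }
        // prints: 126 -> 126, 127 -> 127, 128 -> -128, 129 -> -127, 130 -> -126
    }
}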