Data commit issue in multithreading - java

I am new to Java and Hibernate.
I have implemented functionality that generates request numbers based on the already saved request numbers. This is done by finding the maximum request number, incrementing it by 1, and then saving it back to the database.
However, I am facing issues with multithreading. When two threads access my code at the same time, both generate the same request number. My code is already synchronized. Please suggest a solution.
synchronized (this.getClass()) {
    System.out.println("start");
    certRequest.setRequestNbr(generateRequestNumber(certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
    reqId = Utils.getUniqueId();
    certRequest.setRequestId(reqId);
    ItemIdInfo itemIdInfo = new ItemIdInfo();
    itemIdInfo.setInsurerId(certRequest.getRequestId());
    certRequest.setItemIdInfo(itemIdInfo);
    dao.insert(certRequest);
    addAccountRel();
    System.out.println("end");
}
Following is the output showing my synchronization:
start
end
start
end
Is this a Hibernate issue?
Does the use of the transactional attribute in Spring affect the commit in my case?
I am using the following transactional attribute:
@Transactional(readOnly = false, propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
EDIT: code for generateRequestNumber() shown in chat room.
public String generateRequestNumber(String accNumber) throws Exception {
    String requestNumber = null;
    if (accNumber != null) {
        String SQL_QUERY = "select CERTREQUEST.requestNbr from CertRequest as CERTREQUEST, "
                + "CertActObjRel as certActObjRel where certActObjRel.certificateObjkeyId=CERTREQUEST.requestId "
                + " and certActObjRel.certObjTypeCd=:certObjTypeCd "
                + " and certActObjRel.certAccountId=:accNumber ";
        String[] parameterNames = {"certObjTypeCd", "accNumber"};
        Object[] parameterValues = new Object[]
        {
            Constants.REQUEST_RELATION_CODE, accNumber
        };
        List<?> resultSet = dao.executeNamedQuery(SQL_QUERY,
                parameterNames, parameterValues);
        // List<?> resultSet = dao.retrieveTableData(SQL_QUERY);
        if (resultSet != null && resultSet.size() > 0) {
            requestNumber = (String) resultSet.get(0);
        }
        int maxRequestNumber = -1;
        if (requestNumber != null && requestNumber.length() > 0) {
            maxRequestNumber = maxValue(resultSet.toArray());
            requestNumber = Integer.toString(maxRequestNumber + 1);
        } else {
            requestNumber = Integer.toString(1);
        }
        System.out.println("inside function request number" + requestNumber);
        return requestNumber;
    }
    return null;
}

Don't synchronize on the Class instance obtained via getClass(). It can have some strange side effects. See https://www.securecoding.cert.org/confluence/pages/viewpage.action?pageId=43647087
For example, use:
synchronized (this) {
    // synchronized code
}
or
private synchronized void myMethod() {
    // synchronized code
}
to synchronize on the object instance.
Or do:
private static final Object lock = new Object();

private void myMethod() {
    synchronized (lock) {
        // synchronized code
    }
}
Like @diwakar suggested. This uses a constant field to synchronize on, to guarantee that this code always synchronizes on the same lock.
EDIT: Based on information from chat, you are using a SELECT to get the maximum requestNumber and increasing the value in your code. Then this value is set on the CertRequest, which is then persisted in the database via a DAO. If this persist action is not committed (e.g. by making the method @Transactional or by some other means), then another thread will still see the old requestNumber value. So you could solve this by making the code transactional (how depends on which frameworks you use, etc.). But I agree with @VA31's answer, which states that you should use a database sequence for this instead of incrementing the value in code. Instead of a sequence you could also consider using an auto-increment field in CertRequest, something like:
@GeneratedValue(strategy=GenerationType.AUTO)
private int requestNumber;
For getting the next value from a sequence you can look at this question.
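If you go the sequence route, a rough sketch of fetching the next value through a plain Hibernate session (CERT_REQUEST_SEQ is a hypothetical sequence name, and the exact SQL for reading a sequence varies by database; Oracle, for instance, uses CERT_REQUEST_SEQ.nextval):
// Sketch only: a database sequence hands out unique numbers across threads
// and JVMs, so no synchronized block is needed around this call.
public String generateRequestNumber() {
    Number next = (Number) session
            .createSQLQuery("select next value for CERT_REQUEST_SEQ")
            .uniqueResult();
    return next.toString();
}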

You mentioned this information in your question.
I have implemented functionality that generates request numbers based on the already saved request numbers. This is done by finding the maximum request number, incrementing it by 1, and then saving it back to the database.
At first look, the problem seems to be caused by running on multiple app servers. Threads are synchronized inside one JVM (app server). If you are using more than one app server, you have to do it differently, using a more robust approach such as server-to-server communication or batch allocation of request numbers to each app server.
But if you are using only one app server and multiple threads access the same code, then you can put a lock on an instance of the class rather than the class itself.
synchronized (this) {
    lastName = name;
    nameCount++;
}
Or you can use a lock private to the class instance:
private Object lock = new Object();
// ...
synchronized (lock) {
    System.out.println("start");
    certRequest.setRequestNbr(generateRequestNumber(certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
    reqId = Utils.getUniqueId();
    certRequest.setRequestId(reqId);
    ItemIdInfo itemIdInfo = new ItemIdInfo();
    itemIdInfo.setInsurerId(certRequest.getRequestId());
    certRequest.setItemIdInfo(itemIdInfo);
    dao.insert(certRequest);
    addAccountRel();
    System.out.println("end");
}
But make sure that the database is updated with the new sequence number before the next thread accesses it to get a new one.

It is good practice to generate the request number (unique id) by using a database sequence, so that you don't need to synchronize your Service/DAO methods.

First thing:
Why are you acquiring the lock inside the method? It is not required here.
Also, can you try it like this once:
final static Object lock = new Object();

synchronized (lock)
{
    .....
}
What I suspect is that the object you are synchronizing on is different in each case, so try this once.


Can ChronicleQueue tailers for two different queues be interleaved?

I have two separate ChronicleQueues that were created by independent threads that monitor web socket streams in a Java application. When I read each queue independently in a separate single-thread program, I can traverse each entire queue as expected - using the following minimal code:
final ExcerptTailer queue1Tailer = queue1.createTailer();
final ExcerptTailer queue2Tailer = queue2.createTailer();
while (true)
{
    try( final DocumentContext context = queue1Tailer.readingDocument() )
    {
        if ( isNull(context.wire()) )
            break;
        counter1++;
        queue1Data = context.wire()
                            .bytes()
                            .readObject(Queue1Data.class);
        queue1Writer.write(String.format("%d\t%d\t%d%n", counter1, queue1Data.getEventTime(), queue1Data.getEventContent()));
    }
}
while (true)
{
    try( final DocumentContext context = queue2Tailer.readingDocument() )
    {
        if ( isNull(context.wire()) )
            break;
        counter2++;
        queue2Data = context.wire()
                            .bytes()
                            .readObject(Queue2Data.class);
        queue2Writer.write(String.format("%d\t%d\t%d%n", counter2, queue2Data.getEventTime(), queue2Data.getEventContent()));
    }
}
In the above, I am able to read all the Queue1Data objects, then all the Queue2Data objects, and access values as expected. However, when I try to interleave reading the queues (read an object from one queue, then, based on a property of the Queue1Data object (a time stamp), read Queue2Data objects until the first object after the time stamp (the limit variable below) of the active Queue1Data object is found, then do something with it), after only one object from the queue2Tailer is read, an exception is thrown: DecoratedBufferUnderflowException: readCheckOffset0 failed. The simplified code that fails is below (I have tried putting the outer while(true) loop both inside and outside the queue2Tailer try block):
final ExcerptTailer queue1Tailer = queue1Queue.createTailer("label1");
try( final DocumentContext queue1Context = queue1Tailer.readingDocument() )
{
final ExcerptTailer queue2Tailer = queue2Queue.createTailer("label2");
while (true)
{
try( final DocumentContext queue2Context = queue2Tailer.readingDocument() )
{
if ( isNull(queue2Context.wire()) )
{
terminate = true;
break;
}
queue2Data = queue2Context.wire()
.bytes()
.readObject(Queue2Data.class);
while(true)
{
queue1Data = queue1Context.wire()
.bytes()
.readObject(Queue1Data.class); // first read succeeds
if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues
{ // but the second read fails
// cache a value
break;
}
}
// continue working with queu2Data object and cached values
} // end try block for queue2 tailer
} // end outer while loop
} // end outer try block for queue1 tailer
I have tried as above, and also with both tailers created at the beginning of the function which does the processing (a private function executed when a button is clicked in a relatively simple Java application). Basically, I took the loop which worked independently and put it inside another loop in the function, expecting no problems. I think I am missing something crucial in how tailers are positioned and used to read objects, but I cannot figure out what it is, since the same basic code works when reading the queues independently. The use of isNull(context.wire()) to determine when there are no more objects in a queue I got from one of the examples, though I am not sure this is the proper way to determine when there are no more objects in a queue when processing it sequentially.
Any suggestions would be appreciated.
You're not reading it correctly in the first instance.
Now, there's a hardcore way of achieving what you are trying to achieve (that is, doing everything explicitly, at a lower level), and there's the MethodReader/MethodWriter magic provided by Chronicle.
Hardcore way
Writing
// write first event type
try (DocumentContext dc = queueAppender.writingDocument()) {
    dc.wire().writeEventName("first").text("Hello first");
}
// write second event type
try (DocumentContext dc = queueAppender.writingDocument()) {
    dc.wire().writeEventName("second").text("Hello second");
}
This will write different types of messages into the same queue, and you will be able to easily distinguish those when reading.
Reading
StringBuilder reusable = new StringBuilder();
while (true) {
    try (DocumentContext dc = tailer.readingDocument()) {
        if (!dc.isPresent()) {
            continue;
        }
        dc.wire().readEventName(reusable);
        if ("first".contentEquals(reusable)) {
            // handle first
        } else if ("second".contentEquals(reusable)) {
            // handle second
        }
        // optionally handle other events
    }
}
The Chronicle Way (aka Peter's magic)
This works with any marshallable types, as well as any primitive types, CharSequence subclasses (i.e. Strings), and Bytes. For more details, have a read of the MethodReader/MethodWriter documentation.
Suppose you have some data classes:
public class FirstDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
    // data fields...
}
public class SecondDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
    // data fields...
}
Then, to write those data classes to the queue, you just need to define the interface, like this:
interface EventHandler {
    void first(FirstDataType first);
    void second(SecondDataType second);
}
Writing
Then, writing data is as simple as:
final EventHandler writer = appender.methodWriterBuilder(EventHandler.class).get();
// assuming firstDatum and secondDatum are created earlier
writer.first(firstDatum);
writer.second(secondDatum);
What this does is the same as in the hardcore section: it writes the event name (taken from the method name in the method writer, i.e. "first" or "second" correspondingly) and then the actual data object.
Reading
Now, to read those events from the queue, you need to provide an implementation of the above interface, that will handle corresponding event types, e.g.:
// you implement this to read data from the queue
private class MyEventHandler implements EventHandler {
    public void first(FirstDataType first) {
        // handle first type of events
    }
    public void second(SecondDataType second) {
        // handle second type of events
    }
}
And then you read as follows:
EventHandler handler = new MyEventHandler();
MethodReader reader = tailer.methodReader(handler);
while (true) {
    reader.readOne(); // readOne returns a boolean which can be used to determine if there's no more data, and pause if appropriate
}
Misc
You don't have to use the same interface for reading and writing. In case you want to read only events of the second type, you can define another interface:
interface OnlySecond {
    void second(SecondDataType second);
}
Now, if you create a handler implementing this interface and give it to the tailer#methodReader() call, the readOne() calls will only process events of the second type while skipping all others.
This also works for MethodWriters, i.e. if you have several processes writing different types of data and one process consuming all that data, it is not uncommon to define multiple interfaces for writing data and then a single interface extending all the others for reading, e.g.:
interface FirstOut {
    void first(String first);
}
interface SecondOut {
    void second(long second);
}
interface ThirdOut {
    void third(ThirdDataType third);
}
interface AllIn extends FirstOut, SecondOut, ThirdOut {
}
(I deliberately used different data types for method parameters to show how it is possible to use various types)
With further testing, I have found that nested loops reading multiple queues containing data in different POJO classes are possible. The problem with the code in the question above is that queue1Context is obtained once, OUTSIDE the loop that I expected to read Queue1Data objects. My fundamental misconception was that DocumentContext objects manage stepping through the objects in a queue, whereas actually ExcerptTailer objects manage stepping (maintaining indices) when reading a queue sequentially.
In case it might help someone else just getting started with ChronicleQueues, the inner loop in the original question should be:
while(true)
{
    try (final DocumentContext queue1Context = queue1Tailer.readingDocument() )
    {
        queue1Data = queue1Context.wire()
                                  .bytes()
                                  .readObject(Queue1Data.class); // first read succeeds
        if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues as expected
        {                                       // and second and subsequent reads now succeed
            // cache a value
            break;
        }
    }
}
And of course the outer-most try block containing queue1Context (in the original code) should be removed.
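Putting that together, the interleaved read could be structured roughly like this (a sketch reusing the names from the question; the caching logic stays elided as before):
// Sketch: each readingDocument() call comes from a tailer inside its loop,
// so the ExcerptTailer (not the DocumentContext) advances through the queue.
final ExcerptTailer queue1Tailer = queue1Queue.createTailer("label1");
final ExcerptTailer queue2Tailer = queue2Queue.createTailer("label2");
while (true)
{
    try (final DocumentContext queue2Context = queue2Tailer.readingDocument())
    {
        if (isNull(queue2Context.wire()))
            break; // queue2 exhausted
        queue2Data = queue2Context.wire()
                                  .bytes()
                                  .readObject(Queue2Data.class);
        while (true)
        {
            try (final DocumentContext queue1Context = queue1Tailer.readingDocument())
            {
                if (isNull(queue1Context.wire()))
                    break; // queue1 exhausted
                queue1Data = queue1Context.wire()
                                          .bytes()
                                          .readObject(Queue1Data.class);
                if (queue1Data.getFieldValue() > limit)
                {
                    // cache a value
                    break;
                }
            }
        }
        // continue working with queue2Data and cached values
    }
}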

Spring Batch Processor

I have a requirement in Spring Batch where I have a file with thousands of records coming in sorted order. The key field is product code.
The file may have multiple records with the same product code. The requirement is that I have to group the records that have the same
product code into a collection (i.e. a List) and then send them over to a method, validateProductCodes(List prodCodeList).
I am looking for the best way to do this. The approach I thought of was to read every record in the processor and then build a collection
of records with the same product code there. If at any point the product code in a record differs, that would imply that
the product-code grouping is complete and validateProductCodes() can be called for that group of records. Also, I am using a Step. Does
that automatically mean that the process is multithreaded, i.e. that groups of records with the same product code will be processed in a multithreaded way? Please advise.
Thanks
There are two questions in your question: first, you want to know how to group the items together, and second, how they are processed.
In order to group them, you could create a group reader as Luca suggested, or something like this:
public class GroupReader<I> implements ItemReader<List<I>>, InitializingBean {

    private SingleItemPeekableItemReader<I> reader;
    private ItemReader<I> peekReaderDelegate;

    private enum State { NEW, READING, COMPLETE }

    public void setReader(ItemReader<I> reader) {
        peekReaderDelegate = reader;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        Assert.notNull(peekReaderDelegate, "The 'itemReader' may not be null");
        this.reader = new SingleItemPeekableItemReader<I>();
        this.reader.setDelegate(peekReaderDelegate);
    }

    @Override
    public List<I> read() throws Exception {
        State state = State.NEW;
        List<I> group = null;
        I item = null;
        while (state != State.COMPLETE) {
            item = reader.read();
            switch (state) {
                case NEW: {
                    if (item == null) {
                        // end reached
                        state = State.COMPLETE;
                        break;
                    }
                    group = new ArrayList<I>();
                    group.add(item);
                    state = State.READING;
                    // isItAKeyChange compares the grouping key of two consecutive
                    // items (implementation depends on your record type)
                    I nextItem = reader.peek();
                    if (isItAKeyChange(item, nextItem)) {
                        state = State.COMPLETE;
                    }
                    break;
                }
                case READING: {
                    group.add(item);
                    // peek and check if the peeked entry has a new key
                    I nextItem = reader.peek();
                    if (isItAKeyChange(item, nextItem)) {
                        state = State.COMPLETE;
                    }
                    break;
                }
                default: {
                    throw new IllegalStateException("Reader is in an invalid state");
                }
            }
        }
        return group;
    }
}
For every key, this reader will return a list with all elements matching that key. The grouping is therefore done directly in the reader.
You cannot do that with a processor as you described; with the grouped reader, the processor receives one complete group per call, as sketched below.
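A sketch of what the corresponding processor could look like (Record stands in for your record type, and validateProductCodes is the method from your question):
// Sketch: the reader above emits complete groups, so the processor
// can validate each product-code group in a single call.
public class ProductGroupProcessor implements ItemProcessor<List<Record>, List<Record>> {

    @Override
    public List<Record> process(List<Record> group) throws Exception {
        validateProductCodes(group); // your validation method
        return group;
    }
}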
Your second question is about multithreading.
Using a step does not necessarily mean that the step is processed with several threads.
In order to do that, you need to set an AsyncTaskExecutor, and you have to set the throttle limit.
But if you do that, your reader must be thread-safe, or otherwise your grouping won't work. You could do that by simply defining the read method above as synchronized.
Another way could be to write a small SynchronizedWrapperReader, as suggested in this question: Parellel Processing Spring Batch StaxEventItemReader. A minimal sketch of such a wrapper follows below.
Please note that, depending on the target you are writing to, you probably also have to synchronize the writer, and if necessary reorder the result.
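To illustrate, a minimal sketch of such a synchronized wrapper (the delegate is whatever reader you configure underneath):
// Sketch: serializes read() calls so each group is assembled by one thread at a time.
public class SynchronizedItemReader<T> implements ItemReader<T> {

    private final ItemReader<T> delegate;

    public SynchronizedItemReader(ItemReader<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public synchronized T read() throws Exception {
        return delegate.read();
    }
}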

Making massive amounts of individual row updates faster or more efficient

I'm writing a java application that copies one database's information (DB2) to another database (SQL Server). The order of operations is very simple:
Check to see if anything has been updated in a certain time frame
Grab everything from the first database that is within the designated time frame
Map database information to POJOs
Divide subsets of POJOs into threads (predefined number in a properties file)
Threads cycle through each POJO Individually
Update the second database
I have everything working just fine, but at certain times of the day there is a huge jump in the number of updates that need to take place (it can get into the hundreds of thousands).
Below you can see a generic version of my code. It follows the basic algorithm of the application. Object is generic; the actual application has 5 different types of specified objects, each with its own updater thread class. But the generic functions below are exactly what they all look like. And in the updateDatabase() method, they all get added to threads and run at the same time.
private void updateDatabase()
{
    List<Thread> threads = new ArrayList<>();
    addObjectThreads( threads );
    startThreads( threads );
    joinAllThreads( threads );
}

private void addObjectThreads( List<Thread> threads )
{
    List<Object> objects = getTransformService().getObjects();
    logger.info( "Found " + objects.size() + " Objects" );
    createThreads( threads, objects, ObjectUpdaterThread.class );
}

private void createThreads( List<Thread> threads, List<?> objects, Class threadClass )
{
    final int BASE_OBJECT_LOAD = 1;
    int objectLoad = objects.size() / Database.getMaxThreads() > 0 ? objects.size() / Database.getMaxThreads() + BASE_OBJECT_LOAD : BASE_OBJECT_LOAD;
    for (int i = 0; i < (objects.size() / objectLoad); ++i)
    {
        int startIndex = i * objectLoad;
        int endIndex = (i + 1) * objectLoad;
        try
        {
            List<?> objectSubList = objects.subList( startIndex, endIndex > objects.size() ? objects.size() : endIndex );
            threads.add( new Thread( (Thread) threadClass.getConstructor( List.class ).newInstance( objectSubList ) ) );
        }
        catch (Exception exception)
        {
            logger.error( exception.getMessage() );
        }
    }
}

public class ObjectUpdaterThread extends BaseUpdaterThread
{
    private List<Object> objects;
    final private Logger logger = Logger.getLogger( ObjectUpdaterThread.class );

    public ObjectUpdaterThread( List<Object> objects )
    {
        this.objects = objects;
    }

    public void run()
    {
        for (Object object : objects)
        {
            logger.info( "Now Updating Object: " + object.getId() );
            getTransformService().updateObject( object );
        }
    }
}
All of these go to a Spring service that looks like the code below. Again, it's generic, but each type of object has exactly the same kind of logic. The getObjects() calls from the code above are just one-line pass-throughs to the DAO, so no need to post them.
@Service
@Scope(value = "prototype")
public class TransformServiceImpl implements TransformService
{
    final private Logger logger = Logger.getLogger( TransformServiceImpl.class );

    @Autowired
    private TransformDao transformDao;

    @Override
    public void updateObject( Object object )
    {
        String sql;
        if ( object.exists() )
        {
            sql = Object.Mapper.UPDATE;
        }
        else
        {
            sql = Object.Mapper.INSERT;
        }
        boolean isCompleted = false;
        while ( !isCompleted )
        {
            try
            {
                transformDao.updateObject( object, sql );
                isCompleted = true;
            }
            catch (Exception exception)
            {
                logger.error( exception.getMessage() );
                threadSleep();
                logger.info( "Now retrying update for Object: " + object.getId() );
            }
        }
        logger.info( "Updated Object: " + object.getId() );
    }
}
Finally, these all go to the DAO, which looks like this:
@Repository
@Scope(value = "prototype")
public class TransformDaoImpl implements TransformDao
{
    //@Resource is like @Autowired but with the added option of being able to specify the name
    //Good for autowiring two different instances of the same class [NamedParameterJdbcTemplate]
    //Another alternative = @Autowired @Qualifier(BEAN_NAME)
    @Resource(name = "db2")
    private NamedParameterJdbcTemplate db2;

    @Resource(name = "sqlServer")
    private NamedParameterJdbcTemplate sqlServer;

    final private Logger logger = Logger.getLogger( TransformerImpl.class );

    @Override
    public void updateObject( Object object, String sql )
    {
        MapSqlParameterSource source = new MapSqlParameterSource();
        source.addValue( "column1_value", object.getColumn1Value() );
        //put all source values from the POJO in just like above
        sqlServer.update( sql, source );
    }
}
My insert statements look like this:
"INSERT INTO dbo.OBJECT_TABLE " +
"(COLUMN1, COLUMN2...) " +
"VALUES(:column1_value, :column2_value... "
And my update statements look like this:
"UPDATE dbo.OBJECT_TABLE SET " +
"COLUMN1 = :column1_value, COLUMN2 = :column2_value, " +
"WHERE PRIMARY_KEY_COLUMN = :primary_key_value"
It's a lot of code and stuff, I know, but I just wanted to lay out everything I have in the hope that I can get help making this faster or more efficient. It takes hours upon hours to update so many rows, and it would be nice if it only took a couple/few hours instead. Thanks for any help. I welcome all learning experiences about Spring, threads and databases.
If you're sending large amounts of SQL to the server, you should consider batching it using the Statement.addBatch and Statement.executeBatch methods. The batches are finite in size (I always limited mine to 64K of SQL), but they dramatically lower the round trips to the database.
As I was iterating and creating SQL, I would keep track of how much I had batched already; when the SQL crossed the 64K boundary, I'd fire off an executeBatch and start a fresh one.
You may want to experiment with the 64K number; it may have been an Oracle limitation, which I was using at the time.
I can't speak to Spring, but batching is a part of the JDBC Statement. I'm sure it's straightforward to get to it.
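Since you're already on NamedParameterJdbcTemplate, a hedged sketch of what batching could look like in your DAO (the updateObjects method and the chunking are illustrative, not part of your code):
// Sketch: one batched round trip per chunk instead of one update() call per row.
public void updateObjects( List<Object> objects, String sql )
{
    List<MapSqlParameterSource> sources = new ArrayList<>();
    for (Object object : objects)
    {
        MapSqlParameterSource source = new MapSqlParameterSource();
        source.addValue( "column1_value", object.getColumn1Value() );
        //put all source values from the POJO in just like above
        sources.add( source );
    }
    sqlServer.batchUpdate( sql, sources.toArray( new SqlParameterSource[0] ) );
}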
Check to see if anything has been updated in a certain time frame
Grab everything from the first database that is within the designated time frame
Is there an index on the LAST_UPDATED_DATE column (or whatever you're using) in the source table? Rather than put the burden on your application, if it's within your control, why not write some triggers in the source database that create entries in an "update log" table? That way, all that your app would need to do is consume and execute those entries.
How are you managing your transactions? If you're creating a new transaction for each operation it's going to be brutally slow.
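For example, one commit per chunk instead of one per row, sketched here with Spring's TransactionTemplate (the template bean and the updateChunk method are illustrative):
// Sketch: all updates in a chunk share one transaction and one commit.
@Autowired
private TransactionTemplate transactionTemplate;

public void updateChunk( List<Object> chunk, String sql )
{
    transactionTemplate.execute( status -> {
        for (Object object : chunk)
        {
            transformDao.updateObject( object, sql );
        }
        return null;
    } );
}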
Regarding the threading code, have you considered using something more standard rather than writing your own? What you have is a pretty typical producer/consumer and Java has excellent support for that type of thing with ThreadPoolExecutor and numerous queue implementations to move data between threads that perform different tasks.
The benefit of using something off the shelf is that 1) it's well tested and 2) there are numerous tuning options and sizing strategies that you can adjust to increase performance.
Also, rather than use 5 different thread types for each type of object that needs to be processed, have you considered encapsulating the processing logic for each type into separate strategy classes? That way, you could use a single pool of worker threads (which would be easier to size and tune).
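As a rough sketch of that suggestion (pool size and task shape are illustrative; this would replace the manual slicing in createThreads()):
// Sketch: a fixed thread pool plus one Callable per object replaces the
// hand-rolled thread lists; invokeAll() blocks until every task finishes.
private void updateDatabase( List<Object> objects ) throws InterruptedException
{
    ExecutorService pool = Executors.newFixedThreadPool( Database.getMaxThreads() );
    List<Callable<Void>> tasks = new ArrayList<>();
    for (Object object : objects)
    {
        tasks.add( () -> {
            getTransformService().updateObject( object );
            return null;
        } );
    }
    pool.invokeAll( tasks ); // equivalent to start + join on every thread
    pool.shutdown();
}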

java synchronized method fails in servlet

I have a java servlet which interacts with Hibernate. It is necessary to generate a check id on the system, thus:
for (long j = 0; j < soldProductQuantity.longValue(); j++) {
    checkId = Hotel.getNextMcsCheckAndIncrement();
    checkIdString = checkId.toString();
    McsCheck.generateCheck(checkIdString);
}
where getNextMcsCheckAndIncrement() is defined as
static public synchronized Long getNextMcsCheckAndIncrement()
It pulls a value from the database using hibernate, does some operations on it, stores the modified value back, then returns the number.
Because getNextMcsCheckAndIncrement is synchronized, I would expect no two checks to have the same number, because no two threads can enter that method at the same time.
Yet I can see repeated check ids in my data. So clearly this isn't working. What am I missing?
The implementation of getNext, as asked:
// Increment FIRST, then return the resulting value as the current MCS check value.
static public synchronized Long getNextMcsCheckAndIncrement() {
    Hotel theHotel = null;
    Long checkCounter;
    @SuppressWarnings("unchecked")
    List<Hotel> hotelList = Hotel.returnAllObjects();
    for (Hotel currentHotel : hotelList) { // there should be only one.
        theHotel = currentHotel;
    }
    checkCounter = theHotel.getMcsCheckCounter() + 1;
    theHotel.setMcsCheckCounter(checkCounter);
    theHotel.update();
    return checkCounter;
}

static public List returnAllObjects() {
    return Hotel.query("from Hotel");
}

static public List query(String queryString) {
    Session session = HibernateUtil.getSessionFactory().openSession();
    List result = session.createQuery(queryString).list();
    session.close();
    return result;
}

public void update() {
    Session session = HibernateUtil.getSessionFactory().openSession();
    Transaction transaction = session.beginTransaction();
    session.update(this);
    transaction.commit();
    session.close();
}
Yes, I know it's not the best way to do it, and I'll solve that in time. But the immediate issue is why concurrency fails.
Anonymous in the comments gave the correct answer: the problem must be with how the Hotel object is stored and read through Hibernate, not with the synchronized method. Although the counter method is correctly synchronized, if the Hotel object is modified outside of this algorithm, those accesses will NOT be synchronized.
The correct answer, therefore, is to closely check all database accesses to the Hotel object and ensure that the object is not modified elsewhere while this loop is in progress, or to refactor the counter out of the Hotel object into dedicated storage.
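One way to make the increment itself safe at the database level, regardless of what other code touches the Hotel row, is to read, increment and commit under a single session with a pessimistic row lock, so concurrent callers queue in the database rather than relying on JVM-level synchronization. A sketch following the Hibernate patterns already shown (assumes a Hibernate version that supports LockMode.PESSIMISTIC_WRITE):
// Sketch: the row lock is held from the read until the commit,
// so no two callers can observe the same counter value.
static public Long getNextMcsCheckAndIncrement() {
    Session session = HibernateUtil.getSessionFactory().openSession();
    Transaction transaction = session.beginTransaction();
    try {
        Hotel theHotel = (Hotel) session.createQuery("from Hotel h")
                .setLockMode("h", LockMode.PESSIMISTIC_WRITE)
                .uniqueResult(); // there should be only one
        Long checkCounter = theHotel.getMcsCheckCounter() + 1;
        theHotel.setMcsCheckCounter(checkCounter);
        transaction.commit(); // flushes the managed entity; no explicit update() needed
        return checkCounter;
    } catch (RuntimeException e) {
        transaction.rollback();
        throw e;
    } finally {
        session.close();
    }
}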

ThreadLocal value access across different threads

Given that a ThreadLocal variable holds different values for different threads, is it possible to access the value of one ThreadLocal variable from another thread?
I.e. in the example code below, is it possible in t1 to read the value of TLocWrapper.tlint from t2?
public class Example
{
    public static void main (String[] args)
    {
        Tex t1 = new Tex("t1"), t2 = new Tex("t2");
        new Thread(t1).start();
        try
        {
            Thread.sleep(100);
        }
        catch (InterruptedException e)
        {}
        new Thread(t2).start();
        try
        {
            Thread.sleep(1000);
        }
        catch (InterruptedException e)
        {}
        t1.kill = true;
        t2.kill = true;
    }

    private static class Tex implements Runnable
    {
        final String name;
        Tex (String name)
        {
            this.name = name;
        }
        public boolean kill = false;

        public void run ()
        {
            TLocWrapper.get().tlint.set(System.currentTimeMillis());
            while (!kill)
            {
                // read value of tlint from TLocWrapper
                System.out.println(name + ": " + TLocWrapper.get().tlint.get());
            }
        }
    }
}

class TLocWrapper
{
    public ThreadLocal<Long> tlint = new ThreadLocal<Long>();
    static final TLocWrapper self = new TLocWrapper();

    static TLocWrapper get ()
    {
        return self;
    }

    private TLocWrapper () {}
}
As Peter says, this isn't possible. If you want this sort of functionality, then conceptually what you really want is just a standard Map<Thread, Long> - where most operations will be done with a key of Thread.currentThread(), but you can pass in other threads if you wish.
However, this likely isn't a great idea. For one, holding a reference to moribund threads is going to mess up GC, so you'd have to go through the extra hoop of making the key type WeakReference<Thread> instead. And I'm not convinced that a Thread is a great Map key anyway.
So once you go beyond the convenience of the baked-in ThreadLocal, perhaps it's worth questioning whether using a Thread object as the key is the best option. It might be better to give each thread a unique ID (a String or an int, if they don't already have natural keys that make more sense) and simply use these to key the map off. I realise your example is contrived, but you could do the same thing with a Map<String, Long> and the keys "t1" and "t2".
It would also arguably be clearer, since a Map represents how you're actually using the data structure; ThreadLocals are more like scalar variables with a bit of access-control magic than a collection, so even if it were possible to use them as you want, it would likely be more confusing for other people looking at your code.
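A sketch of that idea (the map and the keys are illustrative):
// Sketch: a concurrent map keyed by thread name, readable from any thread.
final ConcurrentMap<String, Long> values = new ConcurrentHashMap<String, Long>();

// each worker publishes under its own key...
values.put("t2", System.currentTimeMillis());

// ...and any other thread can read it
Long t2Value = values.get("t2");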
Based on the answer of Andrzej Doyle, here is a full working solution:
ThreadLocal<String> threadLocal = new ThreadLocal<String>();
threadLocal.set("Test"); // do this in otherThread
Thread otherThread = Thread.currentThread(); // get a reference to the otherThread somehow (this is just for demo)
Field field = Thread.class.getDeclaredField("threadLocals");
field.setAccessible(true);
Object map = field.get(otherThread);
Method method = Class.forName("java.lang.ThreadLocal$ThreadLocalMap").getDeclaredMethod("getEntry", ThreadLocal.class);
method.setAccessible(true);
WeakReference entry = (WeakReference) method.invoke(map, threadLocal);
Field valueField = Class.forName("java.lang.ThreadLocal$ThreadLocalMap$Entry").getDeclaredField("value");
valueField.setAccessible(true);
Object value = valueField.get(entry);
System.out.println("value: " + value); // prints: "value: Test"
All the previous comments still apply of course - it's not safe!
But for debugging purposes it might be just what you need - I use it that way.
I wanted to see what was in ThreadLocal storage, so I extended the above example to show me. It's also handy for debugging.
Field field = Thread.class.getDeclaredField("threadLocals");
field.setAccessible(true);
Object map = field.get(Thread.currentThread());
Field table = Class.forName("java.lang.ThreadLocal$ThreadLocalMap").getDeclaredField("table");
table.setAccessible(true);
Object tbl = table.get(map);
int length = Array.getLength(tbl);
for(int i = 0; i < length; i++) {
    Object entry = Array.get(tbl, i);
    Object value = null;
    String valueClass = null;
    if(entry != null) {
        Field valueField = Class.forName("java.lang.ThreadLocal$ThreadLocalMap$Entry").getDeclaredField("value");
        valueField.setAccessible(true);
        value = valueField.get(entry);
        if(value != null) {
            valueClass = value.getClass().getName();
        }
        Logger.getRootLogger().info("[" + i + "] type[" + valueClass + "] " + value);
    }
}
It is only possible if you place the same value in a field which is not ThreadLocal and access that instead. A ThreadLocal by definition is local only to that thread.
The ThreadLocalMap CAN be accessed via reflection: Thread.class.getDeclaredField("threadLocals"), setAccessible(true), and so on.
Do not do that, though. The map is expected to be accessed by the owning thread only, and accessing any value of a ThreadLocal from another thread is a potential data race.
However, if you can live with the said data races, or just avoid them (a far better idea), here is the simplest solution: extend Thread and define whatever you need there, that's it:
class ThreadX extends Thread {
    int extraField1;
    String blah2; //and so on
}
That's a decent solution that doesn't rely on WeakReferences, but it requires that you create the threads yourself. You can set a field like this: ((ThreadX) Thread.currentThread()).extraField1 = 22.
Make sure you do not exhibit data races while accessing the fields, so you might need volatile, synchronized and so on.
Overall, a Map<Thread, Object> is a terrible idea: never keep references to objects you do not manage/own explicitly, especially when it comes to Thread, ThreadGroup, Class, ClassLoader... A WeakHashMap<Thread, Object> is slightly better, but you need to access it exclusively (i.e. under a lock), which might hamper performance in a heavily multithreaded environment; WeakHashMap is not the fastest thing in the world.
A ConcurrentMap<WeakReference<Thread>, Object> would be better, but you need a WeakReference implementation that has equals and hashCode...
