I am trying to solve a problem that seems quite common to me, but I could not find a good solution for it.
In a highly concurrent environment I need to release resources correctly when a client session is destroyed. Here is the input:
I use a ConcurrentHashMap to store all allocated resources; a map is required here to index the resources.
While the session is being destroyed, pending tasks sometimes allocate new resources, which I ultimately want to deallocate as well.
Here is my current solution:
while (!resourceMap.isEmpty()) {
    Map<Integer, Resource> toDestroy = new HashMap<>(resourceMap);
    for (Resource resource : toDestroy.values()) {
        resource.destroy();
    }
    resourceMap.keySet().removeAll(toDestroy.keySet());
}
This loop exists only because ConcurrentHashMap#values#iterator does not always reflect concurrent puts to resourceMap. I do not like this code and would prefer something queue-like, but unfortunately ConcurrentMap does not provide anything of the kind:
Map.Entry<Integer, Resource> entry;
while ((entry = resourceMap.removeAny()) != null) {
    entry.getValue().destroy();
}
I am looking for a solution similar to the queue-like code above, or any alternative approach to this problem.
I do not like this code and would prefer queue-like code, but unfortunately ConcurrentMap does not provide anything like this ...
I would just use an iterator but then again I'm not a Java 8 fan.
while (!resourceMap.isEmpty()) {
    Iterator<Resource> iterator = resourceMap.values().iterator();
    while (iterator.hasNext()) {
        Resource resource = iterator.next();
        iterator.remove();
        resource.destroy();
    }
}
It's important to note that there are race conditions in this model: someone could fetch a resource and start using it at the same time this thread is destroying it.
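For a more queue-like drain, a minimal alternative sketch (not from the original answer; it assumes the map is a ConcurrentHashMap<Integer, Resource>, as in the question) is to claim each entry atomically with remove(key), so a resource is destroyed exactly once even if several threads drain the map concurrently:

while (!resourceMap.isEmpty()) {
    for (Integer key : resourceMap.keySet()) {
        Resource resource = resourceMap.remove(key); // atomic claim
        if (resource != null) { // null means another thread already claimed it
            resource.destroy();
        }
    }
}

The outer loop re-checks isEmpty(), so entries put concurrently during one pass are drained on the next.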
I'm getting a java.util.ConcurrentModificationException on the line where the for-loop starts (see comment in code).
Why am I getting a ConcurrentModificationException on this unmodifiableSet?
final Set<Port> portSet = Collections.unmodifiableSet(node.getOpenPorts());
if (!portSet.isEmpty()) {
    StringBuilder tmpSb = new StringBuilder();
    for (Port pp : portSet) { // <------- exception happening here
        tmpSb.append(pp.getNum()).append(" ");
    }
}
I've never witnessed this, but I'm getting crash reports from Google.
Something must be modifying the underlying set; i.e. the set returned by node.getOpenPorts().
Instead of wrapping the set with an "unmodifiable" wrapper, you could copy it.
final Set<Port> portSet = new HashSet<>(node.getOpenPorts());
But as a commenter (#Slaw) pointed out, that just moves the iteration inside the constructor and you would still get CCMEs.
The only real solutions are:
Change the implementation of the node class to use a concurrent set class for the port list that won't throw CCMEs if the collection is mutated while you are iterating it.
Change the implementation of the node class to return a copy of the port list. Deal with the updates-while-copying race condition with some internal locking.
Put a try / catch around the code and repeat the operation if you get a CCME
I've never witnessed this, but I'm getting crash reports from Google.
Yes. The problem only occurs if this code is executed while the open port list is changing.
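As a sketch of the first option (hypothetical, since the real node class isn't shown; Port is the question's own type), a concurrent set can be iterated safely while other threads mutate it:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class Node {
    // Iterators over this set are weakly consistent: they never throw
    // ConcurrentModificationException, even if ports are added or removed
    // by another thread during iteration.
    private final Set<Port> openPorts = ConcurrentHashMap.newKeySet();

    Set<Port> getOpenPorts() {
        return openPorts;
    }
}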
How do I delete a file after serving it over HTTP?
Files.TemporaryFile file = null;
try {
    file = new Files.TemporaryFile(f);
    return ok().sendFile(file.file());
} catch (IllegalArgumentException e) {
    return badRequest(Json.newObject().put("message", e.getMessage()));
} finally {
    file.clean();
}
With this code, the file gets deleted before it is served; I receive an empty file on the client.
The Play framework in version 2.8 should support an onClose argument to the sendFile method in Java as well (so far it seems to be supported only in the Scala version).
In older versions (I have tried only 2.7.x) you may apply the same approach as in the fix for 2.8, so:
public Result doSomething() {
    final File fileToReturn = ....;
    final Source<ByteString, CompletionStage<IOResult>> source = FileIO.fromFile(fileToReturn);
    return Results.ok().streamed(wrap(source, () -> fileToReturn.delete()),
            Optional.of(fileToReturn.length()), Optional.of("content type, e.g. application/zip"));
}

private Source<ByteString, CompletionStage<IOResult>> wrap(final Source<ByteString, CompletionStage<IOResult>> source, final Runnable handler) {
    return source.mapMaterializedValue(
        action -> action.whenCompleteAsync((ioResult, exception) -> handler.run())
    );
}
From reading the JavaFileUpload documentation for 2.6.x, it sounds like you don't need that finally block to clean up the file afterwards. Since you are using a TemporaryFile, garbage collection should take care of deleting the resource:
...the idea behind TemporaryFile is that it’s only in scope at completion and should be moved out of the temporary file system as soon as possible. Any temporary files that are not moved are deleted [by the garbage collector].
The same section goes on to describe that there is a potential for files not to be garbage collected, causing denial-of-service issues. If you find that the files are not getting removed, then you can use the TemporaryFileReaper:
However, under certain conditions, garbage collection does not occur in a timely fashion. As such, there’s also a play.api.libs.Files.TemporaryFileReaper that can be enabled to delete temporary files on a scheduled basis using the Akka scheduler, distinct from the garbage collection method.
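For reference, the 2.6.x documentation shows enabling the reaper in application.conf roughly like this (treat the exact durations as illustrative, not a recommendation):

play.temporaryFile {
  reaper {
    enabled = true
    initialDelay = "5 minutes"
    interval = "30 seconds"
    olderThan = "30 minutes"
  }
}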
I am not suggesting converting the whole project, but you can use Scala for just this controller; then you can use the onClose parameter of the sendFile method. One caveat: that parameter does not work in all versions; in 2.5 there was an issue where it was not triggered (https://github.com/playframework/playframework/issues/6351).
Another way: you can use Akka Streams, as described here: https://www.playframework.com/documentation/2.6.x/JavaStream#Chunked-responses.
I have a stateful EJB which calls a stateless EJB method that parses Web pages.
Here is my stateful bean's code:
@Override
public void parse() {
    while (true) {
        if (false == _activeMode) {
            break;
        }
        for (String url : _urls) {
            if (false == _activeMode) {
                break;
            }
            for (String prioritaryUrl : _prioritaryUrls) {
                if (false == _activeMode)
                    break;
                boursoramaStateless.parseUrl(prioritaryUrl);
            }
            boursoramaStateless.parseUrl(url);
        }
    }
}
No problem here.
I have some asynchronous calls (via JMS) that add values to my _urls variable (a List). The goal is to parse new URLs inside my infinite loop.
I receive a ConcurrentModificationException when I try to add a new URL to my List via the JMS onMessage method, but it seems to work anyway, because the new URL does get parsed.
When I wrap the code in a synchronized block:

while (true) {
    synchronized (_urls) {
        // code...
    }
}
my new URL is never parsed; I expected it to be parsed after a for() loop finished...
So my question is: how can I modify a List while it is being iterated in a loop, without getting a ConcurrentModificationException?
I just want two threads to modify a shared resource at the same time, without a synchronized block...
You may want a CopyOnWriteArrayList.
for (String s : urls) uses an Iterator internally. The iterator checks for concurrent modification so that its behavior is well defined.
You can use a for (int i = ...) loop instead. This way, no exception is thrown, and if elements are only added to the end of the List, you still get a consistent snapshot (the list as it existed at some point during the iteration). If elements in the list are moved around, you may get missing entries.
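A sketch of that index-based loop, reusing the question's field names and the answer's append-only assumption:

// No Iterator is involved, so no ConcurrentModificationException is thrown;
// size() is re-read on every pass, so URLs appended during the loop are
// picked up too. Only reasonable if elements are never removed or reordered.
for (int i = 0; i < _urls.size(); i++) {
    boursoramaStateless.parseUrl(_urls.get(i));
}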
If you want to use synchronized, you need to synchronize on both ends, but that way you lose concurrent reads.
If you want concurrent access AND consistent snapshots, you can use the collections in the java.util.concurrent package.
CopyOnWriteArrayList has already been mentioned. Other interesting options are LinkedBlockingQueue and ArrayBlockingQueue (Collections, but not Lists), but that's about all.
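A minimal sketch of the CopyOnWriteArrayList route (field and method names borrowed from the question; the JMS plumbing is elided):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.TextMessage;

class UrlFeed {
    // Iterators see a snapshot of the list taken when the iterator was
    // created, so the parse loop never throws ConcurrentModificationException
    // while the JMS thread appends; URLs added mid-pass show up on the next pass.
    private final List<String> _urls = new CopyOnWriteArrayList<>();

    public void onMessage(Message message) throws JMSException {
        _urls.add(((TextMessage) message).getText());
    }

    public void parse() {
        for (String url : _urls) {
            // boursoramaStateless.parseUrl(url);
        }
    }
}

Note that every add() copies the backing array, so this fits lists that are read often and written rarely.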
OK, thank you guys.
So I made some modifications.
1) Added an iterator and kept the synchronized blocks (inside the parse() function and around the addUrl() function which adds a new URL to my List)
--> it works like a charm; no ConcurrentModificationException is thrown
2) Added an iterator and removed the synchronized blocks
--> a ConcurrentModificationException is still thrown...
For now, I will read more about your answers and test your solutions.
Thank you again guys
First, forget about synchronized when running inside a Java EE container. It prevents the container from optimizing thread utilization and will not work in a clustered environment.
Second, it seems that your design is wrong. You should not update a private field of the bean via JMS; this is what causes the ConcurrentModificationException. You should probably modify your bean to retrieve the collection from a database, and your MDB to store the URL into the database.
Another, easier solution for you is the following.
Retrieve the currently existing URLs and copy them to another collection, then iterate over that copy. When the global collection is updated via JMS, the update is not visible in the copied collection, so no exception is thrown:
while (true) {
    for (String url : copyUrls(_prioritaryUrls)) {
        // deal with url
    }
}

private List<String> copyUrls(List<String> urls) {
    return new ArrayList<String>(urls); // creates a copy of the source list
}

//........
public void onMessage(Message message) {
    _prioritaryUrls.add(((TextMessage) message).getText());
}
I read the code sample/documentation about caching on the wiki page. I see that the RemovalListener callback can be used to tear down evicted cached objects, etc. My question is: does the library make sure that the object is not being used by any other thread before calling the provided RemovalListener? Let's consider the code example from the docs:
CacheLoader<Key, DatabaseConnection> loader =
    new CacheLoader<Key, DatabaseConnection>() {
        public DatabaseConnection load(Key key) throws Exception {
            return openConnection(key);
        }
    };

RemovalListener<Key, DatabaseConnection> removalListener =
    new RemovalListener<Key, DatabaseConnection>() {
        public void onRemoval(RemovalNotification<Key, DatabaseConnection> removal) {
            DatabaseConnection conn = removal.getValue();
            conn.close(); // tear down properly
        }
    };

return CacheBuilder.newBuilder()
    .expireAfterWrite(2, TimeUnit.MINUTES)
    .removalListener(removalListener)
    .build(loader);
Here the cache is configured to evict elements 2 minutes after creation (I understand it may not be exactly two minutes, because eviction is piggybacked on user read/write calls, etc.). But whatever the timing, will the library check that there is no active reference to the object being passed to the RemovalListener? I may have another thread that fetched the object from the cache long ago but may still be using it. In that case I cannot call close() on it from the RemovalListener.
Also, the documentation of RemovalNotification says: "A notification of the removal of a single entry. The key and/or value may be null if they were already garbage collected."
So according to that, conn could be null in the above example. How do we tear down the conn object properly in such a case? The code example above would then also throw a NullPointerException.
The use case I am trying to address is:
The cache elements need to expire two minutes after creation.
The evicted objects need to be closed, but only after making sure no one is using them.
Guava contributor here.
My question is does the library make sure that the object is not being used by any other thread before calling the provided RemovalListener.
No, that would be impossible for Guava to do generally -- and a bad idea anyway! If the cache values were Integers, then because Integer.valueOf reuses Integer objects for integers below 128, you could never expire an entry with a value below 128. That would be bad.
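A quick demonstration of that sharing:

public class IntegerCacheDemo {
    public static void main(String[] args) {
        // Integer.valueOf shares instances for values in [-128, 127], so a
        // small boxed value is never referenced only by the cache.
        System.out.println(Integer.valueOf(100) == Integer.valueOf(100));   // true
        System.out.println(Integer.valueOf(1000) == Integer.valueOf(1000)); // usually false
    }
}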
Also the documentation of RemovalNotification says that: A notification of the removal of a single entry. The key and/or value may be null if they were already garbage collected. So according to it conn could be null in the above example.
To be clear, that's only possible if you're using weakKeys, weakValues, or softValues. (And, as you've correctly deduced, you can't really use any of those if you need to do some teardown on the value.) If you're only using some other form of expiration, you'll never get a null key or value.
In general, I don't think a GC-based solution is going to work here. You must have a strong reference to the connection to close it properly. (Overriding finalize() might work here, but that's really a broken thing generally.)
Instead, my approach would be to cache references to a wrapper of some sort. Something like
class ConnectionWrapper {
    private Connection connection;
    private int users = 0;
    private boolean expiredFromCache = false;

    public Connection acquire() {
        users++;
        return connection;
    }

    public void release() {
        users--;
        if (users == 0 && expiredFromCache) {
            // The cache expired this connection.
            // We're the only ones still holding on to it.
            tearDown();
        }
    }

    synchronized void tearDown() {
        connection.tearDown();
        connection = null; // disable myself
    }
}
and then use a Cache<Key, ConnectionWrapper> with a RemovalListener that looks like...
new RemovalListener<Key, ConnectionWrapper>() {
    public void onRemoval(RemovalNotification<Key, ConnectionWrapper> notification) {
        ConnectionWrapper wrapper = notification.getValue();
        if (wrapper.users == 0) {
            // do the teardown ourselves; nobody's using it
            wrapper.tearDown();
        } else {
            // it's still in use; mark it as expired from the cache
            wrapper.expiredFromCache = true;
        }
    }
}
...and then force users to use acquire() and release() appropriately.
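A hedged usage sketch, assuming a LoadingCache<Key, ConnectionWrapper> named cache and a key in scope:

ConnectionWrapper wrapper = cache.getUnchecked(key);
Connection connection = wrapper.acquire();
try {
    // ... use the connection ...
} finally {
    wrapper.release(); // if the cache expired it meanwhile, the last user tears it down
}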
There's really not going to be any way better than this approach, I think. The only way to detect that there are no other references to the connection is to use GC and weak references, but you can't tear down a connection without a strong reference to it -- which destroys the whole point. You can't guarantee whether it's the RemovalListener or the connection user who'll need to tear down the connection, because what if the user takes more than two minutes to do its thing? I think this is probably the only feasible approach left.
(Warning: the above code assumes only one thread will be doing things at a time; it's not synchronized at all, but hopefully if you need it, then this is enough to give you an idea of how it should work.)
Please show me where I'm missing something.
I have a cache built by CacheBuilder inside a DataPool. DataPool is a singleton object whose instance various threads can get and act on. Right now I have a single thread which produces data and adds it into the said cache.
To show the relevant part of the code:
private InputDataPool() {
    cache = CacheBuilder.newBuilder().expireAfterWrite(1000, TimeUnit.NANOSECONDS).removalListener(
        new RemovalListener() {
            {
                logger.debug("Removal Listener created");
            }

            public void onRemoval(RemovalNotification notification) {
                System.out.println("Going to remove data from InputDataPool");
                logger.info("Following data is being removed:" + notification.getKey());
                if (notification.getCause() == RemovalCause.EXPIRED) {
                    logger.fatal("This data expired:" + notification.getKey());
                } else {
                    logger.fatal("This data didn't expire but was evicted intentionally:" + notification.getKey());
                }
            }
        }
    ).build(new CacheLoader() {
        @Override
        public Object load(Object key) throws Exception {
            logger.info("Following data being loaded:" + (Integer) key);
            Integer uniqueId = (Integer) key;
            return InputDataPool.getInstance().getAndRemoveDataFromPool(uniqueId);
        }
    });
}

public static InputDataPool getInstance() {
    if (clsInputDataPool == null) {
        synchronized (InputDataPool.class) {
            if (clsInputDataPool == null) {
                clsInputDataPool = new InputDataPool();
            }
        }
    }
    return clsInputDataPool;
}
From the said thread the call being made is as simple as
while (true) {
    inputDataPool.insertDataIntoPool(inputDataPacket);
    // call some logic which comes with inputDataPacket and sleep for 2 seconds.
}
and where inputDataPool.insertDataIntoPool is like
public void insertDataIntoPool(InputDataPacket inputDataPacket) {
    cache.get(inputDataPacket.getId());
}
Now the question is: the element in the cache is supposed to expire after 1000 nanoseconds. So when inputDataPool.insertDataIntoPool is called a second time, the data inserted the first time should be evicted, as it must have expired, the call being made 2 seconds after its insertion. And then, correspondingly, the RemovalListener should be called.
But this is not happening. I looked into the cache stats and evictionCount is always zero, no matter how many times cache.get(id) is called.
But importantly, if I extend inputDataPool.insertDataIntoPool like this:
public void insertDataIntoPool(InputDataPacket inputDataPacket) {
    cache.get(inputDataPacket.getId());
    try {
        Thread.sleep(2000);
    } catch (InterruptedException ex) {
        ex.printStackTrace();
    }
    cache.get(inputDataPacket.getId());
}
then the eviction takes place as expected, with the removal listener being called.
Right now I'm quite clueless about where I'm missing something to expect this kind of behaviour. Please help me see it if you see something.
P.S. Please ignore any typos. Also, no checks are made and no generics are used, as this is all just a phase of testing the CacheBuilder functionality.
Thanks
As explained in the javadoc and in the user guide, there is no thread that makes sure entries are removed from the cache as soon as the delay has elapsed. Instead, entries are removed during write operations, and occasionally during read operations if writes are rare. This allows for high throughput and low latency. And of course, not every write operation causes a cleanup:
Caches built with CacheBuilder do not perform cleanup and evict values "automatically," or instantly after a value expires, or anything of the sort. Instead, it performs small amounts of maintenance during write operations, or during occasional read operations if writes are rare.
The reason for this is as follows: if we wanted to perform Cache maintenance continuously, we would need to create a thread, and its operations would be competing with user operations for shared locks. Additionally, some environments restrict the creation of threads, which would make CacheBuilder unusable in that environment.
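If writes are rare and you need timely removal notifications, one workaround (my suggestion, not from the quoted docs) is to trigger the cache's own maintenance on a schedule via Cache.cleanUp():

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Periodically runs pending maintenance, which processes expired entries
// and fires the RemovalListener even when no writes are happening.
ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();
cleaner.scheduleAtFixedRate(cache::cleanUp, 1, 1, TimeUnit.SECONDS);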
I had the same issue, and I found this in Guava's documentation for CacheBuilder.removalListener:
Warning: after invoking this method, do not continue to use this cache builder reference; instead use the reference this method returns. At runtime, these point to the same instance, but only the returned reference has the correct generic type information so as to ensure type safety. For best results, use the standard method-chaining idiom illustrated in the class documentation above, configuring a builder and building your cache in a single statement. Failure to heed this advice can result in a ClassCastException being thrown by a cache operation at some undefined point in the future.
So by changing your code to use the builder reference returned after adding the removalListener, this problem can be resolved:
CacheBuilder builder = CacheBuilder.newBuilder().expireAfterWrite(1000, TimeUnit.NANOSECONDS).removalListener(
    new RemovalListener() {
        {
            logger.debug("Removal Listener created");
        }

        public void onRemoval(RemovalNotification notification) {
            System.out.println("Going to remove data from InputDataPool");
            logger.info("Following data is being removed:" + notification.getKey());
            if (notification.getCause() == RemovalCause.EXPIRED) {
                logger.fatal("This data expired:" + notification.getKey());
            } else {
                logger.fatal("This data didn't expire but was evicted intentionally:" + notification.getKey());
            }
        }
    }
);

cache = builder.build(new CacheLoader() {
    @Override
    public Object load(Object key) throws Exception {
        logger.info("Following data being loaded:" + (Integer) key);
        Integer uniqueId = (Integer) key;
        return InputDataPool.getInstance().getAndRemoveDataFromPool(uniqueId);
    }
});
With that change the problem was resolved. It is kind of weird, but I guess it is what it is :)