How best to implement a DynamicPoller in Spring Integration - java

I have a flow that receives large messages (which land in an RDBMS table), so I can't process too many of them at a time. I'm therefore throttling the processing with <int:poller max-messages-per-poll="" /> and with bounded queues such as <int:queue capacity="">. I understand that multiple threads/transactions will participate in this flow, and for this use case that is acceptable.
The query that polls the DB takes some time to run, so I don't want to run it more often than I need to. Additionally, the messages this flow receives tend to arrive in bursts: it might get 1000 messages and then nothing for an hour.
What I'd like is a dynamic poller that polls infrequently (since, as noted, the query is costly) unless a burst of messages arrives, in which case it should poll very frequently until all messages are processed. For example, if I have <int:poller max-messages-per-poll="100" /> and the poller just read in 100 messages, chances are good that there are more messages in the RDBMS waiting to be processed, and I should poll again immediately after processing has completed.
I know Spring doesn't offer a way to modify a trigger to make it dynamic, and I have already looked at the Spring Integration reference section "7.1.5 Change Polling Rate at Runtime" and at the dynamic-poller sample project (Dynamic Poller).
That's a start, but I really need the poller to change its frequency based on the current load.
I might not be correct on this, but I thought perhaps Gary mentioned something like this would be interesting to implement in his talk on "Implementing High-Availability Architectures with Spring Integration".
In any event, writing a class (or classes) to change the poller frequency doesn't seem like a big deal. What is more challenging is knowing when a poll has occurred that produced no results, since nothing gets posted to the output channel.
Some options I've considered:
1. Attach a <int:wire-tap channel="" /> to the poller's channel which calls an <int:service-activator>. The service activator examines the number of messages and adjusts the poller's period on the DynamicPeriodicTrigger.
The problem is that this will never get called if no messages are received, so once I adjust to poll more frequently, that polling period will remain indefinitely.
2. Same as #1, but add logic to DynamicPeriodicTrigger that reverts the period back to the initial delay after the next trigger occurs, or after a certain period of time.
3. Use an <int:advice-chain> element within the <int:poller> element with a MethodInterceptor implementation, similar to what Artem suggests in this link.
While this allows me to get in front of the receive method, it does not give me access to the results of the receive method (which would tell me the number of messages retrieved). This appears to be confirmed by what Gary mentions in this link:
The request handler advice chain is a special case; we had to take care to only advise the internal endpoint methods and not any downstream processing (on output channels).
Advising pollers is simpler because we're advising the whole flow. As described in section "7.1.4 Namespace Support" subsection "AOP Advice chains", you simply create an advice by implementing the MethodInterceptor interface.
See SourcePollingChannelAdapterFactoryBeanTests.testAdviceChain() for a very simple advice...
Code:
adviceChain.add(new MethodInterceptor() {
    public Object invoke(MethodInvocation invocation) throws Throwable {
        adviceApplied.set(true);
        return invocation.proceed();
    }
});
This simply is used to assert that the advice was called properly; a real advice would add code before and/or after the invocation.proceed().
In effect, this advice advises all methods, but there is only one, (Callable.call()).
4. Create an AfterReturning advice with a pointcut that looks for the Message<T> receive() method.
5. Clone the JdbcPollingChannelAdapter and add my hooks in that new class.
Perhaps what Gary suggests on this link would be useful but the "gist" link is no longer valid.
UPDATED:
The option I ended up implementing was to use an AfterReturningAdvice that looked something like the following.
Original code:
<int-jdbc:inbound-channel-adapter id="jdbcInAdapter"
channel="inputChannel" data-source="myDataSource"
query="SELECT column1, column2 from tableA"
max-rows-per-poll="100">
<int:poller fixed-delay="10000"/>
</int-jdbc:inbound-channel-adapter>
New code:
<bean id="jdbcDynamicTrigger" class="DynamicPeriodicTrigger">
<constructor-arg name="period" value="20000" />
</bean>
<bean id="jdbcPollerMetaData" class="org.springframework.integration.scheduling.PollerMetadata">
<property name="maxMessagesPerPoll" value="1000"/>
<property name="trigger" ref="jdbcDynamicTrigger"/>
</bean>
<bean id="pollMoreFrequentlyForHighVolumePollingStrategy" class="springintegration.scheduling.PollMoreFrequentlyForHighVolumePollingStrategy">
<property name="newPeriod" value="1"/>
<property name="adjustmentThreshold" value="100"/>
<property name="pollerMetadata" ref="jdbcPollerMetaData"/>
</bean>
<aop:config>
<aop:aspect ref="pollMoreFrequentlyForHighVolumePollingStrategy" >
<aop:after-returning pointcut="bean(jdbcInAdapterBean) and execution(* *.receive(..))" method="afterPoll" returning="returnValue"/>
</aop:aspect>
</aop:config>
<bean id="jdbcInAdapterBean" class="org.springframework.integration.jdbc.JdbcPollingChannelAdapter">
<constructor-arg ref="myDataSource" />
<constructor-arg value="SELECT column1, column2 from tableA" />
<property name="maxRowsPerPoll" value="100" />
</bean>
<int:inbound-channel-adapter id="jdbcInAdapter" ref="jdbcInAdapterBean"
channel="inputChannel"
auto-startup="false">
<int:poller ref="jdbcPollerMetaData" />
</int:inbound-channel-adapter>
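The PollMoreFrequentlyForHighVolumePollingStrategy class referenced above is not shown in the original configuration; the following is only a minimal sketch of what its afterPoll method might look like, not the poster's actual code. It assumes that the DynamicPeriodicTrigger copied from the dynamic-poller sample exposes getPeriod()/setPeriod(long), that the returnValue handed to the advice is the Message returned by JdbcPollingChannelAdapter.receive() with a List of rows as its payload, and that the Message import matches your Spring Integration version (org.springframework.messaging.Message for 4.x, org.springframework.integration.Message for earlier versions).
import java.util.List;

import org.springframework.integration.scheduling.PollerMetadata;
import org.springframework.messaging.Message;

// DynamicPeriodicTrigger is the class from the dynamic-poller sample; its package
// depends on where you copied it, so no import is shown here.
public class PollMoreFrequentlyForHighVolumePollingStrategy {

    private volatile long newPeriod;            // fast period (ms) used while a burst is being drained
    private volatile int adjustmentThreshold;   // row count that signals "there is probably more to read"
    private volatile Long normalPeriod;         // remembered slow period from the XML configuration
    private PollerMetadata pollerMetadata;

    public void setNewPeriod(long newPeriod) { this.newPeriod = newPeriod; }
    public void setAdjustmentThreshold(int adjustmentThreshold) { this.adjustmentThreshold = adjustmentThreshold; }
    public void setPollerMetadata(PollerMetadata pollerMetadata) { this.pollerMetadata = pollerMetadata; }

    // Wired in via <aop:after-returning ... method="afterPoll" returning="returnValue"/>
    public void afterPoll(Object returnValue) {
        DynamicPeriodicTrigger trigger = (DynamicPeriodicTrigger) pollerMetadata.getTrigger();
        if (normalPeriod == null) {
            normalPeriod = trigger.getPeriod(); // assumes the sample trigger exposes a getter
        }
        int rows = 0;
        if (returnValue instanceof Message) {
            Object payload = ((Message<?>) returnValue).getPayload();
            if (payload instanceof List) {
                rows = ((List<?>) payload).size();
            }
        }
        if (rows >= adjustmentThreshold) {
            trigger.setPeriod(newPeriod);     // burst detected: poll again almost immediately
        }
        else {
            trigger.setPeriod(normalPeriod);  // backlog drained: fall back to the slow period
        }
    }
}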
I've done a bit more research on this and feel that Spring Integration perhaps could offer some hooks into the pollers so that developers can better customize them.
For more info see https://jira.spring.io/browse/INT-3633
If that JIRA does not get implemented and someone is interested in the code I wrote, add a comment here and I'll make the code available on GitHub or as a gist.

Thanks for opening the JIRA issue; we should discuss the feature over there because stack overflow is not well suited for extended conversations.
However, I'm not sure what you meant above by "...but the "gist" link is no longer valid...". It works fine for me... https://gist.github.com/garyrussell/5374267 but let's discuss in the JIRA.

Related

Camel Restlet maxThreads not working as expected

I am working on an application where a lot of Camel routes are exposed as Restlet routes; let's call them endpoints. These endpoints are consumed by an Angular application. Each endpoint calls a 3rd-party system to gather the data and, after processing it, passes the response back to the Angular application.
There are times when the 3rd-party system is very slow, and in such cases our server's (WebSphere 8.5.5.9) thread pool reaches its maximum size (because most of the threads are waiting for a response from the 3rd party). As a result there are no threads available for other parts of the application (which do not interact with the server via these endpoints), so they suffer as well.
So basically we want to limit the number of requests served by these endpoints when the server is considerably overloaded, so that other parts of the application are not affected. We wanted to play with the number of threads that can process incoming requests on any of the endpoints. To do that, as a proof of concept (POC), I used this example: https://github.com/apache/camel/tree/master/examples/camel-example-restlet-jdbc
In this example I changed the following configuration
<bean id="RestletComponentService" class="org.apache.camel.component.restlet.RestletComponent">
<constructor-arg ref="RestletComponent" />
<property name="maxQueued" value="0" />
<property name="maxThreads" value="1" />
</bean>
And in org.apache.camel.example.restlet.jdbc.MyRouteConfig I added a sleep of 20 seconds to one of the direct routes, as follows:
from("direct:getPersons")
.process(exchange -> { Thread.sleep(20000);})
.setBody(simple("select * from person"))
.to("jdbc:dataSource");
Now my assumption (based on the Camel documentation at http://camel.apache.org/restlet.html) is that only 1 request can be served at a given time and no other requests will be accepted (since maxQueued is set to 0) while the original request is still in progress. But that is not what happens in practice. With this code I can call this endpoint many times concurrently, and all of the calls respond after 20 seconds and a few milliseconds.
I have been searching for a similar kind of setup for the last few days and haven't found anything yet. I want to understand whether I am doing something wrong or whether I have understood the documentation incorrectly.
The Camel version used here is 2.23.0-SNAPSHOT.
Instead of trying to configure the thread pool of a Camel component, you could use Camel Hystrix to control the downstream calls of your application with the Circuit Breaker pattern.
As soon as the downstream service returns errors or responds too slowly, you can return an alternative response to the caller.
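For illustration, here is a rough sketch of what that could look like with the Camel Java DSL. The endpoint URI and fallback body are made up, and the hystrix() EIP requires the camel-hystrix component on the classpath; this is not a definitive implementation.
import org.apache.camel.builder.RouteBuilder;

public class PersonRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("direct:getPersons")
            // Wrap the slow downstream call in a circuit breaker
            .hystrix()
                // hypothetical 3rd-party endpoint
                .to("http4://third-party-host/api/persons")
            .onFallback()
                // returned immediately when the downstream is failing or responding too slowly
                .transform().constant("Service temporarily unavailable, please retry later")
            .end();
    }
}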

Spring Integration HTTP Inbound Gateway Request Overlap

I have an HTTP Inbound Gateway in my integration application, which I call during a save operation. It works like this: if I have one product, I call the API once, and if I have more than one, I call it multiple times. The problem is that for a single invocation SI works just fine, but for multiple calls the requests and responses get mixed up. I thought Spring Integration channels are just like MQs, but are they not?
Let me explain. Let's say I have 2 products. First, I invoke SI for product A and then for B. The response for A gets mapped to the request for B! It happens all the time. I don't want to use dirty hacks like waiting for the first response to come back before invoking again, because that means the system has to wait a long time. I guess we can do it in Spring Integration using a task executor, but among all the basic samples out there I can't find the right one. Please help me figure out how to fix this issue!
My Configuration is :
<int:channel id="n2iMotorCNInvokeRequest" />
<int:channel id="n2iMotorCNInvokeResponse" />
<int:channel id="n2iInvoketransformerOut" />
<int:channel id="n2iInvokeobjTransformerOut" />
<int:channel id="n2iInvokegatewayOut" />
<int-http:inbound-gateway id="i2nInvokeFromPOS"
supported-methods="GET"
request-channel="i2nInvokeRequest"
reply-channel="i2nInvokeResponse"
path="/postProduct/{Id}"
mapped-response-headers="Return-Status, Return-Status-Msg, HTTP_RESPONSE_HEADERS"
reply-timeout="50000">
<int-http:header name="Id" expression="#pathVariables.Id"/>
</int-http:inbound-gateway>
<int:service-activator id="InvokeActivator"
input-channel="i2nInvokeRequest"
output-channel="i2nInvokeResponse"
ref="apiService"
method="getProductId"
requires-reply="true"
send-timeout="60000"/>
<int:transformer input-channel="i2nInvokeResponse"
ref="apiTransformer"
method="retrieveProductJson"
output-channel="n2iInvokeRequest"/>
<int-http:outbound-gateway request-channel="n2iInvokeRequest" reply-channel="n2iInvoketransformerOut"
url="http://10.xx.xx.xx/api/index.php" http-method="POST"
expected-response-type="java.lang.String">
</int-http:outbound-gateway>
<int:service-activator
input-channel="n2iInvoketransformerOut"
output-channel="n2iInvokeobjTransformerOut"
ref="apiService"
method="productResponse"
requires-reply="true"
send-timeout="60000"/>
The i2nInvokeFromPOS gateway is what we call from the web application, which is where all the products are created. This integration API fetches that data and posts it to the backend system so that it gets propagated to the other POS locations too.
Steps:
1. I send the productId to i2nInvokeFromPOS.
2. The apiTransformer -> retrieveProductJson() method fetches the product details from the DB based on the ID.
3. The request JSON is sent to the backend system using http:outbound-gateway.
4. The response from the backend is received and the product status is updated to "uploaded" in the DB. This happens in apiService -> productResponse().
Once the response for A is received, all I get is an HTTP 500 error for request B! But the backend API is just fine.
The framework is completely thread-safe - if you are seeing cross-talk between different requests/responses then one (or more) of your components that the framework is invoking is not thread-safe.
You can't keep state in instance fields of, for example, the code invoked from a service activator.
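To illustrate the point (with a hypothetical class and field names, not the poster's actual apiService): holding per-request data in an instance field of a shared bean is exactly the kind of thing that produces the observed cross-talk.
// NOT thread-safe: a single ApiService bean is shared by all concurrent HTTP requests.
public class ApiService {

    private String currentProductId; // shared mutable state -- the source of the cross-talk

    public String getProductId(String id) {
        this.currentProductId = id;      // request B can overwrite this while request A is still running
        // ... call the backend, build the response ...
        return this.currentProductId;    // request A may now see request B's id
    }

    // Thread-safe alternative: keep everything in local variables / message payloads.
    public String getProductIdSafe(String id) {
        String localId = id;
        // ... call the backend, build the response ...
        return localId;
    }
}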

Spring AMQP Multiple Vhosts

I currently need to use three vhosts for this application. I am only receiving messages over one as a consumer; the others are for RPC calls. Currently I am using CachingConnectionFactory, which I have subclassed once for each virtual host, and I am declaring each of those subclasses as beans. I can then grab the appropriate connection factory to create the RabbitTemplate for the correct vhost.
I saw the AbstractRoutingConnectionFactory in the documentation but wanted to know if there are any reasons I should refactor my currently working code. I want the most maintainable and performant solution, not the "easiest" one.
Thanks!
I am not sure why you felt it was necessary to subclass the CachingConnectionFactory; you can simply declare multiple factories...
<rabbit:connection-factory id="default" host="localhost" />
<rabbit:connection-factory id="foo" host="localhost" virtual-host="/foo" />
<rabbit:connection-factory id="bar" host="localhost" virtual-host="/bar" />
Whether or not you use a routing connection factory (e.g. SimpleRoutingConnectionFactory) depends on your application's needs. If you don't use one, you would need 3 RabbitTemplates and would have to decide which one to use at runtime.
With a routing connection factory, the RabbitTemplate can make the decision based on the message content with a send-connection-factory-selector-expression.
There's not really a lot of difference except the second decouples your application from the decision. For example, you can set a message header customerId before sending (or during message conversion if you're using a convertAndSend method) and use a selector expression such as #vhostFor.select(messageProperties.headers.customerId).
If you later add a new vhost you wouldn't have to change your main application, just your vhostFor lookup bean to pick the right vhost.
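As a rough, hypothetical Java sketch of the routing approach (the bean names, lookup keys, and customerId header are made up; the selector expression is evaluated against the outgoing message, and its result is used as the lookup key into the target map):
import java.util.HashMap;
import java.util.Map;

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.connection.SimpleRoutingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class VhostRoutingConfig {

    public RabbitTemplate routingRabbitTemplate() {
        CachingConnectionFactory defaultCf = new CachingConnectionFactory("localhost");

        CachingConnectionFactory fooCf = new CachingConnectionFactory("localhost");
        fooCf.setVirtualHost("/foo");

        CachingConnectionFactory barCf = new CachingConnectionFactory("localhost");
        barCf.setVirtualHost("/bar");

        // lookup key -> target connection factory (one per vhost);
        // the customerId header value would need to resolve to "foo" or "bar" here
        Map<Object, ConnectionFactory> targets = new HashMap<Object, ConnectionFactory>();
        targets.put("foo", fooCf);
        targets.put("bar", barCf);

        SimpleRoutingConnectionFactory routingCf = new SimpleRoutingConnectionFactory();
        routingCf.setTargetConnectionFactories(targets);
        routingCf.setDefaultTargetConnectionFactory(defaultCf);

        RabbitTemplate template = new RabbitTemplate(routingCf);
        // Choose the target factory per send, based on a (hypothetical) customerId message header.
        template.setSendConnectionFactorySelectorExpression(
                new SpelExpressionParser().parseExpression("messageProperties.headers['customerId']"));
        return template;
    }
}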

Activemq how to configure a consumer listener (in Java)

I would be happy if anyone could share a simple sample of an XML-based configuration of a consumer listener in Spring. Thanks in advance.
EDIT;
I would just like to hear about the consumer's listener, not the consumer implementation, because I have already implemented ActiveMQ in my app and it is running well; however, I cannot be sure of the order in which items sent synchronously by the producer are consumed.
The problem is inconsistency and conflicting data manipulation caused by asynchronous executions of a method (which persists some objects to the DB in order to log them) by concurrent consumers at the same time.
EDIT2:
Let me clarify the complexity. I have an application that consists of two separate base parts. The first is the synchronously executing producer, which asks the DB for newly arrived products and then sends them one by one through the jmsTemplate.send method. This operation is executed synchronously from a cron/timer; in other words, the producer runs from a timer/cron. Now, the problem is the consumer itself. When the producer sends products one by one, async consumers (with concurrency enabled) receive the products and consume them asynchronously.
The problem begins here, because the method that the consumer executes when a product is received performs some DB persistence operations. When the same product is received by separate concurrent consumers (this happens because of our system, not a JMS issue, so don't focus on that point), performing the same persistence operations on the same entity causes exceptions, as is easy to predict. How can I prevent these asynchronous operations on products, or manage the order in which products are consumed, in this kind of application?
Thanks.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jms="http://www.springframework.org/schema/jms"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms-3.0.xsd">
<!-- A simple and usual connection to activeMQ -->
<bean id="activeMQConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://localhost:61616"></property>
</bean>
<!-- A POJO that implements the JMS message listener -->
<bean id="simpleMessageListener" class="MyJMSMessageListener" />
<!-- Cached Connection Factory to wrap the ActiveMQ connection -->
<bean id="cachedConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="activeMQConnectionFactory"></property>
<property name="sessionCacheSize" value="10"></property>
<property name="reconnectOnException" value="true"></property>
</bean>
<!-- The Spring message listener container configuration -->
<jms:listener-container container-type="default" connection-factory="cachedConnectionFactory" acknowledge="auto">
<jms:listener destination="FOO.TEST" ref="simpleMessageListener" method="onMessage" />
</jms:listener-container>
</beans>
And the Java class that listens to the messages itself:
import javax.jms.Message;
import javax.jms.MessageListener;

public class MyJMSMessageListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // Do your work here
    }
}
Starting this listener is a matter of getting the application context; it will auto-start the JMS listener once you do that.
EDIT, according to the other question:
So your system may generate (for example) 2 or even more messages delivered to the consumers with the same productID? Well, first of all, this is rather not YOUR problem but the application's. Even if you somehow fix it, it is not really a fix; it is a way to hide the problem itself. Nevertheless, if forced to provide a solution right now, I can think of only one, the easiest: disable the concurrent consuming, sort of. Here is what I would do: receive the messages in the queue and have only one consumer on that queue. Inside this consumer I would process as little of the message as I can: take ONLY the productID and place it in some other queue. Before that you would have to check whether the productID is already in that queue. If it is, just return silently; if it is not, that means it has never been processed, so place this message in a different queue, Queue2 for example, and then enable concurrent consumers on that second queue Queue2. This still has flaws, though: first, the productID queue should somehow be cleaned once in a while, otherwise it will grow forever, but that is not that hard. The tricky part: what if you have a productID in the productID queue, but the product came for an UPDATE in the DB and not an INSERT? You should not reject it then...
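A very rough sketch of that "gatekeeper" consumer follows. The queue names, the assumption that the message body is the productID, and the in-memory set are all made up for illustration; in a real system the set would need periodic cleanup and would not survive a restart.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import org.springframework.jms.core.JmsTemplate;

// Hypothetical single-threaded "gatekeeper" listener on the first queue.
public class DeduplicatingListener implements MessageListener {

    // Remembers productIDs already forwarded; needs cleanup in real code.
    private final Set<String> seenProductIds = ConcurrentHashMap.newKeySet();

    private final JmsTemplate jmsTemplate;

    public DeduplicatingListener(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @Override
    public void onMessage(Message message) {
        try {
            String productId = ((TextMessage) message).getText(); // assumes the body is the productID
            if (seenProductIds.add(productId)) {
                // First time we see this product: hand it to Queue2,
                // where concurrent consumers do the real work.
                jmsTemplate.convertAndSend("FOO.TEST.QUEUE2", productId);
            }
            // Duplicate: return silently, as suggested above.
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}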

Java Web Application: How to implement caching techniques?

I am developing a Java web application that bases its behavior on large XML configuration files loaded from a web service. As these files are not actually required until a particular section of the application is accessed, they are loaded lazily. When one of these files is required, a query is sent to the web service to retrieve the corresponding file. As some of the configuration files are likely to be used much, much more often than others, I'd like to set up some kind of caching (with maybe a 1-hour expiration time) to avoid requesting the same file over and over.
The files returned by the web service are the same for all users across all sessions. I do not use JSP, JSF or any other fancy framework, just plain servlets.
My question is: what is considered a best practice for implementing such a global, static cache within a Java web application? Is a singleton class appropriate, or will there be weird behaviors due to the J2EE containers? Should I expose something somewhere through JNDI? What should I do so that my cache doesn't get screwed up in clustered environments (it's OK, but not necessary, to have one cache per clustered server)?
Given the information above, would it be a correct implementation to store an object responsible for caching as a ServletContext attribute?
Note: I do not want to load all of them at startup and be done with it, because:
1. that would overload the web service whenever my application starts up;
2. the files might change while my application is running, so I would have to re-query them anyway;
3. I would still need a globally accessible cache, so my question still holds.
Update: Using a caching proxy (such as Squid) may be a good idea, but each request to the web service sends a rather large XML query in the POST data, which may differ each time. Only the web application really knows that two different calls to the web service are actually equivalent.
Thanks for your help
Here's an example of caching with EhCache. This code is used in several projects to implement ad hoc caching.
1) Put your cache in the global context. (Don't forget to register the listener in web.xml.)
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

public class InitializationListener implements ServletContextListener {

    private Cache cache;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        ServletContext ctx = sce.getServletContext();
        CacheManager singletonManager = CacheManager.create();
        Cache memoryOnlyCache = new Cache("dbCache", 100, false, true, 86400, 86400);
        singletonManager.addCache(memoryOnlyCache);
        cache = singletonManager.getCache("dbCache");
        ctx.setAttribute("dbCache", cache);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        CacheManager.getInstance().shutdown();
    }
}
2) Retrieve the cache instance when you need it, e.g. from a servlet:
Cache cache = (Cache) getServletContext().getAttribute("dbCache");
3) Query the cache just before you do an expensive operation.
Element e = cache.get(key);
Object result = null;
if (e != null) {
    result = e.getObjectValue(); // get the object from the cache
} else {
    // Create the object you need to cache here (the expensive operation),
    // assign it to 'result', then store it in the cache:
    cache.put(new Element(key, result));
}
4) Also don't forget to invalidate cached objects when appropriate.
You can find more samples here
Your question contains several separate questions. Let's start slowly. The ServletContext is a good place to store a handle to your cache, but you pay for it by having one cache per server instance. That should be no problem. If you want to register the cache with a wider scope, consider registering it in JNDI.
Now, the caching problem itself. Basically, you are retrieving XML via a web service. If you are accessing this web service over HTTP, you can install a simple HTTP proxy server on your side that handles caching of the XML. The next step would be caching the resolved XML in some sort of local object cache. This cache can exist per server without any problem; in this second case EhCache will do a perfect job. The chain of processing will then look like this: client - HTTP request -> servlet -> look in the local cache - if not cached -> look in the HTTP proxy (XML files) -> do the proxy job (HTTP to the web service).
Pros:
Local cache per server instance, which contains only objects from requested xmls
One http proxy running on same hardware as our webapp.
Possibility to scale webapp without adding new http proxies for xml files.
Cons:
Next level of infrastructure
+1 point of failure (http proxy)
More complicated deployment
Update: don't forget to always send an HTTP HEAD request to the proxy to ensure that the cache is up to date.
Option #1: Use an Open Source Caching Library Such as EhCache
Don't implement your own cache when there are a number of good open-source alternatives that you can drop in and start using. Implementing your own cache is much more complex than most people realize, and if you don't know exactly what you are doing with respect to threading, you'll end up reinventing the wheel and solving some very difficult problems.
I'd recommend EhCache; it is under an Apache license. You'll want to take a look at the EhCache code samples.
Option #2: Use Squid
An even easier solution to your problem would be to use Squid... Put Squid in between the process that requests the data to be cached and the system making the request: http://www.squid-cache.org/
After doing some more looking around myself, it seems that the easiest way to achieve what I need (within the requirements and acceptable limitations described in the question) would be to add my caching object to the ServletContext and look it up (or pass it around) where needed.
I'd instantiate my configuration loader from a ServletContextListener, and within its contextInitialized() method I'd store it in the ServletContext using ServletContext.setAttribute(). It's then easy to look it up from the servlets themselves using request.getSession().getServletContext().getAttribute().
I suppose this is the proper way to do it without introducing Spring or any other dependency injection framework.
In short, you can use this ready-made Spring EhCache configuration:
1- ehcache.xml: shows the global configuration of EhCache.
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="./ehcache.xsd" updateCheck="false" monitoring="autodetect" dynamicConfig="true" name="myCacheManager">
<!--
see ehcache-core-*.jar/ehcache-fallback.xml for description of elements
Attention: most of those settings will be overwritten by hybris
-->
<diskStore path="java.io.tmpdir"/>
</ehcache>
2- ehcache-spring.xml: creates the EhCacheManagerFactoryBean and EhCacheFactoryBean.
<bean id="myCacheManager" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
scope="singleton">
<property name="configLocation" value="ehcache.xml" />
<property name="shared" value="true" />
</bean>
<bean id="myCache" class="org.springframework.cache.ehcache.EhCacheFactoryBean" scope="singleton">
<property name="cacheManager" ref="myCacheManager" />
<property name="cacheName" value="myCache" />
<property name="maxElementsInMemory" value="1000" />
<property name="maxElementsOnDisk" value="1000" />
<property name="eternal" value="false" />
<property name="diskPersistent" value="true" />
<property name="timeToIdle" value="600" />
<property name="timeToLive" value="1200" />
<property name="memoryStoreEvictionPolicy" value="LRU" />
<property name="statisticsEnabled" value="true" />
<property name="sampledStatisticsEnabled" value="true" />
</bean>
3- Inject the "myCache" bean into your business class; see the following example to get started with getting and putting an object in your cache.
@Resource(name = "myCache")
private net.sf.ehcache.Cache myCache;

@Resource(name = "myService")
private Service myService;

public byte[] getFromCache(final String code)
{
    // key identifying an entry in the cache
    final String key = code;
    // get from the cache
    final Element element = myCache.get(key);
    if (element != null && element.getValue() != null)
    {
        return (byte[]) element.getValue();
    }
    // not cached yet: load it and store it in the cache
    final byte[] somethingToBeCached = myService.getBy(code);
    myCache.put(new Element(key, somethingToBeCached));
    return somethingToBeCached;
}
I did not have any problems with putting a cached object instance inside the ServletContext. Don't forget the other 2 options (request scope, session scope) with the setAttribute methods of those objects. Anything that is supported natively inside web containers and J2EE servers is good (by good I mean it's vendor independent and doesn't need heavy J2EE libraries like Spring). My biggest requirement is that the server gets up and running in 5-10 seconds.
I really dislike all the caching solutions (EhCache, Infinispan, etc.), because it's so easy to get them working on a local machine and hard to get them working on production machines. Unless you need cluster-wide replication/distribution tightly integrated with the Java ecosystem, you can use Redis (a NoSQL database) or Node.js ... anything with an HTTP interface will do.
Caching can be really easy; here is a pure Java solution (no frameworks):
import java.util.*;

/*
  ExpirableObject.
  Abstract superclass for objects which will expire.

  One interesting design choice is the decision to use
  the expected duration of the object, rather than the
  absolute time at which it will expire. Doing things this
  way is slightly easier on the client code
  (often, the client code can simply pass in a predefined
  constant, as is done here with DEFAULT_LIFETIME).
*/
public abstract class ExpirableObject {

    public static final long FIFTEEN_MINUTES = 15 * 60 * 1000;
    public static final long DEFAULT_LIFETIME = FIFTEEN_MINUTES;

    protected abstract void expire();

    public ExpirableObject() {
        this(DEFAULT_LIFETIME);
    }

    public ExpirableObject(long timeToLive) {
        Expirer expirer = new Expirer(timeToLive);
        new Thread(expirer).start();
    }

    private class Expirer implements Runnable {

        private long _timeToSleep;

        public Expirer(long timeToSleep) {
            _timeToSleep = timeToSleep;
        }

        public void run() {
            long obituaryTime = System.currentTimeMillis() + _timeToSleep;
            long timeLeft = _timeToSleep;
            while (timeLeft > 0) {
                try {
                    timeLeft = obituaryTime - System.currentTimeMillis();
                    if (timeLeft > 0) {
                        Thread.sleep(timeLeft);
                    }
                }
                catch (InterruptedException ignored) {}
            }
            expire();
        }
    }
}
Please refer to this link for further improvements.
