I put my Couchbase initialization code inside a static initializer block:
static {
initCluster();
bucket = initBucket("graph");
metaBucket = initBucket("meta");
BLACKLIST = new SetObservingCache<String>(() -> getBlackList(), BLACKLIST_REFRESH_INTERVAL_SEC * 1000);
}
I know it's not good practice, but it was very convenient and served its purpose: I need this code to run exactly once in a multi-threaded environment and to block all subsequent calls from other threads until it has finished (i.e. the blacklist has been initialized).
To my surprise, the call to getBlackList() timed out and couldn't be completed.
However, when it was called again 2 minutes later (which is what the ObservingCache does), it completed in less than a second.
In order to solve this, I refactored my code and made the blacklist acquisition lazy:
public boolean isBlacklisted(String key) {
// BLACKLIST variable should NEVER be touched outside of this context.
assureBlacklistIsPopulated();
return BLACKLIST != null ? BLACKLIST.getItems().contains(key) : false;
}
private void assureBlacklistIsPopulated() {
if (!ENABLE_BLACKLIST) {
return;
}
if (BLACKLIST == null) {
synchronized (CouchConnectionManager.class) {
if (BLACKLIST == null) {
BLACKLIST = new SetObservingCache<String>(() -> getBlackList(), BLACKLIST_REFRESH_INTERVAL_SEC * 1000);
}
}
}
}
The call to isBlacklisted() blocks all other threads that attempt to check whether an entry is blacklisted until the blacklist is initialized.
I'm not a big fan of this solution because it's verbose and error-prone: one might try to read from BLACKLIST without calling assureBlacklistIsPopulated() first.
The static (and non final) fields within the class are as follows:
private static CouchbaseCluster cluster;
private static Bucket bucket;
private static Bucket metaBucket;
private static SetObservingCache<String> BLACKLIST;
I can't figure out why the call succeeded when it wasn't part of the static initialization block. Is there any known performance-related pitfall of static initializer blocks that I'm not aware of?
EDIT: Added initialization code per request
private Bucket initBucket(String bucketName) {
while(true) {
Throwable t = null;
try {
ReportableThread.updateStatus("Initializing bucket " + bucketName);
return cluster.openBucket(bucketName);
} catch(Throwable t1) {
t1.printStackTrace();
t = t1;
}
try {
ReportableThread.updateStatus(String.format("Failed to open bucket: %s reason: %s", bucketName, t));
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
private void initCluster() {
CouchbaseEnvironment env = DefaultCouchbaseEnvironment
.builder()
.kvTimeout(MINUTE)
.connectTimeout(MINUTE)
.retryStrategy(FailFastRetryStrategy.INSTANCE)
.requestBufferSize(16384 * 2)
.responseBufferSize(16384 * 2)
.build();
while(true) {
ReportableThread.updateStatus("Initializing couchbase cluster");
Throwable t = null;
try {
cluster = CouchbaseCluster.create(env, getServerNodes());
if(cluster != null) {
return;
}
} catch(Throwable t1) {
t1.printStackTrace();
t = t1;
}
try {
ReportableThread.updateStatus(String.format("Failed to create connection to couch %s", t));
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
public Set<String> getBlackList() {
ReportableThread.updateStatus("Getting black list");
AbstractDocument<?> abstractDoc = get("blacklist", metaBucket, JsonArrayDocument.class);
JsonArrayDocument doc = null;
if (abstractDoc != null && abstractDoc instanceof JsonArrayDocument) {
doc = (JsonArrayDocument)abstractDoc;
} else {
return new HashSet<String>();
}
ReportableThread.updateStatus(String.format("%s: Got %d items | sorting items", new Date(System.currentTimeMillis()).toString(), doc.content().size()));
HashSet<String> ret = new HashSet<String>();
for (Object string : doc.content()) {
if (string != null) {
ret.add(string.toString());
}
}
return ret;
}
1st: you are using the double-checked locking idiom, and that's always a bad idea.
Put only one if (BLACKLIST == null) check, and it must be inside the synchronized block.
2nd: the lazy init is fine, but do it in a static getInstance() and NEVER expose the BLACKLIST field.
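For illustration, a minimal sketch of what that could look like, reusing the field, accessor and constructor names from the question; the accessor name blacklistCache() is made up, and it assumes getBlackList() can be called from a static context:

    private static SetObservingCache<String> blacklist; // only ever touched via the accessor below

    private static synchronized SetObservingCache<String> blacklistCache() {
        // synchronized static method: the first caller builds the cache, later callers reuse it
        if (blacklist == null) {
            blacklist = new SetObservingCache<String>(() -> getBlackList(),
                    BLACKLIST_REFRESH_INTERVAL_SEC * 1000);
        }
        return blacklist;
    }

    public static boolean isBlacklisted(String key) {
        return ENABLE_BLACKLIST && blacklistCache().getItems().contains(key);
    }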
Related
I'm using Redis with the help of the Jedis client. I'm attaching the code snippet for key/value set and get here. I expect my jedisPool to be initialised only once, but it is getting initialised multiple times. I'm not sure where I'm going wrong; I've been scratching my head over this for several days and have no clue why it initialises more than once.
//$Id$
package experiments.with.truth;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
public class RedisClientUtil {
private static JedisPool pool; // I presume the default value of my static variable would be null
static int maxActiveConnections = 8;
static int maxWaitInMillis = 2000;
static String host = "127.0.0.1";
static int port = 6379;
static int REDIS_DB = 1;
public static void initRedisClient() throws Exception {
try {
Class classObj = Class.forName("redis.clients.jedis.JedisPool");
if (classObj != null && pool == null) {
JedisPoolConfig jedisConfig = new JedisPoolConfig();
jedisConfig.setMaxTotal(maxActiveConnections);
jedisConfig.setMaxWaitMillis(maxWaitInMillis);
pool = new JedisPool(jedisConfig, host, port);
System.out.println("Pool initialised successfully !");
}
} catch(ClassNotFoundException ex) {
System.out.println("Couldn't initialize redis due to unavailability of jedis jar in your machine. Exception : " + ex);
}
}
public Jedis getJedisConnection() {
if(pool == null) {
initRedisClient();
}
return pool.getResource();
}
private static void returnJedis(Jedis jedis) {
try {
pool.returnResource(jedis);
} catch(Exception ex) {
ex.printStackTrace();
}
}
public static String getValue(String key) throws Exception{
Jedis jedisCon = null;
try {
jedisCon = getJedisConnection();
jedisCon.select(REDIS_DB);
String val = jedisCon.get(key);
return val;
} catch (Exception e) {
e.printStackTrace();
} finally {
if (jedisCon != null) {
returnJedis(jedisCon);
}
}
return null;
}
public void addValueToRedis(String key, String value) {
Jedis jedisCon = null;
try {
jedisCon = getJedisConnection();
jedisCon.select(REDIS_DB);
jedisCon.set(key, value);
} catch (Exception e) {
e.printStackTrace();
} finally {
if (jedisCon != null) {
returnJedis(jedisCon);
}
}
}
public static void main(String[] args) {
// TODO Auto-generated method stub
System.out.println("Value : " + getValue("a"));
System.out.println("Value : " + getValue("b"));
System.out.println("Value : " + getValue("c"));
}
}
I can see the debug log Pool initialised successfully multiple times when my program runs. Can someone help me find the flaw here, or show how to make this better (i.e. behave as expected) by initialising the pool only once for the entire program?
Looks like a basic multithreading case: your app asks for several connections in a short time, all of them see that pool == null, and each proceeds to initialize it.
Easy solution: declare the method as public static synchronized void initRedisClient() throws Exception, and also change the field to private static volatile JedisPool pool; otherwise you may still get a NullPointerException.
For more complex and performant solutions, search for 'efficient lazy singleton in java', which will most probably lead you to the enum solution.
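A minimal sketch of that easy solution, keeping the field and method names from the question (only the initialization path is shown):

    private static volatile JedisPool pool; // volatile so all threads see the fully constructed pool

    public static synchronized void initRedisClient() {
        // synchronized: only one thread at a time can run the initialization,
        // and the re-check below makes later calls a no-op
        if (pool == null) {
            JedisPoolConfig jedisConfig = new JedisPoolConfig();
            jedisConfig.setMaxTotal(maxActiveConnections);
            jedisConfig.setMaxWaitMillis(maxWaitInMillis);
            pool = new JedisPool(jedisConfig, host, port);
            System.out.println("Pool initialised successfully !");
        }
    }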
I have 1.5 million records in my MySQL table. I'm trying to read all the records in a batch process, i.e. I plan to read 1000 records per batch and print those records to the console.
For this I'm planning to use multithreading in Java. How can I implement this?
In MySQL you get all records at once or you get them one by one in a streaming fashion (see this answer). Alternatively, you can use the limit keyword for chunking (see this answer).
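As an illustration of the LIMIT-based chunking approach, here is a rough sketch that pages on the auto-increment id; it assumes an open java.sql.Connection named conn, borrows the fetchrecords table and its id/created columns from the example further below, and omits error handling (it is not the answer's actual code):

    int batchSize = 1000;
    long lastId = 0;
    try (PreparedStatement ps = conn.prepareStatement(
            "select id, created from fetchrecords where id > ? order by id limit ?")) {
        while (true) {
            ps.setLong(1, lastId);
            ps.setInt(2, batchSize);
            int rows = 0;
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    lastId = rs.getLong("id");
                    System.out.println(lastId + " " + rs.getTimestamp("created"));
                    rows++;
                }
            }
            if (rows < batchSize) {
                break; // last (possibly partial) chunk processed
            }
        }
    }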
Whether you use streaming results or chunking, you can use multi-threading to process (or print) data while you read data. This is typically done using a producer-consumer pattern where, in this case, the producer retrieves data from the database, puts it on a queue and the consumer takes the data from the queue and processes it (e.g. print to the console).
There is a bit of administration overhead though: both producer and consumer can freeze or trip over an error and both need to be aware of this so that they do not hang forever (potentially freezing your application). This is where "reasonable" timeouts come in ("reasonable" depends entirely on what is appropriate in your situation).
I have tried to put this in a minimal running example, but it is still a lot of code (see below). There are two commented lines that can be used to test the timeout-case. There is also a refreshTestData variable that can be used to re-use inserted records (inserting records can take a long time).
To keep it clean, a lot of keywords like private/public are omitted (i.e. these need to be added in non-demo code).
import java.sql.*;
import java.util.*;
import java.util.concurrent.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class FetchRows {
private static final Logger log = LoggerFactory.getLogger(FetchRows.class);
public static void main(String[] args) {
try {
new FetchRows().print();
} catch (Exception e) {
e.printStackTrace();
}
}
void print() throws Exception {
Class.forName("com.mysql.jdbc.Driver").newInstance();
Properties dbProps = new Properties();
dbProps.setProperty("user", "test");
dbProps.setProperty("password", "test");
try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", dbProps)) {
try (Statement st = conn.createStatement()) {
prepareTestData(st);
}
// https://stackoverflow.com/a/2448019/3080094
try (Statement st = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
java.sql.ResultSet.CONCUR_READ_ONLY)) {
st.setFetchSize(Integer.MIN_VALUE);
fetchAndPrintTestData(st);
}
}
}
boolean refreshTestData = true;
int maxRecords = 5_555;
void prepareTestData(Statement st) throws SQLException {
int recordCount = 0;
if (refreshTestData) {
st.execute("drop table if exists fetchrecords");
st.execute("create table fetchrecords (id mediumint not null auto_increment primary key, created timestamp default current_timestamp)");
for (int i = 0; i < maxRecords; i++) {
st.addBatch("insert into fetchrecords () values ()");
if (i % 500 == 0) {
st.executeBatch();
log.debug("{} records available.", i);
}
}
st.executeBatch();
recordCount = maxRecords;
} else {
try (ResultSet rs = st.executeQuery("select count(*) from fetchrecords")) {
rs.next();
recordCount = rs.getInt(1);
}
}
log.info("{} records available for testing.", recordCount);
}
int batchSize = 1_000;
int maxBatchesInMem = 3;
int printFinishTimeoutS = 5;
void fetchAndPrintTestData(Statement st) throws SQLException, InterruptedException {
final BlockingQueue<List<FetchRecordBean>> printQueue = new LinkedBlockingQueue<List<FetchRecordBean>>(maxBatchesInMem);
final PrintToConsole printTask = new PrintToConsole(printQueue);
new Thread(printTask).start();
try (ResultSet rs = st.executeQuery("select * from fetchrecords")) {
List<FetchRecordBean> l = new LinkedList<>();
while (rs.next()) {
FetchRecordBean bean = new FetchRecordBean();
bean.setId(rs.getInt("id"));
bean.setCreated(new java.util.Date(rs.getTimestamp("created").getTime()));
l.add(bean);
if (l.size() % batchSize == 0) {
/*
* The printTask can stop itself when this producer is too slow to put records on the print-queue.
* Therefore, also check printTask.isStopping() to break the while-loop.
*/
if (printTask.isStopping()) {
throw new TimeoutException("Print task has stopped.");
}
enqueue(printQueue, l);
l = new LinkedList<>();
}
}
if (l.size() > 0) {
enqueue(printQueue, l);
}
} catch (TimeoutException | InterruptedException e) {
log.error("Unable to finish printing records to console: {}", e.getMessage());
printTask.stop();
} finally {
log.info("Reading records finished.");
if (!printTask.isStopping()) {
try {
enqueue(printQueue, Collections.<FetchRecordBean> emptyList());
} catch (Exception e) {
log.error("Unable to signal last record to print.", e);
printTask.stop();
}
}
if (!printTask.await(printFinishTimeoutS, TimeUnit.SECONDS)) {
log.error("Print to console task did not finish.");
}
}
}
int enqueueTimeoutS = 5;
// To test a slow printer, see also Thread.sleep statement in PrintToConsole.print.
// int enqueueTimeoutS = 1;
void enqueue(BlockingQueue<List<FetchRecordBean>> printQueue, List<FetchRecordBean> l) throws InterruptedException, TimeoutException {
log.debug("Adding {} records to print-queue.", l.size());
if (!printQueue.offer(l, enqueueTimeoutS, TimeUnit.SECONDS)) {
throw new TimeoutException("Unable to put print data on queue within " + enqueueTimeoutS + " seconds.");
}
}
int dequeueTimeoutS = 5;
class PrintToConsole implements Runnable {
private final BlockingQueue<List<FetchRecordBean>> q;
private final CountDownLatch finishedLock = new CountDownLatch(1);
private volatile boolean stop;
public PrintToConsole(BlockingQueue<List<FetchRecordBean>> q) {
this.q = q;
}
@Override
public void run() {
try {
while (!stop) {
List<FetchRecordBean> l = q.poll(dequeueTimeoutS, TimeUnit.SECONDS);
if (l == null) {
log.error("Unable to get print data from queue within {} seconds.", dequeueTimeoutS);
break;
}
if (l.isEmpty()) {
break;
}
print(l);
}
if (stop) {
log.error("Printing to console was stopped.");
}
} catch (Exception e) {
log.error("Unable to print records to console.", e);
} finally {
if (!stop) {
stop = true;
log.info("Printing to console finished.");
}
finishedLock.countDown();
}
}
void print(List<FetchRecordBean> l) {
log.info("Got list with {} records from print-queue.", l.size());
// To test a slow printer, see also enqueueTimeoutS.
// try { Thread.sleep(1500L); } catch (Exception ignored) {}
}
public void stop() {
stop = true;
}
public boolean isStopping() {
return stop;
}
public void await() throws InterruptedException {
finishedLock.await();
}
public boolean await(long timeout, TimeUnit tunit) throws InterruptedException {
return finishedLock.await(timeout, tunit);
}
}
class FetchRecordBean {
private int id;
private java.util.Date created;
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public java.util.Date getCreated() {
return created;
}
public void setCreated(java.util.Date created) {
this.created = created;
}
}
}
Dependencies:
mysql:mysql-connector-java:5.1.38
org.slf4j:slf4j-api:1.7.20 (and to get logging shown in console: ch.qos.logback:logback-classic:1.1.7 with ch.qos.logback:logback-core:1.1.7)
I'm having to dabble with caching and multithreading (thread per request), and I am an absolute beginner in that area, so any help would be appreciated.
My requirements are:
Cache one single large object that has either an interval-based refresh or a user-triggered refresh
Because retrieving the object's data is very time-consuming, make it thread-safe
When retrieving object data, return the "old" data until new data is available
Optimize it
From SO and some other user help I have this ATM:
** Edited with Sandeep's and Kayaman's advice **
public enum MyClass
{
INSTANCE;
// caching field
private CachedObject cached = null;
private AtomicLong lastVisistToDB = new AtomicLong();
private long refreshInterval = 1000 * 60 * 5;
private CachedObject createCachedObject()
{
return new CachedObject();
}
public CachedObject getCachedObject()
{
if( ( System.currentTimeMillis() - this.lastVisistToDB.get() ) > this.refreshInterval)
{
synchronized( this.cached )
{
if( ( System.currentTimeMillis() - this.lastVisistToDB.get() ) > this.refreshInterval)
{
this.refreshCachedObject();
}
}
}
return this.cached;
}
public void refreshCachedObject()
{
// This is to prevent threads waiting on synchronized from re-refreshing the object
this.lastVisistToDB.set(System.currentTimeMillis());
new Thread()
{
public void run()
{
createCachedObject();
// Update the actual refresh time
lastVisistToDB.set(System.currentTimeMillis());
}
}.start();
}
}
In my opinion my code meets all of the requirements written above (but I'm not sure).
With the code soon going to third-party analysis, I would really appreciate any input on its performance and blind spots.
Thanks for your help.
EDIT: VanOekel's answer IS the solution, because my code (edited with Sandeep's and Kayaman's advice) doesn't account for the impact of a user-triggered refresh() in this multi-threaded environment.
Instead of DCL as proposed by Sandeep, I'd use the enum Singleton pattern, as it's the best way for lazy-init singletons these days (and looks nicer than DCL).
There's a lot of unnecessary variables and code being used, I'd simplify it a lot.
private static Object cachedObject;
private AtomicLong lastTime = new AtomicLong();
private long refreshPeriod = 1000;
public Object get() {
if(System.currentTimeMillis() - lastTime.get() > refreshPeriod) {
synchronized(cachedObject) {
if(System.currentTimeMillis() - lastTime.get() > refreshPeriod) {
lastTime.set(System.currentTimeMillis()); // This is to prevent threads waiting on synchronized from re-refreshing the object
new Thread() {
public void run() {
cachedObject = refreshObject(); // Get from DB
lastTime.set(System.currentTimeMillis()); // Update the actual refresh time
}
}.start();
}
}
}
return cachedObject;
}
Speedwise that could still be improved a bit, but a lot of unnecessary complexity is reduced. Repeated calls to System.currentTimeMillis() could be removed, as well as setting lastTime twice. But, let's start off with this.
You should put in double-checked locking in getInstance().
Also, you might want to keep just one volatile cached object; in getAndRefreshCachedObject(), and wherever else it's refreshed, you could calculate the new data and then assign it to that cached object in a synchronized way.
That way the code looks smaller, and you don't need to maintain the loadInProgress and oldCached variables.
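A rough sketch of that idea, reusing createCachedObject() from the question; it is illustrative only and ignores the interval/refresh bookkeeping:

    private volatile CachedObject cached; // readers always see the latest fully built object

    public CachedObject getCachedObject() {
        return cached; // may briefly return the old data while a refresh is running
    }

    public synchronized void refreshCachedObject() {
        // build the new data first, then publish it with a single volatile write
        CachedObject fresh = createCachedObject();
        cached = fresh;
    }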
I arrive at a somewhat different solution when taking into account the "random" refresh triggered by a user. Also, I think the first fetch should wait for the cache to be filled (i.e. wait for first cached object to be created). And, finally, there should be some (unit) tests to verify the cache works as intended and is thread-safe.
First the cache implementation:
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
// http://stackoverflow.com/q/31338509/3080094
public enum DbCachedObject {
INSTANCE;
private final CountDownLatch initLock = new CountDownLatch(1);
private final Object refreshLock = new Object();
private final AtomicReference<CachedObject> cachedInstance = new AtomicReference<CachedObject>();
private final AtomicLong lastUpdate = new AtomicLong();
private volatile boolean refreshing;
private long cachePeriodMs = 1000L; // make this an AtomicLong if it can be updated
public CachedObject get() {
CachedObject o = cachedInstance.get();
if (o == null || isCacheOutdated()) {
updateCache();
if (o == null) {
awaitInit();
o = cachedInstance.get();
}
}
return o;
}
public void refresh() {
updateCache();
}
private boolean isCacheOutdated() {
return (System.currentTimeMillis() - lastUpdate.get() > cachePeriodMs);
}
private void updateCache() {
synchronized (refreshLock) {
// prevent users from refreshing while an update is already in progress
if (refreshing) {
return;
}
refreshing = true;
// prevent other threads from calling this method again
lastUpdate.set(System.currentTimeMillis());
}
new Thread() {
@Override
public void run() {
try {
cachedInstance.set(getFromDb());
// set the 'real' last update time
lastUpdate.set(System.currentTimeMillis());
initLock.countDown();
} finally {
// make sure refreshing is set to false, even in case of error
refreshing = false;
}
}
}.start();
}
private boolean awaitInit() {
boolean initialized = false;
try {
// assume the cache period is longer than the time it takes to create the cached object
initialized = initLock.await(cachePeriodMs, TimeUnit.MILLISECONDS);
} catch (Exception e) {
e.printStackTrace();
}
return initialized;
}
private CachedObject getFromDb() {
// dummy call, no db is involved
return new CachedObject();
}
public long getCachePeriodMs() {
return cachePeriodMs;
}
}
Second the cached object with a main-method that tests the cache implementation:
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
public class CachedObject {
private static final AtomicInteger createCount = new AtomicInteger();
static final long createTimeMs = 100L;
private final int instanceNumber = createCount.incrementAndGet();
public CachedObject() {
println("Creating cached object " + instanceNumber);
try {
Thread.sleep(createTimeMs);
} catch (Exception ignored) {}
println("Cached object " + instanceNumber + " created");
}
public int getInstanceNumber() {
return instanceNumber;
}
@Override
public String toString() {
return getClass().getSimpleName() + "-" + getInstanceNumber();
}
private static final long startTime = System.currentTimeMillis();
/**
* Test the use of DbCachedObject.
*/
public static void main(String[] args) {
ThreadPoolExecutor tp = (ThreadPoolExecutor) Executors.newCachedThreadPool();
final int tcount = 2; // number of tasks running in parallel
final long threadStartGracePeriodMs = 50L; // starting runnables takes time
try {
// verify first calls wait for initialization of first cached object
fetchCacheTasks(tp, tcount, createTimeMs + threadStartGracePeriodMs);
// verify immediate return of cached object
CachedObject o = DbCachedObject.INSTANCE.get();
println("Cached: " + o);
// wait for refresh-period
Thread.sleep(DbCachedObject.INSTANCE.getCachePeriodMs() + 1);
// trigger update
o = DbCachedObject.INSTANCE.get();
println("Triggered update for " + o);
// wait for update to complete
Thread.sleep(createTimeMs + 1);
// verify updated cached object is returned
fetchCacheTasks(tp, tcount, threadStartGracePeriodMs);
// trigger update
DbCachedObject.INSTANCE.refresh();
// wait for update to complete
Thread.sleep(createTimeMs + 1);
println("Refreshed: " + DbCachedObject.INSTANCE.get());
} catch (Exception e) {
e.printStackTrace();
} finally {
tp.shutdownNow();
}
}
private static void fetchCacheTasks(ThreadPoolExecutor tp, int tasks, long doneWaitTimeMs) throws Exception {
final CountDownLatch fetchStart = new CountDownLatch(tasks);
final CountDownLatch fetchDone = new CountDownLatch(tasks);
// println("Starting " + tasks + " tasks");
for (int i = 0; i < tasks; i++) {
final int r = i;
tp.execute(new Runnable() {
@Override public void run() {
fetchStart.countDown();
try { fetchStart.await();} catch (Exception ignored) {}
CachedObject o = DbCachedObject.INSTANCE.get();
println("Task " + r + " got " + o);
fetchDone.countDown();
}
});
}
println("Awaiting " + tasks + " tasks");
if (!fetchDone.await(doneWaitTimeMs, TimeUnit.MILLISECONDS)) {
throw new RuntimeException("Fetch cached object tasks incomplete.");
}
}
private static void println(String msg) {
System.out.println((System.currentTimeMillis() - startTime) + " " + msg);
}
}
The tests in the main-method need human eyes to verify the results, but they should provide sufficient input for unit tests. Once the unit tests are more refined, the cache implementation will probably need some finishing touches as well.
I have a problem with my login application in Java and Flex. We use fingerprint login. The system waits 60 seconds for any fingerprint input from the user; after that it automatically leaves the page. The user also has a text-password option on that page, and when the user clicks it, control goes to another page. The problem is that when the user clicks the text-password option, he is redirected but the 60-second thread keeps running. Can anyone help me stop that thread? Here is my code. I am using a blocking-queue approach to get out of the input screen by putting a dummy one-byte sample on the queue.
private void interruptCaptureProcess() {
System.out.println("Interrupting Capture Process.");
ExactScheduledRunnable fingerScanInterruptThread = new ExactScheduledRunnable()
{
public void run()
{
try
{
if (capture != null)
{
DPFPSampleFactoryImpl test = new DPFPSampleFactoryImpl();
samples.put(test.createSample(new byte[1]));
capture.stopCapture();
}
}
catch (Exception e)
{
LOGGER.error("interruptCaptureProcess", e);
e.printStackTrace();
}
}
};
timeOutScheduler.schedule(fingerScanInterruptThread, getTimeOutValue(), TimeUnit.SECONDS);
}
/**
* Scans and verifies the user's fingerprint by matching it against the previously registered template for the user.
*
* @param userId the id of the user whose fingerprint has to be verified.
* @return the acknowledgment string according to the result of the operation performed.
* @throws LoginServiceException when there is an error while getting the user record.
*/
public String verifyUserFingerPrint(Long userId) throws LoginServiceException {
System.out.println("Performing fingerprint verification...\n");
interruptCaptureProcess();
UserVO userVO = null;
try {
userVO = new UserService().findUserById(userId, true);
if (userVO != null) {
stopCaptureProcess();
DPFPSample sample = getSample(selectReader(), "Scan your finger\n");
timeOutScheduler.shutdownNow();
if (sample.serialize().length == 1) {
System.out.println("Coming in code");
return null;
} else if (sample.serialize().length == 2) {
System.out.println("Capturing Process has been Timed-Out");
return TIMEOUT;
}
if (sample == null)
throw new UserServiceException("Error in scanning finger");
DPFPFeatureExtraction featureExtractor = DPFPGlobal.getFeatureExtractionFactory()
.createFeatureExtraction();
DPFPFeatureSet featureSet = featureExtractor.createFeatureSet(sample,
DPFPDataPurpose.DATA_PURPOSE_VERIFICATION);
DPFPVerification matcher = DPFPGlobal.getVerificationFactory().createVerification();
matcher.setFARRequested(DPFPVerification.MEDIUM_SECURITY_FAR);
byte[] tempByte = userVO.getFingerPrint();
DPFPTemplateFactory factory = new DPFPTemplateFactoryImpl();
for (DPFPFingerIndex finger : DPFPFingerIndex.values()) {
DPFPTemplate template = factory.createTemplate(tempByte);
if (template != null) {
DPFPVerificationResult result = matcher.verify(featureSet, template);
// Fix of enh#1029
Map<ScriptRxConfigType, Map<ScriptRxConfigName, String>> scriptRxConfigMap = ScriptRxConfigMapSingleton
.getInstance().getScriptRxConfigMap();
Map<ScriptRxConfigName, String> fingerPrintPropertiesMap = scriptRxConfigMap
.get(ScriptRxConfigType.FINGERPRINT);
String fingerPrintDemoMode = fingerPrintPropertiesMap.get(ScriptRxConfigName.DEMOMODE);
if (fingerPrintDemoMode != null && fingerPrintDemoMode.equalsIgnoreCase("DemoEnabled")) {
return "LOGS_MSG_101";
}
// End of fix of enh#1029
if (result.isVerified()) {
System.out.println("Matching finger: %s, FAR achieved: %g.\n" + fingerName(finger)
+ (double) result.getFalseAcceptRate() / DPFPVerification.PROBABILITY_ONE);
return "LOGS_MSG_101";
}
}
}
}
} catch (IndexOutOfBoundsException iob) {
LOGGER.error("verifyUserFingerPrint", iob);
throw new LoginServiceException("LOGS_ERR_101", iob);
} catch (Exception exp) {
LOGGER.error("verifyUserFingerPrint", exp);
System.out.println("Failed to perform verification.");
throw new LoginServiceException("LOGS_ERR_105", exp);
} catch (Throwable th) {
LOGGER.error("verifyUserFingerPrint", th);
throw new LoginServiceException("LOGS_ERR_106", th.getMessage(), th);
}
System.out.println("No matching fingers found for \"%s\".\n" + userVO.getFirstName().toUpperCase());
throw new LoginServiceException("LOGS_ERR_107", null);
}
/* finger scanning process
*/
private void stopCaptureProcess() {
ExactScheduledRunnable fingerScanInterruptThread = new ExactScheduledRunnable() {
public void run() {
try {
DPFPSampleFactoryImpl test = new DPFPSampleFactoryImpl();
samples.put(test.createSample(new byte[2]));
capture.stopCapture();
} catch (Throwable ex) {
ex.printStackTrace();
}
}
};
timeOutScheduler.schedule(fingerScanInterruptThread, getTimeOutValue(), TimeUnit.SECONDS);
}
/**
* API will get the value for the finger scanner time out configuration(Default will be 60 seconds)
*/
private long getTimeOutValue() {
long waitTime = 60;
String configValue = ScriptRxSingleton.getInstance().getConfigurationValue(ConfigType.Security,
ConfigName.FingerprintTimeout);
try {
waitTime = Long.valueOf(configValue);
} catch (NumberFormatException e) {
LOGGER.debug("Configuration value is not a number for FingerTimeOut", e);
}
return waitTime;
}
Stopping blocking tasks in Java is a complicated topic, and requires cooperation between the blocking code and the code that wants to unblock it. The most common way in Java is to interrupt the thread that is blocking, which works if the code that is blocking and the code around it understands interruption. If that's not the case you're out of luck. Here's an answer that explains one way to interrupt a thread that is blocking in an Executor: https://stackoverflow.com/a/9281038/1109
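As a sketch of that idea (illustrative names only; it helps only if the scanning code actually reacts to interruption):

    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<?> scanTask = executor.submit(() -> {
        // the blocking fingerprint-capture call would run here
    });

    // later, e.g. when the user switches to the text-password option:
    scanTask.cancel(true); // true = interrupt the thread running the task
    executor.shutdown();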
I'm using a third party Java library to interact with a REST API. The REST API can sometimes take a long time to respond, eventually resulting in a java.net.ConnectException being thrown.
I'd like to shorten the timeout period but have no means of modifying the third party library.
I'd like to apply some form of timeout control around the calling of a Java method so that I can determine at what point to give up waiting.
This doesn't relate directly to network timeouts. I'd like to be able to try and perform an operation and be able to give up after a specified wait time.
The following is by no means valid Java but does conceptually demonstrate what I'd like to achieve:
try {
Entity entity = new Entity();
entity.methodThatMakesUseOfRestApi();
} catch (<it's been ages now, I don't want to wait any longer>) {
throw TimeoutException();
}
I recommend TimeLimiter from Google Guava library.
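A sketch of how that might look, assuming a reasonably recent Guava version (older releases have a slightly different callWithTimeout signature) and assuming methodThatMakesUseOfRestApi() returns a value; imports come from com.google.common.util.concurrent and java.util.concurrent:

    ExecutorService executor = Executors.newCachedThreadPool(); // should be shared/reused, not created per call
    TimeLimiter limiter = SimpleTimeLimiter.create(executor);
    try {
        Entity entity = new Entity();
        // give up if the call has not returned within 10 seconds
        limiter.callWithTimeout(() -> entity.methodThatMakesUseOfRestApi(), 10, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
        // waited long enough, give up
    } catch (ExecutionException | InterruptedException e) {
        // the call itself failed, or we were interrupted while waiting
    }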
This is probably the current way to do this with plain Java:
public String getResult(final RESTService restService, String url) throws TimeoutException {
// should be a field, not a local variable
ExecutorService threadPool = Executors.newCachedThreadPool();
// Java 8:
Callable<String> callable = () -> restService.getResult(url);
// Java 7:
// Callable<String> callable = new Callable<String>() {
// @Override
// public String call() throws Exception {
// return restService.getResult(url);
// }
// };
Future<String> future = threadPool.submit(callable);
try {
// throws a TimeoutException after 1000 ms
return future.get(1000, TimeUnit.MILLISECONDS);
} catch (ExecutionException e) {
throw new RuntimeException(e.getCause());
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new TimeoutException();
}
}
There is no general timeout mechanism valid for arbitrary operations.
While... there is one... by using Thread.stop(Throwable). It works and it's thread safe, but your personal safety is in danger when the angry mob confronts you.
// realizable
try
{
setTimeout(1s); // 1
... any code // 2
cancelTimeout(); // 3
}
catch(TimeoutException te)
{
// if (3) isn't executed within 1s after (1)
// we'll get this exception
}
Now that we have the nice CompletableFuture, here is an application of it to achieve what was asked:
CompletableFuture.supplyAsync(this::foo).get(15, TimeUnit.SECONDS)
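Slightly expanded, with the timeout handled explicitly (foo is assumed here to return a String, as in the one-liner above):

    CompletableFuture<String> future = CompletableFuture.supplyAsync(this::foo);
    try {
        String result = future.get(15, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
        // give up waiting; note that cancel(true) does not interrupt the thread still running foo()
        future.cancel(true);
    } catch (InterruptedException | ExecutionException e) {
        // interrupted while waiting, or foo() itself threw
    }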
You could use a Timer and a TimerTask.
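For instance, a sketch of that idea (hypothetical usage; like the other approaches, it only works if the blocked call reacts to interruption):

    Timer timer = new Timer(true); // daemon thread for the timeout
    final Thread caller = Thread.currentThread();
    TimerTask interrupter = new TimerTask() {
        @Override
        public void run() {
            caller.interrupt(); // wake the caller up if it is still blocked
        }
    };
    timer.schedule(interrupter, 10_000L); // 10-second deadline
    try {
        entity.methodThatMakesUseOfRestApi();
    } finally {
        interrupter.cancel(); // finished (or failed) in time, so do not interrupt later
    }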
Here's a utility class I wrote, which should do the trick unless I've missed something. Unfortunately it can only return generic Objects and throw generic Exceptions. Others may have better ideas on how to achieve this.
public abstract class TimeoutOperation {
long timeOut = -1;
String name = "Timeout Operation";
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public long getTimeOut() {
return timeOut;
}
public void setTimeOut(long timeOut) {
this.timeOut = timeOut;
}
public TimeoutOperation (String name, long timeout) {
this.name = name;
this.timeOut = timeout;
}
private Throwable throwable;
private Object result;
private long startTime;
public Object run () throws TimeoutException, Exception {
Thread operationThread = new Thread (getName()) {
public void run () {
try {
result = doOperation();
} catch (Exception ex) {
throwable = ex;
} catch (Throwable uncaught) {
throwable = uncaught;
}
synchronized (TimeoutOperation.this) {
TimeoutOperation.this.notifyAll();
}
}
public synchronized void start() {
super.start();
}
};
operationThread.start();
startTime = System.currentTimeMillis();
synchronized (this) {
while (operationThread.isAlive() && (getTimeOut() == -1 || System.currentTimeMillis() < startTime + getTimeOut())) {
try {
wait (1000L);
} catch (InterruptedException ex) {}
}
}
if (throwable != null) {
if (throwable instanceof Exception) {
throw (Exception) throwable;
} else if (throwable instanceof Error) {
throw (Error) throwable;
}
}
if (result != null) {
return result;
}
if (System.currentTimeMillis() > startTime + getTimeOut()) {
throw new TimeoutException("Operation '"+getName()+"' timed out after "+getTimeOut()+" ms");
} else {
throw new Exception ("No result, no exception, and no timeout!");
}
}
public abstract Object doOperation () throws Exception;
public static void main (String [] args) throws Throwable {
Object o = new TimeoutOperation("Test timeout", 4900) {
public Object doOperation() throws Exception {
try {
Thread.sleep (5000L);
} catch (InterruptedException ex) {}
return "OK";
}
}.run();
System.out.println(o);
}
}
static final int NUM_TRIES = 4;
int tried = 0;
boolean result = false;
while (tried < NUM_TRIES && !result)
{
try {
Entity entity = new Entity();
result = entity.methodThatMakesUseOfRestApi();
}
catch (<it's been ages now, I don't want to wait any longer>) {
if (tried == NUM_TRIES - 1)
{
throw new TimeoutException();
}
}
tried++;
Thread.sleep(4000);
}