I am setting up a simulator (for testing) of a server (Radius) which sends queries to another server (LDAP) using threads.
The queries need to be executed on an x-per-second basis.
I am using a scheduled thread pool executor with callable for this purpose so that I can create callables and submit them to the thread pool for execution.
Each thread should open its own connection and use it to query.
The thing is that I want the connection to be re-used by the same thread every time it is used.
To clarify:
If I have, let's say, a thread pool of 20, I want 20 connections to be created and used (so I can send, say, 10,000 queries which will be processed in turn by the 20 threads/connections).
Now the (LDAP) server information to connect to is sent to the constructor of the callable and the callable sets up the connection for execution. Thereafter I retrieve the result using the future system of callable.
The problem with this is each time I create a callable the connection is being opened (and later closed of course).
I am looking for the best practice to keep the connections alive and them being re-used for each thread.
I have thought of some ways to implement this but they don't seem very efficient:
Use a connection pool inside my threadpool to retrieve a free connection when needed (Creates deadlock and other thread safety issues)
Use a static (or similar) array of connections and use the thread number to retrieve its connection (Not fool-proof either, see link)
What is the most efficient way of implementing this? <- old question, see edit for new question.
EDIT:
I was thinking: because I cannot safely get a thread number, but the thread ID is always unique, I can just use a
map<String/threadId, connection>
And pass the whole map (reference) to the callable. This way I can use something like: (pseudo code)
Connection con = map.get(this.getThreadId());
if (con == null) {
con = new Connection(...);
map.put(this.getThreadId(), con);
}
It would also be possible to make the map static and just access it statically. This way I don't have to pass the map to the Callable.
This would be at least safe and doesn't force me to restructure my code.
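Concretely, something like this (using ConcurrentHashMap so two threads can't race on the first lookup; Connection here stands in for whatever LDAP connection class is actually in use):

// shared, static map: one connection per worker thread (sketch only)
private static final ConcurrentHashMap<Long, Connection> CONNECTIONS = new ConcurrentHashMap<>();

// inside the Callable's call() method
Connection con = CONNECTIONS.computeIfAbsent(
        Thread.currentThread().getId(),
        id -> new Connection(/* LDAP server info */));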
New question:
What would be more closely aligned with best practices: the above solution or Zim-Zam's solution?
And if the above is best, would it be better to go for the static solution or not?
I would implement this using a BlockingQueue that is shared between Callables, with the ScheduledThreadPoolExecutor putting x queries into the BlockingQueue every second
public class Worker implements Runnable {
private final BlockingQueue<Query> inbox;
private final BlockingQueue<Result> outbox;
public Worker(BlockingQueue<Query> inbox, BlockingQueue<Result> outbox) {
// create LDAP connection
this.inbox = inbox;
this.outbox = outbox;
}
public void run() {
try {
while(true) {
// waits for a Query to be available
Query query = inbox.take();
// execute query
outbox.add(new Result(/* result */));
}
} catch(InterruptedException e) {
// log and restart? close LDAP connection and return?
}
}
}
public class Master {
private final int x; // number of queries per second
private final BlockingQueue<Query> outbox; // queries waiting for a worker
private final BlockingQueue<Result> inbox; // results waiting to be processed
private final ScheduledThreadPoolExecutor executor;
private final List<Future<?>> workers = new ArrayList<>(20);
private final Future<?> receiver;
public Master(int x) {
this.x = x;
outbox = new ArrayBlockingQueue<>(4 * x);
inbox = new ArrayBlockingQueue<>(4 * x);
executor = new ScheduledThreadPoolExecutor(22); // 20 workers + 1 receiver + 1 producer
for(int i = 0; i < 20; i++) {
Worker worker = new Worker(outbox, inbox); // workers read queries from outbox and write results to inbox
workers.add(executor.submit(worker));
}
receiver = executor.submit(new Runnable() {
public void run() {
while(!Thread.interrupted()) {
try {
Result result = inbox.take();
// process result
} catch(InterruptedException e) {
return;
}
}
}
});
executor.scheduleWithFixedDelay(new Runnable() {
public void run() {
// add x queries to the queue
}
}, 0, 1, TimeUnit.SECONDS);
}
}
Use BlockingQueue#add to add new Queries to outbox; if this throws an exception, then your queue is full and you'll need to reduce the rate of query creation and/or create more workers. To break out of a worker's infinite loop, call cancel(true) on its Future; this interrupts the worker's thread, so the blocking take() throws an InterruptedException inside the Worker.
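If you'd rather avoid relying on the exception, offer() returns false instead of throwing; the scheduled producer could look something like this (Query's constructor arguments are whatever your simulator needs):

// runs once per second; offer() fails fast instead of throwing when the queue is full
executor.scheduleWithFixedDelay(new Runnable() {
    public void run() {
        for (int i = 0; i < x; i++) {
            if (!outbox.offer(new Query(/* parameters */))) {
                // queue full: the workers can't keep up, so drop or log instead of crashing
            }
        }
    }
}, 0, 1, TimeUnit.SECONDS);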
Related
I have a thread pool with 8 threads
private static final ExecutorService SERVICE = Executors.newFixedThreadPool(8);
My mechanism emulates the work of 100 users (100 Tasks):
List<Callable<Boolean>> callableTasks = new ArrayList<>();
for (int i = 0; i < 100; i++) { // Number of users == 100
callableTasks.add(new Task(client));
}
SERVICE.invokeAll(callableTasks);
SERVICE.shutdown();
The user performs the Task of generating a document.
Get UUID of Task;
Get Task status every 10 seconds;
If Task is ready get document.
public class Task implements Callable<Boolean> {
private final ReportClient client;
public Task(ReportClient client) {
this.client = client;
}
@Override
public Boolean call() {
final var uuid = client.createDocument(documentId);
GetStatusResponse status = null;
do {
try {
Thread.sleep(10000); // This stops the current thread, but not the Task!!!!
} catch (InterruptedException e) {
return Boolean.FALSE;
}
status = client.getStatus(uuid);
} while (Status.PENDING.equals(status.status()));
final var document = client.getReport(uuid);
return Boolean.TRUE;
}
}
I want to give the idle time (10 seconds) to another task. But when Thread.sleep(10000) is called, the current thread suspends its execution. The first 8 Tasks are suspended and the other 92 Tasks are left pending for 10 seconds. How can I have 100 Tasks in progress at the same time?
The Answer by Yevgeniy looks correct, regarding Java today. You want to have your cake and eat it too, in that you want a thread to sleep before repeating a task but you also want that thread to do other work. That is not possible today, but may be in the future.
Project Loom
In current Java, a Java thread is mapped directly to a host OS thread. In all common OSes such as macOS, BSD, Linux, Windows, and such, when code executing in a host thread blocks (stops to wait for sleep, or storage I/O, or network I/O, etc.) the thread too blocks. The blocked thread suspends, and the host OS generally runs another thread on that otherwise unused core. But the crucial point is that the suspended thread performs no further work until your blocking call to sleep returns.
This picture may change in the not-so-distant future. Project Loom seeks to add virtual threads to the concurrency facilities in Java.
In this new technology, many Java virtual threads are mapped to each host OS thread. Juggling the many Java virtual threads is managed by the JVM rather than by the OS. When the JVM detects that a virtual thread's executing code is blocking, that virtual thread is "parked", set aside by the JVM, and another virtual thread is assigned to run on that "real" host OS thread. When the parked thread's blocking call returns, it can be reassigned to a "real" host OS thread for further execution. Under Project Loom, the host OS threads are kept busy, never idled while any pending virtual thread has work to do.
This swapping between virtual threads is highly efficient, so that thousands, even millions, of threads can be running at a time on conventional computer hardware.
Using virtual threads, your code will indeed work as you had hoped: A blocking call in Java will not block the host OS thread. But virtual threads are experimental, still in development, scheduled as a preview feature in Java 19. Early-access builds of Java 19 with Loom technology included are available now for you to try. But for production deployment today, you'll need to follow the advice in the Answer by Yevgeniy.
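For illustration only, the question's code could look roughly like this on a Java 19 early-access build with preview features enabled (the API may still change):

// Each task gets its own virtual thread, so Thread.sleep() parks the virtual thread
// without tying up an OS thread; all 100 tasks can be "in progress" at once.
// (InterruptedException handling omitted for brevity.)
try (ExecutorService service = Executors.newVirtualThreadPerTaskExecutor()) {
    List<Callable<Boolean>> callableTasks = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        callableTasks.add(new Task(client)); // the original Task, sleep and all
    }
    service.invokeAll(callableTasks);
}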
Take my coverage here with a grain of salt, as I am not an expert on concurrency. You can hear it from the actual experts, in the articles, interviews, and presentations by members of the Project Loom team including Ron Pressler and Alan Bateman.
EDIT: I just posted this answer and realized that you seem to be using that code to emulate real user interactions with some system. I would strongly recommend just using a load testing utility for that, rather than trying to come up with your own. However, in that case just using a CachedThreadPool might do the trick, although probably not a very robust or scalable solution.
Thread.sleep() behavior here is working as intended: it suspends the thread to let the CPU execute other threads.
Note that in this state a thread can be interrupted for a number of reasons unrelated to your code, and in that case your Task returns false: I'm assuming you actually have some retry logic down the line.
So you want two mutually exclusive things: on the one hand, if the document isn't ready, the thread should be free to do something else; on the other hand, it should somehow come back and check that document's status again in 10 seconds.
That means you have to choose:
Either you definitely need that once-every-10-seconds check for each document - in that case, maybe use a CachedThreadPool and have it generate as many threads as necessary; just keep in mind that you'll carry the overhead of numerous threads doing virtually nothing.
Or, you can first initiate that asynchronous document creation process and then only check for status in your callables, retrying as needed.
Something like:
public class Task implements Callable<Boolean> {
private final ReportClient client;
private final UUID uuid;
// all args constructor omitted for brevity
@Override
public Boolean call() {
GetStatusResponse status = client.getStatus(uuid);
if (Status.PENDING.equals(status.status())) {
return Boolean.FALSE; // still pending, retry next time
} else {
final var document = client.getReport(uuid);
return Boolean.TRUE;
}
}
}
List<Callable<Boolean>> callableTasks = new ArrayList<>();
for (int i = 0; i < 100; i++) {
var uuid = client.createDocument(documentId); //not sure where documentId comes from here in your code
callableTasks.add(new Task(client, uuid));
}
List<Future<Boolean>> results = SERVICE.invokeAll(callableTasks);
// retry logic until all results come back as `true` here
This assumes that createDocument is relatively efficient, but that stage can be parallelized just as well; you just need a separate list of Runnable tasks and to invoke them using the executor service.
Note that we also assume that the document's status will indeed eventually change to something other than PENDING, and that might very well not be the case. You might want to have a timeout for retries.
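That retry logic could be shaped something like this (a sketch; it assumes a false result means the document was still PENDING and that checked exceptions are dealt with by the caller):

static void awaitAllDocuments(ExecutorService service, List<Callable<Boolean>> tasks) throws Exception {
    List<Callable<Boolean>> remaining = new ArrayList<>(tasks);
    while (!remaining.isEmpty()) {
        List<Future<Boolean>> results = service.invokeAll(remaining);
        List<Callable<Boolean>> stillPending = new ArrayList<>();
        for (int i = 0; i < results.size(); i++) {
            if (!results.get(i).get()) {      // false: still PENDING, check again next round
                stillPending.add(remaining.get(i));
            }
        }
        remaining = stillPending;
        if (!remaining.isEmpty()) {
            Thread.sleep(10_000);             // wait 10 seconds before the next round of checks
        }
    }
}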
In your case, it seems like you need to check whether a certain condition is met every x seconds. In fact, from your code the document generation seems asynchronous, and what the Task keeps doing after that is just waiting for the document generation to happen.
You could launch every document generation from your Thread-Main and use a ScheduledThreadPoolExecutor to verify every x seconds whether the document generation has been completed. At that point, you retrieve the result and cancel the corresponding Task's scheduling.
Basically, one ConcurrentHashMap (mapRes) is shared between the thread-main and the Tasks you've scheduled, while the other, mapTasks, is just used locally within the thread-main to keep track of the ScheduledFuture returned by scheduling every Task.
public class Main {
public static void main(String[] args) {
ScheduledThreadPoolExecutor pool = (ScheduledThreadPoolExecutor) Executors.newScheduledThreadPool(8);
//ConcurrentHashMap shared between the thread-main and the scheduled Tasks, where each Task flips its entry to true as soon as its document has been produced
ConcurrentHashMap<String, Boolean> mapRes = new ConcurrentHashMap<>();
String uuid;
ScheduledFuture<?> schedFut;
//HashMap containing the ScheduledFuture returned by scheduling each Task, used to cancel its repetition as soon as the document has been produced
Map<String, ScheduledFuture<?>> mapTasks = new HashMap<>();
for (int i = 0; i < 100; i++) {
//Starting the document generation from the thread-main
uuid = client.createDocument(documentId);
mapRes.put(uuid, false);
//Re-checking each Task every 10 seconds, with an initial delay of i*10 ms so they don't all start at the same time
schedFut = pool.scheduleWithFixedDelay(new Task(client, uuid, mapRes), i * 10, 10000, TimeUnit.MILLISECONDS);
//Adding the ScheduledFuture to the map
mapTasks.put(uuid, schedFut);
}
//Keep checking the outcome of each task until all of them have been canceled due to completion
while (!mapTasks.values().stream().allMatch(v -> v.isCancelled())) {
for (String key : mapTasks.keySet()) {
//Canceling a task's scheduling if:
// - Its result is positive (i.e. its verification is terminated)
// - The task hasn't been canceled already
if (mapRes.get(key) && !mapTasks.get(key).isCancelled()) {
schedFut = mapTasks.get(key);
schedFut.cancel(true);
}
}
//... eventually adding a sleep to check the completion every x seconds ...
}
pool.shutdown();
}
}
class Task implements Runnable {
private final ReportClient client;
private final String uuid;
private final ConcurrentHashMap<String, Boolean> mapRes;
public Task(ReportClient client, String uuid, ConcurrentHashMap<String, Boolean> mapRes) {
this.client = client;
this.uuid = uuid;
this.mapRes = mapRes;
}
@Override
public void run() {
//This is taken from your code, and I'm assuming that if it's not pending then it's completed
if (!Status.PENDING.equals(client.getStatus(uuid).status())) {
mapRes.replace(uuid, true);
}
}
}
I've tested your case locally by emulating a scenario where n Tasks wait for a folder with their own id to be created (a uuid, in your case). I'll post it right here as a sample in case you'd like to try something simpler first.
public class Main {
public static void main(String[] args) {
ScheduledThreadPoolExecutor pool = (ScheduledThreadPoolExecutor) Executors.newScheduledThreadPool(2);
ConcurrentHashMap<Integer, Boolean> mapRes = new ConcurrentHashMap<>();
for (int i = 0; i < 16; i++) {
mapRes.put(i, false);
}
ScheduledFuture<?> schedFut;
Map<Integer, ScheduledFuture<?>> mapTasks = new HashMap<>();
for (int i = 0; i < 16; i++) {
schedFut = pool.scheduleWithFixedDelay(new MyTask(i, mapRes), i * 20, 3000, TimeUnit.MILLISECONDS);
mapTasks.put(i, schedFut);
}
while (!mapTasks.values().stream().allMatch(v -> v.isCancelled())) {
for (Integer key : mapTasks.keySet()) {
if (mapRes.get(key) && !mapTasks.get(key).isCancelled()) {
schedFut = mapTasks.get(key);
schedFut.cancel(true);
}
}
}
pool.shutdown();
}
}
class MyTask implements Runnable {
private int num;
private ConcurrentHashMap mapRes;
public MyTask(int num, ConcurrentHashMap mapRes) {
this.num = num;
this.mapRes = mapRes;
}
@Override
public void run() {
System.out.println("Task " + num + " is checking whether the folder exists: " + Files.exists(Path.of("./" + num)));
if (Files.exists(Path.of("./" + num))) {
mapRes.replace(num, true);
}
}
}
I am emulating a simple connection between a client and a server. The client requests are sent and the server processes them concurrently: the server class extends Thread and the task is run when the object is created.
The server is always open, listening for requests; when one arrives, an object is created using the socket as a parameter, and the task is then run as I said.
I am trying to measure the time it takes to process all the requests one client sends at once, but I can't manage to do it. With threads, pools and such I would usually take the initial time, take the time again when I know everything has finished and voilà (usually after a join or after checking that the pool is terminated).
But now I can't manage to know when all the tasks are done, because the server is always running.
Any ideas?
I'm going to try to sum up the code in case someone didn't understand:
import java.net.*;
import java.io.*;
public class MyServer extends Thread
{
Socket socket;
public MyServer(Socket s) { socket=s; this.start(); }
public void run()
{
// processing of the data sent by the client (just printing values)
// data is read properly, don't worry
try { socket.close(); } catch (IOException e) {}
}
public static void main(String[] args)
{
int port = 2001; // the same one the client is using
try
{
ServerSocket chuff = new ServerSocket(port, 3000);
while (true)
{
Socket connection = chuff.accept();
new MyServer(connection);
}
} catch (Exception e) {}
}
}
It's not clear from your question whether a client will (a) send more work down a single connection later, or (b) open multiple connections at once.
If it won't ever do either, then the processing of one connection is the unit of work to time (and in fact I think all you need to time is how long the thread is alive for).
If a client might do one of those things, then if you can, change your protocol so that clients send work in one single packet; then measure how long it takes to process one of those packets. This gives you an unambiguous definition of what you are actually measuring, the lack of which might be what is causing you problems here.
For each incoming connection, I would do it as follows (a rough sketch follows the list):
Hand over the connection to a Runnable class that performs the work.
Measure the time taken by the run method; at the end of the run method, prepare a Statistics object that contains the client details and the time taken to run, and post it to a LinkedBlockingQueue.
Have another thread poll this queue, extract the Statistics objects and update the database or data store where per-client run times are tracked.
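A rough sketch of that shape (class and field names here are purely illustrative):

import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;

class Statistics {
    final String client;
    final long elapsedMillis;
    Statistics(String client, long elapsedMillis) { this.client = client; this.elapsedMillis = elapsedMillis; }
}

class TimedHandler implements Runnable {
    private final Socket socket;
    private final BlockingQueue<Statistics> statsQueue;
    TimedHandler(Socket socket, BlockingQueue<Statistics> statsQueue) { this.socket = socket; this.statsQueue = statsQueue; }

    public void run() {
        long start = System.currentTimeMillis();
        try {
            // ... read and process the client's data, then close the socket ...
            socket.close();
        } catch (IOException e) {
            // log
        } finally {
            statsQueue.add(new Statistics(socket.getInetAddress().toString(), System.currentTimeMillis() - start));
        }
    }
}

A single consumer thread then takes Statistics objects off the LinkedBlockingQueue and records the per-client run times.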
If you want to be notified when no more connections are incoming, you must set a SO_TIMEOUT, otherwise accept() blocks forever. Timeouts are enabled by invoking ServerSocket.setSoTimeout(int).
To measure performance, each thread could update a shared variable with the time at which it completed its task. Mark this variable as volatile to keep the values visible across threads, and wait until all your threads have terminated and accept() has raised a java.net.SocketTimeoutException.
Note that you're then also measuring the network latency between the incoming requests; is this intended?
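Sketched against the question's accept loop (the 5-second timeout is an arbitrary choice):

try {
    ServerSocket chuff = new ServerSocket(port, 3000);
    chuff.setSoTimeout(5000); // accept() now gives up after 5 seconds with no new connection
    while (true) {
        Socket connection = chuff.accept();
        new MyServer(connection);
    }
} catch (SocketTimeoutException e) {
    // no client connected for 5 seconds: read the recorded completion times and report them
} catch (IOException e) {
    // other I/O problem
}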
I would highly recommend using an ExecutorService instead of creating a new Thread every time a client task is accepted.
If you want to check the time the server takes to perform a number of tasks, you can send the list of tasks in one go, as mentioned above, and use a CompletionService to measure the total time to complete all tasks (Runnables). Below is a sample test class showing how to capture the completion time:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.*;
public class ServerPerformanceTest {
public static void main(String[] args) {
System.out.println("Total time taken : " + totalTimeTaken(1000, 16));
}
public static long totalTimeTaken(final int taskCount, final int threadCount) {
//Mocking Dummy task send by client
Runnable clientTask = new Runnable() {
@Override
public void run() {
System.out.println("task done");
}
};
long startTime = System.currentTimeMillis();
//Prepare list of tasks for performance test
List<Runnable> tasks = Collections.nCopies(taskCount, clientTask);
ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
ExecutorCompletionService<String> completionService = new ExecutorCompletionService<String>(executorService);
//Submit all tasks
for (Runnable _task : tasks) {
completionService.submit(_task, "Done");
}
//Get from all Future tasks till all tasks completed
for (int i = 0; i < tasks.size(); i++) {
try {
completionService.take().get();
} catch (InterruptedException e) {
e.printStackTrace(); //do something
} catch (ExecutionException e) {
e.printStackTrace(); //do something
}
}
long endTime = System.currentTimeMillis();
return (endTime - startTime);
}
}
I have a web application that, on a single request, may need to load hundreds of data items. The problem is that the data is scattered, so I have to load it from several places, apply filters, process it and then respond. Performing all these operations sequentially makes the servlet slow!
So I have thought of loading all the data in separate threads, like t[i] = new Thread(loadData).start();, waiting for all threads to finish using while(i < count) t[i].join(); and, when done, joining the data and responding.
Now I am not sure if this approach is right or whether there is some better method. I have read somewhere that spawning threads in servlets is not advisable.
My desired code will look something like this.
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
{
List<?> requireddata = requiredData(request);
Thread[] t = new Thread[requireddata.size()];
int i = 0;
for (Object item : requireddata)
{
t[i] = new Thread(new loadData(item)); // loadData is my Runnable that fetches one piece of data
t[i].start();
i++;
}
try
{
for (i = 0; i < t.length; i++)
t[i].join();
}
catch (InterruptedException e) { /* ... */ }
// after getting the data process and respond!
}
The main problem is that you'll bring the server to its knees if many concurrent requests come in for your servlet, because you don't limit the number of threads that can be spawned. Another problem is that you keep creating new threads instead of reusing them, which is inefficient.
These two problems are solved easily by using a thread pool. And Java has native support for them. Read the tutorial.
Also, make sure to shutdown the thread pool when the webapp is shut down, using a ServletContextListener.
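A minimal sketch of such a listener (the pool size and attribute name are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class WorkerPoolListener implements ServletContextListener {
    private ExecutorService pool;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        pool = Executors.newFixedThreadPool(10); // caps the number of worker threads
        sce.getServletContext().setAttribute("workerPool", pool);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        pool.shutdown(); // stop the pool when the webapp is shut down
    }
}

The servlet can then fetch the pool with getServletContext().getAttribute("workerPool") and submit its loading tasks to it.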
Sounds like a problem for the CyclicBarrier.
For example:
ExecutorService executor = Executors.newFixedThreadPool(requireddata.size());
public <T extends Runnable> void executeAllAndAwaitCompletion(List<? extends T> threads)
throws InterruptedException, BrokenBarrierException {
final CyclicBarrier barrier = new CyclicBarrier(threads.size() + 1);
for (final T thread : threads) {
executor.submit(new Runnable() {
public void run() {
try {
//it is not a mistake to call run() here: we are already on a pool thread
thread.run();
barrier.await();
} catch (InterruptedException | BrokenBarrierException e) {
Thread.currentThread().interrupt();
}
}
});
}
//waits here until all submitted tasks have reached the barrier
barrier.await();
}
The final barrier.await() in the calling thread only returns once all the submitted tasks have finished.
Instead of calling Executors.newFixedThreadPool(requireddata.size()), it is better to reuse an existing thread pool.
You may consider using the Executor framework from the java.util.concurrent API. For example, you can create your computation task as a Callable and then submit that task to a ThreadPoolExecutor. Sample code from Java Concurrency in Practice:
public class Renderer {
private final ExecutorService executor;
Renderer(ExecutorService executor) { this.executor = executor; }
void renderPage(CharSequence source) {
final List<ImageInfo> info = scanForImageInfo(source);
CompletionService<ImageData> completionService =
new ExecutorCompletionService<ImageData>(executor);
for (final ImageInfo imageInfo : info)
completionService.submit(new Callable<ImageData>() {
public ImageData call() {
return imageInfo.downloadImage();
}
});
renderText(source);
try {
for (int t = 0, n = info.size(); t < n; t++) {
Future<ImageData> f = completionService.take();
ImageData imageData = f.get();
renderImage(imageData);
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} catch (ExecutionException e) {
throw launderThrowable(e.getCause());
}
}
}
Since you are waiting for all the threads to complete before providing the response, IMO multiple threads won't help if you are only burning CPU cycles; they will just increase the response time by adding context-switch overhead. A single thread would be better. However, if network/IO is involved, you can make use of a thread pool.
But you might want to reconsider your approach. Processing a huge amount of data synchronously in an HTTP request is not advisable; it will not be a good experience for the end user. What you can do is start a thread to process the data and respond immediately saying "It is processing". You can then give the web user some way to check the status whenever they want.
I created a workflow to wait for all the threads I created. This example works in 99% of cases, but sometimes the waitForAllDone method finishes before all the threads have completed. I know this because after waitForAllDone I close a stream that a created thread is still using, and then this exception occurs
Caused by: java.io.IOException: Stream closed
my thread start with:
@Override
public void run() {
try {
process();
} finally {
Factory.close(this);
}
}
closing:
protected static void close(final Client client) {
clientCount--;
}
when I creating thread I call this:
public RobWSClient getClient() {
clientCount++;
return new Client();
}
and clientCount variable inside factory:
private static volatile int clientCount = 0;
wait:
public void waitForAllDone() {
try {
while (clientCount > 0) {
Thread.sleep(10);
}
} catch (InterruptedException e) {
LOG.error("Error", e);
}
}
You need to protect the modification and reading of clientCount via synchronized. The main issue is that clientCount-- and clientCount++ are NOT an atomic operation and therefore two threads could execute clientCount-- / clientCount++ and end up with the wrong result.
Simply using volatile as you do above would ONLY work if ALL operations on the field were atomic. Since they are not, you need to use some locking mechanism. As Anton states, AtomicInteger is an excellent choice here. Note that the AtomicInteger field itself should be final (or volatile) so that every thread sees the same instance.
That being said, the general rule post Java 1.5 is to use an ExecutorService instead of raw Threads. Using this in conjunction with Guava's Futures class can make waiting for all tasks to complete as simple as:
Future<List<?>> future = Futures.successfulAsList(myFutureList);
future.get();
// all processes are complete
Futures.successfulAsList
I'm not sure that the rest of your code has no issues, but you can't atomically increment a volatile variable like this: clientCount++. Use AtomicInteger instead.
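For reference, the question's counter rewritten around java.util.concurrent.atomic.AtomicInteger might look like this (keeping the original structure and names):

// the same counter, but with atomic increments and decrements
private static final AtomicInteger clientCount = new AtomicInteger();

public RobWSClient getClient() {
    clientCount.incrementAndGet();
    return new Client();
}

protected static void close(final Client client) {
    clientCount.decrementAndGet();
}

public void waitForAllDone() {
    try {
        while (clientCount.get() > 0) {
            Thread.sleep(10);
        }
    } catch (InterruptedException e) {
        LOG.error("Error", e);
    }
}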
The best way to wait for threads to terminate, is to use one of the high-level concurrency facilities.
In this case, the easiest way would be to use an ExecutorService.
You would 'offer' a new task to the executor in this way:
...
ExecutorService executor = Executors.newFixedThreadPool(POOL_SIZE);
...
Client client = getClient(); //assuming Client implements runnable
executor.submit(client);
...
public void waitForAllDone() throws InterruptedException {
executor.shutdown(); // stop accepting new tasks
executor.awaitTermination(30, TimeUnit.SECONDS); // wait up to 30 seconds for all submitted tasks to finish
...
}
In this way, you don't waste valuable CPU cycles in busy waits or sleep/awake cycles.
See ExecutorService docs for details.
I just want to implement the following in Java. Does anyone have an idea how?
public String method1(){
//statement1
.
.
.
//statement 5
}
I want to set a timer for statement1 (which involves some network communication). If statement1 has not finished even after 25 seconds, control should go to statement 5. How can I implement this in Java?
You can make use of the java.util.TimerTask.
Extend TimerTask and override the run() method.
What you put in the run method is what should be executed every 25 seconds.
To start the timer do the following:
Timer tmer = new Timer("Network Timer",false);
ExtendedTimerTask extdTT = new ExtendedTimerTask(<params_go_here>)
tmer.schedule(extdTT,25000,25000);
You can pass the object which does the networking part at <params_go_here> and assign it to a local variable in your ExtendedTimerTask.
When the timer fires you can make the necessary calls on your <params_go_here> object to see if it's finished.
Please note that the checker will run in a separate thread, as java.util.TimerTask implements java.lang.Runnable
You could do something like this:
private volatile Object resultFromNetworkConnection;
public String method1(){
resultFromNetworkConnection = null;
new Thread(){
public void run(){
//statement1
.
.
.
// assign to result if the connection succeeds
}
}.start();
long start = System.currentTimeMillis();
while (System.currentTimeMillis() - start < 25 * 1000) {
if (resultFromNetworkConnection != null) break;
try { Thread.sleep(100); } catch (InterruptedException e) { break; }
}
// If result is not null, you can use it, otherwise, you can ignore it
//statement 5
}
If there is no time-out parameter for the blocking method at statement1, you would have to put statement1 in a separate thread, then wait(25000) for it to finish; if the wait times out, you go ahead with statement 5 and ignore the result of the blocking call.
Standard Java I/O operations (including network communication) are blocking, so you can configure a timeout for the particular network communication and you will get the desired behaviour. How exactly to configure the timeout depends on what you are using.
You mention network communication, so I'll give a rough example with an InputStream from a Socket with a timeout set that may apply to other classes. While you could make timer threads, this is simpler.
socket.setSoTimeout(25 * 1000);
try
{
data = readMyData(socket.getInputStream());
doStuff(data);
}
catch(SocketTimeoutException e){ }
doStatement5();
Here's is a pattern that you can use. The idea is to start a separate thread to do the network stuff. The "main" thread will wait for the adequate time and check a shared variable that indicates if the networking stuff did his job on time.
public class TestConstrainNetworkOP {
private Object lock = new Object();
private Object dataAvailable;
private Object constrainedNetworkOp() throws InterruptedException {
Thread t = new Thread(new DoTask());
t.start();
Thread.sleep(25000);
synchronized (lock) {
if (dataAvailable != null) {
//the data arrived on time
return dataAvailable;
}
else {
//data is not available:
//maybe throw a TimeoutException here instead
return null;
}
}
}
public class DoTask implements Runnable {
@Override
public void run() {
// do the networking
synchronized (lock) {
// save your data here
dataAvailable = new Long(1);
}
}
}
}
This is a useful pattern if you don't have much control over the network layer (e.g. RMI, EJB). If you are writing the network communication yourself, then you can set the timeout directly on the socket (as others said previously) or use Java NIO.
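If you do control the calling code, another sketch of the same idea uses an ExecutorService and Future.get with a timeout (networkCall() and statement5() are placeholders standing in for statement1 and statement 5):

import java.util.concurrent.*;

public class TimeoutExample {
    private static final ExecutorService EXEC = Executors.newSingleThreadExecutor();

    public String method1() {
        // run statement1 in the background
        Future<String> future = EXEC.submit(() -> networkCall());
        String result = null;
        try {
            result = future.get(25, TimeUnit.SECONDS);   // wait at most 25 seconds
        } catch (TimeoutException e) {
            future.cancel(true);                         // give up on the network call
        } catch (InterruptedException | ExecutionException e) {
            // handle as appropriate
        }
        return statement5(result);                       // statement 5 runs either way
    }

    private String networkCall() { /* ... */ return "data"; }
    private String statement5(String result) { /* ... */ return result == null ? "timed out" : result; }
}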