I have a state machine that is used by many threads to manage the states of a game. I need to be able to clean up the threads if something goes wrong and an instance of a game stops changing state for whatever reason.
How could I do this, given that I already have the state machine?
If I'm understanding you correctly, you want to detect when a game's state stops changing and, on that event, terminate the threads. You could do this in a number of ways; let me suggest one possible solution.
1 - A simple way to terminate your threads is to keep them in a list, array, or similar structure, so that you can identify which threads you want to terminate.
2 - Create a method such as terminateThreadList(...).
3 - To terminate the threads, use java.lang.Thread's interrupt() method: iterate over your thread list (or array, etc.) and call interrupt() on each one. Each thread should then let the resulting InterruptedException end its run() method (that is to say, do not catch this exception and keep looping).
4 - To check the game instance, create a watchdog thread (this one should not be in your terminate list) that checks the instance, records the last time its state changed, sleeps, and checks again. If the instance hasn't changed, it calls your terminateThreadList method. A sketch of these steps follows below.
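Here is a minimal sketch of steps 1-4. It assumes a hypothetical getLastStateChange() timestamp on your state machine; none of the names come from your code.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface GameStateMachine {
    long getLastStateChange(); // hypothetical: timestamp of the last state transition
}

public class GameThreadManager {

    // 1 - keep the threads you may need to terminate in a thread-safe list
    private final List<Thread> gameThreads = new CopyOnWriteArrayList<>();

    public void register(Thread t) {
        gameThreads.add(t);
    }

    // 2 & 3 - interrupt every registered thread; each thread should let the
    // resulting InterruptedException end its run() method instead of catching
    // it and looping again
    public void terminateThreadList() {
        for (Thread t : gameThreads) {
            t.interrupt();
        }
    }

    // 4 - watchdog thread (not registered above) that terminates the others
    // if the game instance has not changed state within maxIdleMillis
    public void startWatchdog(GameStateMachine game, long maxIdleMillis) {
        Thread watchdog = new Thread(() -> {
            long lastChange = game.getLastStateChange();
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(maxIdleMillis);
                } catch (InterruptedException e) {
                    return; // the watchdog itself was asked to stop
                }
                long current = game.getLastStateChange();
                if (current == lastChange) {
                    terminateThreadList(); // nothing changed: clean up the threads
                    return;
                }
                lastChange = current;
            }
        });
        watchdog.setDaemon(true);
        watchdog.start();
    }
}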
I hope this helps
I am working on a console game in Java that has 10 rounds of 2 minutes each. I need a way to use threads to alert the user every time 30 seconds have passed, and then to break the user out of the game loop in the main method after the two minutes are up.
How could I use threads to break the main loop out of one function and return the user to another one?
Thanks
You can either use a non-blocking API (one that, for example, returns the current state of the keyboard/mouse rather than blocking) in a loop, and simply not run the next iteration once the timeout has occurred.
Or you can interrupt the thread, if it is blocked/sleeping on a blocking API call, by calling that thread's interrupt() method:
refToThreadToInterrupt.interrupt();
This will generate an InterruptedException if the thread is blocked, or set the interrupt flag to true otherwise. The flag can be checked by the thread with:
refToThreadToInterrupt.isInterrupted();
As you have no idea what the thread is doing when you interrupt it, you don't know whether you will get the exception or whether the flag will simply be set to true, so you have to handle both cases.
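For example, here is a minimal sketch of the interrupt approach for this game (the 30-second alerts and the 2-minute cut-off come from the question; the loop body is a placeholder):

public class GameRound {

    public static void main(String[] args) {
        Thread gameThread = Thread.currentThread();

        // Alert the user every 30 seconds, then interrupt the game loop after 2 minutes
        Thread timer = new Thread(() -> {
            try {
                for (int elapsed = 30; elapsed < 120; elapsed += 30) {
                    Thread.sleep(30_000);
                    System.out.println(elapsed + " seconds have passed");
                }
                Thread.sleep(30_000);
            } catch (InterruptedException e) {
                return; // the round ended early
            }
            gameThread.interrupt(); // wakes the main thread if it is sleeping/blocked
        });
        timer.setDaemon(true);
        timer.start();

        // Main game loop: check the interrupt flag between iterations. Note that a
        // plain blocking read on System.in is not interruptible, so the loop body
        // should use short sleeps or a non-blocking way of polling input.
        while (!Thread.currentThread().isInterrupted()) {
            // read and process player input here
        }
        System.out.println("Round over");
    }
}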
While reading about Java's synchronized keyword, I wondered: if the processing has to be synchronized, why not just create a single thread (other than the main thread) and process things one by one, instead of creating multiple threads?
Because of 'synchronized', all the other threads just wait while a single thread runs. It seems like only one thread is doing work at any given time.
Please tell me what I'm missing.
I would really appreciate it if you could give some use cases.
I read an example about accessing a bank account from 2 ATM devices, but it made me more confused: I think the blocking (locking) should be done on the database side, and I think 'synchronized' would not work across multiple EC2 instances.
If my thinking is wrong, please correct me.
If all the code you run with several threads is within a synchronized block, then indeed it makes no difference vs. using a single thread.
However, in general your code contains parts that can run on several threads in parallel and parts that can't. The latter need synchronization, but not the former. By using several threads you can speed up the parallelisable bits.
Let's consider the following use case:
Your application is an internet browser game. Every player has a score and can click a button. Every time a player clicks the button, their score is increased and their opponent's is decreased. The first player to reach 10 wins.
Given the nature of the game, and in order to single out a unique winner, you have to perform the two counter updates (and the check for a winner) atomically.
Each player sends click events on their own thread, and every event is translated into an increase of that player's counter, a check on whether the counter has reached 10, and a decrease of the opponent's counter.
This is very easily done by synchronizing the method that modifies the counters: every concurrent thread will try to obtain the lock, and when a thread gets it, it executes the code (and finally releases the lock).
The locking mechanism is pretty lightweight and only requires a single keyword of code.
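For illustration, a minimal sketch of such a synchronized click handler (the class and field names are my own, not from the question):

public class Game {

    private final int[] scores = new int[2]; // two players, indices 0 and 1
    private int winner = -1;                 // -1 means no winner yet

    // Only one click is processed at a time, so the increase, the winner
    // check and the opponent's decrease happen atomically.
    public synchronized void handleClick(int player) {
        if (winner != -1) {
            return; // the game is already decided
        }
        int opponent = 1 - player;
        scores[player]++;
        scores[opponent]--;
        if (scores[player] >= 10) {
            winner = player;
        }
    }

    public synchronized int getWinner() {
        return winner;
    }
}

Each player's click events are delivered on that player's own thread, and both threads call handleClick() on the same Game instance.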
If we follow your suggestion and implement another thread to handle the execution, we'd have to implement the whole thread-management logic (more code) and initialize that thread (more resources), and even then, to guarantee fairness in the handling of events, you still need a way for your client threads to pass each event to the executor thread. The only way I see to do that is with a BlockingQueue, which is itself synchronized to prevent the race condition that naturally occurs when two other threads try to add elements at the same time.
I honestly don't see a way to solve this very simple use case without synchronization (or without implementing your own locking algorithm, which basically does the same thing).
You can have a single thread and process one-by-one (and this is done), but there are considerable overheads in doing so and it does not remove the need for synchronization.
You are in a situation where you are starting with multiple threads (for example, you have lots of simultaneous web sessions). You want to do part of the processing in a single thread - let's say updating some common structure with new data. You need to pass the new data to the single thread - how do you get it there? You would have to use some kind of message queue (or an equivalent), have the single thread pick requests off that queue, and that queue would have to be synchronized anyway. On top of that there is the overhead of managing the queue, plus the issue that you need to get a reply back from the single thread asynchronously. So you are back to square one; a sketch of what this looks like is shown below.
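To make the overhead concrete, here is a hedged sketch of that hand-rolled single-thread approach, using a BlockingQueue to ferry requests and a CompletableFuture for the asynchronous reply (all the names are placeholders):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

public class SingleThreadUpdater {

    // hypothetical request type: the new data plus a future used for the reply
    record Update(String data, CompletableFuture<Boolean> reply) {}

    private final BlockingQueue<Update> queue = new ArrayBlockingQueue<>(100);

    public SingleThreadUpdater() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Update u = queue.take();   // the queue itself is synchronized internally
                    applyToCommonStructure(u.data());
                    u.reply().complete(true);  // asynchronous reply back to the caller
                }
            } catch (InterruptedException e) {
                // shut down the worker
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // called from the many web-session threads
    public CompletableFuture<Boolean> submit(String data) throws InterruptedException {
        CompletableFuture<Boolean> reply = new CompletableFuture<>();
        queue.put(new Update(data, reply));
        return reply;
    }

    private void applyToCommonStructure(String data) {
        // update the shared structure here; only the worker thread ever touches it
    }
}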
This technique is used where the processing you need to do is considerable and you don't want to block your main threads for a long time.
In summary: having a single thread does not remove the need for synchronization.
In a web app I have a method that waits for another thread that generates reports. If the number of customers is less than 10, I wait for the thread (join); if it is greater than 10, I start the thread but without calling join, and when the thread finishes I notify the customer by e-mail.
I'm a little afraid about orphan threads with a long execution time and their impact on the server.
Is it OK to launch a "heavy" process in the background (asynchronously) without using the join method, or is there a better way to do it?
try {
    thread.start();
    if (flagSendEmail > 10) {
        return "{\"message\":\"success\", \"text\":\"you will be notified by email\"}";
    } else {
        thread.join(); // the customer waits until it finishes
    }
} catch (InterruptedException e) {
    LogError.saveErrorApp(e.getMessage(), e);
    return "{\"message\":\"danger\", \"text\":\"can't generate the reports\"}";
}
Orphan threads aren't the problem; simply make sure that the run() method has a finally block that sends out the email.
The problem is that you have no control over the number of threads and that's got nothing to do with calling join(). (Unless you always wait for every single thread in the caller, at which point there's no point launching a background thread in the first place.)
The solution is to use an ExecutorService, which gives you a thread pool, and thus precise control over how many of these background threads are running at any one time. If you submit more tasks than the executor can handle at a given time, the remaining ones are queued up, waiting to be run. This way you can control the load on your server.
An added bonus is that because an executor service typically recycles the same worker threads, the overhead of submitting a new task is lower, meaning that you don't need to worry about whether you've got more than 10 items or not: everything can be run the same way.
In your case you could even consider using two separate executors: one for running the report generation and another one for sending out the emails. The reason for this is that you may want to limit the number of emails sent out in a busy period but without slowing report generation down.
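A minimal sketch of that setup, with two fixed pools (the pool sizes and method names are placeholders, not a prescription):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReportService {

    // bounded pools: at most 4 reports and 2 emails are processed at a time;
    // extra tasks wait in the executor's queue instead of spawning new threads
    private final ExecutorService reportPool = Executors.newFixedThreadPool(4);
    private final ExecutorService emailPool = Executors.newFixedThreadPool(2);

    public String generateReportAsync(String customerEmail) {
        reportPool.submit(() -> {
            String report = buildReport();                            // the heavy work
            emailPool.submit(() -> sendEmail(customerEmail, report)); // notify when done
        });
        return "{\"message\":\"success\", \"text\":\"you will be notified by email\"}";
    }

    // placeholders for the application's own report and mail logic
    private String buildReport() {
        return "report contents";
    }

    private void sendEmail(String to, String report) {
        System.out.println("Mailing report to " + to);
    }
}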
There's no point in starting a thread if the very next thing you do is join() it.
I'm not sure I understand what you're trying to do, but if your example is on the right path, then this would be even better because it avoids creating and destroying a new thread (expensive) in the flagSendEmail <= 10 case:
Runnable r = ...;
if (flagSendEmail > 10) {
    Thread thread = new Thread(r);
    thread.start();
    return "...";
} else {
    r.run();
    return ???
}
But chances are, you should not be explicitly creating new Threads at all. Any time a program continually creates and destroys threads, that's a sign that it should be using a thread pool instead. (See the javadoc for java.util.concurrent.ThreadPoolExecutor)
By the way: t.join() does not do anything to thread t. It doesn't do anything at all except wait until thread t is dead.
Yes, it is safe; in practice I don't recall seeing many actual invocations of Thread#join().
But it will depend on what you are trying to do. I don't know whether you mean to use a pool, or threads that each generate reports or have some resource assigned. In any case you should limit yourself to a maximum number of report threads: if they get blocked or stuck in a loop (because of some bug or poor synchronization), allowing more and more threads will utterly clog your application.
Thread#join waits for the referred thread to die. Are those threads actually ending? Are you waiting for a thread to die just to launch another thread? Usually that kind of synchronization is done with wait() and notify() on a synchronization object.
Launching a separate process (Runtime#exec()) will probably make things even worse, unless it helps work around some weird limitation.
There are some tools like JConsole which can give you some heads up about threads getting locked and other issues.
So far, what I have understood about the wait() and yield() methods is that yield() is called when the thread is not carrying out any task and lets the CPU execute some other thread, while wait() is used when some thread is put on hold, usually in the context of synchronization. However, I fail to understand the difference in their functionality, and I'm not sure whether what I have understood is right or wrong. Can someone please explain the difference between them (apart from the packages they are in)?
Aren't they both doing the same thing - waiting so that other threads can execute?
Not even close, because yield() does not wait for anything.
Every thread can be in one of a number of different states: running means that the thread is actually executing on a CPU; runnable means that nothing is preventing the thread from running except, maybe, the availability of a CPU for it to run on. All of the other states can be lumped into a category called blocked. A blocked thread is a thread that is waiting for something to happen before it can become runnable again.
The operating system preempts running threads on a regular basis: every so often (between 10 and 100 times per second on most operating systems) the OS tags each running thread and says, "your turn is up, go to the back of the run queue" (i.e., change state from running to runnable). Then it lets whatever thread is at the head of the run queue use that CPU (i.e., become running again).
When your program calls Thread.yield(), it's saying to the operating system, "I still have work to do, but it might not be as important as the work that some other thread is doing. Please send me to the back of the run queue right now." If there is an available CPU for the thread to run on though, then it effectively will just keep running (i.e., the yield() call will immediately return).
When your program calls foobar.wait(), on the other hand, it's saying to the operating system, "Block me until some other thread calls foobar.notify()."
Yielding was first implemented on non-preemptive operating systems and in non-preemptive threading libraries. On a computer with only one CPU, the only way more than one thread ever got to run was when the threads explicitly yielded to one another.
Yielding also was useful for busy waiting. That's where a thread waits for something to happen by sitting in a tight loop, testing the same condition over and over again. If the condition depended on some other thread to do some work, the waiting thread would yield() each time around the loop in order to let the other thread do its work.
Now that we have preemption, multiprocessor systems, and libraries that provide us with higher-level synchronization objects, there is basically no reason why an application program would need to call yield() anymore.
wait is for waiting on a condition. This might not jump out when looking at the method, as it is entirely up to you to define what kind of condition it is. But the API tries to force you to use it correctly by requiring that you own the monitor of the object on which you are waiting, which is necessary for a correct condition check in a multi-threaded environment.
So a correct use of wait looks like:
synchronized(object) {
    while( ! /* your defined condition */ )
        object.wait();
    /* execute other critical actions if needed */
}
And it must be paired with another thread executing code like:
synchronized(object) {
    /* make your defined condition true */
    object.notify();
}
In contrast, Thread.yield() is just a hint that your thread might release the CPU at this point in time. It's not specified whether it actually does anything and, regardless of whether the CPU has been released or not, it has no impact on the semantics with respect to the memory model. In other words, it does not create any relationship to other threads, which would be required for accessing shared variables correctly.
For example the following loop accessing sharedVariable (which is not declared volatile) might run forever without ever noticing updates made by other threads:
while(sharedVariable != expectedValue) Thread.yield();
While Thread.yield might help other threads to run (they will run anyway on most systems), it does not enforce re-reading the value of sharedVariable from shared memory. Thus, without other constructs enforcing memory visibility, e.g. declaring sharedVariable as volatile, this loop is broken.
The first difference is that yield() is a method of Thread, whereas wait() is originally a method of Object, inherited by Thread as by every class. That is the difference in form; for the behaviour, quoting the Javadoc:
wait()
Causes the current thread to wait until another thread invokes the notify() method or the notifyAll() method for this object. In other words, this method behaves exactly as if it simply performs the call wait(0).
yield()
A hint to the scheduler that the current thread is willing to yield its current use of a processor. The scheduler is free to ignore this hint.
And here you can see the difference between yield() and wait():
yield(): when a running thread is stopped to give its place to another thread (for example one with a higher priority), this is called yielding. Here the running thread changes from running to runnable.
wait(): the thread is waiting for another thread (for a resource or a notification) before it can continue its execution.
My application will, during runtime, contain multiple threads (in this example 7) doing independent work. However, every once in a while, the threads will have to synchronize their data.
This will be done by the threads calling the DataSynchronizer object which they all have a reference to.
My idea for flow in this class looks like this:
public class DataSynchronizer {

    public void synchronizeData(List<Data> threadData) {
        // Wait for all 7 threads to call this method
        // When all 7 are here, hold them here & do work using one of the threads
        // or a new anonymous thread
        // Release the threads & let them continue their independent work
    }
}
My question is, what is the best way for me to 'wait for all x threads' before doing the synch work?
I know that all threads will call the synchronizeData method within 1, max 2 seconds of each other.
So do I,
1) Wait for 2s after the first thread calls the method and assume all threads have now also arrived? or
2) Keep a count to make sure all active threads have arrived? (The app will wait for eternity if a thread crashes just before calling the method.)
3) Count + timeout?
4) ???
This is what a CyclicBarrier is for. It allows you to define spots where threads will wait until all arrive, and then optionally run a Runnable to perform synchronization or other such thing.
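A minimal sketch, assuming 7 worker threads and hypothetical collect()/mergeAllData() steps; the timeout on await() covers the case where a thread crashes before arriving:

import java.util.List;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class DataSynchronizer {

    private static final int THREAD_COUNT = 7;

    // The Runnable passed here runs on the last thread to arrive, before any
    // of the waiting threads are released.
    private final CyclicBarrier barrier =
            new CyclicBarrier(THREAD_COUNT, this::mergeAllData);

    public void synchronizeData(List<Data> threadData) {
        collect(threadData); // stash this thread's data somewhere shared
        try {
            // wait for the other threads, but give up after 5 seconds
            barrier.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException | BrokenBarrierException | TimeoutException e) {
            // a thread never arrived (or crashed): the barrier is now broken
            // for everyone, so decide here how you want to recover
            throw new IllegalStateException("data synchronization failed", e);
        }
    }

    private void collect(List<Data> threadData) {
        // hypothetical: add threadData to a shared, thread-safe collection
    }

    private void mergeAllData() {
        // hypothetical: runs once per barrier cycle, on the last arriving thread
    }
}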
I think you need a java.util.concurrent.CyclicBarrier.
Assuming that all the threads have arrived (option 1) is a very risky approach.
How bad is waiting for an eternity? Sounds inconvenient to me.
If you hit the timeout can you do something useful? Crash the program, restart the errant thread, assume something about what it is doing?
Follow-on questions:
What happens if a thread doesn't participate in the synchronisation?
What happens if the sync is late?
Should your method tell one thread from another, or are they just 7 interchangeable workers?