How many threads does a traffic signal application need? - java

How many threads, at minimum, will be required for developing a traffic signal application?
I think it is only one, because when any one of the lights is green, the other 3 directions will be red, and this application has no need for multiple operations to run in parallel.

This question cannot be answered meaningfully without knowing what the scope of the entire application is. (In the simple case, a traffic light simulation could certainly be implemented in a single thread ... if the conditions were right.)
However, real world control systems for things like traffic lights are typically written in languages like C that are better at interfacing with controller hardware. That makes your question moot ... kind of.

As a driver, I sure hope it's only one thread!
You would only need one thread, but imagine the implications of non-threadsafe code or threading bugs...
someone could literally die!
Actually, "it depends" is the correct answer, if there is one.
Simple traffic lights, for example pedestrian crossings, could simply block waiting for a button press, then complete the cycle and return to a blocking wait.
Complex event-driven lights that can receive many inputs may need multiple threads if the hardware doesn't support interrupts or other single-threaded mechanisms for dealing with real-time input signals.
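That blocking pedestrian-crossing loop could be sketched in Java like this; the queue stands in for the hardware button, and the class and phase names are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Single-threaded pedestrian crossing: block until a button press arrives,
// run one full light cycle, then go back to waiting.
public class PedestrianCrossing {
    private final BlockingQueue<Object> buttonPresses = new LinkedBlockingQueue<>();
    private final List<String> phases = new ArrayList<>();

    public void pressButton() {
        buttonPresses.offer(new Object());   // a hardware interrupt would land here
    }

    // One blocking wait plus one cycle; a real controller would loop forever.
    public List<String> awaitAndRunCycle() {
        try {
            buttonPresses.take();            // the single thread blocks here
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return phases;
        }
        phases.add("cars: amber");
        phases.add("cars: red");
        phases.add("pedestrians: walk");
        phases.add("pedestrians: stop");
        phases.add("cars: green");
        return phases;
    }
}
```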

As others mentioned, this is a useless exercise without knowing the exact conditions and constraints you have.
But here's my shot: I would probably have 4 threads:
Main thread -- manages the lifecycle
Receiver thread -- receives events from other sources, probably from different hardware parts, like the fire truck "remote control", or communication from nearby traffic lights to determine whether they are in sync.
Broadcaster thread -- dispatches signals from this specific traffic light to other consumers (other traffic lights, command and control center, ...)
Processing thread -- gets delegated the processing of the events sent and received from the other two threads (Receiver, Broadcaster).
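Under those assumptions, the thread layout above could be sketched in Java with BlockingQueues standing in for the hardware and network links (all class, queue, and message names here are made up for illustration; the broadcaster side is represented by the outbound queue):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the main/receiver/broadcaster/processor design.
public class TrafficLightNode {
    final BlockingQueue<String> hardwareIn = new LinkedBlockingQueue<>(); // e.g. fire-truck override
    final BlockingQueue<String> events = new LinkedBlockingQueue<>();     // receiver -> processor
    final BlockingQueue<String> outbound = new LinkedBlockingQueue<>();   // processor -> broadcaster

    private final Thread receiver = new Thread(this::receiveLoop, "receiver");
    private final Thread processor = new Thread(this::processLoop, "processor");

    private void receiveLoop() {
        try {
            while (true) events.put(hardwareIn.take());   // forward raw input as events
        } catch (InterruptedException e) { /* shutdown */ }
    }

    private void processLoop() {
        try {
            while (true) outbound.put("handled:" + events.take());  // process, then broadcast
        } catch (InterruptedException e) { /* shutdown */ }
    }

    public void start() { receiver.start(); processor.start(); }  // main thread manages lifecycle
    public void stop()  { receiver.interrupt(); processor.interrupt(); }

    public void inject(String msg) { hardwareIn.offer(msg); }     // simulate a hardware event

    // Convenience for demos: wait briefly for one broadcast message.
    public String awaitBroadcast() {
        try {
            return outbound.poll(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```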

In a very trivial case, yes. But usually a traffic signal depends on many factors, like:
The time for which it will be red/green, based on traffic
At any intersection of 2-3 roads, the signals of each road should be in sync.
and so on...
So there is no fixed answer to this.

Related

How to share 2 SSH connections in multithreaded environment

Currently I have an ExecutorService to which I submit multiple threads. A single SSH connection is shared among those threads.
A thread acquires a lock on the SSH connection (to execute some commands); the remaining threads wait.
Now I want to optimize the performance, and I want to use 2 SSH connections among those threads.
To implement this, I assume I will have to split my threads into 2 parts and share the 2 connections among them.
I am asking whether there is a more appropriate way to do this.
Thanks in advance for the responses.
I have already done a POC concluding that 2 SSH connections will work just fine in a parallel execution environment.
If the time for one thread to use the SSH connection is constant, splitting the threads into 2 parts is acceptable. Otherwise it may not be optimal, because one part could finish its processing before the other, leaving one of the connections available but unused.
From a theoretical point of view, you have a problem with a number of clients requiring access to multiple servers. Your description amounts to having as many queues as servers, with each client waiting on one queue. That is not stupid, and it is what is used in the real world at supermarket checkouts. But a human has enough intelligence to change queues upon seeing that another one is empty while theirs is not... The nice point is that it is trivial to implement.
Another possibility is to have a single queue: as soon as a server is available, it signals the dispatcher, which sends it the first client. This offers the best distribution of wait time. In the real world, it is often used in administrative services with few counters. The bad point is that it is slightly more complex to implement. It can be implemented with a queue for the clients, a queue (or a stack) for the available servers, and a semaphore that blocks the next client until a server is available.
For trivial short tasks, the first way may be the best, because the gain provided by the second algorithm may not be enough to offset the time lost to the higher complexity. Otherwise, you still have to weigh the development (and maintenance) cost of the more complex algorithm against the less efficient but simpler one.
Create an ArrayBlockingQueue<SSH_connection> q. Let a thread, when it wants to communicate, do conn = q.take(), and when finished, q.put(conn). At the beginning, put both connections into that queue.
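A minimal sketch of that pool; the nested SshConnection class is a hypothetical stand-in for whatever SSH client type you actually use:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SshConnectionPool {
    // Stand-in for the real SSH client class (hypothetical).
    public static class SshConnection {
        final int id;
        SshConnection(int id) { this.id = id; }
    }

    private final BlockingQueue<SshConnection> pool;

    public SshConnectionPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) pool.add(new SshConnection(i)); // put all connections in up front
    }

    // Any worker thread wraps its command execution in take()/offer().
    public String runCommand(String cmd) {
        SshConnection conn;
        try {
            conn = pool.take();              // blocks until some connection is free
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
        try {
            return "conn" + conn.id + " ran: " + cmd;   // stand-in for real remote execution
        } finally {
            pool.offer(conn);                // hand the connection back to the pool
        }
    }
}
```

The nice property over a fixed split is that whichever connection frees up first serves the next waiting thread, so neither connection sits idle while work remains.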

How is it possible to write event based single threaded programs?

My knowledge of threads is very limited. I happen to be the guy who can write multi-threaded programs, but only by copy-pasting and finding answers to my questions on the internet. But I've finally decided to learn a bit about concurrency and bought the book "Java Concurrency in Practice". After reading a couple of pages, I'm confident that I'll learn a great deal from this book.
Maybe I'm being a little impatient but I cannot resist the temptation of asking this question. It made me create an account on Stack Overflow. I'm not sure I'll be able to correctly phrase the question so I'll try to explain my question using an example.
If I had to write an (extremely unprofessionally coded) peer-to-peer chat client in, say, Java, I'd initiate a socket connection between the clients and keep it alive, because messages can arrive at any time. The solution I can imagine would open a socket connection in a new thread and run a while loop continuously to keep the thread alive, since the thread dies as soon as run returns. For some reason, I cannot imagine a similar chat client in a single-threaded program. How can you keep "waiting" until a message arrives if all you have is a single thread? Won't that block the execution of the entire program?
To solve such a problem, what's the alternative to a continuous while loop?
How can you keep "waiting" until a message arrives if all you have is a single thread?
One possibility is to have the "parallelism" to happen "outside" of your application. Imagine a waiter in a restaurant. Just one guy. He walks from one customer to the next, and writes up the orders. From time to time, he walks over to the counter, puts in the orders, and picks up whatever stuff the chef left for him. Just one guy, walking around, doing "single task" work. But in the end, the overall system still has multiple actors (the guests, the waiter, the chef, the guy beyond the bar preparing the beverages). So, the waiter could be seen as "single threaded", but in the end, the overall system "restaurant" isn't.
Some IT architectures "mimic" that, for example around the idea of "non blocking" IO. That is how node.js works. It is single threaded by nature, but does async IO (see here for details). And you can do similar things with Java, too.
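In Java, that "one waiter" style is what java.nio's Selector gives you: a single thread blocks in select() on many channels at once and handles whichever one becomes ready, instead of dedicating a thread per connection. A self-contained sketch, using a Pipe in place of a real network socket so it runs anywhere:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;

// One select() call can watch many registered channels; here we register
// just one and deliver one "incoming message" to keep the sketch short.
public class EventLoopDemo {
    public static String runOnce() {
        try {
            Selector selector = Selector.open();
            Pipe pipe = Pipe.open();
            pipe.source().configureBlocking(false);
            pipe.source().register(selector, SelectionKey.OP_READ);

            // "A message arrives" (in a chat client this would come from the peer).
            pipe.sink().write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));

            selector.select();                       // the single thread waits here for ANY ready channel
            ByteBuffer buf = ByteBuffer.allocate(64);
            String message = "";
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isReadable()) {
                    ((Pipe.SourceChannel) key.channel()).read(buf);
                    buf.flip();
                    message = StandardCharsets.UTF_8.decode(buf).toString();
                }
            }
            selector.close();
            pipe.sink().close();
            pipe.source().close();
            return message;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A real chat client would register a SocketChannel instead of a Pipe and loop on select() forever, dispatching each ready key; the waiting itself never busy-spins.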
On the other hand, when you learn about concurrency, you still want to learn about the "real" multi threading, what it means, and how you would write code to "use" that concept.

Emergency stopping of Akka Actor

I am using the Akka framework to control hardware, and in rare cases I need to freeze an actor in the middle of a computation. This prevents damage to the hardware. Is there a way of quickly freezing or killing an Actor, even if it is still running a task?
Sadly, this is not something that the JVM supports. While a Thread is running, for example doing some long-running operation, it cannot be arbitrarily interrupted (yes, there is an interrupt() call; however, it just sets a flag that the Thread's user-land code may look at, so it is not a forceful interruption). Since Akka Actors utilise Java threads, the same limitation applies to them.
How Actors do help here however is that you can chunk up the work into very small chunks of work, think "steps", and represent them as messages. If you detect you should not proceed further, you simply could stash the messages (or simply no-op on them, instead of performing some action).
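The same idea can be shown in plain Java, independent of Akka's API: since a running thread cannot be forcibly frozen, split the job into small steps and check a stop flag between them (Akka's per-message processing gives you that check point for free). The class name and step granularity are illustrative assumptions:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Cooperative emergency stop: each loop iteration is one small "step"
// (one message's worth of work), and the flag is checked between steps.
public class ChunkedWorker {
    private final AtomicBoolean emergencyStop = new AtomicBoolean(false);
    private int stepsDone = 0;

    public void stop() { emergencyStop.set(true); }

    // Runs up to maxSteps small steps, aborting as soon as stop is requested.
    public int run(int maxSteps) {
        for (int i = 0; i < maxSteps; i++) {
            if (emergencyStop.get()) break;  // the "freeze": no further hardware commands
            stepsDone++;                     // stand-in for one small hardware command
        }
        return stepsDone;
    }
}
```

The smaller the step, the shorter the worst-case latency between requesting the stop and the hardware actually going quiet.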
It kind of depends on what you mean by "freeze". By itself it's not possible, but maybe a very similar effect is achievable?
The killing part can be done via context stop self inside the Actor or by sending a PoisonPill; this however is asynchronous and goes through the Actor's mailbox, as does everything in actor communication.

Akka system from a QA perspective

I had been testing an Akka based application for more than a month now. But, if I reflect upon it, I have following conclusions:
Akka actors alone can achieve a lot of concurrency. I have reached more than 100,000 messages/sec. This is fine, but it is just message passing.
Now, if there is a netty layer for connections at one end, or the akka actors end up doing DB calls, REST calls, or writing to files, the whole system doesn't make sense anymore. The actors' mailboxes fill up and their throughput (here, the ability to receive msgs/sec) drops.
From a QA perspective, this is like having a huge pipe into which you can forcefully pump a lot of water, and it can handle it. But if the input hose is bad, or the endpoints cannot handle the pressure, this huge pipe is of no use.
I need answers for the following so that I can suggest or verify in the system:
Should blocking calls like DB calls and REST calls be handled by actors? Or are they good only for message passing?
Say you need to persistently connect millions of android/ios devices to your akka system. Instead of sockets (so unreliable) etc., can a remote actor be implemented as a persistent connection?
Is it ok to do any sort of computation in an actor's handleMessage()? Like DB calls etc.
I would request this post to get through by the editors. I cannot ask all of these separately.
1) Yes, they can. But this operation should be done in a separate (worker) actor that uses a fork-join pool in combination with scala.concurrent.blocking around the blocking code; this is needed to prevent thread starvation. If the target system (DB, REST and so on) supports several concurrent connections, you may use akka's routers for that (creating one actor per connection in the pool). You can also create several actors for several different tables (resources, queues etc.), depending on your transaction isolation and your storage's consistency requirements.
Another way to handle this is to use asynchronous requests with acknowledges instead of blocking. You may also put the blocking operation inside a separate future (thread, worker), which will send an acknowledge message at the operation's end.
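A sketch of that second approach in plain Java: a CompletableFuture runs the blocking call on a dedicated pool and then posts an acknowledge. The queue stands in for the actor's mailbox, and the "DB call" is a stub; all names are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class AsyncDbCall {
    // Stand-in for a slow blocking call (e.g. JDBC).
    static String queryBlocking(String sql) {
        return "rows-for:" + sql;
    }

    public static String callAndAwaitAck(String sql) {
        ExecutorService blockingPool = Executors.newFixedThreadPool(2); // dedicated pool for blocking work
        BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();    // the "actor's mailbox"
        CompletableFuture
            .supplyAsync(() -> queryBlocking(sql), blockingPool)        // blocking call off the main thread
            .thenAccept(result -> mailbox.offer("ack:" + result));      // acknowledge back as a message
        try {
            return mailbox.poll(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        } finally {
            blockingPool.shutdown();
        }
    }
}
```

The point is that the caller's thread (or actor) is never parked on the slow operation itself; it just receives a message when the work is done.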
2) Yes, an actor may be implemented as a persistent connection. It will just be an actor which holds the connection's state (as actors are stateful). It can be made even more reliable using Akka Persistence, which can save the connection state to some storage.
3) You can do any non-blocking computations inside the actor's receive (there is no handleMessage method in akka). Failures (like no connection to the DB) will be managed automatically by Akka supervision. For blocking code, see 1.
P.S. about the "huge pipe": the backend application itself is a pipe (which becomes huge with akka), so nothing can help you improve performance if the environment can't handle it - there are no pumps in this world. But akka is also a "water tank", which means that the outer pressure may be stronger than the inner. Btw, this means that the developer should be careful with mailboxes - "too much water" may cause an OutOfMemory, and the way to prevent that is to organize back pressure. It can be done by not acknowledging an incoming message (or simply blocking an endpoint's handler) until it has been processed by akka.
I'm not sure I understand all of your question, but in general actors are also good for slow work:
1) Yes, they are perfectly fine. Just create/assign 1 actor per request (maybe behind an akka router for load balancing), and once it's done it can either mark itself as "free for new work" or self-terminate. Remember to execute the slow code in a future. Personally, I like avoiding the ask/pipe pattern due to the implicit timeouts and exception swallowing, so I just use tells with request ids, but if your latencies and error rates are low, go for ask/pipe.
2) You could, but in that case I'd suggest having a pool of connections rather than spawning them per request, as that takes longer. If you can provide more details, I can maybe improve this answer.
3) Yes, but think about this: actors are cheap. Create millions of them; every time there is a blocking part, it should be in a different, specialized actor. Take single-responsibility to the extreme. If you have only a few, blocking actors, you lose all the benefits.

Which is more memory efficient, Threaded Entities or Threaded Sectors for a Java Game?

I'm working on a shoot 'em up game in which I'm planning on flooding the screen with entities (bullets, mobs, and the like). I've tried a global timer to update everything on the screen, but I get serious fps drops when I flood the screen like I want to.
So, I see myself as having two options. I can either give each individual entity a timer Thread, or I can section off the level into chunks and give each chunk its own timer.
With the first scenario, entities with their own timer threads, I will end up with hundreds of entities, each with their own thread running a timer.
In the second option, I will have multiple sections of the map, each with a timer updating multiple entities at once, plus detection for when an entity leaves one section for another.
I'm not familiar with Programming with Memory Efficiency in mind, so which method would be better for me to use?
You could try a ScheduledExecutorService.
It's part of Java's higher-level concurrency API. You can decide how many threads should exist (it re-uses threads for different tasks to avoid the overhead of creating new ones every time, and is therefore expected to be much more efficient than creating new Threads all the time), or use a cached thread pool (which creates as many threads as necessary but re-uses idle threads to run new tasks).
Another advantage of this API is that you can run not only Runnables but also Callables, which may return a value for you to use later (so you can perform calculations in different Threads and then combine each Thread's result into a final result).
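A minimal sketch of driving all entity updates from one small scheduled pool instead of one thread per entity; the tick period, pool size, and counter-based "update" are assumptions for illustration:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// One repeating task ticks ALL entities each frame; the latch lets the
// demo run a fixed number of ticks and then shut the scheduler down.
public class GameLoop {
    public static int runTicks(int tickCount) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        AtomicInteger updates = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(tickCount);
        scheduler.scheduleAtFixedRate(() -> {
            if (done.getCount() > 0) {
                updates.incrementAndGet();   // stand-in for updating every entity this frame
                done.countDown();
            }
        }, 0, 10, TimeUnit.MILLISECONDS);    // 10 ms tick period, chosen arbitrarily
        try {
            done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            scheduler.shutdownNow();
        }
        return updates.get();
    }
}
```

Whether you tick the whole world or one section per task, the pool size stays fixed, so adding more entities adds work per tick rather than more threads.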
I was experimenting with something similar and don't have a definite answer. But maybe some of the feedback I got from Java-Gaming.org will be helpful or of interest.
What I tried was this: each entity has its own thread, and collisions are handled via a very detailed map of the screen (basically a second version of the screen). Then, I have another thread that handles the display of the screen.
An "early" version of this, with over 500 entities being animated, is online:
http://hexara.com/pond.html
Later versions use more elaborate shapes and borders (rather than letting entities die and freeze at the edges) and collision logic such as bouncing off of each other and gravity. I was also playing with sprite aspects like "firefly" blinking. I mention "actors" on the web page, but the code isn't strictly that.
Some folks at java-gaming.org strongly felt that having so many threads was not efficient. There was a lot of interesting feedback from them, which you might be interested in exploring. I haven't had time yet.
http://www.java-gaming.org/topics/multi-threading-and-collision-detection/25967/view.html
They were discussing things like hyperthreading and the Akka framework for Actors.
