Is using an Observable Singleton Class to handle network calls bad? - java

I have to develop a client/server game that uses a lot of network requests to complete its task.
Both the client and the server receive multiple Commands through a socket.
To handle these commands I created a NetworkHandler class that listens for new input on a separate thread and that allows me to send new commands.
These "commands" are really heterogeneous and are used by different classes.
(For example, the "client ready" command is used by the main server class, while "client wants a card" is used by the Game class.)
So I created a CommandDispatcher class that listens to the NetworkHandler (observer pattern) and dispatches the different commands to the right receivers (all through interfaces).
The problem is that every class that wants to register as a "command receiver" needs to call the CommandDispatcher to set itself as a listener.
Because there are multiple classes that need this reference, I'm thinking of turning the CommandDispatcher into a singleton class that could be accessed everywhere.
I know that singletons and global state are bad, that I'm hiding a dependency, and that it will be difficult to test, but the only alternative I see is passing the CommandDispatcher from the main class down to all the other classes.
Can you help me find a better solution?
Thank you.
EDIT: I want to clarify that my app is a turn-based game, so I don't have a large number of parallel requests.

This is a common pattern that has been addressed many times in many environments.
The primary issue you need to address is the speed of dispatch of the commands that arrive. You must ensure that no command can clog up the system, otherwise you will get unpredictable response times. To achieve that, you must do as little as possible to manage its arrival.
The classic solution to this is for your network listener to do the minimum amount of work on the command and hand it off to a queue. Essentially you should merely grab the arriving message, place it on a queue, and go back to listening. Don't even deserialise it; just throw it at the queue.
At the other end of the queue there can be one or more processes pulling commands off, constructing the appropriate objects and performing any functionality you want on them. This could be where your listeners should listen. Often all that will happen is that the deserialised object is dispatched to another appropriate queue for handling, so you may find an even more appropriate point to listen at the other end of those queues.
Both the network listener and the command processor can often be implemented with thread pools.
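A minimal sketch of that hand-off in Java, assuming an in-memory BlockingQueue and an illustrative Command type (the names are not from the question's code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: the network listener only enqueues the raw message; a worker
// thread (or pool) deserialises it into a Command later.
class QueuedReceiver {

    // Illustrative command type; parse() stands in for real deserialisation.
    record Command(String name) {
        static Command parse(String raw) { return new Command(raw.trim()); }
    }

    private final BlockingQueue<String> rawMessages = new LinkedBlockingQueue<>();

    // Called by the listener thread: do the minimum and go back to listening.
    void onRawMessage(String raw) {
        rawMessages.offer(raw);
    }

    // Called by a worker thread: blocks for the next message, then deserialises.
    Command takeNextCommand() {
        try {
            return Command.parse(rawMessages.take());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting", e);
        }
    }
}
```

Because the worker, not the listener, owns deserialisation, a slow or malformed command can never stall the socket read loop.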
Is using an Observable Singleton Class to handle network calls bad?
Bad? No, but it will not stand up to high throughput. You would be better off dissociating the network listener from the command dispatcher.

Related

Is Session.sendToTarget() thread-safe?

I am trying to integrate QFJ into a single-threaded application. At first I was trying to utilize QFJ with my own TCP layer, but I haven't been able to work that out. Now I am just trying to integrate an initiator. Based on my research into QFJ, I would think the overall design should be as follows:
The application will no longer be single-threaded, since the QFJ initiator will create threads, so some synchronization is needed.
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
There are 2 aspects to the integration of the initiator into my application:
Receiving side (fromApp callback): I believe this is straightforward, I simply push messages to a thread-safe queue consumed by my MainProcessThread.
Sending side: I'm struggling to find documentation on this front. How should I handle synchronization? Is it safe to call Session.sendToTarget() from the MainProcessThread? Or is there some synchronization I need to put in place?
As Michael already said, it is perfectly safe to call Session.sendToTarget() from multiple threads, even concurrently. But as far as I can see you only utilize one thread anyway (the MainProcessThread).
The relevant part of the Session class is in method sendRaw():
private boolean sendRaw(Message message, int num) {
    // sequence number must be locked until application
    // callback returns since it may be effectively rolled
    // back if the callback fails.
    state.lockSenderMsgSeqNum();
    try {
        // ... some logic here
    } finally {
        state.unlockSenderMsgSeqNum();
    }
}
Other points:
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
Will you always use only one Session? If yes, then there is no use in utilizing the ThreadedSocketInitiator, since all it does is create a thread per Session.
The application will no longer be single-threaded, since the QFJ initiator will create threads
As already stated in "Use own TCP layer implementation with QuickFIX/J", you could try passing an ExecutorFactory. But this might not be applicable to your specific use case.
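For the receiving side described in the question, the hand-off can be as small as the sketch below (AppMessage is a stand-in for quickfix.Message, which is not shown here; the method names are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the fromApp -> MainProcessThread hand-off; the queue is the
// only shared state, so no extra synchronization is needed on this path.
class FixMessagePump {

    // Placeholder for quickfix.Message.
    record AppMessage(String fixString) {}

    private final BlockingQueue<AppMessage> inbound = new LinkedBlockingQueue<>();

    // Called on QFJ's callback thread from the fromApp() callback: enqueue and return.
    void fromApp(AppMessage message) {
        inbound.offer(message);
    }

    // Called in the MainProcessThread loop; returns null after the timeout.
    AppMessage pollNext(long timeoutMillis) {
        try {
            return inbound.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

The timeout-based poll keeps the MainProcessThread free to do its other single-threaded work between messages.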

Why Use MessageListeners in Apache Pulsar and Not Simply Consumer.receive()?

Apache Pulsar's APIs (https://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Consumer.html) include at least two methods of consuming messages from a Pulsar topic/queue:
Using Consumer.receive() (or Consumer.receiveAsync())
Using ConsumerBuilder.messageListener(MessageListener messageListener) to add a message listener, which sends a reference to the Consumer and Message to an instance of a MessageListener
Mostly it feels like these are on equal ground, and the event-like methodology of using a MessageListener makes sense, except that the Consumer object has other methods that I find might be useful in a controlled while-loop, such as: isConnected(), receiveAsync(), pause(), resume(), and seek(MessageId messageId).
With these additional features in the Consumer class, even though the Consumer is passed into the MessageListener, why not have a simple loop for the consumer instead of using a single MessageListener?
Is there an advantage or preference to using MessageListener in Pulsar, or is this just an option given to the developer?
In the past I've mostly written consumer loops for JMS and Kafka.
The listener pattern is generally a better OOP design. It makes the code more concise and, most importantly, it applies the SOLID principles nicely.
Basically you need to execute business logic for each message and manage the consumer/reader infrastructure, such as the threading model. The Java event-listener pattern is an answer to separating the business logic from thread management. The business logic is implemented in the listener class, which can be attached in one line when the consumer/reader is created. It is much cleaner.
With a listener, there is still the option of adding a lambda function with the business logic in the same class/file. I think this is for simple processing needs.
In the Pulsar client's particular case, the client API can manage the listener threads for you. That is another bonus.
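A dependency-free sketch of what the listener pattern buys you here: the consumer owns the loop and the thread, and the listener owns only the business logic (MiniConsumer is illustrative, not the Pulsar API, but it mirrors the shape of ConsumerBuilder.messageListener(...)):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.BiConsumer;

// Toy consumer that manages its own listener thread: user code supplies
// only the (consumer, message) callback and never writes the while-loop.
class MiniConsumer implements AutoCloseable {

    private final BlockingQueue<String> topic = new LinkedBlockingQueue<>();
    private final ExecutorService listenerThread = Executors.newSingleThreadExecutor();

    MiniConsumer(BiConsumer<MiniConsumer, String> listener) {
        listenerThread.submit(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    listener.accept(this, topic.take());
                }
            } catch (InterruptedException ignored) {
                // shutdownNow() interrupts take(); just exit the loop
            }
        });
    }

    // Stands in for the broker delivering a message.
    void deliver(String message) {
        topic.offer(message);
    }

    @Override
    public void close() {
        listenerThread.shutdownNow();
    }
}
```

With a hand-written receive() loop, the while/pause/resume/error plumbing would instead live in your code for every consumer you create.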

OOA&D / Java / Software Architecture - advice on structuring event handling code to avoid a complicated data flow

I've implemented the producer/consumer paradigm with a message broker in Spring and my producers use WebSocket to extract and publish data into the queue.
A producer is therefore something like:
AcmeProducer.java
handler/AcmeWebSocketHandler.java
and the handler has been a pain to deal with.
Firstly, the handler has two events:
onOpen()
onMessage()
onOpen has to send message to the web socket to subscribe to specific channels
onMessage receives messages from WebSocket and adds them into the queue
onMessage has some dependencies on AcmeProducer.java: it needs to know the currency pairs to subscribe to, and it needs the message broker service, an ObjectMapper (JSON deserializer), and a benchmark service.
When messages are consumed from the queue they are transformed into a format for OrderBook.java. Every producer has its own format of OrderBook and therefore its own AcmeOrderBook.java.
I feel the flow is difficult to follow, even though I've put the classes for one producer in the same package. Every time I want to fix something in a producer I have to switch between classes and search for where it is.
Are there some techniques for reducing the complicated data flow into something easy to follow?
As you know, event handlers like AcmeWebSocketHandler.java hold callbacks that get called from elsewhere in the system (from the WebSocket implementation), and hence they can be tricky to organize. The data flow with events is also more convoluted, because when handlers are in separate files they cannot use variables defined in the main file.
If this code did not use the event-driven paradigm, the data flow would be easy to follow.
Finally, is there any best practice for structuring code when using WebSockets with onOpen and onMessage? Producer/consumer is the best I could come up with, but I don't want to scatter Acme's classes across different packages. For example, AcmeOrderBook should be in a consumer class, but as it depends on AcmeProducer.java and AcmeWebSocketHandler.java they are often edited at the same time, hence I've put them together.
As the dependencies inside every WebSocket handler are the same (only different implementations of those same interfaces), I think there should be only one thing injected, some Context variable. Is that the best practice?
I've solved it using Message Dispatcher and Message Handlers.
The dispatcher only checks whether the message is a data message (snapshot or update) and passes control to the message handler class, which itself checks the type of the message (either snapshot or update) and handles that type properly. If the message is not a data message but something else, the dispatcher dispatches it depending on what it is (some event).
I've also added callbacks using anonymous functions, and they are much shorter now; the callbacks are finally transparent.
For example, inside an anonymous callback function there is only this:
messageDispatcher.dispatchMessage(context);
Another key element here is the use of a context object. The MessageDispatcher is a separate class (autowired).
I've separated the order book into its own directory inside every producer.
Well, everything requires knowledge of everything to solve this elegantly.
One last pattern for solving this: Java EE uses annotations for its endpoints and lifecycle functions such as onOpen, onMessage, etc. That is also a nice approach, because with it the callback becomes invisible and onOpen / onMessage are automatically called by the container (via the annotations).
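A minimal sketch of that dispatcher/handler split (the message kinds and the registration style are illustrative; the original Acme classes are not shown):

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of a dispatcher that routes by message kind; each handler
// registers itself once, and the WebSocket callback shrinks to one line:
// messageDispatcher.dispatchMessage(message);
class MessageDispatcher {

    enum Kind { SNAPSHOT, UPDATE, EVENT }

    record Message(Kind kind, String payload) {}

    private final Map<Kind, Consumer<Message>> handlers = new EnumMap<>(Kind.class);

    void register(Kind kind, Consumer<Message> handler) {
        handlers.put(kind, handler);
    }

    void dispatchMessage(Message message) {
        Consumer<Message> handler = handlers.get(message.kind());
        if (handler != null) {
            handler.accept(message);
        }
    }
}
```

The handler classes no longer need to know about the WebSocket at all, which is what makes the data flow easier to follow.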

How can I make multiple JFrames consume data from the same thread in Java?

I have a program that must output the data of a weighing scale. It uses a thread to continually read data from the RS-232 source and must output the data graphically. The user can open and close as many JFrames as they wish, and all must show the same data read from the RS-232 port in a JTextArea. How can I approach this?
Thank you very much in advance.
There are a number of ways you might approach this problem.
The user can open and close as many JFrames as they wish and all must show the same data that is read from the RS-232
This raises the question of whether you're only interested in the real-time results or also the historical results. For argument's sake, I'm only going to focus on the real-time results.
Basically you need to start with a class which is responsible for actually reading the data from the port. This class should do only two things:
Read the data
Generate events when new data is read
Why? Because then any additional functionality you want to implement (like writing the data to a database or caching the results for some reason) can be added later, simply by monitoring the events that are generated.
Next, you need to define an interface which describes the contract that observers will implement in order to be able to receive events:
public interface ScaleDataSourceListener {
    public void scaleDataUpdated(ScaleDataSourceEvent evt);
}
You could also add connection events (connect/disconnect) or other events which might be important, but I've kept it simple.
The ScaleDataSourceEvent would be a simple interface which describes the data of the event:
public interface ScaleDataSourceEvent {
    public ScaleDataSource getSource();
    public double data();
}
for example (I like interfaces: they describe the expected contract, define the responsibility, and limit what other people can do when they receive an instance of an object implementing the interface, but that's me)
Your data source would then allow observers to register themselves to be notified about events generated by it...
public interface ScaleDataSource ... {
    //...
    public void addDataSourceListener(ScaleDataSourceListener listener);
    public void removeDataSourceListener(ScaleDataSourceListener listener);
}
(I'm assuming the data source will be able to do other stuff, but I've left that up to you to fill in, again, I prefer interfaces where possible, that's not a design restriction on your part ;))
So, when data is read from the port, it would generate a new event and notify all the registered listeners.
Now, Swing is not thread-safe. What this means is, you shouldn't make updates to the UI from any thread other than the Event Dispatching Thread.
In your case, probably the simplest solution would be to use SwingUtilities.invokeLater to move from the data source's thread context to the EDT.
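Tying those pieces together, the data-source side might look like this simplified sketch (a plain double in place of the event object above, and the invokeLater call shown only as a comment, since the frame code isn't part of this answer):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the observer plumbing; fireDataUpdated would be called from
// the RS-232 reading thread each time a value arrives.
class SimpleScaleDataSource {

    interface Listener {
        void scaleDataUpdated(double value);
    }

    // CopyOnWriteArrayList lets the reading thread iterate safely while
    // frames add/remove themselves as they open and close.
    private final List<Listener> listeners = new CopyOnWriteArrayList<>();

    void addListener(Listener listener) { listeners.add(listener); }

    void removeListener(Listener listener) { listeners.remove(listener); }

    void fireDataUpdated(double value) {
        for (Listener listener : listeners) {
            // Inside a frame, the listener body would hop to the EDT:
            // SwingUtilities.invokeLater(() -> textArea.append(value + "\n"));
            listener.scaleDataUpdated(value);
        }
    }
}
```

Each frame registers a listener on opening and removes it on closing, so every open frame sees every reading.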
Basically, this is a simple Observer Pattern
There are a lot of other considerations you need to think about as well. For example, are the frames being opened within the same process as the data source, or does the data source operate in its own, separate process? The latter complicates things, as you'll need some kind of IPC system, maybe using sockets, but the overriding design is the same.
What happens if the data source is reading data faster than you can generate events? You might need some kind of queue, where the data source simply dumps the data onto the queue and some kind of dispatcher (on another thread) reads it and dispatches events.
There are a number of implementations of blocking queues which provide a level of thread safety; have a look through the concurrency APIs for more details.
... as some ideas ;)
First, create a frame class that extends JFrame, and create a method to receive data from RS-232. Then every object of this class can get data using that method.
You can create one frame by creating one object of the class.

Best practice for creating new verticles in Vert.x

Could anyone give me the best practice for when I need to create new verticles in Vert.x? I know that each verticle can be deployed remotely and put into a cluster. However, I still have a question about how to design my application. My questions are:
Is it okay to have a lot of verticles?
E.g. I create an HttpServer with a lot of endpoints for services. I would like to make different subroutes and set them up depending on the enabled features (services). Some of them will initiate long-term processes and will use the event bus to generate new events in the system. What is the best approach here?
For example, I can pass vertx into each endpoint as an argument and use it to create Router:
getVertx().createHttpServer()
        .requestHandler(router::accept)
        .listen(Config.GetEVotePort(), startedEvent -> { .. });
...
router.mountSubRouter("/api", HttpEndpoint.createHttpRoutes(
        getVertx(), in.getType()));
Or I can create each new endpoint for a service as a verticle instead of passing Vertx around. My question is mostly about whether it is okay to pass vertx as an argument, or whether I should implement a new verticle whenever I need it.
My 10 cents:
Yes; the point is that there can be thousands of verticles, because as I understand it the name comes from the word "particle", and the whole idea is a kind of UNIX-philosophy bet on the JVM. So write each particle/verticle to do one thing and do it well, and use text streams to communicate between particles, because that's a universal interface.
Then the answer to your question is about how many servers you have. How many JVMs are you going to fire up per server? How much memory do you expect each JVM to use? How many verticles can you run per JVM within memory limits? How big are your message sizes? What's the network bandwidth limit? How many messages are going through your system? And can the event bus handle this traffic?
Then it's all about how verticles work together, which basically means the event bus. What I think you want is for your HttpServer to route messages to the event bus, where different verticles are configured to listen on different "topics" (different text streams). If one verticle initiates a long-term process, it's triggered by an event on the bus and then puts the output back onto a topic for the next verticle / response verticle.
Again, that depends on how many servers/JVMs you have and whether you have a clustered event bus or not.
So one verticle ought to serve multiple endpoints, for example by using the Router to match a given request from the HttpServer to a Route, which then selects a Handler, and that Handler lives in a given verticle.
It's best to have a lot of verticles. That way your application is loosely coupled and can be easily load balanced. For example you may want 1-3 routing verticles, but a lot more worker verticles, if your load is high. And that way you can increase only the number of workers, without altering number of routing verticles.
I wouldn't suggest passing vertx as an argument. Use the EventBus instead, as #rupweb already suggested. Pass messages between your routing verticles and workers and back. That's the best practice you're looking for:
http://vertx.io/docs/vertx-core/java/#event_bus
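To make "pass messages, not the vertx reference" concrete, here is a dependency-free toy that mirrors the shape of the Vert.x event bus (the real calls are vertx.eventBus().consumer(address, handler) and send/publish; this stand-in is illustrative only):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy in-memory event bus: worker verticles register consumers on an
// address at deploy time, and a routing verticle publishes work items to
// that address instead of holding direct references to the workers.
class MiniEventBus {

    private final Map<String, List<Consumer<String>>> consumers = new ConcurrentHashMap<>();

    void consumer(String address, Consumer<String> handler) {
        consumers.computeIfAbsent(address, a -> new CopyOnWriteArrayList<>()).add(handler);
    }

    void publish(String address, String message) {
        consumers.getOrDefault(address, List.of()).forEach(h -> h.accept(message));
    }
}
```

Because the only coupling is the address string, you can add more worker verticles, or move them to another JVM on a clustered bus, without touching the routing code.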
