I am currently trying to learn netty-socketio using their demo project. I keep seeing Thread.sleep(Integer.MAX_VALUE);. Can someone please tell me why this is important?
Addition: To clarify, I am not asking what the Thread.sleep() function does; obviously it pauses execution on a particular thread. I am asking about its relevance in this example socket server.
package com.corundumstudio.socketio.demo;

import com.corundumstudio.socketio.listener.*;
import com.corundumstudio.socketio.*;

public class NamespaceChatLauncher {

    public static void main(String[] args) throws InterruptedException {
        Configuration config = new Configuration();
        config.setHostname("localhost");
        config.setPort(9092);

        final SocketIOServer server = new SocketIOServer(config);

        final SocketIONamespace chat1namespace = server.addNamespace("/chat1");
        chat1namespace.addEventListener("message", ChatObject.class, new DataListener<ChatObject>() {
            @Override
            public void onData(SocketIOClient client, ChatObject data, AckRequest ackRequest) {
                // broadcast messages to all clients
                chat1namespace.getBroadcastOperations().sendEvent("message", data);
            }
        });

        final SocketIONamespace chat2namespace = server.addNamespace("/chat2");
        chat2namespace.addEventListener("message", ChatObject.class, new DataListener<ChatObject>() {
            @Override
            public void onData(SocketIOClient client, ChatObject data, AckRequest ackRequest) {
                // broadcast messages to all clients
                chat2namespace.getBroadcastOperations().sendEvent("message", data);
            }
        });

        server.start();

        //Thread.sleep(Integer.MAX_VALUE);
        Thread.sleep(4000);

        server.stop();
    }
}
So I figured out that this does not have anything to do with the server at all. Thread.sleep(Integer.MAX_VALUE); simply pauses execution of the main thread, which keeps the program from reaching server.stop(). To make this answer intuitive, I have changed Thread.sleep(Integer.MAX_VALUE) to Thread.sleep(4000) in the posted code block.
That is, this starts the server, runs it for 4 seconds and then stops it.
The sleep is only there to serve that purpose, keeping the server up between starting and stopping it, since this code was taken from a demo project.
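If the goal is instead to keep the demo server running until the JVM is told to shut down, rather than for a fixed sleep, one alternative (a minimal sketch of my own, reusing the final `server` variable from the demo above) is to park the main thread on a latch and stop the server from a shutdown hook:
import java.util.concurrent.CountDownLatch;

server.start();
// Stop the server when the JVM exits (e.g. on Ctrl+C).
Runtime.getRuntime().addShutdownHook(new Thread(server::stop));
// Park the main thread indefinitely instead of Thread.sleep(Integer.MAX_VALUE).
new CountDownLatch(1).await();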
I'm doing this for the first time. I'm going to read a stream of data using a WebSocket.
Here is my code snippet
RsvpApplication
@SpringBootApplication
public class RsvpApplication {

    private static final String MEETUP_RSVPS_ENDPOINT = "ws://stream.myapi.com/2/rsvps";

    public static void main(String[] args) {
        SpringApplication.run(RsvpApplication.class, args);
    }

    @Bean
    public ApplicationRunner initializeConnection(
            RsvpsWebSocketHandler rsvpsWebSocketHandler) {
        return args -> {
            System.out.println("initializeConnection");
            WebSocketClient rsvpsSocketClient = new StandardWebSocketClient();
            rsvpsSocketClient.doHandshake(
                    rsvpsWebSocketHandler, MEETUP_RSVPS_ENDPOINT);
        };
    }
}
RsvpsWebSocketHandler
@Component
class RsvpsWebSocketHandler extends AbstractWebSocketHandler {

    private static final Logger logger =
            Logger.getLogger(RsvpsWebSocketHandler.class.getName());

    private final RsvpsKafkaProducer rsvpsKafkaProducer;

    public RsvpsWebSocketHandler(RsvpsKafkaProducer rsvpsKafkaProducer) {
        this.rsvpsKafkaProducer = rsvpsKafkaProducer;
    }

    @Override
    public void handleMessage(WebSocketSession session,
                              WebSocketMessage<?> message) {
        logger.log(Level.INFO, "New RSVP:\n {0}", message.getPayload());
        System.out.println("handleMessage");
        rsvpsKafkaProducer.sendRsvpMessage(message);
    }
}
RsvpsKafkaProducer
@Component
@EnableBinding(Source.class)
public class RsvpsKafkaProducer {

    private static final int SENDING_MESSAGE_TIMEOUT_MS = 10000;

    private final Source source;

    public RsvpsKafkaProducer(Source source) {
        this.source = source;
    }

    public void sendRsvpMessage(WebSocketMessage<?> message) {
        System.out.println("sendRsvpMessage");
        source.output()
              .send(MessageBuilder.withPayload(message.getPayload())
                                  .build(),
                    SENDING_MESSAGE_TIMEOUT_MS);
    }
}
As far as I know and have read about WebSockets, they need a one-time connection, and the stream of data then flows continuously until either party (client or server) stops.
I'm building this for the first time, so I'm trying to cover the major scenarios that can come up while dealing with 10,000+ messages per minute. There are two Kafka brokers in total, with enough space.
What can be done if the connection is lost, so that once it reconnects the WebSocket resumes consuming messages from where it left off at the last failure and keeps pushing them into the Kafka broker?
What can be done to put the WebSocket on hold, so it stops pushing messages into the broker once a threshold of unprocessed messages (in the broker) has been reached?
What can be done, when the broker has reached its threshold, to run a separate process that checks the available space in the broker and signals when it is safe to resume pushing messages into the Kafka broker?
Please also share any other issues that need to be considered while setting this up.
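For the first question, here is a rough, untested sketch of my own showing how a reconnect could be wired into a handler like the one in the question (the endpoint constant is duplicated from RsvpApplication for illustration). Note that resuming exactly where the stream left off would also require an application-level marker, such as a last-seen RSVP id, since the WebSocket itself has no replay offset:
import org.springframework.stereotype.Component;
import org.springframework.web.socket.CloseStatus;
import org.springframework.web.socket.WebSocketMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.client.WebSocketClient;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.handler.AbstractWebSocketHandler;

@Component
class ReconnectingRsvpsWebSocketHandler extends AbstractWebSocketHandler {

    // Same endpoint as in RsvpApplication; duplicated here only for the sketch.
    private static final String MEETUP_RSVPS_ENDPOINT = "ws://stream.myapi.com/2/rsvps";

    private final RsvpsKafkaProducer rsvpsKafkaProducer;
    private final WebSocketClient client = new StandardWebSocketClient();

    ReconnectingRsvpsWebSocketHandler(RsvpsKafkaProducer rsvpsKafkaProducer) {
        this.rsvpsKafkaProducer = rsvpsKafkaProducer;
    }

    @Override
    public void handleMessage(WebSocketSession session, WebSocketMessage<?> message) {
        rsvpsKafkaProducer.sendRsvpMessage(message);
    }

    @Override
    public void afterConnectionClosed(WebSocketSession session, CloseStatus status) {
        // Naive reconnect: re-run the handshake as soon as the connection drops.
        // A real implementation would add a back-off delay and a retry limit.
        client.doHandshake(this, MEETUP_RSVPS_ENDPOINT);
    }
}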
I want my GameServer to run separately from the game itself, so players (clients) can join one static GameServer and I can handle them together and see how many clients are currently connected.
But the problem is, I can only run one of these classes (GameServer.main() and DesktopLauncher.main()) at a time. GameServer must always be running in the background if I'm not wrong, right? Yet, I can't run the game itself without stopping the GameServer. (It gets stuck saying Executing task 'DesktopLauncher.main()'...) I have some pictures showing what's going on and what the project structure looks like:
Pic 1, Pic 2, Pic 3
Here is my project structure :
core
-java
--com.mygdx.game
---Multiplayer
----Packets
-----GameClient.java
-----GameServer.java
-----GameClientListener.java
-----GameServerListener.java
---screens
---utils
---Application.java
GameServer.class
package com.mygdx.game.Multiplayer;

imports..

public class GameServer {
    public int TCP_PORT, UDP_PORT;
    public Server server;
    public GameServerListener listener;
    public static int totalClients = 0;

    public GameServer() {
        TCP_PORT = UDP_PORT = xxxx;
        server = new Server(TCP_PORT, UDP_PORT);
        listener = new GameServerListener(this);
        startServer();
    }

    public void startServer() {
        server.addListener(listener);
        try {
            server.bind(TCP_PORT, UDP_PORT);
            //server.bind(TCP_PORT);
        } catch (IOException e) {
            e.printStackTrace();
        }
        registerPackets();
        server.start();
    }

    private void registerPackets() {
        server.getKryo().register(LoginRequest.class);
        server.getKryo().register(LoginResponse.class);
        server.getKryo().register(ChoiceRequest.class);
        server.getKryo().register(ChoiceRespond.class);
    }

    public static void main(String[] args) {
        new GameServer();
    }
}
Thanks for any help.
Had the same issue. Downgrading to Android Studio 3.4 worked for me.
Check: https://www.reddit.com/r/libgdx/comments/fxrlsm/unable_to_run_two_or_more_application_in_parallel/
I have an akka (akka-actor_2.11) application that we use for stress testing one of our systems. The top-level actor, called RunCoordinatorActor, is able to know, based on the responses coming from its subordinates, when the work is finished.
When the work is finished, the RunCoordinatorActor makes a call to getContext().system().shutdown(), and in the main method there is a loop checking for the system.isTerminated() call to return true. All works fine and I am happy with the way it works. However, both the system.shutdown() and system.isTerminated() methods are marked as deprecated, and I am trying to figure out the right way to implement a graceful shutdown without using them.
Here is my main class:
public static void main(String[] args) throws Exception {
    if (new ArgumentsValidator().validate(args)) {
        // If the arguments are valid then we can load spring application
        // context on here.
        final ApplicationContext context = new AnnotationConfigApplicationContext(
                M6ApplicationContext.class);
        // Use an akka system to be able to send messages in parallel
        // without doing the low level thread manipulation ourselves.
        final ActorSystem system = context.getBean(ActorSystem.class);
        final ActorRef runCoordinator = system.actorOf(SPRING_EXT_PROVIDER.get(system)
                .props("RunCoordinatorActor"), "runCoordinator");
        Thread.sleep(1000);
        runCoordinator.tell(new StartTesting(), ActorRef.noSender());
        do {
            LOGGER.info("Waiting for the process to finish");
            Thread.sleep(60000L);
            // What would be the alternative for isTerminated() code below
        } while (!system.isTerminated());
    }
}
and here is my call to shutdown inside the RunCoordinator class:
@Named("RunCoordinatorActor")
@Scope("prototype")
public class RunCoordinator extends UntypedActor {

    @Override
    public void onReceive(Object message) throws Exception {
        ....
        if (message instanceof WorkDone) {
            getContext().system().shutdown();
        }
    }
}
I can see there is another method called terminate() that returns a Future, and if I replace the shutdown call with it, everything works OK too.
if (message instanceof WorkDone) {
    Future<Terminated> work = getContext().system().terminate();
    // But where should I put the call work.isCompleted()
    // and how would I make the main aware of it
}
I could find some Scala examples here, shutdown-patterns-in-akka-2, but they still use system.shutdown in the end, so I am not sure how up to date that post still is.
Thank you in advance for your inputs.
The solution was not that hard to find once I looked closer into the ActorSystem API.
All I had to do was to add this to my RunCoordinator class:
if (message instanceof WorkDone) {
    getContext().system().terminate();
}
And I had a Future<Terminated> workDone = system.whenTerminated(); defined in my main class, which after the change became:
public static void main(String[] args) throws Exception {
    if (new ArgumentsValidator().validate(args)) {
        // If the arguments are valid then we can load spring application
        // context on here.
        final ApplicationContext context = new AnnotationConfigApplicationContext(
                M6ApplicationContext.class);
        // Use an akka system to be able to send messages in parallel
        // without doing the low level thread manipulation ourselves.
        final ActorSystem system = context.getBean(ActorSystem.class);
        final Future<Terminated> workDone = system.whenTerminated();
        final ActorRef runCoordinator = system.actorOf(SPRING_EXT_PROVIDER.get(system)
                .props("RunCoordinatorActor"), "runCoordinator");
        runCoordinator.tell(new StartTesting(), ActorRef.noSender());
        do {
            LOGGER.info("Waiting for the process to finish");
            Thread.sleep(60000L);
        } while (!workDone.isCompleted());
    }
}
Everything worked very well after this. I am still surprised Google could not take me to any existing example showing how to do it.
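As a side note (a small untested variant I'm adding, not part of the original answer): instead of polling workDone.isCompleted() every 60 seconds, the main thread can block directly on the termination future inside the same main method:
import scala.concurrent.Await;
import scala.concurrent.duration.Duration;

// Block until the actor system has fully terminated, instead of polling
// workDone.isCompleted() in a sleep loop.
Await.result(system.whenTerminated(), Duration.Inf());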
I have a Java app that uses the Jetty WebSocket Client, version 9.x. It works fine for text messages sent from the server, but the binary listener is never invoked. I have a Javascript client implementation which I'm basically duplicating. I'm doing the same exact thing in Javascript that I do in Java, calling the same server. The Javascript works, and the Java doesn't. So I'm thinking that something is not configured properly in Jetty for binary listeners.
For example, the server is sending blob data. I know that in the Javascript client, I can set the binaryType to either arraybuffer or blob. I figured there may be a similar setting required in Jetty, but I've looked all through the API and searched many examples online. There are precious few examples of binary listeners online, and no mention anywhere of setting the binaryType or any other special setting required to make binary listeners work.
Here's a consolidated representation of my code. The code is spread throughout various classes, so this is not a stand-alone app, but I think it shows what I'm doing. The server is implemented with libwebsockets.
Client implementation
import org.eclipse.jetty.websocket.client.WebSocketClient;
import org.eclipse.jetty.websocket.client.ClientUpgradeRequest;

WebSocketClient client = new WebSocketClient();
client.start();
client.setMaxBinaryMessageBufferSize((int) 500e6); // just to be sure
ClientUpgradeRequest request = new ClientUpgradeRequest();
request.setSubProtocols("pipe-data");
SimpleSocket socket = new SimpleSocket(handler); // handler is a SocketHandlerBase defined elsewhere
client.connect(socket, uri, request);
Socket implementation
@WebSocket
public class SimpleSocket {

    @SuppressWarnings("unused")
    private Session session;
    private SocketHandlerBase handler;
    private boolean connected = false;

    public SimpleSocket(SocketHandlerBase listener) {
        this.handler = listener;
    }

    @OnWebSocketClose
    public void onClose(int statusCode, String reason) {
        this.handler.onClose(statusCode, reason);
        this.connected = false;
    }

    @OnWebSocketConnect
    public void onConnect(Session session) {
        this.handler.onConnect(session);
        this.connected = true;
    }

    // invoked when text messages are sent
    @OnWebSocketMessage
    public void onMessage(String msg) {
        this.handler.onMessage(msg);
    }

    // does not get invoked when binary data is sent
    @OnWebSocketMessage
    public void onMessage(byte buf[], int offset, int length) {
        this.handler.onMessage(buf, offset, length);
    }

    public boolean isConnected() {
        return this.connected;
    }

    public SocketHandlerBase getHandler() {
        return this.handler;
    }
}
It turned out there was a hard-to-find problem with the server I was calling. A very specific configuration of invocation arguments was causing the binary listener not to be called. Nothing about the Jetty client or WebSockets in general was involved here.
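For anyone hitting the same symptom, one way to sanity-check the client side (a quick sketch of my own, assuming the server echoes binary frames back; this was not part of the actual fix) is to send a small binary frame from the onConnect callback and watch whether the byte[] overload fires:
import java.io.IOException;
import java.nio.ByteBuffer;

@OnWebSocketConnect
public void onConnect(Session session) {
    this.handler.onConnect(session);
    this.connected = true;
    try {
        // Send a tiny binary frame; if the server echoes it back, the
        // onMessage(byte[], int, int) overload above should be invoked.
        session.getRemote().sendBytes(ByteBuffer.wrap(new byte[] {1, 2, 3}));
    } catch (IOException e) {
        e.printStackTrace();
    }
}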
The question is in the title, but to elaborate a bit: if I'm writing an NIO application in Java using the Sun/Oracle NIO APIs or a framework like Netty, is it possible to have a client "connect" as a subscriber even while there is no server bound to the host/port it connects to? What I effectively want is to not care whether the server is dead, but as soon as it is online and sends a message, I receive it as if it had been there the whole time. Take this ZMQ server and client, for example.
Starting the client first....
import org.zeromq.ZMQ;
import java.util.Date;

public class ZMQClient {
    public static void main(String[] args) {
        // Prepare our context and subscriber
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket subscriber = context.socket(ZMQ.SUB);

        subscriber.connect("tcp://localhost:5563");
        subscriber.subscribe("".getBytes());
        while (true) {
            // Read envelope with address
            String address = new String(subscriber.recv(0));
            // Read message contents
            String contents = new String(subscriber.recv(0));
            System.out.println(address + " : " + contents + " - " + new Date());
        }
    }
}
...and some time later the server
import org.zeromq.ZMQ;
import java.util.Date;

public class ZMQServer {
    public static void main(String[] args) throws Exception {
        // Prepare our context and publisher
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket publisher = context.socket(ZMQ.PUB);

        publisher.bind("tcp://127.0.0.1:5563");
        while (true) {
            // Write two messages, each with an envelope and content
            publisher.send("".getBytes(), ZMQ.SNDMORE);
            publisher.send("We don't want to see this".getBytes(), 0);
            publisher.send("".getBytes(), ZMQ.SNDMORE);
            publisher.send("We would like to see this".getBytes(), 0);
            System.out.println("Sent # " + new Date());
            Thread.sleep(1000);
        }
    }
}
ZMQ supports this behavior (allowing clients to subscribe, etc., before the server is up) because it spawns a separate thread for handling socket communication. If the endpoint of the socket is not available, the thread takes care of queuing requests until the connection becomes available. This is all done transparently for you.
So, sure, you could probably adapt this technique to other APIs, but you'd have to take care of all the grunt work yourself. A rough sketch of what that might look like with Netty follows.
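Here is a minimal, untested sketch of that grunt work with Netty (host, port, and handler pipeline are my own placeholder assumptions, not from the original question): the client keeps retrying the connect until the server appears, and re-arms the reconnect whenever the channel closes, which roughly mimics ZMQ's connect-before-bind behaviour. Queuing outgoing messages while disconnected would be an additional buffer you maintain yourself.
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

import java.util.concurrent.TimeUnit;

public class ReconnectingClient {

    private static final String HOST = "localhost"; // placeholder endpoint
    private static final int PORT = 5563;           // placeholder endpoint

    private final EventLoopGroup group = new NioEventLoopGroup();
    private final Bootstrap bootstrap = new Bootstrap()
            .group(group)
            .channel(NioSocketChannel.class)
            .handler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) {
                    // Add your subscriber/decoder handlers to ch.pipeline() here.
                }
            });

    public void connect() {
        bootstrap.connect(HOST, PORT).addListener((ChannelFutureListener) f -> {
            if (f.isSuccess()) {
                // Re-arm the reconnect whenever the connection drops.
                f.channel().closeFuture().addListener(c -> scheduleReconnect());
            } else {
                // Server not up yet; try again shortly.
                scheduleReconnect();
            }
        });
    }

    private void scheduleReconnect() {
        group.schedule(this::connect, 1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) {
        new ReconnectingClient().connect();
    }
}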