I have a game in which users must answer questions within a 10-second interval; when the 10 seconds are up, a user who has not already answered is no longer allowed to answer. At that point the server should also refuse the answer, because the server knows just as well that the 10 seconds have elapsed.
To keep things valid and to make sure the client cannot be manipulated in any way, I have decided to run the timer on the server and simply display it on the client (displaying the timer is important).
The question is how to display the timer to the user while making sure it stays in sync with the authoritative timer on the server.
Socket.IO handles the interaction between the client and server while the user is actually playing; HTTP handles everything else the user does in the game (login, signup, account, payment, etc.).
My stack is Android with Java, a Node.js server, and Socket.IO on both. Everything is set up; the only open issue is the approach to take for synchronizing the timer between the client and the server.
One approach I was considering is to serve a question and set a variable on the socket like this (on the server):
var t = new Date();
t.setSeconds(t.getSeconds() + 12); // 10-second window plus 2 seconds extra for network lag
socket.allowed_time = t;
Then send the question to the client and show a 10-second countdown to the user. Whenever the client responds, check whether the current time is greater than socket.allowed_time. This method seems unreliable given network lag.
A game like 8 Ball Pool, which I play a lot, has real-time gameplay between two users with the clock synchronized between them, and I have noticed that network lag affects that game a lot as well.
I don't think there is a way that will both 1) cope with network lag1, and 2) cope with clients pretending there is network lag.
If you don't care that a lagged client / user sometimes gets penalized, then you simply run a 10 second timer on the server side. If the client's reply is not received before the timer goes off, then it is rejected.
You can just start the user's on-screen timer when the question is received by the client. If there is substantial lag, the timer may not reach zero until it is (noticeably) too late to reply. You could compensate for this by estimating the average lag in both directions and adjusting the number of seconds on the clock.
1 - ... without "unfairly" penalizing the lagged client.
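On the Android side, a minimal countdown display could look roughly like this; the server stays the authority on the deadline, and the lag allowance mirrors the 2 extra seconds in the server snippet above (the widget and helper names are placeholders):

// Client side (Android): start the visible countdown the moment the question
// arrives over the socket. This is purely for display; the server still decides
// whether an answer came in on time. Call this on the UI thread.
// questionTimerText and disableAnswerInput() are placeholder names.
private void startQuestionCountdown(long estimatedOneWayLagMs) {
    long visibleMs = 10_000 - estimatedOneWayLagMs; // shave off the estimated delivery lag
    new CountDownTimer(visibleMs, 250) {
        @Override
        public void onTick(long millisUntilFinished) {
            questionTimerText.setText(String.valueOf(millisUntilFinished / 1000 + 1));
        }

        @Override
        public void onFinish() {
            questionTimerText.setText("0");
            disableAnswerInput(); // stop accepting answers locally; the server enforces it too
        }
    }.start();
}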
Related
A PeriodicWorkRequest has a minimum repeat interval of 15 minutes. But I can see that, for example, Google Maps location sharing updates more frequently than that, and Facebook Messenger can also receive messages almost instantly.
I would like to show a notification to the user when a new message arrives. My application has to work on a local network, so Firebase is not an option. I have to send a JSON request to the server, and if there is a new message, I show a notification to the user.
Regarding FCM:
FCM, which is available on all devices with Google Play, takes on the weight of subscribing to and receiving push events under all the resource constraints Android has been introducing over the years.
It's tightly coupled with the OS and unified (one entity, one persistent connection for all apps on your device), which is why it works :)
Regarding Frequency of your Work:
Given your requirement of more frequent pings to the server, you would need a service that runs all the time, i.e. a Foreground Service.
It is resource consuming though, so good luck convincing the user with a good reason why it should stay alive all the time.
I think you've managed to make the client-server interaction possible, since identifying a server in a local network is a huge task in itself.
Use something like this in your service:
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    // Runs for 15 minutes, ticking every second, then restarts itself.
    CountDownTimer timer = new CountDownTimer(15 * 60 * 1000, 1000) {
        @Override
        public void onTick(long millisUntilFinished) {
            // Execute your task here, once per tick.
            // Increase the countdown interval from 1000 ms if you need it to run less often.
        }

        @Override
        public void onFinish() {
            // Restart the timer so the work keeps repeating.
            this.start();
        }
    };
    timer.start();
    return START_STICKY;
}
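On newer Android versions a foreground service also has to promote itself with startForeground() and show a visible notification. A rough sketch (the channel id and texts are placeholders; NotificationCompat comes from androidx.core.app):

// Call this from onCreate() of the same service.
private static final String CHANNEL_ID = "polling_channel"; // placeholder id

private void promoteToForeground() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        NotificationChannel channel = new NotificationChannel(
                CHANNEL_ID, "Background polling", NotificationManager.IMPORTANCE_LOW);
        getSystemService(NotificationManager.class).createNotificationChannel(channel);
    }
    Notification notification = new NotificationCompat.Builder(this, CHANNEL_ID)
            .setContentTitle("Checking for new messages")
            .setSmallIcon(android.R.drawable.ic_dialog_info)
            .build();
    startForeground(1, notification);
}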
I am afraid this is not going to be possible without a set of workarounds, which means you might not get consistent behavior.
@Arvind has done a very good job explaining the benefits of the Firebase service, and that is the recommended approach for achieving such a task.
First, I'd like to point out that these restrictions on WorkManager exist because Android has been suffering (among other things) from developers abusing various mechanisms to keep their software running; at the end of the day users' batteries suffered from those abuses, and since Android 6 Google has been trying to address the issue. There is a good read about Doze mode and how to work with it in the official Android documentation.
I am pointing this out because I have been trying to build a chat service that does not rely on Firebase, and I really don't want you to waste as much time as I did banging your head against a wall. There are things you simply cannot fight: if the device enters a "deep-sleep" mode, sometimes you can only accept it.
My approach
Please keep the users' interests and the life of their batteries in mind, and be as gentle as you can with their devices. This is just a workaround for the restrictions that have been imposed on us, and I discourage this approach because of the amount of work it takes to pull off and how easily it can be misused.
My solution
Essentially, to get notified (i.e. to get your code running) in an Android app, you are going to want to receive system events, or Broadcasts. This is where you set up a BroadcastReceiver: Intents get delivered to it and you can act on them accordingly. But you have to be quick, because you only have about 10 seconds of runtime before the OS kills the process again. Ideally you would have a really fast server, so I/O times stay small and you can fit within the 10-second restriction as often as possible.
So essentially you would monitor a combination of various sources in order to get notified (i.e. receive Broadcasts) whenever their state changes. Here are a few ideas:
WiFi state (which will also be useful to see if you can reach your local server)
Bluetooth Low Energy packets (or Nearby which may solve the entirety of your problem depending on Nearby's capabilities)
WorkManager as you already pointed out.
AlarmManager to schedule a broadcast of intents every so often.
Geofencing (although it involves reading the user's location; you can set really small geofences around the office building and get notified by a Broadcast when users cross that geofence)
So whenever you receive a Broadcast from any of these sources, you handle it from within the same BroadcastReceiver.
From the body of this BroadcastReceiver you would poll the local network's server to check whether your user has new messages and, if so, raise a notification. It is important to keep the amount of work and the I/O time to a minimum, since it all adds up and you only have about 10 seconds.
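A minimal sketch of such a receiver; MessageApi and showNewMessageNotification are placeholders for your own HTTP call and notification code:

// Register this (in the manifest or at runtime) for whichever broadcasts you
// picked from the list above, e.g. connectivity changes.
public class NewMessagePollReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // goAsync() lets us leave onReceive() quickly, but the ~10 second budget
        // still applies, so keep the server call short.
        final PendingResult pendingResult = goAsync();
        new Thread(() -> {
            try {
                // Placeholder: a quick JSON request to the local server.
                if (MessageApi.hasNewMessages(context)) {
                    showNewMessageNotification(context); // placeholder
                }
            } finally {
                pendingResult.finish();
            }
        }).start();
    }
}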
You can get past the 10-second mark if you launch a foreground service; then that window is extended to roughly a 10-minute mark, and you will need a visible notification telling the user that you are checking whether they have any new messages.
Keep in mind
Don't stress the user's battery too much, or Android will penalise your app and you will end up being notified less often, or not at all.
Be gentle with the user. If the user has to force-kill your app at some point, it will stop receiving any Broadcasts or running any work tasks.
This solution can behave differently across devices. Since the decision to notify your app is made by the OS, and different vendors customize the OS (Redmi, Samsung, Meizu...), you are unlikely to end up with consistent behavior across all devices.
You don't have control over these things; the OS does.
Within reason, try to time the Broadcasts to your BroadcastReceiver in spans of about 3 minutes, so that you are always receiving a Broadcast well below the 15-minute mark.
I am creating an IRC bot using Pircbot that can respond to certain requests (e.g. "!time" provides local time). One of the functions I am building is a giveaway system that randomly selects a user from the currently online users and gives them a prize.
I would like to enhance the system by forcing the winner to type "!accept" within 30 minutes of winning in order to claim the prize. However I would like the bot to still function, meaning I can't freeze the entire thread for 30 minutes waiting for a message.
A few of the ways I am thinking of doing it feel a bit too hacky to me:
I can store the winner's name in a variable or a .properties file and constantly watch for the "!accept" command. If "!accept" is sent by the winner (the name in the variable) and the message was sent within 30 minutes, confirm the winner. The downside is that if the bot restarts or is taken offline during that 30-minute period, it could cause a lot of continuity problems, especially with a .properties file.
Create a runnable thread, sleep for 30 minutes and then check all new messages for the !accept command. This sounds extra hacky with hacky sauce on top.
Dance my problems away.
Mark the time you choose the winner; maybe even have another Thread or Timer event that gets triggered after 30 minutes to reset it.
If the input is "!accept", it comes from the correct user, AND the difference between the "marked" time and now is less than 30 minutes, you have a happy user.
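A rough sketch of that with PircBot (the field names are just illustrative):

// Inside your PircBot subclass. pendingWinner and winnerChosenAt are
// illustrative field names; the claim window is 30 minutes.
private String pendingWinner;
private long winnerChosenAt;
private static final long CLAIM_WINDOW_MS = 30 * 60 * 1000L;

private void pickWinner(String channel, String winner) {
    pendingWinner = winner;
    winnerChosenAt = System.currentTimeMillis();
    sendMessage(channel, winner + " has won! Type !accept within 30 minutes to claim.");
}

@Override
protected void onMessage(String channel, String sender, String login,
                         String hostname, String message) {
    if (pendingWinner != null
            && sender.equalsIgnoreCase(pendingWinner)
            && "!accept".equalsIgnoreCase(message.trim())
            && System.currentTimeMillis() - winnerChosenAt < CLAIM_WINDOW_MS) {
        sendMessage(channel, sender + " claimed the prize!");
        pendingWinner = null; // the prize can only be claimed once
    }
    // ... other commands like !time are handled here as before
}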
I've been trying to figure out how to call a method when the connection has been forcefully terminated by the client, or when the client just loses the connection in general. Currently I have a List<> of all of my online accounts; however, if the player doesn't log out of the server naturally, the account stays in the list.
I've been looking through the documentation and searching Google, wording my question in dozens of different ways, but I can't find the answer I'm looking for.
Basically, I need a way to figure out which channel was disconnected and pass it as a parameter to a method. Is this possible? It almost has to be.
I guess this can be done using a thread on both the client and the server side.
Make a Date variable lastActive in the client class that the client updates every 5 minutes (let's say). Another thread runs on the server side every 10 minutes to check this value; if lastActive is older than 10 minutes, remove the player from the list. You can change these intervals according to your needs; a sketch is below.
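A small sketch of that server-side sweep, assuming a hypothetical Player class with a getLastActive() timestamp (millis) and a thread-safe onlinePlayers collection:

// Uses java.util.concurrent. Every 10 minutes, drop players whose lastActive
// timestamp is older than 10 minutes.
ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();
sweeper.scheduleAtFixedRate(() -> {
    long cutoff = System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(10);
    onlinePlayers.removeIf(p -> p.getLastActive() < cutoff);
}, 10, 10, TimeUnit.MINUTES);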
Reliably detecting socket disconnects is a common problem and not unique to Netty. The issue as you described is that your peer may not reliably terminate their end of the connection. For example: peer loses power, peer application crashes, peer machine crashes, etc... One common solution is to close the connection if no read activity has been detected for longer than some time interval. Netty provides some utilities to ease this process such as the ReadTimeoutHandler. Setting the time interval is application specific and will depend on your protocol. If your desired interval is sufficiently small you may have to add additional messages to your protocol to serve as a heartbeat message (a simple request/response to indicate each side is talking to each other).
From a Netty-specific point of view, you can register a listener with the Channel's CloseFuture that will notify you when the channel is closed. If you set up the ReadTimeoutHandler as described above, you will be notified of close events after your timeout interval passes with no activity detected, or when the channel is closed normally.
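A minimal sketch of both pieces with Netty 4-style APIs; the account bookkeeping names are placeholders:

// In your ChannelInitializer: close the connection if nothing has been read
// for 60 seconds (io.netty.handler.timeout.ReadTimeoutHandler).
ch.pipeline().addLast(new ReadTimeoutHandler(60));
ch.pipeline().addLast(new YourGameHandler()); // placeholder for your own handler

// Wherever you track online accounts: run cleanup when this channel closes,
// whether it timed out, errored, or was closed normally.
channel.closeFuture().addListener((ChannelFutureListener) future -> {
    Channel closed = future.channel();
    onlineAccounts.removeIf(account -> account.getChannel() == closed); // placeholder cleanup
});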
I am making a client-server MMO-style game. So far I have the framework set up so that the server and clients interact with each other to exchange state updates. The server maintains the game state, periodically calculates the next state, and every so often (every n milliseconds) sends the new state out to all the clients. This new state can be viewed and reacted to on the client side by the user, and those actions are sent back to the server to be processed and included in the next update.
The obvious problem is that it takes time for these updates to travel between server and clients. If a client acts to attack an enemy, by the time that update has gotten back to the server, it's very possible the server has progressed the game state far enough that the enemy is no longer in the same spot, and out of range.
In order to combat this problem, I have been trying to come up with a good solution. I have looked at the following, and it has helped some, but not completely: Multi Player Game synchronization. I have already come to the conclusion that instead of just transmitting the current state of the game, I can transmit other information such as direction (or target position for AI movement) and speed. From this, I have part of what is needed to 'guess', on the client side, what the actual state is (as the server sees it) by progressing the game state n milliseconds into the future.
The problem is determining the amount of time by which to progress the state, because it depends on the lag between server and client, which can vary considerably. Also, should I progress the game state to what it would currently be when the client views it (i.e. only account for the time it took the update to reach the client), or should I progress it far enough that when the client's response reaches the server it will be the correct state by then (i.e. account for both the to and from journeys)?
Any suggestions?
To reiterate:
1) What is the best way to calculate the amount of time between send and receive?
2) Should I progress the client side state far enough to count for the entire round trip, or just the time it takes to get the data from the server to the client?
EDIT: What I have come up with so far
Since I already have many packets going back and forth between the clients and the server, I do not want to add to that traffic if I don't have to. Currently, the clients send status update packets (UDP) to the server roughly every 150 milliseconds (only if something has changed), and these are received and processed by the server. Currently, the server sends no response to these packets.
To start off, I will have the clients attempt to estimate their lag time, defaulting to something like 50 to 100 milliseconds. I am proposing that about every 2 seconds (per client) the server will immediately respond to one of these packets, sending back the packet index in a special timing update packet. When the client receives the timing packet, it will use the index to look up when that packet was sent and use the elapsed time as the new lag estimate.
This should keep the clients reasonably up to date on their lag, without too much excess network traffic.
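A rough sketch of that client-side bookkeeping (the packet index and field names are illustrative):

// Remember when each status packet was sent, keyed by its index. When the server
// echoes an index back in a timing packet, the elapsed time is a full round trip;
// half of it approximates the one-way lag. Uses java.util.concurrent.
private final Map<Integer, Long> sendTimes = new ConcurrentHashMap<>();
private volatile long estimatedOneWayLagMs = 75; // default guess in the 50-100 ms range

void onStatusPacketSent(int packetIndex) {
    sendTimes.put(packetIndex, System.currentTimeMillis());
}

void onTimingPacketReceived(int echoedIndex) {
    Long sentAt = sendTimes.remove(echoedIndex);
    if (sentAt != null) {
        long roundTripMs = System.currentTimeMillis() - sentAt;
        estimatedOneWayLagMs = roundTripMs / 2;
    }
}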
Sound acceptable, or is there a better way? This still doesn't answer question two.
First off, just as an FYI, if you are worrying about delays of less than 1 second you are starting to get out of the realm of realistic lag for an MMO. The way all of the big MMOs handle this is by basically having two different "games" going at the same time - there is an underlying game engine which is handling all of the math, character states, applying the numerical changes, and then there is the graphical client.
The first "game," the math and calculations, are a lot closer conceptually to a traditional console game (like the old MUDs). Think in terms of messages passing back and forth, with a very high degree of ACID isolation. These messages worry a lot more about accuracy, but you should assume that these may take 1-2 seconds (or more) to be processed and updated. This is the "rules lawyer" that is ensuring that hit points are being calculated correctly, etc.
The second "game" is the graphical client. This client is really focused on maintaining the illusion that things are happening much more quickly than the first game, but also synchronizing the events that are coming in with the graphical appearance. This graphical client often just flat makes things up that aren't critical. This client is responsible for the 30 fps+ graphics. That's why a lot of these graphical clients use tricks like starting the attack animation when the user presses the button, but not actually resolving the animation until the first game gets around to resolving the attack.
I know this is a little off from the literal interpretation of your question, but once you get outside two machines sitting next to each other on a network 100ms is really optimistic...
2) Should I progress the client side state far enough to count for the entire round trip, or just the time it takes to get the data from the server to the client?
Let's assume that the server sends the state at time T0, the client sees it in time T1, the player reacts in time T2, and the server obtains their answer in time T3, and processes it instantly.
Here, the round trip delay is T1-T0 + T3-T2. In an ideal world, T0=T1 and T2=T3,
and the only delay between the observing time and the processing of the player's action is the player's reaction time, i.e., T2-T1.
In the real world it's T3-T0.
So in order to simulate the ideal world you need to subtract the whole round trip delay:
T2-T1 = (T3-T0) - (T1-T0 + T3-T2)
This means that a player on a slower network sees a more advanced state than a player on a fast network.
However, this is no advantage for them, since it takes longer till their reaction gets processed.
Of course, it could get funny in the case of two players sitting next to each other but using networks of different speeds.
But that is quite an improbable scenario, isn't it?
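A made-up example with concrete numbers: if the state is sent at T0 = 0 ms, reaches the client at T1 = 80 ms, the player reacts at T2 = 580 ms, and the server receives the reply at T3 = 650 ms, then the round-trip delay is 80 + 70 = 150 ms. Extrapolating the displayed state roughly 150 ms into the future makes the effective delay between observed state and processed action match the ideal T2 - T1 = 500 ms.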
There's a problem with the whole procedure:
You're extrapolating into the future, and this may lead to nonsensical situations.
Some of them, like diving into walls, can easily be prevented, but those depending on players' interaction cannot.1
Maybe you could turn your idea upside down:
Instead of forecasting, try to evaluate the player's action as of the time T3 - (T1-T0 + T3-T2). If you determine that a character would have been hit this way, reduce its hit points accordingly.
This may be easier and more realistic than the original idea, or it may be worse, or not applicable at all. Just an idea.
1 Imagine two players running against each other.
According to the extrapolating they pass each other on the right side.
In fact, one of them changes direction, and in the end they pass each other on the left side.
One way to solve this kind of problem is running the game simulation on the client and the server.
So instead of simulating the world just on the server, do it on the client as well. Just send what the client did (for example "player hit monster") to the server. The server runs the same simulation and checks the events.
If they don't match (player cheating, lags), it sends a veto to the client and the action isn't recorded as successful on the server. This means all the other clients don't notice it (the server doesn't forward the action to the other clients).
That should be a pretty efficient way to handle the lag, especially if you have a lot of PvM battles (instead of PvP): Since the monster is a simulation, it doesn't matter if there is a long lag between the client and the server.
That said: most networks are so fast that the lag should be in the range of a few milliseconds. That means you "just" have to make the server fast enough that it can respond within, say, <100 ms, and the players won't notice.
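A minimal sketch of that veto check on the server; the event, connection, and simulation types are placeholders:

// Server side: re-run the same deterministic simulation and only broadcast
// client-reported events it agrees with. ClientEvent, ClientConnection,
// serverSimulation etc. are placeholder types and fields.
void onClientEvent(ClientEvent event, ClientConnection sender) {
    boolean plausible = serverSimulation.validate(event); // e.g. "player hit monster"
    if (plausible) {
        serverSimulation.apply(event);
        broadcastToOthers(event, sender);            // other clients see the action
    } else {
        sender.send(new VetoMessage(event.getId())); // client rolls the action back
    }
}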
I am trying to perform some computations on a server. For this, the client initially inputs some data, which I capture through JavaScript. I would then make an XMLHttpRequest to the server to send this data. Let's say the computation takes an hour and the client leaves or switches off the machine.
In practice, I would probably use polling from the client side to determine whether the result is available. But is there some way I could implement this as a callback? For instance, the next time the client logs in, the server would just contact the client-side JavaScript to pass the result... Any suggestions? I am thinking all this requires some kind of web server sitting on the client side, but I was wondering if there is a better approach.
Your best bet is to just poll when the user gets to the web page.
What I did in something similar was to gradually change my polling interval: I would start with several seconds and then gradually increase it. In your case, just poll after 15 minutes, then increase the interval by 5 minutes each time the check fails; if the user closes the browser and comes back, you can simply start the polling again.
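For example (made-up numbers): start the first poll 15 minutes after the job was submitted, then poll again at 20, 25, 30, ... minutes until the result shows up, and reset that schedule whenever the user reopens the page.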
If you want some callback, you could just send an email when it is finished, to let the user know.
Also, while you are doing the processing, try to give some feedback as to how far you have gone, how much longer it may be, anything to show that progress is being made, that the browser isn't locked up. If nothing else, show a time with how long the processing has been going on, to give the user some sense of progress.