I've been coding around in circles trying to work this one out. I'm new to Java and have been lurking here to find things out for a while, but I really can't get past this one. I adapted some code by Desmond Shaw (http://www.codepool.biz/how-to-implement-a-java-websocket-server-for-image-transmission-with-jetty.html) to create a WebSocket that transfers JPG images from a server to remote clients. I want to push files from the server to the browser windows of connected clients when specific files on the server change (they are pages of music scores generated in real time by Max/MSP), but I don't seem to be able to cancel the timers I'm creating to watch these files in my home directory for changes.
More specifically, the remote browser clients send messages (via JavaScript buttons operated by the users) over a WebSocket connection to specify which file they want updated on their screen (e.g. part 1 refers to a file on the server called "1.png" and is the violin part; part 2 is the server file "2.png" and is the cello part, and so on). The WebSocket handler running on my server uses this to send the right file to that client whenever a file watcher detects it has changed on the server. I can get everything going except stopping the timers running the file watchers when a client requests a different part (say the violin player wants to look at the cello player's part). Below is the method I have edited to respond to the messages from the clients:
@OnWebSocketMessage //part request from websocket client (remote browser)
public void onMessage(String message) {
    System.out.println("Message: '" + message + "' received");
    sFclient = message;
    // note: strings must be compared with equals(), not ==
    if (sFclient.equals("1") || sFclient.equals("2")
            || sFclient.equals("3") || sFclient.equals("4")) {
        System.out.println("Part " + sFclient + " joined");
    } else {
        sFclientOut = 0;
    }
}
public void onChange(File file) {
    System.out.println("File " + file.getName() + " has changed!");
    // FileWatcher is a TimerTask subclass (from the codepool.biz example)
    // whose onChange() is called when the watched file's timestamp changes
    TimerTask task = new FileWatcher(new File("/Users/benedict/" + sFclient + ".png")) {
        @Override
        protected void onChange(File changed) {
            try {
                File f = new File("/Users/benedict/" + sFclient + ".png");
                BufferedImage bi = ImageIO.read(f);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                ImageIO.write(bi, "png", out);
                ByteBuffer byteBuffer = ByteBuffer.wrap(out.toByteArray());
                mSession.getRemote().sendBytes(byteBuffer);
                out.close();
                byteBuffer.clear();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    };
    Timer timer1 = new Timer();
    timer1.schedule(task, new Date(), 20);
    if (sFclientOut == 0) {
        task.cancel();
        timer1.cancel();
    }
}
I mostly had it working using an if statement which I've since abandoned, but after all the editing it probably doesn't make as much sense any more. Any help at all would be appreciated, but my main question is: should I be trying to cancel the threads handling the TimerTasks, or use a completely different approach altogether, such as a switch statement? I have tried sending a "0" message before every new message from the browsers to cancel the old threads, but then the TimerTasks don't start at all. I think that's because cancelling a TimerTask means it can't be run again?
Thanks,
Benedict
OK, here's the final working solution. I just had to cancel the tasks based on a message from the clients (in this case a "0"):
else if (message.equals("0")) {
    zerocounter = zerocounter + 1;
    if (zerocounter >= 2) {
        task.cancel();
    }
}
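For anyone landing here later, the cancel-and-replace pattern can also be kept in one place. The sketch below uses invented names and a plain Runnable instead of the question's FileWatcher; the key point is that a cancelled Timer or TimerTask can never be reused, so switching parts must cancel the old pair and create a fresh one:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the cancel-and-replace pattern. Class and method names
// are made up for illustration; the periodic action stands in for
// polling the part file and pushing it over the WebSocket.
public class PartWatcherSketch {
    private Timer currentTimer;

    // Cancel any running watcher, then start one for the new part.
    public synchronized void switchPart(final Runnable onTick) {
        if (currentTimer != null) {
            currentTimer.cancel();   // a cancelled Timer cannot be restarted
        }
        currentTimer = new Timer(true);  // daemon thread
        currentTimer.schedule(new TimerTask() {
            @Override
            public void run() {
                onTick.run();        // e.g. check lastModified() and send
            }
        }, 0, 20);                   // same 20 ms period as the question
    }

    public synchronized void stop() {
        if (currentTimer != null) {
            currentTimer.cancel();
            currentTimer = null;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PartWatcherSketch w = new PartWatcherSketch();
        final AtomicInteger ticks = new AtomicInteger();
        w.switchPart(ticks::incrementAndGet);
        Thread.sleep(100);
        w.switchPart(ticks::incrementAndGet); // old timer cancelled here
        Thread.sleep(100);
        w.stop();
        System.out.println("ticks observed: " + (ticks.get() > 0));
    }
}
```

Because `switchPart` always cancels before rescheduling, no "0" bookkeeping message is needed.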
Related
I have the task to stream an IP camera's video stream (RTP/RTSP in h264) via J2EE application server to a browser. For this I am using GStreamer 1.21.3 (latest dev release) with the gstreamer-java library on top. We are aiming towards a Websocket solution as the traditional HLS introduces significant latency.
After having figured out what to do with the gst-launch executable on the commandline, I ended up with this code (for the moment):
/*
* Configuration for RTSP over TCP to WebSocket:
* 1. rtspsrc to ip camera
* 2. rtph264depay ! h264parse to extract the h264 content
* 3. mp4mux to create fragmented MP4
* 4. appsink to grab the frames and use them in Websocket server
*/
final String gstPipeline = String.format("rtspsrc onvif-mode=true protocols=tcp user-id=%s user-pw=%s location=%s latency=200"
+ " ! rtph264depay ! h264parse"
+ " ! mp4mux streamable=true fragment-duration=5000"
+ " ! appsink name=sink", USERNAME, PASSWORD, uri);
final Pipeline pipeline = initGStreamerPipeline(gstPipeline);
// Add listener to consume the incoming data
final AppSink sink = (AppSink) pipeline.getElementByName("sink");
sink.setCaps(Caps.anyCaps());
sink.set("emit-signals", true);
sink.set("max-buffers", 50);
sink.connect((AppSink.NEW_SAMPLE) appsink -> {
final Sample sample = appsink.pullSample();
if (sample == null)
{
return FlowReturn.OK;
}
final Buffer buffer = sample.getBuffer();
try
{
final ByteBuffer buf = buffer.map(false);
LOGGER.debug("Unicast HTTP/TCP message received: {}", new String(Hex.encodeHex(buf, true)));
if (session != null)
{
try
{
buf.flip();
session.getRemote().sendBytes(buf);
}
catch (final Exception e)
{
LOGGER.error("Failed to send data via WebSocket", e);
}
}
}
finally
{
buffer.unmap();
}
return FlowReturn.OK;
});
sink.connect((AppSink.EOS) s -> LOGGER.info("Appsink is EOS"));
sink.connect((AppSink.NEW_PREROLL) s -> {
LOGGER.info("Appsink NEW_PREROLL");
return FlowReturn.OK;
});
LOGGER.info("Connecting to {}", uri);
/**
* Start the pipeline. Attach a bus listener to call Gst.quit on EOS or error.
*/
pipeline.getBus().connect((Bus.ERROR) ((source, code, message) -> {
LOGGER.info(message);
Gst.quit();
}));
pipeline.getBus().connect((Bus.EOS) (source) -> Gst.quit());
pipeline.play();
/**
* Wait until Gst.quit() called.
*/
LOGGER.info("Starting to consume media stream...");
Gst.main();
pipeline.stop();
server.stop();
Now I seem to be stuck here, because the AppSink at the end of the pipeline never gets its new_sample signal triggered. The complete example works like a charm when I replace the appsink with a filesink. I have noticed that there are some other threads (like this one) with similar problems which normally boil down to "you forgot to set emit-signals=true". Any ideas why my appsink gets no data?
Update:
It appears that the problem is the URL I am passing to the pipeline string. It has two query parameters: http://192.168.xx.xx:544/streaming?video=0&meta=1. If I remove the second parameter (and the ampersand along with it), the pipeline works. Unfortunately I found no docs on how to escape URLs correctly so that GStreamer can read them. Can anyone share such documentation?
Update 2:
It starts getting weird now: it looks like the name of the URL parameter is the problem. I started replacing it with some dummy argument and it works, so the ampersand is not the problem. Then I used VLC media player to consume the stream with the &meta=1 in place, which also worked. Is it possible that the string "meta" is treated specially in GStreamer?
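For reference while experimenting, standard form-style percent-encoding would turn the troublesome characters into escape sequences. Whether rtspsrc actually accepts a percent-encoded query (e.g. %26 for &) is an assumption I could not verify against any GStreamer documentation, so treat this as a sketch only:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch only: percent-encode a query value so '&' and '=' no longer
// act as delimiters inside the pipeline description string. Whether
// GStreamer's rtspsrc decodes these escapes the same way is unverified.
public class UriEscapeSketch {

    public static String encode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // '&' becomes %26 and '=' becomes %3D
        System.out.println(encode("video=0&meta=1")); // video%3D0%26meta%3D1
    }
}
```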
I'm currently developing a system that gets data from a battery pack of an electric vehicle, stores it in a database and displays it on a screen.
So I have a Java application that reads the data from a hardware interface, interprets the values and sends them via socket to a Node.js server. (The Java app and the web server are running on the same computer, so URL = localhost.)
JAVA APP:
s = new Socket();
s.connect(new InetSocketAddress(URL, PORT));
out = new PrintWriter(s.getOutputStream(), true);

for (DataEntry e : entries) {
    out.printf(e.toJson());
}
NODE:
sock.on('data', function(data) {
    try {
        var data = JSON.parse(data);
        db.serialize(function() {
            db.run("INSERT INTO DataEntry(value, MessageField, time) values(" + data.value + "," + data.messageFieldID + ", STRFTIME('%Y-%m-%d %H:%M:%f'))");
        });
    } catch(e) {}
});
I get about 20 messages per second from the hardware interface, which are converted into 100 JSON strings. So the web server has to process one message every 10 ms, which I thought would be manageable.
But here is the problem: if my entries list (the foreach loop) has more than 2 elements, the web server receives 2 or more of the JSON strings in one 'data' event.
So the first message was divided into 2 parts (ID 41, 42) and was processed correctly. But the second message was divided into 5 parts (ID 43-47), and the first 4 of them weren't sent alone, so only the last one was saved correctly.
How can I ensure that each JSON string arrives as its own message?
Isn't there something like a buffer, so that the socket.on method is called once for every message I send?
I hope one of you can help me
Thank you!
Benedikt :)
TCP sockets are just streams and you shouldn't make any assumptions about how much of a "message" is contained in a single packet.
A simple solution to this is to terminate each message with a newline character since JSON cannot contain such a character. From there it's a simple matter of buffering data until you see a newline character. Then call JSON.parse() on your buffer. For example:
var buf = '';
sock.on('data', function(data) {
buf += data;
var p;
// Use a while loop as it may be possible to have multiple
// messages buffered depending on chunk contents
while (~(p = buf.indexOf('\n'))) {
try {
var msg = JSON.parse(buf.slice(0, p));
} catch (ex) {
console.log('Bad JSON message: ' + ex);
}
buf = buf.slice(p + 1);
}
});
You will also need to change printf() to println() on the Java-side so that a newline character will be appended to each message.
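To make the sender side concrete, here is a minimal self-contained sketch of the framing (the class and method names are invented; in the question's code it simply means replacing `out.printf(e.toJson())` with `out.println(e.toJson())`):

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.List;

// Sketch of newline-delimited framing on the sender side: one JSON
// document per line, written with println so a line terminator closes
// each frame. A literal newline can never occur inside a JSON string,
// so the receiver can safely split on it.
public class NewlineFramingSketch {

    // Frame a batch of JSON strings the same way the socket writer would.
    public static String frame(Iterable<String> jsonMessages) {
        StringWriter sw = new StringWriter();
        PrintWriter out = new PrintWriter(sw, true);
        for (String json : jsonMessages) {
            out.println(json);  // println, not printf: appends the delimiter
        }
        return sw.toString();
    }

    public static void main(String[] args) {
        String wire = frame(List.of("{\"value\":1}", "{\"value\":2}"));
        // The receiver can now split on the newline to recover boundaries.
        System.out.print(wire);
    }
}
```

Note that `println` appends the platform line separator (`\r\n` on Windows); the Node-side `indexOf('\n')` loop still works because a stray `\r` is valid whitespace for `JSON.parse`.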
I want my app to do some tasks whether the app is currently active or in the background.
What I have done is:
private static void myCurrMethod() {
    boolean checkIn = false;
    if (1 == 1) {
        checkIn = true;
    }
    // Sending message
    Time now = new Time();
    now.setToNow();
    final String res = "Time is " + now.hour + ":" + now.minute + ":"
            + now.second + " stat " + checkIn;
    new Thread(new Runnable() {
        public void run() {
            try {
                Socket clientSocket = new Socket("xx.xx.xx.xx", 6790);
                DataOutputStream outToServer = new DataOutputStream(
                        clientSocket.getOutputStream());
                outToServer.writeBytes(res + '\n');
                clientSocket.close();
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }).start();
    // End of it
    Handler handler = new Handler();
    handler.postDelayed(new Runnable() {
        @Override
        public void run() {
            myCurrMethod();
        }
    }, (TIME_INTERVAL_WIFI_EN * MS_PER_MIN));
}
The socket part is there to check how often the message gets sent.
I have tried this while using another app, and it works completely fine as long as my phone is connected to my computer (which has ADT installed). The problem is that it does not work properly, or only works a few times, when my phone is not connected to my computer.
I found on the internet that there are ways to run background work, but they all seem to cover running things in the background only while the app is active. There is also a lot of conflicting advice: some people suggest Services, some suggest AsyncTask, and others suggest an approach like mine. I am confused: what is the best way to do this?
Note: I don't want this to run completely detached from my app. For instance, if my app has not been started at all, it shouldn't run; if my app is removed from the app list, it shouldn't run either. Basically, what I want is the default behaviour we used to have in old Symbian apps.
Apps do not "close" - they might be killed to free resources when they are not active, but the "recents" list does not mean "active" (it can be on the list and be killed already, or not on the list and still be in an active task stack).
If you want to run things off the UI thread, then use another thread, AsyncTask, etc. They will live as long as your app does and continue to run even if another activity is on the screen.
If you need your process to survive your activity lifecycle or start when your app is not active (or continue running the process even if the activity is killed) then you need a service.
There are many references here on SO to help implement any solution. In your particular case, the connection to the computer - or a charger - may make a difference in how many active tasks Android allows to continue running, so your inactive activity is being killed faster when not connected. You likely need a service to "finish up" whatever processing is expected to occur while the app is inactive or in the background.
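As a sketch of the scheduling half only, a ScheduledExecutorService can replace the recursive Handler.postDelayed chain. The names and interval below are invented, and in a real app this object would live inside a started Service (started in `onStartCommand`, stopped in `onDestroy`) so it is tied to the app's lifecycle rather than a single activity's:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: periodic work off the UI thread without rescheduling itself.
// In an Android app, start() would be called from a Service's
// onStartCommand() and stop() from its onDestroy().
public class PeriodicSenderSketch {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Runnable sendStatus, long intervalMillis) {
        // Runs sendStatus immediately, then every intervalMillis.
        scheduler.scheduleAtFixedRate(sendStatus, 0, intervalMillis,
                TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        PeriodicSenderSketch p = new PeriodicSenderSketch();
        AtomicInteger sent = new AtomicInteger();
        p.start(sent::incrementAndGet, 20);  // stand-in for the socket write
        Thread.sleep(100);
        p.stop();
        System.out.println("ran more than once: " + (sent.get() >= 2));
    }
}
```

This removes the self-rescheduling Runnable entirely; cancelling is a single `stop()` call.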
I'm evaluating Node.js for possible replacement of my current push functionality on a Java Web App. I wrote a simple long polling server that works like an intermediary between the client and the Java back-end. The client makes a request to subscribe, and then the Java server can notify subscribed clients by calling Node.js. It seems to be working fine so far, but I got the following message which points to a memory leak:
(node) warning: possible EventEmitter memory leak detected. 11 listeners added.
Use emitter.setMaxListeners() to increase limit.
Trace
at EventEmitter.addListener (events.js:168:15)
at EventEmitter.once (events.js:189:8)
at route (C:\Users\Juan Pablo\pushserver.js:42:12)
at Server.onRequest (C:\Users\Juan Pablo\pushserver.js:32:3)
at Server.EventEmitter.emit (events.js:91:17)
at HTTPParser.parser.onIncoming (http.js:1793:12)
at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:111:23
)
at Socket.socket.ondata (http.js:1690:22)
at TCP.onread (net.js:402:27)
I have a line of code that logs the existing listeners whenever a notify event is emitted. I've had it running for a while and it shows that there is only one listener per subscribed client (as there should be), but this line wasn't in the code when I got the warning message. The code was exactly the same except for that line, though.
This is the push server's code (it's a bit rudimentary since I'm still learning Node.js):
var http = require('http');
var url = require("url");
var qs = require("querystring");
var events = require('events');
var util = require('util');
var emitter = new events.EventEmitter();
function onRequest(request, response)
{
var pathname = url.parse(request.url).pathname;
console.log("Request for " + pathname + " received.");
request.setEncoding("utf8");
if (request.method == 'POST')
{
var postData = "";
request.addListener("data", function(postDataChunk)
{
postData += postDataChunk;
console.log("Received POST data chunk '"+ postDataChunk + "'.");
});
request.addListener("end", function()
{
route(pathname, response, postData);
});
}
else if (request.method=='GET')
{
var urlParts = url.parse(request.url, true);
route(pathname, response, urlParts.query);
}
}
function route(pathname, response, data)
{
switch (pathname)
{
case "/subscription":
emitter.once("event:notify", function(ids)
{
response.writeHead(200, {"Content-Type": "text/html", "Access-Control-Allow-Origin": "*"});
response.write(JSON.stringify(ids));
response.end();
});
break;
case "/notification":
//show how many listeners exist
console.log(util.inspect(emitter.listeners('event:notify')));
emitter.emit("event:notify", data.ids);
response.writeHead(200, {"Content-Type": "text/html", "Access-Control-Allow-Origin": "*"});
response.write(JSON.stringify(true));
response.end();
break;
default:
console.log("No request handler found for " + pathname);
response.writeHead(404, {"Content-Type": "text/plain", "Access-Control-Allow-Origin": "*"});
response.write("404 - Not found");
response.end();
break;
}
}
http.createServer(onRequest).listen(8888);
console.log('Server running at http://127.0.0.1:8888/');
I was under the impression that using emitter.once would automatically remove the event listener once it was used, so I don't know how 11 listeners could've been added if there was only one client connected. I'm thinking that perhaps if the client disconnects while waiting for a notification then the associated connection resources are not disposed.
I'm wondering whether I have to manually handle disconnections and if there is actually a leak in there somewhere. Any advice is welcome. Thanks.
If anyone is interested, the above code does leak. The leak occurs when a client disconnects before a notification is sent. To fix this, it is necessary to remove the event listener when a client disconnects abruptly, such as:
case "/subscription":
var notify = function(ids)
{
response.writeHead(200, {"Content-Type": "text/html", "Access-Control-Allow-Origin": "*"});
response.write(JSON.stringify(ids));
response.end();
}
emitter.once("event:notify", notify);
//event will be removed when connection is closed
request.on("close", function()
{
emitter.removeListener("event:notify", notify);
});
break;
I'm working on a simple program to read a continuous stream of data from a plain old serial port. The program is written in Processing. Performing a simple read of data and dumping into to the console works perfectly fine, but whenever I add any other functionality (graphing,db entry) to the program, the port starts to become de-synchronized and all data from the serial port starts to become corrupt.
The incoming data from the serial port is in the following format :
A [TAB] //start flag
Data 1 [TAB]
Data 2 [TAB]
Data 3 [TAB]
Data 4 [TAB]
Data 5 [TAB]
Data 6 [TAB]
Data 7 [TAB]
Data 8 [TAB]
COUNT [TAB] //count of number of messages sent
Z [CR] //end flag followed by carriage return
So as stated, if I run the program below and simply have it output to the console, it runs fine for several hours without issue. If I add the graphing functionality or the database connectivity, the serial data starts to come in garbled and the serial port handler is never able to decode a message correctly again. I've tried all sorts of workarounds, thinking it was a timing problem, but reducing the speed of the serial port doesn't seem to change anything.
As you can see in the serial port handler, I provide a large buffer in case the terminating Z character is chopped off. I check that the A and Z characters are in the correct places, and in turn that the extracted substring is the correct length. When the program starts to fail, the substring continuously fails this check until the program crashes. Any ideas? I've tried several different ways of reading the serial port and am beginning to wonder if I'm missing something stupid here.
//Serial Port Tester
import processing.serial.*;
import processing.net.*;
import org.gwoptics.graphics.graph2D.Graph2D;
import org.gwoptics.graphics.graph2D.traces.ILine2DEquation;
import org.gwoptics.graphics.graph2D.traces.RollingLine2DTrace;
import de.bezier.data.sql.*;
SQLite db;
RollingLine2DTrace r1,r2,r3,r4;
Graph2D g;
Serial mSerialport; //the serial port
String[] svalues = new String[8]; //string values
int[] values = new int[8]; //int values
int endflag = 90; //Z
byte seperator = 13; //carriage return
class eq1 implements ILine2DEquation {
public double computePoint(double x,int pos) {
//data function for graph/plot
return (values[0] - 32768);
}
}
void connectDB()
{
db = new SQLite( this, "data.sqlite" );
if ( db.connect() )
{
db.query( "SELECT name as \"Name\" FROM SQLITE_MASTER where type=\"table\"" );
while (db.next())
{
println( db.getString("Name") );
}
}
}
void setup () {
size(1200, 1000);
connectDB();
println(Serial.list());
String portName = Serial.list()[3];
mSerialport = new Serial(this, portName, 115200);
mSerialport.clear();
mSerialport.bufferUntil(endflag); //generate serial event when endflag is received
background(0);
smooth();
//graph setup
r1 = new RollingLine2DTrace(new eq1(),250,0.1f);
r1.setTraceColour(255, 0, 0);
g = new Graph2D(this, 1080, 500, false);
g.setYAxisMax(10000);
g.addTrace(r1);
g.position.y = 50;
g.position.x = 100;
g.setYAxisTickSpacing(500);
g.setXAxisMax(10f);
}
void draw () {
background(200);
//g.draw(); enable this and program crashes quickly
}
void serialEvent (Serial mSerialport)
{
byte[] inBuffer = new byte[200];
mSerialport.readBytesUntil(seperator, inBuffer);
String inString = new String(inBuffer);
String subString = "";
int startFlag = inString.indexOf("A");
int endFlag = inString.indexOf("Z");
if (startFlag == 0 && endFlag == 48)
{
subString = inString.substring(startFlag+1,endFlag);
}
else
{
println("ERROR: BAD MESSAGE DISCARDED!");
subString = "";
}
if ( subString.length() == 47)
{
svalues = (splitTokens(subString));
values = int(splitTokens(subString));
println(svalues);
// if (db.connect()) //enable this and program crashes quickly
// {
// if ( svalues[0] != null && svalues[7] != null)
// {
// statement = svalues[7] + ", " + svalues[0] + ", " + svalues[1] + ", " + svalues[2] + ", " + svalues[3] + ", " + svalues[4] + ", " + svalues[5] + ", " + svalues[6];
// db.execute( "INSERT INTO rawdata (messageid,press1,press2,press3,press4,light1,light2,io1) VALUES (" + statement + ");" );
// }
// }
}
}
While I'm not familiar with your specific platform, my first thought from reading your problem description is that you still have a timing problem. At 115,200bps, data is coming in rather quickly-- more than 10 characters every millisecond. As such, if you spend precious time opening a database (slow file IO) or drawing graphics (also potentially slow), you might well not be able to keep up with the data.
As such, it might be a good idea to put the serial port processing on its own thread, interrupt, etc. That might make the multitasking much easier. Again, this is just an educated guess.
Also, you say that your program "crashes" when you enable the other operations. Do you mean that the entire process actually crashes, or that you get corrupted data, or both? Is it possible that you are overrunning your 200 byte inBuffer[]? At 115kbps, it wouldn't take but 20ms to do so.
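As a rough illustration of that separation (the serial source is simulated here; a real version would call `onFrame` from `serialEvent`, and the names are invented), a thread-safe queue lets the reader run at full speed while the slow graphing/database side drains at its own pace:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: decouple the fast serial reader from slow consumers.
// The reader thread only enqueues complete frames; drawing and
// database writes happen elsewhere and can never stall the reads.
public class SerialDecoupleSketch {
    final BlockingQueue<String> messages = new ArrayBlockingQueue<>(1000);

    // Called from the serial-reading thread for each complete frame.
    void onFrame(String frame) {
        messages.offer(frame);  // drop if full rather than block the reader
    }

    public static void main(String[] args) throws InterruptedException {
        SerialDecoupleSketch s = new SerialDecoupleSketch();
        // Simulated reader thread producing frames at full speed.
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                s.onFrame("A\t" + i + "\tZ");
            }
        });
        reader.start();
        reader.join();
        // A slow consumer (draw loop, DB insert) drains later via poll().
        System.out.println("frames queued: " + s.messages.size());
    }
}
```

In Processing, `draw()` could then `poll()` a handful of frames per repaint instead of the serial event doing the heavy work itself.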