I have a server that sends bytes back to a client app. When the client app receives a "finished" response from the server, I want to gather together the bytes that arrived before that response. How do I append these bytes back together again?
When the bytes are sent to the server they are split up into segments of, say, 100 bytes, and when the server sends the bytes back to the client I want to gather these segments back into their original form.
I have had a look at concatenating two arrays, but is there a simpler way?
You could create a ByteArrayOutputStream, then write() the arrays to it, and finally use toByteArray().
Guava's Bytes class provides a Bytes.concat method, though it's more useful when you have a fixed number of arrays to concatenate than when you're gathering a variable number of them. Based on your description, ByteArrayOutputStream is probably what you want here, because it doesn't require you to keep each individual array you receive around in order to concatenate them; you can just add them to the output stream as they arrive.
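If it helps, here is a minimal sketch of the ByteArrayOutputStream approach; the 100-byte segment size and the InputStream are just stand-ins for however your client actually receives the data:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads fixed-size segments until the stream ends and appends each one
// to a growing buffer, then hands back the reassembled bytes.
public static byte[] collect(InputStream in) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] segment = new byte[100];
    int read;
    while ((read = in.read(segment)) != -1) {
        baos.write(segment, 0, read);   // append only the bytes actually read
    }
    return baos.toByteArray();          // all segments joined back together
}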
OK, so I have a server/client program (Java) for broadcasting the server's screen to the client. I convert the image to a byte array and send that to the client. I'm having issues sending multiple byte arrays: after the first one is sent and received by the client, I get errors for the second byte array. On the server side I have the code:
imageInByte = baos.toByteArray();
outToClient.writeInt(imageInByte.length);
System.out.println(Integer.toString(imageInByte.length));
This tells the client the size of the byte array coming in. On the client I have the code:
int c = inFromServer.readInt();
System.out.println(Integer.toString(c));
This is all well and good: the size computed on the server is the size that the client receives and prints. But when I actually try to write the byte array to the DataOutputStream, the length the client then reads is nothing like what ".length" reported.
For example, the first byte array is reported as 118207 bytes (without sending the actual byte array).
Then when I try to send the first byte array to the client, the client receives its length as 118207.
The second one, without actually writing the array, was reported as 126205, but when I do write the array the length read on the client is -2555936.
The third one is supposed to be 125709 but comes out at the client as 1229324289 bytes.
This is driving me crazy, and I have a project due next Thursday and can't continue without sorting this out. Any help would be greatly appreciated, and if this is laid out oddly or you don't understand me, just ask questions. Thanks.
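For reference, a common cause of garbage lengths like these is reading the next int before every byte of the previous image has been consumed, leaving the stream misaligned. A minimal length-prefixed sketch, assuming a DataOutputStream/DataInputStream pair on the two ends and using readFully on the client, looks roughly like this:

// Server: send the length, then exactly that many bytes.
byte[] imageInByte = baos.toByteArray();
outToClient.writeInt(imageInByte.length);
outToClient.write(imageInByte);
outToClient.flush();

// Client: read the length, then block until every byte has arrived.
int length = inFromServer.readInt();
byte[] image = new byte[length];
inFromServer.readFully(image);   // a plain read() may return fewer bytes than requested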
I need to write a UDP server which will wait for packets from uncorrelated devices (at most 10000 of them) sending small packets periodically, do some processing on the payload, and write the results to SQL. I'm done with the SQL part through JDBC, but the payload bytes keep bugging me: how should I access them? Until now I've worked with the payload mapped to a string and then converted the string to hex (two hex chars representing one byte). I'm aware that there's a better way to do this but I don't know it...
Do you not just want to create a DatagramSocket and receive DatagramPackets on it?
You need to specify a maximum length of packet by virtue of the buffer you use to create it, but then you'll be able to find out how much data was actually sent in the packet using getLength().
See the Java Tutorial for more details and an example.
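A minimal sketch of receiving a packet and keeping only the bytes that were actually sent; the port number and buffer size here are just example values:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;

public class UdpReceiver {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(4445)) {       // example port
            byte[] buffer = new byte[1024];                            // largest packet you expect
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);                                    // blocks until a packet arrives
            // getLength() is the number of bytes actually sent, not the buffer size
            byte[] payload = Arrays.copyOf(packet.getData(), packet.getLength());
            System.out.println(payload.length + " bytes received");
        }
    }
}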
There is a string (message body) and 3 different headers to be sent to 3 users using Java NIO sockets.
One way is to create a large byte buffer, put the message body at some position, and put the header in front of the message body.
That way I still need one copy of the message body and have to rewrite the headers. In my project the message body is around 14 KB. If the memory page is 2 KB, this is not efficient memory management.
My question: is there any way to avoid copying the large message string into the byte buffer? I guess C supports this with pointers. Is that true?
Thanks.
I wouldn't create the String, but create the ByteBuffer with the text you would place in the String.
Note: String is immutable, so it will already be a copy of some other source, e.g. a StringBuilder. Using a ByteBuffer directly will save you two copies.
You can place the message body in the ByteBuffer with enough padding at the start to put in the header later. This way the message body won't need to be copied again.
This is a job for gathering writes: the write(ByteBuffer[], ...) method.
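A minimal sketch of a gathering write over a SocketChannel, assuming 'channel' is already connected; the header and body contents are placeholders:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Sends one user's header followed by the shared body without ever copying
// the body into a combined buffer.
static void sendWithHeader(SocketChannel channel, ByteBuffer header, ByteBuffer sharedBody)
        throws IOException {
    ByteBuffer[] buffers = { header, sharedBody.duplicate() };   // duplicate() leaves the shared body's position untouched
    while (buffers[0].hasRemaining() || buffers[1].hasRemaining()) {
        channel.write(buffers);   // gathering write: header then body in one call
    }
}

The same 14 KB body buffer can be reused for all three users; only the small per-user header buffers differ.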
Hope all of you are doing great. I really need your help. My scenario is given below.
1 - I am getting continuous data (byte[] arrays) from my camera.
2 - I am sending those byte[] over UDP, but I have to split each array into chunks because I can't send an array that big. (P.S. I can't use JMF as it's not supported on my device (server side), so I have to send the byte[] manually over UDP.)
3 - I am receiving those byte[] chunks on the client side.
Now I have the following requirement:
- I want a player on the client side which plays these byte[] chunks, but in a continuous way. (On the client side I can use JMF.)
I don't know how I should combine all these byte[] chunks on the client side so that my video plays continuously.
Please help as you guys always do.
Best regards
ZB
As an option, you may consider vlcj for video streaming.
There are also examples of how to stream media from a camera with the VLC player, which may be of some interest.
If you are transmitting over UDP I assume you are aware of the standard caveats regarding ordering and dropped packets.
I would send the data in the following fashion.
Define a datagram format which has a header and a payload, with the header being something quite simple like:
<packetnumber><timestamp><payloadlength>
<payload>
So you'd create your chunk, which is an array of bytes, and calculate the payload length, current packet number and timestamp before appending the chunk. Then transmit the whole array; when it's received you can strip off the packet number and timestamp and use the payload length to retrieve the data.
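A sketch of packing and unpacking that header with a ByteBuffer; the field sizes here (4-byte packet number, 8-byte timestamp, 4-byte payload length) are just one possible choice:

import java.nio.ByteBuffer;

// Build: <packetnumber><timestamp><payloadlength><payload>
static byte[] buildPacket(int packetNumber, long timestamp, byte[] payload) {
    ByteBuffer buf = ByteBuffer.allocate(4 + 8 + 4 + payload.length);
    buf.putInt(packetNumber);
    buf.putLong(timestamp);
    buf.putInt(payload.length);
    buf.put(payload);
    return buf.array();
}

// Receiving side: read the header fields back out, then the payload.
static byte[] extractPayload(byte[] packet) {
    ByteBuffer buf = ByteBuffer.wrap(packet);
    int packetNumber = buf.getInt();     // used to order chunks in the playback buffer
    long timestamp = buf.getLong();
    int payloadLength = buf.getInt();
    byte[] payload = new byte[payloadLength];
    buf.get(payload);
    return payload;
}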
Then load the payload into a buffer. I'd be tempted to create an object which holds the packet number as a key plus an array of bytes, then keep a doubly linked list of these objects as the buffer. You use the packet number to decide where to insert into the list, and to play back you just keep taking the object with the lowest packet number.
You'll need to define some control data for packet number resetting etc. and flow control.
I may have made this more complex by ignoring common libraries but this is the logic I'd follow.
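And a simplified stand-in for the playback buffer described above, using a sorted map keyed by packet number instead of a hand-rolled doubly linked list; the idea is the same: insert by packet number, always play the lowest one.

import java.util.Map;
import java.util.TreeMap;

// Minimal reorder buffer: chunks go in keyed by packet number,
// playback always takes the lowest-numbered chunk still waiting.
class PlaybackBuffer {
    private final TreeMap<Integer, byte[]> chunks = new TreeMap<>();

    synchronized void add(int packetNumber, byte[] payload) {
        chunks.put(packetNumber, payload);               // out-of-order arrivals sort themselves
    }

    synchronized byte[] nextChunk() {
        Map.Entry<Integer, byte[]> entry = chunks.pollFirstEntry();   // lowest packet number, or null if empty
        return entry == null ? null : entry.getValue();
    }
}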
I have a server which waits for a connection from a client then sends an image to that client using the Socket class.
Several clients will be connecting over a short time, so I would like to compress the image before sending it.
The images are 1000 by 1000 pixel BufferedImages, and my current way of sending them is to iterate over all the pixels and send each pixel value, then reconstruct the image on the other side. I suspect this is not the best way to do things.
Can anyone give any advice on compression and a better method for sending images over a network?
Thanks.
Compression is very much horses for courses: which method will actually work better depends on where your image came from in the first place, and what your requirements are (principally, whether you allow lossy compression, and if so, what constraints you place on it).
To get started, try using ImageIO.write() to write the image in JPEG or PNG format to a ByteArrayOutputStream, whose resulting byte array you can then send down the socket[1]. If that gives you an acceptable result, then the advantage is it'll involve next-to-no development time.
If they don't give an acceptable result (either because you can't use lossy compression or because PNG compression doesn't give an acceptable compression ratio), then you may have to come up with something custom to suit your data. Only you know your data at the end of the day, but a general technique is to try to get it into a form that works well with a Deflater or some other standard algorithm. With a deflater, for example, you transform/re-order your data so that repeating patterns and runs of similar bytes are likely to occur close to one another. That might mean sending all of the top bits of the pixels, then all of the next-to-top bits, and so on, and not sending the bottom few bits of each component at all if they're effectively just noise.
Hopefully the JPEG/PNG option will get you the result you need, though, and you won't have to worry much further.
[1] Sorry, should have said -- you can obviously make the socket output stream the one that the image data is written into if you don't need to first query it for length, take a hash code...
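A minimal sketch of that route; the "png" format name could equally be "jpg" if lossy output is acceptable, and the length prefix is only needed if the client has to know the size up front:

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import javax.imageio.ImageIO;

// Encode the image in memory, then length-prefix it on the socket.
static void sendImage(BufferedImage image, Socket socket) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(image, "png", baos);          // lossless; use "jpg" for smaller, lossy output
    byte[] encoded = baos.toByteArray();

    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeInt(encoded.length);               // tell the client how much to read
    out.write(encoded);
    out.flush();
}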
JPEG is lossy, so if you need the exact same image on the other side, you can use a GZIPOutputStream on top of the socket's OutputStream to send the compressed data, and receive it on the other side through a GZIPInputStream on top of the socket's InputStream.
It's been a long time since I did any image processing in Java, but you can save the image on the server as a JPEG, and then send them a URI and let them retrieve it themselves.
If you are using the getInputStream and getOutputStream methods on Socket, try wrapping the streams with java.util.zip.GZIPInputStream and java.util.zip.GZIPOutputStream.
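A sketch of wrapping the socket streams, assuming raw bytes of a known length on both sides; note that finish() matters on the sending side so the compressed trailer is flushed without closing the socket:

import java.io.IOException;
import java.net.Socket;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sender: compress the raw bytes on the way out.
static void sendCompressed(Socket socket, byte[] raw) throws IOException {
    GZIPOutputStream gzipOut = new GZIPOutputStream(socket.getOutputStream());
    gzipOut.write(raw);
    gzipOut.finish();   // flush the remaining compressed blocks without closing the socket
}

// Receiver: decompress on the way in, looping until the expected length has arrived.
static byte[] receiveCompressed(Socket socket, int expectedLength) throws IOException {
    GZIPInputStream gzipIn = new GZIPInputStream(socket.getInputStream());
    byte[] raw = new byte[expectedLength];
    int offset = 0;
    while (offset < expectedLength) {
        int read = gzipIn.read(raw, offset, expectedLength - offset);
        if (read == -1) break;      // stream ended early
        offset += read;
    }
    return raw;
}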