I have a server which waits for a connection from a client then sends an image to that client using the Socket class.
Several clients will be connecting over a short time, so I would like to compress the image before sending it.
The images are 1000 by 1000 pixel BufferedImages, and my current way of sending them is to iterate over all pixels and send each pixel's value, then reconstruct the image on the other side. I suspect this is not the best way to do things.
Can anyone give any advice on compression and a better method for sending images over a network?
Thanks.
Compression is very much horses for courses: which method will actually work better depends on where your image came from in the first place, and what your requirements are (principally, whether you allow lossy compression, and if so, what constraints you place on it).
To get started, try using ImageIO.write() to write the image in JPEG or PNG format to a ByteArrayOutputStream, whose resulting byte array you can then send down the socket[1]. If that gives you an acceptable result, then the advantage is it'll involve next-to-no development time.
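For illustration, a minimal sketch of that approach (the sendImage method and the length-prefix framing are my own conventions, not anything standard):

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import javax.imageio.ImageIO;

// Encode the BufferedImage as PNG into a byte array, then send the
// length followed by the bytes so the client knows how much to read.
static void sendImage(BufferedImage image, Socket socket) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(image, "png", baos);   // or "jpg" if lossy is acceptable
    byte[] bytes = baos.toByteArray();

    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeInt(bytes.length);          // length prefix
    out.write(bytes);
    out.flush();
}

On the client, read the length with DataInputStream.readInt(), readFully() that many bytes, and decode with ImageIO.read(new ByteArrayInputStream(bytes)).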
If they don't give an acceptable result (either because you can't use lossy compression or because PNG compression doesn't give an acceptable compression ratio), then you may have to come up with something custom to suit your data. Only you know your data at the end of the day, but a general technique is to try to get your data into a form where it works well with a Deflater or some other standard algorithm. With a deflater, for example, you transform/re-order your data so that repeating patterns and runs of similar bytes are likely to occur close to one another. That might mean sending all of the top bits of the pixels, then all the next-top bits, etc., and not sending the bottom few bits of each component at all if they're effectively just noise.
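To make the reordering idea concrete, here's a rough sketch under assumptions of my own: an 8-bit grayscale image, packing the top bit plane of every pixel, then the next plane, and so on, dropping the noisy bottom planes entirely before deflating:

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

// Runs of similar pixels become long runs of identical bits in each
// plane, which Deflater compresses well.
static byte[] compressBitPlanes(byte[] pixels, int planesToKeep) {
    byte[] planes = new byte[(pixels.length + 7) / 8 * planesToKeep];
    int outBit = 0;
    for (int plane = 7; plane > 7 - planesToKeep; plane--) {
        for (byte p : pixels) {
            int bit = (p >> plane) & 1;
            planes[outBit >> 3] |= bit << (7 - (outBit & 7));
            outBit++;
        }
    }
    Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
    deflater.setInput(planes);
    deflater.finish();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[8192];
    while (!deflater.finished()) {
        out.write(buf, 0, deflater.deflate(buf));
    }
    deflater.end();
    return out.toByteArray();
}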
Hopefully the JPEG/PNG option will get you the result you need, though, and you won't have to worry much further.
[1] Sorry, should have said -- you can obviously make the socket output stream the one that the image data is written into, if you don't need to first query it for length, take a hash code...
JPEG is lossy, so if you need the exact same image on the other side, you can use a GZIPOutputStream on top of the socket's OutputStream to send the compressed data, and receive it on the other side through a GZIPInputStream on top of the socket's InputStream.
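A minimal sketch of that arrangement (the send/receive method names are mine). Note that the GZIP stream has to be finished so its trailer is flushed, and closing it closes the underlying socket stream too, so this suits one image per connection:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sender side: compress the raw image bytes onto the socket.
static void send(Socket socket, byte[] rawPixelBytes) throws IOException {
    GZIPOutputStream out = new GZIPOutputStream(socket.getOutputStream());
    out.write(rawPixelBytes);
    out.finish();   // flush the GZIP trailer so the receiver can finish decoding
}

// Receiver side: decompress everything back into a byte array.
static byte[] receive(Socket socket) throws IOException {
    GZIPInputStream in = new GZIPInputStream(socket.getInputStream());
    ByteArrayOutputStream image = new ByteArrayOutputStream();
    byte[] buf = new byte[8192];
    for (int n; (n = in.read(buf)) != -1; ) {
        image.write(buf, 0, n);
    }
    return image.toByteArray();   // identical to what the sender wrote
}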
It's been a long time since I did any image processing in Java, but you can save the image on the server as a JPEG, and then send them a URI and let them retrieve it themselves.
If you are using the getInputStream and getOutputStream methods on Socket, try wrapping the streams with java.util.zip.GZIPInputStream and java.util.zip.GZIPOutputStream.
Related
Good afternoon everyone,
First of all, I'll say that this is only for personal use, in a certain way: it's for little projects to improve my Java knowledge. My idea is to build this kind of thing to better understand how developers work with sockets and bytes, as I really like understanding these things for my future ideas.
At the moment I'm writing a lightweight HTTP server in Java to understand how it works, and I've been reading documentation but still have difficulty with parts of the official documentation. The main problem I'm facing (and I'd like to know whether it's related or not) is that the Content-Length header reports a higher length than the amount of data I get from the BufferedReader. I don't know if the issue is the way bytes are parsed into chars in the BufferedReader, so that it ends up with less data; probably what I have to do is treat this part as binary, i.e. read the bytes of the InputStream directly. But here comes the real problem I'm facing.
Since a Reader reads a certain amount of bytes ahead and uses them as its buffer, that data from the InputStream has been consumed by the Reader and is no longer on the stream, so calling read() on the stream would just return -1 as there are no more bytes to read. A multipart body is divided into multiple parts separated by a boundary, with a newline delimiting the part headers from the content. I still need to get the headers as a String to process them, but the content should be handled as binary data; and without modifying the buffer length (which would require knowing the exact length of the headers alone), the content will most probably end up inside the BufferedReader's buffer. Is it possible to recover that content even after it has been consumed by the BufferedReader, or should I find a way to read it as binary without it being processed at all?
As I said, I'm new to working with sockets and services, so I don't know exactly what the possibilities are or whether this is another kind of issue entirely, so any help would be appreciated. Thank you in advance.
Answer from Remy Lebeau, found in the comments, which turned out to be useful for me:
since multipart data is both textual and binary, you are going to have to do your own buffering of the socket data so you have more control and know where the data switches back and forth. At the very least, since you can read binary data directly from a BufferedInputStream, and access its internal buffer, you can let it handle the actual buffering for you, and it is not difficult to write a custom readLine() method that can read a line of text from a BufferedInputStream without using BufferedReader
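For illustration, here's a sketch of the kind of readLine() Remy describes (my own version, not a JDK method). It reads raw bytes up to CRLF, so the same BufferedInputStream can then hand over the binary part of the multipart body untouched:

import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Read one text line from the buffered stream without a Reader, so no
// bytes beyond the line terminator are consumed.
static String readLine(BufferedInputStream in) throws IOException {
    ByteArrayOutputStream line = new ByteArrayOutputStream();
    int b;
    while ((b = in.read()) != -1) {
        if (b == '\r') {
            in.mark(1);
            if (in.read() != '\n') in.reset();   // lone CR: put the byte back
            break;
        }
        if (b == '\n') break;                     // tolerate bare LF
        line.write(b);
    }
    if (b == -1 && line.size() == 0) return null; // end of stream
    return line.toString("ISO-8859-1");           // HTTP headers are Latin-1
}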
I'm trying to record from the microphone to a wav file as per this example. At the same time, I need to be able to test for input level/volume and send an alert if it's too low. I've tried what's described in this link and it seems to work OK.
The issue comes when trying to record and read bytes at the same time using one TargetDataLine (bytes read for monitoring are being skipped for recording, and vice versa).
Another thing is that these are long processes (hours probably) so memory usage should be considered.
How should I proceed here? Any way to clone TargetDataLine? Can I buffer a number of bytes while writing them with AudioSystem.write()? Is there any other way to write to a .wav file without filling the system memory?
Thanks!
If you are using a TargetDataLine for capturing audio similar to the example given in the Java Tutorials, then you have access to a byte array called "data". You can loop through this array to test the volume level before outputting it.
To do the volume testing, you will have to convert the bytes to some sort of sensible PCM data. For example, if the format is 16-bit stereo little-endian, you might take two bytes and assemble them into either a signed short or a signed, normalized float, and then test that value.
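For example, a small sketch of that test, assuming 16-bit little-endian samples; the RMS threshold of 0.01 is an arbitrary placeholder:

// Returns true if the RMS level of this buffer is below the threshold.
static boolean isTooQuiet(byte[] data, int numBytesRead) {
    int sampleCount = numBytesRead / 2;
    if (sampleCount == 0) return true;
    double sumOfSquares = 0;
    for (int i = 0; i < sampleCount; i++) {
        // little-endian: low byte first, high byte second (sign-extended)
        short sample = (short) ((data[2 * i] & 0xFF) | (data[2 * i + 1] << 8));
        double normalized = sample / 32768.0;   // map to [-1.0, 1.0)
        sumOfSquares += normalized * normalized;
    }
    double rms = Math.sqrt(sumOfSquares / sampleCount);
    return rms < 0.01;   // hypothetical threshold
}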
I apologize for not looking more closely at your examples before posting my "solution".
I'm going to suggest that you extend InputStream, making a customized version that also performs the volume test. Override the 'read' method so that it obtains the byte that it returns from the code you have that tests the volume. You'll have to modify the volume-testing code to work on a per-byte basis and to pass through the required byte.
You should then be able to use this extended InputStream as an argument when you create the AudioInputStream for the output-to-wav stage.
I've used this approach to save audio successfully via two data sources: once from an array that is populated beforehand, once from a streaming audio mix passing through a "mixer" I wrote to combine audio data sources. The latter would be more like what you need to do. I haven't done it from a microphone source, though. But the same approach should work, as far as I can tell.
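To make that concrete, here's a sketch of such a wrapper under my own naming, assuming 16-bit little-endian data. It tracks a peak level as the bytes pass through on their way to AudioSystem.write(), which streams to disk as it reads, so hours of audio never accumulate in memory:

import java.io.File;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.TargetDataLine;

class MonitoringInputStream extends FilterInputStream {
    private volatile double peak;   // most recent peak level, 0.0..1.0

    MonitoringInputStream(InputStream in) { super(in); }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int n = super.read(b, off, len);
        for (int i = 0; i + 1 < n; i += 2) {   // assumes 16-bit little-endian
            short s = (short) ((b[off + i] & 0xFF) | (b[off + i + 1] << 8));
            peak = Math.max(peak, Math.abs(s / 32768.0));
        }
        return n;
    }

    // Poll from another thread to check the level, then reset.
    double pollPeak() { double p = peak; peak = 0; return p; }
}

// Elsewhere, e.g. in your recorder class:
static void recordWithMonitoring(TargetDataLine line, File outFile) throws IOException {
    AudioFormat fmt = line.getFormat();
    InputStream monitored = new MonitoringInputStream(new AudioInputStream(line));
    AudioInputStream ais = new AudioInputStream(monitored, fmt, AudioSystem.NOT_SPECIFIED);
    AudioSystem.write(ais, AudioFileFormat.Type.WAVE, outFile);
}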
I'm under the impression that a ByteArrayOutputStream is not memory efficient, since all its contents are stored in memory.
Similarly, calling toByteArray on a large stream seems like it "scales poorly".
Why, then, does the example in Tom White's book Hadoop: The Definitive Guide use them both:
ByteArrayOutputStream out = new ByteArrayOutputStream();
Decoder decoder = DecoderFactory.defaultFactory().createBinaryDecoder(out.toByteArray(), null);
Isn't "Big Data" the norm for Avro? What am I missing?
Edit 1: What I'm trying to do - say I'm streaming Avros over a WebSocket. What would the example look like if I wanted to deserialize multiple records, not just one that was put in its own ByteArrayOutputStream?
Is there a better way to supply BinaryDecoder with a byte[]? Or perhaps a different type of stream? Or I should be sending 1 record per stream instead of loading streams with multiple records?
ByteArrayOutputStream makes sense when dealing with small objects like small to medium images, or fixed-size request/response payloads. It is in memory and doesn't touch the disk, so this can be great for performance. It doesn't make any sense to use it for a terabyte of data. Possibly this is a case of trying to keep an example in a book small and self-contained so as not to detract from the main point.
EDIT: Now that I see where you're going, I'd look to set up a pipeline. Pull a message off the stream (so I'm assuming you can get an InputStream from your HTTP object) and either process it with a memory-less method, or throw it at a queue and have a thread pool process the queue with a memory-less method. The requirements for this are 1) being able to detect the boundary between Avro messages as you pull them off the stream, and 2) having a method for decoding each message.
The way to decode appears to be to read the bytes for each message into a byte array and hand that to your BinaryDecoder (after you find the message boundary).
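As a sketch of that loop, assuming a 4-byte length prefix as the framing convention (Avro itself doesn't mark message boundaries, so some convention like this is up to you; process() is a hypothetical handler):

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;

// Read length-prefixed Avro messages one at a time; the decoder and
// record are reused across messages to limit garbage.
static void decodeLoop(InputStream in, Schema schema) throws IOException {
    DataInputStream dataIn = new DataInputStream(in);
    GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
    BinaryDecoder decoder = null;
    GenericRecord record = null;
    while (true) {
        int length;
        try {
            length = dataIn.readInt();         // our framing: 4-byte length prefix
        } catch (EOFException e) {
            break;                             // clean end of stream
        }
        byte[] message = new byte[length];
        dataIn.readFully(message);
        decoder = DecoderFactory.get().binaryDecoder(message, decoder);
        record = reader.read(record, decoder);
        process(record);                       // hypothetical handler
    }
}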
Hope every one of you is doing great. I really need your help. My scenario is given below.
1- I am getting continuous data (a byte[] array) from my camera.
2- I am sending those byte[] through UDP, but I have to halve the array because I can't send an array that big. (P.S. I can't use JMF, as it's not supported on my device (server side), so I have to send the byte[] manually through UDP.)
3- I am receiving those byte[] chunks at the client side.
Now I have the following requirement:
- I want a player at the client side which plays these byte[] chunks, but continuously. (At the client side I can use JMF.)
I don't know how I should combine all these byte[] chunks at the client side so that my video plays continuously.
Please help as you guys always do.
Best regards
ZB
As an option, you may consider vlcj for video streaming.
There are also examples of how to stream media from a camera with the VLC player, which may also be of some interest.
If you are transmitting over UDP I assume you are aware of the standard caveats regarding ordering and dropped packets.
I would send the data in the following fashion.
Define a datagram format which has a header and payload with the header being something quite simple like
<packetnumber><timestamp><payloadlength>
<payload>
So you'd create your chunk, which is an array of bytes, and calculate the payload length, current packet number and timestamp before appending the chunk. Then transmit the whole array; when it's received you can strip off the packet number and timestamp and use the payload length to retrieve the data.
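A minimal sketch of that framing, with field widths of my own choosing (4-byte packet number, 8-byte millisecond timestamp, 4-byte payload length); keeping each payload around 1400 bytes is a common conservative choice to stay under the path MTU:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Build the header + payload datagram and send it.
static void sendChunk(DatagramSocket socket, InetAddress addr, int port,
                      int packetNumber, byte[] payload) throws Exception {
    ByteBuffer buf = ByteBuffer.allocate(16 + payload.length);
    buf.putInt(packetNumber);
    buf.putLong(System.currentTimeMillis());   // timestamp
    buf.putInt(payload.length);                // payload length
    buf.put(payload);
    socket.send(new DatagramPacket(buf.array(), buf.position(), addr, port));
}

// On the receiving side, reverse the process.
static void receiveChunk(DatagramSocket socket) throws Exception {
    byte[] data = new byte[65535];
    DatagramPacket packet = new DatagramPacket(data, data.length);
    socket.receive(packet);
    ByteBuffer buf = ByteBuffer.wrap(packet.getData(), 0, packet.getLength());
    int packetNumber = buf.getInt();
    long timestamp = buf.getLong();
    byte[] payload = new byte[buf.getInt()];
    buf.get(payload);
    // insert payload into the ordered buffer, keyed by packetNumber
}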
Then load the payload into a buffer. I'd be tempted to create an object which has the packet number as a key and an array of bytes, then keep a doubly linked list of these objects as the buffer. You use the packet number to see where to insert into the list, and to play back you just keep taking the object with the lowest packet number.
You'll need to define some control data for packet number resetting etc. and flow control.
I may have made this more complex by ignoring common libraries, but this is the logic I'd follow.
This is the situation:
I'm working on a project where I need to be able to send one or more images to/from the server once in a while, as well as a lot of other types of data represented as text. The way it is currently done is by sending a message that says "incoming image of size x to be used as y" (it's not "formulated" that way, of course), and then calling a method that reads the next x bytes through a DataInputStream. At first I had some problems with latency screwing things up, but I made the server spawn a new thread to send the "incoming image" message and then wait for a flag that is set when the client responds with an "I'm ready for the image" message. It works, in a way, but if anything else, for instance a chat message, is sent while the image is being transferred, that message, meant for a BufferedReader, will be read as raw bytes and used as part of the image. So I would have to block all outgoing data (and add it to a queue) while an image is being sent. But this seems very wrong and annoying, as users of the application would not be able to chat while receiving/uploading a big image.
This is what I need:
So, I either need to set up an independent channel to use for raw data, which, as far as I understand from some tinkering, means setting up a new socket on a new port, which seems unnecessary. The other way I can see to solve this would be to somehow tag each packet as "this is text/raw data", but I have no idea how to do this with Java. Can you add information to the packet header when you write something to the stream (so that every packet carries that info) and then read it on the other end and act accordingly?
As you can see, I do not have much experience with networking, nor have I used Java for a long time. This is also my first post here, so be kind. If anything was unclear, please ask and I'll clarify. All ideas are welcome! (Is there possibly a standard way to do this?)
Thanks a lot!
There is nothing in the TCP protocol itself that can help you.
You either open a new socket connection (it can be to the same server port), or you split your images into smaller chunks and wrap these chunks in envelopes saying what type of message each one is: image or chat. Then you reconstruct the image on the receiving end from these chunks. But this wastes some bandwidth and adds complexities of its own (e.g. how big do you make a chunk of that image?).
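A minimal sketch of such an envelope (the type codes, framing, and handler names are my own convention, not a standard). Every message carries a 1-byte type and a 4-byte length, so chat and image chunks can be interleaved on one stream and demultiplexed on the other side:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

static final byte TYPE_CHAT = 0;
static final byte TYPE_IMAGE_CHUNK = 1;

// synchronized so two sender threads can't interleave mid-envelope
static synchronized void sendMessage(DataOutputStream out, byte type, byte[] body)
        throws IOException {
    out.writeByte(type);        // what kind of payload follows
    out.writeInt(body.length);  // how many bytes it is
    out.write(body);
    out.flush();
}

static void readLoop(DataInputStream in) throws IOException {
    while (true) {
        byte type = in.readByte();
        byte[] body = new byte[in.readInt()];
        in.readFully(body);
        if (type == TYPE_CHAT) {
            handleChat(new String(body, StandardCharsets.UTF_8)); // hypothetical handler
        } else if (type == TYPE_IMAGE_CHUNK) {
            appendToImage(body);                                  // hypothetical handler
        }
    }
}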
I would go with the separate binary data connection.
Java has standard support for the HTTP protocol -- use HTTP to do your picture transfers, since you can set the type of data being transmitted in the header. Basically, you would have your client/server architecture establish a separate request for each new data transfer (be it text or image), enabling you to do the processing in a simple loop.
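For example, a sketch of the client side with HttpURLConnection (the URL is a placeholder). The Content-Type response header tells the client what kind of data it received, which is exactly the metadata missing from a raw socket stream:

import java.awt.image.BufferedImage;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.imageio.ImageIO;

static BufferedImage fetchImage() throws Exception {
    URL url = new URL("http://yourserver.example/images/42");   // hypothetical endpoint
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    String contentType = conn.getContentType();  // e.g. "image/jpeg" vs "text/plain"
    try (InputStream in = conn.getInputStream()) {
        // dispatch on the declared type instead of guessing at raw bytes
        return "image/jpeg".equals(contentType) ? ImageIO.read(in) : null;
    }
}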
This might be of some help to you : How to use java.net.URLConnection to fire and handle HTTP requests?