I have a ServerSocket and a Socket set up so that the ServerSocket sends a stream of images using ImageIO.write(...) and the Socket tries to read them and update a JFrame with them. I wondered whether ImageIO could detect the end of an image. (I have absolutely no knowledge of the JPEG format, so I tested it instead.)
Apparently not.
On the server side, I sent images continuously by calling ImageIO.write(...) in a loop with some sleeping in between. On the client side, ImageIO read the first image with no problem, but on the next one it returned null. This is confusing. I was expecting it either to block on reading the first image (because it thinks the next image is still part of the same image) or to succeed at reading all of them. What is going on? It looks like ImageIO detects the end of the first image, but not the second one. (The images, by the way, are roughly similar to each other.) Is there an easy way to stream images like this, or do I have to make my own mechanism that reads the bytes into a buffer until it reaches a specified byte or sequence of bytes, at which point it reads the image out of the buffer?
This is the useful part of my server code:
while (true) {
    Socket sock = s.accept();
    System.out.println("Connection");
    OutputStream out = sock.getOutputStream();
    while (!sock.isClosed()) { // was socket.isClosed(): wrong variable name
        BufferedImage img = // get image
        ImageIO.write(img, "jpg", out);
        Thread.sleep(100);
    }
    System.out.println("Closed");
}
And my client code:
Socket s = new Socket(InetAddress.getByName("localhost"), 1998);
InputStream in = s.getInputStream();
while (!s.isClosed()) {
    BufferedImage img = ImageIO.read(in);
    if (img == null) {
        // this is what happens on the SECOND image
    } else {
        // do something useful with the image
    }
}
ImageIO.read(InputStream) creates an ImageInputStream and calls read(ImageInputStream) internally. That latter method is documented to close the stream when it's done reading the image.
So, in theory, you can just get the ImageReader, create an ImageInputStream yourself, and have the ImageReader read from the ImageInputStream repeatedly.
Except, it appears an ImageInputStream is designed to work with one and only one image (which may or may not contain multiple frames). If you call ImageReader.read(0) more than once, it will rewind to the beginning of the (cached) stream data each time, giving you the same image over and over. ImageReader.read(1) will look for a second frame in a multi-frame image, which of course makes no sense with a JPEG.
So, maybe we can create an ImageInputStream, have the ImageReader read from it, and then create a new ImageInputStream to handle subsequent image data in the stream, right? Except, it appears ImageInputStream does all sorts of caching, read-ahead and pushback, which makes it quite difficult to know the read position of the wrapped InputStream. The next ImageInputStream will start reading data from somewhere, but it's not at the end of the first image's data like we would expect.
The only way to be certain of your underlying stream's position is with mark and reset. Since images can be large, you'll probably need a BufferedInputStream to allow a large readLimit.
This worked for me:
private static final int MAX_IMAGE_SIZE = 50 * 1024 * 1024;

static void readImages(InputStream stream)
        throws IOException {
    stream = new BufferedInputStream(stream);
    while (true) {
        stream.mark(MAX_IMAGE_SIZE);

        ImageInputStream imgStream =
            ImageIO.createImageInputStream(stream);
        Iterator<ImageReader> i =
            ImageIO.getImageReaders(imgStream);
        if (!i.hasNext()) {
            logger.log(Level.FINE, "No ImageReaders found, exiting.");
            break;
        }

        ImageReader reader = i.next();
        reader.setInput(imgStream);
        BufferedImage image = reader.read(0);
        if (image == null) {
            logger.log(Level.FINE, "No more images to read, exiting.");
            break;
        }
        logger.log(Level.INFO,
            "Read {0,number}\u00d7{1,number} image",
            new Object[] { image.getWidth(), image.getHeight() });

        long bytesRead = imgStream.getStreamPosition();
        stream.reset();
        stream.skip(bytesRead);
    }
}
While perhaps not the optimal way to do this, the following code will get you past the issue you're having. As a previous answer noted, ImageIO does not leave the stream positioned at the end of the image; this code will find its way to the next image.
int imageCount = in.read();
for (int i = 0; i < imageCount; i++) {
    BufferedImage img = ImageIO.read(in);
    while (img == null) {
        img = ImageIO.read(in);
    }
    // do whatever with img
}
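For the count-prefix approach above to work, the server has to write the image count before the images. A minimal sketch of that counterpart, assuming the number of images is known up front (the class name ImageCountSender is made up for illustration):

```java
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.OutputStream;
import javax.imageio.ImageIO;

public class ImageCountSender {
    // Writes a one-byte image count, then each JPEG back-to-back,
    // matching the client's "int imageCount = in.read();" protocol.
    // A single byte only covers up to 255 images; use
    // DataOutputStream.writeInt for larger counts.
    static void sendImages(OutputStream out, BufferedImage[] images)
            throws IOException {
        out.write(images.length);
        for (BufferedImage img : images) {
            ImageIO.write(img, "jpg", out);
        }
        out.flush();
    }
}
```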
I hit the same problem and found this post. The comment by @VGR inspired me to dig into the problem, and eventually I realized that ImageIO cannot deal with a set of images in the same stream. So I've created a solution (in Scala, sorry) and wrote a blog post with some details and internals:
http://blog.animatron.com/post/80779366767/a-fix-for-imageio-making-animated-gifs-from-streaming
Perhaps it will help somebody as well.
Goal: Decrypt data from one source and write the decrypted data to a file.
try (FileInputStream fis = new FileInputStream(targetPath.toFile());
ReadableByteChannel channel = newDecryptedByteChannel(path, associatedData))
{
FileChannel fc = fis.getChannel();
long position = 0;
while (position < ???)
{
position += fc.transferFrom(channel, position, CHUNK_SIZE);
}
}
The implementation of newDecryptedByteChannel(Path,byte[]) should not be of interest, it just returns a ReadableByteChannel.
Problem: What is the condition to end the while loop? When is the "end of the byte channel" reached? Is transferFrom the right choice here?
This question might be related (the answer there is to just set the count to Long.MAX_VALUE). Unfortunately this doesn't help me, because the docs say that up to count bytes may be transferred, depending upon the natures and states of the channels.
Another thought was to just check whether the amount of bytes actually transferred is 0 (returned from transferFrom), but this condition may be true if the source channel is non-blocking and has fewer than count bytes immediately available in its input buffer.
It is one of the bizarre features of FileChannel.transferFrom() that it never tells you about end of stream. You have to know the input length independently.
I would just use streams for this: specifically, a CipherInputStream around a BufferedInputStream around a FileInputStream, and a FileOutputStream.
But the code you posted doesn't make any sense anyway. It can't work. You are transferring into the input file, and via a channel that was derived from a FileInputStream, so it is read-only, so transferFrom() will throw an exception.
As commented by @user207421, since you are reading from a ReadableByteChannel, the target channel needs to be derived from a FileOutputStream rather than a FileInputStream. And the condition for ending the loop in your code should be the size of the file underlying the ReadableByteChannel, which is not possible to get from it unless you can obtain a FileChannel and find the size through its size() method.
The way I found to do the transfer is through a ByteBuffer, as below:
ByteBuffer buf = ByteBuffer.allocate(1024 * 8);
while (readableByteChannel.read(buf) != -1) {
    buf.flip();
    fc.write(buf); // fc is a FileChannel derived from a FileOutputStream
    buf.compact();
}
buf.flip();
while (buf.hasRemaining()) { // was hasRemainig(): typo
    fc.write(buf);
}
I am writing Java code to download a large number of zip files from a site over HTTP; each file is around 1 MB (1024 KB) in size.
I know there are a lot of ways of doing this. I am just wondering which is the fastest, and I would like to show the progress of each download, like a percentage number or something.
I am just giving my version of the code; any ideas on how to improve it?
Thanks all.
public static void downloadFile(String downloadUrl, String fileName) throws Exception {
    URL url = new URL(downloadUrl);
    URLConnection connection = url.openConnection();
    int filesize = connection.getContentLength();
    float totalDataRead = 0;
    java.io.BufferedInputStream in = new java.io.BufferedInputStream(connection.getInputStream());
    java.io.FileOutputStream fos = new java.io.FileOutputStream(fileName);
    java.io.BufferedOutputStream bout = new BufferedOutputStream(fos, 1024);
    byte[] data = new byte[1024];
    int i = 0;
    while ((i = in.read(data, 0, 1024)) >= 0) {
        totalDataRead = totalDataRead + i;
        bout.write(data, 0, i);
        float percent = (totalDataRead * 100) / filesize;
        System.out.println((int) percent);
    }
    bout.close();
    in.close();
}
You are optimizing prematurely. The network bandwidth bottleneck is likely going to far outweigh any processing you are doing.
You don't need to wrap the InputStream in a BufferedInputStream. You may want to favor larger read buffer sizes, but that may have minimal effect depending on the underlying implementation of the InputStream returned by the connection, kernel level buffering, etc.
For a progress bar, take what you've read so far and divide it by connection.getContentLength(), but note that getContentLength() may return -1 if the length is unknown (it simply gives you the value of the Content-length header). As you're reading the data, pass the progress info along to whatever you choose to do to display it to the user.
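To make that concrete, here is a sketch of a copy loop that reports progress and tolerates an unknown Content-Length (getContentLength() returning -1). The class and method names are made up for illustration, and printing to System.out stands in for whatever UI update you choose:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ProgressCopy {
    // Copies in -> out, reporting progress as it goes. When contentLength
    // is unknown (-1), falls back to reporting raw byte counts instead of
    // a percentage.
    static long copyWithProgress(InputStream in, OutputStream out,
                                 long contentLength) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) >= 0) {
            out.write(buf, 0, n);
            total += n;
            if (contentLength > 0) {
                System.out.println((total * 100 / contentLength) + "%");
            } else {
                System.out.println(total + " bytes"); // length unknown
            }
        }
        return total;
    }
}
```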
I don't know, mine took 8 hours. To reduce it from 24 hours I cancelled all other downloads, didn't use the internet, and killed all other background tasks.
As can be seen below, the first image is an original JPEG image. The second one was loaded into a BufferedImage and then saved using http://www.lac.inpe.br/JIPCookbook/6040-howto-compressimages.jsp with 1.0 quality. Still the image became smaller in size, with a really small distortion. Is it possible to save the image at exactly its original quality? Please note that re-saving the image unchanged was just a test; after adding text I save it with the highest quality, which loses information too.
Do not redraw the image and save it. Just copy the raw bytes instead!
I suspect your current code is something like this:
BufferedImage image = ImageIO.read(new File("my.jpg");
ImageIO.write(image, "jpg", new File("copy.jpg"));
Every time you repeat this the image will change a little (as you saw, you always lose some quality). If you only want to copy the JPEG file without changing anything, you can do something like this (from http://www.exampledepot.com/egs/java.io/CopyFile.html):
void copy(File src, File dst) throws IOException {
InputStream in = new FileInputStream(src);
OutputStream out = new FileOutputStream(dst);
// Transfer bytes from in to out
byte[] buf = new byte[1024];
int len;
while ((len = in.read(buf)) > 0) {
out.write(buf, 0, len);
}
in.close();
out.close();
}
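On Java 7 and later the same byte-for-byte copy can be done in a single call with java.nio.file.Files, which avoids the manual buffer loop entirely (the wrapper class name here is made up for illustration):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

public class LosslessCopy {
    // Byte-for-byte copy: no decode/re-encode, so no generation loss.
    static void copy(File src, File dst) throws IOException {
        Files.copy(src.toPath(), dst.toPath(),
                   StandardCopyOption.REPLACE_EXISTING);
    }
}
```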
JPEG, even with highest quality settings, is always lossy, even if the original image data came from a JPEG.
There are some operations like rotation/mirroring/crop that can be done lossless on a JPEG (using tools like jpegtran), but these are rare exceptions.
Anyway, it seems you have access to the original JPG image and you don't change it, so I don't understand why you compress it again.
If you really have to store such images lossless, best choice would be using the lossless mode of JPEG2000, this gives a smaller filesize than other alternatives like PNG for image data that has been compressed using JPG (although it is still much larger than the original JPG). For example, for the first of your example pictures:
hAw2d.jpg -> 268,678 bytes (Original)
hAw2d.jp2 -> 1,021,007 bytes (JPEG 2000, lossless)
hAw2d.png -> 1,213,392 bytes (PNG)
I'm accepting an image as input from the user. I want to only allow a JPEG image. The image is arriving as an InputStream (called myInputStream below). In the code below, the Iterator returned by ImageIO.getImageReaders() is always empty.
ImageInputStream imageInputStream = ImageIO.createImageInputStream(
myInputStream);
Iterator<ImageReader> iter = ImageIO.getImageReaders(imageInputStream);
if (!iter.hasNext()) {
// this always happens
}
ImageReader reader = (ImageReader) iter.next();
if (!reader.getFormatName().equals("jpeg")) {
// haven't got this far yet
}
I have also tried passing myInputStream directly to ImageIO.getImageReaders() with the same result.
An empty iterator usually means ImageIO hasn't found a suitable image reader for decoding your image. This may be because you're missing the right decoder on your classpath, or your image has an unsupported color model.
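If all you need is to verify that the upload is a JPEG, a simpler option is to check the stream's magic bytes directly: JPEG data always begins with the SOI marker 0xFF 0xD8. A minimal sketch, assuming the caller wraps the upload in a BufferedInputStream and keeps using that same wrapper afterwards (the class name JpegSniffer is made up for illustration):

```java
import java.io.BufferedInputStream;
import java.io.IOException;

public class JpegSniffer {
    // Peeks at the first two bytes, then resets the stream so the same
    // stream can still be handed to a decoder afterwards.
    static boolean looksLikeJpeg(BufferedInputStream in) throws IOException {
        in.mark(2);
        int b0 = in.read();
        int b1 = in.read();
        in.reset();
        return b0 == 0xFF && b1 == 0xD8; // JPEG SOI marker
    }
}
```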
We are streaming data between a server (written in .Net running on Windows) to a client (written in Java running on Ubuntu) in batches. The data is in XML format. Occasionally the Java client throws an unexpected EOF while trying decompress the stream. The message content always varies and is user driven. The response from the client is also compressed using GZip. This never fails and seems to be rock solid. The response from the client is controlled by the system.
Is there a chance that some arrangement of characters or some special characters are creating false EOF markers? Could it be white-space related? Is GZip suitable for compressing XML?
I am assuming that the code to read and write from the input/output streams works because we only occasionally gets this exception and when we inspect the user data at the time there seems to be special characters (which is why I asked the question) such as the '#' sign.
Any ideas?
UPDATE:
The actual code, as requested. I thought it wasn't this, because I had been to a couple of sites to get help on this issue and they all had more or less the same code. Some sites mentioned appended GZip streams. Something to do with GZip creating multiple segments?
public String receive() throws IOException {
    byte[] buffer = new byte[8192];
    ByteArrayOutputStream baos = new ByteArrayOutputStream(8192);
    do {
        int nrBytes = in.read(buffer);
        if (nrBytes > 0) {
            baos.write(buffer, 0, nrBytes);
        }
    } while (in.available() > 0);
    return compressor.decompress(baos.toByteArray());
}
public String decompress(byte[] data) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    ByteArrayInputStream in = new ByteArrayInputStream(data);
    try {
        GZIPInputStream inflater = new GZIPInputStream(in);
        byte[] byteBuffer = new byte[8192];
        int r;
        while ((r = inflater.read(byteBuffer)) > 0) {
            buffer.write(byteBuffer, 0, r);
        }
    } catch (IOException e) {
        log.error("Could not decompress stream", e);
        throw e;
    }
    return new String(buffer.toByteArray());
}
At first I thought there must be something wrong with the way I am reading in the stream, and I thought perhaps I was not looping properly. I then generated a ton of data to be streamed and checked that it was looping. Also, the fact that it happens so seldom and so far has not been reproducible led me to believe that it was the content rather than the scenario. But at this point I am totally baffled and for all I know it is the code.
Thanks again everyone.
Update 2:
As requested the .Net code:
Dim DataToCompress = Encoding.UTF8.GetBytes(Data)
Dim CompressedData = Compress(DataToCompress)
To get the raw data into bytes. And then it gets compressed
Private Function Compress(ByVal Data As Byte()) As Byte()
    Try
        Using MS = New MemoryStream()
            Using Compression = New GZipStream(MS, CompressionMode.Compress)
                Compression.Write(Data, 0, Data.Length)
                Compression.Flush()
                Compression.Close()
                Return MS.ToArray()
            End Using
        End Using
    Catch ex As Exception
        Log.Error("Error trying to compress data", ex)
        Throw
    End Try
End Function
Update 3: Also added more Java code. The in variable is the InputStream returned from socket.getInputStream().
It certainly shouldn't be due to the data involved - the streams deal with binary data, so that shouldn't make any odds at all.
However, without seeing your code, it's hard to say for sure. My first port of call would be to check anywhere that you're using InputStream.read() - check that you're using the return value correctly, rather than assuming a single call to read() will fill the buffer.
If you could provide some code, that would help a lot...
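A common fix for this whole class of bug is to frame each message with its length, so the reader knows exactly how many bytes belong to one compressed payload instead of relying on available() (which only reports what happens to be buffered right now). A sketch using DataInputStream/DataOutputStream; the class name Framing is made up for illustration:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class Framing {
    // Sender: a 4-byte big-endian length prefix, then the payload.
    static void sendFrame(OutputStream out, byte[] payload) throws IOException {
        DataOutputStream dos = new DataOutputStream(out);
        dos.writeInt(payload.length);
        dos.write(payload);
        dos.flush();
    }

    // Receiver: readFully loops internally until the whole frame has
    // arrived, so short reads and available() are no longer a concern.
    static byte[] receiveFrame(InputStream in) throws IOException {
        DataInputStream dis = new DataInputStream(in);
        byte[] payload = new byte[dis.readInt()];
        dis.readFully(payload);
        return payload;
    }
}
```

With framing in place, the receiver would decompress exactly one complete GZip payload per frame, which removes the truncation that causes the unexpected EOF.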
I would suspect that for some reason the data is altered underway, by treating it as text, not as binary, so it may either be \n conversions or a codepage alteration.
How is the gzipped stream transferred between the two systems?
It is not possible. EOF in TCP is delivered as an out-of-band FIN segment, not via the data.