I need to write a resampling function that takes an input image and generates an output image in Java.
The image type is TYPE_BYTE_GRAY.
As all pixels will be read and written, I need an efficient method to access the image buffer(s).
I suspect that methods like getRGB/setRGB are inappropriate, as they perform conversions. I am after functions that give me the most direct access to the stored buffer, with efficient address computation, no image copy and minimum overhead.
Can you help me? I have found examples of many kinds, for instance using a WritableRaster, but nothing sufficiently complete.
Update:
As suggested by @FiReTiTi, the trick is to get a WritableRaster from the image and get its associated buffer as a DataBufferByte object.
DataBufferByte SrcBuffer= (DataBufferByte)Src.getRaster().getDataBuffer();
Then you have the option to directly access the buffer using its getElem/setElem methods
SrcBuffer.setElem(i, SrcBuffer.getElem(i) + 1);
or to extract an array of bytes
byte[] SrcBytes = SrcBuffer.getData();
SrcBytes[i] = (byte) (SrcBytes[i] + 1);
Both methods work. I don't know yet if there's a difference in performance...
The easiest way (but not the fastest) is to use the Raster myimage.getRaster(), and then use the methods getSample(x,y,c) and setSample(x,y,c,v) to access and modify the pixel values.
The fastest way is to access the DataBuffer (direct access to the array backing the image). For a TYPE_BYTE_GRAY BufferedImage that would be byte[] buffer = ((DataBufferByte) myimage.getRaster().getDataBuffer()).getData(). Just be careful that the pixels are stored as signed bytes, not unsigned bytes, so every time you want to read a pixel value you have to mask: buffer[x] & 0xFF.
Here is a simple test:
BufferedImage image = new BufferedImage(256, 256, BufferedImage.TYPE_BYTE_GRAY) ;
byte[] buffer = ((DataBufferByte)image.getRaster().getDataBuffer()).getData() ;
System.out.println("buffer[0] = " + (buffer[0] & 0xFF)) ;
buffer[0] = 1 ;
System.out.println("buffer[0] = " + (buffer[0] & 0xFF)) ;
And here is the output:
buffer[0] = 0
buffer[0] = 1
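Putting this together, here is a minimal nearest-neighbour resampling sketch over the raw buffers (my own illustration, not from the answer above; it assumes the scanline stride equals the image width, which holds for images created with the constructor shown):

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

static BufferedImage resample(BufferedImage src, int dstW, int dstH) {
    BufferedImage dst = new BufferedImage(dstW, dstH, BufferedImage.TYPE_BYTE_GRAY);
    byte[] srcBuf = ((DataBufferByte) src.getRaster().getDataBuffer()).getData();
    byte[] dstBuf = ((DataBufferByte) dst.getRaster().getDataBuffer()).getData();
    int srcW = src.getWidth(), srcH = src.getHeight();
    for (int y = 0; y < dstH; y++) {
        int srcRow = (y * srcH / dstH) * srcW;   // nearest source row, as a buffer offset
        int dstRow = y * dstW;
        for (int x = 0; x < dstW; x++) {
            dstBuf[dstRow + x] = srcBuf[srcRow + x * srcW / dstW]; // nearest source column
        }
    }
    return dst;
}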
It is possible to get the underlying buffer with yourimage.getRaster().getDataBuffer() (note that getData() returns a copy of the raster, so changes made through it won't affect the image), but it will require some conversion since this is one long array. You can find the order of the pixels by setting some elements to an extreme value and rendering the picture to see which pixels are affected.
Related
I have a full red image I made using MS Paint (red = 255, blue = 0, green = 0)
I read that image into a File object file
Then I extracted the bytes using Files.readAllBytes(file.toPath()) into a byte array byteArray
Now my expectation is that :
a) byteArray[0], when converted to a bitstream, should be all 1s
b) byteArray[1], when converted to a bitstream, should be all 0s
c) byteArray[2], when converted to a bitstream, should be all 0s
because, as I understand, the pixels values are stored in the order RGB with 8 bits for each color.
When I run my code, I don't get the expected outcome. byteArray[0] is all 1s alright, but the other two aren't all 0s.
Where am I going wrong?
Edit
As requested, I'm including image size, saved format and code used to read it.
Size = 1920 x 1080 pixels
Format = JPG
Code:
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

File file = new File("image_path.jpg");
byte[] byteArray = new byte[0];
try {
    byteArray = Files.readAllBytes(file.toPath());
} catch (IOException e) {
    e.printStackTrace();
}
// Extract the bits of the first byte, least significant bit first
int[] bits = new int[8];
for (int j = 0; j < 8; j++) {
    bits[j] = (byteArray[0] & (1 << j)) == 0 ? 0 : 1;
    //System.out.println("bits[" + j + "]: " + bits[j]);
}
Update
Unfortunately I am unable to make use of other questions containing ImageIO library functions. I'm here partly trying to understand how the image itself is stored, and how I can write my own logic for retrieving and manipulating the image files.
JPEG is a complex image format.
It does not hold the raw image pixel data, but instead has a header, optional metadata and compressed image data.
The algorithm to decompress it to raw pixel values is quite complex, but there are libraries that will do the work for you.
Here is a short tutorial:
https://docs.oracle.com/javase/tutorial/2d/images/loadimage.html
Here is the documentation of the BufferedImage class which will hold the image data:
https://docs.oracle.com/javase/7/docs/api/java/awt/image/BufferedImage.html
You will need to use one of the getRGB functions to access the raw pixel data.
Make sure to check that your image is in 24 bit color format, if you want each color component to take 1 byte exactly!
JPEG supports other formats such as 32 and 16 bits!
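For example, here is a minimal sketch (my illustration; the file name is a placeholder) that decodes the JPEG with ImageIO and inspects the first pixel via getRGB:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

BufferedImage img = ImageIO.read(new File("image_path.jpg")); // decodes the JPEG for you
int rgb = img.getRGB(0, 0);        // packed as 0xAARRGGBB
int red   = (rgb >> 16) & 0xFF;    // expect ~255 for a red image
int green = (rgb >> 8)  & 0xFF;    // expect ~0 (JPEG is lossy, so not exactly 0)
int blue  =  rgb        & 0xFF;
System.out.println("r=" + red + " g=" + green + " b=" + blue);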
Alternatively, save your image as 24 bit uncompressed BMP.
The file will be much larger, but reading it is much simpler so you don't have to use a library.
Just skip the header, then read raw bytes.
An even simpler image format to work with would be PBM/PPM.
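For instance, here is a hand-rolled sketch (my illustration) that reads the header and first pixel of a binary PPM (P6) file; it assumes a plain header with no comment lines:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class PpmPeek {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream("image.ppm"))) {
            String magic = token(in);                 // "P6"
            int width  = Integer.parseInt(token(in));
            int height = Integer.parseInt(token(in));
            int maxval = Integer.parseInt(token(in)); // the single whitespace after maxval is consumed
            int r = in.read(), g = in.read(), b = in.read(); // first pixel, stored in R, G, B order
            System.out.printf("%s %dx%d maxval=%d first pixel: r=%d g=%d b=%d%n",
                    magic, width, height, maxval, r, g, b);
        }
    }

    // Reads one whitespace-delimited token from the header.
    private static String token(DataInputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int c = in.read();
        while (c != -1 && Character.isWhitespace(c)) c = in.read(); // skip leading whitespace
        for (; c != -1 && !Character.isWhitespace(c); c = in.read()) sb.append((char) c);
        return sb.toString();
    }
}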
I am trying to encrypt a file (txt, pdf, doc) using Google Tink streaming AEAD encryption; below is the Java code I am executing. All I get is a 1 KB encrypted output file and no errors: whether the input file is 2 MB or more than 10 MB, the output file is always 1 KB. I am unable to figure out what could be going wrong, can someone please help?
TinkConfig.register();
final int chunkSize = 256;
KeysetHandle keysetHandle = KeysetHandle.generateNew(
StreamingAeadKeyTemplates.AES128_CTR_HMAC_SHA256_4KB);
// 2. Get the primitive.
StreamingAead streamingAead = keysetHandle.getPrimitive(StreamingAead.class);
// 3. Use the primitive to encrypt some data and write the ciphertext to a file,
FileChannel ciphertextDestination =
new FileOutputStream("encyptedOutput.txt").getChannel();
String associatedData = "Tinks34";
WritableByteChannel encryptingChannel =
streamingAead.newEncryptingChannel(ciphertextDestination, associatedData.getBytes());
ByteBuffer buffer = ByteBuffer.allocate(chunkSize);
InputStream in = new FileInputStream("FileToEncrypt.txt");
while (in.available() > 0) {
in.read(buffer.array());
System.out.println(in);
encryptingChannel.write(buffer);
}
encryptingChannel.close();
in.close();
System.out.println("completed");
This is all about understanding ByteBuffer and how it operates. Let me explain.
in.read(buffer.array());
This writes data into the backing array, but since the array is decoupled from the state of the original buffer, the buffer's position is not advanced. This is not good, as the next call:
encryptingChannel.write(buffer);
will now think that the position is 0. The limit hasn't changed either and is therefore still set to the capacity: 256. That means the write operation writes 256 bytes and sets the position to the limit (256).
Now the read operation still operates on the underlying byte array, which is still 256 bytes in size, so all subsequent reads succeed. However, all subsequent writes will assume there are no bytes to be written, as the position remains at the limit: 256.
To use the ByteBuffer correctly, read from a FileChannel with FileChannel.read(ByteBuffer). Then you need to flip the buffer before writing the data just read. Finally, after writing, you need to clear the buffer (resetting the position, and the limit, though the limit only changes on the last read) to prepare it for the next read. So the order is commonly read, flip, write, clear for instances of Buffer.
Don't mix Channels and I/O streams; it will make your life unnecessarily complicated, and learning how to use ByteBuffer is hard enough all by itself.
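For illustration, here is a minimal corrected sketch of the question's loop (same Tink setup; file names taken from the question, error handling kept minimal) applying the read, flip, write, clear cycle:

import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.StreamingAead;
import com.google.crypto.tink.config.TinkConfig;
import com.google.crypto.tink.streamingaead.StreamingAeadKeyTemplates;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class StreamingEncryptSketch {
    public static void main(String[] args) throws Exception {
        TinkConfig.register();
        KeysetHandle keysetHandle = KeysetHandle.generateNew(
                StreamingAeadKeyTemplates.AES128_CTR_HMAC_SHA256_4KB);
        StreamingAead streamingAead = keysetHandle.getPrimitive(StreamingAead.class);
        byte[] associatedData = "Tinks34".getBytes();

        try (FileChannel in = new FileInputStream("FileToEncrypt.txt").getChannel();
             WritableByteChannel out = streamingAead.newEncryptingChannel(
                     new FileOutputStream("encryptedOutput.txt").getChannel(), associatedData)) {
            ByteBuffer buffer = ByteBuffer.allocate(256);
            while (in.read(buffer) != -1) {  // read: fills the buffer, advances its position
                buffer.flip();               // flip: limit = position, position = 0 (ready to write)
                while (buffer.hasRemaining()) {
                    out.write(buffer);       // write: consumes bytes between position and limit
                }
                buffer.clear();              // clear: position = 0, limit = capacity (ready to read)
            }
        }
        System.out.println("completed");
    }
}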
I'm using com.sun.media.imageioimpl.plugins.tiff.TIFFPackBitsCompressor to try and encode an array of tiff bytes I have using PackBits. I'm unfamiliar with this class and haven't been finding many examples on how to use it. But, when following the javadoc, I've been getting an NPE every time I try to encode my data. So far as I can see, none of my values are null. I've tried these tests with multiple values at this point, but below is my most recent iteration:
TIFFPackBitsCompressor pack = new TIFFPackBitsCompressor();
//bImageFromConvert is a 16-bit BufferedImage with all desired data.
short[] bufferHolder = ((DataBufferUShort) bImageFromConvert.getRaster().getDataBuffer()).getData();
//Since bImageFromConvert is 16-bits, the short array isn't the right length.
//The below conversion handles this issue
byte[] byteBuffer = convertShortToByte(bufferHolder);
//I'm not entirely sure what this int[] in the parameters should be.
//For now, it is a test int[] array containing all 1s
int[] testint = new int[byteBuffer.length];
Arrays.fill(testint, 1);
//0 offset. dimWidth = 1760, dimHeight = 2140. Not sure what that last param is supposed to be in layman's terms.
//npe thrown at this line.
int testOut = pack.encode(byteBuffer, 0, dimWidth, dimHeight, testint, 1);
Does anyone have any insight as to what's happening? Also, if available, does anyone know a better way to encode my TIFF files using PackBits in a java program?
Let me know if there's anything to make my question clearer.
Thank you!
As said in the comment, you are not supposed to use the TIFFPackBitsCompressor directly; it is used internally by the JAI ImageIO TIFF plugin (the TIFFImageWriter) when you specify "PackBits" as the compression type in the ImageWriteParam. You may also pass a compressor instance in the param, if you cast it to TIFFImageWriteParam first, but this is more useful for custom compressions not known to the plugin.
Also note that the compressor will only write PackBits compressed pixel data, it will not create a full TIFF file.
The normal way of writing a PackBits compressed TIFF file is:
BufferedImage image = ...; // Your input image
ImageWriter writer = ImageIO.getImageWritersByFormatName("TIFF").next(); // Assuming a TIFF plugin is installed
try (ImageOutputStream out = ImageIO.createImageOutputStream(...)) { // Your output file or stream
writer.setOutput(out);
ImageWriteParam param = writer.getDefaultWriteParam();
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionType("PackBits");
writer.write(null, new IIOImage(image, null, null), param);
}
writer.dispose();
The above code should work fine using both JAI ImageIO and the TwelveMonkeys ImageIO TIFF plugins.
PS: PackBits is a very simple compression algorithm based on run-length encoding of byte data. As 16 bit data may vary wildly between the high and low byte of a single sample, PackBits is generally not a good choice for compression of such data.
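To show just how simple the scheme is, here is a toy PackBits decoder sketch (my illustration, not the imageio-internal code): a header byte n in [0, 127] means "copy the next n + 1 bytes literally", n in [-127, -1] means "repeat the next byte 1 - n times", and -128 is a no-op.

import java.io.ByteArrayOutputStream;

static byte[] unpackBits(byte[] src) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    int i = 0;
    while (i < src.length) {
        int n = src[i++];
        if (n >= 0) {                        // literal run: copy n + 1 bytes verbatim
            for (int j = 0; j <= n; j++) out.write(src[i++]);
        } else if (n != -128) {              // replicate run: repeat the next byte 1 - n times
            int b = src[i++];
            for (int j = 0; j < 1 - n; j++) out.write(b);
        }                                    // n == -128: no-op, skip
    }
    return out.toByteArray();
}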
As stated in my comments, using completely random values I got the following results:
Compression      | File size
-----------------|------------------
None             |  7 533 680 bytes
PackBits         |  7 593 551 bytes
LZW w/predictor  | 10 318 091 bytes
ZLib w/predictor | 10 318 444 bytes
This is not very surprising, as completely random data isn't generally compressible (without data loss). For a linear gradient, which may be more similar to "photographic" image data I got completely different results:
Compression      | File size
-----------------|------------------
None             | 7 533 680 bytes
PackBits         | 7 588 779 bytes
LZW w/predictor  |   200 716 bytes
ZLib w/predictor |   144 136 bytes
As you see, here the LZW and Deflate/Zlib algorithms (with a predictor step) perform MUCH better. For "real" data there's likely more noise, so your results will probably fall somewhere between these extremes.
Previously I posted a question about converting a byte[] to a short[], and a new problem I've encountered is whether or not to convert the byte[] data to big-endian.
Here is what is going on:
I am using TargetDataLine to read data into a byte[10000].
The AudioFormat object has BigEndian set to true, arbitrarily.
This byte[] needs to be converted to short[] so that it can be encoded using Xuggler
I don't know whether the AudioFormat BigEndian should be set to true or false.
I have tried both the cases and I get an Exception in both the cases.
To convert byte[] to short[], I do this:
fromMic.read(tempBufferByte, 0, tempBufferByte.length);
for(int i=0;i<tempBufferShort.length;i++){
tempBufferShort[i] = (short) tempBufferByte[i];
}
where:
fromMic is TargetDataLine
tempBufferByte is byte[10000]
tempBufferShort is short[10000]
I get the Exception:
java.lang.RuntimeException: failed to write packet: com.xuggle.xuggler.IPacket@90098448[complete:true;dts:12;pts:12;size:72;key:true;flags:1;stream index:1;duration:1;position:-1;time base:9/125;]
Miscellaneous information that may be needed:
How I set the stream for adding audio in Xuggler:
writer.addAudioStream(0,1,fmt.getChannels(),(int)fmt.getSampleRate());
How I perform the encoding
writer.encodeAudio(1,tempBufferShort,timeStamp,TimeUnit.NANOSECONDS);
Java Doc on AudioFormat
...In addition to the encoding, the audio format includes other
properties that further specify the exact arrangement of the data.
These include the number of channels, sample rate, sample size, byte
order, frame rate, and frame size...
and
For 16-bit samples (or any other sample size larger than a byte), byte
order is important; the bytes in each sample are arranged in either
the "little-endian" or "big-endian" style.
Questions:
Do I need to keep the BigEndian as true in javax.sound.sampled.AudioFormat object?
What is causing the error? Is it the format?
I guess I get BigEndian data preformatted by the AudioFormat object.
If your data is indeed big-endian, you can copy it into a (big-endian) short array like this:
ByteBuffer buf = ByteBuffer.wrap(originalByteArray); // a ByteBuffer is big-endian by default
short[] shortArray = new short[originalByteArray.length / 2];
buf.asShortBuffer().get(shortArray); // copy out; array() would throw, as a view buffer has no short[] backing array
The resulting short array will have all the original bytes correctly mapped, given that your data is big-endian. So, an original array such as:
// bytes
[00], [ae], [00], [7f]
will be converted to:
// shorts
[00ae], [007f]
You need to combine two bytes into one short, so this line is wrong:
tempBufferShort[i] = (short) tempBufferByte[i];
You need something along the lines of
tempBufferShort[i] = (short) (((tempBufferByte[i*2] & 0xFF) * 256)
                            + (tempBufferByte[i*2+1] & 0xFF));
(Note the outer parentheses: without them the cast applies only to the first term and the code won't compile.)
This would be in line with a big-endian byte array.
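Equivalently, a small sketch (variable names mirror the question's) that lets ByteBuffer handle whichever byte order the AudioFormat declares:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

int bytesRead = fromMic.read(tempBufferByte, 0, tempBufferByte.length);
ByteOrder order = fmt.isBigEndian() ? ByteOrder.BIG_ENDIAN : ByteOrder.LITTLE_ENDIAN;
short[] samples = new short[bytesRead / 2];  // two bytes per 16-bit sample
ByteBuffer.wrap(tempBufferByte, 0, bytesRead).order(order).asShortBuffer().get(samples);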
What others here have said about the byte-to-short conversion is correct, but it cannot cause the problem you see; it would just cause the output audio to be mostly noise. You can call writeAudio with a buffer of all zeros (or anything, really), so, everything else being equal, the values in the buffer don't affect whether the call succeeds (they do affect what you hear in the output, of course :)
Does the exception happen at the beginning of the stream (first audio chunk)? Can you write an audio-only stream successfully?
Set the audio codec when you call addAudioStream. Try ICodec.ID.CODEC_ID_MP3 or ICodec.ID.CODEC_ID_AAC.
Check that fmt.getChannels() and fmt.getSampleRate() are correct. Not all possible values are supported by any particular codec. (2 ch, 44100 Hz should be supported by just about anything).
Are you writing your audio and video such that the timestamps are strictly non-decreasing?
Do you have enough audio samples for the duration your timestamps indicate? Does tempBufferShort.length == ((timeStamp - lastTimeStamp) / 1e+9) * sampleRate * channels? (This may be only approximately equal, but it should be very close; slight rounding errors are probably ok. See the sketch below.)
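As a rough sanity check for that last point, something like this (variable names are assumptions based on the question):

double seconds = (timeStamp - lastTimeStamp) / 1e9;  // timestamps are in nanoseconds
long expected = Math.round(seconds * fmt.getSampleRate() * fmt.getChannels());
if (Math.abs(expected - tempBufferShort.length) > fmt.getChannels()) {
    System.err.println("chunk mismatch: got " + tempBufferShort.length + ", expected ~" + expected);
}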
Is there a good way to combine ByteBuffer & FloatBuffer ?
For example I get byte[] data and I need to convert it to float[] data and vice versa:
byte[] to float[] (throws java.lang.UnsupportedOperationException):
byte[] bytes = new byte[N];
ByteBuffer.wrap(bytes).asFloatBuffer().array();
float[] to byte[] (works):
float[] floats = new float[N];
FloatBuffer floatBuffer = FloatBuffer.wrap(floats);
ByteBuffer byteBuffer = ByteBuffer.allocate(floatBuffer.capacity() * 4);
byteBuffer.asFloatBuffer().put(floats);
byte[] bytes = byteBuffer.array();
array() is an optional operation for ByteBuffer and FloatBuffer, to be supported only when the backing Buffer is actually implemented on top of an array with the appropriate type.
Instead, use get to read the contents of the buffer into an array, when you don't know how the buffer is actually implemented.
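For instance, a minimal sketch of the get-based copy (N as in the question):

import java.nio.ByteBuffer;
import java.nio.FloatBuffer;

byte[] bytes = new byte[N];  // N must be a multiple of 4
FloatBuffer fb = ByteBuffer.wrap(bytes).asFloatBuffer();
float[] floats = new float[fb.remaining()];
fb.get(floats);  // copies; a float view of a byte array exposes no float[] backing array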
To add to Louis's answer, arrays in Java are limited in that they must be an independent region of memory. It is not possible to have an array that is a view of another array, whether to point at some offset in another array or to reinterpret the bytes of another array as a different type.
Buffers (ByteBuffer, FloatBuffer, etc) were created to overcome this limitation. They are equivalent to arrays in that they compile into machine instructions that are as fast as array accesses, despite requiring the programmer to use what appear to be function calls.
For top performance and minimum memory usage, you should use ByteBuffer.wrap(bytes).asFloatBuffer() and then call get() and put().
To get a float array, you must allocate a new array and copy the data into it with ByteBuffer.wrap(bytes).asFloatBuffer().get(myFloatArray).
The array method of a ByteBuffer is not something that anyone should normally use. It will fail unless the Buffer is wrapping an array (instead of pointing to some memory-mapped region like a file or raw non-GC memory) and the array is of the same type as the buffer.