I am trying to produce images without gamma information so that IE8 can display them correctly. I used the following code, but the result is a distorted image that looks nothing like the original.
///PNG
PNGEncodeParam params= PNGEncodeParam.getDefaultEncodeParam(outImage);
params.unsetGamma();
params.setChromaticity(DEFAULT_CHROMA);
params.setSRGBIntent(PNGEncodeParam.INTENT_ABSOLUTE);
ImageEncoder encoder= ImageCodec.createImageEncoder("PNG", response.getOutputStream(), params);
encoder.encode(outImage);
response.getOutputStream().close();
Here is the original image and the distorted one resulting from the code above.
Thanks!
I have seen the same question asked in several places, but there seems to be no answer, so I am offering mine here. I have no idea whether Java ImageIO saves gamma or not. Given that gamma is system dependent, it is unlikely ImageIO could handle it well. One thing is for sure: ImageIO ignores gamma when reading PNGs.
PNG is a chunk-based image format. Gamma is one of the 14 ancillary chunks; it accounts for differences between the computer systems that create an image so that it looks more or less equally "bright" on different systems. Each chunk starts with a data length and a chunk identifier, followed by the chunk data and a 4-byte CRC checksum. The data length does not include the length field itself or the chunk identifier. The gAMA chunk is identified by hex 0x67414D41.
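For example, a gAMA chunk encoding a gamma of 1/2.2 (the value is stored as gamma × 100000, here 45455) is laid out on disk like this:
00 00 00 04    data length (the gAMA chunk carries 4 bytes of data)
67 41 4D 41    chunk type "gAMA"
00 00 B1 8F    gamma × 100000 = 45455, i.e. a gamma of 1/2.2
xx xx xx xx    CRC computed over the chunk type and data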
Here is the raw way to remove the gAMA chunk from a PNG image (we assume the input stream is in valid PNG format). First read 8 bytes, which are the PNG signature 0x89504e470d0a1a0aL. Then read another 25 bytes, which make up the image header (IHDR) chunk. Altogether we have read 33 bytes from the top of the file; save them to a temp file with a .png extension. Then comes a while loop: we read chunks one by one. If a chunk is not IEND and not gAMA, we copy it to the output temp file; if it is gAMA, we skip it. When we reach IEND, which should be the last chunk, we copy it to the temp file and we are done. Here is the whole test code to show how things are done (it is just for demo purposes, not optimized):
import java.io.*;
public class RemoveGamma
{
/** PNG signature constant */
public static final long SIGNATURE = 0x89504E470D0A1A0AL;
/** PNG Chunk type constants, 4 Critical chunks */
/** Image header */
private static final int IHDR = 0x49484452; // "IHDR"
/** Image data */
private static final int IDAT = 0x49444154; // "IDAT"
/** Image trailer */
private static final int IEND = 0x49454E44; // "IEND"
/** Palette */
private static final int PLTE = 0x504C5445; // "PLTE"
/** 14 Ancillary chunks */
/** Transparency */
private static final int tRNS = 0x74524E53; // "tRNS"
/** Image gamma */
private static final int gAMA = 0x67414D41; // "gAMA"
/** Primary chromaticities */
private static final int cHRM = 0x6348524D; // "cHRM"
/** Standard RGB color space */
private static final int sRGB = 0x73524742; // "sRGB"
/** Embedded ICC profile */
private static final int iCCP = 0x69434350; // "iCCP"
/** Textual data */
private static final int tEXt = 0x74455874; // "tEXt"
/** Compressed textual data */
private static final int zTXt = 0x7A545874; // "zTXt"
/** International textual data */
private static final int iTXt = 0x69545874; // "iTXt"
/** Background color */
private static final int bKGD = 0x624B4744; // "bKGD"
/** Physical pixel dimensions */
private static final int pHYs = 0x70485973; // "pHYs"
/** Significant bits */
private static final int sBIT = 0x73424954; // "sBIT"
/** Suggested palette */
private static final int sPLT = 0x73504C54; // "sPLT"
/** Palette histogram */
private static final int hIST = 0x68495354; // "hIST"
/** Image last-modification time */
private static final int tIME = 0x74494D45; // "tIME"
public void remove(InputStream is) throws Exception
{
//Local variables for reading chunks
int data_len = 0;
int chunk_type = 0;
long CRC = 0;
byte[] buf=null;
DataOutputStream ds = new DataOutputStream(new FileOutputStream("temp.png"));
long signature = readLong(is);
if (signature != SIGNATURE)
{
System.out.println("--- NOT A PNG IMAGE ---");
return;
}
ds.writeLong(SIGNATURE);
//*******************************
//Chuncks follow, start with IHDR
//*******************************
/** Chunk layout
Each chunk consists of four parts:
Length
A 4-byte unsigned integer giving the number of bytes in the chunk's data field.
The length counts only the data field, not itself, the chunk type code, or the CRC.
Zero is a valid length. Although encoders and decoders should treat the length as unsigned,
its value must not exceed 2^31-1 bytes.
Chunk Type
A 4-byte chunk type code. For convenience in description and in examining PNG files,
type codes are restricted to consist of uppercase and lowercase ASCII letters
(A-Z and a-z, or 65-90 and 97-122 decimal). However, encoders and decoders must treat
the codes as fixed binary values, not character strings. For example, it would not be
correct to represent the type code IDAT by the EBCDIC equivalents of those letters.
Additional naming conventions for chunk types are discussed in the next section.
Chunk Data
The data bytes appropriate to the chunk type, if any. This field can be of zero length.
CRC
A 4-byte CRC (Cyclic Redundancy Check) calculated on the preceding bytes in the chunk,
including the chunk type code and chunk data fields, but not including the length field.
The CRC is always present, even for chunks containing no data. See CRC algorithm.
*/
/** Read header */
/** We are expecting IHDR */
if ((readInt(is)!=13)||(readInt(is) != IHDR))
{
System.out.println("--- NOT A PNG IMAGE ---");
return;
}
ds.writeInt(13);//We expect length to be 13 bytes
ds.writeInt(IHDR);
buf = new byte[13+4];//13 plus 4 bytes CRC
is.read(buf,0,17);
ds.write(buf);
while (true)
{
data_len = readInt(is);
chunk_type = readInt(is);
//System.out.println("chunk type: 0x"+Integer.toHexString(chunk_type));
if (chunk_type == IEND)
{
System.out.println("IEND found");
ds.writeInt(data_len);
ds.writeInt(IEND);
int crc = readInt(is);
ds.writeInt(crc);
break;
}
switch (chunk_type)
{
case gAMA://or any non-significant chunk you want to remove
{
System.out.println("gamma found");
is.skip(data_len+4);
break;
}
default:
{
buf = new byte[data_len+4];
is.read(buf,0, data_len+4);
ds.writeInt(data_len);
ds.writeInt(chunk_type);
ds.write(buf);
break;
}
}
}
is.close();
ds.close();
}
private int readInt(InputStream is) throws Exception
{
byte[] buf = new byte[4];
is.read(buf,0,4);
return (((buf[0]&0xff)<<24)|((buf[1]&0xff)<<16)|
((buf[2]&0xff)<<8)|(buf[3]&0xff));
}
private long readLong(InputStream is) throws Exception
{
byte[] buf = new byte[8];
is.read(buf,0,8);
return (((buf[0]&0xffL)<<56)|((buf[1]&0xffL)<<48)|
((buf[2]&0xffL)<<40)|((buf[3]&0xffL)<<32)|((buf[4]&0xffL)<<24)|
((buf[5]&0xffL)<<16)|((buf[6]&0xffL)<<8)|(buf[7]&0xffL));
}
public static void main(String args[]) throws Exception
{
FileInputStream fs = new FileInputStream(args[0]);
RemoveGamma rg = new RemoveGamma();
rg.remove(fs);
}
}
Since the input is a Java InputStream, we could use some kind of encoder to encode the image as a PNG and write it to a ByteArrayOutputStream, which can later be fed to the above test class as a ByteArrayInputStream, and the gamma information (if any) will be removed. Here is the result:
The left side is the original image with gAMA, the right side is the same image with gAMA removed.
Image source: http://r6.ca/cs488/kosh.png
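For completeness, a minimal sketch of feeding an in-memory image through the class above (it assumes outImage is a RenderedImage as in the question, and uses javax.imageio instead of the old JAI codec):
// Encode the image to PNG in memory, then strip the gAMA chunk with RemoveGamma;
// the demo remove() method writes the cleaned PNG to temp.png
ByteArrayOutputStream encoded = new ByteArrayOutputStream();
ImageIO.write(outImage, "png", encoded);
new RemoveGamma().remove(new ByteArrayInputStream(encoded.toByteArray()));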
Edit: here is a revised version of the code to remove any ancillary chunks.
import java.io.*;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Set;
public class PNGChunkRemover
{
/** PNG signature constant */
private static final long SIGNATURE = 0x89504E470D0A1A0AL;
/** PNG Chunk type constants, 4 Critical chunks */
/** Image header */
private static final int IHDR = 0x49484452; // "IHDR"
/** Image data */
private static final int IDAT = 0x49444154; // "IDAT"
/** Image trailer */
private static final int IEND = 0x49454E44; // "IEND"
/** Palette */
private static final int PLTE = 0x504C5445; // "PLTE"
//Ancillary chunks keys
private static String[] KEYS = { "TRNS", "GAMA","CHRM","SRGB","ICCP","TEXT","ZTXT",
"ITXT","BKGD","PHYS","SBIT","SPLT","HIST","TIME"};
private static int[] VALUES = {0x74524E53,0x67414D41,0x6348524D,0x73524742,0x69434350,0x74455874,0x7A545874,
0x69545874,0x624B4744,0x70485973,0x73424954,0x73504C54,0x68495354,0x74494D45};
private static HashMap<String, Integer> TRUNK_TYPES = new HashMap<String, Integer>()
{{
for(int i=0;i<KEYS.length;i++)
put(KEYS[i],VALUES[i]);
}};
private static HashMap<Integer, String> REVERSE_TRUNK_TYPES = new HashMap<Integer,String>()
{{
for(int i=0;i<KEYS.length;i++)
put(VALUES[i],KEYS[i]);
}};
private static Set<Integer> REMOVABLE = new HashSet<Integer>();
private static void remove(InputStream is, File dir, String fileName) throws Exception
{
//Local variables for reading chunks
int data_len = 0;
int chunk_type = 0;
byte[] buf=null;
DataOutputStream ds = new DataOutputStream(new FileOutputStream(new File(dir,fileName)));
long signature = readLong(is);
if (signature != SIGNATURE)
{
System.out.println("--- NOT A PNG IMAGE ---");
return;
}
ds.writeLong(SIGNATURE);
/** Read header */
/** We are expecting IHDR */
if ((readInt(is)!=13)||(readInt(is) != IHDR))
{
System.out.println("--- NOT A PNG IMAGE ---");
return;
}
ds.writeInt(13);//We expect length to be 13 bytes
ds.writeInt(IHDR);
buf = new byte[13+4];//13 plus 4 bytes CRC
is.read(buf,0,17);
ds.write(buf);
while (true)
{
data_len = readInt(is);
chunk_type = readInt(is);
//System.out.println("chunk type: 0x"+Integer.toHexString(chunk_type));
if (chunk_type == IEND)
{
System.out.println("IEND found");
ds.writeInt(data_len);
ds.writeInt(IEND);
int crc = readInt(is);
ds.writeInt(crc);
break;
}
if(REMOVABLE.contains(chunk_type))
{
System.out.println(REVERSE_TRUNK_TYPES.get(chunk_type)+" chunk removed!");
is.skip(data_len+4);
}
else
{
buf = new byte[data_len+4];
is.read(buf,0, data_len+4);
ds.writeInt(data_len);
ds.writeInt(chunk_type);
ds.write(buf);
}
}
is.close();
ds.close();
}
private static int readInt(InputStream is) throws Exception
{
byte[] buf = new byte[4];
int bytes_read = is.read(buf,0,4);
if(bytes_read<0) return IEND;
return (((buf[0]&0xff)<<24)|((buf[1]&0xff)<<16)|
((buf[2]&0xff)<<8)|(buf[3]&0xff));
}
private static long readLong(InputStream is) throws Exception
{
byte[] buf = new byte[8];
int bytes_read = is.read(buf,0,8);
if(bytes_read<0) return IEND;
return (((buf[0]&0xffL)<<56)|((buf[1]&0xffL)<<48)|
((buf[2]&0xffL)<<40)|((buf[3]&0xffL)<<32)|((buf[4]&0xffL)<<24)|
((buf[5]&0xffL)<<16)|((buf[6]&0xffL)<<8)|(buf[7]&0xffL));
}
public static void main(String args[]) throws Exception
{
if(args.length>0)
{
File[] files = {new File(args[0])};
File dir = new File(".");
if(files[0].isDirectory())
{
dir = files[0];
files = files[0].listFiles(new FileFilter(){
public boolean accept(File file)
{
if(file.getName().toLowerCase().endsWith("png")){
return true;
}
return false;
}
}
);
}
if(args.length>1)
{
FileInputStream fs = null;
if(args[1].equalsIgnoreCase("all")){
REMOVABLE = REVERSE_TRUNK_TYPES.keySet();
}
else
{
String key = "";
for (int i=1;i<args.length;i++)
{
key = args[i].toUpperCase();
if(TRUNK_TYPES.containsKey(key))
REMOVABLE.add(TRUNK_TYPES.get(key));
}
}
for(int i= files.length-1;i>=0;i--)
{
String outFileName = files[i].getName();
outFileName = outFileName.substring(0,outFileName.lastIndexOf('.'))
+"_slim.png";
System.out.println("<<"+files[i].getName());
fs = new FileInputStream(files[i]);
remove(fs, dir, outFileName);
System.out.println(">>"+outFileName);
System.out.println("************************");
}
}
}
}
}
Usage: java PNGChunkRemover filename.png all will remove any of the predefined 14 ancillary chunks.
java PNGChunkRemover filename.png gama time ... will only remove the chunks specified after the png file.
Note: If a folder name is specified as the first argument to the PNGChunkRemover, all PNG files in the folder will be processed.
The above example has become part of a Java image library which can be found at https://github.com/dragon66/icafe
You can also do it with the (my) PNGJ library
http://code.google.com/p/pngj/
E.g.:
PngReader pngr = FileHelper.createPngReader(new File(origFilename));
PngWriter pngw = FileHelper.createPngWriter(new File(destFilename), pngr.imgInfo, false);
pngw.copyChunksFirst(pngr, ChunkCopyBehaviour.COPY_ALL); // all chunks are queued
PngChunkGAMA gama = (PngChunkGAMA) pngw.getChunkList().getQueuedById1(ChunkHelper.gAMA);
if (gama != null) {
System.out.println("removing gama chunk gamma=" + gama.getGamma());
pngw.getChunkList().removeChunk(gama);
}
for (int row = 0; row < pngr.imgInfo.rows; row++) {
ImageLine l1 = pngr.readRow(row);
pngw.writeRow(l1, row);
}
pngw.copyChunksLast(pngr, ChunkCopyBehaviour.COPY_ALL); // in case some new metadata has been read
pngw.end();
Included in the library samples.
The tool pngcrush can remove the gamma information and other unwanted chunks:
pngcrush -m 3 -rem gAMA -rem cHRM -rem iCCP -rem sRGB in.png out.png
It recompresses the PNG at the same time, trying different methods. The -m 3 option tries only method number 3, which seems to be quick and reasonably effective. Omit that if you want the smallest png.
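If you need to drive it from Java (for example as a post-processing step before serving the image), a sketch using ProcessBuilder could look like this; it assumes pngcrush is on the PATH and the file names are placeholders:
// Run pngcrush as an external process; an exit code of 0 means success
int exit = new ProcessBuilder("pngcrush", "-m", "3",
        "-rem", "gAMA", "-rem", "cHRM", "-rem", "iCCP", "-rem", "sRGB",
        "in.png", "out.png")
        .inheritIO()
        .start()
        .waitFor();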
Related
I am currently trying to make a .wav file that will play SOS in Morse code.
The way I went about this is: I have a byte array that contains one wave of a beep. I then repeated that until I had the desired length.
After that I inserted those bytes into a new array and put bytes containing 00 (in hexadecimal) to separate the beeps.
If I add 1 beep to a WAVE file, it creates the file correctly (i.e. I get a beep of the desired length).
Here is a picture of the waves zoomed in (I opened the file in Audacity):
And here is a picture of the entire wave part:
The problem now is that when I add a second beep, the second one becomes completely distorted:
So this is what the entire file looks like now:
If I add another beep, it will be the correct beep again; if I add yet another beep, it's going to be distorted again, etc.
So basically, every other wave is distorted.
Does anyone know why this happens?
Here is a link to a .txt file I generated containing the audio data of the wave file I created: byteTest19.txt
And here is a link to a .txt file that I generated using fileformat.info containing a hexadecimal representation of the bytes in the .wav file I generated with 5 beeps (two of them, the even beeps, being distorted): test3.txt
You can tell when a new beep starts because it is preceded by a lot of 00's.
As far as I can see, the bytes of the second beep do not differ from those of the first one, which is why I am asking this question.
If anyone knows why this happens, please help me. If you need more information, don't hesitate to ask. I hope I explained what I'm doing well; if not, that's my bad.
EDIT
Here is my code:
// First I calculate the byte array for a single beep
// This file is just a single wave of the audio (up and down)
// (see below for the fileToAudioByteArray method) (In my
// actual code I only take in half of the wave and then I
// invert it, but I didn't want to make this too complicated,
// I'll put the full code below
final byte[] wave = fileToAudioByteArray(new File("path to my wav file"));
// This is how long that audio fragment is in seconds
final double secondsPerWave = 0.0022195;
// This is the amount of seconds a beep takes up (e.g. the seconds picture)
double secondsPerBeep = 0.25;
final int amountWaveInBeep = (int) Math.ceil((secondsPerBeep/secondsPerWave));
// this is the byte array containing the audio data of
// 1 beep (see below for the repeatArray method)
final byte[] beep = repeatArray(wave, amountWaveInBeep);
// Now for the silence between the beeps
final byte silenceByte = 0x00;
// The amount of seconds a silence byte takes up
final double secondsPerSilenceByte = 0.00002;
// The amount of silence bytes I need to make one second
final int amountOfSilenceBytesForOneSecond = (int) (Math.ceil((1/secondsPerSilenceByte)));
// The space between 2 beeps will be 0.25 * secondsPerBeep
double amountOfBeepsEquivalent = 0.25;
// This is the amount of bytes of silence I need
// between my beeps
final int amntSilenceBytesPerSpaceBetween = (int) Math.ceil(secondsPerBeep * amountOfBeepsEquivalent * amountOfSilenceBytesForOneSecond);
final byte[] spaceBetweenBeeps = new byte[amntSilenceBytesPerSpaceBetween];
for (int i = 0; i < amntSilenceBytesPerSpaceBetween; i++) {
spaceBetweenBeeps[i] = silenceByte;
}
WaveFileBuilder wavBuilder = new WaveFileBuilder(WaveFileBuilder.AUDIOFORMAT_PCM, 1, 44100, 16);
// Adding all the beeps and silence to the WAVE file (test3.wav)
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(nextChar);
File outputFile = new File("path/test3.wav");
wavBuilder.saveFile(outputFile);
These are the 2 methods I used in the beginning:
/**
* Converts a wav file to a byte array containing its audio data
* @param file the wav file you want to convert
* @return the data part of a wav file in byte form
*/
public static byte[] fileToAudioByteArrray(File file) throws UnsupportedAudioFileException, IOException {
AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
AudioFormat audioFormat = audioInputStream.getFormat();
int bytesPerSample = audioFormat.getFrameSize();
if (bytesPerSample == AudioSystem.NOT_SPECIFIED) {
bytesPerSample = -1;
}
long numSamples = audioInputStream.getFrameLength();
int numBytes = (int) (numSamples * bytesPerSample);
byte[] audioBytes = new byte[numBytes];
int numBytesRead;
while((numBytesRead = audioInputStream.read(audioBytes)) != -1);
return audioBytes;
}
/**
* Repeats an array into a new array x times
* @param array the array you want to copy x times
* @param repeat the amount of times you want to copy the array into the new array
* @return an array containing the content of {@code array} {@code repeat} times.
*/
public static byte[] repeatArray(byte[] array, int repeat) {
byte[] result = new byte[array.length * repeat];
for (int i = 0; i < result.length; i++) {
result[i] = array[i % array.length];
}
return result;
}
Now for my WaveFileBuilder class:
/**
* <p> Constructs a WavFileBuilder which can be used to create wav files.</p>
*
* <p>The builder takes care of the subchunks based on the parameters that are given in the constructor.</p>
*
* <h3>Adding audio to the wav file</h3>
* There are 2 methods that can be used to add audio data to the WavFile.
* One is {@link #addBytes(byte[]) addBytes} which lets you directly inject bytes
* into the data section of the wav file.
* The other is {@link #addAudioFile(File) addAudioFile} which lets you add the audio
* data of another wav file to the wav file's audio data.
*
* @param audioFormat The audio format of the wav file {@link #AUDIOFORMAT_PCM PCM} = 1
* @param numChannels The number of channels the wav file will have {@link #NUM_CHANNELS_MONO MONO} = 1,
* {@link #NUM_CHANNELS_STEREO STEREO} = 2
* @param sampleRate The sample rate of the wav file in Hz (e.g. 22050, 44100, ...)
* @param bitsPerSample The amount of bits per sample. If 16 bits, the audio sample will contain 2 bytes per
* channel. (e.g. 8, 16, ...). This is important to take into account when using the
* {@link #addBytes(byte[]) addBytes} method to insert data into the wav file.
*/
public WaveFileBuilder(int audioFormat, int numChannels, int sampleRate, int bitsPerSample) {
this.audioFormat = audioFormat;
this.numChannels = numChannels;
this.sampleRate = sampleRate;
this.bitsPerSample = bitsPerSample;
// Subchunk 1 calculations
this.byteRate = this.sampleRate * this.numChannels * (this.bitsPerSample / 8);
this.blockAlign = this.numChannels * (this.bitsPerSample / 8);
}
/**
* Contains the audio data for the wav file that is being constructed
*/
byte[] audioBytes = null;
// For debug purposes
int counter = 0;
/**
* Adds audio data to the wav file from bytes
* <p>See the "see also" for the structure of the "Data" part of a wav file</p>
* @param audioBytes audio data
* @see Wave PCM Soundfile Format
*/
public void addBytes(byte[] audioBytes) throws IOException {
// This is all debug code that I used to make byteTest19.txt
// which I have linked in my question
String test1;
try {
test1 = (temp.bytesToHex(this.audioBytes, true));
} catch (NullPointerException e) {
test1 = "null";
}
File file = new File("/Users/jonaseveraert/Desktop/Morse Sound Test/debug/byteTest" + counter + ".txt");
file.createNewFile();
counter++;
BufferedWriter writer = new BufferedWriter(new FileWriter(file));
writer.write(test1);
writer.close();
// This is where the actual code starts //
if (this.audioBytes != null)
this.audioBytes = ArrayUtils.addAll(this.audioBytes, audioBytes);
else
this.audioBytes = audioBytes;
// End of code //
// This is for debug again
String test2 = (temp.bytesToHex(this.audioBytes, true));
File file2 = new File("/Users/jonaseveraert/Desktop/Morse Sound Test/debug/byteTest" + counter + ".txt");
file2.createNewFile();
counter++;
BufferedWriter writer2 = new BufferedWriter(new FileWriter(file2));
writer2.write(test2);
writer2.close();
}
/**
* Saves the file to the location of the {@code outputFile}.
* @param outputFile The file that will be outputted (not created yet), contains the path
* @return true if the file was created and written to successfully. Else false.
* @throws IOException If an I/O error occurred
*/
public boolean saveFile(File outputFile) throws IOException {
// subchunk2 calculations
//int numBytesInData = data.length()/2;
int numBytesInData = audioBytes.length;
int numSamples = numBytesInData / (2 * numChannels);
subchunk2Size = numSamples * numChannels * (bitsPerSample / 8);
// chunk calculation
chunkSize = 4 + (8 + subchunk1Size) + (8 + subchunk2Size);
// convert everything to hex string //
// Chunk descriptor
String f_chunkID = asciiStringToHexString(chunkID);
String f_chunkSize = intToLittleEndianHexString(chunkSize, 4);
String f_format = asciiStringToHexString(format);
// fmt subchunck
String f_subchunk1ID = asciiStringToHexString(subchunk1ID);
String f_subchunk1Size = intToLittleEndianHexString(subchunk1Size, 4);
String f_audioformat = intToLittleEndianHexString(audioFormat, 2);
String f_numChannels = intToLittleEndianHexString(numChannels, 2);
String f_sampleRate = intToLittleEndianHexString(sampleRate, 4);
String f_byteRate = intToLittleEndianHexString(byteRate, 4);
String f_blockAlign = intToLittleEndianHexString(blockAlign, 2);
String f_bitsPerSample = intToLittleEndianHexString(bitsPerSample, 2);
// data subchunk
String f_subchunk2ID = asciiStringToHexString(subchunk2ID);
String f_subchunk2Size = intToLittleEndianHexString(subchunk2Size, 4);
// data is stored in audioData
// Combine all hex data into one String (except for the
// audio data, which is passed in as a byte array)
final String AUDIO_BYTE_STREAM_STRING = f_chunkID + f_chunkSize + f_format
+ f_subchunk1ID + f_subchunk1Size + f_audioformat + f_numChannels + f_sampleRate + f_byteRate + f_blockAlign + f_bitsPerSample
+ f_subchunk2ID + f_subchunk2Size;
// Convert the hex data to a byte array
final byte[] BYTES = hexStringToByteArray(AUDIO_BYTE_STREAM_STRING);
// Create & write file
if (outputFile.createNewFile()) {
// Combine byte arrays
// This array now contains the full WAVE file
byte[] audioFileBytes = ArrayUtils.addAll(BYTES, audioBytes);
try (FileOutputStream fos = new FileOutputStream(outputFile)) {
fos.write(audioFileBytes); // Write the bytes into a file
}
catch (IOException e) {
logger.log(Level.SEVERE, "IOException occurred");
logger.log(Level.SEVERE, null, e);
return false;
}
logger.log(Level.INFO, "File created: " + outputFile.getName());
return true;
} else {
//System.out.println("File already exists.");
logger.log(Level.WARNING, "File already exists.");
}
return false;
}
// Aiding methods
/**
* Converts a string containing hexadecimal to bytes
* @param s e.g. 00014F
* @return an array of bytes e.g. {00, 01, 4F}
*/
private byte[] hexStringToByteArray(String s) {
int len = s.length();
byte[] bytes = new byte[len / 2];
for (int i = 0; i < len; i+= 2) {
bytes[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4) + Character.digit(s.charAt(i+1), 16));
}
return bytes;
}
/**
* Converts an int to a hexadecimal string in the little-endian format
* @param input an integer number
* @param numberOfBytes The number of bytes the integer is stored in
* @return The integer as a hexadecimal string in the little-endian byte ordering
*/
private String intToLittleEndianHexString(int input, int numberOfBytes) {
String hexBigEndian = Integer.toHexString(input);
StringBuilder hexLittleEndian = new StringBuilder();
int amountOfNumberProcessed = 0;
for (int i = 0; i < hexBigEndian.length()/2f; i++) {
int endIndex = hexBigEndian.length() - (i * 2);
try {
hexLittleEndian.append(hexBigEndian.substring(endIndex-2, endIndex));
} catch (StringIndexOutOfBoundsException e ) {
hexLittleEndian.append(0).append(hexBigEndian.charAt(0));
}
amountOfNumberProcessed++;
}
while (amountOfNumberProcessed != numberOfBytes) {
hexLittleEndian.append("00");
amountOfNumberProcessed++;
}
return hexLittleEndian.toString();
}
/**
* Converts a string containing ascii to its hexadecimal notation
* @param input The string that has to be converted
* @return The string as a hexadecimal notation in the big-endian byte ordering
*/
private String asciiStringToHexString(String input) {
byte[] bytes = input.getBytes(StandardCharsets.US_ASCII);
StringBuilder hex = new StringBuilder();
for (byte b : bytes) {
String hexChar = String.format("%02X", b);
hex.append(hexChar);
}
return hex.toString().trim();
}
And lastly: if you want the full code, replace
final byte[] wave = fileToAudioByteArray(new File("path to my wav file"); in the beginning of my code with:
File morse_half_wave_file = new File("/Users/jonaseveraert/Desktop/Morse Sound Test/morse_audio_fragment.wav");
final byte[] half_wave = temp.fileToAudioByteArrray(morse_half_wave_file);
final byte[] half_wave_inverse = temp.invertByteArray(half_wave);
// Then the wave byte array becomes:
final byte[] wave = ArrayUtils.addAll(half_wave, half_wave_inverse); // This ArrayUtils.addAll comes from the Apache Commons lang3 library
// And this is the invertByteArray method
/**
* Inverts bytes e.g. 000101 becomes 111010
*/
public static byte[] invertByteArray(byte[] bytes) {
if (bytes == null) {
return null;
// TODO: throw empty byte array exception
}
byte[] outputArray = new byte[bytes.length];
for(int i = 0; i < bytes.length; i++) {
outputArray[i] = (byte) ~bytes[i];
}
return outputArray;
}
P.S. Here is the morse_audio_fragment.wav: morse_audio_fragment.wav
Thanks in advance,
Jonas
The problem
Your .wav file is Signed 16 bit Little Endian, Rate 44100 Hz, Mono - which means that each sample in the file is 2 bytes long, and describes a signed amplitude. So you can copy-and-paste chunks of samples without any problems, as long as their lengths are divisible by 2 (your block size). Your silences are likely of odd length, so that the 1st sample after a silence is interpreted as
0x00 0x65 // last byte of silence, 1st byte of actual beep: weird
and all subsequent pairs of bytes are interpreted wrongly (taking the 2nd byte of each sample together with the 1st byte of the next sample) due to this initial misalignment, until you hit the next odd-length silence, when suddenly everything gets re-aligned correctly again; instead of the expected
0x65 0x05 // 1st and 2nd byte of beep: actual expected sample
How to fix it
Do not allow calls to addBytes that would add a number of bytes that is not an even multiple of the block size.
public class WaveFileBuilder {
byte[] audioBytes = null;
// ... other attributes, methods, constructor
public void addBytes(byte[] audioBytes) throws IOException {
// ... debug code above, handle empty
// THIS SHOULD CHECK audioBytes IS MULTIPLE OF blockSize
this.audioBytes = ArrayUtils.addAll(this.audioBytes, audioBytes);
// ... debug code below
}
public boolean saveFile(File outputFile) throws IOException {
// ... prepare headers
// concatenate header (BYTES) and contents
byte[] audioFileBytes = ArrayUtils.addAll(BYTES, audioBytes);
// ... write out bytes
try (FileOutputStream fos = new FileOutputStream(outputFile)) {
fos.write(audioFileBytes);
}
// ...
}
}
First, you could have avoided some confusion by using different names for the attribute and the parameter. Second, you are constantly growing an array over and over; this is wasteful, turning code that could run in O(n) into O(n^2), because you are calling it like this:
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(nextChar);
Instead, I propose the following:
public class WaveFileBuilder {
List<byte[]> chunks = new ArrayList<>();
// ... other attributes, methods, constructor
public void addBytes(byte[] audioBytes) throws IOException {
if ((audioBytes.length % blockAlign) != 0) {
throw new IllegalArgumentException("Trying to add a chunk that does not fit evenly; this would cause un-aligned blocks");
}
chunks.add(audioBytes);
}
public boolean saveFile(File outputFile) throws IOException {
// ... prepare headers
// ... write out bytes
try (FileOutputStream fos = new FileOutputStream(outputFile)) {
for (byte[] chunk : chunks) fos.write(chunk);
}
}
}
This version uses no concatenation at all, and should be much faster and easier to test. It also requires less memory, because it is not copying all those arrays around to concatenate them to each other.
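As a usage note, the silence arrays from the question should then be rounded up to a whole number of blocks before being added; here is a minimal sketch, reusing the variable names from the question (blockAlign is 2 for 16-bit mono):
// Round the silence length up to a multiple of the block size so the next beep
// starts on a sample boundary (a new byte[] is already zero-filled)
int blockAlign = 2; // numChannels * bitsPerSample / 8 for 16-bit mono
int silenceBytes = amntSilenceBytesPerSpaceBetween;
if (silenceBytes % blockAlign != 0) {
    silenceBytes += blockAlign - (silenceBytes % blockAlign);
}
byte[] spaceBetweenBeeps = new byte[silenceBytes];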
I want to find out if two audio files are the same or if one contains the other.
For this I use the fingerprint feature of musicg:
byte[] firstAudio = readAudioFileData("first.mp3");
byte[] secondAudio = readAudioFileData("second.mp3");
FingerprintSimilarityComputer fingerprint =
new FingerprintSimilarityComputer(firstAudio, secondAudio);
FingerprintSimilarity fingerprintSimilarity = fingerprint.getFingerprintsSimilarity();
System.out.println("clip is found at " + fingerprintSimilarity.getScore());
To convert the audio to a byte array I use the Java Sound API:
public static byte[] readAudioFileData(final String filePath) {
byte[] data = null;
try {
final ByteArrayOutputStream baout = new ByteArrayOutputStream();
final File file = new File(filePath);
final AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
byte[] buffer = new byte[4096];
int c;
while ((c = audioInputStream.read(buffer, 0, buffer.length)) != -1) {
baout.write(buffer, 0, c);
}
audioInputStream.close();
baout.close();
data = baout.toByteArray();
} catch (Exception e) {
e.printStackTrace();
}
return data;
}
but when I execute it, I get an exception at fingerprint.getFingerprintsSimilarity():
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 15999
at com.musicg.fingerprint.PairManager.getPairPositionList(PairManager.java:133)
at com.musicg.fingerprint.PairManager.getPair_PositionList_Table(PairManager.java:80)
at com.musicg.fingerprint.FingerprintSimilarityComputer.getFingerprintsSimilarity(FingerprintSimilarityComputer.java:71)
at Main.main(Main.java:42)
How can I compare 2 mp3 files with fingerprint in Java?
I never did any audio stuff in Java before, but I looked into your code briefly. I think that musicg only works for WAV files, not for MP3. Thus, you need to convert the files first. A web search reveals that you can e.g. use JLayer for that purpose. The corresponding code looks like this:
package de.scrum_master.so;
import com.musicg.fingerprint.FingerprintManager;
import com.musicg.fingerprint.FingerprintSimilarity;
import com.musicg.fingerprint.FingerprintSimilarityComputer;
import com.musicg.wave.Wave;
import javazoom.jl.converter.Converter;
import javazoom.jl.decoder.JavaLayerException;
public class Application {
public static void main(String[] args) throws JavaLayerException {
// MP3 to WAV
new Converter().convert("White Wedding.mp3", "White Wedding.wav");
new Converter().convert("Poison.mp3", "Poison.wav");
// Fingerprint from WAV
byte[] firstFingerPrint = new FingerprintManager().extractFingerprint(new Wave("White Wedding.wav"));
byte[] secondFingerPrint = new FingerprintManager().extractFingerprint(new Wave("Poison.wav"));
// Compare fingerprints
FingerprintSimilarity fingerprintSimilarity = new FingerprintSimilarityComputer(firstFingerPrint, secondFingerPrint).getFingerprintsSimilarity();
System.out.println("Similarity score = " + fingerprintSimilarity.getScore());
}
}
Of course you should make sure that you do not convert each file again whenever the program starts, i.e. you should check if the WAV files already exist. I skipped this step and reduced the sample code to a minimal working version.
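For example, a small guard like this (file names as in the sample above) avoids re-converting on every run:
// Only convert the MP3 if the corresponding WAV does not exist yet
File wav = new File("White Wedding.wav");
if (!wav.exists()) {
    new Converter().convert("White Wedding.mp3", wav.getPath());
}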
FingerprintSimilarityComputer(input1, input2) is supposed to take the fingerprints of the loaded audio data, not the loaded audio data itself.
In your case, it should be:
// Convert your audio to wav using FFMpeg
Wave w1 = new Wave("first.wav");
Wave w2 = new Wave("second.wav");
FingerprintSimilarityComputer fingerprint =
new FingerprintSimilarityComputer(w1.getFingerprint(), w2.getFingerprint());
// print fingerprint.getFingerprintsSimilarity()
Maybe I am missing a point, but if I understood you right, this should do:
byte[] firstAudio = readAudioFileData("first.mp3");
byte[] secondAudio = readAudioFileData("second.mp3");
byte[] smaller = firstAudio.length <= secondAudio.length ? firstAudio : secondAudio;
byte[] bigger = firstAudio.length > secondAudio.length ? firstAudio : secondAudio;
int ixS = 0;
int ixB = 0;
boolean contains = false;
for (; ixB<bigger.length; ixB++) {
if (smaller[ixS] == bigger[ixB]) {
ixS++;
if (ixS == smaller.length) {
contains = true;
break;
}
}
else {
ixS = 0;
}
}
if (contains) {
if (smaller.length == bigger.length) {
System.out.println("Both tracks are equal");
}
else {
System.out.println("The bigger track, fully contains the smaller track starting at byte: "+(ixB-smaller.lenght));
}
}
else {
System.out.println("No track completely contains the other track");
}
Is there a Java way to pre-allocate drive space for exclusive usage in the application?
There is no requirement for this space to be a separate filesystem or a part of existing filesystem (so could easily be a database), but it should allow for reserving the specified amount of space and allow for random reads/writes with high enough throughput.
Here's a stripped down version of my JNA-based fallocate solution. The main trick is obtaining the native file descriptor. I've only tested it on Linux so far, but it should work on all modern POSIX/non-Windows systems. It's not necessary on Windows, as Windows does not create sparse files by default (only with StandardOpenOption.SPARSE), so RandomAccessFile.setLength(size) or FileChannel.write(ByteBuffer.allocate(1), size - 1) are adequate there.
/**
* Provides access to operating system-specific {@code fallocate} and
* {@code posix_fallocate} functions.
*/
public final class Fallocate {
private static final boolean IS_LINUX = Platform.isLinux();
private static final boolean IS_POSIX = !Platform.isWindows();
private static final int FALLOC_FL_KEEP_SIZE = 0x01;
private final int fd;
private int mode;
private long offset;
private final long length;
private Fallocate(int fd, long length) {
if (!isSupported()) {
throwUnsupported("fallocate");
}
this.fd = fd;
this.length = length;
}
public static boolean isSupported() {
return IS_POSIX;
}
public static Fallocate forChannel(FileChannel channel, long length) {
return new Fallocate(getDescriptor(channel), length);
}
public static Fallocate forDescriptor(FileDescriptor descriptor, long length) {
return new Fallocate(getDescriptor(descriptor), length);
}
public Fallocate fromOffset(long offset) {
this.offset = offset;
return this;
}
public Fallocate keepSize() {
requireLinux("fallocate keep size");
mode |= FALLOC_FL_KEEP_SIZE;
return this;
}
private void requireLinux(String feature) {
if (!IS_LINUX) {
throwUnsupported(feature);
}
}
private void throwUnsupported(String feature) {
throw new UnsupportedOperationException(feature +
" is not supported on this operating system");
}
public void execute() throws IOException {
final int errno;
if (IS_LINUX) {
final int result = FallocateHolder.fallocate(fd, mode, offset, length);
errno = result == 0 ? 0 : Native.getLastError();
} else {
errno = PosixFallocateHolder.posix_fallocate(fd, offset, length);
}
if (errno != 0) {
throw new IOException("fallocate returned " + errno);
}
}
private static class FallocateHolder {
static {
Native.register(Platform.C_LIBRARY_NAME);
}
private static native int fallocate(int fd, int mode, long offset, long length);
}
private static class PosixFallocateHolder {
static {
Native.register(Platform.C_LIBRARY_NAME);
}
private static native int posix_fallocate(int fd, long offset, long length);
}
private static int getDescriptor(FileChannel channel) {
try {
// sun.nio.ch.FileChannelImpl declares private final java.io.FileDescriptor fd
final Field field = channel.getClass().getDeclaredField("fd");
field.setAccessible(true);
return getDescriptor((FileDescriptor) field.get(channel));
} catch (final Exception e) {
throw new UnsupportedOperationException("unsupported FileChannel implementation", e);
}
}
private static int getDescriptor(FileDescriptor descriptor) {
try {
// Oracle java.io.FileDescriptor declares private int fd
final Field field = descriptor.getClass().getDeclaredField("fd");
field.setAccessible(true);
return (int) field.get(descriptor);
} catch (final Exception e) {
throw new UnsupportedOperationException("unsupported FileDescriptor implementation", e);
}
}
}
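A usage sketch of the class above (the file name and size are just examples):
// Pre-allocate 1 GiB of real disk space for reserved.dat
try (RandomAccessFile raf = new RandomAccessFile("reserved.dat", "rw");
     FileChannel channel = raf.getChannel()) {
    Fallocate.forChannel(channel, 1024L * 1024 * 1024).execute();
}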
You could try using a RandomAccessFile object and use the setLength() method.
Example:
File file = ... //Create a temporary file on the filesystem you're trying to reserve space on.
long bytes = ... //number of bytes you want to reserve.
RandomAccessFile rf = null;
try{
rf = new RandomAccessFile(file, "rw"); //rw stands for open in read/write mode.
rf.setLength(bytes); //This will cause java to "reserve" memory for your application by inflating/truncating the file to the specific size.
//Do whatever you want with the space here...
}catch(IOException ex){
//Handle this...
}finally{
if(rf != null){
try{
rf.close(); //Lets be nice and tidy here.
}catch(IOException ioex){
//Handle this if you want...
}
}
}
Note: The file must exist before you create the RandomAccessFile object.
The RandomAccessFile object can then be used to read/write to the file. Make sure the target filesystem has enough free space. The space may not be "exclusive" per se, but you can always use file locks for that.
P.S.: If you end up realizing hard drives are slow (or you meant to use RAM from the start), you can use the ByteBuffer class from java.nio. The allocate() and allocateDirect() methods should be more than enough. The buffer will be allocated in RAM (and possibly swap) and will be exclusive to your Java program. Random access can be done by changing the position of the buffer. Since these buffers use signed integers to reference positions, maximum sizes are limited to 2^31 - 1 bytes.
Read more on RandomAccessFile here.
Read more on FileLock (the java object) here.
Read more on ByteBuffer here.
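For illustration, a minimal sketch of the ByteBuffer approach mentioned above:
// Reserve 64 MB in RAM and read/write at arbitrary positions
ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024 * 1024);
buffer.position(1024);      // "seek" to offset 1024
buffer.put((byte) 0x42);    // random-access write
buffer.position(1024);
byte b = buffer.get();      // random-access read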
On Linux systems you can use the fallocate() system call. It's extremely fast.
UPD: just run this Bash command:
fallocate -l 10G 10Gigfile
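If you prefer to stay in Java, the same command can be launched with ProcessBuilder (file name and size as in the example above):
// Launch fallocate as an external process and wait for it; 0 means success
int exit = new ProcessBuilder("fallocate", "-l", "10G", "10Gigfile")
        .inheritIO()
        .start()
        .waitFor();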
You can pre-allocate space by writing a large file, but to be honest I wouldn't bother. Performance will be pretty good, probably better than you need.
If you really needed performance, you'd be writing C++/C# and doing RAW I/O.
But that's typically only done when writing an RDBMS engine, high-volume media capture or similar.
I need to generate a waveform of the audio stream of a video.
Currently I'm using Xuggler and Java to do some small things, but it seems like I'm not able to get a byte array of the video's audio stream from IAudioSamples.
Now I'm searching for an easier way to do it, since Xuggler is really becoming hard to understand. I've searched online and I've found this:
http://codeidol.com/java/swing/Audio/Build-an-Audio-Waveform-Display/
It should work on .wav files, but when I try the code on a video or an .mp3, the AudioInputStream call fails with "cannot find an audio input stream".
Can someone tell me a way to get a byte[] array of the audio stream of a video so that I can follow the tutorial and create a waveform?
Also, if you have suggestions or other libraries that could help me, I would be glad to hear them.
Because MP3 is an encoded format, you'll first need to decode it to get raw data (bytes) from it.
class Mp3FileXuggler {
private boolean DEBUG = true;
private String _sInputFileName;
private IContainer _inputContainer;
private int _iBitRate;
private IPacket _packet;
private int _iAudioStreamId;
private IStreamCoder _audioCoder;
private int _iSampleBufferSize;
private int _iInputSampleRate;
private static SourceDataLine mLine;
private int DECODED_AUDIO_SECOND_SIZE = 176375; /** bytes */
private int _bytesPerPacket;
private byte[] _residualBuffer;
/**
* Constructor, prepares the stream to be read
* @param sFileName the input file name
* @throws UnsuportedSampleRateException
*/
public Mp3FileXuggler(String sFileName) throws UnsuportedSampleRateException{
this._sInputFileName = sFileName;
this._inputContainer = IContainer.make();
this._iSampleBufferSize = 18432;
this._residualBuffer = null;
/** Open container **/
if (this._inputContainer.open(this._sInputFileName, IContainer.Type.READ, null) < 0)
throw new IllegalArgumentException("Could not read the file: " + this._sInputFileName);
/** How many streams does the file actually have */
int iNumStreams = this._inputContainer.getNumStreams();
this._iBitRate = this._inputContainer.getBitRate();
if (DEBUG) System.out.println("Bitrate: " + this._iBitRate);
/** Iterate the streams to find the first audio stream */
this._iAudioStreamId = -1;
this._audioCoder = null;
boolean bFound = false;
int i = 0;
while (i < iNumStreams && bFound == false){
/** Find the stream object */
IStream stream = this._inputContainer.getStream(i);
IStreamCoder coder = stream.getStreamCoder();
/** If the stream is audio, stop looking */
if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_AUDIO){
this._iAudioStreamId = i;
this._audioCoder = coder;
this._iInputSampleRate = coder.getSampleRate();
bFound = true;
}
++i;
}
/** If none was found */
if (this._iAudioStreamId == -1)
throw new RuntimeException("Could not find audio stream in container: " + this._sInputFileName);
/** Otherwise, open audiocoder */
if (this._audioCoder.open(null,null) < 0)
throw new RuntimeException("could not open audio decoder for container: " + this._sInputFileName);
this._packet = IPacket.make();
//openJavaSound(this._audioCoder);
/** Dummy read one packet to avoid problems in some audio files */
this._inputContainer.readNextPacket(this._packet);
/** Supported sample rates */
switch(this._iInputSampleRate){
case 22050:
this._bytesPerPacket = 2304;
break;
case 44100:
this._bytesPerPacket = 4608;
break;
}
}
public byte[] getSamples(){
byte[] rawBytes = null;
/** Go to the correct packet */
while (this._inputContainer.readNextPacket(this._packet) >= 0){
//System.out.println(this._packet.getDuration());
/** Once we have a packet, let's see if it belongs to the audio stream */
if (this._packet.getStreamIndex() == this._iAudioStreamId){
IAudioSamples samples = IAudioSamples.make(this._iSampleBufferSize, this._audioCoder.getChannels());
// System.out.println(">> " + samples.toString());
/** Because a packet can contain multiple set of samples (frames of samples). We may need to call
* decode audio multiple times at different offsets in the packet's data */
int iCurrentOffset = 0;
while(iCurrentOffset < this._packet.getSize()){
int iBytesDecoded = this._audioCoder.decodeAudio(samples, this._packet, iCurrentOffset);
iCurrentOffset += iBytesDecoded;
if (samples.isComplete()){
rawBytes = samples.getData().getByteArray(0, samples.getSize());
//playJavaSound(samples);
}
}
return rawBytes;
}
else{
/** Otherwise drop it */
do{}while(false);
}
}
return rawBytes; /** This will return null at this point */
}
}
Use this class to get the raw data from an MP3 file, and feed it to your waveform drawer.
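A usage sketch (the file name is a placeholder; getSamples() returns null once the stream is exhausted, and the constructor declares UnsuportedSampleRateException as above):
// Decode packet by packet and hand the raw PCM bytes to the waveform drawer
Mp3FileXuggler decoder = new Mp3FileXuggler("audio_track.mp3");
byte[] chunk;
while ((chunk = decoder.getSamples()) != null) {
    // e.g. compute the min/max amplitude of this chunk for one column of the waveform
}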
I'm currently trying to write a custom stream proxy (let's call it that) that can change the content of a given input stream and produce a modified output if necessary. This is really necessary because sometimes I have to modify the streams in my application (e.g. compress the data truly on the fly). The following class is pretty simple and uses internal buffering.
private static class ProxyInputStream extends InputStream {
private final InputStream iStream;
private final byte[] iBuffer = new byte[512];
private int iBufferedBytes;
private final ByteArrayOutputStream oBufferStream;
private final OutputStream oStream;
private byte[] oBuffer = emptyPrimitiveByteArray;
private int oBufferIndex;
ProxyInputStream(InputStream iStream, IFunction<OutputStream, ByteArrayOutputStream> oStreamFactory) {
this.iStream = iStream;
oBufferStream = new ByteArrayOutputStream(512);
oStream = oStreamFactory.evaluate(oBufferStream);
}
@Override
public int read() throws IOException {
if ( oBufferIndex == oBuffer.length ) {
iBufferedBytes = iStream.read(iBuffer);
if ( iBufferedBytes == -1 ) {
return -1;
}
oBufferIndex = 0;
oStream.write(iBuffer, 0, iBufferedBytes);
oStream.flush();
oBuffer = oBufferStream.toByteArray();
oBufferStream.reset();
}
return oBuffer[oBufferIndex++];
}
}
Let's assume we also have a sample test output stream that simply adds a space character before every written byte ("abc" -> " a b c") like this:
private static class SpacingOutputStream extends OutputStream {
private final OutputStream outputStream;
SpacingOutputStream(OutputStream outputStream) {
this.outputStream = outputStream;
}
@Override
public void write(int b) throws IOException {
outputStream.write(' ');
outputStream.write(b);
}
}
And the following test method:
private static void test(final boolean useDeflater) throws IOException {
final FileInputStream input = new FileInputStream(SOURCE);
final IFunction<OutputStream, ByteArrayOutputStream> outputFactory = new IFunction<OutputStream, ByteArrayOutputStream>() {
@Override
public OutputStream evaluate(ByteArrayOutputStream outputStream) {
return useDeflater ? new DeflaterOutputStream(outputStream) : new SpacingOutputStream(outputStream);
}
};
final InputStream proxyInput = new ProxyInputStream(input, outputFactory);
final OutputStream output = new FileOutputStream(SOURCE + ".~" + useDeflater);
int c;
while ( (c = proxyInput.read()) != -1 ) {
output.write(c);
}
output.close();
proxyInput.close();
}
This test method simply reads the file content and writes it to another stream that can be modified somehow along the way. If the test method is run with useDeflater=false, the approach works fine as expected. But if the test method is invoked with useDeflater set on, it behaves really strangely and simply writes almost nothing (apart from the header 78 9C). I suspect that the deflater class may not be designed for the approach I'd like to use, but I always believed that the ZIP format and deflate compression are designed to work on the fly.
Probably I'm wrong about some specifics of the deflate compression algorithm. What am I really missing? Perhaps there is another approach to writing a "stream proxy" that behaves exactly as I want... How can I compress the data on the fly while being limited to streams only?
Thanks in advance.
UPD: The following basic version works pretty nicely with deflater and inflater:
public final class ProxyInputStream<OS extends OutputStream> extends InputStream {
private static final int INPUT_BUFFER_SIZE = 512;
private static final int OUTPUT_BUFFER_SIZE = 512;
private final InputStream iStream;
private final byte[] iBuffer = new byte[INPUT_BUFFER_SIZE];
private final ByteArrayOutputStream oBufferStream;
private final OS oStream;
private final IProxyInputStreamListener<OS> listener;
private byte[] oBuffer = emptyPrimitiveByteArray;
private int oBufferIndex;
private boolean endOfStream;
private ProxyInputStream(InputStream iStream, IFunction<OS, ByteArrayOutputStream> oStreamFactory, IProxyInputStreamListener<OS> listener) {
this.iStream = iStream;
oBufferStream = new ByteArrayOutputStream(OUTPUT_BUFFER_SIZE);
oStream = oStreamFactory.evaluate(oBufferStream);
this.listener = listener;
}
public static <OS extends OutputStream> ProxyInputStream<OS> proxyInputStream(InputStream iStream, IFunction<OS, ByteArrayOutputStream> oStreamFactory, IProxyInputStreamListener<OS> listener) {
return new ProxyInputStream<OS>(iStream, oStreamFactory, listener);
}
@Override
public int read() throws IOException {
if ( oBufferIndex == oBuffer.length ) {
if ( endOfStream ) {
return -1;
} else {
oBufferIndex = 0;
do {
final int iBufferedBytes = iStream.read(iBuffer);
if ( iBufferedBytes == -1 ) {
if ( listener != null ) {
listener.afterEndOfStream(oStream);
}
endOfStream = true;
break;
}
oStream.write(iBuffer, 0, iBufferedBytes);
oStream.flush();
} while ( oBufferStream.size() == 0 );
oBuffer = oBufferStream.toByteArray();
oBufferStream.reset();
}
}
return !endOfStream || oBuffer.length != 0 ? (int) oBuffer[oBufferIndex++] & 0xFF : -1;
}
}
I don't believe that DeflaterOutputStream.flush() does anything meaningful. The deflater will accumulate data until it has something to write out to the underlying stream. The only way to force the remaining bit of data out is to call DeflaterOutputStream.finish(). However, this would not work for your current implementation, as you can't call finish until you are entirely done writing.
It's actually very difficult to write a compressed stream and read it within the same thread. In the RMIIO project I actually do this, but you need an arbitrarily sized intermediate output buffer (and you basically need to push data in until something comes out compressed on the other end, then you can read it). You might be able to use some of the util classes in that project to accomplish what you want to do.
Why not use GZIPOutputStream?
Maybe I'm a little lost, but you should simply use the original outputStream when you don't want to compress and new GZIPOutputStream(outputStream) when you DO want to compress. That's all. Anyway, check that you are flushing the output streams.
GZIP vs ZIP
Also: one thing is GZIP (compressing a stream, which is what you're doing) and another thing is writing a valid ZIP file (file headers, file directory, entries (header, data)*). Check ZipOutputStream.
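A minimal sketch of the difference (data is a placeholder byte array, classes from java.util.zip):
// GZIPOutputStream compresses a single stream (.gz)
try (OutputStream gz = new GZIPOutputStream(new FileOutputStream("data.gz"))) {
    gz.write(data);
}
// ZipOutputStream writes a ZIP archive with named entries (.zip)
try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream("data.zip"))) {
    zip.putNextEntry(new ZipEntry("data.bin"));
    zip.write(data);
    zip.closeEntry();
}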
Be careful: if somewhere you use the method
int read(byte b[], int off, int len) and an exception occurs in the line
final int iBufferedBytes = iStream.read(iBuffer);
you will get stuck in an infinite loop.