Do I need to close InputStream/OutputStream if I close BluetoothSocket? - java

I have a BluetoothSocket and two streams.
Method m = device.getClass().getMethod("createRfcommSocket", new Class[] {int.class});
BluetoothSocket s = (BluetoothSocket) m.invoke(device, 1);
s.connect();
InputStream in = s.getInputStream();
OutputStream out = s.getOutputStream();
At some moment I want to close the socket. Do I have to close the streams?
The problem is that each close() may throw an exception that I have to catch, and the code becomes bloated.
IIRC in some similar case it was enough to close the main object (which would be the socket in this case), and other objects were closed automatically. But this behavior is not documented for BluetoothSocket (or I could not find it).
So:
If I close a bluetooth socket, do I have to close its streams?
(And what about Sockets? Are they different? BluetoothSocket does not inherit from Socket.)

I have been working with Android Bluetooth recently and checked the sources. It seems that you don't need to close your streams.
Indeed, your streams are objects of type BluetoothInputStream and BluetoothOutputStream, created in the BluetoothSocket constructor:
mInputStream = new BluetoothInputStream(this);
mOutputStream = new BluetoothOutputStream(this);
Those are the streams returned when you call:
InputStream in = s.getInputStream();
OutputStream out = s.getOutputStream();
But when you call .close() on these streams, you call:
public void close() throws IOException {
    mSocket.close();
}
So you only close the BluetoothSocket again.
In conclusion, you don't need to close those streams.
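So closing just the socket is enough. For instance, a minimal sketch (assuming minSdkVersion 19+ for try-with-resources; BluetoothSocket implements Closeable):

// Sketch: close only the socket; its streams delegate close() back to it anyway
try (BluetoothSocket socket = (BluetoothSocket) m.invoke(device, 1)) {
    socket.connect();
    InputStream in = socket.getInputStream();
    OutputStream out = socket.getOutputStream();
    // ... read and write ...
} // socket.close() runs here, even if an exception was thrown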
As for your second question, the only thing that Socket and BluetoothSocket have in common is that they implement Closeable: they both have a .close() method. That does not mean they behave the same way.
Here is the complete code for BluetoothOutputStream:
/*package*/ final class BluetoothOutputStream extends OutputStream {
    private BluetoothSocket mSocket;

    /*package*/ BluetoothOutputStream(BluetoothSocket s) {
        mSocket = s;
    }

    /**
     * Close this output stream and the socket associated with it.
     */
    public void close() throws IOException {
        mSocket.close();
    }

    /**
     * Writes a single byte to this stream. Only the least significant byte of
     * the integer {@code oneByte} is written to the stream.
     *
     * @param oneByte
     *            the byte to be written.
     * @throws IOException
     *             if an error occurs while writing to this stream.
     * @since Android 1.0
     */
    public void write(int oneByte) throws IOException {
        byte b[] = new byte[1];
        b[0] = (byte) oneByte;
        mSocket.write(b, 0, 1);
    }

    /**
     * Writes {@code count} bytes from the byte array {@code buffer} starting
     * at position {@code offset} to this stream.
     *
     * @param b
     *            the buffer to be written.
     * @param offset
     *            the start position in {@code buffer} from where to get bytes.
     * @param count
     *            the number of bytes from {@code buffer} to write to this
     *            stream.
     * @throws IOException
     *             if an error occurs while writing to this stream.
     * @throws IndexOutOfBoundsException
     *             if {@code offset < 0} or {@code count < 0}, or if
     *             {@code offset + count} is bigger than the length of
     *             {@code buffer}.
     * @since Android 1.0
     */
    public void write(byte[] b, int offset, int count) throws IOException {
        if (b == null) {
            throw new NullPointerException("buffer is null");
        }
        if ((offset | count) < 0 || count > b.length - offset) {
            throw new IndexOutOfBoundsException("invalid offset or length");
        }
        mSocket.write(b, offset, count);
    }

    /**
     * Wait until the data in the sending queue is emptied. A polling version
     * of the flush implementation. Use it to ensure that data written afterwards
     * will be packed in a new RFCOMM frame.
     *
     * @throws IOException
     *             if an i/o error occurs.
     * @since Android 4.2.3
     */
    public void flush() throws IOException {
        mSocket.flush();
    }
}

You should close the streams first. Only calling close() on the socket will usually work out, but if a new connection is opened immediately following (read: unit testing), you'll have problems.
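If you do close them explicitly, keep the order streams-then-socket. A sketch of compact cleanup (using the variables from the question):

// Sketch: close streams before the socket, ignoring secondary failures
try { in.close(); } catch (IOException ignored) { }
try { out.close(); } catch (IOException ignored) { }
try { s.close(); } catch (IOException ignored) { }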

At some moment I want to close the socket. Do I have to close the
streams? The problem is that each close() may throw an exception that
I have to catch, and the code becomes bloated.
If the documentation does not mention anything about it, I think you should close them. If your problem is just the exception handling, you can use a utility method, such as:
public void closeStream(Closeable stream) {
    if (stream == null) {
        return;
    }
    try {
        stream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
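Then the cleanup stays compact; since BluetoothSocket itself implements Closeable, the same helper covers it:

closeStream(in);
closeStream(out);
closeStream(s); // BluetoothSocket implements Closeable too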

Related

InputStream.read() stuck

I have searched for this problem, but all the people encountering it are using sockets.
I am just reading a plain file, so everything should already be in the stream...
Here is my code:
AudioInputStream in = null;
MpegAudioFileReader mp = new MpegAudioFileReader();
in = mp.getAudioInputStream(new File(this.directory + currentBeatmaps.get(0).getAudioFileName()));
AudioFormat baseFormat = in.getFormat();
AudioFormat decodedFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
        baseFormat.getSampleRate(),
        16,
        baseFormat.getChannels(),
        baseFormat.getChannels() * 2,
        baseFormat.getSampleRate(),
        false);
final AudioInputStream din2 = AudioSystem.getAudioInputStream(decodedFormat, in);
long length = din2.getFrameLength();
byte[] bytes = IOUtils.toByteArray(din2);
in.close();
InputStream is1 = new ByteArrayInputStream(bytes);
AudioInputStream din = new AudioInputStream(
        is1,
        decodedFormat,
        length
);
IOUtils.toByteArray(din2) gets stuck for some files and works for others, so I checked what was inside, and this is the function that gets stuck when debugging:
/**
 * Copies bytes from a large (over 2GB) <code>InputStream</code> to an
 * <code>OutputStream</code>.
 * <p>
 * This method uses the provided buffer, so there is no need to use a
 * <code>BufferedInputStream</code>.
 * <p>
 *
 * @param input the <code>InputStream</code> to read from
 * @param output the <code>OutputStream</code> to write to
 * @param buffer the buffer to use for the copy
 * @return the number of bytes copied
 * @throws NullPointerException if the input or output is null
 * @throws IOException if an I/O error occurs
 * @since 2.2
 */
public static long copyLarge(final InputStream input, final OutputStream output, final byte[] buffer)
        throws IOException {
    long count = 0;
    int n;
    while (EOF != (n = input.read(buffer))) {
        output.write(buffer, 0, n);
        count += n;
    }
    return count;
}
It hangs on input.read(buffer) in the first iteration.
And I repeat, this works for some files but not for others, so I don't know how to handle it...
If someone could at least find a way to detect when this will happen, so I can print an error, that would be great (this part is not critical to my software).
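One way to at least detect the stall so an error can be reported (a sketch, assuming Java 8+ and java.util.concurrent; note that a blocked read() is not reliably interruptible, so the hung worker thread may have to be abandoned):

ExecutorService exec = Executors.newSingleThreadExecutor();
Future<byte[]> future = exec.submit(() -> IOUtils.toByteArray(din2));
try {
    byte[] bytes = future.get(30, TimeUnit.SECONDS); // tune the timeout to your files
    // ... continue as before ...
} catch (TimeoutException e) {
    future.cancel(true); // best effort; the underlying read may stay blocked
    System.err.println("Audio decode stalled, skipping this file");
} catch (Exception e) {
    e.printStackTrace();
} finally {
    exec.shutdownNow();
}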

Java Socket Corrupting PNG-image

I am currently trying to use a Socket to send a PNG or JPEG image from one client to another (in Java), but the images always become corrupted (when I try to open one, it just says that it can't be opened because it's damaged, faulty or too big).
I have tried the methods that load images into a byte[], and if I just load an image into a byte[] and then save it back down it works perfectly, so the problem must be in the sending of the byte[].
Here are the functions I use for sending:
/**
 * Attempts to send data through the socket with the BufferedOutputStream. <p>
 * Any safety checks should be done beforehand
 * @param data - the byte[] containing the data that shall be sent
 * @return - returns 'true' if the sending succeeded and 'false' in case of IOException
 */
public boolean sendData(byte[] data){
    try {
        // We simply try to send the data
        outS.write(data, 0, data.length);
        outS.flush();
        return true; // Success
    } catch (IOException e) {
        e.printStackTrace();
        return false; // Failed
    }
}

/**
 * Attempts to receive data sent to the socket. It uses a BufferedInputStream
 * @param size - the number of bytes that should be read
 * @return - byte[] with the received bytes or 'null' in case of an IOException
 */
public byte[] receiveData(int size){
    try {
        int read = 0, r;
        byte[] data = new byte[size];
        do {
            // We keep reading until we have gotten all data
            r = inS.read(data, read, size - read);
            if (r > 0) read += r;
        } while (r > -1 && read < size); // We stop only if we either hit the end of the
                                         // data or if we have received the amount of data we expected
        return data;
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}
The images that arrive seem to be the correct size, so the data is at least arriving, just corrupted.
Throw your receiveData() method away and use DataInputStream.readFully().
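A minimal sketch of that approach (assuming inS is the stream the class already reads from; readFully() loops internally until size bytes have arrived, or throws EOFException if the stream ends early):

DataInputStream din = new DataInputStream(inS);
byte[] data = new byte[size];
din.readFully(data); // blocks until all 'size' bytes have been read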

use FileInputStream and FileOutputStream as a buffer

I would like to use the hard drive as a buffer for audio signals. My idea was to just write the samples in byte form to a file and read them back with another thread. However, I have the problem that FIS.read(byte[]) returns 0 and gives me an empty buffer.
What is the problem here?
This is my operation for writing bytes:
try {
    bufferOS.write(audioChunk);
    bufferOS.flush();
} catch (IOException ex) {
    //...
}
And this is what my reader does:
byte audioChunk[] = new byte[bufferSize];
int readBufferSize;
int freeBufferSize = line.available(); // line = audio line, available returns free space in buffer
try {
    readBufferSize = bufferIS.read(audioChunk, freeBufferSize, 0);
} catch(IOException e) {
    //...
}
I create both bufferOS and bufferIS with the same file, both work.
The writer works, the file gets created and has the correct data in it.
However, the bufferIS.read() call always returns 0.
The FileInputStream reports the correct number of available bytes via available(), and parameters like freeBufferSize and audioChunk.length are correct.
Is there a problem with running FileInputStream and FileOutputStream on the same file in windows?
You're passing the arguments in the wrong order to the read call, it should be:
readBufferSize = bufferIS.read(audioChunk, 0, freeBufferSize);
Right now you're passing freeBufferSize as the offset at which to store the result of the read call, and 0 as the maximum number of bytes to read. It's not surprising that, if you tell the read call to read at most zero bytes, it reports that it read zero bytes.
Javadoc:
 * @param b the buffer into which the data is read.
 * @param off the start offset in array <code>b</code>
 *        at which the data is written.
 * @param len the maximum number of bytes to read.
 * @return the total number of bytes read into the buffer, or
 *         <code>-1</code> if there is no more data because the end of
 *         the stream has been reached.
public abstract class InputStream implements Closeable {
    // ....
    public int read(byte b[], int off, int len) throws IOException
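For completeness, a corrected read loop sketch (assuming line is a javax.sound.sampled.SourceDataLine, as the question suggests):

// Sketch: offset first, then length; clamp the length to the buffer size
int n = bufferIS.read(audioChunk, 0, Math.min(freeBufferSize, audioChunk.length));
if (n > 0) {
    line.write(audioChunk, 0, n); // feed the bytes to the audio line
}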

Why is the gzip compressed buffer size greater than the uncompressed buffer?

I'm trying to write a compression utils class.
But during testing, I find the result is greater than the original buffer.
Is my code right?
Please see the code:
/**
 * This class provides compression ability
 * <p>
 * Support:
 * <li>GZIP
 * <li>Deflate
 */
public class CompressUtils {
    final public static int DEFAULT_BUFFER_SIZE = 4096; // Compress/Decompress buffer is 4K

    /**
     * GZIP Compress
     *
     * @param data The data to be compressed
     * @return The compressed data
     * @throws IOException
     */
    public static byte[] gzipCompress(byte[] data) throws IOException {
        Validate.isTrue(ArrayUtils.isNotEmpty(data));
        ByteArrayInputStream bis = new ByteArrayInputStream(data);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try {
            gzipCompress(bis, bos);
            bos.flush();
            return bos.toByteArray();
        } finally {
            bis.close();
            bos.close();
        }
    }

    /**
     * GZIP Decompress
     *
     * @param data The data to be decompressed
     * @return The decompressed data
     * @throws IOException
     */
    public static byte[] gzipDecompress(byte[] data) throws IOException {
        Validate.isTrue(ArrayUtils.isNotEmpty(data));
        ByteArrayInputStream bis = new ByteArrayInputStream(data);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try {
            gzipDecompress(bis, bos);
            bos.flush();
            return bos.toByteArray();
        } finally {
            bis.close();
            bos.close();
        }
    }

    /**
     * GZIP Compress
     *
     * @param is The input stream to be compressed
     * @param os The compressed result
     * @throws IOException
     */
    public static void gzipCompress(InputStream is, OutputStream os) throws IOException {
        GZIPOutputStream gos = null;
        byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
        int count = 0;
        try {
            gos = new GZIPOutputStream(os);
            while ((count = is.read(buffer)) != -1) {
                gos.write(buffer, 0, count);
            }
            gos.finish();
            gos.flush();
        } finally {
            if (gos != null) {
                gos.close();
            }
        }
    }

    /**
     * GZIP Decompress
     *
     * @param is The input stream to be decompressed
     * @param os The decompressed result
     * @throws IOException
     */
    public static void gzipDecompress(InputStream is, OutputStream os) throws IOException {
        GZIPInputStream gis = null;
        int count = 0;
        byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
        try {
            gis = new GZIPInputStream(is);
            // Read from the GZIPInputStream, not the raw stream,
            // or the compressed bytes are copied straight through
            while ((count = gis.read(buffer)) != -1) {
                os.write(buffer, 0, count);
            }
        } finally {
            if (gis != null) {
                gis.close();
            }
        }
    }
}
And here's the test code:
public class CompressUtilsTest {
    private Random random = new Random();

    @Test
    public void gzipTest() throws IOException {
        byte[] buffer = new byte[1023];
        random.nextBytes(buffer);
        System.out.println("Original: " + Hex.encodeHexString(buffer));

        byte[] result = CompressUtils.gzipCompress(buffer);
        System.out.println("Compressed: " + Hex.encodeHexString(result));

        byte[] decompressed = CompressUtils.gzipDecompress(result);
        System.out.println("Decompressed: " + Hex.encodeHexString(decompressed));

        Assert.assertArrayEquals(buffer, decompressed);
    }
}
And the result is:
original is 1023 bytes long
compressed is 1036 bytes long
How did this happen?
In your test you initialize the buffer with a set of random characters.
GZIP consists of two parts:
LZ77 compression
Encoding using a Huffman code
The former relies heavily on repeated sequences in the input. Basically it says something like: "The next 10 characters are the same as the 10 characters starting at index X."
In your case there are (possibly) no such repeated sequences, thus no compression by the first algorithm.
The Huffman encoding, on the other hand, should still work, but in total the GZIP overhead (e.g. storing the Huffman tables used) outweighs the advantage of compressing the input.
If you test your algorithm with real files, you will get some meaningful results.
Best results are usually acquired when trying to compress structured files like XML.
It's because compression generally works well on medium to large inputs (1023 bytes is quite small), and moreover it works best on data that contains repeated patterns, not on random data.
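A quick way to see this effect, as a sketch reusing the CompressUtils class from the question (sizes are approximate):

byte[] randomData = new byte[1023];
new Random().nextBytes(randomData);     // incompressible: gzip only adds overhead
byte[] repetitiveData = new byte[1023]; // all zeros: maximally repetitive

System.out.println(CompressUtils.gzipCompress(randomData).length);     // ~1036 bytes
System.out.println(CompressUtils.gzipCompress(repetitiveData).length); // ~30 bytes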

Iterable gzip deflate/inflate in Java

Is there a library for gzip-deflating in terms of ByteBuffers hidden in the Internet? Something which allows us to push raw data then pull deflated data? We have searched for it but found only libraries which deal with InputStreams and OutputStreams.
We are tasked with creating gzip filters for deflating a flow of ByteBuffers in a pipeline architecture. This is a pull architecture where the last element pulls data from earlier elements. Our gzip filter deals with a flow of ByteBuffers, there is no single Stream object available.
We have toyed with adapting the data flow as some kind of InputStream and then use GZipOutputStream to satisfy our requirements but the amount of adaptor code is annoying to say the least.
Post-accept edit: for the record, our architecture is similar to that of GStreamer and the likes.
I don't understand the "hidden in the internet" part, but zlib does in-memory gzip format compression and decompression. The java.util.zip API provides some access to zlib, though it is limited. Due to the interface limitations, you cannot request that zlib produce and consume gzip streams directly. You can however use the nowrap option to produce and consume raw deflate data. Then it's easy to roll your own gzip header and trailer, using the CRC32 class in java.util.zip. You can prepend a fixed 10-byte header, append the four-byte CRC and then the four-byte uncompressed length (modulo 2^32), both in little-endian order, and you're good to go.
Much credit to Mark Adler for suggesting this approach, which is much better than my original answer.
package stack;

import java.io.*;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.zip.CRC32;
import java.util.zip.Deflater;

public class BufferDeflate2 {
    /** The standard 10 byte GZIP header */
    private static final byte[] GZIP_HEADER = new byte[] { 0x1f, (byte) 0x8b,
            Deflater.DEFLATED, 0, 0, 0, 0, 0, 0, 0 };

    /** CRC-32 of uncompressed data. */
    private final CRC32 crc = new CRC32();

    /** Deflater to deflate data */
    private final Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION,
            true);

    /** Output buffer building area */
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    /** Internal transfer space */
    private final byte[] transfer = new byte[1000];

    /** The flush mode to use at the end of each buffer */
    private final int flushMode;

    /**
     * New buffer deflater
     *
     * @param syncFlush
     *            if true, all data in buffer can be immediately decompressed
     *            from output buffer
     */
    public BufferDeflate2(boolean syncFlush) {
        flushMode = syncFlush ? Deflater.SYNC_FLUSH : Deflater.NO_FLUSH;
        buffer.write(GZIP_HEADER, 0, GZIP_HEADER.length);
    }

    /**
     * Deflate the buffer
     *
     * @param in
     *            the buffer to deflate
     * @return deflated representation of the buffer
     */
    public ByteBuffer deflate(ByteBuffer in) {
        // convert buffer to bytes
        byte[] inBytes;
        int off = in.position();
        int len = in.remaining();
        if( in.hasArray() ) {
            inBytes = in.array();
        } else {
            off = 0;
            inBytes = new byte[len];
            in.get(inBytes);
        }

        // update CRC and deflater
        crc.update(inBytes, off, len);
        deflater.setInput(inBytes, off, len);

        while( !deflater.needsInput() ) {
            int r = deflater.deflate(transfer, 0, transfer.length, flushMode);
            buffer.write(transfer, 0, r);
        }

        byte[] outBytes = buffer.toByteArray();
        buffer.reset();
        return ByteBuffer.wrap(outBytes);
    }

    /**
     * Write the final buffer. This writes any remaining compressed data and the GZIP trailer.
     *
     * @return the final buffer
     */
    public ByteBuffer doFinal() {
        // finish deflating
        deflater.finish();

        // write all remaining data
        int r;
        do {
            r = deflater.deflate(transfer, 0, transfer.length,
                    Deflater.FULL_FLUSH);
            buffer.write(transfer, 0, r);
        } while( r == transfer.length );

        // write GZIP trailer: CRC-32, then uncompressed length, little-endian
        writeInt((int) crc.getValue());
        writeInt((int) deflater.getBytesRead());

        // reset deflater
        deflater.reset();

        // final output
        byte[] outBytes = buffer.toByteArray();
        buffer.reset();
        return ByteBuffer.wrap(outBytes);
    }

    /**
     * Write a 32 bit value in little-endian order
     *
     * @param v
     *            the value to write
     */
    private void writeInt(int v) {
        buffer.write(v & 0xff);
        buffer.write((v >> 8) & 0xff);
        buffer.write((v >> 16) & 0xff);
        buffer.write((v >> 24) & 0xff);
    }

    /**
     * For testing. Pass in the name of a file to GZIP compress
     *
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        File inFile = new File(args[0]);
        File outFile = new File(args[0] + ".test.gz");
        FileChannel inChan = (new FileInputStream(inFile)).getChannel();
        FileChannel outChan = (new FileOutputStream(outFile)).getChannel();

        BufferDeflate2 def = new BufferDeflate2(false);

        ByteBuffer buf = ByteBuffer.allocate(500);
        while( true ) {
            buf.clear();
            int r = inChan.read(buf);
            if( r == -1 ) break;
            buf.flip();
            ByteBuffer compBuf = def.deflate(buf);
            outChan.write(compBuf);
        }

        ByteBuffer compBuf = def.doFinal();
        outChan.write(compBuf);
        inChan.close();
        outChan.close();
    }
}
Processing ByteBuffers is not hard. See my sample code below. You need to know how the buffers are created. The options are:
Each buffer is compressed independently. This is so simple to handle that I assume it is not the case. You would just transform the buffer into a byte array and wrap it in a ByteArrayInputStream within a GZIPInputStream.
Each buffer was ended with a SYNC_FLUSH by the writer, and thus comprises an entire block of data within a stream. All the data written by the writer to the buffer can be read immediately by the reader.
Each buffer is just part of a GZIP stream. There is no guarantee the reader can read anything from the buffer.
Data generated by GZIP must be processed in order. The ByteBuffers will have to be processed in the same order they are generated.
Sample code:
package stack;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.Pipe;
import java.nio.channels.SelectableChannel;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.zip.GZIPInputStream;

public class BufferDeflate {
    static AtomicInteger idSrc = new AtomicInteger(1);

    /** End-of-stream sentinel (a BlockingQueue does not accept null) */
    private static final ByteBuffer POISON = ByteBuffer.allocate(0);

    /** Queue for transferring buffers */
    final BlockingQueue<ByteBuffer> buffers = new LinkedBlockingQueue<ByteBuffer>();

    /** The entry point for deflated buffers */
    final Pipe.SinkChannel bufSink;

    /** The source for the inflater */
    final Pipe.SourceChannel infSource;

    /** The destination for the inflater */
    final Pipe.SinkChannel infSink;

    /** The source for the outside world */
    public final SelectableChannel source;

    class Relayer extends Thread {
        public Relayer(int id) {
            super("BufferRelayer" + id);
        }

        public void run() {
            try {
                while( true ) {
                    ByteBuffer buf = buffers.take();
                    if( buf != POISON ) {
                        bufSink.write(buf);
                    } else {
                        bufSink.close();
                        break;
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    class Inflater extends Thread {
        public Inflater(int id) {
            super("BufferInflater" + id);
        }

        public void run() {
            try {
                InputStream in = Channels.newInputStream(infSource);
                GZIPInputStream gzip = new GZIPInputStream(in);
                OutputStream out = Channels.newOutputStream(infSink);
                int ch;
                while( (ch = gzip.read()) != -1 ) {
                    out.write(ch);
                }
                out.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    /**
     * New buffer inflater
     */
    public BufferDeflate() throws IOException {
        Pipe pipe = Pipe.open();
        bufSink = pipe.sink();
        infSource = pipe.source();

        pipe = Pipe.open();
        infSink = pipe.sink();
        source = pipe.source().configureBlocking(false);

        int id = idSrc.incrementAndGet();
        Thread thread = new Relayer(id);
        thread.setDaemon(true);
        thread.start();

        thread = new Inflater(id);
        thread.setDaemon(true);
        thread.start();
    }

    /**
     * Add the buffer to the stream. A null buffer closes the stream
     *
     * @param buf
     *            the buffer to add
     * @throws IOException
     */
    public void add(ByteBuffer buf) throws IOException {
        // map null to the sentinel, since the queue rejects null
        buffers.offer(buf == null ? POISON : buf);
    }
}
Simply pass the buffers to the add method and read from the public source channel. The amount of data that can be read from GZIP after processing a given number of bytes is impossible to predict. I have therefore made the source channel non-blocking so you can safely read from it in the same thread that you add the byte buffers.
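For example, a usage sketch (hypothetical chunk name; the source field must be cast to a java.nio.channels.ReadableByteChannel to read from it):

BufferDeflate inflater = new BufferDeflate();
inflater.add(compressedChunk); // a ByteBuffer of gzip data from the pipeline
inflater.add(null);            // signal end of stream

ByteBuffer out = ByteBuffer.allocate(4096);
int n = ((ReadableByteChannel) inflater.source).read(out); // may be 0: non-blocking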
