Java socket speed strangely low

I just copied a working example of data transfer from this forum and used it with little change in my program, but I can't see what the problem with its speed is. I tested the original example and it transfers about 1 MB in less than 30 ms. Both read and write speeds are very good. But when I use it in my case, the same amount of data takes more than 400 ms to transfer! The writing remains efficient, but reading is somehow problematic.
Here the data is written. For now, I'm not trying to speed up the first part, i.e. the serialization of the object. My question is about the second part.
private static void writeObject(OutputStream out, Object obj) throws IOException {
    long t1 = System.currentTimeMillis();
    ByteArrayOutputStream bArr = new ByteArrayOutputStream();
    ObjectOutputStream ojs = new ObjectOutputStream(bArr);
    ojs.writeObject(obj);
    ojs.close();
    long t2 = System.currentTimeMillis();
    byte[] arr = bArr.toByteArray();
    int len = arr.length;
    for (int i = 0; i < arr.length; i += BUFFER_SIZE)
        out.write(arr, i, Math.min(len - i, BUFFER_SIZE));
    out.close();
    long t3 = System.currentTimeMillis();
    System.out.println(t3 - t2);
}
Well, this is not that bad! t3 - t2 prints some 30ms.
The problem is in readObject(). It is not in the second part, where the object is deserialized (at least not for now), but in the first part, where t2 - t1 turns out to be more than 400 ms, as I mentioned.
private static Object readObject(InputStream in) throws IOException, ClassNotFoundException {
    long t1 = System.currentTimeMillis();
    ByteArrayOutputStream bao = new ByteArrayOutputStream();
    byte[] buff = new byte[BUFFER_SIZE];
    int read;
    while ((read = in.read(buff)) != -1) {
        bao.write(buff, 0, read);
    }
    in.close();
    long t2 = System.currentTimeMillis();
    ByteArrayInputStream bArr = new ByteArrayInputStream(bao.toByteArray());
    Object o = new ObjectInputStream(new BufferedInputStream(bArr)).readObject();
    long t3 = System.currentTimeMillis();
    System.out.println(t2 - t1);
    return o;
}
And here is the main():
final static int BUFFER_SIZE = 64 * 1024;

public static void main(String[] args) throws Exception {
    final String largeFile1 = "store.aad";
    final Table t = (Table) new ObjectInputStream(new FileInputStream(largeFile1)).readObject();
    new Thread(new Runnable() {
        public void run() {
            try {
                ServerSocket serverSocket = new ServerSocket(12345);
                Socket clientSocket = serverSocket.accept();
                readObject(clientSocket.getInputStream());
            } catch (Exception e) {
            }
        }
    }).start();
    new Thread(new Runnable() {
        public void run() {
            try {
                Thread.sleep(1000);
                Socket socket = new Socket("localhost", 12345);
                OutputStream socketOutputStream = socket.getOutputStream();
                writeObject(socketOutputStream, t);
            } catch (Exception e) {
            }
        }
    }).start();
}
Where am I going wrong?!

(An obligatory comment about the difficulty of getting Java benchmarks correct).
It seems that you are launching two threads, a reader thread and a writer thread. It is entirely feasible that things proceed in the following order:
The reader thread starts.
The reader thread records t1.
The reader thread calls read(), but is blocked because no data is available yet.
The writer thread starts.
The writer thread sleeps for a second.
The writer thread calls write().
The writer thread exits.
The reader thread's read() call returns.
The reader thread records t2, etc., and exits.
Now, if you are seeing ~400ms for t2 - t1, this is probably not what is happening: it seems probable that the writer thread's call to sleep() must be happening before t1 is recorded. But the short answer is that it seems unclear what t2 - t1 is measuring. In particular, it seems incorrect to expect it to measure simply the time read() takes doing work (as opposed to waiting for the data to read).
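If the goal is to time only the transfer itself, one option is to start the clock after the first chunk arrives, so the measurement excludes the time spent blocked waiting for the writer. A minimal sketch against the question's readObject():
byte[] buff = new byte[BUFFER_SIZE];
int read = in.read(buff);              // blocks until the writer actually sends something
long t1 = System.currentTimeMillis();  // start timing after the first chunk
ByteArrayOutputStream bao = new ByteArrayOutputStream();
while (read != -1) {
    bao.write(buff, 0, read);
    read = in.read(buff);
}
long t2 = System.currentTimeMillis();
System.out.println("transfer after first chunk: " + (t2 - t1) + " ms");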

If you want to read and write with buffered IO, you can use BufferedInputStream and BufferedOutputStream to read and write respectively, and you could use a try-with-resources statement to close them. To write, something like
private static final int BUFFER_SIZE = 32 * 1024;

private static void writeObject(OutputStream out, Object obj)
        throws IOException {
    try (ObjectOutputStream ojs = new ObjectOutputStream(
            new BufferedOutputStream(out, BUFFER_SIZE))) {
        ojs.writeObject(obj);
    }
}
and to read like
private static Object readObject(InputStream in)
        throws IOException, ClassNotFoundException {
    try (ObjectInputStream ois = new ObjectInputStream(
            new BufferedInputStream(in, BUFFER_SIZE))) {
        return ois.readObject();
    }
}

When you are performing a micro-benchmark, I suggest you ignore all the results you get for at least the first 2 seconds of CPU time, to give your JVM a chance to warm up.
I would write this without using sleep.
For the purpose of your test, writing the object is irrelevant. You just need to write a new byte[size] and see how long it takes.
For testing short latencies, I would use System.nanoTime()
I would start by writing a small message first and looking at the round-trip time, i.e. the client sends a packet to the server and the server sends it back again (see the sketch below).
Last but not least, you will get better performance by using NIO, which was added in Java 1.4 (2002).
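For illustration, a minimal round-trip timer might look like the sketch below; the connected socket and the echoing server are assumed to already exist.
// Sketch: client sends a small packet and waits for the echo,
// timing the round trip with System.nanoTime().
byte[] msg = new byte[64];
OutputStream out = socket.getOutputStream();
InputStream in = socket.getInputStream();
long start = System.nanoTime();
out.write(msg);
out.flush();
int remaining = msg.length;
while (remaining > 0) {
    int n = in.read(msg, 0, remaining);   // server echoes the bytes back
    if (n < 0)
        break;
    remaining -= n;
}
long time = System.nanoTime() - start;
System.out.printf("round trip took %.1f us%n", time / 1e3);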
Here is some code I wrote earlier, EchoServerMain and EchoClientMain, which produces results like this.
On an E5-2650 v2 over loopback
Throughput was 2880.4 MB/s
Loop back echo latency was 5.8/6.2 9.6/19.4 23.2us for 50/90 99/99.9 99.99%tile
Note: these timings are the full round-trip times to send the packet to the server and back again, in microseconds.

I just copied a working example ...
No you didn't. You made up something completely different.
int len = arr.length;
for (int i = 0; i < arr.length; i += BUFFER_SIZE)
    out.write(arr, i, Math.min(len - i, BUFFER_SIZE));
This loop is complete nonsense. It can be replaced completely by
out.write(arr, 0, len);
However you're just adding latency and wasting space with all this.
private static void writeObject(ObjectOutputStream out, Object obj) throws IOException {
    long t1 = System.currentTimeMillis();
    out.writeObject(obj);
    out.close();
    long t2 = System.currentTimeMillis();
    System.out.println(t2 - t1);
}
There is no point in the ByteArrayOutputStream, and writing more bytes than are in it to the real ObjectOutputStream is simply invalid. There's no point in benchmarking operations that should never sanely take place.
Why you're closing the ObjectOutputStream is another mystery. And presumably you have similar code at the receiving side: replace it all with ObjectInputStream.readObject().
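The receiving side then reduces to something like this sketch (the timing prints are kept for symmetry with the question):
private static Object readObject(InputStream in)
        throws IOException, ClassNotFoundException {
    long t1 = System.currentTimeMillis();
    ObjectInputStream ois = new ObjectInputStream(in);
    Object o = ois.readObject();   // blocks until the whole object has arrived
    long t2 = System.currentTimeMillis();
    System.out.println(t2 - t1);
    return o;
}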

Related

unlock InputStream Java [duplicate]

I'm waiting for data on the stream in a rather strange way, because I think having readObject() throw an exception on every failed attempt is not a good idea. That's why I use a PushbackInputStream and read just one byte from that stream every 10 ms.
@Override
public void run() {
    try {
        ObjectOutputStream oos = new ObjectOutputStream(new BufferedOutputStream(
                clientSocket.getOutputStream()));
        oos.flush();
        ObjectInputStream ois = new ObjectInputStream(clientSocket.getInputStream());
        PushbackInputStream pis = new PushbackInputStream(clientSocket.getInputStream());
        while (true) {
            int tempByte = -1;
            if ((tempByte = pis.read()) == -1) {
                sleep(10);
            } else {
                pis.unread(tempByte);
                ArrayList<Object> arrList = (ArrayList<Object>) ois.readObject();
                int command = (Integer) arrList.get(0);
                if (command == CommandDescriptor.ADD_STRING.getCode()) {
                    String tempStr = (String) arrList.get(1);
                    boolean result = Server.collection.add(tempStr);
                    if (result) {
                        oos.writeInt(1);
                        oos.flush();
                    } else {
                        oos.writeInt(0);
                        oos.flush();
                    }
                } else if (command == CommandDescriptor.REMOVE_STRING.getCode()) {
                    ...
I'm doing something wrong with the streams... I get an exception:
Exception in thread "Thread-0" java.lang.ClassCastException: java.io.ObjectStreamClass cannot be cast to java.util.ArrayList
at com.rizhov.main.ClientHandler.run(ClientHandler.java:39)
At that part of code:
ArrayList<Object> arrList = (ArrayList<Object>) ois.readObject();
What am I doing wrong? Is there a better solution for waiting for data?
UPDATE:
ArrayList<Object> arrList = null;
for (;;) {
    try {
        arrList = ((ArrayList<Object>) ois.readObject());
        break;
    } catch (Exception e) {
    }
}
int command = (Integer) arrList.get(0);
There is no need for any of this peeking and sleeping. It is a complete waste of your time and energy and of CPU time and space.
All Java streams block while there is no data. They block for exactly the correct amount of time, too, not 10ms or whatever at a time, and without wasting CPU cycles in spinning as you are doing.
You don't have to do any of that yourself in any way shape or form. Just call readObject().
And never ignore an IOException.
You can only wrap a stream once. If you wrap it multiple times you are more likely to confuse yourself than to gain anything useful.
Once a stream has been closed it won't re-open, so reading a byte just to check whether the stream has finished, and discarding it, is not very useful. Sleeping when the operation would block anyway is not very useful either.
Instead of using Integer codes I would use enum values. This will be cleaner and you will be able to use a switch statement.
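A sketch of that approach; here CommandDescriptor is reduced to a plain enum (enums are Serializable by default, so the enum value itself can travel in the list instead of an integer code):
enum CommandDescriptor { ADD_STRING, REMOVE_STRING }

// server side, inside the read loop:
ArrayList<Object> arrList = (ArrayList<Object>) ois.readObject();  // blocks until data arrives
CommandDescriptor command = (CommandDescriptor) arrList.get(0);
switch (command) {
    case ADD_STRING:
        String tempStr = (String) arrList.get(1);
        oos.writeInt(Server.collection.add(tempStr) ? 1 : 0);
        oos.flush();
        break;
    case REMOVE_STRING:
        // ...
        break;
}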

How to write bytes by byte and display it continuously

I have an encrypted video file, and for decrypting it I have defined a buffer byte[] input = new byte[1024] that is written to the output file.
Here I want to write the first 1024 bytes to the output file and, at the same time, be able to play that output file without waiting for the whole file to be written, like video streaming.
When the first 1024 bytes are written, the video file should start playing while the rest of the file is still being written.
You'll have to set up your input stream and output stream depending on where you're getting the data and where you're saving/viewing it. Performance could also likely be improved with some buffering on the output. You should get the general idea.
import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingDeque;

public class DecryptionWotsit {
    private final BlockingDeque<Byte> queue = new LinkedBlockingDeque<Byte>();
    private final InputStream in;
    private final OutputStream out;

    public DecryptionWotsit(InputStream in, OutputStream out) {
        this.in = in;
        this.out = out;
    }

    public void go() {
        final Runnable decryptionTask = new Runnable() {
            @Override
            public void run() {
                try {
                    byte[] encrypted = new byte[1024];
                    byte[] decrypted = new byte[1024];
                    while (true) {
                        int encryptedBytes = in.read(encrypted);
                        // TODO: decrypt into decrypted, set decryptedBytes
                        int decryptedBytes = 0;
                        for (int i = 0; i < decryptedBytes; i++)
                            queue.addFirst(decrypted[i]);
                    }
                } catch (Exception e) {
                    // exception handling left for the reader
                    throw new RuntimeException(e);
                }
            }
        };
        final Runnable playTask = new Runnable() {
            @Override
            public void run() {
                try {
                    while (true) {
                        out.write(queue.takeLast());
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        };
        Executors.newSingleThreadExecutor().execute(decryptionTask);
        Executors.newSingleThreadExecutor().execute(playTask);
    }
}
You will have to do the writing in a separate thread.
Since writing to file is a lot slower than displaying video, expect the file-writing thread to be running long after you've quit watching the video. Unless (as I understand it) you intend to write only the first 1024 bytes to file.
If you intend to write the entire video to file, a single 1024 byte buffer will slow you down. You will either have to use a buffer that is a lot larger, or need a lot of these 1024-byte buffers. (I suppose the 1024 byte buffer size is a consequence of the decryption algorithm?)
Also, you may want to look at how much memory is available to the JVM, to make sure that you won't get an OutOfMemoryError halfway through. You can use the -Xms and -Xmx options to set the amount of memory available to the JVM.
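For example (the class name is made up):
java -Xms256m -Xmx1024m com.example.VideoWriter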
A simple way to write to a file that you also want to process is to open the file twice (or more times). In one thread you write to the file and update a counter saying how much you have written, e.g. a long protected by a synchronized block. In the reading thread(s) you get this value and read up to that point, repeatedly, until the writer has finished. A simple way to signal that the write has finished is to set the size to Long.MAX_VALUE, causing the readers to read until EOF. To stop the readers busy-waiting, you can have them wait() until the amount of data written is greater than the amount read.
This approach always uses a fixed amount of memory e.g. 16 - 128K, regardless of how far behind the readers are from the writer.
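A minimal sketch of that bookkeeping (the class and method names are my own):
// Sketch: the writer appends to the file and publishes how many bytes
// are safe to read; readers block instead of busy-waiting.
class WriteProgress {
    private long written = 0;

    synchronized void advance(long bytes) {
        written += bytes;
        notifyAll();                  // wake any readers waiting for more data
    }

    synchronized void finish() {
        written = Long.MAX_VALUE;     // signal: read until EOF
        notifyAll();
    }

    synchronized long awaitMoreThan(long alreadyRead) throws InterruptedException {
        while (written <= alreadyRead)
            wait();                   // blocks; no CPU spinning
        return written;
    }
}
The writer calls advance() after each write, and finish() when done; each reader opens its own FileInputStream and reads up to the value returned by awaitMoreThan(totalBytesReadSoFar).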

Fastest way to write an array of integers to a file in Java?

As the title says, I'm looking for the fastest possible way to write integer arrays to files. The arrays will vary in size, and will realistically contain anywhere between 2500 and 25 000 000 ints.
Here's the code I'm presently using:
DataOutputStream writer = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(filename)));
for (int d : data)
    writer.writeInt(d);
Given that DataOutputStream has a method for writing arrays of bytes, I've tried converting the int array to a byte array like this:
private static byte[] integersToBytes(int[] values) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream dos = new DataOutputStream(baos);
    for (int i = 0; i < values.length; ++i) {
        dos.writeInt(values[i]);
    }
    return baos.toByteArray();
}
and like this:
private static byte[] integersToBytes2(int[] src) {
    int srcLength = src.length;
    byte[] dst = new byte[srcLength << 2];
    for (int i = 0; i < srcLength; i++) {
        int x = src[i];
        int j = i << 2;
        // note: this stores the low byte first (little-endian),
        // whereas DataOutputStream.writeInt writes big-endian
        dst[j++] = (byte) ((x >>> 0) & 0xff);
        dst[j++] = (byte) ((x >>> 8) & 0xff);
        dst[j++] = (byte) ((x >>> 16) & 0xff);
        dst[j++] = (byte) ((x >>> 24) & 0xff);
    }
    return dst;
}
Both seem to give a minor speed increase, about 5%. I've not tested them rigorously enough to confirm that.
Are there any techniques that will speed up this file write operation, or relevant guides to best practice for Java IO write performance?
I had a look at three options:
Using DataOutputStream;
Using ObjectOutputStream (for Serializable objects, which int[] is); and
Using FileChannel.
The results are
DataOutputStream wrote 1,000,000 ints in 3,159.716 ms
ObjectOutputStream wrote 1,000,000 ints in 295.602 ms
FileChannel wrote 1,000,000 ints in 110.094 ms
So the NIO version is the fastest. It also has the advantage of allowing edits, meaning you can easily change one int, whereas the ObjectOutputStream approach would require reading the entire array, modifying it, and writing it out to file.
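For instance, changing the i-th int in place might look like this sketch (index and newValue are hypothetical, and the flat int layout written by the FileChannel version below is assumed):
// Sketch: overwrite a single int without rewriting the rest of the file.
try (RandomAccessFile raf = new RandomAccessFile("fc.out", "rw")) {
    FileChannel ch = raf.getChannel();
    ByteBuffer four = ByteBuffer.allocate(4);
    four.putInt(newValue);
    four.flip();
    ch.write(four, 4L * index);   // absolute write at byte offset 4 * index
}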
Code follows:
private static final int NUM_INTS = 1000000;

interface IntWriter {
    void write(int[] ints);
}

public static void main(String[] args) {
    int[] ints = new int[NUM_INTS];
    Random r = new Random();
    for (int i = 0; i < NUM_INTS; i++) {
        ints[i] = r.nextInt();
    }
    time("DataOutputStream", new IntWriter() {
        public void write(int[] ints) {
            storeDO(ints);
        }
    }, ints);
    time("ObjectOutputStream", new IntWriter() {
        public void write(int[] ints) {
            storeOO(ints);
        }
    }, ints);
    time("FileChannel", new IntWriter() {
        public void write(int[] ints) {
            storeFC(ints);
        }
    }, ints);
}

private static void time(String name, IntWriter writer, int[] ints) {
    long start = System.nanoTime();
    writer.write(ints);
    long end = System.nanoTime();
    double ms = (end - start) / 1000000d;
    System.out.printf("%s wrote %,d ints in %,.3f ms%n", name, ints.length, ms);
}
private static void storeOO(int[] ints) {
    ObjectOutputStream out = null;
    try {
        out = new ObjectOutputStream(new FileOutputStream("object.out"));
        out.writeObject(ints);
    } catch (IOException e) {
        throw new RuntimeException(e);
    } finally {
        safeClose(out);
    }
}
private static void storeDO(int[] ints) {
    DataOutputStream out = null;
    try {
        out = new DataOutputStream(new FileOutputStream("data.out"));
        for (int anInt : ints) {
            // writeInt writes all four bytes; write(int) would write only the low byte
            out.writeInt(anInt);
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    } finally {
        safeClose(out);
    }
}
private static void storeFC(int[] ints) {
    RandomAccessFile raf = null;
    try {
        // map() needs a channel that is open for both reading and writing, so use
        // RandomAccessFile rather than FileOutputStream (whose channel is write-only)
        raf = new RandomAccessFile("fc.out", "rw");
        FileChannel file = raf.getChannel();
        ByteBuffer buf = file.map(FileChannel.MapMode.READ_WRITE, 0, 4 * ints.length);
        for (int i : ints) {
            buf.putInt(i);
        }
        file.close();
    } catch (IOException e) {
        throw new RuntimeException(e);
    } finally {
        if (raf != null) {
            try {
                raf.close();
            } catch (IOException e) {
                // do nothing
            }
        }
    }
}
private static void safeClose(OutputStream out) {
    try {
        if (out != null) {
            out.close();
        }
    } catch (IOException e) {
        // do nothing
    }
}
I would use FileChannel from the nio package and ByteBuffer. This approach seems (on my computer) to give 2 to 4 times better write performance:
Output from program:
normal time: 2555
faster time: 765
This is the program:
public class Test {
    public static void main(String[] args) throws IOException {
        // create a test buffer
        ByteBuffer buffer = createBuffer();
        long start = System.currentTimeMillis();
        {
            // do the first test (the normal way of writing files)
            normalToFile(new File("first"), buffer.asIntBuffer());
        }
        long middle = System.currentTimeMillis();
        {
            // use the faster nio stuff
            fasterToFile(new File("second"), buffer);
        }
        long done = System.currentTimeMillis();
        // print the result
        System.out.println("normal time: " + (middle - start));
        System.out.println("faster time: " + (done - middle));
    }

    private static void fasterToFile(File file, ByteBuffer buffer)
            throws IOException {
        FileChannel fc = null;
        try {
            fc = new FileOutputStream(file).getChannel();
            fc.write(buffer);
        } finally {
            if (fc != null)
                fc.close();
            buffer.rewind();
        }
    }

    private static void normalToFile(File file, IntBuffer buffer)
            throws IOException {
        DataOutputStream writer = null;
        try {
            writer = new DataOutputStream(new BufferedOutputStream(
                    new FileOutputStream(file)));
            while (buffer.hasRemaining())
                writer.writeInt(buffer.get());
        } finally {
            if (writer != null)
                writer.close();
            buffer.rewind();
        }
    }

    private static ByteBuffer createBuffer() {
        ByteBuffer buffer = ByteBuffer.allocate(4 * 25000000);
        Random r = new Random(1);
        while (buffer.hasRemaining())
            buffer.putInt(r.nextInt());
        buffer.rewind();
        return buffer;
    }
}
Benchmarks should be repeated every once in a while, shouldn't they?
:) After fixing some bugs and adding my own writing variant, here are
the results I get when running the benchmark on an ASUS ZenBook UX305
running Windows 10 (times given in seconds):
Running tests... 0 1 2
Buffered DataOutputStream 8,14 8,46 8,30
FileChannel alt2 1,55 1,18 1,12
ObjectOutputStream 9,60 10,41 11,68
FileChannel 1,49 1,20 1,21
FileChannel alt 5,49 4,58 4,66
And here are the results running on the same computer but with Arch
Linux and the order of the write methods switched:
Running tests... 0 1 2
Buffered DataOutputStream 31,16 6,29 7,26
FileChannel 1,07 0,83 0,82
FileChannel alt2 1,25 1,71 1,42
ObjectOutputStream 3,47 5,39 4,40
FileChannel alt 2,70 3,27 3,46
Each test wrote an 800 MB file. The unbuffered DataOutputStream took way too long so I excluded it from the benchmark.
As seen, writing using a file channel still beats the crap out of all
other methods, but it matters a lot whether the byte buffer is
memory-mapped or not. Without memory-mapping the file channel write
took 3-5 seconds:
var bb = ByteBuffer.allocate(4 * ints.length);
for (int i : ints)
    bb.putInt(i);
bb.flip();
try (var fc = new FileOutputStream("fcalt.out").getChannel()) {
    fc.write(bb);
}
With memory-mapping, the time was reduced to between 0.8 to 1.5
seconds:
try (var fc = new RandomAccessFile("fcalt2.out", "rw").getChannel()) {
    var bb = fc.map(READ_WRITE, 0, 4 * ints.length);
    bb.asIntBuffer().put(ints);
}
But note that the results are order-dependent, especially so on Linux. It appears that the memory-mapped method doesn't write the data in full but rather offloads the job to the OS and returns before it is completed. Whether that behaviour is desirable or not depends on the situation.
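If you need the bytes to be on the device before the method returns, MappedByteBuffer has a force() method that flushes the mapped region, at extra cost. A sketch based on the fcalt2 variant above:
try (var fc = new RandomAccessFile("fcalt2.out", "rw").getChannel()) {
    var bb = fc.map(READ_WRITE, 0, 4 * ints.length);
    bb.asIntBuffer().put(ints);
    bb.force();   // flush the mapped region to the storage device
}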
Memory-mapping can also lead to OutOfMemory problems, so it is not always the right tool to use; see Prevent OutOfMemory when using java.nio.MappedByteBuffer.
Here is my version of the benchmark code:
https://gist.github.com/bjourne/53b7eabc6edea27ffb042e7816b7830b
I think you should consider using file channels (the java.nio library) instead of plain streams (java.io). A good starting point is this interesting discussion: Java NIO FileChannel versus FileOutputstream performance / usefulness
and the relevant comments below.
Cheers!
The main improvement you can have for writing an int[] is to either:
increase the buffer size. The default is right for most streams, but file access can be faster with a larger buffer. This could yield a 10-20% improvement.
use NIO and a direct buffer. This allows you to write 32-bit values without converting them to bytes (see the sketch after this list). This may yield a 5% improvement.
BTW: You should be able to write at least 10 million int values per second. With disk caching you can increase this to 200 million per second.
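A sketch of the direct-buffer idea (the buffer size is an arbitrary choice; tune it for your hardware):
// Sketch: write an int[] through a direct ByteBuffer, avoiding a
// byte-by-byte conversion on the Java side.
private static void writeInts(int[] data, FileChannel ch) throws IOException {
    ByteBuffer buf = ByteBuffer.allocateDirect(256 * 1024);
    IntBuffer view = buf.asIntBuffer();
    int pos = 0;
    while (pos < data.length) {
        int n = Math.min(view.capacity(), data.length - pos);
        view.clear();
        view.put(data, pos, n);       // bulk copy of 32-bit values
        buf.position(0);
        buf.limit(n * 4);
        while (buf.hasRemaining())
            ch.write(buf);
        pos += n;
    }
}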
An int[] is Serializable, so can't you just use writeObject(data) on an ObjectOutputStream? That's definitely going to be faster than individual writeInt() calls (see the sketch below).
If you have other requirements on the output data format than retrieval into an int[], that's a different question.
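i.e. something like this sketch, reusing filename and data from the question:
try (ObjectOutputStream oos = new ObjectOutputStream(
        new BufferedOutputStream(new FileOutputStream(filename)))) {
    oos.writeObject(data);   // one call for the whole int[]
}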

Piping a string into Java Runtime.exec() as input

I have a string that I need to pipe into an external program, then read the output back. I understand how to read the output back, but how do I pipe this string as input? Thanks!
Careful that you don't create a deadlock. Reading/writing to the process in the same thread can be problematic if the process is writing data you are not reading and meanwhile you are writing data it is not reading.
I tend to use a little pattern like this to get the IO going in different threads:
import java.io.InputStream;
import java.io.OutputStream;

public final class Pipe implements Runnable {
    private final InputStream in;
    private final OutputStream out;

    public Pipe(InputStream in, OutputStream out) {
        this.in = in;
        this.out = out;
    }

    public static void pipe(Process process) {
        pipe(process.getInputStream(), System.out);
        pipe(process.getErrorStream(), System.err);
        pipe(System.in, process.getOutputStream());
    }

    public static void pipe(InputStream in, OutputStream out) {
        final Thread thread = new Thread(new Pipe(in, out));
        thread.setDaemon(true);
        thread.start();
    }

    public void run() {
        try {
            int i = -1;
            byte[] buf = new byte[1024];
            while ((i = in.read(buf)) != -1) {
                out.write(buf, 0, i);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
This may or may not apply to what you want to do: maybe you're dealing with an established set of input that is guaranteed to show up after certain expected output (or vice versa, or several iterations of that), i.e. some synchronous block-and-step kind of interaction.
But if you are simply 'watching' for something that might show up, then this is a good start in the right direction. You would pass in some different streams and work in some java.util.concurrent await(long, TimeUnit)-style code to wait for the response (see the sketch below). Java IO blocks basically forever on read() ops, so separating yourself from those threads will let you give up after a certain time if you don't get the expected response from the external process.
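A sketch of that watch-with-a-timeout idea; the "READY" marker is a made-up example, and the surrounding method is assumed to declare InterruptedException:
// Sketch: scan the process output on a daemon thread and release a latch
// when an expected marker line appears; the caller gives up after 5 seconds.
final CountDownLatch seen = new CountDownLatch(1);
final BufferedReader r = new BufferedReader(
        new InputStreamReader(process.getInputStream()));
Thread watcher = new Thread(new Runnable() {
    public void run() {
        try {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.contains("READY"))
                    seen.countDown();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
watcher.setDaemon(true);
watcher.start();
if (!seen.await(5, TimeUnit.SECONDS)) {
    // no response within the timeout; give up or kill the process
}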
It goes like this (uncompiled and untested):
Process p = Runtime.getRuntime().exec(...);
Writer w = new java.io.OutputStreamWriter(p.getOutputStream());
w.append(yourString);
w.flush();
// read the input back

Capturing large amounts of output from Apache Commons-Exec

I am writing a video application in Java by executing ffmpeg and capturing its output to standard output. I decided to use Apache Commons Exec instead of Java's Runtime, because it seems better. However, I am having a difficult time capturing all of the output.
I thought using pipes would be the way to go, because they are a standard way of inter-process communication. However, my setup using PipedInputStream and PipedOutputStream is wrong. It seems to work, but only for the first 1024 bytes of the stream, which curiously happens to be the value of PipedInputStream.PIPE_SIZE.
I have no love affair with using pipes, but I want to avoid disk I/O (if possible), because of the speed and volume of the data (a 1m 20s video at 512x384 resolution produces 690 MB of piped data).
Thoughts on the best solution for handling large amounts of data coming from a pipe? My code for the two classes is below. (Yes, sleep is bad. Thoughts on that? wait() and notifyAll()?)
WriteFrames.java
public class WriteFrames {
    public static void main(String[] args) {
        String commandName = "ffmpeg";
        CommandLine commandLine = new CommandLine(commandName);
        File filename = new File(args[0]);
        String[] options = new String[] {
                "-i",
                filename.getAbsolutePath(),
                "-an",
                "-f",
                "yuv4mpegpipe",
                "-"};
        for (String s : options) {
            commandLine.addArgument(s);
        }
        PipedOutputStream output = new PipedOutputStream();
        PumpStreamHandler streamHandler = new PumpStreamHandler(output, System.err);
        DefaultExecutor executor = new DefaultExecutor();
        try {
            DataInputStream is = new DataInputStream(new PipedInputStream(output));
            YUV4MPEGPipeParser p = new YUV4MPEGPipeParser(is);
            p.start();
            executor.setStreamHandler(streamHandler);
            executor.execute(commandLine);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
YUV4MPEGPipeParser.java
public class YUV4MPEGPipeParser extends Thread {
    private InputStream is;
    int width, height;

    public YUV4MPEGPipeParser(InputStream is) {
        this.is = is;
    }

    public void run() {
        try {
            while (is.available() == 0) {
                Thread.sleep(100);
            }
            while (is.available() != 0) {
                // do stuff.... like write out YUV frames
            }
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
The problem is in the run method of the YUV4MPEGPipeParser class. There are two successive loops. The second loop terminates immediately if no data is currently available on the stream (e.g. all input so far has been processed by the parser, and ffmpeg or the stream pump was not fast enough to serve new data for it -> available() == 0 -> loop is terminated -> pump thread finishes).
Just get rid of these two loops and the sleep, and perform a simple blocking read() instead of checking whether any data is available. There is also probably no need for wait()/notify() or even sleep(), because the parser code already runs on a separate thread.
You can rewrite the run() method like this:
public class YUV4MPEGPipeParser extends Thread {
    ...
    // optimal size of buffer for reading from pipe stream :-)
    private static final int BUFSIZE = PipedInputStream.PIPE_SIZE;

    public void run() {
        try {
            byte buffer[] = new byte[BUFSIZE];
            int len = 0;
            while ((len = is.read(buffer, 0, BUFSIZE)) != -1) {
                // we have valid data available
                // in the first 'len' bytes of the 'buffer' array.
                // do stuff.... like write out YUV frames
            }
        } catch ...
    }
}
