Recently, I wrote a simple client-server program for file transfer over standard TCP sockets. The average throughput was around 2.2 Mbps over a WiFi channel. My question is:
Is it possible to transfer a large file (say 5 GB) over multiple data IO streams so that each stream could transfer several parts of the same file in a parallel manner (different threads could be used for this purpose)? These file parts could be re-assembled at the receiving end.
I tried to split a small file and transferred it over a DataOutputStream. The first segment works fine, but I don't know how to read a FileInputStream in a selective manner (I also tried the mark() and reset() methods for selective reading, but without success).
Here is my code (for testing purposes, I have redirected the output to FileOutputStreams):
public static void main(String[] args) {
    final File myFile = new File("/home/evinish/Documents/Android/testPicture.jpg");
    long N = myFile.length();
    try {
        FileInputStream in = new FileInputStream(myFile);
        FileOutputStream f0 = new FileOutputStream("/home/evinish/Documents/Android/File1.jpg");
        FileOutputStream f1 = new FileOutputStream("/home/evinish/Documents/Android/File2.jpg");
        FileOutputStream f2 = new FileOutputStream("/home/evinish/Documents/Android/File3.jpg");
        byte[] buffer = new byte[4096];
        int noofbytes;
        long acc = 0;
        // copy roughly the first N/3 bytes into the first segment
        while (acc <= (N / 3)) {
            noofbytes = in.read(buffer, 0, 4096);
            if (noofbytes == -1) {
                break;
            }
            f0.write(buffer, 0, noofbytes);
            acc += noofbytes; // accumulate the bytes actually read
        }
        f0.close();
        // f1 and f2 are intended for the remaining two segments
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I got the first segment of my file (this can be copied to a DataOutputStream in one thread). Can anyone suggest how to read the remaining part of the file (after the first N/3 bytes) in segments of N/3, so that three streams could be used in three threads for concurrent operation?
Here is the code to merge the file segments at the receiving end:
package com.mergefilespackage;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.Closeable;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
public class MergeFiles {
/**
* @param args
*/
public static void main(String[] args) throws Exception{
IOCopier.joinFiles(new File("/home/evinish/Documents/Android/File1.jpg"), new File[] {
new File("/home/evinish/Documents/Android/File2.jpg"), new File("/home/evinish/Documents/Android/File3.jpg")});
}
}
class IOCopier {
public static void joinFiles(File destination, File[] sources)
throws IOException {
OutputStream output = null;
try {
output = createAppendableStream(destination);
for (File source : sources) {
appendFile(output, source);
}
} finally {
IOUtils.closeQuietly(output);
}
}
private static BufferedOutputStream createAppendableStream(File destination)
throws FileNotFoundException {
return new BufferedOutputStream(new FileOutputStream(destination, true));
}
private static void appendFile(OutputStream output, File source)
throws IOException {
InputStream input = null;
try {
input = new BufferedInputStream(new FileInputStream(source));
IOUtils.copy(input, output);
} finally {
IOUtils.closeQuietly(input);
}
}
}
class IOUtils {
private static final int BUFFER_SIZE = 1024 * 4;
public static long copy(InputStream input, OutputStream output)
throws IOException {
byte[] buffer = new byte[BUFFER_SIZE];
long count = 0;
int n = 0;
while (-1 != (n = input.read(buffer))) {
output.write(buffer, 0, n);
count += n;
}
return count;
}
public static void closeQuietly(Closeable output) {
try {
if (output != null) {
output.close();
}
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
}
Any help would be highly appreciated! Thanks in advance!
You can't get any more speed over the same link by adding more sockets. Each socket sends a certain number of packets, each of a certain size. As we double the number of sockets, the number of packets per second per socket is halved, and then drops even more due to collisions, overhead, and contention. Packets start to bump, jumble, and otherwise panic. The OS cannot handle the pandemonium of lost ACKs, and the WiFi card struggles to transmit at such a rate; it loses its low-level ACKs as well. As packets get lost, a desperate TCP stack dials down the transmit rate. Even if the link could later recover thanks to a signal improvement, the connection is now stuck at the lower speed due to silly window syndrome or another form of TCP deadlock.
Whatever extra speed WiFi can get out of wider carrier bands, MIMO, or multiple paths has already been realized as gains, even with one socket. You can't take it any further.
Now, wait. We're quite below WiFi speed, aren't we? Of course, we need to use buffering!
Make sure you wrap your socket's getInputStream()/getOutputStream() in buffered streams (BufferedInputStream/BufferedOutputStream for binary data such as a file, or BufferedReader/BufferedWriter for text) and write to/read from those buffers. Your speed may increase somewhat.
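For a binary file transfer, a minimal sketch of the buffered approach might look like this (serverHost and serverPort are placeholders; myFile is the file from the question):
// Sketch (not the poster's exact code): buffered copy of myFile over a single socket.
static void sendBuffered(File myFile, String serverHost, int serverPort) throws IOException {
    Socket socket = new Socket(serverHost, serverPort);
    InputStream fileIn = new BufferedInputStream(new FileInputStream(myFile));
    OutputStream netOut = new BufferedOutputStream(socket.getOutputStream(), 64 * 1024);
    byte[] buf = new byte[8192];
    int n;
    while ((n = fileIn.read(buf)) != -1) {
        netOut.write(buf, 0, n);   // write only the bytes actually read
    }
    netOut.flush();
    fileIn.close();
    socket.close();
}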
You could read the file's bytes from the FileInputStream and split them into chunks of 10 KB (roughly 10,000 bytes) each.
Then send these parts through the streams in order.
On the server you can put the arrays together again and read the file from this giant byte array.
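A rough sketch of that idea, assuming each chunk is sent together with its offset and length so the receiver can reassemble the parts in any order (the method and variable names are made up for illustration):
// Sketch: split the file into 10 KB chunks; each chunk could be handed to a different stream/thread.
static void sendInChunks(File file, DataOutputStream out) throws IOException {
    RandomAccessFile raf = new RandomAccessFile(file, "r");
    long length = raf.length();
    int chunkSize = 10 * 1024;
    for (long offset = 0; offset < length; offset += chunkSize) {
        int len = (int) Math.min(chunkSize, length - offset);
        byte[] chunk = new byte[len];
        raf.seek(offset);
        raf.readFully(chunk);
        out.writeLong(offset);   // where this chunk belongs in the file
        out.writeInt(len);       // how many bytes follow
        out.write(chunk);
    }
    out.flush();
    raf.close();
}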
Related
I am sending a protobuf from C++ to Java via a raw socket; the C++ program is the client and the Java program is the server. The C++ program generates packets roughly every 1 ms, which are sent to the Java program.
If I run the program normally, I see that only about half of the packets are received.
If I set a breakpoint in the C++ program and then run the client and the server, all the packets are received.
How do I ensure that all packets are received without setting a breakpoint? Can I introduce a delay?
All the packets have sizes of up to a maximum of 15 bytes.
By default TCP sockets use the Nagle algorithm, which delays transmission of the next "unfilled" fragment in order to reduce congestion. Your packet size is small enough and the time between packets short enough that the Nagle algorithm will have an effect on your transmissions.
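If per-message latency matters more than throughput, Nagle can be disabled on the sending socket; on the Java side this is a one-liner (the C++ side has the equivalent TCP_NODELAY setsockopt), where socket stands for the connected socket:
// Disable Nagle so small writes are pushed out immediately instead of being coalesced.
socket.setTcpNoDelay(true);
Note that this only changes when bytes hit the wire; it does not by itself give you message boundaries.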
As already discussed in the comments, what you are trying to do won't work in a reliable way. This is also described in the Protobuf documentation:
If you want to write multiple messages to a single file or stream, it
is up to you to keep track of where one message ends and the next
begins. The Protocol Buffer wire format is not self-delimiting, so
protocol buffer parsers cannot determine where a message ends on their
own. The easiest way to solve this problem is to write the size of
each message before you write the message itself. When you read the
messages back in, you read the size, then read the bytes into a
separate buffer, then parse from that buffer. (If you want to avoid
copying bytes to a separate buffer, check out the CodedInputStream
class (in both C++ and Java) which can be told to limit reads to a
certain number of bytes.)
The emphasized part ("write the size of each message before you write the message itself") is where your code isn't correct.
On the write side you should write:
1. the protobuf's length, in some format that both sender and receiver understand (selecting the proper format matters especially when transporting between systems whose endianness differs);
2. the protobuf itself.
On the receiving end you need to:
1. perform a read of the fixed, known size of the length field;
2. perform a read of the length learned in step 1; this read will retrieve the protobuf.
There's example code here on SO in this question: Sending struct via Socket using JAVA and C++
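For completeness, a minimal sketch of that length-prefixed framing on the Java (receiving) side, assuming the C++ sender writes a 4-byte big-endian length before every message (socket stands for the accepted client socket, and PacketData for your generated protobuf message class):
// Sketch: read a 4-byte length prefix, then exactly that many bytes, then parse the protobuf.
static void receiveLoop(Socket socket) throws IOException {
    DataInputStream din = new DataInputStream(socket.getInputStream());
    while (true) {
        int len = din.readInt();   // big-endian length written by the sender
        byte[] msg = new byte[len];
        din.readFully(msg);        // blocks until the whole message has arrived
        PacketData packetData = PacketData.parseFrom(msg);
        // ... use packetData ...
    }
}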
@fvu: This is my code which I am trying:
import Visualization.DataSetProtos.PacketData; // protos import
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;
import javax.swing.JFrame;
import javax.swing.JScrollBar;
import javax.swing.JScrollPane;
class WorkerThread extends Thread {
Socket service;
static DynamicData demo;
static int size;
static int times;
static byte[] buffer;
WorkerThread(Socket service)
{
this.service = service;
buffer = new byte[500];
size = 1;
times = 0;
}
static void Print(PacketData packetData)
{
System.out.print("Packet Number: " + (++times));
System.out.print(" DataSet Size: " + packetData.getLength() + "\n");
}
static void Print(PacketHeader packetHeader)
{
System.out.print("Packet Number: " + (++times));
System.out.print(" DataSet Size: " + packetHeader.getLength() + "\n");
}
public void run() {
boolean flag=true; //you can change this flag's condition, to test if the client disconnects
if(demo == null)
{
demo = new DynamicData("GridMate Data Visualization");
demo.pack();
RefineryUtilities.centerFrameOnScreen(demo);
//demo.setVisible(true);
}
try
{
while (flag)
{
InputStream inputStream = service.getInputStream();
int read;
read = inputStream.read(buffer);
byte[] readBuffer = new byte[read];
readBuffer = Arrays.copyOfRange(buffer, 0, read);
PacketData packetData = PacketData.parseFrom(readBuffer);
Print(packetData);
}
service.close();
}
catch(Exception e)
{
e.printStackTrace();
}
}
}
public class Test
{
Test()
{
server = null;
client= null;
}
public static void main(final String[] args) {
int i =0;
try
{
server = new ServerSocket(25715);
System.out.println("Server setup and waiting for client connection ...");
while(true)
{
client = server.accept();
WorkerThread wt = new WorkerThread(client);
wt.start();
i++;
}
}
catch(IOException e)
{ System.out.println("IO Error in streams " + e);
e.printStackTrace();
}
}
public void finalize()
{
try
{
server.close();
client.close();
}
catch(Exception e)
{
e.printStackTrace();
}
}
static ServerSocket server;
static Socket client;
}
I have to send buffers of dynamic size over the socket stream.
It works correctly, but when I try to send multiple buffers with a size bigger than
int my_buffer_size = 18 * 1024; (this is an indicative value)
I get the following error (on some write):
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
My code is very simple:
For example If I want to send a big file I read a file stream with
byte[] bs = new byte[my_buffer_size];
while (... ){
fileInputStream.read(bs);
byte[] myBufferToSend = new byte[sizeBuffer];
DataOutputStream out = new DataOutputStream(cclient.getOutputStream());
out.writeInt(myBufferToSend.length);
out.write(myBufferToSend);
out.flush();
}
(The file is just a test; the buffer size can be variable.)
the SendBufferSize is 146988.
Is there a way to fix the broken pipe error? I have read around but haven't managed to solve the problem.
Thank you
any help is appreciated
I use the classic ServerSocket serverSocket;
and Socket cclient
'Broken pipe' means that you've written data to a connection that has already been closed by the other end.
Ergo the problem lies at the other end, not in this code. Possibly the other end doesn't really understand your length-word protocol for example, or doesn't implement it correctly.
If it's anything like this code it won't, because you're ignoring the result returned by read() and assuming that it fills the buffer. It isn't specified to do that, only to transfer at least one byte.
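For illustration, a receive loop that honours what read() actually returns might look like this (a sketch, assuming the receiving end reads the int length your sender writes and then exactly that many bytes; socket stands for the receiver's connected socket):
// Sketch: keep reading until the announced number of bytes has actually arrived.
static byte[] receiveOneBuffer(Socket socket) throws IOException {
    DataInputStream in = new DataInputStream(socket.getInputStream());
    int length = in.readInt();
    byte[] data = new byte[length];
    int off = 0;
    while (off < length) {
        int n = in.read(data, off, length - off);
        if (n == -1) {
            throw new EOFException("peer closed after " + off + " of " + length + " bytes");
        }
        off += n;
    }
    // equivalently: in.readFully(data);
    return data;
}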
In general, receiving huge blocks is not supported by DataInputStream, because the read method just delegates to the underlying socket input stream, and that stream does not complain about not having read everything. E.g. in Oracle Java 8 you get some 2^16 bytes and the rest is ignored. So when you close the socket after DataInputStream.read has returned, the sender node observes a "broken pipe" while still trying to send the rest of the huge block. The solution is a windowed read. Below is a DataInputStream subclass which does precisely this.
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
public class HugeDataInputStream extends DataInputStream
{
int maxBlockLength;
public HugeDataInputStream(InputStream in)
{
this(in, 0x8000);
}
public HugeDataInputStream(InputStream in, int maxBlockLength)
{
super(in);
this.maxBlockLength = maxBlockLength;
}
public int readHuge(byte[] block) throws IOException
{
int n = block.length;
if (n > maxBlockLength)
{
int cr = 0;
while (cr < n)
{
// read in windows of at most maxBlockLength bytes
int r = super.read(block, cr, Math.min(n - cr, maxBlockLength));
if (r == -1)
{
// stream ended before the block was complete
return cr == 0 ? -1 : cr;
}
cr += r;
}
return cr;
}
else
{
return super.read(block);
}
}
}
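Usage would then be something like this (sketch; the length prefix and the socket setup are assumed to match whatever the sender does):
// Read one huge, length-prefixed block with the windowed read.
HugeDataInputStream in = new HugeDataInputStream(socket.getInputStream());
int length = in.readInt();            // length prefix written by the sender
byte[] block = new byte[length];
in.readHuge(block);                   // windowed read of the whole block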
I have just started learning Java. I modified the client-side code of a server/client communication program by creating two threads on the client side: the main thread for receiving the user's input, and inputThread for receiving the server's response. I am sure that the server has sent the response to the client; however, no response message is obtained at the client.
Here is my code. Can anyone help me to figure it out? Thanks
package clientnio;
import java.net.*;
import java.nio.*;
import java.io.*;
import java.nio.channels.*;
import java.util.Scanner;
public class ClientNIO {
public static int bufferLen = 50;
public static SocketChannel client;
public static ByteBuffer writeBuffer;
public static ByteBuffer readBuffer;
public static void main(String[] args) {
writeBuffer = ByteBuffer.allocate(bufferLen);
readBuffer = ByteBuffer.allocate(bufferLen);
try {
SocketAddress address = new InetSocketAddress("localhost",5505);
System.out.println("Local address: "+ address);
client=SocketChannel.open(address);
client.configureBlocking(false);
//readBuffer.flip();
new inputThread(readBuffer);
/*
String a="asdasdasdasddffasfas";
writeBuffer.put(a.getBytes());
writeBuffer.clear();
int d=client.write(writeBuffer);
writeBuffer.flip();
*/
while (true) {
InputStream inStream = System.in;
Scanner scan = new Scanner(inStream);
if (scan.hasNext()==true) {
String inputLine = scan.nextLine();
writeBuffer.put(inputLine.getBytes());
//writeBuffer.clear();
System.out.println(writeBuffer.remaining());
client.write(writeBuffer);
System.out.println("Sending data: "+new String(writeBuffer.array()));
writeBuffer.flip();
Thread.sleep(300);
}
}
}
catch(Exception e) {
System.out.println(e);
}
}
}
class inputThread extends Thread {
private ByteBuffer readBuffer;
public inputThread(ByteBuffer readBuffer1) {
System.out.println("Receiving thread starts.");
this.readBuffer = readBuffer1;
start();
}
@Override
public void run() {
try {
while (true) {
readBuffer.flip();
int i=ClientNIO.client.read(readBuffer);
if(i>0) {
byte[] b=readBuffer.array();
System.out.println("Receiving data: "+new String(b));
//client.close();
//System.out.println("Connection closed.");
//break;
}
Thread.sleep(100);
}
}
catch (Exception e) {
System.out.println(e);
}
}
}
Disclaimer: I'm not an active user of Java. (I only used it in school.)
Advice: I think it will greatly simplify the debugging process if you use blocking mode, at least until your code example is working correctly. (Currently your code does not seem to benefit from the non-blocking mode.)
I have identified two issues, culminating into four possible lines of code that may require changing:
When a ByteBuffer allocates its backing array, it sets itself ready to write by setting position to zero and limit to the capacity of that array. Your two uses of ByteBuffer.flip() (in the writing loop and the reading loop respectively) seem to be contrary to the convention.
Calling the ByteBuffer.array() method always returns the whole backing array, thus it always has size bufferLen. Because of this, a String constructed from the full-size array may contain junk from a previous transmission.
Typically, the array needs to be trimmed to the transmission size, and the conversion between a String and a byte array must use the same encoding as the server.
My suggested changes for the first issue (note: I don't know how to fix the array trimming and encoding issue):
writeBuffer.put(inputLine.getBytes());
writeBuffer.flip(); // <--here
client.write(writeBuffer);
...
writeBuffer.clear(); // <-- should be clear() instead of flip()
Thread.sleep(300);
// readBuffer.flip(); // <-- remove this line
int i=ClientNIO.client.read(readBuffer);
if(i>0) {
readBuffer.flip(); // <-- move it here
byte[] b=readBuffer.array();
System.out.println("Receiving data: "+new String(b));
...
}
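For the trimming and encoding issue left open above, one possible approach (assuming Java 7+ and that both ends agree on UTF-8) is to decode only the bytes between position 0 and the buffer's limit after the flip:
readBuffer.flip();
// decode only the bytes actually received, with an explicit charset
String received = new String(readBuffer.array(), 0, readBuffer.limit(),
        java.nio.charset.StandardCharsets.UTF_8);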
References
http://docs.oracle.com/javase/1.4.2/docs/api/java/nio/ByteBuffer.html
http://docs.oracle.com/javase/1.4.2/docs/api/java/nio/channels/SocketChannel.html
Socketchannel always null
http://www.exampledepot.com/egs/java.nio.charset/ConvertChar.html
Calling flip() on a buffer prior to reading into it is wrong. Don't do that. You need to flip it prior to writing from it, or getting from it, and compact() afterwards.
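In other words, the usual cycle is read, then flip, then get, then compact; roughly like this (a sketch, where channel stands for the SocketChannel, ClientNIO.client in the question):
int n = channel.read(readBuffer);      // buffer is in "fill" mode here; no flip before this
if (n > 0) {
    readBuffer.flip();                 // switch to "drain" mode
    byte[] b = new byte[readBuffer.remaining()];
    readBuffer.get(b);                 // consume exactly the bytes received
    System.out.println("Receiving data: " + new String(b));
    readBuffer.compact();              // back to "fill" mode, keeping any unread bytes
}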
Just assume there is a program which takes inputs from the standard input.
For example:
cin>>id;
What I want to figure out is how to execute the process and give some input to its standard input. Getting the output of the process is not an issue for me; that works properly. The question is how to feed input to such processes using the java.lang.Process class.
If there are any third-party libraries (like Apache Commons) that help with this, please mention them as well.
Thanks in advance!
Use Process.getOutputStream() and write() to it. It's a bit tricky since you use the output stream to input data to the process, but the name reflects the interface that is returned (from your app's point of view it's output because you are writing to it).
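A minimal sketch (the command and the input value are only illustrative):
// Sketch: start a child process and feed its standard input from Java.
Process p = new ProcessBuilder("cat").start();   // any program that reads stdin
OutputStream stdin = p.getOutputStream();        // the child's standard input
stdin.write("42\n".getBytes());                  // e.g. the id the program reads with cin >> id
stdin.flush();
stdin.close();                                   // closing it signals EOF to the child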
You need to start a separate thread which reads from the output of one process and writes it as input to the other process.
Something like this should do:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
class DataForwarder extends Thread {
OutputStream out;
InputStream in;
public DataForwarder(InputStream in, OutputStream out) {
this.out = out;
this.in = in;
}
@Override
public void run() {
byte[] buf = new byte[1024];
try {
int n;
while (-1 != (n = in.read(buf)))
out.write(buf, 0, n);
out.close();
} catch (IOException e) {
// Handle in some suitable way.
}
}
}
Which would be used for prod >> cons as follows:
import java.io.IOException;
class Test {
public static void main(String[] args) throws IOException {
Process prod = new ProcessBuilder("ls").start();
Process cons = new ProcessBuilder("cat").start();
// Start feeding cons with output from prod.
new DataForwarder(prod.getInputStream(), cons.getOutputStream()).start();
}
}
I have an encrypted video file, and while decrypting it I have defined a byte[] input = new byte[1024] buffer for writing to the output file.
Here I want to write the first 1024 bytes to the output file and, at the same time, be able to play that output file without waiting for the whole file to be written, like video streaming.
Once the first 1024 bytes are written, the video file should start playing while the rest of the file is still being written.
You'll have to set up your input stream and output stream depending on where you're getting the data and where you're saving/viewing it. Performance could also likely be improved with some buffering on the output. You should get the general idea.
import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingDeque;
public class DecryptionWotsit {
private final BlockingDeque<Byte> queue = new LinkedBlockingDeque<Byte>();
private final InputStream in;
private final OutputStream out;
public DecryptionWotsit(InputStream in, OutputStream out) {
this.in = in;
this.out = out;
}
public void go() {
final Runnable decryptionTask = new Runnable() {
@Override
public void run() {
try {
byte[] encrypted = new byte[1024];
byte[] decrypted = new byte[1024];
while (true) {
int encryptedBytes = in.read(encrypted);
// TODO: decrypt into decrypted, set decryptedBytes
int decryptedBytes = 0;
for (int i = 0; i < decryptedBytes; i++)
queue.addFirst(decrypted[i]);
}
}
catch (Exception e) {
// exception handling left for the reader
throw new RuntimeException(e);
}
}
};
final Runnable playTask = new Runnable() {
@Override
public void run() {
try {
while (true) {
out.write(queue.takeLast());
}
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
};
Executors.newSingleThreadExecutor().execute(decryptionTask);
Executors.newSingleThreadExecutor().execute(playTask);
}
}
You will have to do the writing in a separate thread.
Since writing to file is a lot slower than displaying video, expect the file-writing thread to be running long after you've quit watching the video. Unless (as I understand it) you intend to write only the first 1024 bytes to file.
If you intend to write the entire video to file, a single 1024 byte buffer will slow you down. You will either have to use a buffer that is a lot larger, or need a lot of these 1024-byte buffers. (I suppose the 1024 byte buffer size is a consequence of the decryption algorithm?)
Also, you may want to look at how much memory is available to the JVM, to make sure that you won't get an OutOfMemoryError halfway. You can use the -Xms and -Xmx options to set the amount of memory available to the JVM.
A simple way to write to a file that you also want to process is to open the file twice (or more times). In one thread you write to the file and update a counter saying how much you have written, e.g. a long protected by a synchronized block. In the reading thread(s) you get this value and read up to that point, repeatedly, until the writer has finished. A simple way to signal that the writer has finished is to set the size to Long.MAX_VALUE, causing the readers to read until EOF. To stop the readers from busy waiting, you can have them wait() until the amount of data written is greater than the amount read.
This approach always uses a fixed amount of memory e.g. 16 - 128K, regardless of how far behind the readers are from the writer.
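A very rough sketch of that scheme (all names invented for illustration; error handling and the actual file I/O are omitted):
// The writer thread updates this after each write; reader threads wait on it.
class WriteProgress {
    private long written = 0;                 // bytes safely written so far

    synchronized void advance(long n) {
        written += n;
        notifyAll();                          // wake up any waiting readers
    }

    synchronized void finished() {
        written = Long.MAX_VALUE;             // readers now simply read until EOF
        notifyAll();
    }

    synchronized long awaitMoreThan(long alreadyRead) throws InterruptedException {
        while (written <= alreadyRead) {
            wait();                           // block instead of busy waiting
        }
        return written;
    }
}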