I am working on an Android application in which I want to share a file from one device to another over Wi-Fi. I am getting a speed of around 1.5 MB/s. Is there any way I can transfer the file at a much higher data rate?
Can you please tell me why we are getting such a low data rate, even though the devices and the router are capable of handling more than 150 Mbps (18.75 MB/s)?
Is it possible to use UFTP, and would it solve the problem?
Here is the code:
// dis is the DataInputStream from the sender, dos is the DataOutputStream
// to the destination file, and fLength is the remaining file length in bytes
byte[] buf = new byte[2048];
try {
    int bytesRead;
    while ((bytesRead = dis.read(buf, 0, buf.length)) != -1) {
        fLength = fLength - bytesRead;
        dos.write(buf, 0, bytesRead);
        Log.i("File Transfer Thread", String.valueOf(fLength) + Thread.currentThread().getName());
    }
} catch (IOException e) {
    Log.e("File Transfer Thread", "Transfer failed", e);
}
Thanks
Your code is fast.
One thing worth trying is playing with the buffer ("packet") size. Modify it and measure which value gives the fastest transfer; sometimes a bigger buffer is faster to send.
A larger buffer: byte[] buf = new byte[2048 * 10];
A smaller buffer: byte[] buf = new byte[512];
A medium buffer: byte[] buf = new byte[2048 * 5];
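One way to pick a value is to time the same copy loop with a few candidate sizes on the actual devices. This is a minimal sketch, assuming the same kind of InputStream/OutputStream pair as the dis/dos in the question; copyWithBuffer is an illustrative helper, not part of the original code:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copy everything from in to out with the given buffer size and
// return the elapsed time in milliseconds.
static long copyWithBuffer(InputStream in, OutputStream out, int bufferSize)
        throws IOException {
    byte[] buf = new byte[bufferSize];
    long start = System.currentTimeMillis();
    int bytesRead;
    while ((bytesRead = in.read(buf, 0, buf.length)) != -1) {
        out.write(buf, 0, bytesRead);
    }
    out.flush();
    return System.currentTimeMillis() - start;
}
Running this once per candidate size (for example 512, 8 * 1024 and 64 * 1024) against the same file shows which size is consistently fastest on your hardware.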
As a hobby project, I'm writing an Android VoIP client. When writing voice data to the socket (Vars.mediaSocket), the data often isn't sent out over the Wi-Fi immediately: it stalls, and then all at once it sends 20 seconds' worth of voice. Then it stalls again, waits 30 seconds, and sends 30 seconds of voice. The wait is not consistent, and after a while it will continuously send voice data immediately. I've tried everything from using a DataOutputStream, to setting the socket send buffer size (huge and small), to buffering the voice data from its 32-byte chunks into anything from 128 bytes to 32 kB.
Utils.logcat(Const.LOGD, encTag, "MediaCodec encoder thread has started");
isEncoding = true;
byte[] amrbuffer = new byte[32];
short[] wavbuffer = new short[160];
int outputCounter = 0;
//setup the wave audio recorder. since it is released and restarted, it needs to be setup here and not onCreate
wavRecorder = null; //remove pointer to the old recorder for safety
wavRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLESWAV, AudioFormat.CHANNEL_IN_MONO, FORMAT, 160);
wavRecorder.startRecording();
AmrEncoder.init(0);
while(!micMute)
{
int totalRead = 0, dataRead;
while(totalRead < 160)
{//although unlikely to be necessary, buffer the mic input
dataRead = wavRecorder.read(wavbuffer, totalRead, 160 - totalRead);
totalRead = totalRead + dataRead;
}
int encodeLength = AmrEncoder.encode(AmrEncoder.Mode.MR122.ordinal(), wavbuffer, amrbuffer);
try
{
Vars.mediaSocket.getOutputStream().write(amrbuffer);
Vars.mediaSocket.getOutputStream().flush();
}
catch (IOException i)
{
Utils.logcat(Const.LOGE, encTag, "Cannot send amr out the media socket");
Utils.dumpException(tag, i);
}
}
Is there something I'm missing? To simulate a second cell phone, I have another client which simply reads the voice data, throws it away, and reads again in a loop. I can confirm that when the real cell phone stops sending voice, the simulated one's socket.read hangs until the real one starts sending again.
I'm really hoping not to have to write JNI code for the socket, as I don't know anything about that and was hoping I could write the app as a standard Java app.
CASE CLOSED: it turned out to be a server-side bug, but the back-to-basics simplification suggested below is still a good idea.
You are adding most of the latency yourself by reading large amounts of data before writing any of it. You should just use the standard Java copy loop:
byte[] buffer = new byte[8192];
int count;
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
You need to adapt this to incorporate your codec step. Note that you don't need a buffer the size of the entire input. You can tune its size to suit yourself, but 8192 is a good starting point. You can increase it to, say, 32k, but don't decrease it. If your codec needs the data in fixed-size chunks, use a buffer of that size and DataInputStream.readFully(). But the larger the buffer, the more the latency.
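A minimal sketch of the fixed-size variant; copyFixedFrames and frameSize are illustrative names, not from the question, and the codec call is left as a comment:
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copy a stream in fixed-size frames, e.g. for a codec that needs whole chunks.
static void copyFixedFrames(InputStream in, OutputStream out, int frameSize)
        throws IOException {
    DataInputStream din = new DataInputStream(in);
    byte[] frame = new byte[frameSize];
    try {
        while (true) {
            din.readFully(frame);               // blocks until a whole frame is available
            // codec step would go here, writing its output instead of the raw frame
            out.write(frame, 0, frame.length);
        }
    } catch (EOFException e) {
        // end of stream: the other side has finished sending
    }
}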
EDIT Specific issues with your code:
byte[] amrbuffer = new byte[AMRBUFFERSIZE];
byte[] outputbuffer = new byte [outputBufferSize];
Remove (see below).
short[] wavbuffer = new short[WAVBUFFERSIZE];
int outputCounter = 0;
Remove outputCounter.
//setup the wave audio recorder. since it is released and restarted, it needs to be setup here and not onCreate
wavRecorder = null; //remove pointer to the old recorder for safety
Pointless. Remove.
wavRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLESWAV, AudioFormat.CHANNEL_IN_MONO, FORMAT, WAVBUFFERSIZE);
wavRecorder.startRecording();
AmrEncoder.init(0);
OK.
try
{
Vars.mediaSocket.setSendBufferSize(outputBufferSize);
}
catch (SocketException e)
{
e.printStackTrace();
}
Pointless. Remove. The socket send buffer should be as large as possible. Unless you know that its default size is < outputBufferSize there is no benefit to this. In any case we are getting rid of outputBuffer altogether.
while(!micMute)
{
int totalRead = 0, dataRead;
while(totalRead < WAVBUFFERSIZE)
{//although unlikely to be necessary, buffer the mic input
dataRead = wavRecorder.read(wavbuffer, totalRead, WAVBUFFERSIZE - totalRead);
totalRead = totalRead + dataRead;
}
int encodeLength = AmrEncoder.encode(AmrEncoder.Mode.MR122.ordinal(), wavbuffer, amrbuffer);
OK.
if(outputCounter == outputBufferSize)
{
Utils.logcat(Const.LOGD, encTag, "Sending output buffer");
try
{
Vars.mediaSocket.getOutputStream().write(outputbuffer);
Vars.mediaSocket.getOutputStream().flush();
}
catch (IOException i)
{
Utils.logcat(Const.LOGE, encTag, "Cannot send amr out the media socket");
Utils.dumpException(tag, i);
}
outputCounter = 0;
}
System.arraycopy(amrbuffer, 0, outputbuffer, outputCounter, encodeLength);
outputCounter = outputCounter + encodeLength;
Utils.logcat(Const.LOGD, encTag, "Output buffer fill: " + outputCounter);
Remove all the above and substitute
Vars.mediaSocket.getOutputStream().write(amrbuffer, 0, encodeLength);
This also means you can get rid of 'outputBuffer' as promised.
NB Don't flush inside loops. As a matter of fact flushing a socket output stream does nothing, but the general principle still holds.
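Putting the suggested changes together, the encoder loop reduces to something like the following sketch, reusing the identifiers from the question (the AudioRecord setup and micMute handling are unchanged; error handling is kept as in the original):
while (!micMute)
{
    // fill one whole PCM frame from the microphone
    int totalRead = 0;
    while (totalRead < WAVBUFFERSIZE)
    {
        totalRead += wavRecorder.read(wavbuffer, totalRead, WAVBUFFERSIZE - totalRead);
    }

    int encodeLength = AmrEncoder.encode(AmrEncoder.Mode.MR122.ordinal(), wavbuffer, amrbuffer);
    try
    {
        // send each encoded frame immediately: no intermediate output buffer, no flush
        Vars.mediaSocket.getOutputStream().write(amrbuffer, 0, encodeLength);
    }
    catch (IOException i)
    {
        Utils.logcat(Const.LOGE, encTag, "Cannot send amr out the media socket");
        Utils.dumpException(tag, i);
    }
}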
I need to download synchronously (one at a time) a lot of small remote images (between 50 kB and 100 kB) from a server and store them as PNG files on the device. I need to achieve this without third-party libraries, and I'm using this code, but it is much too slow:
URL javaUrl = new URL(URLParser.parse(this.url));
URLConnection connection = javaUrl.openConnection();
InputStream input = new BufferedInputStream(javaUrl.openStream());
ByteArrayOutputStream output = new ByteArrayOutputStream();
byte data[] = new byte[1024];
long total = 0;
int count;
while ((count = input.read(data)) != -1) {
total += count;
output.write(data, 0, count);
}
// conversion to bitmap
InputStream in = new ByteArrayInputStream(output.toByteArray());
Bitmap original = BitmapFactory.decodeStream(in);
// storing bitmap as PNG file
FileOutputStream out = new FileOutputStream(filename);
original.compress(Bitmap.CompressFormat.PNG, 90, out);
output.flush();
output.close();
input.close();
in.close();
original.recycle();
The problem is that the download is very slow. With very fast Wi-Fi on the device (13 MB, download speed of 1.4 MB/s), it takes 3-4 seconds to download an image on the device, but only 100-200 ms to download the same image on my PC using Google Chrome, for example.
Is there something wrong with my download algorithm? Can it be improved?
Thanks
You have a totally unnecessary byte array in the middle.
BitmapFactory.decodeStream() accepts an InputStream and you get an InputStream from URL.openStream().
It might not give you the speed boost you're looking for, but it'll at least get rid of a completely useless step in your code.
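A minimal sketch of that simplification, reusing the url, filename and URLParser identifiers from the question (the quality value of 90 is carried over as well):
URL javaUrl = new URL(URLParser.parse(this.url));
InputStream input = new BufferedInputStream(javaUrl.openStream());
try {
    // decode straight from the network stream; no intermediate byte array
    Bitmap original = BitmapFactory.decodeStream(input);
    if (original != null) {
        FileOutputStream out = new FileOutputStream(filename);
        try {
            original.compress(Bitmap.CompressFormat.PNG, 90, out);
        } finally {
            out.close();
        }
        original.recycle();
    }
} finally {
    input.close();
}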
I am trying to submit a 500 MB file.
I can load it, but I want to improve the performance.
This is the slow code:
File dest = getDestinationFile(source, destination);
if(dest == null) return false;
in = new BufferedInputStream(new FileInputStream(source));
out = new BufferedOutputStream(new FileOutputStream(dest));
byte[] buffer = new byte[1024 * 20];
int i = 0;
// this while loop is very slow
while((i = in.read(buffer)) != -1){
out.write(buffer, 0, i); //<-- SLOW HERE
out.flush();
}
How can I find out why it is slow?
Isn't the byte array / buffer size sufficient?
Do you have any ideas for improving the performance?
Thanks in advance for any help
You should not flush inside the loop.
You are using a BufferedOutputStream. This means that after "caching" some amount of data, it flushes that data to the file.
Your code kills performance by flushing after writing only a small amount of data.
Try doing it like this:
while ((i = in.read(buffer)) != -1) {
    out.write(buffer, 0, i);
}
out.flush();
..:: Edit: in response to the comment below ::..
In my opinion you should not use your own buffer at all. You are using Buffered(Input/Output)Stream, which means the streams already have their own internal buffer for reading a "package" of data from disk and for writing a "package" of data out. I'm not 100% sure about the performance of the additional buffer, but I want to show you how I would do it:
File dest = getDestinationFile(source, destination);
if(dest == null) return false;
in = new BufferedInputStream(new FileInputStream(source));
out = new BufferedOutputStream(new FileOutputStream(dest));
int i;
while((i = in.read()) != -1){
out.write(i);
}
out.flush();
In my version you just read one byte at a time (not an int; see the docs: http://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#read() . The method returns an int, but it carries a single byte), and there is no need to read a whole buffer, so you don't need to worry about its size.
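If you do want control over how much the streams buffer internally, their constructors accept an explicit buffer size. A minimal sketch (the 64 kB value is purely illustrative; the default is 8 kB):
in = new BufferedInputStream(new FileInputStream(source), 64 * 1024);
out = new BufferedOutputStream(new FileOutputStream(dest), 64 * 1024);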
You should probably read more about streams to better understand what is necessary to do with them.
When I try to send a large file from the server by splitting it, some of the packages don't arrive at the client, as you can see in the console output:
http://s7.postimg.org/94yjfame3/error.png
The client receives only 19799.. bytes, and the server sent 62800.. bytes.
The code is too long to paste here, but here are the basics:
// server side -> send data
BufferedOutputStream out = new BufferedOutputStream(socket.getOutputStream());
byte[] somePackageInfo= new byte[500];
byte[] streamOut = new byte[20000];
while(getDataFromLargeFile(somePackageInfo,streamOut) != 0) {
out.write(somePackageInfo,0,500);
out.write(streamOut);
out.flush();
}
out.write(0);
out.flush();
// client side -> get data
BufferedInputStream in = new BufferedInputStream(socket.getInputStream());
byte[] somePackageInfo= new byte[500];
byte[] streamIn= new byte[20000];
while(true) {
if(in.read(somePackageInfo,0,500) == 0) break;
in.read(streamIn);
saveDataToLargeFile(somePackageInfo,streamIn);
}
I tried to slow down the transfer (sleep(500)), but still only most of the packages arrived.
I also tried removing the flush(), but again only most of the packages arrived.
What causes this problem and how can I fix it?
Your copy code is wrong. You are ignoring the count returned by read, and assuming that it fills the buffer. It isn't required to do that. See the Javadoc.
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
Use with any buffer size greater than zero, typically 8192. Use at both ends.
Adding sleeps is literally a waste of time.
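If the fixed 500-byte header plus 20000-byte block framing needs to be kept, the read side has to keep reading until each part is complete, which is what DataInputStream.readFully() does. A sketch of the receiving loop under that assumption; it reuses socket and saveDataToLargeFile from the question and relies on the sender closing the connection rather than on the single 0 terminator byte:
DataInputStream in = new DataInputStream(
        new BufferedInputStream(socket.getInputStream()));
byte[] somePackageInfo = new byte[500];
byte[] streamIn = new byte[20000];
try {
    while (true) {
        in.readFully(somePackageInfo);   // blocks until all 500 header bytes arrive
        in.readFully(streamIn);          // blocks until the whole 20000-byte block arrives
        saveDataToLargeFile(somePackageInfo, streamIn);
    }
} catch (EOFException e) {
    // sender closed the connection: transfer complete
}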
I am trying to send two images, 5 seconds apart, from an Android phone (client) to a PC (server).
I am using an InputStream to do this.
ServerSocket servsock = new ServerSocket(27508);
Socket sock = servsock.accept();
System.out.println("connection accepted ");
int count;
FileOutputStream fos = null;
BufferedOutputStream bos = null;
InputStream is = null;
is = sock.getInputStream();
int bufferSize = sock.getReceiveBufferSize();
byte[] bytes = new byte[bufferSize];
System.out.println("Here1");
fos = new FileOutputStream("D:\\fypimages\\image" + imgNum + ".jpeg");
bos = new BufferedOutputStream(fos);
imgNum++;
while ((count = is.read(bytes)) > 0)
{
bos.write(bytes, 0, count);
System.out.println("count: " + count);
}
bos.flush();
bytes = new byte[bufferSize];
System.out.println("Here2");
fos = new FileOutputStream("D:\\fypimages\\image" + imgNum + ".jpeg");
bos = new BufferedOutputStream(fos);
imgNum++;
while ((count = is.read(bytes)) > 0)
{
bos.write(bytes, 0, count);
System.out.println("count: " + count);
}
bos.flush();
System.out.println("Here3");
The problem is that is.read(bytes) blocks only for the first image; then the program terminates, and it does not block for the second image.
I know it returns -1 when the first image is received completely, but how do I make it work the second time?
If read returns -1, it means the other side closed the connection. But your basic problem seems to be that you're not handling the connection as a stream. A data stream has no inherent "packages"; in this case there is no built-in way to distinguish one image from the next.
You can proceed in at least 3 different ways:
Add your own simple protocol. For example, on the sending side, write the number of bytes in the image, then the image bytes, then the number of bytes in the next image, then the next image, and so on, without closing the connection. On the receiving side, loop: first read the number of bytes, then read that many bytes of image data (see the sketch after this list).
Write one image per connection, then close the connection and create a new connection for the next image.
Because the data here is JPEG images, just write all the JPEGs as one data stream, then on the receiving side parse the JPEG format to see where the image boundaries are.
The first choice is the most efficient, and it is easily extended to deliver the image name or other extra data in addition to the image length. The second is OK, and the simplest and most robust (for example, no need to worry about byte order or about getting out of sync between sender and receiver), as long as there aren't too many images; with hundreds of images, re-connecting will slow things down a bit. The third choice is probably not the way to go with JPEGs; it's just listed as a possibility.
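A minimal sketch of the first option, using DataOutputStream/DataInputStream for the length prefix (sendImage, receiveImage and the byte[] image representation are illustrative, not from the question):
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sender side: write a length prefix, then the image bytes; keep the connection open.
static void sendImage(DataOutputStream out, byte[] imageBytes) throws IOException {
    out.writeInt(imageBytes.length);
    out.write(imageBytes);
    out.flush();
}

// Receiver side: read the length, then exactly that many bytes.
static byte[] receiveImage(DataInputStream in) throws IOException {
    int length = in.readInt();
    byte[] imageBytes = new byte[length];
    in.readFully(imageBytes);
    return imageBytes;
}
writeInt/readInt use the same (big-endian) byte order on both sides, so the byte-order concern mentioned above is handled by the DataStream classes.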