Android CTS code about storage throughput calculation - Java

I need to measure the storage throughput of my Android device, and I found source code in the Android CTS that calculates sequential storage read/write throughput.
FileUtil.java
public static long getFileSizeExceedingMemory(Context context, int bufferSize) {
long freeDisk = SystemUtil.getFreeDiskSize(context);
long memSize = SystemUtil.getTotalMemory(context);
long diskSizeTarget = (2 * memSize / bufferSize) * bufferSize;
final long minimumDiskSize = (512L * 1024L * 1024L / bufferSize) * bufferSize;
final long reservedDiskSize = (50L * 1024L * 1024L / bufferSize) * bufferSize;
if ( diskSizeTarget < minimumDiskSize ) {
diskSizeTarget = minimumDiskSize;
}
if (diskSizeTarget > freeDisk) {
Log.i(TAG, "Free disk size " + freeDisk + " too small");
return 0;
}
if ((freeDisk - diskSizeTarget) < reservedDiskSize) {
diskSizeTarget -= reservedDiskSize;
}
return diskSizeTarget;
}
This function computes the size of a file that will be written from RAM to storage (write) and then read back.
I was just wondering:
long diskSizeTarget = (2 * memSize / bufferSize) * bufferSize;
Why do they need to prepare a file that is around double the RAM size?
I have tried a file size of half the RAM size (my device has 2GB of RAM); the write throughput looks normal, but the read throughput is implausibly fast (around 200MB/s).
The results look fine when I use a file of around 4GB (double the RAM size) or 2GB.
(The buffer size parameter is 10MB for both read and write.)
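To make the arithmetic concrete, here is a standalone sketch of the same size computation using the numbers from this question (2GB of RAM, 10MB buffer); the class name is illustrative:

```java
public class TargetSizeDemo {
    public static void main(String[] args) {
        long memSize = 2L * 1024 * 1024 * 1024; // 2 GB of RAM
        int bufferSize = 10 * 1024 * 1024;      // 10 MB buffer
        // Same expression as getFileSizeExceedingMemory: double the RAM,
        // rounded down to a whole multiple of the buffer size.
        long diskSizeTarget = (2 * memSize / bufferSize) * bufferSize;
        System.out.println(diskSizeTarget / (1024 * 1024) + " MB"); // 4090 MB
    }
}
```

The rounding down to a buffer multiple means the write loop in createNewFilledFile, which writes whole buffers, finishes exactly at the target size.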
Here is the read and write code:
SequentialRWTest.java
public void testSingleSequentialRead() throws Exception {
final long fileSize = FileUtil.getFileSizeExceedingMemory(getContext(), BUFFER_SIZE);
if (fileSize == 0) { // not enough space, give up
return;
}
long start = System.currentTimeMillis();
final File file = FileUtil.createNewFilledFile(getContext(),
DIR_SEQ_RD, fileSize);
long finish = System.currentTimeMillis();
String streamName = "test_single_sequential_read";
DeviceReportLog report = new DeviceReportLog(REPORT_LOG_NAME, streamName);
report.addValue("file_size", fileSize, ResultType.NEUTRAL, ResultUnit.NONE);
report.addValue("write_throughput",
Stat.calcRatePerSec((double)fileSize / 1024 / 1024, finish - start),
ResultType.HIGHER_BETTER, ResultUnit.MBPS);
final int NUMBER_READ = 10;
final byte[] data = new byte[BUFFER_SIZE];
double[] times = MeasureTime.measure(NUMBER_READ, new MeasureRun() {
@Override
public void run(int i) throws IOException {
final FileInputStream in = new FileInputStream(file);
long read = 0;
while (read < fileSize) {
in.read(data);
read += BUFFER_SIZE;
}
in.close();
}
});
double[] mbps = Stat.calcRatePerSecArray((double)fileSize / 1024 / 1024, times);
report.addValues("read_throughput", mbps, ResultType.HIGHER_BETTER, ResultUnit.MBPS);
Stat.StatResult stat = Stat.getStat(mbps);
report.setSummary("read_throughput_average", stat.mAverage, ResultType.HIGHER_BETTER,
ResultUnit.MBPS);
report.submit(getInstrumentation());
}
And the createNewFilledFile function in FileUtil.java:
public static File createNewFilledFile(Context context, String dirName, long length)
throws IOException {
final int BUFFER_SIZE = 10 * 1024 * 1024;
File file = createNewFile(context, dirName);
FileOutputStream out = new FileOutputStream(file);
byte[] data = generateRandomData(BUFFER_SIZE);
long written = 0;
while (written < length) {
out.write(data);
written += BUFFER_SIZE;
}
out.flush();
out.close();
return file;
}
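As an aside, the read loop in testSingleSequentialRead assumes every in.read(data) fills the whole buffer, which InputStream does not guarantee. A defensive version accumulates the actual return value; this standalone sketch shows the pattern:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadLoopDemo {
    public static void main(String[] args) throws IOException {
        byte[] src = new byte[25];
        InputStream in = new ByteArrayInputStream(src);
        byte[] data = new byte[10];
        long read = 0;
        int n;
        // count what was actually read instead of assuming full buffers
        while ((n = in.read(data)) != -1) {
            read += n;
        }
        System.out.println(read); // 25
    }
}
```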

Related

Java 8: How to chunk multipart file for POST request

I have a multipart file (it will be an image or video) which needs to be chunked for a POST request. How can I chunk the file into byte array segments?
edit: I'm using the Twitter API to upload an image; according to their docs, media must be chunked.
I've found a solution thanks to https://www.baeldung.com/2013/04/04/multipart-upload-on-s3-with-jclouds/
public final class MediaUtil {
public static int getMaximumNumberOfParts(byte[] byteArray) {
int numberOfParts = byteArray.length / (1024 * 1024); // 1MB
if (numberOfParts == 0) {
return 1;
}
return numberOfParts;
}
public static List<byte[]> breakByteArrayIntoParts(byte[] byteArray, int maxNumberOfParts) {
List<byte[]> parts = new ArrayList<>();
int fullSize = byteArray.length;
long dimensionOfPart = fullSize / maxNumberOfParts;
for (int i = 0; i < maxNumberOfParts; i++) {
int previousSplitPoint = (int) (dimensionOfPart * i);
int splitPoint = (int) (dimensionOfPart * (i + 1));
if (i == (maxNumberOfParts - 1)) {
splitPoint = fullSize;
}
byte[] partBytes = Arrays.copyOfRange(byteArray, previousSplitPoint, splitPoint);
parts.add(partBytes);
}
return parts;
}
}
// Post the request
int maxParts = MediaUtil.getMaximumNumberOfParts(multipartFile.getBytes());
List<byte[]> bytes = MediaUtil.breakByteArrayIntoParts(multipartFile.getBytes(), maxParts);
int segment = 0;
for (byte[] b : bytes) {
// POST request here
segment++;
}
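To see how breakByteArrayIntoParts distributes a remainder, here is a minimal standalone walk-through of the same splitting logic with a 10-byte array and 3 parts (names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SplitDemo {
    public static void main(String[] args) {
        byte[] data = new byte[10];
        int maxParts = 3;
        List<byte[]> parts = new ArrayList<>();
        long dimensionOfPart = data.length / maxParts; // 3
        for (int i = 0; i < maxParts; i++) {
            int from = (int) (dimensionOfPart * i);
            // the last part runs to the end of the array, absorbing the remainder
            int to = (i == maxParts - 1) ? data.length : (int) (dimensionOfPart * (i + 1));
            parts.add(Arrays.copyOfRange(data, from, to));
        }
        for (byte[] part : parts) {
            System.out.println(part.length); // 3, 3, 4
        }
    }
}
```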
Well, you may need this:
File resource = ResourceUtils.getFile(path);
if (resource.isFile()) {
byte[] bytes = readFile2Bytes(new FileInputStream(resource));
}
private byte[] readFile2Bytes(FileInputStream fis) throws IOException {
int length = 0;
byte[] buffer = new byte[8192]; // read in 8 KB chunks
ByteArrayOutputStream baos = new ByteArrayOutputStream();
while ((length = fis.read(buffer)) != -1) {
baos.write(buffer, 0, length);
}
return baos.toByteArray();
}
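On Java 7 and later, the same file-to-byte-array read can be done in a single call, assuming the file fits in memory:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadAllDemo {
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("demo", ".bin");
        Files.write(p, new byte[]{1, 2, 3});
        byte[] bytes = Files.readAllBytes(p); // reads the whole file into memory
        System.out.println(bytes.length); // 3
        Files.delete(p);
    }
}
```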

Java Internet Speed Test

Hello friends,
I am trying to measure internet speed with the sample Java code below. But when I compare the result with a website, say fast.com, the results are very different. Can you help me understand if I am missing anything?
public static void testSpeed() throws MalformedURLException, IOException {
long totalDownload = 0; // total bytes downloaded
final int BUFFER_SIZE = 1024; // size of the buffer
byte[] data = new byte[BUFFER_SIZE]; // buffer
int dataRead = 0; // data read in each try
long startTime = System.nanoTime(); // starting time of download
BufferedInputStream in = new BufferedInputStream(
new URL(
"https://www.google.com/")
.openStream());
while ((dataRead = in.read(data, 0, 1024)) > 0) {
totalDownload += dataRead; // adding data downloaded to total data
}
double downloadTime=(System.nanoTime() - startTime);
/* download rate in bytes per second */
double bytesPerSec = totalDownload
/ ((downloadTime) / 1000000000 );
System.out.println(bytesPerSec + " Bps");
/* download rate in kilobytes per second */
double kbPerSec = bytesPerSec / (1024);
System.out.println(kbPerSec + " KBps ");
/* download rate in megabytes per second */
double mbPerSec = kbPerSec / (1024);
System.out.println(mbPerSec + " MBps ");
}
The results I am getting:
66785.29693193253 Bps
65.22001653509037 KBps
0.06369142239754919 MBps
The result from fast.com is: 140 Mbps
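One unit detail worth checking before comparing numbers: fast.com reports megabits per second (Mbps), while the code above prints megabytes per second (MBps), so the two differ by a factor of 8 even before any measurement issues (and downloading a small page like the Google homepage is likely dominated by connection setup rather than sustained throughput). A minimal conversion sketch:

```java
import java.util.Locale;

public class UnitDemo {
    public static void main(String[] args) {
        double mbPerSec = 0.0637;              // MBps measured above (megabytes/s)
        double megabitsPerSec = mbPerSec * 8;  // 1 byte = 8 bits
        System.out.printf(Locale.ROOT, "%.3f Mbps%n", megabitsPerSec);
    }
}
```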

FileChannel close is too slow. Will pre allocating space improve things?

I use a FileChannel to write 4GB files to a spinning disk, and although I have tuned the buffer size to maximise write speed and flush the channel every second, closing the file channel can take 200 ms. This is enough time for the queue that I read from to overflow and start dropping packets.
I use a direct byte buffer, but I am struggling to understand what is happening here. I have removable disks and write caching has been disabled, so I would not expect the OS to be buffering the data.
The benchmark speed of the disks is around 80 MB/s, but I am seeing the long file channel close times even when writing at around 40 MB/s.
I appreciate that write performance will decrease as the disk fills, but these disks are empty.
Are there any tweaks I can make to remove the long delay when closing the file channel? Should I allocate the file space up front, write the file with a .lock extension, and then rename it once the file has been completed?
I am just hoping someone who has done high-throughput IO can provide some pointers on possible options above and beyond what is usually documented when writing files using NIO.
The code is below and I cannot see anything immediately wrong.
public final class DataWriter implements Closeable {
private static final Logger LOG = Logger.getLogger("DataWriter");
private static final long MB = 1024 * 1024;
private final int flushPeriod;
private FileOutputStream fos;
private FileChannel fileChannel;
private long totalBytesWritten;
private long lastFlushTime;
private final ByteBuffer buffer;
private final int bufferSize;
private final long startTime;
private long totalPackets = 0;
private final String fileName;
public DataWriter(File recordFile, int bSize, int flushPeriod) throws IOException {
this.flushPeriod = flushPeriod;
if (!recordFile.createNewFile()) {
throw new IllegalStateException("Record file has not been created");
}
totalBytesWritten = 0;
fos = new FileOutputStream(recordFile);
fileChannel = fos.getChannel();
buffer = ByteBuffer.allocateDirect(bSize);
bufferSize = bSize;
startTime = System.currentTimeMillis();
this.fileName = recordFile.getAbsolutePath();
}
/**
* Appends the supplied ByteBuffer to the main buffer if there is space
* @param packet
* @return
* @throws IOException
*/
public int write(ByteBuffer packet) throws IOException {
int bytesWritten = 0;
totalPackets++;
//If the buffer cannot accommodate the supplied buffer then write straight out
if(packet.limit() > buffer.capacity()) {
bytesWritten = writeBuffer(packet);
totalBytesWritten += bytesWritten;
} else {
//write the currently filled buffer if no space exists to accomodate the current buffer
if(packet.limit() > buffer.remaining()) {
buffer.flip();
bytesWritten = writeBuffer(buffer);
totalBytesWritten += bytesWritten;
}
buffer.put(packet);
}
if(System.currentTimeMillis()-lastFlushTime > flushPeriod) {
fileChannel.force(true);
lastFlushTime=System.currentTimeMillis();
}
return bytesWritten;
}
public long getTotalBytesWritten() {
return totalBytesWritten;
}
/**
* Writes the buffer and then clears it
* @throws IOException
*/
private int writeBuffer(ByteBuffer byteBuffer) throws IOException {
int bytesWritten = 0;
while(byteBuffer.hasRemaining()) {
bytesWritten += fileChannel.write(byteBuffer);
}
//Reset the buffer ready for writing
byteBuffer.clear();
return bytesWritten;
}
@Override
public void close() throws IOException {
//Write the buffer if data is present
if(buffer.position() != 0) {
buffer.flip();
totalBytesWritten += writeBuffer(buffer);
fileChannel.force(true);
}
long time = System.currentTimeMillis() - startTime;
if(LOG.isDebugEnabled()) {
LOG.debug( totalBytesWritten + " bytes written in " + (time / 1000d) + " seconds using ByteBuffer size ["+bufferSize/1024+"] KB");
LOG.debug( (totalBytesWritten / MB) / (time / 1000d) + " MB per second written to file " + fileName);
LOG.debug( "Total packets written ["+totalPackets+"] average packet size ["+totalBytesWritten / totalPackets+"] bytes");
}
if (fos != null) {
fos.close();
fos = null;
}
}
}
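The pre-allocate-and-rename idea from the question can be sketched as below; file names are illustrative. One caveat: RandomAccessFile.setLength only sets the logical file size, and on most filesystems this creates a sparse file, so it may not eliminate block-allocation work at write time.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class PreallocateDemo {
    public static void main(String[] args) throws IOException {
        // write to a .lock file first so readers never see a partial file
        File lockFile = File.createTempFile("capture", ".dat.lock");
        try (RandomAccessFile raf = new RandomAccessFile(lockFile, "rw")) {
            raf.setLength(64L * 1024 * 1024); // reserve 64 MB up front
        }
        // ... write the actual data here ...
        // publish the finished file under its final name
        File finalFile = new File(lockFile.getParent(), "capture.dat");
        if (!lockFile.renameTo(finalFile)) {
            throw new IOException("rename failed");
        }
        System.out.println(finalFile.length()); // 67108864
        finalFile.delete();
    }
}
```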

Testing disk performance: differs with and without using Java

I've been asked to measure current disk performance, as we are planning to replace local disk with network attached storage on our application servers. Since our applications which write data are written in Java, I thought I would measure the performance directly in Linux, and also using a simple Java test. However I'm getting significantly different results, particularly for reading data, using what appear to me to be similar tests. Directly in Linux I'm doing:
dd if=/dev/zero of=/data/cache/test bs=1048576 count=8192
dd if=/data/cache/test of=/dev/null bs=1048576 count=8192
My Java test looks like this:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public class TestDiskSpeed {
private byte[] oneMB = new byte[1024 * 1024];
public static void main(String[] args) throws IOException {
new TestDiskSpeed().execute(args);
}
private void execute(String[] args) throws IOException {
long size = Long.parseLong(args[1]);
testWriteSpeed(args[0], size);
testReadSpeed(args[0], size);
}
private void testWriteSpeed(String filePath, long size) throws IOException {
File file = new File(filePath);
BufferedOutputStream writer = null;
long start = System.currentTimeMillis();
try {
writer = new BufferedOutputStream(new FileOutputStream(file), 1024 * 1024);
for (int i = 0; i < size; i++) {
writer.write(oneMB);
}
writer.flush();
} finally {
if (writer != null) {
writer.close();
}
}
long elapsed = System.currentTimeMillis() - start;
String message = "Wrote " + size + "MB in " + elapsed + "ms at a speed of " + calculateSpeed(size, elapsed) + "MB/s";
System.out.println(message);
}
private void testReadSpeed(String filePath, long size) throws IOException {
File file = new File(filePath);
BufferedInputStream reader = null;
long start = System.currentTimeMillis();
try {
reader = new BufferedInputStream(new FileInputStream(file), 1024 * 1024);
for (int i = 0; i < size; i++) {
reader.read(oneMB);
}
} finally {
if (reader != null) {
reader.close();
}
}
long elapsed = System.currentTimeMillis() - start;
String message = "Read " + size + "MB in " + elapsed + "ms at a speed of " + calculateSpeed(size, elapsed) + "MB/s";
System.out.println(message);
}
private double calculateSpeed(long size, long elapsed) {
double seconds = ((double) elapsed) / 1000L;
double speed = ((double) size) / seconds;
return speed;
}
}
This is being invoked with "java TestDiskSpeed /data/cache/test 8192"
Both of these should be creating 8GB files of zeros, 1MB at a time, measuring the speed, and then reading it back and measuring again. Yet the speeds I'm consistently getting are:
Linux: write - ~650MB/s
Linux: read - ~4.2GB/s
Java: write - ~500MB/s
Java: read - ~1.9GB/s
Can anyone explain the large discrepancy?
When I run this using NIO on my system (Ubuntu 15.04 with an i7-3970X):
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class Main {
static final int SIZE_GB = Integer.getInteger("sizeGB", 8);
static final int BLOCK_SIZE = 64 * 1024;
public static void main(String[] args) throws IOException {
ByteBuffer buffer = ByteBuffer.allocateDirect(BLOCK_SIZE);
File tmp = File.createTempFile("delete", "me");
tmp.deleteOnExit();
int blocks = (int) (((long) SIZE_GB << 30) / BLOCK_SIZE);
long start = System.nanoTime();
try (FileChannel fc = new FileOutputStream(tmp).getChannel()) {
for (int i = 0; i < blocks; i++) {
buffer.clear();
while (buffer.remaining() > 0)
fc.write(buffer);
}
}
long mid = System.nanoTime();
try (FileChannel fc = new FileInputStream(tmp).getChannel()) {
for (int i = 0; i < blocks; i++) {
buffer.clear();
while (buffer.remaining() > 0)
fc.read(buffer);
}
}
long end = System.nanoTime();
long size = tmp.length();
System.out.printf("Write speed %.1f GB/s, read Speed %.1f GB/s%n",
(double) size/(mid-start), (double) size/(end-mid));
}
}
prints
Write speed 3.8 GB/s, read Speed 6.8 GB/s
You may get better performance if you drop the BufferedXxxStream. It's not helping, since you're doing 1MB reads/writes, and it causes an extra memory copy of the data.
Better yet, you should be using the NIO classes instead of the regular IO classes.
try-finally
You should clean up your try-finally code.
// Original code
BufferedOutputStream writer = null;
try {
writer = new ...;
// use writer
} finally {
if (writer != null) {
writer.close();
}
}
// Cleaner code
BufferedOutputStream writer = new ...;
try {
// use writer
} finally {
writer.close();
}
// Even cleaner, using try-with-resources (since Java 7)
try (BufferedOutputStream writer = new ...) {
// use writer
}
To complement Peter's great answer, I am adding the code below. It compares head-to-head the performance of the good-old java.io with NIO. Unlike Peter, instead of just reading data into a direct buffer, I do a typical thing with it: transfer it into an on-heap byte array. This steals surprisingly little from the performance: where I was getting 7.5 GB/s with Peter's code, here I get 6.0 GB/s.
For the java.io approach I can't have a direct buffer, but instead I call the read method directly with my target on-heap byte array. Note that this array is smallish and has an awkward size of 555 bytes. Nevertheless I retrieve almost identical performance: 5.6 GB/s. The difference is so small that it would evaporate completely in normal usage, and even in this artificial scenario if I wasn't reading directly from the disk cache.
As a bonus I include at the bottom a method which can be used on Linux and Mac to purge the disk caches. You'll see a dramatic turn in performance if you decide to call it between the write and the read step.
public final class MeasureIOPerformance {
static final int SIZE_GB = Integer.getInteger("sizeGB", 8);
static final int BLOCK_SIZE = 64 * 1024;
static final int blocks = (int) (((long) SIZE_GB << 30) / BLOCK_SIZE);
static final byte[] acceptBuffer = new byte[555];
public static void main(String[] args) throws IOException {
for (int i = 0; i < 3; i++) {
measure(new ChannelRw());
measure(new StreamRw());
}
}
private static void measure(RW rw) throws IOException {
File file = File.createTempFile("delete", "me");
file.deleteOnExit();
System.out.println("Writing " + SIZE_GB + " GB " + " with " + rw);
long start = System.nanoTime();
rw.write(file);
long mid = System.nanoTime();
System.out.println("Reading " + SIZE_GB + " GB " + " with " + rw);
long checksum = rw.read(file);
long end = System.nanoTime();
long size = file.length();
System.out.printf("Write speed %.1f GB/s, read Speed %.1f GB/s%n",
(double) size/(mid-start), (double) size/(end-mid));
System.out.println(checksum);
file.delete();
}
interface RW {
void write(File f) throws IOException;
long read(File f) throws IOException;
}
static class ChannelRw implements RW {
final ByteBuffer directBuffer = ByteBuffer.allocateDirect(BLOCK_SIZE);
@Override public String toString() {
return "Channel";
}
@Override public void write(File f) throws IOException {
FileChannel fc = new FileOutputStream(f).getChannel();
try {
for (int i = 0; i < blocks; i++) {
directBuffer.clear();
while (directBuffer.remaining() > 0) {
fc.write(directBuffer);
}
}
} finally {
fc.close();
}
}
@Override public long read(File f) throws IOException {
ByteBuffer buffer = ByteBuffer.allocateDirect(BLOCK_SIZE);
FileChannel fc = new FileInputStream(f).getChannel();
long checksum = 0;
try {
for (int i = 0; i < blocks; i++) {
buffer.clear();
while (buffer.hasRemaining()) {
fc.read(buffer);
}
buffer.flip();
while (buffer.hasRemaining()) {
buffer.get(acceptBuffer, 0, Math.min(acceptBuffer.length, buffer.remaining()));
checksum += acceptBuffer[acceptBuffer[0]];
}
}
} finally {
fc.close();
}
return checksum;
}
}
static class StreamRw implements RW {
final byte[] buffer = new byte[BLOCK_SIZE];
@Override public String toString() {
return "Stream";
}
@Override public void write(File f) throws IOException {
FileOutputStream out = new FileOutputStream(f);
try {
for (int i = 0; i < blocks; i++) {
out.write(buffer);
}
} finally {
out.close();
}
}
@Override public long read(File f) throws IOException {
FileInputStream in = new FileInputStream(f);
long checksum = 0;
try {
for (int i = 0; i < blocks; i++) {
for (int remaining = acceptBuffer.length, read;
(read = in.read(buffer)) != -1 && (remaining -= read) > 0; )
{
in.read(acceptBuffer, acceptBuffer.length - remaining, remaining);
}
checksum += acceptBuffer[acceptBuffer[0]];
}
} finally {
in.close();
}
return checksum;
}
}
public static void purgeCache() throws IOException, InterruptedException {
if (System.getProperty("os.name").startsWith("Mac")) {
new ProcessBuilder("sudo", "purge")
// .inheritIO()
.start().waitFor();
} else {
new ProcessBuilder("sudo", "su", "-c", "echo 3 > /proc/sys/vm/drop_caches")
// .inheritIO()
.start().waitFor();
}
}
}

How do I read an NSInputStream while writing it to an NSOutputStream in iOS?

I am porting an Android app to iPhone (more like improving the iPhone app based on the Android version) and I need to split and combine large uncompressed audio files.
Currently, I load all the files into memory and split them and combine them in separate functions. It crashes with 100MB+ files.
This is the new process needed to do it:
I have two recordings (file1 and file2) and a split position where I want file2 to be inserted inside file1.
-create the input streams for file1 and file2, and the output stream for the output file
-rewrite the new CAF header
-read the data from inputStream1 until it reaches the split point, and write all of that data to the output file
-read all data from inputStream2 and write it to the output file
-read the remaining data from inputStream1 and write it to the output file
Here is my Android code for the process:
File file1File = new File(file1);
File file2File = new File(file2);
long file1Length = file1File.length();
long file2Length = file2File.length();
FileInputStream file1ByteStream = new FileInputStream(file1);
FileInputStream file2ByteStream = new FileInputStream(file2);
FileOutputStream outputFileByteStream = new FileOutputStream(outputFile);
// time = fileLength / (Sample Rate * Channels * Bits per sample / 8)
// convert position to number of bytes for this function
long sampleRate = eRecorder.RECORDER_SAMPLERATE;
int channels = 1;
long bitsPerSample = eRecorder.RECORDER_BPP;
long bytePositionLength = (position * (sampleRate * channels * bitsPerSample / 8)) / 1000;
//calculate total data size
int dataSize = 0;
dataSize = (int)(file1Length + file2Length);
WriteWaveFileHeaderForMerge(outputFileByteStream, dataSize,
dataSize + 36,
eRecorder.RECORDER_SAMPLERATE, 1,
2 * eRecorder.RECORDER_SAMPLERATE);
long bytesWritten = 0;
int length = 0;
//set limit for bytes read, and write file1 bytes to outputfile until split position reached
int limit = (int)bytePositionLength;
//read bytes to limit
writeBytesToLimit(file1ByteStream, outputFileByteStream, limit);
file1ByteStream.close();
file2ByteStream.skip(44);//skip wav file header
writeBytesToLimit(file2ByteStream, outputFileByteStream, (int)file2Length);
file2ByteStream.close();
//calculate length of remaining file1 bytes to be written
long file1offset = bytePositionLength;
//reinitialize file1 input stream
file1ByteStream = new FileInputStream(file1);
file1ByteStream.skip(file1offset);
writeBytesToLimit(file1ByteStream, outputFileByteStream, (int)file1Length);
file1ByteStream.close();
outputFileByteStream.close();
And this is my writeBytesToLimit function:
private void writeBytesToLimit(FileInputStream inputStream, FileOutputStream outputStream, int byteLimit) throws IOException
{
int bytesRead = 0;
int chunkSize = 65536;
int length = 0;
byte[] buffer = new byte[chunkSize];
while((length = inputStream.read(buffer)) != -1)
{
bytesRead += length;
if(bytesRead >= byteLimit)
{
int leftoverBytes = byteLimit % chunkSize;
byte[] smallBuffer = new byte[leftoverBytes];
System.arraycopy(buffer, 0, smallBuffer, 0, leftoverBytes);
outputStream.write(smallBuffer);
break;
}
if(length == chunkSize)
outputStream.write(buffer);
else
{
byte[] smallBuffer = new byte[length];
System.arraycopy(buffer, 0, smallBuffer, 0, length);
outputStream.write(smallBuffer);
}
}
}
How do I do this in iOS? Using the same delegate for two NSInputStreams and an NSOutputStream looks like it will get very messy.
Has anyone seen an example of how to do this (and do it clean)?
I ended up using NSFileHandle. For example, this is the first part of what I am doing.
NSData *readData = [[NSData alloc] init];
NSFileHandle *reader1 = [NSFileHandle fileHandleForReadingAtPath:file1Path];
NSFileHandle *writer = [NSFileHandle fileHandleForWritingAtPath:outputFilePath];
//start reading data from file1 to split point and writing it to file
long bytesRead = 0;
while(bytesRead < splitPointInBytes)
{
//read a chunk of data
readData = [reader1 readDataOfLength:chunkSize];
if(readData.length == 0)break;
//trim data if too much was read
if(bytesRead + readData.length > splitPointInBytes)
{
//get difference of read bytes and byte limit
long difference = bytesRead + readData.length - splitPointInBytes;
//trim data
NSMutableData *readDataMutable = [NSMutableData dataWithData:readData];
[readDataMutable setLength:readDataMutable.length - difference];
readData = [NSData dataWithData:readDataMutable];
NSLog(@"Too much data read, trimming");
}
//write data to output file
[writer writeData:readData];
//update byte counter
bytesRead += readData.length;
}
long file1BytesWritten = bytesRead;
