Java: BMP from database

I need to create a BMP (bitmap) image from a database using Java. The problem is that I have huge sets of integers ranging from 10 to 100.
I would like to represent the whole database as a BMP. At 10000x10000 values per table (and growing), the data exceeds what I can handle with int arrays.
Is there a way to write the BMP directly to the hard drive, pixel by pixel, so I don't run out of memory?

A file would work (I definitely wouldn't do a per-pixel call; you'll be waiting hours for the result). You just need a buffer. Break the application apart along the lines of ->
int[] buffer = new int[BUFFER_SIZE];
ResultSet data = ....; // forward-only, read-only result set
boolean resultSetDone = false;
while (!resultSetDone)
{
    int filled = 0;
    while (filled < BUFFER_SIZE && data.next())
    {
        buffer[filled++] = data.getInt(1); // read the result set into the buffer
    }
    resultSetDone = filled < BUFFER_SIZE; // a short fill means we hit the end
    // write buffer[0..filled) to cache (HEAP/File whatever)
}
Read the documentation on your database driver, but any major database is going to optimize your ResultSet object so you can use a cursor and not worry about memory.
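As a hedged sketch of that advice (illustrative names only; the JDBC URL, table, and column are made up), a forward-only, read-only statement with a fetch-size hint lets the driver page rows through a cursor instead of materializing the whole table:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorRead {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:yourdb://host/db");
             Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                   ResultSet.CONCUR_READ_ONLY)) {
            stmt.setFetchSize(10000); // hint: fetch rows in pages of 10k
            try (ResultSet rs = stmt.executeQuery("SELECT value FROM pixels ORDER BY y, x")) {
                while (rs.next()) {
                    int v = rs.getInt(1);
                    // feed v into the buffer loop above
                }
            }
        }
    }
}
Note that setFetchSize is only a hint; some drivers need extra settings for true streaming (MySQL's Connector/J, for instance, famously wants a fetch size of Integer.MIN_VALUE), so checking the driver documentation really is the key step.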
All that being said... an int[10000][10000] isn't why you're running out of memory. It's probably what you're doing with those values and your algorithm. Example:
public class Test
{
    public static void main(String... args)
    {
        int[][] ints = new int[10000][];
        System.out.println(System.currentTimeMillis() + " Start");
        for(int i = 0; i < 10000; i++)
        {
            ints[i] = new int[10000];
            for(int j = 0; j < 10000; j++)
                ints[i][j] = i*j % Integer.MAX_VALUE / 2;
            System.out.print(i);
        }
        System.out.println();
        System.out.println(Integer.valueOf(ints[500][999]) + " <- value");
        System.out.println(System.currentTimeMillis() + " Stop");
    }
}
Output ->
1344554718676 Start
//not even listing this
249750 <- value
1344554719322 Stop
Edit -- Or, if I misinterpreted your question, try this ->
http://www.java2s.com/Code/Java/Database-SQL-JDBC/LoadimagefromDerbydatabase.htm
I see... well, take a look around; I'm rusty, but this seems to be a way to do it. I'd double-check my buffering...
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class Test
{
    public static void main(String... args)
    {
        // 2 ^ 24 bytes; streams can be bigger, but this works...
        int size = 1 << 24;
        byte[] bytes = new byte[size];
        for(int i = 0; i < size; i++)
            bytes[i] = (byte) (i % 255);
        ByteArrayInputStream stream = new ByteArrayInputStream(bytes);
        File file = new File("test.io"); //kill the hard disk
        //Crappy error handling; you'd actually want to catch exceptions and recover
        BufferedInputStream in = new BufferedInputStream(stream);
        BufferedOutputStream out = null;
        byte[] buffer = new byte[1024 * 8];
        try
        {
            //You do need to check the buffer, as it will have stale data in it on the last read
            out = new BufferedOutputStream(new FileOutputStream(file));
            while(in.available() > 0)
            {
                int total = in.read(buffer);
                out.write(buffer, 0, total);
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
        finally
        {
            if(out != null)
            {
                try
                {
                    out.flush();
                    out.close();
                }
                catch (IOException e)
                {
                    e.printStackTrace();
                }
            }
        }
        System.out.println(System.currentTimeMillis() + " Start");
        System.out.println();
        System.out.println(Integer.valueOf(bytes[bytes.length - 1]) + " <- value");
        System.out.println("File size is-> " + file.length());
        System.out.println(System.currentTimeMillis() + " Stop");
    }
}

You could save it as a file, which is conceptually just a sequence of bytes.
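To make that concrete, here is a minimal sketch (my own illustration, not code from the answers above) that streams a 24-bit BMP to disk row by row, so only a single row ever sits in memory. The header fields follow the standard BITMAPFILEHEADER/BITMAPINFOHEADER layout; the (x + y) % 91 expression is a stand-in for fetching a value in the 10..100 range from the database:
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class StreamedBmp {
    static void writeIntLE(OutputStream out, int v) throws IOException {
        out.write(v); out.write(v >> 8); out.write(v >> 16); out.write(v >> 24);
    }
    static void writeShortLE(OutputStream out, int v) throws IOException {
        out.write(v); out.write(v >> 8);
    }
    public static void main(String[] args) throws IOException {
        int width = 10000, height = 10000;            // ~300 MB of pixel data on disk
        int rowBytes = (width * 3 + 3) & ~3;          // BMP rows are padded to 4 bytes
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream("table.bmp"))) {
            // 14-byte file header
            out.write('B'); out.write('M');
            writeIntLE(out, 54 + rowBytes * height);  // total file size
            writeIntLE(out, 0);                       // reserved
            writeIntLE(out, 54);                      // offset of pixel data
            // 40-byte BITMAPINFOHEADER
            writeIntLE(out, 40);
            writeIntLE(out, width);
            writeIntLE(out, -height);                 // negative height = rows stored top-down
            writeShortLE(out, 1);                     // color planes
            writeShortLE(out, 24);                    // bits per pixel
            writeIntLE(out, 0);                       // BI_RGB, uncompressed
            writeIntLE(out, rowBytes * height);       // image size
            writeIntLE(out, 0); writeIntLE(out, 0);   // resolution (pixels per meter)
            writeIntLE(out, 0); writeIntLE(out, 0);   // palette entries (none)
            byte[] row = new byte[rowBytes];          // the only per-image buffer
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int v = 10 + (x + y) % 91;        // stand-in for a DB value in 10..100
                    byte g = (byte) ((v - 10) * 255 / 90); // scale 10..100 to a grey level
                    row[x * 3] = g; row[x * 3 + 1] = g; row[x * 3 + 2] = g; // B, G, R
                }
                out.write(row);
            }
        }
    }
}
With the negative height, rows are written top-down, which matches the natural order of a paged database query; use a positive height if you prefer the standard bottom-up layout and can produce rows in reverse.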

Related

Stream of short[]

Hi, I need to calculate the entropy of order m of a file, where m is the number of bits (m <= 16).
So:
H_m(X) = - sum_{i=0}^{2^m - 1} p_{i,m} * log2(p_{i,m})
So, I thought to create an input stream to read the file and then calculate the probability of each sequence composed of m bits.
For m = 8 it's easy, because I consider a byte.
Since m <= 16, I thought to use the primitive type short, save each short of the file in a short[] array, and then manipulate bits using bitwise operators to obtain all the sequences of m bits in the file.
Is this a good idea?
Anyway, I'm not able to create a stream of short. This is what I've done:
public static void main(String[] args) {
    readFile(FILE_NAME_INPUT);
}

public static void readFile(String filename) {
    short[] buffer = null;
    File a_file = new File(filename);
    try {
        File file = new File(filename);
        FileInputStream fis = new FileInputStream(filename);
        DataInputStream dis = new DataInputStream(fis);
        int length = (int) file.length() / 2;
        buffer = new short[length];
        int count = 0;
        while (dis.available() > 0 && count < length) {
            buffer[count] = dis.readShort();
            count++;
        }
        System.out.println("length=" + length);
        System.out.println("count=" + count);
        for (int i = 0; i < buffer.length; i++) {
            System.out.println("buffer[" + i + "]: " + buffer[i]);
        }
        fis.close();
    }
    catch (EOFException eof) {
        System.out.println("EOFException: " + eof);
    }
    catch (FileNotFoundException fe) {
        System.out.println("FileNotFoundException: " + fe);
    }
    catch (IOException ioe) {
        System.out.println("IOException: " + ioe);
    }
}
But I lose a byte when the file length is odd, and I don't think this is the best way to proceed.
This is what I think to do using bitwise operators:
int[] list = new int[l];
foreach n in buffer {
    for (int i = 16 - m; i >= 0; i -= m) {
        list.add( (n >> i) & (2^m - 1) );  // 2^m - 1 meaning the m-bit mask, (1 << m) - 1
    }
}
I'm assuming in this case to use shorts.
If I use bytes, how can I do a cycle like that for m > 8?
That cycle doesn't work, because I would have to concatenate multiple bytes, each time varying the number of bits to be joined...
Any ideas?
Thanks
I think you just need to have a byte array:
public static void readFile(String filename) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    try {
        FileInputStream fis = new FileInputStream(filename);
        int b;  // read() returns an int so that -1 can signal end of file
        while ((b = fis.read()) != -1) {
            outputStream.write(b);
        }
        byte[] byteData = outputStream.toByteArray();
        fis.close();
    }
    catch (IOException ioe) {
        System.out.println("IOException: " + ioe);
    }
}
Then you can manipulate byteData as per your bitwise operations.
--
If you want to work with shorts, you can combine bytes read this way:
short[] buffer = new short[byteData.length / 2 + byteData.length % 2];
int j = 0;
for (int i = 0; i + 1 < byteData.length; i += 2) {
    // mask with 0xFF so the low byte isn't sign-extended before the OR
    buffer[j] = (short) (((byteData[i] & 0xFF) << 8) | (byteData[i + 1] & 0xFF));
    j++;
}
To handle an odd trailing byte, place it (zero-extended) in the last slot:
if ((byteData.length % 2) == 1)
    buffer[buffer.length - 1] = (short) (byteData[byteData.length - 1] & 0xFF);
With the sizing above, that last slot exists exactly when the length is odd (after the loop, j == buffer.length - 1 in that case), so it is free to use.
Then manipulate buffer.
The second approach, working directly with bytes, is more involved; it's a question of its own. So try the above.
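For the m > 8 case above, one hedged alternative (my own sketch, not part of the original answer) is to skip shorts entirely and keep a small bit buffer: append each byte's 8 bits, and emit an m-bit symbol whenever at least m bits have accumulated, so byte boundaries stop mattering:
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BitSymbols {
    // Counts occurrences of each m-bit symbol in a file (1 <= m <= 16), streaming.
    public static long[] countSymbols(String filename, int m) throws IOException {
        long[] counts = new long[1 << m];
        try (InputStream in = new BufferedInputStream(new FileInputStream(filename))) {
            int bitBuffer = 0; // pending bits, kept in the low end of the int
            int bitCount = 0;  // how many pending bits are valid
            int b;
            while ((b = in.read()) != -1) {
                bitBuffer = (bitBuffer << 8) | b; // append 8 new bits
                bitCount += 8;
                while (bitCount >= m) {
                    counts[(bitBuffer >>> (bitCount - m)) & ((1 << m) - 1)]++;
                    bitCount -= m;
                }
                bitBuffer &= (1 << bitCount) - 1; // drop the consumed high bits
            }
            // any leftover bits (< m) at end of file are discarded here
        }
        return counts;
    }

    // H_m(X) = - sum over i of p_i * log2(p_i)
    public static double entropy(long[] counts) {
        long total = 0;
        for (long c : counts) total += c;
        double h = 0;
        for (long c : counts) {
            if (c == 0) continue;
            double p = (double) c / total;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }
}
Since bitCount never exceeds 23 (at most m-1 = 15 pending bits plus the 8 just read), the buffer always fits in an int.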

Android: how to have random access from an InputStream?

I have an InputStream, and the corresponding file name and size.
I need to access/read some random (increasing) positions in the InputStream. These positions are stored in an integer array (named offsets).
InputStream inputStream = ...
String fileName = ...
int fileSize = (int) ...
int[] offsets = new int[]{...}; // the random (increasing) offsets array
Now, given an InputStream, I've found only two possible solutions to jump to random (increasing) positions of the file.
The first one is to use the skip() method of the InputStream (note that I actually use BufferedInputStream, since I will need to mark() and reset() the file pointer).
//Open a BufferedInputStream:
BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);
byte[] bytes = new byte[1];
int curFilePointer = 0;
long numBytesSkipped = 0;
long numBytesToSkip = 0;
int numBytesRead = 0;
//Check the file size:
if ( fileSize < offsets[offsets.length - 1] ) { // the last (biggest) offset is bigger than the file size...
    //Debug:
    Log.d(TAG, "The file is too small!\n");
    return;
}
for (int i = 0, k = 0; i < offsets.length; i++, k = 0) { // for each offset I have to jump...
    try {
        //Jump to the offset [i]:
        while ( (curFilePointer < offsets[i]) && (k < 10) ) { // until the correct offset is reached (at most 10 tries)
            numBytesToSkip = offsets[i] - curFilePointer;
            numBytesSkipped = bufferedInputStream.skip(numBytesToSkip);
            curFilePointer += numBytesSkipped; // move the file pointer forward
            //Debug:
            Log.d(TAG, "FP: " + curFilePointer + "\n");
            k++;
        }
        if ( curFilePointer != offsets[i] ) { // it did NOT jump properly... (what's going on?!)
            //Debug:
            Log.d(TAG, "InputStream.skip() DID NOT JUMP PROPERLY!!!\n");
            break;
        }
        //Read the content of the file at the offset [i]:
        numBytesRead = bufferedInputStream.read(bytes, 0, bytes.length);
        curFilePointer += numBytesRead; // move the file pointer forward
        //Debug:
        Log.d(TAG, "READ [" + curFilePointer + "]: " + bytes[0] + "\n");
    }
    catch ( IOException e ) {
        e.printStackTrace();
        break;
    }
    catch ( IndexOutOfBoundsException e ) {
        e.printStackTrace();
        break;
    }
}
//Close the BufferedInputStream:
bufferedInputStream.close();
The problem is that, during my tests, for some (usually big) offsets, it has cycled 5 or more times before skipping the correct number of bytes. Is it normal? And, above all, can/should I trust skip()? (That is: are 10 cycles enough to be SURE it will ALWAYS arrive at the correct offset?)
The only alternative I've found is creating a RandomAccessFile from the InputStream, through File.createTempFile(prefix, suffix, directory) and the following function.
public static RandomAccessFile toRandomAccessFile(InputStream inputStream, File tempFile, int fileSize) throws IOException {
    RandomAccessFile randomAccessFile = new RandomAccessFile(tempFile, "rw");
    byte[] buffer = new byte[fileSize];
    int numBytesRead = 0;
    while ( (numBytesRead = inputStream.read(buffer)) != -1 ) {
        randomAccessFile.write(buffer, 0, numBytesRead);
    }
    randomAccessFile.seek(0);
    return randomAccessFile;
}
Having a RandomAccessFile is actually a much better solution, but the performance is far worse (above all because I will have more than a single file).
EDIT: Using byte[] buffer = new byte[fileSize] speeds up the RandomAccessFile creation a lot!
//Create a temporary RandomAccessFile:
File tempFile = File.createTempFile(fileName, null, context.getCacheDir());
RandomAccessFile randomAccessFile = toRandomAccessFile(inputStream, tempFile, fileSize);
byte[] bytes = new byte[1];
int numBytesRead = 0;
//Check the file size:
if ( fileSize < offsets[offsets.length - 1] ) { // the last (biggest) offset is bigger than the file size...
    //Debug:
    Log.d(TAG, "The file is too small!\n");
    return;
}
for (int i = 0, k = 0; i < offsets.length; i++, k = 0) { // for each offset I have to jump...
    try {
        //Jump to the offset [i]:
        randomAccessFile.seek(offsets[i]);
        //Read the content of the file at the offset [i]:
        numBytesRead = randomAccessFile.read(bytes, 0, bytes.length);
        //Debug:
        Log.d(TAG, "READ [" + (randomAccessFile.getFilePointer() - 4) + "]: " + bytes[0] + "\n");
    }
    catch ( IOException e ) {
        e.printStackTrace();
        break;
    }
    catch ( IndexOutOfBoundsException e ) {
        e.printStackTrace();
        break;
    }
}
//Delete the temporary RandomAccessFile:
randomAccessFile.close();
tempFile.delete();
Now, is there a better (or more elegant) solution to have a "random" access from an InputStream?
It's a bit unfortunate you have an InputStream to begin with, but in this situation buffering the stream in a file is of no use if you are always skipping forward. And you don't have to count the number of times you have called skip; that's not really of interest.
What you do have to check is whether the stream has already ended, to prevent an infinite loop. Going by the source of the default skip() implementation, I'd say you'll have to keep calling skip() until it returns 0; that will indicate the end of the stream has been reached. The JavaDoc is a bit unclear about this for my taste.
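A hedged sketch of that advice (my own illustration, not the answerer's code): loop on skip(), and when it returns 0, fall back to a single-byte read() to tell a short skip apart from end-of-stream.
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    public static void skipFully(InputStream in, long n) throws IOException {
        while (n > 0) {
            long skipped = in.skip(n);
            if (skipped > 0) {
                n -= skipped;
            } else if (in.read() == -1) {
                // skip() made no progress and read() sees EOF: the stream ended early
                throw new EOFException(n + " bytes left to skip");
            } else {
                n--; // the single read() consumed one byte
            }
        }
    }
}
This is in the same spirit as utilities like DataInputStream.skipBytes() or Guava's ByteStreams.skipFully(), if you'd rather not roll your own.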
You can't. An InputStream is a stream, that is to say a sequential construct. Your question embodies a contradiction in terms.

Performance of MappedByteBuffer vs ByteBuffer

I'm trying to do a few performance enhancements and am looking to use memory mapped files for writing data. I did a few tests and surprisingly, MappedByteBuffer seems slower than allocating direct buffers. I'm not able to clearly understand why this would be the case. Can someone please hint at what could be going on behind the scenes? Below are my test results:
I'm allocating 32 KB buffers. I had already created the files, 3 GB each, before starting the tests, so growing the file isn't the issue.
I'm adding the code that I used for this performance test. Any input / explanation about this behavior is much appreciated.
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;

public class MemoryMapFileTest {

    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        for (int i = 0; i < 10; i++) {
            runTest();
        }
    }

    private static void runTest() throws IOException {
        FileChannel ch1 = null;
        FileChannel ch2 = null;
        ch1 = new RandomAccessFile(new File("S:\\MMapTest1.txt"), "rw").getChannel();
        ch2 = new RandomAccessFile(new File("S:\\MMapTest2.txt"), "rw").getChannel();
        FileWriter fstream = new FileWriter("S:\\output.csv", true);
        BufferedWriter out = new BufferedWriter(fstream);
        int[] numberofwrites = {1, 10, 100, 1000, 10000, 100000};
        //int n = 10000;
        try {
            for (int j = 0; j < numberofwrites.length; j++) {
                int n = numberofwrites[j];
                long estimatedTime = 0;
                long mappedEstimatedTime = 0;
                for (int i = 0; i < n; i++) {
                    byte b = (byte) Math.random();
                    long allocSize = 1024 * 32;
                    estimatedTime += directAllocationWrite(allocSize, b, ch1);
                    mappedEstimatedTime += mappedAllocationWrite(allocSize, b, i, ch2);
                }
                double avgDirectEstTime = (double) estimatedTime / n;
                double avgMapEstTime = (double) mappedEstimatedTime / n;
                out.write(n + "," + avgDirectEstTime / 1000000 + "," + avgMapEstTime / 1000000);
                out.write("," + ((double) estimatedTime / 1000000) + "," + ((double) mappedEstimatedTime / 1000000));
                out.write("\n");
                System.out.println("Avg Direct alloc and write: " + estimatedTime);
                System.out.println("Avg Mapped alloc and write: " + mappedEstimatedTime);
            }
        } finally {
            out.write("\n\n");
            if (out != null) {
                out.flush();
                out.close();
            }
            if (ch1 != null) {
                ch1.close();
            } else {
                System.out.println("ch1 is null");
            }
            if (ch2 != null) {
                ch2.close();
            } else {
                System.out.println("ch2 is null");
            }
        }
    }

    private static long directAllocationWrite(long allocSize, byte b, FileChannel ch1) throws IOException {
        long directStartTime = System.nanoTime();
        ByteBuffer byteBuf = ByteBuffer.allocateDirect((int) allocSize);
        byteBuf.put(b);
        ch1.write(byteBuf);
        return System.nanoTime() - directStartTime;
    }

    private static long mappedAllocationWrite(long allocSize, byte b, int iteration, FileChannel ch2) throws IOException {
        long mappedStartTime = System.nanoTime();
        MappedByteBuffer mapBuf = ch2.map(MapMode.READ_WRITE, iteration * allocSize, allocSize);
        mapBuf.put(b);
        return System.nanoTime() - mappedStartTime;
    }
}
You're testing the wrong thing. This is not how to write the code in either case. You should allocate the buffer once, and just keep updating its contents. You're including allocation time in the write time. Not valid.
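For instance, here is a hedged sketch of the corrected measurement (illustrative code, not the original poster's): the direct buffer is allocated once outside the timed region, and only the update and the write are timed.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ReuseBufferTest {
    public static void main(String[] args) throws IOException {
        try (FileChannel ch = new RandomAccessFile("MMapTest1.txt", "rw").getChannel()) {
            ByteBuffer buf = ByteBuffer.allocateDirect(32 * 1024); // allocated once
            long total = 0;
            int n = 10000;
            for (int i = 0; i < n; i++) {
                long start = System.nanoTime();
                buf.clear();          // reuse: just reset position and limit
                buf.put((byte) i);
                ch.write(buf);        // timed work is only the update and the write
                total += System.nanoTime() - start;
            }
            System.out.println("avg ns per write: " + total / n);
        }
    }
}
The same applies on the mapped side: map a large region once and write into it repeatedly, rather than calling map() inside the loop.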
Swapping data to disk is the main reason for MappedByteBuffer to be slower than DirectByteBuffer.
The cost of allocation and deallocation is high with direct buffers, including MappedByteBuffer, and that cost accrues to both examples. The only remaining difference is the write to disk, which happens with the MappedByteBuffer but not with the direct ByteBuffer.

Generated integers to a binary file

I have an assignment question that I could not get the final answer to.
The question was:
Write a program that will write 100 randomly generated
integers to a binary file using the writeInt(int) method in
DataOutputStream. Close the file. Open the file using a
DataInputStream and a BufferedInputStream. Read the integer
values as if the file contained an unspecified number (ignore
the fact that you wrote the file) and report the sum and average
of the numbers.
I believe I've done the first part of the question (writing to the file), but I don't know how to report the sum.
This is what I have so far:
import java.io.*;

public class CreateBinaryIO {
    public static void main(String[] args) throws IOException {
        DataOutputStream output = new DataOutputStream(new FileOutputStream("myData.dat"));
        int numOfRec = 0 + (int) (Math.random() * (100 - 0 + 1));
        int[] counts = new int[100];
        for (int i = 0; i <= 100; i++) {
            output.writeInt(numOfRec);
            counts[i] += numOfRec;
        } // Loop i closed
        output.close();
    }
}
This is the ReadBinaryIO class:
import java.io.*;

public class ReadBinaryIO {
    public static void main(String[] args) throws IOException {
        DataInputStream input = new DataInputStream(new BufferedInputStream(new FileInputStream("myData.dat")));
        int value = input.readInt();
        System.out.println(value + " ");
        input.close();
    }
}
Try to divide the problem into parts to organize your code, and don't forget to flush the OutputStream before you close it.
package javarandomio;

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Random;

public class JavaRandomIO {

    public static void main(String[] args) {
        writeFile();
        readFile();
    }

    private static void writeFile() {
        DataOutputStream output = null;
        try {
            output = new DataOutputStream(new FileOutputStream("myData.txt"));
            Random rn = new Random();
            for (int i = 0; i <= 100; i++) {
                output.writeInt(rn.nextInt(100));
            }
            output.flush();
            output.close();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
            try {
                output.close();
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
    }

    private static void readFile() {
        DataInputStream input = null;
        try {
            input = new DataInputStream(new FileInputStream("myData.txt"));
            int cont = 0;
            int number = input.readInt();
            while (true) {
                System.out.println("cont =" + cont + " number =" + number);
                if (input.available() == 4) {
                    break;
                }
                number = input.readInt();
                cont++;
            }
            input.close();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
            try {
                input.close();
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
    }
}
int numOfRec = 0 + (int)(Math.random()* (100 - 0 +1));
That only generates a single random number, and in a roundabout way. Look into java.util.Random.nextInt().
int[] counts = new int[100];
for (int i = 0; i <= 100; i++) {
    output.writeInt(numOfRec);
    counts[i] += numOfRec;
} // Loop i closed
That will actually break, because you are using i<=100 instead of just i<100, but I'm not sure why you are populating that array to begin with. Also, that code just writes the same number 101 times. The generation of the random number needs to be within the loop so a new one is generated each time.
As far as reading it back, you can loop through your file like this:
long total = 0;
while (dataInput.available() > 0) {
    total += dataInput.readInt();
}
Try the code below in place of reading just a single integer:
DataInputStream input = new DataInputStream(new BufferedInputStream(new FileInputStream("myData.dat")));
int sum = 0;
for (int i = 0; i <= 100; i++) {
    int value = input.readInt();
    sum += value;
}
System.out.println(sum + " ");
input.close();
Or, if you want to set the length of the for loop dynamically,
create a File object on the myData.dat file and divide the file size by 4, since each int takes 4 bytes:
File file = new File("myData.dat");
int length = (int) (file.length() / 4);
for (int i = 0; i < length; i++)
I submitted the assignment, and I think I got it.
/** Munti ... Sha
 course code (1047W13), assignment 5, question 1, 25/03/2013.
 This file reads the integer values as if the file contained an unspecified number (ignoring
 the fact that you wrote the file) and reports the sum and average of the numbers.
*/
import java.io.*;

public class ReadBinaryIO {
    public static void main(String[] args) throws ClassNotFoundException, IOException {
        //call the file to read
        DataInputStream input = new DataInputStream(new BufferedInputStream(new FileInputStream("myData.dat")));
        // total to sum the numbers, count to count loop passes
        long total = 0;
        int count = 0;
        System.out.println("generated 100 numbers are ");
        while (input.available() > 0) {
            int value = input.readInt(); // read each int exactly once
            total += value;
            count++;
            System.out.println(value);
        }
        //print the sum and the average
        System.out.println("The sum is " + total);
        System.out.println("The average is " + total / count);
        input.close();
    }
}
CreateBinaryIO Class:
import java.io.*;
import java.util.Random;

public class CreateBinaryIO { //Create a binary file
    public static void main(String[] args) throws ClassNotFoundException, IOException {
        DataOutputStream output = new DataOutputStream(new FileOutputStream("myData.dat"));
        Random randomno = new Random();
        for (int i = 0; i < 100; i++) {
            output.writeInt(randomno.nextInt(100));
        } // Loop i closed
        output.close();
    }
}

What's the difference between DataOutputStream and ObjectOutputStream?

I'm learning about socket programming in Java. I've seen client/server app examples with some using DataOutputStream, and some using ObjectOutputStream.
What's the difference between the two?
Is there a performance difference?
DataInput/OutputStream generally performs better because it's much simpler. It can only read/write primitive types and Strings.
ObjectInput/OutputStream can read/write any object type as well as primitives. It is less efficient but much easier to use if you want to send complex data.
I would assume that Object*Stream is the best choice until you know that its performance is an issue.
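To illustrate the difference (a minimal sketch of my own, with a made-up Point class): DataOutputStream handles primitives and strings only, while ObjectOutputStream can also round-trip any Serializable object.
import java.io.*;

public class StreamDemo {
    static class Point implements Serializable {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        // DataOutputStream: primitives and UTF strings only
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream("data.bin"))) {
            out.writeInt(42);
            out.writeUTF("hello");
        }
        // ObjectOutputStream: whole Serializable objects (plus primitives)
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("obj.bin"))) {
            out.writeObject(new Point(1, 2));
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("obj.bin"))) {
            Point p = (Point) in.readObject();
            System.out.println(p.x + "," + p.y); // prints 1,2
        }
    }
}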
This might be useful for people still looking for answers several years later... According to my tests on a recent JVM (1.8_51), ObjectOutput/InputStream is surprisingly almost 2x faster than DataOutput/InputStream for reading/writing a huge array of double!
Below are the results for writing a 10 million item array (for 1 million the results are essentially the same). I also included the text format (BufferedWriter/Reader) for the sake of completeness:
TestObjectStream written 10000000 items, took: 409ms, or 24449.8778 items/ms, filesize 80390629b
TestDataStream written 10000000 items, took: 727ms, or 13755.1582 items/ms, filesize 80000000b
TestBufferedWriter written 10000000 items, took: 13700ms, or 729.9270 items/ms, filesize 224486395b
Reading:
TestObjectStream read 10000000 items, took: 250ms, or 40000.0000 items/ms, filesize 80390629b
TestDataStream read 10000000 items, took: 424ms, or 23584.9057 items/ms, filesize 80000000b
TestBufferedWriter read 10000000 items, took: 6298ms, or 1587.8057 items/ms, filesize 224486395b
I believe Oracle has heavily optimized the JVM for using ObjectStreams in recent Java releases, as this is the most common way of writing/reading data (including serialization), and thus it sits on the Java performance critical path.
So it looks like today there's not much reason anymore to use DataStreams. "Don't try to outsmart the JVM"; just use the most straightforward way, which is ObjectStreams :)
Here's the code for the test:
class Generator {
    private int seed = 1235436537;
    double generate(int i) {
        seed = (seed + 1235436537) % 936855463;
        return seed / (i + 1.) / 524323.;
    }
}

class Data {
    public final double[] array;
    public Data(final double[] array) {
        this.array = array;
    }
}

class TestObjectStream {
    public void write(File dest, Data data) {
        try (ObjectOutputStream out = new ObjectOutputStream(new BufferedOutputStream(new FileOutputStream(dest)))) {
            for (int i = 0; i < data.array.length; i++) {
                out.writeDouble(data.array[i]);
            }
        } catch (IOException e) {
            throw new RuntimeIoException(e);
        }
    }
    public void read(File dest, Data data) {
        try (ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(new FileInputStream(dest)))) {
            for (int i = 0; i < data.array.length; i++) {
                data.array[i] = in.readDouble();
            }
        } catch (IOException e) {
            throw new RuntimeIoException(e);
        }
    }
}

class TestDataStream {
    public void write(File dest, Data data) {
        try (DataOutputStream out = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(dest)))) {
            for (int i = 0; i < data.array.length; i++) {
                out.writeDouble(data.array[i]);
            }
        } catch (IOException e) {
            throw new RuntimeIoException(e);
        }
    }
    public void read(File dest, Data data) {
        try (DataInputStream in = new DataInputStream(new BufferedInputStream(new FileInputStream(dest)))) {
            for (int i = 0; i < data.array.length; i++) {
                data.array[i] = in.readDouble();
            }
        } catch (IOException e) {
            throw new RuntimeIoException(e);
        }
    }
}

class TestBufferedWriter {
    public void write(File dest, Data data) {
        try (BufferedWriter out = new BufferedWriter(new FileWriter(dest))) {
            for (int i = 0; i < data.array.length; i++) {
                out.write(Double.toString(data.array[i]));
                out.newLine();
            }
        } catch (IOException e) {
            throw new RuntimeIoException(e);
        }
    }
    public void read(File dest, Data data) {
        try (BufferedReader in = new BufferedReader(new FileReader(dest))) {
            String line = in.readLine();
            int i = 0;
            while (line != null) {
                if (!line.isEmpty()) {
                    data.array[i++] = Double.parseDouble(line);
                }
                line = in.readLine();
            }
        } catch (IOException e) {
            throw new RuntimeIoException(e);
        }
    }
}

@Test
public void testWrite() throws Exception {
    int N = 10000000;
    double[] array = new double[N];
    Generator gen = new Generator();
    for (int i = 0; i < array.length; i++) {
        array[i] = gen.generate(i);
    }
    Data data = new Data(array);
    Map<Class, BiConsumer<File, Data>> subjects = new LinkedHashMap<>();
    subjects.put(TestDataStream.class, new TestDataStream()::write);
    subjects.put(TestObjectStream.class, new TestObjectStream()::write);
    subjects.put(TestBufferedWriter.class, new TestBufferedWriter()::write);
    subjects.forEach((aClass, fileDataBiConsumer) -> {
        File f = new File("test." + aClass.getName());
        long start = System.nanoTime();
        fileDataBiConsumer.accept(f, data);
        long took = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        System.out.println(aClass.getSimpleName() + " written " + N + " items, took: " + took + "ms, or " + String.format("%.4f", (N / (double) took)) + " items/ms, filesize " + f.length() + "b");
    });
}

@Test
public void testRead() throws Exception {
    int N = 10000000;
    double[] array = new double[N];
    Data data = new Data(array);
    Map<Class, BiConsumer<File, Data>> subjects = new LinkedHashMap<>();
    subjects.put(TestDataStream.class, new TestDataStream()::read);
    subjects.put(TestObjectStream.class, new TestObjectStream()::read);
    subjects.put(TestBufferedWriter.class, new TestBufferedWriter()::read);
    subjects.forEach((aClass, fileDataBiConsumer) -> {
        File f = new File("test." + aClass.getName());
        long start = System.nanoTime();
        fileDataBiConsumer.accept(f, data);
        long took = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        System.out.println(aClass.getSimpleName() + " read " + N + " items, took: " + took + "ms, or " + String.format("%.4f", (N / (double) took)) + " items/ms, filesize " + f.length() + "b");
    });
}
DataOutputStream and ObjectOutputStream: when handling basic types, there is no difference apart from the header that ObjectOutputStream creates.
With the ObjectOutputStream class, instances of a class that implements Serializable can be written to the output stream, and can be read back with ObjectInputStream.
DataOutputStream can only handle basic types.
Only objects that implement the java.io.Serializable interface can be written to streams using ObjectOutputStream. Primitive data types can also be written to the stream using the appropriate methods from DataOutput, and Strings can be written using the writeUTF method. DataOutputStream, on the other hand, lets an application write only primitive Java data types to an output stream, in a portable way.
