Hi, I need to calculate the entropy of order m of a file, where m is the number of bits (m <= 16).
So:
H_m(X) = -\sum_{i=0}^{2^m - 1} p_{i,m} \log_2(p_{i,m})
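As a sanity check: for a uniformly random file every m-bit block has probability p_{i,m} = 2^{-m}, so

H_m(X) = -\sum_{i=0}^{2^m - 1} 2^{-m} \log_2(2^{-m}) = 2^m \cdot 2^{-m} \cdot m = m

that is, m bits, the maximum possible value.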
So I thought I would create an input stream to read the file and then calculate the probability of each sequence composed of m bits.
For m = 8 it's easy, because I can consider one byte at a time.
Since m <= 16, I thought of using short as the primitive type, saving each short of the file into a short[] array, and then manipulating bits with bitwise operators to obtain all the m-bit sequences in the file.
Is this a good idea?
Anyway, I'm not able to create a stream of shorts. This is what I've done:
public static void main(String[] args) {
readFile(FILE_NAME_INPUT);
}
public static void readFile(String filename) {
short[] buffer = null;
try {
File file = new File(filename);
FileInputStream fis = new FileInputStream(filename);
DataInputStream dis = new DataInputStream(fis);
int length = (int)file.length() / 2;
buffer = new short[length];
int count = 0;
while(dis.available() > 0 && count < length) {
buffer[count] = dis.readShort();
count++;
}
System.out.println("length=" + length);
System.out.println("count=" + count);
for(int i = 0; i < buffer.length; i++) {
System.out.println("buffer[" + i + "]: " + buffer[i]);
}
fis.close();
}
catch(EOFException eof) {
System.out.println("EOFException: " + eof);
}
catch(FileNotFoundException fe) {
System.out.println("FileNotFoundException: " + fe);
}
catch(IOException ioe) {
System.out.println("IOException: " + ioe);
}
}
But I lose a byte (when the file length is odd) and I don't think this is the best way to proceed.
This is what I'm thinking of doing, using bitwise operators:
List<Integer> list = new ArrayList<>();
for (short n : buffer) {
    for (int i = 16 - m; i >= 0; i -= m) {
        list.add((n >> i) & ((1 << m) - 1)); // (1 << m) - 1 masks the low m bits
    }
}
In this case I'm assuming shorts.
If I use bytes instead, how can I write a loop like that for m > 8?
That loop doesn't work, because I would have to concatenate multiple bytes, each time varying the number of bits to be joined.
Any ideas?
Thanks
I think you just need to have a byte array:
public static void readFile(String filename) {
ByteArrayOutputStream outputStream=new ByteArrayOutputStream();
try {
FileInputStream fis = new FileInputStream(filename);
int b;
while ((b = fis.read()) != -1) {
outputStream.write(b);
}
byte[] byteData=outputStream.toByteArray();
fis.close();
}
catch(IOException ioe) {
System.out.println("IOException: " + ioe);
}
}
Then you can manipulate byteData as per your bitwise operations.
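For example, here is a minimal sketch of the entropy calculation itself (my own illustration, not tested on your data): it walks the file's bit string in consecutive m-bit blocks using a rolling bit buffer, so it works for any m <= 16 whether or not m divides 8. byteData is the array filled above, m = 12 is just an example order, and any trailing bits that don't fill a complete block are dropped.
int m = 12;                                        // example order, m <= 16
int[] counts = new int[1 << m];
long total = 0;
int bitBuffer = 0;
int bitsInBuffer = 0;
for (byte value : byteData) {
    bitBuffer = (bitBuffer << 8) | (value & 0xFF); // append 8 new bits at the bottom
    bitsInBuffer += 8;
    while (bitsInBuffer >= m) {                    // emit complete m-bit blocks
        bitsInBuffer -= m;
        counts[(bitBuffer >>> bitsInBuffer) & ((1 << m) - 1)]++;
        total++;
    }
}
double entropy = 0.0;
for (int c : counts) {
    if (c > 0) {
        double p = (double) c / total;
        entropy -= p * (Math.log(p) / Math.log(2)); // log base 2
    }
}
System.out.println("H_" + m + " = " + entropy + " bits per block");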
--
If you want to work with shorts, you can combine the bytes you read like this:
short[] buffer=new short[(int)(byteData.length/2.)+1];
int j = 0;
for (int i = 0; i < byteData.length - 1; i += 2) {
buffer[j] = (short) (((byteData[i] & 0xFF) << 8) | (byteData[i + 1] & 0xFF)); // mask to avoid sign extension
j++;
}
To handle an odd number of bytes, do this:
if ((byteData.length % 2) == 1) last = (short) (byteData[byteData.length - 1] & 0xFF);
last is a short, so it can be placed in buffer[buffer.length - 1]. I'm not sure whether that last position in buffer is free or occupied; I think it's free, but you need to check j after exiting the loop: if j's value is buffer.length - 1, the slot is available; otherwise there may be a problem.
Then manipulate buffer.
The second approach, working directly with bytes, is more involved; it's a question of its own. So try the approach above.
Related
I cannot mix two .wav audio files. My work:
byte[] bufData1 = null;
byte[] bufData2 = null;
ArrayList<Byte> bufData3 = new ArrayList<Byte>();
Creating two arrays with raw audio data
public void bootloadInputData(String p1, String p2) throws IOException {
bufData1 = bootloadReadFileByte(p1);
bufData2 = bootloadReadFileByte(p2);
System.arraycopy(bufData1, 44, bufData1, 0, (bufData1.length - 44));
System.arraycopy(bufData2, 44, bufData2, 0, (bufData2.length - 44));
}
public byte[] bootloadReadFileByte(String path) throws IOException{
ByteArrayOutputStream out = null;
InputStream input = null;
try{
out = new ByteArrayOutputStream();
input = new BufferedInputStream(new FileInputStream(path));
int data = 0;
while((data = input.read()) != -1){
out.write(data);
}
}
finally{
if(null != input){
input.close();
}
if(null != out){
out.close();
}
}
return out.toByteArray();
}
Mixing the bytes of raw audio data
public void bootloadOutputData() throws IOException {
for(int i = 0; i < ((bufData1.length + bufData2.length) / 4); i += 4) {
if(i < bufData1.length){
bufData3.add(bufData1[i]);
bufData3.add(bufData1[i+1]);
bufData3.add(bufData1[i+2]);
bufData3.add(bufData1[i+3]);
}
if(i < bufData2.length){
bufData3.add(bufData2[i]);
bufData3.add(bufData2[i+1]);
bufData3.add(bufData2[i+2]);
bufData3.add(bufData2[i+3]);
}
}
}
Create a new file, fill in the header and raw audio data.
private void bootloadCreateWaveMix(String p1, String p2, String p3) throws IOException {
int size1 = 0;
int size2 = 0;
FileInputStream fis1 = null;
FileInputStream fis2 = null;
try {
fis1 = new FileInputStream(p1);
fis2 = new FileInputStream(p2);
size1 = fis1.available();
size2 = fis2.available();
} finally {
if(fis1 != null){
fis1.close();
}
if(fis2 != null){
fis2.close();
}
}
int mNumBytes = (size1 + size2);
DataOutputStream out = null;
try {
out = new DataOutputStream(new FileOutputStream(p3));
writeId(out, "RIFF");
writeInt(out, 36 + mNumBytes);
writeId(out, "WAVE");
writeId(out, "fmt ");
writeInt(out, 16);
writeShort(out, (short) 1);
writeShort(out, (short) 4);
writeInt(out, (int) 44100);
writeInt(out, 2 * 44100 * 16 / 8);
writeShort(out, (short)(2 * 16 / 8));
writeShort(out, (short) 16);
writeId(out, "data");
writeInt(out, mNumBytes);
out.write(toByteArray(bufData3));
} finally {
if(out != null){
out.close();
}
}
}
private static void writeId(OutputStream out, String id) throws IOException {
for (int i = 0; i < id.length(); i++) out.write(id.charAt(i));
}
private static void writeInt(OutputStream out, int val) throws IOException {
out.write(val >> 0);
out.write(val >> 8);
out.write(val >> 16);
out.write(val >> 24);
}
private static void writeShort(OutputStream out, short val) throws IOException {
out.write(val >> 0);
out.write(val >> 8);
}
public static byte[] toByteArray(ArrayList<Byte> in) {
byte[] data = new byte[in.size()];
for (int i = 0; i < data.length; i++) {
data[i] = (byte) in.get(i);
}
return data;
}
Question:
This code does not work correctly: it creates a file that the computer cannot play, but the device can. Playback quality is bad; there is some kind of interference at the end of the merged files. Also, playback stops when the first file ends, even if the second file is longer than the first. Another problem concerns the channels: the idea is to mix two stereo files, but in the header I declare 4 channels even though it should be 2. The files will always be 44100 Hz / 16-bit / stereo.
If I understand correctly, you want to do the following:
Given 2 input WAV files, mix them together to a single WAV file.
The contents of the output will be the input files played at the same time, not one after the other.
The length of the new file will be the length of the longest of the input files.
All files, input and output, are 16 bit, stereo 44100Hz.
If that's the case, here are (some of) your mistakes:
You need to parse the incoming files so that you don't read their headers as audio data (Do not skip this step just because you already know the format of the audio. You need to read the headers to confirm the data format and accurately determine the number of samples in your input. Also, note that 2/16/44100 WAV files can have different size headers because they can contain various chunks, so you can't just skip over X bytes and then read the file -- you must parse the header!).
If the WAV files are all 16-bit, you need to convert the incoming data from bytes to shorts (note, this is not a simple typecasting -- you must pack 2 bytes into each short. I believe you can use a DataInputStream for this, but be sure to take endianness into account -- WAV files are little-endian and Java is big-endian). Once you've got the shorts representing your samples, average the shorts from the separate files to do the mixing. Your averaged values must then be converted back to bytes (DataOutputStream) to save the resulting file. When you've run out of data from one file, substitute zero. (See the sketch after this list.)
Your calculation of numBytes is incorrect -- it is not the sum of raw bytes in both files, but a somewhat more complex calculation. In your case, you want it to be equal to something like this:
n1 = number of samples in file 1
n2 = number of samples in file 2
n = MAX(n1, n2)
numBytes = n * (number of channels) * (number of bytes per channel) = n * 2 * 2
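Putting points 2 and 3 together, the mixing step could look roughly like this sketch (the names are illustrative, and it assumes the headers have already been parsed away, so data1 and data2 hold only raw 16-bit little-endian stereo sample bytes):
// Sketch: mix two blocks of 16-bit little-endian PCM by averaging samples.
// The result is as long as the longer input; the shorter one is padded
// with silence (zero samples).
static byte[] mixPcm16(byte[] data1, byte[] data2) {
    int outLen = Math.max(data1.length, data2.length);
    byte[] out = new byte[outLen];
    for (int i = 0; i + 1 < outLen; i += 2) {
        short s1 = sampleAt(data1, i);
        short s2 = sampleAt(data2, i);
        short mixed = (short) ((s1 + s2) / 2);     // average to avoid clipping
        out[i] = (byte) (mixed & 0xFF);            // low byte first: little-endian
        out[i + 1] = (byte) ((mixed >> 8) & 0xFF);
    }
    return out;
}

static short sampleAt(byte[] data, int i) {
    if (i + 1 >= data.length) return 0;            // substitute zero (silence)
    return (short) ((data[i] & 0xFF) | (data[i + 1] << 8));
}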
I strongly urge you to consider using a library like JMF to tackle 1 & 2.
I have an InputStream, and the relative file name and size.
I need to access/read some random (increasing) positions in the InputStream. These positions are stored in an integer array (named offsets).
InputStream inputStream = ...
String fileName = ...
int fileSize = (int) ...
int[] offsets = new int[]{...}; // the random (increasing) offsets array
Now, given an InputStream, I've found only two possible solutions to jump to random (increasing) positions of the file.
The first one is to use the skip() method of the InputStream (note that I actually use BufferedInputStream, since I will need to mark() and reset() the file pointer).
//Open a BufferedInputStream:
BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);
byte[] bytes = new byte[1];
int curFilePointer = 0;
long numBytesSkipped = 0;
long numBytesToSkip = 0;
int numBytesRead = 0;
//Check the file size:
if ( fileSize < offsets[offsets.length-1] ) { // the last (largest) offset is greater than the file size...
//Debug:
Log.d(TAG, "The file is too small!\n");
return;
}
for (int i=0, k=0; i < offsets.length; i++, k=0) { // for each offset I have to jump...
try {
//Jump to the offset [i]:
while( (curFilePointer < offsets[i]) && (k < 10) ) { // until the correct offset is reached (at most 10 tries)
numBytesToSkip = offsets[i] - curFilePointer;
numBytesSkipped = bufferedInputStream.skip(numBytesToSkip);
curFilePointer += numBytesSkipped; // move the file pointer forward
//Debug:
Log.d(TAG, "FP: " + curFilePointer + "\n");
k++;
}
if ( curFilePointer != offsets[i] ) { // it did NOT jump properly... (what's going on?!)
//Debug:
Log.d(TAG, "InputStream.skip() DID NOT JUMP PROPERLY!!!\n");
break;
}
//Read the content of the file at the offset [i]:
numBytesRead = bufferedInputStream.read(bytes, 0, bytes.length);
curFilePointer += numBytesRead; // move the file pointer forward
//Debug:
Log.d(TAG, "READ [" + curFilePointer + "]: " + bytes[0] + "\n");
}
catch ( IOException e ) {
e.printStackTrace();
break;
}
catch ( IndexOutOfBoundsException e ) {
e.printStackTrace();
break;
}
}
//Close the BufferedInputStream:
bufferedInputStream.close();
The problem is that, during my tests, for some (usually big) offsets, it has cycled 5 or more times before skipping the correct number of bytes. Is that normal? And, above all, can/should I trust skip()? (That is: are 10 cycles enough to be SURE it will ALWAYS arrive at the correct offset?)
The only alternative I've found is to create a RandomAccessFile from the InputStream, through File.createTempFile(prefix, suffix, directory) and the following function.
public static RandomAccessFile toRandomAccessFile(InputStream inputStream, File tempFile, int fileSize) throws IOException {
RandomAccessFile randomAccessFile = new RandomAccessFile(tempFile, "rw");
byte[] buffer = new byte[fileSize];
int numBytesRead = 0;
while ( (numBytesRead = inputStream.read(buffer)) != -1 ) {
randomAccessFile.write(buffer, 0, numBytesRead);
}
randomAccessFile.seek(0);
return randomAccessFile;
}
Having a RandomAccessFile is actually a much better solution, but the performance is far worse (above all because I will have more than a single file).
EDIT: Using byte[] buffer = new byte[fileSize] speeds up the RandomAccessFile creation a lot!
//Create a temporary RandomAccessFile:
File tempFile = File.createTempFile(fileName, null, context.getCacheDir());
RandomAccessFile randomAccessFile = toRandomAccessFile(inputStream, tempFile, fileSize);
byte[] bytes = new byte[1];
int numBytesRead = 0;
//Check the file size:
if ( fileSize < offsets[offsets.length-1] ) { // the last (largest) offset is greater than the file size...
//Debug:
Log.d(TAG, "The file is too small!\n");
return;
}
for (int i=0, k=0; i < offsets.length; i++, k=0) { // for each offset I have to jump...
try {
//Jump to the offset [i]:
randomAccessFile.seek(offsets[i]);
//Read the content of the file at the offset [i]:
numBytesRead = randomAccessFile.read(bytes, 0, bytes.length);
//Debug:
Log.d(TAG, "READ [" + (randomAccessFile.getFilePointer()-4) + "]: " + bytes[0] + "\n");
}
catch ( IOException e ) {
e.printStackTrace();
break;
}
catch ( IndexOutOfBoundsException e ) {
e.printStackTrace();
break;
}
}
//Delete the temporary RandomAccessFile:
randomAccessFile.close();
tempFile.delete();
Now, is there a better (or more elegant) solution to have a "random" access from an InputStream?
It's a bit unfortunate you have an InputStream to begin with, but in this situation buffering the stream in a file is of no use as long as you only ever skip forward. And you don't have to count the number of times you have called skip(); that's not really of interest.
What you do have to check is whether the stream has already ended, to prevent an infinite loop. Going by the source of the default skip() implementation, I'd say you have to keep calling skip() until it returns 0; that indicates the end of the stream has been reached. The JavaDoc is a bit unclear about this, for my taste.
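A defensive helper along those lines might look like this (a sketch, not an official idiom: it treats skip() returning 0 as a cue to probe with read(), and read() returning -1 as the end of the stream):
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Sketch: skip exactly n bytes, or throw EOFException if the stream ends first.
static void skipFully(InputStream in, long n) throws IOException {
    while (n > 0) {
        long skipped = in.skip(n);
        if (skipped > 0) {
            n -= skipped;
        } else if (in.read() == -1) {
            // skip() may legitimately return 0; a read() of -1 proves EOF.
            throw new EOFException(n + " bytes left to skip at end of stream");
        } else {
            n--; // the probing read consumed one byte
        }
    }
}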
You can't. An InputStream is a stream, that is to say a sequential construct. Your question embodies a contradiction in terms.
I need to create a BMP (bitmap) image from a database using Java. The problem is that I have huge sets of integers ranging from 10 to 100.
I would like to represent the whole database as a BMP. The amount of data, 10000x10000 values per table (and growing), exceeds what I can handle with int arrays.
Is there a way to write the BMP directly to the hard drive, pixel by pixel, so I don't run out of memory?
A file would work (I definitely wouldn't do a per-pixel call; you'd be waiting hours for the result). You just need a buffer. Break the application apart along these lines ->
int[] buffer = new int[BUFFER_SIZE];
ResultSet data = ....; //Forward paging result set
while(true)
{
for(int i = 0; i < BUFFER_SIZE; i++)
{
//Read result set into buffer
}
//write buffer to cache (HEAP/File whatever)
if(resultSetDone)
break;
}
Read the documentation on your database driver, but any major database is going to optimize your ResultSet object so you can use a cursor and not worry about memory.
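For example, here is a sketch of such a streaming read (JDBC; the exact behavior is driver-dependent, and the table/column names are made up):
import java.sql.*;

// Sketch: a forward-only, read-only ResultSet with a fetch-size hint
// streams rows in chunks instead of materializing the whole table.
static void streamValues(String jdbcUrl) throws SQLException {
    try (Connection conn = DriverManager.getConnection(jdbcUrl);
         Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                               ResultSet.CONCUR_READ_ONLY)) {
        stmt.setFetchSize(10_000); // hint: rows per round trip
        try (ResultSet rs = stmt.executeQuery("SELECT value FROM pixels")) {
            while (rs.next()) {
                int v = rs.getInt(1);
                // feed v into the row/pixel buffer here
            }
        }
    }
}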
All that being said... an int[10000][10000] isn't why you're running out of memory. It's probably what you're doing with those values and your algorithm. Example:
public class Test
{
public static void main(String... args)
{
int[][] ints = new int[10000][];
System.out.println(System.currentTimeMillis() + " Start");
for(int i = 0; i < 10000; i++)
{
ints[i] = new int[10000];
for(int j = 0; j < 10000; j++)
ints[i][j] = i*j % Integer.MAX_VALUE / 2;
System.out.print(i);
}
System.out.println();
System.out.println(Integer.valueOf(ints[500][999]) + " <- value");
System.out.println(System.currentTimeMillis() + " Stop");
}
}
Output ->
1344554718676 Start
//not even listing this
249750 <- value
1344554719322 Stop
Edit -- Or, if I misinterpreted your question, try this ->
http://www.java2s.com/Code/Java/Database-SQL-JDBC/LoadimagefromDerbydatabase.htm
I see... well take a look around, I'm rusty but this seems to be a way to do it. I'd double check my buffering...
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
public class Test
{
public static void main(String... args)
{
// 2 ^ 24 bytes, streams can be bigger, but this works...
int size = Double.valueOf((Math.floor((Math.pow(2.0, 24.0))))).intValue();
byte[] bytes = new byte[size];
for(int i = 0; i < size; i++)
bytes[i] = (byte) (i % 255);
ByteArrayInputStream stream = new ByteArrayInputStream(bytes);
File file = new File("test.io"); //kill the hard disk
//Crappy error handling, you'd actually want to catch exceptions and recover
BufferedInputStream in = new BufferedInputStream(stream);
BufferedOutputStream out = null;
byte[] buffer = new byte[1024 * 8];
try
{
//You do need to check the buffer as it will have crap in it on the last read
out = new BufferedOutputStream(new FileOutputStream(file));
while(in.available() > 0)
{
int total = in.read(buffer);
out.write(buffer, 0, total);
}
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
if(out != null)
try
{
out.flush();
out.close();
}
catch (IOException e)
{
e.printStackTrace();
}
}
System.out.println(System.currentTimeMillis() + " Start");
System.out.println();
System.out.println(Integer.valueOf(bytes[bytes.length - 1]) + " <- value");
System.out.println("File size is-> " + file.length());
System.out.println(System.currentTimeMillis() + " Stop");
}
}
You could save it as a file, which is conceptually just a sequence of bytes.
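Sticking with that idea, here is a sketch of streaming a 24-bit BMP to disk one row at a time, so only a single row buffer lives in memory (the grayscale mapping and valueFromDatabase are stand-ins for your real data source):
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class BmpStreamWriter {
    public static void main(String[] args) throws IOException {
        int width = 10000, height = 10000;
        int rowSize = (width * 3 + 3) & ~3;          // rows padded to 4 bytes
        int imageSize = rowSize * height;
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream("db.bmp"))) {
            writeHeader(out, width, height, imageSize);
            byte[] row = new byte[rowSize];
            for (int y = height - 1; y >= 0; y--) {  // BMP rows are bottom-up
                for (int x = 0; x < width; x++) {
                    // Hypothetical mapping: values 10..100 scaled to 0..255 gray.
                    int v = valueFromDatabase(x, y);
                    byte gray = (byte) ((v - 10) * 255 / 90);
                    row[x * 3] = gray;               // blue
                    row[x * 3 + 1] = gray;           // green
                    row[x * 3 + 2] = gray;           // red
                }
                out.write(row);
            }
        }
    }

    static int valueFromDatabase(int x, int y) {
        return 10 + (x + y) % 91;                    // stand-in for a DB read
    }

    static void writeHeader(OutputStream out, int w, int h, int imageSize) throws IOException {
        out.write('B'); out.write('M');
        writeIntLE(out, 54 + imageSize);             // file size
        writeIntLE(out, 0);                          // reserved
        writeIntLE(out, 54);                         // pixel data offset
        writeIntLE(out, 40);                         // BITMAPINFOHEADER size
        writeIntLE(out, w);
        writeIntLE(out, h);
        out.write(1); out.write(0);                  // planes = 1
        out.write(24); out.write(0);                 // bits per pixel = 24
        writeIntLE(out, 0);                          // no compression
        writeIntLE(out, imageSize);
        writeIntLE(out, 2835); writeIntLE(out, 2835); // ~72 DPI
        writeIntLE(out, 0); writeIntLE(out, 0);      // palette info
    }

    static void writeIntLE(OutputStream out, int v) throws IOException {
        out.write(v); out.write(v >> 8); out.write(v >> 16); out.write(v >> 24);
    }
}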
I'm learning about socket programming in Java. I've seen client/server app examples with some using DataOutputStream, and some using ObjectOutputStream.
What's the difference between the two?
Is there a performance difference?
DataInput/OutputStream generally performs better because it's much simpler. It can only read/write primitive types and Strings.
ObjectInput/OutputStream can read/write any object type as well as primitives. It is less efficient, but much easier to use if you want to send complex data.
I would assume that the Object*Stream is the best choice until you know that its performance is an issue.
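For illustration, here is a minimal side-by-side sketch of the two APIs (file names are arbitrary):
import java.io.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class StreamComparison {
    public static void main(String[] args) throws Exception {
        // DataOutputStream: primitives, read back in exactly the same order.
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream("data.bin"))) {
            out.writeInt(42);
            out.writeUTF("hello");
        }
        try (DataInputStream in = new DataInputStream(new FileInputStream("data.bin"))) {
            System.out.println(in.readInt() + " " + in.readUTF());
        }
        // ObjectOutputStream: a whole Serializable object graph in one call.
        List<String> payload = new ArrayList<>(Arrays.asList("a", "b"));
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("obj.bin"))) {
            out.writeObject(payload);
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("obj.bin"))) {
            System.out.println(in.readObject()); // [a, b]
        }
    }
}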
This might be useful for people still looking for answers several years later... According to my tests on a recent JVM (1.8_51), ObjectOutput/InputStream is, surprisingly, almost 2x faster than DataOutput/InputStream for reading/writing a huge array of double!
Below are the results for writing a 10-million-item array (for 1 million the results are essentially the same). I also included the text format (BufferedWriter/Reader) for the sake of completeness:
TestObjectStream written 10000000 items, took: 409ms, or 24449.8778 items/ms, filesize 80390629b
TestDataStream written 10000000 items, took: 727ms, or 13755.1582 items/ms, filesize 80000000b
TestBufferedWriter written 10000000 items, took: 13700ms, or 729.9270 items/ms, filesize 224486395b
Reading:
TestObjectStream read 10000000 items, took: 250ms, or 40000.0000 items/ms, filesize 80390629b
TestDataStream read 10000000 items, took: 424ms, or 23584.9057 items/ms, filesize 80000000b
TestBufferedWriter read 10000000 items, took: 6298ms, or 1587.8057 items/ms, filesize 224486395b
I believe Oracle has heavily optimized the JVM for ObjectStreams in recent Java releases, as this is the most common way of writing/reading data (including serialization), and thus it sits on the Java performance-critical path.
So it looks like today there's not much reason to use DataStreams anymore. "Don't try to outsmart the JVM"; just use the most straightforward way, which is ObjectStreams :)
Here's the code for the test:
class Generator {
private int seed = 1235436537;
double generate(int i) {
seed = (seed + 1235436537) % 936855463;
return seed / (i + 1.) / 524323.;
}
}
class Data {
public final double[] array;
public Data(final double[] array) {
this.array = array;
}
}
class TestObjectStream {
public void write(File dest, Data data) {
try (ObjectOutputStream out = new ObjectOutputStream(new BufferedOutputStream(new FileOutputStream(dest)))) {
for (int i = 0; i < data.array.length; i++) {
out.writeDouble(data.array[i]);
}
} catch (IOException e) {
throw new UncheckedIOException(e); // java.io.UncheckedIOException (Java 8+)
}
}
public void read(File dest, Data data) {
try (ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(new FileInputStream(dest)))) {
for (int i = 0; i < data.array.length; i++) {
data.array[i] = in.readDouble();
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
}
class TestDataStream {
public void write(File dest, Data data) {
try (DataOutputStream out = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(dest)))) {
for (int i = 0; i < data.array.length; i++) {
out.writeDouble(data.array[i]);
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
public void read(File dest, Data data) {
try (DataInputStream in = new DataInputStream(new BufferedInputStream(new FileInputStream(dest)))) {
for (int i = 0; i < data.array.length; i++) {
data.array[i] = in.readDouble();
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
}
class TestBufferedWriter {
public void write(File dest, Data data) {
try (BufferedWriter out = new BufferedWriter(new FileWriter(dest))) {
for (int i = 0; i < data.array.length; i++) {
out.write(Double.toString(data.array[i]));
out.newLine();
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
public void read(File dest, Data data) {
try (BufferedReader in = new BufferedReader(new FileReader(dest))) {
String line = in.readLine();
int i = 0;
while (line != null) {
if(!line.isEmpty()) {
data.array[i++] = Double.parseDouble(line);
}
line = in.readLine();
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
}
@Test
public void testWrite() throws Exception {
int N = 10000000;
double[] array = new double[N];
Generator gen = new Generator();
for (int i = 0; i < array.length; i++) {
array[i] = gen.generate(i);
}
Data data = new Data(array);
Map<Class, BiConsumer<File, Data>> subjects = new LinkedHashMap<>();
subjects.put(TestDataStream.class, new TestDataStream()::write);
subjects.put(TestObjectStream.class, new TestObjectStream()::write);
subjects.put(TestBufferedWriter.class, new TestBufferedWriter()::write);
subjects.forEach((aClass, fileDataBiConsumer) -> {
File f = new File("test." + aClass.getName());
long start = System.nanoTime();
fileDataBiConsumer.accept(f, data);
long took = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
System.out.println(aClass.getSimpleName() + " written " + N + " items, took: " + took + "ms, or " + String.format("%.4f", (N / (double)took)) + " items/ms, filesize " + f.length() + "b");
});
}
@Test
public void testRead() throws Exception {
int N = 10000000;
double[] array = new double[N];
Data data = new Data(array);
Map<Class, BiConsumer<File, Data>> subjects = new LinkedHashMap<>();
subjects.put(TestDataStream.class, new TestDataStream()::read);
subjects.put(TestObjectStream.class, new TestObjectStream()::read);
subjects.put(TestBufferedWriter.class, new TestBufferedWriter()::read);
subjects.forEach((aClass, fileDataBiConsumer) -> {
File f = new File("test." + aClass.getName());
long start = System.nanoTime();
fileDataBiConsumer.accept(f, data);
long took = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
System.out.println(aClass.getSimpleName() + " read " + N + " items, took: " + took + "ms, or " + String.format("%.4f", (N / (double)took)) + " items/ms, filesize " + f.length() + "b");
});
}
DataOutputStream and ObjectOutputStream: when handling basic types, there is no difference apart from the header that ObjectOutputStream creates.
With the ObjectOutputStream class, instances of a class that implements Serializable can be written to the output stream, and can be read back with ObjectInputStream.
DataOutputStream can only handle basic types.
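The header difference is easy to observe; this tiny check (file names arbitrary) shows that an empty ObjectOutputStream file already contains the 4-byte stream header (0xAC 0xED 0x00 0x05), while an empty DataOutputStream file has no bytes at all:
import java.io.*;

public class HeaderDemo {
    public static void main(String[] args) throws IOException {
        new ObjectOutputStream(new FileOutputStream("obj.bin")).close();
        new DataOutputStream(new FileOutputStream("data.bin")).close();
        System.out.println(new File("obj.bin").length());  // 4
        System.out.println(new File("data.bin").length()); // 0
    }
}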
Only objects that implement the java.io.Serializable interface can be written to streams using ObjectOutputStream. Primitive data types can also be written to the stream using the appropriate methods from DataOutput, and Strings can be written using the writeUTF method. DataOutputStream, on the other hand, lets an application write only primitive Java data types to an output stream in a portable way.
See the ObjectOutputStream and DataInputStream documentation.
I want to write first a sequence of strings and then a sequence of bytes into a file, using Java. I started by using FileOutputStream because of the array of bytes. After searching the API, I realised that FileOutputStream cannot write Strings, only ints and bytes, so I switched to DataOutputStream. When I run the program, I get an exception. Why?
Here's a portion of my code:
try {
// Create the file
FileOutputStream fos;
DataOutputStream dos; // = new DataOutputStream("compressedfile.ecs_h");
File file= new File("C:\\MyFile.txt");
fos = new FileOutputStream(file);
dos=new DataOutputStream(fos);
/* saves the characters as a dictionary into the file before the binary seq*/
for (int i = 0; i < al.size(); i++) {
String name= al.get(i).name; //gets the string from a global arraylist, don't pay attention to this!
dos.writeChars(name); //saving the name in the file
}
System.out.println("\nIS SUCCESFULLY WRITTEN INTO FILE! ");
dos.writeChars("><");
String strseq;
/*write all elements from the arraylist into a string variable*/
strseq= seq.toString();
System.out.println("sTringSeq: " + strseq);
/*transpose the sequence string into a byte array*/
byte[] data = new byte[strseq.length() / 8];
for (int i = 0; i < data.length; i++) {
data[i] = (byte) Integer.parseInt(strseq.substring(i * 8, (i + 1) * 8), 2);
dos.write(data[i]);
}
dos.flush();
//Close the output stream
dos.close();
} catch(Exception e){}
The problem with your code is that the last for loop was counting over the wrong number of bytes. The code below fixes your problem writing your test data to a file. This works on my machine.
public static void main(String[] args) {
ArrayList<String> al = new ArrayList<String>();
al.add("String1");
al.add("String2");
try {
// Create the file
FileOutputStream fos = new FileOutputStream("MyFile.txt");
DataOutputStream dos = new DataOutputStream(fos);
/* saves the characters as a dictionary into the file before the binary seq */
for (String str : al) {
dos.writeChars(str);
}
System.out.println("\nIS SUCCESFULLY WRITTEN INTO FILE! ");
dos.writeChars("><");
String strseq = "001100111100101000101010111010100100111000000000";
// Ensure that you have a string of the correct size
if (strseq.length() % 8 != 0) {
throw new IllegalStateException(
"Input String is cannot be converted to bytes - wrong size: "
+ strseq.length());
}
int numBytes = strseq.length() / 8;
for (int i = 0; i < numBytes; i++) {
int start = i * 8;
int end = (i + 1) * 8;
byte output = (byte) Integer.parseInt(strseq.substring(start, end), 2);
dos.write(output);
}
dos.writeChars("> Enf of File");
dos.flush();
// Close the output stream
dos.close();
} catch (Exception e) {
e.printStackTrace();
}
}
The approach of writing bytes directly to a text file does have a few problems (I assume it's a text file, since your file name ends with .txt), the most obvious one being that some text editors don't handle/display null characters very well (your last test byte was 00000000, i.e. null). If you want to see the bytes as readable text, you could investigate encoding them with Base64.
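For instance, with java.util.Base64 (in the JDK since Java 8):
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        byte[] raw = {0x33, (byte) 0xCA, 0x00};              // includes a null byte
        String printable = Base64.getEncoder().encodeToString(raw);
        System.out.println(printable);                       // M8oA
        byte[] roundTrip = Base64.getDecoder().decode(printable);
        System.out.println(roundTrip.length);                // 3
    }
}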
Line:
data[i] = (byte) Integer.parseInt(strseq.substring(i * 8, (i + 1) * 8), 2);
looks very suspicious...
Can you provide more details about strseq and its value?
What about this code?
This code:
byte[] data = new byte[strseq.length() / 8];
for (int i = 0; i < data.length; i++) {
data[i] = (byte) Integer.parseInt(strseq.substring(i * 8, (i + 1) * 8), 2);
dos.write(data[i]);
}
becomes
byte[] data = strseq.getBytes();
With the FileWriter class you have a nice abstraction of a file-writing operation.
Maybe this class can help you write your file...
You can substitute the other OutputStreams with this single class. It has all the methods you need to write strings and char arrays to a file.