I am using Android Studio and Oracle Java 8. I am trying to read all the bytes from a file into a byte array. The code below acts like it does not see import java.io.File;
I get the error message:
cannot resolve method getBytesFromFile(java.io.File)
Code:
import java.io.File;
// ...
File path = new File(
Environment.getExternalStorageDirectory().getAbsolutePath()
+ "/zTest-Records/");
path.mkdirs();
try {
recordingFile = File.createTempFile("recording", ".pcm", path);
} catch (IOException e) {
throw new RuntimeException("Couldn't create pcm file", e);
}
// NOTE: The code below gives error message: cannot resolve method 'getBytesFromFile(java.io.File)'
byte[] data = getBytesFromFile(recordingFile);
That function is not defined anywhere in your code; perhaps you copy-pasted this code from somewhere.
A quick Google search points to this link:
https://code.google.com/p/picturesque/source/browse/myClasses/GetBytesFromFile.java?r=1d9332c4c969b4d35847c10f7c83b04c1ccb834f
package myClasses;
import java.util.*;
import java.io.*;
public class GetBytesFromFile {
public static byte[] getBytesFromFile(File file) throws IOException {
InputStream is = new FileInputStream(file);
// Get the size of the file
long length = file.length();
// You cannot create an array using a long type.
// It needs to be an int type.
// Before converting to an int type, check
// to ensure that file is not larger than Integer.MAX_VALUE.
if (length > Integer.MAX_VALUE) {
    // File is too large to fit in a single byte array
    is.close();
    throw new IOException("File is too large: " + file.getName());
}
// Create the byte array to hold the data
byte[] bytes = new byte[(int)length];
// Read in the bytes
int offset = 0;
int numRead = 0;
while (offset < bytes.length
&& (numRead = is.read(bytes, offset, Math.min(bytes.length - offset, 512*1024))) >= 0) {
offset += numRead;
}
// Ensure all the bytes have been read in
if (offset < bytes.length) {
throw new IOException("Could not completely read file "+file.getName());
}
// Close the input stream and return bytes
is.close();
return bytes;
}
}
You probably need to add this class to your project.
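A minimal sketch of the call site once the class is on your classpath (package myClasses as in the source above):

import myClasses.GetBytesFromFile;
// ...
byte[] data = GetBytesFromFile.getBytesFromFile(recordingFile);

Alternatively, if your minimum Android API level is 26+ (or you are on a desktop JVM, Java 7+), java.nio.file.Files.readAllBytes(recordingFile.toPath()) does the same thing in one call.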
Related
I am struggling with finding a solution to write my bytes array to a playable AAC audio file.
From my Flutter.io front-end, I am encoding my .aac audio files as a list of UInt8List and sending them to my Spring-Boot server. There I convert them to a proper byte array and then attempt to write it back to a .aac file, as seen below:
public void writeToAudioFile(ArrayList<Double> audioData) {
byte[] byteArray = new byte[1024];
Iterator<Double> iterator = audioData.iterator();
System.out.println(byteArray);
while (iterator.hasNext()) {
// for some reason my list came in as a list of doubles
// so I am making sure to get these values back to an int
Integer i = iterator.next().intValue();
byteArray[i] = i.byteValue();
}
try {
File someFile = new File("test.aac");
FileOutputStream fos = new FileOutputStream(someFile);
fos.write(byteArray);
fos.flush();
fos.close();
System.out.println("File created");
} catch (Exception e) {
// TODO: handle exception
System.out.println("Error: " + e);
}
}
I am able to write my byte array back to an audio file; however, it is unplayable. So I am wondering if this approach is possible and if my issue lies in Java.
I have been doing extensive research, and I think I may need to declare that this file is a specific type of media file. Or maybe the encoded audio file is corrupt by the time it reaches my server?
Your conversion loop
while (iterator.hasNext()) {
// for some reason my list came in as a list of doubles
// so I am making sure to get these values back to an int
Integer i = iterator.next().intValue();
byteArray[i] = i.byteValue();
}
gets the value i from the iterator and then writes that same value at position i in byteArray, so each value is used both as data and as index; this scrambles your audio bytes and silently drops most of them.
A working function that converts List<Double> to byte[] would look something like this:
byte[] inputToBytes(List<Double> audioData) {
byte[] result = new byte[audioData.size()];
for (int i = 0; i < audioData.size(); i++) {
result[i] = audioData.get(i).byteValue();
}
return result;
}
then you could use it in the writeToAudioFile():
void writeToAudioFile(ArrayList<Double> audioData) {
try (FileOutputStream fos = new FileOutputStream("test.aac")) {
fos.write(inputToBytes(audioData));
System.out.println("File created");
} catch (Exception e) {
// TODO: handle exception
System.out.println("Error: " + e);
}
}
This produces a playable file, provided audioData contains valid bytes. The contents and the extension should be enough for the OS/player to recognize the format.
If this doesn’t work, I would look into the data received to see if it is correct.
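One quick sanity check on the received bytes, assuming the upload is raw ADTS .aac (ADTS frames begin with a 12-bit 0xFFF syncword; this is a sketch, not a full validator):

byte[] bytes = inputToBytes(audioData);
boolean looksLikeAdts = bytes.length > 1
        && bytes[0] == (byte) 0xFF
        && (bytes[1] & 0xF0) == 0xF0;
System.out.println("ADTS syncword present: " + looksLikeAdts);

If the syncword is missing, the data was most likely corrupted (or re-encoded) before it reached the server, and the Java writing code is not the problem.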
I wonder if it is possible to compress an arbitrary file (or folder, or any other file structure) in independent chunks and then get a valid archive (e.g. gzip) by concatenating them together. Some requirements:
java 8
chunks <= 16MB
folder structure does not change during the process
chunks are compressed independently, but order is preserved
each compressed chunk is appended to the end of the resulting archive
resulting archive should be valid and decompressible by any standard tool
It looks like to achieve that I would need to create an archive header first and then just append compressed blocks to it (per RFC 1952: https://www.rfc-editor.org/rfc/rfc1952); however, I'm not sure whether this is supported by any of the standard Java utils or third-party libraries. Does anybody have any ideas on where to start?
Some background:
I have a client-server app which allows the user to upload files to cloud storage. Communication is via a REST API; the client side is going to be responsible for dividing files into chunks and uploading them one by one. It is possible to do compression in the browser; however, I wonder if we can move that load to the backend.
Yes. A concatenation of gzip files is a valid gzip file, per the standard (RFC 1952). gzip certainly handles this.
You are correct to be concerned that some code out there might not support it, since it is not very common to have concatenated gzip members. If you want to be super-safe, you can combine the gzip files into a single gzip member, without having to recompress. You do however need to read through all of the compressed data, effectively decompressing it in memory (which is still much faster than compressing). You can find an example of that in gzjoin.c.
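Here is a minimal sketch demonstrating this with nothing but java.util.zip (java.util.zip.GZIPInputStream reads concatenated members transparently on current JDKs):

import java.io.*;
import java.util.zip.*;

public class GzipConcatDemo {
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data); // compress one independent chunk into its own gzip member
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // compress two chunks independently
        byte[] part1 = gzip("Hello, ".getBytes("UTF-8"));
        byte[] part2 = gzip("world!".getBytes("UTF-8"));

        // concatenate the two gzip members byte-for-byte
        ByteArrayOutputStream joined = new ByteArrayOutputStream();
        joined.write(part1);
        joined.write(part2);

        // decompressing the concatenation yields the original data, in order
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(joined.toByteArray()))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            System.out.println(out.toString("UTF-8")); // prints: Hello, world!
        }
    }
}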
You can try something like this for tar + gzip:
Maven dependency:
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-compress</artifactId>
<version>1.18</version>
</dependency>
Java code to compress into chunks:
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;
import org.apache.commons.compress.utils.IOUtils;
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;
[..]
private static final int MAX_CHUNK_SIZE = 16000000;
public void compressTarGzChunks(String inputDirPath, String outputDirPath) throws Exception {
PipedInputStream in = new PipedInputStream();
final PipedOutputStream out = new PipedOutputStream(in);
new Thread(() -> {
try {
int chunkIndex = 0;
int n = 0;
byte[] buffer = new byte[8192];
do {
String chunkFileName = String.format("archive-part%d.tar.gz", chunkIndex);
try (OutputStream fOut = Files.newOutputStream(Paths.get(outputDirPath, chunkFileName));
BufferedOutputStream bOut = new BufferedOutputStream(fOut);
GzipCompressorOutputStream gzOut = new GzipCompressorOutputStream(bOut)) {
int currentChunkSize = 0;
if (chunkIndex > 0) {
gzOut.write(buffer, 0, n);
currentChunkSize += n;
}
while ((n = in.read(buffer)) != -1 && currentChunkSize + n < MAX_CHUNK_SIZE) {
gzOut.write(buffer, 0, n);
currentChunkSize += n;
}
chunkIndex++;
}
} while (n != -1);
in.close();
} catch (IOException e) {
// logging and exception handling should go here
}
}).start();
try (TarArchiveOutputStream tOut = new TarArchiveOutputStream(out)) {
compressTar(tOut, inputDirPath, "");
}
}
private static void compressTar(TarArchiveOutputStream tOut, String path, String base)
throws IOException {
File file = new File(path);
String entryName = base + file.getName();
TarArchiveEntry tarEntry = new TarArchiveEntry(file, entryName);
tarEntry.setSize(file.length());
tOut.putArchiveEntry(tarEntry);
if (file.isFile()) {
try (FileInputStream in = new FileInputStream(file)) {
IOUtils.copy(in, tOut);
tOut.closeArchiveEntry();
}
} else {
tOut.closeArchiveEntry();
File[] children = file.listFiles();
if (children != null) {
for (File child : children) {
compressTar(tOut, child.getAbsolutePath(), entryName + "/");
}
}
}
}
Java code to concatenate the chunks into a single archive:
public void concatTarGzChunks(List<InputStream> sortedTarGzChunks, String outputFile) throws IOException {
try {
try (FileOutputStream fos = new FileOutputStream(outputFile)) {
for (InputStream in : sortedTarGzChunks) {
int len;
byte[] buf = new byte[1024 * 1024];
while ((len = in.read(buf)) != -1) {
fos.write(buf, 0, len);
}
}
}
} finally {
sortedTarGzChunks.forEach(is -> {
try {
is.close();
} catch (IOException e) {
// logging and exception handling should go here
}
});
}
}
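A hedged usage sketch (ChunkedCompressor is a hypothetical name for the class holding the two methods above; the paths are placeholders, and the chunk file names follow the archive-part%d.tar.gz pattern used in the code):

ChunkedCompressor compressor = new ChunkedCompressor();
compressor.compressTarGzChunks("/data/input", "/data/chunks");

// later: collect the chunk files in index order and join them
List<InputStream> chunks = new ArrayList<>();
int i = 0;
File chunk;
while ((chunk = new File("/data/chunks", String.format("archive-part%d.tar.gz", i++))).exists()) {
    chunks.add(new FileInputStream(chunk));
}
compressor.concatTarGzChunks(chunks, "/data/archive.tar.gz");

Since each chunk is a complete gzip member, the joined file decompresses with any standard tool, e.g. tar -xzf archive.tar.gz.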
I am working on a project in which I have to play with some file reading and writing tasks. I have to read 8 bytes from a file at one time, perform some operations on that block, and then write the block to a second file, repeating the cycle until the first file is completely read in chunks of 8 bytes; after manipulation, the data should be appended to the second file. However, in doing so, I am facing some problems. Following is what I am trying:
private File readFromFile1(File file1) {
int offset = 0;
long message= 0;
try {
FileInputStream fis = new FileInputStream(file1);
byte[] data = new byte[8];
file2 = new File("file2.txt");
FileOutputStream fos = new FileOutputStream(file2.getAbsolutePath(), true);
DataOutputStream dos = new DataOutputStream(fos);
while(fis.read(data, offset, 8) != -1)
{
message = someOperation(data); // operation according to business logic
dos.writeLong(message);
}
fos.close();
dos.close();
fis.close();
} catch (IOException e) {
System.out.println("Some error occurred while reading from File:" + e);
}
return file2;
}
I am not getting the desired output this way. Any help is appreciated.
Consider the following code:
private File readFromFile1(File file1) {
int offset = 0;
long message = 0;
File file2 = null;
try {
FileInputStream fis = new FileInputStream(file1);
byte[] data = new byte[8]; //Read buffer
byte[] tmpbuf = new byte[8]; //Temporary chunk buffer
file2 = new File("file2.txt");
FileOutputStream fos = new FileOutputStream(file2.getAbsolutePath(), true);
DataOutputStream dos = new DataOutputStream(fos);
int readcnt; //Read count
int chunk; //Chunk size to write to tmpbuf
while ((readcnt = fis.read(data, 0, 8)) != -1) {
//// POINT A ////
//Skip chunking system if an 8 byte octet is read directly.
if(readcnt == 8 && offset == 0){
message = someOperation(data); // operation according to business logic (note: process 'data' here, not 'tmpbuf')
dos.writeLong(message);
continue;
}
//// POINT B ////
chunk = Math.min(tmpbuf.length - offset, readcnt); //Determine how much to add to the temp buf.
System.arraycopy(data, 0, tmpbuf, offset, chunk); //Copy bytes to temp buf
offset = offset + chunk; //Sets the offset to temp buf
if (offset == 8) {
message = someOperation(tmpbuf); // operation according to business logic
dos.writeLong(message);
if (chunk < readcnt) {
System.arraycopy(data, chunk, tmpbuf, 0, readcnt - chunk);
offset = readcnt - chunk;
} else {
offset = 0;
}
}
}
//// POINT C ////
//Process remaining bytes here...
//message = foo(tmpbuf);
//dos.writeLong(message);
dos.close(); // closing the DataOutputStream also closes the underlying fos
fis.close();
} catch (IOException e) {
System.out.println("Some error occurred while reading from File:" + e);
}
return file2;
}
In this excerpt of code, what I did was:
Modified your reading code to include the number of bytes actually read from the read() method (noted readcnt).
Added a byte chunking system (the processing does not happen until there are at least 8 bytes in the chunking buffer).
Allowed for separate processing of the final bytes (those that do not make up an 8-byte octet).
As you can see from the code, the data being read is first stored in a chunking buffer (denoted tmpbuf) until at least 8 bytes are available. Chunking only happens when a read does not deliver a full 8 bytes; if 8 bytes are available directly and nothing is already chunked, they are processed immediately (see "Point A" in the code). This is done as an optimization, to prevent excess array copies.
The chunking system uses an offset which increments every time bytes are written to tmpbuf, until it reaches a value of 8 (it will not go over, as the Math.min() call used in the assignment of chunk limits the value). Upon offset == 8, the processing code executes.
If that particular read produced more bytes than were actually processed, continue writing them to tmpbuf from the beginning again, setting offset appropriately; otherwise set offset to 0.
Repeat cycle.
The code will leave the last few bytes of data that do not fit in an octet in the array tmpbuf with the offset variable indicating how much has actually been written. This data can then be processed separately at point C.
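For example, a possible Point C, assuming your business logic can accept the final partial block zero-padded to a full 8-byte octet (whether padding is acceptable is application-specific):

if (offset > 0) {
    // zero the stale tail of tmpbuf so old data is not processed again
    java.util.Arrays.fill(tmpbuf, offset, tmpbuf.length, (byte) 0);
    message = someOperation(tmpbuf);
    dos.writeLong(message);
}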
Seems a lot more complicated than it should be, and there probably is a better solution (possibly using existing Java library methods), but off the top of my head, this is what I've got. Hope this is clear enough for you to understand.
You could use the following; it uses NIO, and especially the ByteBuffer class, for the long handling. You can of course implement it the standard Java way, but since I am a NIO fan, here is a possible solution.
The major problem in your code is that while(fis.read(data, offset, 8) != -1) will read up to 8 bytes, not always exactly 8 bytes; on top of that, reading in such small portions is not very efficient.
I have put some comments in my code; if something is unclear, please leave a comment. My someOperation(...) function just copies the next long value from the buffer.
Update:
added finally block to close the files.
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;
public class TestFile {
static final int IN_BUFFER_SIZE = 1024 * 8;
static final int OUT_BUFFER_SIZE = 1024 * 9; // make the out-buffer > in-buffer, I am lazy and don't want to check for overruns
static final int MIN_READ_BYTES = 8;
static final int MIN_WRITE_BYTES = 8;
private File readFromFile1(File inFile) {
final File outFile = new File("file2.txt");
final ByteBuffer inBuffer = ByteBuffer.allocate(IN_BUFFER_SIZE);
final ByteBuffer outBuffer = ByteBuffer.allocate(OUT_BUFFER_SIZE);
FileChannel readChannel = null;
FileChannel writeChannel = null;
try {
// open a file channel for reading and writing
readChannel = FileChannel.open(inFile.toPath(), StandardOpenOption.READ);
writeChannel = FileChannel.open(outFile.toPath(), StandardOpenOption.CREATE, StandardOpenOption.WRITE);
long totalReadByteCount = 0L;
long totalWriteByteCount = 0L;
boolean readMore = true;
while (readMore) {
// read some bytes into the in-buffer
int readOp = 0;
// read returns 0 once the in-buffer is full, so only loop while bytes are actually read
while ((readOp = readChannel.read(inBuffer)) > 0) {
totalReadByteCount += readOp;
} // while
// prepare the in-buffer to be consumed
inBuffer.flip();
// check whether the end of the file was reached
if (readOp == -1) {
// end of file reached, read no more
readMore = false;
} // if
// now consume the in-buffer until there are at least MIN_READ_BYTES in the buffer
while (inBuffer.remaining() >= MIN_READ_BYTES) {
// add data to the write buffer
outBuffer.putLong(someOperation(inBuffer));
} // while
// compact the in-buffer and prepare for the next read, if we need to read more.
// that way the possible remaining bytes of the in-buffer can be consumed after leaving the loop
if (readMore) inBuffer.compact();
// prepare the out-buffer to be consumed
outBuffer.flip();
// write the out-buffer until the buffer is empty
while (outBuffer.hasRemaining())
totalWriteByteCount += writeChannel.write(outBuffer);
// reset the out-buffer so it can be filled again (clear, not flip, so the full capacity is available)
outBuffer.clear();
} // while
// error handling
if (inBuffer.hasRemaining()) {
System.err.println("Truncated data! Not a long value! bytes remaining: " + inBuffer.remaining());
} // if
System.out.println("read total: " + totalReadByteCount + " bytes.");
System.out.println("write total: " + totalWriteByteCount + " bytes.");
} catch (IOException e) {
System.out.println("Some error occurred while reading from File: " + e);
} finally {
if (readChannel != null) {
try {
readChannel.close();
} catch (IOException e) {
System.out.println("Could not close read channel: " + e);
} // catch
} // if
if (writeChannel != null) {
try {
writeChannel.close();
} catch (IOException e) {
System.out.println("Could not close write channel: " + e);
} // catch
} // if
} // finally
return outFile;
}
private long someOperation(ByteBuffer bb) {
// consume the buffer, do whatever you want with the buffer.
return bb.getLong(); // consumes 8 bytes of the buffer.
}
public static void main(String[] args) {
TestFile testFile = new TestFile();
File source = new File("input.txt");
testFile.readFromFile1(source);
}
}
I received this Python script that generates a file checksum:
import sys,os
if __name__=="__main__":
    filename=os.path.abspath(sys.argv[1])
    #filename=r"H:\Javier Ortiz\559-7 From Pump.bin"
    cksum=0
    offset=0
    pfi=open(filename,'rb')
    while 1:
        icks=0
        chunk=pfi.read(256)
        if not chunk: break #if EOF exit loop
        for iter in chunk:
            icks+=ord(iter)
            print ord(iter)
        cksum=(cksum+icks) & 0xffff
    pfi.close()
    print "cksum=0x%4.4x"%cksum
And I'm trying to convert it to Java, but I'm not getting the same results.
Here's my Java code:
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
public class ChecksumCalculator {
private ChecksumCalculator() {
}
public static int getChecksum(File file) {
int cksum = 0;
FileInputStream fis = null;
BufferedInputStream bis = null;
DataInputStream dis = null;
try {
fis = new FileInputStream(file);
// Here BufferedInputStream is added for fast reading.
bis = new BufferedInputStream(fis);
dis = new DataInputStream(bis);
byte[] buffer = new byte[256];
// dis.available() returns 0 if the file does not have more lines.
while (dis.read(buffer) != -1) {
int icks = 0;
for (byte b : buffer) {
icks += b & 0xff;
System.out.println(b & 0xff);
}
cksum = (cksum + icks) & 0xffff;
System.out.println("Checksum: " + cksum);
}
// dispose all the resources after using them.
fis.close();
bis.close();
dis.close();
return cksum;
} catch (FileNotFoundException e) {
e.printStackTrace();
return -1;
} catch (IOException e) {
e.printStackTrace();
return -1;
}
}
static public void main(String[] s) {
System.out.println("0x" + getChecksum(new File("H:\\Javier Ortiz\\559-7 From Pump.bin")));
}
}
But I get different results on a file. For example, if I run it on a plain txt file containing only the word test, it gives the following result:
python: cksum=0x01c0
java: cksum=0x448
Any idea?
Your Python version prints the checksum in hex, while your Java version prints it in decimal. You should make your Java version print in hex, too. 0x1c0 == 448.
To use the cksum=0x%4.4x format string as you had in your Python version, use this:
System.out.printf("cksum=0x%4.4x%n", ...);
or even better
System.out.printf("cksum=%#04x%n", ...);
Also, you don't need a DataInputStream for this. Just use bis.read(buffer) instead of dis.read(buffer).
1C0₁₆ = 448₁₀
I think that's your problem.
dis.read(buffer) returns the number of bytes that were actually read. For the last chunk, it will probably be less than 256. So the for loop shouldn't always run 256 times; it should run as many times as the actual byte count read from the stream.
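In other words, capture the return value and bound the loop with it, something like this (a sketch of just the loop):

int bytesRead;
while ((bytesRead = dis.read(buffer)) != -1) {
    int icks = 0;
    // only sum the bytes actually read in this chunk
    for (int i = 0; i < bytesRead; i++) {
        icks += buffer[i] & 0xff;
    }
    cksum = (cksum + icks) & 0xffff;
}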
I'm not a Python developer, but it doesn't look like ord(iter) in Python does the same as b & 0xff in Java.
Keep in mind that all Java types are signed; this might affect the calculation.
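For example, a byte holding the bit pattern 0xC0 sign-extends to a negative int unless it is masked:

byte b = (byte) 0xC0;
int signed = b;          // -64, due to sign extension
int unsigned = b & 0xff; // 192, matching Python's ord()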
Also, although this doesn't affect correctness, it's good practice to release all the resources (i.e. to close the streams) in a finally block.
I've got some Java code using a servlet and Apache Commons FileUpload to upload a file to a set directory. It's working fine for character data (e.g. text files), but image files are coming out garbled: I can open them, but the image doesn't look like it should. Here's my code:
Servlet
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
try {
String customerPath = "\\leetest\\";
// Check that we have a file upload request
boolean isMultipart = ServletFileUpload.isMultipartContent(request);
if (isMultipart) {
// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload();
// Parse the request
FileItemIterator iter = upload.getItemIterator(request);
while (iter.hasNext()) {
FileItemStream item = iter.next();
String name = item.getFieldName();
if (item.isFormField()) {
// Form field. Ignore for now
} else {
BufferedInputStream stream = new BufferedInputStream(item.openStream());
if (stream == null) {
LOGGER.error("Something went wrong with fetching the stream for field " + name);
}
byte[] bytes = StreamUtils.getBytes(stream);
FileManager.createFile(customerPath, item.getName(), bytes);
stream.close();
}
}
}
} catch (Exception e) {
throw new UploadException("An error occurred during upload: " + e.getMessage());
}
}
StreamUtils.getBytes(stream) looks like:
public static byte[] getBytes(InputStream src, int buffsize)
throws IOException {
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
byte[] buff = new byte[buffsize];
while (true) {
int nBytesRead = src.read(buff);
if (nBytesRead < 0) {
break;
}
byteStream.write(buff);
}
byte[] result = byteStream.toByteArray();
byteStream.close();
return result;
}
And finally FileManager.createFile looks like:
public static void createFile(String customerPath, String filename,
byte[] fileData) throws IOException {
customerPath = getFullPath(customerPath + filename);
File newFile = new File(customerPath);
if (!newFile.getParentFile().exists()) {
newFile.getParentFile().mkdirs();
}
FileOutputStream outputStream = new FileOutputStream(newFile);
outputStream.write(fileData);
outputStream.close();
}
Can anyone spot what I'm doing wrong?
Cheers,
Lee
One thing I don't like is this block from StreamUtils.getBytes():
1 while (true) {
2 int nBytesRead = src.read(buff);
3 if (nBytesRead < 0) {
4 break;
5 }
6 byteStream.write(buff);
7 }
At line 6, it writes the entire buffer, no matter how many bytes were actually read in, and there is no guarantee the buffer is completely filled on every read. It would be more correct like this:
1 while (true) {
2 int nBytesRead = src.read(buff);
3 if (nBytesRead < 0) {
4 break;
5 } else {
6 byteStream.write(buff, 0, nBytesRead);
7 }
8 }
Note the 'else' on line 5, along with the two additional parameters (array index start position and length to copy) on line 6.
I could imagine that for larger files, like images, read() returns before the buffer is filled (maybe the stream is waiting for more data). That means you'd be unintentionally writing old data that was remaining in the tail end of the buffer. This is almost certainly happening at EOF, assuming a buffer larger than 1 byte, but extra data at EOF is probably not the cause of your corruption; it is just not desirable.
I'd just use Commons IO. Then you could just do an IOUtils.copy(InputStream, OutputStream);
It's got lots of other useful utility methods.
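For example, StreamUtils.getBytes() could be replaced entirely (a sketch, assuming Commons IO is on the classpath):

import org.apache.commons.io.IOUtils;
// ...
byte[] bytes = IOUtils.toByteArray(stream); // handles partial reads correctly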
Are you sure that the image isn't coming through garbled, or that you aren't dropping some packets on the way in?
I don't know what difference it makes, but there seems to be a mismatch of method signatures. The getBytes() method called in your doPost() method has only one argument:
byte[] bytes = StreamUtils.getBytes(stream);
while the method source you included has two arguments:
public static byte[] getBytes(InputStream src, int buffsize)
Hope that helps.
Can you perform a checksum on your original file and the uploaded file and see if there are any immediate differences?
If there are, then you can perform a diff to determine the exact part(s) of the file that are missing or changed.
Things that pop to mind are the beginning or end of the stream, or endianness.