I have a program that reads information from a database. Sometimes the message is bigger than expected, so before sending it to my broker I compress it with this code:
public static byte[] zipBytes(byte[] input) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream(input.length);
    OutputStream ej = new DeflaterOutputStream(bos);
    ej.write(input);
    ej.close();
    bos.close();
    return bos.toByteArray();
}
Recently I retrieved an 80 MB message from the DB, and when the code above executed, an OutOfMemoryError was thrown on the ByteArrayOutputStream line. My Java program only has 512 MB of memory for the whole process, and I can't give it more.
How can I solve this?
This is not a duplicate question. I can't increase the heap size.
EDIT:
This is the flow of my Java code:
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) { // only rows with content > 60. I need to know the size of the message; if I use rs.getBinaryStream I cannot know the size, can I?
        if (bt.length >= 10000000) {
            // here I need to zip the byte array before sending it, so
            bt = zipBytes(bt); // this method is above
            // then publish bt to my broker
        } else {
            // here publish the byte array to my broker
        }
    }
}
EDIT
I've tried with PipedInputStream, and the memory the process consumes is the same as with the zipBytes(byte[] input) method.
private InputStream zipInputStream(InputStream in) throws IOException {
    PipedInputStream zipped = new PipedInputStream();
    PipedOutputStream pipe = new PipedOutputStream(zipped);
    new Thread(
        () -> {
            try (OutputStream zipper = new DeflaterOutputStream(pipe)) {
                IOUtils.copy(in, zipper);
                zipper.flush();
            } catch (IOException e) {
                IOUtils.closeQuietly(zipped); // close it on the error case only
                e.printStackTrace();
            } finally {
                IOUtils.closeQuietly(in);
                IOUtils.closeQuietly(pipe);
            }
        }
    ).start();
    return zipped;
}
How can I compress my InputStream with Deflate?
EDIT
That information is sent over JMS to a Universal Messaging server by Software AG. This uses the Nirvana client (documentation: https://documentation.softwareag.com/onlinehelp/Rohan/num10-2/10-2_UM_webhelp/um-webhelp/Doc/java/classcom_1_1pcbsys_1_1nirvana_1_1client_1_1n_consume_event.html), and the data is wrapped in nConsumeEvent objects. The documentation shows only two ways to supply that information:
nConsumeEvent (String tag, byte[] data)
nConsumeEvent (String tag, Document adom)
https://documentation.softwareag.com/onlinehelp/Rohan/num10-5/10-5_UM_webhelp/index.html#page/um-webhelp%2Fco-publish_3.html%23
The connection code is:
nSessionAttributes nsa = new nSessionAttributes("nsp://127.0.0.1:9000");
MyReconnectHandler rhandler = new MyReconnectHandler();
nSession mySession = nSessionFactory.create(nsa, rhandler);
if (!mySession.isConnected()) {
    mySession.init();
}
nChannelAttributes chaAttr = new nChannelAttributes();
chaAttr.setName("mychannel"); // this is a topic
nChannel myChannel = mySession.findChannel(chaAttr);
List<nConsumeEvent> messages = new ArrayList<nConsumeEvent>();
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo");
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) {
        nEventProperties prop = new nEventProperties();
        if (bt.length > 10000000) {
            bt = compressData(bt); // here I need to compress data without ByteArrayInputStream
            prop.put("isZip", "true");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        } else {
            prop.put("isZip", "false");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        }
    }
    nTransactionAttributes tattrib = new nTransactionAttributes(myChannel);
    nTransaction myTransaction = nTransactionFactory.create(tattrib);
    Vector<nConsumeEvent> m = new Vector<nConsumeEvent>(messages);
    myTransaction.publish(m);
    myTransaction.commit();
}
Because of API restrictions, at the end of the day I need to send the information as a byte array; if that is the only byte array in my code, that's fine. How can I compress the byte array, or the InputStream from rs.getBinaryStream(), in this implementation?
EDIT
The database server used is PostgreSQL v11.6
EDIT
I've applied the first solution from @VGR and it works fine.
Just one thing: the SELECT query runs inside a while(true) loop, like this:
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
    // all the implementation you know from this entire post
    Thread.sleep(10000);
}
So a SELECT is executed every 10 seconds.
But I've run a test with my program and the memory just increases with each iteration. Why? If the information the database returns is the same on each request, shouldn't memory usage stay at the level of the first request? Or maybe I forgot to close a stream?
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
    while (rs.next()) {
        //byte[] bt = rs.getBytes("data");
        byte[] bt;
        try (BufferedInputStream source = new BufferedInputStream(
                rs.getBinaryStream("data"), 10_000_001)) {
            source.mark(this.zip + 1);
            boolean sendToBroker = true;
            boolean needCompression = true;
            for (int i = 0; i <= 10_000_000; i++) {
                if (source.read() < 0) {
                    sendToBroker = (i > 60);
                    needCompression = (i >= this.zip);
                    break;
                }
            }
            if (sendToBroker) {
                nEventProperties prop = new nEventProperties();
                // Rewind stream
                source.reset();
                if (needCompression) {
                    System.out.println("Message size larger than expected. Compressing message");
                    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
                    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
                        IOUtils.copy(source, brokerStream);
                    }
                    bt = byteStream.toByteArray();
                    prop.put("zip", "true");
                } else {
                    bt = IOUtils.toByteArray(source);
                }
                System.out.println("size: " + bt.length);
                prop.put("host", this.host);
                nConsumeEvent ncon = new nConsumeEvent("" + rs.getInt("xid"), bt);
                ncon.setProperties(prop);
                messages.add(ncon);
            }
        }
    }
}
For example, here is the heap memory at two points in time. The first run used over 500 MB, and the second (with the same information from the database) used over 1000 MB.
rs.getBytes("data") reads the entire 80 megabytes into memory at once. In general, if you are reading a potentially large amount of data, you don’t want to try to keep it all in memory.
The solution is to use getBinaryStream instead.
Since you need to know whether the total size is larger than 10,000,000 bytes, you have two choices:
Use a BufferedInputStream with a buffer of at least that size, which will allow you to use mark and reset in order to “rewind” the InputStream.
Read the data size as part of your query. You may be able to do this by using a Blob or using a function like LENGTH.
The first approach will use up 10 megabytes of program memory for the buffer, but that’s better than hundreds of megabytes:
while (rs.next()) {
    try (BufferedInputStream source = new BufferedInputStream(
            rs.getBinaryStream("data"), 10_000_001)) {
        source.mark(10_000_001);
        boolean sendToBroker = true;
        boolean needCompression = true;
        for (int i = 0; i <= 10_000_000; i++) {
            if (source.read() < 0) {
                sendToBroker = (i > 60);
                needCompression = (i >= 10_000_000);
                break;
            }
        }
        if (sendToBroker) {
            // Rewind stream
            source.reset();
            if (needCompression) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
Notice that no byte arrays and no ByteArrayOutputStreams are used. The actual data is not kept in memory, except for the 10 megabyte buffer.
The second approach is shorter, but I’m not sure how portable it is across databases:
while (rs.next()) {
    Blob data = rs.getBlob("data");
    long length = data.length();
    if (length > 60) {
        try (InputStream source = data.getBinaryStream()) {
            if (length >= 10000000) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
Both approaches assume there is some API available for your “broker” which allows the data to be written to an OutputStream. I’ve assumed, for the sake of example, that it’s broker.getOutputStream().
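For clarity, the assumed shape of that API would be something like the following (purely hypothetical, for illustration only; it is not part of any real broker library):

interface Broker {
    // Returns a stream whose written bytes are delivered to the broker.
    OutputStream getOutputStream() throws IOException;
}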
Update
It appears you are required to create nConsumeEvent objects, and that class only allows its data to be specified as a byte array in its constructors.
That byte array is unavoidable, obviously. And since there is no way to know the exact number of bytes a compressed version will require, a ByteArrayOutputStream is also unavoidable. (It’s possible to avoid using that class, but the replacement would be essentially a reimplementation of ByteArrayOutputStream.)
But you can still read the data as an InputStream in order to reduce your memory usage. And when you aren’t compressing, you can still avoid ByteArrayOutputStream, thereby creating only one additional byte array.
So, instead of this, which is not possible for nConsumeEvent:
if (needCompression) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
It should be:
byte[] bt;
if (needCompression) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
} else {
    bt = source.readAllBytes();
}
prop.put("isZip", String.valueOf(needCompression));
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
Similarly, the second example should replace this:
if (length >= 10000000) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
with this:
byte[] bt;
if (length >= 10000000) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
    prop.put("isZip", "true");
} else {
    bt = source.readAllBytes();
    prop.put("isZip", "false");
}
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
Related
I use the code below to generate a zip file and return it to the frontend. Performance is normal when the file is small. However, when the compressed file exceeds 1 GB, the downloaded file is corrupted and the number of compressed files is reduced. This phenomenon does not always occur; sometimes downloading a 3.2 GB file works fine again =_=
Looking forward to hearing from everyone
outputStream = response.getOutputStream();
ZipOutputStream zipOutStream = null;
FileInputStream filenputStream = null;
BufferedInputStream bis = null;
try {
    zipOutStream = new ZipOutputStream(new BufferedOutputStream(outputStream));
    zipOutStream.setMethod(ZipOutputStream.DEFLATED);
    for (int i = 0; i < files.size(); i++) {
        File file = files.get(i);
        filenputStream = new FileInputStream(file);
        bis = new BufferedInputStream(filenputStream);
        zipOutStream.putNextEntry(new ZipEntry(fileName[i]));
        int len = 0;
        byte[] bs = new byte[40];
        while ((len = bis.read(bs)) != -1) {
            zipOutStream.write(bs, 0, len);
        }
        bis.close();
        filenputStream.close();
        zipOutStream.closeEntry();
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        if (Objects.nonNull(filenputStream)) {
            filenputStream.close();
        }
        if (Objects.nonNull(bis)) {
            bis.close();
        }
        if (Objects.nonNull(zipOutStream)) {
            zipOutStream.flush();
            zipOutStream.close();
        }
        if (Objects.nonNull(outputStream)) {
            outputStream.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
One of the potential issues with your code is memory leaks and streams that are not properly flushed and closed if the code runs into any exceptions.
One way to help reduce these leaks and to ensure all resources are properly closed, even when exceptions occur, is to use try-with-resources.
Here is an example of how your code example can be rewritten to utilize try-with-resources.
In this example, to get it to compile in a detached IDE environment, I've put it in a function named testStream, since I don't know where the parameters files and fileName come from. Validation and content of these two parameters are therefore left to chance; it is assumed both have the same number of entries and that they are paired with each other (element 0 of each belongs together). Since this odd relationship exists, it's linked by the variable named i to mirror the original source code.
public void testStream( HttpServletResponse response, List<File> files, String[] fileName )
{
int i = 0;
try (
ServletOutputStream outputStream = response.getOutputStream();
ZipOutputStream zipOutStream = new ZipOutputStream( new BufferedOutputStream( outputStream ) );
)
{
zipOutStream.setMethod( ZipOutputStream.DEFLATED );
for ( File file : files )
{
try (
BufferedInputStream bis = new BufferedInputStream(
new FileInputStream( file ) );
) {
zipOutStream.putNextEntry( new ZipEntry( fileName[i++] ) );
int len = 0;
byte[] bs = new byte[4096];
while ( (len = bis.read( bs )) != -1 )
{
zipOutStream.write( bs, 0, len );
}
// optional since putNextEntry will auto-close open entries:
zipOutStream.closeEntry();
}
}
}
catch ( IOException e )
{
e.printStackTrace();
}
}
This compiles successfully and should work, but may need some other adjustments to run in your environment with your data. Hopefully it eliminates the issues you have been having with closing streams and other resources.
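For instance, a hypothetical call site could look like this (the file names and the pairing between the two parameters are assumptions, mirroring the original signature):

// 'response' is the HttpServletResponse of the current request.
List<File> files = List.of(new File("/tmp/a.txt"), new File("/tmp/b.txt"));
String[] fileName = { "a.txt", "b.txt" };
testStream(response, files, fileName);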
In my applet I have a GET call to download a file from a remote location. When I try to download a large file of around 13 MB, the applet's memory consumption increases by more than 50 MB. I am using the code below to measure memory consumption:
public static long getMemoryUsage()
{
long memory = 0;
// Get the Java runtime
Runtime runtime = Runtime.getRuntime();
memory = runtime.totalMemory() - runtime.freeMemory();
return memory;
}
The code for my GET call is:
public void getFiles(String filePath, long fileSize)throws MyException
{
InputStream objInputStream = null;
HttpURLConnection conn = null;
BufferedReader br = null;
try
{
URL fileUrl=new URL(filePath);
final String strAPICall=fileUrl.getPath();
final String strHost="some.test.com";
final int iPort=1000;
URL url = null;
url = new java.net.URL
( "https",
strHost, iPort , "/" + strAPICall,
new myHandler() );
conn = (HttpURLConnection)new HttpsURLConn(url);
conn.setRequestMethod("GET");
conn.connect();
if (conn.getResponseCode() != 200) {
objInputStream=conn.getInputStream();
br = new BufferedReader(new InputStreamReader(
(objInputStream)));
String output;
while ((output = br.readLine()) != null) {
System.out.println(output);
}
throw new MyException("Bad response from server",
MyError.BAD_RESPONSE_ERROR);
}
else
{
notifyProgressToObservers(0);
System.out.println("conn.getResponseCode()"+conn.getResponseCode());
System.out.println("conn.getResponseMessage()"+conn.getResponseMessage());
objInputStream = conn.getInputStream();
int count=objInputStream.available();
System.out.println("Stream size: "+count);
System.out.println("fileSize size: "+fileSize);
byte []downloadedData = getBytesFromInputStream
(objInputStream, count,fileSize);
notifyChunkToObservers(downloadedData);
notifyIndivisualFileEndToObservers(true, null);
}
}
catch (MyException pm)
{
throw new MyException
(pm, MyError.CONNECTION_TIMEOUT);
}
catch (IOException pm)
{
throw new MyException
(pm, MyError.CONNECTION_TIMEOUT);
}
catch (Exception e)
{
notifyIndivisualFileEndToObservers(false,new MyException(e.toString()));
}
finally
{
System.out.println("Closing all the streams after getting file");
if(conn !=null)
{
try
{
conn.disconnect();
}
catch(Exception e)
{
}
}
if(objInputStream != null)
{
try {
objInputStream.close();
} catch (IOException e) {
}
}
if (br != null)
{
try {
br.close();
} catch (IOException e) {
}
}
}
}
In the above method, I tried logging memory consumption after each line and found that after conn.connect(), the applet's memory consumption increases by at least 50 MB, even though the file I am trying to download is only 13 MB.
Is there any memory leak anywhere?
EDIT: Added Implementation for getBytesFromInputStream()
public byte[] getBytesFromInputStream(InputStream is, int len, long fileSize)
        throws IOException
{
    byte[] readBytes = new byte[8192];
    ByteArrayOutputStream getBytes = new ByteArrayOutputStream();
    int numRead = 0;
    while ((numRead = is.read(readBytes)) != -1) {
        getBytes.write(readBytes, 0, numRead);
    }
    return getBytes.toByteArray();
}
It's because of this line:
byte[] downloadedData = getBytesFromInputStream(objInputStream, count, fileSize);
Here you are reading the complete file contents into the heap. After that you need to track down what happens with this array. Maybe you are copying it somewhere, and the GC needs some time to kick in even if you no longer use the reference to the object.
Large files should never be read completely into memory, but rather streamed directly to some processor of the data.
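As a rough sketch of that idea, and assuming notifyChunkToObservers can be invoked once per piece rather than once per file (an assumption about the surrounding applet code), the download could be streamed like this:

byte[] chunk = new byte[8192];
int n;
while ((n = objInputStream.read(chunk)) != -1) {
    // Hand each piece off immediately so only ~8 KB is ever held at a time.
    notifyChunkToObservers(java.util.Arrays.copyOf(chunk, n));
}
notifyIndivisualFileEndToObservers(true, null);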
The only way to optimize getBytesFromInputStream() is if you know beforehand exactly how many bytes there are to read. Then you allocate a byte[] of the required size and read from the input directly into the byte[]. For example:
byte[] buffer = new byte[len];
int pos = 0;
while (pos < len) {
int nosRead = is.read(buffer, pos, len - pos);
if (nosRead == -1) {
throw new IOException("incomplete response");
}
pos += nosRead;
}
return buffer;
(For more information, read the javadoc.)
Unfortunately, your (apparent) attempt at getting the size is incorrect.
int count = objInputStream.available();
This doesn't return the total number of bytes that can be read from the stream. It returns the number of bytes that can be read right now without the possibility of blocking.
If the server is setting the Content-Length header in the response, then you could use that; call getContentLength() (or getContentLengthLong() in other use-cases) once you have the response. But be prepared for the case where that gives you -1.
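A small sketch of that, reusing the conn variable from the question:

long expected = conn.getContentLengthLong(); // -1 when the server omits Content-Length
if (expected >= 0 && expected <= Integer.MAX_VALUE) {
    // Size known up front: allocate once and fill it with the read loop shown above.
    byte[] downloadedData = new byte[(int) expected];
    // ... read from the stream directly into downloadedData ...
} else {
    // Size unknown: fall back to streaming the data in chunks.
}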
I have a client-server system where the server is written in C++ and the client in Java (an Android application).
The server reads an image from a local directory into an ifstream using the read method.
The reading is done inside a loop, where the program reads part of the image on each iteration. Every time a part of the image is read, it is sent over a socket to the client, which collects all the pieces in a ByteBuffer; once all the bytes of the image have been transferred, the client attempts to turn that array of bytes (after calling byteBuffer.array()) into a Bitmap.
This is where the problem begins - I've tried a few methods but it seems that I'm unable to turn this array of bytes into a Bitmap.
From what I understand, this byte array is probably a raw representation of the image, which can't be decoded using methods like BitmapFactory.decodeByteArray() since it wasn't encoded in the first place.
Ultimately, my question is: how can I process this array of bytes so that I can set the image as the source of an ImageView?
Note: I've already made sure that all the data is sent over the socket correctly and that the pieces are collected in the right order.
Client code:
byte[] image_bytes;
byte[] response_bytes;

private void receive_image(final String protocol, final int image_size, final int buffer_size) {
    if (image_size <= 0 || buffer_size <= 0)
        return;
    Thread image_receiver = new Thread(new Runnable() {
        @Override
        public void run() {
            ByteBuffer byteBuffer = ByteBuffer.allocate(image_size);
            byte[] buffer = new byte[buffer_size];
            int bytesReadSum = 0;
            try {
                while (bytesReadSum != image_size) {
                    activeReader.read(buffer);
                    String message = new String(buffer);
                    if (TextUtils.substring(message, 0, len_of_protocol_number).equals(protocol)) {
                        int bytesToRead = Integer.parseInt(TextUtils.substring(message,
                                len_of_protocol_number,
                                len_of_protocol_number + len_of_data_len));
                        byteBuffer.put(Arrays.copyOfRange(buffer,
                                len_of_protocol_number + len_of_data_len,
                                bytesToRead + len_of_protocol_number + len_of_data_len));
                        bytesReadSum += bytesToRead;
                    } else {
                        response_bytes = null;
                        break;
                    }
                }
                if (bytesReadSum == image_size) {
                    image_bytes = byteBuffer.array();
                    if (image_bytes.length > 0)
                        response_bytes = image_bytes;
                    else
                        response_bytes = null;
                }
            } catch (IOException e) {
                response_bytes = null;
            }
        }
    });
    image_receiver.start();
    try {
        image_receiver.join();
    } catch (InterruptedException e) {
        response_bytes = null;
    }
    if (response_bytes != null) {
        final ImageView imageIV = (ImageView) findViewById(R.id.imageIV);
        File image_file = new File(Environment.getExternalStorageDirectory(), "image_file_jpg");
        try {
            FileOutputStream stream = new FileOutputStream(image_file);
            stream.write(image_bytes);
            stream.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        //Here the method returns null
        final Bitmap image_bitmap = BitmapFactory.decodeFile(image_file.getAbsolutePath());
        main.this.runOnUiThread(new Runnable() {
            @Override
            public void run() {
                imageIV.setImageBitmap(image_bitmap);
                imageIV.invalidate();
            }
        });
    }
}
Whenever you exchange data between two machines of different architectures via sockets, you need to know the endianness (big-endian/little-endian) of each machine. If they differ, you will need to convert bytes to correct the data. Perhaps that's your issue. Here's a link with sample code: Converting Little Endian to Big Endian. You should be able to easily find more articles explaining the concept.
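If byte order does turn out to be the issue, the Java side can select the order explicitly via ByteBuffer. A minimal sketch (the little-endian choice and the header field are purely illustrative assumptions):

ByteBuffer buf = ByteBuffer.wrap(image_bytes);
buf.order(java.nio.ByteOrder.LITTLE_ENDIAN); // Java defaults to big-endian; match the C++ sender
int headerField = buf.getInt();              // hypothetical multi-byte field, for illustration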
It turned out that something was wrong with my sending protocol.
After patching it up a bit it actually worked.
Thanks for the help.
I am trying to query a database with Java JDBC and compress the data in one column into gzip files in a specific directory. I have tested my JDBC query and it works fine, but the gzip code doesn't progress through the while loop; it runs for the first row and gets stuck there. Why is it stuck? Help me please!
These folders already exist: D:\Directory\My\year\id1\id2
// Some JDBC query code here; it works well. I query all rows: Data, year, id1, id2, id3
while (myRs1.next()) {
    String str = Data;
    File myGzipFile = new File("D:\\Directory\\My\\" + year + "\\" + id1 + "\\" + id2 + "\\" + id3 + ".gzip");
    GZIPOutputStream gos = null;
    InputStream is = new ByteArrayInputStream(str.getBytes());
    gos = new GZIPOutputStream(new FileOutputStream(myGzipFile));
    byte[] buffer = new byte[1024];
    int len;
    while ((len = is.read(buffer)) != -1) {
        gos.write(buffer, 0, len);
        System.out.print("done for:" + id3);
    }
    try { gos.close(); } catch (IOException e) { }
}
Try formatting the source like this to catch exceptions.
public class InputStreamDemo {
    public static void main(String[] args) throws Exception {
        InputStream is = null;
        int i;
        char c;
        try {
            is = new FileInputStream("C://test.txt");
            System.out.println("Characters printed:");
            // reads till the end of the stream
            while ((i = is.read()) != -1) {
                // converts integer to character
                c = (char) i;
                // prints character
                System.out.print(c);
            }
        } catch (Exception e) {
            // if any I/O error occurs
            e.printStackTrace();
        } finally {
            // releases system resources associated with this stream
            if (is != null)
                is.close();
        }
    }
}
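Applied to the original gzip loop, a minimal try-with-resources sketch could look like this (assuming Data, year, id1, id2 and id3 come from the ResultSet, as in the question):

while (myRs1.next()) {
    String str = Data;
    File myGzipFile = new File("D:\\Directory\\My\\" + year + "\\" + id1
            + "\\" + id2 + "\\" + id3 + ".gzip");
    try (InputStream is = new ByteArrayInputStream(str.getBytes());
         GZIPOutputStream gos = new GZIPOutputStream(new FileOutputStream(myGzipFile))) {
        byte[] buffer = new byte[1024];
        int len;
        while ((len = is.read(buffer)) != -1) {
            gos.write(buffer, 0, len);
        }
    } // both streams are closed here, even when an exception is thrown
    System.out.println("done for: " + id3); // print once per row, outside the copy loop
}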
I am working on a project in which I have to perform some file reading and writing tasks. I have to read 8 bytes from a file at a time, perform some operations on that block, and then write the block to a second file, repeating the cycle until the first file is completely read in chunks of 8 bytes; after manipulation, the data should be appended to the second file. However, in doing so, I am facing some problems. The following is what I am trying:
private File readFromFile1(File file1) {
    int offset = 0;
    long message = 0;
    try {
        FileInputStream fis = new FileInputStream(file1);
        byte[] data = new byte[8];
        file2 = new File("file2.txt");
        FileOutputStream fos = new FileOutputStream(file2.getAbsolutePath(), true);
        DataOutputStream dos = new DataOutputStream(fos);
        while (fis.read(data, offset, 8) != -1) {
            message = someOperation(data); // operation according to business logic
            dos.writeLong(message);
        }
        fos.close();
        dos.close();
        fis.close();
    } catch (IOException e) {
        System.out.println("Some error occurred while reading from File:" + e);
    }
    return file2;
}
I am not getting the desired output this way. Any help is appreciated.
Consider the following code:
private File readFromFile1(File file1) {
    int offset = 0;
    long message = 0;
    File file2 = null;
    try {
        FileInputStream fis = new FileInputStream(file1);
        byte[] data = new byte[8]; // read buffer
        byte[] tmpbuf = new byte[8]; // temporary chunk buffer
        file2 = new File("file2.txt");
        FileOutputStream fos = new FileOutputStream(file2.getAbsolutePath(), true);
        DataOutputStream dos = new DataOutputStream(fos);
        int readcnt; // read count
        int chunk; // chunk size to write to tmpbuf
        while ((readcnt = fis.read(data, 0, 8)) != -1) {
            //// POINT A ////
            // Skip the chunking system if a full 8-byte block is read directly.
            if (readcnt == 8 && offset == 0) {
                message = someOperation(data); // operation according to business logic
                dos.writeLong(message);
                continue;
            }
            //// POINT B ////
            chunk = Math.min(tmpbuf.length - offset, readcnt); // determine how much to add to the temp buf
            System.arraycopy(data, 0, tmpbuf, offset, chunk); // copy bytes to temp buf
            offset = offset + chunk; // sets the offset in temp buf
            if (offset == 8) {
                message = someOperation(tmpbuf); // operation according to business logic
                dos.writeLong(message);
                if (chunk < readcnt) {
                    System.arraycopy(data, chunk, tmpbuf, 0, readcnt - chunk);
                    offset = readcnt - chunk;
                } else {
                    offset = 0;
                }
            }
        }
        //// POINT C ////
        // Process remaining bytes here...
        //message = foo(tmpbuf);
        //dos.writeLong(message);
        fos.close();
        dos.close();
        fis.close();
    } catch (IOException e) {
        System.out.println("Some error occurred while reading from File:" + e);
    }
    return file2;
}
In this excerpt of code, what I did was:
Modified your reading code to capture the number of bytes actually read by the read() method (noted readcnt).
Added a byte chunking system (the processing does not happen until there are at least 8 bytes in the chunking buffer).
Allowed for separate processing of the final bytes (those that do not make up a full 8-byte block).
As you can see from the code, the data being read is first stored in a chunking buffer (denoted tmpbuf) until at least 8 bytes are available. This happens only when 8 bytes are not directly available (if 8 bytes are read directly and nothing is pending in the chunk buffer, they are processed immediately; see "Point A" in the code). This is done as a form of optimization, to prevent excess array copies.
The chunking system uses an offset which increments every time bytes are written to tmpbuf, until it reaches a value of 8 (it will not go over, as the Math.min() used in the assignment of chunk limits the value). When offset == 8, the processing code executes.
If that particular read produced more bytes than were actually processed, they continue to be written to tmpbuf, from the beginning again, with offset set appropriately; otherwise offset is set to 0.
Repeat the cycle.
The code will leave the last few bytes of data that do not fit in an 8-byte block in the array tmpbuf, with the offset variable indicating how much has actually been written. This data can then be processed separately at point C.
Seems a lot more complicated than it should be, and there probably is a better solution (possibly using existing Java library methods), but off the top of my head, this is what I got. Hope this is clear enough for you to understand.
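One such library-based alternative is DataInputStream.readFully(), which guarantees a full 8-byte block or throws EOFException. A sketch under that assumption (someOperation and file1 are the names from the question; tail handling is left as a comment):

try (DataInputStream dis = new DataInputStream(
             new BufferedInputStream(new FileInputStream(file1)));
     DataOutputStream dos = new DataOutputStream(
             new FileOutputStream("file2.txt", true))) {
    byte[] block = new byte[8];
    while (true) {
        try {
            dis.readFully(block);            // exactly 8 bytes, or EOFException
        } catch (EOFException eof) {
            break;                           // fewer than 8 bytes remain; process the tail here if needed
        }
        dos.writeLong(someOperation(block)); // operation according to business logic
    }
}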
You could use the following. It uses NIO, and especially the ByteBuffer class, for the long handling. You can of course implement it the standard Java way, but since I am a NIO fan, here is a possible solution.
The major problem in your code is that while (fis.read(data, offset, 8) != -1) will read up to 8 bytes, not always exactly 8 bytes; reading in such small portions is also not very efficient.
I have put some comments in my code; if something is unclear, please leave a comment. My someOperation(...) function just copies the next long value from the buffer.
Update:
Added a finally block to close the files.
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

public class TestFile {
    static final int IN_BUFFER_SIZE = 1024 * 8;
    static final int OUT_BUFFER_SIZE = 1024 * 9; // make the out-buffer > in-buffer; I am lazy and don't want to check for overruns
    static final int MIN_READ_BYTES = 8;
    static final int MIN_WRITE_BYTES = 8;

    private File readFromFile1(File inFile) {
        final File outFile = new File("file2.txt");
        final ByteBuffer inBuffer = ByteBuffer.allocate(IN_BUFFER_SIZE);
        final ByteBuffer outBuffer = ByteBuffer.allocate(OUT_BUFFER_SIZE);
        FileChannel readChannel = null;
        FileChannel writeChannel = null;
        try {
            // open a file channel for reading and writing
            readChannel = FileChannel.open(inFile.toPath(), StandardOpenOption.READ);
            writeChannel = FileChannel.open(outFile.toPath(), StandardOpenOption.CREATE, StandardOpenOption.WRITE);
            long totalReadByteCount = 0L;
            long totalWriteByteCount = 0L;
            boolean readMore = true;
            while (readMore) {
                // read some bytes into the in-buffer; stop when the buffer is full or EOF is reached
                int readOp = 0;
                while (inBuffer.hasRemaining() && (readOp = readChannel.read(inBuffer)) != -1) {
                    totalReadByteCount += readOp;
                } // while
                // prepare the in-buffer to be consumed
                inBuffer.flip();
                // check if the end of the file was reached
                if (readOp == -1) {
                    // end of file reached, read no more
                    readMore = false;
                } // if
                // now consume the in-buffer while there are at least MIN_READ_BYTES in the buffer
                while (inBuffer.remaining() >= MIN_READ_BYTES) {
                    // add data to the write buffer
                    outBuffer.putLong(someOperation(inBuffer));
                } // while
                // compact the in-buffer and prepare for the next read, if we need to read more.
                // that way the possible remaining bytes of the in-buffer can be consumed after leaving the loop
                if (readMore) inBuffer.compact();
                // prepare the out-buffer to be consumed
                outBuffer.flip();
                // write the out-buffer until the buffer is empty
                while (outBuffer.hasRemaining())
                    totalWriteByteCount += writeChannel.write(outBuffer);
                // prepare the out-buffer for filling again
                outBuffer.clear();
            } // while
            // error handling
            if (inBuffer.hasRemaining()) {
                System.err.println("Truncated data! Not a long value! bytes remaining: " + inBuffer.remaining());
            } // if
            System.out.println("read total: " + totalReadByteCount + " bytes.");
            System.out.println("write total: " + totalWriteByteCount + " bytes.");
        } catch (IOException e) {
            System.out.println("Some error occurred while reading from File: " + e);
        } finally {
            if (readChannel != null) {
                try {
                    readChannel.close();
                } catch (IOException e) {
                    System.out.println("Could not close read channel: " + e);
                } // catch
            } // if
            if (writeChannel != null) {
                try {
                    writeChannel.close();
                } catch (IOException e) {
                    System.out.println("Could not close write channel: " + e);
                } // catch
            } // if
        } // finally
        return outFile;
    }

    private long someOperation(ByteBuffer bb) {
        // consume the buffer, do whatever you want with the buffer.
        return bb.getLong(); // consumes 8 bytes of the buffer.
    }

    public static void main(String[] args) {
        TestFile testFile = new TestFile();
        File source = new File("input.txt");
        testFile.readFromFile1(source);
    }
}