Memory Consumption by Java Applet

In my applet I have a GET call to download a file from a remote location. When I try to download a large file of around 13 MB, the applet's memory consumption increases by more than 50 MB. I am using the code below to measure memory consumption:
public static long getMemoryUsage()
{
    long memory = 0;
    // Get the Java runtime
    Runtime runtime = Runtime.getRuntime();
    memory = runtime.totalMemory() - runtime.freeMemory();
    return memory;
}
The code for my GET call is:
public void getFiles(String filePath, long fileSize) throws MyException
{
    InputStream objInputStream = null;
    HttpURLConnection conn = null;
    BufferedReader br = null;
    try
    {
        URL fileUrl = new URL(filePath);
        final String strAPICall = fileUrl.getPath();
        final String strHost = "some.test.com";
        final int iPort = 1000;
        URL url = null;
        url = new java.net.URL("https", strHost, iPort, "/" + strAPICall, new myHandler());
        conn = (HttpURLConnection) new HttpsURLConn(url);
        conn.setRequestMethod("GET");
        conn.connect();
        if (conn.getResponseCode() != 200) {
            objInputStream = conn.getInputStream();
            br = new BufferedReader(new InputStreamReader(objInputStream));
            String output;
            while ((output = br.readLine()) != null) {
                System.out.println(output);
            }
            throw new MyException("Bad response from server",
                    MyError.BAD_RESPONSE_ERROR);
        }
        else
        {
            notifyProgressToObservers(0);
            System.out.println("conn.getResponseCode()" + conn.getResponseCode());
            System.out.println("conn.getResponseMessage()" + conn.getResponseMessage());
            objInputStream = conn.getInputStream();
            int count = objInputStream.available();
            System.out.println("Stream size: " + count);
            System.out.println("fileSize size: " + fileSize);
            byte[] downloadedData = getBytesFromInputStream(objInputStream, count, fileSize);
            notifyChunkToObservers(downloadedData);
            notifyIndivisualFileEndToObservers(true, null);
        }
    }
    catch (MyException pm)
    {
        throw new MyException(pm, MyError.CONNECTION_TIMEOUT);
    }
    catch (IOException pm)
    {
        throw new MyException(pm, MyError.CONNECTION_TIMEOUT);
    }
    catch (Exception e)
    {
        notifyIndivisualFileEndToObservers(false, new MyException(e.toString()));
    }
    finally
    {
        System.out.println("Closing all the streams after getting file");
        if (conn != null)
        {
            try
            {
                conn.disconnect();
            }
            catch (Exception e)
            {
            }
        }
        if (objInputStream != null)
        {
            try {
                objInputStream.close();
            } catch (IOException e) {
            }
        }
        if (br != null)
        {
            try {
                br.close();
            } catch (IOException e) {
            }
        }
    }
}
In the above method, I logged the memory consumption after each line and found that after conn.connect(), the applet's memory consumption increases by at least 50 MB, even though the file I am trying to download is only 13 MB.
Is there a memory leak anywhere?
EDIT: Added Implementation for getBytesFromInputStream()
public byte[] getBytesFromInputStream(InputStream is, int len, long fileSize)
        throws IOException
{
    byte[] readBytes = new byte[8192];
    ByteArrayOutputStream getBytes = new ByteArrayOutputStream();
    int numRead = 0;
    while ((numRead = is.read(readBytes)) != -1) {
        getBytes.write(readBytes, 0, numRead);
    }
    return getBytes.toByteArray();
}

It's because of this line:
byte[] downloadedData = getBytesFromInputStream(objInputStream, count, fileSize);
Here you are reading the complete contents of the file into the heap. After that you need to track down what happens with this array: maybe you are copying it somewhere, and the GC needs some time to kick in even if you no longer use the reference to the object.
Large files should never be read completely into memory, but rather streamed directly to some processor of the data.
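As a rough illustration (a minimal sketch, not the original applet code: the file sink, method name and 8 KB buffer size are assumptions), the download could be handed off chunk by chunk so that only one small buffer ever lives on the heap:
import java.io.*;

// Sketch: stream the HTTP response straight to a consumer (here a file)
// instead of accumulating it in a byte[]. Only one 8 KB buffer is held in memory.
public static void streamToFile(InputStream in, File target) throws IOException {
    try (OutputStream out = new BufferedOutputStream(new FileOutputStream(target))) {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);   // hand each chunk off immediately
        }
    }
}
The same pattern works with any OutputStream or per-chunk callback; the point is that the full 13 MB never has to exist as a single array.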

The only way to optimize getBytesFromInputStream() is if you know beforehand exactly how many bytes there are to read. Then you allocate a byte[] of the required size and read from the input directly into that byte[]. For example:
byte[] buffer = new byte[len];
int pos = 0;
while (pos < len) {
    int nosRead = is.read(buffer, pos, len - pos);
    if (nosRead == -1) {
        throw new IOException("incomplete response");
    }
    pos += nosRead;
}
return buffer;
(For more information, read the javadoc.)
Unfortunately, your (apparent) attempt at getting the size is incorrect.
int count = objInputStream.available();
This doesn't return the total number of bytes that can be read from the stream. It returns the number of bytes that can be read right now without the possibility of blocking.
If the server sets the Content-Length header in the response, then you could use that: call getContentLength() (or getContentLengthLong() if the content may exceed 2 GB) once you have the response. But be prepared for the case where it returns -1.
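A minimal sketch of that approach against the question's getFiles() code (the fallback branch and the assumption that the file fits in a single int-sized array are mine, not part of the original answer):
// Sketch: size the array from Content-Length when the server provides it.
int declared = conn.getContentLength();          // -1 if the header is absent
byte[] downloadedData;
if (declared >= 0) {
    downloadedData = new byte[declared];
    int pos = 0;
    while (pos < declared) {
        int n = objInputStream.read(downloadedData, pos, declared - pos);
        if (n == -1) {
            throw new IOException("incomplete response");
        }
        pos += n;
    }
} else {
    // Length unknown: fall back to buffered copying, as in the question's code.
    downloadedData = getBytesFromInputStream(objInputStream, 0, fileSize);
}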

Related

Java Heap OutOfMemory on ByteArrayOutputStream for deflater zip bytes

I have a program that reads information from a database. Sometimes the message is bigger than expected, so before sending it to my broker I zip it with this code:
public static byte[] zipBytes(byte[] input) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream(input.length);
    OutputStream ej = new DeflaterOutputStream(bos);
    ej.write(input);
    ej.close();
    bos.close();
    return bos.toByteArray();
}
Recently I retrieved an 80 MB message from the DB, and when executing the code above an OutOfMemoryError was thrown on the ByteArrayOutputStream line. My Java program only has 512 MB for the whole process and I can't give it more memory.
How can I solve this?
This is not a duplicate question; I can't increase the heap size.
EDIT:
This is the flow of my Java code:
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) { // only rows with content > 60. I need to know the size of the message, so if I use rs.getBinaryStream I cannot know the size, can I?
        if (bt.length >= 10000000) {
            // here I need to zip the byte array before sending it, so
            bt = zipBytes(bt); // this method is above
            // then publish bt to my broker
        } else {
            // here publish byte array to my broker
        }
    }
}
EDIT
I've tried with PipedInputStream and the memory the process consumes is the same as with the zipBytes(byte[] input) method.
private InputStream zipInputStream(InputStream in) throws IOException {
    PipedInputStream zipped = new PipedInputStream();
    PipedOutputStream pipe = new PipedOutputStream(zipped);
    new Thread(
        () -> {
            try (OutputStream zipper = new DeflaterOutputStream(pipe)) {
                IOUtils.copy(in, zipper);
                zipper.flush();
            } catch (IOException e) {
                IOUtils.closeQuietly(zipped); // close it on error case only
                e.printStackTrace();
            } finally {
                IOUtils.closeQuietly(in);
                IOUtils.closeQuietly(zipped);
                IOUtils.closeQuietly(pipe);
            }
        }
    ).start();
    return zipped;
}
How can I compress my InputStream with Deflate?
EDIT
That information is sent to JMS in Universal Messaging Server by Software AG. This uses a Nirvana client (documentation: https://documentation.softwareag.com/onlinehelp/Rohan/num10-2/10-2_UM_webhelp/um-webhelp/Doc/java/classcom_1_1pcbsys_1_1nirvana_1_1client_1_1n_consume_event.html). Data is wrapped in nConsumeEvent objects, and the documentation shows only two ways to pass that information:
nConsumeEvent(String tag, byte[] data)
nConsumeEvent(String tag, Document adom)
https://documentation.softwareag.com/onlinehelp/Rohan/num10-5/10-5_UM_webhelp/index.html#page/um-webhelp%2Fco-publish_3.html%23
The code for connection is:
nSessionAttributes nsa = new nSessionAttributes("nsp://127.0.0.1:9000");
MyReconnectHandler rhandler = new MyReconnectHandler();
nSession mySession = nSessionFactory.create(nsa, rhandler);
if (!mySession.isConnected()) {
    mySession.init();
}
nChannelAttributes chaAttr = new nChannelAttributes();
chaAttr.setName("mychannel"); // This is a topic
nChannel myChannel = mySession.findChannel(chaAttr);
List<nConsumeEvent> messages = new ArrayList<nConsumeEvent>();
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo");
while (rs.next()) {
    byte[] bt = rs.getBytes("data");
    if (bt.length > 60) {
        nEventProperties prop = new nEventProperties();
        if (bt.length > 10000000) {
            bt = compressData(bt); // here I need to compress data without ByteArrayInputStream
            prop.put("isZip", "true");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        } else {
            prop.put("isZip", "false");
            nConsumeEvent ncon = new nConsumeEvent("1", bt);
            ncon.setProperties(prop);
            messages.add(ncon);
        }
    }
    nTransactionAttributes tattrib = new nTransactionAttributes(myChannel);
    nTransaction myTransaction = nTransactionFactory.create(tattrib);
    Vector<nConsumeEvent> m = new Vector<nConsumeEvent>(messages);
    myTransaction.publish(m);
    myTransaction.commit();
}
Because of the API restriction, at the end of the day I need to send the information as a byte array; if that is the only byte array in my code, that's OK. How can I compress the byte array, or the InputStream from rs.getBinaryStream(), in this implementation?
EDIT
The database server used is PostgreSQL v11.6
EDIT
I've applied the first solution from #VGR and it works fine.
Only one thing: the SELECT query runs inside a while(true) loop, like this:
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
    // all that implementation you know from this entire post
    Thread.sleep(10000);
}
So, a SELECT is executed every 3 seconds.
But I've done a test with my program running and the memory just increases on each pass. Why? If the information the database returns is the same on each request, shouldn't the memory stay like the first request? Or maybe I forgot to close a stream?
while (true) {
    rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid ResultSet
    while (rs.next()) {
        //byte[] bt = rs.getBytes("data");
        byte[] bt;
        try (BufferedInputStream source = new BufferedInputStream(
                rs.getBinaryStream("data"), 10_000_001)) {
            source.mark(this.zip + 1);
            boolean sendToBroker = true;
            boolean needCompression = true;
            for (int i = 0; i <= 10_000_000; i++) {
                if (source.read() < 0) {
                    sendToBroker = (i > 60);
                    needCompression = (i >= this.zip);
                    break;
                }
            }
            if (sendToBroker) {
                nEventProperties prop = new nEventProperties();
                // Rewind stream
                source.reset();
                if (needCompression) {
                    System.out.println("Message larger than expected. Compressing message");
                    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
                    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
                        IOUtils.copy(source, brokerStream);
                    }
                    bt = byteStream.toByteArray();
                    prop.put("zip", "true");
                } else {
                    bt = IOUtils.toByteArray(source);
                }
                System.out.println("size: " + bt.length);
                prop.put("host", this.host);
                nConsumeEvent ncon = new nConsumeEvent("" + rs.getInt("xid"), bt);
                ncon.setProperties(prop);
                messages.add(ncon);
            }
        }
    }
}
For example, this is the heap memory on two runs: the first used over 500 MB and the second (with the same information from the database) used over 1000 MB.
rs.getBytes("data") reads the entire 80 megabytes into memory at once. In general, if you are reading a potentially large amount of data, you don’t want to try to keep it all in memory.
The solution is to use getBinaryStream instead.
Since you need to know whether the total size is larger than 10,000,000 bytes, you have two choices:
Use a BufferedInputStream with a buffer of at least that size, which will allow you to use mark and reset in order to “rewind” the InputStream.
Read the data size as part of your query. You may be able to do this by using a Blob or using a function like LENGTH.
The first approach will use up 10 megabytes of program memory for the buffer, but that’s better than hundreds of megabytes:
while (rs.next()) {
    try (BufferedInputStream source = new BufferedInputStream(
            rs.getBinaryStream("data"), 10_000_001)) {
        source.mark(10_000_001);
        boolean sendToBroker = true;
        boolean needCompression = true;
        for (int i = 0; i <= 10_000_000; i++) {
            if (source.read() < 0) {
                sendToBroker = (i > 60);
                needCompression = (i >= 10_000_000);
                break;
            }
        }
        if (sendToBroker) {
            // Rewind stream
            source.reset();
            if (needCompression) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
Notice that no byte arrays and no ByteArrayOutputStreams are used. The actual data is not kept in memory, except for the 10 megabyte buffer.
The second approach is shorter, but I’m not sure how portable it is across databases:
while (rs.next()) {
    Blob data = rs.getBlob("data");
    long length = data.length();
    if (length > 60) {
        try (InputStream source = data.getBinaryStream()) {
            if (length >= 10000000) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}
Both approaches assume there is some API available for your “broker” which allows the data to be written to an OutputStream. I’ve assumed, for the sake of example, that it’s broker.getOutputStream().
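As a side note, the other option mentioned above (reading the data size as part of the query) can be sketched like this. The octet_length() call is an assumption that should hold for a PostgreSQL bytea column (the question mentions PostgreSQL 11.6); broker.getOutputStream() is the same placeholder used above:
// Sketch only: fetch the byte length in SQL so neither a buffer nor a Blob is needed.
rs = stmt.executeQuery("SELECT data, octet_length(data) AS len FROM tbl_alotofinfo");
while (rs.next()) {
    long length = rs.getLong("len");
    if (length > 60) {
        try (InputStream source = rs.getBinaryStream("data")) {
            if (length >= 10000000) {
                try (OutputStream brokerStream =
                        new DeflaterOutputStream(broker.getOutputStream())) {
                    source.transferTo(brokerStream);
                }
            } else {
                try (OutputStream brokerStream = broker.getOutputStream()) {
                    source.transferTo(brokerStream);
                }
            }
        }
    }
}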
Update
It appears you are required to create nConsumeEvent objects, and that class only allows its data to be specified as a byte array in its constructors.
That byte array is unavoidable, obviously. And since there is no way to know the exact number of bytes a compressed version will require, a ByteArrayOutputStream is also unavoidable. (It’s possible to avoid using that class, but the replacement would be essentially a reimplementation of ByteArrayOutputStream.)
But you can still read the data as an InputStream in order to reduce your memory usage. And when you aren’t compressing, you can still avoid ByteArrayOutputStream, thereby creating only one additional byte array.
So, instead of this, which is not possible for nConsumeEvent:
if (needCompression) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
It should be:
byte[] bt;
if (needCompression) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
} else {
    bt = source.readAllBytes();
}
prop.put("isZip", String.valueOf(needCompression));
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
Similarly, the second example should replace this:
if (length >= 10000000) {
    try (OutputStream brokerStream =
            new DeflaterOutputStream(broker.getOutputStream())) {
        source.transferTo(brokerStream);
    }
} else {
    try (OutputStream brokerStream = broker.getOutputStream()) {
        source.transferTo(brokerStream);
    }
}
with this:
byte[] bt;
if (length >= 10000000) {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
        source.transferTo(brokerStream);
    }
    bt = byteStream.toByteArray();
    prop.put("isZip", "true");
} else {
    bt = source.readAllBytes();
    prop.put("isZip", "false");
}
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);

Memory is not released while using MappedByteBuffer

I am trying to read one big file in chunks, so the read operation will be called multiple times with the offset as one of the parameters. The read works perfectly fine.
But the real problem starts when I try to delete the file after the read is complete: it throws an IOException.
I do not want to force garbage collection (System.gc()).
Read code:
public static GenericExcelRead ReadFileContent(String fileName, int offset, String status) throws IOException
{
    GenericExcelRead aGenericExcelRead = new GenericExcelRead();
    //FileInputStream fileStream = null;
    FileChannel fileChannel = null;
    MappedByteBuffer buffer;
    try (FileInputStream fileStream = new FileInputStream(fileName)) {
        fileChannel = fileStream.getChannel();
        buffer = null;
        if (status != "Completed")
        {
            if (fileChannel.size() >= (offset + 1048756))
            {
                buffer = fileChannel.map(FileChannel.MapMode.READ_ONLY, offset, 1048756);
                aGenericExcelRead.setStatus("Partial");
                aGenericExcelRead.setEndOffset(offset + 1048756);
            }
            else
            {
                buffer = fileChannel.map(FileChannel.MapMode.READ_ONLY, offset, (fileChannel.size() - offset));
                aGenericExcelRead.setStatus("Completed");
                aGenericExcelRead.setEndOffset((int) fileChannel.size());
            }
            byte[] b = new byte[buffer.remaining()];
            buffer.get(b);
            String encodedcontent = new String(Base64.encodeBase64(b));
            buffer.clear();
            fileChannel.close();
            aGenericExcelRead.setData(encodedcontent);
            fileStream.close();
        }
    } catch (IOException e) {
        throw new IOException("IO Exception/File not found");
    } finally {
        if (fileChannel != null)
            fileChannel.close();
    }
    return aGenericExcelRead;
}
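For reference, the same chunked read can be done without mapping at all. This is only a sketch (method name and signature are assumptions, not the original code), but it avoids holding a MappedByteBuffer, which on some platforms keeps the file locked until the mapping is garbage-collected:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Sketch: read one chunk at the given offset into a plain heap ByteBuffer.
// No mapping is created, so nothing pins the file after the channel closes.
static byte[] readChunk(String fileName, long offset, int chunkSize) throws IOException {
    try (FileChannel channel = FileChannel.open(Paths.get(fileName), StandardOpenOption.READ)) {
        long remaining = channel.size() - offset;
        int toRead = (int) Math.min(chunkSize, Math.max(0, remaining));
        ByteBuffer buffer = ByteBuffer.allocate(toRead);
        while (buffer.hasRemaining()) {
            if (channel.read(buffer, offset + buffer.position()) < 0) {
                break;   // end of file reached early
            }
        }
        buffer.flip();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        return bytes;
    }
}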

The gzip compressor is not going with the while loop

I am trying to query a database with Java JDBC and compress the data in one column into a gzip file in a specific directory. I have tested my JDBC query and it works fine, but the gzip code does not progress through the while loop; it runs the first row of the loop and gets stuck there. Why does it get stuck? Please help!
These folders already existed: D:\Directory\My\year\id1\id2
// Some JDBC query code here, it works well. I query all rows: Data, year, id1, id2, id3
while (myRs1.next()) {
    String str = Data;
    File myGzipFile = new File("D:\\Directory\\My\\" + year + "\\" + id1 + "\\" + id2 + "\\" + id3 + ".gzip");
    GZIPOutputStream gos = null;
    InputStream is = new ByteArrayInputStream(str.getBytes());
    gos = new GZIPOutputStream(new FileOutputStream(myGzipFile));
    byte[] buffer = new byte[1024];
    int len;
    while ((len = is.read(buffer)) != -1) {
        gos.write(buffer, 0, len);
        System.out.print("done for:" + id3);
    }
    try { gos.close(); } catch (IOException e) { }
}
Try formatting the source like this to catch exceptions.
public class InputStreamDemo {
    public static void main(String[] args) throws Exception {
        InputStream is = null;
        int i;
        char c;
        try {
            is = new FileInputStream("C://test.txt");
            System.out.println("Characters printed:");
            // reads till the end of the stream
            while ((i = is.read()) != -1)
            {
                // converts integer to character
                c = (char) i;
                // prints character
                System.out.print(c);
            }
        } catch (Exception e) {
            // if any I/O error occurs
            e.printStackTrace();
        } finally {
            // releases system resources associated with this stream
            if (is != null)
                is.close();
        }
    }
}

Reading large data from an inputstream some data missing at the end

Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Hello,
I am using HttpURLConnection to retrieve a JSON string from a web service. Then I get the InputStream from the connection:
jsonString = readJSONInputStream(mHttpUrlconnection.getInputStream());
I then use the following function to read the InputStream and get the JSON:
private String readJSONInputStream(final InputStream inputStream) {
    Reader reader = null;
    try {
        final int SIZE = 16024;
        char[] buffer = new char[SIZE];
        int bytesRead = 0;
        int read = 0;
        reader = new BufferedReader(new InputStreamReader(inputStream, "UTF-8"), SIZE);
        String line = "";
        String jsonString = "";
        while ((line = reader.readLine()) != null) {
            jsonString += line;
        }
        /* Success */
        return jsonString;
    }
    catch (IndexOutOfBoundsException ex) {
        log.log(Level.SEVERE, "UnsupportedEncodingexception: " + ex.getMessage());
    }
    catch (IOException ex) {
        log.log(Level.SEVERE, "IOException: " + ex.getMessage());
    }
    finally {
        /* close resources */
        try {
            reader.close();
            inputStream.close();
        }
        catch (IOException ex) {
            log.log(Level.SEVERE, "IOException: " + ex.getMessage());
        }
    }
    return null;
}
However, if the JSON is small, say 600 bytes, everything is OK. But I have some JSON that is about 15000 bytes in size, so I set the maximum size to 16024.
However, it only reads about ~6511 bytes of the JSON and just cuts off.
If the JSON is small (< 1000 bytes) there is no problem, but for the larger JSON it only reads about half of it.
The data is there, as I have tested this in a browser using an HTTP request plugin.
Am I doing anything wrong here? Anything I should check?
Many thanks for any suggestions.
Problem resolved: it was due to the logger not displaying all the information. Not really a problem after all.

InputStream read() blocks when server returns no input

I am working on a program for Android, which connects to a server via SSH to get some data.
The problem is that when a command is sent to the server that doesn't return anything (such as cat on an empty file), my program hangs, seemingly blocked by in.read().
I have a breakpoint on the line
if ((read = in.read(buffer)) != -1){
and on the then/else lines below it. If I debug it, the program breaks normally on the if-statement, but when I hit continue, the program just hangs again, and never makes it to the next breakpoint.
The program works normally if it actually gets a response from the server, but I'd like to protect my program from a hang if the server isn't cooperating properly.
I am using the J2SSH library.
public String command(String command) {
    command = command + "\n";
    if (session.getSessionType().equals("Uninitialized") || session.isClosed()) {
        openShell();
    }
    OutputStream out = session.getOutputStream();
    InputStream in = session.getInputStream();
    byte buffer[] = new byte[255];
    int read;
    String in1 = null;
    String fullOutput = "";
    try {
        try {
            out.write(command.getBytes());
        } catch (IOException e) {
            Log.e(TAG, "Error writing IO stream");
            e.printStackTrace();
        }
        boolean retrivingdata = true;
        while (retrivingdata) {
            String iStreamAvail = "Input Stream Available " + in.available();
            if ((read = in.read(buffer)) != -1) {
                retrivingdata = true;
            } else {
                retrivingdata = false;
                return null;
            }
            in1 = new String(buffer, 0, read);
            fullOutput = fullOutput + in1;
            if (read < 255) {
                break;
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return fullOutput;
}
Reading and writing should be done in separate threads. read() is a blocking method that waits until data is available from the server.
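A minimal sketch of that idea, using only standard java.util.concurrent and the InputStream obtained from the session as in the question (the method name and the timeout value are assumptions, not part of J2SSH):
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.*;

// Sketch: do the blocking read on a worker thread and give up after a timeout,
// so an empty response cannot hang the caller.
static String readWithTimeout(InputStream in, long timeoutSeconds) throws IOException {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<String> result = executor.submit(() -> {
        byte[] buffer = new byte[255];
        StringBuilder output = new StringBuilder();
        int read;
        while ((read = in.read(buffer)) != -1) {   // blocks here when nothing arrives
            output.append(new String(buffer, 0, read));
            if (read < 255) {
                break;                             // same heuristic as the question's code
            }
        }
        return output.toString();
    });
    try {
        return result.get(timeoutSeconds, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
        result.cancel(true);                       // give up; treat as "no output"
        return "";
    } catch (InterruptedException | ExecutionException e) {
        throw new IOException(e);
    } finally {
        executor.shutdownNow();
    }
}
Note that a plain InputStream.read() may not respond to interruption, so in practice the stream or the SSH session may also need to be closed from the timeout handler to unblock the worker thread.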
