I am getting a Fortify finding for "Unreleased resource stream" on the code below.
Resource[] l_objResource = resourceLoader.getResources(configErrorCode);
Properties l_objProperty = null;
for (int i = 0; i < l_objResource.length; i++) {
l_objProperty = new Properties();
l_objProperty.load(l_objResource[i].getInputStream());
}
The function loadErrorCode() in BaseErrorParser.java sometimes fails to release a system resource allocated by getInputStream();
Can anyone explain the finding or help fix the issue?
From a follow-up comment (the context is not entirely clear, JW):
ObjectInputStream l_objObjInputStream = null;
Map l_mapRet = null;
try {
l_objObjInputStream = new ObjectInputStream(new FileInputStream(p_objFilename));
Object l_objTemp = l_objObjInputStream.readObject();
l_mapRet = (Map) l_objTemp;
} finally {
if (l_objObjInputStream != null) {
l_objObjInputStream.close();
}
}
You are not closing the input stream opened by this line of code:
l_objResource[i].getInputStream();
The Fortify scanner usually reports an "Unreleased resource stream" issue when input or output streams are opened but not closed after use. The standard fix is to close every opened stream in a finally block, so the resource is released even when an exception is thrown.
You can wrap the code in a try-finally block and close the stream as shown below.
Resource[] l_objResource = resourceLoader.getResources(configErrorCode);
Properties l_objProperty = null;
for (int i = 0; i < l_objResource.length; i++) {
    l_objProperty = new Properties();
    InputStream is = null; // declared per iteration so a stale stream from a previous pass is never re-closed
    try {
        is = l_objResource[i].getInputStream();
        l_objProperty.load(is);
    } finally {
        if (is != null) {
            is.close();
        }
    }
}
Please check if it works in your case.
You can use try-with-resources here. It will automatically close your stream.
Map l_mapRet = null;
try (ObjectInputStream l_objObjInputStream = new ObjectInputStream(new FileInputStream(p_objFilename))){
Object l_objTemp = l_objObjInputStream.readObject();
l_mapRet = (Map) l_objTemp;
} catch (IOException | ClassNotFoundException e) {
    // Handle exception
}
Related
I have a program that reads information from a database. Sometimes the message is bigger than expected, so before sending it to my broker I zip it with this code:
public static byte[] zipBytes(byte[] input) throws IOException {
ByteArrayOutputStream bos = new ByteArrayOutputStream(input.length);
OutputStream ej = new DeflaterOutputStream(bos);
ej.write(input);
ej.close();
bos.close();
return bos.toByteArray();
}
Recently I retrieved an 80 MB message from the DB, and when my code above executed, an OutOfMemoryError was thrown on the ByteArrayOutputStream line. My Java program only has 512 MB of memory for the whole process, and I can't give it more.
How can I solve this?
This is not a duplicate question; I can't increase the heap size.
EDIT:
This is the flow of my Java code:
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid result set
while(rs.next()){
byte[] bt = rs.getBytes("data");
if (bt.length > 60) { // only rows with content > 60; I need to know the message size, so if I use rs.getBinaryStream I cannot know the size, can I?
if (bt.length >= 10000000) {
// here I need to zip the byte array before sending it, so
bt = zipBytes(bt); //this method is above
// then publish bt to my broker
} else {
//here publish byte array to my broker
}
}
}
EDIT
I've tried PipedInputStream, and the memory the process consumes is the same as with the zipBytes(byte[] input) method.
private InputStream zipInputStream(InputStream in) throws IOException {
PipedInputStream zipped = new PipedInputStream();
PipedOutputStream pipe = new PipedOutputStream(zipped);
new Thread(
() -> {
try(OutputStream zipper = new DeflaterOutputStream(pipe)){
IOUtils.copy(in, zipper);
zipper.flush();
} catch (IOException e) {
IOUtils.closeQuietly(zipped); // close it on error case only
e.printStackTrace();
} finally {
IOUtils.closeQuietly(in);
IOUtils.closeQuietly(pipe);
}
}
).start();
return zipped;
}
How can I compress my InputStream with Deflater?
EDIT
That information is sent over JMS to a Universal Messaging server by Software AG, using the Nirvana client (documentation: https://documentation.softwareag.com/onlinehelp/Rohan/num10-2/10-2_UM_webhelp/um-webhelp/Doc/java/classcom_1_1pcbsys_1_1nirvana_1_1client_1_1n_consume_event.html ). Data is wrapped in nConsumeEvent objects, and the documentation shows only two ways to supply that information:
nConsumeEvent (String tag, byte[] data)
nConsumeEvent (String tag, Document adom)
https://documentation.softwareag.com/onlinehelp/Rohan/num10-5/10-5_UM_webhelp/index.html#page/um-webhelp%2Fco-publish_3.html%23
The code for connection is:
nSessionAttributes nsa = new nSessionAttributes("nsp://127.0.0.1:9000");
MyReconnectHandler rhandler = new MyReconnectHandler();
nSession mySession = nSessionFactory.create(nsa, rhandler);
if(!mySession.isConnected()){
mySession.init();
}
nChannelAttributes chaAttr = new nChannelAttributes();
chaAttr.setName("mychannel"); //This is a topic
nChannel myChannel = mySession.findChannel(chaAttr);
List<nConsumeEvent> messages = new ArrayList<nConsumeEvent>();
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo");
while (rs.next()) {
byte[] bt = rs.getBytes("data");
if(bt.length > 60){
nEventProperties prop = new nEventProperties();
if(bt.length > 10000000){
bt = compressData(bt); //here a need compress data without ByteArrayInputStream
prop.put("isZip", "true");
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
} else {
prop.put("isZip", "false");
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
}
}
nTransactionAttributes tattrib = new nTransactionAttributes(myChannel);
nTransaction myTransaction = nTransactionFactory.create(tattrib);
Vector<nConsumeEvent> m = new Vector<nConsumeEvent>(messages);
myTransaction.publish(m);
myTransaction.commit();
}
Because of the API, at the end of the day I need to send the information as a byte array; if that is the only byte array in my code, that's OK. How can I compress the byte array, or the InputStream from rs.getBinaryStream(), in this implementation?
EDIT
The database server used is PostgreSQL v11.6
EDIT
I've applied the first solution from @VGR and it works fine.
One thing, though: the SELECT query runs in a while(true) loop, like this:
while(true){
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid result set
// all that implementation you know for this entire post
Thread.sleep(10000);
}
So a SELECT executes every 10 seconds.
But testing with my program running, the memory just increases on each pass. Why? If the database returns the same information on each request, shouldn't the memory stay as it was after the first request? Or did I maybe forget to close a stream?
while(true){
rs = stmt.executeQuery("SELECT data FROM tbl_alotofinfo"); // rs is a valid result set
while(rs.next()) {
//byte[] bt = rs.getBytes("data");
byte[] bt;
try (BufferedInputStream source = new BufferedInputStream(
rs.getBinaryStream("data"), 10_000_001)) {
source.mark(this.zip+1);
boolean sendToBroker = true;
boolean needCompression = true;
for (int i = 0; i <= 10_000_000; i++) {
if (source.read() < 0) {
sendToBroker = (i > 60);
needCompression = (i >= this.zip);
break;
}
}
if (sendToBroker) {
nEventProperties prop = new nEventProperties();
// Rewind stream
source.reset();
if (needCompression) {
System.out.println("Tamaño del mensaje mayor al esperado. Comprimiendo mensaje");
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
IOUtils.copy(source, brokerStream);
}
bt = byteStream.toByteArray();
prop.put("zip", "true");
} else {
bt = IOUtils.toByteArray(source);
}
System.out.println("size: "+bt.length);
prop.put("host", this.host);
nConsumeEvent ncon = new nConsumeEvent(""+rs.getInt("xid"), bt);
ncon.setProperties(prop);
messages.add(ncon);
}
}
}
}
For example, here is the heap memory at two points in time: the first pass used above 500 MB, and the second (with the same information from the database) used above 1000 MB.
rs.getBytes("data") reads the entire 80 megabytes into memory at once. In general, if you are reading a potentially large amount of data, you don’t want to try to keep it all in memory.
The solution is to use getBinaryStream instead.
Since you need to know whether the total size is larger than 10,000,000 bytes, you have two choices:
Use a BufferedInputStream with a buffer of at least that size, which will allow you to use mark and reset in order to “rewind” the InputStream.
Read the data size as part of your query. You may be able to do this by using a Blob or using a function like LENGTH.
The first approach will use up 10 megabytes of program memory for the buffer, but that’s better than hundreds of megabytes:
while (rs.next()) {
try (BufferedInputStream source = new BufferedInputStream(
rs.getBinaryStream("data"), 10_000_001)) {
source.mark(10_000_001);
boolean sendToBroker = true;
boolean needCompression = true;
for (int i = 0; i <= 10_000_000; i++) {
if (source.read() < 0) {
sendToBroker = (i > 60);
needCompression = (i >= 10_000_000);
break;
}
}
if (sendToBroker) {
// Rewind stream
source.reset();
if (needCompression) {
try (OutputStream brokerStream =
new DeflaterOutputStream(broker.getOutputStream())) {
source.transferTo(brokerStream);
}
} else {
try (OutputStream brokerStream =
broker.getOutputStream()) {
source.transferTo(brokerStream);
}
}
}
}
}
Notice that no byte arrays and no ByteArrayOutputStreams are used. The actual data is not kept in memory, except for the 10 megabyte buffer.
The second approach is shorter, but I’m not sure how portable it is across databases:
while (rs.next()) {
Blob data = rs.getBlob("data");
long length = data.length();
if (length > 60) {
try (InputStream source = data.getBinaryStream()) {
if (length >= 10000000) {
try (OutputStream brokerStream =
new DeflaterOutputStream(broker.getOutputStream())) {
source.transferTo(brokerStream);
}
} else {
try (OutputStream brokerStream =
broker.getOutputStream()) {
source.transferTo(brokerStream);
}
}
}
}
}
Both approaches assume there is some API available for your “broker” which allows the data to be written to an OutputStream. I’ve assumed, for the sake of example, that it’s broker.getOutputStream().
Update
It appears you are required to create nConsumeEvent objects, and that class only allows its data to be specified as a byte array in its constructors.
That byte array is unavoidable, obviously. And since there is no way to know the exact number of bytes a compressed version will require, a ByteArrayOutputStream is also unavoidable. (It’s possible to avoid using that class, but the replacement would be essentially a reimplementation of ByteArrayOutputStream.)
But you can still read the data as an InputStream in order to reduce your memory usage. And when you aren’t compressing, you can still avoid ByteArrayOutputStream, thereby creating only one additional byte array.
So, instead of this, which is not possible for nConsumeEvent:
if (needCompression) {
try (OutputStream brokerStream =
new DeflaterOutputStream(broker.getOutputStream())) {
source.transferTo(brokerStream);
}
} else {
try (OutputStream brokerStream =
broker.getOutputStream()) {
source.transferTo(brokerStream);
}
}
It should be:
byte[] bt;
if (needCompression) {
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
source.transferTo(brokerStream);
}
bt = byteStream.toByteArray();
} else {
bt = source.readAllBytes();
}
prop.put("isZip", String.valueOf(needCompression));
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
Similarly, the second example should replace this:
if (length >= 10000000) {
try (OutputStream brokerStream =
new DeflaterOutputStream(broker.getOutputStream())) {
source.transferTo(brokerStream);
}
} else {
try (OutputStream brokerStream =
broker.getOutputStream()) {
source.transferTo(brokerStream);
}
}
with this:
byte[] bt;
if (length >= 10000000) {
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
try (OutputStream brokerStream = new DeflaterOutputStream(byteStream)) {
source.transferTo(brokerStream);
}
bt = byteStream.toByteArray();
prop.put("isZip", "true");
} else {
bt = source.readAllBytes();
prop.put("isZip", "false");
}
nConsumeEvent ncon = new nConsumeEvent("1", bt);
ncon.setProperties(prop);
messages.add(ncon);
Below is the code snippet.
FileInputStream fin = new FileInputStream(zipFile);
ZipInputStream zin = new ZipInputStream(fin);
ZipEntry entry = null;
String routerListUCM = "";
try {
entries:
while ((entry = zin.getNextEntry()) != null) {
if (entry.getName().startsWith("routes")) {
BufferedReader in = new BufferedReader(new InputStreamReader(zin, "UTF-8"));
if (true) {
//parse the xml of the route...
DOMParser dp = new DOMParser();
dp.parse(in);
Element e = (Element) dp.getDocument().getFirstChild();
String transferid = e.getElementsByTagName("transferId").item(0).getTextContent();
System.out.println("transferId=" + transferid);
int fileid = Integer.parseInt(transferid.split("-")[1]);
System.out.println("fileid=" + transferid);
String userList = e.getElementsByTagName("userList").item(0).getTextContent();
System.out.println("userList=" + userList);
String routeList = e.getElementsByTagName("routeList").item(0).getTextContent();
System.out.println("routeList=" + routeList);
routerListUCM = routeList;
if (routeList.toLowerCase().indexOf(myname().toLowerCase()) == -1) {
//my server is not in the current route...
//so skip this route table.
continue entries;
}
}
}
}
} catch (Exception e) {
System.err.println(e.getMessage());
e.printStackTrace();
}
In some cases, after "continue entries;" and trying the next loop iteration, I see a "stream closed" exception:
error: Stream closed
Stack Trace:
java.io.IOException: Stream closed
at java.util.zip.ZipInputStream.ensureOpen(ZipInputStream.java:67)
at java.util.zip.ZipInputStream.getNextEntry(ZipInputStream.java:116)
at org.parsisys.test.mina.view.SimpleFtplet$beaVersion0_1155.isTransferFinished(SimpleFtplet.java:299)
at org.parsisys.test.mina.view.SimpleFtplet.isTransferFinished(SimpleFtplet.java)
at org.parsisys.test.mina.view.SimpleFtplet.beaAccessisTransferFinished(SimpleFtplet.java)
at org.parsisys.test.mina.view.SimpleFtplet$beaVersion0_1155.onUploadEnd(SimpleFtplet.java:208)
at org.parsisys.test.mina.view.SimpleFtplet.onUploadEnd(SimpleFtplet.java)
at org.apache.ftpserver.ftplet.DefaultFtplet.afterCommand(DefaultFtplet.java:89)
at org.parsisys.test.mina.view.SimpleFtplet.afterCommand(SimpleFtplet.java)
at org.apache.ftpserver.ftpletcontainer.impl.DefaultFtpletContainer.afterCommand(DefaultFtpletContainer.java:144)
at org.apache.ftpserver.impl.DefaultFtpHandler.messageReceived(DefaultFtpHandler.java:220)
at org.apache.ftpserver.listener.nio.FtpHandlerAdapter.messageReceived(FtpHandlerAdapter.java:61)
at org.apache.mina.core.filterchain.DefaultIoFilterChain$TailFilter.messageReceived(DefaultIoFilterChain.java:716)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:46)
at org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:796)
at org.apache.ftpserver.listener.nio.FtpLoggingFilter.messageReceived(FtpLoggingFilter.java:85)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:46)
at org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:796)
at org.apache.mina.core.filterchain.IoFilterEvent.fire(IoFilterEvent.java:75)
at org.apache.mina.filter.logging.MdcInjectionFilter.filter(MdcInjectionFilter.java:136)
at org.apache.mina.filter.util.CommonEventFilter.messageReceived(CommonEventFilter.java:70)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:46)
at org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:796)
at org.apache.mina.filter.codec.ProtocolCodecFilter$ProtocolDecoderOutputImpl.flush(ProtocolCodecFilter.java:427)
at org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:245)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:46)
at org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:796)
at org.apache.mina.core.filterchain.IoFilterEvent.fire(IoFilterEvent.java:75)
at org.apache.mina.core.session.IoEvent.run(IoEvent.java:63)
at org.apache.mina.filter.executor.OrderedThreadPoolExecutor$Worker.runTask(OrderedThreadPoolExecutor.java:780)
at org.apache.mina.filter.executor.OrderedThreadPoolExecutor$Worker.runTasks(OrderedThreadPoolExecutor.java:772)
at org.apache.mina.filter.executor.OrderedThreadPoolExecutor$Worker.run(OrderedThreadPoolExecutor.java:714)
at java.lang.Thread.run(Thread.java:748)
Please help me.
It seems that your outermost BufferedReader object closes your nested streams (in particular the ZipInputStream). Try moving your BufferedReader initialization above the looping logic.
This topic might also be helpful: closing nested streams.
Update:
OK, now everything is clear. The implementation of the DOMParser class shows that its parse method closes the underlying InputStream (an excerpt of its finally block):
} finally {
this.parser.reader.close();
}
What can be done in this situation is to hack the BufferedReader that is passed to the DOMParser object. Here's an example:
public class HackedReader extends BufferedReader {
public HackedReader(InputStreamReader inputStreamReader) {
super(inputStreamReader);
}
@Override
public void close() {
// close() deliberately does nothing; that is the whole point of the override.
}
// But you know exact method which will close your underlying stream.
public void hackedClose() throws IOException {
super.close();
}
}
I found that org.apache.poi.util.CloseIgnoringInputStream worked for me. I was able to wrap the ZipInputStream I was passing into another method.
For example:
ExcelUtility.getLineCount(new CloseIgnoringInputStream(zipStream));
If you can implement logic to detect that there are no more "routes" entries to read, you can insert a break at the end of the if block to exit the while loop and avoid attempting to read the closed stream.
My application streams Twitter data and writes it to files.
while(true){
Status status = queue.poll();
if (status == null) {
Thread.sleep(100);
}
if(status!=null){
list.add(status);
}
if(list.size()==10){
FileOutputStream fos = null;
ObjectOutputStream out = null;
try {
String uuid = UUID.randomUUID().toString();
String filename = "C:/path/"+topic+"-"+uuid+".ser";
fos = new FileOutputStream(filename);
out = new ObjectOutputStream(fos);
out.writeObject(list);
tweetsDownloaded += list.size();
if(tweetsDownloaded % 100==0)
System.out.println(tweetsDownloaded+" tweets downloaded");
// System.out.println("File: "+filename+" written.");
out.close();
} catch (IOException e) {
e.printStackTrace();
}
list.clear();
}
}
I have this code which gets data from files.
while(true){
File[] files = folder.listFiles();
if(files != null){
Arrays.sort(//sorting...);
//Here we manage each single file, from data-load until the deletion
for(int i = 0; i<files.length; i++){
loadTweets(files[i].getAbsolutePath());
//TODO manageStatuses
files[i].delete();
statusList.clear();
}
}
}
The method loadTweets() does the following operations:
private static void loadTweets(String filename) {
FileInputStream fis = null;
ObjectInputStream in = null;
try{
fis = new FileInputStream(filename);
in = new ObjectInputStream(fis);
statusList = (List<Status>) in.readObject();
in.close();
}
catch(IOException | ClassNotFoundException ex){
ex.printStackTrace();
}
}
Unfortunately, I don't know why it sometimes throws an
EOFException
when running this line:
statusList = (List<Status>) in.readObject();
Anybody knows how I can solve this? Thank you.
I've seen that you're passing the file correctly with getAbsolutePath(), based on a previous question of yours.
From what I've read, this can be caused by a couple of things, one of them being the file having no content.
To explain this idea: you might have written the file, but something caused it to end up with nothing inside, and deserializing an empty file causes an EOFException. The file does in fact exist; it's just empty.
EDIT
Try to enclose the code in while(in.available() > 0)
It would look like this
private static void loadTweets(String filename) {
FileInputStream fis = null;
ObjectInputStream in = null;
try{
fis = new FileInputStream(filename);
in = new ObjectInputStream(fis);
while(in.available() > 0) {
statusList = (List<Status>) in.readObject();
}
in.close();
}
catch(IOException | ClassNotFoundException ex){
ex.printStackTrace();
}
}
I found out what was necessary to solve this. Thanks to @VGR's comment, I pause the executing thread for 0.2 seconds if the file was modified less than a second ago.
if(System.currentTimeMillis()-files[i].lastModified()<1000){
Thread.sleep(200);
}
This prevents the exception, and the application now works fine.
Edit: I forgot to mention I'm using Java 6.
I was wondering how to close resources in Java.
I have always initialized streams like this:
ZipInputStream zin = null;
try {
zin = new ZipInputStream(new BufferedInputStream(new FileInputStream(file)));
// Work with the entries...
// Exception handling
} finally {
if (zin!=null) { try {zin.close();} catch (IOException ignored) {} }
}
But if an exception is thrown in new ZipInputStream(...), would the streams opened by new BufferedInputStream and the underlying FileInputStream be leaked?
If they are, what is the most efficient way to ensure the resources are closed? Should I keep a reference to each new ...Stream and close them in the finally block as well, or should the final stream (the ZipInputStream in this case) be instantiated in some other way?
Any comments are welcome.
You can do
try (InputStream s1 = new FileInputStream("file");
InputStream s2 = new BufferedInputStream(s1);
ZipInputStream zin = new ZipInputStream(s2)) {
// ...
} catch (IOException e) {
// ...
}
Further reading: The Java™ Tutorials: The try-with-resources Statement.
It can be done in this way:
BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file));
try {
ZipInputStream zin = new ZipInputStream(bis);
try {
// Work with the entries...
// Exception handling
} finally {
zin.close();
}
} finally {
bis.close();
}
And you can add error caching where you want.
First, let's take a look at what you have and what can go wrong with it:
try {
zin = new ZipInputStream(new BufferedInputStream(new FileInputStream(file)));
// Work with the entries...
// Exception handling
} finally {
if (zin!=null) { try {zin.close();} catch (IOException ignored) {} }
}
a.) new FileInputStream() throws, zin will not be assigned. Nothing to close in this case. Ok.
b.) new BufferedInputStream() throws (possibly OutOfMemoryError), zin not assigned. Leaked FileInputStream(). Bad.
c.) new ZipInputStream() throws, zin will not be assigned. BufferedInputStream and FileInputStream to close. Closing either would be enough. Bad.
Whenever you wrap one stream in another, you are in danger of leaking the stream you're wrapping. You need to keep a reference to it and close it somewhere.
A viable way to do this is to declare a single InputStream variable to hold the last created stream (in other words, the outermost of the nested streams):
InputStream input = null;
try {
input = new FileInputStream(...);
input = new BufferedInputStream(input);
input = new ZipInputStream(input);
ZipInputStream zin = (ZipInputStream) input;
// work here
} finally {
if (input != null)
try { input.close(); } catch (IOException ignored) {}
}
This works because, if any of the new *Stream() calls throws, the variable input still refers to the stream created before it. The ugly cast from input to ZipInputStream is necessary because input must be declared with a type that is assignment-compatible with all of the streams created.
Yes, an Exception in new ZipInputStream() or new BufferedInputStream() would leak the enclosed Streams, unless you do a cascading check in the exception handling:
FileInputStream fin = null;
BufferedInputStream bin = null;
ZipInputStream zin = null;
try {
fin = new FileInputStream(file);
bin = new BufferedInputStream(fin);
zin = new ZipInputStream(bin);
// Work with the entries...
// Exception handling
} finally {
try {
if (zin!=null) {
zin.close();
} else if (bin != null) {
bin.close();
} else if (fin != null) {
fin.close();
}
} catch (Exception e) {
// ignore
}
}
However, since BufferedInputStream and ZipInputStream are mere wrappers around the FileInputStream, the probability of an exception there is rather low. If anything, an exception is most likely to happen once you start reading and processing data, and in that case zin has been created, so a zin.close() will suffice.
private static void deleteProxy(File proxyOld, String host, int port) {
try {
String lines, tempAdd;
boolean removeLine = false;
File proxyNew = new File("proxies_" + "cleaner$tmp");
BufferedReader fileStream = new BufferedReader(new InputStreamReader(new FileInputStream(proxyOld)));
BufferedWriter replace = new BufferedWriter(new FileWriter(proxyNew));
while ((lines = fileStream.readLine()) != null) {
tempAdd = lines.trim();
if (lines.trim().equals(host + ":" + port)) {
removeLine = true;
}
if (!removeLine) {
replace.write(tempAdd);
replace.newLine();
}
}
fileStream.close();
replace.close();
proxyOld.delete();
proxyNew.renameTo(proxyOld);
} catch (Exception e) {
e.printStackTrace();
}
}
Calling the function:
File x = new File("proxies.txt");//is calling a new file the reason why it's being flushed out?
deleteProxy(x, host, port);
Before I run the program, the file proxies.txt has data inside it. However, when I run the program it appears to be flushed out; it becomes empty.
I noticed that while the program is running, if I move my mouse over the file proxies.txt, Windows displays the "Date Modified", and the time shown is the current time, i.e. the last time the function deleteProxy(...) was executed.
Does anyone know what I'm doing wrong? And why won't the list update instead of appearing to be empty?
Updated code:
private static void deleteProxy(File proxyOld, String host, int port) {
try {
String lines, tempAdd;
boolean removeLine = false;
File proxyNew = new File("proxies_" + "cleaner$tmp");
FileInputStream in = new FileInputStream(proxyOld);
InputStreamReader read = new InputStreamReader(in);
BufferedReader fileStream = new BufferedReader(read);
FileWriter write = new FileWriter(proxyNew);
BufferedWriter replace = new BufferedWriter(write);
while ((lines = fileStream.readLine()) != null) {
tempAdd = lines.trim();
if (lines.trim().equals(host + ":" + port)) {
removeLine = true;
}
if (!removeLine) {
replace.write(tempAdd);
replace.newLine();
}
}
in.close();
read.close();
fileStream.close();
write.close();
replace.close();
if (!proxyOld.delete()) {
throw new Exception("Error deleting " + proxyOld);
}
if (!proxyNew.renameTo(proxyOld)) {
throw new Exception("Error renaming " + proxyOld);
}
} catch (Exception e) {
e.printStackTrace();
}
}
Running the updated code, it deletes proxies.txt just fine, but it fails to create the new file.
Maybe I should find a new way to update a text file; do you have any suggestions?
Your rename operation may not work, as per the File.renameTo() documentation:
Many aspects of the behavior of this method are inherently platform-dependent: The rename operation might not be able to move a file from one filesystem to another, it might not be atomic, and it might not succeed if a file with the destination abstract pathname already exists. The return value should always be checked to make sure that the rename operation was successful.
So basically, you're wiping your old file, and you're not guaranteed the new file will take its place. You must check the return value of File.renameTo():
if (!proxyNew.renameTo(proxyOld)) {
throw new Exception("Could not rename proxyNew to proxyOld");
}
As for why your renameTo may be failing: you're not closing the nested set of streams that you open to read from the old file, so the operating system may still consider the file to be open. Make sure you close all of the nested streams you open.
Try this:
FileInputStream in = new FileInputStream(proxyOld);
BufferedReader fileStream = new BufferedReader(new InputStreamReader(in));
...
in.close();