Java synchronization issue: PrintWriter slower than other operations

I am pretty new to thread programming in Java, and I am currently building an application which, among other things, takes a number of SQL scripts and runs them.
For each file it performs the call and, if an Exception is thrown, it catches it and writes the relevant info into a log file using a PrintWriter constructed with a FileWriter.
All of this happens inside a for loop.
The problem is that the file writing, which is slower than the rest of the operations, does not complete: the process finishes before the writing is over, so the file ends up incomplete.
I have tried using synchronized blocks, wait() and notify(), but with no success so far. Here is an excerpt of the code:
boolean waiting_for_end_of_file_writing = true;
FileWriter fw = new FileWriter(FICHIER_LOG_ERREURS_SQL, true);
PrintWriter pw = new PrintWriter(fw);
for (int j = 0; j < input_paths_sql.length; j++) {
    System.out.println("Script " + m + " dont " + input_paths_sql.length
            + " avec nom " + oftp.get_locals()[j]
            + " à exécuter");
    try {
        Generic_library.Call_Fichier_SQL(oftp.get_locals()[j],
                ojdbc.get_sybase_connection());
    } catch (IOException ex) {
        Logger.getLogger(Form_table_clients.class.getName()).log(Level.SEVERE, null, ex);
    } catch (SQLException ex) {
        pw.write("Exception en fichier " + oftp.get_locals()[j] + "\r\n");
        ex.printStackTrace(pw);
        pw.write("\r\n");
        Logger.getLogger(Form_table_clients.class.getName()).log(Level.SEVERE, null, ex);
    }
    m++;
}
synchronized (pw) {
    pw.write(" ----------------- END OF UPDATE PROCESS ----------------- \r\n");
    waiting_for_end_of_file_writing = false;
    pw.notify();
}
synchronized (pw) {
    try {
        while (waiting_for_end_of_file_writing)
            pw.wait();
    } catch (InterruptedException ex) {
        Logger.getLogger(Form_table_clients.class.getName()).log(Level.SEVERE, null, ex);
    }
}
return success;
Some context:
- Generic_library.Call_Fichier_SQL() takes the path and the Connection object to the database, and uses them with a CallableStatement to call the script.
My goal is to block the thread BEFORE IT REACHES the "return success" line that finishes the method, until pw has completed all the writing and eventually performed the line
pw.write(" ----------------- END OF UPDATE PROCESS ----------------- \r\n");
Otherwise, as described above, the log file ends up incomplete.
Thanks for any help you could give me. Likewise, if anyone can come up with an idea for sidestepping the problem (maybe by using some thread-safe way to write to the file, for instance), that would work too.

When you finish writing, you need to flush() it, or preferably close() it. If you leave the application running, it can flush and clean up the resource itself (at some random time), but you should always do this yourself so you know it is done.
In short, always close Stream/Reader/Writer/Statement/Connection/ResultSet when you are finished with them (in fact, anything that can be close()d).
In your case, I would remove both synchronized blocks and use pw.flush();
While println() is not fast, it should be 10x - 100x faster than an SQL query against a JDBC database.
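To make that concrete, here is a minimal, self-contained sketch (runScript is a hypothetical stand-in for Generic_library.Call_Fichier_SQL, and the file names are placeholders): with try-with-resources the PrintWriter is flushed and closed before the method returns, so the wait()/notify() machinery is unnecessary.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.SQLException;

public class SqlScriptLogSketch {

    // Hypothetical stand-in for Generic_library.Call_Fichier_SQL(...)
    static void runScript(String path) throws SQLException {
        throw new SQLException("demo failure for " + path);
    }

    // try-with-resources guarantees the PrintWriter is flushed and closed
    // before this method returns, so the log file cannot end up truncated.
    public static boolean runAll(String[] scriptPaths, String logPath) {
        try (FileWriter fw = new FileWriter(logPath, true);   // append mode
             PrintWriter pw = new PrintWriter(fw)) {
            for (String path : scriptPaths) {
                try {
                    runScript(path);
                } catch (SQLException ex) {
                    pw.write("Exception en fichier " + path + "\r\n");
                    ex.printStackTrace(pw);
                    pw.write("\r\n");
                }
            }
            pw.write(" ----------------- END OF UPDATE PROCESS ----------------- \r\n");
        } catch (IOException ioe) {
            ioe.printStackTrace();
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        runAll(new String[] { "script1.sql" }, "errors.log");
    }
}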

Related

Program design when using BufferedWriter, do I repeatedly open and close file?

I have a program that does a lot of processing with loops and writes strings to a file at many different points. I'm not sure about the overall design for how best to do this. I won't need to read from the file at any point while it is running, though I will want to view it afterwards.
Firstly, is a BufferedWriter with FileWriter a reasonable way of doing this?
Secondly, presumably I don't want to be opening and closing this every time I want to write something (several times per second).
But if I use try with resources then I'd have to put practically the entire program inside that try, is this normal?
At the moment the skeleton looks like:
try (FileWriter writer = new FileWriter("filename.txt");
     BufferedWriter bw = new BufferedWriter(writer)) {
} catch (IOException e) {
    //catch IO error
}

for (//main loop){
    bw.write(string);
    for (//several sub loops){
        bw.write(//more strings);
    }
    for (//several sub loops){
        bw.write(//more strings);
    }
}
bw.write(//final string);

try {
    bw.close();
} catch (IOException ex) {
    //catch IO error
}
Does this look reasonable or is there a better way? Thanks very much in advance for the help.
Edit - thanks to you all for the help, totally answered my questions.
Firstly, is a BufferedWriter with FileWriter a reasonable way of doing this?
Yes, it should be the most convenient way to do this.
Secondly, presumably I don't want to be opening and closing this every time I want to write something (several times per second).
You really shouldn't. But you would actually overwrite your progress every time you open the file this way anyway. That's because you didn't tell the FileWriter to append to an existing file (via new FileWriter("filename.txt", true)).
But if I use try with resources then I'd have to put practically the entire program inside that try, is this normal?
I don't see a problem with that. You can (and should) always move your logic into its own methods or classes, which can return the Strings to write. This way the actual business logic is separated from the technical file-writing logic, structuring your code and making it easier to understand.
You could also just write into one giant String and then write that String in the try-with-resources block. But that has its limits with really big files and may not always be the best choice.
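For illustration, a minimal sketch of that separation (buildLines is a made-up placeholder for the real loop logic; the file name is taken from the question): the business logic just returns the lines, and the try-with-resources block only does the writing. Note the second FileWriter argument, which turns on append mode.

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class ReportSketch {

    // Placeholder for the real loop logic: it just returns the lines to write.
    static List<String> buildLines() {
        return Arrays.asList("header", "row 1", "row 2", "final line");
    }

    public static void main(String[] args) {
        // The second constructor argument 'true' makes the FileWriter append
        // instead of overwriting the file on every run.
        try (BufferedWriter bw = new BufferedWriter(new FileWriter("filename.txt", true))) {
            for (String line : buildLines()) {
                bw.write(line);
                bw.newLine();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}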
It is totally OK to put the whole code into a try-catch block. Whenever there is a problem writing to the file it will simply be caught rather than surfacing as an error. However, I would recommend trying this structure, with just one try-catch block:
try (FileWriter writer = new FileWriter("filename.txt");
     BufferedWriter bw = new BufferedWriter(writer)) {
    for (/*main loop*/) {
        bw.write(string);
        for (/*several sub loops*/) {
            bw.write(/*more strings*/);
        }
        for (/*several sub loops*/) {
            bw.write(/*more strings*/);
        }
    }
    bw.write(/*final string*/);
    // no explicit bw.close() needed: try-with-resources closes it
} catch (IOException e) {
    System.out.println("error");
}
PS: If you need to put a comment in the middle of a line of code, use /* comment */ rather than //, because // comments out the rest of the line.
But if I use try with resources then I'd have to put practically the entire program inside that try, is this normal?
That's just how try-with-resources works: it closes the resources on exiting the try block. If that bothers you, don't use that construct and manage the writer yourself.
The skeleton above will not work, as the first try will open and immediately close your writers.
Here is an alternative that does finer-grained exception handling. In many cases this is preferred. Having one catch block handle too many exceptions gets very confusing: control flow is obscured, and diagnosing errors can be a lot harder.
Having a file open for the entire time a program is running is quite usual. This is often the case for log files. If you know your program will be running for a long time, and if you suspect there will be long delays between writes to a single file, you could open and close the file for each batch of closely timed operations. But you would need a clear idea of the pattern of activity to do this, as you will want to match the time the file is open with the expected close-in-time batches of writes. You should very much avoid high-frequency open and close operations; that has all sorts of unwanted extra overhead.
package my.tests;

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.function.Consumer;

public class WriterTest {

    public static final String TARGET_NAME = "filename.txt";

    public void performMainLoop() {
        performWrites(this::mainLoop, TARGET_NAME);
    }

    public void performWrites(Consumer<Writer> writeActor, String targetName) {
        FileWriter fileWriter;
        try {
            fileWriter = new FileWriter(targetName);
        } catch (IOException e) {
            System.out.println("Open failure: " + e.getMessage());
            e.printStackTrace();
            return;
        }

        BufferedWriter bufferedWriter = null;
        try {
            bufferedWriter = new BufferedWriter(fileWriter);
            writeActor.accept(bufferedWriter);
        } finally {
            if (bufferedWriter != null) {
                try {
                    bufferedWriter.close();
                } catch (IOException e) {
                    System.out.println("Unexpected close failure: " + e.getMessage());
                    e.printStackTrace();
                }
            } else {
                try {
                    fileWriter.close();
                } catch (IOException e) {
                    System.out.println("Unexpected close failure: " + e.getMessage());
                    e.printStackTrace();
                }
            }
        }
    }

    public void mainLoop(Writer writer) {
        for (int loopNo = 0; loopNo < 10; loopNo++) {
            try {
                writer.write("Loop [ " + Integer.toString(loopNo) + " ]\n");
            } catch (IOException e) {
                System.out.println("Unexpected write failure: " + e.getMessage());
                e.printStackTrace();
                return;
            }
        }
    }
}

Where to use Thread interrupt

I have some old code I am working with, and I'm not too experienced with threads (I mostly work on the front end). Anyway, this Thread.sleep is causing the thread to hang and I'm unsure what to do about it. I thought about using a counter and calling Thread.currentThread().interrupt(), but I'm unsure of where to put it or which thread it will interrupt. Here is an example of the dump. As you can see, the thread count is getting pretty high, at 1708.
Any advice?
"Thread-1708" prio=6 tid=0x2ceec400 nid=0x2018 waiting on condition
[0x36cdf000] java.lang.Thread.State: TIMED_WAITING (sleeping) at
java.lang.Thread.sleep(Native Method) Locked ownable synchronizers:
- None "Thread-1707" prio=6 tid=0x2d16b800 nid=0x215c waiting on condition [0x36c8f000] java.lang.Thread.State: TIMED_WAITING
(sleeping) at java.lang.Thread.sleep(Native Method) Locked ownable
synchronizers:
- None
@Override
public void run()
{
    Connection con = null;
    int i = 0;
    while (is_running)
    {
        try
        {
            con = ConnectionManager.getConnection();
            while (!stack.isEmpty())
            {
                COUNT++;
                String line = (String) stack.pop();
                getPartMfr(line);
                try
                {
                    if (this.mfr != null && !this.mfr.equals(EMPTY_STR))
                    {
                        lookupPart(con, line);
                    }
                }
                catch (SQLException e)
                {
                    e.printStackTrace();
                }
                if (COUNT % 1000 == 0)
                {
                    Log log = LogFactory.getLog(this.getClass());
                    log.info("Processing Count: " + COUNT);
                }
            }
        }
        catch (NamingException e)
        {
            e.printStackTrace();
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }
        finally
        {
            try
            {
                ConnectionManager.close(con);
            }
            catch (SQLException e)
            {
                e.printStackTrace();
            }
        }
        try {
            Thread.sleep(80);
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
    this.finished = true;
}
Here is where the run method is started; as you can see, it does set the flag to false, but I guess it is missing some threads?
HarrisWorker w[] = new HarrisWorker[WORKER_POOL_SIZE];
try
{
    for (int i = 0; i < w.length; i++)
    {
        w[i] = new HarrisWorker(pw);
        w[i].start();
    }
    pw.println(headers());
    File inputDir = new File(HARRIS_BASE);
    String files[] = inputDir.list();
    for (String file : files)
    {
        try
        {
            File f = new File(HARRIS_BASE + File.separator + file);
            if (f.isDirectory())
                continue;
            final String workFile = workDir + File.separator + file;
            f.renameTo(new File(workFile));
            FileReader fr = new FileReader(workFile);
            BufferedReader br = new BufferedReader(fr);
            String line = br.readLine();
            boolean firstLine = true;
            while (line != null)
            {
                if (firstLine)
                {
                    firstLine = false;
                    line = br.readLine();
                    continue;
                }
                if (line.startsWith(","))
                {
                    line = br.readLine();
                    continue;
                }
                // if(line.indexOf("103327-1") == -1)
                // {
                //     line = br.readLine();
                //     continue;
                // }
                HarrisWorker.stack.push(line);
                line = br.readLine();
            }
            br.close();
            fr.close();
            for (int i = 0; i < w.length; i++)
            {
                w[i].is_running = false;
                while (!w[i].finished)
                {
                    Thread.sleep(80);
                }
            }
            move2Processed(file, workFile);
            long etime = System.currentTimeMillis();
            System.out.println("UNIQUE PARTS TOTAL FOUND: " + HarrisWorker.getFoundCount() + " of " + HarrisWorker.getUniqueCount() + ", "
                    + (HarrisWorker.getFoundCount() / HarrisWorker.getUniqueCount()));
            System.out.println("Time: " + (etime - time));
        }
        catch (Exception e)
        {
            e.printStackTrace();
            File f = new File(workDir + File.separator + file);
            if (f.exists())
            {
                f.renameTo(new File(HARRIS_BASE + File.separator + ERROR + File.separator + file));
            }
        }
    }
}
As a direct answer to the question in your title - nowhere. There is nowhere in this code that needs a Thread.interrupt().
The fact that the thread name is Thread-1708 does not necessarily mean there are 1708 threads. One can choose arbitrary names for threads. I usually include the name of the executor or service in the thread name. Maybe 1600 are now long stopped and there are only around a hundred alive. Maybe this particular class starts naming at 1700 to distinguish from other uses.
1708 threads may not be a problem. If you have a multi-threaded server that is serving 2000 connections in parallel, then it is certainly to be expected that there are 2000 threads doing that, along with a bunch of other threads.
You have to understand why the sleep is there and what purpose it serves. It's not there to just hog memory for nothing.
Translating the code to "plaintext" (btw it can be greatly simplified by using try-with-resources to acquire and close the connection):
1. Acquire a connection
2. Use the connection to send (I guess) whatever is in the stack
3. When failed or finished - wait 80ms (THIS is your sleep)
4. If the run flag is still set - repeat from step 1
5. Finish the thread.
Now reading through this, it's obvious that it's not the sleep that's the problem. It's that the run flag is never set to false, and your thread just continues looping, even if it can't get the connection at all - it will simply spend most of its time waiting for the retry. In fact, even if you completely strip the sleep out (instead of interrupting it mid-way), all you will achieve is that the threads start using up more resources. Given that you have both a logger and you print straight to the console via printStackTrace, I would say that you have 2 problems:
Something is spawning threads and not stopping them afterwards (not setting their run flag to false when done)
You are likely getting exceptions when getting the Connection, but you never see them in the log.
It might be that the thread is supposed to set its own run flag (say, when the stack is drained), but you would have to decide that yourself - it depends on a lot of specifics.
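For illustration, a minimal, self-contained sketch of the shutdown pattern described above (the class and method names are made up, not the asker's): the worker drains a queue while a volatile flag is set, and the launcher clears the flag and then join()s the worker instead of polling a finished field.

import java.util.concurrent.ConcurrentLinkedQueue;

public class WorkerSketch extends Thread {

    // volatile so the flag written by the launcher thread is seen by the worker
    private volatile boolean running = true;
    private final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

    public void submit(String line) { queue.add(line); }

    public void shutdown() { running = false; }

    @Override
    public void run() {
        while (running || !queue.isEmpty()) {   // drain remaining work before exiting
            String line = queue.poll();
            if (line == null) {
                try {
                    Thread.sleep(80);           // idle back-off, same spirit as the original
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                continue;
            }
            System.out.println("processing " + line);   // stand-in for lookupPart(...)
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WorkerSketch w = new WorkerSketch();
        w.start();
        for (int i = 0; i < 10; i++) w.submit("line " + i);
        w.shutdown();   // the launcher is responsible for clearing the flag...
        w.join();       // ...and for waiting until the worker has actually finished
    }
}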
Not an answer, but some things you should know if you are writing code for a live, production system:
:-( Variable and method both have the same name, run. A better name for the variable might be keep_running. Or, change the sense of it so that you can write while (! time_to_shut_down) { ... }
:-( Thread.sleep(80) What is this for? It looks like a big red flag to me. You can never fix a concurrency bug by adding a sleep() call to your code. All you can do is make the bug less likely to happen in testing. That means, when the bug finally does bite, it will bite you in the production system.
:-( Your run() method is way too complicated (the keyword try appears four times). Break it up, please.
:-( Ignoring five different exceptions with catch (MumbleFoobarException e) { e.printStackTrace(); }. Most of those exceptions (though maybe not the InterruptedException) mean that something is wrong. Your program should do something more than just write a message to the standard output.
:-( Writing error messages to standard output. You should be calling log.error(...) so that your application can be configured to send the messages to someplace where somebody might actually see them.
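Not the original code, but a sketch of one way to get rid of both the sleep(80) and the busy-wait on finished: a BlockingQueue blocks the worker until work or an end-of-work marker arrives. The POISON_PILL marker is a made-up convention, not something from the question.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueWorkerSketch {

    private static final String POISON_PILL = "<<EOF>>";   // hypothetical end-of-work marker

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String line = queue.take();          // blocks, no sleep/poll loop needed
                    if (POISON_PILL.equals(line)) break; // clean shutdown signal
                    System.out.println("processing " + line);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();      // restore the flag and exit
            }
        });
        worker.start();

        for (int i = 0; i < 10; i++) queue.put("line " + i);
        queue.put(POISON_PILL);
        worker.join();                                   // wait for the worker, no busy-wait
    }
}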

What is the exact order of execution for try, catch and finally?

In this Java code,
import java.io.IOException;

public class Copy
{
    public static void main(String[] args)
    {
        if (args.length != 2)
        {
            System.err.println("usage: java Copy srcFile dstFile");
            return;
        }
        int fileHandleSrc = 0;
        int fileHandleDst = 1;
        try
        {
            fileHandleSrc = open(args[0]);
            fileHandleDst = create(args[1]);
            copy(fileHandleSrc, fileHandleDst);
        }
        catch (IOException ioe)
        {
            System.err.println("I/O error: " + ioe.getMessage());
            return;
        }
        finally
        {
            close(fileHandleSrc);
            close(fileHandleDst);
        }
    }

    static int open(String filename)
    {
        return 1; // Assume that filename is mapped to integer.
    }

    static int create(String filename)
    {
        return 2; // Assume that filename is mapped to integer.
    }

    static void close(int fileHandle)
    {
        System.out.println("closing file: " + fileHandle);
    }

    static void copy(int fileHandleSrc, int fileHandleDst) throws IOException
    {
        System.out.println("copying file " + fileHandleSrc + " to file " +
                fileHandleDst);
        if (Math.random() < 0.5)
            throw new IOException("unable to copy file");
        System.out.println("After exception");
    }
}
the output that I expect is
copying file 1 to file 2
I/O error: unable to copy file
closing file: 1
closing file: 2
However sometimes I get this expected output and at other times I get the following output:
copying file 1 to file 2
closing file: 1
closing file: 2
I/O error: unable to copy file
and sometimes even this output:
I/O error: unable to copy file
copying file 1 to file 2
closing file: 1
closing file: 2
Whether I get the first, second or third output seems to vary randomly from execution to execution. I found THIS POST that apparently talks about the same problem, but I still don't understand why I sometimes get output 1, 2 or 3. If I understand this code correctly, output 1 should be what I get every time (the exception occurs). How do I ensure that I get output 1 consistently, or at least be able to tell when I will get output 1 and when I will get output 2 or 3?
The issue is that you are writing some output to System.out and some to System.err. These are independent streams, with independent buffering. The timing of when they are flushed is, as far as I know, not specified.
The short of it is that when writing to different streams, you cannot use the order in which the output shows up to determine the order in which the calls to println() occurred. Note that the output to System.out always appears in the expected order.
As far as order of execution, the body of the try is executed first. If it throws an exception, the body of the appropriate catch clause is then executed. The finally block is always executed last.
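To make the execution order concrete, here is a tiny, self-contained demonstration that prints everything to a single stream; with one stream, the output is always try, then catch, then finally.

import java.io.IOException;

public class OrderDemo {
    public static void main(String[] args) {
        try {
            System.out.println("try");
            throw new IOException("unable to copy file");
        } catch (IOException e) {
            System.out.println("catch: " + e.getMessage());
        } finally {
            System.out.println("finally");   // always runs last
        }
    }
}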
First the try block executes; if it succeeds, the finally block then executes. If the try block fails, the catch executes and then the finally executes. Whatever happens, the finally block will execute.
But
if you call System.exit(0), the finally block is not executed.
The thing with exception handling using a try-catch block is that control goes into the try and, if there is any exception, it moves into the catch block. But control reaches the finally block every time it executes.
You are writing your error message to both stdout and stderr. They have different buffers, so there is no guarantee that the output you see will be in the same order as you created it, between the two output streams.
Since I can see no errors in your code (although the superfluous return; in your catch segment stuck in my craw a little bit), let me suggest that you write all of your messages to stderr, and see if the message order is a little more in line with what you were expecting.
You have one glitch in your example which I would remove. You are writing to both System.out and System.err and expecting your console to synchronize both streams correctly. To remove side effects I would just use one stream here.

Java socket Object memory leak

I have a memory leak problem with Java Socket object communication.
This is my send thread.
// create a new thread to send the packet
@Override
public synchronized void run() {
    if (!genericSocket.isConnected()) {
        if (logger.isEnabled())
            logger.logMessage(PFLogging.LEVEL_WARN, "Socket is close");
        return;
    }
    int retry = 0;
    boolean packetSent = false;
    synchronized (objWriter) {
        while ((retry < RETRY) && (!packetSent) && (genericSocket.isConnected())) {
            try {
                objWriter.writeObject(bean);
                objWriter.flush();
                // Try until the cache is reset and the memory is free
                /*
                boolean resetDone = false;
                while (!resetDone) {
                    try {
                        objWriter.reset();
                        resetDone = true;
                    } catch (IOException r) {
                        Thread.sleep(1);
                    }
                }
                */
                // No error and packet sent
                continuousError = 0;
                packetSent = true;
            } catch (Exception e) {
                continuousError++;
                if (logger.isEnabled())
                    logger.logMessage(PFLogging.LEVEL_ERROR, "Continuous Error [" + continuousError + "] sending message [" + e.getMessage() + "," + e.getCause() + "]");
                // control the number of continuous errors
                if (continuousError >= CONTINUOUS_ERROR) {
                    if (logger.isEnabled())
                        logger.logMessage(PFLogging.LEVEL_WARN, "I close the socket");
                    genericSocket.disconnect();
                }
                // next time is the time!
                retry++;
            }
        }
    }
}
When I send about one packet per millisecond, the cache grows and grows!
If I add the commented-out part, the cache stays clean, but when I need to send an async long message (about 3000 chars) I see that the other messages are lost!
Is there another way to clean the cache without resetting it?
ObjectOutputStream.reset() is not avoidable, as it is the only means of clearing the stream's local hash tables (you can refer to the Java source code for ObjectOutputStream for the details of what happens in reset()); otherwise you will eventually get an OutOfMemoryError.
But you can very well implement a function like
private void writeObject(Object obj, ObjectOutputStream oos) throws IOException
{
    synchronized (oos)
    {
        oos.writeObject(obj);
        oos.flush();
        oos.reset();
    }
}
However, you must ensure that all writes to the ObjectOutputStream happen through this method.
The only solution I found is, before starting a sending thread, to check whether the thread pool is empty and, in that case, reset the output stream.
I ran the software all night to check this.
Thanks all!
Matteo
I would use ObjectOutputStream.reset() periodically to clear the object cache for the stream.
You could even use it after sending every object. ;)
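For illustration, a minimal sketch of the "periodic" variant (the class name and the interval of 100 are made up): the stream is reset every N writes instead of after every one, trading a little memory for fewer full re-serialisations of repeated objects.

import java.io.IOException;
import java.io.ObjectOutputStream;

public class ResettingSender {

    private static final int RESET_EVERY = 100;   // hypothetical interval

    private final ObjectOutputStream oos;
    private int sinceReset = 0;

    public ResettingSender(ObjectOutputStream oos) {
        this.oos = oos;
    }

    // Periodic reset keeps the stream's back-reference table (and the matching
    // one on the receiving side) from growing without bound.
    public synchronized void send(Object bean) throws IOException {
        oos.writeObject(bean);
        oos.flush();
        if (++sinceReset >= RESET_EVERY) {
            oos.reset();
            sinceReset = 0;
        }
    }
}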
Ciao :),
after ObjectOutputStream.flush() you can safely use ObjectOutputStream.reset(),
unless you are using objWriter somewhere in another thread without the synchronized (objWriter) statement.
In that case the best way IMHO is to use objWriter in a single thread that sends objects taken from a synchronized queue (see the Queue subclasses, http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Queue.html, for example http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ConcurrentLinkedQueue.html) that is filled by the other threads (remember to use object.clone(): because the object itself isn't synchronized, it can be modified by another thread while you are writing it or while it is in the queue; if you clone it, your clone will be a safe copy).
That way you don't need a synchronized statement, because the data flow between the threads and the ObjectOutputStream is already synchronized, and you will be less error-prone.
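For illustration, a minimal sketch of that single-writer arrangement (all names here are hypothetical, not the asker's classes): producers only enqueue beans, and one thread owns the ObjectOutputStream and performs writeObject/flush/reset itself, so no external synchronization on the stream is needed.

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SenderThread extends Thread {

    private final ObjectOutputStream oos;
    private final BlockingQueue<Serializable> queue = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    public SenderThread(ObjectOutputStream oos) {
        this.oos = oos;
    }

    // Producers only enqueue; per the answer above, pass a clone if the bean
    // may still be mutated after it has been handed over.
    public void send(Serializable bean) {
        queue.add(bean);
    }

    public void shutdown() {
        running = false;
        this.interrupt();   // wake the thread up if it is blocked in take()
    }

    @Override
    public void run() {
        try {
            while (running) {
                Serializable bean = queue.take();   // blocks until something is queued
                oos.writeObject(bean);
                oos.flush();
                oos.reset();                        // clear the stream's back-reference table
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();     // shutdown() was called
        } catch (IOException e) {
            e.printStackTrace();                    // real code would disconnect/reconnect here
        }
    }
}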

Webapp that runs process won't complete

So I've got a couple of shell scripts that run on a server. They do some time intensive data gathering and then complete. They seem to work fine when I run them from the server. I'm now trying to automate these with a Spring webapp. Everything is running and I can run the scripts through ProcessBuilder, but for some reason, when the scripts are run through ProcessBuilder they only get about halfway and then just stop responding.
I'm really hoping someone will have some thoughts on why this might be. Unfortunately due to the work I can't really post much in the way of code. I can post the webapp code that runs the processes, which I'll do down below, but I can't post the scripts. If anyone has some thoughts please chime in. Thanks.
@Entity
public class Job implements Runnable {

    @Id @GeneratedValue
    private Long id;

    //getters and setters

    @Override
    public void run() {
        Process p = null;
        try {
            BufferedWriter bw = new BufferedWriter(new FileWriter("/opt/condor/bin/datafile"));
            bw.write(this.getName());
            bw.close();
            p = new ProcessBuilder("/opt/condor/bin/scripts/create-filter.sh").start();
            jobHelper(p);
            List<String> dates = datesBetween();
            status = "Running Master";
            for (String temp : dates) {
                String[] splitDate = temp.split("-");
                String tmpYear = splitDate[0];
                String tmpMonth = splitDate[1];
                String tmpDay = splitDate[2];
                log.info("Running Master script: master.sh " + this.getCustomer() + ", " + this.getProject() + ", " + tmpYear + ", " + tmpMonth + ", " + tmpDay);
                p = new ProcessBuilder("/opt/condor/bin/scripts/master.sh", this.getCustomer(), this.getProject(), tmpYear, tmpMonth, tmpDay).start();
                log.info("Entering job helper");
                jobHelper(p);
                log.info("exited job helper");
            }
            status = "Finished Master";
            log.info("Finished Master");
        } catch (IOException ioe) {
            log.error("IO Error: ", ioe);
            ioe.printStackTrace();
        }
        log.info("Done running script");
        endTime = Long.toString(System.currentTimeMillis());
        status = "Ended";
        JobManager.FinishJob(this);
    }

    private boolean jobHelper(Process p) {
        log.info("inside job helper");
        BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        try {
            while ((line = br.readLine()) != null) {
                log.info(line);
                if (line.contains("Uh oh!"))
                    return true;
            }
            boolean running = true;
            while (running) {
                log.info("waiting...");
                p.waitFor();
                log.info("done waiting");
                running = false;
            }
        } catch (IOException e) {
            log.error("IO Error: ", e);
            e.printStackTrace();
        } catch (InterruptedException e) {
            log.error("Interrupted Exception: ", e);
            e.printStackTrace();
            p.destroy();
        }
        return false;
    }
}
I apologize for any syntactical errors you see; the code does compile and run, so please just ignore them. I was copying and pasting the relevant bits of code and may have messed something up in that regard.
EDIT
I added some log statements in different places and can see that the code is entering my helper, which is why it is displaying output, but at some point it just stops. It doesn't ever seem to hit the log statements surrounding the p.waitFor() call. Clearly I'm not doing something right, which is understandable since threads are a huge weak point of mine. I'm guessing maybe it is getting hung up displaying stuff and I'm then getting a deadlock situation, but I really don't understand where or how to fix it. Can anyone let me know what I'm screwing up and what I need to do to fix it? I could really use an example as well, thanks.
I cannot help much without more context on why your process is hanging. However, your entity should not be Runnable. Extract this into a service; you can store the process id in your entity if you need to map it back to a process.
Well, after more research it seems that the problem was related to me not properly consuming all the data from the input and error streams. Apparently you're supposed to have a separate thread for each stream, which I still don't fully understand. I added a line that calls the redirectErrorStream() method on the ProcessBuilder object and that seems to have helped. I'm still not sure it won't hang again when processing larger amounts of data, as I've seen a bunch of talk about all the streams needing to be read in their own threads, as I mentioned, but I'm not really sure how I'm supposed to do that. It's very hard to find a good, concise example of how to use ProcessBuilder. However, this seems to have fixed the problem I was having.
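For illustration, a minimal, self-contained sketch of that arrangement (the script path is taken from the question and is only illustrative): redirectErrorStream(true) merges stderr into stdout, the single merged stream is drained to end-of-stream, and only then is waitFor() called, so the child can never block on a full, unread error pipe.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RunScriptSketch {

    public static void main(String[] args) throws IOException, InterruptedException {
        // Merging stderr into stdout means one reader loop drains everything.
        ProcessBuilder pb = new ProcessBuilder("/opt/condor/bin/scripts/master.sh");
        pb.redirectErrorStream(true);
        Process p = pb.start();

        try (BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);          // or log.info(line)
            }
        }
        int exitCode = p.waitFor();                // only reached after the output is drained
        System.out.println("exit code: " + exitCode);
    }
}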
