NullPointerException in a thread - java

I'm writing a Java application that decodes TCAP frames read from a text file, then inserts the decoded data into an Oracle database. At the beginning, decoding and insertion work perfectly, but after a certain number of frames have been decoded and inserted, the thread that handles the database insertion starts triggering these errors:
" java.lang.OutOfMemoryError: unable to create new native thread "
" Exception in thread "Thread-465" java.lang.NullPointerException "
Code extract:
public void run() {
    Conn_BD connexion = new Conn_BD("thin:@localhost:1521:XE", "SYSTEM", "SYSTEM");
    java.sql.Connection cn = connexion.connect();
    try {
        Statement instruction = cn.createStatement();
        instruction.executeUpdate("update tcapBegin set " + trame + "='" + trame_val
                + "' where " + message + " like '" + trameId + "'");
        instruction.close();
        cn.close();
    } catch (SQLException e) {
        System.out.print(e);
    }
}
Does anyone have an idea to resolve this problem?

Instead of instantiating a thread per insert (or whatever other action you perform), create a queue of "tasks"; each task represents one insert a thread should perform.
With such a queue in place, you need one thread that "pushes" tasks into the queue, and worker threads that perform the actual tasks by "pulling" them out of the queue and executing them.
Working this way, you won't need a thread per task; instead, a small set of general-purpose threads will each take a task from the queue, execute it, and return to the queue for more work.
P.S. When you reuse your threads, don't create a connection in the run method; you don't have to recreate the connection each time.
Read about Executors and Thread Pooling
See Producer Consumer
See DB Connection pooling
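A minimal sketch of that setup using java.util.concurrent, where the ExecutorService is the worker pool and its internal queue holds the tasks. All names here (InsertPool, submitInsert, doInsert) are illustrative, and doInsert stands in for the real JDBC update:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class InsertPool {
    // A fixed pool of 4 reusable worker threads; tasks queue up inside it.
    private final ExecutorService workers = Executors.newFixedThreadPool(4);
    final AtomicInteger completed = new AtomicInteger();

    // One task per decoded frame; the pool reuses its threads instead of
    // creating one thread per insert.
    public void submitInsert(final String trameId, final String trameVal) {
        workers.execute(new Runnable() {
            public void run() {
                doInsert(trameId, trameVal);
                completed.incrementAndGet();
            }
        });
    }

    // Placeholder for the real JDBC update, which would run on a pooled connection.
    void doInsert(String trameId, String trameVal) {
        // instruction.executeUpdate("update tcapBegin set ...");
    }

    // Stop accepting tasks and wait for the queued ones to finish.
    public void shutdown() throws InterruptedException {
        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

With this shape, submitting 10,000 inserts still only ever uses 4 threads, which avoids the "unable to create new native thread" error entirely.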

You have this statement at the beginning of the thread:
Conn_BD connexion = new Conn_BD("thin:@localhost:1521:XE", "SYSTEM", "SYSTEM");
It seems you are creating a new connection every time a new thread is created. Creating a connection and then executing a statement takes time, so by the time your first connection gets closed, so many other connections have been created that you cannot create any more.
A better option would be to use one static reference for the connection.
private static Conn_BD connexion = new Conn_BD("thin:@localhost:1521:XE", "SYSTEM", "SYSTEM");
private static java.sql.Connection cn = connexion.connect();

public void run() {
    try {
        Statement instruction = cn.createStatement();
        // code here
        instruction.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
Once all the threads are done executing, close the connection.

Multiple DB connections using a connection pool vs Single connection with multiple statements

I am developing a server that works with MySQL, and I have been trying to understand the advantage of a connection pool versus a single connection that is kept open and passed down to the different methods throughout the application.
The idea of working with a connection pool is understood; however, there are scenarios where the pool could create a bottleneck that would not exist without it.
Let me explain with code.
Say the following method is called connectionPoolSize + 1 times simultaneously (e.g. 10), so that we have exhausted the connections in the pool; the last attempt will fail, since no connection is available:
public void getData() {
    Connection con = null;
    Statement s = null;
    ResultSet rs = null;
    try {
        con = connectionPool.getConnection();
        s = con.createStatement();
        rs = s.executeQuery("SELECT * FROM MY_TABLE;");
        // Some long process that takes a while....
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    } finally {
        rs.close();
        s.close();
        con.close();
    }
}
However, if we use a single connection that is kept open and shared by all methods, no method has to wait for a connection to be returned to the pool (which, as shown above, could take some time).
E.g., calling this method 10 times would work:
public void getData(Connection con) {
    Statement s = null;
    ResultSet rs = null;
    try {
        s = con.createStatement();
        rs = s.executeQuery("SELECT * FROM MY_TABLE;");
        // Some long process that takes a while....
        // But this time we don't care that this will take time,
        // since nobody is waiting for us to release the connection
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    } finally {
        rs.close();
        s.close();
    }
}
Obviously the statements and result sets will still be kept open until the method is finished, but this doesn't affect the connection itself, so it doesn't hold back any other attempts to use this connection.
I assume there is some further insight that I am missing; I understand the standard is to work with connection pools, so how do you handle these issues?
It depends on your use case. Suppose you are building a web application that is used by multiple users simultaneously. If you have a single connection, the queries from all user threads are queued, and the single DB connection processes them one by one. So in a multi-user system (most normal cases), a single DB connection is a bottleneck and won't work. Additionally, you need to take care of thread safety when you are writing and committing data to the DB.
If you need truly simultaneous query execution in the DB, then you should go with a connection pool. Different user threads can then use different connections and execute queries in parallel.
Connection pools are used to keep a number of opened connections ready for use and to eliminate the need to open a new connection each time it is required.
If your application is single threaded then you probably don’t need a pool and can use a single connection instead.
Even though sharing a connection between multiple threads is permitted there are some pitfalls of this approach. Here is a description for Java DB: https://docs.oracle.com/javadb/10.8.3.0/devguide/cdevconcepts89498.html. You should check if this is also the case for MySQL.
In many cases it is easier to have an individual connection for each thread.
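The borrow/return cycle every pool implements can be sketched with a tiny generic pool built on a BlockingQueue; a real JDBC pool (DBCP, HikariCP, c3p0) adds connection validation, timeouts, and repair on top of the same idea. Pool, borrow, and release are invented names for illustration, not a library API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class Pool<T> {
    // Idle resources wait here; a full pool is just a full queue.
    private final BlockingQueue<T> idle = new LinkedBlockingQueue<T>();

    public Pool(Iterable<T> resources) {
        for (T r : resources) idle.add(r);
    }

    // Blocks until a resource is free, like DataSource.getConnection()
    // does when the pool is exhausted; gives up after the timeout.
    public T borrow(long timeout, TimeUnit unit) throws InterruptedException {
        T r = idle.poll(timeout, unit);
        if (r == null) throw new IllegalStateException("pool exhausted");
        return r;
    }

    // Equivalent of Connection.close() on a pooled connection: the
    // resource goes back into the pool instead of being destroyed.
    public void release(T resource) {
        idle.add(resource);
    }
}
```

This also shows why the "exhausted pool" scenario in the question is a sizing problem, not a design flaw: the eleventh caller blocks (or times out) exactly until one of the ten borrowers releases.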

Sharing a jdbc "Connection" across threads

I have a main thread that runs periodically. It opens a connection with setAutoCommit(false) and passes it by reference to a few child threads that perform various database read/write operations. A reasonably large number of operations are performed in the child threads. After all the child threads have completed their DB operations, the main thread commits the transaction on the opened connection. Note that I run the threads inside an ExecutorService. My question: is it advisable to share a connection across threads? If yes, check whether the code below implements it correctly. If no, what are other ways to perform a transaction in a multi-threaded scenario? Comments/advice/new ideas are welcome. Pseudo code:
Connection con = getPrimaryDatabaseConnection();
// let me decide whether to commit or rollback
con.setAutoCommit(false);

ExecutorService executorService = getExecutor();
// the connection is passed to each job's constructor/set-method;
// the jobs use the provided connection for their DB operations
Callable[] jobs = getJobs(con);
List futures = new ArrayList();
// note: generics are omitted just to keep this simple
for (Callable job : jobs) {
    futures.add(executorService.submit(job));
}
executorService.shutdown();
// wait till the jobs complete
executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);

List results = new ArrayList();
for (Future future : futures) {
    try {
        results.add(future.get());
    } catch (Exception e) {
        try {
            // a job has failed: roll back the transaction and rethrow
            con.rollback();
            results = null;
            throw new SomeException(e);
        } finally {
            try {
                con.close();
            } catch (Exception ignore) { /* nothing to do */ }
        }
    }
}
// all the jobs completed successfully!
try {
    // some other checks
    con.commit();
    return results;
} finally {
    try {
        con.close();
    } catch (Exception ignore) { /* nothing to do */ }
}
I wouldn't recommend sharing a connection between threads: operations on a connection are quite slow, and the overall performance of your application may suffer.
I would rather suggest using a connection pool (e.g. Apache Commons DBCP) and giving each thread its own connection.
You could create a proxy class that holds the JDBC connection and gives synchronized access to it. The threads should never access the connection directly.
Depending on the operations you provide, you could use synchronized methods, or lock on objects if the proxy needs to stay locked until it leaves a certain state.
For those not familiar with the proxy design pattern, here is the wiki article. The basic idea is that the proxy instance hides another object but offers the same functionality.
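The pattern itself is independent of JDBC, so a sketch with an arbitrary shared resource shows the shape (SynchronizedLog is an invented illustration, not a library class): the proxy owns the real object, exposes the same operations, and serializes access with synchronized. For a Connection you would wrap the methods you actually use (createStatement, commit, ...) the same way.

```java
// Proxy that serializes all access to a shared resource. Threads only
// ever see the proxy, never the underlying StringBuilder.
public class SynchronizedLog {
    private final StringBuilder target = new StringBuilder(); // the "real" object

    // Each entry appends the line plus a newline, atomically.
    public synchronized void append(String line) {
        target.append(line).append('\n');
    }

    public synchronized int length() {
        return target.length();
    }
}
```

Without the synchronized keyword, concurrent appends to the shared StringBuilder could interleave or be lost; with the proxy, each operation completes atomically.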
In this case, consider creating a separate connection for each worker. If any one worker fails, roll back all the connections; if all pass, commit all of them.
If you're going to have hundreds of workers, then you'll need to provide synchronized access to the Connection objects, or use a connection pool as @mike and @NKukhar suggested.

Threads and exception handling

I have two Linux machines. On one machine, a thread starts up an executable, and an internal thread reads the data from the executable and populates the database with its values; I use myBatis to persist the data. The thread then continuously checks whether the process and the internal thread are up and running. On the other machine I have the database, connected remotely, which is redeployed every night; during that build the database is dropped and recreated. Because the table is not available during the build, this exception:
org.apache.ibatis.exceptions.PersistenceException
### Error updating database. Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException:
Table 'updates_table' doesn't exist
is thrown. The thread that continuously checks the process and the internal thread is then killed, and it stops checking.
Can anyone tell me how to keep that thread from being killed, so that once the DB is available again it retries and repopulates the table? While the DB is unavailable, it should keep trying until the DB is available.
Thank you.
Consider switching to a system where you submit jobs to an Executor from the thread pulling stuff off of the process:
public class MyThread extends Thread {
    private final InputStream processStream;
    private final Executor executor = Executors.newSingleThreadExecutor();

    public MyThread(InputStream processStream) {
        this.processStream = processStream;
    }

    @Override
    public void run() {
        while ([processStream has stuff]) {
            final Object obj = getSomethingFromStream(processStream);
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    // Do database stuff with obj
                }
            });
        }
    }

    private static Object getSomethingFromStream(InputStream stream) {
        // return something off the stream
    }
}
If an exception is thrown by your Runnable, it will be logged, but the executor won't be stopped; it will just continue to the next job in the queue. Also note that this uses a single-threaded executor, so everything submitted is executed one at a time, in submission order. If you want concurrent execution, use Executors.newFixedThreadPool(int) or Executors.newCachedThreadPool(). Note that this answers how to keep your thread alive; if you want to resubmit a failed job for re-execution, change its run method to:
@Override
public void run() {
    try {
        // Do database stuff with obj
    } catch (PersistenceException ex) {
        // Try again
        executor.execute(this);
    }
}
You can add logic to this to tailor when it will try again on an exception.
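For instance, retrying immediately will spin while the database is being rebuilt, so a small delay between attempts is usually enough. A minimal retry helper along those lines (Retry and untilSuccess are illustrative names, not from any library):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Runs the task until it succeeds, sleeping between failed attempts.
    // Returns the number of attempts that were needed.
    public static <T> int untilSuccess(Callable<T> task, long delayMillis)
            throws InterruptedException {
        int attempts = 0;
        while (true) {
            attempts++;
            try {
                task.call();
                return attempts;
            } catch (Exception e) {
                // e.g. PersistenceException while the table is being rebuilt:
                // back off instead of hammering the server
                Thread.sleep(delayMillis);
            }
        }
    }
}
```

In the question's scenario, the Callable would wrap the myBatis update, and the delay could be made progressively longer (exponential backoff) if the nightly rebuild takes a while.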
At a high level, you can use the Observer pattern (java.util.Observable, built into the JDK) so that your code is notified during the maintenance window. You can then re-establish the connection by spawning a new thread.
Use this construct inside the thread that works with the DB:
try {
    // code to update db
} catch (MySQLSyntaxErrorException exception) {
    // handle db exception
}

Which is a suitable architecture?

I have tested a socket-connection program designed so that each socket connection runs in its own thread and enqueues messages, while a separate DB-processor thread picks them off the queue and runs a number of SQL statements. I notice the bottleneck is the DB processing. Is what I am doing the right architecture, or should I change or improve my design flow?
The requirement is to capture data via socket connections and run through a db process then store it accordingly.
public class cServer
{
    private LinkedBlockingQueue<String> databaseQueue = new LinkedBlockingQueue<String>();

    class ConnectionHandler implements Runnable {
        private final Socket receivedSocketConn1;

        ConnectionHandler(Socket receivedSocketConn1) {
            this.receivedSocketConn1 = receivedSocketConn1;
        }

        // gets data from an inbound connection and queues it for database update
        public void run() {
            databaseQueue.add(message); // put to db queue
        }
    }

    class DatabaseProcessor implements Runnable {
        public void run() {
            // open database connection
            createConnection();
            while (true) {
                // keep taking messages added to the queue by ConnectionHandler;
                // here I have a number of queries to run: selects, inserts and updates
                message = databaseQueue.take();
            }
        }

        void createConnection() {
            System.out.println("Create Connection");
            connCreated = new Date();
            try {
                dbconn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test1?" + "user=user1&password=*******");
                dbconn.setAutoCommit(false);
            } catch (Throwable ex) {
                ex.printStackTrace(System.out);
            }
        }
    }

    public void main()
    {
        new Thread(new DatabaseProcessor()).start(); // starts the DatabaseProcessor
        try
        {
            final ServerSocket serverSocketConn = new ServerSocket(8000);
            while (true) {
                try {
                    Socket socketConn1 = serverSocketConn.accept();
                    new Thread(new ConnectionHandler(socketConn1)).start();
                } catch (Exception e) {
                    e.printStackTrace(System.out);
                }
            }
        }
        catch (Exception e) {
            e.printStackTrace(System.out);
        }
    }
}
It's hard (read: impossible) to judge an architecture without the requirements, so I will just make some up:
Maximum throughput:
Don't use a database; write to a flat file, possibly stored on something fast like a solid-state disc.
Guaranteed persistence (if the user gets an answer that isn't an error, the data must be stored securely):
Make the whole thing single-threaded and save everything in a database with redundant discs. Make sure you have a competent DBA who knows about backup and recovery, and test those at regular intervals.
Minimum time for finishing the user request:
Your approach seems reasonable.
Minimum time for finishing the user request + maximum throughput + good persistence (whatever that means):
Your approach seems good. You might plan for multiple threads processing the DB requests, but test how much (more) throughput you actually get and where precisely the bottleneck is (network, DB CPU, IO, lock contention, ...). Make sure you don't introduce bugs by using a concurrent approach.
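Scaling the consumer side of the design is mostly a matter of starting several DatabaseProcessor-style workers on the same LinkedBlockingQueue; a poison-pill message per worker is one common way to shut them down cleanly. A sketch under those assumptions (MultiConsumer and its names are invented; the counter stands in for the real SQL work, and each worker would own its own DB connection):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class MultiConsumer {
    // Sentinel message telling one worker to exit.
    static final String POISON = "__STOP__";
    final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
    final AtomicInteger processed = new AtomicInteger();

    // Start n workers draining the same queue.
    public Thread[] start(int n) {
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            String msg = queue.take();
                            if (POISON.equals(msg)) return; // clean shutdown
                            processed.incrementAndGet();    // placeholder for SQL work
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            workers[i].start();
        }
        return workers;
    }
}
```

Enqueuing one poison pill per worker after the real messages guarantees every message is processed before the workers exit, since the queue is FIFO.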
Generally, your architecture sounds correct. You need to make sure that your two threads are synchronised correctly when reading/writing from/to the queue.
I am not sure what you mean by "bottleneck that the db processing". If DB processing takes a long time and you end up with a long queue, there's not much you can do apart from having multiple threads perform the DB processing (assuming the processing can be parallelised, of course) or doing some performance tuning in the DB thread.
If you post some specific code that you believe is causing the problem, we can have another look.
You don't need two threads for this simple task. Just read the socket and execute the statements.

Is this use of PreparedStatements in a Thread in Java correct?

I'm still an undergrad just working part time, so I'm always trying to be aware of better ways to do things. Recently I had to write a program for work where the main thread spawns "task" threads (one for each DB "task" record) which perform some operations and then update the record to say they have finished. Therefore I needed a database connection object and PreparedStatement objects in, or available to, the ThreadedTask objects.
This is roughly what I ended up writing. Is creating a PreparedStatement object per thread a waste? I thought static PreparedStatements could create race conditions...
Thread A: stmt.setInt();
Thread B: stmt.setInt();
Thread A: stmt.execute();
Thread B: stmt.execute();
A's version never gets executed...
Is this thread safe? Is creating and destroying PreparedStatement objects that are always the same not a huge waste?
public class ThreadedTask implements Runnable {
    private final PreparedStatement taskCompleteStmt;

    public ThreadedTask() {
        //...
        taskCompleteStmt = Main.db.prepareStatement(...);
    }

    public void run() {
        //...
        taskCompleteStmt.executeUpdate();
    }
}

public class Main {
    public static final Connection db = DriverManager.getConnection(...);
}
I believe it is not a good idea to share database connections (and prepared statements) between threads. JDBC does not require connections to be thread-safe, and I would expect most drivers to not be.
Give every thread its own connection (or synchronize on the connection for every query, but that probably defeats the purpose of having multiple threads).
Is creating and destroying PreparedStatement objects that are always the same not a huge waste?
Not really. Most of the work happens on the server, and will be cached and re-used there if you use the same SQL statement. Some JDBC drivers also support statement caching, so that even the client-side statement handle can be re-used.
You could see substantial improvement by using batched queries instead of (or in addition to) multiple threads, though. Prepare the query once, and run it for a lot of data in a single big batch.
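Since executeBatch() needs a live connection, only the client-side chunking is shown as runnable code here; the prepare-once / addBatch / flush-per-chunk pattern is sketched in the comment. Batching and partition are illustrative helper names, not JDBC API:

```java
import java.util.ArrayList;
import java.util.List;

public class Batching {
    // Splits the work into fixed-size chunks; each chunk becomes one
    // executeBatch() round trip in real JDBC code, e.g.:
    //
    //   PreparedStatement ps = con.prepareStatement("UPDATE tasks SET done = 1 WHERE id = ?");
    //   for (List<Integer> chunk : Batching.partition(ids, 500)) {
    //       for (int id : chunk) { ps.setInt(1, id); ps.addBatch(); }
    //       ps.executeBatch(); // one round trip for the whole chunk
    //   }
    public static <T> List<List<T>> partition(List<T> items, int size) {
        List<List<T>> chunks = new ArrayList<List<T>>();
        for (int i = 0; i < items.size(); i += size) {
            chunks.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return chunks;
    }
}
```

Chunking matters because an unbounded batch can exhaust client or server memory; a few hundred to a few thousand rows per executeBatch() is a common starting point.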
The thread safety is not the issue here. Everything looks syntactically and functionally fine, and it should work for about half an hour. Leaking of resources is, however, the real issue: the application will crash after about half an hour because you never close the resources after use. The database will, in turn, sooner or later close the connection itself so that it can claim it back.
That said, you don't need to worry about caching of prepared statements; the JDBC driver and the DB take care of that. Rather, worry about resource leaking and make your JDBC code as solid as possible.
public class ThreadedTask implements Runnable {
    public void run() {
        Connection connection = null;
        PreparedStatement statement = null;
        try {
            connection = DriverManager.getConnection(url);
            statement = connection.prepareStatement(sql);
            // ...
        } catch (SQLException e) {
            // Handle?
        } finally {
            if (statement != null) try { statement.close(); } catch (SQLException logOrIgnore) {}
            if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {}
        }
    }
}
To improve connection performance, make use of a connection pool like c3p0 (this, by the way, does not mean you can change the way you write the JDBC code: always acquire and close the resources in the shortest possible scope, in a try-finally block).
You're best to use a connection pool and have each thread request a connection from the pool. Create your statements on the connection you're handed, remembering to close it and thus release it back to the pool when you're done. The benefit of the pool is that you can easily increase the number of available connections should thread concurrency become an issue.
