Synchronized List keeps locking permanently - java

I have created a server with ServerSocket, and every accepted connection is put into a List of Connection. The list is synchronized (Collections.synchronizedList(new ArrayList<Connection>())), and I access it with manual synchronization, such as:
synchronized (getConnections()) {
    Iterator<Connection> it = getConnections().iterator();
    while (it.hasNext()) {
        try {
            if (!it.next().keepAlive()) {
                it.remove();
            }
        } catch (Exception error) {
            ErrorLogger.error(error);
        }
    }
}
BUT it locks up somewhere at random times. Once it locks, it remains locked forever.
I have created a logging system which logs a message before and after every synchronized block used for this list (COMPLETE means the block was entered and exited). Pastebin of the logs with descriptions.
Here is my code for the ADDED part seen in the logs:
DebugFile.write("ADDED connection");
// This is used to dump empty connections
synchronized (getConnections()) { // Synchronized
    getConnections().add(mini); // Adding to List
    Iterator<Connection> it = getConnections().iterator();
    while (it.hasNext()) {
        Connection con = it.next();
        if (con.getSocket() == null || !con.getSocket().isConnected() ||
                (con.getBungeeServer().equals("Unknown") && System.currentTimeMillis() > con.getCreateTime() + 30000)) {
            System.out.println("[MoltresBungee] Connection disconnecting because they have not identified within 30 seconds.");
            con.stop(false);
            it.remove(); // Removes expired sessions
        }
    }
    Bungee.instance.getLogger().info("MoltresBungee - Active connections: " + getConnections().size());
}
DebugFile.write("ADDED connection - COMPLETE"); // Logs complete
Here is the code for REMOVE ALL:
DebugFile.write("REMOVE ALL : " + name);
synchronized (getConnections()) {
    Iterator<Connection> it = getConnections().iterator();
    while (it.hasNext()) {
        Connection con = it.next();
        if (con == null || con.getSocket() == null || con.getSocket().isClosed() || con.getBungeeServer().equals(name)) {
            DebugFile.write("REMOVE ALL : " + name + " removing: " + con.getBungeeServer());
            if (con != null && con.getSocket() != null) {
                try {
                    con.getSocket().close();
                } catch (Exception e) {
                    ErrorLogger.error(e);
                }
            }
            it.remove();
        }
    }
}
DebugFile.write("REMOVE ALL : " + name + " - COMPLETE");
The adding code is supposed to add a connection to this list, then remove any entries that have lost their connection or never completed the handshake.
The REMOVE ALL code is supposed to remove anything from the list that has the same name or is already closed.
I use synchronized(getConnections()) blocks like the ones above in about ten more places.
If needed, here is the getConnections() method:
public synchronized List<Connection> getConnections() {
    return connections;
}
Why is this locking, and how do I fix it?
EDIT:
Also, clients attempting to connect retry over and over until connected, and it seems that all of the previous attempts are still in the list when the server starts up. This is also when the list locks.
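One pattern worth considering here (a hedged sketch, not a diagnosis of the exact hang): if keepAlive() or stop() does blocking socket I/O, it runs while the list lock is held, and any other thread touching the list waits for it. Snapshotting the list inside a short synchronized block and doing the blocking work outside avoids that. The generic helper below (names are illustrative, not from the question) shows the shape:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Predicate;

public class SnapshotPrune {
    // Copy the synchronized list under its lock, run the (possibly blocking)
    // liveness check outside the lock, then remove dead entries under the lock.
    static <T> int prune(List<T> syncList, Predicate<T> isDead) {
        List<T> snapshot;
        synchronized (syncList) {
            snapshot = new ArrayList<>(syncList);   // short critical section
        }
        List<T> dead = new ArrayList<>();
        for (T item : snapshot) {                   // blocking work happens lock-free
            if (isDead.test(item)) dead.add(item);
        }
        synchronized (syncList) {
            syncList.removeAll(dead);               // short critical section again
        }
        return dead.size();
    }

    public static void main(String[] args) {
        List<Integer> list = Collections.synchronizedList(new ArrayList<>(List.of(1, 2, 3, 4)));
        int removed = prune(list, n -> n % 2 == 0); // pretend even numbers are "dead"
        System.out.println(removed + " " + list);   // prints "2 [1, 3]"
    }
}
```

The trade-off is that an entry can be checked while another thread removes it, so the check must tolerate already-closed connections.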

Related

How to know that a rollback has been executed? [@Transactional]

I have the following case:
I'm iterating over my Affiliate entities, and for each of them I need to persist and update data in one unique transaction. So I have a service with a method annotated with Spring's @Transactional annotation (where data is created and updated), but how can I tell that the transaction has been rolled back for an affiliate?
I would like to know that, for a particular Affiliate, the transaction was rolled back, and retrieve a custom error code from my service.
This was my service before using Spring:
public void savePostingPaymentDetails(List<Posting> postingsToUpdate, List<PaymentPostingDetail> detailsToInsert, Payment payment) {
    logger.info("DB ACCESS : INSERT PAYMENT DETAILS & UPDATE POSTINGS");
    long begin = System.nanoTime();
    this.em.getTransaction().begin();
    try {
        // 1 - Save Payments
        this.em.persist(payment);
        // 2 - Save Details
        for (PaymentPostingDetail ppd : detailsToInsert) {
            this.em.persist(ppd);
        }
        // 3 - Update Postings
        for (Posting p : postingsToUpdate) {
            if (p.getSignature() != null) {
                p.getSignature().setModification("withholding-tax.pay", new Date());
            } else {
                logger.error("The Posting with id = " + p.getIdentifier() + " has no PersistenceSignature ?!");
            }
            this.em.merge(p);
        }
    } catch (Exception e) {
        logger.error("Unexpected error on saving/updating the DB.", e);
        this.em.getTransaction().rollback();
        logger.info("RollBack done.");
        e.printStackTrace();
        System.exit(JobStatus.ABNORMAL_END_OF_EXECUTION_ERROR.getCode());
    }
    this.em.getTransaction().commit();
    logger.info("Details inserted & Postings updated.");
    long end = System.nanoTime();
    logger.info("Execution time = " + ((end - begin) / 1000000) + " milliseconds.");
    logger.info("----------------------------------------------------------");
}
Now I have this:
@Transactional
public void savePostingPaymentDetails(List<Posting> postings, List<PaymentPostingDetail> paymentDetails, Payment payment) {
    logger.info("DB ACCESS : INSERT PAYMENT DETAILS & UPDATE POSTINGS");
    long begin = System.nanoTime();
    this.paymentRepository.save(payment);
    this.ppdRepository.save(paymentDetails);
    for (Posting p : postings) {
        if (p.getSignature() != null) {
            p.getSignature().setModifiedAt(LocalDate.now());
            p.getSignature().setModifiedBy(PayCopyrightWithholdingTaxProcess.SIGNATURE);
        } else {
            p.setSignature(new PersistenceSignature(LocalDate.now(), PayCopyrightWithholdingTaxProcess.SIGNATURE));
        }
        this.postingRepository.save(p);
    }
    long end = System.nanoTime();
    logger.info("Execution time = " + ((end - begin) / 1000000) + " milliseconds.");
    logger.info("----------------------------------------------------------");
}
But how can I return, let's say, a special integer (instead of calling System.exit()) if the transaction has been rolled back?
There is something called user-managed transactions (UMT) and container-managed transactions (CMT).
When you use @Transactional you are actually delegating transaction management to your Spring container (CMT), which is responsible for, e.g., opening and closing the transaction for you. It rolls back automatically when an unchecked exception is thrown, such as NullPointerException or any other RuntimeException. For checked exceptions you have to specify when the rollback is supposed to occur: @Transactional(rollbackFor = MyCheckedException.class).
You can also listen to and observe how the transaction is doing with a TransactionalEventListener, and react with some AOP listening code like shown here. But you are not ultimately managing the transaction; Spring is doing it for you. The client code can't react with custom code when something special happens, because the management of the transaction is delegated to Spring.
Therefore you have to fall back on user-managed transactions, where you open your transaction, commit it, and react in case of a rollback. That is exactly the purpose of UMT: giving you total control of your transaction.
From your old code you may get something like:
public int savePostingPaymentDetails(List<Posting> postingsToUpdate, List<PaymentPostingDetail> detailsToInsert, Payment payment) {
    int returnCode = 1; // 1 -> "success", 0 -> "failure"
    logger.info("DB ACCESS : INSERT PAYMENT DETAILS & UPDATE POSTINGS");
    long begin = System.nanoTime();
    long end = 0;
    this.em.getTransaction().begin();
    try {
        // 1 - Save Payments
        this.em.persist(payment);
        // 2 - Save Details
        for (PaymentPostingDetail ppd : detailsToInsert) {
            this.em.persist(ppd);
        }
        // 3 - Update Postings
        for (Posting p : postingsToUpdate) {
            if (p.getSignature() != null) {
                p.getSignature().setModification("withholding-tax.pay", new Date());
            } else {
                logger.error("The Posting with id = " + p.getIdentifier() + " has no PersistenceSignature ?!");
            }
            this.em.merge(p);
        }
        this.em.getTransaction().commit();
        end = System.nanoTime();
    } catch (Exception e) {
        returnCode = 0;
        logger.error("Unexpected error on saving/updating the DB.", e);
        this.em.getTransaction().rollback();
        logger.info("RollBack done.");
        return returnCode; // report the failure instead of calling System.exit()
    }
    logger.info("Details inserted & Postings updated.");
    logger.info("Execution time = " + ((end - begin) / 1000000) + " milliseconds.");
    logger.info("----------------------------------------------------------");
    return returnCode;
}
PS: on a side note, best practice would have you throw an exception when your commit fails, instead of returning a special code.
Your new method signature could be:
public void savePostingPaymentDetails(List<Posting> postingsToUpdate, List<PaymentPostingDetail> detailsToInsert, Payment payment)
        throws MyFailedDbOperationException, OtherException {
}
and throw the exception in your catch block:
catch (Exception e) {
    logger.error("Unexpected error on saving/updating the DB.", e);
    this.em.getTransaction().rollback();
    logger.info("RollBack done.");
    throw new MyFailedDbOperationException("my db operation failed");
}
You can use a listener (@TransactionalEventListener) to be informed of a rolled-back transaction (the listener can be bound to the different phases of a transaction). See section 16.8 of https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/transaction.html for more information (requires Spring >= 4.2).
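A minimal sketch of that listener approach (hedged: AffiliateProcessedEvent and the listener class are hypothetical names, and this only compiles inside a Spring context with spring-tx on the classpath):

```java
import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

// Hypothetical event carrying the affiliate being processed.
class AffiliateProcessedEvent {
    final long affiliateId;
    AffiliateProcessedEvent(long affiliateId) { this.affiliateId = affiliateId; }
}

@Component
class AffiliateRollbackListener {
    // Invoked only when the transaction that published the event rolls back.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void onRollback(AffiliateProcessedEvent event) {
        // map event.affiliateId to your custom error code / retry logic here
    }
}
```

Inside the @Transactional method you would inject an ApplicationEventPublisher and call publisher.publishEvent(new AffiliateProcessedEvent(affiliate.getId())); the event is held until the transaction completes and delivered only in the phase the listener is bound to.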

JAVA code to use semaphore with Cassandra to throttle executeAsync writes to eliminate NoHostAvailableException errors

I have some basic code that uses a prepared statement in a for loop and writes the results into a Cassandra table, with some throttling using a semaphore.
Session session = null;
try {
    session = connector.openSession();
} catch (Exception ex) {
    // .. moan and complain..
    System.err.printf("Got %s trying to openSession - %s\n", ex.getClass().getCanonicalName(), ex.getMessage());
}
if (session != null) {
    // Prepared Statement for Cassandra Inserts
    PreparedStatement statement = session.prepare(
            "INSERT INTO model.base " +
            "(channel, " +
            "time_key, " +
            "power" +
            ") VALUES (?,?,?);");
    BoundStatement boundStatement = new BoundStatement(statement);
    // Query a Cassandra table that has capital letters in the column names
    ResultSet results = session.execute("SELECT \"Time_Key\",\"Power\",\"Bandwidth\",\"Start_Frequency\" FROM \"SB1000_49552019\".\"Measured_Value\" limit 800000;");
    // Get the variables from each row of Cassandra data
    for (Row row : results) {
        // Upper-case column names in Cassandra
        time_key = row.getLong("Time_Key");
        start_frequency = row.getDouble("Start_Frequency");
        power = row.getFloat("Power");
        bandwidth = row.getDouble("Bandwidth");
        // Create channel power buckets, bind into the prepared statement, write to Cassandra.
        for (channel = 1.6000E8; channel <= channel_end; channel += increment) {
            if ((channel >= start_frequency) && (channel <= (start_frequency + bandwidth))) {
                ResultSetFuture rsf = session.executeAsync(boundStatement.bind(channel, time_key, power));
                backlogList.add(rsf); // put the new one at the end of the list
                if (backlogList.size() > 10000) { // wait till we have a few
                    while (backlogList.size() > 5432) { // then harvest about half of the oldest ones
                        rsf = backlogList.remove(0);
                        rsf.getUninterruptibly();
                    } // end while
                } // end if
            } // end if
        } // end for
    } // end "row" for
} // end session
My connection is built with the following:
public static void main(String[] args) {
    if (args.length != 2) {
        System.err.println("Syntax: com.neutronis.Spark_Reports <Spark Master URL> <Cassandra contact point>");
        System.exit(1);
    }
    SparkConf conf = new SparkConf();
    conf.setAppName("Spark Reports");
    conf.setMaster(args[0]);
    conf.set("spark.cassandra.connection.host", args[1]);
    Spark_Reports app = new Spark_Reports(conf);
    app.run();
}
With this code I'm attempting to throttle with a semaphore, but my Cassandra cluster still seems to get overloaded and kick out the error:
ERROR ControlConnection: [Control connection] Cannot connect to any
host, scheduling retry in 1000 milliseconds Exception in thread "main"
com.datastax.driver.core.exceptions.NoHostAvailableException: All
host(s) tried for query failed (no host was tried)
It seems odd that it says no host was tried.
I've looked at other semaphore-throttling questions such as this and this, and attempted to apply them to my code above, but am still getting the error.
Read my answer to this question for how to apply back-pressure when using asynchronous calls: What is the best way to get backpressure for Cassandra Writes?
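The general shape of semaphore-based back-pressure, independent of the Cassandra driver, is: acquire a permit before each async write, release it in the completion callback, and drain all permits before shutting down. The sketch below stands in for session.executeAsync with a task on an executor (ThrottledWrites and its parameters are illustrative names, not driver API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottledWrites {
    // Caps the number of in-flight async "writes" at maxInFlight.
    static int runAll(int totalWrites, int maxInFlight) throws InterruptedException {
        Semaphore permits = new Semaphore(maxInFlight);
        ExecutorService pool = Executors.newFixedThreadPool(8);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < totalWrites; i++) {
            permits.acquire();                    // blocks once maxInFlight writes are pending
            CompletableFuture.runAsync(() -> {
                try {
                    completed.incrementAndGet();  // the real code would bind + execute here
                } finally {
                    permits.release();            // release on completion, success or failure
                }
            }, pool);
        }
        permits.acquire(maxInFlight);             // drain: wait for every pending write
        pool.shutdown();
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAll(1000, 64));     // prints "1000"
    }
}
```

With the real driver you would release the permit from a future callback rather than a Runnable body, but the invariant is the same: permits held equals writes in flight.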

Using a Commonj Work Manager to send Asynchronous HTTP calls

I switched from making sequential HTTP calls to 4 REST services to making 4 simultaneous calls using a commonj4 work manager task executor. I'm using WebLogic 12c. This new code works in my development environment, but in our test environment under load conditions, and occasionally while not under load, the results map is not populated with all of the results. The logging suggests that each work item did receive its results back, though. Could this be a problem with the ConcurrentHashMap? In this example from IBM, they use their own version of Work and there's a getData() method, although it doesn't look like that method really exists in their class definition. I had followed a different example that just used the Work class but didn't demonstrate how to get the data out of those threads into the main thread. Should I be using execute() instead of schedule()? The API doesn't appear to be well documented. The stuckthreadtimeout is sufficiently high. component.processInbound() actually contains the code for the HTTP call, but I know the problem isn't there because I can switch back to the synchronous version of the class below and not have any issues.
http://publib.boulder.ibm.com/infocenter/wsdoc400/v6r0/index.jsp?topic=/com.ibm.websphere.iseries.doc/info/ae/asyncbns/concepts/casb_workmgr.html
My code:
public class WorkManagerAsyncLinkedComponentRouter implements MessageDispatcher<Object, Object> {
    private List<Component<Object, Object>> components;
    protected ConcurrentHashMap<String, Object> workItemsResultsMap;
    protected ConcurrentHashMap<String, Exception> componentExceptionsInThreads;
    ...
    // components is populated at this point with one component for each REST call to be made.
    public Object route(final Object message) throws RouterException {
        ...
        try {
            workItemsResultsMap = new ConcurrentHashMap<String, Object>();
            componentExceptionsInThreads = new ConcurrentHashMap<String, Exception>();
            final String parentThreadID = Thread.currentThread().getName();
            List<WorkItem> producerWorkItems = new ArrayList<WorkItem>();
            for (final Component<Object, Object> component : this.components) {
                producerWorkItems.add(workManagerTaskExecutor.schedule(new Work() {
                    public void run() {
                        LOG.info("Child thread " + Thread.currentThread().getName() + " Parent thread: " + parentThreadID + " Executing work item for: " + component.getName());
                        try {
                            Object returnObj = component.processInbound(message);
                            if (returnObj == null)
                                LOG.info("Object returned to work item is null, not adding to producer components results map, for this producer: " + component.getName());
                            else {
                                LOG.info("Added producer component thread result for: " + component.getName());
                                workItemsResultsMap.put(component.getName(), returnObj);
                            }
                            LOG.info("Finished executing work item for: " + component.getName());
                        } catch (Exception e) {
                            componentExceptionsInThreads.put(component.getName(), e);
                        }
                    }
                    ...
                }));
            } // end loop over producer components
            // Block until all items are done
            workManagerTaskExecutor.waitForAll(producerWorkItems, stuckThreadTimeout);
            LOG.info("Finished waiting for all producer component threads.");
            if (componentExceptionsInThreads != null && componentExceptionsInThreads.size() > 0) {
                ...
            }
            List<Object> resultsList = new ArrayList<Object>(workItemsResultsMap.values());
            if (resultsList.size() == 0)
                throw new RouterException(
                        "The producer thread results are all empty. The threads were likely not created. In testing this was observed when either 1) the system was almost out of memory (perhaps there is not enough memory to create a new thread for each producer for this REST request), or 2) timeouts were reached for all producers.");
            // ** The problem is identified here. The results in the ConcurrentHashMap aren't the number expected.
            if (workItemsResultsMap.size() != this.components.size()) {
                StringBuilder sb = new StringBuilder();
                for (String str : workItemsResultsMap.keySet()) {
                    sb.append(str + " ");
                }
                throw new RouterException(
                        "Did not receive results from all threads within the thread timeout period. Only retrieved: " + sb.toString());
            }
            LOG.info("Returning " + String.valueOf(resultsList.size()) + " results.");
            LOG.debug("List of returned feeds: " + String.valueOf(resultsList));
            return resultsList;
        }
        ...
    }
}
I ended up cloning the DOM document used as a parameter. There must be some downstream code that has side effects on the parameter.
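The cloning fix can be sketched with plain JAXP: Node.cloneNode(true) deep-copies a DOM Document, so each work item can mutate its own copy without side effects on the shared parameter. The class and element names below are illustrative, not from the original code:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DomCloneDemo {
    // Deep-copy the document so a worker thread can mutate its copy
    // without side effects on the caller's parameter.
    static Document deepClone(Document original) {
        return (Document) original.cloneNode(true);
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("message");
        root.setAttribute("id", "1");
        doc.appendChild(root);

        Document copy = deepClone(doc);
        copy.getDocumentElement().setAttribute("id", "2"); // mutate the clone only

        System.out.println(doc.getDocumentElement().getAttribute("id")); // prints "1"
    }
}
```

Handing each component such a clone (instead of the shared message) removes the cross-thread mutation that was corrupting the results.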

Using two while loop or single loop for different resultsets

Here is what I have right now, where I am making multiple JDBC connections based on the IP addresses I have.
Map<String, Connection> connections = new HashMap<>();
DeviceStmt = connRemote.createStatement();
DeviceRS = DeviceStmt.executeQuery(maindbsql);
while (DeviceRS.next()) {
    try {
        final String ip_address = DeviceRS.getString("IP_vch");
        System.out.println("Value of IP_vch Field:" + ip_address);
        connections.put(ip_address, DriverManager.getConnection("jdbc:mysql://"
                + ip_address + ":3306/test", RemoteUser, RemotePass));
        if (connections.isEmpty()) {
            System.err.println("Not successful");
        } else {
            System.out.println("Connection to " + ip_address + " is successful");
        }
    } catch (SQLException ex1) {
        System.out.println("While Loop Stack Trace Below:");
        ex1.printStackTrace();
    }
} // END of while (DeviceRS.next())
What I want to have:
Say, for example, I am making connections to the following IPs: 11.11.1.111 and 22.22.2.222.
I want to execute an additional query on 11.11.1.111 just after its connection is established, after the following line in the above code:
connections.put(ip_address, DriverManager.getConnection("jdbc:mysql://" + ip_address + ":3306/test", RemoteUser, RemotePass));
Since I can't reuse the same ResultSet (DeviceRS) that I am already iterating over, I am wondering how I can run a different query.
I want to do something like the following:
DeviceStmttwo = connRemote.createStatement();
DeviceRStwo = DeviceStmttwo.executeQuery(maindbsqlnew);
below the following lines from the above code:
DeviceStmt = connRemote.createStatement();
DeviceRS = DeviceStmt.executeQuery(maindbsql);
Should I go for a separate while (DeviceRStwo.next()) loop just before the existing while (DeviceRS.next()) loop and make the connections again, which seems like overhead to me?
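A single JDBC Connection can have several Statement objects open at once, so the extra query does not need a second outer loop: open a new Statement on the per-device connection right after it is created, and iterate its own ResultSet inside the existing loop. This is only a sketch against the question's identifiers (maindbsql, maindbsqlnew, connRemote, RemoteUser, and RemotePass are assumed from the post) and is not runnable without a MySQL driver and the actual schema:

```java
// One Statement per ResultSet, so the outer cursor stays valid.
try (Statement deviceStmt = connRemote.createStatement();
     ResultSet deviceRS = deviceStmt.executeQuery(maindbsql)) {
    while (deviceRS.next()) {
        String ipAddress = deviceRS.getString("IP_vch");
        Connection remote = DriverManager.getConnection(
                "jdbc:mysql://" + ipAddress + ":3306/test", RemoteUser, RemotePass);
        connections.put(ipAddress, remote);
        // Extra per-device query, run right after the connection is made,
        // on its own Statement so deviceRS is untouched:
        try (Statement extraStmt = remote.createStatement();
             ResultSet extraRS = extraStmt.executeQuery(maindbsqlnew)) {
            while (extraRS.next()) {
                // process the per-device rows here
            }
        }
    }
}
```

The only hard rule is that each open ResultSet needs its own Statement; reusing DeviceStmt for the second query would silently close DeviceRS.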

Neo4J create relationship hangs on remote, but node creation succeeds

My relationship creation hangs, yet the nodes it connects manage to persist to my remote instance.
public class Baz {
    private static enum CustomRelationships implements RelationshipType {
        CATEGORY
    }

    public void foo() {
        RestGraphDatabase db = new RestGraphDatabase("http://remoteIp:7474/db/data", username, password);
        Transaction tx = db.beginTx();
        try {
            Node a = db.createNode();
            a.setProperty("foo", "foo"); // finishes
            Node b = db.createNode();
            b.setProperty("bar", "bar"); // finishes
            a.createRelationshipTo(b, CustomRelationships.CATEGORY); // hangs
            System.out.println("Finished relationship");
            tx.success();
        } finally {
            tx.finish();
        }
    }
}
And I cannot figure out why. There is no stack trace, and the connection doesn't time out.
a.createRelationshipTo(b, DynamicRelationshipType.withName("CATEGORY"));
also hangs.
This query executes correctly from the admin shell:
start first=node(19), second=node(20)
create first-[r:RELTYPE { linkage : first.Baz + '<-->' + second.BazCat }]->second
return r
Yet when run in this fashion:
ExecutionResult result = engine.execute("start first=node("
        + entityNode.getId() + "), second=node("
        + categoryNode.getId() + ") "
        + " create first-[r:RELTYPE { linkage : first.Baz"
        + " + '<-->' + second.BazCat" + " }]->second return r");
Also hangs.
There are no real transactions over REST.
It is a bug in the Java-Rest-Binding that internal threads are not started as daemon threads. It doesn't actually hang; the program just never ends.
You can call System.exit(0) to end the program as a workaround.
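The daemon-thread behavior behind that answer is easy to demonstrate with plain threads: a non-daemon thread keeps the JVM alive after main() returns, while marking it as a daemon lets the process exit normally. The bug described above amounts to the library starting threads like the one below without setDaemon(true):

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000); // pretend this is an internal keep-alive loop
            } catch (InterruptedException ignored) { }
        });
        worker.setDaemon(true);       // without this line, the JVM would linger for 60s
        worker.start();
        System.out.println(worker.isDaemon()); // prints "true"
    }
}
```

Note that setDaemon must be called before start(); afterwards it throws IllegalThreadStateException, which is why only the library itself (or System.exit) can fix the lingering process.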
