Gremlin transaction handling using Java

I am using Java to connect to JanusGraph via Gremlin, and the following code to create a vertex and an edge. Currently I am committing the transaction as part of client.submit(), as shown in the code below:
try {
    String sessionId = UUID.randomUUID().toString();
    Client client = cluster.connect(sessionId);
    client.submit("graph.tx().open()");
    client.submit("g.addV('Person').property('Name', 'Justin').next()");
    client.submit("graph.tx().commit()");
    List<Result> rs = client.submit("g.V().count()").all().join();
    System.out.println("Result size is " + rs.size());
    System.out.println(rs.get(0).getString());
    client.closeAsync();
} catch (Exception e) {
    e.printStackTrace(); // don't swallow the exception silently
}
So I want to know whether there is a more appropriate way to handle transactions using Java, or whether this is the only way to do so.
Thanks,
Atul.

If you are submitting requests to a remote JanusGraph Server, then that is the way to do it: you use connect(<sessionId>) to create a session and then submit scripts against it. In the recently released TinkerPop 3.5.0, however, that rule changes. You can now use bytecode-based sessions as well as script-based sessions, which means the transaction API is now unified for both embedded and remote use cases. You can see more in the 3.5.0 Upgrade Documentation found here.
The 3.5.0 release is quite recent, having been announced only a couple of weeks ago. As a result, at the time of this answer JanusGraph does not yet support it (though work has started on it here). Until you are on a release of JanusGraph that supports TinkerPop 3.5.0, you have two options for transactions:
The one you are doing for remote use cases, or
Use JanusGraph in the embedded style.
For the latter, as taken from the documentation in the link provided:
graph = JanusGraphFactory.open("berkeleyje:/tmp/janusgraph")
juno = graph.addVertex() // automatically opens a new transaction
juno.property("name", "juno")
graph.tx().commit() // commits the transaction

With the TinkerPop 3.5 transaction API, the same thing can be done through a Transaction object:

public boolean transactionExample() {
    System.out.println("Begin Transaction");
    Transaction tx = g.tx();
    String id = "123321";
    GraphTraversalSource gtx = tx.begin();
    try {
        gtx.addV("T").property(T.id, id).next();
        System.out.println("Searching before commit ==> " + gtx.V().hasId(id).elementMap().next());
        if (2 / 0 == 0) { // deliberately triggers an ArithmeticException to exercise the rollback path
            throw new TransactionException("throwing exception"); // never reached; the division above throws first
        }
        tx.commit();
        System.out.println("Committed Transaction");
    } catch (Exception ex) {
        System.out.println("Catching exception " + ex);
        System.out.println(gtx);
        tx.rollback();
        System.out.println("Rolled Back Transaction");
    }
    System.out.println(gtx.tx().isOpen());
    return true;
}
For more information, refer to https://github.com/m-thirumal/gremlin-dsl

Related

Deleting contour httpproxy from Kubernetes namespace using java client

I am working on a K8s implementation using the Kubernetes Java client. I am looking for a way to delete Contour HTTPProxy objects that are in an invalid state; however, I am not able to figure out how to do it with the help of the Java client.
I am aware that we can delete an ingress using the code below:
k8sClient.extensions().ingresses().withName("my-ingress").delete();
Any help on how to delete a Contour HTTPProxy object from a K8s namespace using the Java client would be appreciated.
Contour HTTPProxy seems to be a custom resource. You can use either the typed API (requires CustomResource POJOs) or the typeless API (CustomResource manipulation using raw maps) for deleting an HTTPProxy.
Here is an example of doing it using the typeless API (based on KubernetesClient v5.4.1):
try (KubernetesClient client = new DefaultKubernetesClient()) {
    CustomResourceDefinitionContext context = new CustomResourceDefinitionContext.Builder()
        .withKind("HTTPProxy")
        .withPlural("httpproxies")
        .withGroup("projectcontour.io")
        .withVersion("v1")
        .withScope("Namespaced")
        .build();
    boolean isDeleted = client.customResource(context).inNamespace("default").withName("root").delete();
    if (isDeleted) {
        logger.info("HTTPProxy {} successfully deleted.", "root");
    } else {
        logger.warn("Unable to delete HTTPProxy {} in {} namespace", "root", "default");
    }
} catch (KubernetesClientException exception) {
    logger.error("Exception in interacting with Kubernetes API", exception);
}

Use of Java Smack 4.3.4 in a JUnit Testcase in Maven

I am working on a Java library with some services based on XMPP. For XMPP communication I use Smack version 4.3.4. Development has so far been problem-free, and I have also created some test routines that all run without errors. After I migrated to a Maven project to generate a fat JAR, I wanted to convert the executable test cases into JUnit tests. Unexpectedly, an error occurs whose cause I cannot explain. As I said, the code runs outside of JUnit without any problems.
Below is the simplified test code (establishing a connection to the xmpp server):
@Test
public void connect()
{
    Builder builder = XMPPTCPConnectionConfiguration.builder();
    builder.setSecurityMode(SecurityMode.disabled);
    builder.setUsernameAndPassword("iec61850client", "iec61850client");
    builder.setPort(5222);
    builder.setSendPresence(true);
    try
    {
        builder.setXmppDomain("127.0.0.1");
        builder.setHostAddress(InetAddress.getByName("127.0.0.1"));
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
    XMPPTCPConnectionConfiguration config = builder.build();
    XMPPTCPConnection c = new XMPPTCPConnection(config);
    c.setReplyTimeout(5000);
    try
    {
        c.connect().login();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
And here is the error message I get:
Exception in thread "Smack Reader (0)" java.lang.AssertionError
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.parsePackets(XMPPTCPConnection.java:1154)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.access$1000(XMPPTCPConnection.java:1092)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader$1.run(XMPPTCPConnection.java:1112)
In Smack it boils down to this 'assert' instruction:
assert (config.getXMPPServiceDomain().equals(reportedServerDomain));
Any idea what the problem might be or similar problems? I'm grateful for any help!
Thanks a lot,
Markus
If you look at the source code, you will find that reportedServerDomain is extracted from the server's stream-open tag. In this case the XMPP domain reported by the server does not match the one that is configured. This should usually not happen, but I assume it is related to the way you run the unit tests, or more precisely, to the remote server or mocked server used in the tests. If you enable Smack's debug output, you will see the stream-open tag with its 'from' attribute and value. Compare this with the configured XMPP service domain in the ConnectionConfiguration.
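For reference, a minimal sketch of enabling that debug output, assuming Smack 4.x (verify against the Smack documentation for your exact version):

```
// In code, before the connection is created (Smack 4.x):
SmackConfiguration.DEBUG = true;
```

Alternatively, Smack honors the smack.debugEnabled system property, so -Dsmack.debugEnabled=true on the JVM command line has the same effect without a code change.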

c3p0 says - "java.lang.Exception: DEBUG ONLY: Overdue resource check-out stack trace" on starting a hibernate transaction

Recently, my Tomcat server started hanging; requests were never answered. I figured out that it was due to connections never being returned to the connection pool.
I am using c3p0 with Hibernate, and the database is MySQL 5.5.
In order to debug the connection leaks, I added the following properties in my hibernate.cfg.xml
<property name="hibernate.c3p0.unreturnedConnectionTimeout">30</property>
<property name="hibernate.c3p0.debugUnreturnedConnectionStackTraces">true</property>
After adding them, the logs say:
[2013-10-12 23:40:22.487] [ INFO] BasicResourcePool.removeResource:1392 - A checked-out resource is overdue, and will be destroyed: com.mchange.v2.c3p0.impl.NewPooledConnection@1f0c0dd
[2013-10-12 23:40:22.487] [ INFO] BasicResourcePool.removeResource:1395 - Logging the stack trace by which the overdue resource was checked-out.
java.lang.Exception: DEBUG ONLY: Overdue resource check-out stack trace.
The stack trace points at dao.DAOBasicInfo.getBean(DAOBasicInfo.java:69):
public static Basicinfo getBean(Integer iduser) {
    Basicinfo u = null;
    Session sess = NewHibernateUtil.getSessionFactory().openSession();
    try {
        Transaction tx = sess.beginTransaction(); // line 69
        Query q = sess.createQuery("from Basicinfo where iduser=" + iduser);
        u = (Basicinfo) q.uniqueResult();
        if (u == null) {
            u = new Basicinfo();
            u.setIduser(iduser);
        }
        tx.commit();
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        sess.close();
    }
    return u;
}
I cross-checked, and MySQL says it supports transactions with InnoDB.
Because of the above error I'm accumulating unreturned connections, and they pile up until the app becomes unresponsive.
Please let me know what's wrong with starting a transaction this way, given that I'm using finally and no exception is thrown.
Some suggestions to debug it:
As Steve mentioned in the comments, try to see what happens when you remove the unreturnedConnectionTimeout option.
Maybe your queries are taking too long. Try to log some performance stats in your code and see how much time your query takes; maybe you need to tune it. As a short-term fix, you can also increase unreturnedConnectionTimeout beyond the response time of your queries.
Also try the transaction timeout option in Hibernate: set tx.setTimeout(20), play with the timeout numbers, and see if some queries time out.
You may also want to use a profiling tool. Try VisualVM in case your Java version is supported on it. Otherwise (if on Linux or Mac), you may want to try the Java debugging commands that ship with the JDK.
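For example, a thread dump taken while the app is hanging usually shows which threads are stuck holding pool connections. jps and jstack ship with the JDK; the PID below is a placeholder:

```
jps -l                          # list running JVMs with their PIDs
jstack -l <pid> > threads.txt   # take a thread dump of the Tomcat JVM
```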
Small improvements on the code:
Not sure if it will really fix your issue, but you may want to roll back the transaction in the exception block. I added another try/catch around tx.rollback() to avoid a secondary exception.
I also added a null check before closing the session. As you may know, one case where finally may not completely execute is when another exception is thrown inside the finally block. It may not apply to your code right now, but if you ever add more than one line to the finally block, make sure exceptions are handled so the next line can still execute.
One more suggestion is to reduce the scope of the transaction itself. Looking at the code, it seems you may need the transaction only when no user is found. How about limiting the transaction code to the inside of the if (u == null) block? Not sure if it helps, but you need not have a transaction for a read.
Below is my sample code:

public static Basicinfo getBean(Integer iduser) {
    Basicinfo u = null;
    Transaction tx = null;
    Session sess = NewHibernateUtil.getSessionFactory().openSession();
    try {
        Query q = sess.createQuery("from Basicinfo where iduser=" + iduser);
        u = (Basicinfo) q.uniqueResult();
        if (u == null) {
            tx = sess.beginTransaction(); // transaction only needed for the write
            u = new Basicinfo();
            u.setIduser(iduser);
            tx.commit();
        }
    } catch (Exception ex) {
        ex.printStackTrace();
        if (tx != null) {
            try {
                tx.rollback();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    } finally {
        if (sess != null) {
            sess.close();
        }
    }
    return u;
}
One of the reasons this error comes up is forgetting to call transaction.commit(). This is not an answer to the question above, but someone who forgot to commit will also land on this page after googling the error.

Does SolrJ perform caching?

I'm using Solr in my web application as a search engine. I use the DataImportHandler to automatically import data from my database into the search index. When the DataImportHandler adds new data, the data is successfully added to the index, but it isn't returned when I query the index using SolrJ: I have to restart my application server for the data to be found by SolrJ. Is there some kind of caching going on? I use SolrJ in embedded mode. Here's my SolrJ code:
private static final SolrServer solrServer = initSolrServer();

private static SolrServer initSolrServer() {
    try {
        CoreContainer.Initializer initializer = new CoreContainer.Initializer();
        coreContainer = initializer.initialize(); // coreContainer is a static field
        EmbeddedSolrServer server = new EmbeddedSolrServer(coreContainer, "");
        return server;
    } catch (Exception ex) {
        logger.log(Level.SEVERE, "Error initializing SOLR server", ex);
        return null;
    }
}
Then, to query, I do the following:
SolrQuery query = new SolrQuery(keyword);
QueryResponse response = solrServer.query(query);
As you can see, my SolrServer is declared as static. Should I create a new EmbeddedSolrServer for each query instead? I'm afraid that will incur a big performance penalty.
The standard configuration of the Solr server doesn't enable auto-commit. If you have a solrconfig.xml file, look for the commented-out autoCommit tag. If not, then after each document is added you can call server.commit(), although with a large stream of documents this could prove a big performance issue (commit is a relatively heavy operation).
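For illustration, an autoCommit block in solrconfig.xml typically looks like the following (the thresholds are example values to tune for your load; verify the element names against your Solr version):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- commit automatically after 10000 docs or 15 seconds, whichever comes first -->
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>15000</maxTime> <!-- milliseconds -->
  </autoCommit>
</updateHandler>
```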
If you are using it in a web application, I'd advise deploying solr-x.x.war instead of using EmbeddedSolrServer. This will provide you with a rich HTTP interface for updating, administering, and searching the index.

How to use Derby Client in Felix?

I want to run the Derby client from within an OSGi bundle. The bundle is built by Maven, so I added a dependency on org.apache.derby:derbyclient. At runtime I get the following exception: java.sql.SQLException: No suitable driver found for jdbc:derby://localhost:1527/testdb.
Interestingly, the whole thing works when I use the embedded driver and a dependency on org.apache.derby:derby. I just don't see the difference between the two.
What am I doing wrong, and how can I fix it?
Some tidbits:
After some advice I found on the Internet, I set the following OSGi header: DynamicImport-Package: *. This fixed the problems with the embedded driver, but the client still fails.
The version of Derby I use is 10.7.1.1, which should be OSGi-enabled (at least it has OSGi headers).
In OSGi it is recommended not to use the DriverManager to get a connection. The better way is to use a DataSource.
So for the Derby client you could use this:
ClientDataSource ds = new ClientDataSource();
// ... set properties here
Connection connection = ds.getConnection();
As the DataSource approach does not fiddle with the classloader, it is much more reliable in OSGi.
Additionally, it is good practice to separate the DataSource from your client code and bind it as an OSGi service. This allows you to keep the dependency on the database implementation out of your code.
The easiest approach is to use pax-jdbc-config and let it create the DataSource for you from a configuration. In your own code you then just bind the DataSource as a service and you are fine.
The current release of pax-jdbc does not yet support derbyclient, but I just added this to master, so the next release should contain it.
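As a sketch of what such a configuration might look like with pax-jdbc-config under Karaf (file name and property values are hypothetical; check the pax-jdbc-config documentation for the exact driver name your release registers):

```properties
# etc/org.ops4j.datasource-testdb.cfg (hypothetical PID)
osgi.jdbc.driver.name=derbyclient
url=jdbc:derby://localhost:1527/testdb
dataSourceName=testdb
user=app
password=app
```

pax-jdbc-config then publishes a DataSource service matching dataSourceName, which your bundle can bind without ever referencing Derby classes directly.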
Okay, although not even half an hour has passed since I asked the question, I found a solution. I don't know how clean it is, but it seems to get the job done:
ClassLoader ctxtCl = Thread.currentThread().getContextClassLoader();
try {
    Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
    try {
        Class.forName("org.apache.derby.jdbc.ClientDriver");
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    }
    dbConnection = DriverManager.getConnection("jdbc:derby://localhost:1527/testdb");
} catch (SQLException e) {
    /* log, etc. */
} finally {
    Thread.currentThread().setContextClassLoader(ctxtCl);
}
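The workaround above relies on a save-and-restore of the thread context classloader. That pattern can be sketched with plain JDK classes, independent of Derby (the helper and class names below are illustrative, not part of the original answer):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class TcclDemo {
    // Runs an action with a temporary thread context classloader,
    // restoring the original one afterwards.
    static void withContextClassLoader(ClassLoader cl, Runnable action) {
        Thread t = Thread.currentThread();
        ClassLoader previous = t.getContextClassLoader();
        t.setContextClassLoader(cl);
        try {
            action.run();
        } finally {
            t.setContextClassLoader(previous); // always restored, even on exceptions
        }
    }

    public static void main(String[] args) {
        ClassLoader original = Thread.currentThread().getContextClassLoader();
        ClassLoader temporary = new URLClassLoader(new URL[0], original);

        withContextClassLoader(temporary, () ->
            System.out.println("inside: swapped=" +
                (Thread.currentThread().getContextClassLoader() == temporary)));

        System.out.println("after: restored=" +
            (Thread.currentThread().getContextClassLoader() == original));
    }
}
```

The finally block is what keeps the workaround from leaking the bundle's classloader into the thread: the original loader comes back even if the driver lookup throws.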
