SMBJ and DFS and "Nested Session" - java

I have a project where I am given an ID and then use that ID to look up file paths and process the files. These files are on various mounted drives, so I am using the SMBJ Java library to access them.
The problem I am having is that some (most) of the files are reached through a DFS mount point. This in and of itself is not a problem per se, but the SMBJ library appears to create a nested session for each distinct DFS location. So even though I am closing the actual FILE after I am done reading it, the DiskSession object is holding onto all of these nested sessions, and eventually, either through the DFS configuration or through the library itself, I hit a point where it blows up and stops allowing more sessions to be created.
I am processing hundreds of thousands of records, and the "crash" appears to happen somewhere around 500 or so records (sessions) processed. Looking at the code, I do not see anything obvious for explicitly closing these nested sessions; in fact, I see no external access to them at all from the DiskShare object.
Is there some setting I am missing that caps the number of sessions being held? Other than keeping my own counter and closing and reopening sessions/connections myself, I am at a loss for how to handle this.
Does anyone know what I am missing here?
Code below:
public class Smb {
private static SMBClient client;
private static String[] DFSMounts = {"DFS1","dfs1"};
private static final Logger Log = LoggerFactory.getLogger(Smb.class);
private static HashMap<String,DiskShare> shares = new HashMap<>();
private static HashMap<String,Connection> connections = new HashMap<>();
private static HashMap<Connection,Session> sessions = new HashMap<>();
private synchronized static SMBClient getClient(){
if (client == null){
SmbConfig cfg = SmbConfig.builder().withDfsEnabled(true).build();
client = new SMBClient(cfg);
}
return client;
}
private synchronized static Connection getConnection(String realDomainName) throws IOException{
Log.info("DOMAIN NAME "+realDomainName);
Connection connection = (connections.get(realDomainName) == null) ? client.connect(realDomainName) : connections.get(realDomainName);
if(!connection.isConnected()) {
connection.close();
sessions.remove(connection);
connection = client.connect(realDomainName);
}
// connection = client.connect(realDomainName);
connections.put(realDomainName,connection);
return connection;
}
private synchronized static Session getSession(Connection connection,SMBClient client){
Session session = sessions.get(connection);
if(session==null) {
PropertiesCache props = PropertiesCache.getInstance();
String sambaUsername = props.getProperty("smb.user");
String sambaPass = props.getProperty("smb.password");
String sambaDomain = props.getProperty("smb.domain");
Log.info("CLIENT " + client);
session = (sessions.get(connection) != null) ? sessions.get(connection) : connection.authenticate(new AuthenticationContext(sambaUsername, sambaPass.toCharArray(), sambaDomain));
sessions.put(connection, session);
}
return session;
}
@SuppressWarnings("UnusedReturnValue")
public synchronized static DiskShare getShare(String domainName, String shareName) throws SmbException
{
DiskShare share = shares.get(domainName+"/"+shareName);
if((share!=null)&&(!share.isConnected())) share=null;
if(share == null){
try {
PropertiesCache props = PropertiesCache.getInstance();
String sambaUsername = props.getProperty("smb.user");
String sambaPass = props.getProperty("smb.password");
String sambaDomain = props.getProperty("smb.domain");
String dfsIP = props.getProperty("smb.sambaIP");
SMBClient client = getClient();
String realDomainName = (Arrays.stream(DFSMounts).anyMatch(domainName::equals)) ? dfsIP: domainName;
Connection connection = getConnection(realDomainName);
Session session = getSession(connection,client);
share = (DiskShare) session.connectShare(shareName);
shares.put(domainName+"/"+shareName,share);
}
catch (Exception e){
Log.info("EXCEPTION E "+e);
Log.info("EX "+e.getMessage());
throw new SmbException();
}
}
return(share);
}
public static String fixFilename(String filename){
String[] parts = filename.split("\\\\");
ArrayList<String> partsList = new ArrayList<>(Arrays.asList(parts));
partsList.remove(0);
partsList.remove(0);
partsList.remove(0);
partsList.remove(0);
return String.join("/",partsList);
}
public static File open(String filename) throws SmbException {
String[] parts = filename.split("\\\\");
String domainName = parts[2];
String shareName = parts[3];
DiskShare share = getShare(domainName,shareName);
Set<SMB2ShareAccess> s = new HashSet<>();
s.add(SMB2ShareAccess.ALL.iterator().next());
filename = fixFilename(filename);
return(share.openFile(filename, EnumSet.of(AccessMask.GENERIC_READ), null, s, SMB2CreateDisposition.FILE_OPEN, null));
}
}
And here is how the open is being used (to show that the file is closed after use):
String filename = documents.get(0).getUNCPath();
try (File f = Smb.open(filename)){
// process the file ...
f.closeSilently();
}
And:
while(i.hasNext()){
String filename = (String)i.next();
Log.info("FILENAME "+filename);
try(File f = Smb.open(filename)){
// process the file ...
}
}

I have created a PR for SMBJ which changes this: it reuses the nested session for the same host. I have successfully used it myself to avoid the exact same problem you are having. https://github.com/hierynomus/smbj/pull/489
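Until that change is in a release you can use, a possible interim workaround (just a sketch, reusing the static caches from your Smb class; the method name and the recycle threshold are made up for illustration) is to periodically tear down the cached shares, sessions, and connections so SMBJ rebuilds them, which also discards the accumulated nested DFS sessions:
// Hypothetical helper added to the Smb class above.
// Counts opened files and recycles all cached SMB objects every N opens.
private static int openCount = 0;
private static final int RECYCLE_EVERY = 400; // arbitrary, below the observed ~500-session failure point
private synchronized static void maybeRecycle() {
    if (++openCount % RECYCLE_EVERY != 0) return;
    for (DiskShare share : shares.values()) {
        try { share.close(); } catch (Exception ignored) { }
    }
    for (Session session : sessions.values()) {
        try { session.close(); } catch (Exception ignored) { }
    }
    for (Connection connection : connections.values()) {
        try { connection.close(); } catch (Exception ignored) { }
    }
    shares.clear();
    sessions.clear();
    connections.clear();
}
Calling maybeRecycle() at the top of open(...) keeps the session count bounded, at the cost of an occasional reconnect and re-authentication.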

Related

Config-file for JavaFX-Project

I would like to add a file or class to my JavaFX project that only contains the configuration data of the project, e.g. the access data for the database, system paths etc. How would you do this?
Just write everything in a normal class? There is definitely a better way, right?
To answer my own question, here is the approach I ended up with.
First I created a property file in the project folder and called it app.properties:
db_url=jdbc:mysql://localhost:3306/db name
db_user=user name
db_pwd=secret password
instructions_folder=/home/username/documents/
Then I created a class that loads the properties and makes them available throughout the project.
public class AppProperties {
// FILENAME = path to the properties file
// Store and protect it wherever you want
private final String FILENAME = "app.properties";
private static final AppProperties config_file = new AppProperties();
private Properties prop = new Properties();
private String msg = "";
private AppProperties(){
    try (InputStream input = new FileInputStream(FILENAME)) {
        // Load the properties from the file
        prop.load(input);
    } catch (IOException ex) {
        msg = "Can't find/open property file";
        ex.printStackTrace();
    }
}
public String getProperty (String key){
return prop.getProperty(key);
}
public String getMsg () {
return msg;
}
// == Singleton design pattern == //
// Wherever you call this method in the application,
// you always get the same and only instance (config_file)
public static AppProperties getInstance(){
return config_file;
}
}
In the DBUtilitis class, where I do my database queries, I now load the properties into final variables and use them in the query methods.
private static final String db_url = AppProperties.getInstance().getProperty("db_url");
private static final String db_user = AppProperties.getInstance().getProperty("db_user");
private static final String db_pwd = AppProperties.getInstance().getProperty("db_pwd");
If I have not completely misunderstood this, the advantage of property files is that they can be stored and protected somewhere on the server. I hope the solution is not completely wrong - it works well anyway. I am always happy to receive suggestions and / or improvements.
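If you want the file to live outside the project folder (e.g. somewhere protected on the server), one small extension of the above (a sketch, not part of the original answer; the property name app.config is just an example) is to let the path come from a JVM system property and fall back to the local file:
// In AppProperties: resolve the file location from -Dapp.config=...,
// falling back to the app.properties next to the application.
private final String FILENAME = System.getProperty("app.config", "app.properties");
Started with java -Dapp.config=/etc/myapp/app.properties -jar app.jar, the application then reads the protected copy instead of the one in the project folder.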

Amazon Keyspace (Cassandra) query no node was available to execute query

I'm using AWS Keyspaces (Cassandra 3.11.2) with Apache Flink running on AWS EMR. Sometimes the query below throws an exception. The same code running on AWS Lambda also throws the same NoHost exception. What did I do wrong?
String query = "INSERT INTO TEST (field1, field2) VALUES(?, ?)";
PreparedStatement prepared = CassandraConnector.prepare(query);
int i = 0;
BoundStatement bound = prepared.bind().setString(i++, "Field1").setString(i++, "Field2")
.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
ResultSet rs = CassandraConnector.execute(bound);
at com.datastax.oss.driver.api.core.NoNodeAvailableException.copy(NoNodeAvailableException.java:40)
at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
at com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:230)
at com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:53)
at com.test.manager.connectors.CassandraConnector.execute(CassandraConnector.java:16)
at com.test.repository.impl.BackupRepositoryImpl.insert(BackupRepositoryImpl.java:36)
at com.test.service.impl.BackupServiceImpl.insert(BackupServiceImpl.java:18)
at com.test.flink.function.AsyncBackupFunction.processMessage(AsyncBackupFunction.java:78)
at com.test.flink.function.AsyncBackupFunction.lambda$asyncInvoke$0(AsyncBackupFunction.java:35)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1596)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
This is my code:
CassandraConnector.java:
Because the cost of initializing a PreparedStatement is high, I cache them.
public class CassandraConnector {
private static final ConcurrentHashMap<String, PreparedStatement> preparedStatementCache = new ConcurrentHashMap<String, PreparedStatement>();
public static ResultSet execute(BoundStatement bound) {
CqlSession session = CassandraManager.getSessionInstance();
return session.execute(bound);
}
public static ResultSet execute(String query) {
CqlSession session = CassandraManager.getSessionInstance();
return session.execute(query);
}
public static PreparedStatement prepare(String query) {
PreparedStatement result = preparedStatementCache.get(query);
if (result == null) {
CqlSession session = CassandraManager.getSessionInstance();
result = session.prepare(query);
preparedStatementCache.putIfAbsent(query, result);
}
return result;
}
}
CassandraManager.java:
I'm using a double-checked-locking singleton for the session object.
public class CassandraManager {
private static final Logger logger = LoggerFactory.getLogger(CassandraManager.class);
private static final String SSL_CASSANDRA_PASSWORD = "password";
private static volatile CqlSession session;
static {
try {
initSession();
} catch (Exception e) {
logger.error("Error CassandraManager getSessionInstance", e);
}
}
private static void initSession() {
List<InetSocketAddress> contactPoints = Collections.singletonList(InetSocketAddress.createUnresolved(
"cassandra.ap-southeast-1.amazonaws.com", 9142));
DriverConfigLoader loader = DriverConfigLoader.fromClasspath("application.conf");
Long start = BaseHelper.getTime();
session = CqlSession.builder().addContactPoints(contactPoints).withConfigLoader(loader)
.withAuthCredentials(AppUtil.getProperty("cassandra.username"),
AppUtil.getProperty("cassandra.password"))
.withSslContext(getSSLContext()).withLocalDatacenter("ap-southeast-1")
.withKeyspace(AppUtil.getProperty("cassandra.keyspace")).build();
logger.info("End connect: " + (new Date().getTime() - start));
}
public static CqlSession getSessionInstance() {
if (session == null || session.isClosed()) {
synchronized (CassandraManager.class) {
if (session == null || session.isClosed()) {
initSession();
}
}
}
return session;
}
public static SSLContext getSSLContext() {
InputStream in = null;
try {
KeyStore ks = KeyStore.getInstance("JKS");
in = CassandraManager.class.getClassLoader().getResourceAsStream("cassandra_truststore.jks");
ks.load(in, SSL_CASSANDRA_PASSWORD.toCharArray());
TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(ks);
SSLContext ctx = SSLContext.getInstance("TLS");
ctx.init(null, tmf.getTrustManagers(), null);
return ctx;
} catch (Exception e) {
logger.error("Error CassandraConnector getSSLContext", e);
} finally {
if (in != null) {
try {
in.close();
} catch (IOException e) {
logger.error("", e);
}
}
}
return null;
}
}
application.conf
datastax-java-driver {
basic.request {
timeout = 5 seconds
consistency = LOCAL_ONE
}
advanced.connection {
max-requests-per-connection = 1024
pool {
local.size = 1
remote.size = 1
}
}
advanced.reconnect-on-init = true
advanced.reconnection-policy {
class = ExponentialReconnectionPolicy
base-delay = 1 second
max-delay = 60 seconds
}
advanced.retry-policy {
class = DefaultRetryPolicy
}
advanced.protocol {
version = V4
}
advanced.heartbeat {
interval = 30 seconds
timeout = 1 second
}
advanced.session-leak.threshold = 8
advanced.metadata.token-map.enabled = false
}
There are two scenarios where the driver would report NoNodeAvailableException:
Nodes are unresponsive/unavailable and the driver has marked all of them as down.
All the contact points provided are invalid.
If some inserts are working but you eventually run into NoNodeAvailableException, that indicates to me that the nodes are getting overloaded and eventually become unresponsive, so the driver no longer picks a coordinator since they're all marked as "down".
If none of the requests work at all, it means that the contact points are unreachable or unresolvable so the driver can't connect to the cluster. Cheers!
The NoHostAvailableException is a client-side exception thrown by the open-source driver after it has retried the available hosts. The open-source driver encapsulates the root cause of the retries, which can be confusing.
I suggest first improving your observability by setting up these CloudWatch metrics. You can follow this prebuilt CloudFormation template to get started; it only takes a few seconds.
Here is a set up for Keyspace & Table Metrics for Amazon Keyspaces using Cloud Watch:
https://github.com/aws-samples/amazon-keyspaces-cloudwatch-cloudformation-templates
You can also replace the retry policy with the examples found in this helper project. The retry policy in that project will either retry or throw the original exception, which removes the occurrences of NoHostAvailableException and gives your application better transparency. Here's the link to the GitHub repo: https://github.com/aws-samples/amazon-keyspaces-java-driver-helpers
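For illustration only (the exact policy class name and its options must be taken from that helper project's documentation; the values below are placeholders, not verified here), swapping the retry policy is just a change to the advanced.retry-policy block of the application.conf shown above:
datastax-java-driver {
  advanced.retry-policy {
    # Placeholder class name: copy the fully qualified name from the helper project's README.
    class = com.example.keyspaces.AmazonKeyspacesRetryPolicy
    # Hypothetical option: check the helper project for the settings it actually supports.
    max-attempts = 3
  }
}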
If you're using a private VPC endpoint, you will want to add the following permissions to enable more entries in the system.peers table.
Amazon Keyspaces just announced new functionality that provides more connection points when establishing a session with private VPC endpoints.
Here is a link about how Keyspaces now automatically optimizes client connections made through AWS PrivateLink to improve availability and read/write throughput: https://aws.amazon.com/about-aws/whats-new/2021/07/amazon-keyspaces-for-apache-cassandra-now-automatically-optimi/
This link talks about using Amazon Keyspaces with interface VPC endpoints: https://docs.aws.amazon.com/keyspaces/latest/devguide/vpc-endpoints.html. To enable this new functionality, you will need to grant the additional permissions ec2:DescribeNetworkInterfaces and ec2:DescribeVpcEndpoints.
{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"ListVPCEndpoints",
"Effect":"Allow",
"Action":[
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeVpcEndpoints"
],
"Resource":"*"
}
]
}
I suspect that this:
.withLocalDatacenter(AppUtil.getProperty("cassandra.localdatacenter"))
pulls back a data center name which either does not match the keyspace replication definition or the configured data center name:
nodetool status | grep Datacenter
Basically, if your connection is defined with a local data center which does not exist, it will still try to read/write with replicas in that data center. This will fail, because it obviously cannot find nodes in a non-existent data center.
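Since nodetool is not available against a managed service like Amazon Keyspaces, a quick sanity check (a small sketch reusing the CassandraManager from the question) is to ask the node the driver actually connected to which data center it reports, and compare that with the value passed to withLocalDatacenter:
// Log the data center name reported by the connected node (system.local is a standard Cassandra table).
CqlSession session = CassandraManager.getSessionInstance();
Row row = session.execute("SELECT data_center FROM system.local").one();
logger.info("Connected data center: " + (row == null ? "unknown" : row.getString("data_center")));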
Similar question here: NoHostAvailable error in cqlsh console

Java Servlet Method Parameters, response, and thread safety

I have written some REST APIs using Java servlets on Tomcat. This is my first experience with Java, APIs, and Tomcat. As I research and read about servlets, method parameter passing, and more recently thread safety, I realize I need some review, suggestions, and tutorial guidance from those of you who are far more experienced. I have found many questions and answers that seem to address pieces of this, but my lack of experience keeps things unclear.
The code below shows the top portion of one servlet example along with an example private method. I have "global" variables defined at the class level so that I may track the success of a method and determine if I need to send an error response. I do this because the method(s) already return a value.
Are those global variables creating an unsafe thread environment?
Since the response is not visible in the private methods, how else might I determine the need to stop the process and send an error response if those global variables are unsafe?
Though clipped for space, should I be doing all of the XML handling in the doGet method?
Should I be calling all of the different private methods for the various data retrieval tasks and data handling?
Should each method that accesses the same database open a Connection, or should the doGet method create a Connection and pass it to each method?
Assist, suggest, teach, guide to whatever you feel appropriate, or point me to the right learning resources so I may learn how to do better. Direct and constructive criticism welcome -- bashing and derogatory statements not preferred.
@WebServlet(name = "SubPlans", urlPatterns = {"*omitted*"})
public class SubPlans extends HttpServlet {
private transient ServletConfig servletConfig;
private String planSpecialNotes,
planAddlReqLinks,
legalTermsHeader,
legalTermsMemo,
httpReturnMsg;
private String[] subPlanInd = new String[4];
private boolean sc200;
private int httpReturnStatus;
private static final long serialVersionUID = 1L;
{
httpReturnStatus = 0;
httpReturnMsg = "";
sc200 = true;
planAddlReqLinks = null;
planSpecialNotes = null;
legalTermsHeader = "";
legalTermsMemo = null;
}
@Override
public void init(ServletConfig servletConfig)
throws ServletException {
this.servletConfig = servletConfig;
}
@Override
public ServletConfig getServletConfig() {
return servletConfig;
}
@Override
public String getServletInfo() {
return "SubPlans";
}
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
List<HashMap<String, Object>> alSubDeps = new ArrayList<HashMap<String, Object>>();
String[] coverageDates = new String[6],
depDates = new String[8];
String eeAltId = null,
eeSSN = null,
carrier = null,
logosite = null,
fmtSSN = "X",
subSQL = null,
healthPlan = null,
dentalPlan = null,
visionPlan = null,
lifePlan = null,
tier = null,
healthGroupNum = null,
effdate = null,
holdEffDate = null,
planDesc = "",
planYear = "",
summaryBenefitsLink = null;
int[][] effdates = new int[6][4];
int holdDistrictNumber = 0,
districtNumber = 0,
holdUnit = 0,
unit = 0;
boolean districtHasHSA = false;
XMLOutputFactory outputFactory = XMLOutputFactory.newInstance();
try {
eeAltId = request.getParameter("*omitted*");
if ( eeAltId != null ) {
Pattern p = Pattern.compile(*omitted*);
Matcher m = p.matcher(eeAltId);
if ( m.find(0) ) {
eeSSN = getSSN(eeAltId);
} else {
httpReturnStatus = 412;
httpReturnMsg = "Alternate ID format incorrect.";
System.err.println("Bad alternate id format " + eeAltId);
sc200 = false;
}
} else {
httpReturnStatus = 412;
httpReturnMsg = "Alternate ID missing.";
System.err.println("alternate id not provided.");
sc200 = false;
}
if ( sc200 ) {
coverageDates = determineDates();
subSQL = buildSubSQLStatement(eeSSN, coverageDates);
alSubDeps = getSubDeps(subSQL);
if ( sc200 ) {
XMLStreamWriter writer = outputFactory.createXMLStreamWriter(response.getOutputStream());
writer.writeStartDocument("1.0");
writer.writeStartElement("subscriber");
// CLIPPED //
writer.writeEndElement(); // subscriber
writer.writeEndDocument();
if ( sc200 ) {
response.setStatus(HttpServletResponse.SC_OK);
writer.flush();
} else {
response.sendError(httpReturnStatus, httpReturnMsg);
}
}
}
} catch (Exception e) {
e.printStackTrace();
System.err.println("Error writing XML");
System.err.println(e);
}
}
@Override
public void destroy() {
}
private String getPlanDescription(String planID) {
String planDesc = null;
String sqlEE = "SELECT ...";
Connection connGPD = null;
Statement stGPD = null;
ResultSet rsGPD = null;
try {
connGPD = getDbConnectionEE();
try {
stGPD = connGPD.createStatement();
planDesc = "Statement error";
try {
rsGPD = stGPD.executeQuery(sqlEE);
if ( !rsGPD.isBeforeFirst() )
planDesc = "No data";
else {
rsGPD.next();
planDesc = rsGPD.getString("Plan_Description");
}
} catch (Exception rsErr) {
httpReturnStatus = 500;
httpReturnMsg = "Error retrieving plan description.";
System.err.println("getPlanDescription: " + httpReturnMsg + " " + httpReturnStatus);
System.err.println(rsErr);
sc200 = false;
} finally {
if ( rsGPD != null ) {
try {
rsGPD.close();
} catch (Exception rsErr) {
System.err.println("getPlanDescription: Error closing result set.");
System.err.println(rsErr);
}
}
}
} catch (Exception stErr) {
httpReturnStatus = 500;
httpReturnMsg = "Error creating plan description statement.";
System.err.println("getPlanDescription: " + httpReturnMsg + " " + httpReturnStatus);
System.err.println(stErr);
sc200 = false;
} finally {
if ( stGPD != null ) {
try {
stGPD.close();
} catch (Exception stErr) {
System.err.println("getPlanDescription: Error closing query statement.");
System.err.println(stErr);
}
}
}
} catch (Exception connErr) {
httpReturnStatus = 500;
httpReturnMsg = "Error closing database.";
System.err.println("getPlanDescription: " + httpReturnMsg + " " + httpReturnStatus);
System.err.println(connErr);
sc200 = false;
} finally {
if ( connGPD != null ) {
try {
connGPD.close();
} catch (Exception connErr) {
System.err.println("getPlanDescription: Error closing connection.");
System.err.println(connErr);
}
}
}
return planDesc.trim();
}
I have "global" variables defined at the class level
You have instance variables declared at the class level. There are no globals in Java.
so that I may track the success of a method and determine if I need to send an error response.
Poor technique.
I do this because the method(s) already return a value.
You should use exceptions for this if the return values are already taken.
Are those global variables creating an unsafe thread environment
Those instance variables are creating an unsafe thread environment.
Since the response is not visible in the private methods, how else might I determine the need to stop the process and send an error response if those global variables are unsafe?
Via exceptions thrown by the methods; see above. If there is no exception, send an OK response, whatever form that takes; otherwise send whatever error response is appropriate to the exception.
Though clipped for space, should I be doing all of the XML handling in the doGet method
Not if it's long or repetitive (used in other places too).
Should I be calling all of the different private methods for the various data retrieval tasks and data handling
Sure, why not?
Should each method that accesses the same database open a Connection or should the doGet() method create a Connection and pass it to each method
doGet() should open the connection, pass it to each method, and infallibly close it.
NB You don't need the ServletConfig variable, or the init() or getServletConfig() methods. If you remove all this you can get it from the base class any time you need it via the getServletConfig() method you have pointlessly overridden.
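To make the exception-based approach concrete, here is a minimal sketch of how the points above fit together (the exception type, method signatures, and parameter names are illustrative, not taken from your code): the private methods throw a small application exception instead of setting flags, and doGet() owns both the connection and the error response.
// Hypothetical application exception carrying the HTTP status to report.
class RequestFailedException extends Exception {
    final int status;
    RequestFailedException(int status, String message) { super(message); this.status = status; }
}

@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String planId = request.getParameter("planId");          // illustrative parameter
    try (Connection conn = getDbConnectionEE()) {             // opened once, closed infallibly
        String planDesc = getPlanDescription(conn, planId);   // helpers take the connection and throw on failure
        // ... build and write the XML response ...
        response.setStatus(HttpServletResponse.SC_OK);
    } catch (RequestFailedException e) {
        response.sendError(e.status, e.getMessage());          // failure is signalled by the exception
    } catch (Exception e) {
        response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "Unexpected error");
    }
}
No sc200 flag or shared instance fields are needed, so concurrent requests no longer interfere with each other.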
The variables you have defined are instance members. They are not global and are not class-level. They are variables scoped to one instance of your servlet class.
The servlet container typically creates one instance of your servlet and sends all requests to that one instance. So you will have concurrent requests overwriting these variables’ contents unpredictably.
It can be ok for a servlet to have static variables or instance member variables, but only if their contents are thread safe and they contain no state specific to a request. For instance it would be normal to have a (log4j or java.util.logging) Logger object as a static member, where the logger is specifically designed to be called concurrently without the threads interfering with each other.
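For example, a logger shared by all requests is fine because the logging framework is designed for concurrent use (a sketch using java.util.logging; a log4j or SLF4J logger would be declared the same way):
private static final Logger LOG = Logger.getLogger(SubPlans.class.getName());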
For error handling use exceptions to fail fast once something goes wrong.
Servlets are painful to write and hard to test. Consider using a MVC web framework instead. Frameworks like spring or dropwizard provide built-in capabilities that make things like data access and error handling easier, but most importantly they encourage patterns where you write separate well-focused classes that each do one thing well (and can be reasoned about and tested independently). The servlet approach tends to lead people to cram disparate functions into one increasingly-unmanageable class file, which seems to be the road you’re headed down.

Mongo Connection is created multiple times in RESTful API and never released

I have written a RESTful API using Apache Jersey, with MongoDB as the backend. I use Morphia (v1.3.4) to map and persist POJOs to the database. I tried to follow the "one application, one connection" recommendation in my API, but I am not sure I succeeded. I run my API in Tomcat 8. I also ran mongostat to watch the connection details. At the start, mongostat showed 1 connection to the MongoDB server. I tested my API using Postman and it worked fine. I then created a load test in SoapUI where I simulated 100 users per second, and I saw in mongostat that there were 103 connections. Here is the gif which shows this behaviour.
I am not sure why there are so many connections. The interesting fact is that the number of Mongo connections is directly proportional to the number of users I create in SoapUI. Why is that? I found other similar questions, but I think I have already implemented their suggestions.
Mongo connection leak with morphia
Spring data mongodb not closing mongodb connections
My code looks like this.
DatabaseConnection.java
// Some imports
public class DatabaseConnection {
private static volatile MongoClient instance;
private static String cloudhost="localhost";
private DatabaseConnection() { }
public synchronized static MongoClient getMongoClient() {
if (instance == null ) {
synchronized (DatabaseConnection.class) {
if (instance == null) {
ServerAddress addr = new ServerAddress(cloudhost, 27017);
List<MongoCredential> credentialsList = new ArrayList<MongoCredential>();
MongoCredential credentia = MongoCredential.createCredential(
"test", "test", "test".toCharArray());
credentialsList.add(credentia);
instance = new MongoClient(addr, credentialsList);
}
}
}
return instance;
}
}
PourService.java
@Secured
@Path("pours")
public class PourService {
final static Logger logger = Logger.getLogger(Pour.class);
private static final int POUR_SIZE = 30;
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createPour(String request)
{
WebApiResponse response = new WebApiResponse();
Gson gson = new GsonBuilder().setDateFormat("dd/MM/yyyy HH:mm:ss").create();
String message = "Pour was not created.";
HashMap<String, Object> data = null;
try
{
Pour pour = gson.fromJson(request, Pour.class);
// Storing the pour to the database
PourRepository pourRepository = new PourRepository();
String id = pourRepository.createPour(pour);
data = new HashMap<String, Object>();
if ("" != id && null != id)
{
data.put("id", id);
message = "Pour was created successfully.";
logger.debug(message);
return response.build(true, message, data, 200);
}
logger.debug(message);
return response.build(false, message, data, 500);
}
catch (Exception e)
{
message = "Error while creating Pour.";
logger.error(message, e);
return response.build(false, message, new Object(),500);
}
}
PourDao.java
public class PourDao extends BasicDAO<Pour, String>{
public PourDao(Class<Pour> entityClass, Datastore ds) {
super(entityClass, ds);
}
}
PourRepository.java
public class PourRepository {
private PourDao pourDao;
final static Logger logger = Logger.getLogger(PourRepository.class);
public PourRepository ()
{
try
{
MongoClient mongoClient = DatabaseConnection.getMongoClient();
Datastore ds = new Morphia().map(Pour.class)
.createDatastore(mongoClient, "tilt45");
pourDao = new PourDao(Pour.class,ds);
}
catch (Exception e)
{
logger.error("Error while creating PourDao", e);
}
}
public String createPour (Pour pour)
{
try
{
return pourDao.save(pour).getId().toString();
}
catch (Exception e)
{
logger.error("Error while creating Pour.", e);
return null;
}
}
}
When I work with Mongo + Morphia I get better results using a factory pattern for the Datastore rather than for the MongoClient. For instance, check the following class:
public class DatastoreFactory {
    private final Datastore datastore;
    public DatastoreFactory(String dbHost, int dbPort, String dbName) {
        final Morphia morphia = new Morphia();
        MongoClientOptions.Builder options = MongoClientOptions.builder().socketKeepAlive(true);
        morphia.getMapper().getOptions().setStoreEmpties(true);
        final Datastore store = morphia.createDatastore(
                new MongoClient(new ServerAddress(dbHost, dbPort), options.build()), dbName);
        store.ensureIndexes();
        this.datastore = store;
    }
    public Datastore getDatastore() {
        return datastore;
    }
}
With that approach, every time you need a Datastore you can use the one provided by the factory. Of course, this can be implemented better if you use a framework/library that supports the factory pattern (e.g. HK2 with org.glassfish.hk2.api.Factory) together with singleton binding; a sketch is shown below.
Besides that, you can check the documentation of the MongoClientOptions builder; perhaps you can find better connection-control options there.
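As a rough illustration of the HK2 suggestion (a minimal sketch assuming Jersey 2's HK2 integration; the class names and the host/port/database values are examples, not from the original answer):
import javax.inject.Singleton;
import org.glassfish.hk2.api.Factory;
import org.glassfish.hk2.utilities.binding.AbstractBinder;
import org.mongodb.morphia.Datastore;

// Factory that HK2 uses to provide the shared Datastore.
public class DatastoreHk2Factory implements Factory<Datastore> {
    @Override
    public Datastore provide() {
        // Reuse the DatastoreFactory from above; host/port/database are illustrative.
        return new DatastoreFactory("localhost", 27017, "tilt45").getDatastore();
    }

    @Override
    public void dispose(Datastore instance) {
        // Nothing to dispose per request; the MongoClient owns the connection pool.
    }
}

// Registered once in the application's HK2 binder (e.g. from a Jersey ResourceConfig):
class AppBinder extends AbstractBinder {
    @Override
    protected void configure() {
        bindFactory(DatastoreHk2Factory.class).to(Datastore.class).in(Singleton.class);
    }
}
With the Singleton binding, every resource class that injects Datastore gets the same instance, so only one MongoClient (and one connection pool) is created regardless of how many requests arrive.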

Adding logs to wso2 to track logs implemented in custom java code

Below is a code snippet for a custom API Manager mediator that I'm supposed to modify for our use. I'm having trouble getting the logs out of the code when running it in our WSO2 environment. What is the process for seeing the output of these logs? This is packaged as a JAR file that I add to the repository/components/lib/ directory of APIM; the JAR file name is com.domain.wso2.apim.extensions. I need to be able to see what's being passed and which parts of the code are being hit for testing.
public class IdentifiersLookup extends AbstractMediator implements ManagedLifecycle {
private static Log log = LogFactory.getLog(IdentifiersLookup.class);
private String propertyPrefix = "";
private String netIdPropertyToUse = "";
private DataSource ds = null;
private String DsName = null;
public void init(SynapseEnvironment synapseEnvironment) {
if (log.isInfoEnabled()) {
log.info("Initializing IdentifiersLookup Mediator");
}
if (log.isDebugEnabled())
log.debug("IdentifiersLookup: looking up datasource" + DsName);
try {
this.ds = (DataSource) new InitialContext().lookup(DsName);
} catch (NamingException e) {
e.printStackTrace();
}
if (log.isDebugEnabled())
log.debug("IdentifiersLookup: acquired datasource");
}
Add the line below to the log4j.properties file that resides in the wso2am-2.0.0/repository/conf/ folder and restart the server.
log4j.logger.com.domain.wso2.apim.extensions=INFO
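If you also want the log.debug(...) calls in the mediator to show up, raise the level for that package to DEBUG instead (same file, same restart):
log4j.logger.com.domain.wso2.apim.extensions=DEBUG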
