Background:
We have 4 physical servers (4 IPs), each running JBoss EAP 6 on port 80. All requests are routed to one of these servers via a load balancer.
Now I am trying to implement a Java cache system for this distributed environment, so that our properties get updated in each server's cache.
POC:
For that, we did a small POC on our local systems implementing JCS v1.3 lateral caching.
We enabled it in our Maven project. The following config is used in the .ccf file:
jcs.default=
jcs.default.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
jcs.default.cacheattributes.MaxObjects=1000
jcs.default.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
# PRE-DEFINED CACHE REGION
##############################################################
##### AUXILIARY CACHES
# LTCP AUX CACHE
jcs.auxiliary.LTCP=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
#jcs.auxiliary.LTCP.attributes.TcpServers=152.144.219.209:8080
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1118
jcs.auxiliary.LTCP.attributes.UdpDiscoveryAddr=228.5.6.8
jcs.auxiliary.LTCP.attributes.UdpDiscoveryPort=6780
jcs.auxiliary.LTCP.attributes.UdpDiscoveryEnabled=true
jcs.auxiliary.LTCP.attributes.Receive=true
jcs.auxiliary.LTCP.attributes.AllowGet=true
jcs.auxiliary.LTCP.attributes.IssueRemoveOnPut=false
jcs.auxiliary.LTCP.attributes.FilterRemoveByHashCode=false
jcs.auxiliary.LTCP.attributes.SocketTimeOut=1001
jcs.auxiliary.LTCP.attributes.OpenTimeOut=2002
jcs.auxiliary.LTCP.attributes.ZombieQueueMaxSize=2000
And we implemented the getter and setter methods for saving a string attribute in the cache and getting it back from the cache:
public void addProp(String propId) throws PimsAppException {
    try {
        configMSCache.put(propId, propId);
    } catch (CacheException e) {
        e.printStackTrace();
    }
}
@Override
public String testProp(String propId) throws PimsAppException {
    if (configMSCache != null) {
        return (String) configMSCache.get(propId);
    } else {
        return "It didn't work";
    }
}
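For reference, the configMSCache field used above would typically be obtained from JCS by region name. Below is a minimal sketch, assuming the JCS 1.3 API and a cache.ccf on the classpath; the region name is illustrative:

import org.apache.jcs.JCS;
import org.apache.jcs.access.exception.CacheException;

public class CacheBootstrap {
    // Minimal sketch (JCS 1.3): look up the cache region configured in cache.ccf.
    // "default" is illustrative; use the region name your .ccf actually defines.
    public static JCS initCache() throws CacheException {
        return JCS.getInstance("default");
    }
}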
The application deploys fine, with no errors on startup.
TEST METHOD:
We deployed the project .war on my local server and on a remote server with a different IP. Both machines are on the same network, so there is no firewall issue in reaching each other's IPs.
I saved a property on my server and retrieved it. (Worked fine.)
Then I tried to fetch the property saved on my local machine from the remote machine. (It returns a blank response.)
This means the distributed cache feature is NOT working.
Doubts:
1. Is the auxiliary cache set up properly? I mean the configuration.
2. Am I testing it properly, or how should I test it in a dev environment?
3. Since JCS UDP discovery lets us use the same config on multiple machines, why didn't it work on the remote machine?
4. Or is there another caching mechanism, with good examples and documentation, that can meet my application's needs (as described in the background section)?
Thanks in advance.
This reply might be too late, but I suggest logging the cache stats on both servers and comparing them. It is possible that the cache is propagating correctly but that there is an issue reading it at processing time.
For example:
import java.util.LinkedList;
import java.util.ListIterator;
import org.apache.jcs.admin.CacheRegionInfo;
import org.apache.jcs.admin.JCSAdminBean;
import org.apache.jcs.engine.control.CompositeCache;

JCSAdminBean admin = new JCSAdminBean();
LinkedList linkedList = admin.buildCacheInfo();
ListIterator iterator = linkedList.listIterator();
while (iterator.hasNext()) {
    CacheRegionInfo info = (CacheRegionInfo) iterator.next();
    CompositeCache compCache = info.getCache();
    System.out.println("Cache Name: " + compCache.getCacheName());
    System.out.println("Cache Type: " + compCache.getCacheType());
    System.out.println("Cache Misses (not found): " + compCache.getMissCountNotFound());
    System.out.println("Cache Misses (expired): " + compCache.getMissCountExpired());
    System.out.println("Cache Hits (memory): " + compCache.getHitCountRam());
    System.out.println("Cache value: " + compCache.get(propId)); // propId as in the question's getter
}
Related
I have been following the Ehcache documentation regarding replication at https://www.ehcache.org/documentation/2.8/replication/rmi-replicated-caching.html
Ehcache is already implemented in our project, but going through the code base, I see in multiple places that remoteObjectPort is added to the properties while configuring the listener.
Here is our code that adds peer information to the peerProviderConfig:
FactoryConfiguration peerProviderConfig = new FactoryConfiguration();
peerProviderConfig.setClass(RMICacheManagerPeerProviderFactory.class.getName());
String rmiUrls = "";
String separator = "";
for (CacheHost remoteHost : remoteHosts) {
    // create an RMI URL for each peer
    rmiUrls += separator + createRmiUrl(remoteHost, cacheName);
    separator = "|";
}
// and set it in configuration
peerProviderConfig.setProperties("peerDiscovery=manual, rmiUrls=" + rmiUrls);
I understand this piece of code: it builds and appends the RMI URL for each peer.
The createRmiUrl() method returns a URL of the form //<hostname>:<hostport>/cacheName; a sketch of such a helper follows.
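For illustration, a minimal sketch of what such a helper might look like; the CacheHost accessors getHostName() and getPort() are assumed names, not necessarily the actual API:

// Hypothetical sketch of createRmiUrl(); CacheHost accessor names are assumptions.
private String createRmiUrl(CacheHost remoteHost, String cacheName) {
    return "//" + remoteHost.getHostName() + ":" + remoteHost.getPort() + "/" + cacheName;
}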
For all hosts, the hostport is the same: 40001. This means that on all the different hosts, Ehcache will be listening on port 40001. According to my understanding, the following things happen:
Whenever a peer comes alive, it will start its Ehcache RMI listener on port 40001.
It will also configure all the peers in the cluster, since we are using manual discovery.
It will listen for cache events and then update/replicate its own data in the cache.
Is my understanding right?
Our peer listener properties are as follows:
String peerListenerProps = "port=" + rmiPort;
peerListenerProps += ", remoteObjectPort=40002";
peerListenerProps += ", socketTimeoutMillis=" + socketTimeoutMillis;
FactoryConfiguration peerListenerConfig = new FactoryConfiguration();
peerListenerConfig.setClass(RMICacheManagerPeerListenerFactory.class.getName());
peerListenerConfig.setProperties(peerListenerProps);
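For context, these two factory configurations are then presumably registered on the CacheManager configuration. A minimal sketch, assuming Ehcache 2.x programmatic configuration:

import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.Configuration;

// Sketch: register both factories and build the replicating CacheManager.
Configuration configuration = new Configuration();
configuration.addCacheManagerPeerProviderFactory(peerProviderConfig);
configuration.addCacheManagerPeerListenerFactory(peerListenerConfig);
CacheManager cacheManager = new CacheManager(configuration);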
So in the listener properties we add port=rmiPort (which is 40001, and the same port is used by all the peers/hosts). What I don't understand is: what is the purpose of remoteObjectPort in the listener properties?
I have a problem with the Vert.x HttpClient.
Here's code that tests a GET using Vert.x and plain Java.
Vertx vertx = Vertx.vertx();
HttpClientOptions options = new HttpClientOptions()
.setTrustAll(true)
.setSsl(false)
.setDefaultPort(80)
.setProtocolVersion(HttpVersion.HTTP_1_1)
.setLogActivity(true);
HttpClient client = vertx.createHttpClient(options);
client.getNow("google.com", "/", response -> {
System.out.println("Received response with status code " + response.statusCode());
});
System.out.println(getHTML("http://google.com"));
Where getHTML() is from here: How do I do a HTTP GET in Java?
This is my output:
<!doctype html><html... etc <- correct output from plain java
Feb 08, 2017 11:31:21 AM io.vertx.core.http.impl.HttpClientRequestImpl
SEVERE: java.net.UnknownHostException: failed to resolve 'google.com'. Exceeded max queries per resolve 3
But Vert.x can't connect. What's wrong here? I'm not using any proxy.
For reference: a solution, as described in this question and in tsegismont's comment here, is to set the flag vertx.disableDnsResolver to true:
-Dvertx.disableDnsResolver=true
in order to fall back to the JVM DNS resolver as explained here:
sometimes it can be desirable to use the JVM built-in resolver, the JVM system property -Dvertx.disableDnsResolver=true activates this behavior
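Equivalently, the property can be set programmatically, as long as this happens before the first Vertx instance is created. A minimal sketch with the same effect as the -D flag:

// Must run before Vertx.vertx() so the resolver setting is picked up.
System.setProperty("vertx.disableDnsResolver", "true");
Vertx vertx = Vertx.vertx();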
I observed this DNS resolution issue with a Redis client in a Kubernetes environment.
I had this issue. What caused it for me was stale DNS servers being picked up by the Java runtime, i.e. servers registered for a network the machine was no longer connected to. The issue originates in the Sun JNDI implementation; it also exists in Netty, which uses JNDI to bootstrap its list of name servers on most platforms, and then finally shows up in Vert.x.
I think a good place to fix this would be in the Netty layer, where the set of default DNS servers is bootstrapped. I have raised a ticket with the Netty project, so we'll see if they agree with me! Here is the Netty ticket.
In the meantime, a fairly basic workaround is to filter the default DNS servers detected by Netty based on whether they are reachable or not. Here is a code sample in Kotlin to apply before constructing the main Vert.x instance.
// The default set of name servers provided by JNDI can contain stale entries.
// This default set is picked up by Netty and in turn by Vert.x.
// To work around this, we filter for only reachable name servers on startup.
val nameServers = DefaultDnsServerAddressStreamProvider.defaultAddressList()
val reachableNameServers = nameServers.stream()
        .filter { ns -> ns.address.isReachable(NS_REACHABLE_TIMEOUT) }
        .map { ns -> ns.address.hostAddress }
        .collect(Collectors.toList())
if (reachableNameServers.isEmpty())
    throw StartupException("There are no reachable name servers available")

val opts = VertxOptions()
opts.addressResolverOptions.servers = reachableNameServers

// The primary Vert.x instance
val vertx = Vertx.vertx(opts)
A little more detail in case it is helpful. I have a company machine which at some point was connected to the company network by a physical cable. Details of the company's internal name servers were set up by DHCP on the physical interface. Using the wireless interface at home, DNS for the wireless interface gets set to my home DNS, while the config for the physical interface is not updated. This is fine since that device is not active, and ipconfig /all does not show the internal company DNS servers. However, looking in the registry, they are still there:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
They get picked up by the JNDI mechanism, which feeds Netty and in turn Vert.x. Since they are not reachable from my home location, DNS resolution fails. I can imagine this home/office situation is not unique to me! I don't know whether something similar could occur with multiple virtual interfaces on containers or VMs; it could be worth looking into if you are having problems.
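As a quick diagnostic, you can print the name servers Netty bootstraps from JNDI and check them yourself. A minimal Java sketch, assuming Netty's resolver-dns artifact is on the classpath:

import java.net.InetSocketAddress;
import java.util.List;
import io.netty.resolver.dns.DefaultDnsServerAddressStreamProvider;

public class PrintDnsServers {
    public static void main(String[] args) {
        // Netty bootstraps its default name servers from JNDI on most platforms.
        List<InetSocketAddress> servers = DefaultDnsServerAddressStreamProvider.defaultAddressList();
        servers.forEach(s -> System.out.println("name server: " + s));
    }
}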
Here is the sample code that works for me.
public class TemplVerticle extends HttpVerticle {
public static void main(String[] args) {
Vertx vertx = Vertx.vertx();
// Create the web client and enable SSL/TLS with a trust store
WebClient client = WebClient.create(vertx,
new WebClientOptions()
.setSsl(true)
.setTrustAll(true)
.setDefaultPort(443)
.setKeepAlive(true)
.setDefaultHost("www.w3schools.com")
);
client.get("www.w3schools.com")
.as(BodyCodec.string())
.send(ar -> {
if (ar.succeeded()) {
HttpResponse<String> response = ar.result();
System.out.println("Got HTTP response body");
System.out.println(response.body().toString());
} else {
ar.cause().printStackTrace();
}
});
}
}
Try using WebClient instead of HttpClient; here you have an example (with Rx):
private val client: WebClient = WebClient.create(vertx, WebClientOptions()
.setSsl(true)
.setTrustAll(true)
.setDefaultPort(443)
.setKeepAlive(true)
)
open fun <T> get(uri: String, marshaller: Class<T>): Single<T> {
return client.getAbs(host + uri).rxSend()
.map { extractJson(it, uri, marshaller) }
}
Another option is to use getAbs.
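For example, with getAbs the absolute URI carries the scheme, host, and port, so no default host/port options are needed. A minimal Java sketch:

import io.vertx.core.Vertx;
import io.vertx.ext.web.client.WebClient;
import io.vertx.ext.web.codec.BodyCodec;

Vertx vertx = Vertx.vertx();
WebClient client = WebClient.create(vertx);
// getAbs takes the full absolute URI, e.g. https://host/path.
client.getAbs("https://www.w3schools.com/")
        .as(BodyCodec.string())
        .send(ar -> {
            if (ar.succeeded()) {
                System.out.println(ar.result().body());
            } else {
                ar.cause().printStackTrace();
            }
        });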
When I run the following Java code on a Lotus Domino server, I get different results depending on where the code runs.
private void doViewStuff(Session session, PrintStream out) throws NotesException {
    Database db = session.getDatabase(null, "myDatabase.nsf");
    View view = db.getView("myViewName");

    // First pass: walk the documents in the view.
    Document doc = view.getFirstDocument();
    while (doc != null) {
        out.println("doc: " + doc.getUniversalID());
        doc = view.getNextDocument(doc);
    }

    // Second pass: walk the view entries.
    ViewEntryCollection entries = view.getAllEntries();
    ViewEntry entry = entries.getFirstEntry();
    while (entry != null) {
        System.out.println("entry: " + entry.getColumnValues());
        entry = entries.getNextEntry(entry);
    }
}
When I run the code on the server as a Java agent, there are 37235 documents in the view.
When I run the code in a standalone client, there are only 37217 documents in the view, and the code is much, much slower.
Details and execution environment:
The server version is 8.5.3; the NCSO.jar I used for the client has SHA-1 d879f8992aae49a06769a564217633a9e0fbd1b6.
The database myDatabase.nsf contains about 150000 documents, each with a file attachment.
The missing documents do not appear in a block; they appear between index 10000 and 20000.
In both cases the code runs as the same user account.
What might be the reason that 18 of the documents cannot be found?
Update and Clarification
Upon further inspection, it turned out that I had indeed run the code with different user accounts, and that the inaccessible documents had some Reader Names fields.
On the server I had this configuration, although I configured the agent to "Run on behalf of" CN=User Name/O=domain. It didn't matter whether I ran the agent from the Domino Console or via HTTP:
effectiveUserName=CN=User Name/O=domain
commonUserName=domino01
userName=CN=domino01/O=domain
On the client I had this configuration:
effectiveUserName=[NotesException: Not implemented]
commonUserName=User Name
userName=User Name/O=domain
And that was even though I used this code in the client:
Session session = NotesFactory.createSession("127.0.0.1", "User Name", "password");
You say that in both cases the code runs as the same user account, so I want to trust that this is true. I presume, therefore, that you have ruled out Reader Names fields as the cause of the discrepancy.
In that case, have you checked the isValid() property of the ViewEntry objects when you process them in the agent running on the server? Perhaps the NCSO.jar implementation that you are using for the client-side code is filtering out the objects for which isValid() would return false.
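For instance, the server-side agent could count invalid entries to verify this theory. A minimal sketch reusing the view from the question:

// Count entries where isValid() is false; such entries may be
// deletion stubs or documents hidden by Reader Names fields.
ViewEntryCollection entries = view.getAllEntries();
ViewEntry entry = entries.getFirstEntry();
int invalidCount = 0;
while (entry != null) {
    if (!entry.isValid()) {
        invalidCount++;
    }
    entry = entries.getNextEntry(entry);
}
System.out.println("Invalid entries: " + invalidCount);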
I have a section of code within a web app, running in Tomcat 5.0, that calls the javax.print.PrintServiceLookup method lookupPrintServices(null, null). Previously, this code returned an array of substantial size, listing all the printers on the server, as expected. Rather suddenly one day, it started behaving differently, returning a zero-length array with no printers instead. Checking rather thoroughly, I was not able to determine what might have changed to cause this method to behave differently than it did before.
I made a small, stand-alone test program that contained this same method call.
PrintService[] printers = PrintServiceLookup.lookupPrintServices(null, null);
System.out.println("Java Version: " + System.getProperty("java.version"));
System.out.println("Printers found:");
if (printers != null) {
for (PrintService printer : printers) {
if (printer != null) {
System.out.println(" " + printer.toString());
}
}
}
System.out.println("End");
Running this program, it reacted differently, returning the full list of printers. Double-checking, I put the same code (using logging statements instead of System.out.println statements) in the context initialization method of the web app, and it still returns zero printers. The method returns different results depending on whether it is run from the web app .war or the stand-alone .jar.
Some of my colleagues suggested that it might have to do with the Security Manager, and indeed, the documentation for the PrintServiceLookup class says that certain properties of a Security Manager can alter the results of the method call. However, after adding some code to my test to retrieve and view the Security Manager, it appears that there is none in either case.
SecurityManager sec = System.getSecurityManager();
try {
    if (sec != null) {
        System.out.println(sec.toString());
        sec.checkPrintJobAccess();
    }
    System.out.println("*-*-*-*-*Printer Access allowed!!");
}
catch (SecurityException e) {
    System.out.println("*-*-*-*-*Printer Access NOT allowed!!");
}
The result is that the Security Manager is null in both cases.
Trying it on a different server, both the web app and the stand-alone jar versions return no printers. There is no consistency that I can find.
What is going on here? What is causing this javax method call to return different results in different situations? What could have changed about the web app to alter its behavior between one day and the next?
Try starting the server with the option -DUseSunHttpHandler=true to initiate the HTTP URL connection with the JDK API instead of the server API.
Hope this works for you too.
I'm working with the DFS Java API and was wondering whether anyone knows a simple way to configure a client-side timeout for service calls, one that can be configured on the service context, for example.
I have experienced some rare occasions where a Documentum repository was not responding, that's why I am considering a general timeout for all DFS calls.
For testing a hanging service call, I created a dummy TBO implementation that simply blocks the thread for 10 minutes when updating the document:
@Override
public void saveEx(boolean keepLock, String versionLabels) throws DfException {
    if (!isNew()) {
        try {
            Thread.sleep(1000 * 60 * 10); // block for 10 minutes
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    super.saveEx(keepLock, versionLabels);
}
I'm not sure whether this behaves exactly like a hanging service call, but at least in my tests it worked as expected: my invocations of the Object Service's update method took about 10 minutes.
Is there any configuration I have not yet found, or maybe a runtime property to pass to the service context, to configure the timeout?
I would prefer using existing features of DFS for this instead of implementing my own mechanism.
Have you tried editing the value in dfs-runtime.properties? I don't think the timeout can be context-specific, but you should be able to change it for the client as a whole.
Reposted from https://community.emc.com/message/3249#3249
"Please see the Server runtime startup settings section of the Deployment guide.
The following list describes the precedence that dfs-runtime.properties files take depending on their location:
1. local-dfs-runtime.properties file in the local classpath
2. runtime properties file specified with -Ddfs.runtime.properties.file
3. dfs-runtime.properties packaged with emc-dfs-rt.jar
For example, settings in the local-dfs-runtime.properties file on the local classpath take precedence over identical settings in the dfs-runtime.properties file that is located in emc-dfs-rt.jar or the one specified with the -D parameter. The DFS application must be restarted after any changes to the configuration. As a best practice, use the provided configuration file that is deployed in the emc-dfs-rt.jar file for your base settings and use an external file to override settings that you specifically wish to change."
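To illustrate the precedence described above, an override could be placed in a local-dfs-runtime.properties file on the local classpath. Note that the property key below is purely a placeholder; copy the real key for the timeout setting from the dfs-runtime.properties packaged in emc-dfs-rt.jar:

# local-dfs-runtime.properties (local classpath, highest precedence)
# NOTE: the key below is illustrative only; use the actual setting name
# from the dfs-runtime.properties shipped in emc-dfs-rt.jar.
some.dfs.timeout.property=30000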