I am working on a web application that will run on Tomcat and use Apache Ignite as a cache.
The application must run in a clustered environment that already has a Zookeeper ensemble used for other purposes.
My questions are about how best to configure and fine-tune the Ignite nodes:
Q1. Should I: a) run each Ignite node inside Tomcat, in the same webapp that consumes the cache,
or
b) run Ignite in a separate process and start Ignite in the webapp in client mode only?
Q2. How do I limit the amount of memory allocated to Ignite?
If I run Ignite in a separate process I can simply limit that JVM at startup, but can I achieve a similar restriction on resource consumption and garbage-collection thrashing when running inside Tomcat?
My current configuration is in the code excerpt below, with a CacheConfiguration set to CacheMode.PARTITIONED.
private ZookeeperDiscoverySpi getZookeeperDiscoverySpi() {
    ZookeeperDiscoverySpi zkDiscoverySpi = new ZookeeperDiscoverySpi();
    zkDiscoverySpi.setZkConnectionString("127.0.0.1:2181");
    zkDiscoverySpi.setZkRootPath("/apacheIgnite");
    return zkDiscoverySpi;
}

private IgniteConfiguration getDefaultConfiguration(CacheConfiguration cacheCfg) {
    IgniteConfiguration igniteConfig = new IgniteConfiguration();
    igniteConfig.setIgniteInstanceName("IgniteInstanceName");
    igniteConfig.setCacheConfiguration(cacheCfg);
    igniteConfig.setClientMode(clientMode); // set to true for the Tomcat webapp, false for the Ignite node process
    igniteConfig.setPeerClassLoadingEnabled(false);
    igniteConfig.setMetricsLogFrequency(0); // disable periodic metrics logging
    igniteConfig.setDiscoverySpi(getZookeeperDiscoverySpi());
    return igniteConfig;
}
Q1: Both approaches work. You can start by running an Ignite server node in the same JVM as Tomcat and see whether that fits your case.
Q2: Starting from Ignite 2.0, data is stored mostly off-heap rather than on the Java heap. You can cap the memory allowance by setting the size of the (default) data region in the data storage configuration, then enable page eviction to make sure you do not run out of that memory.
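As a rough sketch (Ignite 2.x APIs; the 512 MB cap is a placeholder value), the allowance could be wired into the igniteConfig from the question like this:

// Sketch, assuming Ignite 2.x: cap the off-heap size of the default data region
// and enable page eviction so the region does not overflow.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
defaultRegion.setMaxSize(512L * 1024 * 1024); // placeholder: 512 MB hard cap, in bytes
defaultRegion.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU); // evict cold pages when the cap is reached
storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
igniteConfig.setDataStorageConfiguration(storageCfg);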
Related
I have an Apache Fusion server and have configured Jetty for it.
Using New Relic I can see that the thread count is increasing linearly. After a while the threads reach a limit and cause an out-of-memory exception until I restart my proxy server.
Below are the start.ini settings I used to regulate the number of threads.
--module=server
jetty.threadPool.minThreads=10
jetty.threadPool.maxThreads=150
jetty.threadPool.idleTimeout=5000
jetty.server.dumpAfterStart=false
jetty.server.dumpBeforeStop=false
jetty.httpConfig.requestHeaderSize=32768
etc/jetty-stop-timeout.xml
--module=continuation
--module=deploy
--module=jsp
--module=ext
--module=resources
--module=client
--module=annotations
--module=servlets
etc/jetty-logging.xml
--module=jmx
--module=stats
I also tried adding a thread-related property but it didn't work. Can anyone help me limit these threads? With the same configuration on other servers, New Relic shows that the thread count is not increasing and stays well within range.
I am updating to the latest Hazelcast version (3.12) and I am having a problem obtaining an instance of AtomicLong. The new version introduces the CP Subsystem, which guarantees Consistency and Partition tolerance in terms of the CAP theorem, but the problem is that the CP Subsystem must have at least 3 members.
Config config = new Config();
config.getCPSubsystemConfig().setCPMemberCount(3);
config.getCPSubsystemConfig().setGroupSize(3);
HazelcastInstance instance1 = Hazelcast.newHazelcastInstance(config);
How can I configure the CP Subsystem to provide me an instance of AtomicLong with just two Hazelcast nodes?
If I start my application with just one node, the following message is printed:
MetadataRaftGroupManager.log:65 [127.0.0.1]:6000 [dev] [3.12] CP Subsystem is waiting for 3 members to join the cluster. Current member count: 1
I will have just two nodes, so the CP Subsystem doesn't let me use an AtomicLong because it waits forever for at least 3 nodes.
In version 3.11 I just called hazelcast.getAtomicLong("count").
How can I handle this?
In 3.12, the CP Subsystem cannot be configured to operate with fewer than 3 nodes, because under some failure scenarios (network partitioning) this would sacrifice the "consistency" that is the entire purpose of the CP Subsystem. [EDIT: See new comment below regarding 4.0 and later behavior]
You can still use the 3.11 APIs, so the code you have from your 3.11 implementation will continue to work. Although the 3.11 APIs are marked deprecated, they are not removed or disabled; the deprecation is a warning that the APIs are known to be vulnerable to consistency issues in a split-brain scenario. Some application code is tolerant of such issues and the vulnerability isn't a concern; if your application is not tolerant of a potential consistency issue with the atomic long, then adding an additional node in order to migrate to the CP implementation will be required.
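For illustration, a minimal sketch of the two call styles in 3.12 (hz is an assumed HazelcastInstance; "count" is the name from the question):

// Deprecated non-CP API carried over from 3.11: still present in 3.12 and works
// with only 2 nodes, but is not safe against split-brain consistency issues.
IAtomicLong counter = hz.getAtomicLong("count");
counter.incrementAndGet();

// CP-backed equivalent: usable only once at least 3 CP members have joined.
IAtomicLong cpCounter = hz.getCPSubsystem().getAtomicLong("count");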
I am looking for a solution using Java and Redis (currently using the Jedis library) for having a cold-standby Redis server. I am looking for an intermediate solution between a single server and a cluster of servers. Specifically, I want to have two servers set up, each standalone, and have my application use the first Redis server only if it is available, and fail over to the second server only if the first server is not available - a standard cold-standby scenario - no replication.
The current connection factory is set up as
public JedisConnectionFactory redisConnectionFactory() {
    JedisConnectionFactory redisConnectionFactory = new JedisConnectionFactory();
    redisConnectionFactory.setHostName(redisUrl);
    redisConnectionFactory.setPort(redisPort);
    redisConnectionFactory.setDatabase(redisDbIndex);
    return redisConnectionFactory;
}
where redisUrl resolves to something like 'my-redis-server.some-domain.com'. I would like to be able to specify the Redis host name as something like 'my-redis-server-1.some-domain.com,my-redis-server-2.some-domain.com' and have the second server used as the cold standby.
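One possible sketch (untested; it only checks availability when the factory is created, and the host names are placeholders mirroring the example above) is to ping the primary with plain Jedis and fall back to the standby:

public JedisConnectionFactory redisConnectionFactory() {
    String primary = "my-redis-server-1.some-domain.com"; // placeholder host names
    String standby = "my-redis-server-2.some-domain.com";
    String host = primary;
    try (Jedis probe = new Jedis(primary, redisPort)) {
        probe.ping(); // throws if the primary is unreachable
    } catch (Exception e) {
        host = standby; // cold standby takes over
    }
    JedisConnectionFactory factory = new JedisConnectionFactory();
    factory.setHostName(host);
    factory.setPort(redisPort);
    factory.setDatabase(redisDbIndex);
    return factory;
}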
I have a Tomcat installation where I suspect the thread pool may be decreasing over time due to threads not being properly released. I get an error in catalina.out when maxThreads is reached, but I would like to log the number of threads in use to a file every five minutes so I can verify this hypothesis. Would anyone please be able to advise how this can be done?
Also in this installation there is no Tomcat manager, it appears whoever did the original installation deleted the manager webapp for some reason. I'm not sure if manager would be able to do the above or if I can reinstall it without damaging the existing installation? All I really want to do is keep track of the thread pool.
Also, I noticed that maxthreads for Tomcat is 200, but the max number of concurrent connections for Apache is lower (Apache is using mod_proxy and mod_proxy_ajp (AJP 1.3) to feed Tomcat). That seems wrong too, what is the correct relationship between these numbers?
Any help much appreciated :D
Update: Just a quick update to say the direct JMX access worked. However I also had to set -Dcom.sun.management.jmxremote.host. I set it to localhost and it worked; without it, no dice. If anyone else has a similar problem trying to enable JMX, I recommend you set this value as well, even if you are connecting from the local machine. It seems to be required with some versions of Tomcat.
Direct JMX access
Try adding this to catalina.sh/bat:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=5005
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
UPDATE: Alex P suggests that the following setting might also be required in some situations:
-Dcom.sun.management.jmxremote.host=localhost
This enables remote anonymous JMX connections on port 5005. You may also consider JVisualVM, which is much more pleasant and lets you browse JMX via a plugin.
What you are looking for is Catalina -> ThreadPool -> http-bio-8080 -> various interesting metrics.
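If you want the five-minute logging from the original question without the manager app, a small standalone sketch (assuming the JMX flags above; the connector name "http-bio-8080" is an assumption and must match your connector) could read the same attributes and be run from cron:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ThreadPoolProbe {
    public static void main(String[] args) throws Exception {
        // Connect to the JMX port opened by the flags above (5005, no SSL/auth).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:5005/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName pool = new ObjectName(
                    "Catalina:type=ThreadPool,name=\"http-bio-8080\"");
            // One line per run; redirect to a file from cron to build the history.
            System.out.println(System.currentTimeMillis()
                    + " currentThreadsBusy=" + mbsc.getAttribute(pool, "currentThreadsBusy")
                    + " currentThreadCount=" + mbsc.getAttribute(pool, "currentThreadCount")
                    + " maxThreads=" + mbsc.getAttribute(pool, "maxThreads"));
        } finally {
            connector.close();
        }
    }
}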
JMX proxy servlet
An easier method might be to use Tomcat's JMX proxy servlet under http://localhost:8080/manager/jmxproxy. For instance, try this query:
$ curl --user tomcat:tomcat http://localhost:8080/manager/jmxproxy?qry=Catalina:name=%22http-bio-8080%22,type=ThreadPool
A little bit of grepping and scripting and you can easily and remotely monitor your application. Note that tomcat:tomcat is the username/password of user having manager-jmx role in conf/tomcat-users.xml.
You can deploy jolokia.war and then retrieve mbeans values in JSON (without the manager):
http://localhost:8080/jolokia/read/Catalina:name=*,type=ThreadPool?ignoreErrors=true
If you want only some values (currentThreadsBusy, maxThreads, currentThreadCount, connectionCount):
http://localhost:8080/jolokia/read/Catalina:name=*,type=ThreadPool/currentThreadsBusy,maxThreads,currentThreadCount,connectionCount?ignoreErrors=true
{
  "request": {
    "mbean": "Catalina:name=\"http-nio-8080\",type=ThreadPool",
    "attribute": [
      "currentThreadsBusy",
      "maxThreads",
      "currentThreadCount",
      "connectionCount"
    ],
    "type": "read"
  },
  "value": {
    "currentThreadsBusy": 1,
    "connectionCount": 4,
    "currentThreadCount": 10,
    "maxThreads": 200
  },
  "timestamp": 1490396960,
  "status": 200
}
Note: This example works on Tomcat 7+.
For a more enterprise solution, I have been using New Relic in our production environment.
This provides a graph of the changes to the thread pool over time.
There are cheaper tools out there meanwhile: I am using this jar: https://docs.cyclopsgroup.org/jmxterm
You can automate it via shell/batch scripts. I regexed the output and let Prometheus poll it to display in Grafana.
We have 2 applications that run under JBoss. I am looking for a way to reduce the overhead of the server. The main app runs under Tomcat. The other app is made up of MBeans. Is there a way to run MBeans under Tomcat?
Alternative suggestions are appreciated.
MBeans are part of the JMX specification, which is included in the JRE. It should be possible to run MBeans under Tomcat; Tomcat 5 or later provides an MBean server.
You can use the following JVM arguments to start Tomcat with remote JMX enabled:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=4444 (could be anything)
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
You should also use the MBean server that is already in Tomcat; you can find it via:
// Find the existing MBean server (Tomcat's) in lieu of creating our own.
// Requires javax.management.MBeanServer, javax.management.MBeanServerFactory, java.util.ArrayList.
MBeanServer mbserver = null;
ArrayList<MBeanServer> mbservers = MBeanServerFactory.findMBeanServer(null);
int nservers = mbservers.size();
if (nservers > 0) {
    // TODO: A better way to get the currently active server?
    // For some reason, every time the webapp is reloaded there is one
    // more instance of the MBeanServer.
    mbserver = mbservers.get(nservers - 1);
}
if (mbserver == null) {
    mbserver = MBeanServerFactory.createMBeanServer();
}
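Once the server is located, registering your own MBean with it is straightforward. A hypothetical sketch (MyService and MyServiceMBean are placeholder names following the standard MBean naming convention, where the interface name is the class name plus "MBean"):

public interface MyServiceMBean {
    int getRequestCount();
}

public class MyService implements MyServiceMBean {
    public int getRequestCount() { return 42; } // placeholder metric
}

// Register the MBean with the Tomcat MBean server found above.
ObjectName name = new ObjectName("com.example:type=MyService");
mbserver.registerMBean(new MyService(), name);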
Try this: http://community.jboss.org/wiki/JBossASTuningSliming. Surely you have many services that are not being used.