In our project we are migrating from JBoss 5 to JBoss EAP 6.1.
While going through the configuration to be used in JBoss EAP 6.1, I stumbled upon the following:
<pools>
    <bean-instance-pools>
        <strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="1" instance-acquisition-timeout-unit="MILLISECONDS"/>
        <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="1" instance-acquisition-timeout-unit="MILLISECONDS"/>
    </bean-instance-pools>
</pools>
I am not clear about the max-pool-size attribute. Is this a limit of 20 instances per stateless EJB deployed on JBoss, or will the pool go only up to 20 instances in total, irrespective of the number of stateless EJBs?
I don't agree with eis.
Here is the code from WildFly 8.2.1:
StatelessSessionComponent.java
public StatelessSessionComponent(final StatelessSessionComponentCreateService slsbComponentCreateService) {
    super(slsbComponentCreateService);
    StatelessObjectFactory<StatelessSessionComponentInstance> factory = new StatelessObjectFactory<StatelessSessionComponentInstance>() {
        @Override
        public StatelessSessionComponentInstance create() {
            return (StatelessSessionComponentInstance) createInstance();
        }

        @Override
        public void destroy(StatelessSessionComponentInstance obj) {
            obj.destroy();
        }
    };
    final PoolConfig poolConfig = slsbComponentCreateService.getPoolConfig();
    if (poolConfig == null) {
        ROOT_LOGGER.debug("Pooling is disabled for Stateless EJB " + slsbComponentCreateService.getComponentName());
        this.pool = null;
        this.poolName = null;
    } else {
        ROOT_LOGGER.debug("Using pool config " + poolConfig + " to create pool for Stateless EJB " + slsbComponentCreateService.getComponentName());
        this.pool = poolConfig.createPool(factory);
        this.poolName = poolConfig.getPoolName();
    }
    this.timeoutMethod = slsbComponentCreateService.getTimeoutMethod();
    this.weakAffinity = slsbComponentCreateService.getWeakAffinity();
}
As I see it, pool is a non-static field and is created for every component type (EJB class).
Red Hat documentation says
the maximum size of the bean pool.
Also, if you go to the EAP admin console under Profile -> Container -> EJB3 -> Bean Pools -> "Need Help?", it says
Max Pool Size: The maximum number of bean instances that the pool can
hold at a given point in time
I would interpret that to mean that the pool will go only up to 20 instances.
Edit: in retrospect, the answer by Sergey Kosarev saying it is per bean seems convincing enough that you should probably believe that instead.
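To make the per-bean reading concrete, here is a hypothetical sketch (both bean classes are invented for illustration): under the strict-max-pool configuration above, each stateless bean class gets its own pool of up to 20 instances, so deploying these two beans could mean up to 40 live instances in total.
// OrderService.java -- pooled independently, up to 20 instances
import javax.ejb.Stateless;

@Stateless
public class OrderService {
    public void placeOrder() { /* ... */ }
}

// InvoiceService.java -- its own separate pool, up to 20 more instances
import javax.ejb.Stateless;

@Stateless
public class InvoiceService {
    public void sendInvoice() { /* ... */ }
}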
Related
I am trying to make it so I can redeploy a JBoss 7.1.0 cluster with a WAR that includes Apache Ignite.
I am starting the cache like this:
System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");
igniteConfiguration = new IgniteConfiguration();
int failureDetectionTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT", "60000"));
igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
String igniteVmIps = getProperty("IGNITE_VM_IPS");
List<String> addresses = Arrays.asList("127.0.0.1:47500");
if (StringUtils.isNotBlank(igniteVmIps)) {
    addresses = Arrays.asList(igniteVmIps.split(","));
}
int networkTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_NETWORK_TIMEOUT", "60000"));
boolean failureDetectionTimeoutEnabled = Boolean.parseBoolean(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT_ENABLED", "true"));
int tcpDiscoveryLocalPort = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT", "47500"));
int tcpDiscoveryLocalPortRange = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT_RANGE", "0"));
TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
tcpDiscoverySpi.setLocalPort(tcpDiscoveryLocalPort);
tcpDiscoverySpi.setLocalPortRange(tcpDiscoveryLocalPortRange);
tcpDiscoverySpi.setNetworkTimeout(networkTimeout);
tcpDiscoverySpi.failureDetectionTimeoutEnabled(failureDetectionTimeoutEnabled);
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(addresses);
tcpDiscoverySpi.setIpFinder(ipFinder);
igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
Ignite ignite = Ignition.start(igniteConfiguration);
ignite.cluster().active(true);
Then I am stopping the cache when the application undeploys:
ignite.close();
When I try to redeploy, I get the following error during initialization.
org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.internal.cluster.ClusterGroupAdapter$CachesFilter@7385a997, clsName=null, depInfo=null, hnd=org.apache.ignite.internal.GridEventConsumeHandler@2aec6952, bufSize=1, interval=0, autoUnsubscribe=true], keepBinary=false, deserEx=null, routineId=bbe16e8e-2820-4ba0-a958-d5f644498ba2]
If I fully restart the server, it starts up fine.
Am I missing some magic in the shutdown process?
I see what I did wrong, and it was code I omitted from the ticket.
ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey())).remoteListen(locLsnr, rmtLsnr,
        EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
When this code was registered twice, it caused that strange error.
I put a try-catch around it to ignore the failure for now, and things seem to be OK.
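For reference, a minimal sketch of that workaround, assuming ignite, cacheConfig, locLsnr and rmtLsnr are the same objects as in the snippet above (EVT_* constants statically imported from org.apache.ignite.events.EventType, as in the question):
// Guard the listener registration so a redeploy does not fail when the
// previous deployment already registered it. IgniteException is Ignite's
// unchecked base exception (IgniteSpiException extends it).
try {
    ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey()))
            .remoteListen(locLsnr, rmtLsnr,
                    EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
} catch (IgniteException e) {
    // Likely already registered by a previous deployment; ignore for now.
}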
public class GemfireTest {
    public static void main(String[] args) throws NameResolutionException, TypeMismatchException, QueryInvocationTargetException, FunctionDomainException {
        ServerLauncher serverLauncher = new ServerLauncher.Builder()
                .setMemberName("server1")
                .setServerPort(40404)
                .set("start-locator", "127.0.0.1[9090]")
                .build();
        serverLauncher.start();

        String queryString = "SELECT * FROM /gemregion";
        ClientCache cache = new ClientCacheFactory().create();
        QueryService queryService = cache.getQueryService();
        Query query = queryService.newQuery(queryString);
        SelectResults results = (SelectResults) query.execute();
        int size = results.size();
        System.out.println(size);
    }
}
I am trying to run a locator and a server inside my Java application, and I am getting the exception below:
Exception in thread "main" java.lang.IllegalStateException: A
connection to a distributed system already exists in this VM. It has
the following configuration: ack-severe-alert-threshold="0"
ack-wait-threshold="15" archive-disk-space-limit="0"
archive-file-size-limit="0" async-distribution-timeout="0"
async-max-queue-size="8" async-queue-timeout="60000"
bind-address="" cache-xml-file="cache.xml"
cluster-configuration-dir="" cluster-ssl-ciphers="any"
cluster-ssl-enabled="false" cluster-ssl-keystore=""
cluster-ssl-keystore-password="" cluster-ssl-keystore-type=""
cluster-ssl-protocols="any"
cluster-ssl-require-authentication="true" cluster-ssl-truststore=""
cluster-ssl-truststore-password="" conflate-events="server"
conserve-sockets="true" delta-propagation="true"
deploy-working-dir="C:\Users\Saranya\IdeaProjects\Gemfire"
disable-auto-reconnect="false" disable-tcp="false"
distributed-system-id="-1" distributed-transactions="false"
durable-client-id="" durable-client-timeout="300"
enable-cluster-configuration="true"
enable-network-partition-detection="true"
enable-time-statistics="false" enforce-unique-host="false"
gateway-ssl-ciphers="any" gateway-ssl-enabled="false"
gateway-ssl-keystore="" gateway-ssl-keystore-password=""
gateway-ssl-keystore-type="" gateway-ssl-protocols="any"
gateway-ssl-require-authentication="true" gateway-ssl-truststore=""
gateway-ssl-truststore-password="" groups=""
http-service-bind-address="" http-service-port="7070"
http-service-ssl-ciphers="any" http-service-ssl-enabled="false"
http-service-ssl-keystore="" http-service-ssl-keystore-password=""
http-service-ssl-keystore-type="" http-service-ssl-protocols="any"
http-service-ssl-require-authentication="false"
http-service-ssl-truststore=""
http-service-ssl-truststore-password="" jmx-manager="false"
jmx-manager-access-file="" jmx-manager-bind-address=""
jmx-manager-hostname-for-clients="" jmx-manager-http-port="7070"
jmx-manager-password-file="" jmx-manager-port="1099"
jmx-manager-ssl-ciphers="any" jmx-manager-ssl-enabled="false"
jmx-manager-ssl-keystore="" jmx-manager-ssl-keystore-password=""
jmx-manager-ssl-keystore-type="" jmx-manager-ssl-protocols="any"
jmx-manager-ssl-require-authentication="true"
jmx-manager-ssl-truststore="" jmx-manager-ssl-truststore-password=""
jmx-manager-start="false" jmx-manager-update-rate="2000"
load-cluster-configuration-from-dir="false" locator-wait-time="0"
locators="127.0.0.1[9090]" (wanted "") lock-memory="false"
log-disk-space-limit="0"
log-file="C:\Users\Saranya\IdeaProjects\Gemfire\server1.log"
(wanted "") log-file-size-limit="0" log-level="config" max-num-reconnect-tries="3" max-wait-time-reconnect="60000"
mcast-address="/239.192.81.1" mcast-flow-control="1048576, 0.25,
5000" mcast-port="0" mcast-recv-buffer-size="1048576"
mcast-send-buffer-size="65535" mcast-ttl="32"
member-timeout="5000" membership-port-range="[1024,65535]"
memcached-bind-address="" memcached-port="0"
memcached-protocol="ASCII" name="server1" (wanted "")
off-heap-memory-size="" redis-bind-address="" redis-password=""
redis-port="0" redundancy-zone="" remote-locators=""
remove-unresponsive-client="false" roles=""
security-client-accessor="" security-client-accessor-pp=""
security-client-auth-init="" security-client-authenticator=""
security-client-dhalgo="" security-log-file=""
security-log-level="config" security-manager=""
security-peer-auth-init="" security-peer-authenticator=""
security-peer-verifymember-timeout="1000" security-post-processor=""
security-shiro-init="" security-udp-dhalgo=""
serializable-object-filter="!" server-bind-address=""
server-ssl-ciphers="any" server-ssl-enabled="false"
server-ssl-keystore="" server-ssl-keystore-password=""
server-ssl-keystore-type="" server-ssl-protocols="any"
server-ssl-require-authentication="true" server-ssl-truststore=""
server-ssl-truststore-password="" socket-buffer-size="32768"
socket-lease-time="60000" ssl-ciphers="any" ssl-cluster-alias=""
ssl-default-alias="" ssl-enabled-components="[]"
ssl-gateway-alias="" ssl-jmx-alias="" ssl-keystore=""
ssl-keystore-password="" ssl-keystore-type="" ssl-locator-alias=""
ssl-protocols="any" ssl-require-authentication="true"
ssl-server-alias="" ssl-truststore="" ssl-truststore-password=""
ssl-truststore-type="" ssl-web-alias=""
ssl-web-require-authentication="false" start-dev-rest-api="false"
start-locator="127.0.0.1[9090]" (wanted "")
statistic-archive-file="" statistic-sample-rate="1000"
statistic-sampling-enabled="true" tcp-port="0"
udp-fragment-size="60000" udp-recv-buffer-size="1048576"
udp-send-buffer-size="65535" use-cluster-configuration="true"
user-command-packages="" validate-serializable-objects="false"
at org.apache.geode.distributed.internal.InternalDistributedSystem.validateSameProperties(InternalDistributedSystem.java:2959)
at org.apache.geode.distributed.DistributedSystem.connect(DistributedSystem.java:199)
at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:243)
at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:214)
at GemfireTest.main(GemfireTest.java:61)
How can I solve this exception?
The error here is pretty self-explanatory: you can't have more than one connection to a distributed system within a single JVM. In this particular case you're starting both a server cache (ServerLauncher) and a client cache (ClientCacheFactory) within the same JVM, which is not supported.
To solve the issue, use two different applications or JVMs, one for the server and another one for the client executing the query.
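As an illustration only, here is a minimal sketch of that split, reusing the imports and settings from the question (the locator host/port come from the start-locator property above; the client connects through that locator via addPoolLocator; class names are invented):
// JVM 1: the server, which also starts the locator.
public class GemfireServerApp {
    public static void main(String[] args) {
        ServerLauncher serverLauncher = new ServerLauncher.Builder()
                .setMemberName("server1")
                .setServerPort(40404)
                .set("start-locator", "127.0.0.1[9090]")
                .build();
        serverLauncher.start();
    }
}

// JVM 2: the client, which connects through the locator and runs the query.
public class GemfireClientApp {
    public static void main(String[] args) throws Exception {
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("127.0.0.1", 9090)
                .create();
        SelectResults results = (SelectResults) cache.getQueryService()
                .newQuery("SELECT * FROM /gemregion")
                .execute();
        System.out.println(results.size());
    }
}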
Cheers.
I am working with Java in a Maven project. I was using Couchbase 2.3.1, but in trying to resolve this issue I rolled back to 2.2.8, to no avail.
The issue is that while data does get through to my Couchbase cluster, I am seeing a lot of this:
java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:75)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:359)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:354)
Below are the settings for my couchbase environment:
CouchbaseEnvironment: {sslEnabled=false, sslKeystoreFile='null', sslKeystorePassword='null', queryEnabled=false, queryPort=8093, bootstrapHttpEnabled=true, bootstrapCarrierEnabled=true, bootstrapHttpDirectPort=8091, bootstrapHttpSslPort=18091, bootstrapCarrierDirectPort=11210, bootstrapCarrierSslPort=11207, ioPoolSize=24, computationPoolSize=24, responseBufferSize=16384, requestBufferSize=16384, kvServiceEndpoints=1, viewServiceEndpoints=1, queryServiceEndpoints=1, searchServiceEndpoints=1, ioPool=NioEventLoopGroup, coreScheduler=CoreScheduler, eventBus=DefaultEventBus, packageNameAndVersion=couchbase-java-client/2.2.8 (git: 2.2.8, core: 1.2.9), dcpEnabled=false, retryStrategy=BestEffort, maxRequestLifetime=75000, retryDelay=ExponentialDelay{growBy 1.0 MICROSECONDS, powers of 2; lower=100, upper=100000}, reconnectDelay=ExponentialDelay{growBy 1.0 MILLISECONDS, powers of 2; lower=32, upper=4096}, observeIntervalDelay=ExponentialDelay{growBy 1.0 MICROSECONDS, powers of 2; lower=10, upper=100000}, keepAliveInterval=30000, autoreleaseAfter=2000, bufferPoolingEnabled=true, tcpNodelayEnabled=true, mutationTokensEnabled=false, socketConnectTimeout=1000, dcpConnectionBufferSize=20971520, dcpConnectionBufferAckThreshold=0.2, dcpConnectionName=dcp/core-io, callbacksOnIoPool=false, queryTimeout=75000, viewTimeout=75000, kvTimeout=2500, connectTimeout=5000, disconnectTimeout=25000, dnsSrvEnabled=false}
I'm not really sure what to look at here. As far as I can tell there should be a decent enough connection between the server where the app is running and the Couchbase cluster. Any help or direction would be appreciated. Here is a snippet from where the error is being thrown:
LockableItem<InnerVertex> lv = this.getInnerVertex(id);
lv.lock();
try {
    String content;
    try {
        content = mapper.writeValueAsString(lv.item);
    } catch (JsonProcessingException e) {
        LOG.warning(e.getMessage());
        return;
    }
    RawJsonDocument d = RawJsonDocument.create(VertexId.toKey(id), content);
    bucket.upsert(d);
} finally {
    lv.unlock();
}
I searched for the answer and found many solutions, all discussing this exception. I also checked the jar's code; it confirms that this is a timeout exception.
Root Cause Analysis
The error occurs in the following section of the Couchbase client: https://github.com/couchbase/couchbase-java-client/blob/master/src/main/java/com/couchbase/client/java/util/Blocking.java#L71
public static <T> T blockForSingle(final Observable<? extends T> observable, final long timeout,
        final TimeUnit tu) {
    final CountDownLatch latch = new CountDownLatch(1);
    TrackingSubscriber<T> subscriber = new TrackingSubscriber<T>(latch);
    observable.subscribe(subscriber);

    try {
        if (!latch.await(timeout, tu)) { // From here, this error occurs.
            throw new RuntimeException(new TimeoutException());
        }
    }
If the timeout kicks in, a TimeoutException nested in a
RuntimeException is thrown to be fully compatible with the
Observable.timeout(long, TimeUnit) behavior.
Resource Link:
http://docs.couchbase.com/sdk-api/couchbase-java-client-2.2.0/com/couchbase/client/java/util/Blocking.html
Your configuration analysis and solution:
Your Couchbase environment's connectTimeout is 5000 ms (5 s), which is the default connection timeout.
You need to increase this value to 10000 ms or greater. That should solve your problem.
//this tunes the SDK (to customize connection timeout)
CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
        .connectTimeout(10000) //10000ms = 10s, default is 5s
        .build();
A full solution
Simonbasle has given a full solution in this tutorial:
From the short log, it looks like the SDK is able to connect to the
node, but takes a little too much time to open the bucket. How good is
the network link between the two machines? Is this a VM/cloud machine?
What you can try to do is increase the connect timeout:
public class NoSQLTest {

    public static void main(String[] args) {
        try {
            //this tunes the SDK (to customize connection timeout)
            CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                    .connectTimeout(10000) //10000ms = 10s, default is 5s
                    .build();

            System.out.println("Create connection");
            //use the env during cluster creation to apply
            Cluster cluster = CouchbaseCluster.create(env, "10.115.224.94");

            System.out.println("Try to openBucket");
            Bucket bucket = cluster.openBucket("beer-sample"); //you can also force a greater timeout here (cluster.openBucket("beer-sample", 10, TimeUnit.SECONDS))

            System.out.println("disconnect");
            cluster.disconnect();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
As a side note, you should always reuse the CouchbaseEnvironment,
CouchbaseCluster and Bucket instances once created (usually by making
them public static somewhere, or a Spring singleton, etc...). These
are thread safe and should be shared (and they are expensive to create
anyway).
Resource Link:
Couchbase connection timeout with Java SDK
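To illustrate the reuse advice from that answer, a minimal sketch (the holder class name and the address are invented; a Spring singleton would do just as well):
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

// Shared, thread-safe Couchbase handles, created once per JVM. Environment,
// Cluster and Bucket are expensive to create and are meant to be reused.
public final class CouchbaseHolder {

    public static final CouchbaseEnvironment ENV = DefaultCouchbaseEnvironment.builder()
            .connectTimeout(10000) // 10s instead of the 5s default
            .build();

    public static final Cluster CLUSTER = CouchbaseCluster.create(ENV, "10.115.224.94");

    public static final Bucket BUCKET = CLUSTER.openBucket("beer-sample");

    private CouchbaseHolder() {
    }
}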
Thanks for the question, and for @SkyWalker's answer.
They helped when I encountered this annoying timeout.
For Spring Data Couchbase 2, adding the following to application.properties solved it:
spring.couchbase.env.timeouts.connect=20000
I invoke my custom Monitor registered on the WebLogic MBeanServer, but WebLogic gives me the updated value only after 15 seconds.
Does WebLogic cache the call?
Found it!
I marked my MBean with the following (Spring) annotation:
@ManagedResource(
    objectName = "bean:name=obuInterfaceMonitor", description = "obuInterface Monitor", log = true,
    logFile = "jmx.log", currencyTimeLimit = 15, persistPolicy = "OnUpdate", persistPeriod = 200, persistLocation = "interfaceMonitor", persistName = "bar"
)
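The currencyTimeLimit = 15 descriptor is what makes the value appear stale for 15 seconds: Spring's JMX exporter uses it to cache attribute values. If I recall Spring's semantics correctly (it deviates from the JMX spec here), a value of 0 or less marks attributes as never cached, so a hedged sketch of the fix would be:
@ManagedResource(
    objectName = "bean:name=obuInterfaceMonitor", description = "obuInterface Monitor", log = true,
    logFile = "jmx.log",
    currencyTimeLimit = 0, // 0 or less: attribute values are never cached (Spring semantics)
    persistPolicy = "OnUpdate", persistPeriod = 200, persistLocation = "interfaceMonitor", persistName = "bar"
)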
I want to execute two tasks at scheduled times (23:59 CET and 08:00 CET). I have created an EJB singleton bean that contains those methods:
@Singleton
public class OfferManager {

    @Schedule(hour = "23", minute = "59", timezone = "CET")
    @AccessTimeout(value = 0) // concurrent access is not permitted
    public void fetchNewOffers() {
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers started");
        // ...
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers finished");
    }

    @Schedule(hour = "8", minute = "0", timezone = "CET")
    public void sendMailsWithReports() {
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Generating reports started");
        // ...
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Generating reports finished");
    }
}
The problem is that both tasks are executed twice. The server is WildFly Beta1, configured in UTC time.
Here are some server logs, that might be useful:
2013-10-20 11:15:17,684 INFO [org.jboss.as.server] (XNIO-1 task-7) JBAS018559: Deployed "crawler-0.3.war" (runtime-name : "crawler-0.3.war")
2013-10-20 21:59:00,070 INFO [com.indeed.control.OfferManager] (EJB default - 1) Fetching new offers started
....
2013-10-20 22:03:48,608 INFO [com.indeed.control.OfferManager] (EJB default - 1) Fetching new offers finished
2013-10-20 23:59:00,009 INFO [com.indeed.control.OfferManager] (EJB default - 2) Fetching new offers started
....
2013-10-20 23:59:22,279 INFO [com.indeed.control.OfferManager] (EJB default - 2) Fetching new offers finished
What might be the cause of such behaviour?
I solved the problem by specifying the scheduled time in server time (UTC).
So
@Schedule(hour = "23", minute = "59", timezone = "CET")
was replaced with:
@Schedule(hour = "21", minute = "59")
I don't know the cause of this behaviour; maybe this early WildFly release is the issue.
I had the same problem with TomEE Plume 7.0.4. In my case the solution was to change @Singleton to @Stateless.
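For reference, a minimal sketch of that change, keeping the same schedule methods as the question's bean (method bodies elided):
// Same timers as above, but on a pooled stateless bean instead of a singleton.
@Stateless
public class OfferManager {

    @Schedule(hour = "23", minute = "59", timezone = "CET")
    public void fetchNewOffers() {
        // ...
    }

    @Schedule(hour = "8", minute = "0", timezone = "CET")
    public void sendMailsWithReports() {
        // ...
    }
}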