WebLogic MBeanServer provides the value only after 15 secs - java

I invoke my custom monitor registered on the WebLogic MBeanServer, but WebLogic gives me the updated value only after 15 seconds.
Does WebLogic cache the call?

Found it!
I had marked my MBean with the following (Spring) annotation:
@ManagedResource(
objectName = "bean:name=obuInterfaceMonitor", description = "obuInterface Monitor", log = true,
logFile = "jmx.log", currencyTimeLimit = 15, persistPolicy = "OnUpdate", persistPeriod = 200, persistLocation = "interfaceMonitor", persistName = "bar"
)
The currencyTimeLimit = 15 attribute allows the attribute value to be served from a cache for up to 15 seconds, which explains the delay.
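For reference, a minimal sketch of the fix, assuming the annotation sits on a class named ObuInterfaceMonitor (the class name is a placeholder): dropping (or lowering) currencyTimeLimit stops the JMX layer from serving a cached attribute value for up to 15 seconds.

```java
// Hypothetical sketch: the same managed resource with the 15-second attribute
// caching removed. currencyTimeLimit = 15 told the exporter a cached value was
// acceptable for up to 15 seconds, which caused the observed delay.
@ManagedResource(
        objectName = "bean:name=obuInterfaceMonitor",
        description = "obuInterface Monitor",
        log = true,
        logFile = "jmx.log"
        // currencyTimeLimit omitted: attribute reads are no longer cached
)
public class ObuInterfaceMonitor {
    // ... monitored attributes and operations ...
}
```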

Related

When doing a redeploy of JBoss WAR with Apache Ignite, Failed to marshal custom event: StartRoutineDiscoveryMessage

I am trying to make it so I can redeploy a JBoss 7.1.0 cluster with a WAR that uses Apache Ignite.
I am starting the cache like this:
System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");
igniteConfiguration = new IgniteConfiguration();
int failureDetectionTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT", "60000"));
igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
String igniteVmIps = getProperty("IGNITE_VM_IPS");
List<String> addresses = Arrays.asList("127.0.0.1:47500");
if (StringUtils.isNotBlank(igniteVmIps)) {
    addresses = Arrays.asList(igniteVmIps.split(","));
}
int networkTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_NETWORK_TIMEOUT", "60000"));
boolean failureDetectionTimeoutEnabled = Boolean.parseBoolean(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT_ENABLED", "true"));
int tcpDiscoveryLocalPort = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT", "47500"));
int tcpDiscoveryLocalPortRange = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT_RANGE", "0"));
TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
tcpDiscoverySpi.setLocalPort(tcpDiscoveryLocalPort);
tcpDiscoverySpi.setLocalPortRange(tcpDiscoveryLocalPortRange);
tcpDiscoverySpi.setNetworkTimeout(networkTimeout);
tcpDiscoverySpi.failureDetectionTimeoutEnabled(failureDetectionTimeoutEnabled);
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(addresses);
tcpDiscoverySpi.setIpFinder(ipFinder);
igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
Ignite ignite = Ignition.start(igniteConfiguration);
ignite.cluster().active(true);
Then I am stopping the cache when the application undeploys:
ignite.close();
When I try to redeploy, I get the following error during initialization.
org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.internal.cluster.ClusterGroupAdapter$CachesFilter#7385a997, clsName=null, depInfo=null, hnd=org.apache.ignite.internal.GridEventConsumeHandler#2aec6952, bufSize=1, interval=0, autoUnsubscribe=true], keepBinary=false, deserEx=null, routineId=bbe16e8e-2820-4ba0-a958-d5f644498ba2]
If I fully restart the server, it starts up fine.
Am I missing some magic in the shutdown process?
I see what I did wrong; it was code I omitted from the ticket.
ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey())).remoteListen(locLsnr, rmtLsnr,
EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
When this listener was registered twice, it caused that strange error.
I put a try-catch around it to ignore the failure for now, and things seem to be OK.
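A cleaner alternative to the try-catch, sketched under the assumption that the names from the snippet above (ignite, cacheConfig, locLsnr, rmtLsnr) are in scope: remoteListen returns a UUID handle, and IgniteEvents.stopRemoteListen(UUID) deregisters the listener, so the registration can be undone in the undeploy hook before closing Ignite.

```java
// Sketch: keep the handle returned by remoteListen so the listener can be
// deregistered on undeploy instead of failing on the next deploy.
UUID listenerId = ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey()))
        .remoteListen(locLsnr, rmtLsnr,
                EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);

// ... later, when the application undeploys:
ignite.events().stopRemoteListen(listenerId); // remove the remote listener first
ignite.close();                               // then stop the node
```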

SecurityUtils.getSubject().hasRole("any") with multiple realms throws an exception for a nonexistent role

I created two realms for authentication in Apache Shiro, but when I call hasRole("any") for a role that does not exist, it throws the following exception (if the role exists, it returns true):
java.lang.ClassCastException: org.apache.shiro.subject.SimplePrincipalCollection cannot be cast to java.lang.String
at com.ws.shiro.RedisStringSerializer.serialize(RedisStringSerializer.java:13) ~[shiro-redis-3.0.2.jar:?]
at org.crazycake.shiro.RedisCache.get(RedisCache.java:79) ~[shiro-redis-3.2.2.jar:?]
at org.apache.shiro.realm.AuthorizingRealm.getAuthorizationInfo(AuthorizingRealm.java:328) ~[shiro-core-1.3.2.jar:1.3.2]
at org.apache.shiro.realm.AuthorizingRealm.hasRole(AuthorizingRealm.java:573) ~[shiro-core-1.3.2.jar:1.3.2]
at org.apache.shiro.authz.ModularRealmAuthorizer.hasRole(ModularRealmAuthorizer.java:374) ~[shiro-core-1.3.2.jar:1.3.2]
at org.apache.shiro.mgt.AuthorizingSecurityManager.hasRole(AuthorizingSecurityManager.java:153) ~[shiro-core-1.3.2.jar:1.3.2]
at org.apache.shiro.subject.support.DelegatingSubject.hasRole(DelegatingSubject.java:224) ~[shiro-core-1.3.2.jar:1.3.2]
at com.ws.user.login.LoginResource.login(LoginResource.java:65) ~[main/:?]
The shiro.ini is:
# =======================
# Shiro INI configuration
# =======================
## Using Sha256 cryptography
credentialsMatcher = org.apache.shiro.authc.credential.HashedCredentialsMatcher
credentialsMatcher.hashAlgorithmName=SHA-256
credentialsMatcher.hashIterations = 1024
credentialsMatcher.storedCredentialsHexEncoded = false
dbRealm = com.ws.user.realm.DataBaseRealm
dbRealm.credentialsMatcher = $credentialsMatcher
credentialsMatcherToken = com.ws.user.realm.CustomCredentialMatcherToken
credentialsMatcherToken.hashAlgorithmName=SHA-256
credentialsMatcherToken.hashIterations = 1024
credentialsMatcherToken.storedCredentialsHexEncoded = false
tokenRealm = com.ws.user.realm.DataBaseBearerRealm
tokenRealm.credentialsMatcher = $credentialsMatcherToken
securityManager.realms = $dbRealm, $tokenRealm
#redisManager
redisManager = com.ws.shiro.RedisManager
redisManager.host = <THERE IS A HOST HERE>
redisManager.port = 6379
redisManager.expire = 1000
redisManager.timeout = 0
#============redisSessionDAO=============
redisSessionDAO = com.ws.shiro.RedisSessionDAO
redisSessionDAO.redisManager = $redisManager
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $redisSessionDAO
securityManager.sessionManager = $sessionManager
#============redisCacheManager===========
cacheManager = com.ws.shiro.RedisCacheManager
cacheManager.redisManager = $redisManager
securityManager.cacheManager = $cacheManager
It seems to be a configuration issue: when debugging, authentication works for the token I tried, but in the class ModularRealmAuthorizer the method hasRole is called twice, once per realm. The first call was fine, and then the second realm threw the exception.
Problem solved! I had forgotten to override the method getAuthorizationCacheKey in one of my custom realms.
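For anyone hitting the same ClassCastException: by default, AuthorizingRealm.getAuthorizationCacheKey(...) returns the whole PrincipalCollection, which a String-keyed cache serializer (like the Redis one in the stack trace) cannot cast to String. A hedged sketch of the override follows, with the realm body omitted and the choice of key as an assumption:

```java
public class DataBaseBearerRealm extends AuthorizingRealm {

    // The default implementation returns the PrincipalCollection itself.
    // Returning a plain String key keeps a String-based cache serializer happy.
    @Override
    protected Object getAuthorizationCacheKey(PrincipalCollection principals) {
        return principals.getPrimaryPrincipal().toString();
    }

    // ... doGetAuthorizationInfo / doGetAuthenticationInfo omitted ...
}
```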

Grails groovy too many hibernate connections

I am struggling with manual transaction management. Background: I need to run Quartz cron jobs that perform batch processing. For batch processing it is recommended to decide manually when to flush to the db, so as not to slow down the application too much.
I have a pooled Hibernate data source configured as follows:
dataSource {
pooled = true
driverClassName = "com.mysql.jdbc.Driver"
dialect = "org.hibernate.dialect.MySQL5InnoDBDialect"
properties {
maxActive = 50
maxIdle = 25
minIdle = 1
initialSize = 1
minEvictableIdleTimeMillis = 60000
timeBetweenEvictionRunsMillis = 60000
numTestsPerEvictionRun = 3
maxWait = 10000
testOnBorrow = true
testWhileIdle = true
testOnReturn = false
validationQuery = "SELECT 1"
validationQueryTimeout = 3
validationInterval = 15000
jmxEnabled = true
maxAge = 10 * 60000
// http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#JDBC_interceptors
jdbcInterceptors = "ConnectionState;StatementCache(max=200)"
}
}
hibernate {
cache.use_second_level_cache = false
cache.use_query_cache = false
cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
show_sql = false
logSql = false
}
The cron job calls a service, and in the service I do the following:
for(int g=0; g<checkResults.size() ;g++) {
def tmpSearchTerm = SearchTerm.findById((int)results[g+i][0])
tmpSearchTerm.count=((String)checkResults[g]).toInteger()
batch.add(tmpSearchTerm)
}
//increase counter
i+=requestSizeTMP
if (i%(requestSize*4)==0 || i+1==results.size()){
println "PREPARATION TO WRITE:" + i
SearchTerm.withSession { session ->
def tx = session.beginTransaction()
for (SearchTerm s : batch) {
s.save()
}
batch.clear()
tx.commit()
println ">>>>>>>>>>>>>>>>>>>>>writing: ${i}<<<<<<<<<<<<<<<<<<<<<<"
session.flush()
session.clear()
}
}
}
So I am adding items to a batch until I have enough (4x the request size, or the last item) and then I write them to the db.
Everything works fine, but somehow the code seems to open Hibernate connections and never close them. I don't really understand why, but I get a hard error and Tomcat crashes with "too many connections". I have two problems with this that I do not understand:
1) If the dataSource is pooled and maxActive is 50, how can I get a too-many-connections error if the limit in Tomcat is 500?
2) How do I explicitly terminate the transaction so that I do not have so many open connections?
You can use withTransaction, because it will manage the transaction for you.
For example:
Account.withTransaction { status ->
    def source = Account.get(params.from)
    def dest = Account.get(params.to)
    int amount = params.amount.toInteger()
    if (source.active) {
        source.balance -= amount
        if (dest.active) {
            dest.balance += amount
        }
        else {
            status.setRollbackOnly()
        }
    }
}
You can read about withTransaction at http://grails.org/doc/latest/ref/Domain%20Classes/withTransaction.html
You can see the difference between withSession and withTransaction in https://stackoverflow.com/a/19692615/1610918
------UPDATE-----------
But I would prefer you to put this in a service, which can then be called from the job.

ejb bean instance pool jboss EAP 6.1

In our project we are migrating from JBoss 5 to JBoss EAP 6.1.
While going through the configuration to be used in JBoss EAP 6.1, I stumbled upon the following:
<pools>
<bean-instance-pools>
<strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="1" instance-acquisition-timeout-unit="MILLISECONDS"/>
<strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="1" instance-acquisition-timeout-unit="MILLISECONDS"/>
</bean-instance-pools>
</pools>
I am not clear about the max-pool-size attribute. Is the limit 20 instances per stateless EJB deployed on JBoss, or will the pool only go up to 20 instances irrespective of the number of stateless EJBs?
I don't agree with eis.
Here is the code from WildFly 8.2.1, StatelessSessionComponent.java:
public StatelessSessionComponent(final StatelessSessionComponentCreateService slsbComponentCreateService) {
super(slsbComponentCreateService);
StatelessObjectFactory<StatelessSessionComponentInstance> factory = new StatelessObjectFactory<StatelessSessionComponentInstance>() {
@Override
public StatelessSessionComponentInstance create() {
return (StatelessSessionComponentInstance) createInstance();
}
@Override
public void destroy(StatelessSessionComponentInstance obj) {
obj.destroy();
}
};
final PoolConfig poolConfig = slsbComponentCreateService.getPoolConfig();
if (poolConfig == null) {
ROOT_LOGGER.debug("Pooling is disabled for Stateless EJB " + slsbComponentCreateService.getComponentName());
this.pool = null;
this.poolName = null;
} else {
ROOT_LOGGER.debug("Using pool config " + poolConfig + " to create pool for Stateless EJB " + slsbComponentCreateService.getComponentName());
this.pool = poolConfig.createPool(factory);
this.poolName = poolConfig.getPoolName();
}
this.timeoutMethod = slsbComponentCreateService.getTimeoutMethod();
this.weakAffinity = slsbComponentCreateService.getWeakAffinity();
}
As I see it, pool is a non-static field and is created for every type of component (EJB class).
Red Hat documentation says
the maximum size of the bean pool.
Also, if you go to the admin panel of EAP, under Profile -> Container -> EJB3 -> Bean Pools -> "Need Help?", it says:
Max Pool Size: The maximum number of bean instances that the pool can hold at a given point in time.
I would interpret that to mean that the pool only goes up to 20 instances.
Edit: in retrospect, the answer by Sergey Kosarev saying it is per instance seems convincing enough that you should probably believe that instead.

Java EE Singleton Scheduled Task Executed twice

I want to execute two tasks at scheduled times (23:59 CET and 08:00 CET). I have created an EJB singleton bean that contains these methods:
@Singleton
public class OfferManager {

    @Schedule(hour = "23", minute = "59", timezone = "CET")
    @AccessTimeout(value = 0) // concurrent access is not permitted
    public void fetchNewOffers() {
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers started");
        // ...
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Fetching new offers finished");
    }

    @Schedule(hour = "8", minute = "0", timezone = "CET")
    public void sendMailsWithReports() {
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Generating reports started");
        // ...
        Logger.getLogger(OfferManager.class.getName()).log(Level.INFO, "Generating reports finished");
    }
}
The problem is that both tasks are executed twice. The server is WildFly Beta1, configured in UTC time.
Here are some server logs, that might be useful:
2013-10-20 11:15:17,684 INFO [org.jboss.as.server] (XNIO-1 task-7) JBAS018559: Deployed "crawler-0.3.war" (runtime-name : "crawler-0.3.war")
2013-10-20 21:59:00,070 INFO [com.indeed.control.OfferManager] (EJB default - 1) Fetching new offers started
....
2013-10-20 22:03:48,608 INFO [com.indeed.control.OfferManager] (EJB default - 1) Fetching new offers finished
2013-10-20 23:59:00,009 INFO [com.indeed.control.OfferManager] (EJB default - 2) Fetching new offers started
....
2013-10-20 23:59:22,279 INFO [com.indeed.control.OfferManager] (EJB default - 2) Fetching new offers finished
What might be the cause of such behaviour?
I solved the problem by specifying the scheduled time in server time (UTC).
So
@Schedule(hour = "23", minute = "59", timezone = "CET")
was replaced with:
@Schedule(hour = "21", minute = "59")
I don't know the cause of this behaviour; maybe the early release of WildFly is the issue.
I had the same problem with TomEE Plume 7.0.4. In my case the solution was to change @Singleton to @Stateless.
