Error Reading JMS queue on Solace - java

I have a queue on Solace that I can write data to, but I get an error when reading from it.
Below is the error I am getting.
Any idea what the issue could be?
Oct 26, 2017 3:13:54 PM com.solacesystems.jcsmp.protocol.impl.TcpClientChannel close
INFO: Channel Closed (smfclient 6)
Oct 26, 2017 3:13:54 PM com.solacesystems.jcsmp.impl.flow.BindRequestTask execute
INFO: Error Response (403) - Permission Not Allowed
javax.jms.JMSSecurityException: Error creating consumer - access
denied (Permission Not Allowed)

The issue is likely that the client is not the owner of the queue and the queue is configured with "Read Only" or "No Access" for its permission level.
The queue's permission level defines the level of access given to consuming clients that are not defined as the owner of the queue.
To resolve this issue, you can edit the queue's permission level to "Consume", "Modify-Topic", or "Delete". Note that you will need to disable the queue before making these changes.
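For reference, the change above can be sketched in the SolOS CLI roughly as follows (the queue name `MyQueue` and VPN `default` are placeholders, and the exact command tree may differ between SolOS versions):

```
solace> enable
solace# configure
solace(configure)# message-spool message-vpn default
solace(configure/message-spool)# queue MyQueue
solace(configure/message-spool/queue)# shutdown
solace(configure/message-spool/queue)# permission all consume
solace(configure/message-spool/queue)# no shutdown
```

The `shutdown` / `no shutdown` pair is what "disable the queue before making these changes" refers to.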

Related

ElasticSearch Node Client on Tomcat - failed to read requesting data

I'm getting an exception on the Tomcat console while using an embedded Elasticsearch instance. I have configured the embedded instance as a node-client cluster that starts when the application runs on Tomcat. Everything works fine for this cluster, but I get the following exception while starting the instance. I also get the same exception when I start another node, or shut down an existing node, in the same cluster.
Apr 07, 2015 4:13:28 PM org.elasticsearch.discovery.zen.ping.multicast
WARNING: [Base] failed to read requesting data from /10.4.1.94:54328
java.io.IOException: Expected handle header, got [15]
at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:65)
at org.elasticsearch.cluster.ClusterName.readFrom(ClusterName.java:64)
at org.elasticsearch.cluster.ClusterName.readClusterName(ClusterName.java:58)
at org.elasticsearch.discovery.zen.ping.multicast.MulticastZenPing$Receiver.run(MulticastZenPing.java:402)
at java.lang.Thread.run(Thread.java:745)
(the same warning and stack trace repeat every couple of seconds, at 4:13:30 PM and 4:13:32 PM)
From the exception it looks like a handshaking problem with other cluster nodes, yet despite this the cluster remains healthy and happily serves its payload. I'm using the same Elasticsearch version (1.4.4) for both the Java client and the external installations, so the answer to this question no longer applies (ElasticSearch - failed to read requesting data). Also note that I've checked this with an isolated node client (a Java main program), and I don't see the exception there.
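If the malformed pings turn out to come from an unrelated process or node answering on the multicast group, a common mitigation in the 1.x line is to disable multicast discovery and list the cluster nodes explicitly. This is a sketch for ES 1.x setting names; the host is simply the one from the log:

```yaml
# elasticsearch.yml (ES 1.x)
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.4.1.94"]
```

With unicast-only discovery, the node stops listening on the shared multicast channel, so stray datagrams from other software can no longer trigger this warning.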

OutOfMemoryError in Weblogic FileStore

I'm stress-testing my application on WebLogic 11g by sending many JMS messages to its queues.
However, the FileStore crashes with an OutOfMemoryError at around 20K messages and a maximum file size of 647,169 KB:
Exception in thread "Thread-13" java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
at weblogic.store.io.file.StoreFile.expand(StoreFile.java:324)
at weblogic.store.io.file.Heap.reserveSpace(Heap.java:305)
at weblogic.store.io.file.Heap.multiWrite(Heap.java:438)
at weblogic.store.io.file.FileStoreIO.flush(FileStoreIO.java:497)
at weblogic.store.internal.PersistentStoreImpl.run(PersistentStoreImpl.java:638)
at weblogic.store.internal.PersistentStoreImpl$2.run(PersistentStoreImpl.java:383)
And a few lines from the log file:
Feb 25, 2014 7:53:19 PM CET Warning JTA BEA-110484 The JTA health state has changed from HEALTH_OK to HEALTH_WARN with reason codes: Resource WLStore_MyFS_stores-Node1-file-jms declared unhealthy.
Feb 25, 2014 7:53:19 PM CET Warning JTA BEA-110030 XA resource [WLStore_MyFS_stores-Node1-file-jms] has not responded in the last 120 second(s).
Feb 25, 2014 7:53:19 PM CET Warning JTA BEA-110405 Resource WLStore_MyFS_stores-Node1-file-jms was not assigned to any of these servers: Node1
Feb 25, 2014 7:54:19 PM CET Warning JTA BEA-110486 Transaction BEA1-5DA4B1F8A57C83AEDB1B cannot complete commit processing because resource [WLStore_MyFS_stores-Node1-file-jms] is unavailable. The transaction will be abandoned after 3,420 seconds unless all resources acknowledge the commit decision.
Is it possible to increase the size of this FileStore ?
When you send messages to WLS, it keeps the message plus its header in memory until the message is consumed.
If your rate of message production is faster than your rate of consumption, you will eventually hit an OOM.
There are a couple of things you can do to avoid it:
1) Ensure you have enough consumers for the messages and that they are able to consume messages quickly.
2) By default, the JMS paging feature is triggered when JMS message memory consumption reaches about 1/3 of the overall heap. You can tune the server to trigger paging earlier if you want. Paging leaves the header part of each message in memory and moves the body to the paging file, thereby releasing some memory. As a simplistic calculation, a JMS header alone consumes about 1 KB of memory.
- Note: pending messages keep both header and body in memory.
3) Of course, increasing the JVM heap size for the managed server hosting your JMS server directly lets you keep more messages in memory.
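To see why paging helps, here is a rough back-of-the-envelope sketch. The ~1 KB header figure is the rule of thumb above; the 32 KB average body size is a made-up assumption for illustration:

```java
public class JmsHeapEstimate {
    public static void main(String[] args) {
        long messages = 20_000;          // roughly where the OOM was observed
        long headerBytes = 1_024;        // ~1 KB per JMS header (rule of thumb)
        long avgBodyBytes = 32 * 1_024;  // hypothetical average body size

        long withoutPaging = messages * (headerBytes + avgBodyBytes);
        long withPaging = messages * headerBytes;  // bodies live in the paging file

        System.out.println("without paging: " + withoutPaging / (1024 * 1024) + " MB");
        System.out.println("with paging:    " + withPaging / (1024 * 1024) + " MB");
    }
}
```

Under these assumptions, paging reduces the in-heap footprint from hundreds of megabytes to tens of megabytes for the same backlog of non-pending messages.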
Try increasing the heap allocation for the managed server by providing a larger value for the -Xmx parameter in the Server Start parameters or start script.
See How to increase memory in weblogic for more details.
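For example (the sizes are hypothetical; `USER_MEM_ARGS` is the variable WebLogic's domain scripts check to override the default memory arguments):

```shell
# bin/setDomainEnv.sh -- raise the heap for servers started from this domain
USER_MEM_ARGS="-Xms2g -Xmx4g"
export USER_MEM_ARGS
```

Servers started through the domain scripts will then pick up the larger heap on their next restart.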

How to remove errors UnsupportedOperationException and Possible memory leak

I have searched a lot but haven't found anything that solves this problem.
I have web services and am generating the stubs using JAX-WS.
To access the web service methods, I have written a class in which all the methods are static, like:
public static String getLocation()
{
//call to the web service
}
I point out that they are static because I want to rule this out as the root cause of my problem.
Now, when I check the logs in the Tomcat directory, the catalina log shows something like the following. The error occurs when I start up or shut down the Tomcat server:
Mar 18, 2010 11:13:07 PM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: stop: Stopping web application at '/testWeb'
Mar 18, 2010 11:13:07 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: A web application appears to have started a thread named [leakingThread] but has failed to stop it. This is very likely to create a memory leak.
Another error I am seeing is
SEVERE: Unable to determine string representation of value of type [com.sun.xml.stream.writers.XMLStreamWriterImpl]
java.lang.UnsupportedOperationException
at com.sun.xml.stream.writers.XMLStreamWriterImpl.entrySet(XMLStreamWriterImpl.java:2134)
at java.util.AbstractMap.toString(AbstractMap.java:478)
at org.apache.catalina.loader.WebappClassLoader.clearThreadLocalMap(WebappClassLoader.java:2433)
at org.apache.catalina.loader.WebappClassLoader.clearReferencesThreadLocals(WebappClassLoader.java:2349)
at org.apache.catalina.loader.WebappClassLoader.clearReferences(WebappClassLoader.java:1921)
at org.apache.catalina.loader.WebappClassLoader.stop(WebappClassLoader.java:1833)
at org.apache.catalina.loader.WebappLoader.stop(WebappLoader.java:740)
at org.apache.catalina.core.StandardContext.stop(StandardContext.java:4920)
at org.apache.catalina.core.ContainerBase.removeChild(ContainerBase.java:936)
Can anyone please help me clear these errors?
Thanks in advance.
My diagnosis: you have a Map implementation in a thread local, and this map doesn't support the entrySet operation, which is triggered by Map#toString. To be precise, your exception is thrown from this line of code in com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.
Tomcat's code that clears the thread local is quite unfortunately written to unconditionally call toString on objects just to be able to log them if the debug level is on.
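A tiny self-contained sketch of this failure mode (the map class here is hypothetical, standing in for XMLStreamWriterImpl): AbstractMap.toString iterates entrySet(), so a map that doesn't support entrySet() blows up the moment anything tries to log it.

```java
import java.util.AbstractMap;
import java.util.Map;
import java.util.Set;

public class ToStringDemo {
    // Stand-in for a Map implementation (like XMLStreamWriterImpl)
    // that does not support entrySet()
    static class NoEntrySetMap extends AbstractMap<String, String> {
        @Override
        public Set<Map.Entry<String, String>> entrySet() {
            throw new UnsupportedOperationException();
        }
    }

    public static void main(String[] args) {
        Map<String, String> m = new NoEntrySetMap();
        try {
            System.out.println(m);  // String.valueOf -> AbstractMap.toString -> entrySet()
        } catch (UnsupportedOperationException e) {
            System.out.println("UnsupportedOperationException, exactly as in the Tomcat log");
        }
    }
}
```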
If you can't get rid of using a thread-local for this, you may have quite some trouble working around this problem.
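If you do keep the thread-local, the cleanup itself is a one-liner, shown in this sketch (the map contents are invented). The catch is that ThreadLocal.remove() only clears the calling thread's slot, so on a container's pooled threads the cleanup has to run on each worker thread, e.g. in a servlet filter's finally block:

```java
import java.util.HashMap;
import java.util.Map;

public class ThreadLocalCleanup {
    // Hypothetical stand-in for the webapp's thread-local map
    static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void main(String[] args) {
        CTX.get().put("user", "alice");        // populated during a "request"
        System.out.println(CTX.get().size());  // 1

        CTX.remove();                          // what per-thread cleanup should do
        System.out.println(CTX.get().size());  // 0 -- a fresh, empty map
    }
}
```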
Your thread leak, by the way, is very probably the result of failed cleanup due to the above error.

Maven repository blocking specific ip addresses?

I am trying to download the file http://repo.maven.apache.org/maven2/org/sonatype/aether/aether-api/1.13.1/aether-api-1.13.1.jar as part of a maven build by my ci server but I get the error:
Downloading: http://repo.maven.apache.org/maven2/org/sonatype/aether/aether-api/1.13.1/aether-api-1.13.1.jar
01-Nov-2012 08:44:26 Nov 01, 2012 8:44:26 AM org.apache.maven.wagon.providers.http.httpclient.impl.client.DefaultRequestDirector tryExecute
01-Nov-2012 08:44:26 INFO: I/O exception (java.net.SocketException) caught when processing request: Connection reset
01-Nov-2012 08:44:26 Nov 01, 2012 8:44:26 AM org.apache.maven.wagon.providers.http.httpclient.impl.client.DefaultRequestDirector tryExecute
01-Nov-2012 08:44:26 INFO: Retrying request
After locating this error in the logs, I tried to download the artefact by hand with wget, which didn't work either. Further investigation revealed that downloading from another server of this provider (different IP, same IP range) is not possible either.
Downloading this file to servers from other providers was successful at the same time.
I was able to ping repo.maven.apache.org so the server was reachable.
Is it possible that the ip-address of my ci-server is blocked for download?
Do I have to move my ci-server to a different provider?
(atm my ci-server is hosted at jiffybox/domainfactory, if that helps answering the question)
There are currently serious problems with maven's CDN provider, see
the support forum at
https://getsatisfaction.com/sonatype/topics/unable_to_dowload_maven_jetty_plugin_version_6_1_26_from_central
current issue reports at
https://issues.sonatype.org/browse/MVNCENTRAL-257
https://issues.sonatype.org/browse/MVNCENTRAL-259
https://issues.sonatype.org/browse/MVNCENTRAL-260
So obviously they are working on it.
The only workaround for me was to download through a VPN tunnel.
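Besides a VPN, another possible workaround, if an alternative host or an internal repository proxy happens to be reachable from the CI server, is to point central at a mirror in ~/.m2/settings.xml. This is only a sketch; the URL below is illustrative and whether it helps depends on which hosts are affected:

```xml
<settings>
  <mirrors>
    <mirror>
      <id>central-alt</id>
      <mirrorOf>central</mirrorOf>
      <url>http://repo1.maven.org/maven2</url>
    </mirror>
  </mirrors>
</settings>
```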

Solr Read Timeout (only in production environment)

I am working with a Java application that uses SolrJ to index documents to a Solr server.
In my local test environment, I run a local Solr instance on a Tomcat server on my Windows XP box. When I run the Java app from a different Windows box, the indexing completes successfully and the Solr log files look normal.
However, when the same Java application is deployed on a Linux web server communicating with another Linux web server running Solr, I receive "read timed out" messages after every Solr update command:
Jul 14, 2011 3:12:31 AM org.apache.solr.core.SolrCore execute
INFO: [] webapp=/solr path=/update params={wt=javabin&version=1} status=400 QTime=20020
Jul 14, 2011 3:12:51 AM org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: {} 0 20021
Jul 14, 2011 3:12:51 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: java.net.SocketTimeoutException: Read timed out
at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:72)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
at...
Caused by: javax.xml.stream.XMLStreamException: java.net.SocketTimeoutException: Read timed out
Any idea why this might be happening? My suspicion is that something is closing these connections after they are initiated (e.g. web filtering software, firewall...), but the network admins at my workplace say that no traffic is being blocked.
Is it timing out only with updates, or with queries as well?
Check the server settings on the Linux machine to see whether it has a very low timeout value configured.
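For intuition, here is a minimal, Solr-agnostic sketch of what "Read timed out" means: the TCP connection is established fine, but no response arrives within the reader's socket timeout (SO_TIMEOUT), so the blocking read gives up. That QTime=20020 in the log suggests the server side spent about 20 seconds on the request, which fits a client-side timeout around that value:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws IOException {
        // The server listens but never answers, like a Solr node
        // that is too slow (or cut off by a middlebox).
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort())) {
            client.setSoTimeout(200);           // read timeout in milliseconds
            try {
                client.getInputStream().read(); // blocks: nothing is ever sent
                System.out.println("got data");
            } catch (SocketTimeoutException e) {
                System.out.println("Read timed out"); // same message as the Solr log
            }
        }
    }
}
```

So the two things worth comparing are the client's configured read timeout and how long the server actually takes to answer updates under production load.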
