java SSLEngine says NEED_WRAP, call .wrap() and still NEED_WRAP

I am seeing a weird issue with SSLEngine and am wondering if there is an issue with my code or with SSLEngine itself. Here is the order in which I see things:
HandshakeStatus is NEED_WRAP
We call SSLEngine.wrap()
Afterwards, ZERO data has been written to the buffer, SSLEngineResult.getStatus() is OK (not overflow nor underflow :( ), and HandshakeStatus is STILL NEED_WRAP
Most important question: how do I debug this thoroughly? How can I 'see' each message somehow? I can capture the byte stream easily enough, but is there some library that can parse it into SSL handshake objects?
Line 298 (recording the previous handshake status) to line 328 (where we throw the exception with info) is the relevant code here:
https://github.com/deanhiller/webpieces/blob/sslEngineFartingExample/core/core-ssl/src/main/java/org/webpieces/ssl/impl/AsyncSSLEngine3Impl.java
The stack trace was
2019-06-21 08:58:24,562 [-] [webpiecesThreadPool6] Caller+1 at org.webpieces.util.threading.SessionExecutorImpl$RunnableWithKey.run(SessionExecutorImpl.java:123)
ERROR: Uncaught Exception
java.lang.IllegalStateException: Engine issue. hsStatus=NEED_WRAP status=OK previous hsStatus=NEED_WRAP
at org.webpieces.ssl.impl.AsyncSSLEngine3Impl.sendHandshakeMessageImpl(AsyncSSLEngine3Impl.java:328)
at org.webpieces.ssl.impl.AsyncSSLEngine3Impl.sendHandshakeMessage(AsyncSSLEngine3Impl.java:286)
at org.webpieces.ssl.impl.AsyncSSLEngine3Impl.doHandshakeWork(AsyncSSLEngine3Impl.java:133)
at org.webpieces.ssl.impl.AsyncSSLEngine3Impl.doHandshakeLoop(AsyncSSLEngine3Impl.java:246)
at org.webpieces.ssl.impl.AsyncSSLEngine3Impl.unwrapPacket(AsyncSSLEngine3Impl.java:210)
at org.webpieces.ssl.impl.AsyncSSLEngine3Impl.doWork(AsyncSSLEngine3Impl.java:109)
at org.webpieces.ssl.impl.AsyncSSLEngine3Impl.feedEncryptedPacket(AsyncSSLEngine3Impl.java:82)
at org.webpieces.nio.impl.ssl.SslTCPChannel$SocketDataListener.incomingData(SslTCPChannel.java:175)
at org.webpieces.nio.impl.threading.ThreadDataListener$1.run(ThreadDataListener.java:26)
at org.webpieces.util.threading.SessionExecutorImpl$RunnableWithKey.run(SessionExecutorImpl.java:121)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Any ideas? How can I really dig into this further? My preference is a library that takes bytes and spits out SSL objects representing each handshake message or decrypted packet (with any header info that comes with the original encrypted thing).
Specifically, here is the code mentioned above:
HandshakeStatus previousStatus = sslEngine.getHandshakeStatus();

//CLOSE and all the threads that call feedPlainPacket can have contention on wrapping to encrypt and
//must synchronize on sslEngine.wrap
Status lastStatus;
HandshakeStatus hsStatus;
synchronized (wrapLock) {
    HandshakeStatus beforeWrapHandshakeStatus = sslEngine.getHandshakeStatus();
    if (beforeWrapHandshakeStatus != HandshakeStatus.NEED_WRAP)
        throw new IllegalStateException("we should only be calling this method when hsStatus=NEED_WRAP. hsStatus=" + beforeWrapHandshakeStatus);

    //KEEEEEP This very small. wrap and then listener.packetEncrypted
    SSLEngineResult result = sslEngine.wrap(SslMementoImpl.EMPTY, engineToSocketData);
    lastStatus = result.getStatus();
    hsStatus = result.getHandshakeStatus();
}

log.trace(()->mem+"write packet pos="+engineToSocketData.position()+" lim="+
        engineToSocketData.limit()+" status="+lastStatus+" hs="+hsStatus);

if(lastStatus == Status.BUFFER_OVERFLOW || lastStatus == Status.BUFFER_UNDERFLOW)
    throw new RuntimeException("status not right, status="+lastStatus+" even though we sized the buffer to consume all?");

boolean readNoData = engineToSocketData.position() == 0;
engineToSocketData.flip();
try {
    CompletableFuture<Void> sentMsgFuture;
    if(readNoData) {
        log.trace(() -> "ssl engine is farting. READ 0 data. hsStatus="+hsStatus+" status="+lastStatus);
        throw new IllegalStateException("Engine issue. hsStatus="+hsStatus+" status="+lastStatus+" previous hsStatus="+previousStatus);
        //A big hack since the Engine was not working in live testing with FireFox and it would tell us to wrap
        //and NOT output any data AND not BufferOverflow.....you have to do 1 or the other, right!
        //instead cut out of looping since there seems to be no data
        //sslEngineIsFarting = true;
        //sentMsgFuture = CompletableFuture.completedFuture(null);
thanks,
Dean

System.setProperty("jdk.tls.server.protocols", "TLSv1.2");
System.setProperty("jdk.tls.client.protocols", "TLSv1.2");
The system properties should be set before JSSE gets loaded; for example, set them on the command line. Could that be the reason the system properties do not work for you?
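For example, a minimal sketch of starting the app that way (the jar name here is just a placeholder):
java -Djdk.tls.server.protocols=TLSv1.2 -Djdk.tls.client.protocols=TLSv1.2 -jar yourServer.jar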
... the SSLEngine tells us we need to WRAP which is correct as we need to be replying with a close_notify as well to prevent truncation attacks BUT instead the engine returns 0 bytes and tells us the state of the engine is still NEED_WRAP.
TLS 1.3 uses a half-close policy (see RFC 8446). When close_notify is received, the inbound side is closed while the outbound side stays open. The local side cannot receive any more data, but is still allowed to send application data.
There are a few compatibility impacts from the half-close policy (see the JDK 11 release notes, https://www.oracle.com/technetwork/java/javase/11-relnote-issues-5012449.html). The system property "jdk.tls.acknowledgeCloseNotify" can be used as a workaround; for more details, please refer to the JDK 11 release notes.
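A minimal sketch of applying that workaround, assuming it is set before any JSSE class loads (same caveat as the protocol properties above):
// Make TLS 1.3 acknowledge a received close_notify with its own close_notify,
// i.e. behave like the duplex close of TLS 1.2 and earlier.
System.setProperty("jdk.tls.acknowledgeCloseNotify", "true");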

oh, even better, downgrading to 1.8.0_111 yields success
2019-06-30 00:11:54,813 [main] Caller+1 at
WEBPIECESxPACKAGE.DevelopmentServer.main(DevelopmentServer.java:32)
INFO: Starting Development Server under java version=1.8.0_111
webpiecesThreadPool5, READ: TLSv1.2 Alert, length = 26
webpiecesThreadPool2, RECV TLSv1.2 ALERT: warning, close_notify
webpiecesThreadPool2, closeInboundInternal()
webpiecesThreadPool2, closeOutboundInternal()
webpiecesThreadPool5, RECV TLSv1.2 ALERT: warning, close_notify
webpiecesThreadPool5, closeInboundInternal()
webpiecesThreadPool5, closeOutboundInternal()
webpiecesThreadPool2, SEND TLSv1.2 ALERT: warning, description = close_notify
webpiecesThreadPool5, SEND TLSv1.2 ALERT: warning, description = close_notify
webpiecesThreadPool2, WRITE: TLSv1.2 Alert, length = 26
webpiecesThreadPool5, WRITE: TLSv1.2 Alert, length = 26

OK, more and more this looks like a JDK bug, and this seems to prove it now. My log shows I was on JDK 11.0.3 (on a Mac):
2019-06-29 23:37:18,793 [main][] [-] Caller+1 at
WEBPIECESxPACKAGE.DevelopmentServer.main(DevelopmentServer.java:32)
INFO: Starting Development Server under java version=11.0.3
I used debug ssl flag settings of
-Djavax.net.debug=ssl:handshake:verbose:keymanager:trustmanager -Djava.security.debug=access:stack
I surrounded all my calls to sslEngine.wrap and sslEngine.unwrap with my own logs as well, and thankfully every entry had the thread name in it too.
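Roughly, the logging around each call looks like this (a simplified sketch of what the linked repo does):
// Log the SSLEngineResult around every wrap/unwrap so each handshake step is visible
// right next to the JSSE debug output.
SSLEngineResult result = sslEngine.wrap(SslMementoImpl.EMPTY, engineToSocketData);
log.trace(() -> Thread.currentThread().getName()
        + " wrap: status=" + result.getStatus()
        + " hs=" + result.getHandshakeStatus()
        + " consumed=" + result.bytesConsumed()
        + " produced=" + result.bytesProduced());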
It seems that when clients send a warning like this
javax.net.ssl|DEBUG|14|webpiecesThreadPool1|2019-06-29 23:27:14.860
MDT|Alert.java:232|Received alert message (
"Alert": {
"level" : "warning",
"description": "close_notify"
}
)
the SSLEngine tells us we need to WRAP which is correct as we need to be replying with a close_notify as well to prevent truncation attacks BUT instead the engine returns 0 bytes and tells us the state of the engine is still NEED_WRAP.
ALSO, there were ZERO ssl debug logs from Java between my log lines before and after the sslEngine.wrap call, almost like the sslEngine farted and did nothing.
Then, I thought I would be really cool and added this as the first lines in the main() method:
System.setProperty("jdk.tls.server.protocols", "TLSv1.2");
System.setProperty("jdk.tls.client.protocols", "TLSv1.2");
but the selected version in the debug info was still TLSv1.3....grrrrrr...oh well, I give up.
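The only other option I can think of is pinning the protocol on the engine itself rather than via system properties. A sketch (keyManagers and trustManagers stand in for whatever key/trust setup is already there):
// Restrict the protocol on the engine itself; unlike the jdk.tls.* properties,
// this does not depend on when JSSE happens to initialize.
SSLContext ctx = SSLContext.getInstance("TLS");
ctx.init(keyManagers, trustManagers, null); // placeholders for the existing setup
SSLEngine engine = ctx.createSSLEngine();
engine.setUseClientMode(false);
engine.setEnabledProtocols(new String[] { "TLSv1.2" });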

Related

How to avoid "failed to send operations" errors after switching to the "exactly-once" delivery strategy?

I recently tried to switch my subscriptions in GCP Pub/Sub to the "exactly-once" delivery strategy. However, I started seeing the following warnings ~10 times every 30 minutes in my application logs:
com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Some acknowledgement ids in the request were invalid. This could be because the acknowledgement ids have expired or the acknowledgement ids were malformed.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:92)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:98)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:66)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:67)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:574)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:544)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:535)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:563)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:744)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:723)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Some acknowledgement ids in the request were invalid. This could be because the acknowledgement ids have expired or the acknowledgement ids were malformed.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 17 more
They're immediately followed by the following INFO log messages in the same thread:
Permanent error invalid ack id message, will not resend
I didn't see any problems caused by these warnings, but it's a bit hard to tell because my application is processing a decent number of messages (~1000/hour). I initially thought that these warnings were just short-term "aftershocks" from switching to the "exactly-once" strategy. However, I waited for about 2 hours and they kept occurring with the same frequency, showing no sign of disappearing. I then disabled the "exactly-once" strategy and they went away immediately after. Can anyone tell me whether these warnings are dangerous, what side effects I can expect, and most importantly how I can get rid of them?
I'm using version 3.4.0 of spring-cloud-gcp-dependencies and spring-cloud-gcp-starter-pubsub. I'm also using Spring Cloud Stream to process the incoming messages and I rely on it to automatically acknowledge the messages.
I have the following configuration set in my application.yaml file:
spring:
  cloud:
    gcp:
      pubsub:
        subscriber:
          executor-threads: 15
          max-ack-extension-period: 23400 # 6 hours and 30 minutes
          acknowledgement-deadline: 600 # Maximum value
For context: The messages in my application represent jobs for execution, and they could take quite a while to finish - hence the 6h30m maximum acknowledgement extension period.
I also saw the following StackOverflow question:
How to handle errors during message acknowledgement using google pubsub java library?
From what I understand, the consequence of these warnings is that the messages will be redelivered to my application, but this is exactly what I want to avoid.
Thanks for the question, Alexander.
The errors you are seeing happen when the modifyAckDeadline or Acknowledgement request to the service fails because the acknowledgement id has already expired. In such cases, the service considers the expired acknowledgement id invalid, since a newer delivery might already be in flight. This is per the guarantees of exactly-once delivery. There could be a few reasons for it:
The request was delayed due to network delays, and by the time it arrived at the server, the acknowledgement id lease had already expired.
The task issuing the modifyAckDeadline or Acknowledgement request is overwhelmed (high CPU/memory/network usage), leading to a delay in issuing those requests.
I suggest setting min-duration-per-ack-extension to a high number to reduce the issues mentioned above. This will help mitigate cases where the acknowledgement id lease expires. The highest value you can set for this field is 600 seconds.
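In the application.yaml above that would look roughly like this (assuming your spring-cloud-gcp version exposes the setting under the subscriber properties):
spring:
  cloud:
    gcp:
      pubsub:
        subscriber:
          min-duration-per-ack-extension: 600 # seconds; the highest allowed value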
Additionally, as mentioned in the other Stack Overflow question, you should consider checking the response of your acknowledgement operations. This can tell your application whether it should expect a redelivery. Sample.
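If you drop down to the plain google-cloud-pubsub client (instead of Spring Cloud Stream's automatic acks), checking the ack outcome looks roughly like the sketch below; the project and subscription names are placeholders, and the exact API can differ between client versions:
import java.util.concurrent.Future;
import com.google.cloud.pubsub.v1.AckReplyConsumerWithResponse;
import com.google.cloud.pubsub.v1.AckResponse;
import com.google.cloud.pubsub.v1.MessageReceiverWithAckResponse;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;

public class AckCheckingSubscriber {
    public static void main(String[] args) {
        MessageReceiverWithAckResponse receiver = (message, consumer) -> {
            // ... process the message here ...
            Future<AckResponse> ackFuture = consumer.ack();
            try {
                AckResponse response = ackFuture.get();
                if (response != AckResponse.SUCCESSFUL) {
                    // INVALID typically means the ack id expired, so a redelivery is expected
                    System.err.println("Ack was not accepted: " + response);
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };
        Subscriber subscriber = Subscriber.newBuilder(
                ProjectSubscriptionName.of("my-project", "my-subscription"), receiver).build();
        subscriber.startAsync().awaitRunning();
    }
}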

Consumer connect errors from AWS MSK via IAM - SSL problem or something else?

"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Initiating connection to node node.kafka.us-west-2.amazonaws.com:9098 (id: -2 rack: null) using address node.kafka.us-west-2.amazonaws.com/10.x.x.x"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Set SASL client state to SEND_APIVERSIONS_REQUEST"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Creating SaslClient: client=null;service=kafka;serviceHostname=node.kafka.us-west-2.amazonaws.com;mechs=[AWS_MSK_IAM]"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"Setting SASL/AWS_MSK_IAM client state to SEND_CLIENT_FIRST_MESSAGE"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Created socket with SO_RCVBUF = 65562, SO_SNDBUF = 131124, SO_TIMEOUT = 0 to node -2"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Completed connection to node -2. Fetching API versions."}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Connection with node.kafka.us-west-2.amazonaws.com/10.x.x.x disconnected","stack_trace":"java.io.IOException: Connection reset by peer\n\tat sun.nio.ch.FileDispatcherImpl.read0(FileDispatcherImpl.java)\n\tat sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)\n\tat sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276)\n\tat sun.nio.ch.IOUtil.read(IOUtil.java:245)\n\tat sun.nio.ch.IOUtil.read(IOUtil.java:223)\n\tat sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:356)\n\tat org.apache.kafka.common.network.SslTransportLayer.readFromSocketChannel(SslTransportLayer.java:228)\n\tat org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:291)\n\tat org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178)\n\tat org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)\n\tat org.apache.kafka.common.network.Selector.poll(Selector.java:481)\n\tat org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)\n\tat org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:246)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.coordinatorUnknownAndUnready(ConsumerCoordinator.java:459)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:487)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1262)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollConsumer(KafkaMessageListenerContainer.java:1584)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1559)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1360)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1274)\n\t... 4 frames truncated\n"}
"level":"DEBUG","logger_name":"o.a.k.common.network.SslTransportLayer","thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[SslTransportLayer channelId=-2 key=channel=java.nio.channels.SocketChannel[connection-pending remote=node.kafka.us-west-2.amazonaws.com/10.x.x.x:9098], selector=sun.nio.ch.KQueueSelectorImpl#690f136f, interestOps=8, readyOps=0] Failed to send SSL Close message","stack_trace":"java.io.IOException: Unexpected status returned by SSLEngine.wrap, expected CLOSED, received OK. Will not send close message to peer.\n\tat org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:194)\n\tat org.apache.kafka.common.utils.Utils.closeAll(Utils.java:974)\n\tat org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:155)\n\tat org.apache.kafka.common.network.Selector.doClose(Selector.java:955)\n\tat org.apache.kafka.common.network.Selector.close(Selector.java:939)\n\tat org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:625)\n\tat org.apache.kafka.common.network.Selector.poll(Selector.java:481)\n\tat org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)\n\tat org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:246)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.coordinatorUnknownAndUnready(ConsumerCoordinator.java:459)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:487)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1262)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollConsumer(KafkaMessageListenerContainer.java:1584)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1559)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1360)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1274)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java)\n\tat java.lang.Thread.run(Thread.java:829)\n"}
{"#timestamp":"2022-09-29 08:05:12.245-0700","level":"INFO","logger_name":"org.apache.kafka.clients.NetworkClient","thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Node -2 disconnected."}
I have attempted to follow all the directions from here and here.
I've tested my IAM policy using aws kafka <various> .
I've checked that port 9098 is open.
My policy has everything permitted. If I can get this working I'll deal with limiting permissions later.
Just trying to get a consumer to start at this point.
Suggestions on where to look for the issue?
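For reference, the consumer's auth settings follow the usual aws-msk-iam-auth setup, roughly this sketch (broker, group id, and deserializers are placeholders for what the Spring config resolves to):
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "node.kafka.us-west-2.amazonaws.com:9098");
props.put("group.id", "groupId-1");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("security.protocol", "SASL_SSL");
props.put("sasl.mechanism", "AWS_MSK_IAM");
props.put("sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;");
props.put("sasl.client.callback.handler.class", "software.amazon.msk.auth.iam.IAMClientCallbackHandler");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);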
Edit:
Added some SSL debug to the call. I am seeing a ClientHello being sent but nothing back from the server, just the auth failure.
Using openssl s_client -connect <host> -tls1_2 I was able to get a "Verify return code: 0 (ok)" from the server
More:
I think something is blocking the SSL request:
CONNECTED(00000005)
write:errno=54
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 0 bytes
I can view the open port via NMAP, so I'm not sure what's happening there either.
Just to close this off: I think MSK doesn't allow iam auth from outside a VPC. I was able to get my test working from EKS but not otherwise.

MQ7 with Java 7 and SSL is not working; it was working 6 months ago

We have one QM and one CHANNEL, and many QUEUES created for clients. Around 5 clients are connected to this QM for their transactions, each connected to its respective QUEUE. There is a jks file created on this QM for the SSL connection. Each of the 5 clients connects with the jks file + SSL_RSA_WITH_RC4_128_SHA from their Java client. The QM is also configured with SSLCIPH(RC4_SHA_US).
Now, all of a sudden and without any Java client change, 1 client is not able to connect to the configured QM. All the others are able to connect to the same QM without any issue.
AMQERR01.LOG does not contain any specific exception or error.
The application logs show a common MQ exception:
Error as com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2397'
2397 - could a cipher spec/suite mismatch be a possibility?
We enabled tracing (strmqtrc -m TEST.QM -t detail -t all) and saw trace logs in the path (C:\Program Files (x86)\IBM\Websphere MQ\trace), but could not get any details on why the SSL connection is not happening.
We did one more exercise: we created a new QM for the problem client and tested without SSL, and it worked. When we enabled SSL on the new QM and the Java client, the same 2397 started being logged.
Could someone guide me to better logging and tracing in MQ that can show why 2397 is thrown?
Could someone guide me to better logging and tracing in Java using -D [-Djavax.net.debug=all] that can show why 2397 is thrown?
MQ Version ->7
MQ Server in ->Windows
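On the Java client side, a minimal sketch of turning the TLS trace on before the queue manager connection is created; the second line is only an assumption worth checking, since newer JDK 8 updates disable RC4 cipher suites by default, which would make SSL_RSA_WITH_RC4_128_SHA fail with 2397:
import java.security.Security;

// Same effect as -Djavax.net.debug=all, but set programmatically before any MQ connection is made.
System.setProperty("javax.net.debug", "all");
// Also worth dumping: if "RC4" shows up here, the JDK itself is refusing the RC4 cipher suite.
System.out.println("jdk.tls.disabledAlgorithms = " + Security.getProperty("jdk.tls.disabledAlgorithms"));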
From the trace logs:
returning TEST.QM
Freeing cbmindex:0 pointer:24DDB540 length:2080
-----} TreeNode.getMQQmgrExtObject (rc=OK)
cbmindex:10
-------------} xcsFreeMemFn (rc=OK)
------------} amqjxcoa.wmqGetAttrs (rc=OK)
-----{ UiQueueManager.testQmgrAttribute
-------------{ Message.getMessage
testing object 'TEST.QM'
An internal method detected an unexpected system return code. The method {0} returned {1}. (AMQ4580)
checking attribute 'QmgrCmdLevelGreaterThan'
-------------} Message.getMessage (rc=OK)
for value '510'
-----------}! NativeCalls.getAttrs (rc=Unknown(C35E))
-----} UiQueueManager.testQmgrAttribute (rc=OK)
Message = An internal method detected an unexpected system return code. The method wmq_get_attrs returned "retval.rc2 = 268460388". (AMQ4580), msgID = AMQ4580, rc = 50014, reason = 268460388, severity = 30
result = true
---} TreeNode.testAttribute (rc=OK)
---{ TreeNode.testAttribute
-----{ QueueManagerTreeNode.toString
-----} QueueManagerTreeNode.toString (rc=OK)
testing object 'TEST.QM'
checking attribute 'OamTreeNode'
-----------{ NativeCalls.getAttrs
------------{ amqjxcoa.wmqGetAttrs
qmgr:2A7B32C8, stanza:2A7B32C4, version:1
for value 'true'
QMgrName('TEST.QM')
-----{ TreeNode.getMQQmgrExtObject
StanzaName('QMErrorLog')
testing object 'TEST.QM'
Full QM.INI filename: SOFTWARE\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\TEST!QM, Multi-Instance: FALSE
--------------} xcsGetIniFilename (rc=OK)
--------------{ xcsGetIniAttrs
---------------{ xcsBrowseIniCallback
FileType = (1)
----------------{ xcsBrowseRegistryCallback
xcsBrowseRegistryCallback
-----------------{ xusAddStanzaLineList
------------------{ xcsGetMemFn
checking attribute 'PluginEnabled'
component:24 function:15 length:2080 options:0 cbmindex:0 *pointer:24DDB540
------------------} xcsGetMemFn (rc=OK)
for value 'com.ibm.mq.explorer.oam'
RetCode (OK)
-----------------} xusAddStanzaLineList (rc=OK)
-----------------{ xusAddStanzaLineList
------------------{ xcsGetMemFn
-----{ UiPlugin.isPluginEnabled
component:24 function:15 length:2080 options:0 cbmindex:1 *pointer:24DDDFE8
------------------} xcsGetMemFn (rc=OK)
RetCode (OK)
-----------------} xusAddStanzaLineList (rc=OK)
testing plugin_id: com.ibm.mq.explorer.oam
-----------------{ xurGetSpecificRegStanza
-------{ PluginRegistrationManager.isPluginEnabled
Couldn't open key (QMErrorLog) result 2: The system cannot find the file specified.
MQ version 7.0.1.9
jdk1.8.0_181-i586
com.ibm.mq*.jar version:
Specification-Version: 6.0.2.1
Implementation-Version: 6.0.2.1 -j600-201-070305

SSLSocketImpl.startHandshake() throws SSLHanshakeException/EOFException when resuming cached sessions

Using Apache FTPSClient to listFiles(String)....
The application sometimes crashes after resuming an SSL session and then calling sslSocketImpl.startHandshake() from the Apache FTPSClient code.
I set javax.net.debug to print the ssl information...
System.setProperty("javax.net.debug", "all");
And this is what I get.
%% Client cached [Session-3, SSL_RSA_WITH_3DES_EDE_CBC_SHA]
%% Try resuming [Session-3, SSL_RSA_WITH_3DES_EDE_CBC_SHA] from port 4149
*** ClientHello, TLSv1
....
main, received EOFException: error
main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
main, SEND TLSv1 ALERT: fatal, description = handshake_failure
main, WRITE: TLSv1 Alert, length = 2
[Raw write]: length = 7
0000: 15 03 01 00 02 02 28 ......(
main, called closeSocket()
[Mon Aug 30 17:41:52 PDT 2010][class com.smgtec.sff.fileupload.poller.BasicFTPAccess] - Could not list directory: sqjavax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:808)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1096)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1123)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1107)
at com.smgtec.sff.fileupload.poller.FixedFTPSClient._openDataConnection_(FixedFTPSClient.java:525)
at org.apache.commons.net.ftp.FTPClient.initiateListParsing(FTPClient.java:2296)
at org.apache.commons.net.ftp.FTPClient.initiateListParsing(FTPClient.java:2269)
Padded plaintext before ENCRYPTION: len = 32
0000: 50 41 at org.apache.commons.net.ftp.FTPClient.listFiles(FTPClient.java:2046)
at com.smgtec.sff.fileupload.poller.BasicFTPAccess.listFiles(BasicFTPAccess.java:100)
at com.smgtec.sff.fileupload.poller.FTPPoller.addFileForProcessing(FTPPoller.java:67)
at com.smgtec.sff.fileupload.poller.FTPPoller.main(FTPPoller.java:385)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:333)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:789)
... 10 more
We also have jscape FTPS client here and it produces the same problem.
I suggest you include some retry logic in your FTPPoller; it looks like the host is closing the connection rather than your code. We used to see occasional "connection closed by remote host" errors, which are best handled by simply retrying.
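Something along these lines (a rough sketch; the retry count and delay are arbitrary):
import java.io.IOException;
import org.apache.commons.net.ftp.FTPFile;
import org.apache.commons.net.ftp.FTPSClient;

// Retry the listing a few times before giving up; the remote host sometimes drops
// the data connection and a fresh attempt usually succeeds.
private FTPFile[] listWithRetry(FTPSClient ftp, String dir) throws IOException, InterruptedException {
    IOException last = null;
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            return ftp.listFiles(dir);
        } catch (IOException e) {
            last = e;
            Thread.sleep(2000); // small back-off between attempts
        }
    }
    throw last;
}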
I solved it like this, using SSLSession.invalidate(), and it seems to work now... although we aren't using FTPS anymore. If this is a true solution, there is a problem in either Apache commons-net FTPSClient or the FTP server we are connecting to.
ftp = new FTPSClient()
{
    private Socket socket;

    protected Socket _openDataConnection_(int command, String arg) throws IOException
    {
        if (socket != null && socket instanceof SSLSocket)
        {
            // We have problems resuming cached SSL Sessions. Exceptions are
            // thrown and the system crashes... So we invalidate each SSL
            // session we used last.
            SSLSocket sslSocket = (SSLSocket) socket;
            sslSocket.getSession().invalidate();
        }
        socket = super._openDataConnection_(command, arg);
        return socket;
    }
};
BTW, I believe we were connecting to a FileZilla FTP server. I suspect this fix will cause more network chatter, passing keys/certs and so forth back and forth.

"bad record MAC" SSL error between Java and PortgreSQL

We have a problem with random disconnections between our Java apps and our PostgreSQL 8.3 server, with a "bad record MAC" SSL error.
We run Debian / Lenny on both sides. On the client side, we see:
main, WRITE: TLSv1 Application Data, length = 104
main, READ: TLSv1 Application Data, length = 24
main, READ: TLSv1 Application Data, length = 104
main, WRITE: TLSv1 Application Data, length = 384
main, READ: TLSv1 Application Data, length = 24
main, READ: TLSv1 Application Data, length = 8216
%% Invalidated: [Session-1, SSL_RSA_WITH_3DES_EDE_CBC_SHA]
main, SEND TLSv1 ALERT: fatal, description = bad_record_mac
main, WRITE: TLSv1 Alert, length = 24
main, called closeSocket()
main, handling exception: javax.net.ssl.SSLException: bad record MAC
2010-03-09 02:36:27,980 WARN org.hibernate.util.JDBCExceptionReporter.logExceptions(JDBCExceptionReporter.java:100) - SQL Error: 0, SQLState: 08006
2010-03-09 02:36:27,980 ERROR org.hibernate.util.JDBCExceptionReporter.logExceptions(JDBCExceptionReporter.java:101) - An I/O error occured while sending to the backend.
2010-03-09 02:36:27,981 ERROR org.hibernate.transaction.JDBCTransaction.toggleAutoCommit(JDBCTransaction.java:232) - Could not toggle autocommit
org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:220)
at org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(AbstractJdbc2Connection.java:650)
at org.postgresql.jdbc2.AbstractJdbc2Connection.commit(AbstractJdbc2Connection.java:670)
at org.postgresql.jdbc2.AbstractJdbc2Connection.setAutoCommit(AbstractJdbc2Connection.java:633)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.jdbc.datasource.SingleConnectionDataSource$CloseSuppressingInvocationHandler.invoke(SingleConnectionDataSource.java:336)
at $Proxy17.setAutoCommit(Unknown Source)
at org.hibernate.transaction.JDBCTransaction.toggleAutoCommit(JDBCTransaction.java:228)
at org.hibernate.transaction.JDBCTransaction.rollbackAndResetAutoCommit(JDBCTransaction.java:220)
at org.hibernate.transaction.JDBCTransaction.rollback(JDBCTransaction.java:196)
at org.hibernate.ejb.TransactionImpl.rollback(TransactionImpl.java:85)
at org.springframework.orm.jpa.JpaTransactionManager.doRollback(JpaTransactionManager.java:482)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processRollback(AbstractPlatformTransactionManager.java:823)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.rollback(AbstractPlatformTransactionManager.java:800)
at org.springframework.transaction.interceptor.TransactionAspectSupport.completeTransactionAfterThrowing(TransactionAspectSupport.java:339)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:110)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:635)
...
Caused by: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLException: bad record MAC
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1255)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1267)
at com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:43)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at org.postgresql.core.PGStream.flush(PGStream.java:508)
at org.postgresql.core.v3.QueryExecutorImpl.sendSync(QueryExecutorImpl.java:692)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:193)
... 22 more
Caused by: javax.net.ssl.SSLException: bad record MAC
at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1611)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1569)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:850)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:746)
at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:135)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:104)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:186)
at org.postgresql.core.PGStream.Receive(PGStream.java:445)
at org.postgresql.core.PGStream.ReceiveTupleV3(PGStream.java:350)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1322)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:194)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:254)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1808)
at org.hibernate.loader.Loader.doQuery(Loader.java:697)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
at org.hibernate.loader.Loader.loadCollection(Loader.java:2015)
at org.hibernate.loader.collection.CollectionLoader.initialize(CollectionLoader.java:59)
at org.hibernate.persister.collection.AbstractCollectionPersister.initialize(AbstractCollectionPersister.java:587)
at org.hibernate.event.def.DefaultInitializeCollectionEventListener.onInitializeCollection(DefaultInitializeCollectionEventListener.java:83)
at org.hibernate.impl.SessionImpl.initializeCollection(SessionImpl.java:1743)
at org.hibernate.collection.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:366)
at org.hibernate.collection.PersistentSet.add(PersistentSet.java:212)
...
The cipher suites SSL_RSA_WITH_3DES_EDE_CBC_SHA or SSL_RSA_WITH_RC4_128_SHA were used.
We tried on the client side:
the OpenJDK package
the sun JDK package
the sun tar package
the libbcprov-java package
the PostgreSQL driver 8.3 instead of 8.4
On the server side we see:
2010-03-01 08:26:05 CET [18513]: [161833-1] LOG: SSL error: sslv3 alert bad record mac
2010-03-01 08:26:05 CET [18513]: [161834-1] LOG: could not receive data from client: Connection reset by peer
2010-03-01 08:26:05 CET [18513]: [161835-1] LOG: unexpected EOF on client connection
The error type seems to be SSL_R_SSLV3_ALERT_BAD_RECORD_MAC.
The SSL layer is configured with:
ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:#STRENGTH'
and on the server side we changed the cipher suites to:
'ALL:!SSLv2:!MEDIUM:!AES:!ADH:!LOW:!EXP:!MD5:#STRENGTH'
but none of these changes fixed the problem. Suggestions appreciated!
I don't know how Java deals with it, but a number of vendors recently released security updates that disabled SSL renegotiation support, sometimes in broken ways. Some have been fixed since, some not. This could be your problem if it is something that happens after a fairly large amount of data (512MB by default) has passed over a connection. Since you're likely using a connection pool, that seems quite possible.
In PostgreSQL 8.4.3 (due out this week), we have added a configuration parameter that lets you disable SSL renegotiation completely; that might be worth a try. (There are also new releases in the previous trees, like 8.3, that contain this feature.)
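For reference, the relevant line in postgresql.conf would look roughly like this (0 disables renegotiation entirely; the default corresponds to the 512MB mentioned above):
ssl_renegotiation_limit = 0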
