I have a simple application that exposes a RESTful GET endpoint called 'getAllDeviceData', which simply returns a List of the data fetched for all devices from the device table in a DB.
For each request I authenticate the user by validating HttpServletRequest.getUserPrincipal().
To speed up the process I have used parallelStream with lambda expressions.
In the parallel stream I invoke another method called 'getDeviceData', in which I do the authentication and fetch the data from the DB.
The problem is that when the parallel stream invokes the getDeviceData method, I get a NullPointerException and the parallel stream fails to complete.
The cause is that HttpServletRequest.getUserPrincipal() is null inside that method, although it does exist in 'getAllDeviceData' (where the lambda expression is).
This works without any issue if I replace 'parallelStream()' with just 'stream()', but then the parallel nature is lost.
@Override
@ResponseBody
@RequestMapping(value = "getAllDeviceData", method = RequestMethod.GET, consumes = "*")
public List<List<Data>> getAllDeviceData(
        @RequestParam(value = "recordLimit", required = false) final Integer recordLimit,
        final HttpServletRequest request) {
    final List<Device> deviceList = deviceService.getAllDevices();
    final List<List<Data>> dataList = deviceList.parallelStream()
            .map(device -> getDeviceData(recordLimit, device.getDeviceId(), request))
            .collect(Collectors.toList());
    return dataList;
}
private List<Data> getDeviceData(Integer recordLimit, String deviceId, HttpServletRequest request) {
    if (request.getUserPrincipal() == null) {
        logger.info("User Principle Null - 1");
    } else {
        logger.info("User Principle Not Null - 1");
    }
    authService.doAuthenticate(request);
    // if authenticated, proceed with the following...
    List<Data> deviceData = deviceService.getGetDeviceData(deviceId);
    return deviceData;
}
However, I have observed something.
Look at the following log (unnecessary parts have been omitted) from the application above.
In it, the main threads (e.g. http-nio-7070-exec-2, etc. - the threads of the application server's own pool) work fine and print 'User Principle Not Null - 1', but in the forked worker threads of the parallel stream, such as ForkJoinPool.commonPool-worker-2, HttpServletRequest.getUserPrincipal() is null.
2018-01-15 15:28:06,897 INFO [http-nio-7070-exec-2] User Principle Not Null - 1
2018-01-15 15:28:06,897 INFO [ForkJoinPool.commonPool-worker-2] User Principle Null - 1
2018-01-15 15:28:06,906 INFO [ForkJoinPool.commonPool-worker-3] User Principle Null - 1
2018-01-15 15:28:06,955 INFO [ForkJoinPool.commonPool-worker-2] User Principle Null - 1
2018-01-15 15:28:06,955 INFO [ForkJoinPool.commonPool-worker-1] User Principle Null - 1
2018-01-15 15:28:06,957 INFO [ForkJoinPool.commonPool-worker-2] User Principle Null - 1
2018-01-15 15:28:06,959 INFO [ForkJoinPool.commonPool-worker-3] User Principle Null - 1
2018-01-15 15:28:07,064 INFO [ForkJoinPool.commonPool-worker-2] User Principle Null - 1
2018-01-15 15:28:07,076 INFO [http-nio-7070-exec-2] User Principle Not Null -1
2018-01-15 15:28:07,078 INFO [ForkJoinPool.commonPool-worker-1] User Principle Null - 1
I am still new to lambda expressions and parallel streams.
Please help me understand what the issue is here.
Java Details:
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
The root cause is that Spring is injecting a SecurityContextHolderAwareRequestWrapper instance into your method. This wrapper executes the following when request.getUserPrincipal() is called:
private Authentication getAuthentication() {
    Authentication auth = SecurityContextHolder.getContext().getAuthentication();
SecurityContextHolder has different strategies. By default the MODE_THREADLOCAL strategy is used. That's why you have a user principal in the main threads but not in the ForkJoinPool threads.
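The same behaviour is easy to reproduce without Spring at all. Here is a small self-contained Java sketch (the names are made up for illustration) showing that a value bound to a plain ThreadLocal on the calling thread is not visible to the common-pool workers a parallel stream uses:

import java.util.Arrays;

public class ThreadLocalDemo {
    // stands in for the per-thread SecurityContext used by MODE_THREADLOCAL
    private static final ThreadLocal<String> USER = new ThreadLocal<>();

    public static void main(String[] args) {
        USER.set("alice"); // simulates the principal bound to the HTTP request thread

        Arrays.asList(1, 2, 3, 4).parallelStream().forEach(i ->
                // elements processed on ForkJoinPool.commonPool workers print "null",
                // only elements processed on the calling thread itself print "alice"
                System.out.println(Thread.currentThread().getName() + " -> " + USER.get()));
    }
}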
The -Dspring.security.strategy=MODE_INHERITABLETHREADLOCAL VM option is a solution to your problem. The InheritableThreadLocal javadoc and the InheritableThreadLocalSecurityContextHolderStrategy source code might add additional value to your understanding.
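If you prefer not to pass a VM option, the same strategy can also be selected programmatically, as long as it happens before any SecurityContext is created. A minimal sketch, assuming a Spring Boot style entry point:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.security.core.context.SecurityContextHolder;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        // same effect as -Dspring.security.strategy=MODE_INHERITABLETHREADLOCAL
        SecurityContextHolder.setStrategyName(SecurityContextHolder.MODE_INHERITABLETHREADLOCAL);
        SpringApplication.run(Application.class, args);
    }
}

Keep in mind that an InheritableThreadLocal is only copied when a child thread is created, so ForkJoinPool workers that already exist keep the context they inherited at creation time; for long-lived pools this can still surprise you.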
I'm trying to measure the latency of my service at a lower level. Poking around, I saw that it is possible to call addStreamTracerFactory on the gRPC server builder.
I've made the simple implementation below and printed the logs:
val server = io.grpc.netty.NettyServerBuilder.forPort(ApplicationConfig.Service.bindPort).addStreamTracerFactory(ServerStreamTracerFactory)....
class Telemetry(fullMethodName: String, headers: Metadata) extends ServerStreamTracer with LazyLogging {
  override def serverCallStarted(callInfo: ServerStreamTracer.ServerCallInfo[_, _]): Unit = {
    logger.info(s"Telemetry '$fullMethodName' '$headers' callinfo:$callInfo")
    super.serverCallStarted(callInfo)
  }

  override def inboundMessage(seqNo: Int): Unit = {
    logger.info(s"inboundMessage $seqNo")
    super.inboundMessage(seqNo)
  }

  override def inboundMessageRead(seqNo: Int, optionalWireSize: Long, optionalUncompressedSize: Long): Unit = {
    logger.info(s"inboundMessageRead $seqNo $optionalWireSize $optionalUncompressedSize")
    super.inboundMessageRead(seqNo, optionalWireSize, optionalUncompressedSize)
  }

  override def outboundMessage(seqNo: Int): Unit = {
    logger.info(s"outboundMessage $seqNo")
    super.outboundMessage(seqNo)
  }

  override def outboundMessageSent(seqNo: Int, optionalWireSize: Long, optionalUncompressedSize: Long): Unit = {
    logger.info(s"outboundMessageSent $seqNo $optionalWireSize $optionalUncompressedSize")
    super.outboundMessageSent(seqNo, optionalWireSize, optionalUncompressedSize)
  }

  override def streamClosed(status: Status): Unit = {
    logger.info(s"streamClosed $status")
    super.streamClosed(status)
  }
}

object ServerStreamTracerFactory extends Factory with LazyLogging {
  logger.info("called")

  override def newServerStreamTracer(fullMethodName: String, headers: Metadata): ServerStreamTracer = {
    logger.info(s"called with $fullMethodName $headers")
    new Telemetry(fullMethodName, headers)
  }
}
I'm running a simple gRPC client in a loop and examining the output of the server stream tracer.
I see that the "lifecycle" of the logs repeats itself. Here is one iteration (it prints the exact same thing again and again):
22:15:06 INFO [grpc-default-worker-ELG-3-2] [newServerStreamTracer:38] [ServerStreamTracerFactory$] called with com.dy.affinity.service.AffinityService/getAffinities Metadata(content-type=application/grpc,user-agent=grpc-python/1.15.0 grpc-c/6.0.0 (osx; chttp2; glider),grpc-accept-encoding=identity,deflate,gzip,accept-encoding=identity,gzip)
22:15:06 INFO [grpc-default-executor-0] [serverCallStarted:8] [Telemetry] Telemetry 'com.dy.affinity.service.AffinityService/getAffinities' 'Metadata(content-type=application/grpc,user-agent=grpc-python/1.15.0 grpc-c/6.0.0 (osx; chttp2; glider),grpc-accept-encoding=identity,deflate,gzip,accept-encoding=identity,gzip)' callinfo:io.grpc.internal.ServerCallInfoImpl#5badffd8
22:15:06 INFO [grpc-default-worker-ELG-3-2] [inboundMessage:13] [Telemetry] inboundMessage 0
22:15:06 INFO [grpc-default-worker-ELG-3-2] [inboundMessageRead:17] [Telemetry] inboundMessageRead 0 19 -1
22:15:06 INFO [pool-1-thread-5] [outboundMessage:21] [Telemetry] outboundMessage 0
22:15:06 INFO [pool-1-thread-5] [outboundMessageSent:25] [Telemetry] outboundMessageSent 0 0 0
22:15:06 INFO [grpc-default-worker-ELG-3-2] [streamClosed:29] [Telemetry] streamClosed Status{code=OK, description=null, cause=null}
A few things aren't quite clear to me just from looking at these logs:
Why is a new stream being created for each request? I thought the gRPC client is supposed to re-use the connection, so "stream closed" shouldn't be called, right?
If the stream is being re-used, how come the inboundMessage number (and outboundMessage) is always "0"? (Even when I started multiple clients in parallel it was always 0.) In what case should the message number not be 0?
If the stream isn't being re-used, how should I configure the clients differently to re-use the connection?
In gRPC, one HTTP/2 stream is created for each RPC (and if retries or hedging is enabled there can be more than one stream per RPC). HTTP/2 streams are multiplexed on one connection, and it's pretty cheap to open and close streams. So it's the connection that is re-used, not the stream.
The seqNo you get from the tracer methods is the seqNo of messages within that stream, which starts from 0. It looks like you are doing unary RPCs, each of which makes one request, gets one response and then closes. What you see is completely normal.
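To make sure the connection really is re-used on the client side, create one channel and send every RPC through it. A rough Java sketch (the stub and request classes are hypothetical, standing in for whatever your proto generates for AffinityService):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

public class AffinityClient {
    public static void main(String[] args) throws InterruptedException {
        // One channel = one underlying HTTP/2 connection shared by all RPCs.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8080)   // host/port are placeholders
                .usePlaintext()
                .build();
        AffinityServiceGrpc.AffinityServiceBlockingStub stub =
                AffinityServiceGrpc.newBlockingStub(channel);

        for (int i = 0; i < 10; i++) {
            // Each unary call opens a new, cheap HTTP/2 stream on the same connection,
            // which is why the tracer's per-stream seqNo starts at 0 every time.
            stub.getAffinities(GetAffinitiesRequest.getDefaultInstance());
        }

        channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
    }
}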
One of our applications just suffered from some nasty deadlocks. I had quite a hard time recreating the problem because the deadlock (or its stack trace) did not show up immediately in my Java application logs.
To my surprise, the MarkLogic Java API retries failing requests (e.g. because of a deadlock). This might make sense if the request is not part of a multi-statement transaction, but otherwise I'm not sure it does.
So let's stick with this deadlock problem. I created a simple code snippet in which I create a deadlock on purpose. The snippet creates a document test.xml and then tries to read and write it from two different transactions, each on a new thread.
public static void main(String[] args) throws Exception {
    final Logger root = (Logger) LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
    final Logger ok = (Logger) LoggerFactory.getLogger(OkHttpServices.class);
    root.setLevel(Level.ALL);
    ok.setLevel(Level.ALL);

    final DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8000,
            new DatabaseClientFactory.DigestAuthContext("username", "password"));

    final StringHandle handle = new StringHandle("<doc><name>Test</name></doc>")
            .withFormat(Format.XML);
    client.newTextDocumentManager().write("test.xml", handle);

    root.info("t1: opening");
    final Transaction t1 = client.openTransaction();
    root.info("t1: reading");
    client.newXMLDocumentManager()
            .read("test.xml", new StringHandle(), t1);

    root.info("t2: opening");
    final Transaction t2 = client.openTransaction();
    root.info("t2: reading");
    client.newXMLDocumentManager()
            .read("test.xml", new StringHandle(), t2);

    new Thread(() -> {
        root.info("t1: writing");
        client.newXMLDocumentManager().write("test.xml",
                new StringHandle("<doc><t>t1</t></doc>").withFormat(Format.XML), t1);
        t1.commit();
    }).start();

    new Thread(() -> {
        root.info("t2: writing");
        client.newXMLDocumentManager().write("test.xml",
                new StringHandle("<doc><t>t2</t></doc>").withFormat(Format.XML), t2);
        t2.commit();
    }).start();

    TimeUnit.MINUTES.sleep(5);
    client.release();
}
This code will produce the following log:
14:12:27.437 [main] DEBUG c.m.client.impl.OkHttpServices - Connecting to localhost at 8000 as admin
14:12:27.570 [main] DEBUG c.m.client.impl.OkHttpServices - Sending test.xml document in transaction null
14:12:27.608 [main] INFO ROOT - t1: opening
14:12:27.609 [main] DEBUG c.m.client.impl.OkHttpServices - Opening transaction
14:12:27.962 [main] INFO ROOT - t1: reading
14:12:27.963 [main] DEBUG c.m.client.impl.OkHttpServices - Getting test.xml in transaction 5298588351036278526
14:12:28.283 [main] INFO ROOT - t2: opening
14:12:28.283 [main] DEBUG c.m.client.impl.OkHttpServices - Opening transaction
14:12:28.286 [main] INFO ROOT - t2: reading
14:12:28.286 [main] DEBUG c.m.client.impl.OkHttpServices - Getting test.xml in transaction 8819382734425123844
14:12:28.289 [Thread-1] INFO ROOT - t1: writing
14:12:28.289 [Thread-1] DEBUG c.m.client.impl.OkHttpServices - Sending test.xml document in transaction 5298588351036278526
14:12:28.289 [Thread-2] INFO ROOT - t2: writing
14:12:28.290 [Thread-2] DEBUG c.m.client.impl.OkHttpServices - Sending test.xml document in transaction 8819382734425123844
Neither t1 nor t2 will get committed. The MarkLogic logs confirm that there actually is a deadlock:
==> /var/opt/MarkLogic/Logs/8000_AccessLog.txt <==
127.0.0.1 - admin [24/Nov/2018:14:12:30 +0000] "PUT /v1/documents?txid=5298588351036278526&category=content&uri=test.xml HTTP/1.1" 503 1034 - "okhttp/3.9.0"
==> /var/opt/MarkLogic/Logs/ErrorLog.txt <==
2018-11-24 14:12:30.719 Info: Deadlock detected locking Documents test.xml
This would not be a problem if one of the requests failed and threw an exception, but that is not the case. The MarkLogic Java API retries every request for up to 120 seconds, and one of the updates times out after roughly those 120 seconds:
Exception in thread "Thread-1" com.marklogic.client.FailedRequestException: Service unavailable and maximum retry period elapsed: 121 seconds after 65 retries
at com.marklogic.client.impl.OkHttpServices.putPostDocumentImpl(OkHttpServices.java:1422)
at com.marklogic.client.impl.OkHttpServices.putDocument(OkHttpServices.java:1256)
at com.marklogic.client.impl.DocumentManagerImpl.write(DocumentManagerImpl.java:920)
at com.marklogic.client.impl.DocumentManagerImpl.write(DocumentManagerImpl.java:758)
at com.marklogic.client.impl.DocumentManagerImpl.write(DocumentManagerImpl.java:717)
at Scratch.lambda$main$0(scratch.java:40)
at java.lang.Thread.run(Thread.java:748)
What are possible ways to overcome this problem? One way might be to set a maximum time to live for a transaction (like 5 seconds), but this feels hacky and unreliable. Any other ideas? Are there any other settings I should check out?
I'm on MarkLogic 9.0-7.2 and using marklogic-client-api:4.0.3.
Edit: One way to solve the deadlock would be to synchronize the calling function; this is actually how I solved it in my case (see comments). But I think the underlying problem still exists. A deadlock in a multi-statement transaction should not be hidden away behind a 120-second timeout. I would rather have an immediately failing request than a 120-second lock on one of my documents plus 64 failing retries per thread.
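For reference, the "maximum time to live" idea mentioned above can be expressed when opening the transaction, assuming the openTransaction overload that takes a transaction name and a time limit in seconds:

// Hypothetical variant of the snippet above: the server cancels the transaction
// after 5 seconds, so a deadlocked write fails quickly instead of being retried
// for two minutes.
final Transaction t1 = client.openTransaction("t1", 5);
final Transaction t2 = client.openTransaction("t2", 5);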
Deadlocks are usually resolvable by retrying. Internally, the server does an inner retry loop because deadlocks are usually transient and incidental, lasting a very short time. In your case you have constructed a scenario that will never succeed with any timeout that is equal for both threads.
Deadlocks can be avoided at the application layer by avoiding multi-statement transactions when using the REST API (which is what the Java API uses).
Multi-statement transactions over REST cannot be implemented 100% safely due to the client's responsibility to manage the transaction ID and the server's inability to detect client-side errors or client-side identity. Very subtle problems can and do occur unless you are aggressively proactive with regard to error handling and multithreading. If you 'push' the logic to the server (XQuery or JavaScript), the server is able to manage things much better.
As for whether it's 'good' or not for the Java API to implement retries for this case, that's debatable either way. (The compromise for a seemingly easy-to-use interface is that many things that would otherwise be options are decided for you as a convention. There is generally no one-size-fits-all answer. In this case I presume the thought was that a deadlock is more likely caused by independent code/logic by 'accident', as opposed to identical code running in tandem -- a retry in that case would be a good choice. In your example it's not, but then an earlier error would still fail predictably until you changed your code to 'not do that'.)
If it doesn't already exist, a feature request for configurable timeout and retry behaviour seems reasonable. I would recommend, however, trying to avoid any REST calls that result in an open transaction -- that is inherently problematic, particularly if you don't notice the problem up front (then it's more likely to bite you in production). Unlike JDBC, which keeps a connection open so that the server can detect client disconnects, HTTP and the ML REST API do not -- which leads to a different programming model than traditional database coding in Java.
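As an illustration of pushing the logic to the server, the conflicting update could be executed as a single server-side XQuery call, so each request is a self-contained single-statement transaction and MarkLogic resolves the locking internally. A rough sketch continuing from the client object in the question (the XQuery is only an example, and the REST app server needs eval privileges):

// One self-contained request: the server acquires the lock, applies the update
// and releases it, so there is no multi-statement transaction held open across
// several REST calls.
String updateScript =
        "xdmp:node-replace(fn:doc('test.xml')/doc, <doc><t>t1</t></doc>)";
client.newServerEval()
      .xquery(updateScript)
      .eval()
      .close();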
I have a web app that allows sending messages to a queue. It is deployed on WebSphere Application Server and works very well.
I am trying to build a light environment for autotests, but when I try to send a message to the queue from a test it returns MQJE001: Completion Code '2', Reason '2035'.
I thought the problem was in the CHLAUTH rules, but it seems that I have all the necessary rights.
C:/> dspmqaut -m M00.EDOGO -n OEP.FROM.GW_SBAST.DLV -t q -p out-bychek-ao
Entity out-bychek-ao has the following authorizations for object OEP.FROM.GW_SBAST.DLV:
get
browse
put
inq
set
crt
dlt
chg
dsp
passid
passall
setid
setall
clr
Error from the logs:
AMQ8075: Authorization failed because the SID for entity 'out-bychek-a' cannot
be obtained.
EXPLANATION:
The Object Authority Manager was unable to obtain a SID for the specified
entity. This could be because the local machine is not in the domain to locate
the entity, or because the entity does not exist.
ACTION:
Ensure that the entity is valid, and that all necessary domain controllers are
available. This might mean creating the entity on the local machine.
----- amqzfubn.c : 2252 -------------------------------------------------------
7/9/2018 15:39:57 - Process(2028.3) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(SBT-ORSEDG-204) Installation(Installation1)
VRMF(7.5.0.4) QMgr(M00.EDOGO)
AMQ9557: Queue Manager User ID initialization failed.
EXPLANATION:
The call to initialize the User ID failed with CompCode 2 and Reason 2035.
ACTION:
Correct the error and try again.
----- cmqxrsrv.c : 1975 -------------------------------------------------------
7/9/2018 15:39:57 - Process(2028.3) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(SBT-ORSEDG-204) Installation(Installation1)
VRMF(7.5.0.4) QMgr(M00.EDOGO)
AMQ9999: Channel 'SC.EDOGO' to host '10.82.38.188' ended abnormally.
EXPLANATION:
The channel program running under process ID 2028(11564) for channel 'SC.EDOGO'
ended abnormally. The host name is '10.82.38.188'; in some cases the host name
cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 909 --------------------------------------------------------
Notice that in "AMQ8075: Authorization failed because the SID for entity 'out-bychek-a' cannot be obtained" my account name has lost its last letter. Is that normal?
And this:
DISPLAY CHLAUTH('SYSTEM.DEF.SVRCONN') MATCH(RUNCHECK) ALL ADDRESS('127.0.0.1') CLNTUSER('out-bychek-ao')
7 : DISPLAY CHLAUTH('SYSTEM.DEF.SVRCONN') MATCH(RUNCHECK) ALL ADDRESS('127.0.0.1') CLNTUSER('out-bychek-ao')
AMQ8898: Display channel authentication record details - currently disabled.
CHLAUTH(SYSTEM.*) TYPE(ADDRESSMAP)
DESCR(Default rule to disable all SYSTEM channels)
CUSTOM( ) ADDRESS(*)
USERSRC(NOACCESS) WARN(NO)
ALTDATE(2016-11-14) ALTTIME(17.33.34)
dmpmqaut -m M00.EDOGO -n OEP.FROM.GW_SBAST.DLV -t q -p out-bychek-ao -e
profile : OEP.FROM.GW_SBAST.DLV
object type: queue
entity : out-bychek-ao#alpha
entity type: principal
authority : allmqi dlt chg dsp clr
- - - - - - - - -
profile : CLASS
object type: queue
entity : out-bychek-ao#alpha
entity type: principal
authority : clt
Update
I dug deeper into dcm4che's source code and found that an IncompatibleConnectionException is thrown if either
a connection is "not installed",
or the types of protocols are not set or don't match.
I don't know what it means for a connection to be "installed", but this flag can be set manually, so I set it to true for both the local and remote connections (I even checked with getInstalled() whether they are "installed" - and yes, they are now; previously this property was null).
As for the protocols, they weren't specified, so I set them to DICOM for both connections.
Result: I still get the same exception.
I'd like to establish a DICOM association between dcm4chee (2.18.3) and my Java application using the dcm4che (5.12.0) toolkit.
The problem is that there doesn't seem to be any documentation available on how to use dcm4che in a Java application, so all I can do is read dcm4che's source code and try to figure out what its classes and methods are for - but I'm stuck. If someone already has a working example, that would be very helpful.
So far I have:
import org.dcm4che3.net.ApplicationEntity;
import org.dcm4che3.net.Association;
import org.dcm4che3.net.Connection;
import org.dcm4che3.net.Device;
import org.dcm4che3.net.pdu.AAssociateRQ;
import org.dcm4che3.net.pdu.PresentationContext;
...
ApplicationEntity locAE = new ApplicationEntity();
locAE.setAETitle("THIS_JAVA_APP");
Connection localConn = new Connection();
localConn.setCommonName("loc_conn");
localConn.setHostname("localhost");
localConn.setPort(11112);
localConn.setProtocol(Connection.Protocol.DICOM);
localConn.setInstalled(true);
locAE.addConnection(localConn);
ApplicationEntity remAE = new ApplicationEntity();
remAE.setAETitle("DCM4CHEE");
Connection remoteConn = new Connection();
remoteConn.setCommonName("rem_conn");
remoteConn.setHostname("localhost");
remoteConn.setPort(11112);
remoteConn.setProtocol(Connection.Protocol.DICOM);
remoteConn.setInstalled(true);
remAE.addConnection(remoteConn);
AAssociateRQ assocReq = new AAssociateRQ();
assocReq.setCalledAET(remAE.getAETitle());
assocReq.setCallingAET(locAE.getAETitle());
assocReq.setApplicationContext("1.2.840.10008.3.1.1.1");
assocReq.setImplClassUID("1.2.40.0.13.1.3");
assocReq.setImplVersionName("dcm4che-5.12.0");
assocReq.setMaxPDULength(16384);
assocReq.setMaxOpsInvoked(0);
assocReq.setMaxOpsPerformed(0);
assocReq.addPresentationContext(new PresentationContext(
1, "1.2.840.10008.1.1", "1.2.840.10008.1.2"));
Device device = new Device("device");
device.addConnection(localConn);
device.addApplicationEntity(locAE);
Association assoc = locAE.connect(remAE, assocReq);
but I don't know whether I'm on the right path doing it.
The error I get:
org.dcm4che3.net.IncompatibleConnectionException: No compatible connection to DCM4CHEE available on THIS_JAVA_APP
at org.dcm4che3.net.ApplicationEntity.findCompatibelConnection(ApplicationEntity.java:646)
at org.dcm4che3.net.ApplicationEntity.connect(ApplicationEntity.java:651)
Could it be that you are missing a Device instance from your setup? It seems that you need a Device to which you attach both the ApplicationEntity and the Connection.
Looking at the FindSCU.java source from the dcm4che sources:
private final Device device = new Device("findscu");
private final ApplicationEntity ae = new ApplicationEntity("FINDSCU");
private final Connection conn = new Connection();
public FindSCU() throws IOException {
    device.addConnection(conn);
    device.addApplicationEntity(ae);
    ae.addConnection(conn);
}
I also think that maybe the local Connection object can be instantiated without any parameters, as the FindSCU example here demonstrates. Maybe the parameters are confusing it somehow, especially considering that you have both the local and remote connections pointing to localhost:11112.
But yes, one has to agree that the documentation for the dcm4che3 API is totally inadequate.
Here is the working code: (I don't know if it's the minimal solution, feel free to experiment with it...)
ApplicationEntity locAE = new ApplicationEntity();
locAE.setAETitle("THIS_JAVA_APP");
locAE.setInstalled(true);
Connection localConn = new Connection();
localConn.setCommonName("loc_conn");
localConn.setHostname("localhost");
localConn.setPort(11112);
localConn.setProtocol(Connection.Protocol.DICOM);
localConn.setInstalled(true);
locAE.addConnection(localConn);
ApplicationEntity remAE = new ApplicationEntity();
remAE.setAETitle("DCM4CHEE");
remAE.setInstalled(true);
Connection remoteConn = new Connection();
remoteConn.setCommonName("rem_conn");
remoteConn.setHostname("localhost");
remoteConn.setPort(11112);
remoteConn.setProtocol(Connection.Protocol.DICOM);
remoteConn.setInstalled(true);
remAE.addConnection(remoteConn);
AAssociateRQ assocReq = new AAssociateRQ();
assocReq.setCalledAET(remAE.getAETitle());
assocReq.setCallingAET(locAE.getAETitle());
assocReq.setApplicationContext("1.2.840.10008.3.1.1.1");
assocReq.setImplClassUID("1.2.40.0.13.1.3");
assocReq.setImplVersionName("dcm4che-5.12.0");
assocReq.setMaxPDULength(16384);
assocReq.setMaxOpsInvoked(0);
assocReq.setMaxOpsPerformed(0);
assocReq.addPresentationContext(new PresentationContext(
1, "1.2.840.10008.1.1", "1.2.840.10008.1.2"));
Device device = new Device("device");
device.addConnection(localConn);
device.addApplicationEntity(locAE);
Executor exec = (Runnable command) -> {};
device.setExecutor(exec);
Association assoc = locAE.connect(localConn, remoteConn, assocReq);
And the relevant dcm4chee log:
2018-03-02 23:21:42,832 INFO THIS_JAVA_APP->DCM4CHEE (TCPServer-1) [org.dcm4cheri.net.FsmImpl] received AAssociateRQ
appCtxName: 1.2.840.10008.3.1.1.1/DICOM Application Context Name
implClass: 1.2.40.0.13.1.3
implVersion: dcm4che-5.12.0
calledAET: DCM4CHEE
callingAET: THIS_JAVA_APP
maxPDULen: 16378
asyncOpsWindow:
pc-1: as=1.2.840.10008.1.1/Verification SOP Class
ts=1.2.840.10008.1.2/Implicit VR Little Endian
2018-03-02 23:21:42,843 INFO THIS_JAVA_APP->DCM4CHEE (TCPServer-1) [org.dcm4cheri.net.FsmImpl] sending AAssociateAC
appCtxName: 1.2.840.10008.3.1.1.1/DICOM Application Context Name
implClass: 1.2.40.0.13.1.1.1
implVersion: dcm4che-1.4.34
calledAET: DCM4CHEE
callingAET: THIS_JAVA_APP
maxPDULen: 16352
asyncOpsWindow:
pc-1: 0 - acceptance
ts=1.2.840.10008.1.2/Implicit VR Little Endian
After you have the association, see this other post for how to perform a C-FIND.
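Since the only presentation context proposed above is the Verification SOP Class, a quick way to confirm the association actually works is to send a C-ECHO and then release it. A small sketch continuing from the assoc variable above (method names as found in dcm4che 5.x, to the best of my knowledge):

// Verify the association with a C-ECHO, then release it cleanly.
assoc.cecho().next();           // block until the C-ECHO response arrives
assoc.waitForOutstandingRSP();  // make sure nothing else is pending
assoc.release();                // send A-RELEASE-RQ and close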
Edit
Apparently, I solved the problem. Changing the executor from
Executor exec = (Runnable command) -> {};
device.setExecutor(exec);
to
ExecutorService executorService = Executors.newSingleThreadExecutor();
ScheduledExecutorService scheduledExecutorService = Executors.newSingleThreadScheduledExecutor();
device.setExecutor(executorService);
device.setScheduledExecutor(scheduledExecutorService);
made it so my application correctly received the association response from the server (presumably the no-op executor never ran the task that reads the server's response, which is why the client stayed stuck in Sta5). This might serve as a reference for someone else.
Thank you for sharing your code. It was really helpful to me.
Original Post
I am unable to establish the connection with code similar to the solution you proposed. I am trying to request an association with dcm4chee-arc-light using dcm4che (both 5.14.1), and I have the following:
Device device = new Device(deviceName);
ApplicationEntity locAE = new ApplicationEntity(localAE);
Connection conn = new Connection();
Connection remote = new Connection();
AAssociateRQ rq = new AAssociateRQ();
device.addConnection(conn);
device.addApplicationEntity(locAE);
locAE.addConnection(conn);
ApplicationEntity remAE = new ApplicationEntity();
remAE.setAETitle(remoteAE);
remote.setCommonName("rem_conn");
remote.setHostname(remoteIP);
remote.setPort(remotePort);
remote.setProtocol(Connection.Protocol.DICOM);
remAE.addConnection(remote);
rq.setCalledAET(remAE.getAETitle());
rq.setCallingAET(locAE.getAETitle());
rq.setApplicationContext("1.2.840.10008.3.1.1.1");
rq.setImplClassUID("1.2.40.0.13.1.3");
rq.setImplVersionName("dcm4che-5.14.1");
rq.setMaxPDULength(16384);
rq.setMaxOpsInvoked(0);
rq.setMaxOpsPerformed(0);
rq.addPresentationContext(new PresentationContext(
1, "1.2.840.10008.5.1.4.1.2.2.1", "1.2.840.10008.1.2"));
Executor exec = (Runnable command) -> {};
device.setExecutor(exec);
//Opens association and connects to remote server
Association as = locAE.connect(conn, remote, rq);
But when trying to connect to the remote AET, my application doesn't seem to receive the A-ASSOCIATE response from the remote AE. My Java application hangs in Sta5 (awaiting the association response) while the server hangs in Sta6 (ready for data transfer).
Java log:
[main] INFO org.dcm4che3.net.Connection - Initiate connection from 0.0.0.0/0.0.0.0:0 to localhost:11112
[main] INFO org.dcm4che3.net.Connection - Established connection Socket[addr=localhost/127.0.0.1,port=11112,localport=50101]
[main] DEBUG org.dcm4che3.net.Association - /127.0.0.1:50101>localhost/127.0.0.1:11112(1): enter state: Sta4 - Awaiting transport connection opening to complete
[main] INFO org.dcm4che3.net.Association - DEVICEAE->DCMQRSCP(1) << A-ASSOCIATE-RQ
[main] DEBUG org.dcm4che3.net.Association - A-ASSOCIATE-RQ[
calledAET: DCMQRSCP
callingAET: DEVICEAE
applicationContext: 1.2.840.10008.3.1.1.1 - DICOM Application Context Name
implClassUID: 1.2.40.0.13.1.3
implVersionName: dcm4che-5.14.1
maxPDULength: 16378
maxOpsInvoked/maxOpsPerformed: 1/1
PresentationContext[id: 1
as: 1.2.840.10008.5.1.4.1.2.2.1 - Study Root Query/Retrieve Information Model - FIND
ts: 1.2.840.10008.1.2 - Implicit VR Little Endian
]
]
[main] DEBUG org.dcm4che3.net.Association - DEVICEAE->DCMQRSCP(1): enter state: Sta5 - Awaiting A-ASSOCIATE-AC or A-ASSOCIATE-RJ PDU
Server log:
19:11:29,397 INFO - Accept connection Socket[addr=/127.0.0.1,port=50101,localport=11112]
19:11:29,397 DEBUG - /127.0.0.1:11112<-/127.0.0.1:50101(3): enter state: Sta2 - Transport connection open
19:11:29,416 INFO - DCMQRSCP<-DEVICEAE(3) >> A-ASSOCIATE-RQ
19:11:29,416 DEBUG - A-ASSOCIATE-RQ[
calledAET: DCMQRSCP
callingAET: DEVICEAE
applicationContext: 1.2.840.10008.3.1.1.1 - DICOM Application Context Name
implClassUID: 1.2.40.0.13.1.3
implVersionName: dcm4che-5.14.1
maxPDULength: 16378
maxOpsInvoked/maxOpsPerformed: 1/1
PresentationContext[id: 1
as: 1.2.840.10008.5.1.4.1.2.2.1 - Study Root Query/Retrieve Information Model - FIND
ts: 1.2.840.10008.1.2 - Implicit VR Little Endian
]
]
19:11:29,419 DEBUG - DCMQRSCP<-DEVICEAE(3): enter state: Sta3 - Awaiting local A-ASSOCIATE response primitive
19:11:29,419 INFO - DCMQRSCP<-DEVICEAE(3) << A-ASSOCIATE-AC
19:11:29,419 DEBUG - A-ASSOCIATE-AC[
calledAET: DCMQRSCP
callingAET: DEVICEAE
applicationContext: 1.2.840.10008.3.1.1.1 - DICOM Application Context Name
implClassUID: 1.2.40.0.13.1.3
implVersionName: dcm4che-5.14.1
maxPDULength: 16378
maxOpsInvoked/maxOpsPerformed: 1/1
PresentationContext[id: 1
result: 0 - acceptance
ts: 1.2.840.10008.1.2 - Implicit VR Little Endian
]
]
19:11:29,427 DEBUG - DCMQRSCP<-DEVICEAE(3): enter state: Sta6 - Association established and ready for data transfer
I feel like I am missing something, but I cannot find the source of the problem. Any help is appreciated, as I am still new to dcm4che and DICOM protocol.
Thank you.
I'm a newbie at programming with the dcm4che2 libraries and I'm writing a simple program to query a PACS server, setting the Query/Retrieve Level to Patient/Series/Image.
The code is very simple and, in some cases, it works fine:
dcmqr.setCalledAET("AET_REMOTE", true);
dcmqr.setRemoteHost("aa.bb.cc.dd");
dcmqr.setRemotePort(xxxx);
dcmqr.getKeys();
dcmqr.setDateTimeMatching(true);
dcmqr.setCFind(true);
dcmqr.setCGet(false);
dcmqr.configureTransferCapability(true);
dcmqr.setQueryLevel(DcmQR.QueryRetrieveLevel.IMAGE);
dcmqr.addMatchingKey(new int[]{Tag.PatientName},sPatientName);
dcmqr.addMatchingKey(new int[]{Tag.Modality},sModality);
dcmqr.addMatchingKey(new int[]{Tag.AccessionNumber},sAccession);
dcmqr.addMatchingKey(new int[]{Tag.SeriesNumber},sSeriesNumber);
dcmqr.addReturnKey(new int[]{Tag.SeriesDescription});
dcmqr.addReturnKey(new int[]{Tag.StudyDescription});
dcmqr.addReturnKey(new int[]{Tag.PatientBirthDate});
dcmqr.addReturnKey(new int[]{Tag.PatientSex});
List<DicomObject> result = null;
try {
    dcmqr.start();
    dcmqr.open();
    result = dcmqr.query();
    dcmqr.stop();
    dcmqr.close();
} catch (Exception e) {
    ...
}
However, in some cases (and whenever I set the Query/Retrieve Level to "Image"), the query() method fails with "unexpected message ID in DIMSE RSP" and an A-ABORT command is sent, as reported below:
...
[main] INFO org.dcm4che2.net.PDUEncoder - AET_REMOTE(1) << 3:C-FIND-RQ[pcid=1, prior=0
cuid=xyz/Study Root Query/Retrieve Information Model - FIND
ts=xyz/Implicit VR Little Endian]
[AE_TITLE_X] INFO org.dcm4che2.net.PDUDecoder - AET_REMOTE(1) >> 2:C-FIND-RSP[
pcid=1, status=0H cuid=xyz/Study Root Query/Retrieve Information Model - FIND]
[main] INFO org.dcm4che2.tool.dcmqr.DcmQR - Send Query Request #3/15 using .../Study Root Query/Retrieve Information Model - FIND:
(0008,0052) CS #6 [IMAGE] Query/Retrieve Level
(0008,0060) CS #2 [CT] Modality
(0010,0010) PN #12 [xxx^yyyy] Patient's Name
(0020,000D) UI #42 [x.y.z.zyx...] Study Instance UID
(0020,000E) UI #56 [y.x.z.zyx...] Series Instance UID
[AE_TITLE_X] WARN org.dcm4che2.net.Association - unexpected message ID in DIMSE RSP:
(0000,0002) UI #28 [x.y.z.zax...] Affected SOP Class UID
(0000,0100) US #2 [32800] Command Field
(0000,0120) US #2 [2] Message ID Being Responded To
(0000,0800) US #2 [257] Command Data Set Type
(0000,0900) US #2 [0] Status
[AE_TITLE_X] INFO org.dcm4che2.net.PDUEncoder - AET_REMOTE(1) << A-ABORT[source=0, reason=0]
[AE_TITLE_X] INFO org.dcm4che2.net.Association - AET_REMOTE(1): close Socket[addr=/aa.bb.cc.dd,port=xxx,localport=yyy]
[main] INFO org.dcm4che2.net.PDUEncoder - AET_REMOTE(1) << 4:C-FIND-RQ[pcid=1, prior=0
cuid=.../Study Root Query/Retrieve Information Model - FIND
ts=.../Implicit VR Little Endian]
[main] WARN org.dcm4che2.net.Association - unable to send P-DATA-TF in state: Sta1
Indeed, I can't understand what this error means or figure out a solution.
I guess it's a communication issue.
Could anyone help me?
Thanks.
Your logging indicates that you've sent query request #3 and then received the response for query request #2. If the listener is now expecting a response for #3, it will throw an exception because it has received the message ID for message #2.
If you are looping over the query call to do this, you could try specifying the instances as a list instead:
addMatchingKey( new int[] { Tag.SeriesInstanceUID }, "uid1\\uid2\\uid3" );