I'm trying to call two XSL transforms in my route via recipientList. One is wrapped in a choice() step and the other is added as normal:
from("direct:cosTransform")
.routeId(TransformerConstants.TRANSFORM_XSLT_ROUTE)
.process(exchange -> {
// Get the xml payload from the exchange body
final String xml = exchange.getIn().getBody(String.class);
// Determine LOB and set as header. If the LOB is invalid, the xslt step will fail.
final String lob = TransformerUtil.getLOBFromAcordXml(xml);
if (null == lob) {
throw new GeneralException("Could not derive the LOB from the ACORD Form. ACORD Form["
+ TransformerUtil.getAcordFormFromAcordXml(xml) + "]");
}
exchange.getIn().setHeader("lineOfBusiness", lob);
// Market should be NI or BI. If the market is invalid, the xslt step will fail.
final String market = TransformerUtil.getMarketFromAccordXml(xml);
if (!StringUtils.equals(market, TransformerConstants.BI_MARKET)
&& !StringUtils.equals(market, TransformerConstants.NI_MARKET)) {
throw new GeneralException("Missing or invalid market[" + market + "].");
}
exchange.getIn().setHeader("market", market);
})
.log("Executing an xsl transform for Market=${header.market} and LOB=${header.lineOfBusiness}")
.choice()
.when(header("market").isEqualTo("NI"))
.recipientList(simple("xslt:./xsl/${header.market}/Common.xsl?saxon=true&contentCache=false")).id("commonTransformNI")
.log("after common :${body}")
.endChoice()
.recipientList(simple("xslt:./xsl/${header.market}/${header.lineOfBusiness}.xsl?saxon=true&contentCache=false"))
.log("after lob : ${body}")
.choice().id("postTransform")
.when(header("market").isEqualTo("NI"))
.process(exchange -> {
String xml = exchange.getIn().getBody(String.class);
xml = TransformerUtil.setUniqueIds(xml);
exchange.getIn().setBody(xml);
})
.endChoice()
.end();
}
When an exchange to be transformed hits the first recipientList call (Common.xsl), the XML is transformed and it works fine. When it hits the second call, I get the following in the console:
[ #0 - seda://transform-receive] [CLM] [CID=UNKNOWN] o.a.c.impl.ProcessorEndpoint$1.doStart DEBUG Starting producer: Producer[xslt://./xsl/NI/CGL.xsl?contentCache=false&saxon=true]
[ #0 - seda://transform-receive] [CLM] [CID=UNKNOWN] o.a.camel.impl.ProducerCache .doGetProducer DEBUG Adding to producer cache with key: Endpoint[xslt://./xsl/NI/CGL.xsl?contentCache=false&saxon=true] for producer: Producer[xslt://./xsl/NI/CGL.xsl?contentCache=false&saxon=true]
[ #0 - seda://transform-receive] [CLM] [CID=UNKNOWN] o.a.c.b.xml.XsltUriResolver .resolve DEBUG Resolving URI from classpath:: classpath:./xsl/NI/CGL.xsl
[ #0 - seda://transform-receive] [CLM] [CID=UNKNOWN] o.a.c.p.DefaultErrorHandler .log DEBUG Failed delivery for (MessageId: ID-LIBP03P-QK70A9V-60563-1487254928941-1-7 on ExchangeId: ID-LIBP03P-QK70A9V-60563-1487254928941-1-8). On delivery attempt: 0 caught: java.lang.NullPointerException
[ #0 - seda://transform-receive] [CLM] [CID=UNKNOWN] TRANSFORM.COR.XSLT.ROUTE .log ERROR null
[ #0 - seda://transform-receive] [CLM] [CID=UNKNOWN] o.a.c.processor.SendProcessor .process DEBUG >>>> Endpoint[log://showException=true] Exchange[ID-LIBP03P-QK70A9V-60563-1487254928941-1-8]
I've tried a few tests: adding a mock endpoint after the second call and checking for a message, and using log messages to track how far along the route the exchange gets. It never seems to make it to the second call; it always throws an exception when heading to the second XSLT step. Even when I send in an exchange that won't match the when condition, it doesn't hit the second XSLT step either. The paths in the XSL strings are correct: when I remove the choice logic and the second XSLT step, the XSL runs fine, whether the string points to the Common.xsl file or a line-of-business XSL file.
Note: due to certain templates in the XSL, I have to split the transforms into their own steps, i.e. I can't import one file into the other, so the transforms have to be called separately.
EDIT: I have implemented a processor to run the XSLT manually for now. I'd still like to get to the bottom of this, though; even when using a .to() call and hardcoding the parameters, I still get the same issue. Is it something around the XSLT component in Camel?
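For reference, the "run the XSLT manually in a processor" workaround can look roughly like the sketch below. The class name, classpath layout, and plain JAXP usage are assumptions (Saxon could be plugged in via its own TransformerFactory), not the asker's actual implementation:

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// Runs the market/LOB stylesheet directly with JAXP, bypassing the xslt: component.
public class ManualXsltProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        String market = exchange.getIn().getHeader("market", String.class);
        String lob = exchange.getIn().getHeader("lineOfBusiness", String.class);
        String xml = exchange.getIn().getBody(String.class);

        // Classpath location is an assumption; adjust to wherever the .xsl files actually live.
        Source stylesheet = new StreamSource(getClass().getClassLoader()
                .getResourceAsStream("xsl/" + market + "/" + lob + ".xsl"));
        Transformer transformer = TransformerFactory.newInstance().newTransformer(stylesheet);

        StringWriter out = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        exchange.getIn().setBody(out.toString());
    }
}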
Related
I just realized that choice() in Apache Camel is for dynamic routing, so every choice() needs a to(). It is not equivalent to if in Java.
But does that mean I cannot conditionally set a header on my Camel exchange?
I want to do something like this:
from("direct:eventHttpChoice") // last step returns a composite POJO with config details and the actual message and token, let's call it MyCompositePojo
.log(....) // I see this message in log
.setHeader("Authorization", simple("Bearer ${body.token}"))
.setHeader(Exchange.HTTP_METHOD, simple("${body.httpMethod.name}"))
.choice()
.when(simple("${body.httpMethod.name} in 'PUT,DELETE'"))
.setHeader(Exchange.HTTP_PATH, simple("${body.newEvent.number}"))
.endChoice()
.end()
.choice()
.when(simple("${body.httpMethod.name} in 'POST,PUT'"))
.setHeader(HttpHeaders.CONTENT_TYPE, constant(MediaType.APPLICATION_JSON))
.setBody(simple("${body.newEvent}")).marshal().json(JsonLibrary.Jsonb) // marshall here as toD() needs InputStream; and I believe here it converts my message to MyMessagePojo, the actual payload to send
.endChoice()
.otherwise() // DELETE
.when(simple("${body.configDetail.http.deleteSomeField} == 'true' && ${body.newEvent.someField} != null && ${body.newEvent.someField} != ''"))
.setHeader(Exchange.HTTP_QUERY, simple("someField=${body.newEvent.someField}&operationId=${body.newEvent.operationId}"))
.endChoice()
.otherwise()
.setHeader(Exchange.HTTP_QUERY, simple("operationId=${body.newEvent.operationId}"))
.endChoice()
.endChoice()
.end()
.log(LoggingLevel.INFO, "Sending to this url: ${body.configDetail.url}") // I don't see this log
.toD("${body.configDetail.url}", 10) // only cache at most 10 urls; I still need MyCompositePojo here
But I receive this error:
2022-12-14 10:44:49,213 ERROR [org.apa.cam.pro.err.DefaultErrorHandler] (Camel (camel-1) thread #6 - JmsConsumer[my.queue]) Failed delivery for (MessageId: A9371D97F55900C-0000000000000001 on ExchangeId: A9371D97F55900C-0000000000000001). Exhausted after delivery attempt: 1 caught: org.apache.camel.language.bean.RuntimeBeanExpressionException: Failed to invoke method: configDetail on null due to: org.apache.camel.component.bean.MethodNotFoundException: Method with name: configDetail not found on bean: [B@330cd22d of type: [B on the exchange: Exchange[A9371D97F55900C-0000000000000001]
MyCompositePojo has this field, but I don't know where I'm getting it wrong.
If you think I am calling marshal() too early: if not like this, how else can I set the body? Without .marshal() I see this error:
2022-12-14 12:25:41,772 ERROR [org.apa.cam.pro.err.DefaultErrorHandler] (Camel (camel-1) thread #7 - JmsConsumer[page.large.sm.provisioning.events.online]) Failed delivery for (MessageId: 65FF01C9FC61E66-0000000000000011 on ExchangeId: 65FF01C9FC61E66-0000000000000011). Exhausted after delivery attempt: 1 caught: org.apache.camel.language.bean.RuntimeBeanExpressionException: Failed to invoke method: configDetail on null due to: org.apache.camel.component.bean.MethodNotFoundException: Method with name: configDetail not found on bean: MyPojo{xxx=xxx, ...} of type: com.example.MyPojo on the exchange: Exchange[65FF01C9FC61E66-0000000000000011]
So without .marshal() the body is being changed to my MessagePojo, but I don't want that: I just need the HTTP body to be part of my original body, and when it's an HTTP DELETE I don't want to set a body at all. Later in the route I still need my composite POJO. In other words, I want to set the HTTP body only conditionally, without changing the exchange body.
So, how can I conditionally set headers, conditionally set the body, and send to a dynamic URL?
An alternative would be to replace the (Camel) choice logic with a custom (Java) processor.
from("direct:demo")
.process( e -> setDynamicUri(e) );
.toD("${headers.nextUri}");
private void setDynamicUri(Exchange e) {
String httpMethod = e.getMessage().getHeader("...", String.class);
String endpointUrl = ( Arrays.asList("PUT", "DELETE").contains(httpMethod) ? "url1" : "url2" );
e.getMessage().setHeader("nextUri", endpointUrl);
}
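Pushing the same processor idea a bit further toward the original question: compute the headers in one place, stash the target URL in a header, and only replace the body at the very last moment. This is a sketch only; the getter names on the composite POJO and the JSON-B call are assumptions, not the actual API from the question, and the extra someField condition is omitted for brevity:

import org.apache.camel.Exchange;
import jakarta.json.bind.JsonbBuilder;

private void prepareHttpCall(Exchange e) {
    MyCompositePojo pojo = e.getMessage().getBody(MyCompositePojo.class);
    String method = pojo.getHttpMethod().name();

    e.getMessage().setHeader("Authorization", "Bearer " + pojo.getToken());
    e.getMessage().setHeader(Exchange.HTTP_METHOD, method);
    // keep the URL in a header so the body no longer has to be the composite POJO
    e.getMessage().setHeader("targetUrl", pojo.getConfigDetail().getUrl());

    if ("PUT".equals(method) || "DELETE".equals(method)) {
        e.getMessage().setHeader(Exchange.HTTP_PATH, pojo.getNewEvent().getNumber());
    }
    if ("POST".equals(method) || "PUT".equals(method)) {
        e.getMessage().setHeader(Exchange.CONTENT_TYPE, "application/json");
        // replace the exchange body only now, once nothing downstream needs the POJO anymore
        e.getMessage().setBody(JsonbBuilder.create().toJson(pojo.getNewEvent()));
    } else { // DELETE: no body, only a query string
        e.getMessage().setHeader(Exchange.HTTP_QUERY,
                "operationId=" + pojo.getNewEvent().getOperationId());
        e.getMessage().setBody(null);
    }
}

The route then shrinks to something like .process(this::prepareHttpCall).toD("${header.targetUrl}", 10), since the dynamic URL is read from a header instead of the body.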
Is there any way to ignore oversized messages without the Flink job restarting?
If I try to produce (using KafkaSink) a message which is too large (greater than max.message.bytes), a RecordTooLargeException occurs, the Flink job restarts, and this "exception and restart" cycle repeats endlessly.
I don't need to increase message size limits such as max.message.bytes (Kafka topic config) and max.request.size (Flink producer config); they are fine and already large. I just want to handle the situation when an unrealistically large message is about to be produced. In that case the big message should be ignored, an error should be logged, no runtime exception should be thrown, and the endless restart loop should not start.
I tried to use a ProducerInterceptor -> it cannot intercept/reject a message, it can only modify it.
I tried to ignore oversized messages in the SerializationSchema (implemented a custom wrapper of SerializationSchema) -> it cannot discard message production either.
I am trying to override the KafkaWriter and KafkaSink classes, but that seems challenging.
I will be grateful for any advice!
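One way to express "ignore the oversized record and log an error" without touching the sink internals is to drop such records before they reach the sink. A minimal sketch; the byte-limit constant, the logger, and the operator name are assumptions, and measuring the UTF-8 length only approximates the size the producer will actually send:

import java.nio.charset.StandardCharsets;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Mirrors max.request.size; adjust to your configuration.
private static final int MAX_RECORD_BYTES = 1_048_576;
private static final Logger LOG = LoggerFactory.getLogger("oversized-record-filter");

DataStream<String> safeStream = stream
        .filter(value -> {
            int size = value.getBytes(StandardCharsets.UTF_8).length;
            if (size > MAX_RECORD_BYTES) {
                // log and drop instead of letting the Kafka producer throw RecordTooLargeException
                LOG.error("Dropping oversized record of {} bytes", size);
                return false;
            }
            return true;
        })
        .name("drop-oversized-records");

safeStream.sinkTo(sink).setParallelism(1).name("output-producer");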
A few quick environment details:
Kafka version is 2.8.1
Flink code is Java code based on the newer KafkaSource/KafkaSink API, not the older KafkaConsumer/KafkaProducer API.
The flink-clients and flink-connector-kafka version is 1.15.0
Code sample which throws the RecordTooLargeException:
int numberOfRows = 1;
int rowsPerSecond = 1;
DataStream<String> stream = environment.addSource(
        new DataGeneratorSource<>(
                RandomGenerator.stringGenerator(1050000), // max.message.bytes=1048588
                rowsPerSecond,
                (long) numberOfRows),
        TypeInformation.of(String.class))
        .setParallelism(1)
        .name("string-generator");

KafkaSinkBuilder<String> builder = KafkaSink.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setDeliverGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
        .setRecordSerializer(
                KafkaRecordSerializationSchema.builder().setTopic("test.output")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build());
KafkaSink<String> sink = builder.build();

stream.sinkTo(sink).setParallelism(1).name("output-producer");
Exception Stack Trace:
2022-06-02/14:01:45.066/PDT [flink-akka.actor.default-dispatcher-4] INFO output-producer: Writer -> output-producer: Committer (1/1) (a66beca5a05c1c27691f7b94ca6ac025) switched from RUNNING to FAILED on 271b1b90-7d6b-4a34-8116-3de6faa8a9bf @ 127.0.0.1 (dataPort=-1).
org.apache.flink.util.FlinkRuntimeException: Failed to send data to Kafka null with FlinkKafkaInternalProducer{transactionalId='null', inTransaction=false, closed=false}
    at org.apache.flink.connector.kafka.sink.KafkaWriter$WriterCallback.throwException(KafkaWriter.java:440) ~[flink-connector-kafka-1.15.0.jar:1.15.0]
    at org.apache.flink.connector.kafka.sink.KafkaWriter$WriterCallback.lambda$onCompletion$0(KafkaWriter.java:421) ~[flink-connector-kafka-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsNonBlocking(MailboxProcessor.java:353) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:317) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:804) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:753) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948) ~[flink-runtime-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927) ~[flink-runtime-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741) ~[flink-runtime-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563) ~[flink-runtime-1.15.0.jar:1.15.0]
    at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292]
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1050088 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
I would like to note that the scenario I describe happens rarely enough; in most cases everything works as expected.
I have one topic and one subscription on the Pub/Sub side.
My Java application listens on the subscription, does some processing, and sends an acknowledgement back. Because Google Pub/Sub guarantees at-least-once delivery, we do message deduplication on our side based on the objectGeneration and objectId headers.
Sometimes we see that a message that was already acknowledged is accepted by our application again and again, which is unexpected behaviour.
Log example:
//first
2019-12-17 20:51:57.375 INFO 1 --- [sub-subscriber3] bucketNotificationFlow : Received new message from pub-sub: GenericMessage [payload={....}, headers={.....objectGeneration=1576615916875106, eventTime=2019-12-17T20:51:56.874940Z, objectId=Small_files_bunch/100_12_1.csv, ....
....
2019-12-17 20:51:57.698 INFO 1 --- [sub-subscriber3] .i.g.PubSubMessageAcknowledgementHandler : Acknowledged message - 1576615916875106
...
//duplicate 1
2019-12-17 20:51:59.663 INFO 1 --- [sub-subscriber4] bucketNotificationFlow : Received new message from pub-sub: GenericMessage [payload={...}, headers={ objectGeneration=1576615916875106, eventTime=2019-12-17T20:51:56.874940Z, objectId=Small_files_bunch/100_12_1.csv", ....
...
2019-12-17 20:51:59.704 INFO 1 --- [sub-subscriber4] c.b.m.i.DiscardedMessagesHandler : Duplicate message received GenericMessage [ headers={idempotent.keys=[objectGeneration.1576615916875106, objectId.Small_files_bunch/100_12_1.csv], ...
....
//duplicate 2
2019-12-17 22:52:02.239 INFO 1 --- [sub-subscriber1] bucketNotificationFlow : Received new message from pub-sub: GenericMessage [payload={...}, headers={objectGeneration=1576615916875106, eventTime=2019-12-17T20:51:56.874940Z, objectId=Small_files_bunch/100_12_1.csv, ...
...
2019-12-17 22:52:02.339 INFO 1 --- [sub-subscriber1] c.b.m.i.DiscardedMessagesHandler : Duplicate message received GenericMessage [ headers={idempotent.keys=[objectGeneration.1576615916875106, objectId.Small_files_bunch/100_12_1.csv], ...
// and so on each 2 hours
Code for acknowledgement:
var generation = message.getHeaders().get("objectGeneration");
var pubSubMessage = message.getHeaders().get(GcpPubSubHeaders.ORIGINAL_MESSAGE, BasicAcknowledgeablePubsubMessage.class);
pubSubMessage.ack().addCallback(
        v -> {
            removeFromIdempotentStore(targetMessage, false);
            log.info("Acknowledged message - {}", generation); // from logs we see that this line was invoked
        },
        e -> {
            removeFromIdempotentStore(targetMessage, false);
            log.error("Failed to acknowledge message - {}", generation, e);
        }
);
The GCP subscription page contains the following diagram (image not included):
Stackdriver acknowledge diagram (image not included):
Any ideas what is going on, and how to troubleshoot and fix it?
Try checking Stackdriver to see if you are missing acknowledgement deadlines.
The two-hour wait time between duplicates is very interesting. Have you tried expanding your message deadline before? (Info on this is at the above link.)
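For reference, a sketch of raising the ack deadline programmatically with the Pub/Sub admin client; the project and subscription names are placeholders, and the same change can be made with gcloud or in the Cloud Console:

import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.protobuf.FieldMask;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.Subscription;
import com.google.pubsub.v1.UpdateSubscriptionRequest;

// Raise the subscription's ack deadline to the 600-second maximum.
try (SubscriptionAdminClient client = SubscriptionAdminClient.create()) {
    Subscription subscription = Subscription.newBuilder()
            .setName(ProjectSubscriptionName.of("my-project", "my-subscription").toString())
            .setAckDeadlineSeconds(600)
            .build();
    UpdateSubscriptionRequest request = UpdateSubscriptionRequest.newBuilder()
            .setSubscription(subscription)
            .setUpdateMask(FieldMask.newBuilder().addPaths("ack_deadline_seconds").build())
            .build();
    client.updateSubscription(request);
}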
See more info here: How to cleanup the JdbcMetadataStore?
According to our conclusion, it is better not to remove entries from the metadata store table immediately after processing. An external job should do the cleanup from time to time, and only for entries that are old enough to remove, i.e. where we are sure Pub/Sub will no longer redeliver the same message.
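A minimal sketch of such an external job, under explicit assumptions: the idempotency entries store the processing timestamp (epoch millis) as their value, the table is Spring Integration's default INT_METADATA_STORE, and a JdbcTemplate, a logger, and the CAST syntax appropriate for your database are available:

import java.time.Duration;

import org.springframework.scheduling.annotation.Scheduled;

// Runs hourly and removes entries old enough that Pub/Sub can no longer redeliver them.
@Scheduled(cron = "0 0 * * * *")
public void purgeOldIdempotencyKeys() {
    long cutoff = System.currentTimeMillis() - Duration.ofDays(7).toMillis();
    int removed = jdbcTemplate.update(
            "DELETE FROM INT_METADATA_STORE WHERE CAST(METADATA_VALUE AS BIGINT) < ?", cutoff);
    log.info("Removed {} stale idempotency entries", removed);
}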
I am trying to get JSON data from a Jetty endpoint (another service), create output data, and send it to one or more CSV files. I have two routes: the first creates files for the current date based on cron settings, and the second exposes a Jetty endpoint to create files for any specified date on a GET request. They are exactly the same except for the starting point; I also tried to send messages from the second endpoint to the first one. In both cases CSV files are created, but the second route gives me org.apache.camel.TypeConversionException. My route is:
from(httpServer + "/lineups?throwExceptionOnFailure=false?httpMethodRestrict=GET")
.routeId("manualStart")
.setExchangePattern(ExchangePattern.InOnly)
.setHeader(Exchange.HTTP_URI, simple(apiEndpoint + "/lineups"))
.setHeader("target_date", simple("${in.header.date}"))
.setHeader(Exchange.HTTP_QUERY, simple("date=${in.header.date}"))
.setHeader(Exchange.HTTP_METHOD, constant("GET"))
.to("https://dummyhost")
.process(new MappingProcessor())
.split(body())
.setHeader("prefix", simple("${body.name}"))
.process(new FileNameProcessor())
.marshal(bindy)
.aggregate(header("prefix"), new FileAggregationStrategy())
.completionTimeout(60000L)
.to("file:" + fileLocation + "?fileName=Nielsen.${in.header.prefix}.${in.header.target_date}.txt");
I get the following exception:
16:41:50.493 [qtp1583020257-49] ERROR o.a.c.c.j.CamelContinuationServlet - Error processing request
org.apache.camel.TypeConversionException: Error during type conversion from type: java.lang.String to the required type: java.io.InputStream with value
[com...beans.MyOutput@7c9c5406, com...beans.MyOutput@6e3e3511, com.... [Body clipped after 1000 chars, total length is 23865] due Failed to convert from type [java.util.ArrayList<?>] to type [java.io.InputStream] for value ...
...
com....beans.MyOutput@6f0c8349]';
nested exception is org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.util.ArrayList<?>] to type [java.io.InputStream]
at org.apache.camel.impl.converter.BaseTypeConverterRegistry.createTypeConversionException(BaseTypeConverterRegistry.java:610)
at org.apache.camel.impl.converter.BaseTypeConverterRegistry.convertTo(BaseTypeConverterRegistry.java:137)
at org.apache.camel.impl.MessageSupport.getBody(MessageSupport.java:72)
at org.apache.camel.impl.MessageSupport.getBody(MessageSupport.java:47)
at org.apache.camel.http.common.DefaultHttpBinding.doWriteDirectResponse(DefaultHttpBinding.java:396)
at org.apache.camel.http.common.DefaultHttpBinding.doWriteResponse(DefaultHttpBinding.java:332)
at org.apache.camel.http.common.DefaultHttpBinding.writeResponse(DefaultHttpBinding.java:264)
at org.apache.camel.component.jetty.CamelContinuationServlet.service(CamelContinuationServlet.java:227)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:821)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1685)
at org.apache.camel.component.jetty.CamelFilterWrapper.doFilter(CamelFilterWrapper.java:45)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1158)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1090)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119)
at org.eclipse.jetty.server.Server.handleAsync(Server.java:567)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:325)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:261)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:75)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.core.convert.ConversionFailedException: Failed to convert from type [java.util.ArrayList<?>] to type [java.io.InputStream] for value
The data format used for CSV marshalling:
DataFormat bindy = new BindyCsvDataFormat(MyOutput.class);
This is FileAggregationStrategy:
public class FileAggregationStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            return newExchange;
        }
        String oldBody = oldExchange.getIn().getBody(String.class);
        String newBody = newExchange.getIn().getBody(String.class);
        String body = oldBody + newBody;
        oldExchange.getIn().setBody(body);
        return oldExchange;
    }
}
I tried to add .log() after each step, and I see that the exception is thrown at the .aggregate step. What could be wrong? Another route, started from
from("quartz://start/api_cron/?cron=" + cronExpression + "&fireNow=true")
works without any exceptions.
It's the HTTP response that is being converted from X to InputStream. You need to set some response to return: either an empty value or something you want to return to the HTTP client.
Even if you set the MEP to InOnly, Jetty will send back a response. You can use wireTap if you want to process and aggregate the message independently of the Jetty route.
Something along the lines of:
from jetty
    wiretap direct:foo
    transform constant "ok"

from direct:foo
    // put in all that stuff from your route here
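In Camel's Java DSL that could look roughly like the sketch below; the endpoint URIs, the route name, and the "OK" response body are placeholders:

from("jetty:http://0.0.0.0:8080/lineups?httpMethodRestrict=GET")
    .wireTap("direct:buildCsvFiles")   // a copy of the exchange is processed independently
    .transform(constant("OK"));        // this is what Jetty writes back to the HTTP client

from("direct:buildCsvFiles")
    // the original processing goes here: call the API, split, marshal, aggregate, write files
    .log("Started CSV generation for date=${header.date}");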
I'm a newbie at programming with the dcm4che2 libraries, and I'm writing a simple program to query a PACS server, setting the Query/Retrieve Level to Patient/Series/Image.
The code is very simple and, in some cases, it works fine:
dcmqr.setCalledAET("AET_REMOTE", true);
dcmqr.setRemoteHost("aa.bb.cc.dd");
dcmqr.setRemotePort(xxxx);
dcmqr.getKeys();
dcmqr.setDateTimeMatching(true);
dcmqr.setCFind(true);
dcmqr.setCGet(false);
dcmqr.configureTransferCapability(true);
dcmqr.setQueryLevel(DcmQR.QueryRetrieveLevel.IMAGE);

dcmqr.addMatchingKey(new int[]{Tag.PatientName}, sPatientName);
dcmqr.addMatchingKey(new int[]{Tag.Modality}, sModality);
dcmqr.addMatchingKey(new int[]{Tag.AccessionNumber}, sAccession);
dcmqr.addMatchingKey(new int[]{Tag.SeriesNumber}, sSeriesNumber);

dcmqr.addReturnKey(new int[]{Tag.SeriesDescription});
dcmqr.addReturnKey(new int[]{Tag.StudyDescription});
dcmqr.addReturnKey(new int[]{Tag.PatientBirthDate});
dcmqr.addReturnKey(new int[]{Tag.PatientSex});

List<DicomObject> result = null;
try {
    dcmqr.start();
    dcmqr.open();
    result = dcmqr.query();
    dcmqr.stop();
    dcmqr.close();
} catch (Exception e) {
    ...
}
However, in some cases (and whenever I set the Query/Retrieve Level to "Image"), the query() method fails ("unexpected message ID in DIMSE RSP") and an A-ABORT command is sent, as reported below:
...
[main] INFO org.dcm4che2.net.PDUEncoder - AET_REMOTE(1) << 3:C-FIND-RQ[pcid=1, prior=0
cuid=xyz/Study Root Query/Retrieve Information Model - FIND
ts=xyz/Implicit VR Little Endian]
[AE_TITLE_X] INFO org.dcm4che2.net.PDUDecoder - AET_REMOTE(1) >> 2:C-FIND-RSP[
pcid=1, status=0H cuid=xyz/Study Root Query/Retrieve Information Model - FIND]
[main] INFO org.dcm4che2.tool.dcmqr.DcmQR - Send Query Request #3/15 using .../Study Root Query/Retrieve Information Model - FIND:
(0008,0052) CS #6 [IMAGE] Query/Retrieve Level
(0008,0060) CS #2 [CT] Modality
(0010,0010) PN #12 [xxx^yyyy] Patient's Name
(0020,000D) UI #42 [x.y.z.zyx...] Study Instance UID
(0020,000E) UI #56 [y.x.z.zyx...] Series Instance UID
[AE_TITLE_X] WARN org.dcm4che2.net.Association - unexpected message ID in DIMSE RSP:
(0000,0002) UI #28 [x.y.z.zax...] Affected SOP Class UID
(0000,0100) US #2 [32800] Command Field
(0000,0120) US #2 [2] Message ID Being Responded To
(0000,0800) US #2 [257] Command Data Set Type
(0000,0900) US #2 [0] Status
[AE_TITLE_X] INFO org.dcm4che2.net.PDUEncoder - AET_REMOTE(1) << A-ABORT[source=0, reason=0]
[AE_TITLE_X] INFO org.dcm4che2.net.Association - AET_REMOTE(1): close Socket[addr=/aa.bb.cc.dd,port=xxx,localport=yyy]
[main] INFO org.dcm4che2.net.PDUEncoder - AET_REMOTE(1) << 4:C-FIND-RQ[pcid=1, prior=0
cuid=.../Study Root Query/Retrieve Information Model - FIND
ts=.../Implicit VR Little Endian]
[main] WARN org.dcm4che2.net.Association - unable to send P-DATA-TF in state: Sta1
Indeed, I can't understand what this error means or figure out a solution.
I guess it's a communication issue. Could anyone help me?
Thanks.
Your logging indicates that you've made query request #3, then received the response for query request #2. If the listener is now expecting a response for 3, then it will throw an exception because it has received a message ID for message 2.
If you are looping over the query call to do this, you could try specifying the instances as a list instead:
addMatchingKey( new int[] { Tag.SeriesInstanceUID }, "uid1\\uid2\\uid3" );
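For example, if the instance UIDs have already been collected into a list (the variable name below is hypothetical), they can be joined with the DICOM multi-value separator and matched in a single C-FIND instead of one query per UID:

// seriesInstanceUids is a hypothetical List<String> gathered from the earlier SERIES-level query
String uidList = String.join("\\", seriesInstanceUids); // "\\" in source is one backslash, the DICOM multi-value separator
dcmqr.addMatchingKey(new int[]{Tag.SeriesInstanceUID}, uidList);
List<DicomObject> result = dcmqr.query(); // one request/response pair, so no message-ID mismatch across iterations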