I am using new DefaultConsumer(channel) and overriding the handleDelivery method.
My goal is to use my consumer as a worker queue, and for that I know I have to call channel.basicQos(1), using 1 as my prefetchCount. I have been reading that I also need to call channel.basicAck so that the server knows how many unacknowledged messages are outstanding (correct me if I am wrong here); channel.basicQos takes effect based on this count. Right now I am using the following statements in the handleDelivery method:
channel.basicQos(1);
channel.basicAck(envelope.getDeliveryTag(), false);
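(For context: in the standard worker-queue pattern from the RabbitMQ Java tutorials, basicQos is set once when the channel is set up, not inside handleDelivery, and the consumer is registered with autoAck=false so that the explicit basicAck is meaningful. A rough sketch, with the queue name assumed:)

channel.basicQos(1); // at most one unacknowledged message per consumer
boolean autoAck = false; // must be false for the explicit basicAck below to be valid
channel.basicConsume("task_queue", autoAck, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        // ... process body ...
        channel.basicAck(envelope.getDeliveryTag(), false);
    }
});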
The issue is, I keep getting the following error:
com.rabbitmq.client.AlreadyClosedException: clean connection shutdown;
reason: Attempt to use closed channel
at com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(AMQChannel.java:190)
at com.rabbitmq.client.impl.AMQChannel.rpc(AMQChannel.java:223)
...
When I remove the channel.basicAck call, I don't see that problem.
How can I get channel.basicQos to work (my understanding is that basicAck is required for it to take effect) without getting the AlreadyClosedException?
Thanks for taking the time to read this and for any help you guys can offer!
I am using the Java BigQuery Storage API as documented here: https://cloud.google.com/bigquery/docs/write-api.
I keep the write stream long-lived and refresh it when one of the non-retryable errors happens, as per https://cloud.google.com/bigquery/docs/write-api#error_handling.
I am sticking with the default stream. I have two tables, and different parts of the code are responsible for writing to each table, each maintaining its own stream writer.
If data is flowing, everything is fine: no errors. However, I want to test that refreshing the stream writers works too, so I wait for the default stream timeout (10 minutes), which closes the stream, and then try writing again. I can create the new stream fine, no error there, but for one of the tables I keep getting a CANCELLED error wrapped in a FAILED_PRECONDITION, making my code refresh again and again.
Original error, because the stream closed due to inactivity:
! io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Stream is closed due to com.google.api.gax.rpc.AbortedException: io.grpc.StatusRuntimeException: ABORTED: Closing the stream because it has been inactive for 600 seconds. Entity: projects/<id>/datasets/<id>/tables/<id>/_default
! at com.google.cloud.bigquery.storage.v1beta2.StreamWriterV2.appendInternal(StreamWriterV2.java:263)
! at com.google.cloud.bigquery.storage.v1beta2.StreamWriterV2.append(StreamWriterV2.java:234)
! at com.google.cloud.bigquery.storage.v1beta2.JsonStreamWriter.append(JsonStreamWriter.java:114)
! at com.google.cloud.bigquery.storage.v1beta2.JsonStreamWriter.append(JsonStreamWriter.java:89)
Further repeating errors on the new stream(s):
! io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Stream is closed due to com.google.api.gax.rpc.CancelledException: io.grpc.StatusRuntimeException: CANCELLED: io.grpc.Context was cancelled without error
! at com.google.cloud.bigquery.storage.v1beta2.StreamWriterV2.appendInternal(StreamWriterV2.java:263)
! at com.google.cloud.bigquery.storage.v1beta2.StreamWriterV2.append(StreamWriterV2.java:234)
! at com.google.cloud.bigquery.storage.v1beta2.JsonStreamWriter.append(JsonStreamWriter.java:114)
! at com.google.cloud.bigquery.storage.v1beta2.JsonStreamWriter.append(JsonStreamWriter.java:89)
I am not sure why it's being cancelled without error. Any pointers on how I can debug this, or recommendations on how to maintain and refresh a long-lived stream writer?
Updating the Java client library version should solve this problem, since reconnect support was added for JsonStreamWriter. Instead of throwing this error, it should handle retries.
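If upgrading is not immediately possible, a hedged sketch of a refresh-on-failure wrapper around the v1beta2 JsonStreamWriter from the stack traces might look like the following. The builder and append signatures are assumptions based on the classes named in the trace; verify them against your client version.

import com.google.api.core.ApiFuture;
import com.google.cloud.bigquery.storage.v1beta2.AppendRowsResponse;
import com.google.cloud.bigquery.storage.v1beta2.JsonStreamWriter;
import com.google.cloud.bigquery.storage.v1beta2.TableSchema;
import org.json.JSONArray;

class RefreshingWriter {
    private final String table;      // "projects/<id>/datasets/<id>/tables/<id>"
    private final TableSchema schema;
    private JsonStreamWriter writer;

    RefreshingWriter(String table, TableSchema schema) throws Exception {
        this.table = table;
        this.schema = schema;
        this.writer = JsonStreamWriter.newBuilder(table, schema).build();
    }

    // Retry once with a brand-new writer when the old stream was torn down.
    ApiFuture<AppendRowsResponse> append(JSONArray rows) throws Exception {
        try {
            return writer.append(rows);
        } catch (Exception e) {   // e.g. the FAILED_PRECONDITION above
            writer.close();
            writer = JsonStreamWriter.newBuilder(table, schema).build();
            return writer.append(rows);
        }
    }
}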
I have a problem while trying my hand at the Hello World example explained here.
Kindly note that I have only modified the HelloEntity.java file to return something other than "Hello, World!". Most likely my changes are taking time, and hence I am getting the timeout error below.
I am currently doing a PoC on a single node to understand the Lagom framework, and I do not have the liberty to deploy multiple nodes.
I have also tried modifying the default lagom.circuit-breaker in application.conf with "call-timeout = 100s"; however, this does not seem to have helped.
Following is the exact error message for your reference:
{"name":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://hello-impl-application/system/sharding/HelloEntity#1074448247]] after [5000 ms]. Sender[null] sent message of type \"com.lightbend.lagom.javadsl.persistence.CommandEnvelope\".","detail":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://hello-impl-application/system/sharding/HelloEntity#1074448247]] after [5000 ms]. Sender[null] sent message of type \"com.lightbend.lagom.javadsl.persistence.CommandEnvelope\".\n\tat akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)\n\tat akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)\n\tat akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)\n\tat scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:866)\n\tat scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)\n\tat scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)\n\tat scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:864)\n\tat akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)\n\tat java.lang.Thread.run(Thread.java:748)\n"}
Question: Is there a way to increase the Akka timeout by modifying application.conf or any of the Java source files in the Hello World project? Can you please help me with the exact details?
Thanks in advance for your time and help.
The call timeout is the timeout for circuit breakers, which is configured using lagom.circuit-breaker.default.call-timeout. But that's not what is timing out above; what is timing out is the request to your HelloEntity, and that timeout is configured using lagom.persistence.ask-timeout. The reason there is a timeout on requests to entities is that in a multi-node environment your entities are sharded across nodes, so an ask on them may go to another node, and a timeout is needed in case that node is not responding.
All that said, I don't think changing the ask-timeout will solve your problem. If you have a single node, then your entities should respond instantly if everything is working ok.
Is that the only error you're seeing in the logs?
Are you seeing this in dev mode (i.e., using the runAll command), or are you running the Lagom service some other way?
Is your database responding?
Thanks, James, for the help/pointer.
Adding the following lines to resources/application.conf did the trick for me:
lagom.persistence.ask-timeout = 30s

hello {
  ..
  ..
  call-timeout = 30s
  call-timeout = ${?CIRCUIT_BREAKER_CALL_TIMEOUT}
  ..
}
A Call is service-to-service communication, that is, a ServiceClient communicating with a remote server. It uses a circuit breaker. It is an extra-service call.
An ask (in the context of lagom.persistence) is sending a command to a persistent entity. That happens across the nodes inside your Lagom service. It does not use circuit breaking. It is an intra-service call.
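Putting the two together, a minimal application.conf sketch using the two settings discussed above (values illustrative):

# intra-service: timeout for commands asked of persistent entities
lagom.persistence.ask-timeout = 30s

# extra-service: circuit-breaker timeout for service client calls
lagom.circuit-breaker.default.call-timeout = 30s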
I am trying to implement a process consisting of several web service calls, initiated by a JMS message read by Spring Integration. Since there are no transactions across these WS calls, I would like to keep track of how far my process has gone, so that steps already carried out are skipped when retrying message processing.
Example steps:
Retrieve A (get A.id)
Create new B for A (using A.id, getting B.id)
Create new C for B (using B.id, getting C.id)
Now, if the first attempt fails in step 3, I have already created a B and know its id. So if I retry the message, it will skip the second step and not leave me with an incomplete B.
So, to the question: Is it possible to decorate a JMS-message read by Spring-integration with additional header properties upon message processing failures? If so, how could I do this?
The way it works at the moment:
Message is read
Some exception is thrown
Message processing halts, and ActiveMQ places the message on DLQ
How I would like it to work:
Message is read
Some exception is thrown
The exception is handled, with the result of this handling being an extra header property added to the original message
ActiveMQ places the message on DLQ
One thing that might achieve this is the following:
Read the message
Start processing, wrapped in try-catch
On exception, get the extra information from the exception, create a new message based on the original one, add the extra info to the header, and send it directly to the DLQ
Swallow the exception so the original message disappears
This feels kind of hackish though; hopefully there is a more elegant solution.
It's hard to generalize without more information about your flow(s), but you could consider adding a custom request handler advice to decorate and/or re-route failed messages. See Adding Behavior to Endpoints.
As the other answer says, you can't modify the message but you can build a new one from it.
EDIT:
So, to the question: Is it possible to decorate a JMS-message read by Spring-integration with additional header properties upon message processing failures? If so, how could I do this?
Ahhh... now I think I know what you are asking; no, you can't "decorate" the existing message; you can republish it with additional headers instead of throwing an exception.
You can republish in the advice, or in the error flow.
It might seem like a "hack" to you, but the JMS API provides no mechanism to do what you want.
From the Spring forum:
To place a new header into the MessageHeaders, you should use MessageBuilder, because not only the headers but the entire Message is immutable.
return MessageBuilder.fromMessage(message).setHeader(updateflag, "ACK".equals(message.getHeaders().get("Lgg_Rid")) ? "CONF" : "FAIL").build(); // compare strings with equals(), not ==
In an asynchronous context, errors will go to an error channel: either one you configure yourself and indicate in the message headers with errorChannel, or a global error channel if none is specified. See here for more details.
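For illustration, a hedged sketch of such an error-flow handler; the channel and header names are hypothetical, and package names assume a recent Spring version. It rebuilds the failed message with an extra header and republishes it instead of rethrowing:

import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessagingException;

public class FailureDecorator {

    private final MessageChannel dlqChannel; // hypothetical channel backed by a JMS adapter writing to the DLQ

    public FailureDecorator(MessageChannel dlqChannel) {
        this.dlqChannel = dlqChannel;
    }

    @ServiceActivator(inputChannel = "errorChannel")
    public void handle(MessagingException ex) {
        Message<?> failed = ex.getFailedMessage();
        if (failed == null) {
            throw ex; // nothing to decorate
        }
        // Build a new message from the failed one, since messages are immutable.
        Message<?> decorated = MessageBuilder.fromMessage(failed)
                .setHeader("lastCompletedStep", ex.getMessage()) // hypothetical progress marker
                .build();
        dlqChannel.send(decorated); // republish instead of rethrowing
    }
}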
I have spent a good few hours reading about Spring Integration, and today I started experimenting with the framework. There are aspects of how it works that I have trouble understanding despite all my reading. I hope somebody here can put me back on track.
I have the following channel and endpoint defined:
<in:channel id="orderSource"/>
<in:service-activator input-channel="orderSource"
ref="defaultOrderService"
method="placeOrder"/>
Since the channel is a DirectChannel, I expect everything to happen within a single thread and to get a return value at the end.
The placeOrder method looks as follows:
@Override
public Order placeOrder(Order order) {
    return order;
}
In my main method I have:
MessageChannel input = context.getBean("orderSource", MessageChannel.class);
Message<Order> message = MessageBuilder.withPayload(new Order(123)).build();
MessagingTemplate messenger = new MessagingTemplate(input);
Message<?> result = messenger.sendAndReceive(message);
Object found = result.getPayload();
And this all works like a charm: found holds the order the service activator sends back.
My problem starts when I want to notify a set of subscribers that the order was placed. For simplicity, let's do this synchronously, like this:
<in:channel id="orderSource"/>
<in:service-activator input-channel="orderSource"
output-channel="savedOrders"
ref="defaultOrderService"
method="validateOrder"/>
<in:publish-subscribe-channel id="savedOrders"/>
<in:outbound-channel-adapter channel="savedOrders"
ref="defaultOrderService"
method="notifyCustomerService"/>
<in:outbound-channel-adapter channel="savedOrders"
ref="defaultOrderService"
method="notifyShipmentManager"/>
<in:outbound-channel-adapter channel="savedOrders"
ref="defaultOrderService"
method="notifyWarehouseManager"/>
The question now is what should the input channel expect in return when I invoke sendAndReceive?
My current code blocks and I never reach the end of the main thread.
How can I make sure I receive a reply containing the result of the service activator as it was passed to all subscribers?
Also, I am really curious what a given channel can expect in terms of return values when there are asynchronous channels in the flow. I'd like to get the result at the end of a transaction and before a new thread is spawned, but I don't know how to do that.
Any thoughts, advice or guidance?
Presumably, your "notify" methods return null. If that's the case, there's no "reply" sent to the MessagingTemplate.
Make the final one return the order, or add a <bridge/> to nowhere as a fourth subscriber to the pub-sub channel.
A bridge to nowhere is simply a bridge with no output channel. When a message arrives at an endpoint that produces a reply, and there is no output-channel, the message's replyChannel header is used to route the reply to the originator.
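In the XML namespace used in this question, that fourth subscriber would simply be (channel name taken from the config above):

<in:bridge input-channel="savedOrders"/> <!-- no output-channel: the reply routes via the replyChannel header -->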
It works with async channels too, but I'd need to understand your requirements there before I can provide guidance.
Also, consider using a Messaging Gateway on the calling side instead of building a message yourself and using the MessagingTemplate. Rather than exposing your caller to the messaging infrastructure, the framework will create a proxy for you that will take care of all that and you just interact with the POJI.
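For example, the gateway's service interface is just a plain Java interface (the POJI); a minimal sketch matching the configuration in the follow-up below:

// The framework-generated proxy sends the argument as a message payload and
// returns the reply payload; no MessageBuilder or MessagingTemplate in sight.
public interface OrderService {
    Order validateOrder(Order order);
}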
I spent some more time reading and discovered that this is all a matter of configuring the reply channel, either in the message or in the gateway; using a bridge just as Gary Russell suggested did the trick for me.
This is my code, now working:
<in:channel id="arrivals"/>
<in:service-activator input-channel="arrivals"
output-channel="validated"
ref="defaultOrderService"
method="validateOrder"/>
<in:channel id="validated"/>
<in:service-activator input-channel="validated"
output-channel="persisted"
ref="defaultOrderService"
method="placeOrder"/>
<in:publish-subscribe-channel id="persisted"/>
<in:channel id="replyChannel"/>
<in:bridge input-channel="persisted" output-channel="replyChannel"/>
<in:outbound-channel-adapter channel="persisted"
ref="defaultOrderService"
method="notifyCustomerService"/>
<in:outbound-channel-adapter channel="persisted"
ref="defaultOrderService"
method="notifyShipmentManager"/>
<in:outbound-channel-adapter channel="persisted"
ref="defaultOrderService"
method="notifyWarehouseManager"/>
<in:gateway id="orderService"
service-interface="codemasters.services.OrderService"
default-request-channel="arrivals"
default-reply-channel="replyChannel"/>
And using a gateway, this all looks much cooler now:
OrderService service = context.getBean("orderService", OrderService.class);
Order result = service.validateOrder(new Order(4321));
I am facing MQ error 2018 on connecting to the broker, and I haven't really been able to figure out what the problem is.
It is extremely simple code, and this is how it works:
Connects to MQ
Reads
Closes the read queue
Writes
Closes the write queue
Disconnects from the queue manager, and repeats the above process.
try {
    if (mqConnect()) {
        mqRead();
        queue.close();   // close the read queue
        mqWrite();
        queue.close();   // close the write queue
        mqDisconnect();
    }
}
finally {
    if (mqQueueManager != null) {
        // note: if mqDisconnect() above already ran, this disconnects a second time
        mqDisconnect();
    }
}
Can someone suggest what I am doing wrong, please?
2018 means the MQQueueManager instance being used in your application is invalid. There are a number of reasons for throwing 2018; the most common is an attempt to use an MQQueueManager instance after its Disconnect method has been called.
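For illustration, a contrived sketch of that failure mode using the MQ classes for Java (queue manager and queue names hypothetical):

import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

MQQueueManager qMgr = new MQQueueManager("QM1");
qMgr.disconnect();
// Any use of the handle after disconnect() raises MQException with
// reason code 2018 (MQRC_HCONN_ERROR):
MQQueue in = qMgr.accessQueue("IN.QUEUE", CMQC.MQOO_INPUT_AS_Q_DEF);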
I am not sure which method call in your program is throwing 2018. It would help if you could post the actual code and point out the failing method call.
Well, it appears you are trying to get messages from a queue when the connect failed. Also, where is the code to open the queue? Please show us the real code for what you are trying to do.