Here is a simple server application using Bonjour, written in Java. The main part of the code is given here:
public class ServiceAnnouncer implements IServiceAnnouncer, RegisterListener {
    private DNSSDRegistration serviceRecord;
    private boolean registered;

    public boolean isRegistered() {
        return registered;
    }

    public void registerService() {
        try {
            serviceRecord = DNSSD.register(0, 0, null, "_killerapp._tcp", null, null, 1234, null, this);
        } catch (DNSSDException e) {
            // error handling here
        }
    }

    public void unregisterService() {
        serviceRecord.stop();
        registered = false;
    }

    public void serviceRegistered(DNSSDRegistration registration, int flags, String serviceName, String regType, String domain) {
        registered = true;
    }

    public void operationFailed(DNSSDService registration, int error) {
        // do error handling here if you want to
    }
}
I understand it in the following way. We can try to register a service by calling the "registerService" method, which in turn calls "DNSSD.register". "DNSSD.register" tries to register the service, and in the general case this ends with one of two results: the service is successfully registered, or registration fails. In both cases "DNSSD.register" calls the corresponding method (either "serviceRegistered" or "operationFailed") on the object passed to it as the last argument, and the programmer decides what to put into "serviceRegistered" and "operationFailed". That much is clear.
But should I try to register the service again from "operationFailed"? I am afraid that this way my application would retry too frequently. Should I put some "sleep" or "pause" into "operationFailed"? But in any case, it seems to me that while the application is unable to register the service, it will also be unable to do anything else (for example, take care of the GUI). Or does "DNSSD.register" introduce some kind of parallelism? I mean, if it starts a new thread, then by retrying from "operationFailed" I could generate a huge number of threads. Can that happen? If so, is it a problem, and how can I resolve it?
Yes, callbacks from the DNSSD APIs can come asynchronously from another thread. This excerpt from the O'Reilly book on ZeroConf networking gives some useful information.
I'm not sure retrying the registration from your operationFailed callback is a good idea. At least without some understanding of why the registration failed, is simply retrying it with the same parameters going to make sense?
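If you do decide to retry from operationFailed, doing it through a single-threaded scheduler with a delay and a retry cap avoids both busy-looping and spawning a thread per failure. A minimal sketch, where the executor, the 10-second delay, and the cap of 5 attempts are my own illustrative assumptions rather than anything the DNSSD API prescribes:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Inside ServiceAnnouncer: one extra thread total, bounded retries, fixed delay.
private final ScheduledExecutorService retryExecutor =
        Executors.newSingleThreadScheduledExecutor();
private final AtomicInteger retriesLeft = new AtomicInteger(5);

public void operationFailed(DNSSDService registration, int error) {
    if (retriesLeft.getAndDecrement() > 0) {
        // Schedule one delayed retry instead of calling registerService() inline;
        // the callback thread returns immediately and no threads pile up.
        retryExecutor.schedule(this::registerService, 10, TimeUnit.SECONDS);
    }
}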
The opentelemetry-javaagent-all agent (versions 0.17.0 and 1.0.1) has been the starting point for adding trace information to my Java application. Auto-instrumentation works great.
Some of my application cannot be auto-instrumented. For this part of the application, I began by adding @WithSpan annotations to interesting spots in the code.
I have now reached the limits of what seems possible with simple @WithSpan annotations. However, the framework underlying my app allows me to register callbacks to be invoked at certain points -- e.g. I can provide handlers that are notified when a client connects / disconnects.
What I think I need is to start a new Span when Foo.onConnect() is called, and set it to be the parent of the Spans created for each request.
public class Foo {
    void onConnect() {
        // called when a client connects to my app
        // Here I want to create a Span that will be the parent of the Span created in
        // Foo.processEachRequest().
    }

    @WithSpan
    public void processEachRequest() {
        // works, but since it is called for each request... each span is in a separate Trace
    }

    void onDisconnect() {
        // called when the client disconnects from my app
        // Here I can end the parent Span.
    }
}
Other ideas that didn't work out:
1 - The obvious solution would be to add @WithSpan annotations to the underlying framework. For various reasons, this is not going to be a practical way forward.
2 - Next choice might be to search for a way to tell the javaagent about methods in my underlying framework. (The New Relic agent can do something like this.) That doesn't seem to be a feature of the open-telemetry agent, today anyway.
So, I'm left with looking for a way to do this using the callbacks, as above.
Is there a way to do this?
That should be possible by manually instrumenting your code. You would use the Tracer interface of OpenTelemetry, as described in the OpenTelemetry Java docs.
This should give you a general idea:
public class Foo {

    private Span parentSpan; // you might need a Map/List/Stack here

    void onConnect() {
        Tracer tracer =
            openTelemetry.getTracer("instrumentation-library-name", "1.0.0");
        Span span = tracer.spanBuilder("my span").startSpan();
        this.parentSpan = span; // might need to store span per request/client/connection-id
    }

    public void processEachRequest() {
        final Span parent = this.lookupParentSpan();
        if (parent != null) {
            try (Scope scope = parent.makeCurrent()) {
                yourLogic();
            } catch (Throwable t) {
                parent.setStatus(StatusCode.ERROR, "error message");
                throw t;
            }
        } else {
            yourLogic();
        }
    }

    void onDisconnect() {
        final Span parent = this.lookupParentSpan();
        if (parent != null) {
            parent.end();
        }
    }

    private Span lookupParentSpan() {
        // you probably want to look up the span by client or connection id from a (weak) map
        return this.parentSpan;
    }
}
NB: You must guarantee that a span is always ended and does not leak. Make sure to properly scope your spans and eventually call Span#end().
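For the per-connection bookkeeping hinted at above, one option is a ConcurrentHashMap keyed by a connection id. This is a sketch only; the connectionId parameter is a hypothetical hook that your framework's callbacks would have to supply:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

private final Map<String, Span> spansByConnection = new ConcurrentHashMap<>();

void onConnect(String connectionId) {
    Tracer tracer = openTelemetry.getTracer("instrumentation-library-name", "1.0.0");
    spansByConnection.put(connectionId, tracer.spanBuilder("connection").startSpan());
}

void onDisconnect(String connectionId) {
    // remove-then-end: the span can no longer be looked up afterwards, so it cannot leak
    Span parent = spansByConnection.remove(connectionId);
    if (parent != null) {
        parent.end();
    }
}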
When I trigger a scheduled event via a Command, I do not see the expected Event Handlers trigger. I am trying to isolate a one-time business transaction in a Saga while still allowing Aggregates to be event sourced so that state changes can be replayed.
I have configured the following SimpleEventScheduler.
@Bean
public SimpleEventScheduler simpleEventScheduler(EventBus eventBus) {
    return SimpleEventScheduler.builder()
            .eventBus(eventBus)
            .scheduledExecutorService(scheduledExecutorService())
            .build();
}

private ScheduledExecutorService scheduledExecutorService() {
    return Executors.unconfigurableScheduledExecutorService(Executors.newSingleThreadScheduledExecutor());
}
I have an aggregate modeled that has a @CommandHandler
@CommandHandler
public Letter(ScheduleLetterCommand cmd, EventScheduler scheduler) {
    String id = cmd.getLetterId();
    log.info("Received schedule command for letter id {}", id);
    ScheduleToken scheduleToken = scheduler.schedule(Duration.ofSeconds(5), new BeginSendLetterEvent(id, LetterEventType.BEGIN_SEND));
    AggregateLifecycle.apply(new LetterScheduledEvent(id, LetterEventType.SCHEDULED, scheduleToken));
}
and two @EventSourcingHandler methods
@EventSourcingHandler
public void on(BeginSendLetterEvent event) {
    log.info("Letter sending process started {} {}", event.getLetterId(), event.getEventType());
    scheduleToken = null;
}

@EventSourcingHandler
public void on(LetterSentEvent event) {
    log.info("Letter sent {} {}", event.getLetterId(), event.getEventType());
    this.sent = true;
}
I have a saga that does some 'business logic' when BeginSendLetterEvent is triggered and publishes LetterSentEvent.
@Saga
@Slf4j
public class LetterSchedulingSaga {

    private EventGateway eventGateway;

    public LetterSchedulingSaga() {
        // Axon requires empty constructor
    }

    @StartSaga
    @EndSaga
    @SagaEventHandler(associationProperty = "letterId")
    public void handle(BeginSendLetterEvent event) {
        log.info("Sending letter {}...", event.getLetterId());
        eventGateway.publish(new LetterSentEvent(event.getLetterId(), LetterEventType.SENT));
    }

    @Autowired
    public void setEventGateway(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }
}
Here is my output:
com.flsh.web.LetterScheduler : Received request to schedule letter
com.flsh.web.LetterScheduler : Finished request to schedule letter
com.flsh.axon.Letter : Received schedule command for letter id b7338082-e0e1-4ba0-b137-c7ff92afe3a1
com.flsh.axon.Letter : LetterScheduledEvent b7338082-e0e1-4ba0-b137-c7ff92afe3a1 SCHEDULED
com.flsh.axon.LetterSchedulingSaga : Sending letter b7338082-e0e1-4ba0-b137-c7ff92afe3a1...
The thing is, I am not seeing the two event handlers above being triggered at all. Can someone see what I am doing wrong here? :) Any help would be appreciated...
If this is the wrong way to use Sagas and Event Handlers please let me know. I realize my rudimentary example doesn't facilitate a good domain model.
The short answer to your problem, @GoldFish, is that you are expecting to handle events in your Command Model.
The aggregate, in Axon terms, is a Command Handling Component, and as such it is part of your Command Model when thinking about CQRS.
It handles command messages and validates whether the given operation (read: command) can be executed. If the outcome of that validation is "yes", that is when you end up publishing an event in the lifecycle of a given aggregate instance.
The @EventSourcingHandler annotated methods you can introduce into an aggregate are there to "source the aggregate instance based on its own events".
Having said that, you can anticipate that an aggregate will never handle events directly from any source other than itself.
The EventScheduler is just as much an external source of events as another aggregate's events would be when sourcing. Hence, they will be disregarded by the aggregate.
The EventScheduler will publish an event at a later stage, so that it might be handled by Event Handling Components, for example Saga instances.
If you want to schedule that something should occur for a specific aggregate or saga instance, you should have a look at the DeadlineManager instead.
Regardless, for what you're trying to achieve, which (I believe) is triggering an operation in your aggregate from a saga, you should use command messages, since the aggregate can only handle command messages.
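To make that concrete, the saga would dispatch a command through a CommandGateway instead of publishing LetterSentEvent itself, and the aggregate would apply the event. A sketch only: SendLetterCommand is a hypothetical command class, and the commandGateway would be injected into the saga the same way the eventGateway is above:
// In the saga: trigger the aggregate with a command, not an event
@SagaEventHandler(associationProperty = "letterId")
public void handle(BeginSendLetterEvent event) {
    log.info("Sending letter {}...", event.getLetterId());
    commandGateway.send(new SendLetterCommand(event.getLetterId()));
}

// In the Letter aggregate: handle the command and apply the event
@CommandHandler
public void handle(SendLetterCommand cmd) {
    AggregateLifecycle.apply(new LetterSentEvent(cmd.getLetterId(), LetterEventType.SENT));
}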
I have used Spring declarative retry in my project like this:
@Service
class Service {

    @Async
    @Retryable(maxAttempts = 12, backoff = @Backoff(delay = 100, maxDelay = 500))
    public void service() {
        // ... do something
    }
}
Now, I have two questions.
Is it fine to use retry together with async? I don't have any issue; I just want to be sure.
The second requirement is: if the process fails, I want to log it to a log file, including the number of remaining retries. So, is there a way to pass in, or obtain, the number of remaining retries from inside the method?
There is no way around this with the annotations alone; a @Recover annotated method executes only after the last failed retry, not after each failure.
Refer to this GitHub documentation.
An excerpt from the link above: "Call the "service" method and if it fails with a RemoteAccessException then it will retry (up to three times by default), and then execute the "recover" method if unsuccessful."
Even when using RetryTemplate, the recovery callback is invoked only after all retries are exhausted.
Another excerpt from the same link: "When a retry is exhausted the RetryOperations can pass control to a different callback, the RecoveryCallback. To use this feature clients just pass in the callbacks together to the same method"
You should use the @Recover annotation to perform an action on each failure and keep a count inside your object, outside of the methods. Make sure no other methods interact with this counter. Here is the basic premise:
@Service
class Service {

    private int attemptsLeft = 12;

    @Retryable(maxAttempts = 12, backoff = @Backoff(delay = 100, maxDelay = 500))
    public void service() {
        // ... do something that throws a KnownException you create to catch later on
    }

    @Recover
    public void connectionException(KnownException e) {
        this.attemptsLeft = this.attemptsLeft - 1; // decrease your failure counter
        Logger.warn("Retry attempts left: {}", attemptsLeft);
    }
}
If you don't want a member variable tracking the count, you may need to ditch the annotations and use a RetryTemplate directly, which gives you access to the RetryContext and its getRetryCount() method.
public String serviceWithRetry() {
    RetryTemplate retryTemplate = new RetryTemplate();

    final SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(12);
    retryTemplate.setRetryPolicy(retryPolicy);

    FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
    backOffPolicy.setInterval(100L);
    retryTemplate.setBackOffPolicy(backOffPolicy);

    return retryTemplate.execute(new RetryCallback<String, RuntimeException>() {
        @Override
        public String doWithRetry(RetryContext context) {
            LOG.info("Retry of connection count: {}", context.getRetryCount());
            // ... your connection logic here; return its result
            return null;
        }
    });
}
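If all you want is a log line per failed attempt, another option is a RetryListener registered on the RetryTemplate above: its onError hook runs after every failure, and getRetryCount() reports the attempts made so far (subtract from your maximum to get the remaining count). A sketch, assuming the same LOG and the maxAttempts of 12 from the code above:
retryTemplate.registerListener(new RetryListenerSupport() {
    @Override
    public <T, E extends Throwable> void onError(RetryContext context,
            RetryCallback<T, E> callback, Throwable throwable) {
        // runs on every failed attempt, not just the last one
        LOG.warn("Attempt {} failed, {} attempts left",
                context.getRetryCount(), 12 - context.getRetryCount());
    }
});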
I am trying to create a client library that reads JSON from an external file online. I already know about functional interfaces and optionals, but I was wondering whether there is a way to let users supply callback functions such that the parent function exits completely. In JavaScript, such a function looks like this:
file.read('hello', function(err, data) {
// something here
});
Basically, I wish to do the same in Java. How can I do this such that the error callback supersedes the read function? What I mean is that if the error callback is called, then read should not return a value at all; if the callback is not called, then read should return the value.
You could have the user pass in a function and then just not do anything with it if there is no error.
This example assumes that you have a custom class called Error that the caller is aware of and would like to interact with in case of an error.
public void read(String str, Function<Error, Void> errorFunc) {
    // interact w/ libraries; boolean error = true or false
    // if there is an error, variable err of type Error contains information
    if (error) {
        errorFunc.apply(err);
    }
}
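At the call site, the caller supplies a lambda; since Function<Error, Void> must return a value, the lambda has to end with return null. (Here reader is whatever instance exposes the read method above.)
reader.read("hello", err -> {
    System.err.println("read failed: " + err);
    return null; // Void requires an explicit null return
});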
In Java up to 1.7, the only way to achieve JavaScript-like callbacks is through an interface. The API user who calls your read method is free to implement whatever error handling they feel is needed by writing an implementation class for the interface at the invocation point.
public String read(String options, IErrorCallBack errorHandler) throws Exception {
    try {
        // When everything works fine, return what you think should be returned.
        return "Success";
    } catch (Exception e) {
        // On error, call the function on the error handler.
        errorHandler.doSomething();
        throw e;
    }
}

public interface IErrorCallBack {
    public void doSomething();
}

// The invocation point.
read("myString", new IErrorCallBack() {
    public void doSomething() {
        // Your implementation.
    }
});
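On Java 8+, you can also mirror the JavaScript (err, data) shape directly with a BiConsumer, where exactly one of the two arguments is non-null. A minimal sketch; fetchJson is a hypothetical helper standing in for your actual I/O:
import java.util.function.BiConsumer;

public void read(String name, BiConsumer<Throwable, String> callback) {
    try {
        String data = fetchJson(name); // hypothetical: fetch and parse the JSON
        callback.accept(null, data);   // success: no error, data present
    } catch (Exception e) {
        callback.accept(e, null);      // failure: error present, no data
    }
}

// Invocation, Node.js style:
file.read("hello", (err, data) -> {
    if (err != null) { /* handle the error */ } else { /* use data */ }
});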
I am new to Akka and trying to write some code in Play Framework 2 in Java that uses Akka.
To create an actor and send a test message to it, I have:
public class Global extends GlobalSettings {
    @Override
    public void onStart(Application app) {
        final ActorRef testActor = Akka.system().actorOf(Props.create(TestActor.class), "testActor");
        testActor.tell("hi", ActorRef.noSender());
    }
}
This works perfectly fine and I see that my actor received my message. Here is the code for my actor:
public class TestActor extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message.toString().equals("hi")) {
            System.out.println("I received a HI");
        } else {
            unhandled(message);
        }
    }
}
Very simple.
However, if I try to send a message from a controller:
public static Result index() {
    final ActorRef testActor = Akka.system().actorFor("testActor");
    testActor.tell("hi", ActorRef.noSender());
    return ok(index.render("Your new application is ready."));
}
I get this message in the terminal:
[INFO] [09/20/2014 11:40:30.850] [application-akka.actor.default-dispatcher-4] [akka://application/testActor] Message [java.lang.String] from Actor[akka://application/deadLetters] to Actor[akka://application/testActor] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
Can someone help me with this? Why does the first usage work while the second one fails? Thanks!
The actorFor method requires the entire path, and your actor lives in the user space, so you have to use actorFor("/user/testActor"). Currently you are sending it to application/testActor, which would be a top-level actor in the ActorSystem itself.
By the way, actorFor is deprecated (at least in the Scala API) and replaced by actorSelection.
For more information, refer to the excellent documentation.
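Concretely, once the path includes the /user guardian, either form will reach the actor; the second is the non-deprecated one:
// works, but actorFor is deprecated
final ActorRef testActor = Akka.system().actorFor("/user/testActor");
testActor.tell("hi", ActorRef.noSender());

// preferred: actorSelection resolves the path at send time
Akka.system().actorSelection("/user/testActor").tell("hi", ActorRef.noSender());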
actorFor should be given the path to the actor, which is probably "akka://System/user/testActor". It also does not create the actor, meaning the actor must already exist.
Anyway, is there a reason that inside the controller you use actorFor and not actorOf? It has been deprecated and shouldn't be used.