How to add information to every span with Spring Cloud Sleuth - java

I am currently trying to understand how I can customize Spring Cloud Sleuth in a scalable way to add information to every Span.
What I have tried so far:
Using my own implementations of GenericFilterBean and HandlerInterceptorAdapter, giving them a Tracer in the constructor and writing tags every time they are called with tracer.addTag("key", "value").
I had a look at the new baggage feature - however, I understand it to be global for the whole trace, and as a trace spans several requests across different services/machines, it would not fit my purpose of adding information at the service/machine level.
So far the tags from the filter and interceptor get set for some spans but not for all of them, as I can see when I inspect the JSON that is written to my Kafka topic through spring-cloud-stream-binder-kafka.
So my question would be: which types of requests/actions exist that create spans, and what are the appropriate ways to inject something into those spans? As I want to deploy this implementation to several microservices, I do not want to annotate each and every method or take similarly work-intensive and therefore not scalable approaches.

There are a lot of such places... but actually, we can tackle the problem from another angle. There's a single place where you can hook in - when the span is closed. See https://github.com/spring-cloud/spring-cloud-sleuth/blob/master/spring-cloud-sleuth-core/src/main/java/org/springframework/cloud/sleuth/SpanReporter.java - you can create your own implementation of SpanReporter that adds a tag before delegating to, for example, the Zipkin span reporter. An even easier way is to just register a SpanAdjuster bean that adjusts the span before it gets reported. That way you add the tag in only one place.
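A minimal sketch of the SpanAdjuster approach, assuming the Sleuth 1.x API that the SpanReporter link above belongs to (the tag key and value are placeholders):

import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.SpanAdjuster;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SleuthTagConfiguration {

    // Called for every span just before it is reported,
    // so the tag ends up on all spans of this service.
    @Bean
    public SpanAdjuster customTagSpanAdjuster() {
        return span -> {
            span.tag("my.custom.tag", "my-value"); // placeholder key/value
            return span;
        };
    }
}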

Related

How to name a Span in Spring Sleuth in a reactive call chain

I have Spring Cloud Sleuth (2.0.2.RELEASE) working within a (partially) reactive class in a web-based request/response system. The code is something like this:
private static final Scheduler PROCESSING_SCHEDULER = Schedulers.newParallel("processingScheduler");

public Set<ProcessedItem> processItems(List<Item> incomingItems) {
    return Flux.fromIterable(incomingItems)
            .publishOn(PROCESSING_SCHEDULER)
            .collectMultimap(Item::getRequestIdentifier)
            .flatMapIterable(Map::values)
            .flatMap(itemProcessor::processGroupedItems)
            .collect(Collectors.toSet())
            .block();
}
As there are quite a lot of responses coming in all the time, this method is called several hundred times for one single request.
I suspect that this call with .publishOn leads to hundreds and thousands of async spans in Zipkin (see attached screenshot). At least I assume the spans come from there, because that is what I understand from the documentation.
So my first question would be:
How can I associate a name with such async threads? I don't have a place to put @SpanName here.
As a follow-up, is there any way to NOT collect these spans? I don't need them and they fill up our Zipkin storage, but I also don't want to disable reactive support or Sleuth in general, since it is needed in other places...
Screenshot from Zipkin
You can create your own custom SpanAdjuster that will modify the span name. You can also use FinishedSpanHandler to operate on finished spans to tweak them.
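A sketch of the FinishedSpanHandler route, assuming a Sleuth 2.x version whose Brave dependency ships brave.handler.FinishedSpanHandler and which picks up such beans automatically; the check on the span name is only an illustration of how the noisy async spans could be renamed or dropped:

import brave.handler.FinishedSpanHandler;
import brave.handler.MutableSpan;
import brave.propagation.TraceContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AsyncSpanHandlerConfiguration {

    @Bean
    public FinishedSpanHandler asyncSpanHandler() {
        return new FinishedSpanHandler() {
            @Override
            public boolean handle(TraceContext context, MutableSpan span) {
                if (span.name() != null && span.name().startsWith("async")) {
                    // either rename it: span.name("item-processing");
                    // or drop the span entirely by returning false
                    return false;
                }
                return true; // keep all other spans
            }
        };
    }
}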

A very specific usage of callbacks in Java

This question is about a specific usage of the callback pattern. By callback I mean an interface on which I can define method(s) that is (are) optionally called from a lower layer in my application (with a default of 'do nothing', thanks to Java 8 default methods). My "application" is in fact a product which may change a lot between client projects, so I need to separate things in order to reuse what won't change (technical code, integration of technologies) from the rest (model, rules).
Let's take an example :
I developed a search service which is based upon Apache CXF JAX-RS Search.
This service parses a FIQL query, which can only handle AND/OR conditions with =/</>/LIKE/... operators, to create a JPA criteria query. I can't use a condition like 'isNull'.
Using a specific interface I can define a callback that is called once my search service gets the criteria query from the Apache CXF layer, and add my conditions to the existing ones before the query is executed. These conditions are defined in the upper layer of my search service (the RestController). This is in order to reduce code duplication, like returning a criteria query and finalizing it in every method where I need it, and because using @Transactional in a CXF JAX-RS controller does not play well with Spring proxies and CXF (some JAX-RS annotations are ignored).
First question: does this example seem to be a good idea in terms of design?
Now another example: I have an object whose basic fields are created by a service layer. But I want to be able to set other non-nullable fields, not related to the service's process, before the entity is persisted. These fields may vary from one project to another, so I'd like not to have to change the signature of my service's method every time we add/remove columns. So again I'm considering using a callback pattern to be able to set them within the same transaction, before the object is persisted by the service layer.
Second question: what about this example?
Global question: apart from the classic usage of callbacks for events, is it good practice to use this pattern for such specific usages, or is there a better way to handle it?
If you need some code samples, ask me and I'll make some (I can't post my current code).
I wouldn't say that what you've described is a very specific usage of "an interface from which I can define method(s) that is (are) optionally called from a lower layer". I think it is a reasonable and also quite common solution.
Your doubts may be due to the naming. I'd rather use the term command pattern here; it seems less confusing to me. Your approach also resembles the strategy pattern, i.e. you provide (inject) an object which performs some computation. Depending on the context, you inject objects that behave in different ways (for example, adding different conditions to a query).
To sum up, callbacks/commands are not only used for events; I'd even say that events are a specific usage of them. The command/callback pattern is used whenever we need to encapsulate an operation within an object and transfer/pass it somehow (by the way, in Java there is no other way to do so, whereas for example C++ has pointers to member functions and C# has delegates...).
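As a rough illustration of the kind of callback described in the question (all names here are made up, and the "deletedAt" isNull predicate is just an example of a condition FIQL cannot express):

import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;

// Callback invoked by the search service after the FIQL parser has
// produced the criteria query; the Java 8 default is "do nothing".
public interface SearchQueryCallback<T> {

    default void afterQueryBuilt(CriteriaBuilder cb, CriteriaQuery<T> query, Root<T> root) {
        // default: leave the query exactly as the FIQL layer built it
    }
}

// An upper-layer implementation that appends an isNull condition
// to whatever restrictions the FIQL query already produced.
class SoftDeleteAwareCallback<T> implements SearchQueryCallback<T> {

    @Override
    public void afterQueryBuilt(CriteriaBuilder cb, CriteriaQuery<T> query, Root<T> root) {
        Predicate existing = query.getRestriction();
        Predicate notDeleted = cb.isNull(root.get("deletedAt")); // hypothetical attribute
        query.where(existing == null ? notDeleted : cb.and(existing, notDeleted));
    }
}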
As to your second example: I'm not sure I understand it correctly. Why can't you simply populate all required fields of the object before calling the service?

Overriding Sling DefaultGetServlet.java

I'm working on a servlet that performs some logic specific to a resourceType in Sling, sets information on the request so it is accessible in the JSP, and then hands the request off to the JSP, similarly to the first solution provided in this answer.
Here's some example code to represent my situation:
@SlingServlet(
    resourceTypes = "myapp/components/mycomponent",
    methods = "GET",
    extensions = {"html"}
)
...
@Reference
private ServletResolver servletResolver;

protected void doGet(....) {
    setPropertiesToRequest();
    Servlet servlet = servletResolver.resolveServlet(resource, "....jsp");
    servlet.service(slingRequest, slingResponse);
    clearPropertiesFromRequest();
}
Because of this, I've noticed that I've lost Sling's selector handling (I've had to roll my own simpler version to determine which JSP to render; full-featured Sling selector handling is described in more detail here). I wanted to reach out to the Stack Overflow community and ask what else I may be missing out on by depriving the default GET handler of the request. I've scanned through the source code, but I think there may be more going on.
Secondly, I'd be interested in thoughts on how and where this approach may impact performance of the request resolution.
Thanks, Thomas
Processing the business logic in Java and delegating to scripts for rendering sounds like a job for the recently released Sling Models. Using that should remove the need to implement your own handling of selectors, as those won't affect the model selection, only the rendering scripts.
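A minimal Sling Models sketch of that idea (class and property names are illustrative): the model carries the component logic, and the rendering script selected by Sling's normal selector handling simply adapts the resource to it.

import javax.inject.Inject;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.models.annotations.Model;

// Adaptable from the component's resource; the rendering script just
// calls resource.adaptTo(MyComponentModel.class) to get the model.
@Model(adaptables = Resource.class)
public class MyComponentModel {

    @Inject
    private String title; // injected from the resource's "title" property

    public String getTitle() {
        return title;
    }
}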
Not sure what you are trying to achieve here, but the main problem seems to me to be that your SlingServlet handles the html extension and does not itself declare selectors to filter a bit more. Thus it of course intercepts all requests to your component, and then you have to take care of the selectors again to be able to choose the correct JSP.
The question is: why do you use a SlingServlet at all when you do the rendering in a JSP anyway?
Can't you implement your logic in the JSP, or better, in a bean referenced from the JSP?
In our company we use a custom tag that takes care of this, but there are public frameworks available from other Adobe partners:
https://github.com/Cognifide/Slice
http://neba.io/index.html

Writing operation logs using servlet filters

We want to write operation logs in our application for all operations made against the database. The operation log should contain the operation info (whether the data is being added/modified/deleted) and the result of the operation (success/failure).
Since there are a large number of action classes, adding the code to write the operation log in each action class looks difficult, so I thought of writing this part of the code in a servlet filter.
But I have a problem here: I need to know the operation status (success/failure), and this is not possible in the filter without parsing the response object, which looks difficult.
Can you suggest any alternative way to do this?
Thanks,
Chandra
If your application uses an AOP-capable framework like Spring, you can define aspects which match on criteria like classes in a particular package or methods of a particular type (get/set/both). Using these aspects you can add the logging.
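A sketch of what such an aspect could look like with Spring AOP (the package name and the plain console logging are assumptions):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class OperationLogAspect {

    // Fires when any method in the (assumed) service package returns normally.
    @AfterReturning("execution(* com.example.app.service..*(..))")
    public void logSuccess(JoinPoint jp) {
        System.out.println("Operation succeeded: " + jp.getSignature().toShortString());
    }

    // Fires when such a method throws, covering the failure case of the operation log.
    @AfterThrowing(pointcut = "execution(* com.example.app.service..*(..))", throwing = "ex")
    public void logFailure(JoinPoint jp, Throwable ex) {
        System.out.println("Operation failed: " + jp.getSignature().toShortString() + " - " + ex);
    }
}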
I think the best way to achieve this is to add some extra logging at the JDBC driver level. In the past I used the Log4JDBC project.

Creating user events on certain actions. What is the recommended solution?

This question is about how to effectively create and persist event domain objects on certain system- or user-triggered events, which themselves may or may not persist changes to the database.
I'm creating a system where a user can tag some object, and when tagging occurs I should create a UserTagEvent which holds the object that was tagged, the tag that was applied or removed, and the user that tagged the object. (EDIT: this is not the actual TAG object, just a log of a tagging event.)
The relationship of such a taggable object is one-to-many (a taggable object has many tags)
As far as I can see I have three alternatives:
1. Inline code in the controller/service which does the tagging (don't wanna do this as it mixes two different business processes).
2. Use Hibernate listeners pre-collection-update and post-collection-update to fetch the necessary information and create and persist a new UserTagEvent.
3. Use AOP.
Do I have any other alternatives? Has anyone done something similar to this? What do you guys think I should do? Any help is appreciated.
It is not 100% clear if the UserTagEvent represents the actual tag or if it just acts as a log for a tag event.
Use hibernate listeners pre-collection-update and post-collection-update to fetch the necessary information and create and persist a new UserTagEvent
If the UserTagEvent is your tag, the Hibernate listeners would not make much sense, because they would only get fired when you create a UserTagEvent and add it to the object yourself, and then you have gained nothing.
Inline code in the controller/service which does the tagging (don't wanna do this as it mixes two different business processes.)
I would start by creating a TagService that is responsible for tagging/tag logging. You could call it either from a controller or from AOP, but you should encapsulate the functionality behind something like tagService.createTag(tag, object, user).
This could be handy especially if you later want to use a different technology to store the events, like some NoSQL solution.
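A rough sketch of such a TagService (everything besides the createTag call mentioned above is made up for illustration, with placeholder nested types so the sketch stands alone):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Encapsulates tagging and tag-event logging in one place, so callers
// (controllers, aspects, ...) never deal with UserTagEvent directly.
@Service
public class TagService {

    private final UserTagEventStore eventStore;

    public TagService(UserTagEventStore eventStore) {
        this.eventStore = eventStore;
    }

    @Transactional
    public void createTag(Object taggedObject, String tag, String user) {
        // ... apply the tag to the object here ...
        eventStore.save(new UserTagEvent(taggedObject, tag, user)); // log the tagging event
    }

    // Minimal placeholders standing in for the real domain/persistence types.
    public interface UserTagEventStore {
        void save(UserTagEvent event);
    }

    public record UserTagEvent(Object taggedObject, String tag, String user) {}
}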
The following is what I learned when exploring my options:
1) Inline code in the controller/service which does the tagging (don't wanna do this as it mixes two different business processes.)
Didn't give this alternative a try.
2) Use hibernate listeners pre-collection-update and post-collection-update to fetch the necessary information and create and persist a new UserTagEvent
This turned out to be very difficult, inefficient, and problematic for several reasons.
For example, you are working with a collection of items which may or may not be lazily initialized. In order to detect changes in the collection I had to listen for the collection initialization event, get a cloned collection, store it in a field, then listen for the update collection event, get another cloned collection and compare it with the collection previously stored.
In addition, these listeners got fired for ALL Hibernate collection events, not just for the domain objects I was interested in. So this was a "no go"...
3) Use AOP.
I was originally very optimistic about this solution, but after a few tries I soon came to realize that this wasn't as simple as I first thought. There were very few guides on the web describing Grails AND AOP, and those that existed were rather old.
There was a lot more work involved than I originally thought. My overall impression is that Grails seems to have a lot of bugs associated with AOP integration, and I also didn't like the fact that I had to add bean definitions to resources.groovy for each aspect that I created. I tried to have aspects autoloaded through annotations (auto-proxy), but with no luck.
In addition, I never got the pointcut to work outside the main project. As my tagging solution is defined as a Grails plugin, it seems that AOP can't be applied to classes of the plugin (even if it is an in-place plugin).
So this turned out to be a "no go" as well.
So drum roll please.
What I ended up with was using the observer pattern to fire off an event whenever a tag was added or removed. This involved making changes to my tagger plugin so that I could specify listeners through Spring beans (which implement a TagEventListener interface) and have the tagger plugin fire off events on those beans from the addTag and removeTag method calls.
Overall I'm pretty happy with this solution. It involves one or two more method calls than would be necessary if I had just inlined the code as described in option 1, but this way I have cleaner code and I don't mix business processes. So I think the extra 1 ns of overhead is worth it.
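A minimal sketch of that listener wiring (the TagEventListener name comes from the description above; everything else, including the method signatures, is illustrative):

import java.util.List;

// Implemented by Spring beans that want to be notified of tagging events.
public interface TagEventListener {
    void onTagAdded(Object taggedObject, String tag, String user);
    void onTagRemoved(Object taggedObject, String tag, String user);
}

// Inside the tagger plugin: the registered listener beans are injected,
// and every add/remove call notifies them after applying the change.
public class Tagger {

    private final List<TagEventListener> listeners;

    public Tagger(List<TagEventListener> listeners) {
        this.listeners = listeners;
    }

    public void addTag(Object taggedObject, String tag, String user) {
        // ... apply the tag to the object ...
        listeners.forEach(l -> l.onTagAdded(taggedObject, tag, user));
    }

    public void removeTag(Object taggedObject, String tag, String user) {
        // ... remove the tag from the object ...
        listeners.forEach(l -> l.onTagRemoved(taggedObject, tag, user));
    }
}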
