How to add instrumentation to GraphQL Java with graphql-spring-boot?

Does anybody know how I can add instrumentation to a GraphQL execution when using graphql-spring-boot (https://github.com/graphql-java-kickstart/graphql-spring-boot)? I know how this is possible with plain-vanilla graphql-java: https://www.graphql-java.com/documentation/v13/instrumentation/
However, I don't know how to do this when graphql-spring-boot is used and takes control of the execution. For lack of documentation, I simply tried it this way:
@Service
public class GraphQLInstrumentationProvider implements InstrumentationProvider {

    @Override
    public Instrumentation getInstrumentation() {
        return SimpleInstrumentation.INSTANCE;
    }
}
But the method getInstrumentation on my InstrumentationProvider bean is (as expected) never called. Any help appreciated.

Answering my own question. In the meantime I managed to do it this way:
final class RequestLoggingInstrumentation extends SimpleInstrumentation {

    private static final Logger logger = LoggerFactory.getLogger(RequestLoggingInstrumentation.class);

    @Override
    public InstrumentationContext<ExecutionResult> beginExecution(InstrumentationExecutionParameters parameters) {
        long startMillis = System.currentTimeMillis();
        var executionId = parameters.getExecutionInput().getExecutionId();
        if (logger.isInfoEnabled()) {
            logger.info("GraphQL execution {} started", executionId);
            var query = parameters.getQuery();
            logger.info("[{}] query: {}", executionId, query);
            if (parameters.getVariables() != null && !parameters.getVariables().isEmpty()) {
                logger.info("[{}] variables: {}", executionId, parameters.getVariables());
            }
        }
        return new SimpleInstrumentationContext<>() {
            @Override
            public void onCompleted(ExecutionResult executionResult, Throwable t) {
                if (logger.isInfoEnabled()) {
                    long endMillis = System.currentTimeMillis();
                    if (t != null) {
                        logger.info("GraphQL execution {} failed: {}", executionId, t.getMessage(), t);
                    } else {
                        var resultMap = executionResult.toSpecification();
                        // pojoToJSON: presumably a project-local helper that serializes the map to JSON
                        var resultJSON = ObjectMapper.pojoToJSON(resultMap).replace("\n", "\\n");
                        logger.info("[{}] completed in {}ms", executionId, endMillis - startMillis);
                        logger.info("[{}] result: {}", executionId, resultJSON);
                    }
                }
            }
        };
    }
}
@Service
class InstrumentationService {

    private final ContextFactory contextFactory;

    InstrumentationService(ContextFactory contextFactory) {
        this.contextFactory = contextFactory;
    }

    /**
     * Return all instrumentations as a bean.
     * The result will be used in class {@link com.oembedler.moon.graphql.boot.GraphQLWebAutoConfiguration}.
     */
    @Bean
    List<Instrumentation> instrumentations() {
        // Note: Due to a bug in GraphQLWebAutoConfiguration, the returned list has to be modifiable (it will be sorted)
        return new ArrayList<>(
                List.of(new RequestLoggingInstrumentation()));
    }
}
It helped me to have a look into the class GraphQLWebAutoConfiguration. There I found out that the framework expects a bean of type List<Instrumentation>, which contains all the instrumentations that will be added to the GraphQL execution.

There is a simpler way to add instrumentation with Spring Boot:
@Configuration
public class InstrumentationConfiguration {

    @Bean
    public Instrumentation someFieldCheckingInstrumentation() {
        return new FieldValidationInstrumentation(env -> {
            // ...
        });
    }
}
Spring Boot will collect all beans that implement Instrumentation (see GraphQLWebAutoConfiguration).
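So several such beans can simply be declared side by side and all of them get picked up. A minimal sketch, assuming graphql-java's built-in TracingInstrumentation (from the graphql.execution.instrumentation.tracing package) is on the classpath:
import graphql.execution.instrumentation.Instrumentation;
import graphql.execution.instrumentation.tracing.TracingInstrumentation;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TracingConfiguration {

    // Adds Apollo-style tracing timings under the "tracing" key
    // of the result's "extensions" map.
    @Bean
    public Instrumentation tracingInstrumentation() {
        return new TracingInstrumentation();
    }
}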

Related

How to implement pipeline design pattern using reactive webflux?

I have two simple interfaces called Processor and SeedPreProcessor, and they are defined as follows:
Processor:
public interface Processor<I, O> {
    Mono<O> process(I input);
}
SeedPreProcessor:
public interface SeedPreProcessor<D> extends Processor<D, D> {

    /**
     * Specify the location of this processor in the pipeline.
     *
     * @return the order
     */
    Integer order();

    String name();
}
and a PipeLine defined as follows:
public class PipeLine {

    private final List<SeedPreProcessor<PreProcessorDocument>> allProcessors;

    public PipeLine(List<SeedPreProcessor<PreProcessorDocument>> allProcessors) {
        this.allProcessors = new ArrayList<>(allProcessors);
        this.allProcessors.sort(comparingInt(SeedPreProcessor::order));
    }

    public Mono<PreProcessorDocument> execute(String url) {
        log.info("Start processing URL = {}", url);
        var initial = new PreProcessorDocument(url);
        return Flux
            .fromIterable(allProcessors)
            .map(proc -> proc.process(initial).t) // my problem is here
    }
}
I want, for an initial PreProcessorDocument, to execute all the SeedPreProcessors in the list allProcessors one by one.
How can I achieve this?
A simple for-each loop and the flatMap operator will do the trick:
public Mono<PreProcessorDocument> execute(String url) {
    var initial = new PreProcessorDocument(url);
    Mono<PreProcessorDocument> m = Mono.just(initial);
    for (SeedPreProcessor<PreProcessorDocument> p : allProcessors) {
        m = m.flatMap(p::process);
    }
    return m;
}
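If you prefer to avoid the explicit loop, the same chaining can be written as a fold. A sketch of an equivalent version using a plain Java stream reduction over the already-sorted list:
public Mono<PreProcessorDocument> execute(String url) {
    // Fold the processors into one Mono chain; each flatMap runs its
    // processor only after the previous one has emitted a document.
    return allProcessors.stream()
        .reduce(Mono.just(new PreProcessorDocument(url)),
                (mono, processor) -> mono.flatMap(processor::process),
                (left, right) -> left); // combiner is unused for sequential streams
}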

Resilience4j context propagator not able to propagate thread-local values

I am trying to migrate my circuit breaker code from Hystrix to Resilience4j. The communication is between two applications: one is an artifact that contains all the Resilience4j configuration in the Java code itself, and the second application, a microservice, uses it directly.
There's a RequestId that is generated in the microservice and propagated to the artifact's context, where it gets printed in the logs. With Hystrix this was working perfectly fine, but ever since I moved to Resilience4j, I am getting null for the RequestId.
Below is my config for the bulkhead and context propagator:
ThreadPoolBulkheadConfig bulkheadConfig = ThreadPoolBulkheadConfig.custom()
    .maxThreadPoolSize(maxThreadPoolSize)
    .coreThreadPoolSize(coreThreadPoolSize)
    .queueCapacity(queueCapacity)
    .contextPropagator(new DummyContextPropagator())
    .build();

// Bulkhead registry
ThreadPoolBulkheadRegistry bulkheadRegistry = ThreadPoolBulkheadRegistry.of(bulkheadConfig);

// Create bulkhead
ThreadPoolBulkhead bulkhead = bulkheadRegistry.bulkhead(name, bulkheadConfig);
Dummy Context Propagator:
public class DummyContextPropagator implements ContextPropagator<Object> {

    private static final Logger log = LoggerFactory.getLogger(DummyContextPropagator.class);

    @Override
    public Supplier<Optional<Object>> retrieve() {
        return DummyContextHolder::get;
    }

    @Override
    public Consumer<Optional<Object>> copy() {
        return (t) -> t.ifPresent(e -> {
            DummyContextHolder.clear();
            DummyContextHolder.put(e);
        });
    }

    @Override
    public Consumer<Optional<Object>> clear() {
        return (t) -> DummyContextHolder.clear();
    }

    public static class DummyContextHolder {

        private static final ThreadLocal<Object> threadLocal = new ThreadLocal<>();

        private DummyContextHolder() {
        }

        public static void put(Object context) {
            if (threadLocal.get() != null) {
                clear();
            }
            threadLocal.set(context);
        }

        public static void clear() {
            if (threadLocal.get() != null) {
                threadLocal.set(null);
                threadLocal.remove();
            }
        }

        public static Optional<Object> get() {
            return Optional.ofNullable(threadLocal.get());
        }
    }
}
However, nothing seems to work, and I cannot get the RequestId.
Am I doing everything right, or is there another way to do this?
I think you want to read thread-local values from the parent thread while you are in a sub-thread. In Hystrix, the command model is used to decorate the callable task.
In Resilience4j, I think you can fix it like this:
@Resource
DispatcherServlet dispatcherServlet;

@PostConstruct
public void changeThreadLocalModel() {
    dispatcherServlet.setThreadContextInheritable(true);
}
I found that my last answer may lead to some problems: when you use "dispatcherServlet.setThreadContextInheritable(true);", it may pollute your custom thread pool's ThreadLocalMap.
So here is my final solution, and it only works with Resilience4j:
@Resource
Resilience4jBulkheadProvider resilience4jBulkheadProvider;

@PostConstruct
public void concurrentThreadContextStrategy() {
    ThreadPoolBulkheadConfig threadPoolBulkheadConfig = ThreadPoolBulkheadConfig.custom()
        .contextPropagator(new CustomInheritContextPropagator())
        .build();
    resilience4jBulkheadProvider.configureDefault(id -> new Resilience4jBulkheadConfigurationBuilder()
        .bulkheadConfig(BulkheadConfig.ofDefaults())
        .threadPoolBulkheadConfig(threadPoolBulkheadConfig)
        .build());
}

private static class CustomInheritContextPropagator implements ContextPropagator<RequestAttributes> {

    @Override
    public Supplier<Optional<RequestAttributes>> retrieve() {
        // Grab a reference to the request context from the thread local.
        // Called on the web container thread (Tomcat, Jetty, or Undertow,
        // depending on what you use).
        return () -> Optional.ofNullable(RequestContextHolder.getRequestAttributes());
    }

    @Override
    public Consumer<Optional<RequestAttributes>> copy() {
        // Load the request context into the actual calling thread.
        // Called on the Resilience4j bulkhead thread.
        return requestAttributes -> requestAttributes.ifPresent(context -> {
            RequestContextHolder.resetRequestAttributes();
            RequestContextHolder.setRequestAttributes(context);
        });
    }

    @Override
    public Consumer<Optional<RequestAttributes>> clear() {
        // Clean up the request context when done.
        // Called on the Resilience4j bulkhead thread.
        return requestAttributes -> RequestContextHolder.resetRequestAttributes();
    }
}
I got the same problem with Spring Boot 2.5 and Spring Cloud 2020.0.6, and I solved it with an implementation of ContextPropagator:
public class SleuthPropagator implements ContextPropagator<TraceContext> {

    ThreadLocal<ScopedSpan> scopedSpanThreadLocal = new ThreadLocal<>();

    @Override
    public Supplier<Optional<TraceContext>> retrieve() {
        return this::getCurrentcontext;
    }

    @Override
    public Consumer<Optional<TraceContext>> copy() {
        return c -> {
            if (!c.isPresent()) {
                return;
            }
            TraceContext traceContext = c.get();
            ScopedSpan resilience4jSpan = getTracer()
                    .map(t -> t.startScopedSpanWithParent("Resilience4j", traceContext))
                    .orElse(null);
            scopedSpanThreadLocal.set(resilience4jSpan);
        };
    }

    @Override
    public Consumer<Optional<TraceContext>> clear() {
        return t -> {
            try {
                ScopedSpan resilience4jSpan = scopedSpanThreadLocal.get();
                if (resilience4jSpan != null) {
                    resilience4jSpan.finish();
                }
            } finally {
                scopedSpanThreadLocal.remove();
            }
        };
    }

    private static Optional<Tracer> getTracer() {
        return Optional.ofNullable(Tracing.current())
                .map(Tracing::tracer);
    }

    private Optional<TraceContext> getCurrentcontext() {
        return getTracer()
                .map(Tracer::currentSpan)
                .map(Span::context);
    }
}
And use the propagator by adding this to your application.properties:
resilience4j.thread-pool-bulkhead.instances.YOUR_BULKHEAD_CONFIG.context-propagators=com.your.package.SleuthPropagator

Reactor Mono - execute parallel tasks

I am new to the Reactor framework and trying to utilize it in one of our existing implementations. LocationProfileService and InventoryService both return a Mono, are to be executed in parallel, and have no dependency on each other (from the MainService). Within LocationProfileService, there are 4 queries issued, and the last 2 queries depend on the first one.
What is a better way to write this? I see the calls getting executed sequentially, while some of them should be executed in parallel. What is the right way to do it?
public class LocationProfileService {

    static final Cache<String, String> customerIdCache; // define Cache

    @Override
    public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
        // These 2 are not interdependent and can be executed immediately
        Mono<String> customerAccountMono = getCustomerArNumber(customerId, location)
            .subscribeOn(Schedulers.parallel())
            .switchIfEmpty(Mono.error(new CustomerNotFoundException(location, customerId)))
            .log();
        Mono<LocationProfile> locationProfileMono = Mono.fromFuture(/* location query */)
            .subscribeOn(Schedulers.parallel()).log();

        // Should block be called, or is there a better way to do this?
        String custAccount = customerAccountMono.block(); // needed: the value from this is used in the next 2 calls

        Mono<Customer> customerMono = Mono.fromFuture(/* query uses custAccount from earlier step */)
            .subscribeOn(Schedulers.parallel()).log();
        Mono<Result<LocationPricing>> locationPricingMono = Mono.fromFuture(/* query uses custAccount from earlier step */)
            .subscribeOn(Schedulers.parallel()).log();

        return Mono.zip(locationProfileMono, customerMono, locationPricingMono).flatMap(tuple -> {
            LocationProfileInfo locationProfileInfo = new LocationProfileInfo();
            // populate values from tuple
            return Mono.just(locationProfileInfo);
        });
    }

    private Mono<String> getCustomerAccount(String conversationId, String customerId, String location) {
        return CacheMono.lookup((Map) customerIdCache.asMap(), customerId)
            .onCacheMissResume(Mono.fromFuture(/* query */)
                .subscribeOn(Schedulers.parallel())
                .map(x -> x.getAccountNumber()));
    }
}
public class InventoryService {

    @Override
    public Mono<InventoryInfo> getInventoryInfo(String inventoryId) {
        Mono<Inventory> inventoryMono = Mono.fromFuture(/* inventory query */)
            .subscribeOn(Schedulers.parallel()).log();
        Mono<List<InventorySale>> isMono = Mono.fromFuture(/* inventory sale query */)
            .subscribeOn(Schedulers.parallel()).log();
        return Mono.zip(inventoryMono, isMono).flatMap(tuple -> {
            InventoryInfo inventoryInfo = new InventoryInfo();
            // populate value from tuple
            return Mono.just(inventoryInfo);
        });
    }
}
public class MainService {

    @Autowired
    LocationProfileService locationProfileService;

    @Autowired
    InventoryService inventoryService;

    public void mainService(String customerId, String location, String inventoryId) {
        Mono<LocationProfileInfo> locationProfileMono = locationProfileService.getProfileInfoByLocationAndCustomer(....);
        Mono<InventoryInfo> inventoryMono = inventoryService.getInventoryInfo(....);
        // is using block fine or is there a better way?
        Mono.zip(locationProfileMono, inventoryMono).subscribeOn(Schedulers.parallel()).block();
    }
}
You don't need to block in order to pass that parameter; your code is very close to the solution. I wrote the code using the class names that you provided. Just replace all the Mono.just(....) calls with the calls to the correct services.
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
    Mono<String> customerAccountMono = Mono.just("customerAccount");
    Mono<LocationProfile> locationProfileMono = Mono.just(new LocationProfile());
    return Mono.zip(customerAccountMono, locationProfileMono)
        .flatMap(tuple -> {
            Mono<Customer> customerMono = Mono.just(new Customer(tuple.getT1()));
            Mono<Result<LocationPricing>> result = Mono.just(new Result<LocationPricing>());
            Mono<LocationProfile> locationProfile = Mono.just(tuple.getT2());
            return Mono.zip(customerMono, result, locationProfile);
        })
        .map(LocationProfileInfo::new);
}

public static class LocationProfileInfo {
    public LocationProfileInfo(Tuple3<Customer, Result<LocationPricing>, LocationProfile> tuple) {
        // do whatever
    }
}

public static class LocationProfile {}

private static class Customer {
    public Customer(String customerAccount) {
    }
}

private static class Result<T> {}

private static class LocationPricing {}
Please remember that the first zip is not necessary. I rewrote it to match your solution, but I would solve the problem a little differently; it would be clearer.
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
    return Mono.just("customerAccount") // call the service
        .flatMap(customerAccount -> {
            // declare the call to get the customer
            Mono<Customer> customerMono = Mono.just(new Customer(customerAccount));
            // declare the call to get the location pricing
            Mono<Result<LocationPricing>> result = Mono.just(new Result<LocationPricing>());
            // declare the call to get the location profile
            Mono<LocationProfile> locationProfileMono = Mono.just(new LocationProfile());
            // in the zip call, all the services are actually executed
            return Mono.zip(customerMono, result, locationProfileMono);
        })
        .map(LocationProfileInfo::new);
}
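As for the side question in MainService ("is using block fine?"): usually you would avoid block() and return the composed Mono instead, letting the caller or framework subscribe. A minimal sketch of that idea (my own illustration, not from the answer above; error handling omitted):
public Mono<Void> mainService(String customerId, String location, String inventoryId) {
    Mono<LocationProfileInfo> locationProfileMono =
        locationProfileService.getProfileInfoByLocationAndCustomer(customerId, location);
    Mono<InventoryInfo> inventoryMono = inventoryService.getInventoryInfo(inventoryId);
    // zip subscribes to both Monos, so the two service calls run concurrently;
    // then() discards the tuple and just signals completion
    return Mono.zip(locationProfileMono, inventoryMono)
        .doOnNext(tuple -> { /* consume tuple.getT1() and tuple.getT2() */ })
        .then();
}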

CRUD Repository findById() different return value

In my Spring Boot application I'm using CrudRepository.
I have found a problem with the return value: Required != Found
GitHub: https://github.com/einhar/WebTaskManager/tree/findById-problem
Changing the method return type from Task to Object made the IDE stop showing the error, but that could cause problems with data-type validation later on.
Do you know how to fix it? Any hint?
CrudRepo
public interface TaskRepository extends CrudRepository<Task, Integer> {}
Service
@Service
@Transactional
public class TaskService {

    @Autowired
    private final TaskRepository taskRepository;

    public TaskService(TaskRepository taskRepository) {
        this.taskRepository = taskRepository;
    }

    public List<Task> findAll() {
        List<Task> tasks = new ArrayList<>();
        for (Task task : taskRepository.findAll()) {
            tasks.add(task);
        }
        return tasks; // Works properly :)
    }

    /* ... */

    public Task findTask(Integer id) {
        return taskRepository.findById(id); // Required: Task | Found: java.util.Optional<Task> :(
    }
}
The findById method returns an Optional, so you can get the task via its get() method. You can choose from the following 3 cases:
You will get an exception when the Task is not found:
public Task findTask(Integer id) {
    return taskRepository.findById(id).get();
}
You will get null when the Task is not found:
public Task findTask(Integer id) {
    return taskRepository.findById(id).orElse(null);
}
You will get an empty new Task when the Task is not found:
public Task findTask(Integer id) {
    return taskRepository.findById(id).orElse(new Task());
}
Or just return the Optional object:
public Optional<Task> findTask(Integer id) {
    return taskRepository.findById(id);
}
In your CrudRepository, create a method:
Task getById(Integer id);
and then call this method in your TaskService, and you should be ready to go :)
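A minimal sketch of that repository (my wording, assuming the Task entity from the question): Spring Data derives the query from the method name, and the non-Optional return type means you get null when nothing matches.
public interface TaskRepository extends CrudRepository<Task, Integer> {

    // Derived query, equivalent to findById but unwrapped:
    // returns the Task or null instead of an Optional.
    Task getById(Integer id);
}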
I think there is no need to create a getById(... id) method in the repository interface, because such a method is already implemented in SimpleJpaRepository, so you can call it directly.
See the official Spring source:
/*
 * (non-Javadoc)
 * @see org.springframework.data.repository.CrudRepository#findById(java.io.Serializable)
 */
public Optional<T> findById(ID id) {
    Assert.notNull(id, ID_MUST_NOT_BE_NULL);
    Class<T> domainType = getDomainClass();
    if (metadata == null) {
        return Optional.ofNullable(em.find(domainType, id));
    }
    LockModeType type = metadata.getLockModeType();
    Map<String, Object> hints = getQueryHints().withFetchGraphs(em).asMap();
    return Optional.ofNullable(type == null ? em.find(domainType, id, hints) : em.find(domainType, id, type, hints));
}
You will get an exception when the Task is not found; to handle this, throw an exception from your code, like this:
public Task findTask(Integer id) {
    return taskRepository.findById(id)
        .orElseThrow(() -> new RuntimeException(String.format("Account %s not found", id)));
}
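If a bare RuntimeException is too coarse, a dedicated exception can map the miss to a 404 in a Spring web app. A sketch (TaskNotFoundException is my own hypothetical name, not from the question):
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

// Hypothetical exception type; @ResponseStatus makes Spring MVC
// translate it into an HTTP 404 response.
@ResponseStatus(HttpStatus.NOT_FOUND)
public class TaskNotFoundException extends RuntimeException {
    public TaskNotFoundException(Integer id) {
        super(String.format("Task %s not found", id));
    }
}
Then the service method becomes: return taskRepository.findById(id).orElseThrow(() -> new TaskNotFoundException(id));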

Mapped Diagnostic Context logging with Play Framework and Akka in Java

I am trying MDC logging in a Play filter in Java for all requests. I followed this tutorial in Scala and tried converting it to Java: http://yanns.github.io/blog/2014/05/04/slf4j-mapped-diagnostic-context-mdc-with-play-framework/
But the MDC is still not propagated to all execution contexts.
I am using this dispatcher as the default dispatcher, but there are many execution contexts for it. I need the MDC propagated to all execution contexts.
Below is my Java code:
import java.util.Map;

import org.slf4j.MDC;

import scala.concurrent.ExecutionContext;
import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;

import akka.dispatch.Dispatcher;
import akka.dispatch.ExecutorServiceFactoryProvider;
import akka.dispatch.MessageDispatcherConfigurator;

public class MDCPropagatingDispatcher extends Dispatcher {

    public MDCPropagatingDispatcher(
            MessageDispatcherConfigurator _configurator, String id,
            int throughput, Duration throughputDeadlineTime,
            ExecutorServiceFactoryProvider executorServiceFactoryProvider,
            FiniteDuration shutdownTimeout) {
        super(_configurator, id, throughput, throughputDeadlineTime,
                executorServiceFactoryProvider, shutdownTimeout);
    }

    @Override
    public ExecutionContext prepare() {
        final Map<String, String> mdcContext = MDC.getCopyOfContextMap();
        return new ExecutionContext() {

            @Override
            public void execute(Runnable r) {
                Map<String, String> oldMDCContext = MDC.getCopyOfContextMap();
                setContextMap(mdcContext);
                try {
                    r.run();
                } finally {
                    setContextMap(oldMDCContext);
                }
            }

            @Override
            public ExecutionContext prepare() {
                return this;
            }

            @Override
            public void reportFailure(Throwable t) {
                play.Logger.info("error occurred in dispatcher");
            }
        };
    }

    private void setContextMap(Map<String, String> context) {
        if (context == null) {
            MDC.clear();
        } else {
            play.Logger.info("set context " + context.toString());
            MDC.setContextMap(context);
        }
    }
}
import java.util.concurrent.TimeUnit;

import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;

import com.typesafe.config.Config;

import akka.dispatch.DispatcherPrerequisites;
import akka.dispatch.MessageDispatcher;
import akka.dispatch.MessageDispatcherConfigurator;

public class MDCPropagatingDispatcherConfigurator extends MessageDispatcherConfigurator {

    private MessageDispatcher instance;

    public MDCPropagatingDispatcherConfigurator(Config config,
            DispatcherPrerequisites prerequisites) {
        super(config, prerequisites);
        Duration throughputDeadlineTime = new FiniteDuration(-1, TimeUnit.MILLISECONDS);
        FiniteDuration shutDownDuration = new FiniteDuration(1, TimeUnit.MILLISECONDS);
        instance = new MDCPropagatingDispatcher(this, "play.akka.actor.contexts.play-filter-context",
                100, throughputDeadlineTime,
                configureExecutor(), shutDownDuration);
    }

    public MessageDispatcher dispatcher() {
        return instance;
    }
}
Filter interceptor:
public class MdcLogFilter implements EssentialFilter {

    @Override
    public EssentialAction apply(final EssentialAction next) {
        return new MdcLogAction() {

            @Override
            public Iteratee<byte[], SimpleResult> apply(
                    final RequestHeader requestHeader) {
                final String uuid = Utils.generateRandomUUID();
                MDC.put("uuid", uuid);
                play.Logger.info("request started " + uuid);
                final ExecutionContext playFilterContext = Akka.system()
                        .dispatchers()
                        .lookup("play.akka.actor.contexts.play-custom-filter-context");
                return next.apply(requestHeader).map(
                        new AbstractFunction1<SimpleResult, SimpleResult>() {
                            @Override
                            public SimpleResult apply(SimpleResult simpleResult) {
                                play.Logger.info("request ended " + uuid);
                                MDC.remove("uuid");
                                return simpleResult;
                            }
                        }, playFilterContext);
            }

            @Override
            public EssentialAction apply() {
                return next.apply();
            }
        };
    }
}
Below is my solution, proven in real life. It's in Scala, and not for Play but for Scalatra; the underlying concept is the same, though. I hope you'll be able to figure out how to port this to Java.
import org.slf4j.MDC
import java.util.{Map => JMap}
import scala.concurrent.{ExecutionContextExecutor, ExecutionContext}

object MDCHttpExecutionContext {
  def fromExecutionContextWithCurrentMDC(delegate: ExecutionContext): ExecutionContextExecutor =
    new MDCHttpExecutionContext(MDC.getCopyOfContextMap(), delegate)
}

class MDCHttpExecutionContext(mdcContext: JMap[String, String], delegate: ExecutionContext)
    extends ExecutionContextExecutor {

  def execute(runnable: Runnable): Unit = {
    val callingThreadMDC = MDC.getCopyOfContextMap()
    delegate.execute(new Runnable {
      def run() {
        val currentThreadMDC = MDC.getCopyOfContextMap()
        setContextMap(callingThreadMDC)
        try {
          runnable.run()
        } finally {
          setContextMap(currentThreadMDC)
        }
      }
    })
  }

  private[this] def setContextMap(context: JMap[String, String]): Unit = {
    Option(context) match {
      case Some(ctx) => MDC.setContextMap(ctx)
      case None => MDC.clear()
    }
  }

  def reportFailure(t: Throwable): Unit = delegate.reportFailure(t)
}
You'll have to make sure that this ExecutionContext is used in all of your asynchronous calls. I achieve this through dependency injection, but there are different ways. This is how I do it with subcut:
bind[ExecutionContext] idBy BindingIds.GlobalExecutionContext toSingle {
  MDCHttpExecutionContext.fromExecutionContextWithCurrentMDC(
    ExecutionContext.fromExecutorService(
      Executors.newFixedThreadPool(globalThreadPoolSize)
    )
  )
}
The idea behind this approach is as follows. MDC uses thread-local storage for the attributes and their values. If a single request of yours can run on multiple threads, then you need to make sure the new thread you start uses the right MDC. For that, you create a custom executor that copies the MDC values into the new thread before it starts executing the task you assign to it. You must also ensure that when the thread finishes your task and continues with something else, you put the old values back into its MDC, because threads from a pool can switch between different requests.
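For the Java side, a rough port of the executor above could look like this (a sketch under the same assumptions: SLF4J MDC and a delegate executor; the class name MdcPropagatingExecutor is mine, and you would still need to hook it into your dispatcher setup):
import java.util.Map;
import java.util.concurrent.Executor;

import org.slf4j.MDC;

public final class MdcPropagatingExecutor implements Executor {

    private final Executor delegate;

    public MdcPropagatingExecutor(Executor delegate) {
        this.delegate = delegate;
    }

    @Override
    public void execute(Runnable task) {
        // Capture the MDC of the submitting thread at submission time.
        final Map<String, String> callerMdc = MDC.getCopyOfContextMap();
        delegate.execute(() -> {
            // Swap in the caller's MDC for the duration of the task,
            // then restore whatever the pooled thread had before.
            Map<String, String> previous = MDC.getCopyOfContextMap();
            setContextMap(callerMdc);
            try {
                task.run();
            } finally {
                setContextMap(previous);
            }
        });
    }

    private static void setContextMap(Map<String, String> context) {
        if (context == null) {
            MDC.clear();
        } else {
            MDC.setContextMap(context);
        }
    }
}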
