I am a newbie to Lagom and Dgraph, and I am stuck on how to use Lagom's read-side processor with Dgraph. Just to give you an idea, the following is the code which uses Cassandra with Lagom.
import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import com.lightbend.lagom.javadsl.persistence.cassandra.CassandraSession;
import java.util.concurrent.CompletableFuture;
import javax.inject.Inject;
import akka.stream.javadsl.Source;
public class FriendServiceImpl implements FriendService {
private final CassandraSession cassandraSession;
@Inject
public FriendServiceImpl(CassandraSession cassandraSession) {
this.cassandraSession = cassandraSession;
}
//Implement your service method here
}
Lagom does not provide out-of-the-box support for Dgraph. If you want to use Lagom's read-side processor with Dgraph, you have to use Lagom's generic read-side support, like this:
import akka.Done;
import akka.japi.Pair;
import akka.stream.javadsl.Flow;
import com.lightbend.lagom.javadsl.persistence.AggregateEventTag;
import com.lightbend.lagom.javadsl.persistence.Offset;
import com.lightbend.lagom.javadsl.persistence.ReadSideProcessor;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import org.pcollections.PSequence;
/**
* Read side processor for Dgraph.
*/
public class FriendEventProcessor extends ReadSideProcessor<FriendEvent> {
private static void createModel() {
//TODO: Initialize schema in Dgraph
}
@Override
public ReadSideProcessor.ReadSideHandler<FriendEvent> buildHandler() {
return new ReadSideHandler<FriendEvent>() {
private final Done doneInstance = Done.getInstance();
@Override
public CompletionStage<Done> globalPrepare() {
createModel();
return CompletableFuture.completedFuture(doneInstance);
}
@Override
public CompletionStage<Offset> prepare(final AggregateEventTag<FriendEvent> tag) {
return CompletableFuture.completedFuture(Offset.NONE);
}
@Override
public Flow<Pair<FriendEvent, Offset>, Done, ?> handle() {
return Flow.<Pair<FriendEvent, Offset>>create()
.mapAsync(1, eventAndOffset -> {
if (eventAndOffset.first() instanceof FriendCreated) {
//TODO: Add Friend in Dgraph;
}
return CompletableFuture.completedFuture(doneInstance);
}
);
}
};
}
@Override
public PSequence<AggregateEventTag<FriendEvent>> aggregateTags() {
return FriendEvent.TAG.allTags();
}
}
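The TODO inside handle() is where the actual write to Dgraph goes. Lagom itself does not ship a Dgraph client, so here is a rough sketch of that branch using the dgraph4j client; the dgraphClient field, the FriendCreated accessors, and the JSON shape are assumptions for illustration, not part of Lagom or of the event classes above.
// Hypothetical helper called from the FriendCreated branch in handle().
// Assumes an io.dgraph.DgraphClient field named dgraphClient, plus
// io.dgraph.DgraphProto and com.google.protobuf.ByteString on the classpath.
private Done addFriend(FriendCreated event) {
    io.dgraph.Transaction txn = dgraphClient.newTransaction();
    try {
        // The JSON shape is illustrative; adapt it to your Dgraph schema.
        String json = "{\"userId\":\"" + event.userId
                + "\",\"friendId\":\"" + event.friendId + "\"}";
        io.dgraph.DgraphProto.Mutation mutation = io.dgraph.DgraphProto.Mutation.newBuilder()
                .setSetJson(com.google.protobuf.ByteString.copyFromUtf8(json))
                .build();
        txn.mutate(mutation);
        txn.commit();
    } finally {
        txn.discard(); // no-op when the commit has already succeeded
    }
    return Done.getInstance();
}
Inside mapAsync you would call this helper for FriendCreated events and then return CompletableFuture.completedFuture(doneInstance) as before, or wrap the call in CompletableFuture.supplyAsync to keep the write off the stream's thread.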
For FriendEvent.TAG.allTags() to work, you have to add the following code to the FriendEvent interface:
int NUM_SHARDS = 20;
AggregateEventShards<FriendEvent> TAG =
AggregateEventTag.sharded(FriendEvent.class, NUM_SHARDS);
@Override
default AggregateEventShards<FriendEvent> aggregateTag() {
return TAG;
}
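Finally, the processor only receives events once it is registered with Lagom's ReadSide component, typically in the service implementation's constructor. A minimal sketch (the constructor shape is illustrative):
import com.lightbend.lagom.javadsl.persistence.ReadSide;
import javax.inject.Inject;
public class FriendServiceImpl implements FriendService {
    @Inject
    public FriendServiceImpl(ReadSide readSide) {
        // Tell Lagom to run FriendEventProcessor against the FriendEvent journal.
        readSide.register(FriendEventProcessor.class);
    }
}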
I hope this helps!
I'm writing a Java application that has users who create contracts. Each contract has a list of supporting users that can be added to it. The user who created the contract is not on the list of supporters. The supporting users can help the contract's creator follow through on it, for example "No alcohol for 60 days."
I've got a function that can display the list of contracts in the user class.
I also can add users to a list in the contract.
How should I approach writing the viewSupportedContracts() function so that it avoids memory issues later on?
I will be using the User object in a main class.
package core;
import java.util.ArrayList;
import java.util.List;
public class Contract {
private boolean termsAndAgreementSigned = false;
private List<User> supporters = new ArrayList<User>();
public Contract()
{
}
public void setTermsAndAgreementSigned(boolean termsAndAgreementSigned) {
this.termsAndAgreementSigned = termsAndAgreementSigned;
}
public boolean isTermsAndAgreementSigned() {
return termsAndAgreementSigned;
}
public void addSupporter(User user)
{
supporters.add(user);
}
public void viewSupporters()
{
System.out.println("These are the supporters");
}
}
package core;
import java.util.ArrayList;
import java.util.List;
public class User {
private List<Contract> contracts = new ArrayList<Contract>();
public User()
{
}
public void addContract(Contract contract) throws Exception {
if(contract.isTermsAndAgreementSigned() == true)
contracts.add(contract);
}
public List<Contract> getContracts() throws Exception {
return contracts;
}
public void viewSupportedContracts() {
// TODO Auto-generated method stub
}
}
Your code for adding contracts to users already looks efficient with regard to memory; it seems to me that you are mainly looking for a way to view the contracts. My approach would be to implement a toString() method in the Contract class and then either call toString() on the List or write a for loop that prints each Contract on a new line:
public class Contract {
public String toString() {
return supporters.toString() + " users support you!";
}
}
public class User {
public String toString() {
return "Create generated user text here";
}
public void viewSupportedContracts() {
System.out.println(contracts.toString());
}
}
or
public void viewSupportedContracts() {
for(Contract contract: contracts) {
System.out.println(contract.toString());
}
}
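Since you mention using the User object from a main class, here is a minimal sketch of how the pieces fit together (it only uses the classes and methods shown above):
public class Main {
    public static void main(String[] args) throws Exception {
        User user = new User();
        Contract contract = new Contract();
        contract.setTermsAndAgreementSigned(true); // addContract only stores signed contracts
        contract.addSupporter(new User());
        user.addContract(contract);
        user.viewSupportedContracts(); // prints each contract via its toString()
    }
}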
I have recently upgraded to the latest 4.x version of cucumber-jvm in my project in order to leverage Cucumber's parallel execution feature, but I am now facing an issue with having a custom data type as a parameter. Earlier there was an interface called Transformer which we could implement for custom data types; in the latest version I've found the TypeRegistryConfigurer interface, which needs to be implemented instead. But the step is not recognised as I would have expected. Details as follows:
Gherkin Step:
Given user gets random(3,true,true) parameter
StepDefinition:
#Given("user gets {random} parameter")
public void paramTest(RandomString randomString) {
System.out.println(randomString.string);
}
RandomString class:
public class RandomString {
public String string;
public RandomString(String string) {
Matcher m = Pattern.compile("random\\((.?)\\)").matcher(string);
String t = "";
while (m.find()) {
t = m.group(1);
}
boolean isAlpha = true, isNum = true;
if (t.length() > 0) {
String[] placeholders = t.split(",");
if (placeholders.length == 3) {
int count = Integer.parseInt(placeholders[0]);
isAlpha = Boolean.valueOf(placeholders[1]);
isNum = Boolean.valueOf(placeholders[2]);
this.string = string.replaceAll("random(.*)", RandomStringUtils.random(count, isAlpha, isNum));
}
}
this.string = string.replaceAll("random(.*)", RandomStringUtils.random(3, isAlpha, isNum));
}
}
TypeRegistryConfigurer implementation:
public class TypeRegistryConfiguration implements TypeRegistryConfigurer {
@Override
public Locale locale() {
return Locale.ENGLISH;
}
@Override
public void configureTypeRegistry(TypeRegistry typeRegistry) {
typeRegistry.defineParameterType(new ParameterType<>(
"random",
"random([0-9],true|false,true|false)",
RandomString.class,
RandomString::new)
);
}
}
Your string random(3,true,true) does not match the pattern used in:
typeRegistry.defineParameterType(new ParameterType<>(
"random",
"random([0-9],true|false,true|false)",
RandomString.class,
RandomString::new)
);
You can verify this by creating the pattern and testing it:
import java.util.regex.Pattern;
class Scratch {
public static void main(String[] args) {
Pattern pattern = Pattern.compile("random([0-9],true|false,true|false)");
// prints out false
System.out.println(pattern.matcher("random(3,true,true)").matches());
}
}
The regular expression used inside RandomString does not match it either.
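For comparison, a pattern that escapes the literal parentheses and captures the three arguments (the approach the answer below ends up using) does match:
import java.util.regex.Pattern;
class Scratch {
    public static void main(String[] args) {
        // Escaping the parentheses and capturing the count and the two boolean flags.
        Pattern pattern = Pattern.compile("random\\(([0-9]+),(true|false),(true|false)\\)");
        // prints out true
        System.out.println(pattern.matcher("random(3,true,true)").matches());
    }
}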
I found the solution after some trial and error and after going through examples in the unit tests of the cucumber-jvm project.
Modified StepDef:
#Given("user gets {random} parameter")
public void paramTest(String randomString) {
System.out.println(randomString.string);
}
TypeRegistryConfigurer Implementation:
import cucumber.api.TypeRegistry;
import cucumber.api.TypeRegistryConfigurer;
import io.cucumber.cucumberexpressions.CaptureGroupTransformer;
import io.cucumber.cucumberexpressions.ParameterType;
import org.apache.commons.lang3.RandomStringUtils;
import java.util.Locale;
public class TypeRegistryConfiguration implements TypeRegistryConfigurer {
@Override
public Locale locale() {
return Locale.ENGLISH;
}
@Override
public void configureTypeRegistry(TypeRegistry typeRegistry) {
typeRegistry.defineParameterType(new ParameterType<>(
"random",
"random\\(([0-9]+),(true|false),(true|false)\\)",
String.class,
new CaptureGroupTransformer<>() {
@Override
public String transform(String[] args) {
return RandomStringUtils.random(Integer.parseInt(args[0]), Boolean.valueOf(args[1]), Boolean.valueOf(args[2]));
}
})
);
}
}
How do I choose a CDI Java bean based on an annotation, when the annotation has an array of arguments?
The problem is easier to show using an example than to describe.
Assume that for each object of type Problem we have to choose a proper solution.
public class Problem {
private Object data;
private ProblemType type;
public Object getData() { return data; }
public void setData(Object data) { this.data = data; }
public ProblemType getType() { return type; }
public void setType(ProblemType type) { this.type = type;}
}
There are a few types of problems:
public enum ProblemType {
A, B, C;
}
There are a few solutions:
public interface Solution {
public void resolve(Problem problem);
}
like FirstSolution:
@RequestScoped
@SolutionQualifier(problemTypes = { ProblemType.A, ProblemType.C })
public class FirstSolution implements Solution {
@Override
public void resolve(Problem problem) {
// ...
}
}
and SecondSolution:
@RequestScoped
@SolutionQualifier(problemTypes = { ProblemType.B })
public class SecondSolution implements Solution {
@Override
public void resolve(Problem problem) {
// ...
}
}
The solution should be chosen based on the @SolutionQualifier annotation:
@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public #interface SolutionQualifier {
ProblemType[] problemTypes();
public static class SolutionQualifierLiteral extends AnnotationLiteral<SolutionQualifier> implements SolutionQualifier {
private ProblemType[] problemTypes;
public SolutionQualifierLiteral(ProblemType[] problems) {
this.problemTypes = problems;
}
@Override
public ProblemType[] problemTypes() {
return problemTypes;
}
}
}
The choice is made by the SolutionProvider:
#RequestScoped
public class DefaultSolutionProvider implements SolutionProvider {
@Inject
@Any
private Instance<Solution> solutions;
@Override
public Instance<Solution> getSolution(Problem problem) {
/**
* Here is the problem of choosing proper solution.
* I do not know how method {@link javax.enterprise.inject.Instance#select(Annotation...)}
* works, and how it compares annotations, so I do not know what argument I should put there
* to obtain proper solution.
*/
ProblemType[] problemTypes = { problem.getType() };
return solutions.select(new SolutionQualifier.SolutionQualifierLiteral(problemTypes));
}
}
And in this last class lies the problem:
I do not know how the method javax.enterprise.inject.Instance#select(Annotation...) works internally, and how it compares annotations, so I do not know what argument I should pass there to obtain the proper solution. If a problem of type A appears, the ProblemType[] array will consist of one element, while FirstSolution.class is annotated with @SolutionQualifier holding two elements, so I will not get the proper Instance.
I didn't find a way to resolve it using the CDI API; instead:
I created another enum:
public enum SolutionType {
A(ProblemType.A, ProblemType.C),
B(ProblemType.B);
private final ProblemType[] problemTypes;
SolutionType(ProblemType... problemTypes) {
this.problemTypes = problemTypes;
}
public static SolutionType getByProblemType(ProblemType problemType) {
for (SolutionType solutionType : values()) {
for (ProblemType type : solutionType.problemTypes) {
if (type == problemType) {
return solutionType;
}
}
}
throw new IllegalArgumentException("No solution registered for problem type: " + problemType);
}
}
Then I changed SolutionQualifier so that it holds only a single SolutionType field, so there is no problem with the comparison.
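A sketch of what that change could look like, assuming the qualifier now carries a single SolutionType; the literal and the provider mirror the classes shown in the question, and the concrete annotation values are illustrative:
@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface SolutionQualifier {
    SolutionType solutionType();
    public static class SolutionQualifierLiteral extends AnnotationLiteral<SolutionQualifier> implements SolutionQualifier {
        private final SolutionType solutionType;
        public SolutionQualifierLiteral(SolutionType solutionType) {
            this.solutionType = solutionType;
        }
        @Override
        public SolutionType solutionType() {
            return solutionType;
        }
    }
}
// FirstSolution would now be annotated with @SolutionQualifier(solutionType = SolutionType.A)
// and SecondSolution with @SolutionQualifier(solutionType = SolutionType.B).
@RequestScoped
public class DefaultSolutionProvider implements SolutionProvider {
    @Inject
    @Any
    private Instance<Solution> solutions;
    @Override
    public Instance<Solution> getSolution(Problem problem) {
        // A single-valued qualifier member is compared with equals(), so the lookup is unambiguous.
        SolutionType solutionType = SolutionType.getByProblemType(problem.getType());
        return solutions.select(new SolutionQualifier.SolutionQualifierLiteral(solutionType));
    }
}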
I am trying to do MDC logging in a Play filter in Java for all requests. I followed this tutorial in Scala and tried converting it to Java: http://yanns.github.io/blog/2014/05/04/slf4j-mapped-diagnostic-context-mdc-with-play-framework/
But the MDC is still not propagated to all execution contexts.
I am using this dispatcher as the default dispatcher, but there are many execution contexts for it. I need the MDC propagated to all execution contexts.
Below is my Java code.
import java.util.Map;
import org.slf4j.MDC;
import scala.concurrent.ExecutionContext;
import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;
import akka.dispatch.Dispatcher;
import akka.dispatch.ExecutorServiceFactoryProvider;
import akka.dispatch.MessageDispatcherConfigurator;
public class MDCPropagatingDispatcher extends Dispatcher {
public MDCPropagatingDispatcher(
MessageDispatcherConfigurator _configurator, String id,
int throughput, Duration throughputDeadlineTime,
ExecutorServiceFactoryProvider executorServiceFactoryProvider,
FiniteDuration shutdownTimeout) {
super(_configurator, id, throughput, throughputDeadlineTime,
executorServiceFactoryProvider, shutdownTimeout);
}
@Override
public ExecutionContext prepare() {
final Map<String, String> mdcContext = MDC.getCopyOfContextMap();
return new ExecutionContext() {
@Override
public void execute(Runnable r) {
Map<String, String> oldMDCContext = MDC.getCopyOfContextMap();
setContextMap(mdcContext);
try {
r.run();
} finally {
setContextMap(oldMDCContext);
}
}
@Override
public ExecutionContext prepare() {
return this;
}
@Override
public void reportFailure(Throwable t) {
play.Logger.info("error occurred in dispatcher");
}
};
}
private void setContextMap(Map<String, String> context) {
if (context == null) {
MDC.clear();
} else {
play.Logger.info("set context "+ context.toString());
MDC.setContextMap(context);
}
}
}
import java.util.concurrent.TimeUnit;
import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;
import com.typesafe.config.Config;
import akka.dispatch.DispatcherPrerequisites;
import akka.dispatch.MessageDispatcher;
import akka.dispatch.MessageDispatcherConfigurator;
public class MDCPropagatingDispatcherConfigurator extends
MessageDispatcherConfigurator {
private MessageDispatcher instance;
public MDCPropagatingDispatcherConfigurator(Config config,
DispatcherPrerequisites prerequisites) {
super(config, prerequisites);
Duration throughputDeadlineTime = new FiniteDuration(-1,
TimeUnit.MILLISECONDS);
FiniteDuration shutDownDuration = new FiniteDuration(1,
TimeUnit.MILLISECONDS);
instance = new MDCPropagatingDispatcher(this, "play.akka.actor.contexts.play-filter-context",
100, throughputDeadlineTime,
configureExecutor(), shutDownDuration);
}
public MessageDispatcher dispatcher() {
return instance;
}
}
The filter interceptor:
public class MdcLogFilter implements EssentialFilter {
@Override
public EssentialAction apply(final EssentialAction next) {
return new MdcLogAction() {
@Override
public Iteratee<byte[], SimpleResult> apply(
final RequestHeader requestHeader) {
final String uuid = Utils.generateRandomUUID();
MDC.put("uuid", uuid);
play.Logger.info("request started"+uuid);
final ExecutionContext playFilterContext = Akka.system()
.dispatchers()
.lookup("play.akka.actor.contexts.play-custom-filter-context");
return next.apply(requestHeader).map(
new AbstractFunction1<SimpleResult, SimpleResult>() {
@Override
public SimpleResult apply(SimpleResult simpleResult) {
play.Logger.info("request ended"+uuid);
MDC.remove("uuid");
return simpleResult;
}
}, playFilterContext);
}
@Override
public EssentialAction apply() {
return next.apply();
}
};
}
}
Below is my solution, proven in real life. It's in Scala, and not for Play but for Scalatra; the underlying concept is the same, though. Hopefully you'll be able to figure out how to port this to Java.
import org.slf4j.MDC
import java.util.{Map => JMap}
import scala.concurrent.{ExecutionContextExecutor, ExecutionContext}
object MDCHttpExecutionContext {
def fromExecutionContextWithCurrentMDC(delegate: ExecutionContext): ExecutionContextExecutor =
new MDCHttpExecutionContext(MDC.getCopyOfContextMap(), delegate)
}
class MDCHttpExecutionContext(mdcContext: JMap[String, String], delegate: ExecutionContext)
extends ExecutionContextExecutor {
def execute(runnable: Runnable): Unit = {
val callingThreadMDC = MDC.getCopyOfContextMap()
delegate.execute(new Runnable {
def run() {
val currentThreadMDC = MDC.getCopyOfContextMap()
setContextMap(callingThreadMDC)
try {
runnable.run()
} finally {
setContextMap(currentThreadMDC)
}
}
})
}
private[this] def setContextMap(context: JMap[String, String]): Unit = {
Option(context) match {
case Some(ctx) => {
MDC.setContextMap(context)
}
case None => {
MDC.clear()
}
}
}
def reportFailure(t: Throwable): Unit = delegate.reportFailure(t)
}
You'll have to make sure that this ExecutionContext is used in all of your asynchronous calls. I achieve this through dependency injection, but there are different ways. This is how I do it with subcut:
bind[ExecutionContext] idBy BindingIds.GlobalExecutionContext toSingle {
MDCHttpExecutionContext.fromExecutionContextWithCurrentMDC(
ExecutionContext.fromExecutorService(
Executors.newFixedThreadPool(globalThreadPoolSize)
)
)
}
The idea behind this approach is as follows. MDC uses thread-local storage for the attributes and their values. If a single request of yours can run across multiple threads, then you need to make sure the new thread you start uses the right MDC. For that, you create a custom executor that ensures the proper copying of the MDC values into the new thread before it starts executing the task you assign to it. You must also ensure that when the thread finishes your task and continues with something else, you put the old values back into its MDC, because threads from a pool can switch between different requests.
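For the Java side, here is a rough port of the same idea, following the anonymous-class style already used in the dispatcher from the question; the class name is illustrative, and the delegate is whatever ExecutionContext you look up or inject:
import java.util.Map;
import org.slf4j.MDC;
import scala.concurrent.ExecutionContext;
public class MDCPropagatingExecutionContext implements ExecutionContext {
    private final ExecutionContext delegate;
    private final Map<String, String> mdcContext;
    public MDCPropagatingExecutionContext(ExecutionContext delegate) {
        this.delegate = delegate;
        // Capture the MDC of the thread that creates this context (the request thread).
        this.mdcContext = MDC.getCopyOfContextMap();
    }
    @Override
    public void execute(final Runnable runnable) {
        delegate.execute(new Runnable() {
            @Override
            public void run() {
                Map<String, String> oldContext = MDC.getCopyOfContextMap();
                setContextMap(mdcContext);
                try {
                    runnable.run();
                } finally {
                    // Restore the previous MDC, because pooled threads are reused across requests.
                    setContextMap(oldContext);
                }
            }
        });
    }
    @Override
    public ExecutionContext prepare() {
        return this;
    }
    @Override
    public void reportFailure(Throwable t) {
        delegate.reportFailure(t);
    }
    private static void setContextMap(Map<String, String> context) {
        if (context == null) {
            MDC.clear();
        } else {
            MDC.setContextMap(context);
        }
    }
}
As in the Scala version, the important part is that every asynchronous callback in the filter is scheduled on this wrapper rather than directly on the underlying dispatcher.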
I found this class that I really want to use:
org.hibernate.action.spi.AfterTransactionCompletionProcess -
http://docs.jboss.org/hibernate/orm/3.6/javadocs/org/hibernate/action/AfterTransactionCompletionProcess.html
Basically, I'd like some custom logic to happen after the transaction is committed. But I cannot for the life of me figure out how to use this thing.
Where do I specify this interface? Any examples would be awesome.
Found an example in Hibernate 4.3's unit test code base:
org.hibernate.envers.test.integration.basic.RegisterUserEventListenersTest
It shows exactly what I was looking for:
package org.hibernate.envers.test.integration.basic;
import org.hibernate.Session;
import org.hibernate.action.spi.AfterTransactionCompletionProcess;
import org.hibernate.action.spi.BeforeTransactionCompletionProcess;
import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.envers.internal.tools.MutableInteger;
import org.hibernate.envers.test.BaseEnversFunctionalTestCase;
import org.hibernate.envers.test.entities.StrTestEntity;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;
import org.hibernate.event.spi.PostInsertEvent;
import org.hibernate.event.spi.PostInsertEventListener;
import org.hibernate.persister.entity.EntityPersister;
import org.junit.Assert;
import org.junit.Test;
import org.hibernate.testing.TestForIssue;
/**
* @author Lukasz Antoniak (lukasz dot antoniak at gmail dot com)
*/
public class RegisterUserEventListenersTest extends BaseEnversFunctionalTestCase {
@Override
protected Class<?>[] getAnnotatedClasses() {
return new Class<?>[] {StrTestEntity.class};
}
@Test
@TestForIssue(jiraKey = "HHH-7478")
public void testTransactionProcessSynchronization() {
final EventListenerRegistry registry = sessionFactory().getServiceRegistry()
.getService( EventListenerRegistry.class );
final CountingPostInsertTransactionBoundaryListener listener = new CountingPostInsertTransactionBoundaryListener();
registry.getEventListenerGroup( EventType.POST_INSERT ).appendListener( listener );
Session session = openSession();
session.getTransaction().begin();
StrTestEntity entity = new StrTestEntity( "str1" );
session.save( entity );
session.getTransaction().commit();
session.close();
// Post insert listener invoked three times - before/after insertion of original data,
// revision entity and audit row.
Assert.assertEquals( 3, listener.getBeforeCount() );
Assert.assertEquals( 3, listener.getAfterCount() );
}
private static class CountingPostInsertTransactionBoundaryListener implements PostInsertEventListener {
private final MutableInteger beforeCounter = new MutableInteger();
private final MutableInteger afterCounter = new MutableInteger();
@Override
public void onPostInsert(PostInsertEvent event) {
event.getSession().getActionQueue().registerProcess(
new BeforeTransactionCompletionProcess() {
@Override
public void doBeforeTransactionCompletion(SessionImplementor session) {
beforeCounter.increase();
}
}
);
event.getSession().getActionQueue().registerProcess(
new AfterTransactionCompletionProcess() {
@Override
public void doAfterTransactionCompletion(boolean success, SessionImplementor session) {
afterCounter.increase();
}
}
);
}
@Override
public boolean requiresPostCommitHanding(EntityPersister persister) {
return true;
}
public int getBeforeCount() {
return beforeCounter.get();
}
public int getAfterCount() {
return afterCounter.get();
}
}
}
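Stripped of the Envers test harness, the essential wiring looks roughly like this; obtaining a SessionFactoryImplementor from your own sessionFactory variable is an assumption about your setup:
// Assumes org.hibernate.engine.spi.SessionFactoryImplementor plus the imports used in the test above.
EventListenerRegistry registry = ((SessionFactoryImplementor) sessionFactory)
        .getServiceRegistry()
        .getService(EventListenerRegistry.class);
registry.getEventListenerGroup(EventType.POST_INSERT).appendListener(new PostInsertEventListener() {
    @Override
    public void onPostInsert(PostInsertEvent event) {
        // Register the callback on the current session; it fires once the transaction completes.
        event.getSession().getActionQueue().registerProcess(new AfterTransactionCompletionProcess() {
            @Override
            public void doAfterTransactionCompletion(boolean success, SessionImplementor session) {
                if (success) {
                    // custom post-commit logic goes here
                }
            }
        });
    }
    @Override
    public boolean requiresPostCommitHanding(EntityPersister persister) {
        return true;
    }
});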