I have this code in my Main() function:
DataStream<OutputObject> asyncResultStream = AsyncDataStream
    .orderedWait(
        listOfData,
        new CustomAsyncConnector(),
        5,
        TimeUnit.SECONDS,
        10)
    .setParallelism(3)
    .startNewChain()
    .uid("customUid");
This is the basic pattern for using AsyncDataStream in Flink 1.2, and the core of CustomAsyncConnector looks just like every example you will find:
public class CustomAsyncConnector extends RichAsyncFunction<CustomObject, ResultObject> {

    private transient Session client;

    @Override
    public void open(Configuration parameters) throws Exception {
        client = Cluster.builder()
            .addContactPoint("<CustomUrl>")
            .withPort(1234)
            .build()
            .connect("<thisKeyspace>");
    }

    @Override
    public void close() throws Exception {
        client.close();
    }

    @Override
    public void asyncInvoke(final CustomObject ctsa, final AsyncCollector<ResultObject> asyncCollector) throws Exception {
        // Custom code here...
    }
}
Now here are my questions:
1.) What is the proper way to pass "parameters" to the open() function of CustomAsyncConnector from where it is called in my Main() function?
2.) How are the parameters supposed to be used to set up the connection to the client in the open() function?
My guess on the first question is to create a new CustomAsyncConnector() instance in main, call its open() function directly with the parameters object, and then pass that instance to the AsyncDataStream code. However, I am not sure whether that is the best way or, more importantly, the proper way to set the fields on a Configuration object (again, I am assuming that doing configParameters.setString("contactPointUrl", "127.0.0.1") is right, but am not sure). And this leads to my second, and honestly most important, question.
Regarding my second question: the parameters I want to pass to the open() function are the contactPointUrl, the portNumber, and the keyspace to be put in .connect(). However, I cannot seem to access them with something like .addContactPoint(parameters.getString("contactPointUrl")). I also tried to work out if or where I should call Cluster.builder().getConfiguration(parameters), but I am shooting in the dark as to where that even belongs, whether the parameter names have to be something specific, and so on.
So I hope I didn't word that too poorly, but any and all help would be greatly appreciated.
Thanks in advance!
Here is what ended up working. Still not sure how to pass the configuration parameters to the .open() method, but oh well.
Added this to the CustomAsyncConnector class:
private final CustomProps props;

public CustomAsyncConnector(CustomProps props) {
    super();
    this.props = props;
}
And what I pass in the main() method:
AsyncDataStream
.unorderedWait(
dataToProcess,
new CustomAsyncConnector(props),
5,
TimeUnit.SECONDS,
10);
And then I used the props in the .open() method the way I had wanted to use the parameters.
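For anyone who lands here, a minimal sketch of what such a props holder might look like (the CustomProps name and its getters are my assumption, not a Flink API). Flink ships the function to the workers by serializing it, so the holder must be Serializable; in open(), the values would then feed the builder, e.g. Cluster.builder().addContactPoint(props.getContactPointUrl()).withPort(props.getPortNumber()).build().connect(props.getKeyspace()):

```java
import java.io.Serializable;

// Hypothetical props holder; since Flink serializes the RichAsyncFunction
// (and this field with it), the holder must be Serializable.
public class CustomProps implements Serializable {

    private final String contactPointUrl;
    private final int portNumber;
    private final String keyspace;

    public CustomProps(String contactPointUrl, int portNumber, String keyspace) {
        this.contactPointUrl = contactPointUrl;
        this.portNumber = portNumber;
        this.keyspace = keyspace;
    }

    // In open(), roughly: Cluster.builder().addContactPoint(getContactPointUrl())
    //     .withPort(getPortNumber()).build().connect(getKeyspace());
    public String getContactPointUrl() { return contactPointUrl; }
    public int getPortNumber() { return portNumber; }
    public String getKeyspace() { return keyspace; }
}
```

As far as I know, the Configuration passed to open() is not populated from the streaming job anyway, which is why constructor injection like this is the usual approach.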
In my current project, I use Java 11 / jOOQ 3.15 / Micronaut / Micrometer. In order to have relevant SQL metrics, I would like to put a name on my jOOQ queries.
To do that, I have tried to use the ctx.data() field combined with a custom ExecuteListener.
Let's take a really simplified listener:
@Singleton
public class JooqListener extends DefaultExecuteListener {

    transient StopWatch watch;
    private final MeterRegistry meterRegistry;

    public JooqListener(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    @Override
    public void executeStart(ExecuteContext ctx) {
        watch = new StopWatch();
    }

    @Override
    public void fetchEnd(ExecuteContext ctx) {
        Tags prometheusTag = Tags.of("queryName", ctx.configuration().data("queryName").toString());
        meterRegistry.timer("sql.query.timer", prometheusTag)
            .record(watch.split(), TimeUnit.NANOSECONDS);
    }

    // I have tried to remove the data manually, but it is not working
    @Override
    public void end(ExecuteContext ctx) {
        ctx.configuration().data().remove("queryName");
    }
}
If I send 2 different queries from two different repositories, like for example:
DSLContext context = DSL.using(jooqConfiguration);
context.data("queryName", "query1");
return context.select(1).from("dual").fetch();
And just after, let's say I'm not paying attention and forget to name my query:
DSLContext context = DSL.using(jooqConfiguration);
return context.select(2).from("dual").fetch();
ctx.configuration().data("queryName") in my listener will always contain "query1", which I did not expect, because ExecuteListeners listen query by query and, furthermore, I created two different DSLContexts. It looks like ctx.data() cannot be cleaned, only overwritten.
Is this expected behaviour? Is there another object/method I should use that is limited to the query scope? (I searched a lot on Google, but the "data" keyword is a little bit annoying...)
Thank you
A DSLContext just wraps a Configuration. It doesn't have its own lifecycle. So, if you're modifying the Configuration.data() map through DSLContext, you're modifying a globally shared object. In other words, you must not modify Configuration.data() except for when you initialise your configuration for the first time. See this section of the manual for more details.
A better way to do what you intend to do is:
// Create a "derived" configuration, which is a new,
// independent Configuration instance
DSLContext context = DSL.using(jooqConfiguration.derive());
context.data("queryName", "query1");
return context.select(1).from("dual").fetch();
And then, in your ExecuteListener:
@Override
public void fetchEnd(ExecuteContext ctx) {
    // Reading the Configuration.data() is still fine:
    Tags prometheusTag = Tags.of("queryName",
        ctx.configuration().data("queryName").toString());
    meterRegistry.timer("sql.query.timer", prometheusTag)
        .record(watch.split(), TimeUnit.NANOSECONDS);
}

@Override
public void end(ExecuteContext ctx) {
    // But you shouldn't modify it
    ctx.configuration().data().remove("queryName");
}
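The effect of derive() can be illustrated without jOOQ. In the sketch below, plain HashMaps stand in for Configuration.data(): writing a query name into the shared map leaks it into the next query, while writing into a derived (copied) map does not. This only mimics the lifecycle; it is not jOOQ API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrates why writing to a shared Configuration.data() map leaks state
// across queries, while a derived (copied) map does not.
public class DataMapDemo {

    // Simulates query1 tagging itself on the globally shared map.
    static Object sharedScenario() {
        Map<String, Object> sharedData = new HashMap<>();
        sharedData.put("queryName", "query1"); // query1 sets its name
        // query2 runs without setting a name, but its listener still sees:
        return sharedData.get("queryName");    // "query1" leaks into query2
    }

    // Simulates derive(): each query works on an independent copy.
    static Object derivedScenario() {
        Map<String, Object> sharedData = new HashMap<>();
        Map<String, Object> derived1 = new HashMap<>(sharedData);
        derived1.put("queryName", "query1");   // only visible to query1
        Map<String, Object> derived2 = new HashMap<>(sharedData);
        return derived2.get("queryName");      // query2 sees null: no leak
    }

    public static void main(String[] args) {
        System.out.println("shared:  " + sharedScenario());
        System.out.println("derived: " + derivedScenario());
    }
}
```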
I am kind of stuck on a problem with creating beans, or I have probably got the wrong idea... Maybe you can help me solve it:
I have an application that takes in requests for batch processing. For every batch I need to create its own context, depending on the parameters issued by the request.
I will try to simplify it with the following example:
I receive a request to process FunctionA in a batch. FunctionA is an implementation of my Function_I interface and has the sub-implementations FunctionA_DE and FunctionA_AT.
Something like this:
public interface Function_I {
    String doFunctionStuff();
}
public abstract class FunctionA implements Function_I {

    FunctionConfig funcConfig;

    public FunctionA(FunctionConfig funcConfig) {
        this.funcConfig = funcConfig;
    }

    public String doFunctionStuff() {
        // some code
        String result = callSpecificFunctionStuff();
        // more code
        return result;
    }

    protected abstract String callSpecificFunctionStuff();
}
public class FunctionA_DE extends FunctionA {

    public FunctionA_DE(FunctionConfig funcConf) {
        super(funcConf);
    }

    @Override
    protected String callSpecificFunctionStuff() {
        // do some DE-specific stuff
        return result;
    }
}
public class FunctionA_AT extends FunctionA {

    public FunctionA_AT(FunctionConfig funcConf) {
        super(funcConf);
    }

    @Override
    protected String callSpecificFunctionStuff() {
        // do some AT-specific stuff
        return result;
    }
}
What would be the Spring Boot way of creating an instance of FunctionA_DE and getting it as a Function_I for the calling part of the application? And what should it look like when I add FunctionB with FunctionB_DE / FunctionB_AT to my classes?
I thought it could be something like:
PSEUDO CODE

@Configuration
public class FunctionFactory {

    @Bean
    @Scope(SCOPE_PROTOTYPE) // I need a new instance every time I call it
    public Function_I createFunctionA(FunctionConfiguration funcConfig) {
        // create the Function depending on funcConfig, so either FunctionA_DE or FunctionA_AT
    }
}
and I would call it by autowiring the FunctionFactory into my calling class and using it with
someSpringFactory.createFunction(functionConfiguration);
but I can't figure out how to create a prototype bean for the function while passing it a parameter. And I can't really find a solution to my question by browsing through SO, but maybe I just used the wrong search terms. Or my approach to this issue is totally wrong (maybe stupid), and nobody would solve it the Spring Boot way but would stick to factories.
Appreciate your help!
You could use Spring's application context. Create a bean for each of the interfaces, but annotate each with a specific profile, e.g. "Function-A-AT". Now when you have to invoke it, you can simply set Spring's active profile accordingly and the right bean should be used.
Hello everyone and thanks for reading my question.
After a discussion with a friend who is well versed in the Spring framework, I came to the conclusion that my approach, or my favoured solution, was not what I was searching for and is not how Spring should be used. Because the Function_I instance depends on the configuration loaded for the specific batch, it is not recommended to manage all these instances as @Beans.
In the end I decided not to manage the Function_I instances with Spring. Instead I built a controller/factory, which is a @Controller class, and let it build the instance I need, with the passed parameters driving the decision at runtime.
This is how it looks (pseudo-code):
@Controller
public class FunctionController {

    private final SomeSpringManagedClass ssmc;

    @Autowired
    public FunctionController(SomeSpringManagedClass ssmc) {
        this.ssmc = ssmc;
    }

    public Function_I createFunction(FunctionConfiguration funcConf) {
        boolean funcA, funcB, cntryDE;
        // code to decide the function
        if (funcA && cntryDE) {
            return new FunctionA_DE(funcConf);
        } else if (funcB && cntryDE) {
            return new FunctionB_DE(funcConf);
        } // maybe more else ifs...
    }
}
I'm learning about unit testing and I came across the problem of creating a final check on whether a test case is correct or not. Usually I try to create a verification, for example through assertEquals(). But what is recommended when it's not possible to test it like this?
I have a class like this:
public class Landlord {

    private Map<String, ChannelHandlerContext> currentOccupier;
    private static Landlord instance;

    public Landlord() {
        currentOccupier = new HashMap<>();
    }

    public static Landlord getInstance() {
        // return instance
    }

    public void add(Occupier occupier) {
        currentOccupier.put("test", occupier.getChannelHandlerContext());
    }
}
And now I try to test the method like this:
public class LandlordTest {

    private Landlord landlord;

    @Mock
    private Occupier occupier;

    @Mock
    private ChannelHandlerContext channelHandlerContext;

    @BeforeEach
    void setUp() {
        occupier = mock(Occupier.class);
        channelHandlerContext = mock(ChannelHandlerContext.class);
        landlord = Landlord.getInstance();
        when(occupier.getChannelHandlerContext()).thenReturn(channelHandlerContext);
    }

    @Test
    void add() {
        landlord.add(occupier);
        // how do I verify that adding succeeded?
    }
}
Maybe in this short example it wouldn't be needed, but is there a way to verify that the add method was successful? Normally in this kind of case I'd try something like assertEquals(currentOccupier.size(), 1), but here I can't access the HashMap of the instance to do so. Is there another way to verify the correct behaviour of adding?
assertEquals(currentOccupier.size(), 1) is really not enough: that assertion is too shallow, as it checks neither the key nor the value of the entry. You want to assert that the map contains the exact entry that you added.
You should do something like :
ChannelHandlerContext actualContext = landlord.get("test");
assertSame(channelHandlerContext, actualContext);
// or assertEquals if the instances may differ because you do some defensive copy in add()
Note also that you mock some things here that should not need to be mocked: occupier and channelHandlerContext are part of your model. You should be able to provide "normal" instances of them in the frame of the test.
Broadly, you have two ways to perform that:
1) Add a public method to the class under test to look up a ChannelHandlerContext:
public ChannelHandlerContext get(String name) {
    return currentOccupier.get(name);
}
Do that only if providing this access is acceptable.
If you cannot add a public method, add a package-level method instead, as that is not part of the exposed API.
2) Use the reflection API (essentially Class.getDeclaredField(String) and Field.get(Object)) to retrieve the map instance from the instance under test, then assert that it contains the expected ChannelHandlerContext instance for the "test" key.
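Option 2 can be sketched like this; MiniLandlord is a stand-in for the class under test, and the names are illustrative, not from the question's codebase:

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Reading a private Map field via reflection inside a test.
public class ReflectionAccessDemo {

    // Stand-in for Landlord, with a String value instead of a
    // ChannelHandlerContext to keep the sketch self-contained.
    static class MiniLandlord {
        private final Map<String, String> currentOccupier = new HashMap<>();
        void add(String ctx) { currentOccupier.put("test", ctx); }
    }

    static Map<String, String> readMap(MiniLandlord landlord) {
        try {
            Field f = MiniLandlord.class.getDeclaredField("currentOccupier");
            f.setAccessible(true); // bypass private access for the assertion only
            @SuppressWarnings("unchecked")
            Map<String, String> map = (Map<String, String>) f.get(landlord);
            return map;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        MiniLandlord landlord = new MiniLandlord();
        landlord.add("someContext");
        System.out.println(readMap(landlord).get("test"));
    }
}
```

Reflection-based access keeps the production API clean, at the price of coupling the test to the field name.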
I am writing a simple application and I don't want to use any frameworks. Please suggest the right place to put the annotation processing.
I have a few lines in my main method:
String myString = (@NonNull String) list;
And I created this @interface:
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
public @interface NonNull {
}
Which step should I take next? Can I work with annotations without using reflection? Could you expose for me samples of such annotation processing code?
There is no way (AFAIK) to work with annotations at run time without reflection.
If you don't want to use any framework, the first step is to write a kind of proxy class that handles the method requests. Here is an example of method processing where an annotation is checked on a method:
public class MyProxy {

    private <T> T getProxy(T t) {
        return (T) Proxy.newProxyInstance(
            t.getClass().getClassLoader(),
            new Class<?>[]{MyClass.class},
            new MyInvocationHandler(t));
    }
}
And then implement InvocationHandler:
public class MyInvocationHandler implements InvocationHandler {

    private final Object obj;

    MyInvocationHandler(Object obj) {
        this.obj = obj;
    }

    @Override
    public Object invoke(Object proxy, final Method method, final Object[] args) throws Throwable {
        boolean isNonNull = method.isAnnotationPresent(NonNull.class);
        if (isNonNull) {
            /* process the annotated method, or go through the proxy object's fields, etc. */
        }
        return method.invoke(obj, args);
    }
}
I hope it will help you.
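To make the proxy idea concrete, here is a self-contained variant that rejects null arguments for methods carrying a runtime annotation. The Greeter interface and NonNullArgs annotation are invented for the demo; note that for method.isAnnotationPresent(...) to work, the annotation needs RetentionPolicy.RUNTIME and a METHOD target, unlike the TYPE_USE annotation in the question:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface NonNullArgs {}

    interface Greeter {
        @NonNullArgs
        String greet(String name);
    }

    // Wraps a Greeter so annotated methods reject null arguments.
    static Greeter withNullChecks(Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[]{Greeter.class},
            (proxy, method, args) -> {
                // Reflection is what makes the annotation visible at run time.
                if (method.isAnnotationPresent(NonNullArgs.class) && args != null) {
                    for (Object arg : args) {
                        if (arg == null) {
                            throw new IllegalArgumentException(
                                method.getName() + " got a null argument");
                        }
                    }
                }
                return method.invoke(target, args);
            });
    }

    public static void main(String[] args) {
        Greeter greeter = withNullChecks(name -> "Hello, " + name);
        System.out.println(greeter.greet("world"));
    }
}
```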
You didn't say what kind of annotation processing you want to do.
Do you want to add a run-time check that will cause your code to crash if list is ever null at run time? For this, reflection will work.
Do you want to add a compile-time check that will reject your code if it cannot prove that list is never null at run time? For this, an annotation processor such as the Checker Framework will work.
Your question does not explain why you don't want to use a framework. Doing so will save you from re-implementing a lot of functionality that others have already created.
I'm trying to mock a class that looks like below
public class MessageContainer {

    private final MessageChain[] messages;

    MessageContainer(final int numOfMessages, final MessageManagerImpl manager, final Object someOtherStuff) {
        messages = new MessageChain[numOfMessages];
        // do other stuff
    }

    public void foo(final int index) {
        // do something
        messages[index] = getActiveMessage();
    }
}
My test code would be as followed:
@Test
public void testFoo() {
    MessageContainer messageContainer = Mockito.mock(MessageContainer.class);
    Mockito.doCallRealMethod().when(messageContainer).foo(anyIndex);
}
I got a NullPointerException since messages is null. I tried to inject the mock using @InjectMocks; however, this case is not supported, since not all of the constructor's parameters are declared as members.
I also tried to set the messages field using Whitebox:
Whitebox.setInternalState(messageContainer, MessageChain[].class, PowerMockito.mock(MessageChain[].class));
but I got a compile error since setInternalState only supports (Object, Object, Object) and not Object[].
Is there any possible way to mock a private final field?
Thank you guys in advance.
Based on your edits and comments, I would say mocking this class and verifying the method was invoked is sufficient.
If it is third-party code, you should rely only on its method signature, which comprises the class's public API. Otherwise you are coupling your tests too tightly to something you have no control over. What do you do when they decide to use a Collection instead of an array?
Simply write:
MessageContainer container = mock(MessageContainer.class);
//Your code here...
verify(container).foo(obj);