I have 4 step definition classes and a set of domain object classes.
My first step definition class looks like this:
public class ClaimProcessSteps {
    Claim claim;

    public ClaimProcessSteps(Claim w) {
        this.claim = w;
    }

    @Given("^a claim submitted with different enrolled phone's model$")
    public void aClaimSubmittedFromCLIENTSChannelWithDifferentEnrolledPhoneSModel() throws Throwable {
        claim = ObjMotherClaim.aClaimWithAssetIVH();
    }
}
My Claim class looks like this:
public class Claim {
    private String claimType;
    private String clientName;
    private Customer caller;
    private List<Hold> holds;

    public Claim() {}

    public Claim(String claimType, String clientName, Customer caller) {
        this.claimType = claimType;
        this.clientName = clientName;
        this.caller = caller;
    }

    public String getClaimType() {
        return claimType;
    }
}
My second step definition class looks like:
public class CaseLookupSteps {
    Claim claim;

    public CaseLookupSteps(Claim w) {
        this.claim = w;
    }

    @When("^I access case via (right|left) search$")
    public void iAccessCaseInCompassViaRightSearch(String searchVia) throws Throwable {
        System.out.println(claim.getClaimType());
    }
}
I've already added the picocontainer dependency to my pom.xml, and I am getting the following error:
3 satisfiable constructors is too many for 'class java.lang.String'. Constructor List:[(Buffer), (Builder), ()]
None of my step definition classes' constructors receive primitives as arguments. Does anyone have any clue as to why I am still getting that error? Could it be my business object, whose constructor does expect Strings?
Thanks in advance for any help.
PicoContainer inspects not only your step definition classes to resolve dependencies; it also inspects all the classes your step definitions depend on.
In this case, it's trying to resolve the dependencies for your non-default Claim constructor.
public Claim(String claimType, String clientName, Customer caller) {
...
}
According to this issue there's no way to solve this other than keeping only default constructors in all your dependencies.
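For illustration, a Claim that PicoContainer can construct on its own would keep only the no-args constructor and be populated through setters; a minimal sketch (the setter shown here is hypothetical):
public class Claim {
    private String claimType;

    public Claim() {} // the only constructor, so PicoContainer has nothing to resolve

    public void setClaimType(String claimType) { this.claimType = claimType; }
    public String getClaimType() { return claimType; }
}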
Assuming your scenario looks like this:
Given some sort of claim
When I lookup this claim
Then I see this claim
Currently your test is missing the setup step of the claim.
So rather than directly sharing the Claim object between steps, you should create a ClaimService class with only a default constructor. You can inject this service into your step definitions.
Once you have injected the service, the step definition for Given some sort of claim can call claimService.createSomeSortOfClaim() to create a claim. This claim can be created in memory, in a mock DB, an actual DB, or another persistence medium.
In When I lookup this claim you then use claimService.getClaim() to return that claim so you can use its type to search for it.
Doing it this way you'll avoid the difficulty of trying to make the DI container figure out how it should create the claim under test.
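A minimal sketch of what that could look like, reusing the object mother from the question; the ClaimService class and its method names are illustrative, not an existing API:
public class ClaimService {
    private Claim claim; // simplest case: in-memory persistence per scenario

    public ClaimService() {} // default constructor keeps PicoContainer happy

    public void createSomeSortOfClaim() {
        this.claim = ObjMotherClaim.aClaimWithAssetIVH();
    }

    public Claim getClaim() {
        return claim;
    }
}

public class ClaimProcessSteps {
    private final ClaimService claimService;

    public ClaimProcessSteps(ClaimService claimService) {
        this.claimService = claimService;
    }

    @Given("^some sort of claim$")
    public void someSortOfClaim() {
        claimService.createSomeSortOfClaim();
    }
}

public class CaseLookupSteps {
    private final ClaimService claimService;

    public CaseLookupSteps(ClaimService claimService) {
        this.claimService = claimService;
    }

    @When("^I lookup this claim$")
    public void iLookupThisClaim() {
        System.out.println(claimService.getClaim().getClaimType());
    }
}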
Related
I am kind of stuck on a problem with creating beans, or probably I have the wrong idea. Maybe you can help me solve it:
I have an application which takes in requests for batch processing. For every batch I need to create its own context, depending on the parameters issued by the request.
I will try to simplify it with the following example:
I receive a request to process FunctionA in a batch. FunctionA is an implementation of my Function_I interface and has the sub-implementations FunctionA_DE and FunctionA_AT.
Something like this:
public interface Function_I {
    String doFunctionStuff();
}

public abstract class FunctionA implements Function_I {
    FunctionConfig funcConfig;

    public FunctionA(FunctionConfig funcConfig) {
        this.funcConfig = funcConfig;
    }

    public String doFunctionStuff() {
        // some code
        String result = callSpecificFunctionStuff();
        // more code
        return result;
    }

    protected abstract String callSpecificFunctionStuff();
}

public class FunctionA_DE extends FunctionA {
    public FunctionA_DE(FunctionConfig funcConf) {
        super(funcConf);
    }

    @Override
    protected String callSpecificFunctionStuff() {
        // do some specific stuff
        return result;
    }
}

public class FunctionA_AT extends FunctionA {
    public FunctionA_AT(FunctionConfig funcConf) {
        super(funcConf);
    }

    @Override
    protected String callSpecificFunctionStuff() {
        // do some specific stuff
        return result;
    }
}
What would be the Spring Boot way of creating an instance of FunctionA_DE and handing it to the calling part of the application as a Function_I? And what should it look like when I add FunctionB with FunctionB_DE / FunctionB_AT to my classes?
I thought it could be something like:
PSEUDO CODE
@Configuration
public class FunctionFactory {

    @Bean
    @Scope(SCOPE_PROTOTYPE) // I need a new instance every time I call it
    public Function_I createFunctionA(FunctionConfiguration funcConfig) {
        // create the Function depending on funcConfig, so either FunctionA_DE or FunctionA_AT
    }
}
and I would call it by autowiring the FunctionFactory into my calling class and using it with
someSpringFactory.createFunction(functionConfiguration);
But I can't figure out how to create a prototype bean for the function while passing it a parameter. And I can't really find a solution to my question by browsing through SO; maybe I just used the wrong search terms, or my approach to solving this is totally wrong (maybe stupid) and nobody would solve it the Spring Boot way, but would stick to factories instead.
Appreciate your help!
You could use Spring's profiles. Create a bean for each of the implementations, but annotate each with a specific profile, e.g. "Function-A-AT". Now, when you have to invoke it, you can set the active profile of the Spring application context accordingly, and the right bean should be used by Spring.
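A rough sketch of that wiring (profile and bean-method names are made up, and it assumes FunctionConfig is itself available as a bean; note that profiles are normally selected at context startup, e.g. via spring.profiles.active, rather than per call):
@Configuration
public class FunctionProfilesConfig {

    @Bean
    @Profile("function-a-de")
    Function_I functionADe(FunctionConfig funcConfig) {
        return new FunctionA_DE(funcConfig);
    }

    @Bean
    @Profile("function-a-at")
    Function_I functionAAt(FunctionConfig funcConfig) {
        return new FunctionA_AT(funcConfig);
    }
}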
Hello everyone and thanks for reading my question.
After a discussion with a friend who is well versed in the Spring framework, I came to the conclusion that my approach, or rather my favoured solution, was not what I was searching for and is not how Spring should be used. Because each Function_I instance depends on the configuration loaded for a specific batch, it is not recommended to manage all these instances as @Beans.
In the end I decided not to manage the Function_I instances with Spring. Instead I built a controller/factory, which is a @Controller class, and let this class build the instances I need, with the passed parameters used for decision-making at runtime.
This is how it looks (pseudo-code):
@Controller
public class FunctionController {

    SomeSpringManagedClass ssmc;

    public FunctionController(@Autowired SomeSpringManagedClass ssmc) {
        this.ssmc = ssmc;
    }

    public Function_I createFunction(FunctionConfiguration funcConf) {
        boolean funcA, funcB, cntryDE;
        // code to decide the function
        if (funcA && cntryDE) {
            return new FunctionA_DE(funcConf);
        } else if (funcB && cntryDE) {
            return new FunctionB_DE(funcConf);
        } // maybe more else ifs...
    }
}
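A caller would then simply take the factory via constructor injection and hand it the per-batch configuration; a hypothetical example, just to show the wiring:
@Service
public class BatchProcessor {

    private final FunctionController functionController;

    public BatchProcessor(FunctionController functionController) {
        this.functionController = functionController;
    }

    public String processBatch(FunctionConfiguration funcConf) {
        Function_I function = functionController.createFunction(funcConf);
        return function.doFunctionStuff();
    }
}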
I am working on a REST API where I have an interface that defines a list of methods which are implemented by 4 different classes, with the possibility of adding many more in the future.
When I receive an HTTP request from the client there is some information included in the URL which will determine which implementation needs to be used.
Within my controller, I would like to have the end-point method contain a switch statement that checks the URL path variable and then uses the appropriate implementation.
I know that I can define and inject the concrete implementations into the controller and then insert which one I would like to use in each particular case in the switch statement, but this doesn't seem very elegant or scalable for 2 reasons:
I now have to instantiate all of the services, even though I only need to use one.
The code seems like it could be much leaner, since I am literally calling the same method that is defined in the interface with the same parameters. In this example it is not really an issue, but as the list of implementations grows, so does the number of cases and the redundant code.
Is there a better solution to solve this type of situation? I am using SpringBoot 2 and JDK 10, ideally, I'd like to implement the most modern solution.
My Current Approach
@RestController
@RequestMapping(Requests.MY_BASE_API_URL)
public class MyController {

    //== FIELDS ==
    private final ConcreteServiceImpl1 concreteService1;
    private final ConcreteServiceImpl2 concreteService2;
    private final ConcreteServiceImpl3 concreteService3;

    //== CONSTRUCTORS ==
    @Autowired
    public MyController(ConcreteServiceImpl1 concreteService1, ConcreteServiceImpl2 concreteService2,
                        ConcreteServiceImpl3 concreteService3) {
        this.concreteService1 = concreteService1;
        this.concreteService2 = concreteService2;
        this.concreteService3 = concreteService3;
    }

    //== REQUEST MAPPINGS ==
    @GetMapping(Requests.SPECIFIC_REQUEST)
    public ResponseEntity<?> handleSpecificRequest(@PathVariable String source,
                                                   @RequestParam String start,
                                                   @RequestParam String end) {
        source = source.toLowerCase();
        if (MyConstants.SOURCES.contains(source)) {
            switch (source) {
                case "value1":
                    concreteService1.doSomething(start, end);
                    break;
                case "value2":
                    concreteService2.doSomething(start, end);
                    break;
                case "value3":
                    concreteService3.doSomething(start, end);
                    break;
            }
        } else {
            // An invalid source path variable was received
        }
        // Return something after additional processing
        return null;
    }
}
In Spring you can get all implementations of an interface (say T) by injecting a List<T> or a Map<String, T> field. In the second case the names of the beans will become the keys of the map. You could consider this if there are a lot of possible implementations or if they change often. Thanks to it you could add or remove an implementation without changing the controller.
Injecting a List and injecting a Map each have benefits and drawbacks in this case. If you inject a List, you would probably need to add a method that maps names to implementations. Something like:
interface MyInterface {
    (...)
    String name();
}
This way you could transform the list into a Map<String, MyInterface>, for example using the Streams API. While this would be more explicit, it would pollute your interface a bit (why should it be aware that there are multiple implementations?).
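The transformation could look like this (a sketch, assuming the name() method above and an injected List<MyInterface> called implementations):
// given: private final Map<String, MyInterface> byName;
this.byName = implementations.stream()
        .collect(java.util.stream.Collectors.toMap(
                MyInterface::name,
                java.util.function.Function.identity()));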
When using the Map, you should probably name the beans explicitly or even introduce a custom annotation, to follow the principle of least astonishment. If you name the beans after the class names or the configuration class's method names, you could break the app by renaming those (and in effect changing the URL), even though renaming is usually a safe operation.
A simplistic implementation in Spring Boot could look like this:
@SpringBootApplication
public class DynamicDependencyInjectionForMultipleImplementationsApplication {

    public static void main(String[] args) {
        SpringApplication.run(DynamicDependencyInjectionForMultipleImplementationsApplication.class, args);
    }

    interface MyInterface {
        Object getStuff();
    }

    class Implementation1 implements MyInterface {
        @Override public Object getStuff() {
            return "foo";
        }
    }

    class Implementation2 implements MyInterface {
        @Override public Object getStuff() {
            return "bar";
        }
    }

    @Configuration
    class Config {
        @Bean("getFoo")
        Implementation1 implementation1() {
            return new Implementation1();
        }

        @Bean("getBar")
        Implementation2 implementation2() {
            return new Implementation2();
        }
    }

    @RestController
    class Controller {

        private final Map<String, MyInterface> implementations;

        Controller(Map<String, MyInterface> implementations) {
            this.implementations = implementations;
        }

        @GetMapping("/run/{beanName}")
        Object runSelectedImplementation(@PathVariable String beanName) {
            return Optional.ofNullable(implementations.get(beanName))
                    .orElseThrow(UnknownImplementation::new)
                    .getStuff();
        }

        @ResponseStatus(BAD_REQUEST)
        class UnknownImplementation extends RuntimeException {
        }
    }
}
It passes the following tests:
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class DynamicDependencyInjectionForMultipleImplementationsApplicationTests {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void shouldCallImplementation1() throws Exception {
        mockMvc.perform(get("/run/getFoo"))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("foo")));
    }

    @Test
    public void shouldCallImplementation2() throws Exception {
        mockMvc.perform(get("/run/getBar"))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("bar")));
    }

    @Test
    public void shouldRejectUnknownImplementations() throws Exception {
        mockMvc.perform(get("/run/getSomethingElse"))
                .andExpect(status().isBadRequest());
    }
}
Regarding two of your doubts:
1. Instantiating the service objects should not be an issue, as this is a one-time job and the controller will need them to serve all types of requests.
2. You can use exact path mappings to get rid of the switch case. For example:
#GetMapping("/specificRequest/value1")
#GetMapping("/specificRequest/value2")
#GetMapping("/specificRequest/value3")
Each of the above mappings would go on a separate method that deals with its specific source value and invokes the respective service method.
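For example, sketched against the controller from the question (handler names and the return handling are illustrative):
@GetMapping("/specificRequest/value1")
public ResponseEntity<?> handleValue1(@RequestParam String start, @RequestParam String end) {
    concreteService1.doSomething(start, end);
    return ResponseEntity.ok().build();
}

@GetMapping("/specificRequest/value2")
public ResponseEntity<?> handleValue2(@RequestParam String start, @RequestParam String end) {
    concreteService2.doSomething(start, end);
    return ResponseEntity.ok().build();
}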
Hope this helps to make the code cleaner and more elegant.
There is one more option: separating this at the service layer and having only one endpoint serve all types of sources. But as you said, there is a different implementation for each source value, which says that each source is really a resource of your application, and having a separate URI/separate method makes perfect sense here. A few advantages that I see with this are:
Makes it easy to write test cases.
Scaling one source without impacting any other source/service.
Your code deals with each source as an entity separate from the other sources.
The above approach should be fine when you have a limited set of source values. If you have no control over the source values, then a further redesign is needed: differentiate the sources by one more value, like a sourceType, and then have a separate controller for each group/type of source.
In my project I have different services. Each service can define its own permissions. For each permission, a bean will be created. This way, the Authorization service can inject all available permissions without actually knowing them.
The Permission definition of ServiceA will look like this:
@Configuration
public class ServiceAPermissions extends Permissions {

    private static final String BASE = "servicea";
    public static final String SERVICEA_READ = join(BASE, READ);
    public static final String SERVICEA_WRITE = join(BASE, WRITE);

    @Bean
    Permission getReadPermission() {
        return new Permission(SERVICEA_READ);
    }

    @Bean
    Permission getWritePermission() {
        return new Permission(SERVICEA_WRITE);
    }
}
ServiceB will define the following Permissions:
@Configuration
public class ServiceBPermissions extends Permissions {

    private static final String BASE = "serviceb";
    public static final String SERVICEB_READ = join(BASE, READ);

    @Bean
    Permission getReadPermission() {
        return new Permission(SERVICEB_READ);
    }
}
Obviously, this will end in a name clash between the defined beans, as I have defined a bean named getReadPermission twice. Of course I can name the methods like getServiceAReadPermission so they are distinguished, but that is only a convention, which might be ignored.
In this situation, Spring doesn't notify me about the duplicate definition; it simply instantiates one and ignores the other definition. Is there a way to tell Spring to throw an exception if a bean is defined twice? That way one would always be aware of a duplicate definition.
Alternatively, is there a way to tell Spring that it should use a random bean name instead of the method signature? I know that I can give each bean a name manually with @Bean(name = "A name"), but I'd like to avoid that, as a dev is not forced to do so and still might forget it.
That design does not seem very logical. A bean is supposed to be available only once; you're using it differently.
I'd suggest providing a PermissionFactory bean which does what you need, along the lines of:
@Component
public class PermissionFactory {

    public Permission createPermission() {
        // create an A or B permission here, as you wanted
    }
}
I'm working on a system that uses guice to bind and inject MyBatis mappers used to remove entries from different DBs. The fact is that all DB are located in different hosts but have the same structure. Since there are a lot of them and the number and location of the hosts change quite often, I would like to install a MyBatis module with different data sources that are loaded dynamically using the same mapper.
I've been looking around but can't figure out how to resolve the mapper ambiguity. I also took a look at the MyBatis beans CDI plugin, which makes it easier to add named mappers with multiple data sources, but I still can't get it working, since I don't have a fixed list of data sources that I can name.
Am I missing an easy way to achieve this?
You need to bind your MyBatisModule privately and expose the mappings with a unique binding attribute. I've got an example below. I've verified that it works, too :)
DaoModule: This module is set up to bind a single mapper to a key with a specific data source. Note that this class extends PrivateModule and exposes the key to the parent module. You'll be using this key to inject the mapping.
public class DaoModule<T> extends PrivateModule {

    private static final String ENVIRONMENT_ID = "development";

    private final Key<T> key;
    private final Class<T> mapper;
    private final Provider<DataSource> dataSourceProvider;

    public DaoModule(Key<T> key, Class<T> mapper, Provider<DataSource> dataSourceProvider) {
        this.key = key;
        this.mapper = mapper;
        this.dataSourceProvider = dataSourceProvider;
    }

    @Override
    protected void configure() {
        install(new InnerMyBatisModule());
        expose(key);
    }

    private class InnerMyBatisModule extends MyBatisModule {
        @Override
        protected void initialize() {
            bind(key).to(mapper);
            addMapperClass(mapper);
            environmentId(ENVIRONMENT_ID);
            bindDataSourceProvider(dataSourceProvider);
            bindTransactionFactoryType(JdbcTransactionFactory.class);
        }
    }
}
MyModule: This module installs two DaoModules with the same mapper type by two different keys and different data-sources.
public class MyModule extends AbstractModule {

    @Override
    protected void configure() {
        Key<MapperDao> key1 = Key.get(MapperDao.class, Names.named("Mapper1"));
        Provider<DataSource> datasource1 = null;

        Key<MapperDao> key2 = Key.get(MapperDao.class, Names.named("Mapper2"));
        Provider<DataSource> datasource2 = null;

        install(new DaoModule<MapperDao>(key1, MapperDao.class, datasource1));
        install(new DaoModule<MapperDao>(key2, MapperDao.class, datasource2));
    }
}
Main: And the main acquires the two mappers of the same type but with different data-sources.
public class Main {
    public static void main(String... args) {
        Injector i = Guice.createInjector(new MyModule());
        MapperDao mapper1 = i.getInstance(Key.get(MapperDao.class, Names.named("Mapper1")));
        MapperDao mapper2 = i.getInstance(Key.get(MapperDao.class, Names.named("Mapper2")));
    }
}
Example Injection Class: This shows how to use field injection to inject the mappers
public class MyExampleClass {

    @Inject
    @Named("Mapper1")
    MapperDao mapper1;

    @Inject
    @Named("Mapper2")
    MapperDao mapper2;
}
This answer is for a slightly different scope than the question. For anyone who has a fixed number of data sources and needs to share a mapper, there is also a solution without using @Named the way it is described in the accepted answer.
You can simply use
interface SomeMapperForDbA extends SomeMapper {}
and add + expose SomeMapperForDbA in the corresponding PrivateModule.
The interface name here acts as a logical data source discriminator, while all the mapping queries still stay intact in one place, in SomeMapper. There are pros and cons to this approach versus named injects, but it works and might save the day for some.
Obviously, you need to inject SomeMapperForDbA to use the DbA data source. That said, this can neatly be done in the constructor only, while the class member used in the actual code can be typed as plain SomeMapper to avoid confusion.
Alternatively, you could add some DbA-specific selects to SomeMapperForDbA, if the databases have common and differing parts. In that case I would suggest a better name that reflects such logic.
I.e. don't be afraid to extend mapper interfaces when needed.
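Sketched against the DaoModule from the accepted answer, the per-database wiring might look roughly like this (module and environment names are illustrative):
public class DbAModule extends PrivateModule {

    private final Provider<DataSource> dbADataSourceProvider;

    public DbAModule(Provider<DataSource> dbADataSourceProvider) {
        this.dbADataSourceProvider = dbADataSourceProvider;
    }

    @Override
    protected void configure() {
        install(new MyBatisModule() {
            @Override
            protected void initialize() {
                environmentId("dbA");
                bindDataSourceProvider(dbADataSourceProvider);
                bindTransactionFactoryType(JdbcTransactionFactory.class);
                addMapperClass(SomeMapperForDbA.class); // inherits its queries from SomeMapper
            }
        });
        expose(SomeMapperForDbA.class); // the interface name is the discriminator
    }
}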
org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource is designed for this purpose.
https://www.baeldung.com/spring-abstract-routing-data-source
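A minimal sketch of that idea (the ThreadLocal-based lookup key and the two data sources are illustrative, not prescribed by the class):
public class RoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> CURRENT_DB = new ThreadLocal<>();

    public static void setCurrentDb(String dbKey) {
        CURRENT_DB.set(dbKey);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return CURRENT_DB.get();
    }
}

// wiring, assuming dataSourceA and dataSourceB already exist:
RoutingDataSource routing = new RoutingDataSource();
Map<Object, Object> targets = new HashMap<>();
targets.put("dbA", dataSourceA);
targets.put("dbB", dataSourceB);
routing.setTargetDataSources(targets);
routing.setDefaultTargetDataSource(dataSourceA);
routing.afterPropertiesSet(); // resolves the target map before first use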
I'm trying to mock a class that looks like below
public class MessageContainer {

    private final MessageChain[] messages;

    MessageContainer(final int numOfMessages, final MessageManagerImpl manager, final Object someOtherStuff) {
        messages = new MessageChain[numOfMessages];
        // do other stuff
    }

    public void foo(final int index) {
        // do something
        messages[index] = getActiveMessage();
    }
}
My test code would be as followed:
@Test
public void testFoo() {
    MessageContainer messageContainer = Mockito.mock(MessageContainer.class);
    Mockito.doCallRealMethod().when(messageContainer).foo(Mockito.anyInt());
}
I got a NullPointerException, since 'messages' is null. I tried to inject the mock by using @InjectMocks; however, this case is not supported, since not all of the constructor's parameters are declared as members.
I also tried to set the 'messages' field by using WhiteBox
Whitebox.setInternalState(messageContainer, MessageChain[].class, PowerMockito.mock(MessageChain[].class));
but I got a compile error since setInternalState only supports (Object, Object, Object) and not Object[].
Is there any possible way to mock a private final field?
Thank you guys in advance.
Based on your edits and comments, I would say mocking this class and verifying the method was invoked is sufficient.
If it is third-party code, you should rely only on its method signature, which comprises the class's public API. Otherwise you are coupling your tests too tightly to something you have no control over. What do you do when they decide to use a Collection instead of an array?
Simply write:
MessageContainer container = mock(MessageContainer.class);
// Your code here...
verify(container).foo(index);