How to test object initialisation within a super constructor?

I have a BasePersister that builds a complex persistence client in its constructor, using Dagger:
public abstract class BasePersister {
    @Getter
    private PersistenceClient client;

    public BasePersister() {
        this.client = DaggerPersistenceClientComponent.create().getClient();
    }

    public abstract void persist(String data);

    protected void helper() {
        System.out.println("I am a helper");
    }
}
The idea is that child persister classes can just extend the base class and perform their persistence logic with the client. An example child class:
public class SpecialPersister extends BasePersister {
    public void persist(String data) {
        // do stuff with the client
        getClient().persist(data);
        // use the shared helper
        helper();
    }
}
Moving the client instantiation within the base class constructor was ideal because in my PersisterFactory, I can simply invoke statements like new SpecialPersister(); the constructor doesn't take any arguments, doesn't need Dagger to instantiate and the factory is completely unaware of any clients.
I'm having trouble testing these child classes and I'm suspecting it has to do with my design choice of secretly instantiating clients within the base constructors.
More specifically, in my SpecialPersisterTest class, I can't do Spy(SpecialPersister) as this invokes the base constructor, which then instantiates my complex clients (giving me an error). I somehow need to mock this super-constructor call so that it doesn't actually invoke client instantiation, which has complex network calls etc.
Ideally, I can do a simple test such as checking:
def "my ideal test"() {
given:
specialPersister.persist(validData)
then:
1 * specialPersister.getClient()
and:
1 * specialPersister.helper()
}

Moving the client instantiation within the base class constructor was ideal because in my PersisterFactory, I can simply invoke statements like new SpecialPersister(); the constructor doesn't take any arguments, doesn't need Dagger to instantiate and the factory is completely unaware of any clients.
I'm having trouble testing these child classes and I'm suspecting it has to do with my design choice of secretly instantiating clients within the base constructors.
This design choice is the issue. If you want the code to be testable without making calls on the real client, you will need to be able to stub your client. One option for this is to pass the PersistenceClient in at instantiation.
Since you are using a factory pattern, your factory can provide the client without the rest of your code worrying about the details. The factory should know how to create Persister objects, regardless of whether that means knowing details about the client - coupling at this level should be encouraged. You may also want your factory to take the client as an argument, so that a Persister produced by the factory can be tested.
public abstract class BasePersister {
    private PersistenceClient client;

    public BasePersister(PersistenceClient client) {
        this.client = client;
    }
}

public class SpecialPersister extends BasePersister {
    public SpecialPersister(PersistenceClient client) {
        super(client);
    }
}

public class PersisterFactory {
    // pass in the client once to a PersisterFactory instance
    private PersistenceClient client;

    public PersisterFactory(PersistenceClient client) {
        this.client = client;
    }

    public SpecialPersister createSpecialPersister() {
        return new SpecialPersister(client);
    }
}
// elsewhere
PersisterFactory persisterFactory = new PersisterFactory(DaggerPersistenceClientComponent.create().getClient());
// ...
BasePersister persister = persisterFactory.createSpecialPersister();
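With the client injected, the child classes become straightforward to test: the test hands the persister a stub instead of the real Dagger-built client. Here is a minimal sketch, assuming PersistenceClient is an interface (or otherwise substitutable) with a void persist(String) method as used in the question; the stub class is illustrative only, and a Spock Spy or a mocking library would work just as well:
// Hypothetical hand-rolled test double; records calls instead of making network calls.
class StubPersistenceClient implements PersistenceClient {
    String lastPersisted;

    @Override
    public void persist(String data) {
        lastPersisted = data;
    }
}

// In a test (framework-agnostic):
StubPersistenceClient stub = new StubPersistenceClient();
SpecialPersister persister = new SpecialPersister(stub);
persister.persist("validData");
// verify the persister delegated to the injected client, no Dagger graph required
assert "validData".equals(stub.lastPersisted);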

Related

Does strategy always needs to be passed from the client code in Strategy Pattern?

I have the following piece of code:
public interface SearchAlgo { public Items search(); }
public class FirstSearchAlgo implements SearchAlgo { public Items search() {...} }
public class SecondSearchAlgo implements SearchAlgo { public Items search() {...} }
I also have a factory to create instances of above concrete classes based on client's input. Below SearchAlgoFactory code is just for the context.
public class SearchAlgoFactory {
    ...
    public static SearchAlgo getSearchInstance(String arg) {
        if (arg.equals("First")) return new FirstSearchAlgo();
        if (arg.equals("Second")) return new SecondSearchAlgo();
        throw new IllegalArgumentException("Unknown search algorithm: " + arg);
    }
}
Now, I have a class that takes input from client, get the Algo from Factory and executes it.
public class Manager {
    public Items execute(String arg) {
        SearchAlgo algo = SearchAlgoFactory.getSearchInstance(arg);
        return algo.search();
    }
}
Question:
I feel that I am using both the Factory and Strategy patterns, but I am not sure, because every example I have seen has a Context class that executes the strategy, and the client provides the strategy it wants to use. So, is this a correct implementation of Strategy?
When it comes to implementing design patterns, it is much more important to understand what they do than to conform to some gold-standard reference implementation. And it looks like you understand the strategy pattern.
The important thing about strategies is that the implementation is external to some client code (usually called the context) and that it can be changed at runtime. This can be done by letting the user provide the strategy object directly. However, introducing another level of indirection through your factory is just as viable. Your Manager class acts as the context you see in most UML diagrams.
So, yes. In my opinion, your code implements the strategy pattern.
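To make the "changed at runtime" part concrete, here is a hedged sketch reusing your SearchAlgo types; the SearchContext class below is illustrative, not part of your code, and your Manager plays the same role:
// Illustrative context that accepts the strategy directly and can swap it at runtime.
public class SearchContext {
    private SearchAlgo algo;

    public SearchContext(SearchAlgo algo) {
        this.algo = algo;
    }

    public void setAlgo(SearchAlgo algo) {
        this.algo = algo; // swap the strategy without touching the context's code
    }

    public Items search() {
        return algo.search();
    }
}

// elsewhere
SearchContext context = new SearchContext(new FirstSearchAlgo());
Items first = context.search();
context.setAlgo(new SecondSearchAlgo());
Items second = context.search();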

How to design a class for unit tests

I have a Java class like the following:
public class MyClass {
    /** Database Connection. */
    private DbConnection dbCon;

    public MyClass() {
        dbCon = ...
    }

    public void doSomethingWith(MyData data) {
        data = convertData(data);
        dbCon.storeData(data);
    }

    private MyData convertData(MyData data) {
        // Some complex logic...
        return data;
    }
}
Since the true logic of this class lies in the convertData() method, I want to write a unit test for this method.
So I read this post
How do I test a private function or a class that has private methods, fields or inner classes?
where a lot of people say that the need to test a private method is a design smell. How can it be done better?
I see 2 approaches:
Extract the convertData() method into some utility class with a public API. But I think this would also be bad practice, since such utility classes will violate the single responsibility principle, unless I create a lot of utility classes with maybe only one or two methods.
Write a second constructor that allows injection of the dbCon, which allows me to inject a mocked version of the database connection and run my test against the public doSomethingWith() method. This would be my preferred approach, but there are also discussions about how the need for mocking is itself a code smell.
Are there any best practices regarding this problem?
Extract the convertData() method into some utility class with a public API. But I think this would also be bad practice, since such utility classes will violate the single responsibility principle, unless I create a lot of utility classes with maybe only one or two methods.
Your interpretation of this is wrong. That is exactly what the SRP and SoC (Separation of Concerns) suggest:
public interface MyDataConverter {
    MyData convertData(MyData data);
}

public class MyDataConverterImplementation implements MyDataConverter {
    public MyData convertData(MyData data) {
        // Some complex logic...
        return data;
    }
}
The convertData implementation can now be tested in isolation, independently of MyClass.
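As a sketch of what that buys you (the test below is hypothetical; the fixture and assertions depend on what the "complex logic" actually does):
// The converter is exercised with no MyClass and no database connection in sight.
MyDataConverter converter = new MyDataConverterImplementation();
MyData input = ...  // build a test fixture
MyData converted = converter.convertData(input);
// assert on 'converted' according to the conversion rules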
Write a second constructor that allows injection of the dbCon, which allows me to inject a mocked version of the database connection and run my test against the public doSomethingWith() method. This would be my preferred approach, but there are also discussions about how the need for mocking is itself a code smell.
Wrong again. Research the Explicit Dependencies Principle.
public class MyClass {
    private DbConnection dbCon;
    private MyDataConverter converter;

    public MyClass(DbConnection dbCon, MyDataConverter converter) {
        this.dbCon = dbCon;
        this.converter = converter;
    }

    public void doSomethingWith(MyData data) {
        data = converter.convertData(data);
        dbCon.storeData(data);
    }
}
MyClass is now more honest about what it needs to perform its desired function.
It can also be unit tested in isolation with the injection of mocked dependencies.
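For example, a minimal sketch with hand-rolled stubs (assuming DbConnection is an interface or otherwise substitutable; a mocking library such as Mockito would serve the same purpose):
// Hypothetical test doubles.
class RecordingConnection implements DbConnection {
    MyData stored;
    public void storeData(MyData data) { stored = data; } // record instead of hitting a database
}

class PassThroughConverter implements MyDataConverter {
    public MyData convertData(MyData data) { return data; } // no-op conversion for the test
}

// In a test:
RecordingConnection connection = new RecordingConnection();
MyClass myClass = new MyClass(connection, new PassThroughConverter());
MyData someData = ...  // test fixture
myClass.doSomethingWith(someData);
// assert that connection.stored holds the (converted) data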

java open closed principle for multiple services

Let's say I wanted to define an interface which represents a call to a remote service.
Both services have different request and response types.
public interface ExecutesService<T,S> {
    public T executeFirstService(S obj);
    public T executeSecondService(S obj);
    public T executeThirdService(S obj);
    public T executeFourthService(S obj);
}
Now, let's see the implementations:
public class ServiceA implements ExecutesService<Response1,Request1> {
    public Response1 executeFirstService(Request1 obj) {
        // This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }

    public Response1 executeSecondService(Request1 obj) {
        // execute some service
    }

    public Response1 executeThirdService(Request1 obj) {
        // execute some service
    }

    public Response1 executeFourthService(Request1 obj) {
        // execute some service
    }
}
public class ServiceB implements ExecutesService<Response2,Request2> {
    public Response2 executeFirstService(Request2 obj) {
        // execute some service
    }

    public Response2 executeSecondService(Request2 obj) {
        // This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }

    public Response2 executeThirdService(Request2 obj) {
        // This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }

    public Response2 executeFourthService(Request2 obj) {
        // execute some service
    }
}
In another class, depending on some value in the request, I create an instance of either ServiceA or ServiceB.
I have questions regarding the above:
Is the use of a generic interface ExecutesService<T,S> good in the case where you want to provide subclasses which require different Request and Response types?
How can I do the above better?
Basically, your current design violates the open/closed principle: what if you wanted to add an executeFifthService() method to the ServiceA, ServiceB, etc. classes?
It is not a good idea to have to update all of your ServiceA, ServiceB, etc. classes; in simple words, classes should be open for extension but closed for modification.
Rather, you can refer to the approach below:
ExecutesService interface:
public interface ExecutesService<T,S> {
    public T executeService(S obj, Service<T,S> service);
}
ServiceA Class:
public class ServiceA implements ExecutesService<Response1,Request1> {
    // list of service classes supported by ServiceA, loaded during startup (e.g. from properties)
    private List<Class<?>> supportedListOfServices = new ArrayList<>();

    public Response1 executeService(Request1 request1, Service<Response1,Request1> service) {
        if (!supportedListOfServices.contains(service.getClass())) {
            throw new UnsupportedOperationException("This method should not be called for this class");
        } else {
            return service.execute(request1);
        }
    }
}
Similarly, you can implement ServiceB as well.
Service interface:
public interface Service<T,S> {
    public T execute(S s);
}
FirstService class:
public class FirstService implements Service<Response1,Request1> {
    public Response1 execute(Request1 req) {
        // call the first remote service and build the Response1
    }
}
Similarly, you need to implement SecondService, ThirdService, etc. as well.
So, in this approach, you are basically passing the Service to be actually called (it could be FirstService, SecondService, etc.) at runtime, and ServiceA validates whether it is in supportedListOfServices; if not, it throws an UnsupportedOperationException.
The important point here is that you don't need to update any of the existing services to add new functionality (unlike your design, where you would need to add executeFifthService() to ServiceA, ServiceB, etc.); rather, you add one more class called FifthService and pass it in.
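For example, adding a fifth operation later might look like this (a sketch under the assumptions above; Response1/Request1 and the no-arg Response1 constructor are placeholders, and none of the existing classes is edited):
// The new functionality arrives purely as a new Service implementation.
public class FifthService implements Service<Response1,Request1> {
    public Response1 execute(Request1 req) {
        // new logic lives here
        return new Response1(); // placeholder
    }
}

// elsewhere: allow it in ServiceA's supported list (loaded at startup) and call it
// through the existing entry point, with no change to ServiceA itself:
// serviceA.executeService(request1, new FifthService());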
I would suggest you create two different interfaces, each of which handles its own request and response types.
Of course you can develop an implementation with one generic interface handling all the logic, but from my point of view it may make the code more complex and dirty.
Regards
It doesn't really make sense to have an interface if you know that, for one case, most of the interface's methods are not supported and so should not be called by the client.
Why provide the client with an interface that is error-prone to use?
I think that you should have two distinct APIs in your use case, that is, two classes (if an interface is not required any longer) or two interfaces.
However, that doesn't mean the two APIs cannot share a common ancestor interface if it makes sense for some processing where instances should be interchangeable because they rely on the same operation contract.
Is the use of a generic interface (ExecutesService) good in the case where you want to provide subclasses which require different Request and Response types?
It is not classic class derivation, but in some cases it is desirable, as it allows a common interface to be used for implementations that have similar enough methods but do not share the same return or parameter types in their signatures:
public interface ExecutesService<T,S>
It allows you to define a contract where classic derivation cannot.
However, this way of implementing a class doesn't necessarily let you program to the interface, because the declared type fixes particular type parameters:
ExecutesService<String, Integer> myVar = ...;
cannot be interchanged with:
ExecutesService<Boolean, String> otherVar
like this: myVar = otherVar.
I think that your question is related to this problem.
You manipulate implementations whose methods are close enough in shape but do not really represent the same behavior.
So you end up mixing things from two concepts that have no relation to each other.
With classic inheritance (without generics), you would probably have introduced distinct interfaces very quickly.
I guess it is not a good idea to implement an interface and make it possible to call unsupported methods. It is a sign that you should split your interface into two or three, depending on the concrete situation, in such a way that each class implements all methods of the interface it implements.
In your case I would split the entire interface into three, using inheritance to avoid duplication. Please see the example:
public interface ExecutesService<T, S> {
    T executeFourthService(S obj);
}

public interface ExecutesServiceA<T, S> extends ExecutesService<T, S> {
    T executeSecondService(S obj);
    T executeThirdService(S obj);
}

public interface ExecutesServiceB<T, S> extends ExecutesService<T, S> {
    T executeFirstService(S obj);
}
Please also take into account that it is redundant to place the public modifier on interface methods.
Hope this helps.
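To show the payoff, here is a sketch of the implementations under that split (the generic arguments are illustrative); note that no method has to throw UnsupportedOperationException any more:
// Placeholder bodies; each class implements only the operations it genuinely supports.
public class ServiceA implements ExecutesServiceA<Response1, Request1> {
    public Response1 executeSecondService(Request1 obj) { /* execute some service */ return null; }
    public Response1 executeThirdService(Request1 obj)  { /* execute some service */ return null; }
    public Response1 executeFourthService(Request1 obj) { /* execute some service */ return null; }
}

public class ServiceB implements ExecutesServiceB<Response2, Request2> {
    public Response2 executeFirstService(Request2 obj)  { /* execute some service */ return null; }
    public Response2 executeFourthService(Request2 obj) { /* execute some service */ return null; }
}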

Adding functionality to implementation classes without changing the implemented Interface

I have got an interface that defines some service methods for data retrieval:
public interface DataReceiver {
    public Data getData();
}
Then I have a class that implements this interface and loads the data through a connection. I supply this connection using constructor injection:
public class ConnectionDataReceiver implements DataReceiver {
    private Connection connection;

    public ConnectionDataReceiver(Connection connection) {
        this.connection = connection;
    }

    public Data getData() {
        return connection.query("blabla");
    }
}
This works pretty nicely. I can instantiate my ConnectionDataReceiver objects using the constructor, or I could add a factory method/class that extends the usability by providing an option to select a config file for connection setup. I then use my new class through my interface, so I can easily swap out the implementation (like loading the data from a file instead of a connection).
But what if I want to change my connection during runtime, without instantiating a new ConnectionDataReceiver? I would have to add getters and setters for my class. But since they are not part of my public service definition, I can't put them in my interface. I could use the implementation object in my code to set a new connection, but it feels pretty awkward hanging onto a reference to the original object only for maybe changing the connection object:
ConnectionDataReceiver conDataRec = new ConnectionDataReceiver(myConnection);
DataReceiver dataRec = conDataRec;
// use dataRec
conDataRec.setConnection(myNewConnection);
// use dataRec again
In this example the easiest way would be to just instantiate a new ConnectionDataReceiver and reassign dataRec, but what if the instantiation of my object is really expensive? How do I give my implementation classes additional functionality while still being able to use my old service interface? Or is changing data at runtime generally frowned upon when the interface doesn't define that functionality?
What you can do is add the following two simple methods to your interface:
public void setProperty(String name, Object value);
public Object getProperty(String name);
Now, with the help of these two simple methods, you can configure as many additional features as you want in your implementation classes without adding a new method to your super type for every new feature.
This pattern is used in the following interface:
com.ibm.msg.client.jms.JmsQueueConnectionFactory
The interface has setCharProperty, setDoubleProperty, setFloatProperty, etc., so that when they release a new implementation they do not have to modify the interface.
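Applied to the DataReceiver example, a minimal sketch might look like this (the map-backed storage and the "connection" property name are assumptions made for illustration):
import java.util.HashMap;
import java.util.Map;

public class ConnectionDataReceiver implements DataReceiver {
    private final Map<String, Object> properties = new HashMap<>();
    private Connection connection;

    public ConnectionDataReceiver(Connection connection) {
        this.connection = connection;
    }

    public Data getData() {
        return connection.query("blabla");
    }

    public void setProperty(String name, Object value) {
        if ("connection".equals(name)) {
            connection = (Connection) value; // swap the connection at runtime
        } else {
            properties.put(name, value); // room for future features without touching DataReceiver
        }
    }

    public Object getProperty(String name) {
        return "connection".equals(name) ? connection : properties.get(name);
    }
}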
My version:
Interface
public interface DataReceiver
{
    public Data getData();
}
Implementation
public class ConnectionDataReceiver implements DataReceiver
{
    private Connection connection;

    public ConnectionDataReceiver(Connection connection)
    {
        this.connection = connection;
    }

    public Data getData()
    {
        return connection.query("blabla");
    }
}
The interface is used in the business layer; here the setReceiver method assigns a new implementation of the interface at run-time.
public class SomeBusinessLogic
{
    private DataReceiver receiver;

    public SomeBusinessLogic(DataReceiver receiver)
    {
        this.receiver = receiver;
    }

    public void setReceiver(DataReceiver receiver)
    {
        this.receiver = receiver;
    }
}
With this approach you can change the implementation of DataReceiver at run-time.
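For example (FileDataReceiver is a hypothetical second implementation, used only for illustration):
// elsewhere
SomeBusinessLogic logic = new SomeBusinessLogic(new ConnectionDataReceiver(myConnection));
// ... later, switch the data source without rebuilding the business object
logic.setReceiver(new FileDataReceiver(someFile));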

Java putting common methods inside superclass

I have two classes that both take the same type of object as a constructor argument, but then call a different method on that object to obtain another object (the obtained object's type is also different in the two classes) that is used extensively throughout the classes in different methods. Now, some of these methods are identical between the two classes, so I thought it would be wise to put them in a superclass. However, since the methods depend on the object that was obtained by calling a different method on the object given as an argument to the constructors, I can't just copy the constructors from the subclasses into the superclass. I'm uncertain how the superclass can obtain the required object. It seems that my potential superclass `Server` would be dependent on its subclasses, which even sounds wrong.
Here's an illustrative code of the problem:
class ServerOne {
    Connector connector;

    public ServerOne(Conf conf) {
        Conf.ServerOneConf config = conf.getServerOneConf();
        connector = config.getConnector();
    }

    // a lot of methods that use connector
}

class ServerTwo {
    Connector connector;

    public ServerTwo(Conf conf) {
        // notice that it uses a different method for obtaining the configuration;
        // the obtained object is also of a different type than the configuration
        // object obtained in the ServerOne constructor
        Conf.ServerTwoConf config = conf.getServerTwoConf();
        connector = config.getConnector();
    }

    // a lot of methods that use connector
}

class Server {
    // would like to implement some common methods that use connector.
    // need to get an instance of the Connector to this class.
}
Thank you very much for your help :)
There may be reasons to subclass your Server class, but how to get the connector probably isn't a reason for subclassing. Make a strategy to handle getting a Connector:
interface ConnectorStrategy {
    Connector retrieveConnector(Conf conf);
}
with implementations like
class ServerOneConnectorStrategy implements ConnectorStrategy {
    public Connector retrieveConnector(Conf conf) {
        return conf.getServerOneConf().getConnector();
    }
}
and pass this in to the Server object when you create it.
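A sketch of what that could look like (the Server constructor shown here is an assumption, not part of the question's code):
class Server {
    private final Connector connector;

    // The strategy decides how the Connector is obtained;
    // Server no longer cares which part of Conf it came from.
    Server(Conf conf, ConnectorStrategy strategy) {
        this.connector = strategy.retrieveConnector(conf);
    }

    // common methods that use connector
}

// elsewhere
Server serverOne = new Server(conf, new ServerOneConnectorStrategy());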
Or if you need the hierarchy, use the template method pattern:
abstract class Server {
    abstract Connector retrieveConnector(Conf conf);

    void initializeConnector(Conf conf) {
        ...
        connector = retrieveConnector(conf);
    }
    ...
}

class ServerOne extends Server {
    public Connector retrieveConnector(Conf conf) {
        return conf.getServerOneConf().getConnector();
    }
}
How about making Server an abstract class and extending ServerOne and ServerTwo from it?
Like this:
public abstract class Server {
    Connector connector;

    public Server(Conf conf) {
        Configuration config = conf.getServerTwoConf();
        connector = config.getConnector();
    }
    ...
}

class ServerOne extends Server {
    ...
}

class ServerTwo extends Server {
    ...
}
This is a perfect case for extending a superclass. The object you are populating in both constructors is of the same type - not the same data. When you create a ServerOne, it will populate the object held in the superclass the way you currently do. Then, the common methods in the superclass can now operate on this object that is populated.
Put your shared methods in the super class
class Server {
    Connector connector;

    public Server(Conf conf) {
        Configuration config = conf.getServerConf();
        connector = config.getConnector();
    }

    // your methods
}
and then just use the super() call in the constructor of your subclasses. They can easily inherit the methods of your super class without having to write them again.
class ServerOne extends Server {
    public ServerOne(Conf conf) {
        super(conf);
    }
}
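Usage then reduces to something like this (doSharedWork is a hypothetical stand-in for whichever common methods the superclass defines):
// elsewhere
Server server = new ServerOne(conf); // ServerOne only forwards conf to super(conf)
server.doSharedWork();               // inherited method that uses connector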
