Decorator Pattern - Adding functionality not defined in interface - java

Are there any hard and fast rules on the use of this pattern or is it solely intended as a way to achieve additional functionality within method calls without using inheritance?
I have amended the example below that I took from a SO post to demonstrate what I am considering.
public interface Coffee {
    public double getCost();
    public String getIngredients();
}

public class SimpleCoffee implements Coffee {
    @Override
    public double getCost() {
        return 1;
    }

    @Override
    public String getIngredients() {
        return "Coffee";
    }
}
public class CoffeeDecorator implements Coffee {
    protected final Coffee decoratedCoffee;

    public CoffeeDecorator(Coffee c) {
        this.decoratedCoffee = c;
    }

    @Override
    public double getCost() {
        //you can add extra functionality here.
        return decoratedCoffee.getCost();
    }

    @Override
    public String getIngredients() {
        //you can add extra functionality here.
        return decoratedCoffee.getIngredients();
    }

    public boolean methodNotDefinedInInterface() {
        //do something else
        return true;
    }
}
So with the example above in mind, is it viable to:
a) use the simple Coffee whenever you see fit without decorating it
b) Add additional functionality that is not defined in the Coffee interface to decorator objects such as the methodNotDefinedInInterface()
Could someone also explain where the composition comes into this pattern as the SimpleCoffee is something that can exist in its own right, but it seems to be the decorator that actually 'owns' any object.
Although without the SimpleCoffee class (or some concrete implementation of Coffee) the decorator doesn't have any purpose, so aggregation doesn't seem to be what is occurring here.

The description of the pattern includes intent which makes it pretty clear what the pattern is for:
The decorator pattern can be used to extend (decorate) the functionality of a certain object statically, or in some cases at run-time, independently of other instances of the same class, provided some groundwork is done at design time.
As for "hard and fast rules" - I generally don't think that there are "hard and fast rules" in patterns at all. Like, if you don't implement it exactly as GoF described, there will be no "pattern police" punishing you. The only point is that if you follow the classic guidelines, other developers will have less problems recognizing patterns in your code.
Your example is quite OK from my point of view.
SimpleCoffee is not a decorator, so there is no composition there. CoffeeDecorator has decoratedCoffee as a component (that is where your composition is).

a) use the simple Coffee whenever you see fit without decorating it
Yes, of course.
b) Add additional functionality that is not defined in the Coffee
interface to decorator objects such as the
methodNotDefinedInInterface()
You can add more methods, just as you could add new methods to the SimpleCoffee class, but note that they are not visible through the Coffee interface, so you would need to reference the decorator type somewhere in order to call them.
Personally, I find this pattern useful when someone gives you an instance of Coffee (i.e. you didn't instantiate it). If you need to change its behavior at runtime, the only way is to wrap it inside another object of Coffee type. This is when you can throw it into the decorator class. The decorator can expose some of the original behavior while providing some new behaviors.
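As a minimal sketch of such a wrapper (MilkDecorator is an invented name for illustration, not part of the original example), a concrete decorator typically delegates to the wrapped Coffee and layers extra behavior on top:
public class MilkDecorator extends CoffeeDecorator {
    public MilkDecorator(Coffee c) {
        super(c);
    }

    @Override
    public double getCost() {
        // delegate, then add the (hypothetical) price of milk
        return super.getCost() + 0.5;
    }

    @Override
    public String getIngredients() {
        return super.getIngredients() + ", Milk";
    }
}

// Usage: the caller still only needs the Coffee interface
Coffee coffee = new MilkDecorator(new SimpleCoffee());
System.out.println(coffee.getCost());        // 1.5
System.out.println(coffee.getIngredients()); // "Coffee, Milk"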

Does using default methods in interfaces violate the Interface Segregation Principle?

I'm learning about SOLID principles, and ISP states that:
Clients should not be forced to depend upon interfaces that they do
not use.
Does using default methods in interfaces violate this principle?
I have seen a similar question, but I'm posting here with an example to get a clearer picture of whether my example violates ISP.
Say I have this example:
public interface IUser {
    void handleUser();
    String getID();

    default void closeSession() {
        System.out.println("Client Left");
    }

    default void readRecords() {
        System.out.println("User requested to read records...");
        System.out.println("Printing records....");
        System.out.println("..............");
    }
}
With the following classes implementing IUser Interface
public class Admin implements IUser {
    public String getID() {
        return "ADMIN";
    }

    public void handleUser() {
        boolean sessionIsOpen = true;
        while (sessionIsOpen) {
            // 'in' is assumed to be a reader (e.g. a BufferedReader over System.in)
            switch (Integer.parseInt(in.readLine())) {
                case 1 -> addNewUser();
                case 2 -> sessionIsOpen = false;
                default -> System.out.println("Invalid Entry");
            }
        }
        closeSession();
    }

    private void addNewUser() {
        System.out.println("Adding New User...");
    }
}
Editor Class:
public class Editor implements IUser {
    public String getID() {
        return "EDITOR";
    }

    public void handleUser() {
        boolean sessionIsOpen = true;
        while (sessionIsOpen) {
            switch (Integer.parseInt(in.readLine())) {
                case 1 -> addBook();
                case 2 -> readRecords();
                case 3 -> sessionIsOpen = false;
                default -> System.out.println("Invalid Entry");
            }
        }
        closeSession();
    }

    private void addBook() {
        System.out.println("Adding New Book...");
    }
}
Viewer Class
public class Viewer implements IUser {
    public String getID() {
        return "Viewer";
    }

    public void handleUser() {
        boolean sessionIsOpen = true;
        while (sessionIsOpen) {
            switch (Integer.parseInt(in.readLine())) {
                case 1 -> readRecords();
                case 2 -> sessionIsOpen = false;
                default -> System.out.println("Invalid Entry");
            }
        }
        closeSession();
    }
}
Since editor and viewer class use readRecords() method and Admin class doesn't provide an implementation for that method, I implemented it as a default method in IUser Interface to minimize code repetition (DRY Principle).
Am I violating the interface segregation principle in the above code by using default methods in IUser because the Admin class does not use the read method?
Can someone please explain? I think I'm not forcing the Admin class to depend on methods/interfaces that it does not use.
does using default methods in interfaces violate the principle?
No, not if they're used correctly. In fact, they can help to avoid violating ISP (see below).
Does your example of using default methods violate ISP?
Yes! Well, probably. We could have a debate about exactly how badly it violates ISP, but it definitely violates a bunch of other principles, and isn't good practice with Java programming.
The problem is that you're using a default method as something for the implementing class to call. That's not their intent.
Default methods should be used to define methods that:
users of the interface will likely wish to call (i.e. not implementers)
provide aggregate functionality
have an implementation that is likely to be the same for most (if not all) implementers of the interface
Your example appears to break several conditions.
The first condition is there for a simple reason: all inheritable methods on Java interfaces are public, so they always can be called by users of the interface. To give a concrete example, the below code works fine:
Admin admin = new Admin();
admin.closeSession();
admin.readRecords();
Presumably, you don't want this to be possible, not just for Admin, but for Editor and Viewer too? I would argue that this is a sort-of violation of ISP, because you are depending on users of your classes not calling those methods. For the Admin class, you could make readRecords() 'safe' by overriding it and giving it a no-op implementation, but that just highlights a much more direct violation of ISP. For all other methods/implementations, including the classes that do make use of readRecords(), you're screwed. Rather than thinking of this in terms of ISP, I'd call it API or implementation leakage: it allows your classes to be used in ways that you didn't intend (and may wish to break in the future).
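For instance, the 'safe' override mentioned above would look something like this (a sketch of the no-op idea, not code from the question):
public class Admin implements IUser {
    // ...
    @Override
    public void readRecords() {
        // deliberately a no-op: Admin has no use for this method,
        // which is exactly the ISP smell described above
    }
}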
The second condition I stated might need further explanation. By aggregate functionality, I mean that the methods should probably call (either directly or indirectly) one or more of the abstract methods on the interface. If they don't do that, then the behaviour of those methods can't possibly depend on the state of the implementing class, and so could probably be static, or moved into a different class entirely (i.e. see the Single-responsibility principle). There are examples and use cases where it's OK to relax this condition but they should be thought about very carefully. In the example you give, the default methods are not aggregate, but it looks like sanitized code for the sake of Stack Overflow, so maybe your "real" code is fine.
It's debatable whether 2/3 implementers counts as "most" with regards to my third condition. However, another way to think about it is that you should know, in advance of writing the implementing classes, whether or not they should have that method with that functionality. How certain can you be that, if you need to create a new class of User in the future, it will require the functionality of readRecords()? Either way, it's a moot point, as this condition only really needs to be thought about if you haven't violated the first two.
A good use of default methods
There are examples in the standard library of good uses of default methods. One would be java.util.function.Function with its andThen(...) and compose(...) methods. These are useful pieces of functionality for users of Functions, they (indirectly) make use of the Function's abstract apply(...) method, and importantly, it's highly unlikely that an implementing class would ever wish to override them, except maybe for efficiency in some highly specialized scenarios.
These default methods do not violate ISP, as classes that implement Function have no need to call or override them. There may be many use-cases where concrete instances of Function never have their andThen(...) method called, but that's fine – you don't break ISP by providing useful but non-essential functionality, as long as you don't encumber all those use-cases by forcing them to do something with it. In the case of Function, providing these methods as abstract rather than default would violate ISP, as all implementing classes would have to add their own implementations, even when they know it's unlikely to ever be called.
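To illustrate (a quick snippet of my own, not taken from the question):
// using java.util.function.Function
Function<Integer, Integer> doubleIt = x -> x * 2;
Function<Integer, Integer> addOne = x -> x + 1;

// andThen is a default method built on top of the abstract apply(...)
Function<Integer, Integer> doubleThenAddOne = doubleIt.andThen(addOne);
System.out.println(doubleThenAddOne.apply(3)); // prints 7: (3 * 2) + 1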
How can you achieve DRY without breaking 'the rules'?
Use an abstract class!
Abstract classes have been poo-pooed a lot in discussions about good Java practice, because they were frequently misunderstood, misused and abused. It wouldn't surprise me if at least some programming best-practice guides like SOLID were published in reaction to this misuse. A very frequent issue I've seen is an abstract class providing a "default" implementation for tons of methods that is then overridden almost everywhere, often by copy-pasting the base implementation and changing one or two lines. Essentially, this breaks my third condition on default methods above (which also applies to any method on an intended-to-be-subclassed type), and it happens A LOT.
However, in this scenario, abstract classes are probably just what you need.
Something like this:
interface IUser {
    // Add all methods here intended to be CALLED by code that holds
    // instances of IUser
    // e.g.:
    void handleUser();
    String getID();

    // If some methods only make sense for particular types of user,
    // they shouldn't be added.
    // e.g.:
    // NOT void addBook();
    // NOT void addNewUser();
}

abstract class AbstractUser implements IUser {
    // Add methods and fields here that will be USEFUL to most or
    // all implementations of IUser.
    //
    // Nothing should be public, unless it's an implementation of
    // one of the abstract methods defined on IUser.
    //
    // e.g.:
    protected void closeSession() { /* etc... */ }
}

abstract class AbstractRecordReadingUser extends AbstractUser {
    // Add methods here that are only USEFUL to a subset of
    // implementations of IUser.
    //
    // e.g.:
    protected void readRecords() { /* etc... */ }
}

final class Admin extends AbstractUser {
    @Override
    public void handleUser() {
        // etc...
        closeSession();
    }

    public void addNewUser() { /* etc... */ }
}

final class Editor extends AbstractRecordReadingUser {
    @Override
    public void handleUser() {
        // etc...
        readRecords();
        // etc...
        closeSession();
    }

    public void addBook() { /* etc... */ }
}

final class Viewer extends AbstractRecordReadingUser {
    @Override
    public void handleUser() {
        // etc...
        readRecords();
        // etc...
        closeSession();
    }
}
Note: Depending on your situation, there may be better alternatives to abstract classes that still achieve DRY:
If your common helper methods are stateless (i.e. don't depend on fields in the class), you could use an auxiliary class of static helper methods instead (see here for an example).
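As a rough sketch of that auxiliary-class idea (the class name and method bodies are invented for illustration):
final class UserSupport {
    private UserSupport() {} // no instances; just stateless helpers

    static void closeSession() {
        System.out.println("Client Left");
    }

    static void readRecords() {
        System.out.println("User requested to read records...");
    }
}

// implementers simply call UserSupport.readRecords() where needed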
You might wish to use composition instead of abstract class inheritance. For example, instead of creating the AbstractRecordReadingUser as above, you could have:
final class RecordReader {
    // Fields relevant to the readRecords() method
    public void readRecords() { /* etc... */ }
}

final class Editor extends AbstractUser {
    private final RecordReader r = new RecordReader();

    @Override
    public void handleUser() {
        // etc...
        r.readRecords();
        // etc...
    }
}
// Similar for Viewer
This avoids the problem that Java doesn't allow multiple inheritance, which would become an issue if you tried to have multiple abstract classes containing different pieces of optional functionality and some final classes needed to use several of them. However, depending on what state (i.e. fields) the readRecords() method needs to interact with, it might not be possible to separate it out into a separate class cleanly.
You could just put your readRecords() method in AbstractUser and avoid having the additional abstract class. The Admin class isn't obliged to call it, and as long as the method is protected, there's no risk that anyone else will call it (assuming you have your packages properly separated). This doesn't violate ISP as even though Admin can interact with readRecords(), it isn't forced to. It can pretend that method doesn't exist, and everyone is fine!
I believe this is a violation of ISP. But you don't have to strictly follow all the SOLID principles, as that can complicate development.

Is this a correct implementation of the strategy pattern (JAVA, Fly method of ducks)?

I just want a quick lookover that I implemented the different fly strategies correctly.
The program simply consists of a duck class that uses an interface for its fly method. The interface has different implementations (namely SimpleFly and NoFly), and a switch statement chooses the correct implementation based on the SPECIE enum.
As I understand, the strategy pattern is meant to avoid duplicate code between child classes at the same level, which decreases maintainability and extensibility. So instead we abstract out the related algorithms to interfaces and choose them as needed.
CODE:
package DesignPatterns;

//Here we will separate the fly method of ducks into implementations of an interface and then instantiate ducks with those attributes
interface IFly {
    public void fly();
}

//These are called strategies
class SimpleFly implements IFly {
    @Override
    public void fly() {
        System.out.println("Quack Quack i am flying in the air");
    }
}

class NoFly implements IFly {
    @Override
    public void fly() {
        System.out.println("I cannot fly.");
    }
}

//Now the base class just has to implement one of these strategies
class Duck {
    private IFly flyType;

    public enum SPECIE {
        WILD, CITY, RUBBER
    }

    public Duck(SPECIE specie) {
        switch (specie) {
            //Here just select the algorithms you want to assign to each type of DUCK. More flexible than horizontal code between species.
            case WILD:
            case CITY:
                this.flyType = new SimpleFly();
                break;
            case RUBBER:
                this.flyType = new NoFly();
                break;
            default:
                //If a new enum is defined but no definition yet, this stops code from breaking
                this.flyType = new SimpleFly();
        }
    }

    public void fly() {
        flyType.fly();
    }
}
The output is correct as in this example:
Duck rubberDuck = new Duck(Duck.SPECIE.RUBBER);
Duck normalDuck = new Duck(Duck.SPECIE.WILD);
rubberDuck.fly();
normalDuck.fly();
Yields:
I cannot fly.
Quack Quack i am flying in the air
Thank you in advance and please let me know about any gaps in my knowledge,
Sshawarma
I would point out a couple issues of semantics and terminology.
It's confusing to have a method named fly that can be implemented as not flying. Naming the method tryToFly or documenting the method as merely an attempt are two ways of addressing this confusion. The software principle to reference here is Liskov Substitution.
The base class does not implement one of the strategies; rather, it composes a strategy. The purpose of the Strategy pattern is to avoid subclassing through composition.
To reiterate one of the comments, Duck should accept an instance of IFly directly in its constructor (or a setter method) rather than switching on an enum. Another goal of the Strategy pattern is to avoid branching logic.
The essence of the pattern is that you've avoided creating multiple subclasses of Duck by instead creating multiple implementations of IFly. This has the advantage that those IFly implementations can be reused without a complex inheritance hierarchy, e.g. WILD and CITY can share one strategy.
As mentioned in the comments, strategies also have the advantage that a Duck could change its strategy at runtime. For example, IFly might be implemented by Soar and Glide so that a Duck would switch between these different strategies depending on the wind.
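Putting those comments together, here is a sketch of injecting the strategy directly and allowing it to change at runtime (the setter name is my own; Soar and Glide are left out for brevity):
class Duck {
    private IFly flyBehavior;

    public Duck(IFly flyBehavior) {
        this.flyBehavior = flyBehavior; // no switch on an enum needed
    }

    public void setFlyBehavior(IFly flyBehavior) {
        this.flyBehavior = flyBehavior; // swap strategies at runtime
    }

    public void fly() {
        flyBehavior.fly();
    }
}

// WILD and CITY ducks can share the same strategy instance:
IFly simpleFly = new SimpleFly();
Duck wildDuck = new Duck(simpleFly);
Duck cityDuck = new Duck(simpleFly);
Duck rubberDuck = new Duck(new NoFly());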

Encapsulating Attributes in Management Objects and Interfaces

I am considering encapsulating certain "not very often accessed" attributes and functionality into their own "config" and "extended" objects within the data structure, so that I can offer user-defined callback functions an object whose type only gives access to the most commonly used functions and attributes, while offering a "getExtended" method that returns the same object under another type that exposes the uncommonly used functions.
This idea is mostly based on having a slim, auto-completion-friendly list of functions, so that development with IDEs like Eclipse flows more smoothly when writing code against the most commonly used methods of the offered object, without having to filter out methods that are mostly used for one-time configuration at a very specific place in the code.
Am I falling into an obvious anti-pattern trap here, or is this actually a good way to lay out the structure of an easy-to-use lib?
One way to do this is to use interfaces. Interfaces can be extended through inheritance just like classes. Your collection could reference a base interface, say ISimple like below. Your getExtended() could return the IAdvanced interface. Both could be implemented by the same object (if they share identity and purpose), or by different objects. The decision of whether to implement together or not should really be based on the Single Responsibility Principle. Here is some sample code:
interface ISimple {
    IAdvanced getAdvanced();
    int getLength();
    String getName();
}

interface IAdvanced extends ISimple {
    void verifyAllTheThings();
}

class Implementation implements IAdvanced {
    public IAdvanced getAdvanced() { return this; }

    // ISimple
    public int getLength() { return 2; }
    public String getName() { return "something"; }

    // IAdvanced
    public void verifyAllTheThings() { /* do stuff */ }
}
I think you really are asking whether this is a bad pattern or not. In and of itself it is not a bad pattern (IMHO), but there is a different design problem implied by your question. If your motivation for being friendly with IDEs is that there are a huge number of methods on each object, then that is possibly a questionable design. Again, the Single Responsibility Principle is a good guide to tell you if your object is doing too much and should be split apart into separate concerns. If so, then a simple/extended split is a possibly weak way to divide up the set of all methods, and instead you might want to consider breaking up the object along more conceptual lines.
Duane already got to the point, but sadly missed the mark a little with how the interface is used. An interface can be inherited, but it doesn't always need to be. And yes, using an interface to limit method accessibility is the right way to do it.
As an example, you can actually do:
interface ITwoDimensional {
    int getWidth();
    int getLength();
}

interface IThreeDimensional extends ITwoDimensional {
    int getHeight();
}

interface ISpherical {
    int getRadius();
}

class Consumer {
    int calculateArea(ITwoDimensional a) {
        // you can only access getLength() and getWidth();
        return 0; // placeholder so the sketch compiles
    }

    int calculateArea(ISpherical a) {
        // you can only access getRadius();
        return 0; // placeholder so the sketch compiles
    }

    int calculateArea(IThreeDimensional a) {
        // you can access getLength(), getWidth() and getHeight();
        return 0; // placeholder so the sketch compiles
    }
}
That is only a basic example. There are many more designs available using interface access.

Design pattern to add attributes to objects dynamically

Consider that we have a Car object. The acceleration and braking features are implemented using the strategy pattern. But what if we want to introduce a nitro gas feature to an existing car object? What design pattern can I use?
I want to add the nitro feature (attribute) after creating the car object.
You can check the Decorator pattern; it can be used to dynamically add functionality to an existing object.
The Decorator pattern can add different functionalities to objects dynamically, but these functionalities have to be implemented in a concrete decorator. The developer can decide what functionalities to add at run time.
In statically typed languages you cannot add methods to an object at runtime. The compiler, when it encounters a statement like car.nitroAccelerate(), checks whether the car object implements any interface that has a nitroAccelerate method. If you could add (or remove) methods at runtime, such checks would be impossible.
Dynamic languages allow adding methods at runtime. But this has a drawback: when you put car.nitroAccelerate() in the code, you need to carefully analyze whether the car object at that point actually has such a method.
You can use a decorator to modify existing behavior at runtime, but in doing so you are not modifying the existing object, just creating a new one that wraps the old one.
So if you do something like:
Car fasterCar = new CarWithNitro(car);
and some piece of your code still holds a reference to the original car, this original car would not be faster, because the act of wrapping does not modify the original.
If you want to add new methods, you need to create a new subclass and/or use delegation. This will be necessary if the "nitro" feature requires an explicit method call to activate.
If however all you want to do is add to existing functionality without adding methods, Decorator is a good bet. Let's say that the interface "Car" has a method called floorIt(). In that case, you can add a "nitro kick" to floorIt with Decorator without having to add to the Car interface.
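A sketch of that idea, assuming a Car interface that declares floorIt() (the class name echoes the CarWithNitro used above; everything else is illustrative):
class CarWithNitro implements Car {
    private final Car wrapped;

    CarWithNitro(Car wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public void floorIt() {
        wrapped.floorIt();                                // existing behavior
        System.out.println("...and the nitro kicks in");  // decorated extra
    }
}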
Of course, there's a middle ground. If you use runtime type discovery and/or multiple interfaces, you can both use Decorator and add methods to the resulting object.
I'll show you two methods of creating it; one is neither good nor bad and the other one is better.
Method 1
You can use generics and an interface for the new Feature. The advantage is that you don't need to wrap the object in a new object just to add a new field or attribute.
Create a Feature interface.
public interface Feature<T> {
    T getConcreteClass();
    void setConcreteClass(T concreteClass);
}
Add it to the concrete class.
public class Car<ReturnType> {
    //acceleration and braking features are added using the strategy pattern
    public Feature<ReturnType> newFeature;

    public Car(Feature<ReturnType> features) {
        this.newFeature = features;
    }

    public Feature<ReturnType> getNewFeature() {
        return newFeature;
    }

    public void setNewFeature(Feature<ReturnType> newFeature) {
        this.newFeature = newFeature;
    }
}
Problem: The problem is that the new feature can be NULL. It also breaks the Single Responsibility Principle, because the car takes on the responsibility of adding the feature. It would also propagate more generics into some abstract classes later on.
Car<Void> car1 = new Car<>(null);
car1.getNewFeature(); //catch Exception here!
And you also have to catch NullPointerException for that. Even with the Visitor design pattern in mind, its accept(Visitor v) method is always there even if there's no visitor yet.
Advantage: So, is there anything good about it? Yes, there is!
Now, we're gonna create an AbstractFeature class first.
public abstract class AbstractFeature<T> implements Feature<T> {
    protected T concreteClass;

    @Override
    public T getConcreteClass() {
        return concreteClass;
    }

    @Override
    public void setConcreteClass(T concreteClass) {
        this.concreteClass = concreteClass;
    }
}
Now, we're gonna create whatever kind of new Feature class we want. Let's say we wanna add NitrogenGas to the car. Let's create it first. Note: It's a free-standing class. You can add as many fields as you want here, and we'll call this class and use it later.
public class NitrogenGas {
    //some methods
    public String fillGas() {
        return "Filling NitrogenGas";
    }
}
And let's add a feature like this! Just extend the AbstractFeature class; that's the reusability.
public class NitrogenGasFeature extends AbstractFeature<NitrogenGas> {
}
Now you can add a new Feature to the Car class like this! The NitrogenGasFeature class holds the concrete class we want to use.
Car<NitrogenGas> car2 = new Car<>(new NitrogenGasFeature()); //Feature added
car2.getNewFeature().setConcreteClass(new NitrogenGas()); //Concrete Class added
car2.getNewFeature().getConcreteClass().fillGas(); //use its method.
Let's try using a wrapper class like Boolean, in case you just need it for backtracking or something. (BooleanFeature here would simply be another subclass: class BooleanFeature extends AbstractFeature<Boolean> {}.)
Car<Boolean> car2 = new Car<>(new BooleanFeature());
car2.getNewFeature().setConcreteClass(true);
System.out.println(car2.getNewFeature().getConcreteClass().booleanValue());
FYI, we can create BacktrackingFeature and Backtracking classes for readability, but that will add more code, so use it wisely.
This is method 1. It breaks the Single Responsibility Principle and adds more dependencies and more code, but it increases reusability and readability. Sometimes, when you think hard about the code, even inheritance breaks the Single Responsibility Principle, and that's why we use composition with the interface. Rules are meant to be broken for the greater good.
Method 2: the DTO pattern. You can do it like the answer above: just create a new object and wrap it, adding any fields or attributes you want to use. That's the better way. It doesn't break the Single Responsibility Principle, and it is more meaningful in an OOP way. But it's your decision to make.
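A minimal sketch of that wrapping idea (assuming a plain, non-generic Car type; the class and field names are made up for illustration):
public class CarWithNitroDto {
    private final Car car;          // the existing object, untouched
    private final int nitroCharges; // the newly added attribute

    public CarWithNitroDto(Car car, int nitroCharges) {
        this.car = car;
        this.nitroCharges = nitroCharges;
    }

    public Car getCar() { return car; }
    public int getNitroCharges() { return nitroCharges; }
}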

Properly designing Java class hierarchies for code sharing and encapsulation

When laying out a class hierarchy, I often find myself frustrated at the gap between being able to encapsulate functionality while also sharing code. Part of the problem, of course, is lack of multiple inheritance, but interfaces help somewhat. The inability to define protected methods on interfaces seems to me to be the bigger issue.
The standard solution seems to be to have a public interface that is implemented by a protected abstract base class. The problem is when we have the following
public interface Foo {
    public String getName();
}

// package-private base class (a top-level class cannot be protected in Java)
abstract class BaseFoo implements Foo {
    abstract protected int getId();

    private String name;

    protected BaseFoo(String name) {
        this.name = name;
    }

    @Override
    public String getName() {
        return this.name;
    }
}
public class ConcreteFoo extends BaseFoo {
    public ConcreteFoo(String name) {
        super(name);
    }

    @Override
    protected int getId() {
        return 4; // chosen by fair dice roll.
                  // guaranteed to be random.
    }
}
// in the foo package with the classes above
public class FooCollection {
    private static Map<Integer, Foo> foos = new HashMap<>();

    public static void add(Foo foo) {
        synchronized (foos) {
            foos.put(foo.getId(), foo); // won't compile: getId() is not part of the Foo interface
        }
    }
}
// client code, not in the foo package
FooCollection.add(new ConcreteFoo("hello world"));
That is, we return one of our nicely-encapsulated objects to the caller, but then any method that gets that object back needs to be able to rely on some internal functionality. That internal functionality cannot be part of the interface (that would break encapsulation), but making it part of an abstract base class requires us to use casting.
We cannot make Foo an abstract class because other interfaces need to extend it to add optional, orthogonal functionality to a more complex hierarchy than is displayed here.
What are the standard approaches to this problem? Do you add getId to the Foo interface, even though clients shouldn't use it? Do you perform an unsafe cast to BaseFoo in FooCollection.add? If you check before casting, what do you do when the types don't match, even though they always should for all intents and purposes?
Any information you have on best practices in this sort of situation would be very helpful.
Edit: In case it's not clear, this example is intentionally oversimplified. The key point is that sometimes you return an "interface view" of an object. When that "interface view" is passed back in to a package-specific class, the method it is passed to will likely need to use internal functionality in its implementation. How does one manage that mismatch between internal and public functionality?
Okay, here's a couple of points:
Contrary to popular opinion, inheritance really isn't about sharing code. What you create in an inheritance hierarchy is an organization of things that share some common set of abstract behaviors; it just works out sometimes to have the effect of reusing some code.
The fashion has changed quite a bit in the last few years, so that deep and complicated inheritance hierarchies are no longer considered good form. In general, in Java you should:
use aggregation before implementing an interface
use interfaces to express "mix-in" contracts
use inheritance only if the classes describe something that has natural inheritance.
If you really want the effect of multiple inheritance, build implementation classes for your interfaces, and aggregate them.
In particular, by defining your classes with interfaces and implementation classes, you make building tests much easier; if your interface is separate, it's almost trivial to build a mock for that interface.
I don't know about "best" practices, but here are a couple of ideas.
Interfaces are supposed to separate "what is to be done" from "how something is to be done". I don't think getters and setters belong in interfaces. I try to give them more meaningful signatures.
In your case, I see nothing wrong with two interfaces:
public interface Nameable {
    String getName();
}

public interface Identifiable {
    int getId();
}
Separate the two; force clients to implement only the ones they need. Your decision to make id part of the abstract class is arbitrary. Separating it out and making it explicit can be helpful.
Casting loses all benefit of polymorphism. I don't think that it should be abandoned lightly. If you must move getId() up to the interface, do so. If you can avoid it by different choices, do so.
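For example, with the two interfaces split out, FooCollection could declare exactly the capabilities it needs and avoid the cast entirely. This is a sketch of my own using a generic intersection bound, assuming the concrete classes implement both interfaces:
import java.util.HashMap;
import java.util.Map;

public class FooCollection {
    private static final Map<Integer, Nameable> foos = new HashMap<>();

    // accept anything that is both nameable and identifiable; no cast needed
    public static <T extends Nameable & Identifiable> void add(T foo) {
        synchronized (foos) {
            foos.put(foo.getId(), foo);
        }
    }
}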
"Best" depends on your context. Your simple example might not be true in all cases.
