I have two processes, and for each process I receive a different Record object that I need to validate. This means I cannot use a single validator, because I have to validate different fields for each process.
For processA, I am using the ValidatorA class to validate its Record object.
For processB, I am using the ValidatorB class to validate its Record object.
If the record is valid, I move forward; otherwise I don't. Below is my consumer code, which is shared by both A and B.
public class ProcessConsumer implements Runnable {
    private static final Logger logger = Logger.getInstance(ProcessConsumer.class);
    private final String processName;
    private final Validator validator;
    private final RecordProcessor<byte[], byte[]> process;

    public ProcessConsumer(String processName, Validator validator) {
        this.processName = processName;
        this.validator = validator;
        this.process = new RecordProcessor<>();
    }

    @Override
    public void run() {
        try {
            process.subscribe(getTopicsBasedOnProcessName(processName));
            ....
            while (true) {
                ConsumerRecords<byte[], byte[]> crs = process.poll(2000);
                for (ConsumerRecord<byte[], byte[]> cr : crs) {
                    // record object will be different for both of my processes
                    Record record = decoder.decode(cr.value());
                    Optional<DataHolder> validatedHolder = validator.getDataHolder(processName, record);
                    if (!validatedHolder.isPresent()) {
                        logger.logError("records dropped. payload= ", record);
                        continue;
                    }
                    // send validatedHolder to the processor class
                    Processor.getInstance().execute(validatedHolder);
                }
            }
        } catch (Exception ex) {
            logger.logError("error= ", ExceptionUtils.getStackTrace(ex));
        }
    }
}
Below is my ValidatorA class, in which I validate a few fields on the record object and, if it is valid, return a DataHolder.
public class ValidatorA extends Validator {
    private static final Logger logger = Logger.getInstance(ValidatorA.class);

    @Override
    public Optional<DataHolder> getDataHolder(String processName, Record record) {
        Optional<DataHolder> dataHolder = Optional.absent();
        if (isValid(processName, record))
            dataHolder = Optional.of(buildDataHolder(processName, record));
        return dataHolder;
    }

    private boolean isValid(String processName, Record record) {
        return isValidClientIdDeviceId(processName, record) && isValidPayId(processName, record)
                && isValidHolder(processName, record);
    }

    private DataHolder buildDataHolder(String processName, Record record) {
        Map<String, String> holder = (Map<String, String>) DataUtils.extract(record, "holder");
        String deviceId = (String) DataUtils.extract(record, "deviceId");
        Integer payId = (Integer) DataUtils.extract(record, "payId");
        String clientId = (String) DataUtils.extract(record, "clientId");
        // add mandatory fields in the holder map after getting other fields
        holder.put("isClientId", (clientId == null) ? "false" : "true");
        holder.put("isDeviceId", (clientId == null) ? "true" : "false");
        holder.put("abc", (clientId == null) ? deviceId : clientId);
        return new DataHolder.Builder(record).setClientId(clientId).setDeviceId(deviceId)
                .setPayId(String.valueOf(payId)).setHolder(holder).build();
    }

    private boolean isValidHolder(String processName, Record record) {
        Map<String, String> holder = (Map<String, String>) DataUtils.extract(record, "holder");
        if (MapUtils.isEmpty(holder)) {
            logger.logError("invalid holder is coming.");
            return false;
        }
        return true;
    }

    private boolean isValidPayId(String processName, Record record) {
        Integer payId = (Integer) DataUtils.extract(record, "payId");
        if (payId == null) {
            logger.logError("invalid payId is coming.");
            return false;
        }
        return true;
    }

    private boolean isValidClientIdDeviceId(String processName, Record record) {
        String deviceId = (String) DataUtils.extract(record, "deviceId");
        String clientId = (String) DataUtils.extract(record, "clientId");
        if (Strings.isNullOrEmpty(clientId) && Strings.isNullOrEmpty(deviceId)) {
            logger.logError("invalid clientId and deviceId is coming.");
            return false;
        }
        return true;
    }
}
And below is my ValidatorB class, in which I validate a few different fields (compared to ValidatorA) on the record object and, if it is valid, return a DataHolder.
public class ValidatorB extends Validator {
    private static final Logger logger = Logger.getInstance(ValidatorB.class);

    @Override
    public Optional<DataHolder> getDataHolder(String processName, Record record) {
        Optional<DataHolder> dataHolder = Optional.absent();
        if (isValid(processName, record))
            dataHolder = Optional.of(buildDataHolder(processName, record));
        return dataHolder;
    }

    private boolean isValid(String processName, Record record) {
        return isValidType(processName, record) && isValidDatumId(processName, record) && isValidItemId(processName, record);
    }

    private DataHolder buildDataHolder(String processName, Record record) {
        String type = (String) DataUtils.extract(record, "type");
        String datumId = (String) DataUtils.extract(record, "datumId");
        String itemId = (String) DataUtils.extract(record, "itemId");
        return new DataHolder.Builder(record).setType(type).setDatumId(datumId)
                .setItemId(itemId).build();
    }

    private boolean isValidType(String processName, Record record) {
        String type = (String) DataUtils.extract(record, "type");
        if (Strings.isNullOrEmpty(type)) {
            logger.logError("invalid type is coming.");
            return false;
        }
        return true;
    }

    private boolean isValidDatumId(String processName, Record record) {
        String datumId = (String) DataUtils.extract(record, "datumId");
        if (Strings.isNullOrEmpty(datumId)) {
            logger.logError("invalid datumId is coming.");
            return false;
        }
        return true;
    }

    private boolean isValidItemId(String processName, Record record) {
        String itemId = (String) DataUtils.extract(record, "itemId");
        if (Strings.isNullOrEmpty(itemId)) {
            logger.logError("invalid itemId is coming.");
            return false;
        }
        return true;
    }
}
And below is my abstract class:
public abstract class Validator {
    public abstract Optional<DataHolder> getDataHolder(String processName, Record record);
}
Question:
This is how I am calling it for both of my processes. As you can see, I am passing the processName and its particular validator in the constructor arguments.
ProcessConsumer processA = new ProcessConsumer("processA", new ValidatorA());
ProcessConsumer processB = new ProcessConsumer("processB", new ValidatorB());
Is this a good design, where I pass each process its validator?
Is there any way to avoid passing that, and instead figure out internally which validator to use based on the processName? I already have an enum with all my process names. I need to make this design extensible, so that if I add a new process in the future it scales.
Also, is the way I have my abstract class Validator right? It doesn't look like it is doing anything useful.
Each of my validators basically checks whether the record object is valid or not. If it is valid, it builds a DataHolder and returns it; otherwise it returns Optional.absent().
I saw this post where they talked about using the Decorator pattern, but I am not sure how that would help in this case.
First, when I see the declaration and the implementations of them:
public abstract class Validator {
    public abstract Optional<DataHolder> getDataHolder(String processName, Record record);
}
I don't think "Validator" is the best term. Your validators are not only validators: the main function of what you call validators is to extract data for a specific process. The extraction requires validation, but it is not the main function.
The main function of a validator is validating.
So I think you could call them something like ProcessDataExtractor.
ProcessConsumer processA = new ProcessConsumer("processA", new ValidatorA());
ProcessConsumer processB = new ProcessConsumer("processB", new ValidatorB());
Is this a good design, where I pass each process its validator? Is there any way to avoid passing that, and instead figure out internally which validator to use based on the processName? I already have an enum with all my process names. I need to make this design extensible, so that if I add a new process in the future it scales.
Scalability is another thing.
Having an extensible design broadly means having a design which doesn't require important or risky modifications whenever a new "normal" requirement appears in the life of the application.
If a new process consumer is added, you have to add a ProcessDataExtractor according to your needs, and the client should be aware of this new process consumer.
If the client code instantiates its process consumer and its data extractor at compile time, using an enum and a map to represent process consumers and data extractors doesn't make your design non-extensible, since adding a process requires very little modification and the changes are isolated.
If you want even less modification in your code, you could instantiate the extractors by reflection and use a naming convention to retrieve them: for example, always place them in the same package and name each extractor with the same prefix, such as ProcessDataExtractorXXX, where XXX is the variable part.
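A minimal sketch of that convention-based lookup, assuming (purely as an example) that the extractors live in a package such as com.example.extractors and each exposes a no-argument constructor:

// Hypothetical convention-based lookup; the package name and suffixes are assumptions.
public final class ProcessDataExtractors {

    private static final String PACKAGE = "com.example.extractors";
    private static final String PREFIX = "ProcessDataExtractor";

    private ProcessDataExtractors() {
    }

    // e.g. suffix "A" resolves to com.example.extractors.ProcessDataExtractorA
    public static ProcessDataExtractor forSuffix(String suffix) {
        try {
            Class<?> clazz = Class.forName(PACKAGE + "." + PREFIX + suffix);
            return (ProcessDataExtractor) clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("No extractor found for suffix " + suffix, e);
        }
    }
}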
The problem with this solution is at compile time: clients don't necessarily know which ProcessDataExtractors are available.
If you want the addition of a new process consumer and extractor to be dynamic, that is, to happen during the runtime of the application so that clients may also discover them at runtime, that is another subject, I think.
At compile time, the design could be better, because so far a client of the ProcessConsumer and ProcessDataExtractor classes may misuse them (that is, use process A with ProcessDataExtractor B).
To avoid that, you have several options.
But you have guessed the idea: do the initialization and the mapping between ProcessConsumer and ProcessDataExtractor in a dedicated place and in a protected way.
To achieve it, I advise you to introduce an interface for ProcessConsumer which provides only the functional method from Runnable:
public interface IProcessConsumer extends Runnable {
}
From now on, clients who want to consume a process should only use this interface to perform their task. We don't want to provide the client with a method or constructor to choose its data extractor.
To do that, the concrete ProcessConsumer class should be a private inner class, so that clients cannot instantiate it directly.
They will have to use a factory to address this need.
In this way, clients are able to create the specific process consumer with the required data extractor by requesting a factory of processes, which is responsible for ensuring the consistency between a data extractor and a process, and which also guarantees the instantiation of a new process consumer at each call (your processes are stateful, so you have to create a new process consumer for each process consumer you start).
Here is the ProcessConsumerFactory class:
import java.util.HashMap;
import java.util.Map;

public class ProcessConsumerFactory {

    public enum ProcessType {
        A("processA"), B("processB");

        private String name;

        ProcessType(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }

    private class ProcessConsumer implements IProcessConsumer {
        private final ProcessType processType;
        private final ProcessDataExtractor extractor;
        private final RecordProcessor<byte[], byte[]> process;

        public ProcessConsumer(ProcessType processType, ProcessDataExtractor extractor) {
            this.processType = processType;
            this.extractor = extractor;
            this.process = new RecordProcessor<>();
        }

        @Override
        public void run() {
            // your implementation...
        }
    }

    private static ProcessConsumerFactory instance = new ProcessConsumerFactory();

    private Map<ProcessType, ProcessDataExtractor> extractorsByProcessName;

    private ProcessConsumerFactory() {
        extractorsByProcessName = new HashMap<>();
        extractorsByProcessName.put(ProcessType.A, new ProcessDataExtractorA());
        extractorsByProcessName.put(ProcessType.B, new ProcessDataExtractorB());
        // add a new element in the map to add a new mapping
    }

    public static ProcessConsumerFactory getInstance() {
        return instance;
    }

    public IProcessConsumer createNewProcessConsumer(ProcessType processType) {
        ProcessDataExtractor extractor = extractorsByProcessName.get(processType);
        if (extractor == null) {
            throw new IllegalArgumentException("processType " + processType + " not recognized");
        }
        IProcessConsumer processConsumer = new ProcessConsumer(processType, extractor);
        return processConsumer;
    }
}
Now the clients of the process consumer classes can instantiate them like this:
IProcessConsumer processConsumer = ProcessConsumerFactory.getInstance().createNewProcessConsumer(ProcessType.A);
Also, is the way I have my abstract class Validator right? It doesn't look like it is doing anything useful.
You use an abstract class for the validators, but for the moment you have not moved any common behavior into it, so you could use an interface instead:
public interface ProcessDataExtractor {
    Optional<DataHolder> getDataHolder(ProcessType processType, Record record);
}
You could introduce the abstract class later if it becomes suitable.
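For illustration, a concrete extractor then looks much like your current validators, just implementing the interface. A minimal sketch, reusing only the "type" check from your ValidatorB so it stays self-contained:

// Minimal sketch of a concrete extractor; the validation shown is an example only.
public class ProcessDataExtractorB implements ProcessDataExtractor {

    private static final Logger logger = Logger.getInstance(ProcessDataExtractorB.class);

    @Override
    public Optional<DataHolder> getDataHolder(ProcessType processType, Record record) {
        String type = (String) DataUtils.extract(record, "type");
        if (Strings.isNullOrEmpty(type)) {
            logger.logError("invalid type is coming.");
            return Optional.absent();
        }
        return Optional.of(new DataHolder.Builder(record).setType(type).build());
    }
}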
There are a few problems with your design:
Catch invalid data as early as possible.
Post-construction is not the right way. Once an object, in this case Record, is constructed, it should have a valid state. This means your validation should be performed prior to constructing Record:
Get data from somewhere: from the web, a database, a text file, user input or whatever.
Validate the data.
Construct the Record object. At this point, either the Record object has valid state, or it fails construction, for example by raising an exception.
Now, if the source from which you get the data contains mostly invalid data, that should be dealt with there, because it is a problem in itself. If the source is right but reading or getting the data has problems, that should be solved first.
Assuming the above issues, if they exist, are solved, the sequence of the program should be something like this:
// Get the data from some source

// Perform validation on the data. This is generic validation, like validation
// of data read from an input form etc.
validate deviceId
validate payId
validate clientId
...
if invalid do something
else if valid proceed

// construct the Record object
Record record = new Record(deviceId, payId, clientId, ...)
// At this point record has valid data
public class Record {
    deviceId
    payId
    clientId

    Record(deviceId, payId, clientId, ...) {
        // Perform business rule validation, pertaining to Record's requirements.
        // For example, if deviceId must be in certain range etc.
        // if data is valid, perform construction.
        // else if invalid, don't construct. throw exception or something
        // to deal with invalid condition
    }
}
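A minimal concrete sketch of that idea in Java, assuming the three fields keep the types used in the question; the concrete rules below are placeholders only:

// Sketch: Record validates its own state at construction time.
public final class Record {

    private final String deviceId;
    private final Integer payId;
    private final String clientId;

    public Record(String deviceId, Integer payId, String clientId) {
        // Business-rule validation; the specific rules are examples, not the real requirements.
        if ((deviceId == null || deviceId.isEmpty()) && (clientId == null || clientId.isEmpty())) {
            throw new IllegalArgumentException("Either deviceId or clientId must be present");
        }
        if (payId == null) {
            throw new IllegalArgumentException("payId must not be null");
        }
        this.deviceId = deviceId;
        this.payId = payId;
        this.clientId = clientId;
    }

    // Record exposes its own attributes instead of relying on an external utils class.
    public String getDeviceId() { return deviceId; }
    public Integer getPayId() { return payId; }
    public String getClientId() { return clientId; }
}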
Another problem is that you use some utils class to extract internal data from Record. That is not right either: Record itself should provide getters to its attributes. Right now, what is related to Record is scattered between Record, Utils, and Validator.
I think your code needs a thorough re-evaluation. I suggest you take a pause and start again, but this time at a higher level. Start designing without code, for example with some sort of diagramming. Start with only boxes and arrows (something like a class diagram, but you don't need a UML tool; pencil and paper will do). Decide what should go where. Things like:
What are the entities you are dealing with?
What properties does each entity have: attributes, methods, etc.?
What is the relationship among these entities?
Consider the sequence in which these entities are used or interact, then keep refining it.
Without this high-level view, dealing with the design question at the code level is difficult and almost always gives bad results.
Once you have dealt with the design at a higher level, putting it into code is much easier. Of course you can refine it at that level too, but the high-level structure should be considered first.
Related
Since I'm a newbie, I would like to know if there is a better way to code this.
Let's say we have a Spring Batch job where we have a downloader/processor/mapper/writer for every type of file we receive, since we have customized logic for each file type: X mappers and X processors for X file types.
Currently I am looking into templatizing the code so that not many changes are required when a new type is introduced. Below is my idea. Take the mapper, for example: we have different objects for different file types, and all of them will be converted to an object of class CustomObject as below. The mapper bean in a sample Spring context:
<bean id="file1Mapper" class="com.filemapper.file1Mapper"/>
and it invokes the file1Mapper class, which has the mapping logic. The same goes for the other files.
This is what I came up with to avoid all those file1Mapper, file2Mapper, ...: one generic mapper which does it all together. But I am looking for better solutions.
public class GMapper {
    public <T> CustomObject map(T item) {
        CustomObject customObject = new CustomObject()
                .WithABCDetails(getABCDetails(item));
        return customObject;
    }

    private <T> ABCDetails getABCDetails(T item) {
        ABCDetails details = new ABCDetails();
        if (item instanceof A) {
            A a = (A) item;
            // read a and map it to the ABCDetails object
        }
        if (item instanceof B) {
            B b = (B) item;
            // read b and map it to the ABCDetails object
        }
        ...
        ...
        // repeat this if block for mapping all file types.
        return details;
    }
}
Sample JSON classes:
class ABCDetails {
    // JsonProperty
    Object1 ob1;
    Object2 ob2;
    Integer d;
}

class Object1 {
    // JsonProperty
    Object3 ob3;
    String abc;
    String def;
}

class Object2 {
    // JsonProperty
    String ab;
    Integer e;
}

class A {
    // JsonProperty
    String e;
    String d; // e.g. this is mapped to Object2's String "ab"
}
This doesn't look very professional and I believe there might be better ways to do it. Can someone please share an example or explanation of how this code can be made better? I am also reading about functional interfaces to see if they could help.
Thanks in advance.
It is impossible to understand exactly what you need, so I will give some common advice.
Format your code - use tabs/spaces to indent.
Do not put capital letters together - replace ABCDetails with AbcDetails. No one cares how the real-world name looks.
Do not write meaningless comments - say no to // JsonProperty.
Name variables so that someone can understand what they are supposed to store - avoid {Object1 ob1; Object2 ob2; Integer d;}.
Do not write if ... else if ... else if ... or case when ... since this scales badly. Use a Map. Examples below.
And a general solution to your problem: use a plugin architecture - the best thing (and maybe the only thing) that OOP can offer. Just make all your processors implement a common interface, and to work with the plugins use the dispatcher pattern.
First create all processors.
public interface FileProcessor {
    String extension();
    void process(String filename);
}

@Component
public final class CsvFileProcessor implements FileProcessor {
    public String extension() {
        return "csv";
    }

    public void process(String filename) {
        /* do what you need with csv */
    }
}

@Component
public final class JsonFileProcessor implements FileProcessor {
    public String extension() {
        return "json";
    }

    public void process(String filename) {
        /* do what you need with json */
    }
}
Then inject them into your dispatcher. Do not forget to handle errors; for example, some files may not have a suffix, and for some files you will not have a processor, etc.
@Component
public final class FileDispatcher {
    private final Map<String, FileProcessor> processorByExtension;

    @Autowired
    public FileDispatcher(List<FileProcessor> processors) {
        processorByExtension = processors.stream().collect(Collectors.toMap(p -> p.extension(), p -> p));
    }

    public void dispatch(String filename) {
        String extension = filename.split("\\.")[1];
        processorByExtension.get(extension).process(filename);
    }
}
Now if you need to support a new file format, you only have to add one class - an implementation of FileProcessor. You do not have to change any of the already created classes. A usage sketch follows.
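For example, wired up manually (outside the Spring context, where the List<FileProcessor> would normally be injected), the dispatcher can be used like this; the file names are arbitrary examples:

// Manual wiring for illustration; Spring would inject the processor list via @Autowired.
FileDispatcher dispatcher = new FileDispatcher(Arrays.asList(new CsvFileProcessor(), new JsonFileProcessor()));
dispatcher.dispatch("orders.csv");   // routed to CsvFileProcessor
dispatcher.dispatch("orders.json");  // routed to JsonFileProcessor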
I am writing a RequestBuilder class which will handle the creation of a query string based on the following criteria:
category (String)
country (String)
keywords (String[])
page (int)
pageSize (int)
Since not all criteria are mandatory and there are many combinations between them (I counted 7, of which only four should be valid - see below why), I decided to use the builder pattern:
public class RequestBuilder {
    private String category = "";
    private String country = "&country=us";
    private String keywords = "";
    private String page = "";
    private String pageSize = "&pageSize=100";

    public RequestBuilder() {
    }

    private String buildQuery() {
        return this.category + this.country + this.keywords + this.page + this.pageSize;
    }

    // the setter methods, which I omitted for readability
But there is a problem. I need to force the user to specify at least two of either category, country or keywords before building the object (right now the user isn't obliged to specify even one!). A user shouldn't be able to create an object by specifying only country, for example.
So how do I force this requirement? If I make three constructors (each having two of those parameters), I feel like I am ruining the Builder pattern, even though there would still be three more optional properties to specify.
As a designer, you need to decide what fields are really required. There is no such thing as "maybe required". To use Builder Pattern and enforce required parameters, mark the fields as final and inject them through the constructor:
public class Request {
    // fields
    private final String requiredField;
    private final String optional1;
    private final String optional2;

    private Request(RequestBuilder builder) {
        requiredField = builder.requiredField;
        optional1 = builder.optional1;
        optional2 = builder.optional2;
    }

    // add only getter methods to the Request class (builds immutable Request objects)

    public static class RequestBuilder {
        private final String requiredField;
        private String optional1;
        private String optional2;

        public RequestBuilder(String requiredField) {
            this.requiredField = requiredField;
        }

        public RequestBuilder setOptional1(String optional1) {
            this.optional1 = optional1;
            return this;
        }

        public RequestBuilder setOptional2(String optional2) {
            this.optional2 = optional2;
            return this;
        }

        public Request build() {
            return new Request(this);
        }
    }
}
The builder enforces required and optional fields. The object being built by the builder hides the constructor so that it is only accessible via the builder. The fields inside the request object are all final for immutability.
To use, you'll do something like this:
RequestBuilder builder = new RequestBuilder("required");
Request request = builder.setOptional1("foo").setOptional2("bar").build();
or you could simply call build() at any time after calling the builder constructor.
UPDATE:
Now to your problem... You could (potentially) modify build() to check how many "semi-required" fields have values and compare that to the total number of fields. To me, this is a hack. For this, you have two options:
Hard-code the number of fields and check how many of them are set. If the number of fields that are set is below a certain count, throw some exception (e.g. InvalidRequiredFieldCount); otherwise, return the new instance. For this, you need to increment a "count" every time a setter method is called (a sketch of this approach appears after the reflection example below).
Use reflection to get the list (array) of fields and use that field count to calculate the minimum number of "required" fields. Throw an exception if that minimum is not reached, or return a new Request instance if the minimum threshold is reached.
public Request build() throws Exception {
    Request request = new Request(this);
    int count = 0;
    int max = 2;
    Field[] allFields = Request.class.getDeclaredFields();
    for (Field field : allFields) {
        Object o = field.get(request);
        if (o != null) {
            count++;
        }
    }
    if (count < max) {
        throw new Exception("Minimum number of set fields (2) not reached");
    }
    return request;
}
This is not pretty, but it works. If I run this:
RequestBuilder builder = new RequestBuilder("required");
Request request = builder.build();
will result in an exception:
Exception in thread "main" java.lang.Exception: Minimum number of set fields (2) not reached
at com.master.oxy.Request$RequestBuilder.build(Request.java:54)
at com.master.oxy.Request.main(Request.java:63)
However, if I set at least one optional, the new instance will be returned.
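A minimal sketch of the first option (keeping a running count in the setters instead of using reflection), under the same assumptions as the example above (one required field, two optionals, a threshold of two set fields):

public static class RequestBuilder {
    private final String requiredField;
    private String optional1;
    private String optional2;
    private int setFieldCount = 1; // the required field is always set via the constructor

    public RequestBuilder(String requiredField) {
        this.requiredField = requiredField;
    }

    public RequestBuilder setOptional1(String optional1) {
        this.optional1 = optional1;
        setFieldCount++; // note: calling the same setter twice is counted twice; track per-field flags if that matters
        return this;
    }

    public RequestBuilder setOptional2(String optional2) {
        this.optional2 = optional2;
        setFieldCount++;
        return this;
    }

    public Request build() {
        if (setFieldCount < 2) {
            throw new IllegalStateException("Minimum number of set fields (2) not reached");
        }
        return new Request(this);
    }
}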
I would like to suggest another object-oriented solution for that problem:
Let's assume you don't want to pass the required arguments to the builder constructor. You can use the following technique to enforce providing the required field during the build process of the object.
Usage - first, only the required field's setter is visible (see the usage sketch after the code below).
Usage 2 - after providing the required field, the rest of the optional setters and the build() method become visible.
We implement it by doing the following:
public interface RequestBuilderRequiredField {
    RequestBuilder setRequiredField(String requiredField);
}

public static class RequestBuilder implements RequestBuilderRequiredField {

    private String requiredField;
    private String optional1;
    private String optional2;

    private RequestBuilder() {
    }

    public static RequestBuilderRequiredField aRequestBuilder() {
        return new RequestBuilder();
    }

    @Override
    public RequestBuilder setRequiredField(String requiredField) {
        this.requiredField = requiredField;
        return this;
    }
Note that the builder constructor is private because we want to expose the interface with the required field first (and if we have many required fields, we can chain the interface methods so that each one returns the interface for the next required field).
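A sketch of the usage the two steps above describe, assuming the builder also keeps the optional setters and the build() method from the earlier example (the values are arbitrary):

// Only setRequiredField is reachable right after aRequestBuilder(); the optional setters
// and build() become reachable only once the required field has been provided.
Request request = Request.RequestBuilder.aRequestBuilder()
        .setRequiredField("required")   // returns the full RequestBuilder
        .setOptional1("foo")
        .setOptional2("bar")
        .build();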
The downside of that approach is that you need to maintain the same number of interfaces on a large object when many (or even all) properties are required.
It would be interesting to see whether libraries such as Lombok, which auto-generate a builder with the @Builder annotation, could be given the ability to generate this.
How can I create a method that accepts a Class and a Field as parameters? Like this:
List<SomeClassEntity> list = ...;

// Service to make useful things around a list of objects
UsefulThingsService<SomeClassEntity> usefulThingsService = new UsefulThingsService<>();

// Maybe invoke like this. Didn't work.
usefulThingsService.makeUsefulThings(list, SomeClassEntity.class, SomeClassEntity::getFieldOne);

// or like this. Will cause delayed runtime errors.
usefulThingsService.makeUsefulThings(list, SomeClassEntity.class, "fieldTwo");
public class SomeClassEntity {
    Integer fieldOne = 10;
    Double fieldThree = 0.123;

    public Integer getFieldOne() {
        return fieldOne;
    }

    public void setFieldOne(Integer fieldOne) {
        this.fieldOne = fieldOne;
    }

    public Double getFieldThree() {
        return fieldThree;
    }

    public void setFieldThree(Double fieldThree) {
        this.fieldThree = fieldThree;
    }
}

public class UsefulThingsService<T> {
    public void makeUsefulThings(Class<T> someClassBClass, String fieldName) {
        // there is some code
    }
}
I want to have correct references at compile time, not at runtime.
Update:
I need code that would look more convenient than this:
Field fieldOne = null;
try {
    fieldOne = SomeClassEntity.class.getDeclaredField("fieldOne");
} catch (NoSuchFieldException e) {
    e.printStackTrace();
}
usefulThingsService.makeUsefulThings(SomeClassEntity.class, fieldOne);
I apologize for the next clarification.
Update 2:
- The service compares the list with the previous list, reveals only the changed fields of the objects (list items) and updates those fields in the objects in the original list.
- Currently I use an annotation on the entity field that is actually the ID of the entity, and that ID is used to detect identical entities (old and new) when I need to update a field of an entity in the source list.
- The service detects the annotated field and uses it for the subsequent update process.
- I want to stop using annotations and instead provide a Field directly in the constructor of the service, or use something else that could establish a relationship between the class and the field at compilation stage.
Assuming that you want field access because you want to get and set the value, you’d need two functions:
public class UsefulThingsService<T> {
    public <V> void makeUsefulThings(List<T> list, Function<T,V> get, BiConsumer<T,V> set) {
        for (T object : list) {
            V v = get.apply(object);
            // there is some code
            set.accept(object, v);
        }
    }
}
and
usefulThingsService.makeUsefulThings(
list, SomeClassEntity::getFieldOne, SomeClassEntity::setFieldOne);
usefulThingsService.makeUsefulThings(
list, SomeClassEntity::getFieldThree, SomeClassEntity::setFieldThree);
There are, however, some things left open. For example, how is this service supposed to do something useful with the field resp. property without even knowing its actual type? In your example, both are subtypes of Number, so you could declare <V extends Number> so that the method knows how to extract numerical values; however, constructing an appropriate result object would then require specifying another function argument. A sketch of that variant follows.
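A hedged sketch of that Number-bounded variant; the class and method names, the extra "remake" function, and the doubling transformation are illustrative assumptions only:

import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.DoubleFunction;
import java.util.function.Function;

public class NumericThingsService<T> {
    // get reads a numeric property, remake rebuilds a property value from a double,
    // set writes it back; all three are supplied by the caller.
    public <V extends Number> void makeUsefulNumericThings(List<T> list,
                                                           Function<T, V> get,
                                                           DoubleFunction<V> remake,
                                                           BiConsumer<T, V> set) {
        for (T object : list) {
            double d = get.apply(object).doubleValue(); // the Number bound exposes the numeric value
            set.accept(object, remake.apply(d * 2));    // example transformation only
        }
    }
}

// e.g. service.makeUsefulNumericThings(list, SomeClassEntity::getFieldThree, d -> d, SomeClassEntity::setFieldThree);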
I'm designing a module which can support different data sources.
My module gets the user's company id as input, and I must call the appropriate class based on the company id.
I'm trying to incorporate some good design and avoid conditional statements where possible.
I have a FetchDataSource singleton class with this method:
public class FetchDataSourceSingleton {
    private static Map<String, Communicator> communicatorMap;

    public static Communicator getCommunicatorInstance(String dataSourceType) {
        if (communicatorMap == null || communicatorMap.isEmpty())
            populateCommunicatorMap();
        if (communicatorMap.containsKey(dataSourceType))
            return communicatorMap.get(dataSourceType);
        return null;
    }

    .... other methods including populateCommunicatorMap()
}
"Communicator" is an interface, and the communicator map will return the appropriate instance.
This is the populateCommunicatorMap() method in the same singleton class.
private static void populateCommunicatorMap() {
    communicatorMap = new HashMap<String, Communicator>();
    communicatorMap.put("AD", new ADCommunicator());
    communicatorMap.put("DB2", new DB2Communicator());
    communicatorMap.put("MYSQL", new MYSQLCommunicator());
}
ADCommunicator, DB2Communicator and MYSQLCommunicator implement the Communicator interface.
The code seems to work in my test draft.
The only concern I have is that the HashMap will return the same object for all communication requests of the same type. I can't seem to avoid having the same instance in the hashmap if I want to avoid the conditional statements. Otherwise, instead of the hashmap, I could have just made calls like this:
Communicator comm;
if (type.equals("AD")) comm = new ADCommunicator();
if (type.equals("DB2")) comm = new DB2Communicator();
if (type.equals("MYSQL")) comm = new MYSQLCommunicator();
I've avoided this by using the hashmap to return an instance based on type.
But then I can't avoid the singleton problem where I get the same instance.
In a multithreaded environment which needs to support hundreds of thousands of communication requests at a time, this could be a problem, considering I'll need to synchronize a lot of code in each of the Communicator classes.
Is there a way I can avoid the synchronization and make it thread-safe without impacting performance?
I can't seem to avoid having the same instance in the hashmap
You can use a switch instead of a bunch of ifs.
Switch over an enum (Java 5)
Change type to be an enum in Java 5+, then you can switch on it. I'd recommend enums in general for type safety.
// type is-a enum Communicator.TYPE
switch (type) {
    case AD: return new ADCommunicator();
    case DB2: return new DB2Communicator();
    case MYSQL: return new MYSQLCommunicator();
    default: return null;
}
Switch over a String (Java 7)
Java 7 and later can switch over Strings directly.
// type is-a String
switch (type) {
    case "AD": return new ADCommunicator();
    case "DB2": return new DB2Communicator();
    case "MYSQL": return new MYSQLCommunicator();
    default: return null;
}
Switching over an enum will be as fast as a map, if not faster. Switching on the string will be about as fast as a Map.
A Map of Factory (factory of factories)
Or have a map of factories:
private final static Map<String, Factory<? extends Communicator>> map = new HashMap<>();
static {
    map.put("AD", ADCommunicatorFactory.getInstance());
    //...
    map.put(null, NullFactory.<Communicator>getInstance());
} // populated on class-load. Eliminates race from lazy init

// on get (fall back to the null-object factory for unknown types)
Factory<? extends Communicator> factory = map.get(type);
if (factory == null) factory = map.get(null);
return factory.make();
A Map of Class (reflection)
Or use the reflection API to make instances, but then it would probably be better to just use conditionals.
// on init
Map<String, Class<? extends Communicator>> map = new HashMap<>();
map.put("AD", ADCommunicator.class);

// on get
try {
    return (Communicator) map.get(type).newInstance();
} catch (InstantiationException | IllegalAccessException | NullPointerException e) {
    return null;
}
P.S.
This all sounds like premature optimization. I doubt that determining which Communicator to use is going to be a bottleneck in your system.
If all your communicators can be constructed with empty argument list constructor, then you can store the type (class) of the communicator in the map instead of an instance. Then you can look up the type (java.lang.Class) from your communicatorMap and instantiate a new instance with java.lang.Class.newInstance().
For example:
public interface Communicator {
    void communicate();
}

public class Communicator1 implements Communicator {
    public void communicate() {
        System.out.println("communicator1 communicates");
    }
}

public class Communicator2 implements Communicator {
    public void communicate() {
        System.out.println("communicator2 communicates");
    }
}

public class CommunicatorTest {
    public static void main(String[] args) throws Exception {
        Map<String, Class<? extends Communicator>> communicators = new HashMap<String, Class<? extends Communicator>>();
        communicators.put("Comm1", Communicator1.class);
        communicators.put("Comm2", Communicator2.class);

        Communicator comm2 = communicators.get("Comm2").newInstance();
        comm2.communicate();
        System.out.println("comm2: " + comm2);

        Communicator anotherComm2 = communicators.get("Comm2").newInstance();
        anotherComm2.communicate();
        System.out.println("anotherComm2: " + anotherComm2);
    }
}
result:
communicator2 communicates
comm2: pack.Communicator2#6bc7c054
communicator2 communicates
anotherComm2: pack.Communicator2#232204a1
Assylias is correct about using a static initializer. It runs when your class loads, which guarantees that the map will be loaded before anything else happens to the class.
You didn't show the declaration of the map; I assume that it is static.
private final static Map<String, Communicator> communicatorMap;
static {
    communicatorMap = new HashMap<>();
    communicatorMap.put("AD", new ADCommunicator());
    communicatorMap.put("DB2", new DB2Communicator());
    communicatorMap.put("MYSQL", new MYSQLCommunicator());
} // populated on class-load. Eliminates race from lazy init
The remaining issue is the Communicator implementation. All this assumes that it is thread-safe as well.
I need to get a Field (or a list of Fields) without knowing its name.
I.e., for a custom entity manager I'd like to be able to make method calls like this:
cem.getEntities(MyEntity.class, ParamMap), where ParamMap should be of the type Map<Field, Object>.
What I can do at the moment is something like this:
Map<Field, Object> params = new HashMap<Field, Object>();
params.put(MyEntity.class.getDeclaredField("someFieldName"), 20);
List<MyEntity> entitysWithSomeFieldNameEquals20 = cem.getEntities(MyEntity.class, params);
I'm trying to avoid the usage of queries, because it should work generically in the first place, but also be independent from strings (they are error-prone). The entity manager therefore uses reflection to determine the table and column names it needs to use.
However, I STILL need to use
MyEntity.class.getDeclaredField("someFieldName")
which will simply move the error-prone string "out" of the entity manager...
What I'm trying to achieve would be something like this:
MyEntity.class.getDeclaredField(MyEntity.class.fields.someFieldName.toString())
So, no matter what the actual field is named, it can be referenced in a safe way, and refactoring will refactor all the field-access calls too.
I'm not sure if this is possible. I could go with an (encapsulated) enum for ALL entities, but I hope that there's a more generic way to achieve this.
Edit:
One good solution seems to be the usage of constants:
public class MyEntity {
    public static final String SOME_FIELD = "some_field_name_in_database";

    @Column(name = SOME_FIELD)
    private String someField;
}
...
Map<String, Object> params = new HashMap<String, Object>();
params.put(MyEntity.SOME_FIELD, matchValue);
List<MyEntity> result = eem.getEntities(MyEntity.class, params);
This at least reduces the usage of the string to exactly one location, where it can be maintained and changed without affecting any other file. But I'm still searching for a solution without constants, so that the constants don't need to be kept in sync with the available fields :-)
Ok, this is just an idea, which is not easy to implement, but it could work.
Suppose MyEntity looks like this:
public class MyEntity {
    private String foo;
    private String bar;

    public String getFoo() { return this.foo; }
    public void setFoo(String foo) { this.foo = foo; }
    public String getBar() { return this.bar; }
    public void setBar(String bar) { this.bar = bar; }
}
and there is an interface:
public interface Pattern {
    public Class<?> getEntityClass();
    public Map<Field, Object> getFields();
}
and there is a method, which takes a class and generates a pattern object, which is an instance of the given class:
public class PatternFactory {
    public <T> T createPattern(Class<T> klass) {
        // magic happens here
    }
}
The requirement for the emitted instance would be that it should implement the Pattern interface, such that the method getFields returns only the fields which were explicitly set, and getEntityClass returns the entity class. Then the custom entity manager could be implemented like this:
public class EntityManager {
    public <T> Collection<T> getEntities(T pattern) {
        if (!(pattern instanceof Pattern))
            throw new IllegalArgumentException();
        Class<?> klass = ((Pattern) pattern).getEntityClass();
        Map<Field, Object> fields = ((Pattern) pattern).getFields();
        // fetch objects here
    }
}
Then you could use it like this:
PatternFactory pf = // obtain somehow
EntityManager em = // obtain somehow
MyEntity pattern = pf.createPattern(MyEntity.class);
pattern.setFoo("XYZ");
pattern.setBar(null);
Collection<MyEntity> result = em.getEntities(pattern);
In this case pattern.getFields would return a map with two entries.
The difficulty here lies, of course, in the implementation of the createPattern method, where you will have to emit bytecode at run-time. However, this is possible and can be done.
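To make the target concrete, this is roughly what the generated subclass for MyEntity would have to look like if written by hand; a sketch only, since the real createPattern would emit the equivalent bytecode at run-time:

import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Hand-written equivalent of what createPattern(MyEntity.class) should emit.
public class MyEntityPattern extends MyEntity implements Pattern {

    private final Map<Field, Object> touchedFields = new HashMap<>();

    @Override
    public void setFoo(String foo) {
        super.setFoo(foo);
        remember("foo", foo);
    }

    @Override
    public void setBar(String bar) {
        super.setBar(bar);
        remember("bar", bar);
    }

    private void remember(String fieldName, Object value) {
        try {
            touchedFields.put(MyEntity.class.getDeclaredField(fieldName), value);
        } catch (NoSuchFieldException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public Class<?> getEntityClass() {
        return MyEntity.class;
    }

    @Override
    public Map<Field, Object> getFields() {
        return touchedFields; // only the fields that were explicitly set
    }
}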