Best practice to associate message and target class instance creation - java

The program I am working on has a distributed architecture, more precisely the Broker-Agent Pattern. The broker sends messages to its corresponding agent to tell the agent to execute a task. Each message contains the target task's information (the task name, configuration properties the task needs, etc.). On the agent side, each task is implemented in a separate class, like:
public class Task1 {}
public class Task2 {}
public class Task3 {}
...
Messages are in JSON format like:
{
  "taskName": "Task1", // put the class name here
  "config": {
  }
}
So what I need is to associate the message sent from the broker with the right task on the agent side.
I know one way is to put the target task's class name in the message, so that the agent can create an instance of that task class via reflection, using the class name extracted from the message, like:
Class.forName(className).getConstructor(String.class).newInstance(arg);
I want to know the best practice for implementing this association. The number of tasks is growing, and I think writing strings by hand is error-prone and hard to maintain.

If you're being that specific about class names, you could even think about serializing task objects and sending them directly. That's probably simpler than your reflection approach (though even more tightly coupled).
But usually you don't want that kind of coupling between broker and agent. A broker needs to know which task types there are and how to describe a task in a way that everybody understands (like in JSON). It doesn't and shouldn't know how the agent implements the task, or even which language the agent is written in. (That doesn't mean it's a bad idea to define task names in a place that is common to both code bases; see the sketch below.)
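For illustration, one possible shape for that shared place - a tiny enum that both broker and agent compile against (the name TaskNames and its members are made up for this sketch):

public enum TaskNames {
    SAY_HELLO("sayHello"),
    SWEEP("sweep");

    // the string that actually travels in the JSON message
    public final String wireName;

    TaskNames(String wireName) {
        this.wireName = wireName;
    }
}

The broker serializes TaskNames.SAY_HELLO.wireName into the message, and the agent uses the same constant as the key into its task registry, so a typo becomes a compile error instead of a runtime surprise.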
So you're left with finding a good way to construct objects (or call methods) inside your agent based on some string. The common solution for that is some form of factory pattern, like: http://alvinalexander.com/java/java-factory-pattern-example - also helpful: a Map<String, Factory>, like:
interface Task {
    void doSomething();
}

interface Factory {
    Task makeTask(String taskDescription);
}

Map<String, Factory> taskMap = new HashMap<>();

void init() {
    taskMap.put("sayHello", new Factory() {
        @Override
        public Task makeTask(String taskDescription) {
            return new Task() {
                @Override
                public void doSomething() {
                    System.out.println("Hello " + taskDescription);
                }
            };
        }
    });
}

void onTask(String taskName, String taskDescription) {
    Factory factory = taskMap.get(taskName);
    if (factory == null) {
        System.out.println("Unknown task: " + taskName);
        return; // without this, an unknown name would fall through to a NullPointerException
    }
    Task task = factory.makeTask(taskDescription);
    // execute task somewhere
    new Thread(task::doSomething).start();
}
http://ideone.com/We5FZk
And if you want it fancy, consider annotation-based reflection magic. It depends on how many task classes there are; the more there are, the more effort is worth putting into an automagic solution that hides the complexity from you.
For example, the above map could be filled automatically by adding some classpath scanning for classes of the right type carrying an annotation that holds the string(s). Or you could let some DI framework inject all the things that need to go into the map. DI in larger projects usually solves those kinds of issues really well: https://softwareengineering.stackexchange.com/questions/188030/how-to-use-dependency-injection-in-conjunction-with-the-factory-pattern
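To make the annotation idea concrete, here's a minimal sketch (the annotation name TaskName and the register method are made up; the actual classpath scanning would come from a library like Reflections or your DI framework and is only hinted at here):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface TaskName {
    String value();
}

@TaskName("sayHello")
class HelloFactory implements Factory {
    public Task makeTask(String taskDescription) {
        // Task is a functional interface above, so a lambda works here
        return () -> System.out.println("Hello " + taskDescription);
    }
}

// Called once per discovered Factory class; reads the annotation and fills
// the map so nobody hand-writes the name strings anymore.
void register(Class<? extends Factory> factoryClass) throws Exception {
    TaskName name = factoryClass.getAnnotation(TaskName.class);
    if (name != null) {
        taskMap.put(name.value(), factoryClass.getDeclaredConstructor().newInstance());
    }
}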
And besides writing your own distribution system, you can probably use existing ones (and reuse rather than reinvent is a best practice). Maybe http://www.typesafe.com/activator/template/akka-distributed-workers or, more generally, http://twitter.github.io/finagle/ work in your context. But there are way too many other open-source distributed things covering different aspects to name all the interesting ones.

Related

Identification of a service

Service interface:
public interface UserInterface {
    void present();
    void onStart();
    void onStop();
}
I have two implementations: TextUserInterface and GraphicalUserInterface.
How can I identify the one I want to use when I launch my program?
public static void main(String[] args) {
    ServiceLoader<UserInterface> uiLoader = ServiceLoader.load(UserInterface.class);
    UserInterface ui = uiLoader.? // what to do to identify the one I want to use?
}
I was thinking of introducing an enum with the type of UI, so I could just iterate through all the services and pick the one I'd like, but isn't this approach a misuse of services? In this case, when I want to pick GraphicalUserInterface, I could just skip the ServiceLoader part and instantiate one directly. The only difference I see is that without services I'd have to require the GraphicalUserInterface module, which "kind of" breaks the encapsulation.
I don't actually think it would be a misuse. As a matter of fact, what you get from the ServiceLoader.load(...) method is an Iterable object, and if you need a specific service, you will have to iterate through all the available instances.
The idea of the enum is not bad, but I suggest you take advantage of the Java Stream API and filter for the instance you need. For example, you might have something like this:
enum UserInterfaceType {
    TEXT_UI, GRAPH_UI;
}

public interface UserInterface {
    UserInterfaceType getTypeUI();
    ...
}

// In your main method
ServiceLoader<UserInterface> uiLoader = ServiceLoader.load(UserInterface.class);
UserInterface ui = uiLoader.stream()
        .map(ServiceLoader.Provider::get) // instantiate each provider
        .filter(p -> p.getTypeUI() == <TypeUIyouNeed>)
        .findFirst()
        .get();
That opens up a number of possibilities. For example, you can put this in a separate method that receives a UserInterfaceType value as input and retrieves the service implementation based on the enum value you pass.
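A possible shape for that helper (the method name findUI is mine, and ServiceLoader.stream() requires Java 9+):

static UserInterface findUI(UserInterfaceType type) {
    return ServiceLoader.load(UserInterface.class).stream()
            .map(ServiceLoader.Provider::get)
            .filter(ui -> ui.getTypeUI() == type)
            .findFirst()
            .orElseThrow(() -> new IllegalStateException("No UserInterface service for " + type));
}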
As I said, that is just the main idea, but you are definitely not misusing the ServiceLoader.

Danger of instantiating a class in a verticle (Vert.x)

Let me explain my problem: I have a verticle in which I define all the routes, and I have plain Java classes containing methods that I call from my verticle depending on the route. For example, my downloadFile() method is in the MyFile class, like this:
public class MyFile {
    public final void downloadFile(RoutingContext rc, Vertx vertx) {
        final HttpServerResponse response = rc.response();
        response.putHeader("Content-Type", "text/html");
        response.setChunked(true);
        rc.fileUploads().forEach(file -> {
            final String fileNameWithoutExtension = file.uploadedFileName();
            final JsonObject jsonObjectWithFileName = new JsonObject();
            response.setStatusCode(200);
            response.end(jsonObjectWithFileName.put("fileName", fileNameWithoutExtension).encodePrettily());
        });
    }

    public final void saveFile(RoutingContext rc, Vertx vertx) {
        //TODO
    }
}
And I use this class in my verticle like this:
public class MyVerticle extends AbstractVerticle {
    private static final MyFile myFile = new MyFile();

    @Override
    public void start(Future<Void> startFuture) {
        final Router router = Router.router(vertx);
        final EventBus eventBus = vertx.eventBus();
        router.route("/getFile").handler(routingContext -> {
            myFile.downloadFile(routingContext, vertx);
        });
        router.route("/saveFile").handler(routingContext -> {
            myFile.saveFile(routingContext, vertx);
        });
    }
}
My colleague tells me that it is not good to instantiate a class in a verticle, and when I asked him why, he replied that it becomes stateful. I have doubts about what he says because I don't see how. And since I declared my MyFile instance "static final" in my verticle, I would even say I gain performance, because I use the same instance for each incoming request instead of creating a new one.
If it's bad to instantiate a class in a verticle, please explain why.
In addition, I would like to know the point of using two verticles for processing that a single verticle can do.
For example, I want to build a JsonObject with the data I select from my database. Why send this data to another verticle that does nothing but build the JsonObject and reply, so that I can send the response to the client, when I could build the JsonObject in the verticle where I made the query and send the response immediately? Here is some pseudo-code to illustrate:
public class MyVerticle1 extends AbstractVerticle {
    public void start(Future<Void> startFuture) {
        connection.query("select * from file", result -> {
            if (result.succeeded()) {
                List<JsonArray> rowsSelected = result.result().getResults();
                eventBus.send("address", rowsSelected, res -> {
                    if (res.succeeded()) {
                        routingContext.response().end(res.result().encodePrettily());
                    }
                });
            } else {
                LOGGER.error(result.cause().toString());
            }
        });
    }
}

public class MyVerticle2 extends AbstractVerticle {
    public void start(Future<Void> startFuture) {
        JsonArray resultOfSelect = new JsonArray();
        eventBus.consumer("address", message -> {
            List<JsonArray> rowsSelected = (List<JsonArray>) message.body();
            rowsSelected.forEach(jsa -> {
                JsonObject row = new JsonObject();
                row.put("id", jsa.getInteger(0));
                row.put("name", jsa.getString(1));
                resultOfSelect.add(row);
            });
            message.reply(resultOfSelect);
        });
    }
}
I really do not see the point of making two verticles, since I can use the result of my query in the first verticle without involving the second.
For me, the EventBus is important for transmitting information between verticles for parallel processing.
bear in mind... the answers you're looking for are unfortunately very nuanced and will vary depending on a number of conditions (e.g. the experience of whoever is answering, design idioms in the codebase, tools/libraries at your disposal, etc). so there aren't authoritative answers, just whatever suits you (and your co-workers).
My colleague tells me that it is not good to instantiate a class in a verticle and when I asked him why, he replied that it becomes stateful and I have doubts about what he says to me because I don't see how.
your colleague is correct in the general sense that you don't want individual nodes in a cluster maintaining their own state, because that will in fact hinder the ability to scale reliably. but in this particular case, MyFile appears to be stateless, so introducing it as a member of a Verticle does not automagically make the server stateful.
(if anything, i'd take issue with MyFile doing more than file-based operations - it also handles HTTP requests and responses).
And as I declared my MyFile class instance "static final" in my verticle, I want to say that I even gain in performance because I use the same instance for each incoming request instead of creating a new instance.
i'd say this goes to design preferences. there isn't any real "harm" done here, per se, but i tend to avoid using static members for anything other than constant literals, and prefer instead to use dependency injection to wire up my dependencies. but maybe this is a very simple project and introducing a DI framework is beyond the complexity you wish to take on. it totally depends on your particular set of circumstances.
In addition I would like to know what is the interest of using 2 verticles for a treatment that only one verticle can do?
again, this depends on your set of circumstances and your "complexity budget". if the processing is simple and your desire is to keep the design equally simple, a single Verticle is fine (and arguably easier to understand, conceptualize, and support). in larger applications, i tend to create many Verticles along the lines of the different logical domains in play (e.g. Verticles for authentication, Verticles for user account functionality, etc), and orchestrate any complex processing through the EventBus - roughly as sketched below.
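a rough sketch of that shape (the domain split and the address string are made up; the send-with-reply API matches Vert.x 3.x, like the code in the question):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class DomainExample {

    // a verticle that owns the "user account" domain
    public static class UserAccountVerticle extends AbstractVerticle {
        @Override
        public void start() {
            vertx.eventBus().consumer("user.create", message -> {
                JsonObject body = (JsonObject) message.body();
                // ... do the actual domain work here, then reply to the orchestrator
                message.reply(new JsonObject().put("created", body.getString("name")));
            });
        }
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // an HTTP-facing verticle would send work over the event bus instead of doing it inline
        vertx.deployVerticle(new UserAccountVerticle(), done ->
                vertx.eventBus().send("user.create",
                        new JsonObject().put("name", "alice"),
                        reply -> System.out.println(reply.result().body())));
    }
}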

Reactor / WebFlux: implement a reactive HTTP news ticker

I have a requirement that is rather simple to formulate, but I cannot pull it off without leaking resources.
I want to return a response of type application/stream+json, featuring news events someone posted. I do not want to use WebSockets - not because I don't like them, I just want to know how to do it with a stream.
For this I need to return a Flux<News> from my REST controller that is continuously fed with news once someone posts any.
My attempt at this was creating a Publisher:
public class UpdatePublisher<T> implements Publisher<T> {
    private List<Subscriber<? super T>> subscribers = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super T> s) {
        subscribers.add(s);
    }

    public void pushUpdate(T message) {
        subscribers.forEach(s -> s.onNext(message));
    }
}
And a simple News Object:
public class News {
    String message;
    // Constructor, getters, some properties omitted for readability...
}
And endpoints to publish news and, respectively, get the stream of news:
// ...
private UpdatePublisher<String> updatePublisher = new UpdatePublisher<>();

@GetMapping(value = "/news/ticker", produces = "application/stream+json")
public Flux<News> getUpdateStream() {
    return Flux.from(updatePublisher).map(News::new);
}

@PutMapping("/news")
public void putNews(@RequestBody News news) {
    updatePublisher.pushUpdate(news.getMessage());
}
This WORKS, but I cannot unsubscribe or access any given subscription again - so once a client disconnects, the updatePublisher will just continue to push onto a growing number of dead channels, as I have no way to call the onComplete() handler on the subscriptions.
TL;DR:
Can one push messages onto a possibly endless Flux from a different thread and still terminate the Flux on demand, without relying on a connection-reset-by-peer exception or something along those lines?
You should never try to implement the Publisher interface yourself, as that boils down to getting the Reactive Streams implementation right. This is exactly the issue you're facing here.
Instead, you should use one of the generator operators provided by Reactor itself (this is actually a Reactor question, nothing specific to Spring WebFlux).
In this case, Flux.create or Flux.push are probably the best candidates, given your code uses some type of event listener to push events down the stream. See the Reactor project reference documentation on that.
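As a rough illustration of the Flux.create route (the class and method names here are made up; FluxSink.onDispose is what removes the leak you're seeing, because Reactor invokes it when a client disconnects):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

public class UpdateStream<T> {
    private final List<FluxSink<T>> sinks = new CopyOnWriteArrayList<>();

    // each subscriber gets its own sink; Reactor manages the subscription state
    public Flux<T> flux() {
        return Flux.create(sink -> {
            sinks.add(sink);
            // called on cancel/disconnect, so dead channels get cleaned up
            sink.onDispose(() -> sinks.remove(sink));
        });
    }

    public void pushUpdate(T message) {
        sinks.forEach(s -> s.next(message));
    }

    // terminate the stream on demand
    public void complete() {
        sinks.forEach(FluxSink::complete);
    }
}

From the controller, you would return updateStream.flux().map(News::new) as before; adding .share() would give you the multicast pattern mentioned in the pointers below.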
Without more details, it's hard to give you a concrete code sample that solves your problem. Here are a few pointers though:
you might want to .share() the stream of events for all subscribers if you'd like some multicast-like communication pattern
pay attention to the push/pull/push+pull model that you'd like to have here; how is backpressure supposed to work? What if we produce more events than the subscribers can handle?
this model would only work on a single application instance. If you'd like this to work on multiple application instances, you might want to look into messaging patterns using a broker

I can't unit test my class without exposing private fields -- is there something wrong with my design?

I have written some code which I thought was quite well-designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I need to change some of my variables' access modifiers from private to default, i.e. expose them (only within a package, but still...).
Here is a rough overview of the code in question. There is supposed to be an address validation framework that enables address validation by different means, e.g. validating via some external web service, or against data in a DB, or by any other source. So I have the notion of a Module, which is just this: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is some sort of factory that, based on the request or session state, chooses the proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request); // analyze request and choose a module
        module.init(createInitParams(request)); // init module
        return module;
    }
}
Then I have written a Module that uses an external web service for validation, implemented like this:
class WebServiceModule implements Module {
    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade, which is a wrapper over the external web service; my module calls this facade, processes its response, and returns some framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then, in order for the module to use my mocked web service, the field webservice must be accessible from the outside. That breaks my design and I wonder if there is anything I could do about it. Obviously, the facade cannot be passed in the init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how. I have not used any DI frameworks before, like Guice, so I don't know whether one could easily be used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package-private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me that I could create a WebServiceProcessor which takes a WebServiceFacade as a constructor argument, and then test just the WebServiceProcessor. That would be one solution to my problem. What do you think about it? My one problem with it is that my WebServiceModule would then be sort of useless, just delegating all its work to other components - one layer of abstraction too far, I would say.
Yes, your design is wrong. You should use dependency injection instead of new ... inside your class (which is also called a "hardcoded dependency"). Inability to easily write a test is a perfect indicator of a wrong design (read about the "Listen to your tests" paradigm in Growing Object-Oriented Software Guided by Tests).
BTW, using reflection or a dependency-breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said, and would like to suggest that the reason you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the Single Responsibility Principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they only have this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code becomes hard to unit test, since it explicitly references prod code, so keep this impl very concise. Put any logic (like createParamsForFacade) to the side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the façade as a dependency, following the Inversion of Control (IoC) principle:
class WebServiceValidator implements Validator {
    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. This way you will naturally gravitate towards a testable solution, which is usually also the best design. I think this is because testing requires modularity, and modularity is coincidentally the hallmark of good design.

Add Quartz Source Java Files on the Fly

I have looked around and around for this answer, but I have not been able to find a good one. I would like to create a system based on Quartz that allows people to schedule their own tasks. I will use a pseudo-example.
Let's say the main method of my Quartz program is in quartz.java.
Then I have a file called sweep.java that implements the Quartz Job interface.
So in quartz.java, I schedule sweep.java to run every hour. I run quartz.java, and it works fine. GREAT. However, now I want to add a dust.java to the Quartz scheduler; since this is a production service, I don't want to have to stop quartz.java, add in dust.java, and recompile and run quartz.java again. That downtime would be unacceptable.
Does anyone have any ideas on how I could accomplish this? It seems impossible, because how could you ever feed another Java file into the program without recompiling, linking, etc.?
I hope that this example is clear. Please let me know if I need to clarify any part of it.
Partial answer: it is possible to compile, and then instantiate, a class programmatically.
Here are links to example code:
how to compile from a String;
CompilerOutput;
CompilerOutputDirectory.
The extracted class is grabbed in the third source file (see method getGeneratedClass, which returns a Class<?> object).
HOWEVER: keep in mind that doing this is potentially dangerous. One problem, which can be quite serious if you are not careful, is that when you dynamically instantiate a class, its static initialization blocks are executed, and these can potentially wreak havoc on your application. So, in addition, you'll have to set up an appropriate SecurityContext.
In the code above, I actually only ever get the Class<?> object and never instantiate it in any way, so no code is executed. But your usage scenario is quite different.
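As a rough sketch of the same compile-then-load idea using only the JDK's javax.tools API (all names here are mine; this also assumes a full JDK at runtime, since a plain JRE ships no compiler):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class DynamicLoader {
    // compiles sourceDir/<className>.java and loads the resulting class
    public static Class<?> compileAndLoad(Path sourceDir, String className) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        File source = sourceDir.resolve(className + ".java").toFile();
        int result = compiler.run(null, null, null, source.getPath());
        if (result != 0) {
            throw new IllegalStateException("compilation failed for " + source);
        }
        // the loader is deliberately not closed: the class it defines stays in use
        URLClassLoader loader = new URLClassLoader(new URL[] { sourceDir.toUri().toURL() });
        return loader.loadClass(className);
    }
}

In the Quartz scenario, the returned Class<?> could then be handed to JobBuilder.newJob(...) (after an asSubclass(Job.class) cast) - with the static-initializer caveat above still applying once the class is actually used.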
I have not tried any of these, but they are worth trying:
1) Consider using the Quartz Camel endpoint.
If my understanding is right, Apache Camel lets you create Camel routes on the fly.
You just need to deploy the camel-context.xml into a container, keeping in mind that the required classes must already be available on the container's classpath.
2) Quartz lets you create a job declaratively, i.e. with an XML configuration of the job and trigger.
You can find more information here.
3) Now this one requires some effort ;-)
Create an interface with a method that you will execute as part of the job:
public interface MyDynamicJob {
    public void executeThisAsPartOfJob();
}
Create your job implementations:
public class EmailJob implements MyDynamicJob {
    @Override
    public void executeThisAsPartOfJob() {
        System.out.println("Sending Email");
    }
}
Now in your main scheduler engine, use the Observer pattern to store and initiate the jobs dynamically. Something like:
Map<String, MyDynamicJob> jobs = new HashMap<>();

// Call this method to add a job dynamically.
// If you add a job after the scheduler engine has started, find a way here
// to reiterate over this map without shutting down the scheduler :-).
public void addJob(String someJobName, MyDynamicJob job) {
    jobs.put(someJobName, job);
}

public void initiateScheduler() {
    // Iterate over the jobs map to get all registered jobs, and create
    // JobDetail instances dynamically for each entry. Add your custom job
    // instance to the job data map.
    JobDetail jd1 = JobBuilder.newJob(GenericJob.class)
            .withIdentity("FirstJob", "First Group").build();
    JobDataMap jobDataMap = jd1.getJobDataMap();
    jobDataMap.put("dynamicjob", jobs.get("dynamicjob1"));
}
public class GenericJob implements Job {
    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        System.out.println("Executing job");
        JobDataMap jdm = arg0.getJobDetail().getJobDataMap();
        MyDynamicJob mdj = (MyDynamicJob) jdm.get("dynamicjob");
        // Now execute your custom job method here.
        mdj.executeThisAsPartOfJob();
        System.out.println("Job Execution complete");
    }
}
