Break cyclic dependency in order to use dependency injection - java

I just started using dagger 2 and have not used any other dependency injection framework before. Now I'm stuck with a cyclic dependency and I don't know how to solve it properly. Consider the following example in a server application, which uses the Reactor pattern with Java NIO:
I have a Handler object attached to a selection key that is executed when new information arrives on the network:
class Handler implements Runnable {
    Server server;
    Client client;

    public void run() {
        // static factory method that eventually calls a method on server,
        // passing in 'client' as argument
        Command.parse(input).execute(server, client);
    }

    public void send(String string) {
        // enqueue string and inform reactor about interest in sending
    }
}
The Client class holds some state about the connected client. All connected Clients are managed in the Server class.
class Client {
    Handler h;

    public void send(String response) {
        h.send(response);
    }
}
When new input arrives, the Handler creates the Command objects, executes them on the server, and the server eventually will respond to the client.
So what I'm doing right now is creating a Client object manually in Handler, passing in a this reference, in order to be able to send the response:
client = new Client(this);
So my question now is: Is something wrong with the design? Is it possible to decouple Client and Handler? Or should I just live with this and not use dependency injection everywhere?
I appreciate your suggestions.

If you want the client to be able to send a message back through handler, then the following can break your cycle:
// lives in a common package both classes access
public interface ResponseClass {
    public void sendSomeMessage(String message);
}
// Handler can also implement ResponseClass directly,
// but I prefer using an additional class for flexibility.
public class Handler {

    public void whenYouCreateClient() {
        Client client = new Client(new HandlerWrapper(this));
    }

    public static class HandlerWrapper implements ResponseClass {
        private final Handler handler;

        public HandlerWrapper(Handler handler) {
            this.handler = handler;
        }

        public void sendSomeMessage(String message) {
            handler.send(message);
        }
    }

    public void send(String string) {
        // enqueue string and inform reactor about interest in sending
    }
}
public class Client {
    ResponseClass rc; // naming should be improved :)

    public void sendMessage(String message) {
        rc.sendSomeMessage(message);
    }
}
At runtime your classes are still tied together, but as far as your design is concerned, your Client is only attached to a generic ResponseClass.
You can have a hierarchy like:
common <-- client <-- handler
where handler knows of client and common, and client only knows of common (assuming you put the interface in the common package),
instead of
client <--> handler
I deliberately used sendSomeMessage to highlight that it's a different method you call on the wrapper/interface, but of course you can name them however you like.
One remark: I haven't used Dagger 2, so I cannot say for sure that what I do can be done with that product, but this is how I would decouple this sort of cyclic dependency.
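Put together, the decoupled classes can be exercised end to end. The sketch below is a self-contained, simplified version of the answer's code; the outbox queue is an addition for demonstration only (a real Handler would hand the string to the reactor instead):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// lives in a common package both classes access
interface ResponseClass {
    void sendSomeMessage(String message);
}

class Handler {
    // Demonstration only: stores outgoing messages so the flow is visible.
    final Queue<String> outbox = new ArrayDeque<>();

    void send(String string) {
        outbox.add(string);
    }
}

// Adapts Handler to the generic ResponseClass interface.
class HandlerWrapper implements ResponseClass {
    private final Handler handler;

    HandlerWrapper(Handler handler) {
        this.handler = handler;
    }

    public void sendSomeMessage(String message) {
        handler.send(message);
    }
}

class Client {
    private final ResponseClass rc;

    Client(ResponseClass rc) {
        this.rc = rc;
    }

    void sendMessage(String message) {
        rc.sendSomeMessage(message);
    }
}
```

Client now compiles against ResponseClass alone, so the client package needs no dependency on the handler package.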

I realized that what I was really trying to solve was not breaking the dependency between Client and Handler, but using dependency injection instead of the new operator.
The solution I was looking for: inject a ClientFactory into the constructor of Handler and use clientFactory.create(this) to create a Client object instead. The brilliant library AutoFactory allows you to create such a factory with a simple @AutoFactory annotation. The constructor of the generated class is automatically annotated with @Inject.
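For illustration, a hand-written factory roughly equivalent to what AutoFactory generates might look like this (simplified: the generated class would carry @Inject on its constructor and receive any non-runtime dependencies of Client there):

```java
// Simplified stand-ins for the classes in the question.
class Client {
    final Handler handler;

    Client(Handler handler) {
        this.handler = handler;
    }
}

class ClientFactory {
    // In the generated code this constructor is annotated with @Inject.
    ClientFactory() {}

    // Runtime parameters (here: the Handler) are passed to create().
    Client create(Handler handler) {
        return new Client(handler);
    }
}

class Handler {
    private final ClientFactory clientFactory;
    Client client;

    Handler(ClientFactory clientFactory) {
        this.clientFactory = clientFactory;
    }

    void onAccept() {
        // Instead of `new Client(this)`, ask the injected factory:
        client = clientFactory.create(this);
    }
}
```

Handler no longer uses the new operator for Client, so the factory (and with it the Client construction) stays under the injector's control.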

Related

Can I use a graphQL client to communicate to a graphql server class in the same application?

I am currently creating a POC. We have a proxy lambda linked up to an API gateway. This lambda accesses some data and returns films. These items are very large, so we would like to give customers the ability to filter which attributes they get from the items. We would like to do this using GraphQL. I am also using Quarkus and Java, following this tutorial: https://quarkus.io/guides/smallrye-graphql
I was thinking that by using graphql I could make a resource like
@GraphQLApi
public class FilmResource {

    @Inject
    GalaxyService service;

    @Query("allFilms")
    @Description("Get all Films from a galaxy far far away")
    public List<Film> getAllFilms() {
        return service.getAllFilms();
    }

    @Query
    @Description("Get a Film from a galaxy far far away")
    public Film getFilm(@Name("filmId") int id) {
        return service.getFilm(id);
    }
}
An AWS proxy lambda accepts a type of APIGatewayProxyRequestEvent, which has a String body. I was hoping I could wrap FilmResource in a GraphQL client and directly apply the body of an APIGatewayProxyRequestEvent to the FilmResource.
This would look something like this:
@Named("test")
public class TestLambda implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Inject
    FilmResource resource;

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent awsLambdaEvent, Context context) {
        DynamicGraphQLClient client = DynamicGraphQLClientBuilder.newBuilder().client(resource).build();
        JsonObject data = client.executeSync(awsLambdaEvent.getBody()).getData();
        return new APIGatewayProxyResponseEvent().withBody(data.toString()).withStatusCode(200);
    }
}
So, ideally, the customer would be able to pass the query below to the lambda and receive the correct response.
query allFilms {
  allFilms {
    title
    releaseDate
  }
}
However, it seems like all the clients only work with remote graphql servers. Is there a client that can wrap an internal graphql server class?
Source: https://quarkus.io/guides/smallrye-graphql-client
There's no way to specifically have the client run over some kind of 'local' transport, but you should still be able to use the HTTP transport even though the target is inside the same JVM. You just need to configure the client to run against an appropriate URL, which could be something like:
quarkus.smallrye-graphql-client.MY-CLIENT-KEY.url=http://127.0.0.1:${quarkus.http.port:8080}/graphql
And inject a dynamic client instance using:
@Inject
@GraphQLClient("MY-CLIENT-KEY")
DynamicGraphQLClient client;

How to design http connection to database in concurrent environment (static variable or not)

I am using DynamoDB from Amazon Web Services as my database. The client provided by AWS uses HTTP to make the requests to the database. This code will be on a server which accepts requests from users and sends them over to DynamoDB. I had a few questions on how to design this.
Since this is a server accepting many requests, I am using the async client http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/AmazonDynamoDBAsyncClient.html instead of the sync one, because I don't want every request to block; instead I wait for a future to return (better performance). Is it best to make this client static?
public class Connection {
    AmazonDynamoDBAsyncClient client;
    static DynamoDB dynamoDB;

    public Connection() {
        client = new AmazonDynamoDBAsyncClient(new ProfileCredentialsProvider());
        dynamoDB = null;
    }

    public void setConnection(String endpoint) {
        client.setEndpoint(endpoint);
        dynamoDB = new DynamoDB(client);
    }

    public DynamoDB getConnection() {
        return dynamoDB;
    }
}
Then to call this static variable from main:
public class Main {
    Connection c;
    DynamoDB con;

    public Main() throws Exception {
        try {
            c = new Connection();
            c.setConnection("http://dynamodbserver:8000");
            con = c.getConnection();
            // Do stuff with the connection now
        } catch (Exception e) {
            System.err.println("Program failed:");
            System.err.println(e.getMessage());
        }
    }
}
Is this a good approach? What will happen if two users are requesting to use the static variable at the same time (I am using a framework called vertx so this program will run on a single thread, but there will be multiple instances of this program)?
You should not make the connection a static member. Also, the way you are setting the endpoint of your connection is:
- not thread safe
- may lead to race conditions
The endpoint should be set up at the time of AmazonDynamoDBAsyncClient construction, and then this async client should be used in the DynamoDB construction. Please also refer to the documentation.
Why don't you use the SDK provided by AWS for DynamoDB? It will take care of connection management for you in a thread-safe manner.
On a side note, if you still want to roll out your own solution for connection management, I would recommend that you use a dependency injection framework. I would highly recommend google-guice.
Here is sample code for DI through Guice:
public class DynamoDBProvider implements Provider<DynamoDB> {

    // Notice that the endpoint is set at the time of client construction,
    // and the get() method provides an instance of DynamoDB.
    // In another configuration class, we define that DynamoDB will be
    // served under singleton scope, so you will have a single instance.

    private final AmazonDynamoDBAsyncClient asyncClient;

    @Inject
    public DynamoDBProvider(
            ProfileCredentialsProvider creds,
            @Named("Endpoint") String endpoint) {
        this.asyncClient = new AmazonDynamoDBAsyncClient(creds);
        // The endpoint is configuration, so it is also injected into the provider.
        asyncClient.setEndpoint(endpoint);
    }

    public DynamoDB get() {
        return new DynamoDB(asyncClient);
    }
}
This is how you ensure your connection instance is served as a singleton:
public class DynamoDBModule extends AbstractModule {
    protected void configure() {
        bind(DynamoDB.class).toProvider(DynamoDBProvider.class).in(Singleton.class);
    }
}
Learning DI through Guice or any other framework will require some effort, but it will go a long way in making your code maintainable and unit testable. Please be aware that to utilize the benefits of DI, you will have to refactor your project so that all dependencies are injected.
static would work since it is shared by all Connection instances, but you can further improve the code by introducing the singleton design pattern for the Connection class, so that only one connection instance is created and used to serve all requests.
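If you do go the singleton route, the initialization-on-demand holder idiom is a simple, lock-free way to make it thread safe. This is a generic sketch; FakeDynamoDB is a placeholder standing in for the real client/DynamoDB pair, not an AWS class:

```java
// Placeholder standing in for the real AmazonDynamoDBAsyncClient/DynamoDB pair.
class FakeDynamoDB {}

class ConnectionHolder {
    private ConnectionHolder() {}

    // The JVM guarantees the nested Holder class is initialized at most once,
    // on first access, so this is thread-safe without explicit locking.
    private static class Holder {
        static final FakeDynamoDB INSTANCE = new FakeDynamoDB();
    }

    static FakeDynamoDB getConnection() {
        return Holder.INSTANCE;
    }
}
```

Every caller of ConnectionHolder.getConnection() then sees the same instance, which addresses the "two users at the same time" concern without synchronized blocks.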

Java EE7 websocket initialization - implement logic before first @OnOpen

I think this is more of a design-specific question than a direct coding issue.
I want to implement a websocket service which serves an updated dataset from a foreign http:// resource to the clients.
But I want to have the data available before the first client connects, so the @OnOpen annotation won't do.
In the HttpServlet world I would:
@Override
public void init() throws...
I could not figure out a suitable way of doing this using just JSR-356.
I tried a custom ServerEndpointConfig.Configurator, but that does not seem to give me access to a method similar to init() from HttpServlet.
So my question is:
Letting the websocket endpoint class extend HttpServlet gives me access to the init() method, and I can place my initialization logic there.
Is that a suitable way of solving my problem, or have I missed something in JSR-356 which does the job elegantly and without importing mostly unused servlet packages?
Thank you very much!
A class annotated with @ServerEndpoint("/myEndPoint") is instantiated each time a new connection is created via @OnOpen. It is not a static class nor a singleton (e.g. it does not behave like a Spring @Service).
I have a similar problem to yours: I need to make a web socket the observer of a Spring web service (don't ask; I agree with you that the architecture is the problem). In order to make it an observer, I have to add it to the observable class, but because the web socket lacks an initialization hook, I don't have a clear spot where to add the observer; adding it in the @OnOpen method would repeatedly add it on each new connection.
The only solution I found is a workaround. Usually a web socket class has a static Set of the peers connected to it; you need something similar for your initialization. Either use a static block or a static flag in the constructor. In my case I solved it with:
private static boolean observerFlag = false;
private static Set<Session> peers = Collections.synchronizedSet(new HashSet<Session>());

public MyWebSocket() {
    if (!observerFlag) {
        observable.addObserver(this);
        observerFlag = true;
    }
}
And to remove the observer:
@OnClose
public void onClose(Session peer) {
    peers.remove(peer);
    if (peers.isEmpty()) {
        observable.deleteObserver(this);
        observerFlag = false;
    }
}
I repeat that this is a workaround; I think there is a more elegant solution.
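One marginally safer spin on the same workaround is an AtomicBoolean flag, so that two connections opening at the same instant cannot both register. This sketch uses a placeholder FakeObservable instead of the real Spring service and omits the @OnClose deregistration for brevity:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Placeholder for the observable Spring service mentioned in the answer.
class FakeObservable {
    final List<Object> observers = new ArrayList<>();

    synchronized void addObserver(Object o) {
        observers.add(o);
    }
}

class MyWebSocket {
    static final FakeObservable observable = new FakeObservable();

    // compareAndSet flips the flag exactly once, even under contention,
    // unlike the plain boolean check-then-act in the original workaround.
    private static final AtomicBoolean registered = new AtomicBoolean(false);

    public MyWebSocket() {
        if (registered.compareAndSet(false, true)) {
            observable.addObserver(this);
        }
    }
}
```

Note that this registers whichever instance happens to be constructed first, which is as arbitrary as the original flag; it only closes the race window.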

GWT: invoke the same RPC-methods on the server-side as on the client-side

The question might seem stupid/trivial and might be, but I simply cannot understand how to achieve my goal. (Sorry if the title is misleading; I couldn't think of a better one.)
I have a webpage on an App Engine server which uses GWT. I have client code and server code. The client code can call RPC methods without any problem (my problem has nothing to do with the "gwt-client" at all).
I got the following classes:
// MyClassService.java - client package
@RemoteServiceRelativePath("myService")
public interface MyClassService extends RemoteService {
    void doSomething();
}

// MyClassServiceAsync.java - client package
public interface MyClassServiceAsync {
    void doSomething(AsyncCallback<Void> callback);
}

// MyClassServiceImpl.java - server package
public class MyClassServiceImpl extends RemoteServiceServlet implements MyClassService {
    @Override
    public void doSomething() {
        // does something
    }
}
A scenario and what I want to do:
I've got a remote client; in other words, a client who is not connecting through the page via the "GWT interface", but simply making GET and POST requests to a path on the server (from elsewhere). This remote client is not "using" GWT at all. The client connects through an HttpServlet; inside this servlet I want to reuse the RPC mechanics so that I don't have to rewrite the interfaces, which live on the client side and use client-dependent code (the implementation is already server-side).
To reuse the existing methods on the server side I could create an instance of MyClassServiceImpl and just use that. BUT, as you can see above, the methods are implemented as synchronous, since GWT-RPC automatically makes the calls asynchronous when using GWT-RPC.
How would I go about reusing MyClassServiceImpl on the server side and also getting the methods as asynchronous?
Also, if the approach I'm taking is wrong, please suggest some other solution. For example, one solution might be for the remote client to communicate directly with the RemoteServiceServlet instead of going through an HttpServlet, but I don't know if that's possible (and if it is, please tell me how)!
Thank you!
EDIT (thanks to some answers below I got some insight and will try to improve my question):
The server-side implementation of the methods is SYNCHRONOUS, meaning they block until results are returned. When invoking these methods from the gwt-client code, they are 'automatically' made ASYNCHRONOUS; one can call them by doing the following:
MyClassServiceAsync service = (MyClassServiceAsync) GWT.create(MyClassService.class);
ServiceDefTarget serviceDef = (ServiceDefTarget) service;
serviceDef.setServiceEntryPoint(GWT.getModuleBaseURL() + "myService");

service.doSomething(new AsyncCallback<Void>() {
    @Override
    public void onSuccess(Void result) {
        // do something when we know the server has finished doing stuff
    }

    @Override
    public void onFailure(Throwable caught) {
    }
});
As you can see from the code above, the doSomething method can take an AsyncCallback without there even being an implementation for that signature. This is what I wanted on the server side, so I wouldn't have to use threads or create a new implementation for "async usage". Sorry if I was unclear!
1) Any client can call MyClassServiceImpl.doSomething() with the current configuration. MyClassServiceImpl is a servlet and properly exposed. In order to achieve communication this way, the client must be able to "speak" the GWT dialect for data transportation. Google may provide libraries implementing this; I haven't used any, so I cannot make suggestions.
An example, proof-of-concept setup: Check the network communications with Firebug to get an idea of what is going on. Then try calling the service with curl.
2) If you do not want to use the GWT dialect, you can easily expose the same service as REST (JSON) or web services (SOAP). There are plenty of libraries, e.g. for the REST case RestEasy and Jersey. You do not mention any server-side frameworks (Spring? Guice? CDI?), so the example will be simplistic.
I'd suggest implementing your business method in a class independent of the transport method:
public class MyBusinessLogic {
    public void doSomething() {
        ...
    }
}
Then, the transport implementations use this business logic class, adding only transport-specific stuff (e.g. annotations):
GWT:
public class MyClassServiceImpl extends RemoteServiceServlet implements MyClassService {
    @Override
    public void doSomething() {
        MyBusinessLogic bean = ... // get it from IoC, new, whatever
        bean.doSomething();
    }
}
JAX-RS:
@Path("myService")
public class MyResource {
    @GET
    public void doSomething() {
        MyBusinessLogic bean = ... // get it from IoC, new, whatever
        bean.doSomething();
    }
}
So the transport endpoints are just shells for the real functionality, implemented in one place, the class MyBusinessLogic.
Is this a real example? Your method takes no arguments and returns no data.
Anyhow, you can create a new servlet and invoke it via a normal HTTP request. The servlet then just invokes the target method:
public class MyNewServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        MyBusinessLogic bean = ... // get it from IoC, new, whatever
        bean.doSomething();
    }
}

Multiple handlers in netty

I would like to make a kind of logging proxy in Netty. The goal is to be able to have a web browser make HTTP requests to a Netty server, have them passed on to a back-end web server, but also be able to take certain actions based on HTTP-specific things.
There are a couple of useful Netty examples: HexDumpProxy (which does the proxying part, agnostic to the protocol), and I've taken just a bit of code from HttpSnoopServerHandler.
My code looks like this right now:
HexDumpProxyInboundHandler can be found at http://docs.jboss.org/netty/3.2/xref/org/jboss/netty/example/proxy/HexDumpProxyInboundHandler.html
// in HexDumpProxyPipelineFactory
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline p = pipeline(); // Note the static import.
    p.addLast("handler", new HexDumpProxyInboundHandler(cf, remoteHost, remotePort));
    p.addLast("decoder", new HttpRequestDecoder());
    p.addLast("handler2", new HttpSnoopServerHandler());
    return p;
}
// HttpSnoopServerHandler
public class HttpSnoopServerHandler extends SimpleChannelUpstreamHandler {
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        HttpRequest request = (HttpRequest) e.getMessage();
        System.out.println(request.getUri());
        // going to do things based on the URI
    }
}
Unfortunately, messageReceived in HttpSnoopServerHandler never gets called; it seems HexDumpProxyInboundHandler consumes all the events.
How can I have two handlers, where one of them requires a decoder but the other doesn't (I'd rather have HexDumpProxy as it is, where it doesn't need to understand HTTP, it just proxies all connections, but my HttpSnoopHandler needs to have HttpRequestDecoder in front of it)?
I've not tried it, but you could extend HexDumpProxyInboundHandler and override messageReceived with something like:
super.messageReceived(ctx, e);
ctx.sendUpstream(e);
Alternatively, you could modify HexDumpProxyInboundHandler directly so that the last thing messageReceived does is call super.messageReceived(ctx, e).
This would only work for inbound data from the client. Data from the service you're proxying would still be passed through without your code seeing it.
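The forwarding idea in this answer, stripped of the Netty types, is simply "do your own work, then pass the event on to the next handler". A placeholder sketch (these are generic stand-ins, not Netty's actual classes):

```java
import java.util.ArrayList;
import java.util.List;

// Generic stand-ins for Netty's handler/pipeline; not the real API.
abstract class UpstreamHandler {
    UpstreamHandler next;

    abstract void messageReceived(String event);

    // Analogue of ctx.sendUpstream(e): hand the event to the next handler.
    void sendUpstream(String event) {
        if (next != null) {
            next.messageReceived(event);
        }
    }
}

class ProxyHandler extends UpstreamHandler {
    final List<String> proxied = new ArrayList<>();

    void messageReceived(String event) {
        proxied.add(event);   // the proxying work (HexDumpProxy's job)
        sendUpstream(event);  // then let later handlers see the event too
    }
}

class SnoopHandler extends UpstreamHandler {
    final List<String> seen = new ArrayList<>();

    void messageReceived(String event) {
        seen.add(event);      // HTTP-specific inspection
    }
}
```

The key point is the sendUpstream call at the end of ProxyHandler.messageReceived: without it, later handlers in the chain never see the event, which is exactly the symptom described in the question.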
