Netty - How to pass information between handlers in the same pipeline - java

I would like to create a pipeline of handlers such as:
public ChannelPipeline getPipeline() throws Exception
{
    return Channels.pipeline(
            new ObjectEncoder(),
            new ObjectDecoder(),
            new AuthenticationServerHandler(),
            new BusinessLogicServerHandler());
}
The key here is that I'd like the AuthenticationServerHandler to be able to pass the login information to the BusinessLogicServerHandler.
I do understand that you can use an Attachment; however, that only stores the information for that handler, so the other handlers in the pipeline cannot access it. I also noticed there is something called ChannelLocal, which might do the trick, but I cannot find any real information on how to use it. All I've seen is people creating a static instance of it, but how do you retrieve and access the info in another handler? Assuming that's even the correct method.
My question is: how do you pass information between handlers in the same pipeline? In the example above, how do I pass the login credentials from the AuthenticationServerHandler to the BusinessLogicServerHandler?

ChannelLocal is the way to go at the moment. Just create a static instance somewhere, then access it from within your handlers by passing the Channel to the set/get methods. This way you can share per-channel state between your handlers.
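For illustration, a minimal sketch of that approach might look like this (Netty 3 API; LoginHolder and the String payload are placeholder names for wherever you keep the login information):

// import org.jboss.netty.channel.ChannelLocal;
public final class LoginHolder {
    // one shared static instance, internally keyed by Channel
    public static final ChannelLocal<String> LOGIN = new ChannelLocal<String>();
}

// In AuthenticationServerHandler, once the user is authenticated:
//     LoginHolder.LOGIN.set(ctx.getChannel(), username);
// In BusinessLogicServerHandler, later in the same pipeline:
//     String username = LoginHolder.LOGIN.get(ctx.getChannel());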

I wasn't a fan of the ChannelLocal implementation and its internal static map, so what I ended up doing was putting my object on the Channel's attachment for now:
ctx.getChannel().setAttachment(myobj);
Then I make "myobj" basically a context POJO that contains all the information gathered about the request so far.
public class RequestContext {
    private String foo = "";

    public String getFoo() {
        return foo;
    }

    public void setFoo(String foo) {
        this.foo = foo;
    }
}
RequestContext reqCtx = new RequestContext();
reqCtx.setFoo("Bar");
ctx.getChannel().setAttachment(reqCtx);
reqCtx = (RequestContext)ctx.getChannel().getAttachment();
It's not elegant, but it works...
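For context, a rough sketch of where those calls might live in the handlers (Netty 3 API; the handler names follow the question, the rest is illustrative):

public class AuthenticationServerHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        RequestContext reqCtx = new RequestContext();
        reqCtx.setFoo("Bar"); // e.g. record the authenticated user here
        ctx.getChannel().setAttachment(reqCtx);
        ctx.sendUpstream(e); // hand the event to the next handler in the pipeline
    }
}

// BusinessLogicServerHandler can then read it back in its own messageReceived():
//     RequestContext reqCtx = (RequestContext) ctx.getChannel().getAttachment();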

I pass information from one handler to the next by composing each channel's pipeline from dedicated handler instances and having those handlers hold references to each other within the pipeline. The information itself is then passed the old-fashioned way, through plain method calls, very simply and without any problem. A sketch of what that can look like follows.
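Based on the question's pipeline, this is roughly the idea (the constructor parameter and the setCredentials method are illustrative, not from the original post):

public ChannelPipeline getPipeline() throws Exception
{
    // dedicated instances for this channel, wired to each other
    BusinessLogicServerHandler business = new BusinessLogicServerHandler();
    AuthenticationServerHandler auth = new AuthenticationServerHandler(business);
    return Channels.pipeline(
            new ObjectEncoder(),
            new ObjectDecoder(),
            auth,
            business);
}

// Inside AuthenticationServerHandler, once login succeeds:
//     business.setCredentials(credentials); // a plain method call, "the old way"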


Bridge Pattern applied to messaging API

I want to apply the bridge pattern in a project: basically, I want the project to be able to trigger requests towards multiple different channels. For example, I want to create messages which can be SMSs, e-mails, or Viber messages. Obviously each of them is a message, but each has some different details, which is why I wanted to apply the Bridge pattern there.
Is the bridge pattern the right one? If yes, how can it be implemented? If another pattern should be used instead, please let me know how to use it in this context.
Thank you!
DISCLAIMER: This example is built from my understanding of the bridge pattern. If you feel I'm not giving an appropriate definition, please let me know and I will happily remove it.
The bridge pattern is a good guess, but not for your message objects. There you can simply use polymorphism and create an abstract Message class, which each of your specific message types extends.
public abstract class Message {
    /* ... */
}

public class SmsMessage extends Message {
    /* ... */
}
Where the bridge pattern could be useful is when you actually want to send the message. Chances are you are going to need a different protocol for each kind of message, so implementing a bridge pattern is a good idea.
The benefit of the bridge pattern is that it generalizes a family of classes, so that if you need to add a new member of that family, the code that uses them doesn't change.
Let's say your sending logic is tangled into a 3000-line class, and each time you want to send a message you need to check what type of message it is in order to send it via the correct protocol. Adding a new message type, like FlyingPigeonMessage, would be a real pain, since you would need to update every piece of code that checks which message to send.
On the other hand, if your 3000-line class never knows what TYPE of message it has, only that it is a MESSAGE, then adding a new type is a walk in the park. With that in mind, here is a simple implementation of the bridge pattern.
First, we need to define our bridge. In our case, it can be an interface that declares a simple send method.
public interface IMessageProvider {
    public void send(Message message);
}
We then need to create a different implementation of that interface for each type of message. Here I'm only going to build the SMS class, because this is just an example.
public class SmsMessageProvider implements IMessageProvider {
    @Override
    public void send(Message message) {
        /* call an SMS service or something... */
    }
}
Once we have multiple providers, we need a way to instantiate them depending on a given condition. I like to use factories for that: you pass in an object, and depending on its type you get a specific implementation.
/**
 * Creates message providers.
 */
public class MessageProviderFactory {
    public static IMessageProvider getProviderForMessage(Message message) {
        // we return an implementation of IMessageProvider depending on the type of message
        if (message instanceof SmsMessage) {
            return new SmsMessageProvider();
        } else {
            // ... handle the other types of message here
            throw new IllegalArgumentException("Unsupported message type");
        }
    }
}
Now we have a bridge interface, we have implementations, and we have a factory. All that is left is to send the message. The beauty of the bridge pattern is that the function calling the send method doesn't need to know exactly what kind of object it has, which makes the code much easier to maintain.
public class Application {
    public static void main(String[] args) {
        Message message = null;
        boolean isSendingSMS = true; // user prefers SMS over email
        // we build the message depending on the config
        if (isSendingSMS) {
            message = new SmsMessage("my awesome message");
        } else {
            /* ... */
        }
        // will send the message we built
        Application.sendMessage(message);
    }

    public static void sendMessage(Message message) {
        // for a given message, we retrieve the appropriate provider
        IMessageProvider provider = MessageProviderFactory.getProviderForMessage(message);
        // using this provider, we send the message
        provider.send(message);
    }
}
In the end, we send the message via the correct provider without ever having to know which provider it was. We used the bridge pattern for the sending side and simple polymorphism for our message objects.
NOTE: I haven't done Java in a long time, so this code might not be syntactically perfect, but I hope it provides a good example.

Ways to pass additional data to Custom RevisionEntity in Hibernate Envers?

It's a RESTful web app. I am using Hibernate Envers to store historical data. Along with the revision number and timestamp, I also need to store other details (for example: IP address and authenticated user). Envers provides multiple ways to have a custom revision entity, which is awesome. The problem I am facing is in setting the custom data on the revision entity.
@RevisionEntity( MyCustomRevisionListener.class )
public class MyCustomRevisionEntity extends DefaultRevisionEntity {
    private String userName;
    private String ip;
    //Accessors
}

public class MyCustomRevisionListener implements RevisionListener {
    public void newRevision( Object revisionEntity ) {
        MyCustomRevisionEntity customRevisionEntity = ( MyCustomRevisionEntity ) revisionEntity;
        //Here I need the userName and IP address passed as arguments somehow, so that I can set them on the revision entity.
    }
}
Since the newRevision() method does not accept any additional arguments, I cannot pass my custom data (like the username and IP) to it. How can I do that?
Envers also provides another approach:
An alternative method to using the org.hibernate.envers.RevisionListener is to instead call the getCurrentRevision( Class revisionEntityClass, boolean persist ) method of the org.hibernate.envers.AuditReader interface to obtain the current revision, and fill it with desired information.
So using the above approach, I'll have to do something like this:
That is, change my current DAO method from:
public void persist(SomeEntity entity) {
    ...
    entityManager.persist(entity);
    ...
}
to
public void persist(SomeEntity entity, String userName, String ip) {
    ...
    //Do the intended work
    entityManager.persist(entity);
    //Do the additional work
    AuditReader reader = AuditReaderFactory.get(entityManager);
    MyCustomRevisionEntity revision = reader.getCurrentRevision(MyCustomRevisionEntity.class, false);
    revision.setUserName(userName);
    revision.setIp(ip);
}
I don't feel very comfortable with this approach, as keeping audit data seems like a cross-cutting concern to me. Moreover, I obtain the userName, IP, and other data from the HTTP request object, so all that data would have to flow down from the entry point of the application (the controller) to the lowest layer (the DAO layer).
Is there any other way I can achieve this? I am using Spring.
I am imagining something like Spring keeping track of the 'stack' to which a particular method invocation belongs, so that when newRevision() is invoked, I know which invocation at the entry point led to it, and can somehow obtain the arguments passed to the first method of the call stack.
One good way to do this would be to leverage a ThreadLocal variable.
As an example, Spring Security has a filter that initializes a thread-local variable stored in SecurityContextHolder, and you can then access this data from that specific thread simply by doing something like:
SecurityContext ctx = SecurityContextHolder.getContext();
Authentication authentication = ctx.getAuthentication();
So imagine an additional interceptor that your web framework calls which either adds the extra information to the Spring Security context (perhaps in a custom user details object, if you are using Spring Security) or populates your own holder & context objects with the information the listener needs.
Then it becomes a simple:
public class MyRevisionEntityListener implements RevisionListener {
    @Override
    public void newRevision(Object revisionEntity) {
        // If you use Spring Security, you could read from SecurityContextHolder here instead.
        final UserContext userContext = UserContextHolder.getUserContext();
        MyRevisionEntity mre = MyRevisionEntity.class.cast( revisionEntity );
        mre.setIpAddress( userContext.getIpAddress() );
        mre.setUserName( userContext.getUserName() );
    }
}
This feels like the cleanest approach to me.
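If you roll your own holder rather than relying on Spring Security, a minimal sketch might look like this (UserContextHolder and UserContext are hypothetical names; a servlet filter would populate the holder at the start of each request and clear it in a finally block):

public class UserContext {
    private final String userName;
    private final String ipAddress;

    public UserContext(String userName, String ipAddress) {
        this.userName = userName;
        this.ipAddress = ipAddress;
    }

    public String getUserName() { return userName; }
    public String getIpAddress() { return ipAddress; }
}

public final class UserContextHolder {
    private static final ThreadLocal<UserContext> CONTEXT = new ThreadLocal<UserContext>();

    public static void setUserContext(UserContext ctx) { CONTEXT.set(ctx); }
    public static UserContext getUserContext() { return CONTEXT.get(); }
    public static void clear() { CONTEXT.remove(); } // call once the request completes
}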
It is worth noting that the other API, getCurrentRevision( Class revisionEntityClass, boolean persist ), was deprecated as of Hibernate 5.2 and is scheduled for removal in 6.0. While an alternative means may be introduced, the intended way to perform this type of logic is with a RevisionListener.

How do I shutdown and reconfigure an AsyncHttpClient that is using NettyAsyncHttpProvider

I'm constructing an AsyncHttpClient like this:
public AsyncHttpClient getAsyncHttpClient() {
    AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
            .setProxyServer(makeProxyServer())
            .setRequestTimeoutInMs((int) Duration.create(ASYNC_HTTP_REQUEST_TIMEOUT_MIN, TimeUnit.MINUTES).toMillis())
            .build();
    return new AsyncHttpClient(new NettyAsyncHttpProvider(config), config);
}
This gets called once at startup, and the return value is then passed around and used in various places. makeProxyServer() is my own function that takes my proxy settings and returns a ProxyServer object. What I need is to be able to change the proxy server settings and then recreate the AsyncHttpClient object, but I don't know how to shut the old one down cleanly. A bit of searching leads me to believe that close() isn't graceful. I'm also worried about spinning up a whole new executor and set of threads every time the proxy settings change; that won't happen often, but my application is very long-running.
I know I can use RequestBuilder.setProxyServer() for each request, but I'd like to have it set in one spot so that all callers of my asyncHttpClient instance obey the system-wide proxy settings without each developer having to remember to do it.
What's the right way to re-configure or teardown and rebuild a Netty-based AsyncHttpClient?
The problem with using AsyncHttpClient.close() is that it shuts down the thread pool executor used by the provider, and after that there is no way to reuse the client without rebuilding it, because, per the documentation, an executor instance cannot be reused once it is shut down. So there is no way around rebuilding the client if you go that route (unless you implement your own ExecutorService with different shutdown logic, but that is a long way to go, IMHO).
However, looking into the implementation of NettyAsyncHttpProvider, I can see that it stores a reference to the given AsyncHttpClientConfig instance and calls its getProxyServerSelector() to obtain the proxy settings for every new NettyAsyncHttpProvider.execute(Request...) invocation (i.e. for every request executed by the AsyncHttpClient).
So, if we could make getProxyServerSelector() return a configurable instance of ProxyServerSelector, that would do the trick.
Unfortunately, AsyncHttpClientConfig is designed as a read-only container, instantiated by AsyncHttpClientConfig.Builder.
To overcome this limitation, we have to hack it using, say, a "wrap/delegate" approach:
Create a new class derived from AsyncHttpClientConfig. The class should wrap a given AsyncHttpClientConfig instance and delegate the AsyncHttpClientConfig getters to it.
To be able to return the proxy selector we want at any given point in time, we make that one setting mutable in the wrapper class and expose a setter for it.
Example:
public class MyAsyncHttpClientConfig extends AsyncHttpClientConfig
{
    private final AsyncHttpClientConfig config;
    private ProxyServerSelector proxyServerSelector;

    public MyAsyncHttpClientConfig(AsyncHttpClientConfig config)
    {
        this.config = config;
    }

    @Override
    public int getMaxTotalConnections() { return config.getMaxTotalConnections(); }

    @Override
    public int getMaxConnectionPerHost() { return config.getMaxConnectionPerHost(); }

    // delegate all the other getters except getProxyServerSelector()
    ...

    @Override
    public ProxyServerSelector getProxyServerSelector()
    {
        return proxyServerSelector == null
                ? config.getProxyServerSelector()
                : proxyServerSelector;
    }

    public void setProxyServerSelector(ProxyServerSelector proxyServerSelector)
    {
        this.proxyServerSelector = proxyServerSelector;
    }
}
Now, in your example, wrap your AsyncHttpClientConfig instance in this new wrapper and use it to configure the AsyncHttpClient:
Example:
MyAsyncHttpClientConfig myConfig = new MyAsyncHttpClientConfig(config);
return new AsyncHttpClient(new NettyAsyncHttpProvider(myConfig), myConfig);
Whenever you invoke myConfig.setProxyServerSelector(newSelector), new requests executed by the NettyAsyncHttpProvider instance in your client will use the new proxy server settings.
A few hints/warnings:
This approach relies on the internal implementation of NettyAsyncHttpProvider, so make your own judgement about maintainability, your upgrade strategy for future library versions, and so on. You can always review the source code before upgrading to a new version. At this point, I personally think it is unlikely to change enough to invalidate this implementation.
You can get a ProxyServerSelector for a ProxyServer by using com.ning.http.util.ProxyUtils.createProxyServerSelector(proxyServer); that's exactly what AsyncHttpClientConfig.Builder does (see the snippet after these hints).
The given example has no synchronization logic for accessing proxyServerSelector; you may want to add some, as your application logic requires.
Maybe it is a good idea to submit a feature request for AsyncHttpClient to support a "configuration factory" for the AsyncHttpProvider, so all these complications would vanish :-)
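Putting the second hint to use, swapping the proxy at runtime could then look something like this (host and port are placeholders):

// build a selector for the new settings and hand it to the wrapper config
ProxyServer newProxy = new ProxyServer("proxy.example.com", 8080);
myConfig.setProxyServerSelector(ProxyUtils.createProxyServerSelector(newProxy));
// requests executed from now on will use the new proxy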
You should hold a RequestHandle instance for each of your unfinished requests. When you want to shut down, you can loop through them and call isFinished() on each until they are all done. Then you know you can safely close the client, and no pending requests will be killed.
Once it's closed, just build a new one; don't try to reuse the existing one. If you have references to it lying around, change them to reference a factory that returns the current instance.
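A rough sketch of that drain-then-close sequence, assuming you collected RequestHandle objects with an isFinished() check as described above (the polling interval is arbitrary):

// wait until every tracked request has finished, then close the client
void shutdownGracefully(AsyncHttpClient client, java.util.List<RequestHandle> handles)
        throws InterruptedException {
    boolean pending = true;
    while (pending) {
        pending = false;
        for (RequestHandle handle : handles) {
            if (!handle.isFinished()) { pending = true; break; }
        }
        if (pending) Thread.sleep(100); // back off briefly before polling again
    }
    client.close(); // safe now: no in-flight requests will be killed
}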

Using Stripes, what is the best pattern for Show/Update/etc Action Beans?

I have been wrestling with this problem for a while. I would like to use the same Stripes ActionBean for the show and update actions. However, I have not been able to figure out how to do this in a clean way that allows reliable binding, validation, and verification of object ownership by the current user.
For example, let's say our action bean takes a postingId. The posting belongs to a user, who is logged in. We might have something like this:
@UrlBinding("/posting/{postingId}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
Now, for the show action, we could define:
private int postingId; // assume the parameter in @UrlBinding above was renamed
private Posting posting;
And now use @After(stages = LifecycleStage.BindingAndValidation) to fetch the Posting. Our @After function can verify that the currently logged-in user owns the posting. We must use @After, not @Before, because the postingId won't have been bound to the parameter beforehand.
However, for an update action, you want to bind the Posting object to the posting variable using @Before, not @After, so that the submitted form entries get applied on top of the existing Posting object instead of onto an empty stub.
A custom TypeConverter<T> would work well here, but because the session isn't available from the TypeConverter interface, it is difficult to validate ownership of the object during binding.
The only solution I can see is to use two separate action beans, one for show and one for update. If you do this, however, the <stripes:form> tag and its downstream tags won't correctly populate the values of the form, because the beanclass or action attributes must map back to the same ActionBean.
As far as I can see, the Stripes model only holds together when manipulating simple (non-POJO) parameters. In any other case, you run into a catch-22 between binding your object from your data store and overwriting it with the updates sent from the client.
I've got to be missing something. What is the best practice from experienced Stripes users?
In my opinion, authorisation is orthogonal to object hydration. By this I mean that you should separate the concern of object hydration (in this case, taking a postingId and turning it into a Posting) from determining whether a user is authorised to perform operations on that object (show, update, delete, etc.).
For object hydration, I use a TypeConverter<T>, and I hydrate the object without regard to the session user (a sketch of such a converter follows below). Then inside my ActionBean I have a guard around the setter, thus...
public void setPosting(Posting posting) {
    if (accessible(posting)) this.posting = posting;
}
where accessible(posting) looks something like this...
private boolean accessible(Posting posting) {
    return authorisationChecker.isAuthorised(whoAmI(), posting);
}
Then your show() event method would look like this...
public Resolution show() {
    if (posting == null) return NOT_FOUND;
    return new ForwardResolution("/WEB-INF/jsp/posting.jsp");
}
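For reference, the hydration side could be a converter along these lines (a sketch only: PostingService and findById are assumed names, and error handling is kept minimal; imports from net.sourceforge.stripes.validation, java.util.Collection, and java.util.Locale):

public class PostingTypeConverter implements TypeConverter<Posting> {
    private final PostingService postingService = new PostingService(); // or however you obtain it

    @Override
    public void setLocale(Locale locale) {
        // the locale is not needed for an id-based lookup
    }

    @Override
    public Posting convert(String input, Class<? extends Posting> targetType,
                           Collection<ValidationError> errors) {
        try {
            // hydrate without regard to the session user; the ActionBean setter guards access
            return postingService.findById(Integer.parseInt(input));
        } catch (NumberFormatException e) {
            errors.add(new SimpleError("{0} is not a valid posting id", input));
            return null;
        }
    }
}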
Separately, when I use Stripes I often have multiple events (like "show", or "update") within the same Stripes ActionBean. For me it makes sense to group operations (verbs) around a related noun.
Using clean URLs, your ActionBean annotations would look like this...
@UrlBinding("/posting/{$event}/{posting}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
...where {$event} is the name of your event method (i.e. "show" or "update"). Note that I am using {posting}, not {postingId}.
For completeness, here is what your update() event method might look like...
public Resolution update() {
    if (posting == null) throw new UnauthorisedAccessException();
    postingService.saveOrUpdate(posting);
    message("posting.save.confirmation");
    return new RedirectResolution(PostingsAction.class);
}

Design pattern to process events

I am trying to understand the most suitable (Java) design pattern for processing a series of messages. Each message includes a "type" which determines how the data contained in the message should be processed.
I have been considering the Command pattern, but am struggling to understand the roles/relevance of the specific Command classes. So far, I have determined that the receiver will contain the code that implements the message-processing methods. Concrete commands would be instantiated based on the message type. However, I have no idea how the actual message data should be passed. Should it be passed to the receiver's constructor, with the appropriate receiver methods being called by the concrete command's execute method? Or should the message data be passed in the invocations of the receiver's action methods?
I am fairly new to all of this, so any guidance would be appreciated.
This may help:
public interface Command {
    public void execute(String msg);
}

public class A01Command implements Command {
    Receiver rec = new Receiver();

    public void execute(String msg) {
        rec.admit(msg);
    }
}

public class CommandFactory {
    private CommandFactory() { }

    public static Command getInstance(String type) {
        if (type.equals("A01")) return new A01Command();
        else if (type.equals("A02")) return new A02Command();
        else {
            return null;
        }
    }
}
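To tie it together, the receiving code only needs the message type and the factory; messageType and messageBody below are placeholders for however your messages arrive:

// look up the right command for this message's type and run it
Command cmd = CommandFactory.getInstance(messageType);
if (cmd != null) {
    cmd.execute(messageBody); // the command forwards the data to its receiver
}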
OK, your title says a pattern for handling events. If you are talking about an actual event framework, then the Observer/Observable pattern comes to mind. It works when you want to fire an event of some type and then have event handlers pick up the processing of the events.
It seems like your problem is in the implementation details of the command pattern. Can you post some code that shows where you are stuck?
Note that patterns are not mutually exclusive; you could use the command pattern in the context of the Observer pattern.
EDIT -- based on your code, you should:
1) make the CommandFactory static;
2) pass the type to the getCommand method, which should also be static;
3) drop the reflection -- you can simply do
if (type.equals("type1")) return new Command1();
else if (type.equals("type2")) return new Command2();
...
I'm not saying you can't use reflection; I'm saying it overcomplicates what you are trying to do. Plus, the way you are doing it binds the String that represents the message type to the implementation details of the command class names, which seems unnecessary.
You are on the right track; a Command pattern is an appropriate solution to the outlined problem.
To answer your question: you would have your CommandFactory instantiate an appropriate Command instance based on the data differentiator (in this case, some data in your message). You would then invoke a method on the Command instance, passing in your message. It is common (best) practice to call this method execute(...), but you can call it whatever you want.
You may want to take a look at the Jakarta Digester project (for processing XML). It has a SAX implementation, which is an event-based API, as explained here: http://www.saxproject.org/event.html. It's a short explanation, but it could serve as a starting point for you.
