Akka constructor wiring via Spring - Java

I have read these links:
Akka and spring configuration
http://doc.akka.io/docs/akka/2.4.1/java/untyped-actors.html
Spring is no longer available as a module in Akka 2.4.1, but it can be created and used as an extension. I also understand that having bean/actor creation managed by a DI framework like Spring can fundamentally conflict with the Akka actor parent-child/supervision model. So I still don't understand how to wire these together.
I have a set of actor classes and I have written them to be generic; for example, properties like "listener", "name", "messageQueueName" etc. are configurable. The link above tells me to provide convenience factory methods and then create the actor with a code snippet like
system.actorOf(DemoActor.props(42), "demo");
It is this line that I do not like. What I want to write in my application.conf is something like
deployment {
  /demo {
    magicNumber : 42
  }
}
and then everywhere in my application I simply want to look up the actor (I am okay with using the actorSelection method).
Am I doing something wrong?

I think you are on the wrong path there; you should have a look at these tutorials:
http://www.lightbend.com/activator/template/akka-java-spring
https://myshittycode.com/2015/08/26/akka-spring-integration/
and for passing arguments via constructors check my answer to this question:
Custom Spring Bean Parameters
The configuration file is used for Akka-specific parameters (like specifying the dispatcher, mailbox, etc.).
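That said, if the goal is simply to keep the value 42 out of the call site, one Spring-free option is to put it under your own key in application.conf and read it through the ActorSystem's Typesafe Config when the actor is created. A minimal sketch, assuming a demo.magicNumber key (your own layout, not anything Akka requires) and the DemoActor.props factory from the linked docs:

// Minimal sketch: read a custom value from application.conf and pass it to the
// actor's Props factory.
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import com.typesafe.config.Config;

public class Bootstrap {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("app");
        // system.settings().config() is the merged Typesafe Config, including application.conf
        Config config = system.settings().config();
        int magicNumber = config.getInt("demo.magicNumber");
        ActorRef demo = system.actorOf(DemoActor.props(magicNumber), "demo");
    }
}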

Related

How to force developers to write a custom annotation before each API in Spring Boot

How can we force developers to write our custom annotation on REST API methods?
Example:
We developed an annotation called ValidatePermission.
What we need is to display a runtime error for a developer who is missing the @ValidatePermission annotation on an API when they write a new API method:
@ValidatePermission
@GetMapping("/details")
@PreAuthorize("hasAuthority('902')")
public ResponseEntity<CustDtlsInqDto> getCustomerDetails(@CurrentUser UserPrincipal currentUser,
        @RequestParam(name = "poiNumber", required = false) String poiNumber,
        @RequestParam(name = "cif", required = false) String cif) {
    return ResponseEntity.ok(customerService.getCustomerDetailsByPoiOrCif(currentUser.getId(), poiNumber, cif));
}
Annotation usage cannot be forced in any way before or during compilation (at least I am not aware of any technique, feel free to correct me).
The only way to go is to perform a check during the unit testing phase. Simply write a unit test that scans through the REST API definition beans and their public (or annotated) methods and uses the Reflection API to check whether an annotation from a particular category (implementation details are up to you) is present on the methods.
GitHub Gist: Find all annotated classes in a package using Spring
Baeldung: A Guide to the Reflections Library
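For illustration, here is a sketch of such a test. It assumes JUnit 5, a Spring Boot test context, that your @ValidatePermission annotation has runtime retention, and that every request-mapped method of a @RestController must carry it; adjust the scanning and the rule to your own conventions.

// Sketch: fail the tests if any request-mapped controller method lacks @ValidatePermission.
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.ApplicationContext;
import org.springframework.core.annotation.AnnotatedElementUtils;
import org.springframework.util.ClassUtils;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootTest
class ValidatePermissionPresenceTest {

    @Autowired
    private ApplicationContext context;

    @Test
    void everyMappedControllerMethodIsAnnotated() {
        List<String> offenders = new ArrayList<>();
        for (Object controller : context.getBeansWithAnnotation(RestController.class).values()) {
            // Unwrap possible CGLIB proxies so the real method annotations are visible
            Class<?> controllerClass = ClassUtils.getUserClass(controller);
            for (Method method : controllerClass.getDeclaredMethods()) {
                // @GetMapping, @PostMapping, ... are all meta-annotated with @RequestMapping
                boolean mapped = AnnotatedElementUtils.hasAnnotation(method, RequestMapping.class);
                boolean secured = AnnotatedElementUtils.hasAnnotation(method, ValidatePermission.class);
                if (mapped && !secured) {
                    offenders.add(controllerClass.getSimpleName() + "." + method.getName());
                }
            }
        }
        assertTrue(offenders.isEmpty(), "Missing @ValidatePermission on: " + offenders);
    }
}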
Something looks weird to me in this approach.
So you say:
...display a runtime error for a developer who is missing the @ValidatePermission annotation on an API
Based on this phrase, let me suggest an alternative:
So the developer who runs the project locally (during a debugging session, or maybe the tests) should see an error if he/she didn't put the annotation on the methods of the rest controller, right?
If so, why do you need the developers to put this annotation at all?
The main idea of my suggestion is: why not let Spring do it for you automatically?
You could implement some kind of aspect or, if you don't want to use Spring AOP and prefer 'raw' plain Spring, a BeanPostProcessor that 'wraps' all the methods of classes annotated with @RestController (by creating a runtime proxy) and, before running a controller method, executes the logic that the annotation was supposed to provide.
In the case of Web MVC, another approach is to implement an interceptor that will be invoked automatically by the Spring MVC engine; there you'll be able to execute any custom logic you want, and you'll also be able to inject other beans (like auxiliary services) into the interceptor.
Read this article in case you're not familiar with these interceptors; you'll need the preHandle method as far as I understand.
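To make the interceptor idea concrete, here is a minimal sketch (assuming Spring 5+, where HandlerInterceptor's other methods have defaults, and your existing @ValidatePermission annotation). It fails loudly when the annotation is missing; alternatively you could run the permission check itself here so nobody has to remember the annotation at all.

// Sketch: check every controller handler method for @ValidatePermission before it runs.
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.HandlerInterceptor;

public class ValidatePermissionInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        if (handler instanceof HandlerMethod) {
            HandlerMethod handlerMethod = (HandlerMethod) handler;
            if (handlerMethod.getMethodAnnotation(ValidatePermission.class) == null) {
                // Surface the mistake immediately instead of silently serving the endpoint
                throw new IllegalStateException("Missing @ValidatePermission on " + handlerMethod.getMethod());
            }
        }
        return true;
    }
}

Register it through a WebMvcConfigurer's addInterceptors(...) method, optionally restricted to your API path patterns.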

Shiro throws an UnavailableSecurityManagerException in custom JSON serialiser

I'm working on a REST API of a TomEE 7 based web app, which uses Shiro 1.3.2 for security. When an API request comes in, a SecurityManager and a Subject are created, and the latter is bound to a SubjectThreadState. I can call SecurityUtils.getSubject() anywhere in the endpoint code and the subject is always available.
However, problems arise when I try to do the same inside my custom JSON serialiser. It only serialises specific fields in some classes, so I register it on a per-field basis using this annotation:
@JsonSerialize(using = MySerialiser.class)
Long myRelatedItemId;
I wrote my serialiser based on the example code on this page under "2.7. @JsonSerialize". The serialiser needs to perform a cache lookup, and for that it has to have a Shiro subject. There is none because, thanks to the annotation above, I don't call the serialiser manually; instead Jersey calls it. This exception gets thrown when I call SecurityUtils.getSubject() from the serialiser code:
org.apache.shiro.UnavailableSecurityManagerException: No SecurityManager accessible to the calling code, either bound to the org.apache.shiro.util.ThreadContext or as a vm static singleton. This is an invalid application configuration.
at org.apache.shiro.SecurityUtils.getSecurityManager(SecurityUtils.java:123)
at org.apache.shiro.subject.Subject$Builder.<init>(Subject.java:627)
at org.apache.shiro.SecurityUtils.getSubject(SecurityUtils.java:56)
I have confirmed that everything works if I call something like ObjectMapper().writeValueAsString() manually from the API endpoint code. However, that is definitely not the proper way to do it, because then the endpoint would effectively send and receive strings instead of the objects they are meant to handle.
I don't understand much about the inner workings of Shiro or Jackson, but it seems like the serialisation is being performed inside another thread, where Shiro's SubjectThreadState doesn't exist. Although if threading really is the cause, then I cannot see why Thread.currentThread().getName() returns the same value both inside and outside the serialiser, as does Thread.currentThread().getId().
I have tried a vast number of things to no avail, including:
Upgrading to Shiro 1.4.0.
Upgrading Jackson from 2.7.5 to 2.9.7.
Saving the SecurityManager instance that is created at the start of the API call inside a static ThreadLocal variable of the serialiser class.
Writing my own implementation of MessageBodyWriter which, not surprisingly, is called in exactly the same fashion.
Setting the staticSecurityManagerEnabled parameter to true in the ShiroFilter configuration in my web.xml.
Can anyone suggest how I could make the SecurityManager (or Subject) visible to the serialiser, when it's running in a thread not started by my code (or, as far as I can tell, started by Jersey and running in parallel)? Thanks in advance.
Update:
This stack trace was taken inside the serialiser:
<mypackage>.MySerializer.serialize()
com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField()
com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields()
com.fasterxml.jackson.databind.ser.BeanSerializer.serialize()
com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue()
com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize()
com.fasterxml.jackson.databind.ObjectWriter.writeValue()
com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo()
org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo()
org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo()
org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed()
org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo()
org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed()
org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo()
org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed()
org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo()
org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse()
org.glassfish.jersey.server.ServerRuntime$Responder.processResponse()
org.glassfish.jersey.server.ServerRuntime$Responder.process()
org.glassfish.jersey.server.ServerRuntime$2.run()
This one was taken in our interceptor class where the Subject is created and bound:
<mypackage>.MySecurityInterceptor.createSession()
sun.reflect.NativeMethodAccessorImpl.invoke0()
sun.reflect.NativeMethodAccessorImpl.invoke()
sun.reflect.DelegatingMethodAccessorImpl.invoke()
java.lang.reflect.Method.invoke()
org.apache.openejb.core.interceptor.ReflectionInvocationContext$Invocation.invoke()
org.apache.openejb.core.interceptor.ReflectionInvocationContext.proceed()
org.apache.openejb.monitoring.StatsInterceptor.record()
org.apache.openejb.monitoring.StatsInterceptor.invoke()
sun.reflect.GeneratedMethodAccessor111.invoke()
sun.reflect.DelegatingMethodAccessorImpl.invoke()
java.lang.reflect.Method.invoke()
org.apache.openejb.core.interceptor.ReflectionInvocationContext$Invocation.invoke()
org.apache.openejb.core.interceptor.ReflectionInvocationContext.proceed()
org.apache.openejb.core.interceptor.InterceptorStack.invoke()
org.apache.openejb.core.stateless.StatelessContainer._invoke()
org.apache.openejb.core.stateless.StatelessContainer.invoke()
org.apache.openejb.core.ivm.EjbObjectProxyHandler.synchronizedBusinessMethod()
org.apache.openejb.core.ivm.EjbObjectProxyHandler.businessMethod()
org.apache.openejb.core.ivm.EjbObjectProxyHandler._invoke()
org.apache.openejb.core.ivm.BaseEjbProxyHandler.invoke()
com.sun.proxy.$Proxy279.getEntity()
org.openapitools.api.impl.MyApiServiceImpl.getEntity()
org.openapitools.api.MyApi.getEntity()
sun.reflect.NativeMethodAccessorImpl.invoke0()
sun.reflect.NativeMethodAccessorImpl.invoke()
sun.reflect.DelegatingMethodAccessorImpl.invoke()
java.lang.reflect.Method.invoke()
org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke()
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run()
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke()
org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch()
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch()
org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke()
org.glassfish.jersey.server.model.ResourceMethodInvoker.apply()
org.glassfish.jersey.server.model.ResourceMethodInvoker.apply()
org.glassfish.jersey.server.ServerRuntime$2.run()
There are 46 more calls that are identical in both traces after that last line, so I excluded them; they are a bunch of org.apache.catalina.core and org.glassfish.jersey frames.
Take a look at Shiro's Subject Thread Association documentation.
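In particular, that doc describes manually associating a Subject with the code that needs it. A sketch of how that could look here (the SubjectBinding helper and where you call it from are assumptions; Subject.associateWith(...) is the Shiro API doing the work):

// Sketch: capture the Subject where it is still available and bind it around the
// code that needs SecurityUtils.getSubject(), e.g. the response-writing phase.
import java.util.concurrent.Callable;

import org.apache.shiro.subject.Subject;

public final class SubjectBinding {

    private SubjectBinding() {
    }

    public static <T> T runWithSubject(Subject subject, Callable<T> work) throws Exception {
        // associateWith(...) returns a Callable that binds the subject (and its
        // SecurityManager) to the calling thread for the duration of the call,
        // then restores the previous thread state.
        return subject.associateWith(work).call();
    }
}

// usage, somewhere the Subject is known to be bound:
// Subject subject = SecurityUtils.getSubject();
// String json = SubjectBinding.runWithSubject(subject, () -> mapper.writeValueAsString(dto));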

Dropwizard registering two classes/clients

I have two instances of clients with different configs that I am creating (timeout, threadpool, etc...), and would like to leverage Dropwizard's metric on both of the clients.
final JerseyClientBuilder jerseyClientBuilder = new JerseyClientBuilder(environment)
.using(configuration.getJerseyClientConfiguration());
final Client config1Client = jerseyClientBuilder.build("config1Client");
environment.jersey().register(config1Client);
final Client config2Client = jerseyClientBuilder.build("config2Client");
environment.jersey().register(config2Client);
However, I am getting
org.glassfish.jersey.internal.Errors: The following warnings have been detected:
HINT: Cannot create new registration for component type class org.glassfish.jersey.client.JerseyClient:
Existing previous registration found for the type.
And only one client's metric shows up.
How do I track both clients' metrics, or is it not common to have two clients in a single Dropwizard app?
Never mind, it turned out I was an idiot (for trying to save some resources on the ClientBuilder).
Two things I did wrong in my original code:
1. You don't need to register Jersey clients; registering the resource is enough. Somehow I missed the resource part in my code and was just trying to register the clients directly.
2. You need to explicitly create each JerseyClientBuilder and then build your individually configured clients from it; Dropwizard will then pick up each client's metrics.
In the end, I just had to change my code to the following:
final Client config1Client = new JerseyClientBuilder(environment)
.using(configuration.getJerseyClientConfiguration()).build("config1Client");
final Client config2Client = new JerseyClientBuilder(environment)
.using(configuration.getJerseyClientConfiguration()).build("config2Client");
Doh.
environment.jersey().register() has a javadoc description of "Adds the given object as a Jersey singleton component", meaning that registered objects become part of the Jersey dependency injection framework. This method is mainly used to add resource classes to the Jersey context, but any object with an annotation or type that Jersey looks for can be added this way. Additionally, since they are singletons, you can only have one of them per concrete type (which is why you are getting a "previous registration" error from Jersey).
I imagine that you want to have two Jersey clients to connect to two different external services via REST/HTTP. Since your service needs to talk to these others to do its work, you'll want to have the clients accessible wherever the "work" or business logic is being performed.
For example, this guide creates a resource class that requires a client to an external HTTP service to do currency conversions. I'm not saying this is a great example (just a top Google result for "dropwizard external client example"); in fact, I think it is not a good way to structure your application. I'd create several internal objects that hide from the resource class how the currency information is fetched, like a business object (BO) or data access object (DAO), etc.
For your case, you might want something like this (think of these as constructor calls; JC = Jersey client, R = resource object, B = business logic object):
JC1()
JC2()
B1(JC1)
B2(JC2)
R1(B1)
R2(B2)
R3(B1, B2)
environment.jersey().register(R1)
environment.jersey().register(R2)
environment.jersey().register(R3)
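A rough Java rendering of that constructor diagram, inside your Application's run(...) method; FooBo, BarBo, and the resource classes are hypothetical placeholders for your own business objects and resources:

// Each client comes from its own builder; only resources are registered with Jersey.
final Client config1Client = new JerseyClientBuilder(environment)
        .using(configuration.getJerseyClientConfiguration())
        .build("config1Client");
final Client config2Client = new JerseyClientBuilder(environment)
        .using(configuration.getJerseyClientConfiguration())
        .build("config2Client");

// Business objects own the clients and hide the external calls from the resources
final FooBo fooBo = new FooBo(config1Client);
final BarBo barBo = new BarBo(config2Client);

environment.jersey().register(new FooResource(fooBo));
environment.jersey().register(new BarResource(barBo));
environment.jersey().register(new CombinedResource(fooBo, barBo));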
The official Dropwizard docs are somewhat helpful. They at least explain how to create a jersey client; they don't explain how to structure your application.
If you're using the Jersey client builder from Dropwizard, each of the clients you create should be automatically registered to record metrics. Make sure you're using the client builder from the dropwizard-client artifact and the io.dropwizard.client package. (It looks like you are, because you have the using(config) method.)

Configure a service with XML (instead of properties)

I have a (bnd-annotated) component that implements a simple API and exposes itself as a service:
package com.mycompany.impl;

import com.mycompany.api.IFoo;

@Component(designateFactory = FooImpl.Configuration.class)
class FooImpl implements IFoo {

    interface Configuration {
        String foo();
        // ..
    }

    Configuration configuration;

    @Activate
    public void activate(Map properties) {
        configuration = Configurable.createConfigurable(Configuration.class, properties);
        // ..
    }
}
Its configuration is loaded from a watched directory by Felix FileInstall and the service is instantiated by the Felix Configuration Service (at least, I assume that's what's happening; I'm new to OSGi, please bear with me). This, with the generated MetaType descriptor, is working great.
However, as it stands, FooImpl requires structured configuration (lists of lists, maps of lists, etc.), and I was wondering if there is an elegant* way to configure instances of the component through a similar workflow; that is to say, configuration discovery and instantiation/deployment remain centralised.
It seems to me that the Configuration Service spec only manages maps. Will I have to roll my own Configuration Service and FileInstall to be able to present components with XML/JSON/YAML-backed structured configuration?
* As opposed to, say, defining the location of an XML configuration file in the properties ("confiception"?) and doing my own parsing.
Yes and no...
The OSGi Configuration Admin service deals with abstract Configuration records, which are based on flat maps (actually java.util.Dictionary, but it's essentially the same thing). Config Admin does not know anything about the underlying physical storage; it always relies on somebody else to call the methods on the ConfigurationAdmin service, i.e. getConfiguration, createFactoryConfiguration etc.
The "somebody else" that calls Config Admin is usually called a "management agent". Felix FileInstall is a very simple example of a management agent that reads files in the Java properties format. Actually FileInstall is probably too simple and I don't consider it appropriate for production deployment — but that's a separate discussion.
It sounds like you want to write your own management agent that reads XML files and feeds them into Config Admin. This is really not a large or difficult task and you should not be afraid to take it on. Config Admin was designed under the assumption that applications would have very diverse requirements for configuration data storage, and that most applications would therefore have to write their own simple management agent, which is why it does not define its own storage format or location(s).
However, once the configuration data has been read by your management agent, it must be passed into Config Admin as a map/dictionary, which in turn will pass it to the components as a map. Therefore the components themselves do not receive highly structured data, e.g. trees or nested maps. There is some flexibility though: configuration properties can contain arrays or lists of the base types, and you can also use enum values, etc.
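For what it's worth, a minimal sketch of such a management agent: parse the XML however you like, flatten it into a Dictionary, and push it into Config Admin under a PID. The readAndFlattenXml helper and the flattening scheme are placeholders for your own code.

// Sketch of a management agent that feeds XML-derived properties into Config Admin.
import java.io.File;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class XmlConfigAgent {

    private final ConfigurationAdmin configAdmin;

    public XmlConfigAgent(ConfigurationAdmin configAdmin) {
        this.configAdmin = configAdmin;
    }

    public void push(String pid, File xmlFile) throws Exception {
        Dictionary<String, Object> props = readAndFlattenXml(xmlFile);
        // null location: the configuration is bound to whichever bundle first uses it
        Configuration config = configAdmin.getConfiguration(pid, null);
        config.update(props);
    }

    private Dictionary<String, Object> readAndFlattenXml(File xmlFile) {
        // Parse the XML and flatten nested structure into keys of your choosing;
        // Config Admin only stores flat maps, so the encoding of nesting is up to you.
        Dictionary<String, Object> props = new Hashtable<>();
        // ... parsing omitted ...
        return props;
    }
}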

Spring MVC Request mapping, can this be dynamic/configurable?

With Spring MVC, I know how you set the RequestMapping in every controller and method/action.
But what if I wanted this to be configurable? So, for example, I have the following controllers:
BlogController
- with methods for listing blogs entries, single entry, new, update, etc.
ArticleController
- with methods for listing articles entries, single entry, new, update, etc.
Now in my application, the administrator can set up 2 blogs for the website, and 1 article section, so the URLs would be like:
www.example.com/article_section1/ - uses ArticleController
www.example.com/blog1/ - uses BlogController
www.example.com/blog2/ - uses BlogController
Maybe after a while the administrator wants another article section, so they just configure that with a new section like:
www.example.com/article_section2/
This has to work dynamically/on-the-fly without having to restart the application of course.
My question is only concerned with how I will handle url mappings to my controllers.
How would this be possible with Spring MVC?
I only know how to map URLs to controllers using @RequestMapping("/helloWorld") at the controller or method level, but this makes the URL mappings fixed and not configurable the way I want.
Update:
I will be storing the paths in the database, along with the mapping to the type of controller, like so:
path                 controller
/article_section1/   article
/blog1/              blog
/blog2/              blog
...
With the above information, how could I dispatch the request to the correct controller?
Again, I'm not looking to reload/redeploy, and I realize this will require more work, but it's in the spec :)
Would this sort of URL mapping work for you?
www.example.com/blog/1/
www.example.com/blog/2/
If yes, then that's easy: Spring 3 supports path variables: http://static.springsource.org/spring/docs/3.0.x/reference/mvc.html#mvc-ann-requestmapping-advanced
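A minimal sketch of that approach with Spring's annotated controllers (BlogService is a hypothetical lookup, e.g. backed by the table in your update):

// One mapping handles any number of blogs; the id picks the concrete blog at request time.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class BlogController {

    private final BlogService blogService; // hypothetical lookup service

    @Autowired
    public BlogController(BlogService blogService) {
        this.blogService = blogService;
    }

    @RequestMapping("/blog/{blogId}")
    public String showBlog(@PathVariable("blogId") long blogId, Model model) {
        model.addAttribute("blog", blogService.findBlog(blogId));
        return "blog/view";
    }
}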
Alternatively, you can create a generic request mapping and your own sub-dispatcher that reads a config file, but I think that's probably more work than it's worth.
Truly changing the request mappings at runtime might be hard (and is not really recommended, since small errors can easily occur). If you still wish to do it, perhaps JRebel, and more specifically LiveRebel, can be interesting for live redeployment of code and configuration.
Otherwise, as other posts suggested, @RequestMapping supports wildcards; the limits of this should be clear after a quick read of the official documentation.
Try using @RequestMapping wildcards as below:
@RequestMapping(value = "/article_section*/")
public void getArticle(....) {
    //TODO implementation
}

@RequestMapping(value = "/blog*/")
public void getBlog(....) {
    //TODO implementation
}
Hope this helps!!!
Another solution might be to create a custom annotation that holds both the path already defined on the @RequestMapping and the new one to apply, let's say @ApiRestController.
Then, before the Spring context loads, the @Controller classes can be changed so that their annotation values are replaced at runtime by the new one (with the desired path). By doing this, Spring will load the enhanced request mapping and not the default one.
I created a small project to exemplify this for anyone who needs it in the future: https://gitlab.com/jdiasamaro/spring-api-rest-controllers.
Hope it helps.
Cheers.
Doesn't this work?
@RequestMapping("/helloWorld*")
