I am not understanding the concept behind the @Module annotation. The documentation says it's where you set up the code / load the container, but I am not getting that.
I see that there is a set of return types for methods annotated with @Module, but I do not see those methods used anywhere in the code. I am speaking from a testing context.
Can someone please explain?
Prior to running the test, the container will invoke the @Module-annotated methods of the test class and deploy the applications they return. Effectively, use @Module to build and return the application you want to test.
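For illustration, assuming the question is about the TomEE/OpenEJB ApplicationComposer (where @Module behaves as described), a minimal sketch might look like this; the Greeter bean and test names are made up:

```java
import org.apache.openejb.jee.EjbJar;
import org.apache.openejb.jee.StatelessBean;
import org.apache.openejb.junit.ApplicationComposer;
import org.apache.openejb.testing.Module;
import org.junit.Test;
import org.junit.runner.RunWith;

import javax.ejb.EJB;
import static org.junit.Assert.assertEquals;

@RunWith(ApplicationComposer.class)
public class GreeterTest {

    // Invoked by the container before the test runs; the returned
    // EjbJar is the application that gets deployed for this test.
    @Module
    public EjbJar application() {
        EjbJar ejbJar = new EjbJar();
        ejbJar.addEnterpriseBean(new StatelessBean(Greeter.class));
        return ejbJar;
    }

    // Injected from the application deployed by the @Module method above.
    @EJB
    private Greeter greeter;

    @Test
    public void greets() {
        assertEquals("hello", greeter.greet());
    }

    public static class Greeter {
        public String greet() {
            return "hello";
        }
    }
}
```

The key point is that you never call application() yourself; the test runner collects all @Module methods, deploys what they return, and only then executes the test methods.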
Related
I have a library in which I defined a class, let's say MyClass, annotated with @Component along with @Value. But when I try to use this in my Spring Boot application and @Autowire it, I get an exception about Spring not being able to find this type, asking me to define a bean. All the other classes I have defined in the application itself get injected just fine.
How can I get MyClass injected?
Without complete information about your library and how you are using it, we can't provide a solution. Assuming everything in your library is correct, you can simply add @ComponentScan to the application that uses your library.
Create a class as below and that should fix your problem.
@Configuration
@ComponentScan({"your.library.package"})
public class YourConfig {
}
If this doesn't solve your problem, add more information on your question and I will update my answer accordingly.
We're migrating our Java-only Play application from Play 2.4 to Play 2.5. First step: get rid of GlobalSettings, still completely in the 2.4 realm. I wrote a StartModule which will take over the functionality, as the migration guide and "the internet" describe. I add
play.modules.enabled += "de.[...].modules.StartModule"
to the application's .conf file. Executing this via sbt run or sbt start works as expected. Massive problems, however, arise when I try to unit-test this stuff with sbt test or sbt test-only.
We have a rather elaborate unit test setup, as the application is complex and has large legacy parts. Eventually, the unit test instance of the Play server is started with
Helpers.start(testserver=Helpers.testServer(playhttpport,
app=new GuiceApplicationBuilder()
.configure(getConfiguration())
.build()));
This works as long as the play.modules.enabled line above is not visible to the unit test code. As soon as I enable it, I get a number of errors like
Test de.[...]Tests failed: com.google.inject.CreationException:
Unable to create injector, see the following errors:
1) No implementation for play.inject.ApplicationLifecycle was bound.
while locating play.inject.ApplicationLifecycle
or
2) Could not find a suitable constructor in play.api.Environment.
Classes must have either one (and only one) constructor annotated with @Inject
or a zero-argument constructor that is not private.
Same thing happens if I remove the play.modules.enabled line and change the server start to
Helpers.start(testserver=Helpers.testServer(playhttpport,
app=new GuiceApplicationBuilder()
.load(Guiceable.modules(new StartModule()))
.configure(getConfiguration())
.build()));
In my limited understanding, it seems that GuiceApplicationBuilder (or whatever) "forgets" all of its built-in dependency injection configuration if any additional dependency definitions are given. Unfortunately, I have not found any applicable postings here or anywhere else which would lead me to a solution.
Questions:
Is my analysis correct?
How can I make my unit test code functional with the additional module in the DI framework?
Would it be helpful to move directly on to Play 2.5? I'd like to solve this problem first, as that migration step will bring its own plethora of things to handle and I'd really like to have a functional base for it - including an operational unit test framework...
Any insight and help greatly appreciated!
Kind regards,
Dirk
Update: This is StartModule:
public class StartModule extends AbstractModule {
@Override protected void configure() {
bind(InnerServerLauncher.class).asEagerSingleton();
}
}
And this is the InnerServerLauncher:
@Singleton
public class InnerServerLauncher {
@Inject
public InnerServerLauncher(ApplicationLifecycle lifecycle,
Environment environment) {
System.err.println("*** Server launcher called ***");
}
}
I should add that the problem also arises if I put a totally different class into play.modules.enabled like
play.modules.enabled += "play.filters.cors.CORSModule"
Ok, I finally got it. The problem is the getConfiguration() method which I mentioned above but did not elaborate on. As I said, we have quite some legacy in our system. Therefore, we have a mechanism which constructs the configuration for the unit tests independently of Play's config files. GuiceApplicationBuilder.configure() (and btw. also the fakeApplication()-based methods) merges this with the Play-internal configuration, but only at the topmost layer. For plain settings (strings, numbers etc.) that's ok, but for value lists it means that the complete list is overwritten and replaced.
play.modules.enabled is used by Play internally to gather the default modules which have to be registered with the dependency injection framework. Documentation states very clearly that your statements in application.conf must only add elements to play.modules.enabled, i.e.
play.modules.enabled += "package.Module"
Our "special way" of constructing the configuration environment for unit tests, however, overwrote Play's own play.modules.enabled as soon as we set any value for that key in our own configuration. And that destroyed Play's complete dependency injection scheme, as none of its own built-in modules were accessible any more. Bummer!
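The effect can be illustrated without Play at all: a merge that only works at the topmost layer replaces list values wholesale instead of appending to them. A plain-Java sketch of that behaviour (the module names are just examples):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ShallowMergeDemo {

    // Merge overrides into defaults at the top level only: every key
    // present in overrides replaces the default value entirely.
    static Map<String, Object> merge(Map<String, Object> defaults,
                                     Map<String, Object> overrides) {
        Map<String, Object> merged = new HashMap<>(defaults);
        merged.putAll(overrides);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> defaults = new HashMap<>();
        // Stand-in for Play's own defaults: a list of built-in modules.
        defaults.put("play.modules.enabled",
                new ArrayList<>(List.of("play.api.inject.BuiltinModule",
                                        "play.inject.BuiltInModule")));

        Map<String, Object> ours = new HashMap<>();
        // Our "special" test configuration sets the same key...
        ours.put("play.modules.enabled",
                new ArrayList<>(List.of("de.example.modules.StartModule")));

        Map<String, Object> merged = merge(defaults, ours);
        // ...and the built-in modules are gone: the whole list was
        // replaced rather than extended.
        System.out.println(merged.get("play.modules.enabled"));
        // prints [de.example.modules.StartModule]
    }
}
```

This is exactly why the application worked with `+=` in a real config file (which appends) but broke when the same key arrived via a top-level merge.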
I solved this by actually using a "real" configuration file which is read normally by GuiceApplicationBuilder and which contains those play.modules.enabled += ... lines. As this config file is still artificially and temporarily generated for the unit test scenario, I pass its name to GuiceApplicationBuilder via System.setProperty:
System.setProperty("config.file",conffilename);
Helpers.start(testserver=Helpers.testServer(playhttpport,
app=new GuiceApplicationBuilder().build()));
Now the configuration is created correctly, with the internal default settings for play.modules.enabled, and I can finally start to actually move the code from GlobalSettings into that injected and eagerly loaded module. And it was just ten hours of hunting...
I have a TestNG class which is like the following:
public class WebAPITestCase extends AbstractTestNGSpringContextTests{.....}
I was trying to understand what extends AbstractTestNGSpringContextTests means.
How does it work and what is the use of it?
Please read the javadoc:
AbstractTestNGSpringContextTests is an abstract base test class that
integrates the Spring TestContext Framework with explicit
ApplicationContext testing support in a TestNG environment. When you
extend AbstractTestNGSpringContextTests, you can access a protected
applicationContext instance variable that you can use to perform
explicit bean lookups or to test the state of the context as a whole.
Basically a spring application context will be setup for the test class.
If that still doesn't make sense I'd recommend you read this.
First, TestNG (which stands for Test Next Generation) is a testing framework inspired by JUnit and NUnit, but introducing new functionality that makes it more powerful and easier to use, such as testing that your code is multithread-safe, a powerful execution model, etc.
The class AbstractTestNGSpringContextTests holds the Spring ApplicationContext. To make it available when executing TestNG tests, AbstractTestNGSpringContextTests has methods annotated with TestNG annotations like @BeforeClass and @BeforeMethod.
So to get this functionality of running TestNG with Spring components, all that is left to do is to extend AbstractTestNGSpringContextTests.
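A sketch of what such a test class typically looks like, following the Spring TestContext framework (MyService and app-context.xml are hypothetical names):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
import org.testng.Assert;
import org.testng.annotations.Test;

// The base class starts a Spring context before the TestNG tests run;
// @ContextConfiguration tells it which bean definitions to load.
@ContextConfiguration("classpath:app-context.xml")
public class MyServiceTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyService myService;   // injected from the Spring context

    @Test
    public void serviceIsWired() {
        Assert.assertNotNull(myService);
        // The inherited protected applicationContext field is also available
        // for explicit bean lookups:
        Assert.assertNotNull(applicationContext.getBean(MyService.class));
    }
}
```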
BTW, AbstractTransactionalTestNGSpringContextTests extends AbstractTestNGSpringContextTests. It not only provides transactional support but also has some convenience functionality for JDBC access.
I have a Maven project with two modules: one Spring module and one GWT module. The GWT module depends on the Spring module. I have XService interfaces and XServiceImpl implementations as Spring beans annotated with @Service("myXServiceImpl").
I want to call the myXServiceImpl bean's methods from the GWT client side. For this purpose I wrote the proper GWT classes: XGWTService, XGWTServiceAsync and XGWTServiceImpl, where XGWTServiceImpl uses XService via @Autowired (I use spring4gwt, and XGWTServiceImpl is a Spring bean annotated with @Service("myXGWTServiceImpl")).
Actually, I want a practical solution as simple as defining only XGWTServiceAsync annotated with @RemoteServiceRelativePath("spring4gwt/myXServiceImpl").
I wonder if there is an easy way to call my Spring beans without coding the three extra classes (XGWTService, XGWTServiceAsync, XGWTServiceImpl)?
Thanks in advance
Put the XService interface and all the classes used by it in a separate package. Make the XService extend RemoteService. You can then define a GWT module that includes those classes. Package the source along with the jar file. Inherit the GWT module in your main GWT module and implement only the XServiceAsync interface.
You can also do away with manually implementing the XServiceAsync - the maven GWT plugin has an option to generate the Async version from the interface.
The only awkward thing in this is making XService implement the RemoteService interface of GWT, and thus having your service implementation depend on the GWT jar. But since it doesn't get in the way of the implementation, that is something we can live with.
The other option is to just create a XGwtService interface that extends XService & RemoteService - not add any additional methods into this. On the server just the XServiceImpl should be sufficient.
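That second option might be sketched like this (interface names from the question; someXServiceMethod stands in for whatever methods XService actually declares):

```java
// Shared interface, in its own file: reuses XService unchanged and adds
// only the GWT-RPC marker interface plus the endpoint path.
@RemoteServiceRelativePath("spring4gwt/myXServiceImpl")
public interface XGwtService extends XService, RemoteService {
    // intentionally empty: no additional methods
}

// Client-side async counterpart, also in its own file (the maven GWT
// plugin can generate this from the synchronous interface).
public interface XGwtServiceAsync {
    void someXServiceMethod(String arg, AsyncCallback<String> callback);
}
```

On the server, the existing XServiceImpl only needs to declare that it implements XGwtService; no new methods are required.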
How should each class in an application retrieve the Spring application context? Or, stated another way, how many times should an application invoke new ClassPathXmlApplicationContext("applicationContext.xml")?
Usually a class does not need the application context, but it needs some of the objects Spring injects. And this is configured in that applicationContext.
As such, an application typically calls new ClassPathXmlApplicationContext("applicationContext.xml") only once.
With dependency injection, you shouldn't have to, in general. But if your class really needs to be aware of the application context, implement the ApplicationContextAware interface. Spring will automatically call the setApplicationContext method defined in that interface to provide your class with the application context.
Note that if you're trying to gain access to filesystem resources, you should use ResourceLoaderAware. If you want access to the message source, then don't implement an interface; instead, inject a reference to the MessageSource bean.
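For completeness, a minimal ApplicationContextAware implementation might look like this (ContextHolder is a made-up name):

```java
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

// Spring detects the ApplicationContextAware interface and calls
// setApplicationContext on this bean during context startup.
@Component
public class ContextHolder implements ApplicationContextAware {

    private ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext context) throws BeansException {
        this.context = context;
    }

    // Explicit bean lookup; in most code, plain injection is preferable.
    public <T> T lookup(Class<T> type) {
        return context.getBean(type);
    }
}
```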
I think you should take the advice from the answer to your other question here. Implementing ApplicationContextAware or ServletContextAware (if you are in a servlet container) is the best way to get the context.
Look up how Spring handles dependency injection, or inversion of control.
Once.
Actually you should let Spring do the heavy lifting and build/configure the classes rather than the other way around.
The whole idea is that all classes can be built without having to call the outside world for dependencies, which are 'magically' provided by the Spring framework.
This approach was invented to get away from the service locator pattern to which you are alluding, i.e. getting a reference to an object in order to fetch the dependencies you need, à la JNDI.
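The contrast can be sketched in plain Java (all names hypothetical): with a service locator the class reaches out to a registry for its dependency, while with dependency injection the dependency is handed in from outside, which is exactly the part Spring automates:

```java
import java.util.HashMap;
import java.util.Map;

public class DiVsLocator {

    interface Mailer {
        String send(String msg);
    }

    static class SmtpMailer implements Mailer {
        public String send(String msg) {
            return "sent: " + msg;
        }
    }

    // Service-locator style: a global registry, queried from inside the class.
    static class Registry {
        static final Map<Class<?>, Object> services = new HashMap<>();

        static <T> T locate(Class<T> type) {
            return type.cast(services.get(type));
        }
    }

    static class LocatorNotifier {
        String notifyUser(String msg) {
            // Hidden dependency: invisible in the constructor, hard to test.
            Mailer mailer = Registry.locate(Mailer.class);
            return mailer.send(msg);
        }
    }

    // DI style: the dependency is provided from outside
    // (by the Spring container, normally).
    static class InjectedNotifier {
        private final Mailer mailer;

        InjectedNotifier(Mailer mailer) {
            this.mailer = mailer;
        }

        String notifyUser(String msg) {
            return mailer.send(msg);
        }
    }

    public static void main(String[] args) {
        Registry.services.put(Mailer.class, new SmtpMailer());
        System.out.println(new LocatorNotifier().notifyUser("hi"));              // prints sent: hi
        System.out.println(new InjectedNotifier(new SmtpMailer()).notifyUser("hi")); // prints sent: hi
    }
}
```

Both produce the same result, but the injected version declares its dependency up front, so a test can simply pass in a fake Mailer.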