Mocking database calls from external class - java

There is a library built by our company that is used as an external dependency by another project, for which we are currently writing tests (component tests, to be precise). Our goal is to let the code flow as deep as possible into the classes of the pom.xml dependencies, but without requiring network calls.
We want to mock all external dependencies, and do not want to have to modify or fork those libraries from our company.
That library makes calls to an Oracle database using a JNDI connection.
The project runs on Tomcat, and the context.xml sets up the JNDI connection. We initially wanted to simply inject an H2 connection there instead, but H2 is not directly compatible with Oracle, and the amount of work to bridge the gap is too large: among other things, a stored procedure would need to be translated into a Java function for H2 compatibility.
Is it possible to somehow intercept the calls made to the database, and return something we'd decide instead?
Solutions we are looking into, but have not yet figured out how to make work:
Something similar to how MockServer/WireMock work.
Figuring out how to set up a proxy.
Somehow injecting a different class than the one that actually calls the database.
We use Java 8, Spring 5 with XML configuration of beans, and Maven Cargo to start Tomcat 8. The library uses Java 8 and Spring 4, and the classes establishing connections to the database are not beans.
EDIT:
More context on our testing set up.
The project has 2 modules: one for the service itself (it contains the SOAP endpoints), and one for the component-tests.
The service module is not started via a public static void main method (i.e., not even through a Spring Boot context launcher).
The test module starts the test suite through a TestNG class that launches the Cucumber runner.
Thus, to test the service, we use Maven Cargo to build the service and deploy it to an embedded Tomcat, injecting our test-specific context.xml file. We then launch the Cucumber runner, which ends up calling the endpoints under test.

Related

Edit and re-run spring boot unit test without reloading context to speed up tests

I have a Spring Boot app and have written unit tests using a PostgreSQL test container (https://www.testcontainers.org/) and JUnit. The tests carry the @SpringBootTest annotation, which loads the context and starts up a test container before running the test.
Loading the context and starting the container takes around 15 seconds on my relatively old MacBook, but the tests themselves are pretty fast (< 100 ms each). So in a full build with hundreds of tests, this does not really matter; it is a one-time cost of 15 seconds.
But developing/debugging the tests individually in an IDE becomes very slow. Every single test incurs a 15-second startup cost.
I know IntelliJ and Spring Boot support hot reload of classes while the app is running. Are there similar solutions/suggestions for doing the same for unit tests? That is, keep the context loaded and the test container (DB) running, but recompile just the modified test class and re-run the selected test.
I believe there is a simple solution for your issue. You haven't specified how exactly you run the test container in the test; however, I have had success with the following approach:
For tests running locally, start a PostgreSQL server on your laptop once (say, at the beginning of your working day). It can be a dockerized process or even a regular PostgreSQL installation.
During the test, the Spring Boot application doesn't really know that it is interacting with a test container: it gets host/port/credentials and creates a DataSource out of those parameters.
So for your local development, you can modify the integration with the test container so that the actual test container is launched only if there is no "LOCAL.TEST.MODE" environment variable defined (you can pick any name; it's not something that already exists).
Then, define the environment variable on your laptop (or use a system property, whatever works better for you) and configure Spring Boot's DataSource to pick up the properties of your local installation when that property is defined:
In a nutshell, it can be something like:
@Configuration
@ConditionalOnProperty(name = "test.local.mode", havingValue = "true", matchIfMissing = false)
public class MyDbConfig {

    @Bean
    public DataSource dataSource() {
        // create a data source initialized with local credentials, e.g.
        // with Spring Boot's DataSourceBuilder:
        return DataSourceBuilder.create()
                .url("jdbc:postgresql://localhost:5432/testdb")
                .username("test")
                .password("test")
                .build();
    }
}
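For the container side, here is a minimal sketch of the conditional start (the helper class, the LOCAL_TEST_MODE variable, and the image tag are hypothetical names for illustration):

import org.testcontainers.containers.PostgreSQLContainer;

public final class TestDatabaseSupport {

    // Starts a Testcontainers PostgreSQL instance only when local test mode is off.
    public static void startContainerUnlessLocal() {
        if (System.getProperty("test.local.mode") == null
                && System.getenv("LOCAL_TEST_MODE") == null) {
            PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:13");
            postgres.start();
            // Hand the container's coordinates to Spring Boot.
            System.setProperty("spring.datasource.url", postgres.getJdbcUrl());
            System.setProperty("spring.datasource.username", postgres.getUsername());
            System.setProperty("spring.datasource.password", postgres.getPassword());
        }
        // Otherwise the @ConditionalOnProperty configuration above provides
        // a DataSource pointing at the locally installed PostgreSQL.
    }
}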
Of course, more "clever" solution with configuration properties can be implemented, it all depends on how do you integrate with test containers and where do the actual properties for the data source initialization come from, but the idea will remain the same:
In your local env. you'll actually work with a locally installed PostgreSQL server and won't even start the test container
Since all operations in PostgreSQL, including DDL, are transactional, you can put a @Transactional annotation on the test and Spring will roll back all the changes done by the test, so that the DB won't fill up with garbage data.
As opposed to Testcontainers, this method has one significant advantage: if your test fails and some data remains in the database, you can check it locally, because the server remains alive. You'll be able to connect to the DB with pgAdmin or similar and examine the state.
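For illustration, a minimal sketch of such a rolled-back test (the repository and entity names are hypothetical):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.transaction.annotation.Transactional;

@SpringBootTest
@Transactional // all changes made by each test are rolled back afterwards
class CustomerRepositoryIT {

    @Autowired
    CustomerRepository repository; // hypothetical Spring Data repository

    @Test
    void savesAndReadsCustomer() {
        repository.save(new Customer("Jane"));
        // assertions here; the inserted row disappears after the test,
        // so the local database stays clean
    }
}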
Update 1
Based on the OP's comment:
I see what you're saying. Basically, you've mentioned two different issues that I'll address separately.
Issue 1: the application context takes about 10-12 seconds to start.
OK, this is something that requires investigation. Chances are that some bean gets initialized slowly, so you should understand why exactly the application starts so slowly:
Spring's own code (scanning, bean definition population, etc.) runs in fractions of a second and is usually not a bottleneck by itself; the slowness must be somewhere in your application.
Checking the beans' startup time is somewhat out of scope for this question, although there are certainly methods to do so; for example, see this thread, and, for newer Spring versions with Actuator, this one. So I'll assume you will figure out, one way or another, why it starts slowly.
Anyway, what can you do with this kind of information, and how can you make the application context load faster?
Well, obviously you can exclude the slow bean or set of beans from the configuration; maybe you don't need it at all in the tests, or you can at least use @MockBean instead, as sketched below. This highly varies depending on the actual use case.
It's also possible, in some cases, to provide configuration that still loads that slow bean but alters its behavior so that it is no longer slow.
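A minimal sketch of the @MockBean option (the slow bean's type is a hypothetical name):

import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;

@SpringBootTest
class CheckoutFlowIT {

    // Replaces the real (slow-to-initialize) bean in the context with a
    // Mockito mock, so its expensive startup never runs during the test.
    @MockBean
    ExpensiveWarmupClient warmupClient; // hypothetical slow bean

    // ... tests exercising the rest of the context ...
}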
I can also point out some generally applicable ideas that can help regardless of your actual code base.
First of all, if you're running different test cases (multi-select tests in the IDE and run them all at once) that share exactly the same configuration, then Spring Boot is smart enough not to re-initialize the application context. This is called application context caching. Here is one of the numerous tutorials on this topic.
Another approach is lazy bean initialization. In Spring Boot 2.2+ there is a property for that:
spring:
  main:
    lazy-initialization: true
Of course, if you're not planning to use it in production, define it in the src/test/resources configuration file of your choice; Spring Boot will read it during the test as well, as long as it adheres to the naming convention. If you have technical issues with this (again, out of scope of the question), consider reading this tutorial.
If your Spring Boot is older than 2.2 you can try to do that "manually"; here is how:
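One common way to do it manually (a sketch for test configuration, using only core Spring APIs) is a BeanFactoryPostProcessor that flips every bean definition to lazy:

import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LazyInitTestConfig {

    // Runs before bean instantiation and marks every definition lazy,
    // so beans are created only when first requested.
    @Bean
    public static BeanFactoryPostProcessor lazyInitPostProcessor() {
        return beanFactory -> {
            for (String name : beanFactory.getBeanDefinitionNames()) {
                beanFactory.getBeanDefinition(name).setLazyInit(true);
            }
        };
    }
}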
The last direction I would like to mention is reconsidering your test implementation. This is especially relevant if you have a big project to test. Usually the application is separated into layers: services, DAOs, controllers, and so on. My point is that testing that involves the DB should be used only for the DAO layer; this is where you test your SQL queries.
The business logic code usually doesn't require a DB connection and, in general, can be covered by unit tests that do not use Spring at all. So instead of using the @SpringBootTest annotation, which starts the whole application context, you can load only the configuration of the DAO(s); chances are this will start much faster, and the "slow beans" belong to other parts of the application. Spring Boot even has a special annotation for it (they have annotations for everything ;) ): @DataJpaTest.
This is based on the idea that the whole Spring testing package is intended for integration tests: in general, a test that starts Spring is an integration test, and you'll probably prefer unit tests wherever possible because they're much faster and do not use external dependencies such as databases or remote services. A sketch of such a slice test follows.
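A minimal sketch (the repository and entity are hypothetical; note that by default @DataJpaTest swaps in an embedded database unless told otherwise):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

// Boots only the JPA slice (entities, repositories, a test DataSource)
// instead of the whole application context.
@DataJpaTest
class CustomerRepositoryTest {

    @Autowired
    CustomerRepository repository; // hypothetical repository under test

    @Test
    void findsCustomerByName() {
        repository.save(new Customer("Jane"));
        // assert that repository.findByName("Jane") returns the saved entity
    }
}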
The second issue: the schema often goes out of sync
In my current approach, the test container starts up, Liquibase applies my schema, and then the test is executed. Everything gets done from within the IDE, which is a bit more convenient.
I admit I haven't worked with Liquibase; we used Flyway instead, but I believe the answer will be the same.
In a nutshell - this will keep working like that and you don't need to change anything.
I'll explain.
Liquibase is supposed to start along with the Spring application context and apply the migrations, that's true. But before actually applying the migrations, it checks whether they have already been applied, and if the DB is in sync it does nothing. Flyway maintains a table in the DB for that purpose, and I'm sure Liquibase uses a similar mechanism.
So as long as you're not creating tables or the like inside the test, you should be good to go:
Assuming you're starting the PostgreSQL server for the first time, the first test you run "at the beginning of your working day", following the aforementioned use case, will create the schema and deploy all the tables, indices, etc. with the help of the Liquibase migrations, and then run the test.
However, when you're starting the second test, the migrations will already have been applied. It's equivalent to restarting the application itself in a non-test scenario (staging, production, whatever): the restart itself won't re-apply all the migrations to the DB. The same goes here...
OK, that's the easy case, but you probably populate data inside the tests (well, you should be ;) ). That's why I mentioned in the original answer that it's necessary to put the @Transactional annotation on the test itself.
This annotation opens a transaction before running the code in the test and artificially rolls it back afterwards; that is, it removes all the data populated by the test, even when the test has passed.
Now, to make it more complicated: what if you create tables or alter columns of existing tables inside the test? Well, this alone will drive Liquibase crazy even in production scenarios, so you probably shouldn't do that; but again, putting @Transactional on the test itself helps here, because PostgreSQL's DDL commands (just to clarify, DDL = Data Definition Language, i.e. commands like ALTER TABLE, basically anything that changes an existing schema) are also transactional. I know that Oracle, for example, didn't run DDL commands in a transaction, but things might have changed since then.
I don't think you can keep the context loaded.
What you can do is activate the reusable containers feature of Testcontainers. It prevents the container's destruction after a test is run.
You'll have to make sure that your tests are idempotent, or that they remove all the changes made to the container after completion.
In short, you should add .withReuse(true) to your container definition and add testcontainers.reuse.enable=true to ~/.testcontainers.properties (a file in your home directory).
Here's how I define my testcontainer to test my code with Oracle.
import org.testcontainers.containers.BindMode;
import org.testcontainers.containers.OracleContainer;

public class StaticOracleContainer {

    public static OracleContainer getContainer() {
        return LazyOracleContainer.ORACLE_CONTAINER;
    }

    private static class LazyOracleContainer {

        private static final OracleContainer ORACLE_CONTAINER = makeContainer();

        private static OracleContainer makeContainer() {
            final OracleContainer container = new OracleContainer()
                    // Username which Testcontainers is going to use
                    // to find out if the container is up and running
                    .withUsername("SYSTEM")
                    // Password which Testcontainers is going to use
                    // to find out if the container is up and running
                    .withPassword("123")
                    // Tell Testcontainers that these ports should
                    // be mapped to external ports
                    .withExposedPorts(1521, 5500)
                    // The Oracle database will not start if less than
                    // 1 GB of shared memory is available, so this is necessary
                    .withSharedMemorySize(2147483648L)
                    // The same as giving the container
                    // -v /path/to/init_db.sql:/u01/app/oracle/scripts/startup/init_db.sql
                    // Oracle will execute init_db.sql after the container is started
                    .withClasspathResourceMapping("init_db.sql",
                            "/u01/app/oracle/scripts/startup/init_db.sql",
                            BindMode.READ_ONLY)
                    // Do not destroy the container between runs
                    .withReuse(true);
            container.start();
            return container;
        }
    }
}
As you can see, this is a singleton. I need it to control the Testcontainers lifecycle manually, so that I can use reusable containers.
If you want to know how to use this singleton to add Oracle to a Spring test context, you can look at my example of using Testcontainers: https://github.com/poxu/testcontainers-spring-demo
There's one problem with this approach, though: Testcontainers is never going to stop a reusable container by itself. You have to stop and destroy the container manually.
I can't imagine some hot-reload magic flag for testing; there is just so much stuff that can dirty the Spring context, dirty the database, etc.
In my opinion, the easiest thing to do here is to locally replace the test container initializer with a manual container start and change the database properties to point to this container. If you want some automation for this, you could add a before-launch script (if you are using IntelliJ...) that does something like docker start postgres || docker run postgres (Linux), which starts the container if it's not running and does nothing if it is.
Usually the IDE recompiles just the affected classes anyway, and the Spring context probably won't take 15 seconds to start without a container starting, unless you have a lot of beans to configure as well...
I'm trying to learn testing with Spring Boot, so sorry if this answer is not relevant.
I came across this video that suggests a combination of (in order of most to least used):
Mockito unit tests with the @Mock annotation, with no Spring context when possible (see the sketch below)
Slice tests using the @WebMvcTest annotation, when you want to involve some of the Spring context
Integration tests with the @SpringBootTest annotation, when you want to involve the entire Spring context
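To illustrate the first and most common kind, a minimal Mockito-only sketch (the service and repository names are hypothetical):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

// A plain Mockito unit test: no Spring context is started at all.
@ExtendWith(MockitoExtension.class)
class PriceServiceTest {

    @Mock
    PriceRepository priceRepository; // hypothetical collaborator

    @InjectMocks
    PriceService priceService; // hypothetical class under test

    @Test
    void returnsPriceFromRepository() {
        when(priceRepository.findPrice("sku-1")).thenReturn(42L);
        assertEquals(42L, priceService.priceOf("sku-1"));
    }
}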

How to share Repository and Service classes between 2 projects

I am working on two projects, one web app (Spring MVC) and one standalone backend service application (Spring Boot), that interact heavily with each other. I am using Hibernate for both, and they are both coded using the NetBeans IDE.
My "issue" is that I end up with duplicate code in both projects, mainly in the Repository and Service layers. My entities are obviously also duplicated, since both projects use the same database.
Is there a way to make some sort of class library (a third project, maybe?) and put all the common code in there? If that is indeed possible, how do you then change each project so it can still access this code as if it were part of it? I was thinking of putting all my repositories, services, and entities in there to avoid code duplication and greatly reduce the risk of error.
Thank you!
Separate those Repository and Service classes into a submodule.
The structure looks like:
-- your app
   -- api (depends on the `common` module)
   -- webapp (depends on the `common` module)
   -- common
Then the problem is how to initialize the beans inside the common module. AFAIK, you have two options:
In a @Configuration class of the api or webapp module, add the common module's base packages to the component-scan packages.
In the api or webapp resources folder, add a Spring configuration factory file:
/src/main/resources/META-INF/spring.factories
org.springframework.boot.autoconfigure.EnableAutoConfiguration=your.path.AutoConfiguration
Then define the service/repository @Bean methods inside the AutoConfiguration class, as sketched below.
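A minimal sketch of such an auto-configuration class (the repository and service types are hypothetical):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Lives in the `common` module and is registered in spring.factories,
// so both `api` and `webapp` pick up these beans automatically.
@Configuration
public class AutoConfiguration {

    @Bean
    public CustomerRepository customerRepository() { // hypothetical shared repository
        return new CustomerRepository();
    }

    @Bean
    public CustomerService customerService(CustomerRepository repository) { // hypothetical shared service
        return new CustomerService(repository);
    }
}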
In this answer I am assuming your projects are connected to each other.
You can set up multiple profiles within one Spring project, storing your database connection parameters and the like in multiple property files.
For example:
application-web.properties
application-backend.properties
You can use these in your project by activating the needed properties file per application; the profile names will be web and backend in these cases.
When using Maven, this is the command line I use:
mvn spring-boot:run -Drun.profiles=<<profile>>
Now, back to your Java code.
If there are classes that only one of your applications uses, you can mark them with a profile. Example:
@Controller
@Profile({ "web" })
public class WebEndpoint {
}
This way you can make the shared code available for both applications, without duplicating most of the code.

Spring - dependency injection - testing with different implementation

One of the main advantages of using Spring dependency injection is testing functionality through the same interface with different implementations, without making any changes to the code; that is, by injecting the different implementations (dependencies) in a configuration file.
Let's take an example where we have developed our application with Java/annotation-based configuration (no .xml files at all).
We have done a code freeze and have deployed the code to the server.
Now, for the QA team to perform testing, they need to inject different implementations of the interface by making changes in a configuration file, without touching code.
If it's an .xml file, the DevOps team can inject a different implementation by changing the bean name and restarting the server.
But since we have used annotation-based/Java-based configuration, how can we achieve this?
Thanks in advance.
One of the main advantage of using spring dependency injection is for testing the functionality using same interface with different implementation
One of the main advantages of Spring is indeed the dependency injection facility.
But you will also very often find cases where you have beans with a single implementation:
beans that rely on an interface for which there is only one implementation;
beans that don't rely on any interface but are plain classes that you want to turn into injectable beans.
We have done a code freeze and have deployed the code in server. Now for a QA team to perform testing they need to inject different implementations for the interface by making changes in configuration file without touching code.
Spring, and more generally dependency injection patterns/frameworks, are not designed to perform hot swapping or implementation modification of a deployed component without repackaging it.
At startup, Spring creates its context and loads all required beans for the application in its container.
If you want to change the configuration of some beans, the cleanest and least side-effect-prone way is to destroy the Spring context/container, repackage the application with the needed changes, and restart it.
If its a .xml file, QA team can inject the different implementation by injecting that bean name and can restart the server.
Ideally, the QA team should test the implementation that you deploy in the QA environment and that will be used by the end users, to stay as close as possible to the real functioning of the application.
Now, if because of some specific constraints some components under QA testing should be mocked/stubbed in some way, just create a different build for that.
The Spring Boot profile and Maven profile features can help with this; a sketch follows.
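A minimal sketch of swapping implementations per profile (the gateway interface and both implementations are hypothetical names):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class PaymentConfig {

    // The active profile (e.g. -Dspring.profiles.active=qa at startup)
    // decides which implementation backs the interface; no code changes.
    @Bean
    @Profile("prod")
    public PaymentGateway realGateway() {
        return new HttpPaymentGateway(); // hypothetical real implementation
    }

    @Bean
    @Profile("qa")
    public PaymentGateway stubGateway() {
        return new StubPaymentGateway(); // hypothetical QA stub
    }
}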

Run multiple spring boot jars in one jvm

My project contains several services, each one annotated with @SpringBootApplication, and each can be run on a random port via "gradle bootRun".
Is it possible to build the services into jars and run them together in one JVM? It does not matter whether this is done programmatically or by just putting them in a container.
Please show me some instructions if possible. Thanks!
It's a little hacky, but can be done. I wrote a blog post about it some time ago: Running Multiple Spring Boot Apps in the Same JVM. The basic idea is to run every Spring Boot application in a different classloader (because otherwise there would be resource conflicts).
I, personally, only used it for testing. I would prefer to run the different applications in different docker containers in production. But for testing it's pretty cool: You can quickly boot up your application and debug everything...
If you want to launch multiple Spring Boot microservices in a single JVM, you can achieve this by launching multiple threads. Please refer to the sample code here: https://github.com/rameez4ever/springboot-demo.git
Yes you can; please check this SO answer.
However, if separating processes and simplicity are core concerns, I would recommend using Docker containers: each running container instance (your app) runs in its own JVM, on the same host or on distributed hosts.
This is achievable, as David Tanzer said, by using two classloaders to start each Spring application in one JVM process, and no special code changes are required for these Spring apps.
This way, almost every resource under those classloaders is separated: Spring beans, class instances, and even static fields of the same class.
But there are still concerns if you decide to hack like this:
Some resources, like ports, cannot be reused within one JVM.
JVM system properties are shared within the JVM process, so pay attention if the two apps read a system property with the same name. If you are using Spring, you could try setting properties via command-line arguments to override those from system properties.
Classes loaded by the system classloader or its parents will share static fields and class definitions. For example, Spring Boot's thin jar launcher uses the system classloader to load bean class definitions by default, so there will be only one class definition even if you have launched the Spring apps in separate classloaders. A sketch of the classloader approach follows.
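This is only a sketch, assuming each jar is a self-contained (shaded) jar; Spring Boot's nested-jar layout would need its own launcher. The jar paths and main class names are hypothetical:

import java.net.URL;
import java.net.URLClassLoader;

public class MultiAppLauncher {

    public static void main(String[] args) throws Exception {
        launch("file:/apps/service-a.jar", "com.example.a.ServiceA");
        launch("file:/apps/service-b.jar", "com.example.b.ServiceB");
    }

    private static void launch(String jarUrl, String mainClass) throws Exception {
        // Parent is the JDK-only loader, so the two apps share no classes.
        URLClassLoader loader = new URLClassLoader(
                new URL[] { new URL(jarUrl) },
                ClassLoader.getSystemClassLoader().getParent());
        Thread appThread = new Thread(() -> {
            try {
                Class<?> main = loader.loadClass(mainClass);
                main.getMethod("main", String[].class)
                        .invoke(null, (Object) new String[0]);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        // Spring reads resources through the thread context classloader.
        appThread.setContextClassLoader(loader);
        appThread.start();
    }
}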

Unit Testing Dilemma: Using a JNDI data source without running JBoss or Spring

Problem Statement
I want to be able to run JUnit tests on methods that connect to a database.
Current setup
Eclipse Java EE IDE; the Java code uses no framework. The developers (me included) want more robust testing of the current legacy code BEFORE attempting to move it into a Spring framework, so that we can prove along the way that the behavior is still correct.
JBoss 4.2; the version is limited by vendor software (Adobe LiveCycle ES2). Our Java web application runs on this JBoss setup and makes use of the Adobe LiveCycle API.
We have been unable to successfully run the vendor-configured JBoss within Eclipse; we have spent weeks attempting this, including contacting the company that provides our support for the configuration of JBoss for Adobe LiveCycle. Supposedly the problem is a memory limitation issue with settings in Eclipse, but changing the memory settings has so far failed to produce a successful JBoss server start within Eclipse. For now, attempting to get JBoss to run inside of Eclipse is on hold.
The database connection is defined in a JNDI data source that JBoss loads on startup. Both our web application and Adobe LiveCycle need to create connections to this data source.
Code
I am glossing over error checking and class structure in this code snippet to focus on the heart of the matter. Hopefully that does not cause problems for others. Text in square brackets is not actual text.
Our code to create the connection is something like this:
Properties props = new Properties();
FileInputStream in = null;
in = new FileInputStream(System.getProperty("[Properties File Alias]"));
props.load(in);
String dsName = props.getProperty("[JNDI data source name here]");
InitialContext jndiCntx = new InitialContext();
DataSource ds = (DataSource) jndiCntx.lookup(dsName);
ds.getConnection();
I want to be able to test methods dependent upon this code without making any changes to it.
Reference to properties file alias in properties-service.xml file:
<!-- ==================================================================== -->
<!-- System Properties Service                                            -->
<!-- ==================================================================== -->
<!-- Allows rich access to system properties. -->
<mbean code="org.jboss.varia.property.SystemPropertiesService"
       name="jboss:type=Service,name=SystemProperties">
  <attribute name="Properties">
    [Folder Alias]=[filepath1]
    [Properties File Alias]=[filepath2]
  </attribute>
</mbean>
Snippet from properties file located at filepath2
[JNDI data source name]=java:/[JNDI data source name]
The JNDI xml file for this data source is set up like this:
<datasources>
  <local-tx-datasource>
    <jndi-name>[JNDI data source name here]</jndi-name>
    <connection-url>jdbc:teradata://[url]/database=[database name]</connection-url>
    <driver-class>com.teradata.jdbc.TeraDriver</driver-class>
    <user-name>[user name]</user-name>
    <password>[password]</password>
    <!-- SQL to call on an existing pooled connection when it is obtained from the pool -->
    <check-valid-connection-sql>SELECT 1+1</check-valid-connection-sql>
  </local-tx-datasource>
</datasources>
My thoughts on where the solution may be
Is there something I can do in a @BeforeClass method in order to make the properties the above code is looking for available without JBoss? Maybe somehow using the setProperty method of the java.util.Properties class? I would also like to use the same JNDI xml file that JBoss reads from, if possible, in order to reduce duplicate configuration settings.
So far all of my research ends with the advice “Use Spring”, but I don’t think we’re ready to open that can of worms yet. I am not an expert in JBoss, but if more details of our JBoss setup are needed for a helpful answer, I will do my best to get them, though I will likely need some pointers on where to look.
Stack Overflow research references:
Jndi lookup in junit using spring
Out of container JNDI data source
Other research references:
http://docs.oracle.com/javase/1.4.2/docs/api/java/util/Properties.html
http://docs.oracle.com/javase/jndi/tutorial/basics/prepare/initial.html
There's a very simple answer to your problem, but you're not going to like it: Don't.
By definition, a unit test should verify the functionality of a single unit (the size of which may vary, but it should be self-sufficient). Creating a setup where the test depends upon web services, databases, etc. is counter-productive: it slows down your tests, it includes a gazillion possible things that could go wrong during the test (failed network connections, changes to data sets, ...) which have nothing to do with the actual code you are working on, and most importantly: it makes testing much, much harder and more complicated.
Instead, you should be looking for ways to decouple the legacy code from any data sources, so that you can easily substitute mock objects or similar test doubles while you are testing.
You should create tests to verify the integrity of your entire stack, but those are called integration tests, and they operate at a higher level of abstraction. I personally like to defer writing those until the units themselves are in place, tested and working - at least until you have come to a point where you no longer expect changes to service calls and protocols on a daily basis.
In your case, the most obvious strategy would be to encapsulate all calls to the data source in one or more separate classes, extract an interface that the business objects can depend on, and use mocks implementing that same interface for unit testing.
For example, if you have a business object that calls an address database, you should copy the JNDI lookup code into a new service class called AddressServiceImpl. Its public methods should mimic all the method signatures of your JNDI datasource. Those you then extract into the AddressService interface.
You can then write a simple integration test to verify that the new class works: call all the methods once and see if you get proper results. The beauty of this is that you can supply a JNDI configuration that points to a test database (instead of the original one), which you can populate with test datasets to make sure you always get the expected results. You don't necessarily need a JBoss instance for this (though I have never had any problems with the Eclipse integration); any other JNDI provider should work, as long as the data source itself behaves the same way. And to be clear: you test this once, then forget about it. At least until the actual service methods ever change.
Once you have verified that the service is functional, the next task is to go through all the dependent classes and replace the direct calls to the datasource with calls to the AddressService interface. From that point on, you have a proper setup for implementing unit tests on the actual business methods, without ever having to worry about things that should be tested elsewhere ;) A sketch of the extraction follows.
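All names here (AddressService, AddressServiceImpl, the lookup method) are hypothetical illustrations of the strategy, not part of the original code:

import java.util.Collections;
import java.util.List;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Hypothetical interface extracted from the legacy data access code;
// business objects depend on this, and unit tests mock it.
public interface AddressService {
    List<String> findAddressesFor(String customerId);
}

// Production implementation: the JNDI lookup now lives here and nowhere else.
class AddressServiceImpl implements AddressService {

    private final DataSource dataSource;

    AddressServiceImpl(String jndiName) throws Exception {
        this.dataSource = (DataSource) new InitialContext().lookup(jndiName);
    }

    @Override
    public List<String> findAddressesFor(String customerId) {
        // run the existing SQL against dataSource.getConnection() here
        return Collections.emptyList(); // sketch only
    }
}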
EDIT
I second the recommendation for Mockito. Really good!
I had a very similar situation with some legacy code in JBoss AS7, for which refactoring would have been way out of scope.
I gave up on trying to get the datasource out of JBoss, because it does not support remote access to datasources, which I confirmed in trying.
Ideally though, you don't want to have your unit tests dependant on a running JBoss instance in order to run, and you really don't want them to have to run inside of JBoss. It would be counter to the concept of self-contained unit tests (even though you'll still need the database to be running :) ).
Fortunately, the initial context used by your app doesn't have to come from a running JBoss instance. After looking at this article, referred to by an answer to another question, I was able to create my own initial context and populate it with my own DataSource object.
This works without creating dependencies in the code because the classes under test typically run inside the container, where they simply do something like this to get the container-provided context:
InitialContext ic = new InitialContext();
DataSource ds = (DataSource)ic.lookup(DATA_SOURCE_NAME);
They don't need to specify any environment to the constructor, because it has already been set up by the container.
In order for your unit tests to stand in for the container and provide a context, you create it, and bind a name:
InitialContext ic = new InitialContext();
// Construct DataSource
OracleConnectionPoolDataSource ds = new OracleConnectionPoolDataSource();
ds.setURL("url");
ds.setUser("username");
ds.setPassword("password");
ic.bind(DATA_SOURCE_NAME, ds);
This needs to happen in each test class's @BeforeClass method.
Now the classes being tested get my initial context when running in unit tests, and the container's when deployed. A fuller sketch of such a setup follows.
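This sketch assumes Tomcat's standalone naming classes are available on the test classpath; the JNDI name, URL, and credentials are hypothetical:

import javax.naming.Context;
import javax.naming.InitialContext;
import oracle.jdbc.pool.OracleConnectionPoolDataSource;
import org.junit.BeforeClass;

public class LegacyDaoTest {

    private static final String DATA_SOURCE_NAME = "java:/MyTestDS"; // hypothetical name

    @BeforeClass
    public static void bindTestDataSource() throws Exception {
        // Point JNDI at Tomcat's standalone naming implementation
        // (requires the Tomcat naming jars on the test classpath).
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.naming.java.javaURLContextFactory");
        System.setProperty(Context.URL_PKG_PREFIXES, "org.apache.naming");

        InitialContext ic = new InitialContext();
        ic.createSubcontext("java:");

        OracleConnectionPoolDataSource ds = new OracleConnectionPoolDataSource();
        ds.setURL("jdbc:oracle:thin:@localhost:1521:XE"); // test database URL
        ds.setUser("test");
        ds.setPassword("test");
        ic.bind(DATA_SOURCE_NAME, ds);
    }
}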
If you are using tools like Git and Maven, this can be done easily with them. Check in a unit-test-specific properties file alongside the development and QA ones. Use Maven and its profile facilities to specify a profile that copies your unit-test file to where it should go, and do the same for dev and QA when running with different profiles active.
There is no magic to this; Spring introduces complexity more than anything. It definitely doesn't introduce simplicity like this.
You can run your tests with a fake InitialContext implementation, which returns whatever you need from calls to lookup(String).
A mocking/faking tool which allows such fake implementations is JMockit. The fake implementation would be written like the following:
public class FakeInitialContext extends MockUp<InitialContext> {

    @Mock
    public Object lookup(String name) {
        // Return whatever is needed based on "name",
        // e.g. a stub DataSource for the data source name.
        return null;
    }
}
To apply it to a JUnit/TestNG test run, add jmockit.jar to the runtime classpath (before junit.jar, if applicable) and set the "jmockit-mocks" system property to the name of the fake class: -Djmockit-mocks=com.whatever.FakeInitialContext.
Of course, you can also write true JUnit/TestNG unit tests where any dependency can be easily mocked, by using the "Expectations & Verifications" mocking API, for example:
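A hedged sketch of that style (using JMockit's @Mocked fields and the Expectations API; the test class and names are hypothetical):

import javax.naming.InitialContext;
import javax.sql.DataSource;
import mockit.Expectations;
import mockit.Mocked;
import org.junit.Test;

public class LegacyServiceTest {

    @Mocked
    InitialContext jndiContext; // every new InitialContext() in the test is mocked

    @Mocked
    DataSource dataSource;

    @Test
    public void looksUpDataSourceFromJndi() throws Exception {
        new Expectations() {{
            jndiContext.lookup(anyString);
            result = dataSource;
        }};

        // code under test performs new InitialContext().lookup(...) and
        // receives the mocked DataSource
    }
}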
(PS: For full disclosure, I am the creator of the JMockit project.)
