I have a simple verticle that reads configuration from a properties file and loads it into the Vert.x config. I have written a unit test for the deployment of this verticle, and the only likely cause of test failure is the properties file not being available at the configured location.
When I run the test, it passes regardless of whether I change the properties file name or path, and the handler reports that the verticle was deployed successfully.
Am I doing something wrong here? Below is my code:
import io.vertx.config.ConfigRetrieverOptions;
import io.vertx.config.ConfigStoreOptions;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.json.JsonObject;
import io.vertx.core.logging.Logger;
import io.vertx.core.logging.LoggerFactory;
import io.vertx.rxjava.config.ConfigRetriever;
import io.vertx.rxjava.core.AbstractVerticle;
/**
* This is the main launcher verticle, the following operations will be executed in start() method of this verticle:
* 1. Read configurations from application.properties file
* 2. Deploy all other verticles in the application
*/
public class LauncherVerticle extends AbstractVerticle {
// assumed logger field, used by the log calls below
private static final Logger log = LoggerFactory.getLogger(LauncherVerticle.class);
@Override
public void start() throws Exception {
//set up configuration from the properties file
ConfigStoreOptions fileStore = new ConfigStoreOptions()
.setType("file")
.setFormat("properties")
.setConfig(new JsonObject().put("path", System.getProperty("vertx.config.path")));
//create config retriever options add properties to filestore
ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(fileStore);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, options);
DeploymentOptions deploymentOptions = new DeploymentOptions();
//Deploy verticles after the config has been loaded
//The configurations are loaded into JsonConfig object
//This JsonConfig object can be accessed in other verticles using the config() method.
configRetriever.rxGetConfig().subscribe(s -> {
//pass on the JsonConfig object to other verticles through deployment options
deploymentOptions.setConfig(s);
vertx.deployVerticle(AnotherVerticle.class.getName(), deploymentOptions);
}, e -> {
log.error("Failed to start application : " + e.getMessage(), e);
try {
stop();
} catch (Exception e1) {
log.error("Unable to stop vertx, terminate the process manually : "+e1.getMessage(), e1);
}
});
}
}
This is my unit test
import io.vertx.ext.unit.TestContext;
import io.vertx.ext.unit.junit.VertxUnitRunner;
import io.vertx.rxjava.core.Vertx;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import rx.Single;
@RunWith(VertxUnitRunner.class)
public class LoadConfigurationTest {
/**
* Config should be loaded successfully
*
* @param context
*/
@Test
public void loadConfigTest(TestContext context) {
/*
* Set the system property "vertx.config.path" with value "application.properties"
* This system property will be used in the Launcher verticle to read the config file
*/
System.setProperty("vertx.config.path", "/opt/vertx/config/application.properties");
//create vertx instance
Vertx vertx = Vertx.vertx();
Single<String> single = vertx.rxDeployVerticle(LauncherVerticle.class.getName());
single.subscribe(s -> {
vertx.rxUndeploy(s);
}, e -> {
Assert.fail(e.getMessage());
});
}
/**
* Test for negative use case - file not available in the specified location
*
* @param context
*/
@Test
public void loadConfigFailTest(TestContext context) {
//set path = non existing path
System.setProperty("vertx.config.path", "/non/existing/path/application.properties");
//create vertx instance
Vertx vertx = Vertx.vertx();
Single single = vertx.rxDeployVerticle(LauncherVerticle.class.getName());
single.subscribe(s -> {
//not executing this statement
Assert.fail("Was expecting error but Verticle deployed successfully");
}, e -> {
//not executing this statement either
System.out.println("pass");
});
}
}
Can you try the code below inside your LauncherVerticle? The only change is to use AbstractVerticle's start(Future<Void>) variant, which is a neat way to handle the config loading and everything around it during your startup.
public class LauncherVerticle extends AbstractVerticle {
@Override
public void start(Future<Void> startFuture) throws Exception {
ConfigStoreOptions fileStore = new ConfigStoreOptions()
.setType("file")
.setFormat("properties")
.setConfig(new JsonObject().put("path", System.getProperty("vertx.config.path")));
ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(fileStore);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, options);
DeploymentOptions deploymentOptions = new DeploymentOptions();
configRetriever.rxGetConfig().subscribe(s -> {
deploymentOptions.setConfig(s);
vertx.deployVerticle(AnotherVerticle.class.getName(),
deploymentOptions,
result -> startFuture.complete()
);
},
startFuture::fail
);
}
}
The startFuture there helps you control the state of your verticle's startup.
Also remember that @Constantine's way of handling the test is the best approach: use Async to prevent your tests from passing without actually asserting anything.
Seems like there is nothing wrong with your verticle. However, there is something wrong in the tests: the asynchronous nature of verticle deployment is not taken into account. These test methods finish immediately instead of waiting for the verticle deployment, and a JUnit test that does not result in an AssertionError is a passed test. You have to signal completion explicitly using Async.
Please see an example for your negative scenario below:
import io.vertx.ext.unit.Async;
import io.vertx.ext.unit.TestContext;
import io.vertx.ext.unit.junit.RunTestOnContext;
import io.vertx.ext.unit.junit.VertxUnitRunner;
import io.vertx.rxjava.core.Vertx;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;
@RunWith(VertxUnitRunner.class)
public class LoadConfigurationTest {
@Rule
public RunTestOnContext runTestOnContextRule = new RunTestOnContext();
@Test
public void testConfigLoading_shouldFail_whenConfigDoesNotExist(TestContext context) {
// create an Async instance that controls the completion of the test
Async async = context.async();
// set non existing path
System.setProperty("vertx.config.path", "/non/existing/path/application.properties");
// take vertx instance and wrap it with rx-ified version
Vertx vertx = Vertx.newInstance(runTestOnContextRule.vertx());
vertx.rxDeployVerticle(LauncherVerticle.class.getName()).subscribe(s -> {
context.fail("Was expecting error but Verticle deployed successfully"); // failure
}, e -> {
async.complete(); // success
});
}
}
Also please note that you can take a Vertx instance from RunTestOnContext rule (as in the snippet above).
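The positive scenario can be handled the same way; here is a rough sketch (it assumes the properties file really exists at the configured path on the machine running the test):
@Test
public void testConfigLoading_shouldSucceed_whenConfigExists(TestContext context) {
    Async async = context.async();
    // point the verticle at a properties file that really exists on the test machine
    System.setProperty("vertx.config.path", "/opt/vertx/config/application.properties");
    Vertx vertx = Vertx.newInstance(runTestOnContextRule.vertx());
    vertx.rxDeployVerticle(LauncherVerticle.class.getName()).subscribe(deploymentId -> {
        async.complete(); // deployment succeeded as expected
    }, context::fail);    // any deployment error fails the test
}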
I'm trying to migrate from Vert.x to Quarkus. In Vert.x, when I write message consumers for Kafka/AMQP etc., I have to scale the number of verticles to maximize performance across multiple cores, i.e. verticle scaling. Is this possible in Quarkus? I see a similar question here but it wasn't answered.
For example, with Kafka I might create a consumer inside a verticle and then scale that verticle, say, 10 times (that is, specify the number of instances in the deployment to be 10) after doing performance testing to determine that's the optimal number. My understanding is that by default 1 verticle = 1 event loop, so a single verticle does not scale across multiple cores.
I know that it's possible to use Vert.x verticles in Quarkus, but is there another way to scale things like the number of Kafka consumers across multiple cores?
I see that this type of scalability is configurable for things like Quarkus HTTP, but I can't find anything about message consumers.
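To make that concrete, this is roughly the plain Vert.x pattern I mean (KafkaConsumerVerticle is just a placeholder name for a verticle that creates one consumer in its start() method):
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
public class ScalingExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // each instance is assigned an event loop, so the consumers get spread across the available cores
        DeploymentOptions options = new DeploymentOptions().setInstances(10);
        // "com.example.KafkaConsumerVerticle" is a placeholder verticle class name
        vertx.deployVerticle("com.example.KafkaConsumerVerticle", options);
    }
}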
Here's the Vert.x Verticle approach that overall I'm very happy with, but I wish there were better documentation on how to do this.
UPDATE - Field injection doesn't work with this example but constructor injection does work.
Let's say I want to inject this:
@ApplicationScoped
public class CoffeeRepositoryService {
public CoffeeRepositoryService() {
System.out.println("Injection succeeded!");
}
}
Here's my Verticle
package org.acme;
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.vertx.core.AbstractVerticle;
import io.vertx.core.impl.logging.Logger;
import io.vertx.core.impl.logging.LoggerFactory;
import io.vertx.mutiny.core.eventbus.EventBus;
import io.vertx.mutiny.rabbitmq.RabbitMQClient;
import io.vertx.mutiny.rabbitmq.RabbitMQConsumer;
import io.vertx.rabbitmq.QueueOptions;
import io.vertx.rabbitmq.RabbitMQOptions;
import javax.inject.Inject;
public class RQVerticle extends AbstractVerticle {
private final Logger LOGGER = LoggerFactory.getLogger(org.acme.RQVerticle.class);
//This doesn't work - returns null
@Inject
CoffeeRepositoryService coffeeRepositoryService;
RQVerticle() {} // dummy constructor needed
@Inject // constructor injection - this does work
RQVerticle(CoffeeRepositoryService coffeeRepositoryService) {
//Here coffeeRepositoryService is injected properly
}
@Override
public Uni<Void> asyncStart() {
LOGGER.info(
"Creating RabbitMQ Connection after Quarkus successful initialization");
RabbitMQOptions config = new RabbitMQOptions();
config.setUri("amqp://localhost:5672");
RabbitMQClient client = RabbitMQClient.create(vertx, config);
Uni<Void> clientResp = client.start();
clientResp.subscribe()
.with(asyncResult -> {
LOGGER.info("RabbitMQ successfully connected!");
});
return clientResp;
}
}
Main Class - injection doesn't work like this
package org.acme;
import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
@QuarkusMain
public class Main {
public static void main(String... args) {
Quarkus.run(MyApp.class, args);
}
public static class MyApp implements QuarkusApplication {
@Override
public int run(String... args) throws Exception {
var vertx = Vertx.vertx();
System.out.println("Deployment Starting");
DeploymentOptions options = new DeploymentOptions()
.setInstances(2);
vertx.deployVerticleAndAwait(RQVerticle::new, options);
System.out.println("Deployment completed");
Quarkus.waitForExit();
return 0;
}
}
}
Main Class with working injection but cannot deploy more than one instance
package org.acme;
import io.quarkus.runtime.StartupEvent;
import io.vertx.core.DeploymentOptions;
import io.vertx.mutiny.core.Vertx;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import org.jboss.logging.Logger;
@ApplicationScoped
public class MainVerticles {
private static final Logger LOGGER = Logger.getLogger(MainVerticles.class);
public void init(@Observes StartupEvent e, Vertx vertx, RQVerticle verticle) {
DeploymentOptions options = new DeploymentOptions()
.setInstances(2);
vertx.deployVerticle(verticle,options).await().indefinitely();
}
}
Std Out - first main class looks good
2021-09-15 15:48:12,052 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-2) Creating RabbitMQ Connection after Quarkus successful initialization
2021-09-15 15:48:12,053 INFO [org.acm.RQVerticle] (vert.x-eventloop-thread-3) Creating RabbitMQ Connection after Quarkus successful initialization
Std Out - second main class
2021-09-22 15:48:11,986 ERROR [io.qua.run.Application] (Quarkus Main Thread) Failed to start application (with profile dev):
java.lang.IllegalArgumentException: Can't specify > 1 instances for already created verticle
What I'm trying to do is test an authentication handler, but my problem boils down to having no Session instance in the registry.
An example test:
package whatever
import groovy.transform.CompileStatic
import ratpack.groovy.handling.GroovyChainAction
import ratpack.groovy.test.handling.GroovyRequestFixture
import ratpack.http.Status
import ratpack.session.Session
import spock.lang.Specification
class SessionChainTest extends Specification {
GroovyChainAction sessionChain = new GroovyChainAction() {
@Override
@CompileStatic
void execute() throws Exception {
get('foo') {
Session s = get Session
// Stuff using session here
}
}
}
def "should get session"() {
given:
def result = GroovyRequestFixture.handle(sessionChain) {
uri 'foo'
method 'GET'
}
expect:
result.status == Status.OK
// If the server threw, rethrow that
Throwable t = result.exception(Throwable)
if (t) throw t // <<< Throws NotInRegistryException because no Session in registry
}
}
(The extra rethrow is in there to allow us to see the exception thrown within the ratpack test, because by default it is caught and stashed in the result.)
I know that in principle I could create a Session instance and add it to the registry with a registry { add <Session instance> } block, but I've delved into the Ratpack code, and creating a Session object requires getting a lot of disparate other components and passing them to SessionModule#sessionAdaptor (or the DefaultSession constructor). I can't find any examples of that being done, it appears this call is handled by Guice dependency-injection magic I can't unpick.
The usual way to do it in an application is to use a bind { module SessionModule } block but this isn't accessible from the context of RequestFixture#execute.
As sessions are bread and butter for any web application, my hunch is that this may be an easily solved problem; I just haven't found the right way to do it.
You can access the Registry through the GroovyRequestFixture.handle(handler, closure) method call, and there you can register, for example, a mocked Session object:
GroovyRequestFixture.handle(sessionChain) {
uri 'foo'
method 'GET'
registry { r ->
r.add(Session, session)
}
}
Take a look at the following example:
import groovy.transform.CompileStatic
import ratpack.exec.Promise
import ratpack.groovy.handling.GroovyChainAction
import ratpack.groovy.test.handling.GroovyRequestFixture
import ratpack.http.Status
import ratpack.jackson.internal.DefaultJsonRender
import ratpack.session.Session
import spock.lang.Specification
import static ratpack.jackson.Jackson.json
class SessionChainTest extends Specification {
Session session = Mock(Session) {
get('test') >> Promise.value(Optional.of('Lorem ipsum'))
}
GroovyChainAction sessionChain = new GroovyChainAction() {
@Override
@CompileStatic
void execute() throws Exception {
get('foo') {
Session s = get Session
s.get('test').map { Optional<String> o ->
o.orElse(null)
}.flatMap { value ->
Promise.value(value)
}.then {
render(json([message: it]))
}
}
}
}
def "should get session"() {
given:
def result = GroovyRequestFixture.handle(sessionChain) {
uri 'foo'
method 'GET'
registry { r ->
r.add(Session, session)
}
}
expect:
result.status == Status.OK
and:
result.rendered(DefaultJsonRender).object == [message: 'Lorem ipsum']
}
}
In this test I mock the Session object so that the key test returns the text Lorem ipsum. When running this test, both assertions pass.
Alternative approach: registering a Guice registry
If you don't want to use a mocked Session object, you can try replacing Ratpack's default Registry with a Guice registry object. First, initialize a function that creates the Guice registry and adds SessionModule via bindings:
static Function<Registry, Registry> guiceRegistry = Guice.registry { bindings ->
bindings.module(new SessionModule())
}
Next, inside the execute() method of GroovyChainAction, you can replace the default registry by calling:
register(guiceRegistry.apply(registry))
No mocks anymore, but in this case you can't access the Session object outside the request scope, so you won't be able to add anything to the session in the preparation stage of your test. Below you can find the full example:
import groovy.transform.CompileStatic
import ratpack.exec.Promise
import ratpack.func.Function
import ratpack.groovy.handling.GroovyChainAction
import ratpack.groovy.test.handling.GroovyRequestFixture
import ratpack.guice.Guice
import ratpack.http.Status
import ratpack.jackson.internal.DefaultJsonRender
import ratpack.registry.Registry
import ratpack.session.Session
import ratpack.session.SessionModule
import spock.lang.Specification
import static ratpack.jackson.Jackson.json
class SessionChainTest extends Specification {
static Function<Registry, Registry> guiceRegistry = Guice.registry { bindings ->
bindings.module(new SessionModule())
}
GroovyChainAction sessionChain = new GroovyChainAction() {
@Override
@CompileStatic
void execute() throws Exception {
register(guiceRegistry.apply(registry))
get('foo') {
Session s = get Session
s.get('test').map { Optional<String> o ->
o.orElse(null)
}.flatMap { value ->
Promise.value(value)
}.then {
render(json([message: it]))
}
}
}
}
def "should get session"() {
given:
def result = GroovyRequestFixture.handle(sessionChain) {
uri 'foo'
method 'GET'
}
expect:
result.status == Status.OK
and:
result.rendered(DefaultJsonRender).object == [message: null]
}
}
Hope it helps.
I am trying to attach a subscriber to an event in Esper, but I would like to use an .epl file for that. I've been browsing repositories and have seen examples of doing it with annotation interfaces. I was trying to do it the same way they do it in CoinTrader, but I can't seem to get it to work. Yet, if I set the subscriber in Java, it works.
This is my .epl file:
module queries;
import events.*;
import configDemo.*;
import annotations.*;
create schema MyTickEvent as TickEvent;
@Name('allEvents')
@Description('test')
@Subscriber(className='configDemo.TickSubscriber')
select * from TickEvent;
@Name('tickEvent')
@Description('Get a tick event every 3 seconds')
select currentPrice from TickEvent;
This is my config file:
<?xml version="1.0" encoding="UTF-8"?>
<esper-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.espertech.com/schema/esper"
xsi:noNamespaceSchemaLocation="esper-configuration-6-0.xsd">
<event-type-auto-name package-name="events"/>
<auto-import import-name="annotations.*"/>
<auto-import import-name="events.*"/>
<auto-import import-name="configDemo.*"/>
This is my Subscriber interface:
package annotations;
public @interface Subscriber {
String className();
}
This is my subscriber class:
package configDemo;
import events.TickEvent;
public class TickSubscriber {
public void update(TickEvent tick) {
System.out.println("Event registered by subscriber - Tick is: " +
tick.getCurrentPrice());
}
}
And my main file is this:
package configDemo;
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import com.espertech.esper.client.EPStatement;
import com.espertech.esper.client.deploy.DeploymentException;
import com.espertech.esper.client.deploy.DeploymentOptions;
import com.espertech.esper.client.deploy.Module;
import com.espertech.esper.client.deploy.ParseException;
public class Main {
public static EngineHelper engineHelper;
public static Thread engineThread;
public static boolean continuousSimulation = true;
public static void main(String[] args) throws DeploymentException, InterruptedException, IOException, ParseException {
engineHelper = new EngineHelper();
DeploymentOptions options = new DeploymentOptions();
options.setIsolatedServiceProvider("validation"); // we isolate any statements
options.setValidateOnly(true); // validate leaving no started statements
options.setFailFast(false); // do not fail on first error
Module queries = engineHelper.getDeployAdmin().read("queries.epl");
engineHelper.getDeployAdmin().deploy(queries, null);
CountDownLatch latch = new CountDownLatch(1);
EPStatement epl = engineHelper.getAdmin().getStatement("allEvents");
//epl.setSubscriber(new TickSubscriber());
engineThread = new Thread(new EngineThread(latch, continuousSimulation, engineHelper.getRuntime()));
engineThread.start();
}
}
As you can see, the setSubscriber line is commented out. When I run it as is, I expected the subscriber to be recognized and registered, yet it isn't; I only get the tick events flowing in the console. If I uncomment the line and run it, I get a notification after each tick that the subscriber received the event, and it all works fine.
What am I doing wrong? How can I set a subscriber within the .epl file?
Assigning a subscriber is done by the application and is not something that the engine does for you. The application code would need to loop through the statements, get the annotations via stmt.getAnnotations(), inspect them, and assign the subscriber.
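For example, here is a rough sketch of that wiring, run right after the deploy call in your main method. It assumes engineHelper.getAdmin() returns the EPAdministrator (as in your code) and that your annotations.Subscriber type is declared with @Retention(RetentionPolicy.RUNTIME) so the engine can hand it back at runtime:
import java.lang.annotation.Annotation;
import annotations.Subscriber;
import com.espertech.esper.client.EPAdministrator;
import com.espertech.esper.client.EPStatement;
// ... after engineHelper.getDeployAdmin().deploy(queries, null);
EPAdministrator admin = engineHelper.getAdmin();
for (String statementName : admin.getStatementNames()) {
    EPStatement statement = admin.getStatement(statementName);
    for (Annotation annotation : statement.getAnnotations()) {
        // only visible if the Subscriber annotation is retained at runtime
        if (annotation instanceof Subscriber) {
            try {
                // instantiate the class named in the EPL annotation and attach it
                Object subscriber = Class.forName(((Subscriber) annotation).className())
                        .getDeclaredConstructor().newInstance();
                statement.setSubscriber(subscriber);
            } catch (ReflectiveOperationException ex) {
                throw new RuntimeException("Could not create subscriber for " + statementName, ex);
            }
        }
    }
}
With that loop in place you should no longer need the hard-coded setSubscriber line.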
I have been inventing a way to work around the problem of adding consumers to a Jetty endpoint (it does not allow multiple consumers). The way we do it in our company is to build our own router and a broadcasting endpoint which consumes from Jetty and routes requests to underlying "subscriptions"; only one of them will eventually process the request. It kind of works, but it's not completely OK: recently, when updating to the latest Camel, we found that our custom-built component leaks memory, and in general I prefer using built-in functionality over custom hacks.
I started investigating the Camel REST API and found it very nice, pretty much replacing our home-grown component, apart from one thing: you cannot reconfigure it at runtime; you basically have to stop the context for this to work. Below I include my unit test with a happy path and the path that fails. Frankly, I think this is a bug, but if there is a legitimate way to achieve what I want, I'd like to hear sound advice:
package com.anydoby.camel;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.commons.io.IOUtils;
import org.junit.Before;
import org.junit.Test;
/**
* Test tries to add/remove routes at runtime.
*/
public class RoutesTest {
private DefaultCamelContext ctx;
@Before
public void pre() throws Exception {
ctx = new DefaultCamelContext();
new RouteBuilder(ctx) {
@Override
public void configure() throws Exception {
restConfiguration("jetty").host("localhost").port(8080);
rest("/")
.get("/issues/{isin}").route().id("issues")
.process(e -> e.getOut().setBody("Here's your issue " + e.getIn().getHeader("isin"))).endRest()
.get("/listings").route().id("listings").process(e -> e.getOut().setBody("some listings"));
}
}.addRoutesToCamelContext(ctx);
ctx.start();
}
@Test
public void test() throws IOException {
{
InputStream stream = new URL("http://localhost:8080/issues/35").openStream();
assertEquals("Here's your issue 35", IOUtils.toString(stream));
}
{
InputStream stream = new URL("http://localhost:8080/listings").openStream();
assertEquals("some listings", IOUtils.toString(stream));
}
}
@Test
public void disableRoute() throws Exception {
ctx.stopRoute("issues");
ctx.removeRoute("issues");
try (InputStream stream = new URL("http://localhost:8080/issues/35").openStream()) {
fail();
} catch (Exception e) {
}
new RouteBuilder(ctx) {
@Override
public void configure() throws Exception {
rest().get("/issues/{isin}/{sedol}").route().id("issues")
.process(e -> e.getOut()
.setBody("Here's your issue " + e.getIn().getHeader("isin") + ":" + e.getIn().getHeader("sedol")))
.endRest();
}
}.addRoutesToCamelContext(ctx);
{
InputStream stream = new URL("http://localhost:8080/issues/35/65").openStream();
assertEquals("Here's your issue 35:65", IOUtils.toString(stream));
}
}
}
The disableRoute() test fails since I cannot add another consumer to an existing endpoint.
So my question is: is there a way to add a new URL mapping to a RESTful camel-jetty endpoint? If you do it during the first configuration it works fine, but when you later want to reconfigure one of the routes, the error is:
org.apache.camel.FailedToStartRouteException: Failed to start route because of Multiple consumers for the same endpoint is not allowed: jetty:http://localhost:8080/issues/%7Bisin%7D/%7Bsedol%7D?httpMethodRestrict=GET
I want to run unit tests on a database other than the default one. Here is my application.conf:
application.secret="[cut]"
application.langs="en"
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost:3306/city_game?characterEncoding=UTF-8"
db.default.user=root
db.default.password=""
db.test.driver=com.mysql.jdbc.Driver
db.test.url="jdbc:mysql://localhost:3306/play_test?characterEncoding=UTF-8"
db.test.user=root
db.test.password=""
ebean.default="models.*"
ebean.test="models.*"
logger.root=ERROR
logger.play=INFO
logger.application=DEBUG
BaseModelTest.java:
package models;
import com.avaje.ebean.Ebean;
import com.avaje.ebean.EbeanServer;
import com.avaje.ebean.config.ServerConfig;
import com.avaje.ebeaninternal.server.ddl.DdlGenerator;
import com.avaje.ebean.config.dbplatform.MySqlPlatform;
import com.avaje.ebeaninternal.api.SpiEbeanServer;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import play.test.FakeApplication;
import play.test.Helpers;
import java.io.IOException;
public class BaseModelTest
{
public static FakeApplication app;
public static DdlGenerator ddl;
@BeforeClass
public static void startApp() throws IOException
{
app = Helpers.fakeApplication();
Helpers.start(app);
String serverName = "test";
EbeanServer server = Ebean.getServer(serverName);
ServerConfig config = new ServerConfig();
ddl = new DdlGenerator();
ddl.setup((SpiEbeanServer) server, new MySqlPlatform(), config);
}
@AfterClass
public static void stopApp()
{
Helpers.stop(app);
}
@Before
public void dropCreateDb() throws IOException
{
// Drop
ddl.runScript(false, ddl.generateDropDdl());
// Create
ddl.runScript(false, ddl.generateCreateDdl());
}
}
However, I get the results saved in the default database, while the test one has its tables created but left empty. What I expect is to have the results written to the test db and the default one untouched.
I ended up with a somewhat different approach.
I still created a separate, real test database instance (because of stored procedures), but I used a Play 1-like approach instead.
I have separate configuration overlays beneath my main configuration (e.g. test configuration, prod-specific stuff, stage-specific stuff, etc.).
I load them via Global.scala as shown below (please note the example below works in the Play for Java version as well):
object Global extends GlobalSettings {
override def onLoadConfig(config: Configuration, path: File, cl: ClassLoader, mode: Mode.Mode): Configuration = {
val modeFile: String = s"application.${mode.toString.toLowerCase}.conf"
Logger.error(s"Loading {${path.toURI}conf/application.conf}")
Logger.error(s"Appending mode specific configuration {${path.toURI}conf/$modeFile}")
val modeConfig = config ++ Configuration(ConfigFactory.load(modeFile))
super.onLoadConfig(modeConfig, path, cl, mode)
}
}
And the application.test.conf config file is as follows:
# test database
db.default.logStatements=false
db.default.jndiName=DefaultDS
db.default.url="jdbc:postgresql://127.0.0.1:5432/db-test"
db.default.user=user
db.default.password="password!##$"
db.default.driver=org.postgresql.Driver
This way I get the following benefits:
I still write my tests the usual way.
Play evolutions get tested on CI/Jenkins as well.
I have to write my tests in a way that I can safely rerun them on an existing db instance with minimal assumptions about data and user base. That way I'm 90% certain I will be able to run them against a staging/prod environment with much less friction. (Controversial point)
I think you should separate your code like this:
@BeforeClass
public static void startApp() throws IOException {
app = Helpers.fakeApplication();
Helpers.start(app);
}
@Before
public void dropCreateDb() throws IOException {
String serverName = "test";
EbeanServer server = Ebean.getServer(serverName);
ServerConfig config = new ServerConfig();
DdlGenerator ddl = new DdlGenerator((SpiEbeanServer) server, new MySqlPlatform(), config);
// Drop
ddl.runScript(false, ddl.generateDropDdl());
// Create
ddl.runScript(false, ddl.generateCreateDdl());
}