Load neo4j dump in test container during integration test run - java

I am trying to write an integration test for Neo4j using Spring Boot and Testcontainers. Can anyone help me with how to load a database dump file into the test container?

Here's one way to do this (using the current Neo4j 5.3). I have a dump file called neo4j.dump, created from a local instance, and I use withCopyFileToContainer to copy it into the container before it starts.
As I use the Community Edition in this example, there is no online backup/restore, only dump/load. The load therefore needs to happen before Neo4j starts, and I have to change the startup command: I can't simply stop Neo4j inside the container, because that would stop the container itself.
So I create a small shell script that executes the load command and then delegates to the original entry point.
This script is transferred with file mode 0100555, corresponding to r-xr-xr-x, so that it is executable.
Finally, the container is started with the above script as its command.
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.testcontainers.containers.Neo4jContainer;
import org.testcontainers.images.builder.Transferable;
import org.testcontainers.utility.MountableFile;
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class LoadDumpTest {

    Neo4jContainer<?> neo4j;
    Driver driver;

    @BeforeAll
    void initNeo4j() {
        neo4j = new Neo4jContainer<>("neo4j:5.3.0")
                .withCopyFileToContainer(MountableFile.forClasspathResource("neo4j.dump"),
                        "/var/lib/neo4j/data/dumps/neo4j.dump")
                .withCopyToContainer(Transferable.of("""
                        #!/bin/bash -eu
                        /var/lib/neo4j/bin/neo4j-admin database load neo4j
                        /startup/docker-entrypoint.sh neo4j
                        """, 0100555), "/startup/load-dump-and-start.sh")
                .withCommand("/startup/load-dump-and-start.sh")
                .withLogConsumer(f -> System.out.print(f.getUtf8String()));
        neo4j.start();
        driver = GraphDatabase.driver(neo4j.getBoltUrl(), AuthTokens.basic("neo4j", neo4j.getAdminPassword()));
    }

    @Test
    void dataShouldHaveBeenLoaded() {
        try (var session = driver.session()) {
            var numNodes = session.run("MATCH (n) RETURN count(n)").single().get(0).asLong();
            Assertions.assertTrue(numNodes > 0);
        }
    }

    @AfterAll
    void stopNeo4j() {
        neo4j.stop();
    }
}
Edit:
If you are on a recent Enterprise Edition, Christophe suggested the following solution on Twitter, which I personally think is superior, as it is less hacky.
https://github.com/ikwattro/neo4j-5-testcontainers-restore-backup/blob/main/src/test/java/dev/ikwattro/Neo4j5RestoreBackupExampleTest.java
It makes use of seed URIs for databases. Copying or binding the resource in his example works the same way as in mine.

If you're using Neo4j 5, I suggest you make backups instead of dumps. With backups you can take advantage of the seedUri option when creating a database, meaning you can create a database from a URI pointing to a backup on disk (or in the Neo4j container). https://neo4j.com/docs/operations-manual/current/clustering/databases/#cluster-seed-uri
Here is an example using Testcontainers
package dev.ikwattro;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;
import org.neo4j.driver.SessionConfig;
import org.testcontainers.containers.BindMode;
import org.testcontainers.containers.Neo4jContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import static org.assertj.core.api.Assertions.assertThat;
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
@Testcontainers(disabledWithoutDocker = true)
public class Neo4j5RestoreBackupExampleTest {

    @Container
    private Neo4jContainer<?> neo4j = new Neo4jContainer<>("neo4j:5.3.0-enterprise")
            .withAdminPassword("password")
            .withEnv("NEO4J_dbms_memory_heap_max__size", "256M")
            .withEnv("NEO4J_dbms_databases_seed__from__uri__providers", "URLConnectionSeedProvider")
            .withClasspathResourceMapping("backups", "/backups", BindMode.READ_ONLY)
            .withEnv("NEO4J_ACCEPT_LICENSE_AGREEMENT", "yes");

    @BeforeAll
    void beforeAll() {
        neo4j.start();
        createDbFromBackup();
    }

    @Test
    void testCreatingDbFromBackup() {
        try (Driver driver = GraphDatabase.driver(neo4j.getBoltUrl(), AuthTokens.basic("neo4j", "password"))) {
            try (Session session = driver.session(SessionConfig.forDatabase("worldcup22"))) {
                var result = session.run("MATCH (n) RETURN count(n) AS c").single().get("c").asLong();
                assertThat(result).isPositive();
            }
        }
    }

    private void createDbFromBackup() {
        try (Driver driver = GraphDatabase.driver(neo4j.getBoltUrl(), AuthTokens.basic("neo4j", "password"))) {
            try (Session session = driver.session(SessionConfig.forDatabase("system"))) {
                session.run("""
                        CREATE DATABASE worldcup22 OPTIONS { existingData: "use", seedUri: "file:///backups/world-cup-2022-neo4j.backup"}
                        """);
            }
        }
    }
}
You can find a working maven project here https://github.com/ikwattro/neo4j-5-testcontainers-restore-backup

Related

There is unknown field "container" in Tekton

I'm interested in Tekton these days.
However, I have run into an issue when implementing a Task with the Java fabric8.tekton APIs.
The TaskBuilder class offers an API for adding steps to the spec as whole containers (withContainer).
However, at runtime I get the error message from the title ("There is unknown field \"container\"").
Can I get some advice?
Tekton version: v0.10.1
I used the following packages:
io.fabric8:kubernetes-client:4.7.1
io.fabric8:tekton-client:4.7.1
Here is my complete test code.
package com.example.tekton;
import java.util.ArrayList;
import java.util.List;
import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.ContainerBuilder;
import io.fabric8.kubernetes.client.BaseClient;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.tekton.client.TektonClient;
import io.fabric8.tekton.client.DefaultTektonClient;
import io.fabric8.tekton.client.handlers.TaskHandler;
import io.fabric8.tekton.client.handlers.TaskRunHandler;
import io.fabric8.tekton.pipeline.v1alpha1.ArrayOrString;
import io.fabric8.tekton.pipeline.v1alpha1.Task;
import io.fabric8.tekton.pipeline.v1alpha1.TaskBuilder;
import io.fabric8.tekton.pipeline.v1alpha1.TaskRun;
import io.fabric8.tekton.pipeline.v1alpha1.TaskRunBuilder;
import io.fabric8.tekton.pipeline.v1alpha1.TaskRefBuilder;
public class DefaultKubernetesTest {

    public Task getTask() {
        Container con = new ContainerBuilder()
                .withNewImage("ubuntu")
                .withNewName("echo-hello-world")
                .addNewCommand("echo")
                .addNewArg("hello jinwon world")
                .build();
        Task task = new TaskBuilder()
                .withApiVersion("tekton.dev/v1alpha1")
                .withKind("Task")
                .withNewMetadata()
                    .withName("echo-hello-world-test")
                .endMetadata()
                .withNewSpec()
                    .addNewStep()
                        .withContainer(con)
                    .endStep()
                .endSpec()
                .build();
        return task;
    }

    public TaskRun getTaskRun() {
        TaskRun taskRun = new TaskRunBuilder()
                .withNewMetadata()
                    .withName("taskrun")
                .endMetadata()
                .withNewSpec()
                    .withTaskRef(new TaskRefBuilder().withName("echo-hello-world-test").withApiVersion("tekton.dev/v1alpha1").withKind("Task").build())
                .endSpec()
                .build();
        return taskRun;
    }

    public static void main(String[] args) {
        ConfigBuilder config = new ConfigBuilder();
        DefaultKubernetesTest kubeTest = new DefaultKubernetesTest();
        String username = "testUser";
        String password = "testPwd";
        config = config.withMasterUrl("https://192.168.6.236:6443");
        config = config.withUsername(username);
        config = config.withPassword(password);
        Config kubeConfig = config.build();
        try (DefaultTektonClient test = new DefaultTektonClient(kubeConfig)) {
            Task task = kubeTest.getTask();
            TaskRun taskRun = kubeTest.getTaskRun();
            test.tasks().inNamespace("test").create(task);
            test.taskRuns().inNamespace("test").create(taskRun);
            test.close();
        }
    }
}
Tekton ships with an admission controller, which validates the CRD specs before allowing them into the cluster. Because the project is still in alpha, it's moving quite fast. Fabric8 may be templating out the Kubernetes objects against a different spec from the one installed on your cluster. You should be able to validate the spec version used in Fabric8, remove all the Tekton objects in your cluster, and re-apply them at a specific version.
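Not part of the original answer, but as a hedged sketch: newer versions of the fabric8 tekton-client flatten the step, so the container fields (name, image, command, args) are set directly on the step builder instead of through withContainer, which avoids serializing a nested "container" field. Assuming such a client version, the task from the question would look roughly like this:
// Hedged sketch: assumes a fabric8 tekton-client version whose Step builder
// exposes the inlined container fields (withName/withImage/withCommand/withArgs).
Task task = new TaskBuilder()
        .withApiVersion("tekton.dev/v1alpha1")
        .withKind("Task")
        .withNewMetadata()
            .withName("echo-hello-world-test")
        .endMetadata()
        .withNewSpec()
            .addNewStep()
                .withName("echo-hello-world")
                .withImage("ubuntu")
                .withCommand("echo")
                .withArgs("hello jinwon world")
            .endStep()
        .endSpec()
        .build();
The rest of the client code (creating the Task and TaskRun in the namespace) stays the same.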

How to run a JMeter JMX file in Java code?

I am trying to run a JMeter MS SQL database test plan from Java code through my Spring Boot app, but it's showing the following errors:
I have loaded the Plugins Manager in JMeter, put it in the jmeter/lib/ext folder, and installed all the required plugins.
Java code to run JMeter test case:
package com.example.demofin;
import java.io.File;
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.reporters.ResultCollector;
import org.apache.jmeter.reporters.Summariser;
import org.apache.jmeter.save.SaveService;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import com.example.demofin.config.Properties;
@Component
public class Demofin implements CommandLineRunner {

    @Autowired
    private Properties properties;

    private Summariser result;

    @Override
    public void run(String... args) throws Exception {
        for (String i : args)
            System.out.println(i);

        // String j = properties.getPropertyByKey("JMETER_HOME");
        String j = "D:\\apache-jmeter-4.0";
        // System.out.println(j);
        // System.out.println("Jmeter home path: " + properties.getPropertyByKey("JMETER_HOME"));

        StandardJMeterEngine jmeter = new StandardJMeterEngine();

        // Initialize Properties, logging, locale, etc.
        JMeterUtils.setJMeterHome(j);
        JMeterUtils.loadJMeterProperties(j + "/bin/jmeter.properties");

        // you can comment this line out to see extra log messages of i.e. DEBUG level
        JMeterUtils.initLogging();
        JMeterUtils.initLocale();

        // Initialize JMeter SaveService
        SaveService.loadProperties();

        // Load existing .jmx Test Plan
        HashTree testPlanTree = SaveService.loadTree(new File("D:\\apache-jmeter-4.0\\" + "bin\\JDBC Connection Configuration.jmx"));

        Summariser summer = null;
        String summariserName = JMeterUtils.getPropDefault("summariser.name", "summary");
        if (summariserName.length() > 0) {
            summer = new Summariser(summariserName);
        }
        ResultCollector logger = new ResultCollector(summer);
        testPlanTree.add(testPlanTree.getArray()[0], logger);

        // Run JMeter Test
        jmeter.configure(testPlanTree);
        jmeter.run();
        result = summer;
    }
}
Usually, we compile the Java code to a jar, push it to jmeter/lib/ext, and use it from inside JMeter.
If you want to drive JMeter from Java instead, I suggest downloading the JMeter source code and merging what you need into your project.
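If you do keep the embedded approach from the question, one optional addition (not part of the original answer) is to point the ResultCollector at a .jtl file so the sample results can be inspected after the run; the file path below is only a placeholder:
// Optional: write sample results to a JTL file for later inspection (placeholder path).
Summariser summer = new Summariser(JMeterUtils.getPropDefault("summariser.name", "summary"));
ResultCollector logger = new ResultCollector(summer);
logger.setFilename("D:\\apache-jmeter-4.0\\bin\\result.jtl");
testPlanTree.add(testPlanTree.getArray()[0], logger);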

How to connect to multiple databases in Spring Boot JPA?

I currently have one database connected and it is working. I would like to connect another (and eventually 2 more) databases. How do I do so? There should be a solution using only annotations and properties files.
I also read this question, How to use 2 or more databases with spring?, but I don't understand how it works well enough, or whether it applies here. I'm not using a controller class and I don't know what that does. I'm also not sure how the config class they mention in the answer actually connects to the specific DO.
This is my application.properties file (the username and password are masked here, but they are present in my file):
hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
hibernate.show_sql=true
hibernate.format_sql=true
hibernate.default_schema=dbo
hibernate.packagesToScan=src.repositories.LMClientRepository.java
spring.jpa.generate-ddl=true
spring.jpa.hibernate.naming-strategy=org.hibernate.cfg.DefaultNamingStrategy
spring.datasource.username=***
spring.datasource.password=***
spring.datasource.url=jdbc:sqlserver://schqvsqlaod:1433;database=dbMOBClientTemp;integratedSecurity=false;
spring.datasource.testOnBorrow=true
spring.datasource.validationQuery=SELECT 1
spring.jpa.database=dbMOBClientTemp
spring.jpa.show-sql=true
spring.jpa.hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect
spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
This is my application file:
package testApplication;
import java.util.ArrayList;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.orm.jpa.EntityScan;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import fileRetrieval.InputFileParse;
import lmDataObjects.LMClientDO;
import lmDataObjects.LoadMethodDO;
import repositories.LMClientRepository;
import repositories.LoadMethodRepository;
@SpringBootApplication
@EnableJpaRepositories(basePackageClasses = LoadMethodRepository.class)
@EntityScan(basePackageClasses = LoadMethodDO.class)
@EnableCaching
public class Application {

    private static final Logger log = LoggerFactory.getLogger(Application.class);

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public CommandLineRunner demo(LoadMethodRepository lm_repo, LMClientRepository lmc_repo) {
        return (args) -> {
            List<LMClientDO> lmlist = InputFileParse.getMultiGroupfile();
            List<String> uniqueMediaIds = new ArrayList(InputFileParse.getUniqueMediaIds());
            for (int i = 0; i < InputFileParse.getUniqueMediaIds().size(); i++) {
                lm_repo.save(new LoadMethodDO(uniqueMediaIds.get(i)));
            }
            for (int i = 0; i < lmlist.size(); i++) {
                lmc_repo.save(new LMClientDO(lmlist.get(i).getClientId(), lmlist.get(i).getMediaId()));
            }
            // Here is where I would like to do stuff with data from the other database that I have not connected yet
        };
    }
}
I also made a new properties file called application-MTS.properties and I put data for the new database in there. Still unsure of what to do with it.
spring.datasource.username=***
spring.datasource.password=***
spring.datasource.url=jdbc:sqlserver://SCHQVSQLCON2\VSPD:1433;database=dbMTS;integratedSecurity=false;
Spring Data has recognized that this is a common use case, so they created an example project showing how to do it. I would review the spring-data-examples project for multiple data sources.
The important aspect is to look at the OrderConfig and CustomerConfig classes, as they define the two data sources.
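To give a rough idea of the pattern used there, here is a minimal sketch of a configuration class for the second database in this question. The property prefix (mts.datasource), package names, and bean names are hypothetical, the DataSourceBuilder import depends on your Boot version, and the first data source would get a similar class with @Primary on its beans:
import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder; // org.springframework.boot.jdbc.DataSourceBuilder in newer Boot versions
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;

// Hypothetical second data source configuration; adapt prefixes and packages to your project.
@Configuration
@EnableJpaRepositories(
        basePackages = "repositories.mts",                 // repositories backed by the second DB (hypothetical package)
        entityManagerFactoryRef = "mtsEntityManagerFactory",
        transactionManagerRef = "mtsTransactionManager")
public class MtsDataSourceConfig {

    @Bean
    @ConfigurationProperties("mts.datasource")             // e.g. mts.datasource.url=... in application.properties
    public DataSource mtsDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean mtsEntityManagerFactory(EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(mtsDataSource())
                .packages("lmDataObjects.mts")              // entities mapped to the second DB (hypothetical package)
                .persistenceUnit("mts")
                .build();
    }

    @Bean
    public PlatformTransactionManager mtsTransactionManager(
            @Qualifier("mtsEntityManagerFactory") EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}
Repositories in the "mts" package then transparently use the second database, while the rest keep using the primary one.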
For plain Hibernate, you can have multiple configuration (cfg.xml/hbm.xml) files, load the one you need in a class, and use it.
Something like this:
if ("yourCondition".equals(definedCondition)) {
sf = new Configuration().configure("example.cfg.xml").buildSessionFactory();
} else {
sf = new Configuration().configure("exampleTwo.cfg.xml").buildSessionFactory();
}
You can have separate database and login info in the xml files.
If you want two separate connections, you can store them in separate methods and call them to create multiple sessions.
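A rough sketch of that idea, with hypothetical file names; each factory reads its own connection settings from its own cfg.xml:
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Hypothetical helper: one SessionFactory per database, each configured from its own cfg.xml.
public class SessionFactories {

    public static SessionFactory primary() {
        return new Configuration().configure("example.cfg.xml").buildSessionFactory();
    }

    public static SessionFactory secondary() {
        return new Configuration().configure("exampleTwo.cfg.xml").buildSessionFactory();
    }
}
Calling openSession() on each factory then gives you sessions against the respective database.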

How to run Spring Shell scripts in a JUnit test

I have a Spring Shell-based application and a couple of scripts. Is there an easy way to run the scripts in a JUnit test such that a test fails if some exception/error occurs during the execution of the script?
The purpose of the tests is to make sure that all correct scripts run without errors.
Update 1:
Here's a little helper class for running scripts in JUnit:
import org.apache.commons.io.FileUtils;
import org.springframework.shell.Bootstrap;
import org.springframework.shell.core.CommandResult;
import org.springframework.shell.core.JLineShellComponent;
import java.io.File;
import java.io.IOException;
import java.util.List;
import static org.fest.assertions.api.Assertions.*;
public class ScriptRunner {

    public void runScript(final File file) throws IOException {
        final Bootstrap bootstrap = new Bootstrap();
        final JLineShellComponent shell = bootstrap.getJLineShellComponent();
        final List<String> lines = FileUtils.readLines(file);
        for (final String line : lines) {
            execVerify(line, shell);
        }
    }

    private void execVerify(final String command, final JLineShellComponent shell) {
        final CommandResult result = shell.executeCommand(command);
        assertThat(result.isSuccess()).isTrue();
    }
}
You can create an instance of Bootstrap, get the shell out of it, and then call executeCommand() on it, passing the shell command.
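For completeness, a minimal JUnit test that drives the helper above could look like this; the script location is hypothetical:
import java.io.File;
import org.junit.Test;

// Minimal test driving the ScriptRunner helper above; the script path is hypothetical.
public class ScriptsTest {

    @Test
    public void allScriptsRunWithoutErrors() throws Exception {
        new ScriptRunner().runScript(new File("src/test/resources/scripts/setup.script"));
    }
}
The assertions inside ScriptRunner make the test fail as soon as one command in the script does not succeed.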
You may be interested in what is done in Spring XD for this: https://github.com/spring-projects/spring-xd/blob/master/spring-xd-shell/src/test/java/org/springframework/xd/shell/AbstractShellIntegrationTest.java (although there are a lot of XD specific details)

How to declare second database for testing in Play Framework?

I want to run unit tests on a database other than the default one. Here is my application.conf:
application.secret="[cut]"
application.langs="en"
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost:3306/city_game?characterEncoding=UTF-8"
db.default.user=root
db.default.password=""
db.test.driver=com.mysql.jdbc.Driver
db.test.url="jdbc:mysql://localhost:3306/play_test?characterEncoding=UTF-8"
db.test.user=root
db.test.password=""
ebean.default="models.*"
ebean.test="models.*"
logger.root=ERROR
logger.play=INFO
logger.application=DEBUG
BaseModelTest.java:
package models;
import com.avaje.ebean.Ebean;
import com.avaje.ebean.EbeanServer;
import com.avaje.ebean.config.ServerConfig;
import com.avaje.ebeaninternal.server.ddl.DdlGenerator;
import com.avaje.ebean.config.dbplatform.MySqlPlatform;
import com.avaje.ebeaninternal.api.SpiEbeanServer;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import play.test.FakeApplication;
import play.test.Helpers;
import java.io.IOException;
public class BaseModelTest {

    public static FakeApplication app;
    public static DdlGenerator ddl;

    @BeforeClass
    public static void startApp() throws IOException {
        app = Helpers.fakeApplication();
        Helpers.start(app);

        String serverName = "test";
        EbeanServer server = Ebean.getServer(serverName);
        ServerConfig config = new ServerConfig();
        ddl = new DdlGenerator();
        ddl.setup((SpiEbeanServer) server, new MySqlPlatform(), config);
    }

    @AfterClass
    public static void stopApp() {
        Helpers.stop(app);
    }

    @Before
    public void dropCreateDb() throws IOException {
        // Drop
        ddl.runScript(false, ddl.generateDropDdl());
        // Create
        ddl.runScript(false, ddl.generateCreateDdl());
    }
}
However, I get results saved in the default database, and the test one has its tables created but left empty. What I expect is to have the results written to the test db and the default one untouched.
I somehow ended up with a different approach.
I still created a separate, real test database instance (because of stored procedures), but I used the Play 1-like approach instead.
I have separate configuration files beneath my main configuration (e.g. test configuration, prod-specific stuff, stage-specific stuff, etc.).
I load them via Global.scala as shown below (please note the example provided below works with the Java version of Play as well):
object Global extends GlobalSettings {
  override def onLoadConfig(config: Configuration, path: File, cl: ClassLoader, mode: Mode.Mode): Configuration = {
    val modeFile: String = s"application.${mode.toString.toLowerCase}.conf"
    Logger.error(s"Loading {${path.toURI}conf/application.conf}")
    Logger.error(s"Appending mode specific configuration {${path.toURI}conf/$modeFile}")
    val modeConfig = config ++ Configuration(ConfigFactory.load(modeFile))
    super.onLoadConfig(modeConfig, path, cl, mode)
  }
}
And the application.test.conf config file is as follows:
# test database
db.default.logStatements=false
db.default.jndiName=DefaultDS
db.default.url="jdbc:postgresql://127.0.0.1:5432/db-test"
db.default.user=user
db.default.password="password!##$"
db.default.driver=org.postgresql.Driver
This way I get the following benefits:
I still write my tests the usual way.
Play evolutions get tested on CI/Jenkins as well.
I have to write my tests in a way that lets me safely rerun them on an existing db instance with minimal assumptions about the data and user base. That way I'm 90% certain I will be able to run them against the staging/prod environment with much less friction. (Controversial point)
I think you should separate your code like this:
@BeforeClass
public static void startApp() throws IOException {
    app = Helpers.fakeApplication();
    Helpers.start(app);
}

@Before
public void dropCreateDb() throws IOException {
    String serverName = "test";
    EbeanServer server = Ebean.getServer(serverName);
    ServerConfig config = new ServerConfig();
    DdlGenerator ddl = new DdlGenerator((SpiEbeanServer) server, new MySqlPlatform(), config);
    // Drop
    ddl.runScript(false, ddl.generateDropDdl());
    // Create
    ddl.runScript(false, ddl.generateCreateDdl());
}
