How can I do integration testing in Elasticsearch with spring-data-elasticsearch?

I am using spring-data-elasticsearch v3.2.4.RELEASE, which is available via spring-boot-starter-data-elasticsearch v2.2.4.RELEASE.
I want to write integration tests for this, but the available option, https://github.com/allegro/embedded-elasticsearch, is not working.
The proof of concept I tried is below, and it throws an exception:
import static pl.allegro.tech.embeddedelasticsearch.PopularProperties.CLUSTER_NAME;
import static pl.allegro.tech.embeddedelasticsearch.PopularProperties.HTTP_PORT;
import static pl.allegro.tech.embeddedelasticsearch.PopularProperties.TRANSPORT_TCP_PORT;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.After;
import org.junit.Test;
import org.springframework.util.ResourceUtils;
import pl.allegro.tech.embeddedelasticsearch.EmbeddedElastic;
import pl.allegro.tech.embeddedelasticsearch.IndexSettings;

public class EmbeddedElasticConfiguration {

    public static final String VERSION = "6.8.4";
    public static final String DOWNLOAD_DIRECTORY = "<path>\\test-elasticsearch";
    public static final String INSTALLATION_DIRECTORY = "<path>\\test-elasticsearch";
    public static final String NAME = "elasticsearch";
    public static final String TRANSPORT_PORT = "9300";
    public static final String HTTP_CLIENT_PORT = "9200";
    public static final String TEST_INDEX = "salesorder";
    public static final String TEST_TYPE = "salesorder";
    public static final String RESOURCE_LOCATION = "src/test/resources/salesorder-mapping.json";

    private ObjectMapper objectMapper = new ObjectMapper();

    EmbeddedElastic embeddedElastic;

    @Test
    public void configure() throws IOException, InterruptedException {
        embeddedElastic = EmbeddedElastic.builder()
                .withElasticVersion(VERSION)
                .withSetting(TRANSPORT_TCP_PORT, 9300)
                .withSetting(CLUSTER_NAME, "my-cluster")
                //.withPlugin("analysis-stempel")
                .withDownloadDirectory(new File(DOWNLOAD_DIRECTORY))
                .withInstallationDirectory(new File(INSTALLATION_DIRECTORY))
                .withSetting(HTTP_PORT, 9200)
                .withIndex(TEST_INDEX, IndexSettings.builder()
                        .withType(TEST_TYPE, readMappingFromJson())
                        .build())
                .build();
        embeddedElastic.start();
    }

    private String readMappingFromJson() throws IOException {
        final File file = ResourceUtils.getFile(RESOURCE_LOCATION);
        String mapping = new String(Files.readAllBytes(file.toPath()));
        System.out.println("mapping: " + mapping);
        return mapping;
    }

    @After
    public void stopServer() {
        embeddedElastic.stop();
    }
}
I am getting the below exception:
pl.allegro.tech.embeddedelasticsearch.EmbeddedElasticsearchStartupException: Failed to start elasticsearch within time-out
    at pl.allegro.tech.embeddedelasticsearch.ElasticServer.waitForElasticToStart(ElasticServer.java:127)
    at pl.allegro.tech.embeddedelasticsearch.ElasticServer.start(ElasticServer.java:50)
    at pl.allegro.tech.embeddedelasticsearch.EmbeddedElastic.startElastic(EmbeddedElastic.java:82)
    at pl.allegro.tech.embeddedelasticsearch.EmbeddedElastic.start(EmbeddedElastic.java:63)
    at com.xxx.elasticsearch.adapter.configuration.EmbeddedElasticConfiguration.configure(EmbeddedElasticConfiguration.java:50)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Can someone suggest other options for integration testing Elasticsearch with Spring Data, or how I should write integration tests for Elasticsearch?
I know there are other answers on Stack Overflow and other portals about embedded-elasticsearch, but they do not work with my current Elasticsearch version.

You did not write what version of JUnit you are using. I can tell you how we handle this in Spring Data Elasticsearch itself:
For JUnit 4 you can check the JUnit 4 Rule that uses the Utils class to set up a locally running Elasticsearch node and tear it down at the end.
For JUnit 5 you might have a look at how this is handled in the current master branch; the relevant classes are found here.
By using the annotation SpringIntegrationTest, a local Elasticsearch is started and automatically shut down when all tests are done. Internally, quite some work is done to set up the cluster, get the info into the JUnit extension, and enable Spring autowiring of the relevant information into the configuration class. This setup is quite complex, but in the end it uses the same Utils class mentioned above.
I hope this gives you a good starting point.
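If the allegro embedded-elasticsearch library keeps timing out, another option worth a sketch (not part of the answer above, and it assumes Docker is available) is the Testcontainers Elasticsearch module, which starts a real Elasticsearch in a container; the class name below is illustrative:

import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.elasticsearch.ElasticsearchContainer;

public class ElasticsearchContainerIT {

    // The image tag matches the 6.8.4 version from the question.
    @ClassRule
    public static final ElasticsearchContainer container =
            new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:6.8.4");

    @Test
    public void containerIsRunning() {
        // getHttpHostAddress() returns the mapped host:port of the HTTP endpoint,
        // which can be fed into the Spring Data Elasticsearch client configuration.
        System.out.println("Elasticsearch at " + container.getHttpHostAddress());
    }
}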

Related

Access a field in a RabbitMQ Testcontainers extension class in JUnit 5

I am using Testcontainers to run a RabbitMQ instance for my integration tests.
I created a JUnit 5 extension class that implements the BeforeAllCallback interface to run the container only once before my tests. To connect to the container, I need to retrieve the mapped port exposed on my host machine, so I am wondering whether there is any way to access the extension class field from my integration test class.
The Extension
public class RmqExtension implements BeforeAllCallback {

    private static final String DEFAULT_USER = "guest";
    private static final String DEFAULT_PASS = "guest";
    public static final int RABBIT_HTTP_API_PORT = 15672;
    private static final String RABBIT_MQ_IMAGE_NAME = "rmqImage";
    private static final String RABBIT_MQ_OVERVIEW_PATH = "/api/overview";

    private static final GenericContainer rabbitMqContainer = new GenericContainer(DockerImageName.parse(RABBIT_MQ_IMAGE_NAME))
            .withExposedPorts(RABBIT_HTTP_API_PORT)
            .waitingFor(Wait.forHttp(RABBIT_MQ_OVERVIEW_PATH).withBasicCredentials(DEFAULT_USER, DEFAULT_PASS).forStatusCode(HttpStatus.SC_OK));

    @Override
    public void beforeAll(ExtensionContext extensionContext) throws Exception {
        rabbitMqContainer.start();
    }
}
My test Class
@ExtendWith(RmqExtension.class)
class RabbitMqIT {

    private int myPort;

    @Test
    void myTest() {
        // What I need to do:
        myPort = rabbitMqContainer.getMappedPort(15672);
    }
}
I am unsure what the most elegant, JUnit-Jupiter-idiomatic way to do this is, but if there is only one instance of the container per JVM process, you could either expose it through a public static field or save the mapped port in system properties.
Also, see the Singleton Container Pattern for another example of how to do this without JUnit:
https://www.testcontainers.org/test_framework_integration/manual_lifecycle_control/#singleton-containers
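For example, a minimal sketch of the system-property variant (the property name test.rabbitmq.http.port is illustrative, not from the original post):

// Inside the extension, after the container has started:
@Override
public void beforeAll(ExtensionContext extensionContext) {
    rabbitMqContainer.start();
    // "test.rabbitmq.http.port" is an illustrative property name.
    System.setProperty("test.rabbitmq.http.port",
            String.valueOf(rabbitMqContainer.getMappedPort(RABBIT_HTTP_API_PORT)));
}

// In the test class:
int myPort = Integer.parseInt(System.getProperty("test.rabbitmq.http.port"));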

JUnit - Running 1 test works, multiple fail on assertion

I am having an issue where, if I run multiple tests of the type shown below, the tests fail on the assertTrue call. Running a test case separately or in the debugger gives the expected results, though. My research suggested that maybe I wasn't cleaning up correctly, so I added the @After annotation. This DAO test uses an in-memory HSQL database. I'm still not quite sure what I am doing wrong here. Any suggestions?
Thanks!
public class ConfigDaoTest
{
    private static final Logger LOGGER = LoggerFactory.getLogger(ConfigDaoTest.class);
    private static final String DOC_ID = "12345678";
    private static final String STATUS_INDICATOR = "S";
    private static final String FILE_NAME = "Testing.txt";

    private DaoTestResources resources;

    @Before
    public void setUp()
    {
        resources = new DaoTestResources();
        System.setProperty("IS_JUNIT_TEST", "TRUE");
    }

    @After
    public void cleanUp()
    {
        resources = null;
    }

    @Test
    public void testUpdateNotice() {
        boolean retVal = false;
        try {
            retVal = new ConfigDao(ConfigurationManager.getInstance("test/resources/junit.test.properties"))
                    .updateNoticeByFileName(FILE_NAME, DOC_ID, STATUS_INDICATOR);
        } catch (SQLException e) {
            LOGGER.debug("ErrorText:" + e.getMessage());
            assertNotNull(true);
        }
        assertTrue(retVal);
        System.out.println("Return value: " + retVal);
        assertNotNull(retVal);
    }

    // More tests like the above.
}
I'm not quite sure why this worked for me (if someone does know, I would love to hear the technical details), but I removed cleanUp() and changed @Before to @BeforeClass, and now each of my test cases runs as expected when built with Ant, both from the command line and from Eclipse. My best guess is that there was some issue caused by initializing the HSQLDB before each test - that's what the resource class does.
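For reference, a minimal sketch of that change (note that JUnit 4 requires @BeforeClass methods, and any fields they initialize, to be static):

// Set up the HSQLDB-backed resources once per test class instead of before every test.
private static DaoTestResources resources;

@BeforeClass
public static void setUp() {
    resources = new DaoTestResources();
    System.setProperty("IS_JUNIT_TEST", "TRUE");
}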

How to write JUnit for a main method which prints the "Hello, World!" string to the console and uses the Spring Framework

I'm a beginner with Spring and have the following code:
import org.springframework.stereotype.Service;

@Service("helloWorldService")
public class HelloWorldService {

    private String name = "Hello World";

    public String getName() {
        return this.name;
    }
}

import java.util.logging.Logger;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

/**
 * Hello class to start this Java application
 */
public class Hello {

    private static final Logger LOGGER = Logger.getLogger(Hello.class.getName());

    @SuppressWarnings("resource")
    public static void main(final String[] args) {
        final ApplicationContext context = new ClassPathXmlApplicationContext(
                "applicationContext.xml");
        final HelloWorldService service = (HelloWorldService) context.getBean("helloWorldService");
        final String message = service.getName();
        Hello.LOGGER.info(message);
    }
}
How do I write a test for this? If I use this.log.getLog(), the actual output in JUnit is left empty for comparison.
I assume the question is about how to verify that some value is actually written to the log. If you are using Spring Boot, you can use OutputCapture to verify what has been written to the console. An example is given in the Spring Boot documentation.
In general, however, your unit tests should verify the actual result of some operation against the expected one. The method getName in HelloWorldService is a good candidate for such a test, since you can add JUnit (or other framework) assertions that the returned value is equal to what you expect.
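For example, a minimal sketch of such a test (plain JUnit 4, no Spring context required; the test class name is illustrative):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class HelloWorldServiceTest {

    @Test
    public void getNameReturnsGreeting() {
        HelloWorldService service = new HelloWorldService();
        assertEquals("Hello World", service.getName());
    }
}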

How to unit test a KinesisRecord-based DoFn?

I’m getting started on a Beam project that reads from AWS Kinesis, so I have a simple DoFn that accepts a KinesisRecord and logs the contents. I want to write a unit test to run this DoFn and prove that it works. Unit testing with a KinesisRecord has proven to be challenging, though.
I get this error when I try to just use Create.of(testKinesisRecord):
java.lang.IllegalArgumentException: Unable to infer a coder and no Coder was specified. Please set a coder by invoking Create.withCoder() explicitly or a schema by invoking Create.withSchema().
I have tried providing the KinesisRecordCoder explicitly using "withCoder" as the error suggests, but it’s a private class. Perhaps there's another way to unit test a DoFn?
Test code:
public class MyProjectTests {

    @Rule
    public TestPipeline p = TestPipeline.create();

    @Test
    public void testPoC() {
        var testKinesisRecord = new KinesisRecord(
                ByteBuffer.wrap("SomeData".getBytes()),
                "seq01",
                12,
                "pKey",
                Instant.now().minus(Duration.standardHours(4)),
                Instant.now(),
                "MyStream",
                "shard-001"
        );

        PCollection<Void> output =
                p.apply(Create.of(testKinesisRecord))
                        .apply(ParDo.of(new MyProject.PrintRecordFn()));

        var result = p.run();
        result.waitUntilFinish();

        result.metrics().allMetrics().getCounters().forEach(longMetricResult ->
                Assertions.assertEquals(1, longMetricResult.getCommitted().intValue()));
    }
}
DoFn code:
static class PrintRecordFn extends DoFn<KinesisRecord, Void> {

    private static final Logger LOG = LoggerFactory.getLogger(PrintRecordFn.class);

    private final Counter items = Metrics.counter(PrintRecordFn.class, "itemsProcessed");

    @ProcessElement
    public void processElement(@Element KinesisRecord element) {
        items.inc();
        LOG.info("Stream: `{}` Shard: `{}` Arrived at `{}`\nData: {}",
                element.getStreamName(),
                element.getShardId(),
                element.getApproximateArrivalTimestamp(),
                element.getDataAsBytes());
    }
}
KinesisRecordCoder is meant for internal purposes, so it is made package-private. At the same time, you can provide a custom AWSClientsProvider and use it to generate test data. As an example, please take a look at KinesisMockReadTest and its custom Provider.
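Alternatively, if the goal is only to prove the DoFn's logic, a simpler technique (a swapped-in suggestion, not from the answer above) is to skip the pipeline and invoke the @ProcessElement method directly, since it only takes the element. A sketch reusing the record construction from the question:

// The Metrics counter is effectively a no-op outside a running pipeline,
// so assert on observable behavior (e.g. captured log output) rather than metrics.
@Test
public void printRecordFnHandlesRecord() {
    KinesisRecord record = new KinesisRecord(
            ByteBuffer.wrap("SomeData".getBytes()),
            "seq01",
            12,
            "pKey",
            Instant.now().minus(Duration.standardHours(4)),
            Instant.now(),
            "MyStream",
            "shard-001");

    new MyProject.PrintRecordFn().processElement(record);
}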

How to have a Dropwizard JUnit app rule definition use startup information from a Docker rule?

The general problem I am trying to solve is this. I have a solution, but it's very clunky, and I'm hoping someone knows of a more orderly one.
Dropwizard offers a JUnit TestRule called DropwizardAppRule, which is used for integration tests. You use it like this:
@ClassRule
public static final DropwizardAppRule<MyConfiguration> APP_RULE =
        new DropwizardAppRule<>(MyApplication.class, myYmlResourceFilePath,
                ConfigOverride.config("mydatabase.url", myJdbcUrl));
It will start up your application, configuring it with your yml resource file, with overrides that you specified in the constructor. Note, however, that your overrides are bound at construction time.
There are also JUnit rules out there to start up a Docker container, and I'm using one to start up MySql, and a JUnit RuleChain to enforce the fact that the container must start up before I launch my Dropwizard application that depends on it.
All that works great, if I'm willing to specify in advance what port I want the MySql container to expose. I'm not. I want these integration tests to run on a build machine, quite possibly in parallel for branch builds of the same project, and I would strongly prefer to use the mechanism where you ask Docker to pick any available port, and use that.
The problem I run into with that, is that the exposed container port is not known at the time that the DropwizardAppRule is constructed, which is the only time you can bind configuration overrides.
The solution I adopted was to make a wrapper JUnit Rule, like so:
public class CreateWhenRunRuleWrapper<T extends ExternalResource> extends ExternalResource {

    private final Supplier<T> wrappedRuleFactory;
    private T wrappedRule;

    public CreateWhenRunRuleWrapper(Supplier<T> wrappedRuleFactory) {
        this.wrappedRuleFactory = wrappedRuleFactory;
    }

    public T getWrappedRule() {
        return wrappedRule;
    }

    @Override
    protected void before() throws Throwable {
        wrappedRule = wrappedRuleFactory.get();
        wrappedRule.before();
    }

    @Override
    protected void after() {
        wrappedRule.after();
    }
}
This works, allowing me to construct the DropwizardAppRule in the before() method, but it is quite obviously outside JUnit's design intent, as shown by the fact that I had to place my class in the org.junit.rules package so that it could call the before() and after() methods of the late-created Rule.
What would be a more orderly, best practice way to accomplish the same objective?
Two options we came up with:
The hacky solution is to use a static {} initializer block, which executes the code after the container has been spun up but before the Dropwizard instance is initialized:
public static final GenericContainer mongodb = new GenericContainer("mongo:latest").withExposedPorts(27017);

static {
    mongodb.start();
    System.setProperty("dw.mongoConfig.uri", "mongodb://" + mongodb.getContainerIpAddress() + ":" + mongodb.getMappedPort(27017));
}

@ClassRule
public static final DropwizardIntegrationAppRule<Config> app1 = new DropwizardIntegrationAppRule<>(Service.class);
The second option is cleaner and much like yours.
private static final MongoDContainerRule mongo = new MongoDContainerRule();
private static final DropwizardIntegrationAppRule<Config> app = new DropwizardIntegrationAppRule<>(Service.class);

@ClassRule
public static final RuleChain chain = RuleChain
        .outerRule(mongo)
        .around(app);
MongoDContainerRule is like your wrapper but it also sets the right port through system properties.
public class MongoDContainerRule extends MongoDBContainerBase {

    private static final GenericContainer mongodb = new GenericContainer("mongo:latest").withExposedPorts(27017);

    @Override
    protected void before() throws Throwable {
        mongodb.start();
        System.setProperty("dw.mongoConfig.uri", "mongodb://" + mongodb.getContainerIpAddress() + ":" + mongodb.getMappedPort(27017));
        System.setProperty("dw.mongoConfig.tls", "false");
        System.setProperty("dw.mongoConfig.dbName", DB_NAME);
    }

    @Override
    protected void after() {
        mongodb.stop();
    }
}
The container will expose MongoDB on a free port, and mongodb.getMappedPort(internalPort) will return it. Setting system properties prefixed with dw. injects values into the Dropwizard config.
