I am attempting to use Unitils to assist me in database testing, specifically the Unitils/DBMaintain functionality for disabling constraints. However, there are a few problems with this: I do not wish to use DBMaintain to create my databases, but I do want to use its constraint-disabling functionality. I was able to achieve this through the custom module listed below:
public class DisableConstraintModule implements Module {

    private boolean disableConstraints = false;

    public void afterInit() {
        if (disableConstraints) {
            DatabaseUnitils.disableConstraints();
        }
    }

    public void init(Properties configuration) {
        disableConstraints = PropertyUtils.getBoolean("Database.disableConstraints", false, configuration);
    }
}
This partially solves my problem; however, I want to disable constraints only for the tables I will be using in my test. My tests run against a database with multiple schemas, and each schema has hundreds of tables. DatabaseUnitils.disableConstraints() disables the constraints for every table in every schema, which would be far too time consuming and is unnecessary.
Upon searching the DbMaintain code, I found that the Db2Database class does indeed contain a method for disabling constraints on a specific schema and table-name basis; however, this method is protected. I could access it by either extending the Db2Database class or using reflection.
Next, I need to determine which schemas and tables I am interested in. I could do this by inspecting the @DataSet annotation and working out which schemas and tables matter based on what is in the XML. To do this, I need to override the TestListener so I can instruct it to disable the constraints, using the XML, before it attempts to insert the dataset. This was my attempt:
public class DisableConstraintModule extends DbUnitModule {

    private boolean disableConstraints = false;
    private TableBasedConstraintsDisabler disabler;

    public void afterInit() {
    }

    public void init(Properties configuration) {
        disableConstraints = PropertyUtils.getBoolean("Database.disableConstraints", false, configuration);
        PropertyUtils.getInstance("org.unitils.dbmaintainer.structure.ConstraintsDisabler.implClassName", configuration);
    }

    public void disableConstraintsForDataSet(MultiSchemaDataSet dataSet) {
        disabler.disableConstraints(dataSet);
    }

    protected class DbUnitCustomListener extends DbUnitModule.DbUnitListener {
        @Override
        public void beforeTestSetUp(Object testObject, Method testMethod) {
            disableConstraintsForDataSet(getDataSet(testMethod, testObject));
            insertDataSet(testMethod, testObject);
        }
    }
}
This is what I would like to do; however, I am unable to get the @DataSet annotation to trigger my DbUnitCustomListener, and instead it calls the default DbUnitModule DbUnitListener. Is there any way for me to override which listener gets called when using the @DataSet annotation, or is there a better approach altogether for disabling constraints at a specific schema and table level for a DB2 database?
Thanks
You have to tell Unitils to use your subclass of DbUnitModule. You do this using the unitils.module.dbunit.className property in your unitils.properties file. It sounds like you've got this part figured out.
The second part is to override DbUnitModule's getTestListener() in order to return your custom listener.
See this post for an example.
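Since Unitils itself isn't shown here, this self-contained sketch uses stand-in classes with the same shape as DbUnitModule and its listener to illustrate the override pattern; in a real project, only the subclass and its listener are yours, and the parent types come from Unitils:

```java
// Stand-ins that mimic the Unitils DbUnitModule/TestListener pair.
class TestListener {
    public void beforeTestSetUp(Object testObject, String testMethod) { }
}

class DbUnitModule {
    // Unitils obtains the module's listener through this factory method
    protected TestListener getTestListener() {
        return new TestListener();
    }
}

class DisableConstraintModule extends DbUnitModule {
    final StringBuilder calls = new StringBuilder();

    @Override
    protected TestListener getTestListener() {
        // Returning the custom listener here is what makes the framework invoke it
        return new DbUnitCustomListener();
    }

    class DbUnitCustomListener extends TestListener {
        @Override
        public void beforeTestSetUp(Object testObject, String testMethod) {
            // Disable constraints first, then insert the dataset
            calls.append("disableConstraints;insertDataSet");
        }
    }
}
```

Because the framework only ever asks the module for its listener via getTestListener(), overriding that one method is enough to swap in the custom behaviour.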
Related
I have an authentication module which is imported into our projects to provide authentication-related APIs.
AppConfig.java
@Configuration
@ComponentScan({"com.my.package.ldap.security"})
@EnableCaching
@EnableRetry
public class ApplicationConfig {
    ...
}
I've configured Swagger/OpenAPI in my projects and I wish to find a way to manage these imported endpoints:
Specifically, I wish to set an order on the Example object's fields. Right now it is sorted alphabetically by default.
The reason is that many of these fields are "optional", and we have to remove them from the example object every time we want to authenticate a user, which is a waste of time.
I've tried annotating the object with @JsonPropertyOrder, but it makes no change:
@JsonPropertyOrder({
    "domain",
    "username",
    "password"
})
Is there any way to achieve that?
I made a small POC. It isn't pretty or very extensible, but it does work as intended. One could make it more flexible by re-using the property position on the metadata object, but this example does not include that. This way you can loop over definitions and models, manually doing the work that the framework fails to do at the moment.
Also, be sure not to make this too heavy, because it will be executed every time someone opens the Swagger documentation. It's a piece of middleware that transforms the Swagger API definition on the way out; it does not change the original one.
@Order(SWAGGER_PLUGIN_ORDER)
public class PropertyOrderTransformationFilter implements WebMvcSwaggerTransformationFilter {

    @Override
    public Swagger transform(final SwaggerTransformationContext<HttpServletRequest> context) {
        Swagger swagger = context.getSpecification();
        Model model = swagger.getDefinitions().get("applicationUserDetails");
        Map<String, Property> modelProperties = model.getProperties();

        // Keep a reference to the property definitions
        Property domainPropertyRef = modelProperties.get("domain");
        Property usernamePropertyRef = modelProperties.get("username");
        Property passwordPropertyRef = modelProperties.get("password");

        // Remove all entries from the underlying LinkedHashMap
        modelProperties.clear();

        // Add your own keys in a specific order
        Map<String, Property> orderedPropertyMap = new LinkedHashMap<>();
        orderedPropertyMap.put("domain", domainPropertyRef);
        orderedPropertyMap.put("username", usernamePropertyRef);
        orderedPropertyMap.put("password", passwordPropertyRef);
        orderedPropertyMap.put("..rest..", otherPropertyRef);

        model.setProperties(orderedPropertyMap);
        return swagger;
    }

    @Override
    public boolean supports(final DocumentationType documentationType) {
        return SWAGGER_2.equals(documentationType);
    }
}
@Configuration
class SwaggerConf {

    @Bean
    public PropertyOrderTransformationFilter propertyOrderTransformationFilter() {
        return new PropertyOrderTransformationFilter();
    }
}
I have a very basic question (I am new to Java). To give a bit of background, I am using a BDD-driven test automation framework, working with Cucumber and Java.
I want to set a global variable in my main class object depending on the parameter/value in one of my step definitions and then access the same variable across the test in other step definitions (or objects)
Let's say my class is
public class FeatureStepDefinitions {

    @Given("I want to login to system as (.+)$")
    public void iWantToLoginToSystemAs(String userType) {
        // some logic
    }

    @When("I send a request for user type (.+)$")
    public void iSendRequestForUserType(String userType) {
        // some logic
    }

    @Then("I should be able to see the right response$")
    public void iShouldBeAbleToSeeTheRightResponse() {
        if (userType.equalsIgnoreCase("xyz")) {
            // verify this logic
        } else if (userType.equalsIgnoreCase("abc")) {
            // verify that logic
        }
    }
}
I know I could pass the parameter "userType" into my Then step and do this there, but my question is: what if I do not want to refactor the existing Then step, and still want to verify different behaviours depending on the userType set in previous steps?
Any help/direction is appreciated
The recommended way to share state between steps in cucumber-jvm is to use Dependency Injection.
From the Cucumber docs:
"If your programming language is Java, you will be writing glue code (step definitions and hooks) in plain old Java classes.
Cucumber will create a new instance of each of your glue code classes before each scenario.
If all of your glue code classes have an empty constructor, you don’t need anything else. However, most projects will benefit from a dependency injection (DI) module to organize your code better and to share state between step definitions.
The available dependency injection modules are:
PicoContainer (The recommended one if your application doesn’t use another DI module)
Spring
Guice
OpenEJB
Weld
Needle"
While you can declare a variable in your step definitions class to share state between the step definitions, this will only allow you to share between step definitions declared in the same file, and not between files.
As the number of step definition grows, you'll want to group them in some meaningful way, and this approach will no longer suffice.
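As a minimal sketch of that approach (class and method names below are illustrative; in a real project, PicoContainer constructs these classes per scenario, and the Cucumber annotations sit on the step methods):

```java
// Shared "world" object: the DI container creates one instance per scenario
// and hands it to every step-definition class that declares it as a
// constructor parameter.
class TestContext {
    private String userType;

    public void setUserType(String userType) { this.userType = userType; }

    public String getUserType() { return userType; }
}

class LoginSteps {
    private final TestContext context;

    public LoginSteps(TestContext context) { this.context = context; }

    // @Given("I want to login to system as (.+)$") in the real glue code
    public void iWantToLoginToSystemAs(String userType) {
        context.setUserType(userType);
    }
}

class ResponseSteps {
    private final TestContext context;

    public ResponseSteps(TestContext context) { this.context = context; }

    // @Then("I should be able to see the right response$") in the real glue code
    public boolean isXyzBehaviourExpected() {
        return "xyz".equalsIgnoreCase(context.getUserType());
    }
}
```

Because both step classes receive the same TestContext instance, state written in a Given step is visible in a Then step, even when the two steps live in different files.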
I did a bit of digging around and found it's quite simple:
public class FeatureStepDefinitions {

    public static String globalUserType = null;

    @Given("I want to login to system as (.+)$")
    public void iWantToLoginToSystemAs(String userType) {
        globalUserType = userType;
        // some logic
    }

    @When("I send a request for user type (.+)$")
    public void iSendRequestForUserType(String userType) {
        // some logic
    }

    @Then("I should be able to see the right response$")
    public void iShouldBeAbleToSeeTheRightResponse() {
        if (globalUserType.equalsIgnoreCase("xyz")) {
            // verify this logic
        } else if (globalUserType.equalsIgnoreCase("abc")) {
            // verify that logic
        }
    }
}
Is there a way to validate the "spring.jpa.hibernate.ddl-auto" property during application startup, to ensure that it is set only to none? I want to force all deployments (including dev) to use Liquibase.
Edit: I also need to ensure that this property is not accidentally set in production, which could wipe out the data.
You can hook into the startup of your application by implementing the ApplicationListener<ContextRefreshedEvent> interface, like so:
@Component
public class YourListener implements ApplicationListener<ContextRefreshedEvent> {

    // The property name matches the one from the question; the default value
    // keeps injection from failing when the property is not set at all
    @Value("${spring.jpa.hibernate.ddl-auto:none}")
    private String hibernateDdlAuto;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        if (!"none".equalsIgnoreCase(hibernateDdlAuto)) {
            throw new MyValidationException();
        }
    }
}
Moreover, you can even make it more verbose by registering your own FailureAnalyzer.
As a best practice, you can maintain a universal application.properties/yml file and set the property (spring.jpa.hibernate.ddl-auto) there. Then maintain separate profile-specific files (application-*.properties/yml), which inherit the properties from application.properties/yml by default.
You can also keep other "common" properties in the parent file.
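As a sketch of that layout (file names follow Spring Boot's profile convention; treat the exact property values as assumptions about your environment):

```properties
# application.properties — shared defaults for every environment
spring.jpa.hibernate.ddl-auto=none
spring.liquibase.enabled=true

# application-dev.properties, application-prod.properties, ... inherit these
# defaults; a deployment would have to override ddl-auto explicitly, which
# the startup check from the answer above would then reject
```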
Using Cassandra, I want to create keyspace and tables dynamically using Spring Boot application. I am using Java based configuration.
I have an entity annotated with #Table whose schema I want to be created before application starts up since it has fixed fields that are known beforehand.
However, depending on the logged-in user, I also want to create additional tables for those users dynamically and be able to insert entries into those tables.
Can somebody guide me to some resources that I can make use of or point me in right direction in how to go about solving these issues. Thanks a lot for help!
The easiest thing to do would be to add the Spring Boot Starter Data Cassandra dependency to your Spring Boot application, like so...
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-cassandra</artifactId>
    <version>1.3.5.RELEASE</version>
</dependency>
In addition, this will add the Spring Data Cassandra dependency to your application.
With Spring Data Cassandra, you can configure your application's Keyspace(s) using the CassandraClusterFactoryBean (or more precisely, the subclass... CassandraCqlClusterFactoryBean) by calling the setKeyspaceCreations(:Set) method.
The KeyspaceActionSpecification class is pretty self-explanatory. You can even create one with the KeyspaceActionSpecificationFactoryBean, add it to a Set and then pass that to the setKeyspaceCreations(..) method on the CassandraClusterFactoryBean.
For generating the application's Tables, you essentially just need to annotate your application domain object(s) (entities) using the SD Cassandra #Table annotation, and make sure your domain objects/entities can be found on the application's CLASSPATH.
Specifically, you can have your application #Configuration class extend the SD Cassandra AbstractClusterConfiguration class. There, you will find the getEntityBasePackages():String[] method that you can override to provide the package locations containing your application domain object/entity classes, which SD Cassandra will then use to scan for #Table domain object/entities.
With your application #Table domain object/entities properly identified, you set the SD Cassandra SchemaAction to CREATE using the CassandraSessionFactoryBean method, setSchemaAction(:SchemaAction). This will create Tables in your Keyspace for all domain object/entities found during the scan, providing you identified the proper Keyspace on your CassandraSessionFactoryBean appropriately.
Obviously, if your application creates/uses multiple Keyspaces, you will need to create a separate CassandraSessionFactoryBean for each Keyspace, with the entityBasePackages configuration property set appropriately for the entities that belong to a particular Keyspace, so that the associated Tables are created in that Keyspace.
Now...
For the "additional" Tables per user, that is quite a bit more complicated and tricky.
You might be able to leverage Spring Profiles here, however, profiles are generally only applied on startup. If a different user logs into an already running application, you need a way to supply additional #Configuration classes to the Spring ApplicationContext at runtime.
Your Spring Boot application could inject a reference to a AnnotationConfigApplicationContext, and then use it on a login event to programmatically register additional #Configuration classes based on the user who logged into the application. You need to follow your register(Class...) call(s) with an ApplicationContext.refresh().
You also need to appropriately handle the situation where the Tables already exist.
This is not currently supported in SD Cassandra, but see DATACASS-219 for further details.
Technically, it would be far simpler to create all the possible Tables needed by the application for all users at runtime and use Cassandra's security settings to restrict individual user access by role and assigned permissions.
Another option might be just to create temporary Keyspaces and/or Tables as needed when a user logs in into the application, drop them when the user logs out.
Clearly, there are a lot of different choices here, and it boils down more to architectural decisions, tradeoffs and considerations than it does technical feasibility, so be careful.
Hope this helps.
Cheers!
The following Spring configuration class creates the keyspace and tables if they don't exist:
@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {

    private static final String KEYSPACE = "my_keyspace";
    private static final String USERNAME = "cassandra";
    private static final String PASSWORD = "cassandra";
    private static final String NODES = "127.0.0.1"; // comma-separated nodes

    @Bean
    @Override
    public CassandraCqlClusterFactoryBean cluster() {
        CassandraCqlClusterFactoryBean bean = new CassandraCqlClusterFactoryBean();
        bean.setKeyspaceCreations(getKeyspaceCreations());
        bean.setContactPoints(NODES);
        bean.setUsername(USERNAME);
        bean.setPassword(PASSWORD);
        return bean;
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.CREATE_IF_NOT_EXISTS;
    }

    @Override
    protected String getKeyspaceName() {
        return KEYSPACE;
    }

    @Override
    public String[] getEntityBasePackages() {
        return new String[]{"com.panda"};
    }

    protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
        List<CreateKeyspaceSpecification> createKeyspaceSpecifications = new ArrayList<>();
        createKeyspaceSpecifications.add(getKeySpaceSpecification());
        return createKeyspaceSpecifications;
    }

    // The method below creates "my_keyspace" if it doesn't exist
    private CreateKeyspaceSpecification getKeySpaceSpecification() {
        CreateKeyspaceSpecification pandaCoopKeyspace = new CreateKeyspaceSpecification();
        DataCenterReplication dcr = new DataCenterReplication("dc1", 3L);
        pandaCoopKeyspace.name(KEYSPACE);
        pandaCoopKeyspace.ifNotExists(true).createKeyspace().withNetworkReplication(dcr);
        return pandaCoopKeyspace;
    }
}
Building on @Enes Altınkaya's answer:
@Value("${cassandra.keyspace}")
private String keySpace;

@Override
protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
    return Arrays.asList(
            CreateKeyspaceSpecification.createKeyspace()
                    .name(keySpace)
                    .ifNotExists()
                    .withNetworkReplication(new DataCenterReplication("dc1", 3L)));
}
To define your variables, use an application.properties or application.yml file:
cassandra:
  keyspace: your_keyspace_name
Using config files instead of hardcoded strings means you can publish your code on, for example, GitHub without exposing your passwords and entry points (via .gitignore files), which would otherwise be a security risk.
The following Cassandra configuration will create a keyspace when it does not exist and also run the specified start-up script:
@Configuration
@PropertySource(value = {"classpath:cassandra.properties"})
@EnableCassandraRepositories
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${cassandra.keyspace}")
    private String cassandraKeyspace;

    @Override
    protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
        return Collections.singletonList(CreateKeyspaceSpecification.createKeyspace(cassandraKeyspace)
                .ifNotExists()
                .with(KeyspaceOption.DURABLE_WRITES, true)
                .withSimpleReplication());
    }

    @Override
    protected List<String> getStartupScripts() {
        return Collections.singletonList("CREATE TABLE IF NOT EXISTS " + cassandraKeyspace
                + ".test(id UUID PRIMARY KEY, greeting text, occurrence timestamp) WITH default_time_to_live = 600;");
    }
}
For table creation, you can use this in the application.properties file:
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
This answer is inspired by Viswanath's answer.
My cassandra.yml looks as follows:
spring:
  data:
    cassandra:
      cluster-name: Test Cluster
      keyspace-name: keyspace
      port: 9042
      contact-points:
        - 127.0.0.1
@Configuration
@PropertySource(value = {"classpath:cassandra.yml"})
@ConfigurationProperties("spring.data.cassandra")
@EnableCassandraRepositories(basePackages = "info.vishrantgupta.repository")
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${keyspacename}")
    protected String keyspaceName;

    @Override
    protected String getKeyspaceName() {
        return this.keyspaceName;
    }

    @Override
    protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
        return Collections.singletonList(CreateKeyspaceSpecification
                .createKeyspace(keyspaceName).ifNotExists()
                .with(KeyspaceOption.DURABLE_WRITES, true)
                .withSimpleReplication());
    }

    @Override
    protected List<String> getStartupScripts() {
        return Collections.singletonList("CREATE KEYSPACE IF NOT EXISTS "
                + keyspaceName + " WITH replication = {"
                + " 'class': 'SimpleStrategy', "
                + " 'replication_factor': '3' " + "};");
    }
}
You might have to customize @ConfigurationProperties("spring.data.cassandra"); if your configuration starts with cassandra in the cassandra.yml file, then use @ConfigurationProperties("cassandra") instead.
This is my first post here. Recently I have been working with JSF 2.0 and PrimeFaces. We have a requirement to export PDFs in our application. Initially we used PrimeFaces' default dataExporter tag, but the format was simply terrible, so I used iText to generate the PDFs instead. We have up to 15 datatables in our app, and all of them require PDF exporting. I have created a method called generatePDF which creates the PDF using iText for all the tables.
interface PDFI {
    public void setColNames();
    public void setColValues();
    public void setContentHeader();
}

class DataEx {
    public void generatePDF(ActionEvent event) {
        // generate pdf...
    }
}
Consider I have a datatable A in the view:
Datatable A ...
The bean behind this datatable:
class BeanA implements PDFI {
    // implemented methods
}

And behind another datatable B, I do the same thing as above:

class BeanB implements PDFI {
    // implemented methods
}
So, my question here is: is this considered duplicate code? And is this an efficient way to do it?
Any help is appreciated.
Thanks in advance.
A rule of thumb I use before refactoring duplicate code: when the code in one place has a bug, do you need to change the other one too? Because you will probably forget.
In your case, it looks like you have duplicated code blocks. I would consider adding the required parameters to generatePDF so that it does all the work in one place.
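As a sketch of that suggestion (the getter-style methods on PDFI and the String output are placeholders; the real generatePDF would drive iText):

```java
// One shared generatePDF that works for any table bean through the interface.
interface PDFI {
    java.util.List<String> getColNames();

    java.util.List<String> getColValues();

    String getContentHeader();
}

class DataEx {
    // Every datatable bean reuses this single method; only the data differs
    public String generatePDF(PDFI table) {
        return table.getContentHeader()
                + "|" + String.join(",", table.getColNames())
                + "|" + String.join(",", table.getColValues());
    }
}

class BeanA implements PDFI {
    public java.util.List<String> getColNames() {
        return java.util.List.of("id", "name");
    }

    public java.util.List<String> getColValues() {
        return java.util.List.of("1", "a");
    }

    public String getContentHeader() {
        return "Table A";
    }
}
```

With this shape, the PDF-building logic lives in exactly one place, and each bean only describes its own data.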