TestNG - WebDriver - Web apps Testing - Independent Tests - java

How do you handle the DB inside your script using the TestNG framework?
How do you delete the DB before each run of a test script?
How do you load an SQL file into a clean DB before running a test script?
Goal: Each test case must be independent
Framework: TestNG
Language: Java
Each test case must be independent of the other test cases; the goal is to be able to run the test cases in any order, with no ordering required.
Previously I used the PHPUnit framework, where each test case was independent.
Before running each test script, I would:
drop the database
create a new database
load an SQL file with the initial data into the database
I did this inside a shell script, which I called from the command line:
mysql -u$DB_USER -p$DB_PWD -h$HOST -e "DROP DATABASE $DB_NAME"
mysql -u$DB_USER -p$DB_PWD -h$HOST -e "CREATE DATABASE $DB_NAME"
mysql -u$DB_USER -p$DB_PWD -h$HOST $DB_NAME < sql/dbinit.sql
Googling was not helpful, so I am posting the question here. I need something like this for TestNG, but I have not found anything similar.
Could someone give some advice to a fellow QA?
How do you handle an Oracle database: how do you delete data from the DB and load it inside your test script?
Any advice, book, or tutorial would be very helpful.

You should look into DbUnit. I've recently started using it myself (I'm a TestNG user) and, so far, I haven't come across scenarios where you absolutely need JUnit itself. You can restore your database between tests, populate it, export it, etc.
http://dbunit.sourceforge.net/
basic example:
private DatabaseHelper dbh;
private EntityManager em;
private IDatabaseConnection connection;
private IDataSet dataset;
@BeforeClass
private void setupDatabaseResource() throws Exception {
    // using JPA and a custom helper class
    em = dbh.getEntityManager();
    connection = new DatabaseConnection(((SessionImpl) (em.getDelegate())).connection());
    connection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new HsqldbDataTypeFactory());
    // full database export
    IDataSet fullDataSet = connection.createDataSet();
    FlatXmlDataSet.write(fullDataSet, new FileOutputStream("target/generated-sources/test-dataset.xml"));
    FlatXmlDataSetBuilder flatXmlDataSetBuilder = new FlatXmlDataSetBuilder();
    flatXmlDataSetBuilder.setColumnSensing(true);
    // keep the dataset in memory, for later restore points
    dataset = flatXmlDataSetBuilder.build(new FileInputStream("target/generated-sources/test-dataset.xml"));
}
Edit: an example where @BeforeMethod restores your database between tests:
@BeforeMethod
public void cleanDB() throws Exception {
    DatabaseOperation.CLEAN_INSERT.execute(connection, dataset);
}
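If you would rather keep the original drop/create/load approach instead of DbUnit, the same reset can be wired into TestNG's lifecycle with plain JDBC. The following is only a minimal sketch, not the answerer's code: the connection URL, credentials, schema name, and the sql/dbinit.sql path are assumptions carried over from the question.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.testng.annotations.BeforeMethod;

public class DatabaseResetBase {
    // all of these values are assumptions; adjust them to your environment
    private static final String HOST = "localhost";
    private static final String DB_NAME = "testdb";
    private static final String DB_USER = "root";
    private static final String DB_PWD = "secret";

    @BeforeMethod
    public void resetDatabase() throws Exception {
        // drop and recreate the schema before every test method
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://" + HOST + "/", DB_USER, DB_PWD);
             Statement stmt = conn.createStatement()) {
            stmt.execute("DROP DATABASE IF EXISTS " + DB_NAME);
            stmt.execute("CREATE DATABASE " + DB_NAME);
        }
        // reload the initial data the same way the shell script did, via the mysql client
        Process load = new ProcessBuilder("bash", "-c",
                "mysql -u" + DB_USER + " -p" + DB_PWD + " -h" + HOST + " " + DB_NAME + " < sql/dbinit.sql")
                .inheritIO().start();
        load.waitFor();
    }
}
Test classes can then extend this base class so every @Test method starts from the same known state, regardless of execution order.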

Related

attach debugger to neo4j procedure jar

I'm developing a Neo4j procedure in Java. I can test it with the custom data below.
@Test
public void commonTargetTest2() {
    // This is in a try-block, to make sure we close the driver after the test
    try (Driver driver = GraphDatabase.driver(embeddedDatabaseServer.boltURI(), driverConfig);
         Session session = driver.session()) {
        // And given I have a node in the database
        session.run(
            "CREATE (n1:Person {name:'n1'}) CREATE (n2:Person {name:'n2'}) CREATE (n3:Person {name:'n3'}) CREATE (n4:Person {name:'n4'}) CREATE (n5:Person {name:'n5'})"
                + "CREATE (n6:Person {name:'n6'}) CREATE (n7:Person {name:'n7'}) CREATE (n8:Person {name:'n8'}) CREATE (n9:Person {name:'n9'}) CREATE (n10:Person {name:'n10'})"
                + "CREATE (n11:Person {name:'n11'}) CREATE (n12:Person {name:'n12'}) CREATE (n13:Person {name:'n13'})"
                + "CREATE (n14:Person {name:'n14'}) CREATE "
                + "(n1)-[:KNOWS]->(n6),(n2)-[:KNOWS]->(n7),(n3)-[:KNOWS]->(n8),(n4)-[:KNOWS]->(n9),(n5)-[:KNOWS]->(n10),"
                + "(n7)-[:KNOWS]->(n11),(n8)-[:KNOWS]->(n12),(n9)-[:KNOWS]->(n13),"
                + "(n11)-[:KNOWS]->(n14),(n12)-[:KNOWS]->(n14),(n13)-[:KNOWS]->(n14);");
        // the name of the procedure I defined is "p1"; below I'm calling it in Cypher
        StatementResult result = session
            .run("CALL p1([1,3], [], 3, 0) YIELD nodes, edges return nodes, edges");
        InternalNode n = (InternalNode) result.single().get("nodes").asList().get(0);
        assertThat(n.id()).isEqualTo(13);
    }
}
This works fine, but the data is newly generated with CREATE statements and is very small. I want to test my procedure against an existing Neo4j database server, so that I can see the performance/results of my procedure with real, big data.
I can also achieve that with the code below, which connects to an up-and-running Neo4j database.
@Test
public void commonTargetTestOnImdb() {
    // This is in a try-block, to make sure we close the driver after the test
    try (Driver drv = GraphDatabase.driver("bolt://localhost:7687", AuthTokens.basic("neo4j", "123"));
         Session session = drv.session()) {
        // find 1 common downstream of 3 nodes
        StatementResult result = session.run(
            "CALL commonStream([1047255, 1049683, 1043696], [], 3, 2) YIELD nodes, edges return nodes, edges");
        InternalNode n = (InternalNode) result.single().get("nodes").asList().get(0);
        assertThat(n.id()).isEqualTo(5);
    }
}
Now, my problem is that I can't debug my procedure's code when I connect to an existing database. I package a JAR file and put it inside the plugins folder of my Neo4j database so that Neo4j can call my procedure. I think I need to debug that JAR file. I'm using VS Code and the Java extensions to debug and run tests. How can I debug the JAR file with VS Code?
For the record, I found a way to debug my Neo4j stored procedure (I'm on Java 8, using IntelliJ IDEA). I added the following setting to the neo4j.conf file:
dbms.jvm.additional=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
Inside IntelliJ IDEA, I then added a new run configuration for remote debugging.
Note that the syntax is different on Java 9+: the port parameter is given as address=*:5005 at the end. There is a post about it here: https://stackoverflow.com/a/62754503/3209523
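For reference, the Java 9+ form of the same neo4j.conf setting (using the same example port 5005 as above, bound on all interfaces) would be:
dbms.jvm.additional=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005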

Google Cloud Platform running Spark throwing error - Database 'default' not found

I am running a migration project to move code from an old platform to Google Cloud Platform. The code is written in Java, and we use Spark to run queries and other data transformations on Hive. After the migration, all of that code has been changed to run on Google BigQuery.
Now on to the actual problem. In one place this line of code is executed:
dataFrame = dataFrame.withColumn(columnName, org.apache.spark.sql.functions.callUDF("randomKeyLong", dataFrame.col("delete")));
The error on GCP when this job runs is below, and the stack trace points to the line of code above:
Exception in thread "main" org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'default' not found;
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.org$apache$spark$sql$catalyst$catalog$SessionCatalog$$requireDbExists(SessionCatalog.scala:174)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.functionExists(SessionCatalog.scala:1071)
at org.apache.spark.sql.hive.HiveSessionCatalog.functionExists(HiveSessionCatalog.scala:175)
...
at org.apache.spark.sql.Dataset.select(Dataset.scala:1312)
at org.apache.spark.sql.Dataset.withColumns(Dataset.scala:2197)
at org.apache.spark.sql.Dataset.withColumn(Dataset.scala:2164)
...
The funny part is that sometimes this job runs without an error, and sometimes it throws this one. We are not running any query that might accidentally refer to the 'default' database.
The UDF is just to encrypt the value of this column using a custom class. The corresponding code is:
UDF1<Long, String> LongPeopleKeyRandomUDF = new UDF1<Long, String>() {
    @Override
    public String call(Long key) {
        Encrypt udf = new Encrypt(); // custom class
        return "people.key." + udf.Encriptar(Integer.toString(key.intValue()));
    }
};
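For context, callUDF("randomKeyLong", ...) only resolves if the function has been registered on the session beforehand; the lookup that fails in the stack trace happens while Spark checks the function against the catalog. The registration presumably looks something like the sketch below, where the sparkSession variable is an assumption and not the project's actual code:
// assumed wiring: register the UDF under the name later used by callUDF
// (DataTypes comes from org.apache.spark.sql.types)
sparkSession.udf().register("randomKeyLong", LongPeopleKeyRandomUDF, DataTypes.StringType);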
We've been stuck on this for a while now, and any help is appreciated.

Cannot get api hostname via System property in Java

I recently got the code for writing BDD tests with Cucumber in Java. There is already a Maven project with a couple of tests and a test framework, and I need to continue writing BDD tests using this framework.
I am writing API tests, and when I try to run them I get an error. I found where the run fails, but I want to understand the idea behind that part of the code. Let me share some code:
The test framework collects information about the API host name this way:
public class AnyClass {
    private static final String API_HOSTNAME = "hostname";

    private static String getAPIHostName() {
        String apiHostName = System.getProperty(API_HOSTNAME);
        ...
    }
}
When I leave it as is and run the test, I get an error that the host name is empty.
Can you advise on what is expected to be under the system property key "hostname"?
P.S. I tried hard-coding http://localhost and http://127.0.0.1, where my API is located, instead of assigning the system property, but then it cannot find such a host name.
Can you advise on what might be expected to have under System property key "hostname"?
Yes, I needed to run the tests from the command line with a syntax like:
mvn clean verify -Dhostname=http://127.0.0.1:8080
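Since System.getProperty returns null when the property is not set, the framework could also fall back to a default host instead of failing. A small sketch of that idea, where the default URL is only an assumption:
// returns the -Dhostname value if given, otherwise an assumed local default
private static String getAPIHostName() {
    return System.getProperty("hostname", "http://127.0.0.1:8080");
}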

project data for custom extension is not getting imported during junit init in hybris

I am trying to import data for unit and integration tests at once (during init).
Running a project update from the HAC works fine.
But when I use the command line to init or update project data for my custom extension, or even for an OOTB extension, the data is not imported.
I have tried using the following system setup method to import the data:
@SystemSetup(type = Type.PROJECT, process = Process.ALL)
public void createProjectData(final SystemSetupContext context) { /* ... */ }
I have also tried "type = Type.ESSENTIAL" for my ImpEx import, but without success from the command line in the platform directory.
Any help will be appreciated.
What you can do is trigger it directly from your test code.
Here is an example in Groovy:
def init() {
    // Call the line below only if you want to do an init between two tests, for example
    initTestTenant();
    // Call this to execute the code in createProjectData
    final SystemSetupContext systemSetupContext = new SystemSetupContext(new HashMap<String, String[]>(), Type.ESSENTIAL,
            Process.ALL, "projectname");
    yourExtensionSystemSetup.createProjectData(systemSetupContext);
}
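Another option, if the goal is just to have the ImpEx data available inside an integration test, is to import it directly from the test itself. The following is only a sketch under assumptions: it presumes the standard hybris ServicelayerTest base class and a hypothetical ImpEx path inside your extension's resources.
import de.hybris.bootstrap.annotations.IntegrationTest;
import de.hybris.platform.impex.jalo.ImpExException;
import de.hybris.platform.servicelayer.ServicelayerTest;
import org.junit.Before;

@IntegrationTest
public class ProjectDataImportTest extends ServicelayerTest {

    @Before
    public void importProjectData() throws ImpExException {
        // hypothetical path; point this at your extension's project data ImpEx file
        importCsv("/myextension/import/projectdata.impex", "UTF-8");
    }
}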

Flyway - oracle PL/SQL procedures migration

What would be the preferred way to update the schema_version table and execute modified PL/SQL packages/procedures in Flyway without code duplication?
My current approach would require a class file to be created for each PL/SQL code modification:
public class V2_1__update_scripts extends AbstractMigration {
    // update package and procedures
}
The AbstractMigration class executes the files in the db/update folder:
public abstract class AbstractMigration implements SpringJdbcMigration {

    private static final Logger log = LoggerFactory.getLogger(AbstractMigration.class);

    @Override
    public void migrate(JdbcTemplate jdbcTemplate) throws Exception {
        Resource packageFolder = new ClassPathResource("db/update");
        Collection<File> files = FileUtils.listFiles(packageFolder.getFile(), new String[]{"sql"}, true);
        for (File file : files) {
            log.info("Executing [{}]", file.getAbsolutePath());
            String fileContents = FileUtils.readFileToString(file);
            jdbcTemplate.execute(fileContents);
        }
    }
}
Is there any better way of executing PL/SQL code?
I wonder if it's better to duplicate the code into the standard migrations folder. It seems that with the given example you wouldn't be able to migrate up to just version N of the DB, because an earlier migration would already execute the current version of the PL/SQL. I'd be interested to see if you settled on a solution for this.
There is no built-in support or other command you have missed.
Off the top of my head, I would think about either the approach you presented here or using a generator to produce new migration SQL files after an SCM commit.
Let's see if someone else found a better solution.
The version of Flyway current at the time of this writing (v4.2.0) supports the notion of repeatable migrations designed specifically for such situations. Basically, any script with "create or replace" semantics is a candidate.
Simply name your script R__mypackage_body.sql, or use whatever prefix you have configured for repeatable scripts. Please see the SQL-based migrations and Repeatable migrations documentation for further information.
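As an illustration (the folder and file names below are assumptions, using Flyway's default classpath:db/migration location), a repeatable package-body script simply sits next to the versioned migrations and is re-applied whenever its contents change:
src/main/resources/db/migration/
    V1__baseline.sql
    V2_1__update_scripts.sql
    R__mypackage_body.sql    -- contains CREATE OR REPLACE PACKAGE BODY mypackage ...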

Categories

Resources