I am trying to automate a scenario using Cucumber.
The step Then Create item actually takes values from the first row only.
What I want is for the step Then Create item to execute 2 times, once per row, before moving on to the step Then assigns to CRSA.
But my code takes values from the first row only (0P00A). How can I take values from both rows?
Background: Application login
Given User launch the application on browser
When User logs in to application
Scenario: Test
Then Create item
| Item ID | Attribute Code | New Value | Old Value |
| 0P00A | SR | XYZ21 | ABC21 |
| 0P00B | CA | XYZ22 | ABC22 |
Then assigns to CRSA
#Then("Create item")
public void createItem(DataTable dataTable) {
List<Map<String, String>> inputData = dataTable.asMaps();
}
You can use a for-each loop like the one below; it visits every row of the table, not just the first:
List<Map<String, String>> inputData = dataTable.asMaps();
for (Map<String, String> columns : inputData) {
    String itemId = columns.get("Item ID");
    String attributeCode = columns.get("Attribute Code");
    // use itemId and attributeCode to create the item for this row
}
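For completeness, here is a fuller sketch of such a step definition; the createItemInApplication() helper is hypothetical, standing in for whatever your step actually does with one row:

import io.cucumber.datatable.DataTable;
import io.cucumber.java.en.Then;

import java.util.List;
import java.util.Map;

public class CreateItemSteps {

    @Then("Create item")
    public void createItem(DataTable dataTable) {
        // asMaps() returns one Map per data row, keyed by the header row,
        // so this loop runs once for 0P00A and once for 0P00B
        List<Map<String, String>> inputData = dataTable.asMaps();
        for (Map<String, String> row : inputData) {
            createItemInApplication(row.get("Item ID"), row.get("Attribute Code"),
                    row.get("New Value"), row.get("Old Value"));
        }
    }

    // hypothetical helper standing in for the real item-creation logic
    private void createItemInApplication(String itemId, String attributeCode,
                                         String newValue, String oldValue) {
        // ...
    }
}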
I have a credit transaction table in PostgreSQL, as below:
id | vh_owner_id | credit | debit | created_dttm | modified_dttm | is_deleted
------------------------------------------------------------------------------
 1 |           1 |      0 |  1000 | 19-dec-2021  | null          | 0
I have a backend service in Spring Boot, and when I do a Spring CRUD save operation it creates two rows. This does not happen for every credit transaction; it happens intermittently.
In my logic, I have the Spring Boot Java service below:
@Transactional(isolation = Isolation.READ_COMMITTED, rollbackFor = Exception.class)
public CreditTransactionMO createCreditTransaction(CreditTransactionMO creditTransactionMO) throws Exception {
    // get the last entered credit transaction for the customer, by created_dttm
    CreditTransaction creditTransaction = creditTransactionRepo
            .findTopByVhOwnerIdOrderByCreatedDttmDesc(creditTransactionMO.getVhOwner().getId());
    if (/* condition check */) {
        double totalOutstanding = creditTransactionMO.getCredit() + creditTransaction.getCredit()
                - creditTransaction.getDebit();
        CreditTransaction newCreditTrans = new CreditTransaction();
        if (totalOutstanding < 0) {
            double totalCredit = creditTransaction.getCredit() + creditTransactionMO.getCredit();
            mapMOtoDO(creditTransactionMO, creditTransaction);
            creditTransaction.setCredit(totalCredit);
            // update the existing transaction with the new credit amount
            creditTransaction = creditTransactionRepo.save(creditTransaction);
            // create a new debit transaction
            creditTransactionMO.setId(creditTransaction.getId());
            newCreditTrans.setVhOwnerId(creditTransactionMO.getVhOwner().getId());
            double newDebit = (-1 * totalOutstanding);
            newCreditTrans.setDebit(newDebit);
            // this is the latest credit transaction
            // this save is creating two rows instead of one
            creditTransactionRepo.save(newCreditTrans);
        }
    }
    // ...
}
How can I prevent the save operation from creating duplicate rows? The current table design doesn't have any unique column to prevent duplicates. I am looking for a better way to handle this; since it is a transaction table, I don't want to add complexity with extra constraints on the table design.
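A note on the likely cause: two concurrent requests can both read the same "latest" transaction and each insert a row. If schema constraints are off the table, one option (an assumption on my part, not the only possible fix) is a pessimistic lock on the lookup query, so concurrent requests for the same owner serialize. A sketch with Spring Data JPA, reusing the repository method name from the question:

import javax.persistence.LockModeType;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface CreditTransactionRepo extends JpaRepository<CreditTransaction, Long> {

    // PESSIMISTIC_WRITE makes this query run as SELECT ... FOR UPDATE in PostgreSQL,
    // so a second transaction reading the same owner's latest row blocks until the
    // first one commits, and then sees the row that transaction inserted.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    CreditTransaction findTopByVhOwnerIdOrderByCreatedDttmDesc(Long vhOwnerId);
}

Note this only serializes writers when a previous row exists for the owner; the very first transaction per owner would still need separate handling.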
I have a CSV file:
Let's call it product.csv.
Product_group | Product_producer | Product_name | CODE | RANDOM_F_1 | ... | RANDOM_F_25
----------------------------------------------------------------------------------------
Electronic | Samsung | MacBook_1 | 60 | 0.8 | ... | 1.2
Electronic | Samsung | MacBook_2 | | 0.8 | ... | 1.2
... | ... | ... | | ... | ... | ...
Electronic | Samsung | MacBook_9999 | 63 | 1.2 | ... | 3.1
Electronic | Samsung | MacBook_9999 | 64 | 1.2 | ... | 3.1
I will try to explain this CSV file:
The pair Product_name + CODE is unique (if CODE is present); the RANDOM_F_* columns hold random values.
So, my goal:
I have a Java class which generates this CSV file. When it generates a new file, it cleans product.csv and generates a new one with different random attributes.
Now I have a goal: do not overwrite these random fields in the new CSV generation.
So I have one idea: create a copy of this CSV file before cleaning, and if MacBook_9999 is present in the copy file, just reuse that row in the new generation of the file.
My code now looks like:
public void createProducts(List<Product> products) {
    // copying the file here (see copyFile() below)
    for (Product newProduct : products) {
        Product previousProduct = findPreviousProduct(newProduct);
        if (previousProduct != null) {
            newProduct = previousProduct;
        }
        addToCsv(newProduct);
    }
}

private void copyFile() {
    // here I copy the file with FileInputStream and FileOutputStream
}

private Product findPreviousProduct(Product product) {
    File copyFile = new File(...);
    // BufferedReader br is created here, in try-with-resources
    previousProduct = br.lines().parallel()
            .map(Product::new)
            .filter(e -> e.getName().equals(product.getName()) /* and comparison by CODE here */)
            .findFirst().orElse(...);
    // return statement here
}
Everything works fine, but I faced a performance problem after adding this check; see the test example below (file with 12k rows):
BEFORE: 3 seconds
AFTER: 2 minutes 20 seconds
So, the question is: how can I speed this up? Should I use another way to save my RANDOM fields?
Because the performance is really poor. If I have 100k rows it will take 22 minutes :(
The idea of saving the data in a hash map (Blaž Mrak's comment) and getting a row by key is brilliant, but if I have 500-700k objects, my heap memory will run out.
Thank you, developers.
I don't think you have O(n) complexity, but O(n^2), which means that for 100k lines your code will run for 220 minutes, not 22. What makes it worse is that you are reading the file each time you call findPreviousProduct. I would suggest first loading the CSV into memory and then searching it:
// somewhere else... MyCsvReader or something similar
public List<Product> readPreviousProducts() {
    File copyFile = new File(...);
    ...
    return br.lines().parallel()
            .map(Product::new)
            .toList();
}
// then in your class
public void createProducts(List<Product> products, List<Product> previousProducts) {
    for (Product newProduct : products) {
        Product previousProduct = findPreviousProduct(previousProducts, newProduct);
        if (previousProduct != null) {
            newProduct = previousProduct;
        }
        addToCsv(newProduct);
    }
}

private Product findPreviousProduct(List<Product> previousProducts, Product product) {
    return previousProducts.stream()
            .filter(e -> e.getName().equals(product.getName()) /* and comparison by CODE here */)
            .findFirst().orElse(null);
}
Give this a try first to see if there are some performance improvements. The second optimization is to create a HashMap instead of a List. You create a key() method on the product which returns a unique string generated from the name and code (basically just name + "_" + code).
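For instance, key() could be as small as this (a sketch, assuming Product has getName() and getCode() getters):

// Unique lookup key combining name and code, e.g. "MacBook_9999_63"
public String key() {
    return getName() + "_" + getCode();
}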
// somewhere else
public Map<String, Product> readPreviousProducts() {
    File copyFile = new File(...);
    ...
    return br.lines().parallel()
            .map(Product::new)
            .collect(Collectors.toMap(Product::key, product -> product));
}
public void createProducts(List<Product> products, HashMap<String, Product> previousProducts) {
    for (Product newProduct : products) {
        Product previousProduct = findPreviousProduct(previousProducts, newProduct);
        if (previousProduct != null) {
            newProduct = previousProduct;
        }
        addToCsv(newProduct);
    }
}

private Product findPreviousProduct(HashMap<String, Product> previousProducts, Product product) {
    return previousProducts.get(product.key());
}
You can then compare how much faster each solution is :)
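On the heap concern raised in the question: a possible mitigation (my own assumption, not part of the answer above) is to keep only what you actually reuse, for example mapping the key to the raw CSV line instead of to a full Product object. The keyOf() parser below is hypothetical and assumes the column order from the sample table:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: keep only key -> raw line, so the map holds one String per row
// instead of a full object graph.
public Map<String, String> readPreviousLines(Path copyFile) throws IOException {
    try (BufferedReader br = Files.newBufferedReader(copyFile)) {
        return br.lines()
                .collect(Collectors.toMap(this::keyOf, line -> line,
                        (first, second) -> first)); // keep the first row on duplicate keys
    }
}

// hypothetical: extracts Product_name + "_" + CODE from a raw line,
// assuming pipe-separated cells in the order shown in the question
private String keyOf(String line) {
    String[] cells = line.split("\\|");
    return cells[2].trim() + "_" + cells[3].trim();
}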
I would like to know how to find all the elements on a page and assert that they are displayed, using a data table in Selenium/Java/Cucumber.
For example, I have a scenario like this:
Scenario: Verify all the elements in the xyz page
Given I am in the abc page
When I navigate to xyz page
Then I can see the following fields in the xyz page
|field 1|
|field 2|
|field 3|
|field 4|
First Step: Constructing the Data Table. (Clue: using a header row we can implement the Data Table in a much cleaner and more precise way; assume the Data Table looks like the one below.)
Then I can see the following fields in the xyz page
| Field Name | Locator |
| field 1 | Xpath1 |
| field 2 | Xpath2 |
| field 3 | Xpath3 |
| field 4 | Xpath4 |
Second Step: Implementing the Step Definition
@Then("I can see the following fields in the xyz page")
public void I_can_see_the_following_fields_in_the_xyz_page(DataTable table) throws Throwable {
    WebElement element;
    List<Map<String, String>> rows = table.asMaps(String.class, String.class);
    for (Map<String, String> row : rows) {
        element = driver.findElement(By.xpath(row.get("Locator")));
        Assert.assertTrue("Element: " + row.get("Field Name") + " not found", isElementPresent(element));
    }
}
Utility Method: to check if an element is present
protected synchronized boolean isElementPresent(WebElement element) {
    boolean elementPresenceCheck = false;
    Wait<WebDriver> wait = null;
    try {
        wait = new FluentWait<WebDriver>((WebDriver) driver)
                .withTimeout(10, TimeUnit.SECONDS)
                .pollingEvery(1, TimeUnit.SECONDS);
        elementPresenceCheck = wait.until(ExpectedConditions.visibilityOf(element)).isDisplayed();
        return elementPresenceCheck;
    } catch (Exception e) {
        return elementPresenceCheck;
    }
}
What if you place all the values in an array
{ field 1, field 2, field 3, field 4 }
and, as a next step, check the visibility of each value on the page? A sketch of this idea follows below.
I believe that should resolve your issue.
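A quick sketch of that idea (assuming the driver field from the answer above; the xpathFor() helper is hypothetical and stands for your own name-to-locator mapping):

import java.util.Arrays;
import java.util.List;

import org.junit.Assert;
import org.openqa.selenium.By;

// Iterate a fixed array of field names and assert each one is displayed.
public void verifyFieldsAreDisplayed() {
    List<String> fields = Arrays.asList("field 1", "field 2", "field 3", "field 4");
    for (String field : fields) {
        boolean displayed = driver.findElement(By.xpath(xpathFor(field))).isDisplayed();
        Assert.assertTrue("Element: " + field + " not found", displayed);
    }
}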
I am making an Android app that lets users query the database by inputting a phone number. The search returns the username linked to the phone number, much like Truecaller.
This is what the Firestore database structure currently looks like:
collection("contact")---document(auto generated ID)---(field1 ("username"), field2 ("phoneNum"))
This is my code:
CollectionReference collectionReference = db.collection("contact");
Query query = collectionReference.whereEqualTo("phoneNum", phonesearch);
query.get().addOnCompleteListener(new OnCompleteListener<QuerySnapshot>() {
    @Override
    public void onComplete(@NonNull Task<QuerySnapshot> task) {
        if (task.isSuccessful()) {
            for (DocumentSnapshot document : task.getResult()) {
                username = document.getString("username");
                phonenum = document.getString("phoneNum"); // field name matches the document's phoneNum field
                // storing the searcher's username in the database
                Map<String, String> Usermap = new HashMap<>();
                searcher = user.getDisplayName();
                Usermap.put("searcher", searcher);
                db.collection("contact").add(Usermap).addOnSuccessListener(new OnSuccessListener<DocumentReference>() {
                    @Override
                    public void onSuccess(DocumentReference documentReference) {
                    }
                });
            }
        }
    }
});
How best can I save the username(s) of searchers who search for phone numbers, so the owners of the phone numbers can get notified whenever someone searches for their number? It should look like this:
collection("contact")---document(auto generated ID)---(field1 ("username"), field2 ("phone number"), field3 ("searcher1"), field4 ("searcher2"),.....fieldN("searcher3"))
Currently it creates an entirely new document. I don't want it to do so; I just want it to add the new field (searcher) to the existing document.
Other solutions would be appreciated also. Thanks.
EDIT: This is what it looks like after the changes. It overwrites the previous searcher username with the most recent one, instead of appending the latest searcher username to the list.
Map<String, Object> Usermap = new HashMap<>();
searcher = user.getDisplayName();
Usermap.put("searcher", searcher);
db.collection("contact").document(document.getId()).update(Usermap).addOnSuccessListener(new OnSuccessListener<Void>() {
    @Override
    public void onSuccess(Void aVoid) {
    }
});
[database photo 1]
[database photo 2]
The second screenshot might look confusing; that's because fields are arranged in alphabetical order. The searcher field is an array type. By now it's meant to have about three usernames in it, but it keeps deleting the previous one and replacing it with the most current one; that's why it still has only one user ID in it.
It adds a new document because you are using CollectionReference's add() method:
Adds a new document to this collection with the specified POJO as contents, assigning it a document ID automatically.
Instead of using DocumentReference's update() method:
Updates fields in the document referred to by this DocumentReference.
So in order to solve this, please change the following line of code:
db.collection("contact").add(Usermap).addOnSuccessListener
to
db.collection("contact").document(document.getId()).update(String.valueOf(Usermap), searches).addOnSuccessListener(/* ... */);
To store those searches, I recommend you create another object (map) named searches and store all those searches beneath it. Don't store each search as a separate property. Your schema should look like this:
Firestore-root
|
--- contact (collection)
|
--- autoGeneratedId (document)
|
--- username: "User Name"
|
--- phoneNumber: +123456789
|
--- searches
|
--- searcher1: true
|
--- searcher2: true
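With this map shape you can add a single search without rewriting the whole searches object, using Firestore's dot notation for nested fields (a sketch; searcherId stands for whatever identifier you use for the searcher):

// Sets searches.<searcherId> = true on the existing document,
// leaving the other entries under "searches" untouched.
db.collection("contact").document(document.getId())
        .update("searches." + searcherId, true);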
There is also an approach in which you can store all those searches in an array. This is how your database structure would look:
Firestore-root
|
--- contact (collection)
|
--- autoGeneratedId (document)
|
--- username: "User Name"
|
--- phoneNumber: +123456789
|
--- searches: ["searcher1", "searcher2"]
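If you pick the array approach, note that a plain update() with a map (as in your edit) replaces the field's value, which is exactly the overwriting you observed. FieldValue.arrayUnion() appends instead; a sketch reusing db, document, and searcher from your code:

import com.google.firebase.firestore.FieldValue;

// Appends searcher to the "searches" array, creating the array if it is missing;
// existing entries are kept and exact duplicates are ignored.
db.collection("contact").document(document.getId())
        .update("searches", FieldValue.arrayUnion(searcher))
        .addOnSuccessListener(aVoid -> {
            // array updated; notify the phone number's owner here if needed
        });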
Remember, these solutions will work fine only if you store a limited number of searches. If you think the number of searches will be more than a document can hold, then please consider creating a collection of searches, like this:
Firestore-root
|
--- contact (collection)
|
--- autoGeneratedId (document)
|
--- username: "User Name"
|
--- phoneNumber: +123456789
|
--- searches (collection)
|
--- searcher1 (document)
| |
| --- //details
|
--- searcher2 (document)
|
--- //details
I would like to create a HashMap where the key is a String and the value is a List. All the values are taken from a MySQL table. The problem is that I have a HashMap where the key is the right one while the value is not, because it gets overwritten. In fact, I have the same list with the same content for all the different keys.
This is the code:
public static HashMap<String,List<Table_token>> getHashMapFromTokenTable() throws SQLException, Exception{
DbAccess.initConnection();
List<Table_token> listFrom_token = new ArrayList();
HashMap<String,List<Table_token>> hMapIdPath = new HashMap<String,List<Table_token>>();
String query = "select * from token";
resultSet = getResultSetByQuery(query);
while(resultSet.next()){
String token=resultSet.getString(3);
String path=resultSet.getString(4);
String word=resultSet.getString(5);
String lemma=resultSet.getString(6);
String postag=resultSet.getString(7);
String isTerminal=resultSet.getString(8);
Table_token t_token = new Table_token();
t_token.setIdToken(token);
t_token.setIdPath(path);
t_token.setWord(word);
t_token.setLemma(lemma);
t_token.setPosTag(postag);
t_token.setIsTerminal(isTerminal);
listFrom_token.add(t_token);
System.out.println("path "+path+" path2: "+token);
int row = resultSet.getRow();
if(resultSet.next()){
if((resultSet.getString(4).compareTo(path)!=0)){
hMapIdPath.put(path, listFrom_token);
listFrom_token.clear();
}
resultSet.absolute(row);
}
if(resultSet.isLast()){
hMapIdPath.put(path, listFrom_token);
listFrom_token.clear();
}
}
DbAccess.closeConnection();
return hMapIdPath;
}
You can find an example of the content of the HashMap below:
key: p000000383
content: [t0000000000000019231, t0000000000000019232, t0000000000000019233]
key: p000000384
content: [t0000000000000019231, t0000000000000019232, t0000000000000019233]
The values in "content" come from the last rows of the MySQL table, repeated for every key.
mysql> select * from token where idpath='p000003361';
+---------+------------+----------------------+------------+
| idDoc | idSentence | idToken | idPath |
+---------+------------+----------------------+------------+
| d000095 | s000000048 | t0000000000000019231 | p000003361 |
| d000095 | s000000048 | t0000000000000019232 | p000003361 |
| d000095 | s000000048 | t0000000000000019233 | p000003361 |
+---------+------------+----------------------+------------+
3 rows in set (0.04 sec)
You need to allocate a new listFrom_token each time instead of clear()ing it. Replace this:
listFrom_token.clear();
with:
listFrom_token = new ArrayList<Table_token>();
Putting the list in the HashMap does not make a copy of the list. You are clearing and refilling the same list over and over.
Your data shows that idPath is not a primary key, yet a Map key needs to be unique. Maybe you should make idToken the key in the Map; it's the only thing in your example that's unique.
Your other choice is to make the column name the key and give that column's values to the List. Then you'll have four keys, each with a List containing that column's values.
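As an aside, if a single forward pass over the ResultSet is acceptable, Map.computeIfAbsent() removes the need for the lookahead and the manual list handling altogether. A sketch using the question's Table_token setters (a different shape from the one-line fix above, offered for comparison):

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

// Group rows by idPath in one pass: computeIfAbsent() allocates a fresh list
// the first time each key appears, so every key gets its own list.
public static HashMap<String, List<Table_token>> groupByIdPath(ResultSet resultSet) throws SQLException {
    HashMap<String, List<Table_token>> hMapIdPath = new HashMap<>();
    while (resultSet.next()) {
        String path = resultSet.getString(4);
        Table_token t = new Table_token();
        t.setIdToken(resultSet.getString(3));
        t.setIdPath(path);
        t.setWord(resultSet.getString(5));
        t.setLemma(resultSet.getString(6));
        t.setPosTag(resultSet.getString(7));
        t.setIsTerminal(resultSet.getString(8));
        hMapIdPath.computeIfAbsent(path, k -> new ArrayList<>()).add(t);
    }
    return hMapIdPath;
}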