Compare two Maps & identify the modified value of an existing key - java

I'm working on a problem where, the first time, I want to upload an Excel file, read it & store it into a MySQL DB. I'm done with this part & everything is working as expected.
From the 2nd time onwards, whenever I upload the Excel file again (the file can have exactly the same data as the DB, modified already-existing data, or both modified existing data & newly added data), I have to compare it with the data available in MySQL & identify the changes.
I'm reading the Excel file using the Apache POI library & storing it in a Map<Integer, List<MyCell>>, where the key is the row number & its value is a List<MyCell>, which is basically the list of columns.
I'm able to identify the newly added records (newly added keys & their values in the Map) with this logic:
Map<Integer, List<MyCell>> filteredMap = new HashMap<>();
for (Integer key : datafromExcel.keySet()) {
    if (datafromDB.containsKey(key)) {
        datafromDB.remove(key);
    } else {
        filteredMap.put(key, datafromExcel.get(key));
    }
}
But I didn't succeed in finding the existing modified records (modified values of an already-existing key in the Map).
How can I get this?

To check the modified values you have to compare the list coming from the DB with the list coming from Excel for each shared key.
Here I'm showing just the comparison of the two lists:
Collection<String> similar = new HashSet<String>(fromDB.get(key));
Collection<String> different = new HashSet<String>();
different.addAll(fromDB.get(key));
different.addAll(fromExcel.get(key));
similar.retainAll(fromExcel.get(key));
different.removeAll(similar);
if (different.isEmpty()) {
    System.out.println("NO CHANGE");
} else {
    System.out.println("VALUE MODIFIED");
}
I have used String here; if MyCell is your own class, make sure it overrides equals() and hashCode() so the HashSet operations can compare its instances.
Hope this will help.

For a key existing in your database you are calling datafromDB.remove(key);.
So the moment you find a potentially modified record you remove it and no longer know you had such a record. Clearly you have to do something more than datafromDB.remove(key);, or not remove the data for that key at all.
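A minimal sketch combining both answers, assuming (as in the question) that both maps are keyed by row number and that MyCell overrides equals() and hashCode(), so that List.equals() can compare rows element by element (the helper name findModified is made up):
static Map<Integer, List<MyCell>> findModified(Map<Integer, List<MyCell>> datafromDB,
                                               Map<Integer, List<MyCell>> datafromExcel) {
    Map<Integer, List<MyCell>> modifiedMap = new HashMap<>();
    for (Map.Entry<Integer, List<MyCell>> entry : datafromExcel.entrySet()) {
        List<MyCell> dbRow = datafromDB.get(entry.getKey());
        // key present in both maps, but the column lists differ -> modified record
        if (dbRow != null && !dbRow.equals(entry.getValue())) {
            modifiedMap.put(entry.getKey(), entry.getValue());
        }
        // dbRow == null means a newly added record, already handled by filteredMap
    }
    return modifiedMap;
}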

Related

Push data from a dataframe to a Map where values are List of objects in Scala

I am working on a Scala codebase and I have to implement a scenario which uses some data structure to populate information for further processing.
The gist of the problem is,
I have a dataframe studentDf which has the student marksheet information, e.g.:
Name, ID, Subject, Details, Marks, isFail
Now there can be multiple records for the same Name-ID mapping. I have to reflect the scenario where if the student has failed in any subject, the details (and the corresponding record) will pop up in a resultDf. And if he has not failed in any subject (congratulations!!!) then we can populate any record corresponding to the Name-ID mapping.
Basically, what I would do in Java 8 for this is the following, assuming I have List<StudentMarks> studentMarksList, the list of all the marks of all the students:
Map<String, List<StudentMarks>> studentToMarkMapping = new HashMap<>();
studentMarksList.forEach(studentMark ->
    studentToMarkMapping
        .computeIfAbsent(studentMark.getName() + "_" + studentMark.getID(), k -> new ArrayList<>())
        .add(studentMark));

Set<StudentMarks> resultSet = new HashSet<>();
for (Map.Entry<String, List<StudentMarks>> studentToMark : studentToMarkMapping.entrySet()) {
    List<StudentMarks> studentMarks = studentToMark.getValue();
    boolean failedAny = false;
    for (StudentMarks studentMark : studentMarks) {
        if (studentMark.getFailed()) { // add the corresponding failed subject record
            resultSet.add(studentMark);
            failedAny = true;
            break;
        }
    }
    if (!failedAny) {
        resultSet.add(studentMarks.get(0)); // any subject mark for a student who passed all subjects
    }
}
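For reference, the same semantics can be written more compactly with streams (a sketch; it needs java.util.stream.Collectors and picks, per student, the first failed record if there is one, otherwise the first record):
Map<String, List<StudentMarks>> byStudent = studentMarksList.stream()
        .collect(Collectors.groupingBy(m -> m.getName() + "_" + m.getID()));
Set<StudentMarks> resultSet = byStudent.values().stream()
        .map(marks -> marks.stream()
                .filter(StudentMarks::getFailed) // prefer a failed record if one exists
                .findFirst()
                .orElse(marks.get(0)))           // otherwise any record of the student
        .collect(Collectors.toSet());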
In Scala, to do the same, I was trying to load the data into a mutable Map, but I am getting stuck at populating multiple records for the same student into a list and then finding out whether he has failed in any subject or not. I see the concept of ListBuffer, which is a mutable variant of a list, but I am confused about how to use it. It is possible that we can do this without a Map as well, but the other ways I tried didn't end up working.
If somebody can provide any help on this, that would be great. Thanks a lot!!!

Writing/appending data to a CSV file, column wise, in JAVA

I want to write/append data to a CSV file, column-by-column, in below fashion:
query1 query2 query3
data_item1 data_item7 data_item12
data_item2 data_item8 data_item13
data_item3 data_item9 data_item14
data_item4 data_item10
data_item5 data_item11
data_item6
I have the data in a HashMap, with the queryID (i.e. query1, query2) being the key and the data_items for the corresponding queries being the values.
The values (data_items for every query) are in a list.
Therefore, my hash map looks like this:
HashMap<String,List<String>> hm = new HashMap<String,List<String>>();
How can I write this data, column by column, to a CSV, as demonstrated above, using Java?
I tried CSVWriter, but couldn't do it. Can anyone please help me out?
CSV files are mostly used to persist data structured like a table, meaning data with columns and rows that are in a close context.
In your example there seems to be only a very loose connection between query1, 2 and 3, and no connection horizontally between items 1, 7 and 12, or 2, 8 and 13, and so on.
On top of that, writing to files is usually done along rows or lines: you open your file, write one line, then another, and so on.
So to write the data column-wise as you are asking, you have to either restructure the data in your code so that everything that goes into one line is available when writing that line, or run through your CSV file and its lines several times, each time adding another item to a row. Of course the latter option is very time consuming and would not make much sense.
So I would suggest: if there is really no connection between the data of the 3 queries, write your data into 3 different CSV files: query1.csv, query2.csv and query3.csv.
Or, if you do have a horizontal connection, i.e. between items 1, 7 and 12, and so on, write it into one CSV file, organizing the data into rows and columns. Something like:
queryNo columnX columnY columnZ
1 item1 item2 item3
2 item7 item8 item9
3 item12 item13 item14
How to do that is well described in this thread: Java - Writing strings to a CSV file.
Other examples you can also find here https://mkyong.com/java/how-to-export-data-to-csv-file-java/
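A minimal sketch of writing such a row-organized table with nothing but the JDK (the file name queries.csv and the values are taken from the example above):
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class RowWiseCsv {
    public static void main(String[] args) throws IOException {
        // try-with-resources closes (and flushes) the file automatically
        try (PrintWriter out = new PrintWriter(new FileWriter("queries.csv"))) {
            out.println("queryNo,columnX,columnY,columnZ"); // header row
            out.println("1,item1,item2,item3");
            out.println("2,item7,item8,item9");
            out.println("3,item12,item13,item14");
        }
    }
}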
After days of tinkering around, I finally succeeded. Here is the implementation:
for (int k = 0; k < maxRows; k++) {
    List<String> rowValues = new ArrayList<String>();
    for (int i = 0; i < queryIdListArr.length; i++) {
        // the value list of the i-th query, wrapped in a one-element list
        List<List<String>> subList = qValuesList.subList(i, i + 1);
        List<String> subList2 = subList.stream().flatMap(List::stream).collect(Collectors.toList());
        if (subList2.size() <= k) {
            rowValues.add("");              // this column has no value at row k
        } else {
            rowValues.add(subList2.get(k));
        }
    }
    String[] rowValuesArr = rowValues.toArray(new String[0]);
    writer.writeNext(rowValuesArr);         // CSVWriter: write one CSV row
}
maxRows : size of the longest value list. I have a list of values for each key; my hash map looks like this:
HashMap<String,List<String>> hm = new HashMap<String,List<String>>();
queryIdListArr : array of all the keys (query IDs) obtained from the hash map.
qValuesList : list of all the value lists:
List<List<String>> qValuesList = new ArrayList<List<String>>();
subList2 : the value list of the i-th query, obtained from qValuesList via qValuesList.subList(i, i+1) and flattened.
rowValuesArr : an array that gets populated with the value at the current index from each value list fetched from qValuesList.
The idea is to fetch the value at each index from all the sublists and then write those values to the row. If no value is found for that index, write a blank string.
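For completeness, here is a self-contained sketch of the same column-wise idea using only the JDK (the map contents are made up, output goes to stdout, and a naive String.join stands in for proper CSV quoting):
import java.util.*;

public class ColumnWiseCsv {
    public static void main(String[] args) {
        Map<String, List<String>> hm = new LinkedHashMap<>();
        hm.put("query1", Arrays.asList("data_item1", "data_item2", "data_item3"));
        hm.put("query2", Arrays.asList("data_item7", "data_item8"));
        hm.put("query3", Arrays.asList("data_item12"));

        List<List<String>> columns = new ArrayList<>(hm.values());
        int maxRows = columns.stream().mapToInt(List::size).max().orElse(0);

        System.out.println(String.join(",", hm.keySet()));       // header row
        for (int k = 0; k < maxRows; k++) {
            List<String> row = new ArrayList<>();
            for (List<String> column : columns) {
                row.add(k < column.size() ? column.get(k) : ""); // pad short columns
            }
            System.out.println(String.join(",", row));
        }
    }
}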

Spring MongoTemplate upsert entire list

I have a list of objects which I want to insert into a collection. mongoTemplate.insert(list); works fine, but now I want to modify it to an upsert, as my list can contain duplicate objects which are already inserted into the collection. So what I want is: insert the entire list and, on the go, check if an item is already present in the collection; if it is, skip it, else insert it.
You can try out the continueOnError or ordered flag like this:
db.collection.insert(myArray, {continueOnError: true})
OR,
db.collection.insert(myArray, {ordered: false})
You need to create a unique index on your object's id field (if there is no unique constraint already), so that inserting the same id again produces an error.
With that unique constraint in place you can insert the array, or use a bulk insert.
When using insert you can set the flag continueOnError: true, which continues the insertion whenever an error is found, e.g. an error caused by the unique constraint when inserting an already existing object id.
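The same skip-duplicates behaviour from Java could look like this (a sketch using the plain MongoDB Java driver rather than MongoTemplate; it assumes the collection has a unique index on the relevant field):
import com.mongodb.MongoBulkWriteException;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.InsertManyOptions;
import org.bson.Document;
import java.util.List;

void insertSkippingDuplicates(MongoCollection<Document> coll, List<Document> docs) {
    try {
        // unordered insert: keep inserting the remaining documents even
        // when some of them fail with a duplicate-key error
        coll.insertMany(docs, new InsertManyOptions().ordered(false));
    } catch (MongoBulkWriteException e) {
        // duplicate-key errors for already existing documents end up here;
        // they can be ignored if skipping duplicates is the intent
    }
}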
The only way to do a bulk-upsert operation is the method MongoCollection.bulkWrite (or at least: the only way I know... ;-))
To use it, you have to convert your documents to the appropriate WriteModel: for upserting whole documents on a per-document basis this is ReplaceOneModel (UpdateOneModel expects an update-operator document, not a full replacement document).
List<Document> toUpdate = ...;
MongoCollection<Document> coll = ...;
// Convert each Document to a ReplaceOneModel<Document>
List<ReplaceOneModel<Document>> bulkOperationList = toUpdate.stream()
        .map(doc -> new ReplaceOneModel<Document>(
                Filters.eq("_id", doc.get("_id")), // identify by same _id
                doc,
                new ReplaceOptions().upsert(true)))
        .collect(Collectors.toList());
// Write to DB
coll.bulkWrite(bulkOperationList);
(Disclaimer: I only typed this code, I never ran it)

How can I retrieve the last record inserted into a Java Hashtable data structure?

I have the following problem while working on a Java application that uses a Hashtable calcoliTable data structure.
This is the content of my calcoliTable:
{
FITTIZIO-2015=CalcoloValoreDellaGSBean [data=2015-11-27, maturato=249540.544989802560000, sommaMaturatoMovimento=249540.544989802560000],
1=CalcoloValoreDellaGSBean [data=2015-11-27, maturato=249540.544989802560000, sommaMaturatoMovimento=249540.544989802560000]
}
As you can see, it contains 2 entries, having id=1 and id=FITTIZIO-2015.
I want to retrieve the entry that was inserted most recently (that should be the one having id=FITTIZIO-2015).
I have tried to do it in this way:
CalcoloValoreDellaGSBean calcoloPrecedente = (CalcoloValoreDellaGSBean) calcoliTable.get(calcoliTable.size());
But it won't work, because this way it is searching for the entry having id=2.
How can I retrieve the last entry inserted into my Hashtable? Is there a way to do it without explicitly using the key?
Use a LinkedHashMap, which, unlike Hashtable, has a predictable iteration order. With the access-order constructor:
Map<KeyType,ValueType> map = new LinkedHashMap<KeyType,ValueType>(16, 0.75F, true);
it iterates from the least recently accessed entry to the most recently accessed one (with the default constructor, from the first inserted to the last inserted). Note that
map.entrySet().iterator().next()
therefore gives you the eldest entry; to get the last entry you inserted or accessed, advance the iterator to the end.
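A minimal runnable sketch of that, using plain insertion order (the keys mirror the question's calcoliTable):
import java.util.LinkedHashMap;
import java.util.Map;

public class LastInserted {
    public static void main(String[] args) {
        Map<String, String> map = new LinkedHashMap<>();
        map.put("1", "first bean");
        map.put("FITTIZIO-2015", "last bean");

        // walk the iterator to the end: the last entry is the most recently inserted
        Map.Entry<String, String> last = null;
        for (Map.Entry<String, String> e : map.entrySet()) {
            last = e;
        }
        System.out.println(last.getKey()); // prints FITTIZIO-2015
    }
}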

Is it possible to store a SQLite table in Shared Prefs on Android

I need to take around 100 rows from a table, put them into the shared prefs, then delete the database, install a fresh one, and compare values.
I would only need to store 2 values from each row, column A and column B, but I don't know if this is even possible?
Cheers for any help.
You can't store a table, but if your table really is that simple (or even if it isn't; I think you can store as much data as you feel like, really), you can transfer it over to a JSON object, stringify it, and store that. Then parse it back into JSON and over into your new table when you're ready to use it again.
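A minimal sketch of that JSON round trip, assuming two columns per row and the org.json classes that ship with Android (the preference file name table_backup and the keys are made up):
import android.content.Context;
import android.content.SharedPreferences;
import java.util.ArrayList;
import java.util.List;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

// save: one JSONObject per row, holding the two columns
void saveRows(Context ctx, List<String[]> rows) throws JSONException {
    JSONArray array = new JSONArray();
    for (String[] row : rows) {
        JSONObject obj = new JSONObject();
        obj.put("columnA", row[0]);
        obj.put("columnB", row[1]);
        array.put(obj);
    }
    SharedPreferences prefs = ctx.getSharedPreferences("table_backup", Context.MODE_PRIVATE);
    prefs.edit().putString("rows", array.toString()).apply();
}

// load: parse the stored string back into the row values
List<String[]> loadRows(Context ctx) throws JSONException {
    SharedPreferences prefs = ctx.getSharedPreferences("table_backup", Context.MODE_PRIVATE);
    JSONArray array = new JSONArray(prefs.getString("rows", "[]"));
    List<String[]> rows = new ArrayList<>();
    for (int i = 0; i < array.length(); i++) {
        JSONObject obj = array.getJSONObject(i);
        rows.add(new String[] { obj.getString("columnA"), obj.getString("columnB") });
    }
    return rows;
}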
I can also suggest using a generic lib for abstraction of the local store.
Here you have an example:
Use this android lib: https://github.com/wareninja/generic-store-for-android
import com.wareninja.opensource.genericstore.GenericStore;
// step.1: transfer data from table into GenericStore (aka cache)
String objKey = "mytable01";
int storeType = GenericStore.TYPE_SHAREDPREF;
//keep in mind that you can also use GenericStore.TYPE_MEMDISKCACHE
// which will store on disk cache, which can be faster and better for memory allocation
LinkedHashMap<String, Object> dataMap = new LinkedHashMap<String, Object>();
// fill in all the data from you table, e.g.
dataMap.put("row1", "val1");
GenericStore.saveObject(storeType, objKey, dataMap, mContext);
// now everything is stored/cached
// step.2: get data back
dataMap = (LinkedHashMap<String, Object>) GenericStore.getObject(storeType, objKey, mContext);
...
