I have a credit transaction table in PostgreSQL as below:
 id | vh_owner_id | credit | debit | created_dttm | modified_dttm | is_deleted
----+-------------+--------+-------+--------------+---------------+-----------
  1 |           1 |      0 |  1000 | 19-dec-2021  | null          | 0
I have a backend service in Spring Boot, and when I do a Spring CRUD save operation it creates two rows. This is not happening for all credit transactions; it happens intermittently.
In my logic, I have the Spring Boot Java service below:
@Transactional(isolation = Isolation.READ_COMMITTED, rollbackFor = Exception.class)
public CreditTransactionMO createCreditTransaction(CreditTransactionMO creditTransactionMO) throws Exception {
    // get the last entered credit transaction for the customer, based on created_dttm
    CreditTransaction creditTransaction = creditTransactionRepo
            .findTopByVhOwnerIdOrderByCreatedDttmDesc(creditTransactionMO.getVhOwner().getId());
    if (/* condition check */) {
        double totalOutstanding = creditTransactionMO.getCredit() + creditTransaction.getCredit()
                - creditTransaction.getDebit();
        CreditTransaction newCreditTrans = new CreditTransaction();
        if (totalOutstanding < 0) {
            double totalCredit = creditTransaction.getCredit() + creditTransactionMO.getCredit();
            mapMOtoDO(creditTransactionMO, creditTransaction);
            creditTransaction.setCredit(totalCredit);
            // update the existing transaction with the new credit amount
            creditTransaction = creditTransactionRepo.save(creditTransaction);
            // create a new debit transaction
            creditTransactionMO.setId(creditTransaction.getId());
            newCreditTrans.setVhOwnerId(creditTransactionMO.getVhOwner().getId());
            double newDebit = (-1 * totalOutstanding);
            newCreditTrans.setDebit(newDebit);
            // this is the latest credit transaction;
            // this save is creating two rows instead of one
            creditTransactionRepo.save(newCreditTrans);
        }
    }
    // ... rest of the method elided ...
}
How can I avoid the save operation creating duplicate rows? The current table design doesn't have any unique column to prevent duplicates. I'm looking for a better way to handle this; since it is a transaction table, I don't want to add complexity with extra constraints in the table design.
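One hedged option, sketched here rather than taken from the original post: if two concurrent calls for the same owner both read the same "latest" row under READ_COMMITTED, each can decide to insert, which would explain the intermittent duplicates. Taking a pessimistic lock when reading the latest transaction serializes concurrent creates per owner without any schema change. The repository method name mirrors the question; the Long ID type is an assumption.

import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface CreditTransactionRepo extends JpaRepository<CreditTransaction, Long> {

    // A concurrent transaction calling this for the same vh_owner_id blocks
    // until the first transaction commits, so only one caller at a time
    // decides whether to update the latest row or insert a new one.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    CreditTransaction findTopByVhOwnerIdOrderByCreatedDttmDesc(Long vhOwnerId);
}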
I am trying to automate a scenario using Cucumber.
The step Then Create item actually takes values from the first row only.
What I want is for the step Then Create item to execute 2 times (once per table row) before moving on to the step Then assigns to CRSA.
But my code is taking values from the first row only (0P00A). How can I take values from both rows?
Background: Application login
  Given User launch the application on browser
  When User logs in to application

Scenario: Test
  Then Create item
    | Item ID | Attribute Code | New Value | Old Value |
    | 0P00A   | SR             | XYZ21     | ABC21     |
    | 0P00B   | CA             | XYZ22     | ABC22     |
  Then assigns to CRSA
@Then("Create item")
public void createItem(DataTable dataTable) {
    List<Map<String, String>> inputData = dataTable.asMaps();
}
You can use a for-each loop like below:
List<Map<String, String>> inputData = dataTable.asMaps();
for (Map<String, String> columns : inputData) {
    // each iteration sees one table row, keyed by the header names
    columns.get("Item ID");
    columns.get("Attribute Code");
}
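For completeness, a hedged sketch of a full step definition built on that loop; createSingleItem is a hypothetical helper standing in for your actual creation logic:

import io.cucumber.datatable.DataTable;
import io.cucumber.java.en.Then;

import java.util.List;
import java.util.Map;

public class ItemSteps {

    @Then("Create item")
    public void createItem(DataTable dataTable) {
        List<Map<String, String>> rows = dataTable.asMaps();
        // one iteration per data-table row: first 0P00A, then 0P00B
        for (Map<String, String> row : rows) {
            createSingleItem(row.get("Item ID"), row.get("Attribute Code"),
                    row.get("New Value"), row.get("Old Value"));
        }
    }

    // hypothetical helper: the application-specific creation logic goes here
    private void createSingleItem(String itemId, String attributeCode,
                                  String newValue, String oldValue) {
    }
}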
I am reading https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/data_stream_api/#examples-for-fromchangelogstream.
Example 1:
// === EXAMPLE 1 ===
// interpret the stream as a retract stream
// create a changelog DataStream
val dataStream = env.fromElements(
  Row.ofKind(RowKind.INSERT, "Alice", Int.box(12)),
  Row.ofKind(RowKind.INSERT, "Bob", Int.box(5)),
  Row.ofKind(RowKind.UPDATE_BEFORE, "Alice", Int.box(12)),
  Row.ofKind(RowKind.UPDATE_AFTER, "Alice", Int.box(100))
)(Types.ROW(Types.STRING, Types.INT))
// interpret the DataStream as a Table
val table = tableEnv.fromChangelogStream(dataStream)
// register the table under a name and perform an aggregation
tableEnv.createTemporaryView("InputTable", table)
tableEnv
  .executeSql("SELECT f0 AS name, SUM(f1) AS score FROM InputTable GROUP BY f0")
  .print()
// prints:
// +----+--------------------------------+-------------+
// | op | name | score |
// +----+--------------------------------+-------------+
// | +I | Bob | 5 |
// | +I | Alice | 12 |
// | -D | Alice | 12 |
// | +I | Alice | 100 |
// +----+--------------------------------+-------------+
Example 2:
// === EXAMPLE 2 ===
// convert to DataStream in the simplest and most general way possible (no event-time)
val simpleTable = tableEnv
  .fromValues(row("Alice", 12), row("Alice", 2), row("Bob", 12))
  .as("name", "score")
  .groupBy($"name")
  .select($"name", $"score".sum())

tableEnv
  .toChangelogStream(simpleTable)
  .executeAndCollect()
  .foreach(println)
// prints:
// +I[Bob, 12]
// +I[Alice, 12]
// -U[Alice, 12]
// +U[Alice, 14]
For the two examples, why does the first one print -D and +I for the last two records, while the second one prints -U and +U? What's the rule that determines the kind of change here? Thanks.
The reason for the difference has two parts, both of them defined in GroupAggFunction, which is the process function used to process this query.
The first is this part of the code:
// update aggregate result and set to the newRow
if (isAccumulateMsg(input)) {
    // accumulate input
    function.accumulate(input);
} else {
    // retract input
    function.retract(input);
}
When a new value is received for a given key, the method first checks whether it is an accumulate message (RowKind.INSERT or RowKind.UPDATE_AFTER) or a retract message (RowKind.UPDATE_BEFORE or RowKind.DELETE).
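A minimal restatement of that classification rule in Java (the real helpers in Flink's runtime operate on RowData rather than RowKind; this sketch just encodes the rule described above):

import org.apache.flink.types.RowKind;

// Accumulate messages add to the aggregate state; retract messages remove from it.
static boolean isAccumulateMsg(RowKind kind) {
    return kind == RowKind.INSERT || kind == RowKind.UPDATE_AFTER;
}

static boolean isRetractMsg(RowKind kind) {
    return kind == RowKind.UPDATE_BEFORE || kind == RowKind.DELETE;
}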
In your first example, you explicitly set the RowKind yourself. When execution reaches Row.ofKind(RowKind.UPDATE_BEFORE, "Alice", Int.box(12)), which is a retraction message, it first retracts the input from the existing accumulator. This means that after the retraction we end up with a key whose accumulator is empty. When that happens, the code below is reached:
} else {
    // we retracted the last record for this key
    // sent out a delete message
    if (!firstRow) {
        // prepare delete message for previous row
        resultRow.replace(currentKey, prevAggValue).setRowKind(RowKind.DELETE);
        out.collect(resultRow);
    }
    // and clear all state
    accState.clear();
    // cleanup dataview under current key
    function.cleanup();
}
Since this is not the first row received for the key "Alice", we emit a delete message for the previous row, and then the next one will emit an INSERT.
For your second example where you don't explicitly specify the RowKind, all messages are received with RowKind.INSERT by default. This means that now we don't retract the existing accumulator, and the following code path is taken:
if (!recordCounter.recordCountIsZero(accumulators)) {
    // we aggregated at least one record for this key
    // update the state
    accState.update(accumulators);
    // if this was not the first row and we have to emit retractions
    if (!firstRow) {
        if (stateRetentionTime <= 0 && equaliser.equals(prevAggValue, newAggValue)) {
            // newRow is the same as before and state cleaning is not enabled.
            // We do not emit retraction and acc message.
            // If state cleaning is enabled, we have to emit messages to prevent too early
            // state eviction of downstream operators.
            return;
        } else {
            // retract previous result
            if (generateUpdateBefore) {
                // prepare UPDATE_BEFORE message for previous row
                resultRow
                        .replace(currentKey, prevAggValue)
                        .setRowKind(RowKind.UPDATE_BEFORE);
                out.collect(resultRow);
            }
            // prepare UPDATE_AFTER message for new row
            resultRow.replace(currentKey, newAggValue).setRowKind(RowKind.UPDATE_AFTER);
        }
    }
Since the record count is greater than 0 (we didn't retract), this is not the first row received for the key, and the AggFunction has generateUpdateBefore set to true, we first emit an UPDATE_BEFORE message (-U), followed immediately by an UPDATE_AFTER (+U).
All the relevant code can be found here.
I have these entities:
@Entity
public class Content extends AbstractEntity
{
    @NotNull
    @OneToOne(optional = false)
    @JoinColumn(name = "CURRENT_CONTENT_REVISION_ID")
    private ContentRevision current;

    @OneToMany(mappedBy = "content", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<ContentRevision> revisionList = new ArrayList<>();
}
@Entity
public class ContentRevision extends AbstractEntity
{
    @NotNull
    @ManyToOne(optional = false)
    @JoinColumn(name = "CONTENT_ID")
    private Content content;

    @Column(name = "TEXT_DATA")
    private String textData;

    @Temporal(TIMESTAMP)
    @Column(name = "REG_DATE")
    private Date registrationDate;
}
and this is the db mapping:
CONTENT
+-----------------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------------------+--------------+------+-----+---------+----------------+
| ID | bigint(20) | NO | PRI | NULL | auto_increment |
| CURRENT_CONTENT_REVISION_ID | bigint(20) | NO | MUL | NULL | |
+-----------------------------+--------------+------+-----+---------+----------------+
CONTENT_REVISION
+-----------------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------------------+--------------+------+-----+---------+----------------+
| ID | bigint(20) | NO | PRI | NULL | auto_increment |
| REG_DATE | datetime | YES | | NULL | |
| TEXT_DATA | longtext | YES | | NULL | |
| CONTENT_ID | bigint(20) | NO | MUL | NULL | |
+-----------------------------+--------------+------+-----+---------+----------------+
I have also these requirements:
Content.current is always a member of Content.revisionList (think about Content.current as a "pointer").
Users can add a new ContentRevision to an existing Content
Users can add a new Content with an initial ContentRevision (cascade persist)
Users can change Content.current (move the "pointer")
Users can modify Content.current.textData, but save Content (cascade merge)
Users can delete ContentRevision
Users can delete Content (cascade remove to ContentRevision)
Now, my questions are:
Is this the best approach? Any best practice?
Is it safe to cascade merge when the same entity is referenced twice? (Content.current is also Content.revisionList[i])
Are Content.current and Content.revisionList[i] the same instance? (Content.current == Content.revisionList[i] ?)
Thanks
@jabu.10245 I'm very grateful for your effort. Thank you, really.
However, there's a problematic case missing from your tests: running it inside a container using CMT:
@RunWith(Arquillian.class)
public class ArquillianTest
{
    @PersistenceContext
    private EntityManager em;

    @Resource
    private UserTransaction utx;

    @Deployment
    public static WebArchive createDeployment()
    {
        // Create deploy file
        WebArchive war = ShrinkWrap.create(WebArchive.class, "test.war");
        war.addPackages(...);
        war.addAsResource("persistence-arquillian.xml", "META-INF/persistence.xml");
        war.addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");

        // Show the deploy structure
        System.out.println(war.toString(true));

        return war;
    }

    @Test
    public void testDetached()
    {
        // find a document
        Document doc = em.find(Document.class, 1L);
        System.out.println("doc: " + doc); // Document#1342067286

        // get first content
        Content content = doc.getContentList().stream().findFirst().get();
        System.out.println("content: " + content); // Content#511063871

        // get current revision
        ContentRevision currentRevision = content.getCurrentRevision();
        System.out.println("currentRevision: " + currentRevision); // ContentRevision#1777954561

        // get last revision
        ContentRevision lastRevision = content.getRevisionList().stream().reduce((prev, curr) -> curr).get();
        System.out.println("lastRevision: " + lastRevision); // ContentRevision#430639650

        // test equality
        boolean equals = Objects.equals(currentRevision, lastRevision);
        System.out.println("1. equals? " + equals); // true

        // test identity
        boolean same = currentRevision == lastRevision;
        System.out.println("1. same? " + same); // false!!!!!!!!!!

        // since they are not the same, the rest makes little sense...

        // make it dirty
        currentRevision.setTextData("CHANGED " + System.currentTimeMillis());

        // perform merge in CMT transaction
        utx.begin();
        doc = em.merge(doc);
        utx.commit(); // --> ERROR!!!

        // get first content
        content = doc.getContentList().stream().findFirst().get();

        // get current revision
        currentRevision = content.getCurrentRevision();
        System.out.println("currentRevision: " + currentRevision);

        // get last revision
        lastRevision = content.getRevisionList().stream().reduce((prev, curr) -> curr).get();
        System.out.println("lastRevision: " + lastRevision);

        // test equality
        equals = Objects.equals(currentRevision, lastRevision);
        System.out.println("2. equals? " + equals);

        // test identity
        same = currentRevision == lastRevision;
        System.out.println("2. same? " + same);
    }
}
since they are not the same:
if I enable cascading on both properties, an Exception is thrown
java.lang.IllegalStateException:
Multiple representations of the same entity [it.shape.edea2.jpa.ContentRevision#1] are being merged.
Detached: [ContentRevision#430639650];
Detached: [ContentRevision#1777954561]
if I disable cascade on current, the change gets lost.
The strange thing is that running this test outside the container results in successful execution.
Maybe it's lazy loading (hibernate.enable_lazy_load_no_trans=true), maybe something else, but it's definitely NOT SAFE.
I wonder if there's a way to get the same instance.
Is it safe to cascade merge when the same entity is referenced twice?
Yes. If you manage an instance of Content, then its Content.revisionList and Content.current are managed as well. Changes to any of those will be persisted when flushing the entity manager. You don't have to call EntityManager.merge(...) manually, unless you're dealing with transient objects that need to be merged.
If you create a new ContentRevision, then call persist(...) instead of merge(...) with that new instance, make sure it has a managed reference to the parent Content, and also add it to the content's list.
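A minimal sketch of that flow, assuming an active transaction and conventional accessors (setContent, setTextData and getRevisionList are assumptions based on the mappings in the question):

// Create and persist a new revision for a managed Content instance.
public ContentRevision addRevision(EntityManager entityManager, Content content, String text) {
    ContentRevision revision = new ContentRevision();
    revision.setContent(content);              // managed reference to the parent
    revision.setTextData(text);
    content.getRevisionList().add(revision);   // keep both sides of the relation in sync
    entityManager.persist(revision);           // persist(), not merge(), for the new instance
    return revision;
}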
Are Content.current and Content.revisionList[i] the same instance?
Yes, they should be. Test it to be sure.
Content.current is always a member of Content.revisionList (think about Content.current as a "pointer").
You could do that check in SQL with a check constraint, or in Java, although you'd have to be sure the revisionList is fetched. By default it's fetched lazily, meaning Hibernate will run another query for this list when you access the getRevisionList() method. And for that you need a running transaction, otherwise you'll get a LazyInitializationException.
You could instead load the list eagerly, if that's what you want. Or you could define an entity graph to support both strategies in different queries.
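A minimal sketch of the entity-graph option, assuming JPA 2.1+ (the graph name here is made up):

import java.util.Collections;
import javax.persistence.*;

@Entity
@NamedEntityGraph(
        name = "Content.withRevisions", // hypothetical graph name
        attributeNodes = @NamedAttributeNode("revisionList"))
public class Content extends AbstractEntity {
    // ... mappings as shown in the question ...
}

// Later, to fetch the revision list eagerly for one specific query:
EntityGraph<?> graph = em.getEntityGraph("Content.withRevisions");
Content content = em.find(Content.class, 1L,
        Collections.singletonMap("javax.persistence.fetchgraph", graph));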
Users can modify Content.current.textData, but save Content (cascade merge)
See my first paragraph above, Hibernate should save changes on any managed entity automatically.
Users can delete ContentRevision
if (content.getRevisionList().remove(revision))
    entityManager.remove(revision);

if (revision.equals(content.getCurrentRevision()))
    content.setCurrentRevision(/* to something else */);
Users can delete Content (cascade remove to ContentRevision)
Here I'd prefer to ensure that in the database schema, for instance:
FOREIGN KEY (content_id) REFERENCES content (id) ON DELETE CASCADE;
UPDATE
As requested, I wrote a test. See this gist for the implementations of Content and ContentRevision I used.
I had to make one important change though: Content.current cannot really be @NotNull, especially not the DB field, because if it were, then we couldn't persist a content and revision at the same time, since both have no ID yet. Hence the field must be allowed to be NULL initially.
As a workaround I added the following method to Content:
@Transient // ignored in JPA
@AssertTrue // javax.validation
public boolean isCurrentRevisionInList() {
    return current != null && getRevisionList().contains(current);
}
Here the validator ensures that there is always a non-null current revision and that it is contained in the revision list.
Now here are my tests.
This one proves that the references are the same (Question 3) and that it is enough to persist content where current and revisionList[0] is referencing the same instance (question 2):
@Test @InSequence(0)
public void shouldCreateContentAndRevision() throws Exception {
    // create java objects, unmanaged:
    Content content = Content.create("My first test");
    assertNotNull("content should have current revision", content.getCurrent());
    assertSame("content should be same as revision's parent", content, content.getCurrent().getContent());
    assertEquals("content should have 1 revision", 1, content.getRevisionList().size());
    assertSame("the list should contain same reference", content.getCurrent(), content.getRevisionList().get(0));

    // persist the content, along with the revision:
    transaction.begin();
    entityManager.joinTransaction();
    entityManager.persist(content);
    transaction.commit();

    // verify:
    assertEquals("content should have ID 1", Long.valueOf(1), content.getId());
    assertEquals("content should have one revision", 1, content.getRevisionList().size());
    assertNotNull("content should have current revision", content.getCurrent());
    assertEquals("revision should have ID 1", Long.valueOf(1), content.getCurrent().getId());
    assertSame("current revision should be same reference", content.getCurrent(), content.getRevisionList().get(0));
}
The next ensures that it's still true after loading the entity:
@Test @InSequence(1)
public void shouldLoadContentAndRevision() throws Exception {
    Content content = entityManager.find(Content.class, Long.valueOf(1));
    assertNotNull("should have found content #1", content);

    // same checks as before:
    assertNotNull("content should have current revision", content.getCurrent());
    assertSame("content should be same as revision's parent", content, content.getCurrent().getContent());
    assertEquals("content should have 1 revision", 1, content.getRevisionList().size());
    assertSame("the list should contain same reference", content.getCurrent(), content.getRevisionList().get(0));
}
And even when updating it:
@Test @InSequence(2)
public void shouldAddAnotherRevision() throws Exception {
    transaction.begin();
    entityManager.joinTransaction();
    Content content = entityManager.find(Content.class, Long.valueOf(1));
    ContentRevision revision = content.addRevision("My second revision");
    entityManager.persist(revision);
    content.setCurrent(revision);
    transaction.commit();

    // re-load and validate:
    content = entityManager.find(Content.class, Long.valueOf(1));

    // same checks as before:
    assertNotNull("content should have current revision", content.getCurrent());
    assertSame("content should be same as revision's parent", content, content.getCurrent().getContent());
    assertEquals("content should have 2 revisions", 2, content.getRevisionList().size());
    assertSame("the list should contain same reference", content.getCurrent(), content.getRevisionList().get(1));
}
SELECT * FROM content;
id | version | current_content_revision_id
----+---------+-----------------------------
1 | 2 | 2
UPDATE 2
It was hard to reproduce that situation on my machine, but I got it to work. Here is what I've done so far:
I changed all @OneToMany relations to use lazy fetching (the default) and re-ran the following test case:
@Test @InSequence(3)
public void shouldChangeCurrentRevision() throws Exception {
    transaction.begin();
    entityManager.joinTransaction();
    Document document = entityManager.find(Document.class, Long.valueOf(1));
    assertNotNull(document);
    assertEquals(1, document.getContentList().size());
    Content content = document.getContentList().get(0);
    assertNotNull(content);
    ContentRevision revision = content.getCurrent();
    assertNotNull(revision);
    assertEquals(2, content.getRevisionList().size());
    assertSame(revision, content.getRevisionList().get(1));
    revision.setTextData("CHANGED");
    document = entityManager.merge(document);
    content = document.getContentList().get(0);
    revision = content.getCurrent();
    assertSame(revision, content.getRevisionList().get(1));
    assertEquals("CHANGED", revision.getTextData());
    transaction.commit();
}
The test passed with lazy fetching. Note that lazy fetching requires it to be executed within a transaction.
For some reason the content revision instance you're editing is not the same as the one in the one-to-many list. To reproduce that I've modified my test as follows:
@Test @InSequence(4)
public void shouldChangeCurrentRevision2() throws Exception {
    transaction.begin();
    Document document = entityManager.find(Document.class, Long.valueOf(1));
    assertNotNull(document);
    assertEquals(1, document.getContentList().size());
    Content content = document.getContentList().get(0);
    assertNotNull(content);
    ContentRevision revision = content.getCurrent();
    assertNotNull(revision);
    assertEquals(2, content.getRevisionList().size());
    assertSame(revision, content.getRevisionList().get(1));
    transaction.commit();

    // load another instance, different from the one in the list:
    revision = entityManager.find(ContentRevision.class, revision.getId());
    revision.setTextData("CHANGED2");

    // start another TX, replace the "current revision" but not the one
    // in the list:
    transaction.begin();
    document.getContentList().get(0).setCurrent(revision);
    document = entityManager.merge(document); // here's your error!!!
    transaction.commit();

    content = document.getContentList().get(0);
    revision = content.getCurrent();
    assertSame(revision, content.getRevisionList().get(1));
    assertEquals("CHANGED2", revision.getTextData());
}
And there, I got exactly your error. Then I modified the cascading setting on the @OneToMany mapping:
@OneToMany(mappedBy = "content", cascade = { PERSIST, REFRESH, REMOVE }, orphanRemoval = true)
private List<ContentRevision> revisionList;
And the error disappeared :-) ... because I removed CascadeType.MERGE.
I am new to QueryDSL and I'm trying to use it in pure Java (no Hibernate, JPA or anything).
We have a database where the tables are linked through a minimum of 3 columns.
I followed the doc here and ended up with my schema duly created.
Here are my pseudo tables:

Item
  Corporation (pk)       mmcono
  Item number (pk)       mmitno
  Environment (pk)       mmenv
  Item description       mmitds

Item_warehouse
  Corporation (fk for Item)    mbcono
  Item number (fk for Item)    mbitno
  Environment (fk for Item)    mbenv
  Warehouse number             mbwhlo
  Other properties (not important)
Inside the Item_warehouse class, I manually added the foreign key (because it's not defined in the actual DB schema):

public final com.querydsl.sql.ForeignKey<QItemWharehouse> _ItemWharehouseFk =
        createInvForeignKey(Arrays.asList(mbitno, mbcono, mbenv), Arrays.asList("mmitno", "mmcono", "mbenv"));
I'm working on the following code in my main class:
SQLTemplates templates = SQLServer2012Templates.builder().printSchema().build();
Configuration configuration = new Configuration(templates);

QItem mm = new QItem("mm");
QItemWarehouse mb = new QItemWarehouse("mb");

JtdsDataSource ds = getDataSource();
SQLQueryFactory queryFactory = new SQLQueryFactory(configuration, ds);

String toto = queryFactory.select(mm.mmitno, mm.mmitds)
        .from(mm)
        .join( ???????????? )
        .where(mb.mbwhlo.eq("122"))
        .fetch();
As per doc here I should be able to do something like this : AbstractSQLQuery.innerJoin(ForeignKey<E> key, RelationalPath<E> entity)
What I want in the end is to allow joining tables without having to manually specify all the columns required for the join condition.
As stated before, my model starts with a minimum of 3 columns in the PK, and it's not uncommon to have 6 or 7 columns in the ON clause! It's a lot of typing and very error prone, because you can easily miss one and get duplicate results.
I would like something like .join(mb._ItemWharehouseFk, ???) and let querydsl handle little details like generating the on clause for me.
My trouble is that I can't find the second parameter of type RelationalPath<E> entity for the join method.
Am I doing something wrong? What am I missing? Is it even possible to accomplish what I want?
Oops, I found the problem: I had it all in the wrong order.
The foreign key was located in the ItemWarehouse class.
It should have been declared this way:
public final com.querydsl.sql.ForeignKey<QItem> _ItemFk =
        createInvForeignKey(Arrays.asList(mbitno, mbcono, mbenv), Arrays.asList("mmitno", "mmcono", "mbenv"));
That means you just have to reverse the order in the statement, this way:
SQLTemplates templates = SQLServer2012Templates.builder().printSchema().build();
Configuration configuration = new Configuration(templates);

QItem mm = new QItem("mm");
QItemWarehouse mb = new QItemWarehouse("mb");

JtdsDataSource ds = getDataSource();
SQLQueryFactory queryFactory = new SQLQueryFactory(configuration, ds);

List<Tuple> toto = queryFactory.select(mm.mmitno, mm.mmitds)
        .from(mb)
        .join(mb._ItemFk, mm)
        .where(mb.mbwhlo.eq("122"))
        .fetch();
And you get your nice ON clause generated. It's just a question of how you construct your relation.
@Enigma, I sincerely hope it will help you for your Friday afternoon. I wouldn't want your boss to be disappointed with you :-)
I would like to create a HashMap where the key is a String and the value is a List. All the values are taken from a MySQL table. The problem is that I end up with a HashMap where the key is right but the value is not, because it gets overwritten: for all the different keys I have the same list with the same content.
This is the code:
public static HashMap<String, List<Table_token>> getHashMapFromTokenTable() throws SQLException, Exception {
    DbAccess.initConnection();

    List<Table_token> listFrom_token = new ArrayList();
    HashMap<String, List<Table_token>> hMapIdPath = new HashMap<String, List<Table_token>>();

    String query = "select * from token";
    resultSet = getResultSetByQuery(query);
    while (resultSet.next()) {
        String token = resultSet.getString(3);
        String path = resultSet.getString(4);
        String word = resultSet.getString(5);
        String lemma = resultSet.getString(6);
        String postag = resultSet.getString(7);
        String isTerminal = resultSet.getString(8);

        Table_token t_token = new Table_token();
        t_token.setIdToken(token);
        t_token.setIdPath(path);
        t_token.setWord(word);
        t_token.setLemma(lemma);
        t_token.setPosTag(postag);
        t_token.setIsTerminal(isTerminal);
        listFrom_token.add(t_token);

        System.out.println("path " + path + " path2: " + token);

        int row = resultSet.getRow();
        if (resultSet.next()) {
            if ((resultSet.getString(4).compareTo(path) != 0)) {
                hMapIdPath.put(path, listFrom_token);
                listFrom_token.clear();
            }
            resultSet.absolute(row);
        }
        if (resultSet.isLast()) {
            hMapIdPath.put(path, listFrom_token);
            listFrom_token.clear();
        }
    }
    DbAccess.closeConnection();
    return hMapIdPath;
}
You can find an example of the content of the HashMap below:
key: p000000383
content: [t0000000000000019231, t0000000000000019232, t0000000000000019233]
key: p000000384
content: [t0000000000000019231, t0000000000000019232, t0000000000000019233]
The values in "content" come from the last rows in the MySQL table for the same key.
mysql> select * from token where idpath='p000003361';
+---------+------------+----------------------+------------+
| idDoc | idSentence | idToken | idPath |
+---------+------------+----------------------+------------+
| d000095 | s000000048 | t0000000000000019231 | p000003361 |
| d000095 | s000000048 | t0000000000000019232 | p000003361 |
| d000095 | s000000048 | t0000000000000019233 | p000003361 |
+---------+------------+----------------------+------------+
3 rows in set (0.04 sec)
You need to allocate a new listFrom_token each time instead of clear()ing it. Replace this:
listFrom_token.clear();
with:
listFrom_token = new ArrayList<Table_token>();
Putting the list in the HashMap does not make a copy of the list. You are clearing and refilling the same list over and over.
Your data shows that idPath is not a primary key, yet that's what you're using as the key in the Map. Maybe you should make idToken the key in the Map - it's the only thing in your example that's unique.
Your other choice is to make the column name the key and give the values to the List. Then you'll have four keys, each with a List containing four values.
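As a side note, a hedged sketch (assuming Java 8+ and the same Table_token setters as in the question) of how the grouping loop can be written so the look-ahead with resultSet.next()/resultSet.absolute() isn't needed at all:

HashMap<String, List<Table_token>> hMapIdPath = new HashMap<>();
while (resultSet.next()) {
    String path = resultSet.getString(4);

    Table_token t_token = new Table_token();
    t_token.setIdToken(resultSet.getString(3));
    t_token.setIdPath(path);
    t_token.setWord(resultSet.getString(5));
    t_token.setLemma(resultSet.getString(6));
    t_token.setPosTag(resultSet.getString(7));
    t_token.setIsTerminal(resultSet.getString(8));

    // computeIfAbsent creates a fresh list the first time a path is seen,
    // so every key gets its own list and nothing is shared or cleared.
    hMapIdPath.computeIfAbsent(path, k -> new ArrayList<>()).add(t_token);
}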