NetBeans E-commerce Tutorial submit order error - java

I'm trying to learn Java EE web development. I started by following [The NetBeans E-commerce Tutorial - Integrating Transactional Business Logic][1], but I'm having a problem when I click "submit order" to enter the data into the database: the order ID never gets updated and therefore it fails.
I did use em.flush() as the tutorial explains:
To understand why order.getId method returns null, consider what the
code is actually trying to accomplish. The getId method attempts to
get the ID of an order which is currently in the process of being
created. Since the ID is an auto-incrementing primary key, the
database automatically generates the value only when the record is
added. One way to overcome this is to manually synchronize the
persistence context with the database. This can be accomplished using
the EntityManager's flush method.
but I still get the order ID as 0.
private void addOrderedItems(CustomerOrder order, ShoppingCart cart) {
    em.flush();
    List<ShoppingCartItem> items = cart.getItems();

    // iterate through shopping cart and create OrderedProducts
    for (ShoppingCartItem scItem : items) {
        int productId = scItem.getProduct().getId();

        // set up primary key object
        OrderedProductPK orderedProductPK = new OrderedProductPK();
        orderedProductPK.setCustomerOrderId(order.getId());
        orderedProductPK.setProductId(productId);

        // create ordered item using PK object
        OrderedProduct orderedItem = new OrderedProduct(orderedProductPK);

        // set quantity
        orderedItem.setQuantity(scItem.getQuantity());

        em.persist(orderedItem);
    }
}
The problem is fixed now. I had not given a value for the date, which must not be null; that's why the row could not be written to the database.
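For anyone hitting the same thing, a minimal sketch of the fix (the setter names below are assumptions; use whatever date and amount properties your generated CustomerOrder entity actually has):

// Sketch only: populate the non-nullable date column before persisting,
// then flush so the auto-generated ID becomes available via order.getId().
CustomerOrder order = new CustomerOrder();
order.setCustomer(customer);                  // customer resolved earlier
order.setAmount(total);                       // order total computed from the cart
order.setDateCreated(new java.util.Date());   // this was missing and violated the NOT NULL constraint
em.persist(order);
em.flush();                                   // synchronize with the DB so the generated key is loaded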

Related

JPA concurrent postgresql counter column with retrieving value

Prerequisites
PostgreSQL
Spring Boot with Spring Data JPA
Problem
I have 2 tables, Products and ProductsLocationCounter. Each product has location_id and counter_value fields, among others. location_id is also the primary key of ProductsLocationCounter.
ProductsLocationCounter is meant to keep a count of the number of products per location_id, incremented whenever a new product is added.
The problem is that I also need the counter value at that point in time to be attached to the product entity.
So the flow would be like
1. create product
2. counter_value = get counter
3. increment counter
4. product.counter_value = counter_value
Of course this has to be done in a concurrency-safe manner.
Now, I've read/tried different solutions.
This Stack Overflow post suggests that I should let the DB handle the concurrency, which sounds fine to me. But the trick is that I need the value of the counter in the same transaction. So I've created a trigger:
CREATE FUNCTION maintain_location_product_count_fun() RETURNS TRIGGER AS
$$
DECLARE
    counter_var BIGINT;
BEGIN
    IF TG_OP IN ('INSERT') THEN
        SELECT product_location_count.counter INTO counter_var
        FROM product_location_count
        WHERE id = new.location_id FOR UPDATE;

        UPDATE product_location_count SET counter = counter + 1 WHERE id = new.location_id;
        UPDATE products SET counter_value = counter_var WHERE location_id = new.location_id;
    END IF;
    RETURN NULL;
END
$$
LANGUAGE plpgsql;

CREATE TRIGGER maintain_location_product_count_trig
AFTER INSERT ON products
FOR EACH ROW
EXECUTE PROCEDURE maintain_location_product_count_fun();
and tested it with a parallel stream
IntStream.range(1, 5000)
    .parallel()
    .forEach(value -> {
        executeInsideTransactionTemplate(status -> {
            var location = locationRepository.findById(locationId).get(); // locationId: id of an existing location
            return addProductWithLocation(location);
        });
    });
I got no duplication in the counter_value column. Is this trigger safe for multi-threaded apps? I haven't worked with triggers/PostgreSQL functions before, so I'm not sure what to expect.
The second solution I tried was to add PESSIMISTIC_WRITE on the findById method of the ProductsLocationCounter entity, but I ended up getting
cannot execute SELECT FOR UPDATE in a read-only transaction
even though I was executing the code in a @Transactional annotated method (which by default has readOnly = false).
The third one was to update and retrieve the value of the counter in the same statement, but Spring Data JPA doesn't allow that, as the JPQL update statement only returns the number of rows affected.
Is there any other solution, or do I need to add something to the trigger function to make it thread-safe? Thank you
This is how I achieved what I needed.
Long story short, I used a SQL function and called it from the repository; I didn't need the trigger anymore.
https://stackoverflow.com/a/74208072/3018285
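For reference, a minimal sketch of that approach with Spring Data JPA (the function name increment_location_counter and the repository below are illustrative assumptions, not taken from the linked answer): because the increment and the read happen in one SQL statement, the row lock taken by the UPDATE serializes concurrent callers.

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface ProductLocationCounterRepository
        extends JpaRepository<ProductLocationCounter, Long> {

    // The SQL function (created once, e.g. in a migration) would do roughly:
    //   UPDATE product_location_count SET counter = counter + 1
    //   WHERE id = loc_id RETURNING counter;
    // Calling it from a SELECT lets Spring Data treat it as a plain query.
    @Query(value = "SELECT increment_location_counter(:locationId)", nativeQuery = true)
    Long incrementAndGetCounter(@Param("locationId") Long locationId);
}

Note that the calling method still needs a read-write @Transactional context, otherwise PostgreSQL rejects the write just like the SELECT FOR UPDATE case above.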

Java to PostgreSQL. I am using SERIAL for my id column; can I make it fill in missing values when I remove a row and add a new one?

I am building a library DB. Sometimes I need to remove books and add new ones, and I don't like how the table ends up missing the lower ids and just keeps counting up, because SERIAL only ever increases. Is there a way to avoid that, for example a different column type instead of SERIAL?
I don't know if that will solve your problem, but I suggest renumbering all the serial ids again.
So if you want them to be (1, 2, 3, 4) you can use this query:
ALTER SEQUENCE seq RESTART WITH 1;
UPDATE t SET idcolumn=nextval('seq');
source: How to reset sequence in postgres and fill id column with new data?
You can create a service method that executes this query and call it from your book-deleting service, so after deleting books the ids get rearranged:
public void updateIds() {
    // make a connection with your database and send the query above to update the ids
}
and in the service where you delete books, I suppose you have something like this:
public void deleteBooks(/* some book ids to be deleted */) {
    // some code for deleting books by id
    // call the method above to renumber the ids again
    updateIds();
}
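A minimal sketch of what updateIds() could look like with Spring's JdbcTemplate, reusing the placeholder names seq, t and idcolumn from the quoted query (adjust them to your schema). Keep in mind that renumbering primary keys will break any foreign keys that still reference the old values.

import org.springframework.jdbc.core.JdbcTemplate;

public class BookIdService {

    private final JdbcTemplate jdbcTemplate;

    public BookIdService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Renumber all ids so they are contiguous again, using the two statements above.
    public void updateIds() {
        jdbcTemplate.execute("ALTER SEQUENCE seq RESTART WITH 1");
        jdbcTemplate.update("UPDATE t SET idcolumn = nextval('seq')");
    }
}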

jOOQ upsert a pojo

Feeling a bit stupid, but I have a simple architecture where the repositories are the only classes accessing the generated Record classes and the services work on POJOs.
So the basic flow is:
1. the repository fetches into a POJO
2. the service modifies the POJO
3. the repository receives the POJO to update the DB
4. the repository matches the updated POJO to a record
5. the repository stores (inserts or updates) the record
6. the repository maps the updated record (which may have received generated values from the insert) back to a POJO
7. the service receives the updated POJO
i.e. something like
fun save(set: MySet): MySet {
    set.description = set.description ?: ""
    val record = ctx.newRecord(MY_SET, set).apply {
        store()
    }
    // "When store() performs an INSERT statement, jOOQ attempts to load any generated keys from the database back into the record."
    // cf. https://www.jooq.org/doc/latest/manual/sql-execution/crud-with-updatablerecords/simple-crud/
    return record.into(MySet::class.java)
}
This fails because, to quote the documentation for newRecord:
Create a new pre-filled Record that can be inserted into the corresponding table.
This performs roughly the inverse operation of Record.into(Class)
The resulting record will have its internal "changed" flags set to true for all values. This means that UpdatableRecord.store() will perform an INSERT statement. If you wish to store the record using an UPDATE statement, use executeUpdate(UpdatableRecord) instead.
I CAN, of course, check whether I have an ID, and then either fetch the record from the database or create a new one:
fun save(set: MySet): MySet {
    set.description = set.description ?: ""
    val record = when (val setId = set.id) {
        null -> ctx.newRecord(MY_SET, set)
        else -> ctx.selectFrom(MY_SET).where(MY_SET.ID.eq(setId)).fetchSingle()
    }
    // TODO: update record manually from `set`
    record.store()
    // "When store() performs an INSERT statement, jOOQ attempts to load any generated keys from the database back into the record."
    // cf. https://www.jooq.org/doc/latest/manual/sql-execution/crud-with-updatablerecords/simple-crud/
    return record.into(MySet::class.java)
}
But that is a lot of boilerplate code.
I DO have access to the MySetDao, but that one only has insert and update; there's no store or upsert as far as I can see.
Is there a way to turn a POJO into an UpdatableRecord directly, or is this fetch-and-manual-update the way to go?
(Worth noting: the MySet POJO used here was generated by jOOQ.)
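Not an authoritative answer, but a minimal Java sketch of one way to avoid the extra fetch, based on the quoted newRecord()/store() semantics (it assumes the generated record type MySetRecord is an UpdatableRecord; whether this fits depends on your jOOQ setup):

import org.jooq.DSLContext;

public MySet save(DSLContext ctx, MySet set) {
    // newRecord copies the POJO and marks every field as changed
    MySetRecord record = ctx.newRecord(MY_SET, set);

    if (set.getId() == null) {
        record.insert();   // plain INSERT; generated keys are loaded back into the record
    } else {
        record.update();   // UPDATE ... WHERE ID = ?, no prior SELECT needed
    }
    return record.into(MySet.class);
}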

Retrieve value of a DB column after I update it

Sorry in advance for the long post. I'm working on a Java web application which uses Spring (2.0, I know...) and JPA with the Hibernate implementation (Hibernate 4.1 and hibernate-jpa-2.0.jar). I'm having problems retrieving the value of a column from a DB table (MySQL 5) after I update it. This is my situation (simplified, but that's the core of it):
Table KcUser:
    Id: Long (primary key)
    Name: String
    ...
    Contract_Id: Long (foreign key, references KcContract.Id)

Table KcContract:
    Id: Long (primary key)
    ColA
    ...
    ColX
In my server I have something like this:
MyController {
    myService.doSomething();
}

MyService {
    private EntityManager myEntityManager;

    @Transactional(readOnly = true)
    public void doSomething() {
        List<Long> IDs = firstFetch(); // retrieves some user IDs by querying the KcContract table
        doUpdate(IDs);                 // updates a column on the KcUser rows that match the IDs retrieved by the previous query
        secondFetch(IDs);              // finally retrieves the KcUser rows <-- here the returned rows contain the old value, not the new one updated in the previous method
    }

    @Transactional(readOnly = true)
    private List<Long> firstFetch() {
        // not the actual query, there are some conditions in the where clause, but you get the idea
        List<Long> userIDs = myEntityManager.createQuery("select c.id from KcContract c").getResultList();
        return userIDs;
    }

    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    private void doUpdate(List<Long> IDs) {
        Query hql = myEntityManager.createQuery("update KcUser t set t.name='newValue' WHERE t.contract.id IN (:list)").setParameter("list", IDs);
        int howMany = hql.executeUpdate();
        System.out.println("HOW MANY: " + howMany); // howMany is correct, with the number of updated rows in the DB

        Query select = myEntityManager.createQuery("select t from KcUser t WHERE t.contract.id IN (:list)").setParameter("list", IDs);
        List<KcUser> users = select.getResultList();
        System.out.println("users: " + users.get(0).getName()); // correct, newValue!
    }

    private void secondFetch(List<Long> IDs) {
        List<KcUser> users = myEntityManager.createQuery("from KcUser t WHERE t.contract.id IN (:list)").setParameter("list", IDs).getResultList();
        for (KcUser u : users) {
            myEntityManager.refresh(u);
            String name = u.getName(); // still oldValue!
        }
    }
}
The strange thing is that if I comment out the call to the first method (myService.firstFetch()) and call the other two methods with a constant list of IDs, I get the correct new KcUser.name value in the secondFetch() method.
I'm not very experienced with JPA and Hibernate, but I thought it might be a cache problem, so I've tried:
using myEntityManager.flush() after the update
clearing the cache with myEntityManager.clear() and myEntityManager.getEntityManagerFactory().getCache().evictAll()
clearing the cache with Hibernate's Session.clear()
using myEntityManager.refresh on the KcUser entities
using native queries (myEntityManager.createNativeQuery("...")), which to my understanding should not involve any cache
None of that worked, and I always got back the old KcUser.name value in the secondFetch() method.
The only things that have worked so far are:
making the firstFetch() method public and moving its call outside of myService.doSomething(), i.e. doing something like this in MyController:
List<Long> IDs = myService.firstFetch();
myService.doSomething(IDs);
using a new EntityManager in secondFetch(), i.e. doing something like this:
EntityManager newEntityManager = myEntityManager.getEntityManagerFactory().createEntityManager();
and using it to execute the subsequent query to fetch the users from the DB.
With either of these last two approaches, the second select works fine and I get the users with the updated value in the "name" column.
But I'd like to know what's actually happening and why none of the other things worked: if it's really a cache problem, a simple .clear() or .refresh() should have worked, I think. Or maybe I'm totally wrong and it's not related to the cache at all, but then I'm a bit lost as to what might actually be happening.
I fear there might be something wrong in the way we are using Hibernate/JPA which might bite us in the future.
Any ideas please? Tell me if you need more details, and thanks for your help.
Actions are performed in the following order:
1. Read-only transaction A opens.
2. First fetch (transaction A)
3. Non-read-only transaction B opens
4. Update (transaction B)
5. Transaction B commits and closes
6. Second fetch (transaction A)
7. Transaction A closes
Transaction A is read-only. All subsequent queries in that transaction see only changes that were committed before the transaction began - your update was performed after that.
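A minimal sketch of one way to act on that, assuming you can restructure the service from the question: give the second fetch its own transaction so that its snapshot is taken only after transaction B has committed. (For the annotation to take effect the method must be public and invoked through the Spring proxy, i.e. from another bean, not via this.)

@Transactional(readOnly = true, propagation = Propagation.REQUIRES_NEW)
public List<KcUser> secondFetch(List<Long> IDs) {
    // Runs in a fresh transaction, so it sees the values committed by doUpdate()
    return myEntityManager
            .createQuery("from KcUser t WHERE t.contract.id IN (:list)", KcUser.class)
            .setParameter("list", IDs)
            .getResultList();
}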

Proper way to insert record with unique attribute

I am using Spring, Hibernate and PostgreSQL.
Let's say I have a table looking like this:
CREATE TABLE test
(
    id integer NOT NULL,
    name character(10),
    CONSTRAINT test_unique UNIQUE (id)
);
So whenever I insert a record, the attribute id must be unique.
I would like to know which is the better way to insert a new record (in my Spring Java app):
1) Check if a record with the given id exists, and insert it only if it doesn't, something like this:
if (testDao.find(id) == null) {
    Test test = new Test(id, name);
    testDao.create(test);
}
2) Call the create method straight away and see whether it throws a DataAccessException...
Test test = new Test(id, name);
try {
    testDao.create(test);
} catch (DataAccessException e) {
    System.out.println("Error inserting record");
}
I consider the 1st way more appropriate, but it means more processing for the DB. What is your opinion?
Thank you in advance for any advice.
Option (1) is subject to a race condition, where a concurrent session could create the record between checking for it and inserting it. This window is longer than you might expect, because the record might already have been inserted by another transaction that has not yet committed.
Option (2) is better, but will result in a lot of noise in the PostgreSQL error logs.
The best way is to use PostgreSQL 9.5's INSERT ... ON CONFLICT ... support to do a reliable, race-condition-free insert-if-not-exists operation.
On older versions you can use a loop in PL/pgSQL.
Both of those options require the use of native queries, of course.
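For illustration, a minimal sketch of the ON CONFLICT route as a native query through JPA (assuming the test table from the question and an injected EntityManager; the statement must run inside a transaction):

// Insert-if-not-exists in a single round trip; PostgreSQL resolves the
// conflict on the unique id instead of Java checking first.
int inserted = entityManager
        .createNativeQuery(
            "INSERT INTO test (id, name) VALUES (:id, :name) " +
            "ON CONFLICT (id) DO NOTHING")
        .setParameter("id", id)
        .setParameter("name", name)
        .executeUpdate();   // 1 if the row was inserted, 0 if it already existed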
It depends on the source of your ID. If you generate it yourself, you can assert uniqueness and rely on catching an exception, e.g. by using a UUID: http://docs.oracle.com/javase/1.5.0/docs/api/java/util/UUID.html
Another way would be to let Postgres generate the ID using the SERIAL data type:
http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
If you have to take the ID from an untrusted source, do the prior check.
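As a sketch of that second suggestion, with JPA/Hibernate you can let PostgreSQL generate the ID instead of supplying it yourself (assuming an entity mapped to the test table; the column then becomes a SERIAL/identity column rather than a hand-assigned integer):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Test {

    // The database assigns the id, so uniqueness is guaranteed
    // without any prior existence check in application code.
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    private String name;
}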
