I am using Spring, Hibernate and PostgreSQL.
Let's say I have a table looking like this:
CREATE TABLE test
(
    id integer NOT NULL,
    name character(10),
    CONSTRAINT test_unique UNIQUE (id)
)
So whenever I insert a record, the id attribute must be unique.
I would like to know which is the better way to insert a new record (in my Spring Java app):
1) Check whether a record with the given id exists and, if it doesn't, insert the record, something like this:
if (testDao.find(id) == null) {
    Test test = new Test(id, name);
    testDao.create(test);
}
2) Call the create method straight away and catch the DataAccessException if it is thrown:
Test test = new Test(id, name);
try {
    testDao.create(test);
} catch (DataAccessException e) {
    System.out.println("Error inserting record");
}
I consider the 1st way more appropriate, but it means an extra round trip to the DB. What is your opinion?
Thank you in advance for any advice.
Option (1) is subject to a race condition: a concurrent session could create the record between your existence check and your insert. The window is longer than you might expect, because the record might already have been inserted by another transaction that has not yet committed.
Option (2) is better, but will result in a lot of noise in the PostgreSQL error logs.
The best way is to use PostgreSQL 9.5's INSERT ... ON CONFLICT ... support to do a reliable, race-condition-free insert-if-not-exists operation.
On older versions you can use a loop in plpgsql.
Both those options require use of native queries, of course.
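For example, with Spring's JdbcTemplate it could look roughly like this (a sketch, not the question's DAO; the jdbcTemplate field is assumed to be wired up, and the table and columns are the ones from the question):

// ON CONFLICT (id) DO NOTHING turns a duplicate insert into a no-op;
// update() returns how many rows were actually inserted (0 or 1)
int inserted = jdbcTemplate.update(
        "INSERT INTO test (id, name) VALUES (?, ?) ON CONFLICT (id) DO NOTHING",
        id, name);
boolean alreadyExisted = (inserted == 0);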
It depends on the source of your ID. If you generate it yourself (e.g. as a java.util.UUID), you can assert uniqueness and rely on catching an exception: http://docs.oracle.com/javase/1.5.0/docs/api/java/util/UUID.html
Another way would be to let Postgres generate the ID using the SERIAL data type (see the sketch below):
http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
If you have to take the ID over from an untrusted source, do the prior check.
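A minimal JPA sketch of that generated-ID approach, mirroring the question's table (GenerationType.IDENTITY maps onto a PostgreSQL SERIAL/sequence column):

import javax.persistence.*;

@Entity
public class Test {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // let Postgres assign the id
    private Integer id;

    private String name;
}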
Here I have a dilemma.
Let's imagine that we have an SQL table like this:
(screenshot of the ticket table, with at least the place and user_user_id columns)
It could be a problem if two or more users overwrite each other's data in the table.
How should I check that the place hasn't been taken before updating the data?
I have two options:
In the SQL query:
UPDATE ticket SET user_user_id = ? WHERE place = ? AND user_user_id is NULL
or in the service layer:
try {
    Ticket ticket = ticketDAO.read(place);
    if (ticket.getUser() == null) {
        ticket.setUser(user);
        ticketDAO.update(ticket);
    } else {
        throw new DAOException("Place has already been taken");
    }
} // catch block not shown in the question
Which way is safer and more commonly used in practice?
Please share your advice.
A possible approach here is to go with the SQL query. After executing it, check the number of rows modified inside the ticketDAO.update method; if 0 rows were modified, throw a DAOException.
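A minimal sketch of that approach, assuming a JdbcTemplate-backed DAO (the method name is illustrative; the query and DAOException come from the question):

public void reservePlace(Long userId, String place) {
    // The WHERE clause only matches while the place is still free,
    // so the UPDATE itself resolves the race atomically
    int rows = jdbcTemplate.update(
            "UPDATE ticket SET user_user_id = ? WHERE place = ? AND user_user_id IS NULL",
            userId, place);
    if (rows == 0) {
        // 0 rows modified: someone else claimed the place first
        throw new DAOException("Place has already been taken");
    }
}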
There is a user with a Role attribute, TENANT by default. Using a query we set him to LANDLORD, and in the HOUSE table he adds an apartment with various attributes: description, price, city_id and others. But suddenly this user wants to drop his LANDLORD status, delete his apartments from our database and become just a TENANT again. How can I delete the information that he has apartments in this case? In other words: if he has apartments they need to be deleted, and if not, just change the user's status to TENANT.
At first there was an idea to assign a null value, but it seemed strange to me to simply null the field out, because then the table would start to get cluttered. There is also a status option (ACTIVE or BANNED), but I don't like that option either, because his apartment is still not needed.
The code looks like this:
@PutMapping("/{id}")
@PreAuthorize("hasAuthority('landlord:write')")
public void tenantPostAdd(@PathVariable(value = "id") Long id) {
    User user = userRepository.findById(id).orElseThrow();
    Role role = Role.TENANT;
    user.setRole(role);
    House house = houseRepository.findById(id).orElseThrow();
    house ... // what's here
}
To build this level of infrastructure there are a lot of questions I would have to ask before recommending something, and I'd want to see the current database schema as well. You're also requesting the ability to delete, which can become problematic. You may want to consider leaving the data in place if you believe the customer may change roles again; that kind of decision comes down to the terms of agreement.
Have you considered building something like this?
Absolute (Numeric) Mode
0 = No Permission, etc.
https://www.guru99.com/file-permissions.html
This could be a prepared-statement issue, with the appropriate joins not occurring in the statement. I believe you should take another look at your database schema.
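If deletion really is the requirement, a minimal sketch of the controller might look like this (deleteAllByOwnerId is a hypothetical Spring Data derived query, not something from the posted code):

@PutMapping("/{id}")
@PreAuthorize("hasAuthority('landlord:write')")
@Transactional // derived delete queries need an active transaction
public void tenantPostAdd(@PathVariable(value = "id") Long id) {
    User user = userRepository.findById(id).orElseThrow();
    // Delete the landlord's houses first (a no-op if he has none),
    // then demote him back to TENANT
    houseRepository.deleteAllByOwnerId(id); // hypothetical derived query
    user.setRole(Role.TENANT);
    userRepository.save(user);
}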
I'm trying to learn Java EE web development. I started by following The NetBeans E-commerce Tutorial - Integrating Transactional Business Logic, and I'm having a problem when I click "submit order" to enter the data in the database: the orderID never gets updated, and therefore it fails.
I did use em.flush() as the tutorial explains:
To understand why order.getId method returns null, consider what the
code is actually trying to accomplish. The getId method attempts to
get the ID of an order which is currently in the process of being
created. Since the ID is an auto-incrementing primary key, the
database automatically generates the value only when the record is
added. One way to overcome this is to manually synchronize the
persistence context with the database. This can be accomplished using
the EntityManager's flush method.
but I still get the orderId as 0.
private void addOrderedItems(CustomerOrder order, ShoppingCart cart) {
em.flush();
List<ShoppingCartItem> items = cart.getItems();
// iterate through shopping cart and create OrderedProducts
for (ShoppingCartItem scItem : items) {
int productId = scItem.getProduct().getId();
// set up primary key object
OrderedProductPK orderedProductPK = new OrderedProductPK();
orderedProductPK.setCustomerOrderId(order.getId());
orderedProductPK.setProductId(productId);
// create ordered item using PK object
OrderedProduct orderedItem = new OrderedProduct(orderedProductPK);
// set quantity
orderedItem.setQuantity(scItem.getQuantity());
em.persist(orderedItem);
}
}
The problem was fixed: I had not given a value for the date column, which must not be null. That's why the row couldn't be written to the database.
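In other words, the fix amounted to populating the non-nullable date column before persisting, roughly like this (the setter name is illustrative):

order.setDateCreated(new java.util.Date()); // the column is NOT NULL, so it must be set before persist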
I'm trying to do an upsert using the MongoDB driver; here is the code:
BulkWriteOperation builder = coll.initializeUnorderedBulkOperation();
DBObject toDBObject;
for (T entity : entities) {
toDBObject = morphia.toDBObject(entity);
builder.find(toDBObject).upsert().replaceOne(toDBObject);
}
BulkWriteResult result = builder.execute();
where "entity" is morphia object. When I'm running the code first time (there are no entities in the DB, so all of the queries should be insert) it works fine and I see the entities in the database with generated _id field. Second run I'm changing some fields and trying to save changed entities and then I receive the folowing error from mongo:
E11000 duplicate key error collection: statistics.counters index: _id_ dup key: { : ObjectId('56adfbf43d801b870e63be29') }
What did I forget to configure in my example?
I don't know the structure of dbObject, but that bulk upsert needs a valid query in order to work.
Let's say, for example, that you have a unique (_id) property called "id". A valid query would look like:
builder.find({id: toDBObject.id}).upsert().replaceOne(toDBObject);
This way, the engine can (a) find an object to update and then (b) update it (or insert, if the object wasn't found). Of course, you need the Java syntax for find, but the same rule applies: make sure your .find will find something, then do the update.
I believe (just a guess) that the way it's written now will find "all" docs and try to update the first one ... but the behavior you are describing suggests it's finding "no doc" and attempting an insert.
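In Java driver terms, a minimal sketch of such a query (assuming Morphia has already populated _id on toDBObject) could be:

// Match on the unique _id only, so replaceOne updates the existing
// document instead of colliding with it on insert
DBObject query = new BasicDBObject("_id", toDBObject.get("_id"));
builder.find(query).upsert().replaceOne(toDBObject);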
I use MyBatis 3.1.
I have two use cases where I need to bypass the MyBatis local cache and hit the DB directly.
Since the MyBatis configuration file only has global settings, it is not applicable to my case: I need this as an exception, not as a default. The attributes of the MyBatis <select> XML statement do not seem to include this option either.
Use case 1: 'select sysdate from dual'.
MyBatis caching causes this one to always return the same value within a MyBatis session. This causes an issue in my integration tests when I try to replicate a situation with an outdated entry.
My workaround was just to use a plain JDBC call.
Use case 2: a 'select' from one thread does not always see the value written by another thread.
Thread 1:
SomeObject stored = dao.insertSomeObject(obj);
runInAnotherThread(stored.getId());
//complete and commit
Thread 2:
//'id' received as an argument provided to 'runInAnotherThread(...)'
SomeObject stored = dao.findById(id);
int count = 0;
while(stored == null && count < 300) {
++count;
Thread.sleep(1000);
stored = dao.findById(id);
}
if (stored == null) {
throw new MyException("There is no SomeObject with id="+id);
}
I occasionally receive MyException errors on a server but can't reproduce them on my local machine. In all cases the object is actually in the DB, so I guess the error depends on whether the stored object was in the MyBatis local cache the first time around; waiting for 5 minutes does not help, since the loop never checks the actual DB.
So my question is: how can I solve the above use cases within MyBatis, without falling back to plain JDBC?
Being able to somehow signal MyBatis not to use a cached value in a specific call (the best) or in all calls to a specific query would be the preferred option, but I will consider any workaround as well.
I don't know a way to bypass the local cache entirely, but there are two options for achieving what you need.
The first option is to set flushCache="true" on the select. This clears the cache after the statement executes, so the next query will hit the database:
<select id="getCurrentDate" resultType="date" flushCache="true">
SELECT SYSDATE FROM DUAL
</select>
Another option is to use a STATEMENT-level local cache. By default the local cache is kept for the duration of a SESSION (which typically translates to a transaction). This is specified by the localCacheScope option and is set per session factory, so it affects all queries that use this MyBatis session factory.
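For reference, that global setting lives in the MyBatis configuration file and affects every statement in the factory:

<settings>
    <!-- drop the local cache after each statement instead of keeping it for the session -->
    <setting name="localCacheScope" value="STATEMENT"/>
</settings>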
Let me summarize.
The solution from the previous answer, the 'flushCache="true"' option on the query, works and solves both use cases. It flushes the cache after every such 'select', so the next 'select' statement will hit the DB. Although the flush happens only after the 'select' statement executes, that's OK, since the cache is empty anyway before the first 'select'.
Another solution is to start a new session. I use Spring, so it's enough to mark a method with @Transactional(propagation = Propagation.REQUIRES_NEW). Since the MyBatis session is tied to the Spring transaction, this causes a new MyBatis session with a fresh cache to be created every time the method is called.
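A minimal sketch of that arrangement (the service method and DAO names are illustrative):

@Transactional(propagation = Propagation.REQUIRES_NEW)
public SomeObject findByIdFresh(long id) {
    // A new Spring transaction means a new MyBatis session,
    // and a new session starts with an empty local cache
    return dao.findById(id);
}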
For some reason, the MyBatis option 'useCache="false"' on the query does not work.
The following @Options annotation can be used:
@Options(useCache = false, flushCache = FlushCachePolicy.TRUE)
Apart from the answers by Roman and Alexander, there is one more solution for this (Cache here is org.apache.ibatis.cache.Cache, and Lock is java.util.concurrent.locks.Lock):
Configuration configuration = MyBatisUtil.getSqlSessionFactory().getConfiguration();
Collection<Cache> caches = configuration.getCaches();
// If you have multiple caches and want only a particular one cleared:
// Cache cache = configuration.getCache("PPL"); // namespace of the particular XML mapper
for (Cache cache : caches) {
    // Take the write lock so the cache is not read while being cleared
    Lock w = cache.getReadWriteLock().writeLock();
    w.lock();
    try {
        cache.clear();
    } finally {
        w.unlock();
    }
}