I'm new to Spring JPA, so bear with me if I did something wrong.
I have a repository DAO for executing a native query:
@Repository
public class TestingDAO {

    @Autowired
    private EntityManager entityManager;

    public void createNewFoos(Long fooId, Long barId) {
        if (fooId == null || barId == null) return;
        String insertQuery = "INSERT INTO FOO_BAR(foo_id, bar_id) VALUES (" + fooId + "," + barId + ")";
        Query query = entityManager.createNativeQuery(insertQuery);
        query.executeUpdate();
    }
}
FOO_BAR is a join table with two foreign keys.
I noticed that the execution time of the createNewFoos method keeps increasing when I call it many times in one transaction (by the 10,000th call it even takes several seconds). When I use a JPA repository to save the entity object instead (the result in the DB is the same), there is no such performance issue.
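For reference, the repository-based variant I'm comparing against looks roughly like this (the FooBar entity and repository are placeholder names):

import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical Spring Data equivalent of the native insert above;
// FooBar would be an @Entity mapped to the FOO_BAR table.
public interface FooBarRepository extends JpaRepository<FooBar, Long> {
}

// caller side, e.g. inside a @Transactional service method:
fooBarRepository.save(new FooBar(fooId, barId));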
Could you please explain why this happens? Did I do something wrong?
Thanks in advance for your help!
I'm using JPA with Hibernate and spring-boot-starter-data-jpa.
I want to merge/update a set of Items: if it's a new item, I want to persist it; if it's an already existing item, I want to update it.
@PersistenceContext(unitName = "itemEntityManager")
private EntityManager em;

@Transactional
public void saveItems(Set<Item> items) {
    items.forEach(em::merge);
}
When I try it like this, each Item results in its own SQL statement, and it performs poorly.
So I'm looking for a way to save all Items in one go (if that's possible), to save round trips and time.
I found this:
EntityTransaction transaction = em.getTransaction();
transaction.begin();
items.forEach(em::merge);
transaction.commit();
but I can't use this transaction because I'm already using @Transactional.
Is there a way with native SQL?
You could use @SQLInsert for this purpose to get batch inserts. See Hibernate Transactions and Concurrency Using attachDirty (saveOrUpdate)
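For illustration, a minimal sketch of that idea, assuming an Item entity mapped to config_item (note that @SQLInsert binds the parameters in Hibernate's internal column order, with the identifier last, so verify the order against the SQL that Hibernate logs at DEBUG level):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.SQLInsert;

// Hypothetical mapping: every insert of an Item now runs this upsert instead of
// the default INSERT. Combine with hibernate.jdbc.batch_size to batch the statements.
@Entity
@SQLInsert(sql = "INSERT INTO config_item (eaid, name, type, road, offs, created, deleted, id)"
        + " VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
        + " ON CONFLICT (id) DO UPDATE SET eaid = excluded.eaid, name = excluded.name")
public class Item {
    @Id
    private Long id;
    private Long eaid;
    private String name;
    // remaining fields omitted
}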
I created a native SQL statement:
I joined all items into a list of values in SQL format:
String sqlValues = String.join(",", items.stream().map(this::toSqlEntry).collect(Collectors.toSet()));
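(toSqlEntry is my own helper; a simplified version, with assumed field types, could look like the following. The values are inlined into the SQL, so this is only safe for trusted or properly escaped data:)

// Hypothetical helper: renders one Item as a "(v1, v2, ...)" VALUES tuple.
// The getters and field types are assumptions; string values should be escaped.
private String toSqlEntry(Item item) {
    return String.format("(%d, %d, '%s', '%s', '%s', %d, '%s', %b)",
            item.getId(), item.getEaid(), item.getName(), item.getType(),
            item.getRoad(), item.getOffs(), item.getCreated(), item.isDeleted());
}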
Then I called a native query:
em.createNativeQuery("INSERT INTO config_item"
        + " (id, eaid, name, type, road, offs, created, deleted)"
        + " VALUES " + sqlValues
        + " ON CONFLICT (id) DO UPDATE SET"
        + " eaid = excluded.eaid,"
        + " name = excluded.name,"
        + " type = excluded.type,"
        + " road = excluded.road,"
        + " offs = excluded.offs,"
        + " created = excluded.created,"
        + " deleted = excluded.deleted"
).executeUpdate();
That's a lot faster and it works.
I have a database with many thousands of tables that have been (and continue to be) created with a naming strategy of one table per calendar day:
data_2010_01_01
data_2010_01_02
...
data_2020_01_01
All tables contain sensor data from the same system in the same shape, so a single entity (let's call it SensorRecord) would map to all of them.
I'd imagined something like this would work:
@Query(nativeQuery = true, value = "SELECT * FROM \"?1\"")
Collection<SensorRecord> findSensorDataForDate(String tableName);
But it does not, and reading around the topic suggests I'm on the wrong path. Most posts on dynamic table names state explicitly that you need one entity per table, but generating thousands of duplicate entities also seems wrong.
How can I use JPA (JPQL?) to work with this data where the table name follows a naming convention and can be changed as part of the query?
Parameters are only allowed in the WHERE clause (as values), not as identifiers such as table names.
You can create a custom repository method that returns a collection of SensorRecord DTOs; there is no need to map so many entities. You get a List<Object[]> as the query result and build the DTO objects manually.
@Autowired
EntityManager entityManager;

public List<SensorRecord> findSensorDataForDate(LocalDate date) {
    // Build the table name from the date, e.g. data_2020_01_01.
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy_MM_dd");
    String tableName = "data_" + date.format(formatter);
    Query query = entityManager.createNativeQuery(
            "select t.first_column, t.second_column from " + tableName + " t");
    @SuppressWarnings("unchecked")
    List<Object[]> queryResults = query.getResultList();
    // Map each row (one Object[] per selected column set) onto the DTO by hand.
    List<SensorRecord> sensorRecords = new ArrayList<>();
    for (Object[] row : queryResults) {
        SensorRecord record = new SensorRecord();
        record.setFirstParameter((Integer) row[0]);
        record.setSecondParameter((String) row[1]);
        sensorRecords.add(record);
    }
    return sensorRecords;
}
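For example:

List<SensorRecord> records = findSensorDataForDate(LocalDate.of(2020, 1, 1));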
Could it be just a syntax error?
This has worked for me:
@Query(value = "select * from job where job.locked = 1 and job.user = ?1", nativeQuery = true)
public List<JobDAO> getJobsForUser(@Param("user") String user);
I would like to refresh managed entities. I used Session.refresh, but it causes a StackOverflowError because I mapped bidirectional relationships.
Also, I don't want one-to-many relationships to be reloaded or kept in their current state; I want them to be uninitialized, as though the parent entities came straight from a query result.
I tried this:
@Override
public void refresh(IdentifiableByIdImpl entity) {
    Query query;
    Object refreshedEntity;
    try {
        query = session.createQuery(
                "FROM " + entity.getClass().getSimpleName() +
                " WHERE id = " + entity.getId()   // note the leading space before WHERE
        );
        refreshedEntity = query.uniqueResult();
        copyProperties(refreshedEntity, entity);
    } catch (StackOverflowError e) {
        System.err.println("S.O");
    }
}
But it keeps triggering a StackOverflowError.
A simple way would be to return the refreshedEntity instead; nonetheless, I find that approach inflexible.
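(The "return it" variant mentioned above would look roughly like this, with the same assumed types; binding the id as a parameter also avoids concatenating it into the query:)

// Hypothetical variant that returns the freshly loaded entity instead of
// copying its properties onto the managed one.
public Object refresh(IdentifiableByIdImpl entity) {
    return session.createQuery(
            "FROM " + entity.getClass().getSimpleName() + " WHERE id = :id")
            .setParameter("id", entity.getId())
            .uniqueResult();
}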
I would like to ask for your help with the following problem. I have this method:
String sql = "INSERT INTO table ...."
Query query = em.createNativeQuery(sql);
query.executeUpdate();
sql = "SELECT max(id) FROM ......";
query = em.createNativeQuery(sql);
Integer importId = ((BigDecimal) query.getSingleResult()).intValue();
for (EndurDealItem item : deal.getItems()) {
String sql2 = "INSERT INTO another_table";
em.createNativeQuery(sql2).executeUpdate();
}
After executing it, the data are not committed (it takes 10 or 15 minutes until they are). Is there a way to commit the data explicitly or to trigger the commit? And what causes the transaction to remain uncommitted for so long?
The reason we use native queries is that we export the data over a shared interface and don't use it afterwards.
I should mention that the transaction is container-managed (by Geronimo). The EntityManager is injected like this:
@PersistenceContext(unitName = "XXXX", type = PersistenceContextType.TRANSACTION)
private EntityManager em;
Explicitly commit the transaction:
EntityManager em = /* get an entity manager */;
em.getTransaction().begin();
// make some changes
em.getTransaction().commit();
This should work. The execution time of everything between begin() and commit() of course also depends on the loop you're running, the number of rows you're inserting, the location of the database (network speed matters), and so on...
I just created a custom Hibernate ID generator, and since I'm not a Hibernate expert I would like some feedback on my code. The generated ID is select max(id) from table, plus 1.
public class MaxIdGenerator implements IdentifierGenerator, Configurable {

    private Type identifierType;
    private String tableName;
    private String columnName;

    @Override
    public void configure(Type type, Properties params, Dialect dialect) {
        identifierType = type;
        tableName = params.getProperty("target_table");
        columnName = params.getProperty("target_column");
    }

    @Override
    public synchronized Serializable generate(SessionImplementor session, Object object) {
        return generateHolder(session).makeValue();
    }

    protected IntegralDataTypeHolder generateHolder(SessionImplementor session) {
        Connection connection = session.connection();
        try {
            IntegralDataTypeHolder value = IdentifierGeneratorHelper
                    .getIntegralDataTypeHolder(identifierType.getReturnedClass());
            String sql = "select max(" + columnName + ") from " + tableName;
            PreparedStatement qps = connection.prepareStatement(sql);
            try {
                ResultSet rs = qps.executeQuery();
                if (rs.next())
                    value.initialize(rs, 1);   // seed from the current max id
                else
                    value.initialize(1);       // empty table: start at 1
                rs.close();
            } finally {
                qps.close();
            }
            return value.copy().increment();
        } catch (SQLException e) {
            throw new IdentifierGenerationException("Can't select max id value", e);
        }
    }
}
I'd like to know:
How can I make this safe across concurrent transactions? (i.e. if two transactions insert data at the same time, how can I be sure I won't end up with the same ID twice?) I guess the only solution would be to prevent two concurrent Hibernate transactions from running at the same time when they use the same generator; is that possible?
Whether the code could be improved: it feels wrong to have hard-coded "select", "target_column", and so on.
To guarantee point 1, I can fall back, if necessary, on synchronizing inserts in my Java client code.
Please don't comment on the reasons I'm using this kind of generator: legacy code still inserts data into the same database and uses this mechanism, and it can't be modified. And yes, I know, it sucks.
I think the easiest way to achieve transaction-safe behaviour is to put the code that retrieves the maximum id, together with the insert statement, into a single transactional block.
Something like:
Transaction transaction = session.beginTransaction();
// some code...
transaction.commit();
session.close();
I also recommend using HQL (Hibernate Query Language) to create the query instead of native SQL where possible. Moreover, from your description I understand that you expect a single result from the query, the maximum id, so you could use the uniqueResult() method on your query instead of executeQuery().
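For example, something along these lines (the entity name is assumed):

// Hypothetical HQL equivalent of the max-id lookup.
Long maxId = (Long) session
        .createQuery("select max(e.id) from MyEntity e")
        .uniqueResult();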
You can use an AtomicInteger for generating the ID; it can be used by many threads concurrently.
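A minimal sketch (seeding it from the current max id at startup is left out and would still be required):

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical in-JVM sequence: thread-safe, but only unique within a single
// JVM, so it does not by itself solve the concurrent-transactions problem.
private static final AtomicInteger SEQUENCE = new AtomicInteger(0);

public int nextId() {
    return SEQUENCE.incrementAndGet();
}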
If you are free to use another ID provider, I would suggest the UUID class for generating random IDs:
UUID.randomUUID();
You can refer to the link, which contains some other ways to generate IDs.