Corrupted objects when using Spring Batch with Embedded datasource - java

I am getting strange results using Spring Batch in conjunction with embedded storage such as H2.
Here is my case:
My project is a regular Spring Boot microservice. The domain layer has an entity class, Model, with a transient List field. This field is backed by a byte-array field, which is not transient and is supposed to be stored. I use @PrePersist and
@PostLoad to serialize and deserialize the list. After the model is loaded I clear the array so it does not take up memory.
The service has two controllers: ModelController and AnalyzeController. Each of them executes a Spring Batch job. The first job creates a Model and stores it in the database. The second job pulls the model from the database and uses it for calculations.
@Entity
@Getter
@Setter
@Table(name = "Model")
@Slf4j
public class Model {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO, generator = "system-uuid")
    @GenericGenerator(name = "system-uuid", strategy = "uuid2")
    @Column(name = "id", length = 36)
    private String id;

    @Transient
    private List<Cluster<ItemPoints>> clusters;

    @Column(columnDefinition = "BYTEA")
    private byte[] serializedClusters;

    //region Serialization / Deserialization
    @PrePersist
    public void serializeClusters() {
        try (ByteArrayOutputStream out = new ByteArrayOutputStream();
             ObjectOutputStream oos = new ObjectOutputStream(out)) {
            oos.writeObject(clusters);
            serializedClusters = out.toByteArray();
        } catch (IOException e) {
            log.error(SERIALIZATION_ERROR_MESSAGE, e);
        }
    }

    @PostLoad
    @SuppressWarnings("unchecked")
    public void deserializeClusters() {
        try (ByteArrayInputStream in = new ByteArrayInputStream(serializedClusters);
             ObjectInputStream is = new ObjectInputStream(in)) {
            clusters = (List<Cluster<ItemPoints>>) is.readObject();
        } catch (IOException | ClassNotFoundException e) {
            log.error(SERIALIZATION_ERROR_MESSAGE, e);
        }
        // clear serialized data as it is no longer needed
        serializedClusters = new byte[]{};
    }
    //endregion
}
The repository layer is provided by Spring Data (CrudRepository).
If I execute AnalyzeController after ModelController, the first call works as expected, but further calls to the repository return Models with an empty byte array (so the List cannot be deserialized).
This happens only with an embedded database and a single DataSource (shared by batch processing and the models).
If I use two different DataSources, it works correctly. If I use an external database (PostgreSQL, for example), it works correctly even with a single DataSource. It also works fine with a single DataSource and no Spring Batch.
Configuration for the single-DataSource case:
@Configuration
@Profile("default")
public class DefaultDataSourceConfiguration {

    @Bean
    public DataSource batchDataSource() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
    }
}
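For reference, the two-DataSource variant that works looks roughly like the sketch below. The bean names and the DefaultBatchConfigurer wiring are approximations, not my exact configuration: one embedded database is used by JPA for the models, a second one holds the Spring Batch metadata tables.
import javax.sql.DataSource;

import org.springframework.batch.core.configuration.annotation.BatchConfigurer;
import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
@Profile("default")
public class TwoDataSourceConfiguration {

    // DataSource used by JPA for the Model entities.
    @Bean
    @Primary
    public DataSource appDataSource() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
    }

    // Separate DataSource for the Spring Batch metadata tables.
    @Bean
    public BatchConfigurer batchConfigurer() {
        DataSource batchDataSource =
                new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
        return new DefaultBatchConfigurer(batchDataSource);
    }
}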


Obtain InputStream from Blob by calling getBinaryStream

I have a Spring Boot CRUD application that has the Document class, which is a DB entity with a Blob field. I also have the DocumentWrapper class, which is used for the transmission of the document and has a MultipartFile field.
So, Document is the entity that I store in the DB and has a dedicated JPA repository, while DocumentWrapper is the "helper" entity that the controller accepts and returns.
So in my service layer I transform between the DocumentWrapper and Document types as follows, in order to use the DocumentRepository, using this class as a bean to perform the conversion:
@Configuration
public class DocumentWrapperConverter implements AttributeConverter<DocumentWrapper, Document>, Serializable {

    @Autowired
    private LobService lobService;

    @Override
    public DocumentWrapper convertToEntityAttribute(Document document) {
        DocumentWrapper documentWrapper = new DocumentWrapper();
        documentWrapper.setName(document.getName());
        documentWrapper.setDescription(document.getDescription());
        documentWrapper.setId(document.getId());
        MultipartFile multipartFile = null;
        try {
            InputStream is = this.lobService.readBlob(document.getBlob());
            multipartFile = new MockMultipartFile(document.getName(), document.getOriginalFilename(), document.getContentType().toString(), is);
        } catch (IOException e) {
            e.printStackTrace();
        }
        documentWrapper.setFile(multipartFile);
        return documentWrapper;
    }

    @Override
    public Document convertToDatabaseColumn(DocumentWrapper documentWrapper) {
        Document document = new Document();
        document.setName(documentWrapper.getName());
        document.setDescription(documentWrapper.getDescription());
        document.setId(documentWrapper.getId());
        document.setContentType(documentWrapper.getContentType());
        document.setFileSize(documentWrapper.getSize());
        document.setOriginalFilename(documentWrapper.getOriginalFilename());
        Blob blob = null;
        try {
            blob = this.lobService.createBlob(documentWrapper.getFile().getInputStream(), documentWrapper.getFile().getSize());
        } catch (IOException e) {
            e.printStackTrace();
        }
        document.setBlob(blob);
        return document;
    }
}
I encapsulated the logic to transform a Blob to an InputStream in the LobService and its implementor LobServiceImpl:
@Service
public class LobServiceImpl implements LobService {

    @Autowired
    private SessionFactory sessionFactory;

    @Autowired
    private DataSource dataSource;

    @Transactional
    @Override
    public Blob createBlob(InputStream content, long size) {
        return this.sessionFactory.getCurrentSession().getLobHelper().createBlob(content, size);
    }

    @Transactional
    @Override
    public InputStream readBlob(Blob blob) {
        Connection connection = null;
        try {
            connection = this.dataSource.getConnection();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        InputStream is = null;
        try {
            is = blob.getBinaryStream();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        try {
            assert connection != null;
            connection.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return is;
    }
}
This is my generic JpaRepository interface:
@NoRepositoryBean
public interface GenericRepository<T extends JPAEntityImpl, S extends Serializable> extends JpaRepository<T, S> {}
It is extended for my Document class:
@Repository
@Transactional
public interface DocumentRepository<T extends JPAEntityImpl, S extends Serializable> extends GenericRepository<Document, UUID> {
}
Also, the Document entity class:
@Entity
@Table(uniqueConstraints = {
        @UniqueConstraint(columnNames = "id")
})
@JsonTypeInfo(
        use = JsonTypeInfo.Id.NAME,
        include = JsonTypeInfo.As.EXISTING_PROPERTY,
        property = "typeName",
        defaultImpl = Document.class)
public class Document extends JPAEntityImpl {

    @Id
    @Convert(converter = UUIDConverter.class)
    @GeneratedValue(generator = "UUID")
    @GenericGenerator(
            name = "UUID",
            strategy = "org.hibernate.id.UUIDGenerator"
    )
    @Column(nullable = false, unique = true)
    protected UUID id;

    @Column(length = 1000, nullable = false)
    protected String name;

    @Column(length = 1000, nullable = false)
    protected String description;

    @Column(length = 1000, nullable = true)
    protected String originalFilename;

    @Column(length = 1000, nullable = false)
    protected MediaType contentType;

    @Column
    protected long fileSize;

    @Column
    @Lob
    protected Blob blob;

    public Document() {}

    // GETTERS, SETTERS, TOSTRING....
}
And the DocumentWrapper entity class:
@JsonTypeInfo(
        use = JsonTypeInfo.Id.NAME,
        include = JsonTypeInfo.As.EXISTING_PROPERTY,
        property = "typeName",
        defaultImpl = DocumentWrapper.class)
public class DocumentWrapper extends JPAEntityImpl {

    private UUID id;
    private String name;
    private String description;
    private MultipartFile file;

    public DocumentWrapper() {}

    // GETTERS, SETTERS, TOSTRING....
}
I am having problems with the method public InputStream readBlob(Blob blob). The relevant part of the error log is the following:
org.postgresql.util.PSQLException: This connection has been closed.
at org.postgresql.jdbc.PgConnection.checkClosed(PgConnection.java:883)
at org.postgresql.jdbc.PgConnection.getLargeObjectAPI(PgConnection.java:594)
at org.postgresql.jdbc.AbstractBlobClob.getLo(AbstractBlobClob.java:270)
at org.postgresql.jdbc.AbstractBlobClob.getBinaryStream(AbstractBlobClob.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.hibernate.engine.jdbc.SerializableBlobProxy.invoke(SerializableBlobProxy.java:60)
at com.sun.proxy.$Proxy157.getBinaryStream(Unknown Source)
at org.ICIQ.eChempad.services.LobServiceImpl.readBlob(LobServiceImpl.java:42)
...
This line is the one which produces the Exception:
is = blob.getBinaryStream();
So, I would like to know how I can retrieve an InputStream from the Blob that I receive in the method public InputStream readBlob(Blob blob), so that I can perform the transformation between Document and DocumentWrapper. How can I restore the connection that has been closed in order to retrieve the InputStream of the Blob? Are there any workarounds?
Looking for answers on the Internet I saw many people performing this transformation using a ResultSet, but I am using a JpaRepository interface to manipulate the Document records in the database, not raw SQL queries, so I do not know how to proceed that way.
I already tried using @Transactional annotations on this method but it did not work. I also tried setting the spring.jpa.properties.hibernate.current_session_context_class property (application.properties) to the values org.hibernate.context.ThreadLocalSessionContext, thread and org.hibernate.context.internal.ThreadLocalSessionContext, but none of them worked.
I also tried to create a new connection, but it did not work.
I tried opening and closing a Session before and after calling blob.getBinaryStream(); but it did not work.
It seems to me that we need the same connection that was used to retrieve the Blob in the first place, but at some point Spring closes it, and I do not know how to restore it or prevent it from being closed.
I also know that working with byte[] arrays in my Document class could simplify things, but I do need to work with InputStreams, since I work with large files and it is not convenient to load whole files into memory.
If you need any other information about my code please do not hesitate to ask. I will add any required extra information.
Any tip, help or workaround is very welcome. Thank you.
Whatever method calls your DocumentWrapperConverter#convertToEntityAttribute probably also loads the data from the database. It should be fine to annotate this caller method with @Transactional to keep the connection alive through the transaction. Note, though, that you have to keep the connection open until you close the input stream, so you should probably push the bytes of the input stream to some sink within the transaction. Not sure if your MockMultipartFile consumes the input stream and stores the data into a byte[] or a temporary file, but that's what you will probably have to do for this to work.
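As a rough illustration of that suggestion, here is a minimal sketch that drains the Blob while a single transaction still holds the connection open. The service class, method name and the in-memory byte array are assumptions; for large files a temporary file would be the better sink:
@Service
public class BlobReadingService {

    // Runs inside one transaction, so the JDBC connection stays open while the stream is drained.
    @Transactional
    public byte[] readBlobFully(Blob blob) {
        try (InputStream is = blob.getBinaryStream();
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = is.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            return out.toByteArray();
        } catch (SQLException | IOException e) {
            throw new IllegalStateException("Could not read blob contents", e);
        }
    }
}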
Okay, so finally I have found the error by myself. Thanks for the help @Christian Beikov, you were actually pointing in the right direction. Other Stack Overflow solutions did not work for me, but at least they gave me some idea of what is going on. Check this and this.
The problem is due to Hibernate and the implementation of BLOB types. The buffer inside the BLOB can only be read once. If we try to read it again (by using blob.getBinaryStream()), a database connection and session are needed to reload it.
So the first thing you need to do if you have this error is check that your DataSource object and your database configuration are working properly, and that your methods are correctly delimited by transactional boundaries using the @Transactional annotation.
If that does not work for you and you still have problems regarding the database connection / session, you can explicitly configure a SessionFactory in order to manage the session yourself. To do so:
Define a bean that provides a SessionFactory:
import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.springframework.beans.factory.annotation.Autowired;

import java.util.Properties;
import java.util.logging.Logger;

@org.springframework.context.annotation.Configuration
public class HibernateUtil {

    private static SessionFactory sessionFactory;
    private static HibernateUtil hibernateUtil;

    // Singleton
    @Autowired
    public HibernateUtil(Properties hibernateProperties) {
        try {
            Configuration configuration = new Configuration().setProperties(hibernateProperties);
            Properties properties = configuration.getProperties();
            StandardServiceRegistryBuilder builder = new StandardServiceRegistryBuilder().applySettings(properties);
            sessionFactory = configuration.buildSessionFactory(builder.build());
            HibernateUtil.hibernateUtil = this; // singleton
        } catch (Throwable ex) {
            // Make sure you log the exception, as it might be swallowed
            System.err.println("Initial SessionFactory creation failed: " + ex);
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static HibernateUtil getInstance() {
        if (HibernateUtil.hibernateUtil == null) {
            Logger.getGlobal().info("HIBERNATE UTIL INSTANCE HAS NOT BEEN INITIALIZED BY SPRING");
        }
        return HibernateUtil.hibernateUtil;
    }

    public static SessionFactory getSessionFactory() {
        return HibernateUtil.sessionFactory;
    }

    public void shutdown() {
        // Close caches and connection pools
        getSessionFactory().close();
    }
}
This class uses a singleton pattern, so everyone that requests the HibernateUtil bean will get the same instance. The first initialization of this class is triggered by Spring Boot, so afterwards we can use it from a global context by calling HibernateUtil.getSessionFactory() and receive the same factory.
You can use that SessionFactory to delimit the boundaries of your transactions manually, like this. I created a LobService class to wrap the logic to read and create BLOBs:
@Service
public class LobServiceImpl implements LobService {

    @Transactional
    @Override
    public Blob createBlob(InputStream content, long size) {
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        session.beginTransaction();
        Blob blob = session.getLobHelper().createBlob(content, size);
        session.getTransaction().commit();
        session.close();
        return blob;
    }

    @Transactional
    @Override
    public InputStream readBlob(Blob blob) {
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        session.beginTransaction();
        InputStream is = null;
        try {
            is = blob.getBinaryStream();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        session.getTransaction().commit();
        session.close();
        return is;
    }
}
If that does not work for you: in my project, even with a valid database connection and session, another error showed up in the same place as before: SQLException: could not reset reader. What you need to do is reload the entity instance that contains the BLOB the second time you try to access it. So you issue an extra (seemingly useless) query, but re-querying for the same instance reloads the underlying BinaryStream and avoids that exception. For example, in one of the methods of my service I receive a DocumentWrapper, transform it to a Document, forward the call to the DocumentService, then transform its return value back to a DocumentWrapper and return it to the controller layer:
#Override
public <S1 extends DocumentWrapper> S1 save(S1 entity) {
Document document = this.documentWrapperConverter.convertToDatabaseColumn(entity);
// Internally the BLOB is consumed, so if we use the same instance to return an exception will be thrown
// java.sql.SQLException: could not reset reader
Document documentDatabase = this.documentService.save(document);
// This seems silly, but is necessary to update the Blob and its InputStream
Optional<Document> documentDatabaseOptional = this.documentService.findById((UUID) documentDatabase.getId());
return documentDatabaseOptional.map(value -> (S1) this.documentWrapperConverter.convertToEntityAttribute(value)).orElse(null);
}
Please notice that this line
// This seems silly, but is necessary to update the Blob and its InputStream
Optional<Document> documentDatabaseOptional = this.documentService.findById((UUID) documentDatabase.getId());
fetches the same entity from the DB, but now the BinaryStream is reloaded and can be read again.
So, TL;DR: reload the managed instance from the database if you have already exhausted its BinaryStream buffer.

Tapestry JPA Jackson Deserialization

I'm working on a project I didn't initially create, in which the data was stored in-memory. I'm currently moving this data into the database. I'm doing this using Hibernate and Tapestry JPA. At some point in the project Jackson deserialization is used (actually in connection with a UI, but I doubt that's relevant), via the @JsonDeserialize annotation, with a deserializer class (let's call it DefinitionDeserializer). DefinitionDeserializer then creates an instance of a POJO representation (let's call it Definition) of a database table (D_DEFINITION). However, D_DEFINITION has a connection to another table (D_TYPE) (and hence another POJO (PeriodType)). To resolve this connection, I'm using a Tapestry service (ConnectingService), which I usually inject via the @Inject annotation. However, I can't use this method of injection when the object into which I'm trying to inject the service (i.e. DefinitionDeserializer) was created via the new keyword, which seems to be the case for the @JsonDeserialize annotation. I also can't use ConnectingService without injecting it via the @Inject keyword, because then I couldn't inject any other services into ConnectingService either, which I'm currently doing.
I'm hoping this description didn't confuse you too much. I can't share the actual code with you, and I don't think a minimal example would be much better, as it's quite a complicated case and wouldn't be such a small piece of code. If you need one, however, I can try to provide one.
Basically, what I need is a way to tell @JsonDeserialize to use a Tapestry service instead of creating an instance of DefinitionDeserializer itself.
Edit: The classes as examples:
public class DefinitionDeserializer extends StdDeserializer<Definition> {

    private static final long serialVersionUID = 1L;

    //TODO: The injection doesn't work yet
    @Inject
    private ConnectingService connectingService;

    public DefinitionDeserializer() {
        this(null);
    }

    public DefinitionDeserializer(Class<?> vc) {
        super(vc);
    }

    @Override
    public Definition deserialize(JsonParser p, DeserializationContext ctxt) throws IOException, JsonProcessingException {
        Definition pd = new Definition();
        JsonNode node = p.getCodec().readTree(p);
        if (node.has("type")) {
            // resolve the referenced PeriodType via the injected Tapestry service
            pd.setType(connectingService.findByValue("PeriodType." + node.get("type").asText()));
        }
        return pd;
    }
}
@Entity
@Table(name = Definition.TABLE_NAME)
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE, region = JpaEntityModelConstants.CACHE_REGION_ADMINISTRATION)
public class Definition {

    public static final String TABLE_NAME = "D_DEFINITION";
    private static final long serialVersionUID = 389511526676381027L;

    @Id
    @SequenceGenerator(name = JpaEntityModelConstants.SEQUENCE_NAME, sequenceName = JpaEntityModelConstants.SEQUENCE_NAME, initialValue = 1, allocationSize = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = JpaEntityModelConstants.SEQUENCE_NAME)
    @Column(name = "ID")
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumns({
            @JoinColumn(name = "FK_TYPE", referencedColumnName = "ID")}
    )
    private PeriodType type;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public PeriodType getType() {
        return type;
    }

    public void setType(PeriodType dpmType) {
        this.type = dpmType;
    }

    //More columns
}
PeriodType looks pretty much the same as Definition.
// BaseService contains all the standard methods for Tapestry JPA services
public interface ConnectingService extends BaseService<PeriodType> {
}

public class ConnectingServiceImpl extends BaseServiceImpl<PeriodType> implements ConnectingService {
    public ConnectingServiceImpl() {
        super(PeriodType.class);
    }
}
Currently I'm using it like this (which doesn't work):
@JsonDeserialize(using = DefinitionDeserializer.class)
@JsonSerialize(using = DefinitionSerializer.class)
private Definition definition;
@JsonDeserialize doesn't create instances of deserialisers, it's just a hint for ObjectMapper to know which class to use when deserialising.
By default ObjectMapper uses Class.newInstance() for instantiating deserialisers, but you can specify a custom HandlerInstantiator (ObjectMapper#setHandlerInstantiator()) in which you can use Tapestry's ObjectLocator to get instances of deserialisers, i.e. using ObjectLocator#autobuild(), or use ObjectLocator#getService() if your deserialisers are Tapestry services themselves.
Update:
public class MyHandlerInstantiator extends HandlerInstantiator {

    private final ObjectLocator objectLocator;

    public MyHandlerInstantiator(ObjectLocator objectLocator) {
        this.objectLocator = objectLocator;
    }

    @Override
    public JsonDeserializer<?> deserializerInstance(
            DeserializationConfig config, Annotated annotated, Class<?> deserClass) {
        // If null is returned here, the instance will be created via reflection;
        // you can always use objectLocator, or use it conditionally
        // just for some classes
        return (JsonDeserializer<?>) objectLocator.autobuild(deserClass);
    }

    // Other method overrides can return null
}
Then later, when you're configuring the ObjectMapper, use an @Inject-ed instance of ObjectLocator to create an instance of your custom HandlerInstantiator, e.g.:
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.setHandlerInstantiator(new MyHandlerInstantiator(objectLocator));
return objectMapper;
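For context, that configuration code would typically live in a Tapestry module's service builder method, roughly like the sketch below; the module class and method name are assumptions:
public class AppModule {

    // Hypothetical builder method; Tapestry IoC injects the ObjectLocator parameter.
    public static ObjectMapper buildObjectMapper(ObjectLocator objectLocator) {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setHandlerInstantiator(new MyHandlerInstantiator(objectLocator));
        return objectMapper;
    }
}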

spring data elasticsearch dynamic multi tenant index mismatch?

I am experimenting with Spring Data Elasticsearch by implementing a cluster which will host multi-tenant indexes, one index per tenant.
I am able to create each needed index and set its settings dynamically, like this:
public class SpringDataES {

    @Autowired
    private ElasticsearchTemplate es;

    @Autowired
    private TenantIndexNamingService tenantIndexNamingService;

    private void createIndex(String indexName) {
        Settings indexSettings = Settings.builder()
                .put("number_of_shards", 1)
                .build();
        CreateIndexRequest indexRequest = new CreateIndexRequest(indexName, indexSettings);
        es.getClient().admin().indices().create(indexRequest).actionGet();
        es.refresh(indexName);
    }

    private void preapareIndex(String indexName) {
        if (!es.indexExists(indexName)) {
            createIndex(indexName);
        }
        updateMappings(indexName);
    }
The model is created like this
@Document(indexName = "#{tenantIndexNamingService.getIndexName()}", type = "movies")
public class Movie {

    @Id
    @JsonIgnore
    private String id;

    private String movieTitle;

    @CompletionField(maxInputLength = 100)
    private Completion movieTitleSuggest;

    private String director;

    private Date releaseDate;
where the index name is passed dynamically via the SpEL expression
#{tenantIndexNamingService.getIndexName()}
that is served by
@Service
public class TenantIndexNamingService {

    private static final String INDEX_PREFIX = "test_index_";

    private String indexName = INDEX_PREFIX;

    public TenantIndexNamingService() {
    }

    public String getIndexName() {
        return indexName;
    }

    public void setIndexName(int tenantId) {
        this.indexName = INDEX_PREFIX + tenantId;
    }

    public void setIndexName(String indexName) {
        this.indexName = indexName;
    }
}
So, whenever I have to execute a CRUD action, I first point to the right index and then execute the desired action:
tenantIndexNamingService.setIndexName(tenantId);
movieService.save(new Movie("Dead Poets Society", getCompletion("Dead Poets Society"), "Peter Weir", new Date()));
My assumption is that the following dynamic index assignment will not work correctly in a multi-threaded web application:
@Document(indexName = "#{tenantIndexNamingService.getIndexName()}")
This is because TenantIndexNamingService is a singleton.
So my question is: how do I achieve the right behavior in a thread-safe manner?
I would probably go with an approach similar to the following one proposed for Cassandra:
https://dzone.com/articles/multi-tenant-cassandra-cluster-with-spring-data-ca
You can have a look at the related GitHub repository here:
https://github.com/gitaroktato/spring-boot-cassandra-multitenant-example
Now, since Elasticsearch has differences in how you define a Document, you should mainly focus on defining a request-scoped bean that will encapsulate your tenant id and bind it to your incoming requests.
Here is my solution: I create a request-scoped bean to hold the index name per HttpRequest, sketched below.
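A minimal sketch of such a request-scoped holder (the names here are assumptions; the tenant id would be resolved from the incoming request, e.g. from a header or the authenticated user), which the @Document SpEL expression would then reference by bean name instead of the singleton service:
@Component
@RequestScope
public class TenantIndexNameHolder {

    private static final String INDEX_PREFIX = "test_index_";

    private String indexName;

    // Called once per HTTP request, e.g. from a filter or interceptor that resolves the tenant.
    public void setTenantId(int tenantId) {
        this.indexName = INDEX_PREFIX + tenantId;
    }

    public String getIndexName() {
        return indexName;
    }
}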

External object linked through foreign key in hibernate and MySql

I'm using Spring Data with Hibernate and MySQL, and I have a question.
My entity is
@Entity
@Table(name = "car", catalog = "DEMO")
public class Car implements java.io.Serializable {

    private static final long serialVersionUID = 1L;

    private Integer idCar;

    @JsonBackReference
    private CarType carType;

    @JsonBackReference
    private Fleet fleet;

    private String id;
    private int initialKm;
    private String carChassis;
    private String note;

    @JsonManagedReference
    private Set<Acquisition> acquisitions = new HashSet<Acquisition>(0);
with getter and setter methods.
Sometimes I need a linked object such as carType, which is another entity.
If I use this web service:
@Override
@RequestMapping(value = { "/cars/{idFleet}"}, method = RequestMethod.GET)
public String getCars(@PathVariable int idFleet, Model model){
    try {
        model.addAttribute("carsList", fleetAndCarService.findCarsByIdFleet(idFleet));
        // Modal parameter
        model.addAttribute("carTypeList", fleetAndCarService.getCarsType());
        model.addAttribute("fleetApplication", fleetAndCarService.getFleetById(idFleet));
        model.addAttribute("carForm", new CarForm());
        model.addAttribute("error", false);
    } catch (Exception e) {
        LOG.error("Threw exception in FleetAndCarControllerImpl::getCars : " + ErrorExceptionBuilder.buildErrorResponse(e));
        model.addAttribute("error", true);
    }
    return "cars";
}
From my HTML page I can retrieve carType.idCarType, but if I use this:
@Override
@RequestMapping(value = { "/cars/{idFleet}"}, method = RequestMethod.GET)
public @ResponseBody TableUI getCars(@PathVariable int idFleet) {
    TableUI ajaxCall = new TableUI();
    try {
        ajaxCall.setData(fleetAndCarService.findCarsByIdFleet(idFleet));
        return ajaxCall;
    } catch (QueryException e) {
        ErrorResponse errorResponse = ErrorResponseBuilder.buildErrorResponse(e);
        LOG.error("Threw exception in FleetAndCarControllerImpl::addCar :" + errorResponse.getStacktrace());
        return ajaxCall;
    }
}
where TableUI has only a data field in which I put the result for use in DataTables, I don't get carType and fleet. Why? Do I have to use Hibernate.initialize, and if so, how, since it is a list? Thanks, regards.
Also this update doesn't work:
@Override
@Transactional
public List<Car> findByFleetIdFleet(int idFleet) {
    List<Car> carList = carRepository.findByFleetIdFleet(idFleet);
    for (Car car : carList) {
        Hibernate.initialize(car.getCarType());
    }
    return carList;
}
You could call Hibernate.initialize on each element:
Collection<Car> cars = fleetAndCarService.findCarsByIdFleet(idFleet);
for (Car car : cars) {
    Hibernate.initialize(car.getCarType());
    Hibernate.initialize(car.getFleet());
}
ajaxCall.setData(cars);
return ajaxCall;
This would be a good starting point and would allow you to move forward. At high scale, however, this could become a performance bottleneck, as it performs a query with each call to initialize, so you will end up with 2*n queries to the database.
For maximum performance you have several other options:
1. Iterate through the cars and build up a list of IDs, then query for the car types by ID in a single query with that list. Do the same for the fleets. Then call Hibernate.initialize. The first two queries will populate the persistence context, and the calls to initialize will not need to go to the database.
2. Create a special query for this call which fetch joins the properties you will need (see the sketch after this list).
3. Set up batch fetching, which will fetch the cars and fleets in batches instead of one car/fleet per query.
4. Use a second-level cache so the initialization causes Hibernate to pull from the cache instead of the database.
Describing these options in detail is beyond the scope of a single question, but a good place to start would be Hibernate's documentation on performance.
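For example, option 2 could look roughly like the sketch below on a Spring Data repository; the repository interface, method name and JPQL query are assumptions based on the entities shown above:
public interface CarRepository extends JpaRepository<Car, Integer> {

    // Fetch joins load carType and fleet in the same query, so no later lazy initialization is needed.
    @Query("select c from Car c join fetch c.carType join fetch c.fleet where c.fleet.idFleet = :idFleet")
    List<Car> findByFleetIdFleetWithAssociations(@Param("idFleet") int idFleet);
}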

hibernate findbyid causes update?

I am facing very strange behavior in my web app with Spring 3 and hibernate-core 3.5.1-Final.
For simplicity I provide my code:
if (ripid != null) { // Parameter
    Appuntamento apDaRip = appuntamentoService.findById(ripid);
    if (apDaRip.getIdpadre() != null) {
        apDaRip.setNota("RIPROGRAMMATO n." + ripid.toString() + "\n" + apDaRip.getNota());
        apDaRip.setIdpadre(apDaRip.getIdpadre());
    } else {
        apDaRip.setNota("RIPROGRAMMATO n." + ripid.toString() + "\n" + apDaRip.getNota());
        apDaRip.setIdpadre(ripid);
    }
    try {
        apDaRip.setOrarioinizio(null);
        apDaRip.setDurata(null);
        //apDaRip.setIdappuntamento(null);
    } catch (Exception e) {
        e.printStackTrace();
    }
    map.put("appuntamento", apDaRip);
}
di = datiintranetService.findById(DatiintranetService.PASS_X_INTERVENTI);
map.put("passinterventi", di.getBoolean());
The idea is to use some data from an existing "Appuntamento" object to produce a new one.
So I change some values, and before sending the object to my view (JSP) I fetch other data by calling findById. This causes an update to the Appuntamento object... Of course I don't want this behavior. Can someone explain this?
Edit-1
Here's the Dao
@Transactional
public class DatiintranetService {

    private DatiintranetDAO datiintranetDAO;

    public void setDatiintranetDAO(DatiintranetDAO datiintranetDAO) {
        this.datiintranetDAO = datiintranetDAO;
    }

    public DatiintranetDAO getDatiintranetDAO() {
        return datiintranetDAO;
    }

    public Datiintranet findById(Integer id) {
        return datiintranetDAO.findById(id);
    }
}
And for the Appuntamento class I provide a snapshot:
@Entity
@Table(name = "appuntamento", schema = "public")
public class Appuntamento implements java.io.Serializable {

    @Id
    @SequenceGenerator(name = "appuntamentoID", sequenceName = "appuntamento_idappuntamento_seq", allocationSize = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "appuntamentoID")
    @Column(name = "idappuntamento", unique = true, nullable = false)
    public Integer getIdappuntamento() {
        return this.idappuntamento;
    }
}
Edit-2
If I move these two rows above the if statement, no update occurs:
di = datiintranetService.findById(DatiintranetService.PASS_X_INTERVENTI);
map.put("passinterventi", di.getBoolean());
If you query for an entity and change the entity, the default behavior is to persist those changes via an update to the database. This is usually what you want to happen, but obviously not in all cases.
If you want to avoid the update, you need to detach the entity by calling session.evict(apDaRip) where session is a reference to the hibernate session (see Session.evict()). You probably want to evict the entity right after you get it (immediately following the call to findById).
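Applied to the code in the question, a minimal sketch of that advice, assuming the current Hibernate Session is reachable, e.g. through an injected SessionFactory:
Appuntamento apDaRip = appuntamentoService.findById(ripid);
// Detach the entity so the changes below are not flushed back to the database.
sessionFactory.getCurrentSession().evict(apDaRip);
apDaRip.setNota("RIPROGRAMMATO n." + ripid.toString() + "\n" + apDaRip.getNota());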
