I am new to reactive programming and I want to introduce spring-webclient into our existing project.
For simplicity, I created pseudocode based on my original code. Here I am sending an SMS to each provider in turn; if a condition is met, it will not proceed to the next provider.
public List<Sms> send(List<Sms> smsRequests) {
    return Flux.fromIterable(smsRequests)
            .flatMap(smsRequest -> {
                return Flux.fromIterable(smsRequest.getProviders())
                        .concatMap(smsProvider -> this.invokeApiUsingWebClient(smsProvider, smsRequest))
                        .filter(providerResponse -> providerResponse.getErrorStatus() == null)
                        .map(ProviderResponse::toSms)
                        .next(); // emits cancel(), resulting in the connection being closed
            })
            .toStream()
            .collect(Collectors.toList());
}
My problem is that this flow calls cancel() every time next() is invoked, resulting in poor performance because the connection on the WebClient threads is not reused.
See the logs below.
TRACE 15112 [reactor-http-nio-1] [ExchangeFunctions] - [5e2f49c] Response 200 OK...
INFO 15112 [reactor-http-nio-1] [1] - onNext(...)
INFO 15112 [reactor-http-nio-1] [1] - cancel()
DEBUG 15112 [reactor-http-nio-1] [ExchangeFunctions] - [5e2f49c] Cancel signal (to close connection)
INFO 15112 [reactor-http-nio-1] [1] - onComplete()
TRACE 15112 [reactor-http-nio-2] [ExchangeFunctions] - [5e2f49c] Response 200 OK...
INFO 15112 [reactor-http-nio-2] [2] - onNext(...)
INFO 15112 [reactor-http-nio-2] [2] - cancel()
DEBUG 15112 [reactor-http-nio-2] [ExchangeFunctions] - [5e2f49c] Cancel signal (to close connection)
INFO 15112 [reactor-http-nio-2] [2] - onComplete()
Is there a possible way to refactor the above code without emitting the cancel() signal?
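For context, a minimal sketch of how the provider call itself might look, assuming a retrieve()/bodyToMono style (the real invokeApiUsingWebClient is not shown in the question, and the webClient field and SmsProvider.getUrl() accessor are assumptions). Fully draining the response body this way is meant to let the connection go back to the pool even when a downstream cancel() follows:

private Mono<ProviderResponse> invokeApiUsingWebClient(SmsProvider smsProvider, Sms smsRequest) {
    // Hypothetical sketch: retrieve() + bodyToMono consume and decode the whole
    // response body, so the pooled connection can be released once the body is read.
    return this.webClient.post()
            .uri(smsProvider.getUrl())
            .bodyValue(smsRequest)
            .retrieve()
            .bodyToMono(ProviderResponse.class);
}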
I have a Camel route in my MyRouteBuilder.java file which consumes messages from ActiveMQ:
from("activemq:queue:myQueue" )
.process(consumeDroppedMessage)
.log(">>> I am here");
I wrote a test case for it like this:
@Override
public RouteBuilder createRouteBuilder() throws Exception {
    return new MyRouteBuilder();
}

@Test
void testMyTest() throws Exception {
    String queueInputMessage = "My Msg";
    template.sendBody("activemq:queue:myQueue", queueInputMessage);
    assertMockEndpointsSatisfied();
}
When I run the unit test case I get this strange error:
17:53:26.175 [main] DEBUG org.apache.camel.impl.engine.InternalRouteStartupManager - Route: route1 >>> Route[activemq://queue:null -> null]
17:53:26.175 [main] DEBUG org.apache.camel.impl.engine.InternalRouteStartupManager - Starting consumer (order: 1000) on route: route1
17:53:26.175 [main] DEBUG org.apache.camel.support.DefaultConsumer - Build consumer: Consumer[activemq://queue:null]
17:53:26.185 [main] DEBUG org.apache.camel.support.DefaultConsumer - Init consumer: Consumer[activemq://queue:null]
17:53:26.185 [main] DEBUG org.apache.camel.support.DefaultConsumer - Starting consumer: Consumer[activemq://queue:null]
17:53:26.213 [main] DEBUG org.apache.activemq.thread.TaskRunnerFactory - Initialized TaskRunnerFactory[ActiveMQ Task] using ExecutorService: java.util.concurrent.ThreadPoolExecutor@3fffff43[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
17:53:26.215 [main] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Reconnect was triggered but transport is not started yet. Wait for start to connect the transport.
17:53:26.334 [main] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Started unconnected
17:53:26.334 [main] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Waking up reconnect task
17:53:26.335 [ActiveMQ Task-1] DEBUG org.apache.activemq.transport.failover.FailoverTransport - urlList connectionList:[tcp://localhost:61616], from: [tcp://localhost:61616]
17:53:26.339 [main] DEBUG org.apache.camel.component.jms.DefaultJmsMessageListenerContainer - Established shared JMS Connection
17:53:26.340 [main] DEBUG org.apache.camel.component.jms.DefaultJmsMessageListenerContainer - Resumed paused task: org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker@58c34bb3
17:53:26.372 [ActiveMQ Task-1] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Attempting 0th connect to: tcp://localhost:61616
17:53:28.393 [ActiveMQ Task-1] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Connect fail to: tcp://localhost:61616, reason: {}
I am especially stumped to see these messages:
Route: route1 >>> Route[activemq://queue:null -> null]
and
urlList connectionList:[tcp://localhost:61616], from: [tcp://localhost:61616]
Why is the queue coming up as null even though I have a proper queue name? Also, why is the broker URL tcp://localhost:61616?
I want this unit test case to run properly in all environments: local, DIT, SIT, PROD, etc. For that reason I cannot afford the broker URL to be tcp://localhost:61616.
Any ideas as to what I am doing wrong here and what I should be doing?
EDIT 1:
One of the issues I am seeing is that even before the test method is called, the MyRouteBuilder() inside createRouteBuilder() is invoked, leading to the issues I see in the log.
The "activemq:queue:.." is telling Camel to use the auto-configure magic behind the scenes (which uses default url) and your use case is beyond that.
You need to configure a connection factory (ActiveMQConnectionFactory) and configure a camel-jms component to use that connection factory.
The connection factory allows you to specify url, userName, password, default connection settings and setup SSL.
A best practice is to externalize the url, userName, password and queue to a properties file so you can change those across the environments-- local, DIT, SIT and prod, etc.
NOTE: Use org.apache.camel/camel-jms component, and not the org.apache.activemq/activemq-camel component. activemq-camel is deprecated and being removed in ActiveMQ 5.17.x.
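A minimal sketch of that wiring with camel-jms (variables such as brokerUrl, user, password and camelContext are placeholders; in practice they come from the externalized properties file):

ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);
connectionFactory.setUserName(user);
connectionFactory.setPassword(password);

JmsComponent jms = new JmsComponent();           // org.apache.camel.component.jms.JmsComponent
jms.setConnectionFactory(connectionFactory);
camelContext.addComponent("activemq", jms);      // routes keep using the "activemq:" scheme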
Instead of setting up an explicit ActiveMQ broker, I started using an embedded VM broker.
@Override
protected RoutesBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        @Override
        public void configure() {
            ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
            ActiveMQComponent activeMQComponent = new ActiveMQComponent();
            activeMQComponent.setConnectionFactory(connectionFactory);
            context.addComponent("activemq", activeMQComponent);
            from("activemq:queue:myQueue").to("mock:collector");
        }
    };
}
Also, I had mistaken Camel JUnit for traditional JUnit. We don't need to explicitly call the actual route builder class. Instead, after setting up my ActiveMQ component as above, I was able to write my test methods, mock my endpoints for the queue, send messages, and assert on them (see the sketch below). Camel is truly versatile, though it requires a lot of study.
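For completeness, a hypothetical test method against that route (it lives in the same CamelTestSupport subclass; the message body is arbitrary):

@Test
void testMyQueueRoute() throws Exception {
    // mock:collector is the endpoint the route above sends to
    MockEndpoint collector = getMockEndpoint("mock:collector");
    collector.expectedBodiesReceived("My Msg");

    // publish to the embedded vm:// broker through the configured "activemq" component
    template.sendBody("activemq:queue:myQueue", "My Msg");

    collector.assertIsSatisfied();
}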
I would like to note that the scenario I will describe happens rarely, and in most cases everything works as expected.
I have 1 topic and 1 subscription on Pub/Sub side.
My Java application listens to the subscription, does some processing, and sends an acknowledgement back. Because Google Pub/Sub guarantees at-least-once delivery, we do message deduplication on our side based on the objectGeneration and objectId headers.
Sometimes we see that a message that was already acknowledged is received by our application again and again, which is unexpected behaviour.
Log example:
//first
2019-12-17 20:51:57.375 INFO 1 --- [sub-subscriber3] bucketNotificationFlow : Received new message from pub-sub: GenericMessage [payload={....}, headers={.....objectGeneration=1576615916875106, eventTime=2019-12-17T20:51:56.874940Z, objectId=Small_files_bunch/100_12_1.csv, ....
....
2019-12-17 20:51:57.698 INFO 1 --- [sub-subscriber3] .i.g.PubSubMessageAcknowledgementHandler : Acknowledged message - 1576615916875106
...
//duplicate 1
2019-12-17 20:51:59.663 INFO 1 --- [sub-subscriber4] bucketNotificationFlow : Received new message from pub-sub: GenericMessage [payload={...}, headers={ objectGeneration=1576615916875106, eventTime=2019-12-17T20:51:56.874940Z, objectId=Small_files_bunch/100_12_1.csv", ....
...
2019-12-17 20:51:59.704 INFO 1 --- [sub-subscriber4] c.b.m.i.DiscardedMessagesHandler : Duplicate message received GenericMessage [ headers={idempotent.keys=[objectGeneration.1576615916875106, objectId.Small_files_bunch/100_12_1.csv], ...
....
//duplicate 2
2019-12-17 22:52:02.239 INFO 1 --- [sub-subscriber1] bucketNotificationFlow : Received new message from pub-sub: GenericMessage [payload={...}, headers={objectGeneration=1576615916875106, eventTime=2019-12-17T20:51:56.874940Z, objectId=Small_files_bunch/100_12_1.csv, ...
...
2019-12-17 22:52:02.339 INFO 1 --- [sub-subscriber1] c.b.m.i.DiscardedMessagesHandler : Duplicate message received GenericMessage [ headers={idempotent.keys=[objectGeneration.1576615916875106, objectId.Small_files_bunch/100_12_1.csv], ...
// and so on, every 2 hours
Code for acknowledgement:
var generation = message.getHeaders().get("objectGeneration");
var pubSubMessage = message.getHeaders().get(GcpPubSubHeaders.ORIGINAL_MESSAGE, BasicAcknowledgeablePubsubMessage.class);
pubSubMessage.ack().addCallback(
    v -> {
        removeFromIdempotentStore(targetMessage, false);
        log.info("Acknowledged message - {}", generation); // from the logs we see that this line was invoked
    },
    e -> {
        removeFromIdempotentStore(targetMessage, false);
        log.error("Failed to acknowledge message - {}", generation, e);
    }
);
The GCP subscription page contains the following diagram:
Stackdriver acknowledge diagram:
Any ideas what is going on, and how to troubleshoot and fix it?
Try checking Stackdriver to see if you are missing acknowledgement deadlines.
The two-hour wait time between duplicates is very interesting. Have you tried expanding your message deadline? (Info on this is at the above link.)
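For illustration only (this uses the plain google-cloud-pubsub client rather than the Spring Integration setup from the question, and the project/subscription names are placeholders), the maximum ack-deadline extension can be raised so that slow processing is less likely to trigger redelivery:

MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
    // process the message, then acknowledge it
    consumer.ack();
};

Subscriber subscriber = Subscriber.newBuilder(
        ProjectSubscriptionName.of("my-project", "my-subscription"), receiver)
        .setMaxAckExtensionPeriod(org.threeten.bp.Duration.ofHours(1)) // keep extending the deadline for up to 1 hour
        .build();
subscriber.startAsync().awaitRunning();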
See more info here: How to cleanup the JdbcMetadataStore?
According to our conclusion, it would be better not to remove entries from the metadata store table immediately after processing. Some external job should do the trick from time to time, and only for entries that are old enough to remove and for which we are definitely sure Pub/Sub won't redeliver the same message anymore.
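A rough sketch of such a job (IdempotencyStore and removeEntriesOlderThan are hypothetical names for whatever wraps the metadata store table; only the scheduling and the age cutoff matter here):

import java.time.Duration;
import org.springframework.scheduling.annotation.Scheduled;

public class IdempotencyCleanupJob {

    private final IdempotencyStore idempotencyStore; // hypothetical wrapper around the metadata store table

    public IdempotencyCleanupJob(IdempotencyStore idempotencyStore) {
        this.idempotencyStore = idempotencyStore;
    }

    @Scheduled(cron = "0 0 * * * *") // e.g. hourly
    public void purgeOldEntries() {
        // keep entries long enough that Pub/Sub will definitely not redeliver the message anymore
        long cutoffMillis = System.currentTimeMillis() - Duration.ofDays(1).toMillis();
        idempotencyStore.removeEntriesOlderThan(cutoffMillis);
    }
}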
I'm trying to measure the latency of my service at a lower level. Poking around, I saw that it is possible to add a stream tracer factory to the gRPC server builder via addStreamTracerFactory.
I made a simple implementation like this and printed the logs:
val server = io.grpc.netty.NettyServerBuilder.forPort(ApplicationConfig.Service.bindPort).addStreamTracerFactory(ServerStreamTracerFactory)....
class Telemetry(fullMethodName: String, headers: Metadata) extends ServerStreamTracer with LazyLogging {
  override def serverCallStarted(callInfo: ServerStreamTracer.ServerCallInfo[_, _]): Unit = {
    logger.info(s"Telemetry '$fullMethodName' '$headers' callinfo:$callInfo")
    super.serverCallStarted(callInfo)
  }

  override def inboundMessage(seqNo: Int): Unit = {
    logger.info(s"inboundMessage $seqNo")
    super.inboundMessage(seqNo)
  }

  override def inboundMessageRead(seqNo: Int, optionalWireSize: Long, optionalUncompressedSize: Long): Unit = {
    logger.info(s"inboundMessageRead $seqNo $optionalWireSize $optionalUncompressedSize")
    super.inboundMessageRead(seqNo, optionalWireSize, optionalUncompressedSize)
  }

  override def outboundMessage(seqNo: Int): Unit = {
    logger.info(s"outboundMessage $seqNo")
    super.outboundMessage(seqNo)
  }

  override def outboundMessageSent(seqNo: Int, optionalWireSize: Long, optionalUncompressedSize: Long): Unit = {
    logger.info(s"outboundMessageSent $seqNo $optionalWireSize $optionalUncompressedSize")
    super.outboundMessageSent(seqNo, optionalWireSize, optionalUncompressedSize)
  }

  override def streamClosed(status: Status): Unit = {
    logger.info(s"streamClosed $status")
    super.streamClosed(status)
  }
}

object ServerStreamTracerFactory extends Factory with LazyLogging {
  logger.info("called")

  override def newServerStreamTracer(fullMethodName: String, headers: Metadata): ServerStreamTracer = {
    logger.info(s"called with $fullMethodName $headers")
    new Telemetry(fullMethodName, headers)
  }
}
I'm running a simple gRPC client in a loop and examining the output of the server stream tracer.
I see that the "lifecycle" of logs repeats itself. Here is one iteration (it spews out exactly the same thing again and again):
22:15:06 INFO [grpc-default-worker-ELG-3-2] [newServerStreamTracer:38] [ServerStreamTracerFactory$] called with com.dy.affinity.service.AffinityService/getAffinities Metadata(content-type=application/grpc,user-agent=grpc-python/1.15.0 grpc-c/6.0.0 (osx; chttp2; glider),grpc-accept-encoding=identity,deflate,gzip,accept-encoding=identity,gzip)
22:15:06 INFO [grpc-default-executor-0] [serverCallStarted:8] [Telemetry] Telemetry 'com.dy.affinity.service.AffinityService/getAffinities' 'Metadata(content-type=application/grpc,user-agent=grpc-python/1.15.0 grpc-c/6.0.0 (osx; chttp2; glider),grpc-accept-encoding=identity,deflate,gzip,accept-encoding=identity,gzip)' callinfo:io.grpc.internal.ServerCallInfoImpl#5badffd8
22:15:06 INFO [grpc-default-worker-ELG-3-2] [inboundMessage:13] [Telemetry] inboundMessage 0
22:15:06 INFO [grpc-default-worker-ELG-3-2] [inboundMessageRead:17] [Telemetry] inboundMessageRead 0 19 -1
22:15:06 INFO [pool-1-thread-5] [outboundMessage:21] [Telemetry] outboundMessage 0
22:15:06 INFO [pool-1-thread-5] [outboundMessageSent:25] [Telemetry] outboundMessageSent 0 0 0
22:15:06 INFO [grpc-default-worker-ELG-3-2] [streamClosed:29] [Telemetry] streamClosed Status{code=OK, description=null, cause=null}
A few things aren't quite clear to me from just looking at these logs:
Why is a new stream being created for each request? I thought the gRPC client is supposed to re-use the connection, so "stream closed" shouldn't be called, right?
If the stream is being re-used, how come the inboundMessage number (and outboundMessage number) is always 0? (Also, when I started multiple clients in parallel it is always 0.) In what case should the message number not be 0?
If the stream isn't being re-used, how should I configure the clients differently to re-use the connection?
In gRPC, one HTTP/2 stream is created for each RPC (and if retries or hedging is enabled there can be more than one stream per RPC). HTTP/2 streams are multiplexed on one connection, and it's pretty cheap to open and close streams. So it's the connection being re-used, not the stream.
The seqNo you get from the tracer methods is the seqNo of messages on this stream, which starts from 0. It looks like you are doing unary RPCs, which make one request, get one response, and then close. What you see is totally normal.
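To tie this back to the latency goal: since each unary RPC gets its own stream, per-RPC latency can be measured inside the tracer. A minimal sketch (written in Java here, not the poster's Scala code) that times from stream creation to streamClosed:

import io.grpc.Metadata;
import io.grpc.ServerStreamTracer;
import io.grpc.Status;

class LatencyTracerFactory extends ServerStreamTracer.Factory {
    @Override
    public ServerStreamTracer newServerStreamTracer(String fullMethodName, Metadata headers) {
        final long startNanos = System.nanoTime();
        return new ServerStreamTracer() {
            @Override
            public void streamClosed(Status status) {
                // one unary RPC == one stream, so this is effectively per-request latency
                long elapsedMicros = (System.nanoTime() - startNanos) / 1_000;
                System.out.printf("%s took %d us, status=%s%n",
                        fullMethodName, elapsedMicros, status.getCode());
            }
        };
    }
}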
I'm currently developing a custom handler to deliver Oracle change logs.
When an error occurs, normally I can throw a RuntimeException or return Status.ABEND. OGG then logs the error and stops the process.
The following code works well when operationAdded() fails (i.e., the Extract process reports ABEND, and when the Extract restarts after the error, the operations of the whole failed transaction are resent to the handler).
@Override
public Status operationAdded(DsEvent e, DsTransaction tx,
        DsOperation dsOperation) {
    Status status = super.operationAdded(e, tx, dsOperation);
    ...
    //throw new RuntimeException("op add runtime error");
    return status;
}
However, when an error occurs in the transactionCommit() function, OGG doesn't work as expected. Neither throwing a RuntimeException nor returning Status.ABEND stops the Extract; OGG just keeps working as if nothing happened. (Code below.)
@Override
public Status transactionCommit(DsEvent e, DsTransaction tx) {
    super.transactionCommit(e, tx);
    Status status = sendEvents();
    handlerProperties.totalTxns++;
    //throw new RuntimeException("tx ci runtime error");
    return Status.ABEND;
}
I tried to kill and restart the Extract process. The failed transactions were not resent to the handler. It seems that all the failed transaction data were lost!
The following are the logs when returning Status.ABEND in transactionCommit():
...
DEBUG [main] (AbstractHandler.java:509) - Event: handler=ggdatahub, transactionCommit ( Commit transaction ) DsTransaction [ops=1, buffered=1, state=BEGIN, start=2015-08-21 20:04:25.842275, end=2015-08-21 20:04:25.842275]
WARN [main] (DsEventManager.java:231) - Error sending event to handler: status=ABEND, event=Commit transaction, handler=ggdatahub
Exception in thread "main" com.goldengate.atg.util.GGException: Unable to commit transaction, STATUS=ABEND
at com.goldengate.atg.datasource.UserExitDataSource.commitActiveTransaction(UserExitDataSource.java:1392)
at com.goldengate.atg.datasource.UserExitDataSource.commitTx(UserExitDataSource.java:1326)
Error occured in javawriter.c[752]:
***********************************************************************
Exception received committing transaction: com.goldengate.atg.util.GGException: Unable to commit transaction, STATUS=ABEND
DEBUG [main] (UserExitDataSource.java:504) - (JNI) C-user-exit checkpoint event
DEBUG [main] (UserExitDataSource.java:1364) - UserExitDataSource.CommitActiveTransaction: Same transaction committed more than once (possibly due to commit-on-checkpoint).
DEBUG [main] (UserExitDataSource.java:516) - UserExitDataSource.userExitCheckpoint: incrementing the flush counter
DEBUG [main] (PendingOpGroup.java:315) - now ready to checkpoint? false (was ready? false): {pendingOps=1, groupSize=0, timer=0:00:00.000 [total = 0 ms ]}
DEBUG [main] (UserExitDataSource.java:504) - (JNI) C-user-exit checkpoint event
DEBUG [main] (UserExitDataSource.java:1364) - UserExitDataSource.CommitActiveTransaction: Same transaction committed more than once (possibly due to commit-on-checkpoint).
DEBUG [main] (UserExitDataSource.java:516) - UserExitDataSource.userExitCheckpoint: incrementing the flush counter
DEBUG [pool-1-thread-1] (AbstractDataSource.java:737) - [2] getStatusReport: Mon Aug 24 10:51:14 CST 2015
DEBUG [Thread-1] (UserExitDataSource.java:1601) - UserExitDataSource closing, #1 of class=UserExitDataSource
DEBUG [main] (PendingOpGroup.java:315) - now ready to checkpoint? false (was ready? false): {pendingOps=3, groupSize=0, timer=0:00:00.000 [total = 0 ms ]}
DEBUG [Thread-1] (UserExitDataSource.java:1608) - Shutting down data source; attempting a final checkpoint.
INFO [pool-1-thread-1] (AbstractDataSource.java:730) - Memory at Status : Max: 455.00 MB, Total: 60.50 MB, Free: 27.54 MB, Used: 32.96 MB
DEBUG [pool-1-thread-1] (UserExitDataSource.java:1637) - time spent checkpointing: 0:00:00.000 [total = 0 ms ]
DEBUG [Thread-1] (UserExitDataSource.java:1668) - doCheckpoint() called
INFO [pool-1-thread-1] (AbstractDataSource.java:980) - Status report: Mon Aug 24 10:51:14 CST 2015
*************************************************
Status Report for UserExit
*************************************************
Total elapsed time: 2 days 14:47:06.139 [total = 226026 sec = 3767 min = 62 hr ] => Total time since first event
Event processing time: 0:00:12.692 [total = 12 sec ] => Time spent sending msgs (max: 4795 ms)
Metadata process time: 0:00:02.159 [total = 2 sec ] => Time spent receiving metadata (1 tables, 3 columns)
Operations Received/Sent: 3 / 3
Rate (overall): 0 op/s (peak: 0 op/s)
(per event): 0 op/s
Transactions Received/Sent: 2 / 0
Rate (overall): 0 tx/s (peak: 0 tx/s)
(per event): 0 tx/s
3 records processed as of Mon Aug 24 10:51:14 CST 2015 (rate 0/sec, delta 3)
*************************************************
Anybody know how to fix this? Thanks in advance!
For others who may encounter this problem:
It turns out to be a bug...
I switched from Version 12.1.2.1.4 20470586 OGGCORE_12.1.2.1.0OGGBP_PLATFORMS_150303.1209 to Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230. Everything works fine now.
I'm working on an Integrator for Hibernate (background on Integrators: https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch14.html#objectstate-decl-security) that, by using listeners, is supposed to take my data from how it's stored in the DB and convert it into a different form for processing at runtime. This works great when saving the data using .persist(); however, there's an odd behavior involving transactions. The following code is from Hibernate's own quickstart tutorial:
// now lets pull events from the database and list them
entityManager = entityManagerFactory.createEntityManager();
entityManager.getTransaction().begin();
List<Event> result = entityManager.createQuery( "from Event", Event.class ).getResultList();
for ( Event event : result ) {
    System.out.println( "Event (" + event.getDate() + ") : " + event.getTitle() );
}
entityManager.getTransaction().commit();
entityManager.close();
Notice the unusual transaction begin/commit wrapping the query to select the data. Running this gives the following output after the query completes:
01:01:59.111 [main] DEBUG org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(175) - committing
01:01:59.112 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.prepareEntityFlushes(149) - Processing flush-time cascades
01:01:59.112 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.prepareCollectionFlushes(189) - Dirty checking collections
01:01:59.114 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.logFlushResults(123) - Flushed: 0 insertions, 2 updates, 0 deletions to 2 objects
01:01:59.114 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.logFlushResults(130) - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(114) - Listing entities:
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(121) - org.hibernate.tutorial.em.Event{date=2015-07-28 01:01:57.776, id=1, title=Our very first event!}
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(121) - org.hibernate.tutorial.em.Event{date=2015-07-28 01:01:58.746, id=2, title=A follow up event}
01:01:59.115 [main] DEBUG org.hibernate.SQL.logStatement(109) - update EVENTS set EVENT_DATE=?, title=? where id=?
Hibernate: update EVENTS set EVENT_DATE=?, title=? where id=?
01:01:59.119 [main] DEBUG org.hibernate.SQL.logStatement(109) - update EVENTS set EVENT_DATE=?, title=? where id=?
Hibernate: update EVENTS set EVENT_DATE=?, title=? where id=?
01:01:59.120 [main] DEBUG org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doCommit(113) - committed JDBC Connection
01:01:59.120 [main] DEBUG org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.close(201) - HHH000420: Closing un-released batch
01:01:59.121 [main] DEBUG org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.releaseConnection(246) - Releasing JDBC connection
01:01:59.121 [main] DEBUG org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.releaseConnection(264) - Released JDBC connection
01:01:59.121 [main] DEBUG org.hibernate.internal.SessionFactoryImpl.close(1339) - HHH000031: Closing
It appears that since the Integrator modifies the entity in question, the entity gets marked as "dirty", and upon committing this odd transaction Hibernate bypasses my event listeners and writes the value back in the wrong format! I did some digging in the code, and it turns out that org.hibernate.event.internal.AbstractFlushingEventListener.flushEntities(FlushEvent, PersistenceContext) gets called above and tries to get listeners for EventType.FLUSH_ENTITY. Unfortunately, a listener added for this EventType is never called in my Integrator. How can I write my Integrator to behave correctly in this case, so that I can "undo" the conversion that happened with my entities at runtime and not flush the wrong value out?
Ultimately the problem was due to the EventTypes of the event listeners added with the EventListenerRegistry. What worked was using EventType.POST_LOAD for all the read operations, combined with EventType.PRE_UPDATE and EventType.PRE_INSERT for writes, both of which call a helper method that handles them the same way.
To prevent unneeded writes after making your entity updates, it's a good idea to reset the dirty-tracking data in EntityEntry, called loadedState. This is a private field in Hibernate 4, so you'll need to use reflection; in Hibernate 5 it's available via the getLoadedState() method. One more gotcha: you also need to update the values of the "state" array that is actually flushed to the database by the PreInsertEvent and PreUpdateEvent, which can be retrieved from the getState() method defined on each.
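As a rough sketch of how those registrations could look (this assumes the Hibernate 5-style Integrator signature; the conversion helpers are hypothetical and the loadedState/state handling described above is only indicated in comments):

import org.hibernate.boot.Metadata;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;
import org.hibernate.event.spi.PostLoadEventListener;
import org.hibernate.event.spi.PreInsertEventListener;
import org.hibernate.event.spi.PreUpdateEventListener;
import org.hibernate.integrator.spi.Integrator;
import org.hibernate.service.spi.SessionFactoryServiceRegistry;

public class ConversionIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        EventListenerRegistry registry = serviceRegistry.getService(EventListenerRegistry.class);

        // reads: convert the stored form into the runtime form after loading
        registry.appendListeners(EventType.POST_LOAD,
                (PostLoadEventListener) event -> toRuntimeForm(event.getEntity()));

        // writes: convert back and patch the state array that will actually be flushed
        registry.appendListeners(EventType.PRE_INSERT, (PreInsertEventListener) event -> {
            toStorageForm(event.getEntity(), event.getState());
            return false; // do not veto the insert
        });
        registry.appendListeners(EventType.PRE_UPDATE, (PreUpdateEventListener) event -> {
            toStorageForm(event.getEntity(), event.getState());
            return false; // do not veto the update
        });
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory,
                             SessionFactoryServiceRegistry serviceRegistry) {
        // nothing to clean up in this sketch
    }

    private void toRuntimeForm(Object entity) {
        // hypothetical conversion of the loaded value; this is also where
        // EntityEntry's loadedState would be reset to avoid spurious dirty checks
    }

    private void toStorageForm(Object entity, Object[] state) {
        // hypothetical conversion back, writing the converted value into state[]
    }
}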