I have two separate ChronicleQueues that were created by independent threads monitoring web socket streams in a Java application. When I read each queue independently in a separate single-threaded program, I can traverse each entire queue as expected, using the following minimal code:
final ExcerptTailer queue1Tailer = queue1.createTailer();
final ExcerptTailer queue2Tailer = queue2.createTailer();
while (true)
{
    try (final DocumentContext context = queue1Tailer.readingDocument())
    {
        if (isNull(context.wire()))
            break;
        counter1++;
        queue1Data = context.wire()
                            .bytes()
                            .readObject(Queue1Data.class);
        queue1Writer.write(String.format("%d\t%d\t%d%n", counter1, queue1Data.getEventTime(), queue1Data.getEventContent()));
    }
}
while (true)
{
    try (final DocumentContext context = queue2Tailer.readingDocument())
    {
        if (isNull(context.wire()))
            break;
        counter2++;
        queue2Data = context.wire()
                            .bytes()
                            .readObject(Queue2Data.class);
        queue2Writer.write(String.format("%d\t%d\t%d%n", counter2, queue2Data.getEventTime(), queue2Data.getEventContent()));
    }
}
In the above, I am able to read all the Queue1Data objects, then all the Queue2Data objects, and access values as expected. However, when I try to interleave reading the queues (read an object from one queue, then, based on a property of the Queue1Data object (a time stamp), read Queue2Data objects until the first object after that time stamp (the limit variable below) of the active Queue1Data object is found, then do something with it), an exception is thrown after only one object is read from the queue2Tailer: DecoratedBufferUnderflowException: readCheckOffset0 failed. The simplified code that fails is below (I have tried putting the outer while(true) loop both inside and outside the queue2Tailer try block):
final ExcerptTailer queue1Tailer = queue1Queue.createTailer("label1");
try (final DocumentContext queue1Context = queue1Tailer.readingDocument())
{
    final ExcerptTailer queue2Tailer = queue2Queue.createTailer("label2");
    while (true)
    {
        try (final DocumentContext queue2Context = queue2Tailer.readingDocument())
        {
            if (isNull(queue2Context.wire()))
            {
                terminate = true;
                break;
            }
            queue2Data = queue2Context.wire()
                                      .bytes()
                                      .readObject(Queue2Data.class);
            while (true)
            {
                queue1Data = queue1Context.wire()
                                          .bytes()
                                          .readObject(Queue1Data.class); // first read succeeds
                if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues
                {                                       // but the second read fails
                    // cache a value
                    break;
                }
            }
            // continue working with queue2Data object and cached values
        } // end try block for queue2 tailer
    } // end outer while loop
} // end outer try block for queue1 tailer
I have tried as above, and also with both tailers created at the beginning of the function that does the processing (a private function executed when a button is clicked in a relatively simple Java application). Basically I took the loop that worked independently and put it inside another loop in the function, expecting no problems. I think I am missing something crucial in how tailers are positioned and used to read objects, but I cannot figure out what it is, since the same basic code works when reading the queues independently. The use of isNull(context.wire()) to determine when there are no more objects in a queue I took from one of the examples, though I am not sure it is the proper way to detect the end of a queue when processing it sequentially.
Any suggestions would be appreciated.
You're not writing the queue correctly in the first place.
Now, there's a hardcore way of achieving what you are trying to achieve (that is, doing everything explicitly, at a lower level), and there's the MethodReader/MethodWriter magic provided by Chronicle.
Hardcore way
Writing
// write the first event type
try (DocumentContext dc = queueAppender.writingDocument()) {
    dc.wire().writeEventName("first").text("Hello first");
}

// write the second event type
try (DocumentContext dc = queueAppender.writingDocument()) {
    dc.wire().writeEventName("second").text("Hello second");
}
This will write different types of messages into the same queue, and you will be able to easily distinguish those when reading.
Reading
StringBuilder reusable = new StringBuilder();
while (true) {
    try (DocumentContext dc = tailer.readingDocument()) {
        if (!dc.isPresent()) {
            continue;
        }
        dc.wire().readEventName(reusable);
        if ("first".contentEquals(reusable)) {
            // handle first
        } else if ("second".contentEquals(reusable)) {
            // handle second
        }
        // optionally handle other events
    }
}
The Chronicle Way (aka Peter's magic)
This works with any marshallable types, as well as any primitive types and CharSequence subclasses (i.e. Strings), and Bytes. For more details have a read of MethodReader/MethodWriter documentation.
Suppose you have some data classes:
public class FirstDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
    // data fields...
}

public class SecondDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
    // data fields...
}
Then, to write those data classes to the queue, you just need to define the interface, like this:
interface EventHandler {
    void first(FirstDataType first);
    void second(SecondDataType second);
}
Writing
Then, writing data is as simple as:
final EventHandler writer = appender.methodWriterBuilder(EventHandler.class).get();
// assuming firstDatum and secondDatum are created earlier
writer.first(firstDatum);
writer.second(secondDatum);
What this does is the same as in the hardcore section - it writes the event name (which is taken from the method name in the method writer, i.e. "first" or "second" correspondingly), and then the actual data object.
Reading
Now, to read those events from the queue, you need to provide an implementation of the above interface that handles the corresponding event types, e.g.:
// you implement this to read data from the queue
private class MyEventHandler implements EventHandler {
    public void first(FirstDataType first) {
        // handle the first type of events
    }

    public void second(SecondDataType second) {
        // handle the second type of events
    }
}
And then you read as follows:
EventHandler handler = new MyEventHandler();
MethodReader reader = tailer.methodReader(handler);
while (true) {
    // readOne() returns a boolean which can be used to determine whether there
    // is currently more data, and to pause if appropriate
    if (!reader.readOne()) {
        // no more data at the moment - e.g. break out, or back off and retry
    }
}
Misc
You don't have to use the same interface for reading and writing. In case you want to read only events of the second type, you can define another interface:
interface OnlySecond {
    void second(SecondDataType second);
}
Now, if you create a handler implementing this interface and pass it to the tailer#methodReader() call, the readOne() calls will only process events of the second type, skipping all others.
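For example, a minimal sketch (the cast is needed so the lambda resolves to the interface):
MethodReader secondReader = tailer.methodReader((OnlySecond) second -> {
    // handle only events of the second type
});
secondReader.readOne(); // messages with other event names are skipped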
This also works for MethodWriters: if you have several processes writing different types of data and one process consuming all that data, it is not uncommon to define multiple interfaces for writing and then a single interface extending all the others for reading, e.g.:
interface FirstOut {
    void first(String first);
}

interface SecondOut {
    void second(long second);
}

interface ThirdOut {
    void third(ThirdDataType third);
}

interface AllIn extends FirstOut, SecondOut, ThirdOut {
}
(I deliberately used different data types for method parameters to show how it is possible to use various types)
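A sketch of how these might be wired up (the appender/tailer variables here are assumed, not from the original):

// in one of the writing processes
final FirstOut firstWriter = appender.methodWriter(FirstOut.class);
firstWriter.first("Hello");

// in the consuming process
final AllIn handler = new AllIn() {
    @Override public void first(String first) { /* handle first */ }
    @Override public void second(long second) { /* handle second */ }
    @Override public void third(ThirdDataType third) { /* handle third */ }
};
final MethodReader reader = tailer.methodReader(handler);
while (reader.readOne()) {
    // keep reading while data is available
}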
With further testing, I have found that nested loops reading multiple queues containing data in different POJO classes are possible. The problem with the code in the above question is that queue1Context is obtained once, OUTSIDE the loop that I expected to read queue1Data objects. My fundamental misconception was that DocumentContext objects manage stepping through the objects in a queue, whereas it is actually ExcerptTailer objects that manage stepping (maintaining indices) when reading a queue sequentially.
In case it might help someone else just getting started with ChronicleQueues, the inner loop in the original question should be:
while (true)
{
    try (final DocumentContext queue1Context = queue1Tailer.readingDocument())
    {
        if (isNull(queue1Context.wire()))
            break; // no more objects in queue1
        queue1Data = queue1Context.wire()
                                  .bytes()
                                  .readObject(Queue1Data.class); // first read succeeds
        if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues as expected
        {                                       // and second and subsequent reads now succeed
            // cache a value
            break;
        }
    }
}
And of course the outermost try block containing queue1Context (in the original code) should be removed.
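For completeness, the corrected interleaved read then has this overall shape (a sketch only, reusing the names and classes from the question):

final ExcerptTailer queue1Tailer = queue1Queue.createTailer("label1");
final ExcerptTailer queue2Tailer = queue2Queue.createTailer("label2");
while (true)
{
    try (final DocumentContext queue2Context = queue2Tailer.readingDocument())
    {
        if (isNull(queue2Context.wire()))
            break; // no more Queue2Data objects
        queue2Data = queue2Context.wire()
                                  .bytes()
                                  .readObject(Queue2Data.class);
        while (true)
        {
            // a fresh DocumentContext per read, so the tailer advances
            try (final DocumentContext queue1Context = queue1Tailer.readingDocument())
            {
                if (isNull(queue1Context.wire()))
                    break; // no more Queue1Data objects
                queue1Data = queue1Context.wire()
                                          .bytes()
                                          .readObject(Queue1Data.class);
                if (queue1Data.getFieldValue() > limit)
                {
                    // cache a value
                    break;
                }
            }
        }
        // continue working with queue2Data and the cached values
    }
}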
Is conditional composition of Consumers possible in Java 8? Basically, I'm looking to create a custom lambda interface, similar to Consumer, that only works with one type of object. Let's call it Stateful; it contains multiple statuses (we'll say two for the purpose of this example):
public class Stateful {
    private int status1;
    private int status2;
}
We have a lot of areas in our code where we perform an operation on a Stateful and, if its status has changed, perform another operation. I was wondering if we could use composition to handle this in a more compact and elegant manner. Right now we do something like this:
SimpleEntry<Integer, Integer> oldStates = new SimpleEntry<>(stateful.getStatus1(), stateful.getStatus2());
applyLogicOnStateful(stateful); // do some operation that may change state values
if (isStatusChanged(oldStates, stateful)) { // compare oldStates integers to status integers
    doSomethingElse(stateful);
}
where I think something like this would look better:
statefulConsumer
        .accept((stateful) -> applyLogicOnStateful(stateful))
        .ifStatusChanged((stateful) -> doSomethingElse(stateful));
but I don't know if we would be able to track the change in status from before the first consumer to after. Maybe I need to create a lambda that takes two consumers as input?
I'm definitely looking to do this without the assistance of a 3rd party library, although you're welcome to promote one here if it is helpful.
Here is a function that will return a Consumer<Stateful> that extracts the former state, applies the change, compares the results, and conditionally operates on the changed object.
public static Consumer<Stateful> getStatefulConsumer(
        Function<Stateful, SimpleEntry<Integer, Integer>> getStatus, // extract status from Stateful
        Consumer<Stateful> applyLogic,                               // apply business logic
        BiPredicate<SimpleEntry<Integer, Integer>, SimpleEntry<Integer, Integer>> compareState, // test statuses for change
        Consumer<Stateful> onChange)                                 // doSomethingElse
{
    return stateful -> {
        SimpleEntry<Integer, Integer> oldStatus = getStatus.apply(stateful);
        applyLogic.accept(stateful);
        if (!compareState.test(oldStatus, getStatus.apply(stateful))) {
            onChange.accept(stateful);
        }
    };
}
You might use it like this:
Consumer<Stateful> ifChanged = getStatefulConsumer(s -> new SimpleEntry<>(s.status1, s.status2),
        s -> changeSomething(s), Objects::equals, s -> doSomething(s));
You could generify the extracted status so that different stateful types could have different extracted status types, or even use Stateful::clone to copy the status.
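For instance, a generified version might look like this (a sketch; the type parameters are mine):

public static <T, S> Consumer<T> getStatefulConsumer(
        Function<T, S> getStatus,        // extract status from the stateful object
        Consumer<T> applyLogic,          // apply business logic
        BiPredicate<S, S> compareState,  // test statuses for change
        Consumer<T> onChange)            // doSomethingElse
{
    return stateful -> {
        S oldStatus = getStatus.apply(stateful);
        applyLogic.accept(stateful);
        if (!compareState.test(oldStatus, getStatus.apply(stateful))) {
            onChange.accept(stateful);
        }
    };
}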
The solution I am working with right now is to create a Lambda interface that takes the Stateful instance and two Consumers as input:
public interface StatefulConsumer {
    void accept(Stateful stateful, Consumer<Stateful> consumer, Consumer<Stateful> ifStateChangedConsumer);
}
and an implementation:
final StatefulConsumer IfStateChanges = new StatefulConsumer() {
    @Override
    public void accept(Stateful stateful, Consumer<Stateful> consumer, Consumer<Stateful> ifStateChangedConsumer) {
        SimpleEntry<Integer, Integer> oldStates = new SimpleEntry<>(stateful.getStatus1(), stateful.getStatus2());
        consumer.accept(stateful); // do some operation that may change state values
        if (isStatusChanged(oldStates, stateful)) { // compare oldStates integers to status integers
            ifStateChangedConsumer.accept(stateful);
        }
    }
};
which could be called like this:
IfStateChanges.accept(stateful,
        (Stateful s) -> applyLogicOnStateful(s),
        (Stateful s) -> doSomethingElse(s));
It could also be implemented as a Predicate or a Function that takes a Stateful and a Consumer as input and returns a boolean for use in an if statement.
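For example, a rough sketch of that variant using a BiFunction (the variable names here are mine, not from the original):

BiFunction<Stateful, Consumer<Stateful>, Boolean> applyAndCheck = (stateful, consumer) -> {
    SimpleEntry<Integer, Integer> oldStates = new SimpleEntry<>(stateful.getStatus1(), stateful.getStatus2());
    consumer.accept(stateful); // may change the state values
    return isStatusChanged(oldStates, stateful);
};

if (applyAndCheck.apply(stateful, s -> applyLogicOnStateful(s))) {
    doSomethingElse(stateful);
}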
I am new to Java and Hibernate.
I have implemented functionality that generates request numbers based on the already saved request numbers. This is done by finding the maximum request number, incrementing it by 1, and then saving the result to the database.
However, I am facing issues with multithreading: when two threads access my code at the same time, both generate the same request number. My code is already synchronized. Please suggest a solution.
synchronized (this.getClass()) {
    System.out.println("start");
    certRequest.setRequestNbr(generateRequestNumber(certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
    reqId = Utils.getUniqueId();
    certRequest.setRequestId(reqId);
    ItemIdInfo itemIdInfo = new ItemIdInfo();
    itemIdInfo.setInsurerId(certRequest.getRequestId());
    certRequest.setItemIdInfo(itemIdInfo);
    dao.insert(certRequest);
    addAccountRel();
    System.out.println("end");
}
Following is the output showing my synchronization:
start
end
start
end
Is it some Hibernate issue?
Does the use of the transactional attribute in Spring affect the commit in my case?
I am using the following transactional attribute:
@Transactional(readOnly = false, propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
EDIT: code for generateRequestNumber(), shown in the chat room:
public String generateRequestNumber(String accNumber) throws Exception {
    String requestNumber = null;
    if (accNumber != null) {
        String SQL_QUERY = "select CERTREQUEST.requestNbr from CertRequest as CERTREQUEST, "
                + "CertActObjRel as certActObjRel where certActObjRel.certificateObjkeyId=CERTREQUEST.requestId "
                + " and certActObjRel.certObjTypeCd=:certObjTypeCd "
                + " and certActObjRel.certAccountId=:accNumber ";
        String[] parameterNames = {"certObjTypeCd", "accNumber"};
        Object[] parameterValues = new Object[]
        {
            Constants.REQUEST_RELATION_CODE, accNumber
        };
        List<?> resultSet = dao.executeNamedQuery(SQL_QUERY,
                parameterNames, parameterValues);
        // List<?> resultSet = dao.retrieveTableData(SQL_QUERY);
        if (resultSet != null && resultSet.size() > 0) {
            requestNumber = (String) resultSet.get(0);
        }
        int maxRequestNumber = -1;
        if (requestNumber != null && requestNumber.length() > 0) {
            maxRequestNumber = maxValue(resultSet.toArray());
            requestNumber = Integer.toString(maxRequestNumber + 1);
        } else {
            requestNumber = Integer.toString(1);
        }
        System.out.println("inside function request number " + requestNumber);
        return requestNumber;
    }
    return null;
}
Don't synchronize on the Class instance obtained via getClass(). It can have some strange side effects. See https://www.securecoding.cert.org/confluence/pages/viewpage.action?pageId=43647087
For example use:
synchronized (this) {
    // synchronized code
}
or
private synchronized void myMethod() {
    // synchronized code
}
To synchronize on the object instance.
Or do:
private static final Object lock = new Object();

private void myMethod() {
    synchronized (lock) {
        // synchronized code
    }
}
Like @diwakar suggested. This uses a constant field to synchronize on, to guarantee that this code is always synchronizing on the same lock.
EDIT: Based on information from chat, you are using a SELECT to get the maximum requestNumber and increasing the value in your code. This value is then set on the CertRequest, which is persisted to the database via a DAO. If this persist action is not committed (e.g. by making the method @Transactional or by some other means), another thread will still see the old requestNumber value. So you could solve this by making the code transactional (how depends on which frameworks you use, etc.). But I agree with @VA31's answer, which states that you should use a database sequence for this instead of incrementing the value in code. Instead of a sequence you could also consider using an auto-increment field in CertRequest, something like:
@GeneratedValue(strategy = GenerationType.AUTO)
private int requestNumber;
For getting the next value from a sequence you can look at this question.
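For example, a sketch of mapping the field to a named database sequence (the generator and sequence names here are hypothetical, and this assumes the field is the entity's id):

@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "cert_req_gen")
@SequenceGenerator(name = "cert_req_gen", sequenceName = "CERT_REQUEST_SEQ", allocationSize = 1)
private int requestNumber;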
You mentioned this information in your question.
I have implemented functionality that generates request numbers based on the already saved request numbers. This is done by finding the maximum request number, incrementing it by 1, and then saving the result to the database.
At first glance, this looks like a problem caused by running on multiple app servers: threads are only synchronized inside one JVM (app server). If you are using more than one app server, you have to solve it differently, with a more robust approach such as server-to-server communication or batch allocation of request numbers to each app server.
But if you are using only one app server and multiple threads access the same code, then you can put a lock on the instance of the class rather than the class itself.
synchronized (this) {
    lastName = name;
    nameCount++;
}
Or you can use a lock private to the class instance:
private final Object lock = new Object();
// ...
synchronized (lock) {
    System.out.println("start");
    certRequest.setRequestNbr(generateRequestNumber(certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
    reqId = Utils.getUniqueId();
    certRequest.setRequestId(reqId);
    ItemIdInfo itemIdInfo = new ItemIdInfo();
    itemIdInfo.setInsurerId(certRequest.getRequestId());
    certRequest.setItemIdInfo(itemIdInfo);
    dao.insert(certRequest);
    addAccountRel();
    System.out.println("end");
}
But make sure that your DB is updated with the new sequence number before the next thread accesses it to get a new one.
It is good practice to generate the request number (unique id) using a DATABASE SEQUENCE, so that you don't need to synchronize your Service/DAO methods.
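For instance, a sketch of fetching the next value from such a sequence through Hibernate (the sequence name and the Oracle-style syntax are assumptions):

Number next = (Number) session
        .createSQLQuery("select CERT_REQUEST_SEQ.nextval from dual")
        .uniqueResult();
String requestNumber = next.toString();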
First thing:
Why are you acquiring the lock inside the method? It is not required here.
Also, one thing: can you try it like this once:
final static Object lock = new Object();
synchronized (lock)
{
.....
}
What I suspect is that the object you are synchronizing on is different in each case, so try this once.
I have a requirement in Spring Batch where I have a file with thousands of records coming in sorted order. The key field is product code.
The file may have multiple records with the same product code. The requirement is that I have to group the records that have the same
product code into a collection (i.e. a List) and then send them over to a method, i.e. validateProductCodes(List prodCodeList).
I am looking for the best way to do this. The approach I thought of was to read every record in the processor and build up a collection
of records for the same product code there. If, at any point in the processor, the product code in a record is different, that would imply that
the grouping for the previous product code is complete, and validateProductCodes() can be called for that group of records. Also, I am using a Step. Does
that automatically mean that the process is multithreaded, i.e. that groups of records with the same product code will be processed in a multithreaded way? Please advise.
Thanks
There are two questions in your question: first, how to group the items together, and second, how they are processed.
In order to group them, you could create a group reader as Luca suggested, or something like:
public class GroupReader<I> implements ItemReader<List<I>>, InitializingBean {

    private enum State { NEW, READING, COMPLETE }

    private SingleItemPeekableItemReader<I> reader;
    private ItemReader<I> peekReaderDelegate;

    public void setReader(ItemReader<I> reader) {
        peekReaderDelegate = reader;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        Assert.notNull(peekReaderDelegate, "The 'itemReader' may not be null");
        this.reader = new SingleItemPeekableItemReader<I>();
        this.reader.setDelegate(peekReaderDelegate);
    }

    @Override
    public List<I> read() throws Exception {
        State state = State.NEW;
        List<I> group = null;
        I item = null;
        while (state != State.COMPLETE) {
            item = reader.read();
            switch (state) {
                case NEW: {
                    if (item == null) {
                        // end reached
                        state = State.COMPLETE;
                        break;
                    }
                    group = new ArrayList<I>();
                    group.add(item);
                    state = State.READING;
                    I nextItem = reader.peek();
                    if (isItAKeyChange(item, nextItem)) {
                        state = State.COMPLETE;
                    }
                    break;
                }
                case READING: {
                    group.add(item);
                    // peek and check whether the peeked entry has a new key
                    I nextItem = reader.peek();
                    if (isItAKeyChange(item, nextItem)) {
                        state = State.COMPLETE;
                    }
                    break;
                }
                default: {
                    throw new IllegalStateException("Reader is in an invalid state");
                }
            }
        }
        return group;
    }

    // isItAKeyChange(item, nextItem) compares the grouping keys of the two items
    // (nextItem may be null at the end of input); implement it for your item type
}
For every key, this reader will return a list with all elements matching that key, so the grouping is done directly in the reader.
You cannot do that with a processor, as you described.
Regarding your second question about multithreading:
using a step does not necessarily mean that the step is processed with several threads.
In order to do that, you need to set an AsyncTaskExecutor, and you have to set the throttle limit.
But if you do that, your reader must be thread-safe, or otherwise your grouping won't work. You could achieve that by simply declaring the read method above as synchronized.
Another way could be to write a small SynchronizedWrapperReader, as suggested in this question: Parellel Processing Spring Batch StaxEventItemReader
Please note that, depending on the target you are writing to, you will probably also have to synchronize the writer, and if necessary reorder the result.
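For illustration, a minimal sketch of such a wrapper (the class name is mine):

public class SynchronizedItemReader<T> implements ItemReader<T> {

    private final ItemReader<T> delegate;

    public SynchronizedItemReader(ItemReader<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public synchronized T read() throws Exception {
        // serializes all access to the non-thread-safe delegate reader
        return delegate.read();
    }
}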
I am making a multiplayer game which makes heavy use of a serialisable Event class to send messages over a network. I want to be able to reconstruct the appropriate subclass of Event based on a constant.
So far I have opted for the following solution:
public class EventFactory {
    public static Event getEvent(int eventId, ByteBuffer buf) {
        switch (eventId) {
            case Event.ID_A:
                return EventA.deserialise(buf);
            case Event.ID_B:
                return EventB.deserialise(buf);
            case Event.ID_C:
                return EventC.deserialise(buf);
            default:
                // Unknown Event ID
                return null;
        }
    }
}
However, this strikes me as being very verbose and involves adding a new 'case' statement every time I create a new Event type.
I am aware of 2 other ways of accomplishing this, but neither seems better*:
Create a mapping of constants -> Event subclasses, and use clazz.newInstance() to instantiate them (using an empty constructor), followed by clazz.initialise(buf) to supply the necessary parameters.
Create a mapping of constants -> Event subclasses, and use reflection to find and call the right method in the appropriate class.
Is there a better approach than the one I am using? Am I perhaps unwise to disregard the alternatives mentioned above?
*NOTE: in this case better means simpler / cleaner but without compromising too much on speed.
You can just use a HashMap<Integer, Function<ByteBuffer, Event>> that maps each event id to the matching deserialise method. Adding or removing events is easy, and as the code grows this is much easier to maintain than the switch solution, while the lookup remains a single constant-time map access.
private static final Map<Integer, Function<ByteBuffer, Event>> EVENT_MAP = new HashMap<>();

static
{
    EVENT_MAP.put(Event.ID_A, EventA::deserialise);
    EVENT_MAP.put(Event.ID_B, EventB::deserialise);
    // ... one entry per event type
}
Now, instead of your switch statement, you can just use:
Function<ByteBuffer, Event> deserialiser = EVENT_MAP.get(eventId);
if (deserialiser != null) {
    return deserialiser.apply(buf);
}
return null; // unknown event id
If you're not afraid of reflection, you could use:
private static final Map<Integer, Method> EVENTID_METHOD_MAP = new LinkedHashMap<>();

static {
    try {
        for (Field field : Event.class.getFields())
            if (field.getName().startsWith("ID_")) {
                String classSuffix = field.getName().substring(3);
                Class<?> cls = Class.forName("Event" + classSuffix);
                Method method = cls.getMethod("deserialise", ByteBuffer.class);
                EVENTID_METHOD_MAP.put(field.getInt(null), method);
            }
    } catch (IllegalAccessException | ClassNotFoundException | NoSuchMethodException e) {
        throw new ExceptionInInitializerError(e);
    }
}

public static Event getEvent(int eventId, ByteBuffer buf)
        throws InvocationTargetException, IllegalAccessException {
    return (Event) EVENTID_METHOD_MAP.get(eventId).invoke(null, buf);
}
This solution requires that int ID_N always maps to class EventN, where N can be any String whose characters all return true for java.lang.Character.isJavaIdentifierPart(c). Also, class EventN must define a static method called deserialise that takes one ByteBuffer argument and returns an Event.
You should also check whether the field is static before trying to get its value.
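For reference, that check uses java.lang.reflect.Modifier, inside the loop above:

if (!Modifier.isStatic(field.getModifiers()))
    continue; // skip instance fields; only the static ID_* constants are wanted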
I have code that dumps documents into MongoDB once an ArrayBlockingQueue fills its quota. When I run the code, it seems to run only once and then gives me a stack trace. My guess is that the BulkWriteOperation somehow has to be 'reset' or started over again.
Also, I create the BulkWriteOperations in the constructor:
bulkEvent = eventsCollection.initializeOrderedBulkOperation();
bulkSession = sessionsCollection.initializeOrderedBulkOperation();
Here's the stacktrace.
10 records inserted
java.lang.IllegalStateException: already executed
at org.bson.util.Assertions.isTrue(Assertions.java:36)
at com.mongodb.BulkWriteOperation.insert(BulkWriteOperation.java:62)
at willkara.monkai.impl.managers.DataManagers.MongoDBManager.dumpQueue(MongoDBManager.java:104)
at willkara.monkai.impl.managers.DataManagers.MongoDBManager.addToQueue(MongoDBManager.java:85)
Here's the code for the Queues:
public void addToQueue(Object item) {
    if (item instanceof SakaiEvent) {
        if (eventQueue.offer((SakaiEvent) item)) {
        } else {
            dumpQueue(eventQueue);
        }
    }
    if (item instanceof SakaiSession) {
        if (sessionQueue.offer((SakaiSession) item)) {
        } else {
            dumpQueue(sessionQueue);
        }
    }
}
And here is the code that reads from the queues, adds the documents to a BulkWriteOperation (initializeOrderedBulkOperation), and executes it to dump them to the database. Only 10 documents get written and then it fails.
private void dumpQueue(BlockingQueue q) {
    Object item = q.peek();
    Iterator itty = q.iterator();
    BulkWriteResult result = null;
    if (item instanceof SakaiEvent) {
        while (itty.hasNext()) {
            bulkEvent.insert(((SakaiEvent) itty.next()).convertToDBObject());
            // It's failing at that line^^
        }
        result = bulkEvent.execute();
    }
    if (item instanceof SakaiSession) {
        while (itty.hasNext()) {
            bulkSession.insert(((SakaiSession) itty.next()).convertToDBObject());
        }
        result = bulkSession.execute();
    }
    System.out.println(result.getInsertedCount() + " records inserted");
}
The general documentation applies to all driver implementations in this case:
"After execution, you cannot re-execute the Bulk() object without reinitializing."
So the .execute() method effectively "drains" the current list of operations that have been sent to it, and the object now contains state information about how the commands were actually sent. So you cannot add more entries or call .execute() again on the same instance without reinitializing it.
So after you call execute on each "Bulk" object, you need to initialize it again:
bulkEvent = eventsCollection.initializeOrderedBulkOperation();
bulkSession = sessionsCollection.initializeOrderedBulkOperation();
Each of those lines should be placed again, respectively, after each .execute() call in your function. Then further calls to those instances can add operations and call execute again, continuing the cycle.
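For instance, a sketch of dumpQueue() with the reinitialization added (assuming eventsCollection and sessionsCollection are the collection fields used in your constructor):

private void dumpQueue(BlockingQueue q) {
    Object item = q.peek();
    if (item instanceof SakaiEvent) {
        for (Object o : q) {
            bulkEvent.insert(((SakaiEvent) o).convertToDBObject());
        }
        BulkWriteResult result = bulkEvent.execute();
        bulkEvent = eventsCollection.initializeOrderedBulkOperation(); // fresh instance for the next batch
        System.out.println(result.getInsertedCount() + " records inserted");
    }
    // ... same pattern for SakaiSession, reinitializing bulkSession from sessionsCollection
}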
Note that "Bulk" operations objects will store as many items as you want to put into them but will break up requests to the server into maximum amounts of 1000 items. After execution the state of the operations list will reflect exactly how this is done should you want to inspect that.