How to order fields in outgoing messages in QuickFIX/J - java

Is there any way to order fields in outgoing messages without rebuilding QuickFIX/J? Or is there a configuration flag available that orders messages according to a validation file set via some path setting?

See the QuickFIX/J User FAQ, topic "I altered my data dictionary. Should I regenerate/rebuild QF/J?", specifically the following excerpts:
If your DD changes aren't very extensive, maybe just a few field changes, then you don't really need to. If you added a whole new custom message type, then you probably should. If you changed field orders inside of repeating groups, then I recommend that you do, especially if those group changes are in outgoing messages.
And
OUTGOING MSGS: The DD xml file is irrelevant when you construct outgoing messages. You can pretty much add whatever fields you want to messages using the generic field setters (setString, setInt, etc) and QF will let you. The only trouble is with repeating groups. QF will write repeating group element ordering according to the DD that was used for code generation. If you altered any groups that are part of outgoing messages, you DEFINITELY need to rebuild.
From what I gather from this FAQ entry, you do not need to rebuild for outgoing messages unless the reordering is within repeating groups; if you change the field order inside repeating groups, you should rebuild.
In any case it's easy to test: shuffle fields around in a message in the dictionary, reference the custom dictionary in your configuration, then log the message generated by the QuickFIX/J engine.
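To make the FAQ's "generic setters" point concrete, here is a minimal sketch: QuickFIX/J accepts arbitrary tags on an outgoing message via setString, setInt, etc.; only repeating-group ordering is baked in at code-generation time. The ClOrdID value and the use of raw tag 38 (OrderQty) are purely illustrative.

import quickfix.Message;
import quickfix.field.ClOrdID;
import quickfix.field.MsgType;

public class GenericSetterDemo {
    public static void main(String[] args) {
        Message order = new Message();
        order.getHeader().setString(MsgType.FIELD, "D"); // New Order Single
        order.setString(ClOrdID.FIELD, "ORDER-1");
        order.setInt(38, 100); // OrderQty, set via the raw tag number
        // Replace SOH delimiters so the wire format is readable in a log.
        System.out.println(order.toString().replace('\u0001', '|'));
    }
}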

Related

Spring Integration aggregating messages that were split twice

I have a use case where my messages are being split twice, and I want to aggregate all these messages. How can this best be achieved? Should I aggregate the messages twice by introducing different sequence headers, or is there a way to aggregate them in a single aggregating step by overriding how messages are grouped?
That's called "nested splitting", and there is a built-in algorithm that pushes the sequence-detail headers onto a stack for each new splitting context. This allows an ascending aggregation at the end: the first aggregator handles the closest nested split, pops the sequence-detail headers, and lets the next aggregator deal with its own sequence context.
So, in short: it is better to have as many aggregators as you have splitters if you want to send a single message at the start and receive a single message at the end.
Of course, you can instead have a custom splitting algorithm with applySequence = false, as many as you need, and only a single aggregator at the end, but then with custom correlation logic.
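A minimal Java DSL sketch of the "one aggregator per splitter" shape, assuming Spring Integration 5.3-era APIs; the channel names and the transform step are placeholders:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class NestedSplitConfig {

    @Bean
    public IntegrationFlow nestedSplitAggregateFlow() {
        return IntegrationFlows.from("input")
                .split()     // outer split: pushes sequence details
                .split()     // nested split: pushes a second level
                .<String, String>transform(String::toUpperCase) // placeholder work
                .aggregate() // inner aggregate: pops sequence details (5.3+)
                .aggregate() // outer aggregate: restores the single message
                .channel("output")
                .get();
    }
}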
We have some explanation in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregatingmessagehandler
Starting with version 5.3, after processing a message group, an AbstractCorrelatingMessageHandler performs a MessageBuilder.popSequenceDetails() message headers modification for the proper splitter-aggregator scenario with several nested levels.
We don't have a sample on the matter, but here is a configuration for a test case: https://github.com/spring-projects/spring-integration/blob/main/spring-integration-core/src/test/java/org/springframework/integration/aggregator/scenarios/NestedAggregationTests-context.xml

Any idea how to prevent specific attributes from appearing in the logs?

I'm looking for a way to prevent some sensitive data from being logged.
Ideally I would like to prevent / capture things like:
String sensitive = "";
log.info("This should be prevented or caught by something: {}", sensitive);
This post is a bit of a long shot; I'm willing to investigate any lead: annotations, new types, Sonar rules, logger hacking, etc.
Thanks for your brainstorming :)
Guillaume
Create a custom type for it and make sure that its toString() doesn't return the actual content.
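A minimal sketch of that idea; the class name and the mask are arbitrary:

public final class Sensitive {
    private final String value;

    public Sensitive(String value) { this.value = value; }

    // Explicit accessor for code that genuinely needs the raw value.
    public String reveal() { return value; }

    @Override
    public String toString() { return "****"; } // what loggers will print
}

Usage: log.info("token = {}", new Sensitive(token)); prints "token = ****".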
I imagine there are multiple ways to do this, but one way is to use the Logback configuration file to specify a provider for the "arguments" and "message" outputs. In those providers, you define a writeTo method that looks for particular patterns in the output and masks them.
This is only the path to a solution, and I obviously don't provide many details here; I'm not aware of any "standard" solutions for this.
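A related approach in the same spirit, sketched under the assumption that you control logback.xml: a custom Logback pattern converter, registered with <conversionRule conversionWord="maskedMsg" converterClass="com.example.MaskingMessageConverter"/> and used as %maskedMsg in the encoder pattern, rewrites the formatted message before it is written. The regex is only an example; tune it to your own sensitive patterns.

import ch.qos.logback.classic.pattern.MessageConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

public class MaskingMessageConverter extends MessageConverter {
    @Override
    public String convert(ILoggingEvent event) {
        String message = super.convert(event);
        // Example: mask anything that looks like a 16-digit card number.
        return message.replaceAll("\\b\\d{16}\\b", "****");
    }
}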
Another possibility would avail itself if your architecture has services running in transient containers and the log output is sent to a centralized log aggregator, like Splunk. If you were OK with the initial logs written in the container containing sensitive data, you could have the log aggregator look for and mask those patterns.
I would recommend two options. First, can you split your PII data into a separate log and store that log securely?
If not, consider something like Cribl LogStream. Point your log shipper at it and let it strip away any PII you are concerned about. LogStream makes it very easy to remove/mask/encrypt sensitive data, and it has all sorts of other features as well.
At my last job we used LogStream as the router to make decisions about the data based on its content. PII data was detected, and one copy was pushed to a secure, PII-certified logging platform, while another copy was pushed to the operational logging platform with the PII masked so a wider audience could use the logs with no risk. It was a very useful workflow that solved a lot of problems.

How to Monitor/inspect data/attribute flow in Java code

I have a use case where I need to capture the data flow from one API to another. For example, my code reads data from a database using Hibernate, and during processing I convert one POJO to another, perform some more processing, and finally convert the result into the final Hibernate object. In a nutshell, something like POJO1 to POJO2 to POJO3.
Is there a way in Java to deduce that an attribute of POJO3 was made/transformed from a given attribute of POJO1? I want something that can capture the data flow from one model to another. The tool can work at compile time or at runtime; I am OK with both.
I am looking for a tool which can run alongside the code and provide data lineage details on a per-run basis.
Instead of POJOs, I will call them states: you have a start position, and you iterate and transform your model through different states. At the end you have a final, terminal state that you would like to persist to the database:
stream(A).map(P1).map(P2).map(P3)... -> set of B
If you use a technique known as event sourcing, then yes, you can deduce it. How would this look? Instead of mapping A directly to state P1 and P1 to P2, you queue all the operations that are necessary and sufficient to map A to P1, P1 to P2, and so on. If you want to recover P1 or P2 at any time, each is just the product of the queued operations. You can rewind forward or backward at any point, as long as you have not yet changed your DB state. P1, P2, P3 can act as snapshots.
This way you will be able to rebuild the exact mapping flow for each attribute. How fine-grained you queue your operations, whether down to attribute level or more coarse-grained, is up to you.
Here is a good article that depicts event sourcing and how it works: https://kickstarter.engineering/event-sourcing-made-simple-4a2625113224
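A minimal sketch of the "queue your operations" idea in plain Java (records need Java 16+; all names are illustrative): each mapping step is recorded as a named event, so any state can be replayed, and the step list itself is the lineage for the run.

import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

class LineageLog<T> {
    record Step<T>(String description, UnaryOperator<T> op) {}

    private final List<Step<T>> steps = new ArrayList<>();

    LineageLog<T> then(String description, UnaryOperator<T> op) {
        steps.add(new Step<>(description, op));
        return this;
    }

    // Replay all queued operations to rebuild the terminal state.
    T replay(T initial) {
        T state = initial;
        for (Step<T> step : steps) {
            state = step.op().apply(state);
        }
        return state;
    }

    // The recorded descriptions are the data lineage for this run.
    List<String> lineage() {
        return steps.stream().map(Step::description).toList();
    }
}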
UPDATE:
I can think of one more technique to capture the attribute changes. You can instrument your POJOs; it is pretty much the same technique Hibernate uses to enhance POJOs, and the same one profilers use for tracing. You can then capture and react to each setter invocation on POJO1, POJO2 and POJO3. I am not sure I would go that way, though...
Here is some detailed reading about bytecode instrumentation: https://www.cs.helsinki.fi/u/pohjalai/k05/okk/seminar/Aarniala-instrumenting.pdf
I can imagine two reasons: either the code was not developed by you, and you therefore want to understand how the inputs are combined and converted into the outputs, or your code is behaving in a way you do not expect.
I think you need to log the values of all the POJOs, inputs and outputs, to some place you can inspect later for each run.
For example, a database table if you might need the data after hundreds of runs, or a log in an appropriate form if it is a one-time exercise. Then you have to manually trace those data values layer by layer to map each one to the next. With the code available, that should be easy. If you have a different need, please explain.
There are "time travelling debuggers". For Java, a quick search did only spill this out:
Chronon Time Travelling Debugger, see this screencast how it might help you .
Since your transformations probably use setters and getters this tool might also be interesting: Flow
Writing your own Java agent to track this is probably not what you want. You might be able to use AspectJ to add some stack-trace logging to getters and setters. See here for a quick introduction.
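For example, a hedged AspectJ sketch, assuming AspectJ weaving is configured and with com.example.model standing in for the package holding POJO1..POJO3, that logs every setter invocation:

import java.util.Arrays;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class SetterTraceAspect {

    // Matches any public setter in the (placeholder) model package.
    @Before("execution(public void com.example.model..*.set*(..))")
    public void traceSetter(JoinPoint jp) {
        System.out.printf("%s <- %s%n",
                jp.getSignature().toShortString(),
                Arrays.toString(jp.getArgs()));
    }
}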

WSO2 EI - Disable collecting statistics for specific component types

I'm looking to disable collecting statistics for all sequences and mediators in WSO2 EI. I still want to collect statistics about service calls and the like, but discard the unwanted statistics about sequences and mediators contained in those services (which is a lot of unnecessary data).
I'm aware that apart from enabling/disabling statistics for specific services, you can also disable statistics for specific sequences, which would also mean not collecting stats about mediators contained in those sequences. However, in our project some services only contain mediators and not sequences.
So far we've tried adding these booleans to the synapse.properties file:
mediation.flow.statistics.collect.proxy=true
mediation.flow.statistics.collect.api=true
mediation.flow.statistics.collect.mediator=false
mediation.flow.statistics.collect.sequence=false
mediation.flow.statistics.collect.resource=true
mediation.flow.statistics.collect.endpoint=true
and editing the reportEntryEvent() and reportChildEntryEvent() methods in the org.apache.synapse.aspects.flow.statistics.collectors.OpenEventCollector class. For example, if the incoming componentType is a mediator, I exit reportChildEntryEvent() early, assuming that would stop the statistics collection. However, this logic doesn't seem to be correct, as I still receive mediator statistics in my Stream Processor.
The statistics handling is probably also managed somewhere else, but I struggle to see where, and what exactly in the wso2-synapse code I should edit to achieve this behavior.
Thanks for any reply.

Loading GWT Messages from a Database

In GWT one typically loads i18n strings using an interface like this:
public interface StatusMessage extends Messages {
    String error(String username);
    :
}
which then loads the actual strings from a StatusMessage.property file:
error=User: {0} does not have access to resource
This is a great solution; however, my client is unbending in his demand to put the i18n strings in a database so they can be changed at runtime (though it is not a requirement that they be changed in real time).
One solution is to create an async service which takes a message ID and the user locale and returns a string. I have implemented this and find it terribly ugly (it introduces a huge amount of extra communication with the server, and it makes property placeholder replacement rather complicated).
So my question is this: can I in some nice way implement a custom message provider that loads the messages from the backend in one big swoop (for the current user session)? If it could also hook into the default GWT message mechanism, I would be completely happy (i.e. I could create an interface like the one above and keep using the nice {0}, {1}... placeholder format).
Other suggestions for clean database driven messages in GWT are also welcome.
GWT's built-in Dictionary class is the best way to move forward; see the official documentation on how to use it.
Let's say your application has 500 messages per locale at an average of 60 characters per message. I wouldn't think twice about loading all of these when the user logs in or selects his language: it's <50k of data and should not be an issue if you can assume broadband connectivity is available; that is your "one swoop" suggestion. I already do this in one GWT application, although it's not messages but properties that are read from the database.
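A minimal sketch of the Dictionary approach, assuming the server renders the per-locale strings from the database into the host page as a JS object named AppMessages, e.g. var AppMessages = { "error": "User: {0} does not have access to resource" };. Dictionary itself does no placeholder formatting, unlike the Messages interface, so a tiny substitution helper is included:

import com.google.gwt.i18n.client.Dictionary;

public class DbMessages {
    // Reads the AppMessages object the host page defined.
    private static final Dictionary MESSAGES = Dictionary.getDictionary("AppMessages");

    // Simple {0}, {1}... substitution over the raw template.
    public static String format(String key, String... args) {
        String template = MESSAGES.get(key);
        for (int i = 0; i < args.length; i++) {
            template = template.replace("{" + i + "}", args[i]);
        }
        return template;
    }
}

Usage: DbMessages.format("error", "alice") returns the filled-in message without any server round trip.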
I think you might find this article useful:
http://googlewebtoolkit.blogspot.com/2010/02/putting-test-data-in-its-place.html
What you could do is set up a TextResource and then just change the text at runtime. I haven't tried this, but I am very confident it would work.
To optimize the performance, you can put your messages in a JS resource, for example http://host.com/app/js/messages.js?lang=en, and map this resource to a servlet which takes the messages dictionary from your cache (a singleton bean, for instance) and writes it to the response (see the sketch after the list below).
To optimize even more, you can:
- put a parameter in the resource URL, for example: .../messages.js?lang=en&version={last updated date of messages}
- store {last updated date of messages} somewhere in the DB
- change {last updated date of messages} whenever a user updates the messages
- in the response to the browser, set Cache-Control as you want, to tell the browser to cache your messages.
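A hedged sketch of such a servlet, where MessageCache is a hypothetical singleton holding the per-locale dictionaries loaded from the database:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MessagesJsServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String lang = req.getParameter("lang");
        resp.setContentType("application/javascript");
        // Long-lived caching is safe because the URL carries a version parameter.
        resp.setHeader("Cache-Control", "public, max-age=31536000");
        resp.getWriter().print("var AppMessages = "
                + MessageCache.getInstance().asJson(lang) + ";");
    }
}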
