How to distinguish different JMS text messages from the same queue? - java

I have an AMQ queue I'm connecting to. The publisher is sending JMS TextMessages. The messages are of different types: foo-updated, bar-updated, baz-updated etc., all on a single queue.
The payload/message body is JSON for all types, and their schemas overlap significantly (but carry no direct type information).
The publishing team said "search the string/text of the message for foo, and if it's there, it's a foo-updated message".
That sounds wrong.
There may be header information in the JMS message I can use (I'm exploring that), but assuming I can influence the publisher (though not necessarily change anything), what's the best way to handle this?

If you have any influence over the setup, you should use JMS topics. Just like REST URLs, you could use topics to indicate resources and the actions on them: foo/create, foo/update, bar/update.
Then the JMS broker can help you efficiently route different messages to different consumers, e.g. one consumer subscribing to foo/*, another to */update.
If you are stuck with a queue, the publisher should add additional information as header properties, for example type=foo and action=update. Then your consumer can specify a JMS selector such as "action = 'update'" to receive only some of the messages.
Otherwise you really are stuck with looking into the content :-(
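Here is a minimal sketch of the header-property/selector approach, assuming an ActiveMQ broker on the default port; the queue name and the property names are only illustrative:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SelectorExample {
    public static void main(String[] args) throws JMSException {
        // Hypothetical broker URL and queue name for illustration
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("updates");

        // Publisher side: set custom header properties identifying the message type
        MessageProducer producer = session.createProducer(queue);
        TextMessage message = session.createTextMessage("{\"id\": 42}");
        message.setStringProperty("type", "foo");
        message.setStringProperty("action", "update");
        producer.send(message);

        // Consumer side: the selector is evaluated by the broker, so only
        // matching messages are delivered to this consumer
        MessageConsumer consumer = session.createConsumer(queue, "type = 'foo' AND action = 'update'");
        TextMessage received = (TextMessage) consumer.receive(5000);
        System.out.println("Got: " + (received != null ? received.getText() : "nothing"));

        connection.close();
    }
}

Note that with a selector on a queue, messages that do not match simply stay on the queue for other consumers, so every message type still needs some consumer to pick it up.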

Use JMS Message Selectors
See: Message Selectors

Related

Multicast in Apache Camel

I'm a newbie with Apache Camel, so please forgive me for the stupid question.
I am browsing examples of sending messages using multicast and I don't understand it.
I know that (at the network layer) a multicast source sends a datagram to a specified address in the range 224.0.0.0 to 239.255.255.255, and that the source "does not know" how many subscribers there are; only one datagram is sent for all of the subscribers.
I do not understand either the example from the documentation (https://camel.apache.org/components/latest/eips/multicast-eip.html#_multicast_example) or from here (https://www.javarticles.com/2015/05/apache-camel-multicast-examples.html).
Why (if I understood correctly) are the subscribers of the message explicitly specified ("direct:a", "direct:b", "direct:c")?
After all, at one moment there may be 3 of them, at another time 10 of them, and so on. I don't think I should have to change the code and define e.g. "direct:10", am I right?
Does the Apache Camel multicast mean something different than the one from the network layer?
Yes, it's not the same. The Multicast EIP is a way of sending a message to N recipients at the same time, where the number of recipients is fixed/known ahead of time.
Yes, you understand correctly that there is a difference between network-layer multicast and Apache Camel multicast.
The use case for Camel multicast is when you want to send the same message to multiple endpoints. So in the example from the docs:
from("direct:a").multicast().to("direct:x", "direct:y", "direct:z");
The same message from "direct:a" will be sent to all three endpoints. The number of "destination" endpoints is defined per route and can differ from route to route.
Note that in case of:
from("direct:a").to("direct:x").to("direct:y")
You are chaining the processing of the messages: the result from "direct:a" will be sent to "direct:x", and the result from "direct:x" will be sent to "direct:y", so "direct:y" may receive a different message than "direct:x".
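For comparison, here is a small runnable sketch (endpoint names borrowed from the answer; the logging routes are just for illustration) that contrasts the two styles in a single RouteBuilder:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class MulticastDemo {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Multicast: x, y and z each receive a copy of the original message
                from("direct:a")
                    .multicast()
                    .to("direct:x", "direct:y", "direct:z");

                // Pipeline: x, y and z are chained, each sees the previous step's output
                from("direct:b")
                    .to("direct:x")
                    .to("direct:y")
                    .to("direct:z");

                from("direct:x").log("x got: ${body}");
                from("direct:y").log("y got: ${body}");
                from("direct:z").log("z got: ${body}");
            }
        });
        context.start();
        context.createProducerTemplate().sendBody("direct:a", "hello");
        context.stop();
    }
}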

Dynamically connecting a Kafka input stream to multiple output streams

Is there functionality built into Kafka Streams that allows for dynamically connecting a single input stream into multiple output streams? KStream.branch allows branching based on true/false predicates, but this isn't quite what I want. I'd like each incoming log to determine the topic it will be streamed to at runtime, e.g., a log {"date": "2017-01-01"} will be streamed to the topic topic-2017-01-01 and a log {"date": "2017-01-02"} will be streamed to the topic topic-2017-01-02.
I could call forEach on the stream, then write to a Kafka producer, but that doesn't seem very elegant. Is there a better way to do this within the Streams framework?
If you want to create topics dynamically based on your data, you do not get any support within Kafka's Streams API at the moment (v0.10.2 and earlier). You will need to create a KafkaProducer and implement your dynamic "routing" yourself (for example using KStream#foreach() or KStream#process()). Note that you need to do synchronous writes to avoid data loss (which are unfortunately not very performant). There are plans to extend the Streams API with dynamic topic routing, but there is no concrete timeline for this feature right now.
There is one more consideration you should take into account. If you do not know your destination topic(s) ahead of time and just rely on the so-called "topic auto creation" feature, you should make sure that those topics are being created with the desired configuration settings (e.g., number of partitions or replication factor).
As an alternative to "topic auto creation" you can also use Admin Client (available since v0.10.1) to create topics with correct configuration. See https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
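A rough sketch of the foreach-plus-producer approach described above, assuming string keys/values and a "date" field formatted exactly as in the question; the topic naming and the crude JSON handling are only illustrative:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.kstream.KStream;

public class DynamicRouting {

    static void addDynamicRouting(KStream<String, String> logs) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        logs.foreach((key, value) -> {
            // Derive the target topic from the record content, e.g. the "date" field
            String topic = "topic-" + extractDate(value);
            try {
                // Block on the future so a failed write is detected before
                // the input record's offset is committed (synchronous write)
                producer.send(new ProducerRecord<>(topic, key, value)).get();
            } catch (Exception e) {
                throw new RuntimeException("Write to " + topic + " failed", e);
            }
        });
    }

    // Naive extraction for illustration only; assumes {"date": "yyyy-MM-dd"} exactly.
    // Use a real JSON parser in practice.
    private static String extractDate(String json) {
        int i = json.indexOf("\"date\"");
        return json.substring(i + 9, i + 19);
    }
}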

Message processing - should I use JMS?

I'm developing a 'WS-oriented' application based on Spring/CXF/Oracle DB. Now I'm stuck on some architectural considerations about the right approach to organizing message processing (the messages are already stored in the db).
Briefly, process looks as follows:
(A) Get the message from the client -> Validate -> Store -> Send response
(B) Process -> Update data
I consider two general approaches for part B of the process:
1) Use JMS queue
Just after validating and storing the incoming message details in the DB, publish a message to the JMS queue. On the other side, define a consumer which will retrieve the message and do the processing.
2) Fetch data to be processed
Manually fetch the data from the db and process it.
Additional facts:
The processing won't be compute-intensive, so for now I don't think work distribution will be needed (all in a single JVM).
All data is in a single db schema.
So I'm interested: what are the key factors for choosing JMS in such a case?
JMS would be the better approach. In the positive scenario, approach #2 works as well, but JMS gives you some built-in capabilities, especially for the failure cases. Though a JMS broker may internally use DB-based persistent storage itself, it provides a better interface for passing that data around.
For example, you could configure an error queue to track all the messages whose processing failed.
It also gives you a scalable architecture, where some other app could (in the future) start consuming your messages and processing them.
Reliable: thanks to asynchronous messaging, all the pieces don't need to be up for the application to function as a whole.
Flexible: think of a scenario in which you want to process certain types of data before all others (prioritization). JMS offers a better approach to that than tweaking the logic in a program.
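As a concrete illustration of option 1, here is a minimal sketch using Spring's JMS support (4.1+ annotation style); the queue name, class names and the id-only payload are assumptions, not part of the question, and the listener-container configuration (@EnableJms, connection factory) is assumed to exist elsewhere:

import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Part A: after validating and storing, hand off to the queue
@Service
public class IntakeService {

    private final JmsTemplate jmsTemplate;

    public IntakeService(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void accept(String messageId) {
        // validation and the DB insert happen before this point;
        // only the stored message's id is put on the queue
        jmsTemplate.convertAndSend("message.process", messageId);
    }
}

// Part B: asynchronous processing, decoupled from the request thread
@Component
class ProcessingListener {

    @JmsListener(destination = "message.process")
    public void process(String messageId) {
        // load the stored row by id, process it and update the data;
        // a failure here can be retried or routed to an error queue by the broker
    }
}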

Java Chat system protocol design, how to determine message type?

I have a chat program implemented in Java. The client can send many different types of information to the server (e.g. joining the server and sending a username and password, requesting a private chat with another user on the server, disconnecting from the server, etc.).
I'm looking for the correct way to have the server/client differentiate between 'text' messages that are just meant to be chat text messages sent from one client to the others, and 'command' messages (disconnect, request private chat, request file transfer, etc) that are meant for the server or the client.
I see two options:
Use serialized objects, and determine what they are on the receiving end by doing an 'instanceof'
Send the data as a byte array, reserving the first N bytes of the array to specify the 'type' of the incoming data.
What is the 'correct' way to do this? How do real protocols (OSCAR, IRC) handle this situation?
I've googled around on this topic and only found examples/discussions centering on simple java chat applications. None that go into detail about protocol design (which I ultimately intend to practice).
Thanks for any help.
The second approach is much better, because serialization is a complex mechanism that can easily be used in the wrong way (for example, you may bind yourself to the internal structure of a concrete serialized class). Plus, your protocol would be tied to a JVM-specific mechanism.
Using a "protocol header" for message differentiation is common practice in network protocols (FTP, HTTP, etc.). It is even better when it is in text form (people will be able to read it).
You typically have a little message header identifying the type of content in all messages, including standard text/chat messages.
Either of your two suggestions is fine. (In your second approach, you probably want to reserve some bytes for the length of the array as well.)
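To make the second approach concrete, here is a minimal framing sketch over DataOutputStream/DataInputStream; the one-byte type constants are made up for illustration:

import java.io.*;

// Minimal framing sketch: 1 byte for the message type, 4 bytes for the payload
// length, then the UTF-8 payload. The type constants are hypothetical.
public class ChatFraming {

    public static final byte TYPE_CHAT = 1;
    public static final byte TYPE_PRIVATE_CHAT_REQUEST = 2;
    public static final byte TYPE_DISCONNECT = 3;

    public static void writeFrame(DataOutputStream out, byte type, String payload) throws IOException {
        byte[] body = payload.getBytes("UTF-8");
        out.writeByte(type);          // what kind of message this is
        out.writeInt(body.length);    // how many payload bytes follow
        out.write(body);
        out.flush();
    }

    public static void readFrame(DataInputStream in) throws IOException {
        byte type = in.readByte();
        int length = in.readInt();
        byte[] body = new byte[length];
        in.readFully(body);
        String payload = new String(body, "UTF-8");

        switch (type) {
            case TYPE_CHAT:
                System.out.println("chat: " + payload);
                break;
            case TYPE_PRIVATE_CHAT_REQUEST:
                System.out.println("private chat requested with: " + payload);
                break;
            case TYPE_DISCONNECT:
                System.out.println("client disconnecting");
                break;
            default:
                throw new IOException("Unknown message type: " + type);
        }
    }
}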

Protocol specific channel handlers

I'm writing an application server that will receive SIP and DNS messages from the network.
When I receive a message from the network, I understand from the documentation that I first get a ChannelBuffer. I would like to determine which kind of message has been received (SIP or DNS) and decode it.
To determine the message type, I could dedicate a port to each type of message, but I would be interested to know whether another solution exists for that. My question is more about how to decode the ChannelBuffer.
Is there a ChannelHandler provided by Netty to decode SIP or DNS messages? If not, what would be the right place in the type hierarchy to write my custom ChannelHandler?
To illustrate my question, let's take as example the HttpRequestDecoder, the hierarchy is:
java.lang.Object
  org.jboss.netty.channel.SimpleChannelUpstreamHandler
    org.jboss.netty.handler.codec.frame.FrameDecoder
      org.jboss.netty.handler.codec.replay.ReplayingDecoder<HttpMessageDecoder.State>
        org.jboss.netty.handler.codec.http.HttpMessageDecoder
          org.jboss.netty.handler.codec.http.HttpRequestDecoder
Also, do I need to use two different ChannelHandlers for decoding and encoding, or is it possible to use a single ChannelHandler for both?
Thanks
If you really have a requirement for port unification (an example here), i.e. receiving different protocols on the same port, then you would have to detect the protocol in a handler and take the appropriate action. It could be as simple as inserting different handlers into the pipeline.
However, I find it very improbable that SIP and DNS would share the same port, hence no need to complicate matters.
I haven't seen a SIP decoder/encoder for Netty, but depending on what you want to do with the message, the HTTP decoder is a very good starting point (and it could be made simpler, since chunking is not supported in SIP).
I would strongly recommend not to try to combine DNS and SIP decoding in one handler (or any other combination for that matter). Keep the handlers as simple and coherent as possible. Combine handlers instead, if needed.
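If port unification really is needed, a detection handler for Netty 3.x could look roughly like the sketch below; the "printable ASCII means SIP" heuristic and the SipDecoder/DnsDecoder stubs are assumptions for illustration, not real Netty classes:

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.handler.codec.frame.FrameDecoder;

// Rough port-unification sketch: peek at the first bytes, then swap in the
// protocol-specific decoder and remove this detector from the pipeline.
public class ProtocolDetector extends FrameDecoder {

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 4) {
            return null; // wait for more data before deciding
        }

        ChannelPipeline pipeline = ctx.getPipeline();
        if (isPrintableAscii(buffer)) {
            // SIP is text-based, e.g. "INVITE sip:..." or "SIP/2.0 ..."
            pipeline.addLast("sipDecoder", new SipDecoder());
        } else {
            // DNS messages are binary
            pipeline.addLast("dnsDecoder", new DnsDecoder());
        }
        pipeline.remove(this);

        // Forward the buffered bytes to the newly installed decoder
        return buffer.readBytes(buffer.readableBytes());
    }

    private boolean isPrintableAscii(ChannelBuffer buffer) {
        for (int i = 0; i < 4; i++) {
            short b = buffer.getUnsignedByte(buffer.readerIndex() + i);
            if (b < 0x20 || b > 0x7e) {
                return false;
            }
        }
        return true;
    }

    // Placeholder decoders; a real application would parse SIP/DNS here
    static class SipDecoder extends FrameDecoder {
        @Override
        protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
            return buffer.readBytes(buffer.readableBytes()); // pass-through for the sketch
        }
    }

    static class DnsDecoder extends FrameDecoder {
        @Override
        protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
            return buffer.readBytes(buffer.readableBytes()); // pass-through for the sketch
        }
    }
}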
