I have an ActiveMQ consumer which expects a message in javax.jms.ObjectMessage format.
This message POJO has 5 string elements.
Now I am trying to write a message producer for this consumer in Node.js.
I am using the stompit module.
My current Node.js code is:
var stompit = require('stompit');

// connectOptions (host, port, connectHeaders) defined elsewhere
stompit.connect(connectOptions, function(error, client) {
  if (error) {
    console.log('connect error ' + error.message);
    return;
  }
  console.log("connected");

  var sendHeaders = {
    'destination': '/queue/test',
    'transformation': 'jms-object-json'
  };

  var msg = {};
  msg.val1 = "12";
  msg.val2 = "test";
  msg.val3 = "1";
  msg.val4 = "1";
  msg.val5 = "Y";

  var frame = client.send(sendHeaders);
  frame.write(JSON.stringify(msg));
  frame.end();
});
The Java consumer is able to get the message but throws the exception:
org.apache.activemq.command.ActiveMQTextMessage cannot be cast to javax.jms.ObjectMessage
I have read this page from the ActiveMQ documentation, which says:
Currently, ActiveMQ comes with a transformer that can transform XML/JSON text to Java objects, but you can add your own transformers as well
I didn't quite understand this part about how the data gets converted.
I have added xstream-1.4.10.jar and jettison-1.3.8.jar to apache-activemq-5.15.0\lib and restarted the ActiveMQ server.
But I still get the error in the consumer.
Also, in the ActiveMQ console -> Queues -> message properties, it shows transformation-error.
Please let me know how I can convert this ActiveMQTextMessage type to javax.jms.ObjectMessage before it reaches the consumer.
There isn't a transformer in ActiveMQ that will convert any random JSON string into an ObjectMessage; you'd have to write your own to handle whatever format you are sending. The converter in ActiveMQ will convert some basic types that map from the JSON, but it's tricky and not necessarily reliable. You are better off handling the TextMessage and doing something meaningful with the JSON yourself.
ActiveMQTextMessage and ObjectMessage are different types; one cannot be cast to the other.
From an ActiveMQTextMessage you can get the actual message content as a String, and then you have to convert it to a JSON object yourself.
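For illustration, a minimal sketch of a consumer that accepts the TextMessage and maps the JSON onto a POJO with Jackson (the listener and POJO names are made up here; the field names are taken from the Node.js producer):

import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonAwareListener implements MessageListener {

    private final ObjectMapper mapper = new ObjectMapper();

    // Hypothetical POJO matching the five string fields sent from Node.js
    public static class MyPayload {
        public String val1, val2, val3, val4, val5;
    }

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String json = ((TextMessage) message).getText();
                MyPayload payload = mapper.readValue(json, MyPayload.class);
                // ... work with payload instead of expecting an ObjectMessage
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}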
Related
I have a relatively straightforward use case:
Read Avro data from a Kafka topic
Use KPL (v0.14.12) to send this data to Kinesis Data Streams
Use Kinesis Firehose to transform this data into Parquet and transfer it to S3.
The Kafka topic was written to by Kafka Streams using the following producer configuration:
private void addAwsGlueSpecificProperties(Map<String, Object> props) {
    props.put(AWSSchemaRegistryConstants.AWS_REGION, "eu-central-1");
    props.put(AWSSchemaRegistryConstants.DATA_FORMAT, DataFormat.AVRO.name());
    props.put(AWSSchemaRegistryConstants.SCHEMA_AUTO_REGISTRATION_SETTING, true);
    props.put(AWSSchemaRegistryConstants.REGISTRY_NAME, "Kinesis_Schema_Registry");
    props.put(AWSSchemaRegistryConstants.COMPRESSION_TYPE, AWSSchemaRegistryConstants.COMPRESSION.ZLIB.name());
    props.put(DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    props.put(DEFAULT_VALUE_SERDE_CLASS_CONFIG, GlueSchemaRegistryKafkaStreamsSerde.class.getName());
}
Most notably, I've set SCHEMA_AUTO_REGISTRATION_SETTING to true to try and rule out problems with my schema definition. The auto-registration itself worked without any issues.
I have a very simple loop running for test purposes, which does steps 1 and 2 above. It looks as follows:
KinesisProducer kinesisProducer = new KinesisProducer(getKinesisConfig());
try (final KafkaConsumer<String, AvroEvent> consumer = new KafkaConsumer<>(properties)) {
    consumer.subscribe(Collections.singletonList(TOPIC));
    while (true) {
        log.info("Polling...");
        final ConsumerRecords<String, AvroEvent> records = consumer.poll(Duration.ofMillis(100));
        for (final ConsumerRecord<String, AvroEvent> record : records) {
            final String key = record.key();
            final AvroEvent value = record.value();
            // gsrSchema is the schema previously retrieved from the Glue Schema Registry
            ListenableFuture<UserRecordResult> request =
                    kinesisProducer.addUserRecord("my-data-stream", key, randomExplicitHashKey(), value.toByteBuffer(), gsrSchema);
            Futures.addCallback(request, CALLBACK, executor);
        }
        Thread.sleep(Duration.ofSeconds(10).toMillis());
    }
}
The callback just does a bit of logging on success/failure.
My Kinesis Config looks as follows:
private static KinesisProducerConfiguration getKinesisConfig() {
    KinesisProducerConfiguration config = new KinesisProducerConfiguration();
    GlueSchemaRegistryConfiguration schemaRegistryConfiguration = getGlueSchemaRegistryConfiguration();
    config.setGlueSchemaRegistryConfiguration(schemaRegistryConfiguration);
    config.setRegion("eu-central-1");
    config.setCredentialsProvider(new DefaultAWSCredentialsProviderChain());
    config.setMaxConnections(2);
    config.setThreadingModel(KinesisProducerConfiguration.ThreadingModel.POOLED);
    config.setThreadPoolSize(2);
    config.setRateLimit(100L);
    return config;
}

private static GlueSchemaRegistryConfiguration getGlueSchemaRegistryConfiguration() {
    GlueSchemaRegistryConfiguration gsrConfig = new GlueSchemaRegistryConfiguration("eu-central-1");
    gsrConfig.setAvroRecordType(AvroRecordType.GENERIC_RECORD); // have also tried SPECIFIC_RECORD
    gsrConfig.setRegistryName("Kinesis_Schema_Registry");
    gsrConfig.setCompressionType(AWSSchemaRegistryConstants.COMPRESSION.ZLIB);
    return gsrConfig;
}
This setup allows me to read Specific Avro records from Kafka and send them to Kinesis. I have also verified that the correct schema version ID is queried from GSR by my code. However, when my data gets to Firehose, I receive only the following error message for all my records (one per record):
{
    "attemptsMade": 1,
    "arrivalTimestamp": 1659622848304,
    "lastErrorCode": "DataFormatConversion.ParseError",
    "lastErrorMessage": "Encountered malformed JSON. Illegal character ((CTRL-CHAR, code 3)): only regular white space (\\r, \\n, \\t) is allowed between tokens\n at [Source: com.fasterxml.jackson.databind.util.ByteBufferBackedInputStream#6252e7eb; line: 1, column: 2]",
    "attemptEndingTimestamp": 1659623152452,
    "rawData": "<base64EncodedData>",
    "sequenceNumber": "<seqNum>",
    "dataCatalogTable": {
        "databaseName": "<Glue database name>",
        "tableName": "<Glue table name>",
        "region": "eu-central-1",
        "versionId": "LATEST",
        "roleArn": "<arn>"
    }
}
Unfortunately I can't post the entirety of the data as it is sensitive. However, the relevant part is that it always starts with the above control character that is causing the problem:
0x03 0x05 <schemaVersionId> <data>
My original data does not contain these control characters. After some debugging, I've found that KPL explicitly adds these bytes to the beginning of a UserRecord. In com.amazonaws.services.schemaregistry.serializers.SerializationDataEncoder#write:
public byte[] write(final byte[] objectBytes, UUID schemaVersionId) {
    byte[] bytes;
    try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        writeHeaderVersionBytes(out);
        writeCompressionBytes(out);
        writeSchemaVersionId(out, schemaVersionId);
        boolean shouldCompress = this.compressionHandler != null;
        bytes = writeToExistingStream(out, shouldCompress ? compressData(objectBytes) : objectBytes);
    } catch (Exception e) {
        throw new AWSSchemaRegistryException(e.getMessage(), e);
    }
    return bytes;
}
With writeHeaderVersionBytes(out) and writeCompressionBytes(out) writing to the front of the stream, respectively:
// byte HEADER_VERSION_BYTE = (byte) 3;
private void writeHeaderVersionBytes(ByteArrayOutputStream out) {
    out.write(AWSSchemaRegistryConstants.HEADER_VERSION_BYTE);
}

// byte COMPRESSION_BYTE = (byte) 5
// byte COMPRESSION_DEFAULT_BYTE = (byte) 0
private void writeCompressionBytes(ByteArrayOutputStream out) {
    out.write(compressionHandler != null ? AWSSchemaRegistryConstants.COMPRESSION_BYTE
            : AWSSchemaRegistryConstants.COMPRESSION_DEFAULT_BYTE);
}
Why is Kinesis unable to parse a message that is produced by the library that is supposed to be best suited for writing to it? What am I missing?
I've finally figured out the problem and it's quite dumb.
What it boils down to is that the transformer that converts data to parquet in Firehose expects a pure JSON payload. It expects records in the form:
{"itemId": 1, "itemName": "someItem"}{"itemId": 2, "itemName": "otherItem"}
It seemingly does not accept the same data in a different format.
This means that Avro-compatible JSON (where the above itemId would look like "itemId": {"long": 1}), or e.g. binary Avro data, is not compatible with the Kinesis Firehose parquet transformer, regardless of the fact that my schema definition in the Glue Schema Registry is explicitly registered as being in Avro format.
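To make the difference concrete, the transformer only accepts the plain form; the same record in Avro JSON encoding (with union type wrappers) is rejected:

{"itemId": 1, "itemName": "someItem"}              // accepted: plain JSON
{"itemId": {"long": 1}, "itemName": "someItem"}    // rejected: Avro JSON encoding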
In addition, the Firehose parquet transformer requires the use of a Glue table - creating this table from an imported Avro schema simply does not work (see this answer), so it had to be created manually. Luckily, even though Firehose can't use the table that is based on an existing schema, the table definition was the same (with the exception of the Serde it needs to use), so it was relatively easy to fix...
To sum up, to get the above code to work I had to:
Create a Glue table for the schema manually (you can use the first table created from the existing schema as a template for creating this second table, but you can't have Firehose link to the first table)
Change the above code:
kinesisProducer.addUserRecord("my-data-stream", key, randomExplicitHashKey(), value.toByteBuffer(), gsrSchema);
to:
ByteBuffer data = ByteBuffer.wrap(value.toString().getBytes(StandardCharsets.UTF_8));
kinesisProducer.addUserRecord("my-data-stream", key, randomExplicitHashKey(), data);
Note that I am now using the overloaded addUserRecord function that does not include a Schema parameter, which internally invokes the previous function with a null schema parameter. This prevents the KPL from encoding my payload and instead sends the 'plain' JSON over to KDS.
This is contrary to the only AWS Docs example that I could find on the topic, which is likely meant for a Firehose stream that does not convert the data prior to sending it to its destination.
I can't quite understand the reasons for all these undocumented limitations, and it was a pain to debug, since neither the KPL functions nor KDS explicitly mentions anywhere that I can find that this is the expected behaviour. I don't feel it's worth opening an issue/PR over at the KPL repo, since it seems Amazon doesn't really care that much about maintaining it...
I'll probably switch over to the plain Kinesis Client + Kinesis Aggregation for a more robust solution in the future, but hey, at least it works.
I'm new to Java and to backend development, and I really could use some help.
I am currently using Vert.x to develop a server that takes a JSON request telling the server which file to analyze; the server analyzes the file and returns a response in JSON format.
I have created an ImageRecognition class with a method called "getNum", which takes a JSON object as input and outputs a JSON object containing the result.
But I am currently having trouble getting the JSON body from the request.
public void start(Promise<Void> startPromise) throws Exception {
    JsonObject reqJo = new JsonObject();
    Router router = Router.router(vertx);

    router.get("/getCall").handler(req -> {
        JsonObject subJson = req.getBodyAsJson();
        reqJo.put("name", subJson.getValue("name"));
        req.end(reqJo.encodePrettily());
    });

    router.post("/getCall").produces("*/json").handler(plateReq -> {
        plateReq.response().putHeader("content-tpye", "application/json");
        JsonObject num = imageRecogService.getNum(reqJo);
        plateReq.end(num.encodePrettily());
    });

    vertx.createHttpServer().requestHandler(router).listen(8080)
        .onSuccess(ok -> {
            log.info("http server running on port 8080");
            startPromise.complete();
        })
        .onFailure(startPromise::fail);
}
}
Any feedback or solution to the code would be deeply appreciated!!
Thank you in advance!!
You have several errors in your code:
1:
JsonObject reqJo = new JsonObject();
Router router = Router.router(vertx);
router.get("/getCall").handler(req ->{
reqJo.put("name", subJson.getValue("name"));
});
You are modifying the reqJo object inside the handlers. I am not sure if this is thread safe, but a more common practice is to allocate the JsonObject inside the request handler and pass it to subsequent handlers using RoutingContext.data().
2:
Your two handlers are not on the same HTTP method (the first one is GET, while the second is POST). I assume you want them both to be POST.
3:
In order to extract multipart body data, you need to use POST, not GET.
4:
You need to append a BodyHandler before any of your handlers that reads the request body. For example:
// Important!
router.post("/getCall").handler(BodyHandler.create());
// I changed to posts
router.post("/getCall").handler(req ->{
JsonObject subJson = req.getBodyAsJson();
reqJo.put("name", subJson.getValue("name"));
req.end(reqJo.encodePrettily());
});
Otherwise, getBodyAsJson() will return null.
According to the documentation of RoutingContext#getBodyAsJson, "the context must have first been routed to a BodyHandler for this to be populated."
Read more: BodyHandler.
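Putting those points together, a minimal sketch of the corrected routing setup (reusing the route path, field name and imageRecogService from the question; a sketch, not a drop-in implementation) might look like this:

Router router = Router.router(vertx);

// The BodyHandler must be registered before any handler that reads the body.
router.post("/getCall").handler(BodyHandler.create());

router.post("/getCall").handler(ctx -> {
    // Build the request JSON inside the handler rather than sharing one instance.
    JsonObject subJson = ctx.getBodyAsJson();
    JsonObject reqJo = new JsonObject().put("name", subJson.getValue("name"));

    JsonObject num = imageRecogService.getNum(reqJo);

    ctx.response()
       .putHeader("content-type", "application/json")
       .end(num.encodePrettily());
});

If you do want to split the work across two handlers, build the JsonObject in the first handler, store it with ctx.put("reqJo", reqJo), call ctx.next(), and read it back with ctx.get("reqJo") in the second handler.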
I would like to know how my Selenium framework can dequeue a message sitting in a message queue. I have built an application to send a JSON string containing k/v pairs to a message queue.
My architecture is as follows, with separate apps:
A JSP Web Application exists accepting parameters resulting in a JSON string
A message sender exists and takes the JSON string and publishes it to a Queue
A message consumer exists and consumes the messages. It's basically just sitting there.
A Selenium Java framework exists; I would like it to process the messages and, for each message, interpret the k/v pairs and kick off the script.
I would like to use the messages already in the queue and process them within the Selenium framework. How can I achieve this?
I would appreciate the help. I have edited the question with the code.
This is the code snippet to send the JSON message:
public class MessageSender {

    public static void main(String[] args) throws IOException {
        SingleNumberLogin generateLogin = new SingleNumberLogin();
        // function call to build the JSON object
        String jsonQueue = generateLogin.buildJASONObject();

        ConnectionFactory conFactory = new ConnectionFactory();
        // connection settings must be applied before newConnection() is called
        conFactory.setUsername("guest");
        conFactory.setPassword("guest");
        conFactory.setVirtualHost("/");
        conFactory.setHost("localhost");
        conFactory.setPort(5672);

        try {
            Connection connInterface = conFactory.newConnection();
            Channel mqChannel = connInterface.createChannel();
            mqChannel.queueDeclare("MyQueue", false, false, false, null);
            // just assigning json to another string, then publish the message
            String myMessage = jsonQueue;
            mqChannel.basicPublish("", "MyQueue", false, false, null, myMessage.getBytes());
        } catch (IOException | TimeoutException e) {
            e.printStackTrace();
        }
    }
}
This is the consumer code snippet that I have inserted into the startup function of the automation script, so that if a message arrives a single test case is executed:
@BeforeTest
public static void initializeTestBaseSetup() throws Exception, IOException, TimeoutException {
    ConnectionFactory conFactory = new ConnectionFactory();
    Connection connInterface = conFactory.newConnection();
    Channel mqChannel = connInterface.createChannel();
    mqChannel.queueDeclare("MyQueue", false, false, false, null);
    mqChannel.basicConsume("MyQueue", true, (consumerTag, message) -> {
        // convert the message body (a byte array) to a String
        String m = new String(message.getBody(), "UTF-8");
        System.out.println("Message received " + m);
    }, consumerTag -> {
    });
}
Output JSON
JSON Message received 2020-08-28T20:39:30.845{
"NUMBER": "0000011111",
"Type": "BAU",
"User": "MyUser ",
"Email": "riidonesh#gmail.com",
}
When tested in isolation, it works perfectly fine; what I mean is that I send the message and check that the consumer receives it. Adding the consumer code to my framework is where I am stuck.
I would suggest you don't think about what you have as a "selenium framework" - think of it as a "java framework".
Selenium is a set of libraries that allow you to automate the web browser at a GUI level. The framework is the coded solution to facilitate creation and management of your test suite - it doesn't have to be limited to Selenium, and chances are that's already just one of its components.
Trying to answer your question directly:
SELENIUM cannot read messages
JAVA can read messages
If your rabbitmq has a web front end then you may be able to use selenium for it, but this isn't a very efficient or a logical solution.
What you might want to consider, and what I would do, is extending your framework to use the RabbitMQ libraries to process messages as you need. These libraries are designed for this task.
You say:
I would like to process the messages and for each message it will
interpret the k/v pairs and kicks off the script.
I understand this to mean that the messages are the pre-req data for the tests. If you want to read the values of a message before the test you can either:
Place the get/read in a generic @Before method
or if it's a specific message per test case, add it into the start of the test.
You're working in java so you can do whatever you want really.
To get you started, the rabbitmq tutorial starts here.
This is their hello world example for reading messages from the queue:
public class Recv {

    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        System.out.println(" [*] Waiting for messages. To exit press CTRL+C");

        // Register a callback that is invoked for each delivered message
        DeliverCallback deliverCallback = (consumerTag, delivery) -> {
            String message = new String(delivery.getBody(), "UTF-8");
            System.out.println(" [x] Received '" + message + "'");
        };
        channel.basicConsume(QUEUE_NAME, true, deliverCallback, consumerTag -> { });
    }
}
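If you prefer to pull a single message synchronously from your test setup instead of registering an asynchronous consumer, a rough sketch along these lines could work (the queue name comes from your snippets; org.json is used just as one option for the parsing, and the class/method names here are made up):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;
import org.json.JSONObject;

public class QueueReader {

    // Pull one message from the queue and return its k/v pairs, or null if the queue is empty
    public static JSONObject readNextTestInput() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            GetResponse response = channel.basicGet("MyQueue", true); // true = auto-ack
            if (response == null) {
                return null;
            }
            String body = new String(response.getBody(), "UTF-8");
            return new JSONObject(body);
        }
    }
}

Your @BeforeTest (or an individual test) could then call readNextTestInput() and map the returned values onto the parameters the script needs.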
I am new to Spring Integration and I am trying to read a file and transform it into a custom object which has to be sent to a JMS queue wrapped in a jms.Message.
It all has to be done using annotations.
I am reading the files from the directory using the code below:
@Bean
@InboundChannelAdapter(value = "filesChannel", poller = @Poller(fixedRate = "5000", maxMessagesPerPoll = "1"))
public MessageSource<File> fileReadingMessageSource() {
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File(INBOUND_PATH));
    source.setAutoCreateDirectory(false);
    /*source.setFilter(new AcceptOnceFileListFilter());*/
    source.setFilter(new CompositeFileListFilter<File>(getFileFilters()));
    return source;
}
The next step is transforming the file content into an Invoice object (assume).
I want to know what the incoming message type for my transformer would be and how I should transform it. Could you please help here? I am not sure what the incoming data type would be and what the transformed object type should be (should it be wrapped inside a Message?).
@Transformer(inputChannel = "filesChannel", outputChannel = "jmsOutBoundChannel")
public ? convertFiletoInvoice(? fileMessage){
}
The payload is a File (java.io.File).
You can read the file and output whatever you want (String, byte[], Invoice etc).
Or you could use some of the standard transformers (e.g. FileToStringTransformer, JsonToObjectTransformer etc).
The JMS adapter will convert the object to TextMessage, ObjectMessage etc.
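For example, assuming the file contains JSON that maps onto your Invoice class, a rough sketch of the transformer could look like this (Jackson is used here just as one option; the method name is only illustrative):

@Transformer(inputChannel = "filesChannel", outputChannel = "jmsOutBoundChannel")
public Invoice convertFileToInvoice(File file) throws IOException {
    // The payload of the incoming message is the java.io.File from the inbound adapter;
    // the returned Invoice becomes the payload of the message sent to jmsOutBoundChannel.
    ObjectMapper mapper = new ObjectMapper();
    return mapper.readValue(file, Invoice.class);
}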
I am using Apache Camel with a polling consumer; when it polls, my mail is marked as read.
options : delete=false&peek=false&unseen=true
After polling, when I am processing the attachment, if any error occurs I want to mark the mail as "unread", so that I can poll it again later.
public void process(Exchange exchange) throws Exception {
    Map<String, DataHandler> attachments = exchange.getIn().getAttachments();
    Message messageCopy = exchange.getIn().copy();
    if (messageCopy.getAttachments().size() > 0) {
        for (Map.Entry<String, DataHandler> entry : messageCopy.getAttachments().entrySet()) {
            DataHandler dHandler = entry.getValue();
            // get the file name
            String filename = dHandler.getName();
            // get the content and convert it to byte[]
            byte[] data =
                    exchange.getContext().getTypeConverter().convertTo(byte[].class, dHandler.getInputStream());
            log.info("Downloading attachment, file name : " + filename);
            InputStream fileInputStream = new ByteArrayInputStream(data);
            try {
                // Processing attachments
                // if any error occurs here, i want to make the mail mark as unread
            } catch (Exception e) {
                log.info(e.getMessage());
            }
        }
    }
}
I noticed the option peek; by setting it to true, it will not mark the mail as read during polling. In that case, is there an option to mark it as read after processing?
To get the result that you want, you should have the options:
peek=true&unseen=true
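As an illustration, on the consumer endpoint this would look roughly like the following (host, credentials and the processor are placeholders, not your exact route):

public class MailRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("imaps://imap.example.com"
                + "?username=me@example.com&password=secret"
                + "&peek=true&unseen=true&delete=false")
            .process(new AttachmentProcessor()); // your existing attachment processor
    }
}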
The peek=true option is supposed to ensure that messages remain in the exact state on the mail server that they were in before polling, even if there is an exception. However, currently it won't work. This is actually a bug in the Camel Mail component. I've submitted a patch to https://issues.apache.org/jira/browse/CAMEL-9106 and this will probably be fixed in a future release.
As a workaround you can set mapMailMessages=false, but then you will have to work with the email message content yourself. In Camel 2.15 onward you also have the postProcessAction option, and with that you could probably remove the SEEN flag from messages with processing errors. Still, I would recommend waiting for the fix.
We can set the mail unread flag with the following code
public void process(Exchange exchange) throws Exception {
final Message mailMessage = exchange.getIn(MailMessage.class).getMessage();
mailMessage.setFlag(Flag.SEEN, false);
}