Calling a Redis function (loaded Lua script) using the Lettuce library - Java

I am using Java, Spring Boot, Redis 7.0.4, and Lettuce 6.2.0.RELEASE.
I wrote a Lua script as below:
#!lua name=updateRegisterUserJobAndForwardMsg

function updateRegisterUserJobAndForwardMsg(KEYS, ARGV)
    local jobsKey = KEYS[1]
    local inboxKey = KEYS[2]
    local jobRef = KEYS[3]
    local jobIdentity = KEYS[4]
    local accountsMsg = ARGV[1]

    local jobDetail = redis.call('HGET', jobsKey, jobRef)
    local jobObj = cmsgpack.unpack(jobDetail)
    local msgSteps = jobObj['steps']
    msgSteps[jobIdentity] = 'IN_PROGRESS'
    jobDetail = redis.call('HSET', jobsKey, jobRef, cmsgpack.pack(jobObj))
    local ssoMsg = redis.call('RPUSH', inboxKey, cmsgpack.pack(accountsMsg))
    return jobDetail
end

redis.register_function('updateRegisterUserJobAndForwardMsg', updateRegisterUserJobAndForwardMsg)
Then I registered it as a function in Redis using the following command:
cat updateJobAndForwardMsgScript.lua | redis-cli -x FUNCTION LOAD REPLACE
Now I can easily call my function from redis-cli:
FCALL updateRegisterUserJobAndForwardMsg 4 key1 key2 key3 key4 arg1
and it executes successfully.
Now I want to call my function using Lettuce, which is the Redis client library in my application, but I haven't found anything online, and it seems that Lettuce does not support the new Redis 7 FUNCTION/FCALL commands.
Does Lettuce have some other, customizable way of executing Redis commands?
Any help would be appreciated!

After a bit more research about the requirement, I found the following Stack Overflow answer:
StackOverFlow Answer
And also based on the documentation (Redis Custom Commands):
Custom commands can be dispatched on the one hand using Lua and the eval() command, on the other side Lettuce 4.x allows you to trigger own commands. That API is used by Lettuce itself to dispatch commands and requires some knowledge of how commands are constructed and dispatched within Lettuce.
Lettuce provides two levels of command dispatching:
Using the synchronous, asynchronous or reactive API wrappers which
invoke commands according to their nature
Using the bare connection to influence the command nature and
synchronization (advanced)
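For reference, the second (bare connection) level can be used to send FCALL even though this Lettuce version has no built-in command for it. Below is a minimal sketch of that approach (my own illustration, not from the original answer): the FCALL keyword is defined by hand, and the ValueOutput is an assumption that only fits a function returning a single bulk-string reply.
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.output.ValueOutput;
import io.lettuce.core.protocol.CommandArgs;
import io.lettuce.core.protocol.ProtocolKeyword;

import java.nio.charset.StandardCharsets;

public class FcallDispatcher {

    // FCALL is not part of this Lettuce version's CommandType enum, so define the keyword by hand.
    private static final ProtocolKeyword FCALL = new ProtocolKeyword() {
        private final byte[] bytes = "FCALL".getBytes(StandardCharsets.US_ASCII);

        public byte[] getBytes() { return bytes; }

        public String name() { return "FCALL"; }
    };

    // Sends FCALL <functionName> <numkeys> key [key ...] arg [arg ...] on the given connection.
    public static String call(StatefulRedisConnection<String, String> connection,
                              String functionName, String[] keys, String... args) {
        CommandArgs<String, String> commandArgs = new CommandArgs<>(StringCodec.UTF8)
                .add(functionName)
                .add(keys.length)
                .addKeys(keys)
                .addValues(args);
        // ValueOutput expects a single bulk-string reply; pick another CommandOutput
        // implementation if your function returns an integer, array, etc.
        return connection.sync().dispatch(FCALL, new ValueOutput<>(StringCodec.UTF8), commandArgs);
    }
}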
So I could handle my requirements by creating an interface which extends the io.lettuce.core.dynamic.Commands interface as below:
public interface CustomCommands extends Commands {

    @Command("FCALL :funcName :keyCnt :jobsKey :inboxRef :jobsRef :jobIdentity :frwrdMsg ")
    Object fcall_responseJob(@Param("funcName") byte[] functionName, @Param("keyCnt") Integer keysCount,
                             @Param("jobsKey") byte[] jobsKey, @Param("inboxRef") byte[] inboxRef,
                             @Param("jobsRef") byte[] jobsRef, @Param("jobIdentity") byte[] jobIdentity,
                             @Param("frwrdMsg") byte[] frwrdMsg);
}
Then I could easily call my loaded FUNCTION (which was a Lua script) as below:
private void updateResponseJobAndForwardMsgToSSO(SharedObject message, SharedObject responseMessage) {
    try {
        ObjectMapper objectMapper = new MessagePackMapper();
        RedisCommandFactory factory = new RedisCommandFactory(connection);
        CustomCommands commands = factory.getCommands(CustomCommands.class);
        Object obj = commands.fcall_responseJob(
                Constant.REDIS_RESPONSE_JOB_FUNCTION_NAME.getBytes(StandardCharsets.UTF_8),
                Constant.REDIS_RESPONSE_JOB_FUNCTION_KEY_COUNT,
                (message.getAgent() + Constant.AGENTS_JOBS_POSTFIX).getBytes(StandardCharsets.UTF_8),
                (message.getAgent() + Constant.AGENTS_INBOX_POSTFIX).getBytes(StandardCharsets.UTF_8),
                message.getReferenceNumber().getBytes(StandardCharsets.UTF_8),
                message.getTyp().getBytes(StandardCharsets.UTF_8),
                objectMapper.writeValueAsBytes(responseMessage));
        LOG.info(obj.toString());
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Related

Streaming data from Kinesis to S3 fails with Illegal Character that KPL itself writes

I have a relatively straightforward use case:
Read Avro data from a Kafka topic
Use KPL (v0.14.12) to send this data to Kinesis Data Streams
Use Kinesis Firehose to transform this data into Parquet and transfer it to S3.
The Kafka topic was written into by Kafka Streams using the following producer Configuration:
private void addAwsGlueSpecificProperties(Map<String, Object> props) {
    props.put(AWSSchemaRegistryConstants.AWS_REGION, "eu-central-1");
    props.put(AWSSchemaRegistryConstants.DATA_FORMAT, DataFormat.AVRO.name());
    props.put(AWSSchemaRegistryConstants.SCHEMA_AUTO_REGISTRATION_SETTING, true);
    props.put(AWSSchemaRegistryConstants.REGISTRY_NAME, "Kinesis_Schema_Registry");
    props.put(AWSSchemaRegistryConstants.COMPRESSION_TYPE, AWSSchemaRegistryConstants.COMPRESSION.ZLIB.name());
    props.put(DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    props.put(DEFAULT_VALUE_SERDE_CLASS_CONFIG, GlueSchemaRegistryKafkaStreamsSerde.class.getName());
}
Most notably, I've set SCHEMA_AUTO_REGISTRATION_SETTING to true to try and rule out problems with my schema definition. The auto-registration itself worked without any issues.
I have a very simple loop running for test purposes, which does steps 1 and 2 above. It looks as follows:
KinesisProducer kinesisProducer = new KinesisProducer(getKinesisConfig());
try (final KafkaConsumer<String, AvroEvent> consumer = new KafkaConsumer<>(properties)) {
    consumer.subscribe(Collections.singletonList(TOPIC));
    while (true) {
        log.info("Polling...");
        final ConsumerRecords<String, AvroEvent> records = consumer.poll(Duration.ofMillis(100));
        for (final ConsumerRecord<String, AvroEvent> record : records) {
            final String key = record.key();
            final AvroEvent value = record.value();
            ListenableFuture<UserRecordResult> request = kinesisProducer.addUserRecord("my-data-stream", key, randomExplicitHashKey(), value.toByteBuffer(), gsrSchema);
            Futures.addCallback(request, CALLBACK, executor);
        }
        Thread.sleep(Duration.ofSeconds(10).toMillis());
    }
}
The callback just does a bit of logging on success/failure.
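The CALLBACK itself is not shown in the post; a plausible sketch (my assumption), using Guava's FutureCallback together with the KPL's UserRecordResult and UserRecordFailedException classes, would be:
// Sketch only: log the shard/sequence number on success and per-attempt errors on failure.
private static final FutureCallback<UserRecordResult> CALLBACK = new FutureCallback<UserRecordResult>() {
    @Override
    public void onSuccess(UserRecordResult result) {
        log.info("Put record into shard {} (sequence number {})",
                result.getShardId(), result.getSequenceNumber());
    }

    @Override
    public void onFailure(Throwable t) {
        if (t instanceof UserRecordFailedException) {
            ((UserRecordFailedException) t).getResult().getAttempts().forEach(a ->
                    log.error("Attempt failed: {} - {}", a.getErrorCode(), a.getErrorMessage()));
        } else {
            log.error("Failed to put record", t);
        }
    }
};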
My Kinesis Config looks as follows:
private static KinesisProducerConfiguration getKinesisConfig() {
    KinesisProducerConfiguration config = new KinesisProducerConfiguration();
    GlueSchemaRegistryConfiguration schemaRegistryConfiguration = getGlueSchemaRegistryConfiguration();
    config.setGlueSchemaRegistryConfiguration(schemaRegistryConfiguration);
    config.setRegion("eu-central-1");
    config.setCredentialsProvider(new DefaultAWSCredentialsProviderChain());
    config.setMaxConnections(2);
    config.setThreadingModel(KinesisProducerConfiguration.ThreadingModel.POOLED);
    config.setThreadPoolSize(2);
    config.setRateLimit(100L);
    return config;
}

private static GlueSchemaRegistryConfiguration getGlueSchemaRegistryConfiguration() {
    GlueSchemaRegistryConfiguration gsrConfig = new GlueSchemaRegistryConfiguration("eu-central-1");
    gsrConfig.setAvroRecordType(AvroRecordType.GENERIC_RECORD); // have also tried SPECIFIC_RECORD
    gsrConfig.setRegistryName("Kinesis_Schema_Registry");
    gsrConfig.setCompressionType(AWSSchemaRegistryConstants.COMPRESSION.ZLIB);
    return gsrConfig;
}
This setup allows me to read Specific Avro records from Kafka and send them to Kinesis. I have also verified that the correct schema version ID is queried from GSR by my code. However, when my data gets to Firehose, I receive only the following error message for all my records (one per record):
{
    "attemptsMade": 1,
    "arrivalTimestamp": 1659622848304,
    "lastErrorCode": "DataFormatConversion.ParseError",
    "lastErrorMessage": "Encountered malformed JSON. Illegal character ((CTRL-CHAR, code 3)): only regular white space (\\r, \\n, \\t) is allowed between tokens\n at [Source: com.fasterxml.jackson.databind.util.ByteBufferBackedInputStream@6252e7eb; line: 1, column: 2]",
    "attemptEndingTimestamp": 1659623152452,
    "rawData": "<base64EncodedData>",
    "sequenceNumber": "<seqNum>",
    "dataCatalogTable": {
        "databaseName": "<Glue database name>",
        "tableName": "<Glue table name>",
        "region": "eu-central-1",
        "versionId": "LATEST",
        "roleArn": "<arn>"
    }
}
Unfortunately I can't post the entirety of the data as it is sensitive. However, the relevant part is that it always starts with the above control character that is causing the problem:
0x03 0x05 <schemaVersionId> <data>
My original data does not contain these control characters. After some debugging, I've found that KPL explicitly adds these bytes to the beginning of a UserRecord. In com.amazonaws.services.schemaregistry.serializers.SerializationDataEncoder#write:
public byte[] write(final byte[] objectBytes, UUID schemaVersionId) {
    byte[] bytes;
    try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        writeHeaderVersionBytes(out);
        writeCompressionBytes(out);
        writeSchemaVersionId(out, schemaVersionId);
        boolean shouldCompress = this.compressionHandler != null;
        bytes = writeToExistingStream(out, shouldCompress ? compressData(objectBytes) : objectBytes);
    } catch (Exception e) {
        throw new AWSSchemaRegistryException(e.getMessage(), e);
    }
    return bytes;
}
With writeHeaderVersionBytes(out) and writeCompressionBytes(out) writing to the front of the stream, respectively:
// byte HEADER_VERSION_BYTE = (byte) 3;
private void writeHeaderVersionBytes(ByteArrayOutputStream out) {
    out.write(AWSSchemaRegistryConstants.HEADER_VERSION_BYTE);
}

// byte COMPRESSION_BYTE = (byte) 5
// byte COMPRESSION_DEFAULT_BYTE = (byte) 0
private void writeCompressionBytes(ByteArrayOutputStream out) {
    out.write(compressionHandler != null ? AWSSchemaRegistryConstants.COMPRESSION_BYTE
            : AWSSchemaRegistryConstants.COMPRESSION_DEFAULT_BYTE);
}
Why is Kinesis unable to parse a message that is produced by the library that is supposed to be best suited for writing to it? What am I missing?
I've finally figured out the problem and it's quite dumb.
What it boils down to is that the transformer that converts data to Parquet in Firehose expects a pure JSON payload. It expects records in the form:
{"itemId": 1, "itemName": "someItem"}{"itemId": 2, "itemName": "otherItem"}
It seemingly does not accept the same data in a different format.
This means that Avro-compatible JSON (where the above itemId would look like "itemId": {"long": 1}), or e.g. binary Avro data, is not compatible with the Kinesis Firehose Parquet transformer, regardless of the fact that my schema definition in the Glue Schema Registry is explicitly registered as being in Avro format.
In addition, the Firehose Parquet transformer requires the use of a Glue table - creating this table from an imported Avro schema simply does not work (see this answer), so the table had to be created manually. Luckily, even though Firehose can't use the table that is based on an existing schema, the table definition was the same (with the exception of the Serde it needs to use), so it was relatively easy to fix...
To sum up, to get the above code to work I had to:
Create a Glue table for the schema manually (you can use the first table created from the existing schema as a template for creating this second table, but you can't have Firehose link to the first table)
Change the above code:
kinesisProducer.addUserRecord("my-data-stream", key, randomExplicitHashKey(), value.toByteBuffer(), gsrSchema);
to:
ByteBuffer data = ByteBuffer.wrap(value.toString().getBytes(StandardCharsets.UTF_8));
kinesisProducer.addUserRecord("my-data-stream", key, randomExplicitHashKey(), data);
Note that I am now using the overloaded addUserRecord function that does not include a Schema parameter, which internally invokes the previous function with a null schema parameter. This prevents the KPL from encoding my payload and instead sends the 'plain' JSON over to KDS.
This is contrary to the only AWS Docs example that I could find on the topic, which likely is meant for a Firehose stream which does not convert the data prior to sending it to its destination.
I can't quite understand the reasons for all these undocumented limitations, and it was a pain to debug, seeing how neither the KPL functions nor KDS explicitly mentions anywhere I could find that this is the expected behaviour. I feel like it's not worth opening an issue/PR over at the KPL repo, seeing how it seems like Amazon doesn't really care much about maintaining it...
I'll probably switch over to the plain Kinesis Client + Kinesis Aggregation for a more robust solution in the future, but hey, at least it works.

Can my selenium framework consume an incoming message

I would like to know how my Selenium framework can dequeue a message sitting in a message queue. I have built an application to send a JSON string containing k/v pairs to a message queue.
My architecture is as follows and separate apps:
A JSP Web Application exists accepting parameters resulting in a JSON string
A message sender exists and takes the JSON string and publishes it to a Queue
A message consumer exists and consumes the messages. It's basically just sitting there.
A Selenium Java framework exists, but I would like it to process the messages and, for each message, interpret the k/v pairs and kick off the script.
I would like to use the messages already in the queue and process them within the Selenium framework. How can I achieve this?
I would appreciate the help. I have edited the question with the code.
This is the code snippet to send the JSON Message
public class MessageSender {

    public static void main(String[] args) throws IOException {
        SingleNumberLogin generateLogin = new SingleNumberLogin();
        // function call to build the JSON object
        String jsonQueue = generateLogin.buildJASONObject();

        ConnectionFactory conFactory = new ConnectionFactory();
        // configure the factory before opening the connection
        conFactory.setUsername("guest");
        conFactory.setPassword("guest");
        conFactory.setVirtualHost("/");
        conFactory.setHost("localhost");
        conFactory.setPort(5672);

        try {
            Connection connInterface = conFactory.newConnection();
            Channel mqChannel = connInterface.createChannel();
            mqChannel.queueDeclare("MyQueue", false, false, false, null);
            // just assigning json to another string, then publish the message
            String myMessage = jsonQueue;
            mqChannel.basicPublish("", "MyQueue", false, false, null, myMessage.getBytes());
        } catch (IOException | TimeoutException e) {
            e.printStackTrace();
        }
    }
}
code snippet for consumer code that I have inserted into the startup function of the automation script, so if a message arrives a single test case is executed
@BeforeTest
public static void initializeTestBaseSetup() throws Exception, IOException, TimeoutException {
    ConnectionFactory conFactory = new ConnectionFactory();
    Connection connInterface = conFactory.newConnection();
    Channel mqChannel = connInterface.createChannel();
    mqChannel.queueDeclare("MyQueue", false, false, false, null);
    mqChannel.basicConsume("MyQueue", true, (consumerTag, message) -> {
        // convert the message body to a string
        String m = new String(message.getBody(), "UTF-8");
        System.out.println("Message received: " + m);
    }, consumerTag -> {
    });
}
Output JSON
JSON Message received 2020-08-28T20:39:30.845
{
    "NUMBER": "0000011111",
    "Type": "BAU",
    "User": "MyUser ",
    "Email": "riidonesh@gmail.com"
}
When tested in isolation, it works perfectly fine: I send the message and check that the consumer receives it. Adding the consumer code to my framework is where I am stuck.
I would suggest you don't think about what you have as a "selenium framework" - think of it as a "java framework".
Selenium is a set of libraries that allow you to automate the web browser at a GUI level. The framework is the coded solution to facilitate creation and management of your test suite - it doesn't have to be limited to Selenium, and chances are that's already just one of its components.
Trying to answer your question directly:
SELENIUM cannot read messages
JAVA can read messages
If your RabbitMQ has a web front end then you may be able to use Selenium for it, but this isn't a very efficient or logical solution.
What you might want to consider, and what I would do, is extend your framework to use the RabbitMQ libraries to process messages as you need. These libraries are designed for this task.
You say:
I would like to process the messages and for each message it will interpret the k/v pairs and kicks off the script.
I understand this to mean that the messages are the pre-req data for the tests. If you want to read the values of a message before the test you can either:
Place the get/read in a generic @Before method (see the sketch after this list)
or, if it's a specific message per test case, add it into the start of the test.
You're working in Java so you can do whatever you want, really.
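For example, a @BeforeTest that pulls one pending message synchronously could look roughly like this (a sketch only; the queue name, host, and auto-ack flag are assumptions based on the question's code):
@BeforeTest
public void fetchTestParameters() throws IOException, TimeoutException {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    try (Connection connection = factory.newConnection();
         Channel channel = connection.createChannel()) {
        GetResponse response = channel.basicGet("MyQueue", true); // true = auto-ack
        if (response != null) {
            String json = new String(response.getBody(), StandardCharsets.UTF_8);
            // parse the k/v pairs from the JSON here and stash them for the test
            System.out.println("Dequeued test parameters: " + json);
        }
    }
}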
To get you started, the RabbitMQ tutorial starts here.
This is their hello world example for reading messages from the queue:
public class Recv {

    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        System.out.println(" [*] Waiting for messages. To exit press CTRL+C");

        DeliverCallback deliverCallback = (consumerTag, delivery) -> {
            String message = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println(" [x] Received '" + message + "'");
        };
        channel.basicConsume(QUEUE_NAME, true, deliverCallback, consumerTag -> { });
    }
}

HBase PrivilegedExceptionAction runAs thread?

I have HBase code that I use for gets (Although I don't have Kerberos on, I plan to have it later so I wanted to make sure that user credentials were handled correctly when connecting and doing a Put or Get).
final ByteArrayOutputStream bos = new ByteArrayOutputStream();
MyHBaseService.getUserHBase().runAs(new PrivilegedExceptionAction<Object>() {
    @Override
    public Object run() throws Exception {
        Connection connection = null;
        Table StorageTable = null;
        List<hFile> HbaseDownload = new ArrayList<>();
        try {
            // Open an HBase Connection
            connection = ConnectionFactory.createConnection(MyHBaseService.getHBaseConfiguration());
            Get get = new Get(Bytes.toBytes("filenameCell"));
            Result result = table.get(get);
            byte[] data = result.getValue(Bytes.toBytes(MyHBaseService.getDataStoreFamily()), Bytes.toBytes(MyHBaseService.getDataStoreQualifier()));
            bos.write(data, 0, data.length);
            bos.flush();
            ...
        }
    }
});
// now get the outputstream.
// I am assuming byteArrayStream is synchronized and thread-safe.
return bos.toByteArray();
However, I wasn't sure whether this runs the action asynchronously or synchronously.
The problem:
I use:
Get get = new Get(Bytes.toBytes("filenameCell"));
Result result = table.get(get);
inside this run() function. But to get information OUT of the run() thread I use a ByteArrayOutputStream declared OUTSIDE the run(): ByteArrayOutputStream.write and ByteArrayOutputStream.flush inside the run(), then toByteArray() to get the binary bytes of the HBase content out of the function. This returns null bytes, though, so maybe I'm not doing this right.
However, I am having difficulty finding good examples of the HBase Java API doing these things, and no one seems to use runAs like I do. It's so strange.
I have the HBase 1.2.5 client running inside a web app (request-based function calls).
In this code the work runs inside "MyHBaseService.getUserHBase().runAs". If that runs asynchronously, the program will reach "bos.toByteArray();" (which is outside the runAs()) before the action has finished executing, so the method returns its output before the action completes.
I think that's the reason for the null values.
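Regardless of the threading, one way to sidestep sharing the stream across the boundary is to return the bytes from the privileged action itself. A minimal sketch, assuming runAs is generic in the same way as UserGroupInformation.doAs (i.e. it blocks and returns the action's result) and using a placeholder table name:
// Sketch only; "storageTable" is a placeholder, not the real table name.
byte[] data = MyHBaseService.getUserHBase().runAs(new PrivilegedExceptionAction<byte[]>() {
    @Override
    public byte[] run() throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(MyHBaseService.getHBaseConfiguration());
             Table table = connection.getTable(TableName.valueOf("storageTable"))) {
            Get get = new Get(Bytes.toBytes("filenameCell"));
            Result result = table.get(get);
            // return the cell value directly; no shared stream needed
            return result.getValue(Bytes.toBytes(MyHBaseService.getDataStoreFamily()),
                                   Bytes.toBytes(MyHBaseService.getDataStoreQualifier()));
        }
    }
});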

pass object to another JVM using serialization - same Java version and jars (both running our app)

Updates:
For now we are using a Map. The class that wants to send something to the other instance sends the object plus a routing string.
We use an object stream and Java serialization to write the object to the servlet.
We write the String first and then the object.
The receiving servlet wraps its input stream in an ObjectInputStream, reads the string first and then the object. The routing string decides where it goes.
A more generic way might have been to send a class name and its declared method, or a Spring bean name, but this was enough for us.
Original question
Know the basic way but want details of the steps. Also know I can use JAXB or RMI or EJB... but would like to do this using pure serialization to a byte array, then encode that and send it from servlet 1 in JVM 1 to servlet 2 in JVM 2 (two app server instances in the same LAN, same Java versions and jars set up in both J2EE apps).
Basic steps are (Approach 1):
serialize any Serializable object to a byte array and make a string. Exact code see below
Base64 output of 1. Is it required to base 64 or can skip step 2?
use java.net.URLEncoder.encode to encode the string
use apache http components or URL class to send from servlet 1 to 2 after naming params
on Servlet 2 J2EE framework would have already URLDecoded it, now just do reverse steps and cast to object according to param name.
Since both are our apps we would know the param name to type/class mapping. Basically looking for the fastest & most convenient way of sending objects between JVMs.
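As an illustration of steps 1-3 (a sketch of my own, using java.util.Base64 from Java 8 and java.net.URLEncoder; the toParam/fromParam names are made up, not code from the question), the round trip could look roughly like this:
public static String toParam(Serializable obj) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
        oos.writeObject(obj);                                                    // step 1: serialize
    }
    String base64 = Base64.getEncoder().encodeToString(baos.toByteArray());      // step 2: Base64
    return URLEncoder.encode(base64, "UTF-8");                                   // step 3: URL-encode for a param
}

public static Object fromParam(String param) throws IOException, ClassNotFoundException {
    // the servlet container has already URL-decoded request parameters
    byte[] bytes = Base64.getDecoder().decode(param);
    try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
        return ois.readObject();
    }
}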
Example:
POJO class to send:
package tst.ser;

import java.io.Serializable;

public class Bean1 implements Serializable {

    /**
     * make it 2 if add something without default handling
     */
    private static final long serialVersionUID = 1L;

    private String s;

    public String getS() {
        return s;
    }

    public void setS(String s) {
        this.s = s;
    }
}
Utility:
package tst.ser;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.URLEncoder;

public class SerUtl {

    public static String serialize(Object o) {
        String s = null;
        ObjectOutputStream os = null;
        try {
            os = new ObjectOutputStream(new ByteArrayOutputStream());
            os.writeObject(o);
            s = BAse64.encode(os.toByeArray());
            //s = URLEncoder.encode(s, "UTF-8");//keep this for sending part
        } catch (Exception e) {
            // TODO: logger
            e.printStackTrace();
            return null;
        } finally {
            // close OS but is in RAM
            try {
                os.close();// not required in RAM
            } catch (Exception e2) {// TODO: handle exception logger
            }
            os = null;
        }
        return s;
    }

    public static Object deserialize(String s) {
        Object o = null;
        ObjectInputStream is = null;
        try {
            // do base 64 decode if done in serialize
            is = new ObjectInputStream(new ByteArrayInputStream(
                    Base64.decode(s)));
            o = is.readObject();
        } catch (Exception e) {
            // TODO: logger
            e.printStackTrace();
            return null;
        } finally {
            // close OS but is in RAM
            try {
                is.close();// not required in RAM
            } catch (Exception e2) {// TODO: handle exception logger
            }
            is = null;
        }
        return o;
    }
}
Sample sending servlet:
Bean1 b = new Bean1();
b.setS("asdd");
String s = SerUtl.serialize(b);
// do UrlEncode.encode here if sending lib does not.
HttpParam p = new HttpParam("bean1", s);
// http components send obj

Sample receiving servlet:
String s = request.getParameter("bean1");
Bean1 b1 = (Bean1) SerUtl.deserialize(s);
Serialize any Serializable object to a byte array
Yes.
and make a string.
No.
Exact statements see below
os = new ObjectOutputStream(new ByteArrayOutputStream());
os.writeObject(o);
s = os.toString();
// s = Base64.encode(s);//Need this some base 64 impl like Apache ?
s = URLEncoder.encode(s, "UTF-8");
These statements don't even do what you have described, which is in any case incorrect. OutputStream.toString() doesn't turn any bytes into Strings, it just returns a unique object identifier.
Base64 output of 1.
The base64 output should use the byte array as the input, not a String. String is not a container for binary data. See below for corrected code.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
os = new ObjectOutputStream(baos);
os.writeObject(o);
os.close();
s = Base64.encode(baos.toByteArray()); // adjust to suit your API
s = URLEncoder.encode(s, "UTF-8");
This at least accomplishes your objective.
Is it required to base 64 or can skip step 2?
If you want a String you must encode it somehow.
Use java.net.URLEncoder.encode to encode the string
This is only necessary if you're sending it as a GET or POST parameter.
Use apache http components or URL class to send from servlet 1 to 2 after naming params
Yes.
On Servlet 2 J2EE framework would have already URLDecoded it, now just do reverse steps and cast to object according to param name.
Yes, but remember to go directly from the base64-encoded string to the byte array, no intermediate String.
Basically looking for the fastest & most convenient way of sending objects between JVMs.
These objectives aren't necessarily reconcilable. The most convenient these days is probably XML or JSON but I doubt that these are faster than Serialization.
os = null;
Setting references that are about to fall out of scope to null is pointless.
HttpParam p = new HttpParam ("bean1", s);
It's possible that HttpParam does the URLEncoding for you. Check this.
You need not convert to string. You can post the binary data straight to the servlet, for example by creating an ObjectOutputStream on top of a HttpUrlConnection's outputstream. Set the request method to POST.
The servlet handling the post can deserialize from an ObjectStream created from the HttpServletRequest's ServletInputStream.
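A rough sketch of that approach (illustrative only; the URL, servlet mapping, and method names are made up):
// Sender side: POST the serialized object directly, no Base64 involved.
public static void sendObject(Serializable obj) throws IOException {
    HttpURLConnection conn = (HttpURLConnection)
            new URL("http://host2:8080/app/receiver").openConnection(); // placeholder URL
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    try (ObjectOutputStream out = new ObjectOutputStream(conn.getOutputStream())) {
        out.writeObject(obj);
    }
    if (conn.getResponseCode() != 200) {
        throw new IOException("Unexpected response: " + conn.getResponseCode());
    }
}

// Receiver side, in the servlet:
@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    try (ObjectInputStream in = new ObjectInputStream(req.getInputStream())) {
        Object obj = in.readObject();
        // dispatch on the object's type (or a routing string read first) here
    } catch (ClassNotFoundException e) {
        throw new ServletException(e);
    }
}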
I'd recommend JAXB any time over binary serialization, though. The frameworks are not only great for interoperability, they also speed up development and create more robust solutions.
The advantages I see are way better tooling, type safety, and code generation, keeping your options open so you can call your code from another version or another language, and easier debugging. Don't underestimate the cost of hard to solve bugs caused by accidentally sending the wrong type or doubly escaped data to the servlet. I'd expect the performance benefits to be too small to compensate for this.
Found this Base64 impl that does a lot of the heavy lifting for me: http://iharder.net/base64
It has utility methods:
String encodeObject(java.io.Serializable serializableObject, int options)
Object decodeToObject(String encodedObject, int options, final ClassLoader loader)
Using:
try {
    String dat = Base64.encodeObject(srlzblObj, options);
    StringBuilder data = new StringBuilder().append("type=");
    data.append(appObjTyp).append("&obj=").append(java.net.URLEncoder.encode(dat, "UTF-8"));
I use the type param to tell the receiving JVM what type of object I'm sending. Each servlet/JSP receives at most 4 types, usually 1. Again, since it's our own app and classes that we are sending, this is quick (as in time to send over the network) and simple.
On the other end, unpack it with:
String objData = request.getParameter("obj");
Object obj = Base64.decodeToObject(objData, options, null);
Process it, encode the result, and send the result back:
reply = Base64.encodeObject(result, options);
out.print("rsp=" + reply);
The calling servlet/JSP gets the result:
if (reply != null && reply.length() > 4) {
    String objDataFromServletParam = reply.substring(4);
    Object obj = Base64.decodeToObject(objDataFromServletParam, options, null);
options can be 0 or Base64.GZIP.
You can use JMS as well.
Apache ActiveMQ is one good solution. You will not have to bother with all this conversion.
/**
 * @param objectToQueue
 * @throws JMSException
 */
public void sendMessage(Serializable objectToQueue) throws JMSException
{
    ObjectMessage message = session.createObjectMessage();
    message.setObject(objectToQueue);
    producerForQueue.send(message);
}

/**
 * @return the received object, or null if nothing arrived within the timeout
 * @throws JMSException
 */
public Serializable receiveMessage() throws JMSException
{
    Message message = consumerForQueue.receive(timeout);
    if (message instanceof ObjectMessage)
    {
        ObjectMessage objMsg = (ObjectMessage) message;
        Serializable sobject = objMsg.getObject();
        return sobject;
    }
    return null;
}
My point is: do not write custom code for serialization if it can be avoided.
When you use AMQ, all you need to do is make your POJO serializable.
ActiveMQ takes care of serialization for you.
If you want fast responses from AMQ, use the vm transport. It will minimize network overhead.
You will automatically get benefits of AMQ features.
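For completeness, the session, producerForQueue and consumerForQueue fields used in the snippets above could be wired up roughly like this (a sketch with assumed names, using the vm:// transport just mentioned):
// Embedded broker over vm://, so producer and consumer in the same JVM skip the network stack.
ConnectionFactory factory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
Connection connection = factory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = session.createQueue("MY.QUEUE"); // placeholder queue name
MessageProducer producerForQueue = session.createProducer(queue);
MessageConsumer consumerForQueue = session.createConsumer(queue);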
I am suggesting this because:
You have your own applications running on the network.
You need a mechanism to transfer objects.
You will need a way to monitor it as well.
If you go for a custom solution, you might have to solve the above things yourself.

Single-threaded Java Websocket for Testing

We are developing an application with Scala and Websockets. For the latter we use Java-Websocket. The application itself works great and we are in the middle of writing unit tests.
We use a WebSocket class as follows
class WebSocket(uri : URI) extends WebSocketClient(uri) {
  connectBlocking()
  var response = ""

  def onOpen(handshakedata : ServerHandshake) {
    println("onOpen")
  }

  def onMessage(message : String) {
    println("Received: " + message)
    response = message
  }

  def onClose(code : Int, reason : String, remote : Boolean) {
    println("onClose")
  }

  def onError(ex : Exception) {
    println("onError")
  }
}
A test might look like this (pseudo code)
websocketTest {
  ws = new WebSocket("ws://example.org")
  ws.send("foo")
  res = ws.getResponse()
  ....
}
Sending and receiving data works. However, the problem is that connecting to the websocket creates a new thread and only the new thread will have access to response using the onMessage handler. What is the best way to either make the websocket implementation single-threaded or connect the two threads so that we can access the response in the test case? Or is there another, even better way of doing it? In the end we should be able to somehow test the response of the websocket.
There are a number of ways you could try to do this. The issue is that you might get either an error or a successful response from the server. As a result, the best way is probably to use some sort of timeout. In the past I have used a pattern like this (note, this is untested code):
...
use response in the onMessage like you did
...
long start = System.currentTimeMillis();
long timeout = 5000; // 5 seconds
while ((System.currentTimeMillis() - start) < timeout && response == null)
{
    Thread.sleep(100);
}
if (response == null) .. timed out
else .. do something with the response
If you want to be especially safe you can use an AtomicReference for the response.
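For illustration, the AtomicReference variant could look roughly like this in Java (a sketch; the field name, timeout, and JUnit assertion are assumptions):
// In the WebSocket client: the socket thread writes, the test thread reads.
private final AtomicReference<String> response = new AtomicReference<>();

@Override
public void onMessage(String message) {
    response.set(message);
}

// In the test:
long start = System.currentTimeMillis();
while (System.currentTimeMillis() - start < 5000 && response.get() == null) {
    Thread.sleep(100);
}
assertNotNull("Timed out waiting for the websocket response", response.get());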
Of course the timeout and sleep can be minimized based on your test case.
Moreover, you can wrap this in a utility method.
