I want to use a Protocol Buffer message in a Java project, but I don't know what type will be used at runtime. It can be a simple datatype or another message object.
Example:
proto file:
...
message MyMessage {
  repeated google.protobuf.Any params = 1;
}
...
To serialize my values, I use the following code (the type of payload is unknown):
Any.Builder anyBuilder = Any.newBuilder();
ByteString byteString;
if (ClassUtils.isPrimitiveOrWrapper(payload.getClass())) {
    byte[] serialized = SerializationUtils.serialize((Serializable) payload);
    byteString = ByteString.copyFrom(serialized);
} else {
    // serialize custom message
}
anyBuilder.setValue(byteString);
myMessage.addParams(Any.pack(anyBuilder.build()));
To deserialize my values, I use the following code:
List<Any> paramsList = myMessage.getParamsList();
Object[] objects = new Object[paramsList.size()];
int i = 0;
for (Any any : paramsList) {
    ByteString value = any.getValue();
    objects[i++] = SerializationUtils.deserialize(value.toByteArray());
}
The code throws org.apache.commons.lang3.SerializationException: java.io.EOFException
Can I use SerializationUtils#serialize for language-independent communication (e.g. Java to Python, Java to C++) at all, or do I have to use something else?
Can I use Any in proto3 for a simple datatype, or is it restricted to proto messages?
Thanks!
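For reference, a minimal sketch of one common alternative: wrapping primitives in protobuf's well-known wrapper types (StringValue, Int32Value, and so on) so that an Any always packs a real message, which also keeps the payload readable from Python or C++. This assumes protobuf-java 3.x and is an illustration, not the asker's code:

import com.google.protobuf.Any;
import com.google.protobuf.Int32Value;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.StringValue;

public class AnyPrimitives {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        // Pack: wrap each primitive in its well-known wrapper message first
        Any packedString = Any.pack(StringValue.newBuilder().setValue("hello").build());
        Any packedInt = Any.pack(Int32Value.newBuilder().setValue(42).build());

        // Unpack: check the type URL before unpacking
        if (packedString.is(StringValue.class)) {
            System.out.println(packedString.unpack(StringValue.class).getValue());
        }
        if (packedInt.is(Int32Value.class)) {
            System.out.println(packedInt.unpack(Int32Value.class).getValue());
        }
    }
}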
I'm relying on Confluent's schema registry to store my protobuf schemas.
I posted the following schema in the schema registry:
{
  "schema": "syntax = 'proto3'; package com.xyz.message; option java_package = 'com.xyz.message'; option java_outer_classname = 'ActionMessage'; message Action { reserved 7; string id = 1; string version = 2; string action_name = 3; string unique_event_i_d = 4; string rule_i_d = 5; map<string, Value> parameters = 6; string secondary_id = 8; message Value { string value = 1; repeated string values = 2; } }",
  "schemaType": "PROTOBUF"
}
I then query the schema registry REST API from my application to retrieve it:
...
JsonElement schemaRegistryResponse = new JsonParser().parse(inputStreamReader);
String schema = schemaRegistryResponse.getAsJsonObject().get("schema").getAsString();
This indeed leaves the schema variable holding a string containing the protobuf schema. I now want to create a com.google.protobuf.Descriptors.Descriptor instance from it.
I proceed as follows:
byte[] encoded = Base64.getEncoder().encode(schema.getBytes());
FileDescriptorSet set = FileDescriptorSet.parseFrom(encoded);
FileDescriptor f = FileDescriptor.buildFrom(set.getFile(0), new FileDescriptor[] {});
Descriptors.Descriptor descriptor = f.getMessageTypes().get(0);
However, this throws a "Protocol message end-group tag did not match expected tag" exception when invoking the parseFrom(encoded) method.
Any idea what I might be doing wrong?
You're trying to parse a base64-encoded representation of a .proto file. That's not at all what FileDescriptorSet.parseFrom expects. It expects a protobuf binary representation of a FileDescriptorSet message, which is typically created by protoc using the descriptor_set_out option.
I don't believe there's any way of getting the protobuf library to parse the text of a .proto file - you really need to run protoc.
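For illustration, a minimal sketch of that workflow, assuming the schema has been saved to a hypothetical schema.proto and protoc is available on the path:

// Generate a binary descriptor set first, e.g.:
//   protoc --descriptor_set_out=schema.desc schema.proto
import com.google.protobuf.DescriptorProtos.FileDescriptorSet;
import com.google.protobuf.Descriptors;
import com.google.protobuf.Descriptors.FileDescriptor;

import java.io.FileInputStream;
import java.io.InputStream;

public class DescriptorFromProtoc {
    public static void main(String[] args) throws Exception {
        // Parse the binary FileDescriptorSet that protoc produced
        try (InputStream in = new FileInputStream("schema.desc")) {
            FileDescriptorSet set = FileDescriptorSet.parseFrom(in);
            // The schema above has no imports, so no dependencies are needed
            FileDescriptor file = FileDescriptor.buildFrom(set.getFile(0), new FileDescriptor[] {});
            Descriptors.Descriptor descriptor = file.getMessageTypes().get(0);
            System.out.println(descriptor.getFullName());
        }
    }
}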
Given a simple Protobuf message
message MessageWithUrl {
  string url_params = 1;
}
I'm doing the following:
Msg.MessageWithUrl message = Msg.MessageWithUrl
        .newBuilder()
        .setUrlParams("?key=value")
        .build();
String json = JsonFormat.printer()
        .print(message);
System.out.println(json);
My expected outcome is:
{
  "urlParams": "?key=value"
}
Instead I get:
{
  "urlParams": "?key\u003dvalue"
}
I understand that the printer is using Gson under the hood, but I don't know how to make it apply the Gson option disableHtmlEscaping.
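For what it's worth, a minimal workaround sketch, assuming post-processing the printed string is acceptable; this is not a JsonFormat option, it just reverses Gson's HTML-safe escaping, and it assumes the payload never contains a literal backslash-u sequence of its own:

// Undo Gson's HTML-safe escaping after JsonFormat has printed the message.
// Note the double backslashes: Java would otherwise translate the escape
// at compile time, so we must match the literal six-character sequence.
static String unescapeGsonHtml(String json) {
    return json.replace("\\u003d", "=")
               .replace("\\u003c", "<")
               .replace("\\u003e", ">")
               .replace("\\u0026", "&")
               .replace("\\u0027", "'");
}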
I have an ActiveMQ consumer which expects a message in javax.jms.ObjectMessage format.
This message POJO has five String fields.
Now I am trying to write a message producer for this consumer in Node.js.
I am using the stompit module.
My current Node.js code is:
stompit.connect(connectOptions, function(error, client) {
    if (error) {
        console.log('connect error ' + error.message);
        return;
    } else {
        console.log("connected");
    }

    var sendHeaders = {
        'destination': '/queue/test',
        'transformation': 'jms-object-json'
    };

    var msg = new Object();
    msg.val1 = "12";
    msg.val2 = "test";
    msg.val3 = "1";
    msg.val4 = "1";
    msg.val5 = "Y";

    var frame = client.send(sendHeaders);
    frame.write(JSON.stringify(msg));
    frame.end();
});
The Java consumer is able to get the message but throws the exception:
org.apache.activemq.command.ActiveMQTextMessage cannot be cast to javax.jms.ObjectMessage
I have read this page from ActiveMQ, which says:
Currently, ActiveMQ comes with a transformer that can transform XML/JSON text to Java objects, but you can add your own transformers as well
I didn't quite understand this part about how to convert the data.
I have added xstream-1.4.10.jar and jettison-1.3.8.jar to apache-activemq-5.15.0\lib and restarted the ActiveMQ server.
But still I get the error in the consumer.
Also, in the ActiveMQ console -> Queues -> message properties, it shows transformation-error.
Please let me know how I can convert this ActiveMQTextMessage to javax.jms.ObjectMessage before it reaches the consumer.
There isn't a transformer in ActiveMQ that will convert any random JSON string into an ObjectMessage; you'd have to write your own to handle whatever format you are sending. The converter in ActiveMQ will convert some basic types that map from the JSON, but it's tricky and not necessarily reliable. You are better off handling the TextMessage and doing something meaningful with the JSON yourself.
ActiveMQTextMessage and ObjectMessage are different; they can't be cast to each other.
From an ActiveMQTextMessage you can get the actual message content as a String; you then have to transform it into a JSON object yourself.
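For illustration, a minimal consumer-side sketch along those lines, assuming a hypothetical MyPayload POJO and Jackson for the JSON parsing:

import javax.jms.Message;
import javax.jms.TextMessage;

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonTextMessageHandler {
    private final ObjectMapper mapper = new ObjectMapper();

    // Hypothetical POJO matching the producer's five fields
    public static class MyPayload {
        public String val1, val2, val3, val4, val5;
    }

    public void onMessage(Message message) throws Exception {
        // Accept the TextMessage as-is instead of casting to ObjectMessage
        if (message instanceof TextMessage) {
            String json = ((TextMessage) message).getText();
            MyPayload payload = mapper.readValue(json, MyPayload.class);
            System.out.println(payload.val2);
        }
    }
}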
I want to send my object to CouchDB using the Ektorp Java client, but I couldn't write my byte-array value to CouchDB properly. My Java object is as follows:
If I convert the byte array to a String:
The metadata value is saved in CouchDB as "AgIGZm9vBmJhegA=" (Base64), which corresponds to "foobaz". Why has my byte-array value changed?
My example code:
private CouchDbInstance dbInstance;
private CouchDbConnector db;
...
Map<String, Object> doc = new HashMap<>();
doc.put("_id", "foo.com:http/");
byte[] serializedData = IOUtils.serialize(writer, fieldValue);
doc.put("metadata", serializedData);
...
db.update(doc);
My main code block:
public void put(K key, T obj) {
    final Map<String, Object> doc = new HashMap<>();
    doc.put("_id", key.toString());
    Schema schema = obj.getSchema();
    List<Field> fields = schema.getFields();
    for (int i = 0; i < fields.size(); i++) {
        if (!obj.isDirty(i)) {
            continue;
        }
        Field field = fields.get(i);
        Schema.Type type = field.schema().getType();
        Object fieldValue = obj.get(field.pos());
        Schema fieldSchema = field.schema();
        fieldValue = serializeFieldValue(fieldSchema, fieldValue);
        doc.put(field.name(), fieldValue);
    }
    db.update(doc);
}
private Object serializeFieldValue(Schema fieldSchema, Object fieldValue) {
    ...
    byte[] data = null;
    try {
        SpecificDatumWriter writer = getDatumWriter(fieldSchema);
        data = IOUtils.serialize(writer, fieldValue);
    } catch (IOException e) {
        LOG.error(e.getMessage(), e);
    }
    fieldValue = data;
    ...
    return fieldValue;
}
The value is the Base64-encoded string "foobaz". You should probably post your code as well to get any meaningful feedback regarding this issue.
Edit: Now that you have provided code, is it possible that the object you are trying to update already exists in the database? If yes, you need to get it first or provide the proper existing revision id for the update. Otherwise the update will be rejected.
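As an illustration, a minimal sketch with Ektorp's map-based documents (the document id is taken from the question; the rest is a generic pattern, not the asker's code):

import java.util.Map;

import org.ektorp.CouchDbConnector;

// Carry the existing revision over before updating, so CouchDB does not
// reject the update as a conflict.
void updateExistingDocument(CouchDbConnector db, Map<String, Object> doc) {
    Map<String, Object> existing = db.get(Map.class, "foo.com:http/");
    doc.put("_rev", existing.get("_rev"));
    db.update(doc);
}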
CouchDB stores JSON documents, and JSON does not support byte arrays, so I guess Ektorp is applying its own Base64 conversion while converting your object to JSON, before sending it to CouchDB, and maybe that is skipping some characters in the byte array.
You might prefer to sidestep the Ektorp behaviour by applying your own Base64 serialisation before calling Ektorp, and then deserialising yourself after fetching the document from CouchDB. Or you could use something like Jackson, which will handle the object/JSON conversion behind the scenes, including byte arrays.
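A minimal sketch of that manual round trip, using java.util.Base64 (the metadata field name is from the question; the helper names are illustrative):

import java.util.Base64;
import java.util.Map;

// Encode explicitly before handing the document to Ektorp...
static void putMetadata(Map<String, Object> doc, byte[] serializedData) {
    doc.put("metadata", Base64.getEncoder().encodeToString(serializedData));
}

// ...and decode yourself after fetching the document back from CouchDB
static byte[] getMetadata(Map<String, Object> fetchedDoc) {
    return Base64.getDecoder().decode((String) fetchedDoc.get("metadata"));
}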
Ektorp uses Jackson for JSON serialization, and I think Jackson defaults to Base64 for byte arrays. As long as you read and write with Ektorp you should not have any problems.
But I see in your code that you have some kind of type system of your own, so that complicates things. I suggest you use POJOs instead of rolling your own, since you won't get much help from Ektorp and Jackson if you are doing it yourself.
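For illustration, a minimal POJO sketch (the class name is hypothetical), assuming Ektorp's CouchDbDocument base class; Jackson Base64-encodes the byte[] field on write and decodes it on read, so the bytes round-trip unchanged:

import org.ektorp.support.CouchDbDocument;

// CouchDbDocument supplies the _id and _rev handling
public class MetadataDocument extends CouchDbDocument {
    private byte[] metadata;

    public byte[] getMetadata() {
        return metadata;
    }

    public void setMetadata(byte[] metadata) {
        this.metadata = metadata;
    }
}

Storing and loading then go through db.create(document) and db.get(MetadataDocument.class, id) without any manual conversion.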
I've written a socket-communication server in Java and an AIR program in AS3, using a socket connection.
The communication through the socket connection is done with JSON serialization.
Sometimes, with really long JSON strings over the socket, the AS3 code reports a JSON parse error.
I terminate each JSON string with an end marker so the program can tell when a message is incomplete, so the problem is not the AIR program reading the message in parts.
The error occurs only with really long JSON strings, for example a string of length 78031. Are there any limits for JSON serialization?
I had the same problem. The problem is in the Flash app's reading of data from the socket.
The point is that the Flash ProgressEvent.SOCKET_DATA event fires even when the server hasn't yet sent all the data and something is still on its way (especially when the data is big and the connection is slow).
So something like {"key":"value"} arrives in two (or more) parts, for example {"key":"val and then ue"}. Also, sometimes you might receive several JSONs joined in one message, like {"json1key":"value"}{"json2key":"value"}; the built-in Flash JSON parser cannot handle these either.
To fight this, I recommend modifying the SOCKET_DATA handler in your Flash app to add a cache for received strings, like this:
// declaring vars
private var _socket:Socket;
private var _cache:String = "";

// adding the event listener
_socket.addEventListener(ProgressEvent.SOCKET_DATA, onSocketData);

private function onSocketData(e:Event):void
{
    // take the incoming data from the socket
    var fromServer:ByteArray = new ByteArray();
    while (_socket.bytesAvailable)
    {
        // append at the current end so repeated reads don't overwrite data
        _socket.readBytes(fromServer, fromServer.length);
    }
    var receivedToString:String = fromServer.toString();
    _cache += receivedToString;
    if (receivedToString.length == 0) return; // nothing to parse

    // convert that long string into a Vector of JSONs;
    // this is a very small and not fail-safe algorithm for detecting
    // separate JSONs in one long String
    var jsonPart:String = "";
    var jsonVector:Vector.<String> = new Vector.<String>();
    var bracketsCount:int = 0;
    var endOfLastJson:int = 0;
    for (var i:int = 0; i < _cache.length; i++)
    {
        if (_cache.charAt(i) == "{") bracketsCount += 1;
        if (bracketsCount > 0) jsonPart = jsonPart.concat(_cache.charAt(i));
        if (_cache.charAt(i) == "}")
        {
            bracketsCount -= 1;
            if (bracketsCount == 0)
            {
                jsonVector.push(jsonPart);
                jsonPart = "";
                endOfLastJson = i;
            }
        }
    }

    // remove the part of the cache that isn't needed anymore
    if (jsonVector.length > 0)
    {
        _cache = _cache.substr(endOfLastJson + 1);
    }

    for each (var part:String in jsonVector)
    {
        trace("RECEIVED: " + part); // voila! here is the full received JSON
    }
}
According to Adobe, it appears that you are not facing a JSON problem but rather a Socket limitation.
A String you send over a Socket via writeUTF and readUTF is limited to 65,535 bytes, because the string is prefixed with a 16-bit unsigned length rather than being null-terminated.
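For illustration, a sketch of one way around that limit on the Java server side: frame each JSON payload with an explicit 4-byte length instead of relying on writeUTF (the helper names are illustrative, not from the question's code):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public final class JsonFraming {
    // Write the JSON with a 4-byte length prefix instead of writeUTF,
    // which caps the payload at 65,535 bytes.
    static void writeJson(DataOutputStream out, String json) throws IOException {
        byte[] bytes = json.getBytes(StandardCharsets.UTF_8);
        out.writeInt(bytes.length);
        out.write(bytes);
        out.flush();
    }

    static String readJson(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] bytes = new byte[length];
        in.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }
}

On the AS3 side, the matching reads would be socket.readInt() followed by socket.readUTFBytes(length).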