OPC UA: calling the Acknowledge method and getting Bad_EventIdUnknown in Java

Currently I am developing an OPC UA client using Eclipse Milo. I am able to read data from an OPC UA C++ server and to write setpoint data to it.
However, I am not able to acknowledge OPC UA alarms and events.
I tried with the node-red OPC UA server and I am receiving alarms; now I want to acknowledge them, so I wrote the following code that calls the acknowledgement method:
byte[] b = new byte[] {-66, -115, -121, -6, -39, 40, 89, 114, 60, -66, -126, -79, -72, -128, -21, 23, 0, 0, 4, -4};
ByteString eventId = new ByteString(b);              // EventId captured earlier
LocalizedText comment = new LocalizedText("bagiya"); // Comment argument
Variant[] v = new Variant[] { new Variant(eventId), new Variant(comment) };
// objectId = ns=1;i=1003, methodId = ns=1;i=1022 (the acknowledgement method)
CallMethodRequest rec = new CallMethodRequest(NodeId.parse("ns=1;i=1003"), NodeId.parse("ns=1;i=1022"), v);
System.out.println(rec.getTypeId() + "::=>" + client.call(rec).get());
The byte array above is the EventId that the node-red OPC UA server delivered, captured with the Eclipse Milo event subscription example code:
EventFilter eventFilter = new EventFilter(
    new SimpleAttributeOperand[] {
        new SimpleAttributeOperand(Identifiers.BaseEventType,
            new QualifiedName[] { new QualifiedName(0, "EventId") }, AttributeId.Value.uid(), null),
        new SimpleAttributeOperand(Identifiers.BaseEventType,
            new QualifiedName[] { new QualifiedName(0, "EventType") }, AttributeId.Value.uid(), null),
        new SimpleAttributeOperand(Identifiers.BaseEventType,
            new QualifiedName[] { new QualifiedName(0, "Severity") }, AttributeId.Value.uid(), null),
        new SimpleAttributeOperand(Identifiers.BaseEventType,
            new QualifiedName[] { new QualifiedName(0, "Time") }, AttributeId.Value.uid(), null),
        new SimpleAttributeOperand(Identifiers.BaseEventType,
            new QualifiedName[] { new QualifiedName(0, "Message") }, AttributeId.Value.uid(), null) },
    new ContentFilter(null));
The call returns the status below:
CallMethodResult{StatusCode=StatusCode{name=Bad_EventIdUnknown, value=0x809A0000, quality=bad}, InputArgumentResults=[StatusCode{name=Good, value=0x00000000, quality=good}, StatusCode{name=Good, value=0x00000000, quality=good}], InputArgumentDiagnosticInfos=[], OutputArguments=[]}
Kindly give me some suggestions to troubleshoot this problem.

In this line of code:
CallMethodRequest rec = new CallMethodRequest(NodeId.parse("ns=1;i=1003"),
        NodeId.parse("ns=1;i=1022"), v);
you pass two node ids: ns=1;i=1003 is the object the method is called on, and ns=1;i=1022 is the method itself. Now look at how the CallMethodResult is structured:
StatusCode{name=Bad_EventIdUnknown, value=0x809A0000, quality=bad}
is the overall result of the method call, and per the OPC UA status code reference it means:
Bad_EventIdUnknown, 0x809A0000 : The specified event id is not recognized.
The two entries in
InputArgumentResults=[StatusCode{name=Good, value=0x00000000, quality=good}, StatusCode{name=Good, value=0x00000000, quality=good}]
are the per-argument results for your two input Variants (EventId and Comment), and both are Good, so the arguments themselves were accepted.
Therefore, I would conclude that the problem is the EventId value itself: the server does not recognize your hard-coded byte array as the id of any event it currently holds.
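A minimal sketch of the usual fix, assuming the Milo client API used in the question: keep the EventId ByteString exactly as it arrives in the event notification (the first field selected by the EventFilter above) and pass that same value into the call, rather than re-building it from a hard-coded byte array. The helper name acknowledge is illustrative, and the two node ids mirror the question:
import java.util.concurrent.CompletableFuture;
import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.stack.core.types.builtin.ByteString;
import org.eclipse.milo.opcua.stack.core.types.builtin.LocalizedText;
import org.eclipse.milo.opcua.stack.core.types.builtin.NodeId;
import org.eclipse.milo.opcua.stack.core.types.builtin.Variant;
import org.eclipse.milo.opcua.stack.core.types.structured.CallMethodRequest;
import org.eclipse.milo.opcua.stack.core.types.structured.CallMethodResult;

// Hypothetical helper: acknowledge the event whose EventId we just received.
// eventId must be the ByteString taken verbatim from the event fields.
static CompletableFuture<CallMethodResult> acknowledge(
        OpcUaClient client, ByteString eventId, String comment) {
    Variant[] inputs = new Variant[] {
        new Variant(eventId),                    // EventId, unmodified
        new Variant(new LocalizedText(comment))  // Comment
    };
    CallMethodRequest request = new CallMethodRequest(
        NodeId.parse("ns=1;i=1003"),  // object exposing the acknowledge method
        NodeId.parse("ns=1;i=1022"),  // the acknowledge method itself
        inputs);
    return client.call(request);
}
Also note that many servers only recognize the EventId of the latest notification for a condition, so if the alarm has re-triggered since you captured the id, the old id will likewise produce Bad_EventIdUnknown.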

Group by multiple fields in a Java 8 stream

I have a class:
public class PublicationDTO {
    final String publicationName;
    final String publicationID;
    final String locale;
    final Integer views;
    final Integer shares;
    // all-args constructor and getters omitted for brevity
}
I need to get the sum of views and shares per group. For testing I created this list:
PublicationDTO publicationDTO1 = new PublicationDTO("Name1", "name1", "CA", 5, 6);
PublicationDTO publicationDTO2 = new PublicationDTO("Name2", "name2", "US", 6, 3);
PublicationDTO publicationDTO3 = new PublicationDTO("Name1", "name1", "CA", 10, 1);
PublicationDTO publicationDTO4 = new PublicationDTO("Name2", "name2", "CA", 2, 3);
List<PublicationDTO> publicationDTOS = List.of(publicationDTO1, publicationDTO2, publicationDTO3, publicationDTO4);
I want to group the objects in the list by publicationName, publicationID and locale, and get a result list like:
List.of(new PublicationDTO("Name1", "name1", "CA", 15, 7),
new PublicationDTO("Name2", "name2", "CA", 2, 3),
new PublicationDTO("Name2", "name2", "US", 6, 3));
I found a solution like:
List<PublicationDTO> collect = publicationDTOS.stream()
        .collect(groupingBy(PublicationDTO::getPublicationID))
        .values().stream()
        .map(dtos -> dtos.stream()
                .reduce((f1, f2) -> new PublicationDTO(f1.publicationName, f1.publicationID, f1.locale,
                        f1.views + f2.views, f1.shares + f2.shares)))
        .map(Optional::get)
        .collect(toList());
but the result is not grouped by locale, and I'm not sure it even groups correctly by publicationID. How do I use collectors properly in this case?
You are only grouping by getPublicationID:
.collect(groupingBy(PublicationDTO::getPublicationID))
Since the fields you want to group by are all strings, you could concatenate them and use the result as the grouping classifier:
.collect(Collectors.groupingBy(p -> p.getPublicationName() + "|" + p.getPublicationID() + "|" + p.getLocale()))
(The delimiter matters: without it, "ab" + "c" and "a" + "bc" would land in the same group.)
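A sketch of the whole aggregation, under the assumption that PublicationDTO has the matching all-args constructor and getters; using a List as the composite key avoids concatenation altogether, and the merge function of Collectors.toMap does the summing:
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

List<PublicationDTO> summed = new ArrayList<>(publicationDTOS.stream()
        .collect(Collectors.toMap(
                // composite key built from all three grouping fields
                p -> List.of(p.getPublicationName(), p.getPublicationID(), p.getLocale()),
                p -> p,
                // merge two DTOs with the same key by summing views and shares
                (a, b) -> new PublicationDTO(a.getPublicationName(), a.getPublicationID(), a.getLocale(),
                        a.getViews() + b.getViews(), a.getShares() + b.getShares())))
        .values());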

MongoDB incorrect BSON document format

I am trying to get the raw BSON document out of MongoDB. As per https://www.mongodb.com/json-and-bson, I was expecting the document received from the driver to look like this:
{"hello": "world"} →
\x16\x00\x00\x00 // total document size
\x02 // 0x02 = type String
hello\x00 // field name
\x06\x00\x00\x00world\x00 // field value
\x00 // 0x00 = type EOO ('end of object')
But I see it in readable JSON format instead.
The data inserted into MongoDB:
db.test.insert({"hello": "world"})
Sample code:
MongoClient mongoClient = MongoClients.create("mongodb://localhost:27018/admin");
MongoDatabase database = mongoClient.getDatabase("MYTEST");
MongoCollection<Document> coll = database.getCollection("testarr");
Publisher<Document> publisher = coll.find();
MongoDBObservableSubscriber<Document> subscriber = new MongoDBObservableSubscriber<Document>();
publisher.subscribe(subscriber);
Queue<Document> mongoCursor = subscriber.getResults();
while (true) {
    if (!mongoCursor.isEmpty()) {
        Document document = mongoCursor.poll();
        if (document != null) {
            System.out.println("==========");
            System.out.println("actual document -> " + document);
            System.out.println("Bson document->" + document.toBsonDocument());
            System.out.println("Json document->" + document.toJson(JsonWriterSettings.builder().outputMode(JsonMode.EXTENDED).build()));
            RawBsonDocument raw = RawBsonDocument.parse(document.toJson(JsonWriterSettings.builder().outputMode(JsonMode.EXTENDED).build()));
            System.out.println("raw document ->" + raw);
            final BsonDocument ceDoc = document.toBsonDocument();
            final OutputBuffer outputBuffer = new BasicOutputBuffer();
            final BsonWriter innerWriter = new BsonBinaryWriter(outputBuffer);
            BsonDocumentCodec bsonDocumentCodec = new BsonDocumentCodec();
            bsonDocumentCodec.encode(innerWriter, ceDoc, EncoderContext.builder().build());
            final BsonBinary encoded = new BsonBinary(outputBuffer.toByteArray());
            System.out.println("Encoded->" + encoded.toString());
            Bson bsonObject = BsonDocument.parse(document.toJson(JsonWriterSettings.builder().outputMode(JsonMode.EXTENDED).build()));
            System.out.println("bsonObject->" + bsonObject);
            System.out.println("==========");
        }
    }
}
output:
actual document -> Document{{_id=618b9c31759ba7a2fa73094c, hello=world}}
Bson document->{"_id": {"$oid": "618b9c31759ba7a2fa73094c"}, "hello": "world"}
Json document->{"_id": {"$oid": "618b9c31759ba7a2fa73094c"}, "hello": "world"}
raw document ->{"_id": {"$oid": "618b9c31759ba7a2fa73094c"}, "hello": "world"}
Encoded->BsonBinary{type=0, data=[39, 0, 0, 0, 7, 95, 105, 100, 0, 97, -117, -100, 49, 117, -101, -89, -94, -6, 115, 9, 76, 2, 104, 101, 108, 108, 111, 0, 6, 0, 0, 0, 119, 111, 114, 108, 100, 0, 0]}
bsonObject->{"_id": {"$oid": "618b9c31759ba7a2fa73094c"}, "hello": "world"}
None of these ways shows the output in the actual BSON format, i.e. as below:
\x16\x00\x00\x00 // total document size
\x02 // 0x02 = type String
hello\x00 // field name
\x06\x00\x00\x00world\x00 // field value
\x00 // 0x00 = type EOO ('end of object')
Is there a way to get the BSON in this raw format?
Every time you do a println, you are implicitly converting the object to a string.
The MongoDB Java driver implements that string conversion by rendering the BSON as its JSON representation, which is what you are seeing:
System.out.println("Bson document->"+document.toBsonDocument());
//----------------------------------^ this concatenation turns the bson
// object to the string representation (JSON)
The Encoded-> line is the one that already shows the true BSON bytes.
I think you're confused because MongoDB automatically adds an _id field (an ObjectId) to every document inserted without one. ObjectId is element type 0x07 in the BSON specification; it simply doesn't appear in the spec's minimal {"hello": "world"} example. Since you're pulling the data back out of MongoDB, you're seeing it.
Had you generated the BSON locally and parsed it, you would not see the ObjectId:
Encoded->BsonBinary{type=0, data=[39, 0, 0, 0, 7, 95, 105, 100, 0, 97, -117, -100, 49, 117, -101, -89, -94, -6, 115, 9, 76, 2, 104, 101, 108, 108, 111, 0, 6, 0, 0, 0, 119, 111, 114, 108, 100, 0, 0]}
Parsing this out per the BSON documentation: notice the size is 39 bytes (larger than the spec's 22-byte example) because MongoDB added the _id ObjectId to the document.
The latter half of the document has the key/value pair exactly as you expect:
\x02hello\x00\x06\x00\x00\x00world\x00
The full breakdown (bytes shown as the signed decimals from the array):
39, 0, 0, 0        // total document size = 39 (0x27)
7                  // 0x07 = type ObjectId
95, 105, 100, 0    // field name "_id", null-terminated
97, -117, -100, 49, 117, -101, -89, -94, -6, 115, 9, 76  // the 12-byte ObjectId
2                  // 0x02 = type String
104, 101, 108, 108, 111, 0               // field name "hello", null-terminated
6, 0, 0, 0, 119, 111, 114, 108, 100, 0   // field value "world", length-prefixed and null-terminated
0                  // 0x00 = type EOO ('end of object')
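If you want to print the document in the \xNN notation the specification uses, a minimal sketch is to hex-dump the bytes you already produced with the BsonBinaryWriter (reusing outputBuffer from the question):
StringBuilder sb = new StringBuilder();
for (byte by : outputBuffer.toByteArray()) {
    // mask to unsigned so negative bytes print as \x97 rather than \xffffff97
    sb.append(String.format("\\x%02x", by & 0xff));
}
System.out.println(sb); // \x27\x00\x00\x00\x07\x5f\x69\x64\x00... for the document above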

PriorityQueue incl. .poll() and Comparable explanation? (Java)

I am currently stuck on the following problem:
I have a class "WaitingRoom" and a class "Patient". Patients with the status "emergency" come first in the queue (a PriorityQueue<Patient> waitingRoom; their arrival time does not matter), and all other patients are sorted by their appointment time.
I am testing my program with the following code:
WaitingRoom wz = new WaitingRoom();
Calendar c = Calendar.getInstance(Locale.GERMANY);
c.set(2009, Calendar.OCTOBER, 21, 9, 35);
wz.comesIn(new Patient("Smith Jones", c.getTime(), false));
c.set(2009, Calendar.OCTOBER, 21, 9, 30);
wz.comesIn(new Patient("Mueller Johan", c.getTime(), false));
wz.comesIn(new Patient("Emergency Brooklyn", new Date(), true));
c.set(2009, Calendar.OCTOBER, 21, 8, 30);
wz.comesIn(new Patient("Richard Smith", c.getTime(), false));
c.set(2009, Calendar.OCTOBER, 21, 9, 31);
wz.comesIn(new Patient("Kimberly Adams", c.getTime(), false));
c.set(2009, Calendar.OCTOBER, 21, 9, 29);
wz.comesIn(new Patient("Random Name", c.getTime(), false));
Patient patient = wz.getNextPatient();
assertEquals("Emergency Brooklyn", patient.getName());
patient = wz.getNextPatient();
assertEquals("Richard Smith", patient.getName());
patient = wz.getNextPatient();
assertEquals("Random Name", patient.getName());
The method .getNextPatient() returns waitingRoom.poll();
And my compareTo:
public int compareTo(Patient other) {
    if (other.emergency) {
        // the other patient is an emergency: this one goes after them
        return 1;
    } else {
        if (this.emergency) {
            // only this patient is an emergency: this one goes first
            return -1;
        } else {
            // neither is an emergency: order by appointment time
            return this.appointment.compareTo(other.appointment);
        }
    }
}
The code works fine, but I do not understand how the sorting works. I tried to follow it in the debugger, but I still don't get it. It compares patients when a new patient comes in (how does it choose which existing patient to compare against?) and again when I call poll(); the sketch below shows the same behaviour.
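A minimal sketch of that behaviour with a plain PriorityQueue<Integer> (java.util.PriorityQueue is a binary heap: offer() only compares the new element against its parents on the way up, and poll() removes the head and sifts the last element down, so the queue is never kept fully sorted; only the head is guaranteed to be the smallest element):
import java.util.PriorityQueue;

PriorityQueue<Integer> pq = new PriorityQueue<>();
pq.offer(5);                   // heap: [5]
pq.offer(1);                   // 1 swapped with its parent 5: [1, 5]
pq.offer(3);                   // 3 kept below its parent 1:   [1, 5, 3]
System.out.println(pq);        // prints internal heap order [1, 5, 3], not sorted order
System.out.println(pq.poll()); // 1 -- head removed, last element sifted down: [3, 5]
System.out.println(pq.poll()); // 3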
Could you please explain how this works?
Thanks in advance!

create dummy SearchResponse instance for ElasticSearch test case

I'm trying to create a dummy SearchResponse object by passing values to the constructor manually. I have a JUnit test class in which I use this dummy value to mock the actual method call. I am trying with the method below:
public SearchResponse actionGet() throws ElasticsearchException {
    ShardSearchFailure[] shardFailures = new ShardSearchFailure[0];
    int docId = 0;
    String id = "5YmRf-6OTvelt29V5dphmw";
    Map<String, SearchHitField> fields = null;
    InternalSearchHit internalSearchHit = new InternalSearchHit(docId, id, null, fields);
    InternalSearchHit[] internalSearchHit1 = { internalSearchHit };
    InternalSearchResponse EMPTY = new InternalSearchResponse(
            new InternalSearchHits(internalSearchHit1, 0, 0), null, null, null, false);
    SearchResponse searchResponse = new SearchResponse(EMPTY, "scrollId", 1, 1, 1000, shardFailures);
    return searchResponse;
}
And here is the actual JSON I get when querying Elasticsearch directly:
{
    "took": 3,
    "timed_out": false,
    "_shards": {
        "total": 3,
        "successful": 3,
        "failed": 0
    },
    "hits": {
        "total": 28,
        "max_score": null,
        "hits": [
            {
                "_index": "monitoring",
                "_type": "quota-management",
                "_id": "5YmRf-6OTvelt29V5dphmw",
                "_score": null,
                "_source": {
                    "@timestamp": "2014-08-20T15:43:20.762Z",
                    "category_name": "cat1111",
                    "alert_message": "the new cpu threshold has been reached 80%",
                    "alert_type": "Critical",
                    "view_mode": "unread"
                },
                "sort": [
                    1408549226173
                ]
            }
        ]
    }
}
I want to create a similar response by constructing the actual SearchResponse object, but I couldn't find a way to set the values in the InternalSearchHit[]. Please let me know how I can do this.
This will do what you want:
SearchShardTarget shardTarget = new SearchShardTarget("1", "monitoring", 1);
ShardSearchFailure[] shardFailures = new ShardSearchFailure[0];
float score = 0.2345f;
BytesReference source = new BytesArray("{\"@timestamp\":\"2014-08-20T15:43:20.762Z\",\"category_name\""
        + ":\"cat1111\",\"alert_message\":\"the new cpu threshold has been reached 80%\",\"alert_type\":"
        + "\"Critical\",\"view_mode\":\"unread\"}");
InternalSearchHit hit = new InternalSearchHit(1, "5YmRf-6OTvelt29V5dphmw", new StringText("quota-management"), null);
hit.shardTarget(shardTarget);
hit.sourceRef(source);
hit.score(score);
InternalSearchHit[] hits = new InternalSearchHit[] { hit };
InternalSearchHits internalSearchHits = new InternalSearchHits(hits, 28, score);
InternalSearchResponse internalSearchResponse = new InternalSearchResponse(internalSearchHits, null, null, null, false);
SearchResponse searchResponse = new SearchResponse(internalSearchResponse, "scrollId", 1, 1, 1000, shardFailures);
If you call toString() on searchResponse it returns:
{
  "_scroll_id" : "scrollId",
  "took" : 1000,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "hits" : {
    "total" : 28,
    "max_score" : 0.2345,
    "hits" : [ {
      "_index" : "monitoring",
      "_type" : "quota-management",
      "_id" : "5YmRf-6OTvelt29V5dphmw",
      "_score" : 0.2345,
      "_source":{"@timestamp":"2014-08-20T15:43:20.762Z","category_name":"cat1111","alert_message":"the new cpu threshold has been reached 80%","alert_type":"Critical","view_mode":"unread"}
    } ]
  }
}
This works for me in Elasticsearch 6.5:
BytesReference source = new BytesArray("{your json response come here}");
SearchHit hit = new SearchHit(1);
hit.sourceRef(source);
SearchHits hits = new SearchHits(new SearchHit[] { hit }, 5, 10);
SearchResponseSections searchResponseSections = new SearchResponseSections(hits, null, null, false, null, null, 5);
SearchResponse searchResponse = new SearchResponse(searchResponseSections, null, 8, 8, 0, 8, new ShardSearchFailure[] {});
return searchResponse;
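A small usage sketch in a test, assuming the 6.5 snippet above is wrapped in a hypothetical buildDummyResponse() helper and JUnit's assertEquals is statically imported; the expected values simply mirror the constructor arguments:
SearchResponse searchResponse = buildDummyResponse();
assertEquals(5, searchResponse.getHits().getTotalHits()); // totalHits given to SearchHits
assertEquals(8, searchResponse.getTotalShards());         // totalShards given to SearchResponse
assertEquals(0, searchResponse.getFailedShards());        // the shard failure array is empty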

Deprecated code when using avro-maven-plugin version 1.6.1

I'm running Java code that uses Apache Avro, and some of the code in the Java file gets flagged as deprecated; I'm not sure why.
I'm using Maven to build and run the program.
This is the Java file:
public class AvroAddressTest {

    public int tempRand;

    static String[] NAMES = { "Karthik", "Sam", "Joe", "Jess", "Tom",
            "Huck", "Hector", "Duke", "Jill", "Natalie", "Chirsta", "Ramya" };
    static String[] EMAILS = { "kar@gmail.com", "steve@gmail.com",
            "garry@gmail.com", "kumar@hotmail.com", "dave@hotmail.com",
            "will@hotmail.com", "rick@ymail.com", "vinod@ymail.com",
            "basu@ymail.com", "sachin@ymail.com", "chester@ymail.com",
            "anand@ymail.com" };
    static String[] PHONE_NUMBERS = { "9940099321", "9940099456",
            "9934099333", "9940099567", "9940077654", "9940088323",
            "9940097543", "9940099776", "9940000981", "9940088444",
            "9940099409", "9940033987" };
    static int[] AGES = { 32, 43, 23, 21, 55, 34, 33, 31, 22, 41, 56, 62 };
    static boolean[] STU = { true, false, true, true, false, false, true, false, true, false, false, true };

    public void serializeGeneric() throws IOException {
        // Create a datum to serialize.
        Schema schema = new Schema.Parser().parse(getClass()
                .getResourceAsStream("/AddressRec.avsc"));
        GenericRecord datum = new GenericData.Record(schema);
        Random random = new Random();
        int randInt = random.nextInt(NAMES.length);
        datum.put("name", new Utf8(NAMES[randInt]));
        datum.put("email", new Utf8(EMAILS[randInt]));
        datum.put("phone", new Utf8(PHONE_NUMBERS[randInt]));
        datum.put("age", AGES[randInt]);
        datum.put("student", STU[randInt]);
        //datum.put("door", new Utf8(NAMES[randInt]));

        // Serialize it.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DatumWriter<GenericRecord> writer = new GenericDatumWriter<GenericRecord>(schema);
        Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(datum, encoder);
        encoder.flush();
        out.close();
        System.out.println("\nSerialization: " + out);

        // Deserialize it.
        DatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema);
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord result = reader.read(null, decoder);
        System.out.printf(
                "Deserialized output:\nName: %s, Email: %s, Phone: %s, Age: %d, Student?: %s\n\n",
                result.get("name"), result.get("email"), result.get("phone"),
                result.get("age"), result.get("student"));
    }

    public void serializeSpecific() throws IOException {
        // Create a datum to serialize.
        AddressRec datum = new AddressRec();
        Random random = new Random();
        int randInt = random.nextInt(NAMES.length);
        // these direct field accesses are the ones flagged as deprecated
        datum.name = new Utf8(NAMES[randInt]);
        datum.email = new Utf8(EMAILS[randInt]);
        datum.phone = new Utf8(PHONE_NUMBERS[randInt]);
        datum.age = AGES[randInt];
        datum.student = STU[randInt];
        File tmpFile = File.createTempFile("AddressRecAvroExample", ".avro");

        // Serialize it.
        DataFileWriter<AddressRec> writer = new DataFileWriter<AddressRec>(
                new SpecificDatumWriter<AddressRec>(AddressRec.class));
        writer.create(AddressRec.SCHEMA$, tmpFile);
        writer.append(datum);
        writer.close();
        System.out.println("\nSerialization to tempfile: " + tmpFile);

        // Deserialize it.
        FileReader<AddressRec> reader = DataFileReader.openReader(tmpFile,
                new SpecificDatumReader<AddressRec>(AddressRec.class));
        while (reader.hasNext()) {
            AddressRec result = reader.next();
            // ...as are these direct field reads
            System.out.printf("Deserialized output:\nName: %s, Email: %s, Phone: %s, Age: %d, Student?: %s\n\n",
                    result.name, result.email, result.phone, result.age, result.student);
        }
        reader.close();
    }

    @Test
    public void serializeTest() throws IOException {
        serializeGeneric();
        serializeSpecific();
    }
}
What is the problem? The field accesses marked in the code above are what get flagged as deprecated.
This is the .avsc file
{
    "type": "record",
    "name": "AddressRec",
    "namespace": "com.mycompany.samples.avro",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "email", "type": "string"},
        {"name": "phone", "type": "string"},
        {"name": "age", "type": "int"},
        {"name": "student", "type": "boolean"}
    ]
}
The program runs fine; it's just that some code is deprecated. The same code is not deprecated when I use version 1.5.1.
The only thing I can think of (since you didn't provide the actual warning messages) is that, from Avro 1.6 on, the public fields of generated specific classes are annotated as deprecated: instead of directly accessing the field values (datum.foo = x) you should use the accessor methods.
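For illustration, a sketch of the assignments in serializeSpecific() rewritten to use the generated accessors, assuming AddressRec was regenerated with avro-maven-plugin 1.6.1 so the setters and getters exist (they accept CharSequence, so the Utf8 wrappers still work):
datum.setName(new Utf8(NAMES[randInt]));
datum.setEmail(new Utf8(EMAILS[randInt]));
datum.setPhone(new Utf8(PHONE_NUMBERS[randInt]));
datum.setAge(AGES[randInt]);
datum.setStudent(STU[randInt]);

// ...and on the read side:
System.out.printf("Name: %s, Age: %d\n", result.getName(), result.getAge());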
