How does one decode an extension object obtained from a HistoryReadResult to the HistoryData type? I read documentation that suggests simply using the decode() method, but the only variant I can find in the source code is decode(EncoderContext).
You forgot to mention which stack or SDK you are using, but I can guess that it's the OPC Foundation Java Stack, in which case you can use EncoderContext.getDefaultInstance(). This will work fine with the standard structure types, such as HistoryData. For server-specific types, you may need to use a connection-specific EncoderContext.
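A minimal sketch of what that looks like, assuming the OPC Foundation Java Stack (the historyReadResult variable and the exact getter are illustrative, so check them against your version of the stack):

import org.opcfoundation.ua.builtintypes.ExtensionObject;
import org.opcfoundation.ua.core.HistoryData;
import org.opcfoundation.ua.encoding.EncoderContext;

// historyReadResult is a HistoryReadResult returned by the HistoryRead service
ExtensionObject ext = historyReadResult.getHistoryData();
HistoryData data = (HistoryData) ext.decode(EncoderContext.getDefaultInstance());
// HistoryData carries the actual values as an array of DataValue
for (org.opcfoundation.ua.builtintypes.DataValue dv : data.getDataValues()) {
    System.out.println(dv.getSourceTimestamp() + " = " + dv.getValue());
}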
Is there any way to debug what is causing this Sun PKCS#11 wrapper exception:
sun.security.pkcs11.wrapper.PKCS11Exception: CKR_TEMPLATE_INCONSISTENT
I would like to know which attribute of the PKCS#11 object is inconsistent so that I can fix it.
It is quite tricky to find out exactly which attribute is missing or provided incorrectly, and often the only way to fix this is trial and error. Since this exception is thrown by the token itself, it isn't logged anywhere useful, which makes it much more difficult to solve.
I would recommend first getting a better understanding of what type of token you are dealing with. This will give you a better idea of what kind of object template it expects.
For example, if the token only allows you to create sensitive keys and you set the Sensitive attribute to false, the token will complain. So you have to try combinations of attributes and see which one succeeds in creating the object.
Another thing you could do: if the token comes with its own SDK or tools that can interact with the token and create objects, create a test object using that SDK/tool, then use the PKCS#11 interface to extract the object and see what template it has. You could use this as a base template.
If it doesn't, you can try to create an object starting from a minimal template with the required values (a code sketch follows the list), like:
Id (some random value)
Label (alias name)
Token (true recommended)
Sensitive (true recommended)
Algorithm/Mechanism (CKM_RSA_PKCS_KEY_PAIR_GEN / CKM_AES_KEY_GEN)
Key Type (CKK_RSA / CKK_AES)
Value Length (optional)
Class (optional)
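For illustration, here is roughly what a minimal AES secret-key template could look like with the internal sun.security.pkcs11.wrapper API your exception comes from. This is a sketch, not a drop-in solution: p11 and session are assumed to be an initialized PKCS11 instance and an open, logged-in session handle, and the CK_ATTRIBUTE constructor overloads may differ between JDK versions, so verify them against your JDK:

import sun.security.pkcs11.wrapper.*;
import static sun.security.pkcs11.wrapper.PKCS11Constants.*;

CK_ATTRIBUTE[] template = new CK_ATTRIBUTE[] {
    new CK_ATTRIBUTE(CKA_CLASS,     CKO_SECRET_KEY),
    new CK_ATTRIBUTE(CKA_KEY_TYPE,  CKK_AES),
    new CK_ATTRIBUTE(CKA_TOKEN,     true),           // store on the token
    new CK_ATTRIBUTE(CKA_SENSITIVE, true),           // value may not leave the token
    new CK_ATTRIBUTE(CKA_LABEL,     "test-key".toCharArray()),
    new CK_ATTRIBUTE(CKA_VALUE_LEN, 32L)             // 32 bytes = AES-256
};
// If the token rejects this with CKR_TEMPLATE_INCONSISTENT, drop or flip
// one attribute at a time until you find the offending one.
long keyHandle = p11.C_GenerateKey(session, new CK_MECHANISM(CKM_AES_KEY_GEN), template);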
You can use a PKCS#11 logging wrapper, for instance: https://github.com/Pkcs11Interop/pkcs11-logger
You'll need some environment variables:
PKCS11_LOGGER_LIBRARY_PATH -> path to the real pkcs11 library
PKCS11_LOGGER_LOG_FILE_PATH -> path to the log file
PKCS11_LOGGER_FLAGS -> flags (take a look at the pkcs11-logger README.md file)
I was doing some experiments with a PKCS#11 lib in C#, and as far as I know it's the same for Java or even JavaScript, so...
When I tried to craft an RSA key pair, I got a problem when declaring:
CKA.CKA_CLASS, CKO.CKO_SECRET_KEY
Also, when working with AES keys, I got issues when I tried to create them with a value length (CKA_VALUE_LEN) different from 32.
I need to encode some strings in my Java program using BaseN encoding (similar to Base64, but we want to use a different base for different strings), and I found that it should be possible with Apache's BaseNCodec class. I found it and included it in my project, but I cannot make it work.
https://commons.apache.org/proper/commons-codec/apidocs/org/apache/commons/codec/binary/BaseNCodec.html
It says there that this is an abstract class, but I cannot extend it; I always get errors like "Inherited abstract methods are not accessible" in NetBeans. Are there any examples of how to use this library in a proper way?
As far as I can see, the BaseNCodec class (https://commons.apache.org/proper/commons-codec/apidocs/src-html/org/apache/commons/codec/binary/BaseNCodec.html) is not final, which means it can be inherited. One caveat: in the commons-codec source, the abstract encode/decode methods take a BaseNCodec.Context parameter, and that inner class is package-private, which would explain a "not accessible" error when subclassing outside the org.apache.commons.codec.binary package. Perhaps something went wrong in the way you tried to inherit; do you have a pointer to your source code?
The worst-case scenario is that you migrate the source file into your own code base, since it's open source.
To derive a non-"regular" codec, you can take a look at the two implementations that Apache Commons ships: Base64 (https://commons.apache.org/proper/commons-codec/apidocs/src-html/org/apache/commons/codec/binary/Base64.html) and Base32 (https://commons.apache.org/proper/commons-codec/apidocs/src-html/org/apache/commons/codec/binary/Base32.html). Unfortunately, it seems this won't be as simple as just defining a custom encoding table (ENCODE_TABLE) and decoding table (DECODE_TABLE). There is a fair amount of logic involved, for example around how fractional bits are carried across dictionary letters and how the end of the bit stream is treated.
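If subclassing keeps fighting you, here is a self-contained alternative sketch that sidesteps BaseNCodec entirely: treat the bytes as a big integer and convert it to an arbitrary radix. Note this is a different technique, not compatible with the Base64/Base32 wire format (no padding, no streaming), and the class name and marker-byte handling are made up for illustration:

import java.math.BigInteger;
import java.util.Arrays;

public class BigIntBaseN {
    private final String alphabet; // the alphabet length is the radix
    private final BigInteger base;

    public BigIntBaseN(String alphabet) {
        this.alphabet = alphabet;
        this.base = BigInteger.valueOf(alphabet.length());
    }

    public String encode(byte[] data) {
        // Prefix a 0x01 marker byte so leading zero bytes survive the round trip
        byte[] marked = new byte[data.length + 1];
        marked[0] = 1;
        System.arraycopy(data, 0, marked, 1, data.length);
        BigInteger n = new BigInteger(1, marked);
        StringBuilder sb = new StringBuilder();
        while (n.signum() > 0) {
            BigInteger[] qr = n.divideAndRemainder(base);
            sb.append(alphabet.charAt(qr[1].intValue()));
            n = qr[0];
        }
        return sb.reverse().toString();
    }

    public byte[] decode(String text) {
        BigInteger n = BigInteger.ZERO;
        for (int i = 0; i < text.length(); i++) {
            int digit = alphabet.indexOf(text.charAt(i));
            if (digit < 0) {
                throw new IllegalArgumentException("invalid character: " + text.charAt(i));
            }
            n = n.multiply(base).add(BigInteger.valueOf(digit));
        }
        byte[] marked = n.toByteArray(); // first byte is the 0x01 marker
        return Arrays.copyOfRange(marked, 1, marked.length);
    }
}

For example, new BigIntBaseN("0123456789abcdefghjkmnpqrstvwxyz") gives you a base-32-style codec with a custom alphabet, and an alphabet of a different length gives a different base.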
I couldn't find any documentation about the Scheme and MultiScheme interfaces of Apache Storm. The implementations are here:
https://github.com/apache/storm/tree/master/storm-core/src/jvm/backtype/storm/spout
But I don't understand when I should use Scheme and when I should use MultiScheme. Most example code I found was using implementations of MultiScheme and many also used the mysterious SchemeAsMultiScheme implementation.
Can anyone explain what Scheme, MultiScheme and SchemeAsMultiScheme are actually for? Is there a difference between RawMultiScheme and SchemeAsMultiScheme(new RawScheme())?
There is a decent description on the Storm-Kafka GitHub page (https://github.com/apache/storm/tree/master/external/storm-kafka):
"The default RawMultiScheme just takes the byte[] and returns a tuple with byte[] as is. The name of the outputField is "bytes". There are alternative implementations like SchemeAsMultiScheme and KeyValueSchemeAsMultiScheme which can convert the byte[] to String.
There is also an extension of SchemeAsMultiScheme, MessageMetadataSchemeAsMultiScheme, which has an additional deserialize method that accepts the message byte[] in addition to the Partition and offset associated with the message."
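To spell out the distinction: a Scheme deserializes one raw message into exactly one tuple, while a MultiScheme deserializes one raw message into zero or more tuples (an Iterable of tuples, or null to emit nothing), which is why spouts are written against MultiScheme. SchemeAsMultiScheme is just the adapter between the two, so RawMultiScheme and SchemeAsMultiScheme(new RawScheme()) behave the same for all practical purposes. Simplified from the backtype.storm.spout sources (older versions take byte[]; newer ones take java.nio.ByteBuffer), the adapter boils down to:

import java.util.Arrays;
import java.util.List;
import backtype.storm.tuple.Fields;

public class SchemeAsMultiScheme implements MultiScheme {
    private final Scheme scheme;

    public SchemeAsMultiScheme(Scheme scheme) {
        this.scheme = scheme;
    }

    @Override
    public Iterable<List<Object>> deserialize(byte[] ser) {
        List<Object> tuple = scheme.deserialize(ser); // one message -> one tuple
        return tuple == null ? null : Arrays.asList(tuple);
    }

    @Override
    public Fields getOutputFields() {
        return scheme.getOutputFields();
    }
}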
I am getting the following in an XML response:
<.........>
<stsuuser:Attribute name="authorized" type="urn:ibm:names:ITFIM:oauth:response:decision">
<stsuuser:Value>TRUE</stsuuser:Value>
</stsuuser:Attribute>
<.........>
Now, how can I find out whether <stsuuser:Value> is true or false using Java?
There are quite literally dozens of XML-parsing libraries for Java, but I find JOOX by far the simplest to use. It lacks quite a few common features of other libraries, but unlike them, it's a joy to use. It provides a jQuery-like API (it even has the $ function) and, in your case, you'd use it as follows:
import static org.joox.JOOX.$;
...
// Get the value of the "authorized" attribute
$(...).xpath("//Attribute[@name='authorized']/Value").text();
The $ function accepts a String, a File, a stream, a Reader, etc., so you can initialize it with pretty much anything.
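A fuller sketch, assuming you have saved the response to a file (the file name is made up, and the local-name() variant of the XPath is used in case the stsuuser namespace prefix keeps the plain //Attribute path from matching):

import static org.joox.JOOX.$;
import java.io.File;

public class AuthorizedCheck {
    public static void main(String[] args) throws Exception {
        String value = $(new File("response.xml"))
                .xpath("//*[local-name()='Attribute'][@name='authorized']"
                        + "/*[local-name()='Value']")
                .text();
        // Boolean.parseBoolean is case-insensitive, so "TRUE" -> true
        boolean authorized = Boolean.parseBoolean(value);
        System.out.println("authorized = " + authorized);
    }
}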
My problem is to serialize protobuf data in C++ and deserialize it in Java.
Here is the code I use, following the hints given by dcn:
With this I create the protobuf data in C++ and write it to an ostream, which is sent via socket:
Name name;
name.set_name("platzhirsch");
boost::asio::streambuf b;
std::ostream os(&b);
ZeroCopyOutputStream *raw_output = new OstreamOutputStream(&os);
CodedOutputStream *coded_output = new CodedOutputStream(raw_output);
// write a 4-byte little-endian size prefix, then the message itself
coded_output->WriteLittleEndian32(name.ByteSize());
name.SerializeToCodedStream(coded_output);
socket.send(b);
This is the Java side where I try to parse it:
NameProtos.Name name = NameProtos.Name.parseDelimitedFrom(socket.getInputStream());
System.out.println(name.newBuilder().build().toString());
However, with this I get the following exception:
com.google.protobuf.UninitializedMessageException: Message missing required fields: name
What am I missing?
The flawed code line is: name.newBuilder().build().toString()
This could never have worked: it builds a brand-new instance whose required name field was never set, instead of printing the message that was just parsed. Anyway, the answer here solved the rest of my problem.
One last thing, which I was told on the protobuf mailing list: in order to flush the CodedOutputStreams, the objects have to be deleted (their destructors do the flushing)!
delete coded_output;
delete raw_output;
I don't know what received is in your Java code, but your problem may be due to some charset conversion. Note also that protobuf does not delimit the messages when serializing.
Therefore you should transmit the messages as raw data (byte arrays, or (de)serialize directly from/to streams).
If you intend to send many messages, you should also send each message's size before the message itself.
In Java you can do this directly via parseDelimitedFrom(InputStream) and writeDelimitedTo(OutputStream). You can do the same in C++, a little more complex, using CodedOutputStream like:
codedOutput.WriteVarint32(protoMessage.ByteSize());
protoMessage.SerializeToCodedStream(&codedOutput);
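For completeness, the matching Java side of that varint-prefixed framing (socket handling elided; Name is the message type from the question):

// Sending: writeDelimitedTo prepends the varint length automatically
NameProtos.Name out = NameProtos.Name.newBuilder().setName("platzhirsch").build();
out.writeDelimitedTo(socket.getOutputStream());

// Receiving: parseDelimitedFrom reads the varint length, then exactly that many bytes
NameProtos.Name in = NameProtos.Name.parseDelimitedFrom(socket.getInputStream());
System.out.println(in.getName());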
See also this earlier thread.
You're writing two things to the stream, a size and the Name object, but only trying to read one.
As a general question: why do you feel the need to use CodedInputStream? To quote the docs:
"Typically these classes will only be used internally by the protocol buffer library in order to encode and decode protocol buffers. Clients of the library only need to know about this class if they wish to write custom message parsing or serialization procedures."
And to emphasize jtahlborn's comment: why little-endian? Java deals with big-endian values, so it will have to convert on reading.
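That said, if you keep the hand-rolled little-endian framing from the question, the Java side has to mirror it exactly: read the size first, then the payload. A sketch of what that could look like (method names are from com.google.protobuf.CodedInputStream; double-check them against your protobuf version):

import com.google.protobuf.CodedInputStream;

CodedInputStream cis = CodedInputStream.newInstance(socket.getInputStream());
int size = cis.readRawLittleEndian32();  // matches WriteLittleEndian32 on the C++ side
byte[] payload = cis.readRawBytes(size); // exactly 'size' bytes of message data
NameProtos.Name name = NameProtos.Name.parseFrom(payload);
System.out.println(name.getName());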