BLS Signature Scheme in Java: Storing and Loading Keys

We need a way to sign messages with signatures that are as short as possible and came across the BLS scheme, which promises exactly that. Trying the JPBC implementation, we found the examples easy to set up and run, but they lack a rather crucial part: storing and loading the private keys.
The example from the current JPBC BLS website [1] does not contain any storage whatsoever; it just verifies a message using the instances held in RAM.
An older example from the same website [2], which is no longer linked there but can be found using search engines, refers to a store method that seems to have since been removed from the library in favour of an implementation without any storage capabilities.
The AsymmetricCipherKeyPair instances (which are what I get from the keygen) are not serializable by themselves, and neither are instances of BLS01PublicKeyParameters or BLS01PrivateKeyParameters; the fields that contain the keys (sk and pk) are private and typed only as the Element interface, which doesn't say much about their contents.
As a workaround, I have implemented a store method that (stripped of all exception handling) roughly looks like this:
public static void storePrivateKey(AsymmetricCipherKeyPair key, String filename)
        throws FileNotFoundException, IOException {
    Field f = key.getPrivate().getClass().getDeclaredField("sk");
    if (f != null) {
        f.setAccessible(true);
        Object fieldContent = f.get(key.getPrivate());
        if (fieldContent != null) {
            byte[] data = null;
            if (fieldContent instanceof ImmutableZrElement) {
                ImmutableZrElement izr = (ImmutableZrElement) fieldContent;
                data = izr.toBytes();
            }
            try (FileOutputStream fos = new FileOutputStream(filename)) {
                fos.write(data);
            }
        }
    }
}
With a similar approach for public keys. That means I'm now down to using reflection to retrieve the contents of a private field in order to store it somewhere. That solution is obviously a hackish collection of all sorts of bad smells, but it's so far the best that I've come up with. I know that writing some bytes to disk shouldn't really be that hard, but I really can't seem to find the proper way to do this. Also, to be blunt, I'm not into crypto: I want to apply this scheme to sign and verify some messages, that is all. I understand that I should dig deeper into the math of the whole approach, but time is limited - which is why I picked a library in the first place.
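For the load side, the best I've sketched so far (untested, and based only on my reading of the JPBC sources, so treat the details as assumptions) reads the bytes back and rebuilds the element in Zr via the pairing, which is where the BLS01 secret key appears to live; the curve parameters file is assumed to be the same one used at key generation time:
import it.unisa.dia.gas.jpbc.Element;
import it.unisa.dia.gas.jpbc.Pairing;
import it.unisa.dia.gas.plaf.jpbc.pairing.PairingFactory;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public static Element loadPrivateKeyElement(String curveParamsFile, String filename)
        throws IOException {
    // Read the raw bytes written by storePrivateKey
    byte[] data = Files.readAllBytes(Paths.get(filename));
    // Rebuild the pairing from the same parameters used for key generation
    Pairing pairing = PairingFactory.getPairing(curveParamsFile);
    // The BLS01 secret key is an element of Zr, so reconstruct it there
    return pairing.getZr().newElementFromBytes(data).getImmutable();
}
Wrapping that element back into a BLS01PrivateKeyParameters instance would be the remaining step; whether its constructor accepts an Element directly is something I would still need to verify.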

Related

JAVA - Most efficient way to remove file extension

I want to remove the extension of a file. For example:
ActualFile = image.png
ExpectedFile = image
Which method is the most efficient to use?
removeExtension() method provided by org.apache.commons.io
fileName.substring(0, fileName.lastIndexOf('.'))
No difference, except one: do you really want to add the whole Apache library just to use one method? If you already use Apache commons-io in your application, then use it here as well. If not, create your own implementation, as sketched below; this is not rocket science.
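A rough sketch of such a hand-rolled helper (not the commons-io code, just an illustration) that at least guards against file names without a dot and against dots that belong to a parent directory:
public static String removeExtension(String fileName) {
    if (fileName == null) {
        return null;
    }
    int dot = fileName.lastIndexOf('.');
    // Ignore dots that are part of a directory name, e.g. "some.dir/readme"
    int separator = Math.max(fileName.lastIndexOf('/'), fileName.lastIndexOf('\\'));
    if (dot <= separator) {
        return fileName; // no extension to strip
    }
    return fileName.substring(0, dot);
}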
Looking at removeExtension from commons-io, you can see that substring is also used under the hood in a similar way to what you describe:
public static String removeExtension(final String fileName) {
    if (fileName == null) {
        return null;
    }
    failIfNullBytePresent(fileName);
    final int index = indexOfExtension(fileName);
    if (index == NOT_FOUND) {
        return fileName;
    }
    return fileName.substring(0, index);
}
Your method is faster since fewer operations are being done, but the removeExtension method calls failIfNullBytePresent, which states:
Check the input for null bytes, a sign of unsanitized data being passed to file level functions.
This may be used for poison byte attacks.
It also calls indexOfExtension to get the index of the extension, which performs more checks (as you can see in the javadoc of that method here).
Conclusion
Your method is faster, but I'd say using commons-io is safer and more consistent across situations. Which to use depends on how complex your situation is and whether it's a critical feature of an application or just a home-made project for yourself. removeExtension is not so complex or slow that you shouldn't use it per se.

recognize parameter change from git repository

I want to extract signature changes (method parameter changes, to be exact) from the commits of a git repository in a Java program. I have used the following code:
for (Ref branch : branches) {
    String branchName = branch.getName();
    for (RevCommit commit : commits) {
        boolean foundInThisBranch = false;
        RevCommit targetCommit = walk.parseCommit(repo.resolve(commit.getName()));
        for (Map.Entry<String, Ref> e : repo.getAllRefs().entrySet()) {
            if (e.getKey().startsWith(Constants.R_HEADS)) {
                if (walk.isMergedInto(targetCommit, walk.parseCommit(e.getValue().getObjectId()))) {
                    String foundInBranch = e.getValue().getName();
                    if (branchName.equals(foundInBranch)) {
                        foundInThisBranch = true;
                        break;
                    }
                }
            }
        }
    }
}
I can extract the commit message, commit date and author name from that; however, I am not able to extract parameter changes. I want to know if there is any way to recognize them. They cannot reliably be recognized from the commit notes written by programmers; I am looking for something like a specific annotation or some other mechanism.
This is my code to extract differences:
CanonicalTreeParser oldTreeIter = new CanonicalTreeParser();
oldTreeIter.reset(reader, oldId);
CanonicalTreeParser newTreeIter = new CanonicalTreeParser();
newTreeIter.reset(reader, headId);
List<DiffEntry> diffs = git.diff()
        .setNewTree(newTreeIter)
        .setOldTree(oldTreeIter)
        .call();
ByteArrayOutputStream out = new ByteArrayOutputStream();
DiffFormatter df = new DiffFormatter(out);
df.setRepository(git.getRepository());
The output is really huge, and it is impossible to extract method changes from it.
You show a way you've found to examine the diffs, but say that the output is too large and you can't extract the method signature changes. If by that you mean that you're asking about specific git support for telling you that a method signature changes, then no - no such support exists. This is because git does not "know" anything about the languages you may or may not have used in the files under source control. Everything is just content that is, or is not, different from other content.
Since a method signature could be split across lines in any number of ways, it's not even guaranteed that, just because a method's signature changed, its name would appear anywhere in the diff. What you would really have to do is perform a sort of "structural diff". That is, you would have to
check out the "old" version, and pass it to a java parser
check out the "new" version, and pass it to a java parser
compare the resulting parse trees, looking for methods that belong to the same object, but have changed
Even that won't be terribly easy, because methods could be renamed, and because method overloading could make it unclear which signature change goes with which version of a method.
From there what you have is a non-trivial coding problem, which is beyond the scope of SO to answer. If you decide to tackle this problem and run into specific programming questions along the way, of course you could post those questions and perhaps someone will be able to help.
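If you do decide to tackle it, a parser library can at least give you the per-version view. The sketch below (which assumes the JavaParser library is on the classpath -- that choice is mine, not anything JGit provides) compares the method signatures found in the old and new contents of a single .java file:
import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.MethodDeclaration;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class SignatureDiff {

    // Collect the signatures of all methods declared in one version of a file
    public static Set<String> signaturesOf(String javaSource) {
        CompilationUnit cu = StaticJavaParser.parse(javaSource);
        return cu.findAll(MethodDeclaration.class).stream()
                .map(m -> m.getSignature().asString())
                .collect(Collectors.toSet());
    }

    // Report signatures present in only one of the two versions; a rename or a
    // parameter change shows up as one removed and one added signature
    public static void printChanges(String oldSource, String newSource) {
        Set<String> oldSignatures = signaturesOf(oldSource);
        Set<String> newSignatures = signaturesOf(newSource);

        Set<String> removed = new HashSet<>(oldSignatures);
        removed.removeAll(newSignatures);
        Set<String> added = new HashSet<>(newSignatures);
        added.removeAll(oldSignatures);

        removed.forEach(s -> System.out.println("removed or changed: " + s));
        added.forEach(s -> System.out.println("added or changed: " + s));
    }
}
Distinguishing a genuine parameter change from a rename or an overload is exactly the ambiguity mentioned above, so treat this only as a starting point.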

Read data from an `InputStream`

Disclaimer: I work on a non-traditional project, so don't be shocked if some assumptions seem absurd.
Context
I wish to create a stream reader for integers, strings, and the other common types in Scala, but to start with I am focusing only on integers. Also note that I'm not interested in handling exceptions at the moment -- I'll deal with them in due time and this will be reflected in the API; in the meantime I can make the huge assumption that failures won't occur.
The API should be relatively simple, but due to the nature of the project I'm working on, I can't rely on some features of Scala, and the API needs to look something like this (slightly simplified for the purpose of this question):
object FileInputStream {
  def open(filename: String): FileInputStream =
    new FileInputStream(
      try {
        // Check whether the stream can be opened or not
        val out = new java.io.FileReader(filename)
        out.close()
        Some[String](filename)
      } catch {
        case _: Throwable => None[String]
      }
    )
}

case class FileInputStream(var filename: Option[String]) {
  def close: Boolean = {
    filename = None[String]
    true // This implementation never fails
  }
  def isOpen: Boolean = filename.isDefined
  def readInt: Int = nativeReadInt
  private def nativeReadInt: Int = {
    ??? // TODO
  }
}

object StdIn {
  def readInt: Int = nativeReadInt
  private def nativeReadInt: Int = {
    ??? // TODO
  }
}
Please also note that I cannot rely on additional fields in this class, with the exception of Int variables. This (probably) implies that the stream has to be opened and closed for every operation. Hence, it goes without saying that the implementation will not be efficient, but this is not an issue.
The Question
My goal is to implement the two nativeReadInt methods such that only one integer is consumed from the input stream if one is available straight away. However, if the input doesn't start (w.r.t. the last read operation) with an integer, then nothing should be read and a fixed value can be returned, say -1.
I've explored several high-level Java and Scala standard APIs, but none seemed to offer a way to re-open a stream at a given position trivially. My hope is to avoid implementing low-level parsing based solely on java.io.InputStream and its read() and skip(n) methods.
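For illustration, the kind of hand-rolled parsing I'm hoping not to write myself would look roughly like the following (sketched in plain Java, since that is the layer I would end up talking to anyway; it handles only non-negative integers and uses a PushbackInputStream so the byte after the number is not lost):
import java.io.IOException;
import java.io.PushbackInputStream;

public static int readIntOrMinusOne(PushbackInputStream in) throws IOException {
    int c = in.read();
    if (c < '0' || c > '9') {
        if (c != -1) {
            in.unread(c); // not a digit: leave the stream untouched
        }
        return -1;
    }
    int value = 0;
    while (c >= '0' && c <= '9') {
        value = value * 10 + (c - '0');
        c = in.read();
    }
    if (c != -1) {
        in.unread(c); // push back the first byte after the number
    }
    return value;
}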
Additionally, to let the user read from the standard input stream, I need to avoid the scala.io.StdIn.readInt() method because it reads "an entire line of the default input", thereby discarding any data that follows on the same line.
Are you aware of a Java or Scala API that could do the trick here?
Thank you

Enumerate Custom Slot Values from Speechlet

Is there any way to inspect or enumerate the Custom Slot Values that are set up in your interaction model? For instance, say you have an intent schema with the following intent:
{
  "intent": "MySuperCoolIntent",
  "slots": [
    {
      "name": "ShapesNSuch",
      "type": "LIST_OF_SHAPES"
    }
  ]
}
Furthermore, you've defined the LIST_OF_SHAPES Custom Slot to have the following Values:
SQUARE
TRIANGLE
CIRCLE
ICOSADECAHECKASPECKAHEDRON
ROUND
HUSKY
Question: is there a method I can call from my Speechlet or my RequestStreamHandler that will give me an enumeration of those Custom Slot Values?
I have looked through the Alexa Skills Kit SDK Javadocs located here, and I'm not finding anything.
I know I can get the Slot's value that is sent in with the intent:
String slotValue = incomingIntentRequest.getIntent().getSlot("ShapesNSuch").getValue();
I can even enumerate ALL the incoming Slots (and with them their values):
Map<String, Slot> slotMap = incomingIntentRequest.getIntent().getSlots();
for (Map.Entry<String, Slot> entry : slotMap.entrySet()) {
    String key = entry.getKey();
    Slot slot = entry.getValue();
    String slotName = slot.getName();
    String slotValue = slot.getValue();
    // do something nifty with the current slot info....
}
What I would really like is something like:
String myAppId = "amzn1.echo-sdk-ams.app.<TheRestOfMyID>";
List<String> posibleSlotValues = SomeMagicAlexaAPI.getAllSlotValues(myAppId, "LIST_OF_SHAPES");
With this information I wouldn't have to maintain two separate "lists" or "enumerations": one within the interaction model and another within my request handler. Seems like this should be a thing, right?
No, the API does not allow you to do this.
However, since your interaction model is intimately tied to your development, I would suggest checking the model into your source control system alongside your code. If you are going to do that, you might as well put it with your source, and depending on your language, that also means you can probably read it at run-time.
Using this technique, you can gain access to your interaction model at run-time. Instead of doing it automatically through an API, you do it by best practice.
You can see several examples of this in action for Java in TsaTsaTzu's examples.
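For example, if the slot values are kept in a plain text file that is checked in next to the code and packaged as a resource (the file name below is purely hypothetical), they can be read back at run-time from the classpath:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SlotValues {

    // Reads one slot value per line from a resource such as
    // src/main/resources/LIST_OF_SHAPES.txt (a hypothetical file kept in
    // sync with the interaction model by hand or by a build step).
    public static List<String> load(String resourceName) throws IOException {
        List<String> values = new ArrayList<>();
        try (InputStream in = SlotValues.class.getResourceAsStream("/" + resourceName);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (!line.trim().isEmpty()) {
                    values.add(line.trim());
                }
            }
        }
        return values;
    }
}
The same idea works if you check in the interaction model JSON itself and parse it with your favourite JSON library.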
No - there is nothing in the API that allows you to do that.
You can see the full extent of the Request Body structure Alexa gives you to work with. It is very simple and available here:
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interface-reference#Request%20Format
Please note that the request body is not to be confused with the request, which is a structure inside the request body with two siblings: version and session.

Google App Engine Objectify - load single objects or list of keys?

I am trying to get a grasp on Google App Engine programming and wonder what the difference between these two methods is - if there even is a practical difference.
Method A)
public Collection<Conference> getConferencesToAttend(Profile profile)
{
    List<String> keyStringsToAttend = profile.getConferenceKeysToAttend();
    List<Conference> conferences = new ArrayList<Conference>();
    for (String conferenceString : keyStringsToAttend)
    {
        conferences.add(ofy().load().key(Key.create(Conference.class, conferenceString)).now());
    }
    return conferences;
}
Method B)
public Collection<Conference> getConferencesToAttend(Profile profile)
{
    List<String> keyStringsToAttend = profile.getConferenceKeysToAttend();
    List<Key<Conference>> keysToAttend = new ArrayList<>();
    for (String keyString : keyStringsToAttend) {
        keysToAttend.add(Key.<Conference>create(keyString));
    }
    return ofy().load().keys(keysToAttend).values();
}
the "conferenceKeysToAttend" list is guaranteed to only have unique Conferences - does it even matter then which of the two alternatives I choose? And if so, why?
Method A loads entities one by one while method B does a bulk load, which is cheaper, since you're making just one network round trip to Google's datacenter. You can observe this by measuring the time taken by both methods while loading a bunch of keys multiple times, as in the crude comparison below.
While doing a bulk load, you need to be cautious about the loaded entities if the datastore operation throws an exception: the operation might succeed even though some of the entities are not loaded.
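A crude way to see the difference, assuming the two methods from the question are renamed getConferencesToAttendA and getConferencesToAttendB for the comparison (the naming is mine, just for this measurement):
long start = System.nanoTime();
getConferencesToAttendA(profile);   // one datastore round trip per key
long oneByOne = System.nanoTime() - start;

start = System.nanoTime();
getConferencesToAttendB(profile);   // single bulk load
long bulk = System.nanoTime() - start;

System.out.println("one-by-one: " + oneByOne / 1_000_000 + " ms, bulk: " + bulk / 1_000_000 + " ms");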
The answer depends on the size of the list. If we are talking about hundreds or more, you should not make a single batch. I couldn't find documentation on what the limit is, but there is a limit. If it is not that many, definitely go with loading them one by one, but you should make the calls asynchronous by not using the now function:
List<LoadResult<Conference>> conferences = new ArrayList<LoadResult<Conference>>();
conferences.add(ofy().load().key(Key.create(Conference.class, conferenceString)));
And when you need the actual data:
for (LoadResult<Conference> result : conferences) {
    Conference c = result.now(); // the load result defers the fetch until now() is called
    ......
}
