I am new to deep learning, and I am trying to get Java working with the TensorFlow Inception model.
I retrained the model with the retrain.py script, which produces a .pb file containing the serialized graph and a .txt file with the labels for it. Now I am trying to connect Java to the new retrained model.
I changed the original LabelImage.java to use the new nodes (DecodeJpeg among them); for the differences, compare LabelImageOriginal.java and LabelImage.java.
Now I get this error:
Output 0 of type float does not match declared output type string for
node recv_DecodeJpeg/contents_0 = Recv[client_terminated=true,
recv_device="/job:localhost/replica:0/task:0/cpu:0",
send_device="/job:localhost/replica:0/task:0/cpu:0",
send_device_incarnation=6706969514627308055,
tensor_name="DecodeJpeg/contents:0", tensor_type=DT_STRING,
_device="/job:localhost/replica:0/task:0/cpu:0"]
I don't know what to do about this...
GitHub repository: https://github.com/nbarskov/LabelImage
I have the following query:
g.V("user-11")
 .repeat(bothE().subgraph("subGraph").outV())
 .times(2)
 .cap("subGraph")
 .next()
When I run it using gremlin-python, I receive the following response:
{'#type': 'tinker:graph',
'#value': {'vertices': [v[device-3], v[device-1], v[user-11], v[card-1]],
'edges': [e[68bad734-db2b-bffc-3e17-a0813d2670cc][user-11-uses_device->device-1],
e[14bad735-2b70-860f-705f-4c0b769a7849][user-11-uses_device->device-3],
e[f0bb3b6d-d161-ec60-5e6d-068272297f24][user-11-uses_card->card-1]]}}
This is a GraphSON representation of the subgraph obtained by the query.
I want to get the same response using Java and gremlin-driver, but I haven't been able to figure out how.
My best try was:
ObjectMapper mapper = GraphSONMapper.build().version(GraphSONVersion.V3_0).create().createMapper();
Object a = graphTraversalSource
    .V(nodeId)
    .repeat(bothE().subgraph("subGraph").outV())
    .times(2)
    .cap("subGraph")
    .next();
return mapper.writeValueAsString(a);
But that gave me the following error:
io.netty.handler.codec.DecoderException: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: org.apache.tinkerpop.shaded.kryo.KryoException: Encountered unregistered class ID: 65536
I am using AWS Neptune, but I doubt that makes a difference given that I receive the answer I want through gremlin-python.
I appreciate any help you can give! Thanks
As mentioned in the comments, when using Java what you get back will be an actual TinkerGraph. Using the GraphBinary or GraphSONV3D0 serializer is recommended; the Gryo one is older and is likely what caused the error you saw if you did not specify one of the other serializers.
Note that even if you use one of the other serializers, to get the graph to deserialize into JSON you will need to use the specific TinkerGraph serializer (see the end of this answer for an example). Otherwise you will just get {} returned.
However, you may not need to produce JSON at all in the case of the Java Gremlin client ....
Given you have an actual TinkerGraph back you can run real Gremlin queries against the in-memory subgraph - just create a new traversal source for it. You can also use the graph.io classes to write the graph to file should you wish to. The TinkerGraph will include properties as well as edges and vertices.
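For the file output, here is a minimal sketch using the graph.io machinery (assuming TinkerPop 3.4; the cast and the "subgraph.json" file name are placeholders for your own code):
import org.apache.tinkerpop.gremlin.structure.io.IoCore;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

TinkerGraph tg = (TinkerGraph) a; // the subgraph object returned by cap("subGraph").next()
// write the in-memory subgraph, including properties, to a GraphSON file
// (writeGraph throws IOException, so handle or declare it)
tg.io(IoCore.graphson()).writeGraph("subgraph.json");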
You can also access the TinkerGraph object directly using accessors such as
a.vertices() and a.edges()
By means of a concrete example, if you have a query of the form
TinkerGraph tg = (TinkerGraph)g.V().bothE().subgraph("sg").cap("sg").next();
Then you can do
GraphTraversalSource g2 = tg.traversal();
Long cv = g2.V().count().next();
Long ce = g2.E().count().next();
Or you can just access the TinkerGraph data structure directly (in Java, vertices and edges are methods, not fields) using statements of the form:
Vertex v = tg.vertices(<some-id>).next();
Or
Iterator<VertexProperty<Object>> properties = tg.vertices(<some-id>).next().properties();
This actually means you have a lot more power available to you in the Java client when working with subgraphs.
If you still feel that you need a JSON version of your subgraph, the IO reference is a handy bookmark to have: https://tinkerpop.apache.org/docs/3.4.9/dev/io/#_io_reference
EDITED: to save you a lot of reading in the docs, this code will print a TinkerGraph as JSON:
ObjectMapper mapper = GraphSONMapper.build().
    addRegistry(TinkerIoRegistryV3d0.instance()).
    version(GraphSONVersion.V3_0).create().createMapper();
String json = mapper.writeValueAsString(tg);
I have a problem with the Java TensorFlow API. I ran the training using the Python TensorFlow API, generating the files output_graph.pb and output_labels.txt. Now, for some reason, I want to use those files as input to the LabelImage module in the Java TensorFlow API. I thought everything would work fine, since that module wants exactly one .pb and one .txt. Nevertheless, when I run the module, I get this error:
2017-04-26 10:12:56.711402: W tensorflow/core/framework/op_def_util.cc:332] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
Exception in thread "main" java.lang.IllegalArgumentException: No Operation named [input] in the Graph
at org.tensorflow.Session$Runner.operationByName(Session.java:343)
at org.tensorflow.Session$Runner.feed(Session.java:137)
at org.tensorflow.Session$Runner.feed(Session.java:126)
at it.zero11.LabelImage.executeInceptionGraph(LabelImage.java:115)
at it.zero11.LabelImage.main(LabelImage.java:68)
I would be very grateful if you could help me find where the problem is. Furthermore, I want to ask whether there is a way to run the training from the Java TensorFlow API, because that would make things easier.
To be more precise:
As a matter of fact, I do not use self-written code, at least for the relevant steps. All I have done is run the training with this script, https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py, feeding it the directory that contains the images divided among subdirectories according to their description. In particular, I think these are the lines that generate the outputs:
output_graph_def = graph_util.convert_variables_to_constants(
    sess, graph.as_graph_def(), [FLAGS.final_tensor_name])
with gfile.FastGFile(FLAGS.output_graph, 'wb') as f:
    f.write(output_graph_def.SerializeToString())
with gfile.FastGFile(FLAGS.output_labels, 'w') as f:
    f.write('\n'.join(image_lists.keys()) + '\n')
Then, I give the outputs (one some_graph.pb and one some_labels.txt) as input to this Java module: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/src/main/java/org/tensorflow/examples/LabelImage.java, replacing the default inputs. The error I get is the one reported above.
The model used by default in LabelImage.java is different from the model being retrained, so the names of the input and output nodes do not align. Note that TensorFlow models are graphs and the arguments to feed() and fetch() are names of nodes in the graph, so you need to know the names appropriate for your model.
Looking at retrain.py, it seems that it has a node that takes the raw contents of a JPEG file as input (the node DecodeJpeg/contents) and produces the set of labels in the node final_result.
If that's the case, then you'd do something like the following in Java. You don't need the bit that constructs a graph to normalize the image, since that seems to be part of the retrained model, so replace LabelImage.java:64 with something like:
try (Tensor image = Tensor.create(imageBytes);
     Graph g = new Graph()) {
  g.importGraphDef(graphDef);
  try (Session s = new Session(g);
       // Note the change to the name of the node and the fact
       // that it is being provided the raw imageBytes as input
       Tensor result = s.runner().feed("DecodeJpeg/contents", image).fetch("final_result").run().get(0)) {
    final long[] rshape = result.shape();
    if (result.numDimensions() != 2 || rshape[0] != 1) {
      throw new RuntimeException(
          String.format(
              "Expected model to produce a [1 N] shaped tensor where N is the number of labels, instead it produced one with shape %s",
              Arrays.toString(rshape)));
    }
    int nlabels = (int) rshape[1];
    float[] probabilities = result.copyTo(new float[1][nlabels])[0];
    // At this point nlabels = number of classes in your retrained model
    DoSomethingWith(probabilities);
  }
}
Hope that helps.
Regarding the "No operation" error, I was able to resolve that by using input and output layer names "Mul" and "final_result", respectively. See:
https://github.com/tensorflow/tensorflow/issues/2883
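If you are not sure which node names your retrained graph exposes, you can print them from Java. A minimal sketch, assuming the TensorFlow 1.x Java API (the .pb path is a placeholder):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;
import org.tensorflow.Graph;
import org.tensorflow.Operation;

public class ListOps {
  public static void main(String[] args) throws Exception {
    // load the serialized GraphDef produced by retrain.py
    byte[] graphDef = Files.readAllBytes(Paths.get("/some/where/output_graph.pb"));
    try (Graph g = new Graph()) {
      g.importGraphDef(graphDef);
      // print every operation name so you can pick the right feed/fetch nodes
      Iterator<Operation> ops = g.operations();
      while (ops.hasNext()) {
        System.out.println(ops.next().name());
      }
    }
  }
}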
I have been trying to import and make use of my trained model (TensorFlow, Python) in Java.
I was able to save the model in Python, but I encountered problems when trying to make predictions using the same model in Java.
Here, you can see the Python code for initializing, training, and saving the model.
Here, you can see the Java code for importing the model and making predictions for input values.
The error message I get is:
Exception in thread "main" java.lang.IllegalStateException: Attempting to use uninitialized value Variable_7
[[Node: Variable_7/read = Identity[T=DT_FLOAT, _class=["loc:#Variable_7"], _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_7)]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:285)
at org.tensorflow.Session$Runner.run(Session.java:235)
at org.tensorflow.examples.Identity_import.main(Identity_import.java:35)
I believe the problem is somewhere in the Python code, but I was not able to find it.
The Java importGraphDef() function only imports the computational graph (written by tf.train.write_graph in your Python code); it does not load the values of the trained variables (stored in the checkpoint), which is why you get an error complaining about uninitialized variables.
The TensorFlow SavedModel format, on the other hand, includes all information about a model (graph, checkpoint state, other metadata), and to use it in Java you'd want SavedModelBundle.load to create a session initialized with the trained variable values.
To export a model in this format from Python, you might want to take a look at a related question Deploy retrained inception SavedModel to google cloud ml engine
In your case, this should amount to something like the following in Python:
def save_model(session, input_tensor, output_tensor):
    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'input': tf.saved_model.utils.build_tensor_info(input_tensor)},
        outputs={'output': tf.saved_model.utils.build_tensor_info(output_tensor)},
    )
    b = tf.saved_model.builder.SavedModelBuilder('/tmp/model')
    b.add_meta_graph_and_variables(
        session,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    b.save()
And invoke that via save_model(session, x, yhat)
And then in Java load the model using:
try (SavedModelBundle b = SavedModelBundle.load("/tmp/model", "serve")) {
// b.session().run(...)
}
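For completeness, here is a hedged sketch of what the elided b.session().run(...) call might look like. Note that feed() and fetch() take graph node names, which are not necessarily the 'input'/'output' keys of the signature def; the node names "x" and "yhat" and the [1, 2] input shape below are assumptions for illustration:
try (SavedModelBundle b = SavedModelBundle.load("/tmp/model", "serve");
     Tensor in = Tensor.create(new float[][] {{1.0f, 2.0f}})) { // input shape is an assumption
  // "x" and "yhat" are hypothetical node names; check your Python graph
  try (Tensor out = b.session().runner().feed("x", in).fetch("yhat").run().get(0)) {
    float[][] prediction = out.copyTo(new float[1][1]); // assumes a [1, 1] output
    System.out.println(prediction[0][0]);
  }
}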
Hope that helps.
Fwiw, Deeplearning4j lets you import models trained on TensorFlow with Keras 1.0 (Keras 2.0 support is on the way).
https://deeplearning4j.org/model-import-keras
We also built a library called Jumpy, a wrapper around NumPy arrays and Pyjnius that uses pointers instead of copying data, which makes it more efficient than Py4j when dealing with tensors.
https://deeplearning4j.org/jumpy
Your Python model will certainly fail at this:
sess.run(init) #<---this will fail
save_model(sess)
error = tf.reduce_mean(tf.square(prediction - y))
#accuracy = tf.reduce_mean(tf.cast(error, 'float'))
print('Error:', error)
init is not defined in the model; presumably something like init = tf.global_variables_initializer() was intended before the sess.run(init) call. I'm unsure what you want to achieve at this point, but that should give you a starting point.
I am using the TensorFlow Java API to load an already created TensorFlow model into the JVM.
I am using this as an example: tensorflow/examples/LabelImage.java
Here is my simple scala code:
import java.nio.file.{Files, Path, Paths}
import org.tensorflow.{Graph, Session, Tensor}
def readAllBytesOrExit(path: Path): Array[Byte] = Files.readAllBytes(path)
val graphDef = readAllBytesOrExit(Paths.get("PATH_TO_A_SINGLE_FILE_DESCRIBING_TF_MODEL.pb"))
val g = new Graph()
g.importGraphDef(graphDef)
val session = new Session(g)
val result: Tensor = session.runner().feed("input", image).fetch("output").run().get(0)
How do I save my model so that both the Session and the Graph are stored in the same file, as in the "PATH_TO_A_SINGLE_FILE_DESCRIBING_TF_MODEL.pb" above?
The documentation described here mentions:
The serialized representation of the graph, often referred to as a GraphDef, can be generated by toGraphDef() and equivalents in other language APIs.
What are the equivalents in other language APIs? I don't find it obvious.
Note: I already looked at mnist_saved_model.py under tensorflow_serving, but saving the model through that procedure gives me a .pb file and a variables folder. When trying to load that .pb file with importGraphDef I get: java.lang.IllegalArgumentException: Invalid GraphDef
Currently, with the Java API of TensorFlow, I have only found how to save a graph as a GraphDef (i.e. without its variables and metadata). This can be done by just writing the Array[Byte] to a file:
Files.write(Paths.get(modelDir, modelName), myGraph.toGraphDef)
Here myGraph is a Java object of the Graph class.
I would suggest saving your model from the Python API, using the SavedModel API defined here. It will save your model in a folder with both the serialized graph in a .pb file and the variables in a folder. Note the tag_constants you use, as you'll need them in your Scala/Java code to load the model with the variables. The graph and the session with variables are then easily loaded with the SavedModelBundle Java class from the Java API. It returns a wrapper with both the graph and the session containing the variable values:
val model = SavedModelBundle.load(modelDir, modelTag)
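The bundle then gives you both the restored session and the graph. A minimal sketch in Java (modelDir and modelTag are the same placeholders as above):
import org.tensorflow.Graph;
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;

SavedModelBundle model = SavedModelBundle.load(modelDir, modelTag);
Graph graph = model.graph();       // the imported graph definition
Session session = model.session(); // a session with the variable values restored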
If you already tried this, maybe you can share your code to see why it returned an invalid GraphDef.
Another option is to freeze your graph, i.e. turn your variable nodes into constant nodes so that everything is self-contained in the .pb file. More info here on the freezing part.
Overview
I know that one can get the percentages of each prediction in a trained WEKA model through the GUI and command line options as conveniently explained and demonstrated in the documentation article "Making predictions".
Predictions
I know that there are three ways documented to get these predictions:
command line
GUI
Java code/using the WEKA API, which I was able to do in the answer to "Get risk predictions in WEKA using own Java code"
a fourth way, the one I am asking about here, requires a generated WEKA .MODEL file
I have a trained .MODEL file and now I want to classify new instances using this together with the prediction percentages similar to the one below (an output of the GUI's Explorer, in CSV format):
inst#,actual,predicted,error,distribution,
1,1:0,2:1,+,0.399409,*0.7811
2,1:0,2:1,+,0.3932409,*0.8191
3,1:0,2:1,+,0.399409,*0.600591
4,1:0,2:1,+,0.139409,*0.64
5,1:0,2:1,+,0.399409,*0.600593
6,1:0,2:1,+,0.3993209,*0.600594
7,1:0,2:1,+,0.500129,*0.600594
8,1:0,2:1,+,0.399409,*0.90011
9,1:0,2:1,+,0.211409,*0.60182
10,1:0,2:1,+,0.21909,*0.11101
The predicted column is what I want to get from a .MODEL file.
What I know
Based on my experience with the WEKA API approach, one can get these predictions using the following code (a PlainText object inserted into an Evaluation object), but I do not want the k-fold cross-validation that the Evaluation object performs.
StringBuffer predictionSB = new StringBuffer();
Range attributesToShow = null;
Boolean outputDistributions = new Boolean(true);
PlainText predictionOutput = new PlainText();
predictionOutput.setBuffer(predictionSB);
predictionOutput.setOutputDistribution(true);
Evaluation evaluation = new Evaluation(data);
evaluation.crossValidateModel(j48Model, data, numberOfFolds,
    randomNumber, predictionOutput, attributesToShow,
    outputDistributions);
System.out.println(predictionOutput.getBuffer());
From the WEKA documentation
Note that using a .MODEL file to classify data from an .ARFF or related input is discussed in "Use Weka in your Java code" and "Serialization", a.k.a. "How to use a .MODEL file in your own Java code to classify new instances" (why the vague title, smfh).
Using own Java code to classify
Loading a .MODEL file is done through "Deserialization"; the following is for versions > 3.5.5:
// deserialize model
Classifier cls = (Classifier) weka.core.SerializationHelper.read("/some/where/j48.model");
An Instance object holds the data, and it is fed to classifyInstance. The output depends on the data type of the class attribute:
// classify an Instance object (testData)
cls.classifyInstance(testData.instance(0));
The question "How to reuse saved classifier created from explorer(in weka) in eclipse java" has a great answer too!
Javadocs
I have already checked the Javadocs for Classifier (the trained model) and Evaluation (just in case) but none directly and explicitly addresses this issue.
The only thing close to what I want is the classifyInstance method of the Classifier:
Classifies the given test instance. The instance has to belong to a dataset when it's being classified. Note that a classifier MUST implement either this or distributionForInstance().
How can I simultaneously use a WEKA .MODEL file to classify and get predictions of a new instance using my own Java code (aka using the WEKA API)?
This answer simply updates my answer from How to reuse saved classifier created from explorer(in weka) in eclipse java.
I will show how to obtain the predicted instance value and the prediction percentage (or distribution). The example model is a J48 decision tree created and saved in the Weka Explorer. It was built from the nominal weather data provided with Weka. It is called "tree.model".
import weka.classifiers.Classifier;
import weka.core.Instances;

public class Main {
    public static void main(String[] args) throws Exception {
        String rootPath = "/some/where/";

        //load model
        Classifier cls = (Classifier) weka.core.SerializationHelper.read(rootPath + "tree.model");

        //load or create the Instances to predict class values for
        Instances originalTrain = null; //load or create Instances here

        //which instance to predict the class value of
        int s1 = 0;

        //perform your prediction
        double value = cls.classifyInstance(originalTrain.instance(s1));

        //get the prediction percentage or distribution
        double[] percentage = cls.distributionForInstance(originalTrain.instance(s1));

        //get the name of the class value
        String prediction = originalTrain.classAttribute().value((int) value);

        System.out.println("The predicted value of instance " +
                Integer.toString(s1) + ": " + prediction);

        //format the distribution, marking the predicted class with *
        String distribution = "";
        for (int i = 0; i < percentage.length; i = i + 1) {
            if (i == value) {
                distribution = distribution + "*" + Double.toString(percentage[i]) + ",";
            } else {
                distribution = distribution + Double.toString(percentage[i]) + ",";
            }
        }
        distribution = distribution.substring(0, distribution.length() - 1);
        System.out.println("Distribution:" + distribution);
    }
}
The output from this is:
The predicted value of instance 0: no
Distribution: *1, 0