I am getting a NullPointerException whenever I try to process two images for color difference. The code is:
MarvinImageIO.saveImage(currentFrame, "check1.jpg");
MarvinImageIO.saveImage(template, "check2.jpg");
currentFrame = MarvinImageIO.loadImage("check1.jpg");
template = MarvinImageIO.loadImage("check2.jpg");
// System.out.println(currentFrame.getWidth()+" "+currentFrame.getHeight()+" "+template.getWidth()+" "+template.getHeight());
scale(currentFrame, template, template.getWidth(), template.getHeight());
MarvinImagePlugin diff = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.difference.differenceColor.jar");
MarvinAttributes attr = new MarvinAttributes();
attr.set("total", 0);
System.out.println(attr.get("total"));
diff.process(currentFrame, template, attr);
The error occurs on the diff.process statement. Neither the images nor attr are null. The error is:
Exception in thread "Thread-3" java.lang.NullPointerException
at org.marvinproject.image.difference.differenceColor.DifferenceColor.process(DifferenceColor.java:67)
at marvin.plugin.MarvinAbstractImagePlugin.process(MarvinAbstractImagePlugin.java:65)
at censor_player.player$MyThread.run(player.java:142)
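For debugging, here is a minimal sketch (names taken from the code above; purely diagnostic, not a fix) that confirms each argument just before the failing call:

// Verify everything the failing call depends on; since the NPE is thrown
// inside the plugin, all of these should print non-null, sane values.
System.out.println("diff loaded:  " + (diff != null));
System.out.println("currentFrame: " + currentFrame.getWidth() + "x" + currentFrame.getHeight());
System.out.println("template:     " + template.getWidth() + "x" + template.getHeight());
System.out.println("attr total:   " + attr.get("total"));
diff.process(currentFrame, template, attr); // NullPointerException is raised in here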
I am trying to train a model using a linear SVM. I have nearly 300 classes, and to run them through an SVM I used the one-vs-rest (OvR) approach. I am using Apache Spark with Java. While running it with OvR I get the error below:
Exception in thread "main" java.lang.StackOverflowError
at scala.collection.TraversableLike$class.builder$1(TraversableLike.scala:229)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
at scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
at scala.collection.SetLike$class.map(SetLike.scala:92)
at scala.collection.AbstractSet.map(Set.scala:47)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.foreach(AttributeSet.scala:120)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.to(TraversableLike.scala:590)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.to(AttributeSet.scala:60)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.toBuffer(AttributeSet.scala:60)
at scala.collection.TraversableLike$class.toStream(TraversableLike.scala:585)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.toStream(AttributeSet.scala:60)
at scala.collection.immutable.Stream$$anonfun$flatMap$1.apply(Stream.scala:497)
at scala.collection.immutable.Stream$$anonfun$flatMap$1.apply(Stream.scala:497)
at scala.collection.immutable.Stream.append(Stream.scala:255)
at scala.collection.immutable.Stream$$anonfun$append$1.apply(Stream.scala:255)
at scala.collection.immutable.Stream$$anonfun$append$1.apply(Stream.scala:255)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
at scala.collection.immutable.Stream$$anonfun$flatMap$1.apply(Stream.scala:497)
at scala.collection.immutable.Stream$$anonfun$flatMap$1.apply(Stream.scala:497)
at scala.collection.immutable.Stream.append(Stream.scala:255)
at scala.collection.immutable.Stream$$anonfun$append$1.apply(Stream.scala:255)
at scala.collection.immutable.Stream$$anonfun$append$1.apply(Stream.scala:255)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
at scala.collection.generic.Growable$class.loop$1(Growable.scala:54)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:57)
at scala.collection.mutable.SetBuilder.$plus$plus$eq(SetBuilder.scala:20)
at scala.collection.TraversableLike$class.to(TraversableLike.scala:590)
at scala.collection.AbstractTraversable.to(Traversable.scala:104)
at scala.collection.TraversableOnce$class.toSet(TraversableOnce.scala:304)
at scala.collection.AbstractTraversable.toSet(Traversable.scala:104)
at org.apache.spark.sql.catalyst.expressions.AttributeSet$.apply(AttributeSet.scala:45)
Here is the code I used:
LinearSVC sv = new LinearSVC()
        .setMaxIter(10)
        .setRegParam(0.001)
        .setLabelCol(LABEL)
        .setFeaturesCol("features");
OneVsRest ovr = new OneVsRest().setClassifier(sv);
Pipeline pipeline = new Pipeline()
        .setStages(new PipelineStage[] {
                labelIndexer,
                tokenizer,
                numberNormalizer,
                remover,
                wordVectorizer,
                idf,
                ovr,
                deIndexer
        });
long trainStart = System.currentTimeMillis();
// Fit the pipeline to training documents.
PipelineModel model = pipeline.fit(training);
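A StackOverflowError inside Catalyst during fit usually means the logical query plan has become too deep; with roughly 300 classes, OneVsRest trains that many binary models, each adding to the plan. Two commonly suggested mitigations (both are assumptions on my side, not part of the original setup): launch the driver with a larger thread stack, e.g. spark-submit --conf "spark.driver.extraJavaOptions=-Xss16m", or truncate the plan lineage by checkpointing the training data before the fit, as in this sketch:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Hypothetical mitigation: cut the logical-plan lineage before fitting.
// Assumes a SparkSession named `spark` and the `training` Dataset from
// the code above; the checkpoint directory path is an assumption.
spark.sparkContext().setCheckpointDir("/tmp/spark-checkpoints");
Dataset<Row> trainingCheckpointed = training.checkpoint(); // eager, cuts lineage
PipelineModel model = pipeline.fit(trainingCheckpointed);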
I am getting this error on insert in Java. Is there a way to prepare the insert so that this driver error does not occur?
Error:
Exception in thread "main" com.datastax.driver.core.exceptions.InvalidQueryException: Expected 4 or 0 byte int (10)
List<Flight> flightList = ProcessFlightsCSV.processFlights("flights_from_pg.csv");
for (Flight flight : flightList) {
    System.out.println(flight);
    Insert query = QueryBuilder.insertInto("flights")
            .value("id", flight.getId())
            .value("year", flight.getYear())
            .value("fl_date", flight.getFlDate())
            .value("airline_id", flight.getAirlineId())
            .value("carrier", flight.getCarrier())
            .value("fl_num", flight.getFlNum())
            .value("origin_airport_id", flight.getOriginAirportId())
            .value("origin", flight.getOrigin())
            .value("origin_city_name", flight.getOriginCityName())
            .value("origin_state_abr", flight.getOriginStateAbr())
            .value("dest", flight.getDest())
            .value("day_of_month", flight.getDayOfMonth())
            .value("dest_city_name", flight.getDestCityName())
            .value("dest_state_abr", flight.getDestStateAbr())
            .value("dep_time", flight.getDepTime())
            .value("arr_time", flight.getArrTime())
            .value("distance", flight.getDistance());
    session.execute(query);
}
Hopefully you have a proper session before executing this query. Update your call to session.execute(query.toString());
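For context, "Expected 4 or 0 byte int" generally means one of the bound values does not match the CQL column type (a value that is not a 4-byte int bound to an int column), so it is worth comparing the Flight getters against the flights table schema. A sketch of that change in context, plus a hypothetical type fix:

// The suggested change: rendering the Insert to a CQL string makes the
// driver inline the values as CQL literals instead of serializing them
// as typed parameters.
session.execute(query.toString());

// Hypothetical longer-term fix: align the Java types with the table
// schema. For example, if the CQL column `year` is an int but getYear()
// returns a String, convert before binding:
// .value("year", Integer.parseInt(flight.getYear()))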
I have a pre-trained model like Inception-v3. I want to remove the output layer and use the model for image recognition, following the example given by TensorFlow.
Just like the Python framework Keras, which has a method like model.layers.pop(), I tried to do this with the TensorFlow Java API. First I tried dl4j, but when I imported the Keras model I got an error like this:
2017-06-15 21:15:43 INFO KerasInceptionV3Net:52 - Importing Inception model from data/inception-model.json
2017-06-15 21:15:43 INFO KerasInceptionV3Net:53 - Importing Weights model from data/inception_v3_complete
Exception in thread "main" java.lang.RuntimeException: Unknown exception.
at org.bytedeco.javacpp.hdf5$H5File.allocate(Native Method)
at org.bytedeco.javacpp.hdf5$H5File.<init>(hdf5.java:12713)
at org.deeplearning4j.nn.modelimport.keras.Hdf5Archive.<init>(Hdf5Archive.java:61)
at org.deeplearning4j.nn.modelimport.keras.KerasModel$ModelBuilder.weightsHdf5Filename(KerasModel.java:603)
at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasModelAndWeights(KerasModelImport.java:176)
at edu.usc.irds.dl.dl4j.examples.KerasInceptionV3Net.<init>(KerasInceptionV3Net.java:55)
at edu.usc.irds.dl.dl4j.examples.KerasInceptionV3Net.main(KerasInceptionV3Net.java:108)
HDF5-DIAG: Error detected in HDF5 (1.10.0-patch1) thread 0:
#000: C:\autotest\HDF5110ReleaseRWDITAR\src\H5F.c line 579 in H5Fopen(): unable to open file
major: File accessibilty
minor: Unable to open file
#001: C:\autotest\HDF5110ReleaseRWDITAR\src\H5Fint.c line 1100 in H5F_open(): unable to open file: time = Thu Jun 15 21:15:44 2017,name = 'data/inception_v3_complete', tent_flags = 0
major: File accessibilty
minor: Unable to open file
#002: C:\autotest\HDF5110ReleaseRWDITAR\src\H5FD.c line 812 in H5FD_open(): open failed
major: Virtual File Layer
minor: Unable to initialize object
#003: C:\autotest\HDF5110ReleaseRWDITAR\src\H5FDsec2.c line 348 in H5FD_sec2_open(): unable to open file: name = 'data/inception_v3_complete', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0
major: File accessibilty
minor: Unable to open file
So I went back to TensorFlow: I will modify the model in Keras and convert it to a TensorFlow graph. Here is my conversion script:
input_fld = './'
output_node_names_of_input_network = ["pred0"]
write_graph_def_ascii_flag = True
output_node_names_of_final_network = 'output_node'
output_graph_name = 'test2.pb'

from keras.models import load_model
import tensorflow as tf
import os
import os.path as osp
from keras.applications.inception_v3 import InceptionV3
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD

output_fld = input_fld + 'tensorflow_model/'
if not os.path.isdir(output_fld):
    os.mkdir(output_fld)

net_model = InceptionV3(weights='imagenet', include_top=True)

num_output = len(output_node_names_of_input_network)
pred = [None]*num_output
pred_node_names = [None]*num_output
for i in range(num_output):
    pred_node_names[i] = output_node_names_of_final_network+str(i)
    pred[i] = tf.identity(net_model.output[i], name=pred_node_names[i])
print('output nodes names are: ', pred_node_names)

from keras import backend as K
sess = K.get_session()

if write_graph_def_ascii_flag:
    f = 'only_the_graph_def.pb.ascii'
    tf.train.write_graph(sess.graph.as_graph_def(), output_fld, f, as_text=True)
    print('saved the graph definition in ascii format at: ', osp.join(output_fld, f))

from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names)
graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)
print('saved the constant graph (ready for inference) at: ', osp.join(output_fld, output_graph_name))
I got the model as a .pb file, but when I put it into the TensorFlow LabelImage example, I got this error:
Exception in thread "main" java.lang.IllegalArgumentException: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:285)
at org.tensorflow.Session$Runner.run(Session.java:235)
at com.dlut.cmh.sheng.LabelImage.executeInceptionGraph(LabelImage.java:98)
at com.dlut.cmh.sheng.LabelImage.main(LabelImage.java:51)
I don't know how to solve this. Can anyone help me? Or do you have another way to do this?
The error message you get from the TensorFlow Java API:
Exception in thread "main" java.lang.IllegalArgumentException: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
suggests that the model is constructed in a way that requires you to feed a boolean value for the tensor named batch_normalization_1/keras_learning_phase.
So, you'd have to include that in your call to run by changing:
try (Session s = new Session(g);
     Tensor result = s.runner().feed("input", image).fetch("output").run().get(0)) {
to something like:
try (Session s = new Session(g);
     Tensor learning_phase = Tensor.create(false);
     Tensor result = s.runner()
             .feed("input", image)
             .feed("batch_normalization_1/keras_learning_phase", learning_phase)
             .fetch("output")
             .run()
             .get(0)) {
The names of nodes you feed and fetch depend on the model, so it's possible that the names of the 'input' and 'output' nodes are different as well.
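If the names are unknown, one option is to list every operation in the imported graph; a sketch, assuming a TensorFlow Java version that exposes Graph#operations() and that the .pb bytes were already read into a byte[] named graphDef:

import java.util.Iterator;
import org.tensorflow.Graph;
import org.tensorflow.Operation;

// Print all operation names so the real input/output nodes can be found.
Graph g = new Graph();
g.importGraphDef(graphDef); // graphDef: the bytes of the exported .pb file
for (Iterator<Operation> ops = g.operations(); ops.hasNext(); ) {
    System.out.println(ops.next().name());
}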
You might also want to consider using the TensorFlow SavedModel format (see also https://github.com/tensorflow/serving/issues/310#issuecomment-297015251)
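For the SavedModel route, a minimal sketch with the Java API; the export directory and the "serve" tag are assumptions about how the model would be exported:

import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;

// Load a SavedModel and reuse its session; feed/fetch names still depend
// on the exported signature.
try (SavedModelBundle bundle = SavedModelBundle.load("data/inception_savedmodel", "serve")) {
    Session s = bundle.session();
    // s.runner().feed(...).fetch(...).run() as in the snippet above
}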
Hope that helps
For the given data set and code, SmirnovTest throws the exception shown below.
double[] data1 = {190.0, 173.33, 174.67, 174.0, 177.33, 171.33, 166.0, 184.0, 176.67, 179.33, 163.33, 152.0, 175.33, 147.33, 169.33, 183.33, 196.0, 170.0, 176.0, 142.0, 168.0, 173.33, 179.33, 154.67, 160.67, 175.33, 158.0, 159.33, 158.0, 171.33};
double[] data2 = {46.04, 23.8, 23.29, 15.35, 52.62, 59.46, 42.02, 50.31, 32.07, 16.87, 16.72, 62.91, 48.74, 52.87, 57.32, 15.61, 59.3, 45.62, 64.22, 61.42};
SmirnovTest test = new SmirnovTest(data1, data2);
test.getSP();
test.getTestStatistic();
Exception in thread "main" java.lang.IllegalArgumentException: Invalid SP -3.126388037344441E-13
Is there any problem in the data set?
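The reported SP looks like a floating-point artifact: the two samples do not overlap at all (data1 spans roughly 142-196, data2 roughly 15-64), so the test statistic is at its maximum and the computed p-value comes out a hair below zero (-3.1e-13), which the library then rejects as invalid. As a cross-check, here is a sketch of the same comparison with Apache Commons Math's Kolmogorov-Smirnov test; swapping in a different library is my suggestion, not part of the original setup:

import org.apache.commons.math3.stat.inference.KolmogorovSmirnovTest;

public class KsCrossCheck {
    public static void main(String[] args) {
        // Samples copied from the question.
        double[] data1 = {190.0, 173.33, 174.67, 174.0, 177.33, 171.33, 166.0,
                184.0, 176.67, 179.33, 163.33, 152.0, 175.33, 147.33, 169.33,
                183.33, 196.0, 170.0, 176.0, 142.0, 168.0, 173.33, 179.33,
                154.67, 160.67, 175.33, 158.0, 159.33, 158.0, 171.33};
        double[] data2 = {46.04, 23.8, 23.29, 15.35, 52.62, 59.46, 42.02,
                50.31, 32.07, 16.87, 16.72, 62.91, 48.74, 52.87, 57.32,
                15.61, 59.3, 45.62, 64.22, 61.42};

        KolmogorovSmirnovTest ks = new KolmogorovSmirnovTest();
        // D should come out as 1.0: the samples do not overlap at all.
        System.out.println("D = " + ks.kolmogorovSmirnovStatistic(data1, data2));
        System.out.println("p = " + ks.kolmogorovSmirnovTest(data1, data2));
    }
}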
I have the XML below. Update: added "symptoms".
<EBF>
<EBFINFO>
<EBFNUM>EBF262323</EBFNUM>
<RELEASEDATETIME>May 06, 2011</RELEASEDATETIME>
<SYMPTOMS>
<br> INFA252994 - 910 : While running concurrent session Workflow manager hangs and workflow monitor does not respond</br>
<br> INFA262323 - 910 : pmcmd, pmdtm and all LM clients on Windows fail to connect to IS when IPv6 is installed but all IPv6 interfaces are disabled</br>
</SYMPTOMS>
<FILES>
<FILE>
<PATH>H:\EBF262323\EBF262323_Client_Installer_win32_x86\EBFs\clients\PmClient\client\bin\ACE.dll_bak</PATH>
<CHECKSUM>303966974</CHECKSUM>
<AFFECTEDFILES>
<CHECKSUM>3461283269</CHECKSUM>
<PATH>C:\clients\PmClient\CommandLineUtilities\PC\server\bin\ACE.dll</PATH>
<PATH>C:\clients\PmClient\client\bin\ACE.dll</PATH>
</AFFECTEDFILES>
</FILE>
</FILES>
<NOTES>
</NOTES>
</EBFINFO>
</EBF>
Note: in the above XML, ebf\ebfinfo\files\file\affectedfiles\path and ebf\ebfinfo\files\file can each occur one or more times.
I am parsing this XML and generating another XML out of it:
def records = new XmlParser().parseText(rs)
csm.ebfHistory() {
    records.EBFINFO.each {
        ebfHistory_info(num: it.EBFNUM.text(),
                        release_date_time: it.RELEASEDATETIME.text()) {
            it.FILES.FILE.each { // throws: Exception in thread "main" java.lang.NullPointerException: Cannot get property 'FILES' on null object
                ebfHistory_fileinfo(file_path: it.PATH.text(),
                                    file_checksum: it.CHECKSUM.text()) {
                    ebfHistory_fileinfo_affectedfiles(
                            afile_checksum: it.CHECKSUM.text(),
                            afile_path: it.PATH.text()
                    )
                }
            }
        }
    }
}
The output should look something like this:
<ebfHistory>
  <ebfHistory_info num="EBF262323" release_date_time="May 06, 2011">
    <ebfHistory_fileinfo file_checksum="303966974">
      <ebfHistory_fileinfo_affectedfiles afile_checksum="3461283269">
        <path>C:\clients\PmClient\CommandLineUtilities\PC\server\bin\ACE.dll</path>
        <path>C:\clients\PmClient\client\bin\ACE.dll</path>
      </ebfHistory_fileinfo_affectedfiles>
    </ebfHistory_fileinfo>
  </ebfHistory_info>
</ebfHistory>
but instead I get Exception in thread "main" java.lang.NullPointerException: Cannot get property 'FILES' on null object. Where am I going wrong? Thanks.
Updated code (working)
def records = new XmlParser().parseText(rs)
csm.ebfHistory() {
    records.EBFINFO.each { ebfinfo ->
        ebfHistory_info(num: ebfinfo.EBFNUM.text(),
                        release_date_time: ebfinfo.RELEASEDATETIME.text()) {
            ebfinfo.SYMPTOMS.br.each {
                ebfHistory_symptom(name: it.text())
            }
        }
    }
    ebfHistory_dump(rs) {
        "${rs}"
    }
}
The it no longer refers to each EBFINFO, because you are inside another closure, the ebfHistory_info closure. Instead, explicitly name the EBFINFO object:
records.EBFINFO.each { ebfinfo ->                        // <-- Give it a name
    ebfHistory_info(num: ebfinfo.EBFNUM.text(),          // <-- Use the name here too
                    release_date_time: ebfinfo.RELEASEDATETIME.text()) {
        ebfinfo.FILES.FILE.each {                        // <-- And here
Same thing in the ebfHistory_fileinfo_affectedfiles parameters.