Header modifications do not take effect in fresh build - java

I am trying to use the BRIEF descriptor in OpenCV 3.1 for Android. To achieve that, OpenCV has to be built from source together with opencv_contrib. So I compiled it without errors and could also see BRIEF.cpp.o being built in the command window.
When I try to use it, my Android app crashes, throwing
OpenCV Error: Bad argument (Specified descriptor extractor type is not supported.) in static cv::javaDescriptorExtractor* cv::javaDescriptorExtractor::create(int), file /home/maksim/workspace/android-pack/opencv/modules/features2d/misc/java/src/cpp/features2d_manual.hpp, line 374
So I checked features2d_manual.hpp. Line 374 is the default case of a switch block:
CV_WRAP static javaDescriptorExtractor* create( int extractorType )
{
    //String name;
    if (extractorType > OPPONENTEXTRACTOR)
    {
        //name = "Opponent";
        extractorType -= OPPONENTEXTRACTOR;
    }

    Ptr<DescriptorExtractor> de;
    switch(extractorType)
    {
    //case SIFT:
    //    name = name + "SIFT";
    //    break;
    //case SURF:
    //    name = name + "SURF";
    //    break;
    case ORB:
        de = ORB::create();
        break;
    //case BRIEF:
    //    name = name + "BRIEF";
    //    break;
    case BRISK:
        de = BRISK::create();
        break;
    //case FREAK:
    //    name = name + "FREAK";
    //    break;
    case AKAZE:
        de = AKAZE::create();
        break;
    default: //**this is line 374**
        CV_Error( Error::StsBadArg, "Specified descriptor extractor type is not supported." );
        break;
    }

    return new javaDescriptorExtractor(de);
}
So the error clearly comes up because the BRIEF case is commented out. So I modified the file like this:
#include "opencv2/xfeatures2d.hpp"
...
    case BRIEF:
        de = xfeatures2d::BriefDescriptorExtractor::create();
        break;
...
    default:
        CV_Error( Error::StsBadArg, "---TEST--- Specified descriptor extractor type is not supported." );
        break;
    }
After rebuilding in a fresh directory and using the new build, the exact same error persists. Not even "---TEST---" shows up in the message.
So I am wondering why my changes have no effect at all.
I am also wondering why the file path is:
/home/maksim/workspace/android-pack/opencv/modules/features2d/misc/java/src/cpp/features2d_manual.hpp
This directory doesn't even exist on my system, and googling it showed that /home/maksim/ appears in a lot of different Android-related error messages.
The actual path before building is:
C:\Users\JJG-CD\Desktop\Build_Workspace\opencv-3.1.0\modules\features2d\misc\java\src\cpp\features2d_manual.hpp
I hope somebody can explain to me what the problem is and perhaps give me a hint on how to solve it.

The error you're seeing almost certainly comes from a library that you link to that uses the same header file. When you recompile your code having changed the header, that header change only takes effect for the code you're actually compiling, and not the code that is already compiled in the libraries that you're also linking.
Look at your compile line and consider all the -l options as possible suspects.
This also explains the non-existent directory reference: this directory existed and was used at the time the library(ies) themselves were compiled on whatever machine they were compiled on.
If you want your header change to take effect in library code, the library itself needs to be recompiled. Have a look at your project configuration files: you may very well already have make or cmake options to do this.

I had already given up but found the solution by chance. The reason my own built libraries were not being used is that those libraries are normally provided by the OpenCV Manager app. To get rid of OpenCV Manager and use my own libraries, I just needed to initialize OpenCV statically.
static {
    if (!OpenCVLoader.initDebug()) {
        // Handle initialization error
    }
}
Further details can be found here
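For completeness, here is a slightly fuller sketch of the same static-initialization approach. The activity class and the optional JNI library name "my_jni_module" are hypothetical and not part of the original answer:
import org.opencv.android.OpenCVLoader;
import android.app.Activity;
import android.util.Log;

public class MainActivity extends Activity {

    static {
        // Load the locally built OpenCV native libraries instead of the
        // ones provided by the OpenCV Manager app.
        if (!OpenCVLoader.initDebug()) {
            Log.e("MainActivity", "Static OpenCV initialization failed");
        } else {
            // Optional: load your own JNI library that links against the
            // locally built OpenCV (hypothetical name).
            System.loadLibrary("my_jni_module");
        }
    }
}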

Related

The type org.apache.axiom.om.impl.llom.OMStAXWrapper is not visible

I have the code below from the old version. I have now upgraded Axis2 from 1.1.1 to 1.6.2, and it then has a compile problem, as indicated below. I found this link on the web: https://issues.apache.org/jira/browse/AXIS2-4363
But I do not understand it. First of all, do I need to amend the code? If yes, is there any example for me to follow?
if (reader.getEventType() == javax.xml.stream.XMLStreamConstants.START_ELEMENT
        && reader.getName().equals(new javax.xml.namespace.QName(
                org.apache.axiom.om.impl.MTOMConstants.XOP_NAMESPACE_URI,
                org.apache.axiom.om.impl.MTOMConstants.XOP_INCLUDE)))
{
    java.lang.String id = org.apache.axiom.om.util.ElementHelper.getContentID(reader, "UTF-8");
    object.set_return(((org.apache.axiom.soap.impl.builder.MTOMStAXSOAPModelBuilder)
            ((org.apache.axiom.om.impl.llom.OMStAXWrapper)  // <--- "The type org.apache.axiom.om.impl.llom.OMStAXWrapper is not visible"
            reader).getBuilder()).getDataHandler(id));
    reader.next();
    reader.next();
}
This appears to be generated code. If you upgrade from Axis2 1.1.1 to 1.6.2, then you need to regenerate that code. Note that the usual best practice applies here: generated code should always be generated during the build, not checked into the source control system.
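If it helps, a hedged example of regenerating the stubs with the Axis2 1.6.2 code generator; the WSDL location, target package and output directory are placeholders for your own values:
wsdl2java.sh -uri path/to/YourService.wsdl -p com.example.generated -d adb -s -o generated-src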

WEKA - Multi Class Classification - Can't find class called: weka.classifiers.functions.supportVector.RegSMOImproved

I'm trying to train a MultiClassClassifier model in Weka with the base algorithm set to the weka.classifiers.functions.supportVector.RegSMOImproved class, using the following options:
MultiClassClassifier cModel = new MultiClassClassifier();
String options[] = {
    "weka.classifiers.meta.MultiClassClassifier",
    "-M", "0",
    "-R", "2.0",
    "-S", "1",
    "-W", "weka.classifiers.functions.supportVector.RegSMOImproved",
    "-P", "1.0e-12",
    "-L", "1.0e-3",
    "-W", "1"
};

try {
    cModel.setOptions(options);
} catch (Exception e) {
    e.printStackTrace();
}
When I run my code I get the following error:
java.lang.Exception: Can't find class called: weka.classifiers.functions.supportVector.RegSMOImproved
at weka.core.Utils.forName(Utils.java:1073)
at weka.classifiers.AbstractClassifier.forName(AbstractClassifier.java:90)
at weka.classifiers.SingleClassifierEnhancer.setOptions(SingleClassifierEnhancer.java:108)
at weka.classifiers.RandomizableSingleClassifierEnhancer.setOptions(RandomizableSingleClassifierEnhancer.java:93)
at weka.classifiers.meta.MultiClassClassifier.setOptions(MultiClassClassifier.java:802)
at myApp.Main.trainMultiClassClassifier(Main.java:983)
at myApp.Main.createSets(Main.java:903)
at myApp.Main.main(Main.java:387)
What is the correct class name for using the RegSMOImproved algorithm, if not weka.classifiers.functions.supportVector.RegSMOImproved?
Am I missing something else here, an additional setting perhaps, or some kind of a parent class?
I'm using the Weka developer branch from here. If there is anything I left out unintentionally, please let me know and I'll make an edit ASAP.
Thank you in advance.
EDIT 1:
I'm trying to accomplish multi-class classification where I would train my model/models as one class vs. the rest. My data is balanced (100 samples per class). This is what I've found so far:
http://weka.8497.n7.nabble.com/meta-multi-class-classifier-with-the-option-smo-td26548.html
EDIT 2:
So I've changed my options object to:
String options[] = {
    "-M", "0",
    "-R", "2.0",
    "-S", "1",
    "-W", "weka.classifiers.functions.SMO",
    "--",
    "-C", "1",
    "-L", "0.001",
    "-P", "1.0e-12",
    "-M",
    "-N", "0",
    "-V", "-1",
    "-W", "1",
    "-K", "weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0"
};
This seems to go through setOptions(), so I had clearly mixed up the two SMO classes from the supportVector and functions packages. I've also read that I need to set the -M and -V properties for SMO in order for my MultiClassClassifier to work correctly. So I've turned on "fitting calibration models to SVM outputs" with the -M property and set the number of folds for cross-validation to -1 (the default) with the -V property.
I assume the number-of-folds property for cross-validation has to be set for testing purposes. I will have to check out posts on cross-validation from this point.
Thank you again!
A) You probably shouldn't be using the developer branch unless you have a specific need. For all we know they are moving stuff around and it's potentially broken.
B) RegSMOImproved is for regression, not classification, so part of your problem could be that mismatch between the MultiClassClassifier and a regression algorithm.
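If it helps, here is a minimal sketch of configuring the meta-classifier programmatically with a classification SVM (SMO) as the base learner instead of passing everything through setOptions() strings. The ARFF path is a placeholder:
import weka.classifiers.functions.SMO;
import weka.classifiers.meta.MultiClassClassifier;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MultiClassExample {
    public static void main(String[] args) throws Exception {
        // Placeholder training file; assumes the last attribute is the class.
        Instances data = DataSource.read("data/train.arff");
        data.setClassIndex(data.numAttributes() - 1);

        SMO smo = new SMO();                       // classification SVM, not RegSMOImproved
        MultiClassClassifier cModel = new MultiClassClassifier();
        cModel.setClassifier(smo);                 // set the base classifier directly
        cModel.buildClassifier(data);
    }
}
Setting the base classifier object directly avoids the string-parsing pitfalls of setOptions() entirely.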

SURF and SIFT algorithms don't work in OpenCV 3.0 Java

I am using OpenCV 3.0 (the latest version) in Java, but when I use the SURF or SIFT algorithm it doesn't work and throws an exception which says: OpenCV Error: Bad argument (Specified feature detector type is not supported.) in cv::javaFeatureDetector::create
I have googled, but the answers given to this kind of question did not solve my problem. If anyone knows about this problem, please let me know.
Thanks in advance!
Update: In the code below, the third line throws the exception.
Mat img_object = Imgcodecs.imread("data/img_object.jpg");
Mat img_scene = Imgcodecs.imread("data/img_scene.jpg");
FeatureDetector detector = FeatureDetector.create(FeatureDetector.SURF);
MatOfKeyPoint keypoints_object = new MatOfKeyPoint();
MatOfKeyPoint keypoints_scene = new MatOfKeyPoint();
detector.detect(img_object, keypoints_object);
detector.detect(img_scene, keypoints_scene);
If you compile OpenCV from source, you can fix the missing bindings by editing opencv/modules/features2d/misc/java/src/cpp/features2d_manual.hpp yourself.
I fixed it by making the following changes:
(line 6)
#ifdef HAVE_OPENCV_FEATURES2D
#include "opencv2/features2d.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "features2d_converters.hpp"

...(line 121)
    case SIFT:
        fd = xfeatures2d::SIFT::create();
        break;
    case SURF:
        fd = xfeatures2d::SURF::create();
        break;

...(line 353)
    case SIFT:
        de = xfeatures2d::SIFT::create();
        break;
    case SURF:
        de = xfeatures2d::SURF::create();
        break;
The only requirement is that you build the optional opencv_contrib module along with your sources (you can download the git project from https://github.com/Itseez/opencv_contrib and just set its local path in OpenCV's ccmake settings).
Oh, and keep in mind that SIFT and SURF are non-free software ^^;
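As a quick sanity check after rebuilding with the changes above, something like the following should then run without the "not supported" error. The image path and the loadLibrary call are assumptions about your setup, not part of the original answer:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;

public class SurfCheck {
    public static void main(String[] args) {
        // Load the rebuilt native library.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat img = Imgcodecs.imread("data/img_object.jpg");
        FeatureDetector detector = FeatureDetector.create(FeatureDetector.SURF);
        DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.SURF);

        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        Mat descriptors = new Mat();
        detector.detect(img, keypoints);
        extractor.compute(img, keypoints, descriptors);

        System.out.println("Keypoints found: " + keypoints.total());
    }
}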
That is because they are not free in newer versions of OpenCV (3+). I faced that problem some time ago. You have to:
Download OpenCV (if you have not)
Download the non-free part from the opencv_contrib GitHub repo
Generate the makefiles with cmake -DBUILD_SHARED_LIBS=OFF, pointing at the non-free modules with the -DOPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules option, and build with make -j8 (or however many parallel jobs your machine can handle)
Edit the features2d_manual.hpp file, including opencv2/xfeatures2d.hpp and adding the necessary code for the SIFT and SURF cases, which are commented out and not defined:
fd = xfeatures2d::SIFT::create(); for the SIFT detector and de = xfeatures2d::SIFT::create(); for the SIFT descriptor extractor. Do the same for SURF if you want to use it too.
I wrote this post explaining step by step how to compile the non-free OpenCV part in order to use non-free tools like SIFT or SURF.
Compile OpenCV non-free part.
I believe changing the features2d module (the FeatureDetector class or any other classes from features2d_manual.hpp) to enable methods from OpenCV contrib modules is a less attractive approach because it leads to a circular dependency between the "core" OpenCV and the extensions (which can be non-free or experimental).
There is another way to fix this issue without touching the features2d classes. Making changes in the xfeatures2d CMakeLists.txt as described here leads to the generation of Java wrappers for SIFT and SURF - opencv-310.jar now has an org.opencv.xfeatures2d package. Some fix was required in /opencv/modules/java/generator/gen_java.py, namely inserting the two lines marked below:
def addImports(self, ctype):
    if ctype.startswith('vector_vector'):
        self.imports.add("org.opencv.core.Mat")
        self.imports.add("org.opencv.utils.Converters")
        self.imports.add("java.util.List")
        self.imports.add("java.util.ArrayList")
        self.addImports(ctype.replace('vector_vector', 'vector'))
    elif ctype.startswith('Feature2D'):                      #added
        self.imports.add("org.opencv.features2d.Feature2D")  #added
    elif ctype.startswith('vector'):
        self.imports.add("org.opencv.core.Mat")
        self.imports.add('java.util.ArrayList')
        if type_dict[ctype]['j_type'].startswith('MatOf'):
            self.imports.add("org.opencv.core." + type_dict[ctype]['j_type'])
        else:
            self.imports.add("java.util.List")
            self.imports.add("org.opencv.utils.Converters")
        self.addImports(ctype.replace('vector_', ''))
After these changes the wrappers are generated successfully. However, the main problem still remains: how to use these wrappers from Java. For example, SIFT.create() returns a pointer to a new SIFT object, but calling any class method (for instance detect()) crashes Java. I also noticed that using MSER.create() directly from Java leads to the same crash.
So it looks like the problem is isolated to the way Feature2D.create() methods are wrapped in Java. The solution seems to be the following (again, changing /opencv/modules/java/generator/gen_java.py):
Find the string:
ret = "%(ctype)s* curval = new %(ctype)s(_retval_);return (jlong)curval->get();" % { 'ctype':fi.ctype }
Replace it with the following:
ret = "%(ctype)s* curval = new %(ctype)s(_retval_);return (jlong)curval;" % { 'ctype':fi.ctype }
Rebuild OpenCV. That is it: all create() methods will start working properly for all children of the Feature2D class, including experimental and non-free ones. The FeatureDetector/DescriptorExtractor wrappers can be deprecated, I think, as Feature2D is much easier to use.
BUT! I'm not sure whether the suggested fix is safe for other OpenCV modules. Is there a scenario where (jlong)curval needs to be dereferenced? It looks like the same fix was already suggested here.
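For illustration, a hypothetical usage of the generated wrappers once the patched opencv-310.jar and the matching native library are on the path; the image path and the loadLibrary call are placeholders:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.xfeatures2d.SIFT;

public class SiftWrapperCheck {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat img = Imgcodecs.imread("data/img_object.jpg");
        SIFT sift = SIFT.create();                  // no longer crashes after the gen_java.py fix
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        Mat descriptors = new Mat();
        sift.detect(img, keypoints);                // inherited from Feature2D
        sift.compute(img, keypoints, descriptors);  // inherited from Feature2D
    }
}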

Warning: File for type '[Insert class here]' created in the last round will not be subject to annotation processing

I switched an existing code base to Java 7 and I keep getting this warning:
warning: File for type '[Insert class here]' created in the last round
will not be subject to annotation processing.
A quick search reveals that no one has hit this warning.
It's not documented in the javac compiler source either:
From OpenJDK\langtools\src\share\classes\com\sun\tools\javac\processing\JavacFiler.java
private JavaFileObject createSourceOrClassFile(boolean isSourceFile, String name) throws IOException {
    checkNameAndExistence(name, isSourceFile);
    Location loc = (isSourceFile ? SOURCE_OUTPUT : CLASS_OUTPUT);
    JavaFileObject.Kind kind = (isSourceFile ?
                                JavaFileObject.Kind.SOURCE :
                                JavaFileObject.Kind.CLASS);

    JavaFileObject fileObject =
        fileManager.getJavaFileForOutput(loc, name, kind, null);
    checkFileReopening(fileObject, true);

    if (lastRound) // <------------------------------- TRIGGERS WARNING
        log.warning("proc.file.create.last.round", name);

    if (isSourceFile)
        aggregateGeneratedSourceNames.add(name);
    else
        aggregateGeneratedClassNames.add(name);
    openTypeNames.add(name);

    return new FilerOutputJavaFileObject(name, fileObject);
}
What does this mean and what steps can I take to clear this warning?
Thanks.
The warning
warning: File for type '[Insert class here]' created in the last round
will not be subject to annotation processing
means that you were running an annotation processor that creates a new class or source file using a javax.annotation.processing.Filer implementation (provided through the javax.annotation.processing.ProcessingEnvironment) although the processing tool had already decided it is "in the last round".
This may be a problem (and thus the warning) because the generated file itself may contain annotations that will be ignored by the annotation processor (because it is not going to do a further round).
The above ought to answer the first part of your question
What does this mean and what steps can I take to clear this warning?
(you figured this out already by yourself, didn't you :-))
What possible steps can you take? Check your annotation processors:
1) Do you really have to use filer.createClassFile / filer.createSourceFile in the very last round of the annotation processor? Usually one uses the filer object inside a code block like
for (TypeElement annotation : annotations) {
...
}
(in method process). This ensures that the annotation processor will not be in its last round (the last round always being the one having an empty set of annotations).
2) If you really can't avoid writing your generated files in the last round and these files are source files, trick the annotation processor and use the method "createResource" of the filer object (take "SOURCE_OUTPUT" as location).
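Here is a minimal sketch of the idiom from point 1, generating inside the per-annotation loop so that nothing is created in the last round. The annotation and generated class names are made up for the example:
import java.io.IOException;
import java.io.Writer;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.TypeElement;
import javax.tools.JavaFileObject;

@SupportedAnnotationTypes("com.example.GenerateStub")   // hypothetical annotation
@SupportedSourceVersion(SourceVersion.RELEASE_7)
public class StubGeneratingProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        // The last round always has an empty annotation set, so code inside
        // this loop never runs in the last round and the warning is avoided.
        for (TypeElement annotation : annotations) {
            try {
                JavaFileObject file = processingEnv.getFiler().createSourceFile("com.example.GeneratedStub");
                Writer out = file.openWriter();
                out.write("package com.example;\npublic class GeneratedStub { }");
                out.close();
            } catch (IOException e) {
                // e.g. a FilerException if the file was already created in an earlier round
            }
        }
        return false;
    }
}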
In the OpenJDK test case this warning is produced because the processor uses "processingOver()" to write a new file exactly in the last round.
public boolean process(Set<? extends TypeElement> elems, RoundEnvironment renv) {
    if (renv.processingOver()) { // Write only at last round
        Filer filer = processingEnv.getFiler();
        Messager messager = processingEnv.getMessager();
        try {
            JavaFileObject fo = filer.createSourceFile("Gen");
            Writer out = fo.openWriter();
            out.write("class Gen { }");
            out.close();
            messager.printMessage(Diagnostic.Kind.NOTE, "File 'Gen' created");
        } catch (IOException e) {
            messager.printMessage(Diagnostic.Kind.ERROR, e.toString());
        }
    }
    return false;
}
I modified the original example code a bit: added the diagnostic note "File 'Gen' created", replaced the "*" mask with "org.junit.runner.RunWith" and set the return value to "true". The resulting compiler log was:
Round 1:
    input files: {ProcFileCreateLastRound}
    annotations: [org.junit.runner.RunWith]
    last round: false
Processor AnnoProc matches [org.junit.runner.RunWith] and returns true.
Round 2:
    input files: {}
    annotations: []
    last round: true
Note: File 'Gen' created
Compilation completed successfully with 1 warning
0 errors
1 warning
Warning: File for type 'Gen' created in the last round will not be subject to annotation processing.
If we remove my custom note from the log, it's hard to tell that the file 'Gen' was actually created in 'Round 2', the last round. So the basic advice applies: if in doubt, add more logging.
There is also a little bit of useful info on this page:
http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/javac.html
Read the section about "ANNOTATION PROCESSING" and try to get more info with these compiler options:
-XprintProcessorInfo
    Print information about which annotations a processor is asked to process.
-XprintRounds
    Print information about initial and subsequent annotation processing rounds.
I poked around the Java 7 compiler options and I found this:
-implicit:{class,none}
Controls the generation of class files for implicitly loaded source files. To automatically generate class files, use -implicit:class. To suppress class file generation, use -implicit:none. If this option is not specified, the default is to automatically generate class files. In this case, the compiler will issue a warning if any such class files are generated when also doing annotation processing. The warning will not be issued if this option is set explicitly. See Searching For Types.
Source
Can you try setting -implicit:class explicitly?
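For example, an invocation along these lines both sets -implicit explicitly and prints the round information mentioned above; the processor and source names are placeholders:
javac -implicit:class -XprintRounds -XprintProcessorInfo \
      -processor com.example.MyProcessor -d out src/Main.java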

Problem validating against an XSD with Java5

I'm trying to validate an Atom feed with Java 5 (JRE 1.5.0 update 11). The code I have works without problem in Java 6, but fails when running in Java 5 with a
org.xml.sax.SAXParseException: src-resolve: Cannot resolve the name 'xml:base' to a(n) 'attribute declaration' component.
I think I remember reading something about the version of Xerces bundled with Java 5 having problems with some schemas, but I can't find the workaround. Is it a known problem? Do I have some error in my code?
public static void validate() throws SAXException, IOException {
    List<Source> schemas = new ArrayList<Source>();
    schemas.add(new StreamSource(AtomValidator.class.getResourceAsStream("/atom.xsd")));
    schemas.add(new StreamSource(AtomValidator.class.getResourceAsStream("/dc.xsd")));

    // Lookup a factory for the W3C XML Schema language
    SchemaFactory factory = SchemaFactory.newInstance("http://www.w3.org/2001/XMLSchema");

    // Compile the schemas.
    Schema schema = factory.newSchema(schemas.toArray(new Source[schemas.size()]));
    Validator validator = schema.newValidator();

    // load the file to validate
    Source source = new StreamSource(AtomValidator.class.getResourceAsStream("/sample-feed.xml"));

    // check the document
    validator.validate(source);
}
Update: I tried the method below, but I still have the same problem if I use Xerces 2.9.0. I also tried adding xml.xsd to the list of schemas (as xml:base is defined in xml.xsd), but this time I get
Exception in thread "main" org.xml.sax.SAXParseException: schema_reference.4: Failed to read schema document 'null', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
Update 2: I tried to configure a proxy with the VM arguments -Dhttp.proxyHost=<proxy.host.com> -Dhttp.proxyPort=8080 and now it works. I'll try to post a "real answer" from home.
And sorry, I can't reply as a comment: for security reasons XHR is disabled at work...
Indeed, people have been mentioning that the Sun-provided SchemaFactory in Java 5 gives trouble.
So: did you include Xerces in your project yourself?
After including Xerces, you need to ensure it is being used. If you want to hardcode it (well, as a minimal requirement you'd probably use some application properties file to enable and populate the following code):
String schemaFactoryProperty =
    "javax.xml.validation.SchemaFactory:" + XMLConstants.W3C_XML_SCHEMA_NS_URI;
System.setProperty(schemaFactoryProperty,
    "org.apache.xerces.jaxp.validation.XMLSchemaFactory");
SchemaFactory factory =
    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Or, if you don't want to hardcode, or when your troublesome code would be in some 3rd party library that you cannot change, set it on the java command line or environment options. For example (on one line of course):
set JAVA_OPTS =
"-Djavax.xml.validation.SchemaFactory:http://www.w3.org/2001/XMLSchema
=org.apache.xerces.jaxp.validation.XMLSchemaFactory"
By the way: apart from the Sun-included SchemaFactory implementation giving trouble (something like com.sun.org.apache.xerces.internal.jaxp.validation.xs.schemaFactoryImpl), it also seems that the "discovery" of non-JDK implementations fails in that version. If I understand correctly then, normally, just including Xerces would in fact make SchemaFactory#newInstance find that included library and give it precedence over the Sun implementation. To my knowledge, that fails as well in Java 5, making the above configuration required.
I tried to configure a proxy with the VM arguments -Dhttp.proxyHost=<proxy.host.com> -Dhttp.proxyPort=8080 and now it works.
Ah, I didn't realize that xml.xsd is in fact the one referenced as http://www.w3.org/2001/xml.xsd or something like that. That should teach us to always show some XML and XSD fragments as well. ;-)
So, am I correct to assume that 1.) to fix the Java 5 issue, you still needed to include Xerces and set the system property, and that 2.) you did not have xml.xsd available locally?
Before you found your solution, did you happen to try using getResource rather than getResourceAsStream, to see if the exception would then have showed you some more details?
If you actually did have xml.xsd available (so: if getResource did in fact yield a URL) then I wonder what Xerces was trying to fetch from the internet then. Or maybe you did not add that schema to the list prior to adding your own schemas? The order is important: dependencies must be added first.
For whoever gets to this question using the search: maybe using a custom EntityResolver could have indicated the source of the problem as well (if only writing something to the log and just returning null to tell Xerces to use the default behavior).
Hmmm, just read your "comment" -- editing does not alert people for new replies, so time to ask your boss for some iPhone or some other gadget that is connected to the net directly ;-)
Well, I assume you added:
schemas.add(
new StreamSource(AtomValidator.class.getResourceAsStream("/xml.xsd")));
If so, is xml.xsd actually to be found on the classpath then? I wonder if the getResourceAsStream did not yield null in your case, and how new StreamSource(null) would act then.
Even if getResourceAsStream did not yield null, the resulting StreamSource would still not know where it was loaded from, which may be a problem when trying to include references. So, what if you use the constructor StreamSource(String systemId) instead:
schemas.add(new StreamSource(AtomValidator.class.getResource("/atom.xsd")));
schemas.add(new StreamSource(AtomValidator.class.getResource("/dc.xsd")));
You might also use StreamSource(InputStream inputStream, String systemId), but I don't see any advantage over the above two lines. However, the documentation explains why passing the systemId in either of the 2 constructors seems good:
This constructor allows the systemID to be set in addition to the input stream, which allows relative URIs to be processed.
Likewise, setSystemId(String systemId) explains a bit:
The system identifier is optional if there is a byte stream or a character stream, but it is still useful to provide one, since the application can use it to resolve relative URIs and can include it in error messages and warnings (the parser will attempt to open a connection to the URI only if there is no byte stream or character stream specified).
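Putting the above together, here is a sketch of the schema list with explicit systemIds, adding the dependency (xml.xsd) before the schemas that import it; this assumes all three files are actually on the classpath:
List<Source> schemas = new ArrayList<Source>();
// xml.xsd first: atom.xsd refers to attributes such as xml:base defined in it.
schemas.add(new StreamSource(AtomValidator.class.getResource("/xml.xsd").toExternalForm()));
schemas.add(new StreamSource(AtomValidator.class.getResource("/dc.xsd").toExternalForm()));
schemas.add(new StreamSource(AtomValidator.class.getResource("/atom.xsd").toExternalForm()));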
If this doesn't work out, then maybe some custom error handler can give you more details:
ErrorHandlerImpl errorHandler = new ErrorHandlerImpl();
validator.setErrorHandler(errorHandler);
:
:
validator.validate(source);

if (errorHandler.hasErrors()) {
    LOG.error(errorHandler.getMessages());
    throw new [..];
}
if (errorHandler.hasWarnings()) {
    LOG.warn(errorHandler.getMessages());
}
...using the following ErrorHandler to capture the validation errors and continue parsing as far as possible:
import org.xml.sax.helpers.DefaultHandler;

private class ErrorHandlerImpl extends DefaultHandler {

    private String messages = "";
    private boolean validationError = false;
    private boolean validationWarning = false;

    public void error(SAXParseException exception) throws SAXException {
        messages += "Error: " + exception.getMessage() + "\n";
        validationError = true;
    }

    public void fatalError(SAXParseException exception) throws SAXException {
        messages += "Fatal: " + exception.getMessage();
        validationError = true;
    }

    public void warning(SAXParseException exception) throws SAXException {
        messages += "Warn: " + exception.getMessage();
        validationWarning = true;
    }

    public boolean hasErrors() {
        return validationError;
    }

    public boolean hasWarnings() {
        return validationWarning;
    }

    public String getMessages() {
        return messages;
    }
}
