I've been going at this for 4 hours now, and I simply can't see what I'm doing wrong. I have two files:
MyCrawler.java
Controller.java
MyCrawler.java
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;
import java.util.List;
import java.util.regex.Pattern;
import org.apache.http.Header;
public class MyCrawler extends WebCrawler {
private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|bmp|gif|jpe?g" + "|png|tiff?|mid|mp2|mp3|mp4"
+ "|wav|avi|mov|mpeg|ram|m4v|pdf" + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");
/**
* You should implement this function to specify whether the given url
* should be crawled or not (based on your crawling logic).
*/
@Override
public boolean shouldVisit(WebURL url) {
String href = url.getURL().toLowerCase();
return !FILTERS.matcher(href).matches() && href.startsWith("http://www.ics.uci.edu/");
}
/**
* This function is called when a page is fetched and ready to be processed
* by your program.
*/
@Override
public void visit(Page page) {
int docid = page.getWebURL().getDocid();
String url = page.getWebURL().getURL();
String domain = page.getWebURL().getDomain();
String path = page.getWebURL().getPath();
String subDomain = page.getWebURL().getSubDomain();
String parentUrl = page.getWebURL().getParentUrl();
String anchor = page.getWebURL().getAnchor();
System.out.println("Docid: " + docid);
System.out.println("URL: " + url);
System.out.println("Domain: '" + domain + "'");
System.out.println("Sub-domain: '" + subDomain + "'");
System.out.println("Path: '" + path + "'");
System.out.println("Parent page: " + parentUrl);
System.out.println("Anchor text: " + anchor);
if (page.getParseData() instanceof HtmlParseData) {
HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
String text = htmlParseData.getText();
String html = htmlParseData.getHtml();
List<WebURL> links = htmlParseData.getOutgoingUrls();
System.out.println("Text length: " + text.length());
System.out.println("Html length: " + html.length());
System.out.println("Number of outgoing links: " + links.size());
}
Header[] responseHeaders = page.getFetchResponseHeaders();
if (responseHeaders != null) {
System.out.println("Response headers:");
for (Header header : responseHeaders) {
System.out.println("\t" + header.getName() + ": " + header.getValue());
}
}
System.out.println("=============");
}
}
Controller.java
package edu.crawler;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;
import java.util.List;
import java.util.regex.Pattern;
import org.apache.http.Header;
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
public class Controller
{
public static void main(String[] args) throws Exception
{
String crawlStorageFolder = "../data/";
int numberOfCrawlers = 7;
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder(crawlStorageFolder);
/*
* Instantiate the controller for this crawl.
*/
PageFetcher pageFetcher = new PageFetcher(config);
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
/*
* For each crawl, you need to add some seed urls. These are the first
* URLs that are fetched and then the crawler starts following links
* which are found in these pages
*/
controller.addSeed("http://www.ics.uci.edu/~welling/");
controller.addSeed("http://www.ics.uci.edu/~lopes/");
controller.addSeed("http://www.ics.uci.edu/");
/*
* Start the crawl. This is a blocking operation, meaning that your code
* will reach the line after this only when crawling is finished.
*/
controller.start(MyCrawler, numberOfCrawlers);
}
}
The Structure is as follows:
java/MyCrawler.java
java/Controller.java
jars/... --> all the jars crawler4j
I try to compile this on a WINDOWS machine using:
javac -cp "C:\xampp\htdocs\crawlcrowd\www\java\jars\*;C:\xampp\htdocs\crawlcrowd\www\java\*" MyCrawler.java
This works perfectly, and I end up with:
java/MyCrawler.class
However, when I type:
javac -cp "C:\xampp\htdocs\crawlcrowd\www\java\jars\*;C:\xampp\htdocs\crawlcrowd\www\java\*" Controller.java
it bombs out with:
Controller.java:50: error: cannot find symbol
controller.start(MyCrawler, numberOfCrawlers);
^
symbol: variable MyCrawler
location: class Controller
1 error
So I think I'm missing something that would make this new executable class "aware" of MyCrawler.class. I have tried fiddling with the classpath in the javac command line, and I've also tried setting it in my environment variables... no luck.
Any idea how I can get this to work?
UPDATE
I got most of this code from the Google Code page itself. But I just can't figure out what must go there. Even if I try this:
MyCrawler mc = new MyCrawler();
No luck. Somehow Controller.class does not know about MyCrawler.class.
UPDATE 2
I don't think it matters, since the problem is clearly that it can't find the class, but either way, here is the signature of CrawlController's start method. Taken from here.
/**
* Start the crawling session and wait for it to finish.
*
* @param _c
* the class that implements the logic for crawler threads
* @param numberOfCrawlers
* the number of concurrent threads that will be contributing in
* this crawling session.
*/
public <T extends WebCrawler> void start(final Class<T> _c, final int numberOfCrawlers) {
this.start(_c, numberOfCrawlers, true);
}
I am in fact passing in a "crawler", since I'm passing in "MyCrawler". The problem is that the application doesn't know what MyCrawler is.
A couple of things come to mind:
Is your MyCrawler extending edu.uci.ics.crawler4j.crawler.WebCrawler?
public class MyCrawler extends WebCrawler
Are you passing in MyCrawler.class (i.e., as a class) into controller.start?
controller.start(MyCrawler.class, numberOfCrawlers);
Both of these need to be satisfied in order for the controller to compile and run. Also, Crawler4j has some great examples here:
https://code.google.com/p/crawler4j/source/browse/src/test/java/edu/uci/ics/crawler4j/examples/basic/BasicCrawler.java
https://code.google.com/p/crawler4j/source/browse/src/test/java/edu/uci/ics/crawler4j/examples/basic/BasicCrawlController.java
These two classes will compile and run right away (run BasicCrawlController), so they're a good starting place if you are running into any issues.
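For completeness, here's a minimal sketch of the corrected call in context; it reuses the names from the question's Controller, and the only real change is passing MyCrawler.class instead of the bare identifier:
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder("../data/");
PageFetcher pageFetcher = new PageFetcher(config);
RobotstxtServer robotstxtServer = new RobotstxtServer(new RobotstxtConfig(), pageFetcher);
CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
controller.addSeed("http://www.ics.uci.edu/");
// Pass the Class object, not a bare name or an instance:
controller.start(MyCrawler.class, 7);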
The parameters for start() should be a class and the number of crawlers. It's throwing an error because you are passing the bare name MyCrawler rather than the crawler class itself. Use the start method as shown below and it should work:
controller.start(MyCrawler.class, numberOfCrawlers)
Here you are passing the bare class name MyCrawler as a parameter:
controller.start(MyCrawler, numberOfCrawlers);
A bare class name is not a value, so it cannot be used as a parameter by itself; pass MyCrawler.class instead.
I am also working a little bit on crawling!
Related
I copied a simple web crawler from the internet and then tried to run the application from a test class. Every time I try to run it I get an "Exception in thread "main" java.lang.NoClassDefFoundError: org/jsoup/Jsoup" error. I imported the jsoup jar as an external JAR in a library first, because I needed it for the HTTP stuff.
Error messages:
Exception in thread "main" java.lang.NoClassDefFoundError: org/jsoup/Jsoup
at com.copiedcrawler.SpiderLeg.crawl(SpiderLeg.java:35)
at com.copiedcrawler.Spider.search(Spider.java:40)
at com.copiedcrawler.SpiderTest.main(SpiderTest.java:9)
Caused by: java.lang.ClassNotFoundException: org.jsoup.Jsoup
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:602)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 3 more
Spider Class
package com.copiedcrawler;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;
public class Spider
{
private static final int MAX_PAGES_TO_SEARCH = 10;
private Set<String> pagesVisited = new HashSet<String>();
private List<String> pagesToVisit = new LinkedList<String>();
public void search(String url, String searchWord)
{
while(this.pagesVisited.size() < MAX_PAGES_TO_SEARCH)
{
String currentUrl;
SpiderLeg leg = new SpiderLeg();
if(this.pagesToVisit.isEmpty())
{
currentUrl = url;
this.pagesVisited.add(url);
}
else
{
currentUrl = this.nextUrl();
}
leg.crawl(currentUrl); // Lots of stuff happening here. Look at the crawl method in
// SpiderLeg
boolean success = leg.searchForWord(searchWord);
if(success)
{
System.out.println(String.format("**Success** Word %s found at %s", searchWord, currentUrl));
break;
}
this.pagesToVisit.addAll(leg.getLinks());
}
System.out.println("\n**Done** Visited " + this.pagesVisited.size() + " web page(s)");
}
/**
* Returns the next URL to visit (in the order that they were found). We also do a check to make
* sure this method doesn't return a URL that has already been visited.
*
* @return
*/
private String nextUrl()
{
String nextUrl;
do
{
nextUrl = this.pagesToVisit.remove(0);
} while(this.pagesVisited.contains(nextUrl));
this.pagesVisited.add(nextUrl);
return nextUrl;
}
}
SpiderLeg class
package com.copiedcrawler;
import java.io.IOException;
import java.util.LinkedList;
import java.util.List;
import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
public class SpiderLeg
{
// We'll use a fake USER_AGENT so the web server thinks the robot is a normal web browser.
private static final String USER_AGENT =
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.112 Safari/535.1";
private List<String> links = new LinkedList<String>();
private Document htmlDocument;
/**
* This performs all the work. It makes an HTTP request, checks the response, and then gathers
* up all the links on the page. Perform a searchForWord after the successful crawl
*
* @param url
* - The URL to visit
* @return whether or not the crawl was successful
*/
public boolean crawl(String url)
{
try
{
Connection connection = Jsoup.connect(url).userAgent(USER_AGENT);
Document htmlDocument = connection.get();
this.htmlDocument = htmlDocument;
if(connection.response().statusCode() == 200) // 200 is the HTTP OK status code
// indicating that everything is great.
{
System.out.println("\n**Visiting** Received web page at " + url);
}
if(!connection.response().contentType().contains("text/html"))
{
System.out.println("**Failure** Retrieved something other than HTML");
return false;
}
Elements linksOnPage = htmlDocument.select("a[href]");
System.out.println("Found (" + linksOnPage.size() + ") links");
for(Element link : linksOnPage)
{
this.links.add(link.absUrl("href"));
}
return true;
}
catch(IOException ioe)
{
// We were not successful in our HTTP request
return false;
}
}
/**
* Performs a search on the body of the HTML document that is retrieved. This method should
* only be called after a successful crawl.
*
* @param searchWord
* - The word or string to look for
* @return whether or not the word was found
*/
public boolean searchForWord(String searchWord)
{
// Defensive coding. This method should only be used after a successful crawl.
if(this.htmlDocument == null)
{
System.out.println("ERROR! Call crawl() before performing analysis on the document");
return false;
}
System.out.println("Searching for the word " + searchWord + "...");
String bodyText = this.htmlDocument.body().text();
return bodyText.toLowerCase().contains(searchWord.toLowerCase());
}
public List<String> getLinks()
{
return this.links;
}
}
SpiderTest class
package com.copiedcrawler;
public class SpiderTest {
public static void main(String[] args) {
// TODO Auto-generated method stub
Spider s1 = new Spider();
s1.search("https://www.w3schools.com/html/", "html");
}
}
Based on the stack trace, you are running the Java program from the command line and forgot to add jsoup to the classpath. Try running
java -cp classes:libs/jsoup.jar com.copiedcrawler.SpiderTest
where classes is the folder with your compiled classes and libs is the folder with the libraries.
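If you also compile from the command line, the full sequence might look like this (the src/classes/libs layout is an assumption, so adjust to your own paths; on Windows the classpath separator is ; instead of :):
javac -cp libs/jsoup.jar -d classes src/com/copiedcrawler/*.java
java -cp classes:libs/jsoup.jar com.copiedcrawler.SpiderTest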
You might have added the jsoup JAR file to the Modulepath.
You need to add the JAR file to the Classpath instead.
Follow the below steps:
Remove the Jsoup JAR from the libraries.
Project->Build Path->Configure Build Path->Libraries->ClassPath->Add External JARs.
Apply and Close.
Re-run the project.
Now it should work.
I am working on a project to crawl a small web directory and have implemented a crawler using crawler4j. I know that RobotstxtServer should be checking whether a file is allowed or disallowed by the robots.txt file, but mine is still crawling a directory that should not be visited.
I have read over the source code and my code many times, but I can't seem to figure out why this is. In short, why isn't my program recognizing the /donotgohere/ directory that the robots.txt file says not to go to?
Below is my code for the program. Any help would be awesome. Thank you!
Crawler:
package crawler_Project1_AndrewCranmer;
import java.util.Set;
import java.util.regex.Pattern;
import java.io.IOException;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;
public class MyCrawler extends WebCrawler
{
private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|gif|jpg|png|mp3|mp3|zip|gz))$");
@Override public boolean shouldVisit(Page referringPage, WebURL url)
{
String href = url.getURL().toLowerCase();
return !FILTERS.matcher(href).matches()
&& href.startsWith("http://lyle.smu.edu/~fmoore");
}
@Override public void visit(Page page)
{
String url = page.getWebURL().getURL();
System.out.println("URL: " + url);
if(page.getParseData() instanceof HtmlParseData)
{
HtmlParseData h = (HtmlParseData)page.getParseData();
String text = h.getText();
String html = h.getHtml();
Set<WebURL> links = h.getOutgoingUrls();
}
}
}
Controller:
package crawler_Project1_AndrewCranmer;
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
public class Controller
{
public static void main(String[] args) throws Exception
{
int numberOfCrawlers = 1;
String crawlStorageFolder = "/data/crawl/root";
CrawlConfig c = new CrawlConfig();
c.setCrawlStorageFolder(crawlStorageFolder);
c.setMaxDepthOfCrawling(-1); //Unlimited Depth
c.setMaxPagesToFetch(-1); //Unlimited Pages
c.setPolitenessDelay(200); //Politeness Delay
PageFetcher pf = new PageFetcher(c);
RobotstxtConfig robots = new RobotstxtConfig();
RobotstxtServer rs = new RobotstxtServer(robots, pf);
CrawlController controller = new CrawlController(c, pf, rs);
controller.addSeed("http://lyle.smu.edu/~fmoore");
controller.start(MyCrawler.class, numberOfCrawlers);
controller.shutdown();
controller.waitUntilFinish();
}
}
crawler4j uses a URL canonicalization process. According to the robotstxt.org website, the de-facto standard only specifies robots.txt files at the domain root. For this reason, crawler4j will only look there for robots.txt.
In your case http://lyle.smu.edu/ does not provide a robots.txt at http://lyle.smu.edu/robots.txt (this gives an HTTP 404).
Your robots.txt is located at http://lyle.smu.edu/~fmoore/robots.txt, but the framework will only look at the domain root (as the de-facto standard specifies) to find this file. For this reason, it will ignore the directives declared in your case.
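You can confirm this with a quick, hypothetical check in plain Java (nothing crawler4j-specific; it just compares the status codes of the two locations mentioned above):
import java.net.HttpURLConnection;
import java.net.URL;
public class RobotsCheck {
    static int status(String url) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.setRequestMethod("HEAD");
        return con.getResponseCode();
    }
    public static void main(String[] args) throws Exception {
        // crawler4j only consults the domain root, which has no robots.txt (404):
        System.out.println(status("http://lyle.smu.edu/robots.txt"));
        // the file that actually contains the directives is never consulted:
        System.out.println(status("http://lyle.smu.edu/~fmoore/robots.txt"));
    }
}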
Does anyone know how to get the same information about which paths are used as is printed at the start of a Dropwizard application? I mean the output after this line:
io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:
GET /path/of/res/test (this.is.the.class.package.info.MyRessource)
POST /path/of/res/test2 (this.is.the.class.package.info.MyRessource2)
I have to check whether a specific path exists.
You'll have to do this on your own. Take a look at the logEndpoints method (which is what actually logs this information, using private methods). You should be able to adapt it to handle the resources from your environment.jersey().getResourceConfig() after you configure your resources in your run method.
Something like:
final ImmutableList.Builder<Class<?>> builder = ImmutableList.builder();
for (Object o : environment.jersey().getResourceConfig().getSingletons()) {
if (o.getClass().isAnnotationPresent(Path.class)) {
builder.add(o.getClass());
}
}
for (Class<?> klass : environment.jersey().getResourceConfig().getClasses()) {
if (klass.isAnnotationPresent(Path.class)) {
builder.add(klass);
}
}
final List<String> endpoints = Lists.newArrayList();
for (Class<?> klass : builder.build()) {
AbstractResource resource = IntrospectionModeller.createResource(klass);
endpoints.add(resource.getPath().getValue());
}
Note that what's in master is slightly ahead of what's in Maven - the above example shows how to get the AbstractResource, which will work with 0.7.1. You'll have to be sure to adapt your method as Dropwizard evolves. This example also doesn't normalize the path, but you can easily add that based on logEndpoints; a sketch follows.
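A minimal sketch of what that normalization might look like (the helper name and exact rules are my own, chosen to mimic the "/a/b" form that logEndpoints prints):
// Hypothetical helper: produce the "/a/b" form that logEndpoints prints.
private static String normalizePath(String path) {
    String p = ("/" + path).replaceAll("/+", "/"); // ensure a leading slash, collapse duplicates
    if (p.length() > 1 && p.endsWith("/")) {
        p = p.substring(0, p.length() - 1);        // drop a trailing slash
    }
    return p;
}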
This solution works for me (DW 0.7.1):
private Multimap<String, String> getEndpoints(Environment environment)
{
Multimap<String, String> resources = ArrayListMultimap.create();
ResourceConfig jrConfig = environment.jersey().getResourceConfig();
Set<Object> dwSingletons = jrConfig.getSingletons();
for (Object singletons : dwSingletons) {
if (singletons.getClass().isAnnotationPresent(Path.class)) {
AbstractResource resource = IntrospectionModeller.createResource(singletons.getClass());
AbstractResource superResource = IntrospectionModeller.createResource(singletons.getClass().getSuperclass());
String uriPrefix = getStringWithoutStartingSlash(resource.getPath().getValue());
for (AbstractResourceMethod srm :resource.getResourceMethods())
{
String uri = uriPrefix;
resources.put(uri,srm.getHttpMethod());
LOG.info("Found http method " +srm.getHttpMethod() + " for the path " + uri + " returning (class) " + srm.getReturnType().getName());
}
for (AbstractSubResourceMethod srm :resource.getSubResourceMethods())
{
//extended resources methods will be added by hand
if(superResource != null){
for (AbstractSubResourceMethod superSrm : superResource.getSubResourceMethods())
{
String srmPath = getStringWithoutStartingSlash(srm.getPath().getValue());
String superSrmPath = getStringWithoutStartingSlash(superSrm.getPath().getValue());
Class<?> srmClass = srm.getDeclaringResource().getResourceClass();
Class<?> superSrmClass = superSrm.getDeclaringResource().getResourceClass();
//add superclass method if methodName is not equal superMethodName
if(srmClass.getSuperclass().equals(superSrmClass) && !srm.getMethod().getName().equals(superSrm.getMethod().getName())){
String uri = uriPrefix + "/" + srmPath + "/" + superSrmPath ;
resources.put(uri,superSrm.getHttpMethod());
LOG.info("Found http method " +superSrm.getHttpMethod() + " for the path " + uri + " returning (class) " + superSrm.getReturnType().getName());
}
}
}
String uri = uriPrefix + "/" + getStringWithoutStartingSlash(srm.getPath().getValue());
resources.put(uri,srm.getHttpMethod());
LOG.info("Found http method " +srm.getHttpMethod() + " for the path " + uri + " returning (class) " + srm.getReturnType().getName());
}
}
}
return resources;
}
But @PathParam annotations are also left plain, e.g. if you have @Path("/{id}") then something like '.../{id}' will be used!
If you extend your resources and the superclass also has a path annotation, then this method will produce information as well, even more than the default DW logEndpoints() method!
FYI: The imports used in class
import java.util.Set;
import javax.ws.rs.Path;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;
import com.sun.jersey.api.core.ResourceConfig;
import com.sun.jersey.api.model.AbstractResource;
import com.sun.jersey.api.model.AbstractResourceMethod;
import com.sun.jersey.api.model.AbstractSubResourceMethod;
import com.sun.jersey.server.impl.modelapi.annotation.IntrospectionModeller;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import io.dropwizard.setup.Environment;
I used a simpler approach to get the same data. All the resources here are Jersey resources.
Map<String, Object> beansWithAnnotation = applicationContext.getBeansWithAnnotation(Path.class);
Collection<Object> values = beansWithAnnotation.values();
for (Object next : values) {
ResourceUtil.getResourceUrls(next);
}
public static List<String> getResourceUrls(Object obj)
{
Resource resource = Resource.from(obj.getClass());
String uriPrefix = resource.getPath();
List<String> urls = new ArrayList<>();
for (Resource res :resource.getChildResources())
{
String uri = uriPrefix + res.getPath();
urls.add(uri);
}
return urls;
}
I have the following code that outputs my and my users' Twitter timeline messages in Java.
I followed this tutorial to get the code below
http://namingexception.wordpress.com/2011/09/12/how-easy-to-make-your-own-twitter-client-using-java/
import java.io.IOException;
import java.util.List;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterException;
import twitter4j.TwitterFactory;
import twitter4j.auth.AccessToken;
public class SimpleTweet {
List<Status> statuses;
private final static String CONSUMER_KEY = "XXXXXX";
private final static String CONSUMER_KEY_SECRET = "XXXXXXX-123";
public void start() throws TwitterException, IOException {
Twitter twitter = new TwitterFactory().getInstance();
twitter.setOAuthConsumer(CONSUMER_KEY, CONSUMER_KEY_SECRET);
String accessToken = getSavedAccessToken();
String accessTokenSecret = getSavedAccessTokenSecret();
AccessToken oathAccessToken = new AccessToken(accessToken,accessTokenSecret);
twitter.setOAuthAccessToken(oathAccessToken);
twitter.updateStatus("Hello world :).");
statuses = twitter.getHomeTimeline();
for (Status each : statuses) {
System.out.println("Sent by: #" + each.getUser().getScreenName()
+ " - " + each.getUser().getName() + "\n" + each.getText()
+ "\n");
}
}// start method ends here
private String getSavedAccessTokenSecret() {
return "vxcvvxcvxcvx";
}
private String getSavedAccessToken() {
return "eweweqweqweqwe";
}
public static void main(String[] args) throws Exception {
new SimpleTweet().start();
}
}
And I get the following output:
Sent by: #tweetrr - rr
Hello to all :).
Sent by: #addthis - AddThis
Just in time for #wordcampnyc, we have updated the AddThis WordPress plugin! Check it:
http://t.co/cgOgRwyl
Now I want the output to be in XML format. I would like to know if there are APIs that do this. Thanks in advance.
You can use Betwixt from Apache, which lets you easily convert either a bean or a hashmap to XML format. So you create a bean called UserStatusBean with fields like sentBy, status, message etc., populate the bean, and output it as XML using BeanWriter.
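A rough sketch of that approach (the class and field names are just examples, and it assumes commons-betwixt is on the classpath; BeanWriter.write() declares checked exceptions, so wrap it in try/catch or a throws clause):
import java.io.StringWriter;
import org.apache.commons.betwixt.io.BeanWriter;
public class UserStatusBean {
    private String sentBy;
    private String message;
    // Betwixt introspects standard getters/setters
    public String getSentBy() { return sentBy; }
    public void setSentBy(String sentBy) { this.sentBy = sentBy; }
    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
}
// Inside the timeline loop of SimpleTweet.start():
UserStatusBean bean = new UserStatusBean();
bean.setSentBy(each.getUser().getScreenName());
bean.setMessage(each.getText());
StringWriter out = new StringWriter();
BeanWriter writer = new BeanWriter(out);
writer.getBindingConfiguration().setMapIDs(false); // don't emit synthetic id attributes
writer.enablePrettyPrint();
writer.write("status", bean);                      // root element name, then the bean
System.out.println(out.toString());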
I want to implement an application that uses the IQEngines image recognition system. I'm using Eclipse for that. I'm trying the test code IQEngines provides, but the result I'm getting is "File apple.jpg doesn't exist".
I'm not sure where the problem lies. It seems like I imported all the relevant JAR files, so the code should work. In Eclipse I changed the main method into onCreate so that the activity can run.
Does anyone have a clue?? Has anyone ever used IQEngines?
There's a javaiqe.java that javaiqe_test.java uses, but it's very long so I don't want to attach it.
package com.Camera;
import java.io.File;
import java.util.ArrayList;
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;
import com.iqengines.javaiqe.javaiqe;
import com.iqengines.javaiqe.javaiqe.IQEQuery;
/**
* IQEngines Java API
*
* test file
*
* @author
*/
public class javaiqe_test extends Activity{
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
TextView tv = (TextView) findViewById(R.id.resultImg);
final String KEY = "64bc6a7fd77643899d3af8b305924165";
final String SECRET = "c27e162ea7c24c619f850014124598";
/*
* An API object is initialized using the API key and secret
*/
iqe = new javaiqe(KEY, SECRET);
uploadObject();
/*
* You can quickly query an image and retrieve results by doing:
*/
File test_file = new File("apple.jpg");
// Query
IQEQuery query = iqe.query(test_file);
System.out.println("query.result : " + query.getResult());
System.out.println("query.qid : " + query.getQID());
tv.setText(query.getResult());
// Update
/* String update = iqe.update();
System.out.println("Update : " + update);*/
// Result
String result = iqe.result(query.getQID(), true);
System.out.println("Result : " + result);
// Upload
//uploadObject();
}
/**
* Sample code for uploading an object
*/
public static void uploadObject() {
// Your object images
ArrayList<File> images = new ArrayList();
images.add(new File("res/drawable/apple.jpg"));
// Object infos
String name = "Computational Geometry, Algorithms and Applications, Third Edition";
// Optional infos
String custom_id = "book0001";
String meta = "{\n\"isbn\": \"9783540779735\"\n}";
boolean json = false;
String collection = "books";
// Upload
//System.out.println("Upload : " + iqe.upload(images, name, custom_id, meta, json, collection));
System.out.println("Upload : " + iqe.upload(images, name));
}
private static javaiqe iqe = null;
}
If you're sure you've imported everything you need, then it could help to clean your Eclipse project and rebuild it. Another reason might be that you've forgotten to 'redeploy' your application.
This might be helpful too.
Good luck! You've probably missed some minor detail; I've been using external JARs without any problems in Android projects.