Google Places API key - Java

So my question is: I have created a map activity and I am using a Google Places API key to make requests that find locations on the map.
I took the basic structure of the search function and I am trying to use it to get results, but I have two problems with this:
1. My API key is limited to 1 search per day; I can't understand why.
2. When I run a search with it I always get zero results; I also don't know why.
Can someone please help me with this issue?
Thx.
Nimrod
Below is my search line:
https://maps.googleapis.com/maps/api/place/findplacefromtext/json?input="+Uri.encode(query)+"&inputtype=textquery&fields=photos,formatted_address,name&key="+apiKey
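That request line can be assembled with plain JDK classes instead of string concatenation with Uri.encode. A minimal sketch, assuming the same endpoint and parameters as above (the query text and "YOUR_API_KEY" are illustrative placeholders):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class FindPlaceUrl {

    // Builds a Find Place request URL; the caller supplies the raw query
    // text and an API key (placeholder here, not a real key).
    static String buildUrl(String query, String apiKey) {
        try {
            return "https://maps.googleapis.com/maps/api/place/findplacefromtext/json"
                    + "?input=" + URLEncoder.encode(query, "UTF-8")
                    + "&inputtype=textquery"
                    + "&fields=photos,formatted_address,name"
                    + "&key=" + apiKey;
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is guaranteed to exist on every JVM.
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("Museum of Contemporary Art", "YOUR_API_KEY"));
    }
}
```

Encoding the input this way rules out zero-result responses caused by raw spaces or non-ASCII characters in the query string.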

New pricing changes went into effect on July 16, 2018. For more information, check out the Guide for Existing Users.
Please check for more info: https://developers.google.com/places/web-service/usage-and-billing

Related

Autodesk Forge java tutorial new bucket creation failure

I was following the Autodesk Forge tutorial for Java, but using their code examples, new bucket creation fails for me with almost no error information beyond "com.autodesk.client.ApiException: error". So I was wondering whether anyone else has tried to build the simple viewer from their tutorial and managed to solve this problem, or at least encountered it.
Their sample program on GitHub is sadly incomplete, so I can't check whether there are any mistakes in the servlet mappings.
at com.autodesk.client.ApiClient.invokeAPI(ApiClient.java:581)
at com.autodesk.client.api.BucketsApi.createBucket(BucketsApi.java:113)
at forgesample.oss.doPost(oss.java:141)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:661)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:496)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:650)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1468)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
EDIT
Apparently, by using a more unique bucket name I managed to create one, but the error given was a plain 400 and the stack trace does not help much.
For the record, here are the requirements for bucket names:
A unique name you assign to a bucket. It must be globally unique
across all applications and regions, otherwise the call will fail.
Possible values: -_.a-z0-9 (between 3-128 characters in length). Note
that you cannot change a bucket key.
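Given those rules, a hedged sketch of generating and validating a bucket key with only the JDK (the "forgesample" prefix is illustrative); appending a random UUID makes a globally unique key very likely:

```java
import java.util.Locale;
import java.util.UUID;
import java.util.regex.Pattern;

public class BucketKeys {

    // Allowed characters per the quoted requirements: -_.a-z0-9, length 3-128.
    private static final Pattern VALID = Pattern.compile("[-_.a-z0-9]{3,128}");

    static boolean isValidBucketKey(String key) {
        return VALID.matcher(key).matches();
    }

    // Lowercase the prefix (uppercase letters are not allowed) and append
    // a random UUID so two applications are very unlikely to collide.
    static String uniqueBucketKey(String prefix) {
        String key = (prefix + "-" + UUID.randomUUID()).toLowerCase(Locale.ROOT);
        return key.length() <= 128 ? key : key.substring(0, 128);
    }

    public static void main(String[] args) {
        System.out.println(uniqueBucketKey("forgesample"));
    }
}
```

Validating the key locally before calling createBucket turns the opaque 400 into an error message you can actually act on.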
And thanks @WidnmaxJ for the feedback; logged for improvements to the tutorial.

Get TEXT-only tweets using Twitter4j? Tweets that do not contain any media

There are a lot of forum posts asking how to get media from Twitter, but how do I do the opposite? I want to get only the tweets which do not contain any image/video/URL. If a tweet contains those, just skip it and search for the next one, because I want to display the full text without the "http://t..." thing at the end. I put this in ...
cb.setIncludeEntitiesEnabled(false);
but I was not sure I did it right. Also, I write this code with the Processing library but in Eclipse, so I guess if you can show me the way in Java I will be fine, but a complete example please. I am very new to Java.
However, I have seen some people mention "filter=image" in the tweet method, but I could not figure out where to put it. I have tried and failed.
Any suggestions? Thanks in advance.
Stack Overflow isn't really designed for general "how do I do this" type questions. It's for specific "I tried X, expected Y, but got Z instead" type questions. Please try to be more specific. Saying "I could not figure it out where to put this in" doesn't really tell us anything. Where exactly did you try to put it? What exactly happened when you tried that? Can you please post a MCVE?
That being said, the best thing you can do is Google something like "twitter4j filter out images" or "twitter4j exclude images". More info:
Get Tweets with Pictures using twitter search api: Notice the second reply mentions the approach of getting all tweets and then manually checking whether it has an image. You could do the same thing to check that a tweet does not contain an image.
How to exclude retweets from my search query results: This mentions an exclude filter and a -filter option that might be helpful.
Please try something and post a MCVE in a new question post if you get stuck. Good luck.
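As a starting point for the manual-check approach mentioned above, here is a plain-Java sketch that only inspects the tweet text for t.co links. This is an assumption-laden shortcut: with Twitter4j itself you would more reliably check Status.getMediaEntities() and Status.getURLEntities() for emptiness, but the string check below runs without the library:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TextOnlyTweets {

    // Twitter wraps attached media and links in t.co short URLs, so a tweet
    // whose text contains no t.co link usually carries no media either.
    private static final Pattern T_CO = Pattern.compile("https?://t\\.co/\\S+");

    static boolean isTextOnly(String tweetText) {
        return !T_CO.matcher(tweetText).find();
    }

    // Keep only the tweets that pass the text-only check.
    static List<String> keepTextOnly(List<String> tweetTexts) {
        return tweetTexts.stream()
                .filter(TextOnlyTweets::isTextOnly)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tweets = Arrays.asList(
                "plain text tweet",
                "look at this https://t.co/abc123");
        System.out.println(keepTextOnly(tweets)); // only the first survives
    }
}
```

With Twitter4j you would apply the same filter to status.getText() inside the loop over your search results, skipping any status where the check fails.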

How to get the Google's search result using Java

According to the answer here, using Gson we can programmatically retrieve the results that Google returns for a query. Nonetheless, two questions remain in my mind:
How can we do a similar thing for Bing?
How can we get more than 4 results based on the referred answer? Because results.getResponseData().getResults().get(n).getUrl() for n>4 throws an exception.
As @Niklas noted, the Google Search API is deprecated, so you should not use it for your project. Currently the only solution would be to make an HTTP request for the HTML search results and then parse them yourself.
In the case of Bing, there is a search API, but it has a limited number of calls for free users. If you need to make a lot of requests, then you will have to pay for it. https://datamarket.azure.com/dataset/5BA839F1-12CE-4CCE-BF57-A49D98D29A44
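A hedged sketch of the parse-it-yourself step, using only the JDK: given an already-fetched HTML string (the fetch itself can be done with HttpURLConnection), a crude regex pulls out anchor hrefs. Real result pages change markup often, so an actual HTML parser such as jsoup would be far more robust; this is only an illustration of the idea:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkExtractor {

    // Naive href extraction; enough to illustrate scraping a results page,
    // but not resilient to single quotes or unusual attribute ordering.
    private static final Pattern HREF = Pattern.compile("<a\\s+[^>]*href=\"([^\"]+)\"");

    static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            links.add(m.group(1));
        }
        return links;
    }

    public static void main(String[] args) {
        String html = "<a href=\"https://example.com/one\">One</a>"
                + "<div><a class=\"r\" href=\"https://example.com/two\">Two</a></div>";
        System.out.println(extractLinks(html));
    }
}
```

Unlike the deprecated API, this approach is not capped at 4 results: you get as many links as the fetched page contains.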

SVNClient.logMessages never returns a result

I'm using JavaHL to connect to a 1.6 SVN repository. While I managed to list the contents of the repository, I'm not able to get an item's history (the comments made on the check-ins as well as the dates and the authors).
As far as I can see, SVNClient.logMessages is the right method, but the callback method is never executed. I used Revision.HEAD for the path revision and a revision range object holding Revision.START and Revision.HEAD; the limit is set to 0 (which means no limit according to the documentation). I'm trying to fetch the revision, the date, the author and the comment.
If someone knows of example code using JavaHL, I may be able to find my fault by comparing that code to mine.
BTW: I know about SVNKit, but management decided not to buy it. Thus I have to use JavaHL, where next to no sample programs exist (and the docs merely list the classes and interfaces without very detailed descriptions). So please don't point me in the direction of SVNKit, as that is impossible for me.
Any pointers appreciated.
Gnarf
The issue has been solved. The problem was the call to SVNClient.logMessages(), specifically the revision range used.
The start revision had been Revision.START which, according to the documentation, is used to describe the "first existing revision".
The problem disappeared when I used Revision.getInstance(1) instead. As it is reasonable to assume that any item has at least one revision (the initial one) with that number, it should be safe to use.
Hopefully this will save someone else from spending another two and a half days figuring it out!
Gnarf

Running a large number of "counting" queries on a NoSQL (GAE) datastore

Before we start, let me give you some information about our environment:
it is written fully in Java/J2EE;
it is developed to be deployed on GAE (Google App Engine);
its GUI is developed with GWT.
Our problem is a core development issue.
Here is my problem:
I am building a web application where online users can search for listings on the website.
First, please open careerbuilder.com and search for any keyword, e.g. "Accounting".
A page will open; [Narrow Search] gives you a way to reach your target job more easily (let's call this a "filter"), with lots of jobs listed below.
The search filter includes the sub-filters [Category, Company, City, State].
Each sub-filter has many cases or options; for example, State has (California, Iowa, Kansas, ...etc). Beside each one is the number of jobs that match your current filter/sub-filter selection; you will find it between brackets, i.e. (23).
Now we want to offer this filter functionality, and we want to make it fast.
Making a count query for each sub-filter option is not going to be an efficient approach.
Kindly keep in mind that:
users can add/remove listings;
listings can also expire;
the number of sub-filters is high for us (it can reach 20);
each sub-filter has between 2 and 200 options.
We are searching for the best practice, or a suggestion of an algorithm or whatever, to solve this problem.
Here are the 2 options we have reached so far:
1. Build a statistics table to save these results in, update it each time the number of listings changes, and also keep a nightly background job to recalculate the results. We can then show the number of results directly from this table.
2. Build a tree data structure that is loaded in memory and saved to a table each time it is updated. This tree contains the resulting number of listings for each sub-filter option.
Even so, I still think this is not enough!
Can anyone suggest a better idea?
All comments, questions and suggestions are very welcome.
Regards
Mohammad S.
Have you noticed how Google applications rarely give exact counts of anything, especially when using filters? You always get these guesstimates, like 'more than 1000' or 'tens of thousands', or 'showing 20 of about 23123123 results'. Well, now you see why. Welcome to the world of NoSQL
(although, frankly, counts with filters are bad in the SQL land as well).
It's not a solution but a workaround, and it's a common one:
make a query;
try to load 1001 entities, even if you only wish to show 20;
if you get 1001, show "more than 1000"; if you get fewer, show the exact number.
This can be pretty effective and users do not seem to mind (nor notice).
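The steps above boil down to a few lines once the query has run. A minimal sketch, where fetched stands in for the size of the datastore result when asking for cap + 1 entities:

```java
public class CountDisplay {

    // Given the number of entities actually fetched when the query asked
    // for cap + 1, format the count label the workaround describes.
    static String countLabel(int fetched, int cap) {
        return fetched > cap ? "more than " + cap : String.valueOf(fetched);
    }

    public static void main(String[] args) {
        // The query requested 1001 entities even though only 20 are shown.
        System.out.println(countLabel(1001, 1000)); // more than 1000
        System.out.println(countLabel(523, 1000));  // 523
    }
}
```

The cost is bounded: the datastore never materializes more than cap + 1 keys per filter option, no matter how many listings match.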
