I am trying to find the travel duration from origin to destination using Google's Directions API. I need the travel duration under current traffic conditions. In JavaScript this works and returns the duration_in_traffic value, but when I try the same thing in Java it does not seem to return it. How do I obtain duration_in_traffic using the Google Directions API?
URL url = new URL("http://maps.googleapis.com/maps/api/directions/json"
        + "?origin=" + URLEncoder.encode(origin, "UTF-8")
        + "&destination=" + URLEncoder.encode(destination, "UTF-8")
        + "&waypoints=optimize:true|Bagbazar,Kathmandu|Thapathali,Kathmandu|Kamal+Pokhari,Kathmandu"
        + "&sensor=false");
Above is the URL I send to the Google server.
Below is the JavaScript code; there I can set durationInTraffic: true, which returns the travel time under traffic conditions. What is the equivalent process in Java?
var request = {
    origin: "Wollongong, Australia",
    destination: "Sydney, Australia",
    travelMode: google.maps.DirectionsTravelMode.DRIVING,
    provideRouteAlternatives: true,
    durationInTraffic: true
};
Take a look at the duration_in_traffic definition under the Legs section of the Maps documentation page; it is only returned when all of the following conditions hold:
The directions request includes a departure_time parameter set to a value within a few minutes of the current time.
The request includes a valid Maps for Business client and signature parameter.
Traffic conditions are available for the requested route.
The directions request does not include stopover waypoints.
Maybe not all of the 4 conditions they list are being met?
There is also a JSON example on that page with a sample URL you can use, if it helps.
In the Google API URL, add &departure_time=now and it returns a duration_in_traffic value. now means the current time; alternatively you can put a timestamp (the API expects seconds since the epoch) in place of now.
If you add waypoints, it will not return duration_in_traffic,
because, as the Google API docs state, the directions request must not include stopover waypoints.
But if you add waypoints using the via:lat,lng prefix, it will return duration_in_traffic, because the API does not treat those as stopovers.
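A minimal Java sketch of such a request, not the asker's original code: the waypoint coordinates and the apiKey variable are placeholders, and the newer key-based endpoint is assumed:
// Sketch: departure_time=now plus "via:" waypoints so duration_in_traffic is returned.
// The waypoint coordinates and apiKey are placeholders.
String origin = URLEncoder.encode("Bagbazar,Kathmandu", "UTF-8");
String destination = URLEncoder.encode("Kamal Pokhari,Kathmandu", "UTF-8");
String waypoints = URLEncoder.encode("via:27.7089,85.3206|via:27.6915,85.3188", "UTF-8");

URL url = new URL("https://maps.googleapis.com/maps/api/directions/json"
        + "?origin=" + origin
        + "&destination=" + destination
        + "&waypoints=" + waypoints
        + "&departure_time=now"
        + "&key=" + URLEncoder.encode(apiKey, "UTF-8"));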
If you set departure_time you can get duration_in_traffic for each of your legs, but if you want duration_in_traffic for each step you should use the Distance Matrix API and merge the two.
In the Distance Matrix API you also need departure_time set.
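A sketch of the corresponding Distance Matrix request, assuming the same key-based endpoint (the origins, destinations and apiKey are placeholders):
// Sketch: Distance Matrix request with departure_time set, per the answer above.
URL dmUrl = new URL("https://maps.googleapis.com/maps/api/distancematrix/json"
        + "?origins=" + URLEncoder.encode("Bagbazar,Kathmandu", "UTF-8")
        + "&destinations=" + URLEncoder.encode("Thapathali,Kathmandu", "UTF-8")
        + "&departure_time=now"
        + "&key=" + URLEncoder.encode(apiKey, "UTF-8"));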
I am trying to retrieve a list of projects from the OpenStack API, and would like to use pagination in order to retrieve n projects at a time.
In the OpenStack documentation, it states that I can append "/?limit=n" to the URL and up to n results will be fetched accordingly.
However, when executing the GET request to the URL as follows:
https://identity-3.eu-de-1.cloud.sap/v3/auth/projects/?limit=1
I still get ALL projects. I can't seem to understand what I am missing.
NOTE: the request itself works and returns results as needed, but simply ignores the limit parameter (this is not an authentication issue).
I think not all OpenStack APIs provide the limit parameter.
In the Keystone API doc, there is no limit parameter in the request parameter descriptions for the /v3/auth/projects API:
keystone-project-API-doc
Other services, like the Cinder volume list, do document a limit parameter:
cinder-volume-API-doc
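For comparison, a minimal sketch of a paginated request against a service that does document limit, such as the Cinder volume list; the endpoint, projectId and token are placeholders, not values from the question:
// Hypothetical sketch: pagination against an API that documents "limit"
// (e.g. the Cinder volume list). Endpoint, project id and token are placeholders.
URL url = new URL("https://volume.example.com/v3/" + projectId + "/volumes?limit=5");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("GET");
conn.setRequestProperty("X-Auth-Token", token);  // Keystone-issued token
try (InputStream in = conn.getInputStream()) {
    // read the JSON body; when more results exist, the response typically
    // carries a "next" link you can follow for the following page
}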
I am trying to get Sessions, Revenue, Transactions, and Bounce Rate data from the Google Analytics Reporting API v4,
grouped by channel:
Organic search
Email
Direct
Branded Paid Search
Social
Referral
.. etc
Right now I'm writing a Java module with a test request that sets the following parameters:
Dimensions:
ga:acquisitionTrafficChannel;
Metrics:
ga:sessions
ga:percentNewSessions
ga:newUsers
When I use ga:acquisitionTrafficChannel + ga:sessions, the Reporting API returns values, but when I add ga:percentNewSessions and ga:newUsers to the request, it returns an error:
{
    "domain": "global",
    "message": "Selected dimensions and metrics cannot be queried together.",
    "reason": "badRequest"
}
To perform the request in code I do the following:
DateRange dateRange = new DateRange();
dateRange.setStartDate("2015-06-15");
dateRange.setEndDate("2015-06-30");

ReportRequest request = new ReportRequest()
        .setViewId(context.getProperty(VIEW_ID).evaluateAttributeExpressions().getValue())
        .setDateRanges(Arrays.asList(dateRange))
        .setDimensions(Arrays.asList(
                new Dimension().setName("ga:acquisitionTrafficChannel")
        ))
        .setMetrics(Arrays.asList(
                new Metric().setExpression("ga:sessions"),
                new Metric().setExpression("ga:percentNewSessions"),
                new Metric().setExpression("ga:newUsers")
        ));

ArrayList<ReportRequest> requests = new ArrayList<>();
requests.add(request);
GetReportsRequest getReport = new GetReportsRequest().setReportRequests(requests);
GetReportsResponse response = service.reports().batchGet(getReport).execute();
How do I build the request correctly? Am I going in the right direction?
As I said, I will need to do the same thing with Revenue, Bounce Rate, etc.,
but I don't fully understand how to combine metrics and dimensions without errors.
Thanks for any help.
About my question:
As a solution for my needs I used the following combination in code:
To get all channel groups ("Organic Search", "Email", "Direct", etc.) I used the following dimension:
ga:channelGrouping - it will return all of them
To get values for Sessions, Revenue, Transactions, Bounce Rate, etc. I used the following metrics:
ga:sessions
ga:transactionRevenue
ga:transactions
ga:bounceRate
More metrics can be added here if needed.
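A minimal sketch of the adjusted request; only the dimension and metric names change relative to the code in the question, and viewId and dateRange are assumed to be set up as there:
// Sketch: channel grouping with session/revenue/transaction/bounce-rate metrics.
// Assumes the same service, view id and date range as in the question's code.
ReportRequest request = new ReportRequest()
        .setViewId(viewId)
        .setDateRanges(Arrays.asList(dateRange))
        .setDimensions(Arrays.asList(
                new Dimension().setName("ga:channelGrouping")
        ))
        .setMetrics(Arrays.asList(
                new Metric().setExpression("ga:sessions"),
                new Metric().setExpression("ga:transactionRevenue"),
                new Metric().setExpression("ga:transactions"),
                new Metric().setExpression("ga:bounceRate")
        ));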
Maybe it will be useful to somebody.
The question about the error with the original combination (with ga:acquisitionTrafficChannel) is still open, though :)
I have been trying for months to get access to a certain API (which has almost no documentation) to work using signpost. The API uses OAuth 2.0 authentication. The problem is that I have never used OAuth before, but I have spent a long time researching, so I think I have a functional understanding of how it works. I thought that using the handy signpost API it wouldn't be too much trouble to hack through it, but alas, I have hit a wall. The API docs are here:
https://btcjam.com/faq/api
It gives three URLs that are needed for the OAuth authentication, which I am writing as Java strings here for consistency with the code below:
String Authorization= "https://btcjam.com/oauth/authorize";
String Token ="https://btcjam.com/oauth/token";
String Applications = "https://btcjam.com/oauth/applications";
I have an application with a name, key, and secret. I also have set my callback URL to be the localhost, i.e.
http://localhost:3000/users/auth/btcjam/callback.
Now, as I am reading the signpost docs, it tells me that in order to request an access token, I need to do something like the following:
OAuthProvider provider = new DefaultOAuthProvider(
REQUEST_TOKEN_ENDPOINT_URL, ACCESS_TOKEN_ENDPOINT_URL,
AUTHORIZE_WEBSITE_URL);
String url = provider.retrieveRequestToken(consumer, CALLBACK_URL);
However, I am unsure exactly what to put for the URLs in these various spots, and I am getting errors. The problem is that the names of the URLs required above do not correspond to the URLs given. The "authorization" and "callback" URLs seem to match up nicely, but I am not sure how the URLs REQUEST_TOKEN_ENDPOINT_URL and ACCESS_TOKEN_ENDPOINT_URL required in the signpost docs correspond to the URLs given by the API docs on the server I am trying to access. Of course, there are only two possible permutations, but when I try them both I get two different errors:
"Authorization failed (server replied with a 401). This can happen if the consumer key was not correct or the signatures did not match."
"Communication with the service provider failed: URLDecoder: Illegal hex characters in escape (%) pattern - For input string: " 1""
Could someone please help explain what might be going on here? Am I very close to getting this to work or do I have to take a bunch of steps back?
Any help is much appreciated.
Thanks,
Paul
I followed the Quickstart from HBC and I managed to get some tweets from the Twitter stream by specifying some track terms; here is the code:
/** Declare the host you want to connect to, the endpoint, and authentication (basic auth or oauth) */
Hosts hosebirdHosts = new HttpHosts(Constants.STREAM_HOST);
StreamingEndpoint endpoint = new StatusesFilterEndpoint();
// Optional: set up some followings and track terms
List<Long> followings = Lists.newArrayList(1234L, 566788L);
List<String> terms = Lists.newArrayList("twitter", "api");
endpoint.followings(followings);
endpoint.trackTerms(terms);
Is it possible to get the Twitter stream with HBC without specifying any track terms?
I simply tried removing the line endpoint.trackTerms(terms); but then it doesn't work.
Help me! Thanks!
It should work. I tried the example, and 'followed' myself and I received the Tweet I made whilst I was connected.
I suspect that the users you are following didn't have any activity whilst you were consuming the stream and that's why you didn't see any output - e.g. themselves Tweeting or somebody replying to one of their Tweets etc...
The follow parameter documentation outlines what activity you will see related to a followed user.
By the way, specifying followings and trackTerms on the filter stream actually says: get me Tweets containing these terms or from these users. That's why you would see output when trackTerms was specified. This also goes for the additional locations parameter.
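For reference, a minimal sketch of a followings-only filter stream, along the lines of the HBC quickstart; the user ids and OAuth credential variables are placeholders:
// Followings-only filter stream: no trackTerms() call at all.
// At least one predicate (followings, trackTerms or locations) is still required.
BlockingQueue<String> msgQueue = new LinkedBlockingQueue<>(100000);

StatusesFilterEndpoint endpoint = new StatusesFilterEndpoint();
endpoint.followings(Lists.newArrayList(1234L, 566788L));  // placeholder user ids

Authentication auth = new OAuth1(consumerKey, consumerSecret, token, tokenSecret);

Client client = new ClientBuilder()
        .hosts(new HttpHosts(Constants.STREAM_HOST))
        .endpoint(endpoint)
        .authentication(auth)
        .processor(new StringDelimitedProcessor(msgQueue))
        .build();

client.connect();  // Tweets from/about the followed users arrive on msgQueue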
I've been playing with Amazon S3 presigned URLs all night attempting to PUT a file. I generate the presigned URL in java code.
AWSCredentials credentials = new BasicAWSCredentials( accessKey, secretKey );
client = new AmazonS3Client( credentials );
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest( bucketName, "myfilename", HttpMethod.PUT);
request.setExpiration( new Date( System.currentTimeMillis() + (120 * 60 * 1000) ));
return client.generatePresignedUrl( request ).toString();
I then want to use the generated, presigned URL to PUT a file using curl.
curl -v -H "content-type:image/jpg" -T mypicture.jpg "https://mybucket.s3.amazonaws.com/myfilename?Expires=1334126943&AWSAccessKeyId=<accessKey>&Signature=<generatedSignature>"
I assumed that, like a GET, this would work on a bucket which is not public (that's the point of presigned, right?). Well, I got access denied on every attempt. Finally, out of frustration, I changed the permission of the bucket to allow EVERYONE to write. Of course, then the presigned URL worked. I quickly removed the EVERYONE permission from the bucket. Now I don't have permission to delete the item that was uploaded into my bucket by my own presigned URL. I see now that I probably should have put an x-amz-acl header on what I uploaded. I suspect I'll create several more undeletable objects before I get that right.
This leads to a few questions:
How can I upload with curl using PUT and a generated presigned URL?
How can I delete the uploaded file and the bucket I created to test it with?
The end goal is that a mobile phone will use this presigned URL to PUT images. I'm trying to get it going in curl as a proof of concept.
Update: I asked a question on the amazon forums. If an answer is provided there I'll put it as an answer here.
This is indeed a bit puzzling; I consider it to be a bug in the AWS SDK for Java (see below). But first and foremost, the following curl command will upload your file as such (assuming an updated pre-signed URL, of course):
curl -v -T mypicture.jpg "https://mybucket.s3.amazonaws.com/myfilename?Expires=1334126943&AWSAccessKeyId=<accessKey>&Signature=<generatedSignature>"
That is, I've excluded the Content-Type header, which yields application/octet-stream (or binary/octet-stream) as a result; that is obviously not desired, thus further digging was in order.
Background / Analysis
Pre-signed URLs for PUT (and DELETE as well as HEAD) requests to Amazon S3 are known to work in principle, not the least evidenced in related questions on this site (see e.g. my answer to Upload to s3 with curl using pre-signed URL (getting 403)).
The facilitated Query String Request Authentication Alternative is documented to use the following pseudo-grammar that illustrates the query string request authentication method:
StringToSign = HTTP-VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Expires + "\n" +
CanonicalizedAmzHeaders +
CanonicalizedResource;
It does include the Content-Type header, and (as you already discovered) this has been the missing piece in some documented cases, see e.g. the AWS team response to GetPreSignedURL with PUT request, yielding a working pre-signed URL once added.
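To make the pseudo-grammar concrete, here is a hand-rolled sketch of that signing scheme in Java; it assumes the legacy signature version 2 with HMAC-SHA1, no Content-MD5 and no x-amz- headers, and the bucket, key and credentials are placeholders. It is not the SDK's code, just an illustration of why a Content-Type baked into the StringToSign must then also be sent verbatim by curl:
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.util.Base64;

public class PresignPutSketch {

    // Sketch only: legacy (signature v2) query-string authentication for a PUT,
    // following the StringToSign pseudo-grammar above. No Content-MD5 and no
    // x-amz- headers are included; bucket, key and credentials are placeholders.
    static String presignPut(String accessKey, String secretKey, String bucket,
                             String key, String contentType, long expiresEpochSeconds)
            throws Exception {

        String stringToSign = "PUT\n"                    // HTTP-VERB
                + "\n"                                   // Content-MD5 (empty)
                + contentType + "\n"                     // Content-Type
                + expiresEpochSeconds + "\n"             // Expires
                + "/" + bucket + "/" + key;              // CanonicalizedResource

        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(hmac.doFinal(stringToSign.getBytes("UTF-8")));

        return "https://" + bucket + ".s3.amazonaws.com/" + key
                + "?AWSAccessKeyId=" + URLEncoder.encode(accessKey, "UTF-8")
                + "&Expires=" + expiresEpochSeconds
                + "&Signature=" + URLEncoder.encode(signature, "UTF-8");
    }
}
A URL produced this way only validates if curl sends exactly the same Content-Type header that went into the StringToSign.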
This is easy to achieve with the AWS SDK for .NET indeed, which provides the convenience method GetPreSignedUrlRequest.WithContentType to do just that:
Sets the ContentType property for this request. This property defaults
to "binary/octet-stream", but if you require something else you can
set this property.
Accordingly, extending the respective sample Upload an Object Using Pre-Signed URL - AWS SDK for .NET as follows yields a working pre-signed URL with content type, that can be uploaded via curl as expected (i.e. exactly as you attempted to):
// ...
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
// ...
request.WithContentType("image/jpg");
// ...
Now, one would like to extend the semantically identical sample Upload an Object Using Pre-Signed URL - AWS SDK for Java in a similar fashion, but (as you've discovered already as well), there is no dedicated method to achieve this. This might just be a lacking convenience method though and could be achievable via addRequestParameter() or setResponseHeaders() eventually, e.g.:
// ...
request.setExpiration( new Date( System.currentTimeMillis() + (120 * 60 * 1000) ));
request.addRequestParameter("content-type", "image/jpg");
return client.generatePresignedUrl( request ).toString();
// ...
However, both methods' documentation suggests other purposes, and indeed it doesn't work: they always yield an identical signature, no matter which content type (if any) is set this way.
Debugging further into the SDKs reveals, that both provide a semantically similar core method to calculate the query string authentication according to the pseudo-grammar referenced above, see buildSigningString() for .NET and makeS3CanonicalString() for Java.
But the respective code in the Java version to Add all interesting headers to a list, then sort them, where "Interesting" is defined as Content-MD5, Content-Type, Date, and x-amz- is in fact never executed, because there is no way to supply these headers: they are only available on class DefaultRequest, not on class GeneratePresignedUrlRequest, which is used to initialize the former, which in turn is the input for calculating the signature; see the protected method createRequest().
Interestingly/Notably, the two methods that calculate the query string authentication in .NET vs. Java compose their input from an almost inverse combination of header vs. parameter sources on the call stack, which could hint at the cause of the Java bug; but obviously that might just as well be difficult to decipher, i.e. the internal architecture could simply differ significantly.
Preliminary Conclusion
There are two angles to this:
The AWS SDK for Java is definitely lacking a convenience method for setting the content type, which might be a comparatively rare but nonetheless obvious use case accounted for in other AWS SDKs accordingly; this is surprising, given its widespread use in AWS-related backend services.
Regardless, there seems to be something fishy with the way the Query String Request Authentication is implemented in comparison to the .NET version, for example; again this is surprising, given it is a core functionality. However, this is still within the S3 model/namespace and thus might only be required by the respective use cases above.
In conclusion, the only reasonable way to resolve this would be an updated SDK, so a bug report is in order - obviously one could as well duplicate/extend the SDK functionality to account for this special case separately (ideally in a way allowing to submit a pull request for the aws-sdk-for-java project), but getting this right in a compatible and maintainable way seems to be a bit tricky, thus is likely best done by the SDK maintainers themselves.
Ran into this problem as well. We're already tracking when the file is uploaded on the backend, so our workaround was to set the content type after the client uploads the file, using the Rails app with a call to copy_from.