There are many similar threads out there, so I'll try to be simple and specific.
My API Gateway has a GET method, without the "Use Lambda Proxy integration" checkbox checked. (Yes, to make my life a little bit more difficult.)
My assumption is that I have the API Gateway portion working correctly, with query string parameters.
It has been deployed through the Deploy API button.
I also have a mapping template written, exactly as described in the instructions provided by AWS.
Now, in Java, I have the following:
public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
The concern is that the event object is empty. Have I not been using the correct request event object?
ADDITIONAL NOTE
Per request, here's my lambda function below:
LambdaLogger logger = context.getLogger();
logger.log("EVENT: " + gson.toJson(event));
And here's what CloudWatch prints:
EVENT: {}
Did you configure this under GET -> Method Request?
After double-checking, did you press the Deploy button?
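Without "Use Lambda Proxy integration", API Gateway does not send the proxy event shape at all: the Lambda input is exactly what your mapping template produces, so the fields of APIGatewayProxyRequestEvent are never populated and Gson serializes it as {}. A minimal sketch of a handler that accepts the template output generically (the return value and logging are illustrative):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

// Non-proxy integration: the input shape is whatever the mapping template
// produces, so take it as a plain Map (or a matching POJO) instead of
// APIGatewayProxyRequestEvent.
public class Handler implements RequestHandler<Map<String, Object>, String> {
    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        context.getLogger().log("EVENT: " + event); // now shows the template output
        return "ok";
    }
}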
I'm using Vert.x for my web service, where part of it requires authorization. I've set an AuthenticationHandler (using the OAuth2 implementation from vertx-auth-oauth2) to listen on the protected paths (let's say "/*") and it is correctly called, sends a redirect to the authentication provider, which redirects back, and then the request correctly reaches the real handler. This works fine.
But the next time we call the protected endpoint, it does the whole thing again. I see that the abstract AuthenticationHandlerImpl class checks whether the context already has a user() and, if so, will not run the actual auth handler, which is the behavior I need - but that obviously doesn't happen, because every call is a new request with a new RoutingContext.
What is the "correct" way to retain the User object across requests, so that the auth handler will be happy?
I'm guessing it has something to do with session storage but I've never used that - up until now I was using a custom "API key" style solution, and I'm trying to do this "The Right Way(tm)" in this new project.
I'm using the latest Vert.x 4.3.5.
You will need a SessionHandler to store and handle the session with the user (in Vert.x 4 the separate CookieHandler is gone; cookie handling is built into SessionHandler). This works out of the box with the provided vertx-auth-oauth2.
Here is a simple example to get you started:
https://github.com/vert-x3/vertx-examples/blob/master/web-examples/src/main/java/io/vertx/example/web/auth/Server.java
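A minimal sketch of that wiring, assuming Vert.x 4.x with vertx-web and vertx-auth-oauth2 on the classpath (client id, secret, provider site, and callback URL are placeholders):

import io.vertx.core.AbstractVerticle;
import io.vertx.ext.auth.oauth2.OAuth2Auth;
import io.vertx.ext.auth.oauth2.OAuth2Options;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.OAuth2AuthHandler;
import io.vertx.ext.web.handler.SessionHandler;
import io.vertx.ext.web.sstore.LocalSessionStore;

public class Server extends AbstractVerticle {
    @Override
    public void start() {
        Router router = Router.router(vertx);

        // The SessionHandler stores the User in the session, so the auth
        // handler finds ctx.user() on later requests and skips the redirect
        router.route().handler(SessionHandler.create(LocalSessionStore.create(vertx)));

        OAuth2Auth oauth2 = OAuth2Auth.create(vertx, new OAuth2Options()
            .setClientId("my-client-id")            // placeholder
            .setClientSecret("my-client-secret")    // placeholder
            .setSite("https://provider.example.com"));

        OAuth2AuthHandler authHandler =
            OAuth2AuthHandler.create(vertx, oauth2, "https://my-app.example.com/callback");
        authHandler.setupCallback(router.route("/callback"));

        router.route("/*").handler(authHandler);
        router.route("/*").handler(ctx -> ctx.end("authenticated"));

        vertx.createHttpServer().requestHandler(router).listen(8080);
    }
}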
I am writing a cloud function in java triggered by events in the Firebase RTDB.
The documentation for Firebase has not been extended to Java yet, but it seems that context generally has the property auth. However, for Java this doesn't seem to be possible.
I can only get resource like in the example below, timestamps and a couple of other properties, but not auth.
@Override
public void accept(String json, Context context) {
logger.info("Function triggered by change to: " + context.resource());
Is there maybe a different way to do this in Java?
Thank you very much
When you have an HTTP function, you can get the auth from the headers. This auth authenticates the requester.
In your case, you have a background function, triggered by a Cloud Event. There is no authentication in this case. You have info about the "requester" (the event emitter) in the context, but nothing about the "auth" itself.
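For illustration, here is roughly all the Java Context interface exposes (a sketch against the Java Functions Framework; it is event metadata only, with no auth property):

import com.google.cloud.functions.Context;
import com.google.cloud.functions.RawBackgroundFunction;
import java.util.logging.Logger;

public class RtdbFunction implements RawBackgroundFunction {
    private static final Logger logger = Logger.getLogger(RtdbFunction.class.getName());

    @Override
    public void accept(String json, Context context) {
        // Context carries only event metadata -- there is no auth accessor
        logger.info("eventId:   " + context.eventId());
        logger.info("timestamp: " + context.timestamp());
        logger.info("eventType: " + context.eventType());
        logger.info("resource:  " + context.resource());
    }
}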
I'm trying to integration-test my Spark server. My intention is to test all the controller functions.
I have thought about a few options:
1. Set up a server that will start when running the tests, and terminate when the tests are over.
The problem with this solution is that I have to rewrite my whole server logic for the new server (we start the server from scratch every time we set it up before testing).
2. Instantiate a controller from the test class (it's essential to instantiate it rather than call it statically, in order to configure the right DB for the controller), then call the controller functions and check their answers.
This is my favorite one, but it means that I have to mock a Spark request. I'm trying to build Spark request and response objects to send to my controller, and haven't found a way to do that properly (or how to send parameters, set URL routes, etc.):
@Test
public void testTry() {
    String expectedName = "mark";
    myController myCtl = new myController(); // instantiate so the test DB can be configured
    Request req = null;
    Response res = null;
    String childName = myCtl.getChildNameFromDB(req, res);
    assertEquals(expectedName, childName); // JUnit order: expected first, then actual
}
3. The last one is to duplicate the exact logic of the controller function in the test, and instead of getting the parameters from the request, I'll initialize them myself.
For example, instead of:
String username = req.params("usrName");
It will be:
String username = "mark";
But that solution demands copying a lot of code, and you might miss a little code line, which might make the test succeed when in reality the controller function fails (or doesn't deliver as wanted).
What do you think about integration testing a Spark-driven server? I'm open-minded to new solutions as well.
If you want to do integration testing, I would suggest using your first approach, with a randomly chosen free TCP port and an HTTP client library (I often use the excellent HttpRequest library to that effect).
The main issue with this approach is that since the Spark API is static, you won't be able to stop/start the server between test cases/suites.
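A minimal sketch of that first approach, assuming JUnit 4 and Java 11+ (using the JDK's own java.net.http client instead of the HttpRequest library; the route is illustrative):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import spark.Spark;
import static org.junit.Assert.assertEquals;

public class ControllerIT {

    @BeforeClass
    public static void startServer() {
        Spark.port(0); // 0 = let Spark pick a random free port
        Spark.get("/child/:usrName", (req, res) -> req.params("usrName")); // register your real routes here
        Spark.awaitInitialization();
    }

    @AfterClass
    public static void stopServer() {
        Spark.stop();
    }

    @Test
    public void getChildName() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + Spark.port() + "/child/mark"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals("mark", response.body());
    }
}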
I have a basic service in Java, for example:
public interface FolderService {
void deleteFolder(String path);
void createFolder(String path, String folderName);
void moveFolder(String oldPath, String newPath);
}
which has multiple implementations. How can I map this service onto AWS Lambda and API Gateway?
I am expecting the API to have the format
POST {some_url}/folderService/createFolder
or
GET {some_url}/folderService/createFolder?path=/home/user&folderName=test
First, design your API mapping each HTTP method to a Java method.
DELETE /{path}
POST /{path}/{folderName}
PUT /{oldPath}?to={newPath} or PUT /{newPath}?from={oldPath}
Second, create the API Gateway mapping. Each HTTP method has its own mapping. Define a constant value with the name of the method, e.g.:
"action" : "deleteFolder"
Create three Lambda functions. Each function, in its handler, reads the "action" attribute and calls the correct method.
or
Create one single lambda function that reads the action and calls the respective Java method.
API Gateway Mapping Template
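For instance, the createFolder mapping template could look like this (a sketch; $input.params(...) reads the method request parameters you configured):

{
  "action" : "createFolder",
  "path" : "$input.params('path')",
  "folderName" : "$input.params('folderName')"
}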
Lambda Function Handler (Java)
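And a sketch of the single-function variant (the template keys above and FolderServiceImpl are assumptions, not your actual classes):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

public class FolderServiceHandler implements RequestHandler<Map<String, String>, String> {

    private final FolderService folderService = new FolderServiceImpl(); // hypothetical implementation

    @Override
    public String handleRequest(Map<String, String> input, Context context) {
        // Dispatch on the constant "action" injected by the mapping template
        switch (input.get("action")) {
            case "deleteFolder":
                folderService.deleteFolder(input.get("path"));
                return "deleted";
            case "createFolder":
                folderService.createFolder(input.get("path"), input.get("folderName"));
                return "created";
            case "moveFolder":
                folderService.moveFolder(input.get("oldPath"), input.get("newPath"));
                return "moved";
            default:
                throw new IllegalArgumentException("Unknown action: " + input.get("action"));
        }
    }
}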
Do you already have experience with AWS Lambda? The mapping part can be tricky. Feel free to ask for more details.
I have a requirement and I am a bit confused about its design.
Requirement: iOS makes a call to the backend (Java), the backend makes a call to the cloud API, which returns a token for future calls. The cloud API might take approximately 6 to 10 seconds to return the actual result, so instead of making the caller wait 6 to 10 seconds, it gives a token back and lets the caller (in my case the backend Java server) poll for the results.
Current approach: iOS calls the backend (Java server), the backend calls the cloud API and gets the token, then it sleeps the thread for 1 second; once the thread wakes up it hits the cloud API to get the status. If the status is not completed, Thread.sleep is invoked again, and this continues till the cloud API call gives the complete result. Once the cloud API returns the result, the backend returns it to iOS.
The approach is not scalable and was done to test the cloud API but now we need a more scalable approach.
This is what I am thinking: iOS calls the backend, the backend calls the API and sends a result back to iOS (which displays some static screen, just to keep users engaged), and in the meantime it puts the object into a Spring thread pool executor. The executor hits the API every second and updates iOS through a push notification, and this continues till we get the final result from the cloud API.
This is better than the existing approach, but even this doesn't look scalable: the thread pool executor will get exhausted after some time (making it slow), and Thread.sleep is also not a good option.
I thought about using AWS SQS, but it doesn't provide real-time processing, and running background jobs every second doesn't seem to be a good option either.
I am also exploring Apache Kafka and trying to understand whether it can fit to my use case.
Let me know if someone has tackled a similar kind of use case.
Here @EventListener in tandem with @Scheduled can be utilized, if Spring 4.2 (or newer) is used.
First, create an event object, say APIResult, which will hold the API result:
public class APIResult extends ApplicationEvent {
    public APIResult(Object result) {
        super(result); // use the result as the event source/payload
    }
}
Next, register a listener for the published APIResult event:
@Component
public class MyListener {

    @EventListener
    public void handleResult(APIResult result) {
        // do something ...
    }
}
Next, create a scheduled process which holds the token(s) for which the result has not yet been retrieved:
@Component
public class MyScheduled {

    private final ApplicationEventPublisher publisher;
    // CopyOnWriteArrayList tolerates removal while iterating
    private final List<String> tokens = new CopyOnWriteArrayList<>();

    @Autowired
    public MyScheduled(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @Scheduled(initialDelay = 1000, fixedRate = 5000) // modify as per requirement
    public void callAPIForResult() {
        for (String token : tokens) {
            Object result = callApi(token); // call the API for each token
            if (result != null) {
                this.publisher.publishEvent(new APIResult(result));
                tokens.remove(token);
            }
        }
    }

    // methods to add & remove tokens, and callApi(...), omitted
}
The overall process flow should be:
The application submits a request to the API and collects the respective token.
The token is passed to the scheduled service to fetch the result.
On its next run, the scheduled service iterates over the available token(s) and calls the API to fetch the results (if a result is available it publishes the event, else it continues).
The published event is intercepted by the registered listener, which itself processes the result or delegates it as applicable.
This approach will transparently fetch results without messing with the business logic, while leveraging standard framework features, viz. scheduling and asynchronous event publishing & processing.
Although I have not tested this exact code, it should work, or at least give an idea of how to implement it. The setup was tested with Spring Boot 1.5.1.RELEASE, which is backed by Spring 4.3.6.RELEASE.
Do let me know in the comments if any further information is required.
Reference - Application Event in Spring (link)
I am thinking about using Spring's ConcurrentTaskExecutor (let's call it cloudApiCall): as soon as I receive the token from the cloud API, I will submit a future job to the executor and return the token to the mobile client. The thread associated with cloudApiCall will pick up the job, call the cloud API and submit the response to another ConcurrentTaskExecutor (let's call it pushNotification), which will be responsible for pushing a silent notification to the mobile client. The thread associated with cloudApiCall will also check the status of the call; if a further call is required, it will submit the job back to cloudApiCall. This will continue till we get the complete response.
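A rough sketch of that resubmission loop with plain JDK scheduled executors (ConcurrentTaskExecutor is a thin Spring wrapper over java.util.concurrent executors; callCloudApi and pushToClient are hypothetical stand-ins):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TokenPoller {

    private final ScheduledExecutorService cloudApiCall = Executors.newScheduledThreadPool(4);
    private final ScheduledExecutorService pushNotification = Executors.newScheduledThreadPool(2);

    public void poll(String token) {
        cloudApiCall.schedule(() -> {
            String result = callCloudApi(token);       // hypothetical status call
            if (result != null) {
                pushNotification.execute(() -> pushToClient(token, result));
            } else {
                poll(token);                            // not ready yet: schedule the next check
            }
        }, 1, TimeUnit.SECONDS);
    }

    private String callCloudApi(String token) { return null; }   // placeholder
    private void pushToClient(String token, String result) { }   // placeholder
}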