As of now, I have an API and an existing AWS Lambda that have more or less the same functionality. Instead of performing the same task in each project, I was attempting to have the API simply trigger the lambda. At the moment, sending it data isn't my primary concern, but that is something I would try to do eventually. That said, in Java, if you have all the credentials, the lambda name, and so on, is it possible to trigger an AWS Lambda locally and eventually through an API?
I've been going through a few solutions now, but it seems that many of them involve redeploying the lambda or making a new one altogether. At the moment, I've been using these resources: A, B, C, and D.
My current function in my API looks something like this. The one in my lambda, let's call it foobar-lambda for now, is pretty much the same.
public Entity<Foos> Foos(@RequestHeader(value = "ApiKey", required = false) String apiKey,
                         @RequestParam String data) {
    Foos foos = FoosService.getFoos(data);
    Entity<Foos> response;
    if (foos != null) {
        response = Entity.ok().body(foos);
    } else {
        response = new Entity<>(HttpStatus.NOT_FOUND);
    }
    return response;
}
What I'd like to change this to, is something like this:
public void Foos(@RequestHeader(value = "ApiKey", required = false) String apiKey,
                 @RequestParam String data) {
    triggerAndSend("foobar-lambda", data);
}
So, in this context, I'm trying to figure out exactly how to create that void triggerAndSend(String lambdaTarget, String data) function. Ideally, I'd run this and be able to see that my lambda was triggered. Do I have to add an additional trigger to my Lambda to catch these calls? Is it possible, and if so, does anyone have any recommendations for how I can accomplish my goal?
This AWS Blog post describes one way to do what you're describing: Invoking AWS Lambda Functions from Java. It involves defining a Plain Old Java Object for the return value and an interface for the lambda function.
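If you just want to fire the function from Java without the interface machinery from that post, the SDK's Lambda client can also invoke it directly. A minimal sketch of a triggerAndSend along those lines, using the AWS SDK for Java v1 (the region and the JSON payload shape are assumptions; adjust both to your setup):

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.InvocationType;
import com.amazonaws.services.lambda.model.InvokeRequest;

public class LambdaTrigger {

    private final AWSLambda client = AWSLambdaClientBuilder.standard()
            .withRegion("us-east-1") // assumption: use your function's region
            .build();

    // Fire-and-forget: InvocationType.Event returns as soon as Lambda has
    // queued the event, without waiting for the function's result.
    public void triggerAndSend(String lambdaTarget, String data) {
        InvokeRequest request = new InvokeRequest()
                .withFunctionName(lambdaTarget)
                .withInvocationType(InvocationType.Event)
                .withPayload("{\"data\":\"" + data + "\"}");
        client.invoke(request);
    }
}

No additional trigger needs to be configured on the Lambda side for this: a direct Invoke call is itself a supported entry point, as long as the caller's credentials carry the lambda:InvokeFunction permission.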
Please bear with me; I don't usually use Spring and haven't used newer versions of Java (when I say newer, I mean anything past probably 1.4).
Anyway, I have an issue where I have to make REST calls to do a search using multiple parallel requests. I've been looking around online and I see you can use CompletableFuture.
So I created my method to get the objects I need from the REST call, like:
@Async
public CompletableFuture<QueryObject[]> queryObjects(String url) {
    QueryObject[] objects = restTemplate.getForObject(url, QueryObject[].class);
    return CompletableFuture.completedFuture(objects);
}
Now I need to call that with something like:
CompletableFuture<QueryObject[]> page1 = queryController.queryObjects("http://myrest.com/ids=[abc, def, ghi]");
CompletableFuture<QueryObject[]> page2 = queryController.queryObjects("http://myrest.com/ids=[jkl, mno, pqr]");
The problem I have is that the call needs to do only three ids at a time, and there could be a variable number of ids. So I parse the id list and create a query string like the above. The problem I'm having with that is that while I can issue the queries, I don't have separate objects that I can then call CompletableFuture.allOf on.
Can anyone tell me the way to do this? I've been at it for a while now and I'm not getting any further than where I am now.
Happy to provide more info if the above isn't sufficient.
You are not getting any benefit from using the CompletableFuture the way you're using it right now.
The restTemplate method you're using is a synchronous method, so it has to finish and return a result before proceeding. Because of that, wrapping the final result in a CompletableFuture doesn't cause it to be executed asynchronously (nor in parallel); you just wrap a response that you have already retrieved.
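If you wanted to keep the synchronous RestTemplate, you would have to move the blocking call onto another thread yourself, for example with supplyAsync. A minimal sketch, assuming a restTemplate field and an injected Executor named executor (both names are placeholders):

public CompletableFuture<QueryObject[]> queryObjects(String url) {
    // supplyAsync runs the blocking REST call on the executor's thread,
    // so several of these calls can genuinely run in parallel.
    return CompletableFuture.supplyAsync(
            () -> restTemplate.getForObject(url, QueryObject[].class),
            executor);
}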
If you want to benefit from truly asynchronous execution, then you can use, for example, the AsyncRestTemplate or the WebClient.
A simplified code example:
public ListenableFuture<ResponseEntity<QueryObject[]>> queryForObjects(String url) {
    return asyncRestTemplate.getForEntity(url, QueryObject[].class);
}

public List<QueryObject> queryForList(String[] elements) {
    // Issue all requests before joining any of them, so the calls run concurrently.
    List<CompletableFuture<ResponseEntity<QueryObject[]>>> futures = Arrays.stream(elements)
            .map(element -> queryForObjects("http://myrest.com/ids=[" + element + "]"))
            .map(ListenableFuture::completable)
            .collect(Collectors.toList());
    return futures.stream()
            .map(CompletableFuture::join)
            .map(HttpEntity::getBody)
            .filter(Objects::nonNull)
            .flatMap(Arrays::stream)
            .collect(Collectors.toList());
}
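To get the "three ids at a time" behaviour from the question, you can chunk the id list, fire one request per chunk, and wait on all of them with CompletableFuture.allOf. A rough sketch building on the queryForObjects method above (the URL format is copied from the question, and the chunk size of three is the stated requirement):

public List<QueryObject> queryInBatches(List<String> ids) {
    List<CompletableFuture<ResponseEntity<QueryObject[]>>> futures = new ArrayList<>();
    for (int i = 0; i < ids.size(); i += 3) {
        // Build one query string per group of at most three ids, e.g. "abc, def, ghi".
        String chunk = String.join(", ", ids.subList(i, Math.min(i + 3, ids.size())));
        futures.add(queryForObjects("http://myrest.com/ids=[" + chunk + "]").completable());
    }
    // allOf completes only when every batched request has finished.
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
    return futures.stream()
            .map(CompletableFuture::join)
            .map(HttpEntity::getBody)
            .filter(Objects::nonNull)
            .flatMap(Arrays::stream)
            .collect(Collectors.toList());
}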
We have developed some Lambda functions and deployed them on AWS, and they are working fine.
Anyhow, the client is now planning for Azure.
They may even switch back to AWS, or to any other vendor, in the future.
We have a separate Maven project for the AWS-related stuff.
Hence, our business logic and classes remain the same.
What I have done is create a Maven project and add the individual lambda functions to this project as dependencies.
Then I made a factory class which gets the implementation based on a property, AZURE or AWS (using Class.forName and reflection), roughly as sketched below.
So, I can switch to Azure by just removing the AWS Maven dependency and adding the Azure dependency.
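A minimal sketch of that kind of reflective factory (the CloudHandler interface, the property name, and the implementation class names here are invented for illustration):

public interface CloudHandler {
    String handle(String payload);
}

public final class CloudFactory {
    // Instantiates the implementation reflectively, so neither vendor jar
    // has to be a compile-time dependency of the generic utils.
    public static CloudHandler create() throws Exception {
        String cloud = System.getProperty("cloud.provider", "AWS"); // "AWS" or "AZURE"
        String implClass = "AZURE".equals(cloud)
                ? "com.example.azure.AzureHandler"
                : "com.example.aws.AwsHandler";
        return (CloudHandler) Class.forName(implClass)
                .getDeclaredConstructor()
                .newInstance();
    }
}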
According to the picture, my plan was to create new AzureUtils and AzureWrapper projects and directly use the Azure cloud by switching the cloud in the cloudFactory, which is present in the generic utils. That would hopefully even work (not tested); AWS is working that way anyhow.
Now the problem is that the client does not want everything packed into one jar, i.e. no to all lambdas in a single jar. He wants some layer where the switching takes place.
Now, which design pattern would be useful, and what would be the approach?
Currently my Lambda function looks like the below:
public class Hello implements RequestHandler<S3Event, String> {
    public String handleRequest(S3Event s3event, Context context) {
        // ...
        // call to business processor as in the diagram
        return "done"; // placeholder return value
    }
}
And the Azure function looks somewhat like a simple class with annotations:
public class Function {
    @FunctionName("hello")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = { HttpMethod.GET, HttpMethod.POST }, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        // Parse query parameter
        String query = request.getQueryParameters().get("name");
        String name = request.getBody().orElse(query);
        if (name != null) {
            // call to business processor as in the diagram
        }
        return request.createResponseBuilder(HttpStatus.OK).body(name).build(); // placeholder response
    }
}
After all this, I have only two questions.
I would like to know, first, if the design in the diagram is the right thing to do.
And what my client is asking for is a wrapper, something magical, which should handle both types of cloud implementation. Is this even possible?
If it is possible, guide me in the right direction.
Any help is greatly appreciated.
About your second question, how to handle both types of cloud: please check this third-party solution, serverless.com. It's a company that created its own serverless wrapper, so that you can be free of vendor lock-in.
I'm kind of forced to switch to reactive programming (and in a short time frame), using WebFlux, and I'm having a really hard time understanding it. Maybe it's because of the lack of examples, or because I never did functional programming.
Anyways, my question is: where do I use Mono/Flux, and where can I work with normal objects? E.g. my controller is waiting for a @Valid User object; should that be @Valid Mono<User> or something like Mono<@Valid User>? Let's say it was just a User object; I pass it to my service layer, and I want to encode the password before saving it to the reactive MongoDB. Should I write:
user.setPassword(...);
return reactiveMongoDbRepository.save(user); // returns Mono<User>, which is returned by the Controller to the View
Or should it be something like:
return Mono.just(user).map.flatmap(setPasswordSomewhereInThisHardToFollowChain).then.whatever.doOnSuccess(reactiveMongoDbRepository::save);
In other words, am I forced to use this pipeline thing EVERYWHERE to maintain reactiveness, or is doing some steps the imperative way, like unwrapping the object, working on it, and wrapping it back, OK?
I know my question seems silly, but I don't have the big picture at all, and reading books about it hasn't really helped yet. Please be gentle with me. :)
Use pipelining when you require sequential, asynchronous and lazy execution. In all other cases (when you are using non-blocking code) you're free to choose any approach, and it's generally better to use the simplest one.
Sequential non-blocking code can be organised in functions that you can integrate with a reactive pipeline using map/filter/doOnNext/... components.
For example, consider the following Order price calculation code.
class PriceCalculator {
    private final Map<ProductCode, Price> prices;

    PriceCalculator(Map<ProductCode, Price> prices) {
        this.prices = prices;
    }

    PricedOrder calculatePrice(Order order) { // doesn't deal with Mono/Flux stuff
        double price = order.orderLines.stream()
                .map(orderLine -> prices.get(orderLine.productCode))
                .mapToDouble(Price::doubleValue)
                .sum();
        return new PricedOrder(order, price);
    }
}
class PricingController {
    public Mono<PricedOrder> getPricedOrder(ServerRequest request) {
        OrderId orderId = new OrderId(request.pathVariable("orderId"));
        Mono<Order> order = orderRepository.get(orderId);
        return order.map(priceCalculator::calculatePrice);
    }
}
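Applied to the password example from the question: encoding the password is plain, non-blocking work, so it can stay imperative, and only the save needs the pipeline. A sketch (passwordEncoder stands in for whatever encoder you actually use):

public Mono<User> createUser(User user) {
    // Plain, non-blocking mutation: no need to wrap this in Mono.just(user).map(...).
    user.setPassword(passwordEncoder.encode(user.getPassword()));
    return reactiveMongoDbRepository.save(user); // Mono<User> handed back to the controller
}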
So, in a lot of AWS Lambda tutorials, we are taught to write a few lines of code, package it, and upload it.
Is there a code example where you can just trigger/call the lambda from your current project using the ARN or something? My current project is huge, and uploading the function package to AWS Lambda isn't preferable; I just want to trigger the lambda from my current code.
One link I found is https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/, but it has not worked for me so far.
Apologies if it's been asked already; I didn't find anything useful to me.
EDIT:
My problem is that the lambda function only gets invoked because I've uploaded it as a JAR (i.e. it's not a part of my main project; I just did that as a test), but I want to write the code that invokes it in my main project. I don't know how to invoke the lambda from my Java code. Like @MaxPower said, perhaps I have this all wrong and this is not possible.
What I do is create an interface with the @LambdaFunction annotation.
public interface Foo {
    @LambdaFunction(functionName = "LambdaName")
    OutputObject doFoo(InputObject inputObject);
}
Then, in the class that is to call the lambda, I make a Lambda client:
private final Foo fooCaller;

RunTest() {
    ProfileCredentialsProvider lambdaCredentialsProvider = new ProfileCredentialsProvider("lambda");
    AWSLambdaClientBuilder builder = AWSLambdaClientBuilder.standard().withCredentials(lambdaCredentialsProvider);
    builder.setRegion("us-east-1");
    AWSLambda awsLambda = builder.build();
    LambdaInvokerFactory.Builder lambdaBuilder = LambdaInvokerFactory.builder();
    lambdaBuilder.lambdaClient(awsLambda);
    fooCaller = lambdaBuilder.build(Foo.class);
}
Then, when you want to call the lambda:
fooCaller.doFoo(inputObject);
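The invoker factory serializes the InputObject argument to JSON for the request payload and deserializes the function's response back into an OutputObject, so both just need to be plain POJOs. A hypothetical shape:

public class InputObject {
    private String data;

    public InputObject() { }                      // no-arg constructor for JSON deserialization
    public InputObject(String data) { this.data = data; }

    public String getData() { return data; }
    public void setData(String data) { this.data = data; }
}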
I need to geocode an Address object and then store the updated Address in a search engine. This can be simplified to taking an object, performing one long-running operation on it, and then persisting it. This means there is an order-of-operations requirement that the first operation complete before persistence occurs.
I would like to use Akka to move this off the main thread of execution.
My initial thought was to use a pair of Futures to accomplish this, but the Futures documentation is not entirely clear on which behavior (fold, map, etc.) guarantees that one Future will be executed before another.
I started out by creating two functions, defferedGeocode and deferredWriteToSearchEngine, which return Futures for the respective operations. I chain them together using Future<>.andThen(new OnComplete...), but this gets clunky very quickly:
Future<Address> geocodeFuture = defferedGeocode(ec, address);
geocodeFuture.andThen(new OnComplete<Address>() {
    public void onComplete(Throwable failure, Address geocodedAddress) {
        if (geocodedAddress != null) {
            Future<Address> searchEngineFuture =
                    deferredWriteToSearchEngine(ec, addressSearchService, geocodedAddress);
            searchEngineFuture.andThen(new OnComplete<Address>() {
                public void onComplete(Throwable failure, Address savedAddress) {
                    // process search engine results
                }
            }, ec); // the inner andThen also needs the ExecutionContext
        }
    }
}, ec);
And then deferredGeocode is implemented like this:
private Future<Address> defferedGeocode(
        final ExecutionContext ec,
        final Address address) {
    return Futures.future(new Callable<Address>() {
        public Address call() throws Exception {
            log.debug("Geocoding Address...");
            return address; // actual geocoding elided for the example
        }
    }, ec);
}
deferredWriteToSearchEngine is pretty similar to deferredGeocode, except it takes the search engine service as an additional final parameter.
My understanding is that Futures are supposed to be used to perform calculations and should not have side effects. In this case, geocoding the address is a calculation, so I think using a Future is reasonable, but writing to the search engine is definitely a side effect.
What is the best practice here for Akka? How can I avoid all the nested calls, but ensure that both the geocoding and the search engine write are done off the main thread?
Is there a more appropriate tool?
Update:
Based on Viktor's comments below, I am trying this code out now:
ExecutionContext ec;

private Future<Address> addressBackgroundProcess(Address address) {
    Future<Address> geocodeFuture = addressGeocodeFutureFactory.defferedGeocode(address);
    return geocodeFuture.flatMap(new Mapper<Address, Future<Address>>() {
        @Override
        public Future<Address> apply(Address geoAddress) {
            return addressSearchEngineFutureFactory.deferredWriteToSearchEngine(geoAddress);
        }
    }, ec);
}
This seems to work OK, except for one issue I'm not thrilled with. We are working in a Spring IoC code base, and I would like to inject the ExecutionContext into the FutureFactory objects, but it seems wrong for this function (in our DAO) to need to be aware of the ExecutionContext.
It seems odd to me that the flatMap() function needs an EC at all, since both futures provide one.
Is there a way to maintain the separation of concerns? Am I structuring the code badly, or is this just the way it needs to be?
I thought about creating an interface in the FutureFactories that would allow chaining of FutureFactories, so the flatMap() call would be encapsulated in a FutureFactory base class, but this seems like it would be deliberately subverting an intentional Akka design decision.
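For what it's worth, one way to keep the DAO unaware of the ExecutionContext is to inject it only into a small composing component and expose the chained operation from there. A sketch assuming the factory types from the code above (the AddressPipeline name and the factory class names are invented):

public class AddressPipeline {
    private final ExecutionContext ec;
    private final AddressGeocodeFutureFactory geocodeFactory;
    private final AddressSearchEngineFutureFactory searchFactory;

    public AddressPipeline(ExecutionContext ec,
                           AddressGeocodeFutureFactory geocodeFactory,
                           AddressSearchEngineFutureFactory searchFactory) {
        this.ec = ec;
        this.geocodeFactory = geocodeFactory;
        this.searchFactory = searchFactory;
    }

    // The flatMap, and with it the ExecutionContext, stays in here;
    // the DAO only ever sees a Future<Address>.
    public Future<Address> geocodeAndIndex(Address address) {
        return geocodeFactory.defferedGeocode(address).flatMap(
                new Mapper<Address, Future<Address>>() {
                    @Override
                    public Future<Address> apply(Address geocoded) {
                        return searchFactory.deferredWriteToSearchEngine(geocoded);
                    }
                }, ec);
    }
}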
Warning: Pseudocode ahead.
Future<SomeResult> myFutureResult = deferredGeocode(ec, address).flatMap(
    new Mapper<Address, Future<Address>>() {
        public Future<Address> apply(Address geocodedAddress) {
            return deferredWriteToSearchEngine(ec, addressSearchService, geocodedAddress);
        }
    }, ec).map(
    new Mapper<Address, SomeResult>() {
        public SomeResult apply(Address savedAddress) {
            // Create SomeResult after deferredWriteToSearchEngine is done
            return new SomeResult(savedAddress); // hypothetical constructor
        }
    }, ec);
See how it is not nested: flatMap and map are used for sequencing the operations. andThen is useful when you want a side-effecting-only operation to run to full completion before passing the result on. Of course, if you map twice on the SAME future instance then there is no ordering guarantee, but since we are flatMapping and mapping on the returned futures (new ones, according to the docs), there is a clear data flow in our program.
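For instance, to log once the whole chain is done, without altering the result, an andThen at the end would look like this (mirroring the OnComplete style from the question; still pseudocode):

myFutureResult.andThen(new OnComplete<SomeResult>() {
    public void onComplete(Throwable failure, SomeResult result) {
        // Side effect only; the future still carries the original result.
        if (failure != null) {
            log.error("pipeline failed", failure);
        } else {
            log.debug("pipeline finished");
        }
    }
}, ec);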