We have developed some Lambda functions and deployed them on AWS, where they are working fine. However, the client is now planning a move to Azure, and they may even switch back to AWS or to another vendor in the future.
We have a separate Maven project for the AWS-related code, so our business logic and classes remain the same.
What I have done is create a Maven project and add the individual Lambda functions to it as dependencies.
I then made a factory class that loads the implementation based on a property set to AZURE or AWS (using Class.forName and reflection).
So I can switch to Azure just by removing the AWS Maven dependency and adding the Azure one.
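Roughly, the factory looks like this (a simplified sketch: the property name and the implementation class names are placeholders, not our real ones, and BusinessProcessor stands in for our common business interface):
public class CloudFactory {

    // Resolves the handler implementation reflectively, so only the jar for
    // the active cloud provider needs to be on the classpath.
    public static BusinessProcessor getProcessor() {
        String provider = System.getProperty("cloud.provider", "AWS"); // "AWS" or "AZURE"
        String implClass = "AZURE".equals(provider)
                ? "com.example.azure.AzureBusinessProcessor"  // placeholder class name
                : "com.example.aws.AwsBusinessProcessor";     // placeholder class name
        try {
            return (BusinessProcessor) Class.forName(implClass)
                    .getDeclaredConstructor()
                    .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("No implementation found for " + provider, e);
        }
    }
}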
According to the picture, my plan was to create new AzureUtils and AzureWrapper projects and use the Azure cloud directly, by switching the cloud in the cloudFactory that lives in the generic utils. That should hopefully work (not tested); AWS already works that way.
Now the problem is that the client does not want everything packed into one jar, i.e. a hard no to all the Lambdas in a single jar. He wants some layer where the switching takes place.
Which design pattern would be useful here, and what would the approach be?
Currently my Lambda function looks like the following:
public class Hello implements RequestHandler<S3Event, String> {
    public String handleRequest(S3Event s3event, Context context) {
        // ...
        // call to the business processor as in the diagram
        return null; // placeholder
    }
}
And the Azure function looks somewhat like a simple class with annotations:
public class Function {
    @FunctionName("hello")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = { HttpMethod.GET, HttpMethod.POST }, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        // Parse the query parameter
        String query = request.getQueryParameters().get("name");
        String name = request.getBody().orElse(query);
        if (name != null) {
            // call to the business processor as in the diagram
        }
        return request.createResponseBuilder(HttpStatus.OK).body(name).build();
    }
}
After all this, I have only two questions.
First, I would like to know whether the design in the diagram is the right thing to do.
Second, my client is asking for a wrapper, something magical that should handle both types of cloud implementations. Is this even possible?
If it is possible, please guide me in the right direction.
Any help is greatly appreciated.
Regarding your second question, how to handle both types of cloud: please check out the third-party solution serverless.com. It's a company that has created its own serverless wrapper, so that you can be free of vendor lock-in.
I am developing a web application using Java and Spring Boot on the AWS Lambda service.
I am designing it to have one database-service. This will be a collection of Entity (table) and JPA repository classes, so if I need to make any database schema changes, I only have to make them in this one service.
The other services, which will be exposed through an API gateway, will use this database-service as a Lambda layer.
parent-project
|
|---database-service
|
|---API-service1
|
|---API-service2
...
The problem is that I need to create the tables before any of the Lambda services are deployed, so that the API services can use them. One way to solve this is to deploy the database-service as a Lambda function and invoke it, which would call a method like the one below to create all the tables.
@SpringBootApplication
public class DatabaseServiceApplication implements CommandLineRunner {

    private DynamoDBMapper dynamoDBMapper;
    private final AmazonDynamoDB amazonDynamoDB;

    public DatabaseServiceApplication(AmazonDynamoDB amazonDynamoDB) {
        this.amazonDynamoDB = amazonDynamoDB;
    }

    public static void main(String[] args) {
        SpringApplication.run(DatabaseServiceApplication.class, args);
    }

    @Override
    public void run(String... strings) {
        dynamoDBMapper = new DynamoDBMapper(amazonDynamoDB);
        CreateTableRequest tableRequest = dynamoDBMapper
                .generateCreateTableRequest(Association.class);
        tableRequest.setProvisionedThroughput(
                new ProvisionedThroughput(1L, 1L));
        TableUtils.createTableIfNotExists(amazonDynamoDB, tableRequest);
    }
}
Or I could use a script to create the tables. I am not sure which is the better option, or whether there is a better option altogether.
Has anyone faced this problem before, and how did you fix it?
To me the best way to do this is on Lambda cold start. Your code needs to be smart enough to not care if the DB is already correct. Based on the code you're showing I would do something on the order of:
public class LambdaExample implements RequestStreamHandler {

    // default client for illustration; construct it however you normally do
    private final AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClientBuilder.standard().build();
    private final DynamoDBMapper dynamoDBMapper;

    // only called on cold start
    public LambdaExample() {
        dynamoDBMapper = new DynamoDBMapper(amazonDynamoDB);
        CreateTableRequest tableRequest = dynamoDBMapper
                .generateCreateTableRequest(Association.class);
        tableRequest.setProvisionedThroughput(
                new ProvisionedThroughput(1L, 1L));
        TableUtils.createTableIfNotExists(amazonDynamoDB, tableRequest);
    }

    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) {
        // handle the request. This handler type requires reading the inputStream
        // yourself, but use whatever you normally have here.
    }
}
If you're using a traditional relational database, you could use Flyway instead. It too knows if a DB has already been updated.
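For example, a minimal sketch of triggering Flyway from the cold-start constructor (assuming the Flyway 5+ fluent API; the JDBC settings are placeholders):
import org.flywaydb.core.Flyway;

public class RelationalLambdaExample {
    // only called on cold start, same as above
    public RelationalLambdaExample() {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://host/db", "user", "password") // placeholders
                .load();
        // Applies pending migrations from classpath:db/migration;
        // a no-op if the schema is already up to date.
        flyway.migrate();
    }
}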
Note that if you have thousands of Lambdas, they will all call this, slowing the cold start of every single one of them. That is why @MarkB is suggesting a process to externalize the DB creation, since really only the very first Lambda kicked off does anything useful. After that you're wasting a bit of time/money with every new Lambda.
Since you are deploying via Terraform, the correct way to do this is to have Terraform create the DynamoDB tables as well. You would configure your aws_lambda_function resources in Terraform with a depends_on meta-argument referencing the aws_dynamodb_table resource, so that Terraform ensures the table is created before the Lambda functions.
Can you please answer the below questions?
1) Are you deploying your Spring Boot application in Lambda?
If yes, that doesn't sound like a good use of Spring Boot; a Spring Boot application should be hosted on an EC2/ECS instance so it is up and running 24/7.
Think of Lambda as a function that runs to handle a simple task. To achieve that, you can write a simple Java application and deploy the jar to the Lambda function.
2) CloudFormation, Terraform, and other tools are used to create the infrastructure; you usually run the infrastructure job first and the deployment after it.
Here's a link of a terraform structure I built for a personal project.
https://github.com/saifmasadeh/terraform-project-structure
Greetings to the community! I am using Alfresco Community Edition 6.0.0, and I am currently trying to implement a workflow where I have a serviceTask calling a custom class that implements the JavaDelegate interface.
The serviceTask in the BPMN code:
<serviceTask id="delegate"
activiti:class="org.nick.java.GenerateDocument"
name="Get the document">
</serviceTask>
Java Delegate class
public class GenerateDocument implements JavaDelegate {

    @Autowired
    RelatedContentService relatedContentService;

    public void execute(DelegateExecution execution) throws Exception {
        ProcessEngine p = ProcessEngines.getDefaultProcessEngine();
    }
}
What I would like is this: when the service task calls the GenerateDocument class, I want to somehow retrieve a document that is stored inside my Alfresco repository (I know its name, and its id in case a method needs it).
Ideally, once I retrieve this file, I would like to make changes to it and save it as a new file in the Alfresco repository. Is this scenario feasible? According to my search on the web so far, I may need this RelatedContentService to do it; is that correct?
Thanks in advance for any help :)
Something that is cool about JavaDelegates running in Activiti embedded within Alfresco is that you have access to the ServiceRegistry. From there you can get any bean you might need.
For example, suppose your JavaDelegate needed to run an Alfresco action. You can use the ServiceRegistry to get the ActionService, and away you go:
ActionService actionService = getServiceRegistry().getActionService();
Action mailAction = actionService.createAction(MailActionExecuter.NAME);
mailAction.setParameterValue(MailActionExecuter.PARAM_SUBJECT, SUBJECT);
mailAction.setParameterValue(MailActionExecuter.PARAM_TO, notificationEmailAddress);
In your case, if you want to find a node, you probably want to use the SearchService to run a query or to find a node using its node reference.
Take a look at the Alfresco foundational Java API to see the collection of services that are available for finding, updating, and creating nodes.
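To make that concrete, here is a rough, untested sketch in the same style as the snippet above (it assumes the same getServiceRegistry() accessor, and the document name is hypothetical): find the node by name, read its content, and save a modified copy next to it.
// Find the node by name via the SearchService (FTS query).
SearchService searchService = getServiceRegistry().getSearchService();
ResultSet results = searchService.query(
        StoreRef.STORE_REF_WORKSPACE_SPACESSTORE,
        SearchService.LANGUAGE_FTS_ALFRESCO,
        "cm:name:\"myDocument.txt\"");              // hypothetical name
try {
    if (results.length() > 0) {
        NodeRef nodeRef = results.getNodeRef(0);

        // Read the existing content.
        ContentService contentService = getServiceRegistry().getContentService();
        ContentReader reader = contentService.getReader(nodeRef, ContentModel.PROP_CONTENT);
        String text = reader.getContentString();

        // Create a new node in the same folder and write the changed content.
        NodeRef parent = getServiceRegistry().getNodeService()
                .getPrimaryParent(nodeRef).getParentRef();
        FileInfo newFile = getServiceRegistry().getFileFolderService()
                .create(parent, "myDocument-modified.txt", ContentModel.TYPE_CONTENT);
        ContentWriter writer = contentService.getWriter(
                newFile.getNodeRef(), ContentModel.PROP_CONTENT, true);
        writer.putContent(text.replace("old", "new")); // whatever change you need
    }
} finally {
    results.close();
}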
I am developing a web app with the Spark framework. On one of the pages I want to enable dynamic content loading. What I mean by that is that in a Java controller I search for some info on the server, and I want to update the page when the search finishes, e.g.:
// this is called by get("/module", (req, res) -> ...);
public ModelAndView getModules(Request req, Response res) {
    Map<String, Object> model = new HashMap<String, Object>();
    List<Module> modules = new ArrayList<>();
    model.put("modules", modules);
    lookForModules(this); // "this" acts as the Listener
    return new ModelAndView(model, "pathToSiteSource");
}

private void lookForModules(Listener listener) {
    // module search in a background thread;
    // when any module is found I inform the listener;
    // different modules can be found at various times
}

public void onModulesFound(List<Module> modules) {
    // I want to update the site using the modules that I got
}
I read that WebSockets are the way to go, but the WebSocket examples on the Spark website use AJAX calls, and my search has to be done in my Java class. Are WebSockets the correct way to do this anyway?
I managed to solve my problem in some way.
The Java code is as above, except that in the onModulesFound method I update a static list of modules that I store in my controller class (not as a variable in the getModules method).
Then in the page code I added an AJAX call that updates the specific div every three seconds. That causes getModules to be called, which puts the most recent modules list into my page's model.
Not sure if this is the best solution, but it works pretty well for me.
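For reference, a minimal sketch of that polling variant (the class and field names are mine, and the CopyOnWriteArrayList choice is an assumption; any thread-safe list would do):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CopyOnWriteArrayList;

import spark.ModelAndView;
import spark.Request;
import spark.Response;

public class ModuleController {

    // Shared between the background search and the request handler.
    private static final List<Module> FOUND_MODULES = new CopyOnWriteArrayList<>();

    // Called by the background search whenever new modules turn up.
    public static void onModulesFound(List<Module> modules) {
        FOUND_MODULES.addAll(modules);
    }

    // Hit by the AJAX poll every three seconds via get("/module", ...);
    // renders whatever has been found so far.
    public static ModelAndView getModules(Request req, Response res) {
        Map<String, Object> model = new HashMap<>();
        model.put("modules", new ArrayList<>(FOUND_MODULES));
        return new ModelAndView(model, "pathToSiteSource");
    }
}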
I have essentially the same question as here but am hoping to get a less vague, more informative answer.
I'm looking for a way to configure DropWizard programmatically, or at the very least, to be able to tweak configs at runtime. Specifically I have a use case where I'd like to configure metrics in the YAML file to be published with a frequency of, say, 2 minutes. This would be the "normal" default. However, under certain circumstances, I may want to speed that up to, say, every 10 seconds, and then throttle it back to the normal/default.
How can I do this, and not just for the metrics.frequency property, but for any config that might be present inside the YAML config file?
Dropwizard reads the YAML config file and configures all the components only once, on startup. Neither the YAML file nor the Configuration object is ever used again. That means there is no direct way to reconfigure at runtime.
It also doesn't provide special interfaces/delegates where you can manipulate the components. However, you can access the objects of the components (usually; if not, you can always send a pull request) and configure them manually as you see fit. You may need to read the source code a bit, but it's usually easy to navigate.
In the case of metrics.frequency, you can see that the MetricsFactory class creates ScheduledReporterManager objects per metric type using the frequency setting, and it doesn't look like you can change them at runtime. But you can probably work around it somehow, or even better, modify the code and send a pull request to the Dropwizard community.
Although this feature isn't supported out of the box by Dropwizard, you're able to accomplish it fairly easily with the tools they give you. Note that the solution below definitely works on config values you've provided, but it may not work for built-in configuration values.
Also note that this doesn't persist the updated config values to the config.yml. However, that would be easy enough to implement yourself, simply by writing to the config file from the application. If anyone would like to write this implementation, feel free to open a PR on the example project I've linked below.
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And its corresponding configuration class:
ExampleConfiguration.java
public class ExampleConfiguration extends Configuration {
    private String myConfigValue;

    public String getMyConfigValue() {
        return myConfigValue;
    }

    public void setMyConfigValue(String value) {
        myConfigValue = value;
    }
}
Then create a task which updates the config:
UpdateConfigTask.java
public class UpdateConfigTask extends Task {
    ExampleConfiguration config;

    public UpdateConfigTask(ExampleConfiguration config) {
        super("updateconfig");
        this.config = config;
    }

    @Override
    public void execute(Map<String, List<String>> parameters, PrintWriter output) {
        config.setMyConfigValue("goodbye");
    }
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
#Path("/config")
public class ConfigResource {
private final ExampleConfiguration config;
public ConfigResource(ExampleConfiguration config) {
this.config = config;
}
#GET
public Response handleGet() {
return Response.ok().entity(config.getMyConfigValue()).build();
}
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same reference to the constructor of ConfigResource.java and UpdateConfigTask.java. If you aren't familiar with the concept see here:
Is Java "pass-by-reference" or "pass-by-value"?
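In other words, a tiny illustration of the shared-reference idea (not part of the linked project):
ExampleConfiguration config = new ExampleConfiguration();
config.setMyConfigValue("hello");

ConfigResource resource = new ConfigResource(config);  // holds the same instance...
UpdateConfigTask task = new UpdateConfigTask(config);  // ...as this task does

config.setMyConfigValue("goodbye");
// resource now serves "goodbye" too, because both objects read from
// the single shared ExampleConfiguration instance.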
The linked classes above are to a project I've created which demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified that this works with the built-in configuration. However, the Dropwizard Configuration class, which you need to extend for your own configuration, does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.
I solved this with bytecode manipulation via Javassist.
In my case, I wanted to change the "influx" reporter, and modifyInfluxDbReporterFactory should be run BEFORE Dropwizard starts:
private static void modifyInfluxDbReporterFactory() throws Exception {
    ClassPool cp = ClassPool.getDefault();
    // do NOT use InfluxDbReporterFactory.class.getName(),
    // as this would force the class into the classloader
    CtClass cc = cp.get("com.izettle.metrics.dw.InfluxDbReporterFactory");
    CtMethod m = cc.getDeclaredMethod("setTags");
    m.insertAfter(
            "if (tags.get(\"cloud\") != null) tags.put(\"cloud_host\", tags.get(\"cloud\") + \"_\" + host);"
            + "tags.put(\"app\", \"sam\");");
    cc.toClass();
}
I have written some code which I thought was quite well-designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I need to change some of my variables' access modifiers from private to default, i.e. expose them (only within the package, but still...).
Here is a rough overview of the code in question. It is supposed to be a sort of address validation framework that enables address validation by different means, e.g. by some external web service, by data in a DB, or by any other source. So I have the notion of a Module, which is just that: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is some sort of factory that, based on the request or session state, chooses a proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request);   // analyze the request and choose a module
        module.init(createInitParams(request));  // init the module
        return module;
    }
}
And then I have written a Module that uses an external web service for validation, implemented like this:
class WebServiceModule implements Module {
    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade, which is a wrapper over the external web service; my module calls this facade, processes its response, and returns the framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then again, in order for the module to use my mocked web service, the field webservice must be accessible from the outside. That breaks my design, and I wonder if there is anything I can do about it. Obviously, the facade cannot be passed in the init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how. I have not used any DI framework before, like Guice, so I don't know whether one could easily be used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package-private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me that I could create a WebServiceProcessor that takes a WebServiceFacade as a constructor argument, and then test just the WebServiceProcessor. That would be one solution to my problem. What do you think about it? I have one problem with it: my WebServiceModule would then be sort of useless, just delegating all its work to other components; I would say one layer of abstraction too far.
Yes, your design is wrong. You should use dependency injection instead of new ... inside your class (which is also called a "hardcoded dependency"). The inability to easily write a test is a perfect indicator of a wrong design (read about the "listen to your tests" paradigm in Growing Object-Oriented Software, Guided by Tests).
BTW, using reflection or a dependency-breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said, and would like to suggest that the reason you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the single responsibility principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they have only this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code becomes hard to unit-test, since it explicitly references prod code, so keep this implementation very concise. Put any logic (like createParamsForFacade) off to the side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the facade as a dependency, following the inversion of control (IoC) principle:
class WebServiceValidator implements Validator {
    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. This way you will naturally gravitate towards a testable solution, which is usually also the best design. I think that this is due to the fact that testing requires modularity, and modularity is coincidentally the hallmark of good design.