Intercept outgoing HTTP requests [closed] - java

So I'm looking for a library, or some other way, of intercepting any outgoing HTTP requests from my Java application. Why? Because I want to unit test an API integration, and since I'm using a library (wrapper) for that API, I can't modify any of its code. It's not my code that's actually making the HTTP requests.
So, I need to intercept them, view them and assert they are correct according to the API's documentation.
I've tried looking online, but I couldn't find exactly what I'm looking for. Most libraries out there will only let me do this if I configure the requests themselves, but I can't do that, since they're made by the API wrapper, not by my code.
Cheers
P.S. Some example code:
String path = "/some/path";
String repoOwner = "John Doe";
String repoName = "John repo";
GitHubClient client = new GitHubClient();
client.setCredentials(this.username, this.password);
RepositoryService repositoryService = new RepositoryService(client);
CommitService commitService = new CommitService(client);
Repository repo = repositoryService.getRepository(repoOwner, repoName);
List<RepositoryCommit> commits = commitService.getCommits(repo, null, path);
This will use the API wrapper to get all the commits in a given repository. Suppose I wanted to see the HTTP requests this code makes, and then unit test them and assert they are correct. How would I do that? Is there any way to intercept them, catch them kind of like an exception, and do stuff with them, such as assertions?

What you're trying to do is not easy, and from my point of view it doesn't make much sense.
You're trying to test your application, but mostly you're trying to test the GitHubClient library.
If I were you, I would use only mocks in your tests and verify the input arguments. The test would look like this (using Mockito):
CommitService commitService = mock(CommitService.class); // Mockito mock, not the real service
List<RepositoryCommit> commits = new ArrayList<>();
commits.add(new RepositoryCommit(....));
commits.add(new RepositoryCommit(....));
when(commitService.getCommits(repo, null, path)).thenReturn(commits);
That way you avoid testing the library itself.
If you want to actually investigate what the library does (what requests it sends and what is going on), I recommend going through the library code, finding the place where the request is actually set up, then running your app in debug mode and adding a breakpoint there.
If you really want to handle (e.g. log, replicate...) the real requests to GitHub, even in production, you could use AspectJ. It changes the byte code so that you can wrap particular method calls. Again, you would have to find the place in the GitHub library where the real GitHub call (I suppose HTTP) is performed and attach an aspect to that call. In an aspect you basically declare which method you want to intercept and what you want to do before or after its call. I'm not really an expert in this area, so I can't provide you more info; there is a tutorial about aspects here: https://eclipse.org/aspectj/doc/next/progguide/starting.html
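For illustration, a minimal annotation-style aspect could look like the sketch below. The pointcut is a guess; you would have to point it at whatever method the wrapper really uses to fire its HTTP calls (here assumed to be GitHubClient.get):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class GitHubRequestInterceptor {

    // Hypothetical join point: adjust the pointcut to the method the
    // wrapper actually uses to issue its HTTP requests.
    @Before("call(* org.eclipse.egit.github.core.client.GitHubClient.get(..))")
    public void logOutgoingRequest(JoinPoint joinPoint) {
        // Inspect (or record for later assertions) the request argument.
        System.out.println("Outgoing GitHub call: " + joinPoint.getArgs()[0]);
    }
}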
Another option is to install a network-watching tool like Wireshark.
EDIT
Another way to avoid costly calls to GitHub is to create a simple wrapper like this:
public class GitHubServiceImpl implements GitHubService {
    private Repository repo;
    private CommitService commitService;

    /** other methods **/

    @Override
    public List<RepositoryCommit> getAllCommits(String path) {
        return commitService.getCommits(repo, null, path);
    }
}
And here is the actual service you want to test:
public class CommitService {
    private GitHubService gitHubService;
    private String path;

    public List<RepositoryCommit> getAll() {
        return gitHubService.getAllCommits(path);
    }
}
and for integration tests, create a special implementation of the GitHubService interface:
public class TestGitHubService implements GitHubService {
    private Map<String, RepositoryCommit> preArrangedCommits;

    /** other methods **/

    public void setPreArrangedCommits(Map<String, RepositoryCommit> preArrangedCommits) {
        this.preArrangedCommits = preArrangedCommits;
    }

    @Override
    public List<RepositoryCommit> getAllCommits(String path) {
        return Collections.singletonList(preArrangedCommits.get(path));
    }
}
and in the test call:
public class CommitServiceIntegrationTest {
    // @Autowired, @Bean maybe?
    private CommitService commitService;
    // @Autowired, @Bean maybe?
    private TestGitHubService gitHubService;

    public void testGetAll() {
        Map<String, RepositoryCommit> preArrangedCommits = new HashMap<>();
        preArrangedCommits.put("path/", new RepositoryCommit(...));
        gitHubService.setPreArrangedCommits(preArrangedCommits);

        // assumes the commitService under test is configured with path "path/"
        List<RepositoryCommit> commits = commitService.getAll();
        assertEquals(preArrangedCommits.get("path/"), commits.get(0));
    }
}
I would go this way.
Frank

Related

Switch between AWS, Azure. Design pattern

We have developed some Lambda functions and deployed them on AWS, and they are working fine.
However, the client is now planning for Azure.
They may even switch back to AWS, or to any other vendor, in the future.
We have a separate Maven project for the AWS-related stuff,
so our business logic and classes remain the same.
What I have done is create a Maven project and add the individual Lambda functions to this project as dependencies.
Then I made a factory class which gets the implementation based on a property, AZURE or AWS (using Class.forName and reflection).
So I can switch to Azure by just removing the AWS Maven dependency and adding the Azure dependency.
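For illustration, a minimal sketch of that reflection-based factory; the property name, class names, and the BusinessProcessor type are hypothetical:

public final class CloudFactory {

    public static BusinessProcessor getProcessor() throws Exception {
        // "AWS" or "AZURE", e.g. read from a system property or config file
        String vendor = System.getProperty("cloud.vendor", "AWS");
        String implClass = "AZURE".equals(vendor)
                ? "com.example.azure.AzureBusinessProcessor"
                : "com.example.aws.AwsBusinessProcessor";
        // Only the implementation present on the classpath (via the Maven
        // dependency) will actually load.
        return (BusinessProcessor) Class.forName(implClass)
                .getDeclaredConstructor()
                .newInstance();
    }
}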
According to the picture, my plan was to create new AzureUtils and AzureWrapper projects and use the Azure cloud directly, by switching the cloud in the CloudFactory which is present in the generic utils. That would hopefully even work (not tested); AWS is working that way anyhow.
Now the problem is that the client does not want everything packed up in one jar, i.e. a no-no to all Lambdas in a single jar. He wants some layer where the switching should take place.
Now, which design pattern would be useful, and what would be the approach?
Currently my Lambda function looks like the one below:
public class Hello implements RequestHandler<S3Event, String> {
    public String handleRequest(S3Event s3event, Context context) {
        // .................
        // call to business processor as in diagram
    }
}
And the Azure function looks somewhat like a simple class with annotations:
public class Function {
    @FunctionName("hello")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = { HttpMethod.GET, HttpMethod.POST }, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        // Parse query parameter
        String query = request.getQueryParameters().get("name");
        String name = request.getBody().orElse(query);
        if (name != null) {
            // call to business processor as in diagram
        }
        // respond to the caller
        return request.createResponseBuilder(HttpStatus.OK).build();
    }
}
After all this, I have only two questions.
I would like to know, first, if the design in the diagram is the right thing to do.
And what my client is asking for: a wrapper, something magical, which should handle both types of cloud implementation. Is this even possible? If possible, guide me in the right direction.
Any help is greatly appreciated.
About your second question, how to handle both types of cloud: please check the third-party solution serverless.com. It's a company that created its own serverless wrapper, so that you can be free of vendor lock-in.

Architecture of a class, and best practice? [closed]

So I am in charge of creating a project that has to do with calculations of pay. One aspect of the project is to analyze a proper input such as 4:00 PM; other aspects include calculating the pay for the hours put in, the type of job, etc.
My question has more to do with the best practices for designing the classes around this.
Should I have one class that analyzes the input string, and only does that, and one class for the calculator to display the proper output? Or should it all be in one class?
Both ways are fine for me to do, but what is considered professional?
Is it best practice to split classes based on their unique functionality, even if you dedicate a class to simply one method?
At the boundary of your application you'll be accepting requests through a user interface or a system interface. You should treat anything originating from outside your application as untrusted and potentially wrong. For example, if you receive an HTTP request there is no guarantee that it is valid and contains the fields you expect. If you read from a file, it might be incorrectly formatted.
There should be a layer at the boundary of your application which takes input (which is just a bunch of bytes in the end) and turns it into a representation as Java objects of the suitable type (e.g. Boolean, LocalDate). If everything is a String, you are probably doing it wrong. If this layer is unable to do this, it should send back an error.
Once you have expressed the request as a correctly typed Java objects, your business logic should process the request. This makes it possible to use the same logic when data is provided through a different interface, separates plumbing (parsing) from business logic (calculations). It allows the business logic to be more easily unit tested.
When you output a response back to the user (or system), you should convert from your nicely structured Java objects back to the output representation at the last moment.
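To make this concrete with the 4:00 PM input from the question, a boundary-layer parser might look like the following sketch (the class and method names are made up):

import java.time.LocalTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.util.Locale;

// Hypothetical boundary-layer parser: turns the raw string into a typed
// LocalTime or rejects it, so the business logic never sees the raw String.
public class TimeInputParser {
    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("h:mm a", Locale.ENGLISH);

    public static LocalTime parse(String raw) {
        try {
            return LocalTime.parse(raw.trim(), FORMAT);
        } catch (DateTimeParseException e) {
            throw new IllegalArgumentException("Not a valid time: " + raw, e);
        }
    }
}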
I suggest you take a look at the javax.validation package and the Bean Validation JSRs, versions 1.0 and 2.0.
Using this approach you can create Java classes to represent your data and annotate them with the required validations. Triggering the validation depends a little bit on the context.
In a Spring Boot application, putting @Valid on the received controller parameter does the trick. See also this cheat sheet:
import javax.validation.Valid;
import com.company.app.model.Article;

@Controller
public class ArticleController {
    ...

    @RequestMapping(value = "/postArticle", method = RequestMethod.POST)
    public @ResponseBody String postArticle(@Valid Article article, BindingResult result, HttpServletResponse response) {
        if (result.hasErrors()) {
            String errorMessage = "";
            response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
            List<ObjectError> errors = result.getAllErrors();
            for (ObjectError e : errors) {
                errorMessage += "ERROR: " + e.getDefaultMessage();
            }
            return errorMessage;
        } else {
            return "Validation Successful";
        }
    }
}
In a standalone application it could be done like this:
public class BeanValidationExample {
    public static void main(String[] args) {
        Configuration<?> config = Validation.byDefaultProvider().configure();
        ValidatorFactory factory = config.buildValidatorFactory();
        Validator validator = factory.getValidator();

        Person person = new Person();
        person.setDateOfBirth(new Date(System.currentTimeMillis() + 10000));

        Set<ConstraintViolation<Person>> violations = validator.validate(person);
        violations.forEach(v -> System.out.println(v.getPropertyPath() +
                " - " + v.getMessage()));

        // Close the factory only after you are done validating.
        factory.close();
    }
}
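The Person class itself is not shown above; a hypothetical version that makes the example's future dateOfBirth fail validation could look like this:

import java.util.Date;
import javax.validation.constraints.Past;

// Hypothetical bean for the example above: @Past rejects the future date
// set in main(), producing one constraint violation.
public class Person {
    @Past
    private Date dateOfBirth;

    public void setDateOfBirth(Date dateOfBirth) {
        this.dateOfBirth = dateOfBirth;
    }
}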

Unit Test class that uses only local variables for composition

I am writing an app that uses various REST API endpoints with very similar properties. The only difference is in the endpoint address and payload; headers, method and other stuff remain the same. That is why I created a class to communicate with my remote host, called RestApiCommunicator, with a method generateRequestAndCallEndpoint(List payload) that wraps the payload with all the required stuff needed to perform the REST call.
Then I have various classes that only call this communicator class with the proper endpoint suffix and its resources.
Everything is working fine, but I want to unit test all of those classes. I was trying to figure out how to do that by reading a lot of SO questions, but they are rather complicated cases; mine is very simple.
I am trying to figure out a proper way to unit test a class that looks like this one:
class MyRestClient {
    public void useRestApi(List<MyResources> myResources) {
        RestApiCommunicator restApiCommunicator = new RestApiCommunicator("/some/endpoint");
        restApiCommunicator.generateRequestAndCallEndpoint(myResources);
    }
}
I want to test that the communicator was created with the proper endpoint address and that generateRequestAndCallEndpoint was called exactly once with my sample payload.
The only thing that comes to mind is to make restApiCommunicator a field, create a setter for it, and mock it in unit tests. But that seems like a rather dirty solution, and I wouldn't like to modify my code just to allow tests.
Maybe you can point me in some direction where I could have this class tested using some good pattern.
(ps. If that matters - this is a Spring Boot app)
You could provide a factory for the communicator:
class MyRestClient {
    private RestApiCommunicatorFactory factory = ...

    public void useRestApi(List<MyResources> myResources) {
        factory.getCommunicator("/some/endpoint")
                .generateRequestAndCallEndpoint(myResources);
    }
}
In your unit test, you provide a mock of the factory, which returns mock communicators. The specific language for that depends on your mocking library of choice.
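With Mockito, for instance, such a test might look like this sketch; it assumes the factory is handed to MyRestClient through its constructor, which is not shown above:

import static org.mockito.Mockito.*;
import java.util.Collections;
import java.util.List;
import org.junit.Test;

public class MyRestClientTest {

    @Test
    public void callsCommunicatorOnceWithPayload() {
        RestApiCommunicatorFactory factory = mock(RestApiCommunicatorFactory.class);
        RestApiCommunicator communicator = mock(RestApiCommunicator.class);
        when(factory.getCommunicator("/some/endpoint")).thenReturn(communicator);

        MyRestClient client = new MyRestClient(factory); // hypothetical constructor
        List<MyResources> payload = Collections.singletonList(new MyResources());
        client.useRestApi(payload);

        verify(communicator, times(1)).generateRequestAndCallEndpoint(payload);
    }
}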
One way to do exactly what you ask (i.e., "to test that the communicator was created with the proper endpoint address and that generateRequestAndCallEndpoint was called exactly once with my sample payload") is to mock it using JMockit:
public final class MyRestClientTest {
    @Tested MyRestClient restClient;
    @Mocked RestApiCommunicator restApi;

    @Test
    public void verifyUseOfRestApi() {
        List<MyResource> resources = asList(new MyResource("a"), new MyResource("b"));

        restClient.useRestApi(resources);

        new Verifications() {{
            new RestApiCommunicator("/some/endpoint");
            restApi.generateRequestAndCallEndpoint(resources); times = 1;
        }};
    }
}

Configuring DropWizard Programmatically

I have essentially the same question as here but am hoping to get a less vague, more informative answer.
I'm looking for a way to configure DropWizard programmatically, or at the very least, to be able to tweak configs at runtime. Specifically I have a use case where I'd like to configure metrics in the YAML file to be published with a frequency of, say, 2 minutes. This would be the "normal" default. However, under certain circumstances, I may want to speed that up to, say, every 10 seconds, and then throttle it back to the normal/default.
How can I do this, and not just for the metrics.frequency property, but for any config that might be present inside the YAML config file?
Dropwizard reads the YAML config file and configures all the components only once, on startup. Neither the YAML file nor the Configuration object is ever used again. That means there is no direct way to reconfigure things at runtime.
It also doesn't provide special interfaces/delegates through which you could manipulate the components. However, you can usually access the components' objects (and if not, you can always send a pull request) and configure them manually as you see fit. You may need to read the source code a bit, but it's usually easy to navigate.
In the case of metrics.frequency, you can see that the MetricsFactory class creates ScheduledReporterManager objects per metric type using the frequency setting, and it doesn't look like you can change them at runtime. But you can probably work around it somehow or, even better, modify the code and send a pull request to the Dropwizard community.
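That said, if the goal is only to control how often metrics are published, one workaround (a sketch, not something Dropwizard supports directly) is to drive a reporter yourself through the Metrics API instead of the YAML-configured one, since reporters can be stopped and restarted at a different rate:

import java.util.concurrent.TimeUnit;
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;

public class ReporterControl {
    private final MetricRegistry registry;
    private ConsoleReporter reporter;

    public ReporterControl(MetricRegistry registry) {
        this.registry = registry;
    }

    // Stop the current reporter (if any) and start a new one at the
    // requested period, e.g. restartWithPeriod(10, TimeUnit.SECONDS).
    public synchronized void restartWithPeriod(long period, TimeUnit unit) {
        if (reporter != null) {
            reporter.stop();
        }
        reporter = ConsoleReporter.forRegistry(registry).build();
        reporter.start(period, unit);
    }
}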
Although this feature isn't supported out of the box by Dropwizard, you're able to accomplish it fairly easily with the tools they give you. Note that the solution below definitely works on config values you've provided, but it may not work for built-in configuration values.
Also note that this doesn't persist the updated config values to config.yml. However, that would be easy enough to implement yourself simply by writing to the config file from the application. If anyone would like to write this implementation, feel free to open a PR on the example project I've linked below.
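A possible starting point for that persistence, sketched with Jackson's YAML support (Dropwizard already ships with Jackson); be aware that serializing a full Configuration subclass will also emit its built-in fields:

import java.io.File;
import java.io.IOException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

// Hypothetical helper: writes the current in-memory configuration back to
// config.yml so the updated values survive a restart.
public class ConfigPersister {
    public static void persist(ExampleConfiguration config, File yamlFile) throws IOException {
        new ObjectMapper(new YAMLFactory()).writeValue(yamlFile, config);
    }
}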
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And its corresponding configuration class:
ExampleConfiguration.java
public class ExampleConfiguration extends Configuration {
    private String myConfigValue;

    public String getMyConfigValue() {
        return myConfigValue;
    }

    public void setMyConfigValue(String value) {
        myConfigValue = value;
    }
}
Then create a task which updates the config:
UpdateConfigTask.java
public class UpdateConfigTask extends Task {
    ExampleConfiguration config;

    public UpdateConfigTask(ExampleConfiguration config) {
        super("updateconfig");
        this.config = config;
    }

    @Override
    public void execute(Map<String, List<String>> parameters, PrintWriter output) {
        config.setMyConfigValue("goodbye");
    }
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
#Path("/config")
public class ConfigResource {
private final ExampleConfiguration config;
public ConfigResource(ExampleConfiguration config) {
this.config = config;
}
#GET
public Response handleGet() {
return Response.ok().entity(config.getMyConfigValue()).build();
}
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same reference to the constructors of ConfigResource.java and UpdateConfigTask.java. If you aren't familiar with the concept, see here:
Is Java "pass-by-reference" or "pass-by-value"?
The classes linked above are from a project I've created which demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified this works with the built-in configuration. However, the Dropwizard Configuration class which you need to extend for your own configuration does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.
I solved this with bytecode manipulation via Javassist.
In my case, I wanted to change the "influx" reporter,
and modifyInfluxDbReporterFactory should be run BEFORE Dropwizard starts:
private static void modifyInfluxDbReporterFactory() throws Exception {
    ClassPool cp = ClassPool.getDefault();
    // Do NOT use InfluxDbReporterFactory.class.getName() here, as that would
    // force the class into the classloader before it can be modified.
    CtClass cc = cp.get("com.izettle.metrics.dw.InfluxDbReporterFactory");
    CtMethod m = cc.getDeclaredMethod("setTags");
    m.insertAfter(
        "if (tags.get(\"cloud\") != null) tags.put(\"cloud_host\", tags.get(\"cloud\") + \"_\" + host);"
            + "tags.put(\"app\", \"sam\");");
    cc.toClass();
}

I can't unit test my class without exposing private fields -- is there something wrong with my design?

I have written some code which I thought was quite well-designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I need to change some of my variables' access modifiers from private to default, i.e. expose them (only within a package, but still...).
Here is a rough overview of my code in question. There is supposed to be some sort of address validation framework that enables address validation by different means, e.g. validating addresses via some external web service, via data in a DB, or via any other source. So I have the notion of a Module, which is just this: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is some sort of factory that, based on the request or session state, chooses the proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request); // analyze request and choose a module
        module.init(createInitParams(request)); // init module
        return module;
    }
}
And then I have written a Module that uses an external web service for validation, and implemented it like this:
class WebServiceModule implements Module {
    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade which is a wrapper over external web service, and my module calls this facade, processes its response and returns some framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then again, in order for the module to use my mocked web service, the field webservice must be accessible from the outside. It breaks my design, and I wonder if there is anything I could do about it. Obviously, the facade cannot be passed in the init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how. I have not used any DI frameworks before, like Guice, so I don't know if one could easily be used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package-private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me that I could create a WebServiceProcessor which takes a WebServiceFacade as a constructor argument, and then test just the WebServiceProcessor. That would be one solution to my problem. What do you think about it? I have one problem with it: my WebServiceModule would then be sort of useless, just delegating all its work to another component; I would say, one layer of abstraction too far.
Yes, your design is wrong. You should do dependency injection instead of new ... inside your class (which is also called "hardcoded dependency"). Inability to easily write a test is a perfect indicator of a wrong design (read about "Listen to your tests" paradigm in Growing Object-Oriented Software Guided by Tests).
BTW, using reflection or dependency breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said and would like to suggest that the reason why you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the Single responsibility principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they only have this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code becomes hard to unit-test, since it is explicitly referencing prod code, so keep this impl very concise. Put any logic (like createParamsForFacade) on the side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the façade as a dependency, following the Inversion of Control (IoC) principle:
class WebServiceValidator implements Validator {
    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. This way you will naturally gravitate towards a testable solution, which is usually also the best design. I think that this is due to the fact that testing requires modularity, and modularity is coincidentally the hallmark of good design.
