Counting times a function is called via JUnit - java

I want to count how many times I make an HTTP GET when I use websockets and when I do not. I expect once when using websockets and n times otherwise. I want to do this via JUnit, and I happen to be using Spring too. Are there any creative ways to count the times I make a GET with Jersey?
client.target(.....).get(....)
I don't know how to do this without cluttering my production code with test specific code.

If your code is defined through an interface, I would use the Decorator pattern to add the additional behavior; in this case, the additional behavior is keeping track of the call count.
This approach is easy to set up if your concrete class is configured through Spring: in the Spring configuration used by the JUnit test, inject the decorated class instead. There is no impact on existing production code.
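For illustration, a minimal sketch of that decorator, assuming a hypothetical SearchClient interface that the production code already programs against (all names here are made up):

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical interface the production code depends on.
interface SearchClient {
    String get(String path);
}

// Decorator that only counts calls and delegates everything else.
class CountingSearchClient implements SearchClient {

    private final SearchClient delegate;
    private final AtomicInteger getCount = new AtomicInteger();

    CountingSearchClient(SearchClient delegate) {
        this.delegate = delegate;
    }

    @Override
    public String get(String path) {
        getCount.incrementAndGet();   // record the call
        return delegate.get(path);    // then hand off to the real client
    }

    int callCount() {
        return getCount.get();
    }
}

In the test's Spring configuration you would wrap the real bean in a CountingSearchClient and assert on callCount() after exercising the code under test; the production configuration keeps injecting the undecorated client.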

If you add a single static COUNT variable and increment it on every call, it will not hurt production at all, and you can use it not only for unit testing but also for production monitoring.
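A rough sketch of that idea with a Jersey/JAX-RS client; the GetCallCounter holder and the thin wrapper method are illustrative, not part of any library:

import java.util.concurrent.atomic.AtomicLong;
import javax.ws.rs.client.Client;
import javax.ws.rs.core.Response;

final class GetCallCounter {

    static final AtomicLong COUNT = new AtomicLong();

    // Thin wrapper around the Jersey call so the counter is incremented
    // in exactly one place.
    static Response get(Client client, String uri) {
        COUNT.incrementAndGet();
        return client.target(uri).request().get();
    }
}

A JUnit test can then read GetCallCounter.COUNT.get() before and after the scenario and assert on the difference; just remember to reset (or diff) the counter between tests, since static state leaks across them.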

Related

How to make ActivityOptions, like its lifetime, dynamic in the workflow

Since my activity workloads can differ dramatically, we cannot use a fixed scheduleToCloseTimeoutSeconds.
In the workerImpl's constructor I new up the stubs for the activities that are used in the workflow methods, but the problem is that the advised way of registering the workflow is by type:
registerWorkflowImplementationTypes
which only accepts a class, so there is no way to pass options like the lifetime into the workflow, which could be used to make the ActivityOptions dynamic.
So is what I am trying to achieve an antipattern in Cadence?
If not, what is the correct way of doing it? Probably workflow factory methods should be used, but the docs indicate those are mostly meant for unit testing and mocking, and it looks like registerWorkflowImplementationTypes is the preferred method.
The Cadence workflow implementation code must be deterministic. One way to break determinism is to directly rely on a configuration that can change during a workflow execution.
The standard way to solve this problem is to pass the configuration parameters to a workflow method as an argument, or to load them using an activity. Usually a local activity, which is more efficient, is used for this purpose.
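A rough sketch of the argument-passing approach with the com.uber.cadence Java client; the ReportWorkflow/ReportActivities types are made up, and builder method names may differ between client versions:

import java.time.Duration;

import com.uber.cadence.activity.ActivityOptions;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

interface ReportActivities {
    String generate(String payload);
}

interface ReportWorkflow {
    @WorkflowMethod
    String run(String payload, int timeoutSeconds);
}

class ReportWorkflowImpl implements ReportWorkflow {

    @Override
    public String run(String payload, int timeoutSeconds) {
        // The options are built from a workflow argument, so each execution
        // can size its own timeout while the implementation class is still
        // registered via registerWorkflowImplementationTypes.
        ActivityOptions options = new ActivityOptions.Builder()
                .setScheduleToCloseTimeout(Duration.ofSeconds(timeoutSeconds))
                .build();
        ReportActivities activities =
                Workflow.newActivityStub(ReportActivities.class, options);
        return activities.generate(payload);
    }
}

Loading the timeout through a (local) activity instead of a workflow argument follows the same shape; the key point is that the decision is made inside the workflow from deterministic inputs rather than from configuration that can change during the execution.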

How is if condition different from JUnit's assumingThat?

Why would one use JUnit's assumingThat() method instead of a plain old if clause? If the simple thing works, why complicate it with something else that does the same?
Is it just an expressiveness thing, or is there some advantage I'm not seeing?
JUnit's assume is not a new feature in version 5; it has been there since v4.4, and it has other applications.
You could skip a test with if, but with assume you can hook the assumption-failure lifecycle method to it, using a Listener.
Example situation (the most common): you could have a listener that builds a report of the test run, with code that adds failed tests, passed tests, and assumption-failed tests to the report. If you wanted to achieve this without a listener and its testAssumptionFailure method, you would have to repeat that code everywhere.
Adding a listener instead keeps it modular and maintainable.
There are many varieties of assume methods which you can use to avoid repeatedly writing if/else blocks and messages.
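A minimal JUnit 5 sketch of the difference (the ENV check is just an illustrative condition):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assumptions.assumeTrue;
import static org.junit.jupiter.api.Assumptions.assumingThat;

import org.junit.jupiter.api.Test;

class AssumptionExampleTest {

    @Test
    void withPlainIf() {
        if (!"CI".equals(System.getenv("ENV"))) {
            return;   // silently reported as passed, although nothing was verified
        }
        assertEquals(4, 2 + 2);
    }

    @Test
    void withAssumeTrue() {
        // Aborted (neither passed nor failed) when the assumption does not hold,
        // and the abort is visible to the engine, listeners and reports.
        assumeTrue("CI".equals(System.getenv("ENV")));
        assertEquals(4, 2 + 2);
    }

    @Test
    void withAssumingThat() {
        // Only the lambda is tied to the assumption; the rest of the test runs regardless.
        assumingThat("CI".equals(System.getenv("ENV")),
                () -> assertEquals(4, 2 + 2));
        assertEquals(1, 1);   // always executed
    }
}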

A very specific usage of callbacks in Java

This question is about a specific usage of a callback pattern. By callback I mean an interface on which I can define method(s) that are optionally (= with a default of 'do nothing', thanks Java 8) called from a lower layer in my application. My "application" is in fact a product which may change a lot between client projects, so I need to separate the things I can reuse (technical code, integration of technologies) from the rest (model, rules).
Let's take an example :
I developed a Search Service which is based upon Apache CXF JAX-RS Search.
This service parses a FIQL query, which can only handle AND/OR conditions with =/</>/LIKE/... operators, to create a JPA criteria query. I can't use a condition like 'isNull'.
Using a specific interface, I can define a callback that is called once I get the criteria query from the Apache CXF layer in my search service, so I can add my conditions to the existing ones before the query is executed. These conditions are defined in the upper layer of my search service (the RestController). The goal is to reduce code duplication, compared to returning a criteria query and finalizing it in every method where I need it, and to work around the fact that @Transactional in a CXF JAX-RS controller does not play well together with Spring proxies and CXF (some JAX-RS annotations are ignored).
First question: does this example seem like a good idea in terms of design?
Now another example: I have an object whose basic fields are created by a service layer. But I want to be able to set other non-nullable fields, not related to the service's process, before the entity is persisted. These fields may change from one project to another, so I'd like to avoid changing the signature of my service's method every time we add or remove columns. So again I'm considering a callback pattern to be able to set them within the same transaction, before the object is persisted by the service layer.
Second question: what about this example?
Global question: apart from the classic usage of callbacks for events, is it good practice to use this pattern for such specific usages, or is there a better way to handle them?
If you need some code samples, ask me and I'll write some (I can't post my current code).
I wouldn't say that what you've described is a very specific usage of "an interface on which I can define method(s) that are optionally called from a lower layer". I think it is a reasonable and also quite common solution.
Your doubts may be due to the naming. I'd rather use the term command pattern here; it seems less confusing to me. Your approach also resembles the strategy pattern, i.e. you provide (inject) an object which performs some computation, and depending on the context you inject objects that behave differently (for example, adding different conditions to a query).
To sum up, callbacks/commands are not only used for events; I'd even say that events are a specific usage of them. The command/callback pattern is used whenever we need to encapsulate an operation within an object and transfer/pass it around somehow (by the way, in Java there is no other way to do so, whereas, for example, C++ has pointers to member functions and C# has delegates).
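To make the first example concrete, here is a hedged sketch of such a callback over a JPA criteria query; QueryCustomizer and the 'deletedAt' field are made-up names, not part of CXF:

import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

// The search service invokes this after building the base query from the
// FIQL expression, so the upper layer can add conditions FIQL cannot express.
@FunctionalInterface
interface QueryCustomizer<T> {

    void customize(CriteriaBuilder cb, CriteriaQuery<T> query, Root<T> root);

    // Default no-op for callers that have nothing to add.
    static <T> QueryCustomizer<T> none() {
        return (cb, query, root) -> { };
    }
}

A RestController could then pass, say, (cb, query, root) -> query.where(query.getRestriction(), cb.isNull(root.get("deletedAt"))) while the search service itself stays unchanged between projects.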
As to your second example. I'm not sure if I understand it correctly. Why can't you simply populate all required fields of an object before calling the service?

Do I need to unit test web service request dispatcher (Java)?

This class is simply a request dispatcher. It takes request and response objects and passes the work down according to the request type. The application logic is already tested. Mocking has to be avoided. How can I write a unit test for this dispatcher without turning the test into an integration or system test? How are dispatchers usually tested?
EDIT: I was told to avoid mocking. I don't think I can change that decision.
There should be two parts to the code: the first is the marshalling of data between the web layer and dispatching, the second is dispatching to handlers.
Dispatching can be tested using "plain" unit testing; it's just logic that maps arbitrary criteria to handlers.
The marshalling layer requires either mocking, or enough integration to create a web request and watch its routing, what's returned from its handler, etc. HtmlUnit is one solution; there are a ton of others.
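A minimal sketch of the "plain" part, with a hand-written fake handler instead of a mocking framework; Handler and Dispatcher here stand in for your actual types:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

interface Handler {
    String handle(String payload);
}

class Dispatcher {

    private final Map<String, Handler> handlers = new HashMap<>();

    void register(String requestType, Handler handler) {
        handlers.put(requestType, handler);
    }

    String dispatch(String requestType, String payload) {
        Handler handler = handlers.get(requestType);
        if (handler == null) {
            throw new IllegalArgumentException("No handler for " + requestType);
        }
        return handler.handle(payload);
    }
}

class DispatcherTest {

    @Test
    void routesToTheRegisteredHandler() {
        Dispatcher dispatcher = new Dispatcher();
        // Hand-written fake: no mocking library involved.
        dispatcher.register("echo", payload -> "echo:" + payload);

        assertEquals("echo:hi", dispatcher.dispatch("echo", "hi"));
    }
}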
Use mocks. Do the unit tests.
If you start picking and choosing which parts to test and which not to test, you might as well not test anything at all.
Then again, you might just name them "Bootstraps" or "Imposters" or some other name and get around the restriction. Alternatively, you might be able to hand-code the mocked objects and get around the restriction that way.

Unit testing a servlet that makes an URL call

I want to write a unit test for a servlet class that makes a call to a web service through java.net.URL.
I can create mock request and response objects to send to the servlet's doGet method easily (using the techniques from the Pragmatic Programmers' text on JUnit), i.e., creating MockHttpServletRequest and MockHttpServletResponse and passing these to doGet.
The part I'm having trouble with is the URL open in the servlet.
Right now, I'm just choosing between a call to a function that opens the URL and returns a string (the production code) and a call to a function that directly returns a string for a fixed URL (the test code).
Ideally I'd like a doGet method in which the testing code is invisible: the choice between the function that makes the network access and the one that directly returns a string should be transparent to doGet.
I can think of a number of ways of achieving this, but none feel right.
Example 1: wrap the function in a class that has a testOn boolean and a setTestMode method; the JUnit setup sets testMode to true, and the default is false. The testOn flag decides which method to call. Negative: I need a new class, and it seems like this could get out of hand.
Example 2: have two classes implementing the network access, one of which is the mock; have JUnit load the mock class and the production code load the regular class (or somehow remap the production class to the mock class). Negative: I'm not sure how this would be done; it seems clumsy.
Example 3: have a class with static fields indicating whether I want to use mocks, and condition the URL access in the servlet on those field values. Negative: feels like global variables.
Example 4: extend URL, so that the production code would work fine if I switched to URL only (but java.net.URL is final).
I couldn't find quite the right answer through a morning of searches, hence my turning to the collective wisdom of SO.
Thanks,
Adnan
PS: I should mention that I don't have to use java.net.URL; anything equivalent will work.
Your second option is the "right" one. The call to the external URL should be encapsulated in a service. The service is then injected into the servlet that uses it. This is one place where Inversion of Control comes in handy.
In your unit test you'd inject the test implementation, in real life you'd inject a real implementation. It can be as simple as providing a setter for the service and defaulting the implementation to the "real" one.
This kind of thing is a canonical example for IoC/DI.
Looks like you are reinventing the wheel - and you reinvented it yet again in Example 2. This is typically implemented using dependency injection, which is actually the best solution software developers have come up with so far.
Hide your web service call behind an interface. One implementation does the actual call, while the other is a mock that you can configure. If you are not using any DI framework (Spring, Guice, EJB/CDI), replace the production implementation with the mock manually in the test.
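A rough sketch of that shape, using made-up names (RemoteService, SearchServlet, the example URL) and a setter as the injection point:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.URL;
import java.util.Scanner;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The boundary that hides the java.net.URL call.
interface RemoteService {
    String fetch(String query);
}

// Production implementation: actually goes over the network.
class HttpRemoteService implements RemoteService {

    @Override
    public String fetch(String query) {
        try (Scanner s = new Scanner(
                new URL("https://example.com/api?q=" + query).openStream(), "UTF-8")
                .useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

public class SearchServlet extends HttpServlet {

    // Defaults to the real service; a test overwrites it through the setter.
    private RemoteService remoteService = new HttpRemoteService();

    public void setRemoteService(RemoteService remoteService) {
        this.remoteService = remoteService;
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.getWriter().write(remoteService.fetch(req.getParameter("q")));
    }
}

In the unit test you would call servlet.setRemoteService(q -> "canned response") before handing MockHttpServletRequest/MockHttpServletResponse to doGet (RemoteService has a single method, so a lambda works): no network access and no mocking framework required.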
