Use placeholders in feature files - java

I would like to use placeholders in a feature file, like this:
Feature: Talk to two servers
Scenario: Forward data from Server A to Server B
Given MongoDB collection "${db1}/foo" contains the following record:
"""
{"key": "value"}
"""
When I send GET "${server1}/data"
When I forward the response to PUT "${server2}/data"
Then MongoDB collection "${db2}/bar" MUST contain the following record:
"""
{"key": "value"}
"""
The values of ${server1} etc. would depend on the environment in which the test is to be executed (dev, uat, stage, or prod). Therefore, Scenario Outlines are not applicable in this situation.
Is there any standard way of doing this? Ideally there would be something that maintains a Map<String, String> that can be filled in an @Before hook or similar, and that runs automatically between Cucumber and the step definitions, so that no replacement code is needed inside the step definitions themselves.
Given the following step definitions
public class MyStepdefs {
    @When("^I send GET \"(.*)\"$")
    public void performGET(final String url) {
        // …
    }
}
And given an appropriate setup, when performGET() is called, the placeholder ${server1} in the String url should already have been replaced by a value looked up in a Map.
Is there a standard way or feature of Cucumber-Java of doing this? I do not mind if this involves dependency injection. If dependency injection is involved, I would prefer Spring, as Spring is already in use for other reasons in my use case.

The simple answer is that you can't.
The solution to your problem is to remove the incidental details from your scenario altogether and access the specific server information in the step definitions.
The server and database obviously belong together, so let's describe them as a single entity: a service.
The details about the REST calls don't really help to convey what you're actually doing. Features don't describe implementation details; they describe behavior.
Testing whether records have been inserted into the database is another bad practice and again doesn't describe behavior. You should be able to replace that with another API call that fetches the data, or with some other process that proves the other server has received the information. If no such means of extracting the data exist, you should create them. If they can't be created, you may wonder whether the information even needs to be stored (your service would then appear to have the same properties as a black hole :) ).
I would resolve this all by rewriting the story such that:
Feature: Talk to two services
Scenario: Forward foobar data from Service A to Service B
Given "Service A" has key-value information
When I forward the foobar data from "Service A" to "Service B"
Then "Service B" has received the key-value information
Now that we have two entities Service A and Service B you can create a ServiceInformationService to look up information about Service A and B. You can inject this ServiceInformationService into your step definitions.
So whenever you need some information about Service A, you do:
Service a = serviceInformationService.lookup("A");
String apiHost = a.getApiHost();
String dbHost = a.getDatabaseHost();
In the implementation of the lookup you read the property for that service, e.g. System.getProperty(serviceName + "_" + apiHostKey), and you make sure that your CI sets A_APIHOST, A_DBHOST, B_APIHOST, B_DBHOST, etc.
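A minimal sketch of what that lookup could look like (the class shape and property keys are assumptions that just follow the naming above, not part of any standard library):
class Service {
    private final String apiHost;
    private final String databaseHost;

    Service(String apiHost, String databaseHost) {
        this.apiHost = apiHost;
        this.databaseHost = databaseHost;
    }

    String getApiHost() { return apiHost; }
    String getDatabaseHost() { return databaseHost; }
}

public class ServiceInformationService {

    public Service lookup(String serviceName) {
        // the CI is expected to set e.g. A_APIHOST, A_DBHOST, B_APIHOST, B_DBHOST
        String apiHost = System.getProperty(serviceName + "_APIHOST");
        String dbHost = System.getProperty(serviceName + "_DBHOST");
        if (apiHost == null || dbHost == null) {
            throw new IllegalStateException("Missing configuration for service " + serviceName);
        }
        return new Service(apiHost, dbHost);
    }
}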
You can put the names of the collections in a property file that you look up in a similar way to the system properties. Though I would avoid direct interaction with the DB if possible.
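And to tie it together, a minimal sketch of a step definition that gets the lookup service injected via Spring, since the question already uses Spring (the Cucumber/Spring wiring itself and the step wording are assumptions):
public class ForwardingStepdefs {

    @Autowired
    private ServiceInformationService serviceInformationService;

    @When("^I forward the foobar data from \"([^\"]*)\" to \"([^\"]*)\"$")
    public void forwardData(String sourceName, String targetName) {
        // how "Service A" maps to the lookup key is up to you
        Service source = serviceInformationService.lookup(sourceName);
        Service target = serviceInformationService.lookup(targetName);
        // use source.getApiHost() and target.getApiHost() to perform the actual calls
    }
}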

The feature you are looking for is supported in Gherkin with QAF. It supports using properties defined in a properties file via ${prop.key}. In addition, it offers strong resource configuration features for working with different environments. It also supports web services.

Related

how to publish multiple rest endpoints with same base address?

First, our scenario:
We have an OSGi environment where several bundles publish their own REST endpoints, e.g.:
http://localhost:8080/api/cars
http://localhost:8080/api/food
http://localhost:8080/api/toys
This was done using the JAXRSServerFactoryBean.create() method, with the addresses being the ones listed above.
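For reference, a minimal sketch of how one such endpoint might be published with CXF (the resource class name is an assumption):
JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
factory.setResourceClasses(CarResource.class);
factory.setAddress("http://localhost:8080/api/cars");
factory.create();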
Now we need to add a tenant id to the users' requests (not user auth, which is different, as users may be part of several tenants). URLs should look like this:
http://localhost:8080/api/tenant/{tenantid}/cars
http://localhost:8080/api/tenant/{tenantid}/food
http://localhost:8080/api/tenant/{tenantid}/toys
I tried two approaches to achieve this now:
Add the tenant id to the address of the service (http://localhost:8080/api/tenant/{tenantid}) - Result: I could access my service under the given URL, but I couldn't fill in any data for tenantid; I had to literally type {tenantid} in the URL, which is not how I need to use it.
Publish all three services under the same URL (http://localhost:8080/api), moving the tenant part to the @Path annotation of each API class - Result: an exception that the address was already taken by another endpoint.
Does anyone have an idea how this can be done properly? I know that the ServiceBean can take an array of implementors as an argument instead of a single class, but this is not an option, as the bundles load separately and I had some dependency issues when I tried to make this "all in one".
As a side note: I know we could put the tenant id in a header, but typically tenant info is somewhere in the URL (host or path) and we want to go with this "common" style instead of adding a custom header, though the header-style implementation would be much easier (I already got it to work).
Any ideas would help.
Thanks,
Kay
Try something like:
#Path("/tenants")
public class TenantResource{
#Path("/{tenantId}/cars")
#Get
public List<Car> getTenantCars(#PathParam("tenantId") long tenantId){...}
#Path("/{tenantId}/food")
#Get
public Food getTenantFood(#PathParam("tenantId") long tenantId){...}
#Path("/{tenantId}/toys")
#Get
public List<Toy> getTenantToys(#PathParam("tenantId") long tenantId){...}
}
If you have URLs such as tenants/{tenantid}/cars then this usually means "the cars of the tenant with id = tenantid".
"cars" is a property of the "tenant" resource and thus should be in the same resource.
I think it might be hard to modularize properties of a resource/object.
But you could consider a "car" resource and query the resource like: /cars?tenantid={tenantid}
#Path("/cars")
public class CarResource{
#Get
public List<Car> getCarsByTenantId(#QueryParam("tenantId") long tenantId){...}
}
or similar.

spring cloud contract dsl specify path parameter

I am trying to create a contract for a GET request and I'd like to use a path parameter that can be reused in the response as well. Is this at all possible? I can only find examples for POST, query parameters and bodies.
So if I want to define a contract that requests an entity, e.g. /books/12345-6688, I want to reuse the specified ID in the response.
How do I create a contract for something like this?
Possible since Spring Cloud Contract 1.2.0-RC1 (fixed in this issue).
response {
    status 200
    body(
        path: fromRequest().path(),
        pathIndex: fromRequest().path(1) // <-- here
    )
}
See the docs.
Nope, that's not possible due to https://github.com/tomakehurst/wiremock/issues/383 . Theoretically you could create your own transformer and override the way stubs are generated in Spring Cloud Contract. That way the WireMock stubs would contain a reference to your new transformer (as presented in the WireMock docs - http://wiremock.org/docs/extending-wiremock/). But it sounds like a lot of work for something that doesn't seem really necessary. Why do you need to do it like this? On the consumer side you want to test the integration, right? So just hardcode some values in the contract instead of referencing them, and check that you can parse those values.
UPDATE:
If you just need to parametrize the request URL but don't need to reference it in the response, you can use regular expressions as shown here - https://cloud.spring.io/spring-cloud-contract/single/spring-cloud-contract.html#_regular_expressions
UPDATE2:
As @laffuste has mentioned, starting from 1.2.0-RC1 you can reference a concrete path element.

Best practice to inform the client about the newly created itemId

I own a DDD/CQRS application.
My question concerns the handling of item creation through POST (REST).
CQRS (based on the CQS principle) promotes that commands should never return a value; queries are there for that.
So I wonder how to handle the use case of Item creation.
Here's my current command handler pattern (lightened for the sample; no interfaces etc.):
@Service
@Transactional
public class CreateItem {

    public void handle(CreateItemCommand command) {
        Customer customer = customerRepository.findById(command.customerId);
        ItemId generatedItemId = itemRepository.nextIdentity(); // generating the GUID
        customer.createItem(generatedItemId, .....);
    }
}
According to this article, an easy method would be to declare an output property on the command, populated at the end of the handle method like this:
public void handle(CreateItemCommand command) {
    Customer customer = customerRepository.findById(command.customerId);
    ItemId generatedItemId = itemRepository.nextIdentity(); // generating the GUID
    customer.createItem(generatedItemId, .....);
    command.itemId = generatedItemId; // populating the output property
}
However, I see one drawback with this approach:
- A command, in theory, is meant to be immutable.
The itemId would then be returned by the calling controller (webapp) through the Location header with status 201 or 202 (depending on whether I expect async handling or not).
Another solution would be to let the controller initialize the GUID by accessing the repository itself, thus keeping the command immutable:
//in my controller:
ItemId generatedItemId = itemRepository.nextIdentity(); //controller generating the GUID
createItem.handle(command);
// set the Location header here (201/202), containing the URL of the newly created item, using the previous itemId
Drawback: the controller (adapter layer) accesses the repository directly, which is too low-level IMO.
Since my outermost client is a JavaScript application, another solution might be to let the JavaScript itself generate a GUID and feed the CreateItemCommand with it before sending the whole command to the server.
Advantage: No more issues about potential violation of CQ(R)S guidelines.
Drawback: the validity of the passed ID has to be checked on the server side, although a unique index on this ID would prevent an unexpected insertion into the database.
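A minimal sketch of such a server-side check, assuming the command carries the client-generated ID as a plain string (the names are illustrative only):
// e.g. called from the controller or a validator before handling CreateItemCommand
public void validateClientGeneratedId(String rawItemId) {
    try {
        UUID.fromString(rawItemId); // rejects malformed IDs
    } catch (IllegalArgumentException e) {
        throw new IllegalArgumentException("Invalid item id: " + rawItemId, e);
    }
    // uniqueness itself is still enforced by the unique index in the database
}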
What is the best (or just a good) strategy to handle this?
I am the developer of a CRM application based on the CQRS pattern. I tend to see commands as immutable. The team decided early on that all IDs are generated on the client, to keep commands immutable. This is perfectly OK, as we are using UUIDs, so we are quite confident that the IDs are valid and that there are no ID collisions. That approach has worked well for us up to this point - I can definitely recommend it. In that scenario the client simply knows the IDs.
Sometimes it happens though - especially in manual testing - that a create command is dispatched twice with the same ID. In that case the addition of events to the event store fails (we use event sourcing) with a duplicate key exception. The exception is passed to the controller. In fact we do return results from command executions via a callback, even though it's just "everything ok" most of the time - so no exception is thrown. Command validation is also done this way. We do this using a command bus concept.
I would recommend taking a look at the Axon framework. We use it; it provides the common building blocks, and it just works. Maybe you can get some inspiration from that!

Guice IOC: Manual (and Optional) Creation of a Singleton

Trying to get started with Guice, and struggling to see how my use-case fits in.
I have a command-line application, which takes several optional parameters.
Let's say I've got a tool that shows a customer's orders, for example:
order-tool display --customerId 123
This shows all the orders owned by the customer with ID 123. Now, the user can also specify a customer's name:
order-tool display --customerName "Bob Smith"
BUT the interface to query for orders relies on customer IDs. Thus, we need to map from a customer name to a customer ID. To do this, we need a connection to the customer API. Thus, the user has to specify:
order-tool display --customerName "Bob Smith" --customerApi "http://localhost:8080/customer"
When starting the application, I want to parse all the arguments. In the case where --customerApi is specified, I want to place a CustomerApi singleton in my IoC context - which is parameterized by the CLI arg with the API URL.
Then, when the code runs to display a customer by name, it asks the context if it has a CustomerApi singleton. If it doesn't, it throws an exception, telling the CLI user that they need to specify --customerApi if they want to use --customerName. However, if one has been created, it simply retrieves it from the IoC context.
It sounds like "optionally creating a singleton" isn't exactly what you're trying to do here. I mean, it is, but that's as simple as:
if (args.hasCustomerApi()) {
bind(CustomerApi.class).toInstance(new CustomerApi(args.getCustomerApi()));
}
To allow for optional bindings, you will probably need to annotate their use with @Nullable.
I think your real question is how to structure an application so that you can partially configure it, use the configuration to read and validate some command-line flags, then use the flags to finish configuring your application. I think the best way to do that is with a child injector.
public static void main(String[] args) {
    Injector injector = Guice.createInjector(new AModule(), new BModule(), ...);
    Arguments arguments = injector.getInstance(ArgParser.class).parse(args);
    validateArguments(arguments); // throw if required arguments are missing
    Injector childInjector =
        injector.createChildInjector(new ArgsModule(arguments));
    childInjector.getInstance(Application.class).run();
}
Child injectors are just like normal injectors that defer to a parent if they don't contain the given bindings themselves. You can also read the documentation on how Guice resolves bindings.
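A minimal sketch of what ArgsModule could look like under those assumptions (the Arguments getters mirror the snippet above; CustomerApi's constructor is assumed to take the API URL):
public class ArgsModule extends AbstractModule {

    private final Arguments arguments;

    public ArgsModule(Arguments arguments) {
        this.arguments = arguments;
    }

    @Override
    protected void configure() {
        bind(Arguments.class).toInstance(arguments);
        if (arguments.hasCustomerApi()) {
            // only bound when --customerApi was given; code that injects
            // CustomerApi optionally should annotate it with @Nullable
            bind(CustomerApi.class).toInstance(new CustomerApi(arguments.getCustomerApi()));
        }
    }
}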

How to dynamically differentiate the memcached instances in Java code?

Can anyone suggest a design pattern to dynamically differentiate the memcached instances in Java code?
Previously in my application there was only one memcached instance, configured this way:
Step-1:
dev.memcached.location=33.10.77.88:11211
dev.memcached.poolsize=5
Step-2:
Then I access that memcached instance in code as follows:
private MemcachedInterface() throws IOException {
    String location = stringParam("memcached.location", "33.10.77.88:11211");
    MemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(location));
}
Then I invoke that memcached client in code using the above MemcachedInterface() as follows:
Step-3:
MemcachedInterface.getSoleInstance();
And then I use that MemcachedInterface() to get/set data as follows:
MemcachedInterface.set(MEMCACHED_CUSTS, "{}");
resp = MemcachedInterface.gets(MEMCACHED_CUSTS);
My question is: if I introduce a new memcached instance into our architecture, the configuration is done as follows:
Step-1:
dev.memcached.location=33.10.77.89:11211
dev.memcached.poolsize=5
So, the first memcached instance is at 33.10.77.88:11211 and the second memcached instance is at 33.10.77.89:11211.
Up to this point it's OK, but...
how do I handle Step-2 and Step-3 in this case, to get the MemcachedInterface dynamically?
1) Should I use one more interface called MemcachedInterface2() in Step-2?
Now the actual problem comes in:
I have 4 web servers in my application. Previously all of them were writing to MemcachedInterface(), but now, as I will introduce one more memcached instance (e.g. MemcachedInterface2()), WS1 and WS2 should write to MemcachedInterface() and WS3 and WS4 should write to MemcachedInterface2().
So, if I use one more interface called MemcachedInterface2() as mentioned above, this is a code burden, as I would have to change all the classes used by WS3 and WS4 to MemcachedInterface2().
Can anyone suggest an approach with limited code changes?
xmemcached supports consistent hashing, which will allow your client to choose the right memcached server instance from the pool. You can refer to this answer for a bit more detail: Do client need to worry about multiple memcache servers?
So, if I understood correctly, you'll have to:
use only one memcached client in all your webapps;
since you have your own wrapper class around the memcached client (MemcachedInterface), add some methods to this interface that enable adding/removing servers to/from an existing client (a small sketch follows below). See the user guide (scroll down a little): https://code.google.com/p/xmemcached/wiki/User_Guide#JMX_Support
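A minimal sketch of such wrapper methods, assuming xmemcached's MemcachedClient exposes addServer/removeServer taking a "host:port" string (verify against the xmemcached version you use):
// inside MemcachedInterface; memcachedClient is the wrapped xmemcached client
public void addServer(String hostWithPort) throws IOException {
    memcachedClient.addServer(hostWithPort); // e.g. "33.10.77.89:11211"
}

public void removeServer(String hostWithPort) {
    memcachedClient.removeServer(hostWithPort);
}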
As far as I can see, you have duplicate code running on different machines as parallel web services. Thus, I recommend the following to differentiate each:
Use a singleton facade service for wrapping your memcached client (I think you are already doing this).
Use encapsulation. Encapsulate your memcached client to decouple it from your code, e.g. interface L2Cache.
Give each server a name in a global variable. Assign those values via JVM system properties or your own configuration files. JVM: -Dcom.projectname.servername=server-1
Use this global variable as a parameter to configure your service's getInstance method.
public static L2Cache getCache() {
    if (System.getProperty("com.projectname.servername").equals("server-1")) {
        return new L2CacheImpl(SERVER_1_L2_REACHIBILITY_ADDRESSES, POOL_SIZE);
    }
    // handle the other server names the same way
    return null;
}
good luck with your design!
You should list all memcached server instances as space separated in your config.
e.g.
33.10.77.88:11211 33.10.77.89:11211
So, in your code (Step-2):
private MemcachedInterface() throws IOException {
    String location = stringParam("memcached.location", "33.10.77.88:11211 33.10.77.89:11211");
    MemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(location));
}
Then in Step-3 you don't need to change anything, e.g. MemcachedInterface.getSoleInstance() stays the same.
You can read more in these memcached tutorial articles:
Use Memcached for Java enterprise performance, Part 1: Architecture and setup
http://www.javaworld.com/javaworld/jw-04-2012/120418-memcached-for-java-enterprise-performance.html
Use Memcached for Java enterprise performance, Part 2: Database-driven web apps
http://www.javaworld.com/javaworld/jw-05-2012/120515-memcached-for-java-enterprise-performance-2.html
