How to dynamically differentiate memcached instances in Java code? - java

Can anyone suggest a design pattern to dynamically differentiate memcached instances in Java code?
Previously in my application there was only one memcached instance, configured this way:
Step-1:
dev.memcached.location=33.10.77.88:11211
dev.memcached.poolsize=5
Step-2:
Then I access that memcached instance in code as follows:
private MemcachedInterface() throws IOException {
String location = stringParam("memcached.location", "33.10.77.88:11211");
MemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(location));
}
Then I invoke that memcached instance in code as follows, using the above MemcachedInterface():
Step-3:
MemcachedInterface.getSoleInstance();
And then I use that MemcachedInterface() to get/set data as follows:
MemcachedInterface.set(MEMCACHED_CUSTS, "{}");
resp = MemcachedInterface.gets(MEMCACHED_CUSTS);
My question is: if I introduce a new memcached instance into our architecture, configuration is done as follows:
Step-1:
dev.memcached.location=33.10.77.89:11211
dev.memcached.poolsize=5
So, the first memcached instance is at 33.10.77.88:11211 and the second is at 33.10.77.89:11211.
Up to this point it's OK, but how do I handle Step 2 and Step 3 in this case, to get the MemcachedInterface dynamically?
1) Should I use one more interface called MemcachedInterface2() in Step 2?
Now the actual problem comes in:
I have 4 webservers in my application. Previously all of them wrote to MemcachedInterface(), but now, as I introduce one more memcached instance, e.g. MemcachedInterface2(), WS1 and WS2 should write to MemcachedInterface() while WS3 and WS4 should write to MemcachedInterface2().
So, if I use one more interface called MemcachedInterface2() as mentioned above, this is a code burden, as I would have to change all the classes used by WS3 and WS4 to MemcachedInterface2().
Can anyone suggest an approach with limited code changes?

xmemcached supports consistent hashing, which will allow your client to choose the right memcached server instance from the pool. You can refer to this answer for a bit more detail: Do client need to worry about multiple memcache servers?
So, if I understood correctly, you'll have to
use only one memcached client in all your webapps
since you have your own wrapper class around the memcached client (MemcachedInterface), you'll have to add a method to this interface that enables adding/removing a server to/from an existing client. See the user guide (scroll down a little): https://code.google.com/p/xmemcached/wiki/User_Guide#JMX_Support
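For illustration, a minimal sketch of one client spanning both instances with consistent hashing enabled (addresses taken from the question; KetamaMemcachedSessionLocator is xmemcached's consistent-hashing locator):
import java.io.IOException;
import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.MemcachedClientBuilder;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.impl.KetamaMemcachedSessionLocator;
import net.rubyeye.xmemcached.utils.AddrUtil;

public class MemcachedClientFactory {
    public static MemcachedClient build() throws IOException {
        // One client over both instances; keys are routed by consistent hashing,
        // so all web servers can keep using the single MemcachedInterface.
        MemcachedClientBuilder builder = new XMemcachedClientBuilder(
                AddrUtil.getAddresses("33.10.77.88:11211 33.10.77.89:11211"));
        builder.setSessionLocator(new KetamaMemcachedSessionLocator());
        builder.setConnectionPoolSize(5); // matches dev.memcached.poolsize
        return builder.build();
    }
}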

As far as I can see, you have duplicate code running on different machines as parallel web services. Thus, I recommend the following to differentiate them:
Use Singleton Facade service for wrapping your memcached client. (I think you are already doing this)
Use encapsulation. Encapsulate your memcached client to decouple it from your code, e.g. interface L2Cache.
Give each server a name in a global variable. Assign those values via the JVM or your own configuration files. JVM: -Dcom.projectname.servername=server-1
Use this global variable as a parameter to configure your service's getCache method:
public static L2Cache getCache() {
    // Dispatch on the JVM property set per web server.
    String serverName = System.getProperty("com.projectname.servername");
    if ("server-1".equals(serverName)) {
        return new L2CacheImpl(SERVER_1_L2_REACHABILITY_ADDRESSES, POOL_SIZE);
    }
    return new L2CacheImpl(SERVER_2_L2_REACHABILITY_ADDRESSES, POOL_SIZE);
}
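WS3 and WS4 would then be started with, for example, java -Dcom.projectname.servername=server-2 -jar yourservice.jar (the property and jar names are illustrative), and none of the calling classes need to change.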
good luck with your design!

You should list all memcached server instances space-separated in your config.
e.g.
33.10.77.88:11211 33.10.77.89:11211
So, in your code (Step 2):
private MemcachedInterface() throws IOException
{
String location = stringParam("memcached.location", "33.10.77.88:11211 33.10.77.89:11211");
MemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(location));
}
Then in Step 3 you don't need to change anything, e.g. MemcachedInterface.getSoleInstance();
You can read more in these memcached tutorial articles:
Use Memcached for Java enterprise performance, Part 1: Architecture and setup
http://www.javaworld.com/javaworld/jw-04-2012/120418-memcached-for-java-enterprise-performance.html
Use Memcached for Java enterprise performance, Part 2: Database-driven web apps
http://www.javaworld.com/javaworld/jw-05-2012/120515-memcached-for-java-enterprise-performance-2.html

Related

Use placeholders in feature files

I would like to use placeholders in a feature file, like this:
Feature: Talk to two servers
Scenario: Forward data from Server A to Server B
Given MongoDb collection "${db1}/foo" contains the following record:
"""
{"key": "value"}
"""
When I send GET "${server1}/data"
When I forward the response to PUT "${server2}/data"
Then MongoDB collection "${db2}/bar" MUST contain the following record:
"""
{"key": "value"}
"""
The values of ${server1} etc. would depend on the environment in which the test is to be executed (dev, uat, stage, or prod). Therefore, Scenario Outlines are not applicable in this situation.
Is there any standard way of doing this? Ideally there would be something which maintains a Map<String, String> that can be filled in a @Before hook or so, and runs automatically between Cucumber and the step definition so that no code is needed inside the step definitions.
Given the following step definitions
public class MyStepdefs {
    @When("^I send GET \"(.*)\"$")
    public void performGET(final String url) {
        // …
    }
}
And an appropriate setup, when performGET() is called, the placeholder ${server1} in the String url should already be replaced with a value looked up in a Map.
Is there a standard way or feature of Cucumber-Java for doing this? I do not mind if this involves dependency injection. If dependency injection is involved, I would prefer Spring, as Spring is already in use for other reasons in my use case.
The simple answer is that you can't.
The solution to your problem is to remove the incidental details from your scenario altogether and access specific server information in the step definitions.
The server and database obviously belong together, so let's describe them as a single entity, a service.
The details about the REST calls don't really help to convey what you're actually doing. Features don't describe implementation details; they describe behavior.
Testing if records have been inserted into the database is another bad practice and again doesn't describe behavior. You should be able to replace that with another API call that fetches the data, or some other process that proves the other server has received the information. If there are no such means to extract the data, you should create them. If they can't be created, you can wonder if the information even needs to be stored (your service would then appear to have the same properties as a black hole :) ).
I would resolve this all by rewriting the story such that:
Feature: Talk to two services
Scenario: Forward foobar data from Service A to Service B
Given "Service A" has key-value information
When I forward the foobar data from "Service A" to "Service B"
Then "Service B" has received the key-value information
Now that we have two entities Service A and Service B you can create a ServiceInformationService to look up information about Service A and B. You can inject this ServiceInformationService into your step definitions.
So whenever you need some information about Service A, you do
Service a = serviceInformationService.lookup("A");
String apiHost = a.getApiHost();
String dbHost = a.getDatabaseHost();
In the implementation of the Service you look up the property for that service, System.getProperty(serviceName + "_" + apiHostKey), and you make sure that your CI sets A_APIHOST and A_DBHOST, B_APIHOST, B_DBHOST, etc.
You can put the name of the collections in a property file that you look up in a similar way as you'd look up the system properties. Though I would avoid direct interaction with the DB if possible.
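A minimal sketch of such a lookup service, assuming the property scheme above (all class and method names here are illustrative, not from an existing library):
import java.util.Objects;

public class ServiceInformationService {
    // Resolves e.g. A_APIHOST / A_DBHOST, which the CI environment is expected to set.
    public Service lookup(String serviceName) {
        String apiHost = Objects.requireNonNull(
                System.getProperty(serviceName + "_APIHOST"), serviceName + "_APIHOST not set");
        String dbHost = Objects.requireNonNull(
                System.getProperty(serviceName + "_DBHOST"), serviceName + "_DBHOST not set");
        return new Service(apiHost, dbHost);
    }

    public static class Service {
        private final String apiHost;
        private final String dbHost;

        public Service(String apiHost, String dbHost) {
            this.apiHost = apiHost;
            this.dbHost = dbHost;
        }

        public String getApiHost() { return apiHost; }
        public String getDatabaseHost() { return dbHost; }
    }
}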
The feature you are looking for is supported in Gherkin with QAF. It supports using properties defined in a properties file via ${prop.key}. In addition, it offers strong resource-configuration features for working with different environments. It also supports web services.

How to subclass SubethaSmtp SMTPClient class

I am trying to develop a simple SMTP client for testing purposes using the SubethaSMTP client package. I want to use the SMTPClient class instead of the SmartClient class for more control, but I have not been able to figure out how to write mail data using SMTPClient: the only OutputStream exposed to the public or to external subclasses is the one for sending commands; the one for sending data (after sending the DATA command) is exposed only to classes in the same package (SmartClient).
Am I missing something here? I would like to know how a direct subclass of SmartClient can be written to work around this problem.
Looks like you are correct, you cannot simply extend the SMTPClient and get access similar to the one that SmartClient has, being a same-package class.
At this point you can either:
1) Fork your own version of the app from https://github.com/voodoodyne/subethasmtp and do whatever the hell you like with it, or
2) Go all the way and implement your own version of SMTPClient, as the package-protected SMTPClient.dotTerminatedOutput, used by SmartClient.dataWrite(), is actually just instantiated like so:
...
this.rawOutput = this.socket.getOutputStream();
this.dotTerminatedOutput = new DotTerminatedOutputStream(this.rawOutput);
...
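If forking is not an option, a fragile workaround is to place a small accessor class in the library's own package so it can reach the package-protected stream (a sketch; it assumes the classes live in org.subethamail.smtp.client and the jar is not sealed, and it may break with any library update):
package org.subethamail.smtp.client;

import java.io.OutputStream;

// Same-package accessor for the package-protected dotTerminatedOutput stream.
public final class SmtpClientInternals {
    private SmtpClientInternals() {}

    public static OutputStream dataOutputOf(SMTPClient client) {
        return client.dotTerminatedOutput;
    }
}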

GWT different interface implementation for client and server

Assume that we have some interface my.gwt.shared.Facade in the shared package of our GWT project (it exists on both server and client) and two implementations of it: class my.gwt.client.ClientFacadeImpl (client only) and class my.gwt.server.ServerFacadeImpl (server only).
Is there any way to write a piece of code or an annotation that substitutes ClientFacadeImpl on the client side and ServerFacadeImpl on the server side?
Thanks all for the answers and discussion. I've found a simple and elegant solution for my needs.
So, I've interface my.gwt.shared.Facade and two classes: class my.gwt.client.ClientFacadeImpl and class my.gwt.server.ServerFacadeImpl.
interface Facade {
Map<Boolean, Facade> FACADES = new HashMap<Boolean, Facade>();
}
Now we should fill the FACADES map. This is done like this:
public class MyEntry implements EntryPoint {
    static {
        Facade.FACADES.put(true, ClientFacadeImpl.INSTANCE); // client side
    }
    // ... onModuleLoad() and the rest of the entry point
}
And
@Startup
@Singleton
public class Initializer {
    @PostConstruct
    private void init() {
        Facade.FACADES.put(false, ServerFacadeImpl.INSTANCE); // server side
        // other things
    }
}
Now, when I need to get appropriate Facade, I just write
Facade facade = Facade.FACADES.get(GWT.isClient());
Also, in this case the map contains only the implementation corresponding to the server or client side.
P.S. The goal of this question was to allow handling of some GwtEvents fired on the client directly on the server and vice versa. This solution removed a large set of DTOs (data transfer objects) and simplified the code a lot.
There's no answer to your question other than "it depends". Or rather, of course there are ways of doing what you ask, but would you accept the tradeoffs?
Given that you tagged the question with dependency-injection, let's start with that. If you use a DI tool with GWT, it's likely GIN (Dagger 2 would work, but it's still under development). In that case, just use distinct modules for GIN client-side and Guice server-side that bind() the appropriate implementation (see the sketch after this answer).
For a few releases, GWT.create() can be made to work outside a GWT (client) environment (i.e. on the server side). You have to register a ClassInstantiator on the ServerGwtBridge as an alternative to the rebind rules from gwt.xml files. So you could have a <replace-with class="my.gwt.client.ClientFacadeImpl"> rule in your gwt.xml, and a ClassInstantiator returning a ServerFacadeImpl on the server side.
Finally, you can also use a static factory and replace it with a client-side specific version by way of <super-source>.
One last option, though I'm unsure whether it'd work: you could use an if/else on GWT.isClient(), and annotate your ServerFacadeImpl with @GwtIncompatible to tell the GWT compiler that you know it's not client-compatible.
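For the first option, the two modules could look roughly like this (module names are made up; GIN on the client, Guice on the server):
// Client side (GIN), referenced from your .gwt.xml:
public class ClientFacadeModule extends com.google.gwt.inject.client.AbstractGinModule {
    @Override
    protected void configure() {
        bind(Facade.class).to(ClientFacadeImpl.class);
    }
}

// Server side (Guice):
public class ServerFacadeModule extends com.google.inject.AbstractModule {
    @Override
    protected void configure() {
        bind(Facade.class).to(ServerFacadeImpl.class);
    }
}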

Best practice to inform the client about the newly created itemId

I maintain a DDD/CQRS application.
My question concerns the handling of item creation through POST (REST).
CQRS (based on the CQS principle) promotes that commands should never return a value; queries are there for that.
So I wonder how to handle the use case of Item creation.
Here's my current command handler pattern (lightened for the sample; no interfaces, etc.):
@Service
@Transactional
public class CreateItem {
public void handle(CreateItemCommand command) {
Customer customer = customerRepository.findById(command.customerId);
ItemId generatedItemId = itemRepository.nextIdentity(); //generating the GUID
customer.createItem(generatedItemId, .....);
}
}
Reading this article, an easy method would be to declare an output property in the command, populated at the end of the handle method like this:
public void handle(CreateItemCommand command) {
Customer customer = customerRepository.findById(command.customerId);
ItemId generatedItemId = itemRepository.nextIdentity(); //generating the GUID
customer.createItem(generatedItemId, .....);
command.itemId = generatedItemId; //populating the output property
}
However, I see one drawback with this approach: a command, in theory, is meant to be immutable.
This itemId would then be sent back by the calling controller (webapp) through the Location header with status 201 or 202 (depending on whether I expect async handling or not).
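With that approach, the controller wiring could look roughly like this (a Spring MVC sketch; the mapping and class names are illustrative):
import java.net.URI;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ItemController {
    private final CreateItem createItem;

    public ItemController(CreateItem createItem) {
        this.createItem = createItem;
    }

    @PostMapping("/items")
    public ResponseEntity<Void> create(@RequestBody CreateItemCommand command) {
        createItem.handle(command); // the handler populates command.itemId
        // 201 Created with a Location header pointing at the new resource
        return ResponseEntity.created(URI.create("/items/" + command.itemId)).build();
    }
}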
Another solution would be to let the controller initialize the GUID by accessing the repository itself, thus keeping the command immutable:
//in my controller:
ItemId generatedItemId = itemRepository.nextIdentity(); //controller generating the GUID
createItem.handle(command);
// set here the Location header (201/202) containing the URL to the newly created item, using the previous itemId
Drawback: the controller (adapter layer) accesses the repository directly, which is too low-level IMO.
My outermost client being a JavaScript application, another solution might be to let the JavaScript itself generate a GUID and feed CreateItemCommand with it before sending the whole command to the server.
Advantage: No more issues about potential violation of CQ(R)S guidelines.
Drawback: the validity of the passed id should be checked server-side, although a unique index on it would prevent an unexpected insertion into the database.
What is the best (or just a good) strategy to handle this?
I am the developer of a CRM application based on the CQRS pattern. I tend to see commands as immutable. The team decided early on that all IDs are generated on the client to keep commands immutable. This is perfectly OK, as we are using UUIDs, so we are quite confident that the IDs are valid and there are no ID collisions. That approach has served us well up to this point - I can definitely recommend it. In that scenario the client simply knows the IDs.
Sometimes it happens though - especially in manual testing - that a create command is dispatched twice with the same ID. In that case the addition of events to the event store fails (we use event sourcing) with a duplicate-key exception. The exception is passed to the controller. In fact we do return results from command executions with a callback, even though it's just "everything ok" most of the time - so no exception thrown. Command validation is also done this way. We do this using a command bus concept.
I would recommend taking a look at the Axon framework. We use it, it provides the common building blocks, and it just works. Maybe you can get some inspiration from that!
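A sketch of the client-generated-ID variant recommended here (names are illustrative): the command can stay immutable because the ID arrives with it, and the controller can build the Location header from getItemId() without the handler returning anything.
import java.util.UUID;

public final class CreateItemCommand {
    private final UUID itemId;     // generated by the client, e.g. in JavaScript
    private final UUID customerId;

    public CreateItemCommand(UUID itemId, UUID customerId) {
        this.itemId = itemId;
        this.customerId = customerId;
    }

    public UUID getItemId() { return itemId; }
    public UUID getCustomerId() { return customerId; }
}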

Communication between Java programs with non-JDK objects

I'm looking for a communication channel between two Java programs running on the same machine. I've found a few options (RMI and XML-RPC), but none of the examples I found show the exchange of objects whose class is non-primitive and not known to the JDK (our own objects).
So, what's the easiest technology to use when I want to do this (note that Utils.jar is on the classpath of both Server.jar and Client.jar)?
Utils.jar:
class MyClassRequestParams { ... }
class MyClassReturnParams { ... }
Client.jar:
// Server creation
...
// Send request
MyClassRequestParams params = new MyClass...
MyClassReturnParams response = server.send("serverMethodName", params);
Server.jar:
MyClassReturnParams serverMethodName(MyClassRequestParams params)
{
MyClassReturnParams response = new MyC...
// do processing
return response;
}
Just make your transport classes implement the Serializable interface, and everything will be fine with RMI. Note that every object referenced by the transport object should also be Serializable.
The RMI tutorial uses an example with a custom Task interface implemented by a Pi custom class that is not a "standard" JDK class.
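A hedged sketch using the names from the question (the remote interface is illustrative):
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// In Utils.jar: the transport classes only need to implement Serializable,
// as must every object they reference.
class MyClassRequestParams implements Serializable { /* request fields */ }
class MyClassReturnParams implements Serializable { /* response fields */ }

// Shared remote interface, also in Utils.jar:
interface MyServer extends Remote {
    MyClassReturnParams serverMethodName(MyClassRequestParams params) throws RemoteException;
}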
You may also consider Versile Java (I am one of its developers). Follow the link for an example of making remote calls and defining remote interfaces. It implements a platform-independent standard for remote ORB interaction, currently also available for Python.
