My friend built a web service in Java.
I built one in .NET.
I want them to implement the same interface, so that in my program I can change the web.config to point to one or the other.
In my mind this would be done by having both implement the same interface, but I'm not sure how it would actually be done...
Perhaps the safest way would be to generate the interface from WSDL. Describe your service(s) in a WSDL document then use 'wsimport -d src WSDL_file' (in Java).
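For a rough idea of what that produces, a wsimport-generated service endpoint interface might look something like this (the service name, namespace, and operation below are made up for illustration):

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

// Hypothetical SEI as wsimport might generate it from a WSDL
// that declares a single AddTwoNumbers operation.
@WebService(name = "CalculatorService", targetNamespace = "http://example.com/calculator")
public interface CalculatorService {

    @WebMethod
    int addTwoNumbers(@WebParam(name = "numberA") int numberA,
                      @WebParam(name = "numberB") int numberB);
}

The .NET side can generate its contract from the same WSDL (for example with svcutil), which is the point of starting WSDL-first.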
The subtle differences between ASP.NET and Java web services will make this a hard task.
An alternative might be to create an adapter service in front of them, which exposes the same semantic interface, and has service references to both.
This adapter service can be configured to pass commands on to either the Java service or the .NET service, using the same approach of modifying the web.config, for example:
[WebMethod]
public int AddTwoNumbers(int numberA, int numberB)
{
    if (useJavaService)
        return javaService.AddTwoNumbers(numberA, numberB);
    else
        return dotnetService.AddTwoNumbers(numberA, numberB);
}
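The useJavaService flag could, for instance, come from an appSettings entry in the adapter's web.config (the key name here is made up), read at runtime via ConfigurationManager.AppSettings:

<!-- Hypothetical web.config switch for the adapter service -->
<configuration>
  <appSettings>
    <add key="UseJavaService" value="true" />
  </appSettings>
</configuration>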
Your application can target this wrapper service, so from your application's perspective you would simply call:
int result = theService.AddTwoNumbers(5, 10);
and your application won't know whether it's going to hit the Java one or the .NET one.
I'm implementing a series of REST microservices in Java - let's call them "adapters".
Every service reads data from a particular source type and provides results in the same way. The main idea is to have the same interface (service contract) for all of them, to get interchangeability. I would like to avoid code duplication and reuse the service contract across the services.
And it seems that I'm reinventing the wheel. Is there a standard approach for this?
I tried to extract the service contract in the form of a Java interface for the Spring MVC controller class, together with the accompanying DAO class CustomObject:
public interface AdapterController {
    @RequestMapping(method = RequestMethod.GET, value = "/objects/{name}")
    CustomObject getObject(@PathVariable final String name);
}
Then I put them into a separate Maven project, set it as a dependency in the original project, and rewrote the REST controller class as follows:
@RestController
public class DdAdapterController implements AdapterController {
    @Override
    public CustomObject getObject(String name) {
        return model.getByName(name);
    }
}
I can reuse the DAO object in client code as well, but the interface class is useless on the client side.
1) Summarizing: is it OK to reuse/share a service contract between different service implementations? What is the cost of this? Is there a best practice for sharing a service contract?
2) The next question is about the service contract and the consuming client. Is it OK to share the contract between service and client? Are there tools in Java, or an established approach, for this?
This goes against the microservice mentality; sharing code is a bad idea in the long run.
If you start sharing code you will slowly just build a distributed monolith, where multiple services are dependent on each other.
Many have talked about this earlier:
microservices-dont-create-shared-libraries
The evils of too much coupling between services are far worse than the problems caused by code duplication
Micro services: shared library vs code duplication
The keys to building microservices are:
One service should be very good at one thing
Keep them small
Have an extremely well documented API
When you need to delete a microservice, it should require as few updates to other services as possible
Avoid code sharing, and treat all libraries like 3rd party libraries even your own
Microservices should be loosely coupled = minimum dependencies.
Microservices is an architectural style that structures an application as a collection of services that are
Highly maintainable and testable
Loosely coupled
Independently deployable
Organized around business capabilities.
https://microservices.io/
The contract can be defined with WADL.
Using a contract between client and server means fewer bugs and fewer misunderstandings when implementing the client. That is what the contract is good for.
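For illustration, a minimal WADL description of the GET /objects/{name} resource from the question might look roughly like this (the base URI is made up):

<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="http://example.com/adapter/">
    <resource path="objects/{name}">
      <param name="name" style="template" required="true"/>
      <method name="GET" id="getObject">
        <response>
          <representation mediaType="application/json"/>
        </response>
      </method>
    </resource>
  </resources>
</application>

Each adapter can then be checked against that description, and the client can code against it, without any Java interfaces being shared.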
I'll give a brief overview of my goals below just in case there are any better, alternative ways of accomplishing what I want. This question is very similar to what I need, but not quite exactly what I need. My question...
I have an interface:
public interface Command<T> extends Serializable {}
..plus an implementation:
public class EchoCommand implements Command<String> {
    private final String stringToEcho;

    public EchoCommand(String stringToEcho) {
        this.stringToEcho = stringToEcho;
    }

    public String getStringToEcho() {
        return stringToEcho;
    }
}
If I create another interface:
public interface AuthorizedCommand {
    String getAuthorizedUser();
}
..is there a way I can implement the AuthorizedCommand interface on EchoCommand at runtime without knowing the subclass type?
public <C extends Command<T>, T> C authorize(C command) {
    // can this be done so the returned Command is also an
    // instance of AuthorizedCommand?
    return (C) theDecoratedCommand;
}
The why... I've used Netty to build myself a very simple proof-of-concept client / server framework based on commands. There's a one-to-one relationship between a command, shared between the client and server, and a command handler. The handler is only on the server and they're extremely simple to implement. Here's the interface.
public interface CommandHandler<C extends Command<T>, T> {
    public T execute(C command);
}
On the client side, things are also extremely simple. Keeping things simple in the client is the main reason I decided to try a command-based API. A client dispatches a command and gets back a Future. It's clear the call is asynchronous, plus the client doesn't have to deal with things like wrapping the call in a SwingWorker. Why build a synchronous API against asynchronous calls (anything over the network) just to wrap the synchronous calls in asynchronous helper methods? I'm using Guava for this.
public <T> ListenableFuture<T> dispatch(Command<T> command)
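For example, client code against that dispatcher might look roughly like this (a sketch; the dispatcher variable and the Guava callback wiring are assumptions, not part of the framework described above):

// Hypothetical client-side usage: dispatch an EchoCommand and react when the
// ListenableFuture completes (Futures, FutureCallback and MoreExecutors are Guava classes).
ListenableFuture<String> future = dispatcher.dispatch(new EchoCommand("hello"));
Futures.addCallback(future, new FutureCallback<String>() {
    @Override
    public void onSuccess(String echoed) {
        System.out.println("Server echoed: " + echoed);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, MoreExecutors.directExecutor());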
Now I want to add authentication and authorization. I don't want to force my command handlers to know about authorization, but, in some cases, I want them to be able to interrogate something with regards to which user the command is being executed for. Mainly I want to be able to have a lastModifiedBy attribute on some data.
I'm looking at using Apache Shiro, so the obvious answer seems to be to use their SubjectAwareExecutor to get authorization information into ThreadLocal, but then my handlers need to be aware of Shiro or I need to abstract it away by finding some way of mapping commands to the authentication / authorization info in Shiro.
Since each Command is already carrying state and getting passed through my entire pipeline, things are much simpler if I can just decorate commands that have been authorized so they implement the AuthorizedCommand interface. Then my command handlers can use the info that's been decorated in, but it's completely optional.
if (command instanceof AuthorizedCommand) {
    // We can interrogate the command for the extra meta data
    // we're interested in.
}
That way I can also develop everything related to authentication / authorization independent of the core business logic of my application. It would also (I think) let me associate session information with a Netty Channel or ChannelGroup which I think makes more sense for an NIO framework, right? I think Netty 4 might even allow typed attributes to be set on a Channel which sounds well suited to keeping track of things like session information (I haven't looked into it though).
The main thing I want to accomplish is to be able to build a prototype of an application very quickly. I'd like to start with a client side dispatcher that's a simple map of command types to command handlers and completely ignore the networking and security side of things. Once I'm satisfied with my prototype, I'll swap in my Netty based dispatcher (using Guice) and then, very late in the development cycle, I'll add Shiro.
I'd really appreciate any comments or constructive criticism. If what I explained makes sense to do and isn't possible in plain old Java, I'd consider building that specific functionality in another JVM language. Maybe Scala?
You could try doing something like this:
Java: Extending Class At Runtime
At runtime your code would extend the class of the Command to be instantiated and implement the AuthorizedCommand interface. This would make the class an instance of AuthorizedCommand while retaining the original Command class structure.
One thing to watch out for: you wouldn't be able to extend any classes marked with the "final" keyword.
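To make that concrete, here is a rough sketch using cglib's Enhancer (cglib is my assumption; any bytecode library that can subclass at runtime would do) to produce a runtime subclass of EchoCommand that also implements AuthorizedCommand:

import java.lang.reflect.Method;

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;
import net.sf.cglib.proxy.MethodProxy;

public class CommandAuthorizer {

    // Sketch only: builds a runtime subclass of EchoCommand that also implements
    // AuthorizedCommand and delegates every other call to the original command.
    public static EchoCommand authorize(final EchoCommand command, final String user) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(EchoCommand.class);
        enhancer.setInterfaces(new Class<?>[] { AuthorizedCommand.class });
        enhancer.setCallback(new MethodInterceptor() {
            @Override
            public Object intercept(Object obj, Method method, Object[] args,
                                    MethodProxy proxy) throws Throwable {
                if ("getAuthorizedUser".equals(method.getName())) {
                    return user;                     // the decorated-in behaviour
                }
                return method.invoke(command, args); // everything else hits the original
            }
        });
        // EchoCommand has no no-arg constructor, so pass its constructor argument through.
        return (EchoCommand) enhancer.create(new Class<?>[] { String.class },
                                             new Object[] { command.getStringToEcho() });
    }
}

A generic version would need to discover the right constructor for each command class, and the generated proxy won't serialize the way a plain command does, so this decoration fits best on the server side, after the command has already come off the wire.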
I am quite new to OSGi and everything related to it.
Jumping into the problem: I have a server class that keeps a list of listeners. The listeners can register themselves via a method (register(this)) that puts the listener into that list (all listeners implement the server class's listener interface, of course):
public void register(ServerListener listener) {
    if (theListeners == null)
        theListeners = new ArrayList<ServerListener>();
    theListeners.add(listener);
}
That's the ServerListener interface:
public interface ServerListener {
    public void update(JsonObject data);
}
Now the server-class provides the listeners with new data from time to time via an update(JsonObject object) method.
public void updateListeners() {
    new Thread() {
        public void run() {
            for (ServerListener l : theListeners) {
                l.update(jsonObject);
            }
        }
    }.start();
}
Now I want to turn the server class into a service bundle in an OSGi framework (Knopflerfish). I am not familiar with that at all and want to try it just for fun, but the way I am doing it right now would not work: the listeners don't actually know that they should implement the ServerListener interface, so the server can't register them via the interface.
The thing is, I want the server to push data, not the clients to pull it (pulling would be easier, as I understand it). Can someone (who understood my poor explanation) point me in the right direction?
An 'OSGi-centric' approach is to use a pattern known as the whiteboard pattern, rather than the listener pattern as you might with plain Java. Instead of registering with the server-class, your listeners register as services with the OSGi service registry. This means the framework takes care of handling registration, unregistration, and listeners which go away.
There's a free whitepaper on the whiteboard pattern here: http://www.osgi.org/wiki/uploads/Links/whiteboard.pdf, and we've also got a discussion of it in chapter 5 of Enterprise OSGi in Action (http://www.manning.com/cummins ).
It can take a while to get your head around the whiteboard pattern, because it's your listeners which provide services, and your 'server' which consumes the services, which feels backwards at first. Once you get used to it, though, it works really well. Your server consumes all the services which implement the ServerListener interface, and pushes data out to them as required.
It's best not to use the OSGi APIs directly to register and consume the services - use either declarative services (SCR) or blueprint for dependency injection of the OSGi services. These allow you to register and consume the services with an XML metadata file or annotations.
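As a rough sketch with declarative services annotations (component names are invented here; ServerListener and JsonObject are the types from the question, and the JsonObject import is omitted since the question doesn't say which JSON library it comes from), a listener simply publishes itself with @Component(service = ServerListener.class), and the server lets the framework hand it every such service:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

// Whiteboard-style server: it no longer exposes register(); the framework
// binds and unbinds ServerListener services as their bundles come and go.
@Component
public class Server {

    private final List<ServerListener> theListeners = new CopyOnWriteArrayList<ServerListener>();

    @Reference(cardinality = ReferenceCardinality.MULTIPLE,
               policy = ReferencePolicy.DYNAMIC,
               unbind = "removeListener")
    protected void addListener(ServerListener listener) {
        theListeners.add(listener);
    }

    protected void removeListener(ServerListener listener) {
        theListeners.remove(listener);
    }

    public void updateListeners(JsonObject data) {
        for (ServerListener l : theListeners) {
            l.update(data); // push to whatever is currently registered
        }
    }
}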
(As others have suggested, using a package-level dependency for the ServerListener interface should allow it to be imported by both the server and listener bundles, no matter which bundle exports the package.)
You've got multiple problems here:
You need to expose a service (the server class) for other objects to register with
Interested objects need to find the service in order to register themselves
The other objects need to implement a particular interface for this to work
In general, trying to retrofit existing code into OSGi can be painful unless you already have a modular architecture.
The listener interface can live in the server bundle, or you could put it in a separate API/contract bundle - both are valid designs.
From how you are describing the problem it sounds like you don't know the different types of dependency you can have in OSGi.
Coming from traditional Java development, most developers start with a 'my dependencies are based on JARs' mindset - this is not the best model.
OSGi offers package-level dependencies. This way, as long as some bundle offers the needed package, your bundle doesn't care which bundle/JAR provided the dependency.
So if you used a package-level dependency for your listener interface, the implementation doesn't need to care if it comes from the server bundle or a contract/API bundle.
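Concretely (the package name and versions below are invented), the bundle that owns the ServerListener package declares in its MANIFEST.MF:

Export-Package: com.example.server.api;version="1.0.0"

and every bundle that uses the interface - server or listener - declares:

Import-Package: com.example.server.api;version="[1.0,2.0)"

Neither side names the bundle that provides the package, which is exactly the decoupling you want.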
As a final note your design tightly couples the server to the listeners. What happens if a listener fails? Or hangs? Pub/sub is a better model for this type of communication.
* edit *
And Holly's answer reminded me again of the whiteboard pattern - definitely a good idea.
I'm new to the Rails environment and come from a Java enterprise web application background. I want to create a few classes that allow you to easily interface with an external application that exposes RESTful web services. In Java I would simply create these as stateless beans/facades that return Data Transfer Objects, which are nice, usable objects instead of ugly XML maps/data. What is the best way to do this in Rails/Ruby? Here are my main questions:
Should the facade classes be static or should you instantiate them before using the service?
Where should the DTOs be placed?
Thanks,
Pierre
UPDATE: We ended up using services as explained in this answer: Moving transactional operations away from the controller
Code that doesn't fit as a model or controller lives in the lib folder. The helpers directory is typically just for view-related code that generates HTML or other UI-related results.
I'd generally create them as regular classes that are instantiated and have instance methods to access the external REST service - this can make testing them easier. But it's really just a matter of preference (and also depends on how much state/reuse of those objects is required per request, which depends on what you're doing exactly).
The "DTOs", would just be plain Ruby classes in this case - maybe even simple Struct instances if they don't have any logic in them. If they are Ruby classes, they'd live in app/models, but they wouldn't extend ActiveRecord::Base (or anything else)
You probably want to take a look at httparty.
Here's an example of how you'd consume the twitter api.
# lib/twitter.rb
require 'httparty'
require 'hashie'

class Twitter
  include HTTParty
  base_uri 'twitter.com'

  def timeline
    self.class.get("/statuses/public_timeline.xml")["statuses"].map do |status|
      Hashie::Mash.new(status)
    end
  end
end
# client code
client = Twitter.new
message = client.timeline.first
puts message.text
Notice that you don't have to create DTOs. httparty maps the XML (take a look at http://dev.twitter.com/doc/get/statuses/public_timeline for the structure in this example) to hashes, and then Hashie::Mash maps them to methods, so you can just do message.text. It even works recursively, so you can do client.timeline.first.user.name.
If you're creating a rails project then I'd place twitter.rb in the lib folder.
If you'd rather use static methods you can do:
require 'httparty'
require 'hashie'
class Twitter
  include HTTParty
  base_uri 'twitter.com'

  def self.timeline
    get("/statuses/public_timeline.xml")["statuses"].map do |status|
      Hashie::Mash.new(status)
    end
  end
end
# client code
message = Twitter.timeline.first
puts message.text
Is it possible to split a web service into multiple classes and still provide a single path to the web service?
I know this isn't possible because of the duplicate url-pattern values. It sort of illustrates where we want to go :)
<endpoint name="OneBigService"
implementation="SmallImpl1"
url-pattern="/OneBigService"/>
<endpoint name="OneBigService"
implementation="SmallImpl2"
url-pattern="/OneBigService"/>
Basically, how do I avoid having one monolithic @WebService class?
Thanks!
Rob
Is it possible to split a web service into multiple classes and still provide a single path to the web service?
No. A URI is a connection point to one web service (defined by the Port/Endpoint).
Basically, how do I avoid having one monolithic @WebService class?
Well, in my opinion the real question is more: when should I use several Ports/Endpoints? And I would be tempted to answer: group/split things logically.
For example, while it makes sense for a Calculator service to expose add, subtract, multiply and divide operations, I would use another service to expose a getQuote operation.
Now, you can always split the logic into several classes and delegate to them from your @WebService.
You could delegate functionality to a composed class from your web service class:
@WebService
public class OneBigService {

    ISmall delegate = new SmallImpl1(); // or new SmallImpl2();

    @WebMethod
    public Result webMethodStuff() {
        // do something with delegate
    }
}