Huge remote interface -> best practices? [closed] - java

I just had this interesting discussion with a colleague of mine.
We have a remote interface which is 2000+ lines of code and has 100+ methods in it.
The implementation of this interface contains some logic, but it also delegates to other interfaces, each related to a specific concern.
I argued we should also split up the remote interface based on concern.
Advantages:
- Separation of concerns
- Create a separate endpoint for each interface; client developers only use the interfaces they are interested in
- No "monster interface"
- E.g. security can be configured per endpoint
He opposes this, arguing:
- One remote interface is easy for the client developers
I'm wondering what the "general opinion" on this is?
Is it a good practice to create a remote facade which groups all your concerns in one endpoint?

If you think in terms of scalability, and of the fact that behind that interface you could end up having multiple services (at least in a SOA architecture), it is certainly worth splitting it into multiple interfaces.
And let us see why:
- Indeed, as you pointed out, you get separation of concerns
- You get a lightweight API - developers end up using only the features they need
- It is easier to implement versioning - it might not be the case now, but it can be in the future
- Readability and maintainability on your side - smaller interfaces make it easier to debug code and understand what each concern is supposed to do
- Security/authorization/authentication per endpoint or per remote interface is much easier to implement
I am sure this list could continue, but these are just a few points I can think of right now.
And to give you one last argument: "the bigger they are, the harder they fall".
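To make the split concrete, here is a minimal sketch of what concern-based remote interfaces might look like (the OrderService/BillingService names and the plain RMI style are illustrative assumptions, not your actual API):

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical concern-based contracts instead of one 100+ method interface.
    interface OrderService extends Remote {
        String placeOrder(String customerId, String productId) throws RemoteException;
    }

    interface BillingService extends Remote {
        void charge(String customerId, long amountCents) throws RemoteException;
    }

    // The server may still implement several concerns in one class if that is
    // convenient, but each interface can be exposed as its own endpoint and
    // secured independently, and clients only depend on the interfaces they use.
    class CommerceServer implements OrderService, BillingService {
        @Override
        public String placeOrder(String customerId, String productId) throws RemoteException {
            return "order-1"; // delegate to the ordering subsystem
        }

        @Override
        public void charge(String customerId, long amountCents) throws RemoteException {
            // delegate to the billing subsystem
        }
    }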

Related

Is it right to synchronize a method when building an API [closed]

When building a RESTful API web service that performs CRUD operations, is it right practice to synchronize the methods, or should we avoid synchronizing them since it reduces performance?
Could anyone please explain?
It is not "right practice" or "best practice".
In general, it doesn't even make sense to me.
A RESTful API consists of URLs for making HTTP/HTTPS "calls". The notion of "synchronized" is foreign to this.
Now, there might be use cases where it would make sense to use some form of mutual exclusion in the implementation of a RESTful API. However, it is not clear that declaring Java API methods as synchronized gives the correct semantics, certainly not without knowing how your RESTful API is mapped onto your Java methods and your domain objects.
If we are talking about mapped Java methods in a Spring @RestController class, declaring those methods as synchronized would result in mutual exclusion on the current instance of the controller class. Spring controller objects are singletons, so you would end up processing your RESTful requests serially. I can't see why you would want to do that.
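For illustration, a hedged sketch (a made-up UserController, assuming Spring Web MVC) of what declaring a handler synchronized actually does, and the usual alternative of keeping handlers lock-free and relying on thread-safe state instead:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.springframework.web.bind.annotation.*;

    // Hypothetical controller, for illustration only.
    @RestController
    @RequestMapping("/users")
    public class UserController {

        private final Map<Long, String> users = new ConcurrentHashMap<>();

        // synchronized locks on the single controller instance, so every request
        // to this handler (and any other synchronized handler in the class) is
        // processed one at a time - the serialization described above.
        @PutMapping("/{id}")
        public synchronized void update(@PathVariable long id, @RequestBody String name) {
            users.put(id, name);
        }

        // Usually preferable: leave the handler unsynchronized and let a
        // thread-safe structure (or transactional persistence) handle concurrency.
        @GetMapping("/{id}")
        public String get(@PathVariable long id) {
            return users.get(id);
        }
    }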
The correct way to approach this is to work out what you are actually trying to achieve, and then look for ways to achieve it. Don't go looking for a solution before you understand the problem. And don't look for "best practice" justifications. Think things through ... in the context of your problem.
There are no best practices.

How to design an API wrapper with bulky operations on domain object? (Need guidance) [closed]

I need some guidance in designing an API wrapper for my backend APIs. I have tried to keep it as specific as possible.
Context: We have a project which supports certain file operations like edit, create, merge, etc. All the services are exposed as REST APIs. Now I have to create an API wrapper over this (a client library) in Java. I've been reading about DDD and trying to approach the problem using that.
As per my thinking, the core object in my project would be File, along with some minor DTOs for talking to the backend. Edit, create, merge will be the verbs here, acting on my domain object. I want to make it as easy as possible for the external developer to integrate the API. I would like the design to be something like this:
For creating a file: File.create(). For editing: File.edit(). The same goes for the other operations. I also want the capability of chaining operations (along the lines of fluent interfaces) for readability.
For example, if you want to create a file and then convert it, it should be something like File.create().convert(requiredParams).
My problem is that each of the operations is bulky and async. I don't want to write all the async error handling logic in the File class. Chaining the methods as above also won't be easy if they return CompletableFuture objects, and this will be harder to maintain.
Question: What is a better way of solving this problem?
I am not looking for a spoonfed design. I just want to be guided to a design approach which fits the scenario. Feel free to point if I am understanding DDD wrong.
Very roughly: your domain model is responsible for bookkeeping. The effects on the state of the filesystem are implemented in your infrastructure layer, and you send the results back to the bookkeeper.
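One hedged sketch of that split, keeping the async plumbing out of the domain object (FileBackend, FilePipeline and all method names here are assumptions for illustration, not an existing API):

    import java.util.concurrent.CompletableFuture;

    // Hypothetical domain object: pure bookkeeping, no async logic.
    record FileHandle(String id) {}

    // Hypothetical infrastructure port that talks to the backend REST APIs.
    interface FileBackend {
        CompletableFuture<FileHandle> create(String name);
        CompletableFuture<FileHandle> convert(FileHandle file, String targetFormat);
    }

    // Fluent wrapper: chains operations by composing futures internally,
    // so callers never touch CompletableFuture until the terminal call.
    final class FilePipeline {
        private final FileBackend backend;
        private final CompletableFuture<FileHandle> stage;

        private FilePipeline(FileBackend backend, CompletableFuture<FileHandle> stage) {
            this.backend = backend;
            this.stage = stage;
        }

        static FilePipeline create(FileBackend backend, String name) {
            return new FilePipeline(backend, backend.create(name));
        }

        FilePipeline convert(String targetFormat) {
            return new FilePipeline(backend, stage.thenCompose(f -> backend.convert(f, targetFormat)));
        }

        // Terminal operation: error handling lives in one place, not in File.
        CompletableFuture<FileHandle> execute() {
            return stage.exceptionally(ex -> {
                throw new IllegalStateException("File operation failed", ex);
            });
        }
    }

    // Usage (assumed): FilePipeline.create(backend, "report.docx").convert("pdf").execute();

This way the domain object stays a plain value, while the pipeline (infrastructure side) owns composition and failure handling and reports results back to the domain model's bookkeeping.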

Orchestration and Choreography in SOA: are they deprecated architectures? [closed]

About 3 months ago, I was charged with giving a presentation and a demo explaining what SOA composition is. It wasn't easy because the latest Eclipse version (Neon) no longer supports BPEL projects; Eclipse Luna and an extension helped me in this situation.
Since then, some questions have been roaming around in my mind:
Why are there no new tutorials about SOA composition?
Are these architectures deprecated? If yes, why?
I do think (and this is an opinion) that SOAP/SOA/ESB/BPEL is obsolete and taken over by RESTful Architectures. By RESTful I don't mean things that have a primitive JSON+HTTP API, but real distributed applications, where endpoints are not dumb, but define the part of the workflow that belongs to them.
So, the two conceptual approaches colliding are: do I want a central "smart" component (like an ESB, with pure BPEL services) and dumb (for example SOAP) endpoints, or do I rather have no central components and smart "endpoints" (like REST resources)?
I think conceptually the latter is the clear winner in many cases (arguably not all). There are practical problems, however. Companies always like to centralize. Centralization looks "neat" and "tidy", especially to an Enterprise Architect. Until the central components grow out of proportion, that is.
One of my clients introduced an ESB just last year, so it's definitely not over. But I do think (again, this is just my opinion) that we have already tried centralized architectures and monoliths. They always end in the "legacy system" bin, which cannot be replaced because it does everything. So we know where they lead, and we need to try something different. :)

Android game development - Should I use a framework like AndEngine? [closed]

I am going to develop a (relatively) simple game for the Android platform.
It's going to be a 2D game (no heavy stuff, maybe simple animations).
I am considering using a framework (like AndEngine)
What are the advantages/disadvantages of using a framework? (rather than developing from scratch)
Thanks!
Well, this can be a broad subject; nonetheless I'll toss in my 2 cents.
There are plenty of advantages to using a framework, and this applies to many other scenarios. Think of a framework as a bridge that shortens the path so you don't have to reinvent the wheel.
A framework will pretty much handle the boring plumbing you would have to do otherwise.
Using a framework will, in most cases, make you code faster, and some will probably "force" you to code in a cleaner and more organized fashion. Although this has much more to do with the programmer themselves, there are opinionated frameworks out there that will at least lead the way.
The biggest disadvantage is not using a framework in itself, but picking the right framework. I think you have to ask a few questions before deciding to pick framework A, B or C, such as: Is it sufficiently mature for my needs? Does it have good community or vendor support? Is it here to stay? What happens if the framework loses pace or support? Will I be in trouble?
There are other disadvantages, of course. You may be putting yourself at risk of learning the framework and neglecting the language behind it. For example, you may know jQuery, but it doesn't follow that you know JavaScript. See where I'm going?
Also, you can find yourself shackled by the framework's limits. You may not be able to have full control of the code you write, or at least not be able to express your code better, because the framework itself has tight bounds. In other words, you are forced to respect its limits and work the way it requires. Again, pick the right framework for your needs.
I hope this helped.

Java and "generic" interfaces [closed]

I soon need to implement an interface. The interface will need to provide a contract between a web service and 'n' other web services for highway traffic control. The company plans to investigate/test with a single traffic control service at first and add more later as they become available. I can define an interface that's "generic" for this single use case, but the problem is that at any point in the future, we might want to communicate with another web service that may or may not be compatible with the interface we have at that time.
I could modify the Java interface as we go to accommodate the differences in APIs from third-party services. This would also mean updating all implementors of the interface.
I would like to know if there are any patterns that would be suitable for this. Almost like "dynamically extending an interface" at run time. Or, any clever use of Java generics that would allow us to implement a single Java interface once that could be used with any/all traffic control systems.
Bottom line: When we come to communicate with any other third party services, I want as minimal effort on our side to integrate them.
Any thoughts?
If the issue is adapting different representations for the same semantics, then define your own interface containing all the semantics you need, and create an adapter layer that transforms the custom representations to yours. This is the same principle behind device drivers. A uniform client interface and multiple adapters to different devices.
If you expect to encounter "devices" (traffic control services) with wildly differing semantics, then you will have to have multiple driver types... again, exactly the same situation as the difference between block devices and character devices.
Your situation is just another example of a very well known and solved pattern :-)
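As a hedged sketch of that driver/adapter idea in Java (TrafficControlService and the vendor client are hypothetical names, and the unit conversion is only illustrative):

    // Hypothetical uniform contract owned by us - the "driver" interface.
    interface TrafficControlService {
        void setSpeedLimit(String segmentId, int kmh);
        int currentSpeedLimit(String segmentId);
    }

    // Hypothetical third-party client with its own representation and units.
    class VendorATrafficClient {
        void updateLimit(String roadCode, double milesPerHour) { /* remote call */ }
        double readLimit(String roadCode) { return 60.0; /* remote call */ }
    }

    // Adapter: translates our semantics onto vendor A's representation.
    // Each new third-party service gets its own adapter; client code only
    // ever depends on TrafficControlService.
    class VendorATrafficAdapter implements TrafficControlService {
        private final VendorATrafficClient client;

        VendorATrafficAdapter(VendorATrafficClient client) {
            this.client = client;
        }

        @Override
        public void setSpeedLimit(String segmentId, int kmh) {
            client.updateLimit(segmentId, kmh / 1.609);
        }

        @Override
        public int currentSpeedLimit(String segmentId) {
            return (int) Math.round(client.readLimit(segmentId) * 1.609);
        }
    }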
